Over at Marginal Revolution, Alex Tabarrok has an excellent post on why “sexism and racism will never diminish”, even when people on the whole become less sexist and racist. The basic idea is that there is always a frontier – even as we all become less sexist or racist, some people will be more sexist or racist than others, and they will get called out as extremists.
To quote a paper that Tabarrok has quoted (I would’ve used a double block-quote for this if WordPress allowed it):
…When blue dots became rare, purple dots began to look blue; when threatening faces became rare, neutral faces began to appear threatening; and when unethical research proposals became rare, ambiguous research proposals began to seem unethical. This happened even when the change in the prevalence of instances was abrupt, even when participants were explicitly told that the prevalence of instances would change, and even when participants were instructed and paid to ignore these changes.
Elsewhere, Kaiser Fung has a nice post on what he learned at a recent conference on Artificial Intelligence. The entire post is good, and I’ll probably comment on it in detail in my next newsletter, but there is one part that reminded me of Tabarrok’s post – on bias in AI.
Quoting Fung (no, this is not a two-level quote; it’s from his blog post):
Another moment of the day is when one speaker turned to the conference organizer and said “It’s become obvious that we need to have a bias seminar. Have a single day focused on talking about bias in AI.” That was his reaction to yet another question from the audience about “how to eliminate bias from AI”.
As a statistician, I was curious to hear of the earnest belief that bias can be eliminated from AI. Food for thought: let’s say an algorithm is found to use race as a predictor and therefore it is racially biased. On discovering this bias, you remove the race data from the equation. But if you look at the differential impact on racial groups, it will still exhibit bias. That’s because most useful variables – like income, education, occupation, religion, what you do, who you know – are correlated with race.
This is exactly like what Tabarrok mentioned about humans judging relative to a frontier. You take out the most obvious biases, and the next level of biases will stand out. And so on, ad infinitum.
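Fung’s point about proxy variables can be sketched with a toy simulation. Everything here is hypothetical – the feature names, the synthetic numbers, the 50-point approval threshold – but it shows the mechanism: a “race-blind” rule that never consults group membership still produces different outcomes across groups, because the feature it does consult is correlated with group.

```python
import random

random.seed(42)

# Hypothetical toy data: two groups, where a "legitimate" feature
# (say, income) happens to be correlated with group membership.
def make_person(group):
    # Group A has a higher average income -- this correlation is
    # what lets income act as a proxy for group.
    base = 60 if group == "A" else 40
    return {"group": group, "income": base + random.gauss(0, 10)}

people = [make_person("A") for _ in range(1000)] + \
         [make_person("B") for _ in range(1000)]

# A "blind" model: approve anyone above an income threshold.
# Group membership is never consulted by the rule itself...
def approve(person):
    return person["income"] > 50

# ...yet approval rates still differ sharply by group, because
# income carries information about group membership.
rate = {}
for g in ("A", "B"):
    members = [p for p in people if p["group"] == g]
    rate[g] = sum(approve(p) for p in members) / len(members)

print(rate)
```

Dropping the sensitive column from the model changes nothing about this gap; to remove it you would have to strip the group signal out of every correlated feature as well, which is Fung’s point about income, education, occupation and the rest.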
Update: WordPress does allow double blockquotes. See an example here: https://usemymarker.c306.net/2018/07/09/tyler-cowen-consider-change-at-the-margin/