
Why Your AI Might Be Racist

On the risks of enshrining all sorts of injustices into computer programs, where they could fester undetected in perpetuity


Image credit: A screen shows a demonstration of SenseTime Group's SenseVideo pedestrian and vehicle recognition system at the company's showroom in Beijing in June. (Gilles Sabrie/Bloomberg)

The following opinion piece originally appeared in The Washington Post on December 17th, 2018, and is republished here with permission.


Jerry Kaplan is a research affiliate at Stanford University’s Center on Democracy, Development and the Rule of Law at the Freeman Spogli Institute for International Studies, where he teaches “Social and Economic Impact of Artificial Intelligence.”

President Trump has worried — along with House Majority Leader Kevin McCarthy, then-Attorney General Jeff Sessions and other prominent opinion leaders — that Google exhibits intentional bias against conservatives. How do I know? I googled it, of course. Google, needless to say, denies that its “products or actions” are biased.

The problem is, both sides are wrong. Google has no incentive to favor one perspective over another, yet it’s impossible for its products to be entirely neutral. Search results, like so many other kinds of algorithms that rely on external sources of information, naturally expose whatever leanings or affinities that data reflects. This effect — called “algorithmic bias” — is fast becoming a common feature of our digital world, and in far more insidious ways than simple search results.

Whether Google is partisan is a matter of opinion. But consider the subtle ways that it reinforces racial stereotypes. Try typing “Asian girls” into the Google search bar. If your results are like mine, you’ll see a selection of come-ons to view sexy pictures, advice for “white men” on how to pick up Asian girls and some items unprintable in this publication. Try “Caucasian girls” and you’ll get wonky definitions of “Caucasian,” pointers to stock photos of wholesome women and kids, and some anodyne dating advice. Does Google really think so little of me?

Of course not. Google has no a priori desire to pander to baser instincts. But its search results, like it or not, reflect the actual behavior of its audience. And if that’s what folks like me click on most frequently, that’s what Google assumes I want to see. While I might take offense at being lumped in with people whose values I deplore, it’s hard to argue that Google is at fault. Yet it’s clear that such racially tinged results are demeaning to all parties involved.

Algorithmic bias can even influence whether you are sent to jail. A 2016 study by ProPublica discovered that software designed to predict the likelihood an arrestee will re-offend incorrectly flagged black defendants twice as frequently as white defendants in a decision-support system widely used by judges. You might expect such predictive systems to be wholly impartial and therefore to be blind to skin color. But surprisingly, the program can’t give black and white defendants who are otherwise identical the same risk score, and at the same time match the actual recidivism rates for these two groups. This is because blacks are re-arrested at higher rates than whites (52 percent vs. 39 percent in this study), at least in part because of racial profiling, inequities in enforcement, and harsher treatment of blacks within the justice system.

From the standpoint of a defendant, this is patently unfair: Blacks are scored as “high risk” much more often than whites with similar characteristics. But from the standpoint of the courts, the percentage of each group predicted to re-offend that went on to do so was equal. (That is, black and white defendants who scored 7 out of a possible 10 by the algorithm were re-arrested at the same rate.) In short, the algorithm can’t correct for an actual imbalance in the treatment of blacks and whites; at best it can accurately reproduce this unfortunate reality.
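To make the arithmetic behind that tension concrete, here is a minimal Python sketch. The calibration figures it assumes (60 percent of “high risk” and 30 percent of “low risk” defendants re-offending, identically in both groups) are hypothetical rather than drawn from the ProPublica analysis; only the two base rates echo the percentages cited above.

```python
def false_positive_rate(base_rate, ppv=0.60, low_risk_reoffense=0.30):
    """Assume the score is calibrated identically for both groups: 60% of those
    flagged 'high risk' re-offend and 30% of those flagged 'low risk' do.
    Given a group's overall re-arrest rate, back out the share flagged 'high risk'
    and the false positive rate (flagged 'high risk' but never re-arrested)."""
    # base_rate = flagged * ppv + (1 - flagged) * low_risk_reoffense
    flagged = (base_rate - low_risk_reoffense) / (ppv - low_risk_reoffense)
    fpr = flagged * (1 - ppv) / (1 - base_rate)
    return flagged, fpr

for label, base_rate in [("higher-base-rate group (52% re-arrested)", 0.52),
                         ("lower-base-rate group (39% re-arrested)", 0.39)]:
    flagged, fpr = false_positive_rate(base_rate)
    print(f"{label}: flagged high risk = {flagged:.0%}, false positive rate = {fpr:.0%}")
```

In this toy setup the higher-base-rate group ends up with a false positive rate of roughly 61 percent versus 20 percent for the other, even though the score is equally well calibrated for both. That is the trade-off described above, in miniature.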

In the financial domain, this issue rears its head in the granting of credit. It might be true that home buyers in certain neighborhoods are more likely to default on their loans. Yet using this fact to deny any individual a mortgage violates the Fair Housing Act (a practice known as “redlining”). Steering clear of such prohibited factors is tricky since an automated credit-evaluation program might simply latch on to some correlated item, such as how many families reside at the same address. To compound the problem, artificial intelligence (AI) systems do not lend themselves to easy interrogation or explanation for their decisions, which reduces accountability. Sometimes, black boxes are truly opaque.
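As a purely hypothetical illustration of how a prohibited factor can sneak back in through a correlated item, consider the following Python sketch. The data and the feature name are invented, not taken from any lender’s system; the point is only that a rule which never sees the protected attribute can still produce sharply different outcomes across groups.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)  # protected attribute, never shown to the rule
# The proxy is distributed differently across the two groups
# (e.g., denser housing stock in one group's neighborhoods).
families_at_address = rng.poisson(lam=np.where(group == 1, 3.0, 1.5))

# A naive credit rule that uses only the proxy: deny applicants above a threshold.
denied = families_at_address >= 3

for g in (0, 1):
    print(f"group {g}: denial rate = {denied[group == g].mean():.0%}")
# Although 'group' never enters the rule, denial rates differ sharply,
# which is the kind of disparate impact fair-lending rules are meant to catch.
```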

One particularly difficult-to-detect source of algorithmic bias stems not from the data itself, but from sampling errors that may over- or underrepresent certain portions of the target population. For instance, a face recognition system trained mainly on light-skinned images is likely to perform better on Caucasians than on people of color. In a recent famous case of such bias, Google’s automatic photo-labeling software tagged images of African Americans as “gorillas,” to the company’s embarrassment.
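For a rough sense of how such sampling errors play out, the following Python sketch trains a generic classifier (standing in for a recognition system, not reproducing one) on synthetic data in which one group supplies only 5 percent of the training examples. The accuracy gap on balanced test sets, not the specific numbers, is the point.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Two-class synthetic data whose class boundary sits in a different place per group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) > 5 * shift).astype(int)
    return X, y

# 95 percent of training examples come from group A, 5 percent from group B.
Xa, ya = make_group(9_500, shift=0.0)
Xb, yb = make_group(500, shift=2.0)
model = LogisticRegression(max_iter=1000).fit(np.vstack([Xa, Xb]),
                                              np.concatenate([ya, yb]))

# Evaluate on balanced held-out sets, one per group: the underrepresented
# group ends up with markedly lower accuracy.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X_test, y_test = make_group(5_000, shift)
    print(f"{name} accuracy: {model.score(X_test, y_test):.2f}")
```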

As we delegate more of our decision-making to machines, we run the risk of enshrining all sorts of injustices into computer programs, where they could fester undetected in perpetuity. Addressing this critical risk should be an urgent social priority. We need to educate the public to understand that computers are not infallible mechanical sages incapable of malice and bias. Rather, in our increasingly data-driven world, they are mirrors of ourselves — reflecting both our best and worst tendencies, whether or not we wish to acknowledge the flaws. Like the Evil Queen in the fairy tale of Snow White, how we react to this new mirror on the wall might say more about us than any computer program ever can.
