
Worry about present-day AI first, and far off AGI hypotheticals second

The whole debate about existential risks AI poses to humanity in the far off future is a huge distraction.


The following opinion piece originally appeared on the author’s blog, and has been republished here with permission.

Elon Musk, Bill Gates, Stephen Hawking, Steve Wozniak, and many others have famously raised the alarm about AI posing an existential risk to humanity. Eric Horvitz, Director of Microsoft’s Research Lab, expressed these worries as follows: “we [as humankind] could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes – and that such powerful systems would threaten humanity.”

That sounds dramatic. And accordingly, the open letter signed by Hawking, Musk and several AI experts in 2015 has received widespread media attention.

However, I believe this whole debate about existential risks to humanity coming from AI is a huge distraction. It distracts us from looking more closely at the risks that our currently existing AI technologies pose to us. Those risks do not come from autonomous AI attacking us. Instead, they come from tech companies and governments using AI in ways that have not been examined closely enough by the public.

We can uncover much more immediate threats from present-day AI systems if we look past the futuristic doomsday scenarios. Rather than panicking about unlikely unfriendly superintelligences, I want to encourage us to look closer at how AI is being used today in ways that are not beneficial to us.

How close are we to autonomous AI?

Those who want to alarm us all about existential dangers coming from autonomous AI assume that what is called “artificial general intelligence” or even “superintelligence” will be a reality very soon.

Wikipedia describes “Artificial general intelligence” (AGI) as “the intelligence of a machine that could successfully perform any intellectual task that a human being can”. That includes elusive skills such as self-awareness, and anything from conducting a conversation to composing sophisticated music to teaching essay composition to other machines (or people!). While different machines can currently do some of these things, no single system is capable of all of these tasks today.

“Superintelligence” goes beyond AGI in assuming that machines will one day “possess intelligence far surpassing that of the brightest and most gifted human minds”. By definition, the tasks and skills that would result from such intelligence are completely beyond our imagination.

In contrast, current AI systems are usually called “narrow” AI, because they are only capable of performing very narrowly scoped tasks. Examples include programs that are able to distinguish between cats and dogs in images or recognize words in English speech.

We are currently very far away from achieving artificial general or even superintelligence. In fact, we are not much closer than we were in the early stages of AI research in the 1950s. We are just developing better and better narrow AI.

One of the main reasons for that is that we don’t even know yet what “human intelligence” really is, let alone how it works. This is not a useful starting point for engineering a machine capable of reproducing these skills. And we don’t know yet whether we will be able to recreate human intelligence without recreating human brains and bodies.

For the sake of argument, let us assume that we will one day build artificial general intelligence. Even AI researchers themselves, who, by profession, need to be optimistic on this matter, seem to agree that this day is still far in the future: at least 80 years away. It is hard to imagine any research project reliably delivering on such a long timeline.

By that time, climate models suggest that the earth’s average temperature could be up to 5 degrees Celsius (9 degrees Fahrenheit) hotter than today. That might give us more immediate things to worry about than the chance of AI taking over.

Most scenarios of existential threats coming from AI assume creative agency on the part of the AI. In other words, they assume that the AI is capable of evolving beyond the purpose it was designed for, either by setting its own threatening goals, or by using destructive means it was never supposed to use in pursuit of its given goals.

Yet current AI is not capable of setting its own goals or changing the means by which it achieves them. While current AI can tell you whether a photo shows you with the same friend that it saw in a different photo, it cannot decide to play beautiful violin music instead of classifying images. Those who describe the risks of autonomous AI do not provide plausible explanations of how AI could acquire these abilities.

The dangers of present-day AI systems

Unfortunately, just because present-day narrow AI is not superintelligent does not mean it is always beneficial to us. Or, more precisely, it is not always used in ways that are beneficial to all of us.

Biased decision making with far-reaching consequences

AI is used to help make far-reaching decisions in areas such as hiring, predictive policing, and airport security. As researchers have found, many of these AI systems are biased against people of color, women, and other disadvantaged social groups.

Hiring: Amazon used artificial intelligence to rate job applications on a five-star scale. Those stars were meant to reflect a candidate’s suitability for the job.

For technical roles such as software development, these scores turned out to also reflect the candidate’s gender. The algorithm penalized resumes using terms associated with women, such as “women’s chess club” or the names of women-only colleges. In these male-dominated fields, it was a safe bet for the AI system to learn to recommend male candidates over female ones.

Amazon tried to correct for the specific biases found. But they could not ensure that the system would not learn novel ways to discriminate on the basis of gender or other sensitive attributes.

Law enforcement: In the US and in other countries, AI is being employed to help with law enforcement decisions.  Examples include: Which areas are most prone to crime and need more patrolling (PredPol system)? Which people are most likely to reoffend after a prison sentence (COMPAS system)?

These systems have often been put into place to reduce the risk of human biases in such decisions. But studies show that they produce their own biased decisions.

US news organization ProPublica analyzed COMPAS. They found that it systematically overestimated the risk that Black defendants would reoffend, and underestimated that risk for white defendants.

In a similar manner, the Human Rights Data Analysis Group studied PredPol. They found that the software could send police officers more often to neighborhoods with high proportions of Black and Latino inhabitants, even when the true crime rate in these areas does not justify those decisions.

How do supposedly neutral machines end up making biased decisions?

The main source of bias is the data on which those machines are trained. Current AI algorithms usually generalize from large sets of example cases, such as datasets of past hiring decisions or police reports of arrests in a neighborhood.

These datasets are usually based on past human decisions, with all their known biases. So the algorithms can only learn to reproduce those human biases. If past human hiring decisions have favored men, then there is no reason for the machine learning algorithm to decide any differently.
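To make the mechanism concrete, here is a minimal sketch in Python. Everything in it (the features, the numbers, the “gender proxy” column) is synthetic and purely illustrative; it is not Amazon’s system, just a toy model trained on deliberately biased example data.

```python
# Toy illustration: a model trained on biased hiring decisions reproduces
# that bias. All data below is synthetic and made up for this example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# A "skill" score -- what we would actually want to hire on.
skill = rng.normal(0, 1, n)
# A proxy for gender (e.g. "attended a women-only college"): 0 or 1.
gender_proxy = rng.integers(0, 2, n)

# Simulated historical decisions: past recruiters favored candidates
# without the proxy attribute, independently of skill.
hired = (skill + 1.5 * (1 - gender_proxy) + rng.normal(0, 1, n)) > 1.0

X = np.column_stack([skill, gender_proxy])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill, differing only in the proxy feature.
candidates = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(candidates)[:, 1])
# The predicted "suitability" differs even though skill is the same:
# the model has simply learned to reproduce the historical bias.
```

The same dynamic applies when the sensitive attribute is not given explicitly but leaks in through correlated features, which is why simply removing the obvious columns rarely fixes the problem.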

And unfortunately, such biased decisions tend to reinforce themselves. Say an AI-based system sends more police to one neighborhood due to bias. If there are more police officers in a place, the chances of officers observing an offense are much higher. This increases the number of reported crimes in the area. And that fact, in turn, justifies more policing as per the system’s decision-making criteria.
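A toy simulation makes this loop visible. The numbers and the allocation rule below are invented for illustration (they are not how PredPol works); the only point is that two neighborhoods with identical true crime rates end up policed very differently once reported crime drives patrol allocation.

```python
# Toy feedback-loop simulation with made-up numbers. Both neighborhoods
# have the SAME underlying crime rate; B merely starts with more patrols.
import random
random.seed(0)

true_rate = 0.05                 # identical for A and B
patrols = {"A": 10, "B": 12}     # B is slightly over-policed at the start
reported = {"A": 0, "B": 0}

for step in range(20):
    for hood, n_patrols in patrols.items():
        # Each patrol makes 20 observations; more patrols -> more reports,
        # even though the true crime rate is the same everywhere.
        reported[hood] += sum(random.random() < true_rate
                              for _ in range(n_patrols * 20))
    # The "predictive" system sends the next extra patrol to wherever
    # the most crime has been reported so far.
    busiest = max(reported, key=reported.get)
    patrols[busiest] += 1

print(reported, patrols)
# Typically, B accumulates more reports only because it is watched more
# closely, which earns it ever more patrols: the bias feeds itself.
```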

Use of AI in mass surveillance

Governments around the world also claim that law enforcement and national security improve due to the use of AI in surveillance.

The responsible agencies face more and more data from sources such as surveillance cameras, internet traffic, financial transactions, and phone wiretaps. In some cases, they adopt AI to help them track domestic or foreign activities.

One such AI use case is facial recognition. Surveillance cameras have been around for years. But until recently, they required human oversight to react to recorded people and events. Face recognition allows identifying individuals across different video streams without human involvement. This makes it possible to track a person’s movement over time with no particular effort or warrant.

When face recognition works, that is. As with other AI systems, studies have uncovered biases in face recognition systems. Researchers from the University of Toronto and MIT audited the commercial face recognition system used by the Orlando police. They found that recognition was much more reliable for lighter skin tones than for darker ones, and for male faces than for female ones. In other words, darker-skinned women had a 31% chance of being misidentified, as opposed to a 0% error rate for lighter-skinned men. This could, for example, lead to someone being mistakenly targeted by policing efforts, or denied entry at automatic boarding checks.
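The kind of analysis behind such findings is simple to sketch: tally prediction errors separately per demographic group. The data below is fabricated to mirror the quoted error rates; it is not the audit’s real dataset.

```python
# Disaggregated error-rate check on fabricated data that mirrors the
# quoted figures (31% vs. 0%); this is not the actual audit data.
import pandas as pd

results = pd.DataFrame({
    "group":   ["darker_female"] * 100 + ["lighter_male"] * 100,
    "correct": [False] * 31 + [True] * 69 + [True] * 100,
})

error_rate = 1 - results.groupby("group")["correct"].mean()
print(error_rate)
# darker_female    0.31
# lighter_male     0.00
```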

And it is not only governments who increasingly rely on AI for surveillance.

Companies are profiling their customers in invasive ways for ad targeting. Ad providers compile demographic and usage data about websites, e-mail, desktop, and mobile applications. They then use AI to group potential customers into micro-segments. AI systems match those segments with ads that are supposed to appeal to each segment’s interests and demographics. To improve their systems further, ad providers track how their audience reacts to those ads. That feedback helps ad providers refine their AI-based targeting models more and more.
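In its simplest form, the segmentation step is just clustering. The following sketch groups synthetic usage data into micro-segments with k-means; the feature columns, sizes, and cluster count are all hypothetical, chosen only to illustrate the idea.

```python
# Clustering synthetic usage data into "micro-segments" with k-means.
# Every feature and number here is hypothetical.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Rows: users. Columns: e.g. news-site visits, shopping-site visits,
# mobile game sessions, e-mail opens -- whatever the provider tracks.
usage = rng.poisson(lam=[5, 2, 8, 3], size=(10_000, 4))

# Group users into many small segments.
kmeans = KMeans(n_clusters=50, n_init=10, random_state=0)
segments = kmeans.fit_predict(usage)

# Each segment can then be matched with the ads thought to appeal to it,
# and click/conversion feedback is fed back in to refine the model.
print(np.bincount(segments)[:10])   # sizes of the first few segments
```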

And we can raise more concerns

There are more concerns we can raise about the current ways that corporations and governments use AI.

How organizations collect a lot of personal data and analyze it in invasive ways in order to build and improve AI-based systems.

Or how corporations use the rhetoric of automation being an inevitable force to mask the fact that they are making very deliberate choices when using automation to reduce their human workforce. “The robots are not coming for your job, management is.”

Or how new armies of invisible working poor are needed to organize and categorize the datasets that are used to train AI systems.

Let’s not get distracted from such issues!

There is a real danger in buying into the narrative that the biggest threat from AI is it taking over the world in the distant future. This distraction can lead us to minimize the risks from current AI because it is not yet “real AI”.

For example, Intel CEO Brian Krzanich has argued against AI regulation because we should not regulate a technology that is still in its infancy.

I disagree. AI is not in its infancy: it is already widely used commercially. And not all these uses are beneficial to society.

So let’s step back from the speculative science fiction and take a closer look at how AI is being used right now. That is where we will find some real and immediate threats to us.

Again, thanks to the awesome Lisa Hehnke and Katherine Jarmul for providing feedback on this post.
