
Please Stop Saying 'An AI'

Popular usage of the term ‘Artificial Intelligence’ adds to misunderstanding of AI as it exists today.



The Options

Definitions of the term ‘Artificial Intelligence’ tend to fit one of the following categories:

  1. ‘field of research’ definitions, e.g.: “a branch of computer science dealing with the simulation of intelligent behavior in computers” (Merriam-Webster), “the theory and development of computer systems able to perform tasks that normally require human intelligence” (Oxford)
  2. ‘machine intelligence’ definitions, e.g.: “the capability of a machine to imitate intelligent human behavior” (Merriam-Webster), “intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals” (Wikipedia), “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings” (Encyclopedia Britannica)
  3. ‘intelligent entity’ definitions, e.g.: “a computer, robot, or other programmed mechanical device having [the] humanlike capacity [to perform operations and tasks analogous to learning and decision making in humans, as speech recognition or question answering]”. (Dictionary.com)

While all of these options are similar in that they deal with ‘intelligent behavior’ in computers, they are also quite different. The first refers to a research discipline, while the second and third describe what that research discipline seeks to create. Which uses of the term ‘AI’ are valid depends on which of these definitions you consider correct. For instance, news articles often have titles to the effect of “Google’s new AI learned X” or “A new AI can do Y.”

But such usage (“An AI Developed”, “AI can now”, etc.) is only valid under the third, ‘intelligent entity’ definition. If the first, ‘field of research’ definition is chosen instead, these titles would have to be rewritten as “Google’s new AI algorithm learned X” or “A new AI system can do Y.” In this piece, I’ll make the case that the ‘field of research’ definition and its corresponding usage are superior to the alternatives, and should be adopted in most cases.

Why Should We Care?

It may seem pedantic to say one of these definitions is better than the others, and tempting to just say all of them are fine. However, I argue that the ‘field of research’ definition of AI is better than the alternatives, primarily because of the common misunderstanding that today’s AI programs are independent agents with some amount of ‘free will’. In reality, what AI researchers and engineers build today are just computer programs, capable of emulating some aspects of human intelligence but otherwise (for the most part) no more independent than the apps on our smartphones. Yet when AI researchers were recently surveyed about which myths about AI are most common, the idea that the AI algorithms they create have some human-like independence came out as the most common and problematic one; it was “number one by a long shot.”

I believe part of why this myth is so prevalent has to do with thinking of AI in terms of the ‘intelligent entity’ definition and using the term accordingly, in statements such as “A new AI can do Y.” Expanding that statement results in “A new Artificial Intelligence can do Y,” and a sentence that refers to ‘an Intelligence’ inevitably implies agency similar to that of animals and humans. AI researcher Julian Togelius addresses this notion well in his blog post “Some advice for journalists writing about artificial intelligence”:

Keep in mind: There is no such thing as “an artificial intelligence”. AI is a collection of methods and ideas for building software that can do some of the things that humans can do with their brains. Researchers and developers develop new AI methods (and use existing AI methods) to build software (and sometimes also hardware) that can do something impressive, such as playing a game or drawing pictures of cats.

AI researcher Zachary Lipton has made the same point even more bluntly.

As someone who has tried to use my knowledge as an AI researcher to address popular misconceptions about AI, I encounter the phrase “an AI” often, and I think it has the side effect of feeding misunderstanding of what AI is today. It may again be tempting to call this pedantic and say it’s fine to have a more ‘pop culture’ view of AI, but with AI becoming increasingly embedded in our society, it is more important than ever that all of us understand it. Such understanding is crucial so that we can collectively focus on the most pressing issues with respect to AI (such as bias, automation, its use for surveillance, and its safety) rather than the issues the agency myth implies (such as its potential to go rogue à la Skynet in the near term).

Therefore, given the existence of the ‘agency’ myth with respect to AI and the importance of correctly understanding what AI is actually like today, I would argue the first ‘field of research’ definition of the term ‘AI’ is better than the alternatives. As I’ll show next, this is (unsurprisingly) typically how AI researchers themselves use the term.

How Researchers Define AI

John McCarthy, one of the founders of the field of AI, defined AI as follows:

“It is the science and engineering of making intelligent machines, especially intelligent computer programs.”

Professor Christopher Manning recently cited this as the definition of ‘Artificial Intelligence’ in his summary of definitions of terms related to AI. Similarly, in “Artificial Intelligence: A Modern Approach”, Stuart Russell and Peter Norvig defined AI as:

“the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment.”

To cite just one more example, in “The Quest for Artificial Intelligence: A History of Ideas and Achievements” Nils J. Nilsson defines AI as follows:

“[the] activity devoted to making machines intelligent”

There are of course many other definitions, but they tend to share the quality of considering AI a research or engineering discipline rather than a term that can be used to refer to singular algorithms or systems.

I also conducted an informal survey to check whether AI researchers generally agree with this form of definition.

While hardly a careful study of what most AI researchers think, the result of this poll agrees with my personal observation as an AI researcher that most in the field seem to prefer John McCarthy’s definition of AI to the alternatives. I point this out because I think it lends further credence to the idea that the ‘field of research’ definition, which does not allow for the phrase “an AI,” is superior to the alternatives. Granted, no one has any right to impose a particular way of defining terms on others, but I still think using the term ‘AI’ the same way the people actually working on AI in the real world use it (as opposed to science fiction, where ‘an AI’ is appropriate) makes a lot of sense.

TL;DR

AI researchers tend to define ‘Artificial Intelligence’ to mean something along the lines of “the science and engineering of making intelligent machines, especially intelligent computer programs” – the ‘field of research’ definition. This is in contrast to the usage often found in popular media, where ‘AI’ refers to particular programs or systems in sentences such as “A new AI can do Y” – the ‘intelligent entity’ definition. The latter usage reinforces an already common myth that present-day AI has some amount of human-like agency, when in fact this is not really the case. Therefore, I believe the ‘intelligent entity’ type of definition should be avoided in favor of the ‘field of research’ definition, and the term should be used accordingly.
