
The Crazy Coverage of Facebook's Unremarkable 'AI Invented Language'

Sometimes the narratives the media conjures up just serve to make real life seem boring


A truly astounding exemplar of how coverage of AI goes horribly wrong.

What Happened

The paper that originated all this terror has the fun, seemingly un-scary title “Deal or No Deal? End-to-End Learning for Negotiation Dialogues” and was written by researchers from Facebook and the Georgia Institute of Technology. As the title implies, the problem being addressed is the creation of AI models for human-like negotiation through natural language. To tackle this, the researchers first collected a brand new dataset of 5,808 negotiations between plain ol’ humans using the data-collection workhorse Mechanical Turk.

These negotiations involve two people agreeing on how to split a set of items, where each item is worth a different amount to each negotiator. This is a nicer problem to tackle than generic chatbot conversation, because there is a straightforward evaluation metric (how much value each negotiator manages to secure) and the language needed is narrower than generic speech, as sketched below. The final approach used a combination of supervised learning (simply imitating the dialogues from the dataset) and reinforcement learning (learning to generate speech so as to maximize negotiation outcomes).
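To make that evaluation metric concrete, here is a small illustration (my own sketch; the item names echo the paper's setup, but the counts and values are invented): each side has private per-item values, and a negotiator's score is simply the total value of the items they walk away with.

```python
# Toy illustration of the negotiation task's scoring (numbers invented):
# both sides split the same pool of items, but each has private per-item
# values, so the same split is worth different amounts to each negotiator.

ITEM_POOL = {"book": 1, "hat": 2, "ball": 3}   # counts of each item in the pool

values_a = {"book": 4, "hat": 0, "ball": 2}    # negotiator A's private values
values_b = {"book": 0, "hat": 2, "ball": 2}    # negotiator B's private values

def score(split, values):
    """The evaluation metric: total value of the items a negotiator keeps."""
    return sum(count * values[item] for item, count in split.items())

# One possible agreed split: A keeps the book and a ball, B gets the rest.
split_a = {"book": 1, "hat": 0, "ball": 1}
split_b = {"book": 0, "hat": 2, "ball": 2}

print(score(split_a, values_a))   # 6 -- A's outcome
print(score(split_b, values_b))   # 8 -- B's outcome
```

This is a pretty hefty paper, and all this drama came about because of one tiny section: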

During reinforcement learning, an agent A attempts to improve its parameters from conversations with another agent B. While the other agent B could be a human, in our experiments we used our fixed supervised model that was trained to imitate humans. The second model is fixed as we found that updating the parameters of both agents led to divergence from human language.

To be clear, this is not all that surprising, since the optimization criterion here is much more specific than developing a robust generic language for communicating about the world. And even if the criterion were that broad, there is no reason for the optimization to converge upon English without some supervised loss to guide it there. I am no professor, but even with my meager knowledge of AI I am fairly confident in saying this is a truly, utterly unremarkable outcome.
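For intuition, here is a self-contained toy sketch (mine, not the authors' code, and reusing the invented numbers from the earlier snippet) of the regime the quoted passage describes: agent A improves with a REINFORCE-style policy-gradient update, while its partner “B” is a fixed rule standing in for the frozen supervised model. Notice that the reward never mentions language at all; only the frozen, human-imitating partner (or a supervised loss) anchors the learner to anything resembling English.

```python
import math
import random

# Toy sketch of RL-based negotiation with a *fixed* partner (not the paper's
# actual model). Agent A learns which split to demand via REINFORCE; "B" is a
# hard-coded rule standing in for the frozen, human-imitating supervised model.

ITEM_POOL = {"book": 1, "hat": 2, "ball": 3}
VALUES_A = {"book": 4, "hat": 0, "ball": 2}    # A's private values
VALUES_B = {"book": 0, "hat": 2, "ball": 2}    # B's private values

PROPOSALS = [                                  # splits A can demand for itself
    {"book": 1, "hat": 0, "ball": 0},          # modest
    {"book": 1, "hat": 0, "ball": 2},          # ambitious but acceptable
    {"book": 1, "hat": 2, "ball": 3},          # greedy: take everything
]

def value(split, values):
    return sum(n * values[item] for item, n in split.items())

def b_accepts(proposal):
    """Fixed partner: agrees only if B keeps at least 6 points of value."""
    leftover = {i: ITEM_POOL[i] - proposal.get(i, 0) for i in ITEM_POOL}
    return value(leftover, VALUES_B) >= 6

logits = [0.0] * len(PROPOSALS)                # A's (tiny) policy parameters
LEARNING_RATE = 0.1

def sample_proposal():
    """Sample an index from the softmax over logits; also return the probs."""
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    r, cumulative = random.random(), 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r <= cumulative:
            return i, probs
    return len(probs) - 1, probs

for episode in range(2000):
    i, probs = sample_proposal()
    reward = value(PROPOSALS[i], VALUES_A) if b_accepts(PROPOSALS[i]) else 0.0
    # REINFORCE: grad of log pi(i) w.r.t. logit j is (1 if j == i else 0) - p_j.
    for j in range(len(logits)):
        logits[j] += LEARNING_RATE * reward * ((1.0 if j == i else 0.0) - probs[j])

best = max(range(len(PROPOSALS)), key=lambda j: logits[j])
print("A converges on demanding:", PROPOSALS[best])   # typically the ambitious split
```

Swap the fixed rule for a second learning agent and nothing ties the pair to any particular protocol: they are free to drift into whatever private code maximizes reward, which is exactly the mundane effect the paper reported and guarded against.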

The Reactions

It started with a tweet:

And then, things developed swiftly:

Our Perspective

As the multiple hype-debunking articles just above imply, this small aspect of the research is really not a big deal. The only big deal here is how crazy the coverage of this story got, given how mundane the AI development was. After all, few media stories have ever gotten AI researchers this heated:

And that right there is all that needs to be said.

TLDR

AI models optimizing their way into nonsensical communication is neither surprising nor impressive, which makes the extremely hyperbolic media coverage of this story downright impressive.
