
OpenAI’s GPT-2 - Food for Media Hype, or a Wake-Up Call?

An effort to encourage the AI research community to talk about responsible disclosure of technology was met with strong criticism


Image credit: Delip Rao, When OpenAI tried to build more than a Language Model

What Happened

On February 14, 2019, the non-profit AI research company OpenAI released the blog post Better Language Models and Their Implications, which covered new research based on a scaled-up version of their transformer-based language model [1], initially released in June 2018. The new model, called GPT-2 [2], was shown to be capable of writing long-form, coherent passages after being provided with a short prompt. This is an impressive feat that previous models have struggled to achieve, though one the field had been steadily moving towards.

The first prompt and example output (the ‘unicorn story’) highlighted in OpenAI's blog post.
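To make the capability concrete, here is a minimal sketch of prompt-conditioned generation with the small GPT-2 checkpoint that OpenAI did release. The Hugging Face transformers library, its "gpt2" checkpoint, and the sampling settings are assumptions made for illustration; this is not OpenAI's own release code.

```python
# A minimal sketch (not OpenAI's release code): prompting the small, publicly
# released GPT-2 checkpoint to continue a passage, using the Hugging Face
# `transformers` library. Model name and sampling settings are illustrative.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # small released checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "In a shocking finding, scientists discovered a herd of unicorns"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation; top-k sampling is roughly in the spirit of the
# samples shown in the blog post.
output = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Even the small released model produces fairly fluent continuations, which is part of why the outputs of the larger, withheld models drew so much attention.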

In addition to presenting these impressive new results, the post commented on several potential misuses of their state-of-the-art model (such as generating misleading news articles, impersonating others online, and large-scale production of spam) and explained that OpenAI chose to pursue an unusually closed release strategy because of these potential misuses:

“Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights… This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas. Other disciplines such as biotechnology and cybersecurity have long had active debates about responsible publication in cases with clear misuse potential, and we hope that our experiment will serve as a case study for more nuanced discussions of model and code release decisions in the AI community.”

This goes against what is increasingly the norm in AI research: when sharing new findings, it is now typical to also share the data, code, and pre-trained models needed to run the relevant experiments. This openness has been one of the main reasons the field has progressed so rapidly over the past decade, as researchers do not have to spend months reproducing each other’s work and can verify and build on new results quickly. Although this has been beneficial, many in the field have also started to discuss the potential of open source code and pre-trained models for dual use [3], and the implications this has for how researchers should share new developments.

Such coherent output, combined with OpenAI’s strong stance on not releasing their model, data, and code, was notable enough to attract a great deal of attention. This alone would have been enough to incite much discussion, but there was a second element to this event: before the AI research community was aware of the new work, multiple journalists were briefed on it and given access so they could publish articles soon after the blog post went up. The resulting flurry of media coverage almost instantly made OpenAI’s blog post, and the reporting on it, a major source of discussion in the AI research community.

The Reactions

As is often the case, most reporting from reputable sources covered the details accurately. For example, from OpenAI’s new multitalented AI writes, translates, and slanders:

“…But as is usually the case with technological developments, these advances could also lead to potential harms. In a world where information warfare is increasingly prevalent and where nations deploy bots on social media in attempts to sway elections and sow discord, the idea of AI programs that spout unceasing but cogent nonsense is unsettling.

“For that reason, OpenAI is treading cautiously with the unveiling of GPT-2. Unlike most significant research milestones in AI, the lab won’t be sharing the dataset it used for training the algorithm or all of the code it runs on… ”

At the same time, most coverage went with eye-catching headlines that ranged from “New AI fake text generator may be too dangerous to release, say creators” to “Researchers, scared by their own work, hold back ‘deepfakes for text’ AI”.

Headlines about GPT-2.
Screenshot by Delip Rao, from When OpenAI tried to build more than a Language Model

Hyped-up results don’t make anybody happy. They hurt AI researchers by promoting fear-mongering or setting unrealistic expectations, and they hurt the public by promoting an incorrect understanding of the work. So finding out about new results in their field through such headlines prompted many in the AI research community to voice criticism of OpenAI’s arguably PR-first communication of the research and of the decision not to publicly release their models. Reactions varied: some lauded the move as a bold and necessary step towards responsible AI.

Others considered it either a futile attempt or a poor decision doing more harm than good:

  • Anima Anandkumar, Director of AI at NVIDIA
  • Zachary Lipton, Professor at CMU
  • Denny Britz, former Google Brain researcher
  • Richard Socher, Principal Scientist, Salesforce Research
  • Mark O. Riedl, Professor at Georgia Tech
  • François Chollet, author of Keras and researcher at Google
  • Matt Gardner, research scientist at the Allen Institute for AI (AI2)

OpenAI’s reasoning was also openly satirized by many in the AI community:

  • Yoav Goldberg, Professor at Bar-Ilan University and Research Director of the Israeli branch of the Allen Institute for Artificial Intelligence
  • Yann LeCun, Chief AI Scientist at Facebook AI Research, Professor at NYU

Throughout this, Jack Clark and Miles Brundage at OpenAI were active in trying to provide answers, share their thought process, and take in feedback on how they could improve things in the future.

The conversation went on long enough for many follow-up articles to be written that go into more depth on these topics.

In summary, while most people agreed on the need to talk about the possible implications of the technology AI researchers are building, many criticized OpenAI’s approach for the following reasons:

  • Giving reporters early information and preferential access to cutting-edge research suggests a focus on PR over making research contributions.

  • The malicious uses of GPT-2 were merely hypothesized, and OpenAI did not even make it possible for researchers in the field to request access to the model or data. Such practices may lead to the rise of gatekeeping and disincentivize transparency, reproducibility, and inclusivity in machine learning research.

  • Given the existence of similar models and open source code, the choice not to release the model seems likely to mainly impact researchers and individuals with no interest in large-scale malicious use. The cost of training the model was estimated at $43,000, an amount that is insignificant for motivated malicious actors, and a number of clones of the dataset used have popped up since OpenAI’s announcement as well.

  • OpenAI did a poor job of acknowledging prior considerations about dual use in this space.

Our Perspective

There is not much left to be said on this story; the series of tweets above, as well as the articles Who’s afraid of OpenAI’s big, bad text generator? and OpenAI’s Recent Announcement: What Went Wrong, and How It Could Be Better, cover what needs to be said well. Even if OpenAI’s stated intentions were authentic (which is likely, given the company’s prior focus on dual use and its stated aim of promoting practices that prevent misuse of AI technologies), a better-thought-out approach to communicating their new research, and the hypothetical concerns about it, was certainly possible and needed. As put well in Who’s afraid of OpenAI’s big, bad text generator?:

“The general public likely still believes OpenAI made a text generator so dangerous it couldn’t be released, because that’s what they saw when they scrolled through their news aggregator of choice. But it’s not true, there’s nothing definitively dangerous about this particular text generator. Just like Facebook never developed an AI so dangerous it had to be shut down after inventing its own language. The kernels of truth in these stories are far more interesting than the lies in the headlines about them – but sadly, nowhere near as exciting.

The biggest problem here is that by virtue of an onslaught of misleading headlines, the general public’s perception of what AI can and cannot do is now even further skewed from reality. It’s too late for damage-control, though OpenAI did try to set the record straight.

No amount of slow news reporting can entirely undo the damage that’s done when dozens of news outlets report that an AI system is “too dangerous,” when it’s clearly not the case. It hurts research, destroys media credibility, and distorts politicians’ views.

To paraphrase Anima Anandkumar: I’m not worried about AI-generated fake news, I’m worried about fake news about AI.”

TLDR

GPT-2 is not known to be ‘too dangerous to release’, even if it might be. Whatever the motives of OpenAI may have been, discussion of how to most responsibly share new technology with a potential for misuse is good, and misleading articles calling largely incremental AI advances ‘dangerous’ are bad. Hopefully, any future ‘experiments’ from OpenAI will result in more of the former and less of the latter.

  1. A transformer is a popular new idea for machine learning with language – read more here. A language model is an algorithm that ingests sequences of text and predicts which words are most likely to occur next (a short illustrative sketch follows these notes). 

  2. Generative Pre-trained Transformer 2; an overview of the original GPT model can be read here. 

  3. Technologies designed for civilian purposes that may also have military applications, or, more broadly, technologies designed for beneficial uses that can be abused to cause harm. 
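As a concrete illustration of the next-word prediction described in note 1, the short sketch below inspects the probability distribution a small public GPT-2 checkpoint assigns to the next token after a given context. The use of the Hugging Face transformers library and the specific context string are assumptions made purely for illustration; this is not OpenAI's code.

```python
# A toy illustration of note 1's "predict the next word" idea, using the
# small public GPT-2 checkpoint via the Hugging Face `transformers` library
# (an assumption for illustration; this is not OpenAI's own code).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

context = "The unicorns spoke perfect"
input_ids = tokenizer.encode(context, return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the next token, given the context so far.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = next_token_probs.topk(5)
for prob, token_id in zip(top_probs.tolist(), top_ids.tolist()):
    print(f"{tokenizer.decode([token_id])!r}: {prob:.3f}")
```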
