
AI News in 2021: a Digest

An overview of the big AI-related stories from 2021


Overview

With 2021 over, we’d like to reflect on what’s happened in AI during a year that began in the midst of the pandemic, and ended still in the midst of the pandemic. Above is a wordcloud of the most common words used in titles of articles we’ve curated in our ‘Last Week in AI’ newsletter over this past year. This reflects about 1250 articles that we’ve included in the newsletter in 2021:

Counts of terms in articles vs time

Digging a bit deeper, we can see the most popular topics covered in these news articles based on keywords in their titles:

Counts of terms in article titles vs time

Among institutions, Google still receives by far the most coverage:

Counts of terms in article titles vs time
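For readers curious how keyword counts like these can be produced, here is a minimal sketch of counting term frequencies in article titles. The sample titles, stopword list, and function name are illustrative assumptions, not the actual pipeline behind our charts:

```python
from collections import Counter
import re

# Illustrative sample of article titles (not our real dataset)
titles = [
    "Google fires AI ethics researcher",
    "Facial recognition faces new lawsuits",
    "OpenAI unveils CLIP and DALL-E",
    "Google announces new language models",
]

# A tiny stopword list; a real pipeline would use a larger one
STOPWORDS = {"the", "and", "new", "for", "with"}

def term_counts(titles, stopwords=STOPWORDS):
    """Count how often each non-stopword term appears across titles."""
    counter = Counter()
    for title in titles:
        words = re.findall(r"[a-z']+", title.lower())
        counter.update(w for w in words if w not in stopwords and len(w) > 1)
    return counter

counts = term_counts(titles)
print(counts.most_common(5))
```

Bucketing titles by publication month before counting yields the terms-vs-time charts shown above.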

But enough overview – let’s go through the most significant articles we’ve curated from the past year, month by month.

As with our newsletter, these articles cover Advances & Business, Concerns & Hype, Analysis & Policy, and in some cases Expert Opinions & Discussion within the field. They are presented in chronological order and represent a curated selection of stories we believe are particularly noteworthy.

January

Chris Mills Rodrigo / Getty Images via The Hill

Newsletter links: Week 1 | Week 2 | Week 3 | Week 4
Highlights:

Our Short Summary: A big topic carried over from last year is the explosion of facial recognition software and the growing public pushback it is receiving, and January started off with new lawsuits and regulations on its use. Separately, OpenAI publicized an impressive new model called CLIP, which links text and images and can be paired with generative models to produce pictures from language prompts.

February

Benedikt Geyer and Michael Daniels / Unsplash

Newsletter links: Week 1 | Week 2 | Week 3 | Week 4 | Week 5
Highlights:

Our Short Summary: Another thread carried over from last year is the turmoil at Google’s Ethical AI team. Following the firing of researcher Timnit Gebru, Google fired Margaret Mitchell, leading to more controversy in the press. Again on the topic of facial recognition, a crowdsourced map was produced by Amnesty International to expose where these cameras might be watching. An OECD task force led by former OpenAI policy director Jack Clark was formed to calculate compute needs for national governments in an effort to craft better-informed AI policy.

March

Leon Neal / Getty Images via Scientific American

Newsletter links: Week 1 | Week 2 | Week 3 | Week 4
Highlights:

Our Short Summary: The AI Index report released in March paints an optimistic picture of the future of AI development - we are seeing significant increases in private AI R&D, especially in healthcare. Simultaneously, concerns about AI continue to mount. Karen Hao of the MIT Technology Review interviewed a key player in Facebook’s AI Ethics group, and found that Facebook was over-focusing on AI bias at the expense of grappling with the more destructive effects of its AI systems. In another development stemming from Google’s Ethical AI fallout, a researcher publicly rejected a grant from the behemoth.

April

Boston Dynamics via The Verge

Newsletter links: Week 1 | Week 2 | Week 3 | Week 4 | Week 5
Highlights:

Our Short Summary: Another report released in April, an analysis of the U.S. AI workforce, shows that AI occupations grew four times as fast as all U.S. occupations overall. This month we also saw more reports on the EU’s proposed regulations of commercial AI applications in high-risk areas. This is an important legislative framework that may be borrowed by other governments in the future.

May

James Vincent / The Verge

Newsletter links: Week 1 | Week 2 | Week 3 | Week 4
Highlights:

Our Short Summary: Late May saw Google announce new large language models that could significantly change how search and other Google products work in the future. This builds on the explosion in language model sizes over the last two years. Perhaps this drive to commercialize large language models was behind the firing of its Ethical AI team leads months earlier, who at the time were focused on characterizing the potential harms of using such models.

June

Caroline Brehman / Getty Images via WIRED

Newsletter links: Week 1 | Week 2 | Week 3 | Week 4 | Week 5
Highlights:

Our Short Summary: A worrying U.N. report surfaced in June describing the possibility that a drone autonomously targeted and attacked soldiers during last year’s conflict in Libya. The report has yet to be independently verified, but the proliferation of lethal autonomous weapons looms large, as there are no global agreements limiting their use. In another sign of things to come, June also saw King County ban government use of facial recognition technology, the first such county-level ban in the U.S.

July

MORDECHAI RORVIG / Vice

Newsletter links: Week 1 | Week 2 | Week 3 | Week 4
Highlights:

Our Short Summary: CLIP+VQGAN pairs OpenAI’s CLIP model with a VQGAN image generator, allowing users to direct AI image generation with text prompts. While CLIP has been public since January, AI-generated art using this technique really took off over the summer. The details of DeepMind’s AlphaFold 2 were also shared this month, and the technology was open sourced. As throughout the year, concerns and actions related to facial recognition were prominent this month. GitHub Copilot, an AI-powered ‘autocomplete for programming’, was announced to much excitement.

August

Stanford

Newsletter links: Week 1 | Week 2 | Week 3 | Week 4 | Week 5
Highlights:

Our Short Summary: This month a team of 100+ Stanford researchers published a 200+ page paper, On the Opportunities and Risks of Foundation Models. Foundation models are large models trained on vast amounts of data, like GPT-3, that can be transferred to downstream tasks with little data of their own. While the ‘foundation’ moniker and the manner of the paper’s release stirred up some controversy, it is undeniable that large pre-trained models will continue to play a major role in AI moving forward, and the paper offers many insights on their capabilities and limitations. Otherwise, the typical stream of stories about facial recognition, bias, and self-driving persisted.

September

Lawfare

Newsletter links: Week 1 | Week 2 | Week 3 | Week 4
Highlights:

Our Short Summary: A U.S. federal court ruled that under current U.S. law, only people - not AI algorithms - can be listed as inventors on patents. Supporters of the ruling cite the threat of AI-powered patent trolls, which might use AI to “generate” countless patents in the hope of profiting from patent infringement lawsuits. Critics counter that allowing AI-authored patents would incentivize the development of such AIs. Regardless, the U.S. could still permit AI-authored patents, but that would require new laws from Congress. As the highlights above show, there was a healthy mix of positive AI news alongside new information on scandals surrounding misinformation and military contracts.

October

Amazon via IEEE Spectrum

Newsletter links: Week 1 | Week 2 | Week 3 | Week 4
Highlights:

Our Short Summary: We saw several articles relevant to one of the biggest AI stories of the past year – the firing of Dr. Timnit Gebru from Google. The highlights above suggest that lawyers are involved in reviewing research, which is quite unusual, and include a discussion of AI ethics research in industry more broadly. We saw yet another story about Clearview AI, a constant throughout the year. On the research side, many roboticists were excited by DeepMind acquiring and open sourcing the MuJoCo physics simulator. DeepMind also continued its trend of applying machine learning to practical problems, as it had done with AlphaFold 2, this time with new research on impressively accurate rain prediction.

November

Adobe

Newsletter links: Week 1 | Week 2 | Week 3 | Week 4 | Week 5
Highlights:

Our Short Summary: On Oct. 28, Adobe announced a new AI-powered video editing tool called Project Morpheus. It can be used to edit people’s expressions and other facial attributes, leading some to worry that it is a deepfake tool, despite its capabilities being quite limited. In a surprise announcement, Facebook stated it will no longer use facial recognition on its service, and will even delete billions of records collected for it - a step hailed by privacy advocates. Alphabet announced that its Everyday Robots Project team has been deploying robots to carry out custodial tasks on Google’s Bay Area campuses, a neat demonstration of the team’s progress in getting robots out of the lab and into the real world.

December

DeepMind via NewScientist

Newsletter links: Week 1 | Week 2 | Week 3 | Week 4
Highlights:

Our Short Summary: Most dramatically, exactly one year after announcing her firing, Dr. Timnit Gebru unveiled her new Distributed AI Research institute. The organization is meant to be independent of big tech funding, and therefore freer to do impactful AI ethics research. DeepMind once again impressed with cross-disciplinary research, this time focused on mathematics. Yet more developments were covered with regard to Clearview AI, this time penalties it faces outside of the US. On the whole, the year ended without much further drama.

Conclusion

And so another year of AI news comes to an end. We obviously could not cover all the developments in this digest, so if you’d like to keep up with AI news on an ongoing basis, subscribe to our ‘Last Week in AI’ newsletter!
