
AI News in First Half of 2021: a Digest

An overview of the big AI-related stories of the first half of 2021


Overview

With 2021 quickly approaching its second half, we’d like to reflect on what’s happened in AI during a year that began in the midst of the pandemic. Above is a wordcloud of the most common words used in the titles of articles we’ve curated in our ‘Last Week in AI’ newsletter, reflecting the roughly 500 articles we’ve included in 2021 so far:

Counts of terms in articles vs time

Digging a bit deeper, we find that COVID-19 is no longer at the forefront of the news; facial recognition, bias, and deepfakes are now covered most:

Counts of terms in article titles vs time

Among institutions, Google still receives by far the most coverage:

Counts of terms in article titles vs time
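For the curious, here is a minimal sketch of how counts like these could be produced. It assumes a hypothetical CSV export of our curated articles with date and title columns; the file name, column names, and tracked terms are purely illustrative.

```python
# A rough sketch of counting tracked terms in curated article titles by month.
# Assumes a hypothetical CSV file "curated_articles_2021.csv" with ISO-formatted
# "date" and plain-text "title" columns; both are illustrative, not our actual data.
from collections import Counter
import csv
from datetime import datetime

TERMS = ["facial recognition", "bias", "deepfake", "covid", "google"]

monthly_counts = {}  # maps (year, month) -> Counter of term mentions

with open("curated_articles_2021.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        title = row["title"].lower()
        date = datetime.fromisoformat(row["date"])
        bucket = monthly_counts.setdefault((date.year, date.month), Counter())
        for term in TERMS:
            if term in title:
                bucket[term] += 1

# Print a simple month-by-month table of term mentions in titles.
for (year, month), counts in sorted(monthly_counts.items()):
    summary = ", ".join(f"{term}: {counts[term]}" for term in TERMS)
    print(f"{year}-{month:02d}  {summary}")
```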

But enough overview; let’s go through the most significant articles we’ve curated from the first half of the year, month by month.

As with our newsletter, these articles cover Advances & Business, Concerns & Hype, Analysis & Policy, and in some cases Expert Opinions & Discussion within the field. They are presented in chronological order and represent a curated selection that we believe is particularly noteworthy. Click on the name of each month for the full newsletter issue that started off that month.

January

Chris Mills Rodrigo / Getty Images via The Hill

A big topic carried over from last year is the explosion of facial recognition software and the growing public pushback it is receiving, and January started off with new lawsuits and regulations on its use. Separately, OpenAI publicized impressive new work: DALL-E, a model able to generate pictures from language prompts, alongside CLIP, which learns to match images with text descriptions.

February

Benedikt Geyer and Michael Daniels / Unsplash

Another thread carried over from last year is the turmoil on Google’s Ethical AI team. Following the firing of researcher Timnit Gebru, Google fired Margaret Mitchell, leading to more controversy in the press. Again on the topic of facial recognition, Amnesty International produced a crowdsourced map to expose where facial-recognition-capable surveillance cameras might be watching. An OECD task force led by former OpenAI policy director Jack Clark was formed to help national governments calculate their compute needs, in an effort to craft better-informed AI policy.

March

Leon Neal / Getty Images via Scientific American

The AI Index report released in March paints an optimistic picture of the future of AI development: we are seeing significant increases in private AI R&D, especially in healthcare. Simultaneously, concerns about AI continue to manifest. Karen Hao of the MIT Technology Review interviewed an important player in Facebook’s AI Ethics group and found that Facebook was over-focusing on AI bias at the expense of grappling with the more destructive effects of its AI systems. In another development stemming from Google’s Ethical AI fallout, a researcher publicly rejected a grant from the behemoth.

April

Boston Dynamics via The Verge

Another report released in April, an analysis of the U.S. AI workforce, shows that the number of AI workers grew four times as fast as all U.S. occupations. This month we also saw more reports on the EU’s growing regulation of commercial AI applications in high-risk areas, an important legislative framework that may be borrowed by other governments in the future.

May

James Vincent / The Verge

Late May saw Google announce new large language models that could significantly change how Search and other Google products work in the future. This builds on the explosion of language model sizes over the last two years. Perhaps this drive to commercialize large language models was behind the firing of its Ethical AI team leads months earlier, who at the time were focused on characterizing the potential harms of using such models.

June

Caroline Brehman / Getty Images via WIRED

A worrying report from the U.N. surfaced in June, describing the possibility that a drone autonomously targeted and attacked soldiers during last year’s conflict in Libya. The report has yet to be independently verified, but the proliferation of lethal autonomous weapons looms large, as there are no global agreements limiting their use. As another sign of things to come, June also saw King County ban government use of facial recognition technology, the first such ban at the county level in the U.S.

Conclusion

If you’ve enjoyed this piece, subscribe to our ‘Last Week in AI’ newsletter!

Also, check out our weekly podcast covering these stories! Website | RSS | iTunes | Spotify | YouTube
