
AI News in 2020: a Digest

An overview of the big AI-related stories of 2020


Overview

With 2020 (finally) drawing to a close, it’s a good time to reflect on what happened with AI in this most weird of years. Above is a wordcloud of the most common words used in the titles of articles we’ve curated in our ‘Last Week in AI’ newsletter over this past year. It reflects the roughly 1000 articles we’ve included in the newsletter in 2020:

[Figure: Counts of terms in articles vs time]

Unsurprisingly, the vague but recognizable term “AI” remained the most popular term to use in article titles, with more specific terms such as “Deep Learning” or “neural network” comparatively rare:

[Figure: Counts of terms in article titles vs time]

Digging a bit deeper, we find that Coronavirus and Facial Recognition were the biggest topics of the year, followed by bias, deepfakes, and others:

[Figure: Counts of terms in article titles vs time]
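
As a rough illustration of how counts like those charted above could be produced, here is a minimal sketch in Python. It assumes a hypothetical `articles_2020.csv` with `title` and `date` columns for the curated articles; the file name, column names, and term list are illustrative, not the newsletter’s actual pipeline.

```python
import re

import matplotlib.pyplot as plt
import pandas as pd

# Terms to track in article titles (illustrative list).
TERMS = ["ai", "facial recognition", "coronavirus", "bias", "deepfake"]

# Hypothetical export of the year's curated articles: one row per article,
# with a "title" and a publication "date".
df = pd.read_csv("articles_2020.csv", parse_dates=["date"])
df["month"] = df["date"].dt.to_period("M")
titles = df["title"].str.lower()

# For each term, count how many article titles mention it in each month,
# using word boundaries so "ai" doesn't match words like "maintain".
counts = pd.DataFrame({
    term: titles.str.contains(rf"\b{re.escape(term)}\b")
                .groupby(df["month"])
                .sum()
    for term in TERMS
})

counts.plot(title="Counts of terms in article titles vs time")
plt.show()
```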

But enough overview – let’s go through the most significant articles we’ve curated from the past year, month by month. As with our newsletter, these articles cover Advances & Business, Concerns & Hype, Analysis & Policy, and in some cases Expert Opinions & Discussion within the field. They are presented in chronological order and represent a curated selection of articles we believe are particularly noteworthy. Click on each month’s name for the full newsletter issue that kicked off that month.

January

Things started off pretty calm in 2020, with a lot of discussion about what to expect from AI in the future, and some articles on issues with facial recognition and bias, which would become a recurring theme throughout the year:

February

February saw more discussions of the negative impacts of AI, along with some pieces highlighting efforts to use it for good, and the beginnings of AI being connected to the Coronavirus pandemic:

March

March was a big month, with three stories standing out. First is the shutdown of Starsky Robotics, a promising startup that worked on self-driving trucks. In a detailed blog post, the founder discussed the immense technological, safety, and economic challenges facing the autonomous driving industry.

Second is the publicity around Clearview AI, which violated many ethical and legal norms by scraping pictures of faces from the Internet to power its facial recognition system, one that allows its customers, from law enforcement to retail chains, to search for anyone using a picture of their face.

Last is the flood of reports on the fast-developing Covid-19 pandemic and the roles AI and robotics can (and cannot) play in helping alleviate it.

April

April saw a continuation of many stories centered on Covid-19, with some exceptions more related to ethical AI development:

May

May was much like April, with a lot of focus on Covid-19 and a mix of stories on ethics, jobs, and advancements:

June

This month saw the massive protests following George Floyd’s killing, leading many to re-examine police conduct in the U.S. Within the AI community, this often meant questioning police use of facial recognition technologies and the inherent bias in deployed AI algorithms. It was against this backdrop that companies like Amazon and IBM paused sales of facial recognition software to law enforcement, and many nuanced conversations followed.

Other news included:

July

This month the publicity around OpenAI’s GPT-3, a very large and flexible language model, began to soar as the company released results from its private-beta trials. Although the GPT-3 paper was published in May, it wasn’t until now that people started to realize the extent of its potential applications, from writing code to translating legalese, as well as its limitations and potential for abuse.

Other news included:

August

More discussion of GPT-3 kept popping up this month, along with the usual concerns about facial recognition, bias, and jobs. Discussion of the Coronavirus had mostly dwindled.

September

Concerns over bias, facial recognition, and other issues with AI really came to the fore this month, with some discussion of progress also mixed in.

October

October was a more positive month, with many more stories regarding advancements and uses of AI and fewer concerning its negative aspects.

November

This month was much like the last, with ongoing discussions around ethics and issues with AI, as well as various advancements demonstrating how fast the field is moving.

December

This month saw another impressive AI development touted as a breakthrough by academics and the press. DeepMind’s AlphaFold 2 made a significant advance with its results in the biennial protein structure prediction competition, beating competitors and the previous AlphaFold 1 by large margins. While many experts agree that protein folding hasn’t been “solved” and caution against unfounded optimism regarding the algorithm’s immediate applications, there is little doubt that AlphaFold 2 and similar systems will have big implications for the future of biology.

In the AI community, another big news story was Google’s firing of Timnit Gebru, a leading AI ethics researcher, over disagreements regarding her recent work highlighting bias and other concerns with AI language models. This sparked a number of pointed conversations in the field, from the role of race and the lack of diversity in AI to corporate censorship in industry labs.

Other news:

Podcast

Check out our weekly podcast covering these stories! Website | RSS | iTunes | Spotify | YouTube
