Analysis of 100 Weeks of Curated AI News

A look at trends from 100 weeks of AI news, and AI-generated AI news snippets!

Image credit: AI News Word Cloud across 100 Weeks

With this week’s “Last Week in AI,” Skynet Today has just released the 100th edition of our weekly newsletter summarizing each week’s major AI news! That’s almost two years of covering developments in AI, sifting through the often noisy AI news landscape for important trends while highlighting potential hype and concerns.

To commemorate the occasion, we’re publishing a few notable statistics from these past 100 weeks. And, for fun, we’ve fine-tuned a GPT-2 model on all the AI news articles we’ve covered so far to generate a few AI news snippets.

Article Count
The number of articles per newsletter increased after mid-2019, when we adopted a more automated pipeline for collecting and filtering AI news articles.
AI Topics
At least among the articles we collect, those related to "robots" grew much faster than other areas. Note: keyword trends also count related terms; for example, "hardware" also counts mentions of "GPU".
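For the curious, here is a minimal sketch of how this kind of keyword counting can work. The keyword groups and example articles below are hypothetical illustrations, not our exact pipeline:

```python
from collections import Counter

# Hypothetical keyword groups: each topic counts mentions of any related
# term, e.g. "hardware" also counts mentions of "GPU".
KEYWORD_GROUPS = {
    "robots": ["robot", "robotics"],
    "hardware": ["hardware", "gpu", "tpu", "chip"],
}

def count_topics(articles):
    """Count how many articles mention each topic at least once.

    `articles` is a list of strings (e.g. title plus summary).
    """
    counts = Counter()
    for text in articles:
        lowered = text.lower()
        for topic, terms in KEYWORD_GROUPS.items():
            if any(term in lowered for term in terms):
                counts[topic] += 1
    return counts

articles = [
    "New GPU accelerates deep learning training",
    "Boston Dynamics robot learns to open doors",
]
print(count_topics(articles))  # one mention each of "hardware" and "robots"
```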
AI News Topics
After late 2019, articles about facial recognition grew much faster than others, with COVID-19 articles making a sudden entrance in early 2020. Articles about deepfakes also saw a sudden jump starting mid-2019.
AI Institution Topics
Perhaps unsurprisingly, Google dominates the AI news cycle in terms of mentions of institution names, with the other tech giants Amazon, Facebook, and Microsoft trailing at a distant second.
Sentiment
Sentiment analysis of article summaries across various topics yields some expected but neat results: some AI applications, such as art, receive mostly positive coverage, while others, such as surveillance, receive generally negative coverage.
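As an illustration, here is a rough sketch of topic-level sentiment scoring using NLTK's off-the-shelf VADER analyzer; the summaries below are made up, and our actual analysis may have used a different model:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

# Hypothetical article summaries grouped by topic.
summaries = {
    "art": "This AI-generated painting delights critics with its beauty.",
    "surveillance": "The facial recognition rollout raises serious privacy fears.",
}

for topic, text in summaries.items():
    # compound score ranges from -1 (most negative) to +1 (most positive)
    score = sia.polarity_scores(text)["compound"]
    print(f"{topic}: {score:+.2f}")
```

Averaging such per-summary scores within each topic gives a simple view of how positively or negatively a topic tends to be covered.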

AI-Generated AI News Articles

We fine-tuned a GPT-2 model on the AI articles we linked throughout our newsletters to generate the following AI news snippets. While they’re written in the style of AI news, mentioning the appropriate keywords and phrases, they’re generally logically inconsistent and contain obvious factual errors. For example, the first snippet mentions a researcher by the name of Atsushi Sakamoto. While this name does belong to a real researcher, in real life he is a Professor of Integrated Life Sciences at Hiroshima University, not someone studying AI at the University of Tokyo. This is unsurprising given how these models work (see here, here, and here).
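For readers who want to try this themselves, here is a minimal sketch of fine-tuning and sampling GPT-2 with the Hugging Face transformers library. The training loop, hyperparameters, and placeholder corpus are simplified for illustration, not our exact setup (see the linked code below for that):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

articles = ["..."]  # placeholder: the collected article texts go here

# Fine-tune with a plain causal language-modeling objective (batch size 1).
model.train()
for text in articles:
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    loss = model(**enc, labels=enc["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Sample a snippet from a headline-style prompt.
model.eval()
prompt = tokenizer("Researchers develop", return_tensors="pt")
out = model.generate(
    **prompt,
    max_length=100,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```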

But reading these generated snippets can still be an interesting experience - some are as bad as others are hilarious:

Researchers develop a traffic light for self-driving cars
A pair of new papers from University of Tokyo Institute of Technology researchers and the World Health Organization claim to have addressed one of the great challenges of artificial intelligence: traffic lights. Thanks to machine learning, they can illuminate traffic for much more effective and safer operation than they could on paper. In research authored by Atsushi Sakamoto and colleagues at the Institute of Intelligent Systems and Decision Support in Tokyo, the team presents an efficient way to turn traffic lights on and off. In practice, however, identifying and setting the timing for lighting within a traffic space can prove difficult, as it’s often unpredictable and there may even be visual clues that indicate the start of a car ahead or the end of a short one. Sakamoto and colleagues came up with an efficiency-enabling solution based on approaches developed by researchers at Google, IBM, and Toyota.
How I used NLP's GPT-3 to write the AI I Created
When it comes to creating AI, there are 2 main choices I have:- write a clean code or- just use the output from the original app to clean code. Either way, there are a lot of good articles and code examples on arxiv.com.The reason I choose the latter is both lovely and surprising: with a solid foundation of 30 lines of code (more on that later), this way of creating and using AI is fairly simple. I wanted a clean, sharp and concise writing experience that would let my coworkers and I take on difficult creative problems with clear writing. What I didn't expect was for the first part of the code to become a bottleneck, exposing the messiness of the AI. The second reason may be more complex: I was writing this sentence while driving, and as I was about to cross the street, I accidentally sent the car driving by on the bright day and ended up writing another code to keep driving. Despite its simplicity, the problem with GPT-3 is its inability to generate long sequences.
AI researchers propose making chatbots and voice assistants more like their real-world peers
Chatbots, which are a form of virtual personas, have long been the interface between human and machine. Advances in the past few years have democratized voice assistants, responding to the needs of multiple speakers at once, aiming to reduce set times and improve duration. But much of the field has proven difficult to apply conversational AI effectively in the real world, especially on sensitive tasks involving such as sales or customer service.To address this issue, a team from MIT Technology Review (MIT Tech Review) and Beijing-based research institute Beijing Brain Institute have developed a novel approach to training conversational AI systems on a dataset of short conversations over long periods of time. The researchers developed a chatbot named QUEBO after their lab name, which stands for QUEBO, "Quō-" or "QUE-BO." They say the research will serve as a benchmark to benchmark conversational AI’s capability to handle small conveniences like social media conversations in real time.
Amazon to Retrain a Third of Its U.S. Workers as Automation Advances
So far, the American workforce has been largely in workers of color. But Amazon is about to ramp up hiring of some of its more 200,000 hourly workers, according to a person familiar with the company’s plans. The new hires won’t be those in the same field as its more self-professed “white-collar workers,” as the company abbreviates its growing staff. Instead, the roughly 1,000 new workers will be drawn from two large and multiracial groups: Amazon has laid off thousands of Black and Latinx workers, has laid off dozens of Latinx employees, and is currently seeking Federal Employment Development money to pay for new workers. “Right now, the numbers of Black, Latinx, and African American workers are shrinking, because there’s still not enough talent available to continue these high-volume, high-pressure work,” the person familiar with Amazon’s plans explained. Amazon declined to comment on the more than 1,000 new positions it plans to employ.
Meet Microsoft's first AI-powered coffee machine
Jeff Dean, chair of Microsoft’s artificial intelligence (AI) division, recently sat down with WIRED senior writer Will Knight to discuss the value of building products that do a lot of good, but don’t always do much of anything else. Dean tells WIRED that he thinks we’ll all need to find some sort of “general purpose product” — a conceptual mashup of software and hardware that completely reimagines the capabilities of a specific computer (or, more generally, a business) in order to tackle one task. It’ll probably require buy-in from product builders, analyst agnostics, human resources departments, call centers, product managers, and a few other well-intentioned minds, but Microsoft has a pretty good shot at that right now. Dean tells WIRED that while there’s certainly value in building product companies that “do something interesting,” they also need to do it in a way that can tackle customer needs.

See here for all the raw generated article snippets.

For those who are interested, see here for the code we used to generate the plots above and to train and run the GPT-2 model. Or, play with the data right away by copying and messing around with this Colab Notebook.
