
Skynet This Week #15: top AI news from 01/07-01/21

Hype, concern, great research blog posts, and more!

Our latest bi-weekly quick take on a bunch of the most important recent media stories about AI.

Advances & Business

AI’s failure to live up to the hype is starting to put off investors

Mike Lynch, Wired

Exaggerated and over-the-top claims of what AI techniques are capable of are starting to take their toll. Mike points out that while the advances in the field have been significant, the expectation that adding AI to every piece of technology will make it better will eventually lead to a decline in investor enthusiasm. He goes on to predict that once the hype passes, there will be a resurgence of interest.

AI Is About To Take The Ship’s Helm Away From Humans

Jeremy Bogaisky, Forbes

Yet another significant application of AI has become apparent: automating the control of ships. In particular, the many cargo ships that enable much of today’s globalized world are clear candidates for autonomous control. Still, as with much of today’s AI technology, what we have now is nowhere near replacing humans in this context entirely.

5 Great Human-Centered AI Papers from 2018

Emma Brunskill, Medium

Countering the popular view that Artificial Intelligence will conquer humanity or make us irrelevant, an alternate narrative is building: the possibility of a deeply human-focused AI destiny.

Emma Brunskill lists five important human-centered AI (HAI) papers from 2018. The papers span natural language understanding, fairness, accountability, and learning for healthcare.

Facebook and Stanford researchers design a chatbot that learns from its mistakes

Kyle Wiggers, VentureBeat

It is a testament to the times that freshly released AI research now often gets covered in largely nontechnical news outlets. And so it was with “Learning from Dialogue after Deployment: Feed Yourself, Chatbot!”, a recent paper from Facebook and Stanford. In this case, the attention may well be merited, as the paper outlines a promising approach to continuously improving AI-powered chatbots:

“As our agent engages in conversation, it also estimates user satisfaction in its responses. When the conversation appears to be going well, the user’s responses become new training examples to imitate. When the agent believes it has made a mistake, it asks for feedback; learning to predict the feedback that will be given improves the chatbot’s dialogue abilities further. On the PersonaChat chit-chat dataset with over 131k training examples, we find that learning from dialogue with a self-feeding chatbot significantly improves performance, regardless of the amount of traditional supervision.”
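To make the mechanics described in the abstract a bit more concrete, here is a minimal sketch of the self-feeding loop in Python. The `bot` object, its methods (`estimate_satisfaction`, `respond`, `request_feedback`), and the satisfaction threshold are all placeholders for illustration, not the authors’ actual code or API.

```python
SATISFACTION_THRESHOLD = 0.5  # assumed cutoff, purely illustrative

def self_feeding_exchange(bot, history, user_message, new_examples):
    """Process one user message, return the bot's reply, and harvest training data."""
    # Estimate how satisfied the user was with the bot's previous response,
    # judging from how the user reacted to it.
    satisfaction = bot.estimate_satisfaction(history, user_message)

    if satisfaction >= SATISFACTION_THRESHOLD:
        # Conversation appears to be going well: the user's message becomes a
        # new imitation target for the dialogue context that preceded it.
        new_examples.append(("dialogue", list(history), user_message))
        return bot.respond(history + [user_message])

    # The bot likely made a mistake: ask the user what went wrong and store
    # their answer as a feedback-prediction example to learn from later.
    feedback = bot.request_feedback(history + [user_message])
    new_examples.append(("feedback", list(history), feedback))
    return bot.respond(history + [user_message, feedback])
```

Periodically retraining on the accumulated dialogue and feedback examples, alongside the original supervised data, is what lets the deployed chatbot keep improving.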


Concerns & Hype

How a Feel-Good AI Story Went Wrong in Flint

Alexis C. Madrigal, The Atlantic

Machine learning was doing a very good job of predicting which water pipes in Flint, Michigan were leaching lead into the tap water, but a combination of poor communication and fear caused the program to be abandoned in favor of a far less efficient one. This article explains how the city traded efficiency for the illusion of “fair treatment”.

The American public is already worried about AI catastrophe

Kelsey Piper, Vox

“‘People are not convinced that advanced AI will be to the benefit of humanity,’ Allan Dafoe, an associate professor of international politics of artificial intelligence at Oxford and a co-author of the report, told me.”

A good summary of a report on people’s perceptions of AI by the Center for the Governance of AI, based on a 2018 survey of 2,000 US adults. Surprisingly, people worry not only about the pragmatic near-term prospect of unemployment due to AI, but also about the longer-term possibility of existential risk.

For owners of Amazon’s Ring security cameras, strangers may have been watching too

Sam Biddle, The Intercept_

The weakness of its image recognition software pushed the home surveillance company Ring to give human operators access to customers’ video feeds, so that people could identify objects in the video that the software was supposed to recognize. The videos were unencrypted and easily accessible, and customers were never informed that humans, rather than algorithms, were watching their private footage.

This clever AI hid data from its creators to cheat at its appointed task

Devin Coldewey, Techcrunch

The anthropomorphization of machine learning models is one of the core problems with how popular publications report AI news, and this piece exemplifies just that. While the article itself does a good job of explaining the results of the research in simple terms, the clickbait title leaves much to be desired.

The researchers experimented with CycleGAN to produce Google Maps-style images from satellite imagery. When they noticed an anomaly in the output images, they discovered that the model had encoded the information needed to reconstruct the input in the pixel values of its output. The research highlights the need for complete, well-designed reward or loss functions when creating such models.
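For context, CycleGAN trains two generators G: X → Y and F: Y → X with, among other terms, a cycle-consistency loss (notation from the original CycleGAN paper, not from this article):

$$\mathcal{L}_{cyc}(G, F) = \mathbb{E}_{x}\big[\lVert F(G(x)) - x \rVert_1\big] + \mathbb{E}_{y}\big[\lVert G(F(y)) - y \rVert_1\big]$$

Because this term only rewards faithful reconstruction of the input, a generator can satisfy it by hiding a nearly imperceptible encoding of the satellite image inside the map-style output for F to decode, rather than learning the intended mapping; that appears to be the shortcut the researchers observed.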


Analysis & Policy

Learning China’s Forbidden History, So They Can Censor It

Li Yuan, The New York Times

Machine learning screening is not enough to censor China’s internet thoroughly. Humans have to complement the automatic screening, but to do so they first have to learn what to censor. The NYT dives into an unexpected consequence of this algorithmic weakness.

How Artificial Intelligence Will Reshape the Global Order

Nicholas Wright, Foreign Affairs

“For decades, most political theorists have believed that liberal democracy offers the only path to sustained economic success. Either governments could repress their people and remain poor or liberate them and reap the economic benefits. Some repressive countries managed to grow their economies for a time, but in the long run authoritarianism always meant stagnation. AI promises to upend that dichotomy. It offers a plausible way for big, economically advanced countries to make their citizens rich while maintaining control over them.”

A foreboding piece that covers the very real implications of AI for surveillance and the power of repressive governments.


Expert Opinions & Discussion within the field

The ‘Godfather of Deep Learning’ on Why We Need to Ensure AI Doesn’t Just Benefit the Rich

Martin Ford, Gizmodo

This is an engaging interview with Geoffrey Hinton, widely considered the godfather of deep learning. Hinton gives his views on a variety of topics, such as the use of AI in weapons of mass destruction, AI’s impact on the job market, Cambridge Analytica, and the advice he has for people entering the field in order to make progress.

“The one piece of advice I give people is that if you have intuitions that what people are doing is wrong and that there could be something better, you should follow your intuitions. You’re quite likely to be wrong, but unless people follow the intuitions when they have them about how to change things radically, we’re going to get stuck.”

The 4 Biggest Open Problems in NLP

Sebastian Ruder

Sebastian provides context for and a summary of a panel discussion held at Deep Learning Indaba 2018 on the major open problems and next steps in NLP. The discussion delineates important topics being debated in the field and questions we need to address, such as whether our agents are learning any semantics or just doing fancy pattern matching. Other main themes include the need for work on low-resource languages, inducing common-sense reasoning, and the drawbacks of current datasets and evaluation procedures.

Tech Giants, Gorging on AI Professors Is Bad for You

Ariel Procaccia, Bloomberg Opinion

With knowledge of advanced AI techniques and the ability to conduct research in the field among the most sought-after skills today, big companies have been poaching professors from universities to join their labs, offering lucrative salaries and an abundance of data and compute. The article highlights the need for these professors to continue their academic duties, both to train the next generation of researchers and to keep research free of commercial incentives. The author goes on to discuss the hybrid models adopted by some companies as possible solutions to this important conundrum.


Explainers

Looking Back at Google’s Research Efforts in 2018

Jeff Dean, Google AI Blog

It is well known that Google is one of many companies actively involved in AI research. However, the sheer scale at which AI touches all parts of the company is made apparent in this end-of-year blog post summarizing the research done in 2018.

Neural Ordinary Differential Equations

Adrian Colyer, The Morning Paper

A clearly written explanation of the “Neural Ordinary Differential Equations” paper, in which the authors treat a network’s hidden-state dynamics as an ordinary differential equation and hand it to a black-box ODE solver instead of stacking a fixed number of layers.
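Roughly, the paper’s central move (paraphrasing its notation) is to take the discrete residual update

$$h_{t+1} = h_t + f(h_t, \theta_t)$$

to its continuous-time limit

$$\frac{dh(t)}{dt} = f(h(t), t, \theta),$$

so that the output is obtained by integrating this ODE with an off-the-shelf solver, trading accuracy against compute in the solver rather than through the network’s depth.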

Interpretable Machine Learning

Christoph Molnar, Interpretable Machine Learning

A guide to building interpretable machine learning models that focuses on tabular data. It introduces various concepts of interpretability before presenting model-agnostic methods for interpreting black-box models.
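One of the model-agnostic methods the book covers is permutation feature importance: shuffle one feature at a time and see how much the model’s error grows. Here is a minimal sketch in Python; the random-forest model and diabetes dataset are just stand-ins for any black-box model on tabular data.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Fit a black-box model on tabular data.
data = load_diabetes()
X_train, X_val, y_train, y_val = train_test_split(data.data, data.target, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
baseline = mean_absolute_error(y_val, model.predict(X_val))

# Permutation importance: shuffle each feature in the validation set and
# measure how much the error increases once its relationship to y is broken.
rng = np.random.default_rng(0)
for j, name in enumerate(data.feature_names):
    X_perm = X_val.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    increase = mean_absolute_error(y_val, model.predict(X_perm)) - baseline
    print(f"{name}: error increase = {increase:.2f}")
```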

POET: Endlessly Generating Increasingly Complex and Diverse Learning Environments and their Solutions through the Paired Open-Ended Trailblazer

Rui Wang, Uber Engineering

A nice blog post complementing new research from Uber on evolving more capable AI agents by making the tasks they are meant to solve progressively harder.
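For a rough sense of how this works, here is a highly simplified sketch of the paired open-ended loop the post describes. Every helper and constant below (mutate_environment, passes_minimal_criterion, optimize_agent, evaluate, clone, the intervals) is a placeholder for illustration, not Uber’s actual implementation.

```python
def poet_loop(initial_env, initial_agent, iterations,
              mutation_interval=20, transfer_interval=10):
    # Each environment keeps its own paired agent.
    pairs = [(initial_env, initial_agent)]

    for step in range(iterations):
        # 1. Periodically spawn mutated (typically harder) variants of existing
        #    environments, keeping only those that are neither trivial nor impossible.
        if step % mutation_interval == 0:
            for env, agent in list(pairs):
                child_env = mutate_environment(env)
                if passes_minimal_criterion(child_env, agent):
                    pairs.append((child_env, clone(agent)))

        # 2. Keep optimizing every agent on its own paired environment.
        pairs = [(env, optimize_agent(agent, env)) for env, agent in pairs]

        # 3. Periodically try transferring agents across environments, replacing
        #    an incumbent whenever another agent scores higher on its environment.
        if step % transfer_interval == 0:
            for i, (env, agent) in enumerate(pairs):
                best = max((a for _, a in pairs), key=lambda a: evaluate(a, env))
                if evaluate(best, env) > evaluate(agent, env):
                    pairs[i] = (env, clone(best))

    return pairs
```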

Favourite Tweet

Favorite goof


That’s all for this digest! If you liked this and are not yet subscribed, feel free to subscribe below!

Get more AI coverage in your email inbox: Subscribe