
Skynet This Week #2

Smart speaker drama, self-driving anxiety, soul searching, and more!

The Hype

Why AI will not replace radiologists

Dr. Hugh Harvey, Medium – Why AI will not replace radiologists

The popular press is abuzz with proclamations that AI will consume all human jobs, starting with those that involve labor and repetition; experts suggest this would create severe problems for people who cannot afford to transition out of those positions. In the medical field, AI proponents praise the ability of machines to perform diagnostics faster and better than their human radiologist counterparts. But will AI fully replace radiologists? One doctor thinks otherwise.

More researchers call out hype

Steve LeVine, Axios – AI researchers are halting work on human-like machines

Andrew Moore, dean of computer science at Carnegie Mellon University, has joined the crowd of researchers calling attention to the limits of deep learning. Although he acknowledges its successes in areas like speech recognition, he joins Geoff Hinton in arguing that deep learning will not lead to human-level intelligence. The headline is a bit overdramatic: arguably, AI researchers are simply continuing what they have always done, exploiting the best currently known approaches while thinking of new ideas in the very long-term quest towards fully human-like machines. Still, it is important to acknowledge the growing trend of academics making it clear that the AI we have today is very far from human intelligence.


The Panic

Amazon Echo accidentally recorded some people

Gary Horcher, KIRO-7 – Woman says her Amazon device recorded private conversation, sent it out to random contact

One family got an uncomfortable surprise when their Amazon Echo sent a voice recording of their household to a friend. This sounds sinister, but the explanation was a series of mishearings: Alexa thought it heard “Alexa, send a message” and somehow completed the entire action sequence. It’s worth noting that this story has little to do with AI and doesn’t even require a smart speaker; any cell phone can accidentally “pocket dial” a contact and leave a voicemail. What it does have to do with is the increasingly relevant problem of designing conversational UIs well, a problem this event will surely inform going forward.

The Google Assistant fired a gun

Cherlynn Low, Engadget – Google Assistant fired a gun: We need to talk

We are happy when a computer can help turn on a light or tell us the weather today, but what about when one fires a gun? Written against the backdrop of America’s mass shootings, this piece leans on fear and treats Google Assistant as if it were sentient. Unfortunately for the author, “teaching” Google Assistant amounts to mapping a set of words to an action (e.g. “call an Uber” becomes “tell Google to tell Uber to send a car to the address that I saved”), not some technological supervillain training. Fear not, for the little Google Assistant is child’s play in comparison to autonomous military sentry guns.


The Good Coverage

Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots

Cade Metz, New York Times – Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots

Mark Zuckerberg and Elon Musk had dinner in 2014, and it sounds like it was awkward. The New York Times has the full story on the origins of their long-running feud over the risks of AI, with key events including a Palm Springs conference and several Twitter tussles. The story is a great introduction to the AI safety debate and most of its key players.

Uber’s autonomous car killed a pedestrian because two safety features were disabled

Daisuke Wakabayashi, New York Times – Emergency Braking Was Disabled When Self-Driving Uber Killed Woman, Report Says

The National Transportation Safety Board has released its initial report on the pedestrian fatality in Uber’s self-driving car program. Uber’s system had a tendency to detect non-existent obstacles, so the company disabled two emergency braking features. Uber also operated its self-driving cars with only a single human safety driver, requiring that driver to simultaneously watch the road and monitor the self-driving system. The incident illustrates the importance of redundant safety features in safety-critical AI systems.

Emails Show How Amazon is Selling Facial Recognition System to Law Enforcement

ACLU Northern California – Emails Show How Amazon is Selling Facial Recognition System to Law Enforcement

Maybe we’re not quite at Minority Report levels of surveillance, but the move by governments to adopt facial recognition technology sets a troublesome precedent, one that is strongly criticized by privacy and civil rights advocates alike. The city of Orlando and the Washington County Sheriff’s Office have already deployed Rekognition, Amazon’s proprietary platform that can detect up to 100 faces in a given image and compare them against a database of millions of identities. China is already using similar technology to track its citizens; will the United States be next?

Microsoft is creating a tool for catching biased AI algorithms

Will Knight, MIT Tech Review – Microsoft is creating an oracle for catching biased AI algorithms

As AI becomes more and more mainstream, from personal assistants to self-driving cars, the idea of fairness has become increasingly important. If AI is supposed to mimic human cognition, and humans are implicitly biased, then what about AI? Algorithmic flaws can inadvertently have very real consequences, and major companies investing in this area of research are finally doing something about it.


Expert Opinions & Discussion within the field

Timely discussions on the implications of self-driving cars

Jayson Demers, The Next Web – Self-driving cars will kill people and we need to accept that

Jerry Kaplan, The Wall Street Journal – Why We Find Self-Driving Cars So Scary

Will self-driving cars be the perfect chauffeur we always wanted? No, but there are statistical arguments for how autonomous vehicles will improve driver safety and reduce the number of automobile-related deaths each year. In fact, some research even suggests that waiting for self-driving technology to be perfect could actually be counterproductive. Still, as Jerry Kaplan argues, it is likely true that “even if autonomous cars are safer overall, the public will accept the new technology only when it fails in predictable and reasonable ways.” The increasing role technology plays in human life is heating up the conversation as lawmakers begin to craft policy.

AI can be weaponized, like any tech - should researchers help?

Gregory C. Allen, Nature – AI researchers should help with some military work

When news leaked that Google was working with the military to analyze drone footage using AI, there was plenty of uproar, with thousands of employees signing a petition demanding that Google stop. Google subsequently released a set of principles for developing AI systems, stating that it would not develop AI for use in weapons. Greg Allen argues that such an all-or-nothing attitude towards working with the military is not only a security risk but could also hold back the use of AI as a defensive measure. He proposes a more nuanced approach that would help the military use AI in ethical and moral ways.


Explainers

How AI learned to be creative

Shreya Shankar, The Gradient – How AI learned to be creative

The Gradient has a great review of deep learning-based artwork, including neural style transfer and music generation. Check it out!

Favourite headline

TechCrunch – Eric Schmidt says Elon Musk is ‘exactly wrong’ about AI

Favourite Tweet

[embedded tweet]

Favourite meme

[embedded meme]


That’s all for this digest! If you liked this and aren’t subscribed yet, feel free to subscribe below!

Get more AI coverage in your email inbox: Subscribe