Last Week in AI #39

Self-driving car challenges, Deepfake detections, and more!


Image credit: Aaron Josefczyk / Reuters

Mini Briefs

Uber’s Self-Driving Car Didn’t Know Pedestrians Could Jaywalk

After a 20-month investigation, the National Transportation Safety Board (NTSB) released documents detailing its findings on how Uber’s self-driving system caused the fatal accident.

In particular, the pedestrian detection system did not expect pedestrians outside of crosswalks, and it flip-flopped between the classifications “other” and “bicycle” in the seconds leading up to the crash. When the car realized it was on a collision course, the system triggered “action suppression,” holding off on braking for one second to give the safety operator time to respond. These factors, combined with the fact that the safety operator was not looking at the road, ultimately led to the crash.

Uber has made improvements to its safety team since then, and the NTSB will be recommending regulations on preventing such accidents in the future. Still, this is a solemn reminder of the difficulty and challenges of deploying machine learning systems in the real world.

Deep fake videos could upend an election - but Silicon Valley may have a way to combat them

While there has been no documented case of deepfakes being deployed to influence an election, there is growing fear of a “fake video surfacing days before a major election that could throw a race into turmoil.” In response, companies and researchers across the world are working hard on deepfake detection technologies, in hopes of limiting the threat deepfakes pose to public trust in media and election campaigns.

For example, UC Berkeley is working on a tool that screens videos and checks whether a person’s “mannerisms” deviate from the norm, and Google has recently released a new dataset of deepfake videos to aid detection research. However, the article notes that:

even if the detection technology turns out to be flawless, the reluctance of Facebook and other social media giants to take down even demonstrably false and misleading content threatens to limit its effectiveness.

Advances & Business

Concerns & Hype

Analysis & Policy

Expert Opinions & Discussion within the field

  • A.I. Is Not as Advanced as You Might Think - It starts with the systems it was built on.

  • The Idea That Eats Smart People - In 1945, as American physicists were preparing to test the atomic bomb, it occurred to someone to ask if such a test could set the atmosphere on fire. This was a legitimate concern. Nitrogen, which makes up most of the atmosphere, is not energetically stable.

  • Human Art By Artificial Intelligence - When art is made by artificial intelligence, it can still be considered human art. Learn why in this excerpt from Janelle Shane’s new book.


That’s all for this week! If you liked this and are not yet subscribed, feel free to subscribe below!
