Image credit: Todd St. John / The New York Times
For AI to be widely deployed in the real world, it must “earn trust from [users], civil society, governments, and other stakeholders.” Instead of relying on abstract AI principles, which many countries and companies have published, stakeholders should focus on concrete mechanisms that verify responsible behavior. This both makes oversight more effective and protects users from “potentially ambiguous, misleading, or false claims.”
Some of the concrete measures the article suggests include third-party auditing, AI bias and safety bounties, and developing more privacy-preserving AI algorithms.
Implementation of such mechanisms can help make progress on the multifaceted problem of ensuring that AI development is conducted in a trustworthy fashion.
This article interviews a few prominent AI and robotics experts about the emphasis on unsupervised and self-supervised learning, as opposed to supervised learning, in future AI research. Most recent breakthroughs in AI have come from supervised learning, which uses a dataset of “question-answer” pairs, often tediously annotated by humans, to train a model to answer questions. Many experts in the field believe that other forms of learning, ones that do not need labeled data, may be more critical to AI development:
“My money is on self-supervised learning,” [Dr. Yann LeCun] said, referring to computer systems that ingest huge amounts of unlabeled data and make sense of it all without supervision or reward. He is working on models that learn by observation, accumulating enough background knowledge that some sort of common sense can emerge.
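To make the distinction concrete, here is a minimal Python sketch (toy data only; the filenames and sentence are illustrative assumptions, not any particular model or dataset) contrasting hand-labeled supervised pairs with “free” training pairs mined from unlabeled text, as in masked-word prediction:

```python
# Supervised learning: humans provide the answers.
labeled_pairs = [
    ("image_001.jpg", "cat"),  # each label hand-annotated by a person
    ("image_002.jpg", "dog"),
]
for x, y in labeled_pairs:
    print(x, "->", y)

# Self-supervised learning: the training signal comes from the data itself,
# e.g. predict a withheld word from its context (no human labels needed).
sentence = ["the", "cat", "sat", "on", "the", "mat"]
for i in range(len(sentence)):
    context = sentence[:i] + ["<mask>"] + sentence[i + 1:]
    target = sentence[i]
    # (context, target) is a free "question-answer" pair mined from raw text
    print(context, "->", target)
```

Each pass over the unlabeled sentence yields as many training pairs as there are words, which is why such methods can ingest huge amounts of data without any annotation effort.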
Check out our weekly podcast covering these stories! Website | RSS | iTunes | Spotify | YouTube
AI Will Help Scientists Ask More Powerful Questions - Self-learning systems can discover hidden patterns in immense data sets, transcending what humans could ever find on their own.
Google releases benchmark to spur development of multilingual AI models - Google today released a natural language processing systems benchmark with nine tasks that require reasoning about semantics across 40 languages and 12 language families.
How we improved computer vision metrics by more than 5% only by cleaning labelling errors - Using simple techniques, we found annotation errors in more than 20% of popular open-source datasets like VOC and COCO. By manually correcting those errors, we achieved an average error reduction of 5% for state-of-the-art computer vision models (and up to 8.3% on one dataset).
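The blurb above does not spell out the cleaning techniques, so as a hedged illustration, one common heuristic for surfacing label errors is to flag samples where an already-trained model confidently disagrees with the given annotation (the probabilities and threshold below are made up):

```python
import numpy as np

# Hypothetical inputs: predicted class probabilities from a trained model,
# plus the dataset's (possibly noisy) labels.
probs = np.array([
    [0.95, 0.05],  # model very sure of class 0, labeled 0 -> fine
    [0.10, 0.90],  # model very sure of class 1, labeled 1 -> fine
    [0.85, 0.15],  # model sure of class 0, but labeled 1 -> suspicious
])
given_labels = np.array([0, 1, 1])

pred = probs.argmax(axis=1)     # model's most likely class per sample
confidence = probs.max(axis=1)  # how sure the model is

# Candidates for manual re-annotation: confident disagreements.
suspects = np.where((pred != given_labels) & (confidence > 0.8))[0]
print("Review these indices for possible label errors:", suspects)  # [2]
```

Flagged samples would then go to a human reviewer, as the article describes doing for VOC and COCO.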
AI taught to instantly transform objects in image-editing software - Researchers at Adobe have devised AI-controlled software that lets you transform the shape of objects in images, and adjust the lighting and perspective, with a few simple controls.
Robots Welcome to Take Over, as Pandemic Accelerates Automation - Broad unease about losing jobs to machines could dissipate as people focus on the benefits of minimizing close human contact.
How AI Is Helping Humans Fight The Invisible Enemy - This article focuses on three aspects of how AI is helping to deal with COVID-19: before the outbreak, throughout the outbreak, and during the aftermath.
Why you (probably) don’t need AI - 88% of brands are now using AI. And yet, 55% are disappointed with the results of their investment. As underwhelming as this satisfaction statistic may be, it doesn’t necessarily mean that AI technology itself is at fault. Rather, misguided adoption of AI is more likely to drive disappointment.
Security lapse exposed Clearview AI source code - Since it exploded onto the scene in January after a newspaper exposé, Clearview AI has quickly become one of the most elusive, secretive, and reviled companies in the tech startup scene.
Clearview AI Privacy Request Forms - This page contains links to Clearview AI's automated forms for obtaining and removing the personal data the company collects.
That’s all for this week! If you are not subscribed and liked this, feel free to subscribe below!