Last Week in AI #31

White House AI Summit, AI in Standardized Testing, and more!


Image credit: Nchole / Flickr

Mini Briefs

White House Summit on AI in Government

The White House recently hosted a summit on current and future applications of AI in government. It highlighted three current use cases of AI in different federal agencies: detecting and tracking wildfires with the Department of Defense, automating indexing and improving search for medical citations with the National Institutes of Health, and discovering redundant or outdated regulations with the Department of Health and Human Services.

While the summit didn’t produce concrete new AI initiatives, it is still a sign that the U.S. government is paying more attention to the potential benefits of AI in its operations. The report highlights the need to train more federal workers in AI knowledge and tools, which may spur further collaboration between the government and industry on worker upskilling.

AI Can Pass Standardized Tests—But It Would Fail Preschool

Researchers at the Allen Institute for Artificial Intelligence built a language model capable of scoring 90% on the eighth-grade New York State Regents Exam in science, which consists of fill-in-the-blank multiple-choice questions.

While impressive, the result comes with a significant caveat. As the article notes:

The truth is that while these systems perform well on specific language-processing tests, they can only take the test. None come anywhere close to matching humans in reading comprehension or other general abilities that the test was designed to measure.

Crucially, a language model is not trained to specifically reason about anything. It is trained to predict the next word given the previous words in a sentence or paragraph.

As such, answering many of these multiple-choice questions may not require as much comprehension and reasoning ability as one might imagine. Often the correct answer is simply the option most likely to appear next to the words in the question, a statistic that a model trained on a large enough corpus of sentences can pick up easily.
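The shortcut described above can be illustrated with a deliberately tiny sketch: a bigram model counts how often words co-occur in a corpus and then "answers" a fill-in-the-blank question by picking the candidate most likely to follow the preceding word. The corpus, question, and candidate answers below are invented for illustration; real language models use vastly larger corpora and neural networks rather than raw counts, but the underlying statistical cue is the same.

```python
from collections import Counter

# Toy corpus standing in for a large training set (illustrative only;
# real language models are trained on billions of words).
corpus = (
    "plants use sunlight to make food . "
    "plants use sunlight to grow . "
    "animals eat plants for food . "
    "plants need water and sunlight ."
).split()

# Count bigrams: how often each word follows another in the corpus.
bigrams = Counter(zip(corpus, corpus[1:]))

def pick_answer(prev_word, choices):
    """Pick the choice that most often follows prev_word in the corpus."""
    return max(choices, key=lambda c: bigrams[(prev_word, c)])

# Fill-in-the-blank: "Plants use ___ to make food."
print(pick_answer("use", ["sunlight", "meat", "rocks"]))  # → sunlight
```

The model gets the "right" answer without representing anything about plants or photosynthesis, which is exactly the article's point: scoring well on the test need not imply the comprehension the test was designed to measure.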

Making an AI system that comprehends and reasons about these questions will probably require “common sense” knowledge not explicitly encoded in any dataset. As the author of the article puts it:

Rather than being ready for high school or college, AI has a lot of growing to do before it’s even ready for preschool.

Advances & Business

Concerns & Hype

  • Six questions to ask yourself when reading about AI - Hardly a week goes by without some breathless bit of AI news touting a “major” new discovery or warning us we are about to lose our jobs to the newest breed of smart machines. Rest easy. As two scientists who have spent our careers studying AI, we can tell you that a large fraction of what’s reported is overhyped.

  • Superhuman AI Bots Pose a Threat to Online Poker Firms, Morgan Stanley Says - The threat for online poker players is not the human desktop card sharks playing against you, but the superhuman artificial intelligence bots that could infiltrate games, according to analysts at Morgan Stanley.

  • Robots Won’t Take Away All Our Jobs, MIT Report Finds - The robots are coming, but not necessarily for your job. The likelihood that robots, automation and artificial intelligence (AI) will completely wipe out large swaths of the workforce is exaggerated, a new MIT report finds.

  • IEEE Ranks Robot Creepiness: Sophia Is Not Even Close to the Top - Since its first appearance in 2016, the humanoid bot Sophia has become something of a celebrity. Sophia’s android body and face are realistic to the point that some say “she” makes them feel uncomfortable.

  • If Computers Are So Smart, How Come They Can’t Read? - Deep learning excels at learning statistical correlations, but lacks robust ways of understanding how the meanings of sentences relate to their parts.

  • In the Deepfake Era, Counterterrorism Is Harder - The potential for deepfake deceptions in global politics gets scary very quickly. Imagine a realistic-seeming video showing an invasion, or a clandestine nuclear program, or policy makers discussing how to rig an election. Soon, even seeing won’t be believing. Deception has always been part of espionage and warfare, but not like this.

Analysis & Policy

Expert Opinions & Discussion within the field

Explainers

  • AI For Filmmaking - A new neural network is trained to recognize shot types in movies (pan, tilt, close up, etc) and can help filmmakers better analyze films.

  • Introducing a Conditional Transformer Language Model for Controllable Generation - Large-scale language models show promising text generation capabilities, but users cannot control their generated content, style or train them for multiple supervised language generation tasks.

  • The Birthplace of AI - The Dartmouth Summer Research Project on Artificial Intelligence was a summer workshop widely considered to be the founding moment of artificial intelligence as a field of research.

  • Teaching a Robot to Swipe on Tinder - When friends and family ask me how I feel about my machine-learning Tinder adventure, I tell them I’m a little embarrassed, but also a little proud. After all, it worked, didn’t it?

  • Dungeon crawling or lucid dreaming? - I’ve done several experiments with a text-generating neural network called GPT-2. Trained at great expense by OpenAI (to the tune of tens of thousands of dollars worth of computing power), GPT-2 learned to imitate all kinds of text from the internet.


That’s all for this week! If you are not subscribed and liked this, feel free to subscribe below!
