
Last Week in AI #37

AI Publication Norms, Google Search Improvements, and more!


Image credit: Synced

Mini Briefs

Artificial Intelligence Research Needs Responsible Publication Norms

This article describes the growing conversation around responsible publication norms in AI, recently sparked by OpenAI’s limited release of its GPT-2 language model. While there are no such established norms in AI yet, the field doesn’t have to “reinvent the wheel” and can instead borrow ideas from “nuclear, life sciences, cryptography, and other researchers working on potentially dangerous technologies.”

Beyond risks of “intentional misuses and implicit bias,” AI research publication norms should also account for a broader range of factors, including the source, likelihood, and permanence of any harm the research may cause, as well as the opportunity costs of not sharing the research or limiting its release.

Understanding searches better than ever before

Google has made a big change to its search engine by incorporating BERT (Bidirectional Encoder Representations from Transformers), a neural network language model, to improve the relevance of its search results. The change is expected to affect “one in 10 searches in the U.S. in English.”

The main improvement BERT offers is an “understanding” of context and the ability to parse an entire sentence at once, instead of word by word:

Particularly for longer, more conversational queries, or searches where prepositions like “for” and “to” matter a lot to the meaning, Search will be able to understand the context of the words in your query.

Google hopes that with BERT, users can let go of some of the keyword-centric queries and instead search in a way that “feels natural.”
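To get an intuition for why parsing a whole query at once matters, here is a toy sketch (not Google’s actual system, and far simpler than BERT): a bag-of-words view, which looks at each word independently, cannot distinguish two queries that differ only in direction, while even a crude order-aware view can. The example queries are illustrative, not drawn from Google’s announcement.

```python
# Toy illustration of word-by-word vs. context-aware query matching.
# This is NOT how BERT works internally -- it only shows why word
# order and prepositions like "to" carry meaning that per-word
# matching throws away.

def bag_of_words(text):
    """Unordered word multiset: 'brazil to usa' looks identical
    to 'usa to brazil' under this view."""
    return sorted(text.lower().split())

def ordered_bigrams(text):
    """Adjacent word pairs preserve direction, so the two queries
    become distinguishable."""
    words = text.lower().split()
    return list(zip(words, words[1:]))

q1 = "traveler from brazil to usa"
q2 = "traveler from usa to brazil"

# A per-word view cannot tell the queries apart...
assert bag_of_words(q1) == bag_of_words(q2)
# ...while even a crude order-aware view can.
assert ordered_bigrams(q1) != ordered_bigrams(q2)
```

BERT goes much further than bigrams, of course: it produces a contextual representation for every word conditioned on the entire sentence in both directions, which is what lets Search treat “for” and “to” as meaning-bearing rather than as stop words.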

Advances & Business

Concerns & Hype

  • Military artificial intelligence can be easily and dangerously fooled - Last March, Chinese researchers announced an ingenious and potentially devastating attack against one of America’s most prized technological assets - a Tesla electric car.

  • Why Terminator: Dark Fate is sending a shudder through AI labs - Arnold Schwarzenegger means it when he says: “I’ll be back,” but not everyone is thrilled there’s a new Terminator film out this week.

  • Axon AI Ethics Board: ALPR Report - Axon’s AI and Policing Technology Ethics Board is an independent advisory board created in 2018 to advise Axon Enterprise, Inc. on ethical issues relating to its development or deployment of new artificial intelligence (AI)-powered policing technologies.

  • A face-scanning algorithm increasingly decides whether you deserve the job - An artificial intelligence hiring system has become a powerful gatekeeper for some of America’s most prominent employers, reshaping how companies assess their workforce and how prospective employees prove their worth.

  • The danger of AI is weirder than you think - The danger of artificial intelligence isn’t that it’s going to rebel against us, but that it’s going to do exactly what we ask it to do, says AI researcher Janelle Shane. Sharing the weird, sometimes alarming antics of AI algorithms as they try to solve human problems – like creating new ice cream flavors or recognizing cars on the road – Shane shows why AI doesn’t yet measure up to real brains.

  • Hospital Algorithms Are Biased Against Black Patients, New Research Shows - While the researchers studied one specific algorithm in use at Brigham and Women’s Hospital in Boston, they say their audit found that all algorithms of this kind being sold to hospitals function the same way.

  • What Do We Do About the Biases in AI? - Human biases are well-documented, from implicit association tests that demonstrate biases we may not even be aware of, to field experiments that demonstrate how much these biases can affect outcomes.

Analysis & Policy

Expert Opinions & Discussion within the field


  • Collaborating with Humans Requires Understanding Them - To demonstrate how important it is to model humans, we used the most naive human model we could and showed that even that leads to significant improvements over self-play.

  • A user-friendly approach for active reward learning in robots - In recent years, researchers have been trying to develop methods that enable robots to learn new skills. One option is for a robot to learn these new skills from humans, asking questions whenever it is unsure about how to behave, and learning from the human user’s responses.

  • The evolution of intelligence in robots: Part 1 - There’s a lot of hype playing into the robot takeover narrative. The purpose of this blog post is to present some exciting breakthroughs in robotics research while separating fact from fiction.

  • The State of NLP Literature: Part I - This series of posts presents a diachronic analysis of the ACL Anthology, or, as I like to think of it, making sense of NLP Literature through pictures. Natural Language Processing addresses a wide range of research questions and tasks pertaining to language and computing.

  • Reproducing Google Research Football RL Results - This post documents my journey of trying (and succeeding) to reproduce some of the results presented in the Google Research Football (GRF) paper.

That’s all for this week! If you liked this and are not yet subscribed, feel free to subscribe below!
