
Skynet This Week #3

Robot uprisings, AI bias, new policy initiatives, and more!

The Hype

What Happens if AI Doesn’t Live Up to the Hype?

Jeremy Kahn, Bloomberg – What Happens if AI Doesn’t Live Up to the Hype?

Jeremy Kahn covers the increasing commentary in the artificial intelligence community on the limits of deep learning, including recent articles by Filip Piekniewski, Gary Marcus, and John Langford. On one hand, the critique that deep learning is unlikely to lead to artificial general intelligence doesn’t detract from its ability to solve many smaller problems. On the other hand, lofty unmet expectations could foretell disillusionment and future funding problems for the field.


The Panic

Manager forgets to renew contract and automated systems lock employee out

Jane Wakefield, BBC News – The man who was fired by a machine

When technology is not well-understood, everything revolves around buzzwords–in this case, AI. A man’s employment was suddenly cut short when his information wasn’t updated in a new system implemented to manage his employer’s contracts. What ensued was a system of automated processes that unplugged him from everything, whether it was his email account or the keycard that let him into the office. Clearly, this story could benefit from a guiding human–or AI–hand to make sure he wasn’t accidentally forgotten.

This is how the robot uprising finally begins

Will Knight, MIT Technology Review – This is how the robot uprising finally begins

This article discusses the latest advancements in industrial robotic arms, especially developments in intelligent object gripping–allowing robots to learn how to grab unknown objects such as pieces of raw chicken. While full mastery of object manipulation might require “something that’s pretty close to full, human-level intelligence,” this type of generalized intelligence is far, far away on the horizon. Honestly, if the robot uprising involves an arm learning how to unpack my groceries into my fridge, I welcome our new overlords.


The Good Coverage

Bias detectives: the researchers striving to make algorithms fair

Rachel Courtland, Nature – Bias detectives: the researchers striving to make algorithms fair

As algorithms are increasingly being used to automate decision making, the accountability of these algorithms becomes critical. As highlighted in our post on facial recognition, solving bias requires not only a technical solution but also a push for increased diversity. In this Nature article, Rachel Courtland looks at how some researchers are tackling this issue from different technical viewpoints. It is a great introduction to some of the challenges and difficulties that these researchers are having to grapple with.

ML used to identify photoshopped images

James Vincent, The Verge – Adobe is using machine learning to make it easier to spot Photoshopped images

Deep fakes – convincing fake imagery created by cutting-edge AI techniques – have lately caused a lot of worry about exacerbating fake news. But can we also use AI to combat fake news? We’ve had Photoshop and similar tools for quite a while now, and new research shows how we can leverage AI to pinpoint proof that an image has been manipulated.

Reporters get to try Google Duplex in a controlled setting

Lauren Goode, Wired – Google Gives Its Human-Like Phone Chatbot A Demo Redo

Google unveiled Duplex, an AI-powered chatbot that can call and talk to humans to make reservations and bookings, at its annual developer conference, I/O, to equal amounts of skepticism, shock, and ethical concern. In an effort to provide some transparency into how the system works, Google invited reporters to test it out in a controlled environment. The major breakthrough seems to be how lifelike the system sounds: it reportedly works so well that Lauren Goode of Wired was not sure whether her call had been taken over by a human after she managed to confuse the chatbot. For all the nitty-gritty details of how she managed to confuse Duplex, read the full article.


Expert Opinions & Discussion Within the Field

Regulating AI in the era of big tech

Melody Guan, Melody Guan’s Blog – Regulating AI in the era of big tech

While many countries around the world have developed strategies to guide the advancement of AI technologies, the US continues to stumble, a situation only exacerbated by the current administration. As the Trump administration abstains from regulation, taking the position that government “is not in the business of conquering imaginary beasts,” US corporations have made a laudable step toward self-regulation through collaborations such as the Partnership on AI. However, without any legal power to verify such mutual trust, many wonder how far “good faith” extends. Will the shared morality of individuals in the field prevail, or will corporate interests take over?

World-leading expert Demis Hassabis to advise new Government Office for Artificial Intelligence in the UK

Govt. of UK, GOV.UK – World-leading expert Demis Hassabis to advise new Government Office for Artificial Intelligence

Let’s take a specific example of a country’s focus on artificial intelligence: the UK. Its strategy focuses on how AI interacts with a citizenry increasingly concerned about data governance and the effect of technology on labor. Indeed, one of the four Grand Challenges announced by the Prime Minister centers on exactly this, particularly on using AI to revolutionize medicine. The new Government Office for Artificial Intelligence represents the government’s effort to implement effective policy around AI and harness its potential. To aid in this endeavor, key figures from industry, such as Demis Hassabis, co-founder of DeepMind (the company behind the world-renowned Go-playing AI AlphaGo), have been invited to participate as advisors, and those invitations have largely been accepted.


Explainers

How Libratus Beat Poker, or, turns out AI involves more than just Deep Learning

Jiren Zhu, The Gradient – Libratus: the world’s best poker player

Libratus “was the first AI agent to beat professional players in heads-up no-limit Texas hold ’em.” The secret to its success? Game theory, not deep learning.

Teaching robots new tricks, without tons of data

Tianhe Yu and Chelsea Finn, BAIR Blog – One-Shot Imitation from Watching Videos


Robots can’t be taught with millions of examples of every task they need to perform, and this post explains one method for getting around that data requirement while still leveraging deep learning to generalize to new situations.

Favorite Tweet

Favorite Meme


That’s all for this digest! If you liked this and aren’t subscribed yet, feel free to subscribe below!
