OpenAI has followed up on its 2017 achievement of beating pros at a 1v1 variation of the popular strategy game Dota 2 with a far more impressive feat: beating a team of human players at the much more complex 5v5 variation of the game. Interestingly, the achievement was reached without any algorithmic advances, as OpenAI explains:
“OpenAI Five plays 180 years worth of games against itself every day, learning via self-play. It trains using a scaled-up version of Proximal Policy Optimization running on 256 GPUs and 128,000 CPU cores … This indicates that reinforcement learning can yield long-term planning with large but achievable scale — without fundamental advances, contrary to our own expectations upon starting the project.”
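The core of the Proximal Policy Optimization algorithm mentioned above is a "clipped" surrogate objective that prevents any single update from changing the policy too much. A minimal NumPy sketch of that objective (a simplified illustration, not OpenAI's actual training code):

```python
import numpy as np

def ppo_clipped_objective(new_logp, old_logp, advantages, eps=0.2):
    """Clipped surrogate objective from the PPO paper (to be maximized).

    new_logp / old_logp: log-probabilities of the taken actions under the
    current and pre-update policies; advantages: estimated advantages.
    """
    ratio = np.exp(new_logp - old_logp)  # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # Elementwise minimum makes the objective pessimistic: the policy gets
    # no extra credit for moving the probability ratio outside [1-eps, 1+eps].
    return np.minimum(unclipped, clipped).mean()
```

For example, if an action's probability grows by 50% (ratio 1.5) with a positive advantage of 1.0 and eps=0.2, the objective is capped at 1.2 instead of 1.5, which is what keeps the massively parallel self-play updates stable.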
Though definitely impressive, it should be remembered that these systems took hundreds of human lifetimes' worth of gameplay to learn just one game. So, just as with prior achievements in Go, they represent the success of present-day AI at mastering single narrow skills using a ton of computation, not the ability to match humans at learning many skills with far less experience.
Normally a robot has to learn everything from scratch, on its own. New research from UC Berkeley changes that, allowing robots to learn from experience and demonstrations. Matt Simon at WIRED explains how such a system works and what implications this has for future research.
In China, the police are aggressively using AI. Using a network of over 200 million surveillance cameras and even “facial recognition glasses,” local law enforcement is able to sweep crowded areas in the hunt for criminals. Despite the tone of this article, it is worth noting that United States law enforcement uses similar tactics, such as license plate readers, to perform dragnet surveillance, though such tactics are more troubling in a country under single-party rule.
Facebook open sourced DensePose, a deep-learning-based system that can make 3D models of humans from 2D images or videos. Jack Clark from OpenAI raises concerns about how such a system could be used in real-time surveillance. It is up to researchers to ask how their systems might be used before releasing them to the world.
The race to become the leading country in AI is on. Russian president Vladimir Putin has said, “Whoever becomes the leader in this (AI) sphere will become the ruler of the world.” Various countries have developed a national strategy for AI. Tim Dutton, an AI policy researcher at CIFAR, summarizes the key policies and goals of each national strategy. It makes for an interesting read to compare the perspectives of different countries on AI.
It’s increasingly clear government has a role in ensuring AI is not abused. Unfortunately, it seems the US is making little progress in that direction:
“Unfortunately for U.S. citizens, little change has been made by lawmakers to protect their interests. On the subject of private and ethical AI, the U.S. government has been disinterested, lacking in expertise, and impotent to stand up to tech corporations.”
Let’s hope more progress is made soon.
Radiological deep learning models can identify the type of scanner used for X-rays and weight their predictions accordingly. Specifically, a model can detect whether the scanner is portable (indicated by the word “PORTABLE” printed on the radiograph) and skew its predictions based on that cue alone. Use of a portable scanner indicates that the patient is too ill to leave the hospital bed, and so more likely to have a disease. Check out John Zech’s blog post for the details!
This is an accompanying blog post to an ICML 2018 debates paper by Zach Lipton and Jacob Steinhardt. They identify four troubling patterns in ML scholarship that detract from good papers, and suggest ways to combat these patterns both as a community and as individual authors. The aim of their work is to start an important discussion within the community, as ML papers now reach a much broader audience.
“Flawed scholarship threatens to mislead the public and stymie future research by compromising ML’s intellectual foundations. By promoting clear scientific thinking and communication, we can sustain the trust and investment currently enjoyed by our community.”
It’s easy to become desensitized to the term AI and misunderstand the current state of the field. After Google unveiled Duplex, Becca Farsace of The Verge set out to understand where the field actually stands. She talks to AI journalist James Vincent and AI expert Oren Etzioni about how close AI is to having “common sense.” This is a great high-level overview of what AI is capable of today, the concerns those capabilities raise, and regulation.
“How close are we to Skynet? Is that coming? No, it’s not!”
MIT’s CSAIL is one of the leading AI research institutions in the world. This video provides a great summary of its recent exciting research: seeing people through walls, helping people with speech disabilities generate more natural-sounding speech with AI, and more.
A nice summary of exciting recent NLP research for learning better representations.
A fun summary of the experience of attending the 2018 ICRA conference.
“Successfully training neural networks requires almost no math skills, but does require knowing a large number of otherwise useless tricks.” https://t.co/3iQGYBHYCB — David Sussillo ☝️🤓 (@SussilloDavid), July 3, 2018
“I decided to get in on the sick meme action” pic.twitter.com/VCWjc4ZXQd — Geoffrey (@GarrulousGeoff), April 5, 2018
That’s all for this digest! If you are not subscribed and liked this, feel free to subscribe below!