Our bi-weekly quick take on the most important recent media stories about AI, covering the period 22nd October 2018 - 5th November 2018.
Advances & Business
Davey Alba, Buzzfeed News
The Orlando Police Department has been running tests using Amazon’s face recognition cloud service on 3 surveillance cameras placed in the streets. This article raises valid concerns about the fact that there is no law regulating these activities. What happens to the data of people who are not “persons of interest”? Are police personnel properly trained to handle the technology? What about past examples of unregulated mass surveillance impacting the behavior of communities?
It’s hard for citizens to have confidence in a pioneering new program when the leaders don’t seem to fully understand what the hell they’re pioneering.
The poet in the machine: Auto-generation of poetry directly from images through multi-adversarial training – and a little inspiration
Microsoft Research Blog
The point of this research is not to have AI replace poets. It’s about the myriad applications for augmenting creative activity and achievement that even mildly creative AI could enable. Although the researchers acknowledge that truly creative AI is still very far away, the boldness of their project and the encouraging results have been inspiring.
A team of researchers at Microsoft Research Asia trained a model that generates poems directly from images using an end-to-end approach. The task of generating accurate captions from images is still an “unsolved” NLP problem, but the researchers attempted something even harder to pave the way for future research in this domain. In the process, the researchers also managed to assemble two poem datasets using human annotators.
James Vincent, The Verge
Last week turned out to be an exciting one for AI researchers, with multiple papers tackling the problem of exploration in reinforcement learning coming out:
The above piece covers one of those papers, which introduced a surprisingly simple approach to tackling one of the outstanding challenges of the field and showed impressive performance.
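For context, much of this line of work builds on curiosity-style intrinsic rewards: the agent earns a bonus for visiting states it cannot yet predict well, which pushes it to explore. Below is a minimal numpy sketch of one popular such scheme, random network distillation — a fixed random "target" network and a trained "predictor" whose error serves as the novelty bonus. The network sizes, learning rate, and constants here are invented for illustration, not taken from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, randomly initialized "target" network (never trained).
W_target = rng.normal(size=(8, 4))

# "Predictor" network, trained to imitate the target's outputs.
W_pred = np.zeros((8, 4))

def intrinsic_reward(obs):
    """Prediction error on an observation: high for novel states."""
    target = obs @ W_target
    pred = obs @ W_pred
    return float(np.mean((target - pred) ** 2))

def train_predictor(obs, lr=0.01):
    """One step pulling the predictor toward the target on this state."""
    global W_pred
    err = obs @ W_pred - obs @ W_target    # shape (4,)
    W_pred -= lr * np.outer(obs, err)      # (unnormalized) MSE gradient

obs = rng.normal(size=8)
before = intrinsic_reward(obs)
for _ in range(200):
    train_predictor(obs)
after = intrinsic_reward(obs)
# After repeated visits, the novelty bonus for this state shrinks,
# so the agent is nudged toward states it has not seen before.
```

The appeal of this family of methods is exactly the simplicity on display here: no model of the environment dynamics is needed, only a cheap auxiliary prediction task.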
Concerns & Hype
Spain has rolled out a text analysis system that purportedly detects fake robbery claims 8 times out of 10. The system, named VeriPol, was field-tested in 2017 and represents a 15% increase in accuracy compared to human judgement alone. There would be little cause for concern if the tool were used only to assist human police officers, but the following quote appears extremely suspicious:
VeriPol was put to task on a real-life pilot study in the urban areas of Murcia and Malaga in Spain in June 2017. In one week, 25 cases of false robbery reports were detected in Murcia, resulting in the cases being closed, and a further 39 were detected and closed in Malaga. For comparison, over the course of eight years between 2008 and 2016, the average number of false reports detected and cases closed by police officers in the month of June was 3.33 for Murcia and 12.14 for Malaga.
Such high numbers suggest that the police might be far too reliant on the algorithm. The main concern here, as with any computer-based decision system, is that VeriPol is being used as a god-like oracle rather than as an error-prone assistant.
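VeriPol's actual features and model are not detailed in the article, but the general shape of such a system — a text classifier that outputs a probability for a human officer to weigh, not a verdict — can be sketched with scikit-learn. The example reports and labels below are entirely invented:

```python
# Illustrative only: toy reports and labels are invented, and a real
# system would use far richer features than bag-of-words.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "two men took my phone at knifepoint near the station",
    "my wallet disappeared, I did not see anyone, no witnesses",
    "she grabbed my bag and ran, a shopkeeper saw everything",
    "everything was stolen from the car, I noticed nothing unusual",
]
labels = [0, 1, 0, 1]  # 0 = judged genuine, 1 = judged false (toy labels)

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(reports, labels)

# The output is a probability to inform human judgement, not a verdict.
p_false = model.predict_proba(["my phone was stolen, I saw nothing"])[0, 1]
print(f"estimated probability report is false: {p_false:.2f}")
```

Note that even in this toy form, the model only ranks reports by suspicion; closing cases on its say-so alone — as the quoted numbers hint may be happening — would be a misuse of such a classifier.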
Jan Krikke, Asia Times
The first two lines reveal all we need to know about the credibility of this piece.
If the experts are to be believed, AI will develop its own consciousness. A closer look suggests they got it backwards – human consciousness will be embedded in AI.
There is no such consensus in the field, and no “closer look” that would reveal the opposite of this absence of consensus either. What seems to be the main message, that cultural biases get embedded in machine learning algorithms, is buried under strange misconceptions (sampling loss being caused by quantum physics?) and errors that basic fact checking would have eliminated (“Big Blue” did not beat Kasparov, Deep Blue did…). Nothing much to do with consciousness.
Daniel Oberhaus, Motherboard
Researchers from MIT trained an AI model to predict the motion of a body in two dimensions as accurately as possible. They incorporated four strategies commonly used by scientists: divide-and-conquer, Occam’s razor, unification, and lifelong learning. The experiments resulted in a massive decrease in error for predicting the motion of a body.
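To give a flavor of just one of those four strategies, here is a toy, invented example of Occam's razor as complexity-penalized model selection: choosing the simplest polynomial law that explains noisy free-fall data. The penalty weight and setup are made up for illustration and are not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 40)
# Free fall (x = 0.5 * g * t^2) observed with a little measurement noise.
x = 0.5 * 9.8 * t**2 + rng.normal(scale=0.01, size=t.size)

def score(degree):
    """Fit error plus a crude complexity penalty (Occam-style)."""
    coeffs = np.polyfit(t, x, degree)
    mse = np.mean((np.polyval(coeffs, t) - x) ** 2)
    return mse + 1e-3 * (degree + 1)

# Higher degrees fit the noise slightly better, but the penalty makes
# the simplest adequate law - a quadratic - win.
best = min(range(1, 8), key=score)
print(f"selected polynomial degree: {best}")
```

The idea mirrors the intuition in the article: among models that explain the data comparably well, prefer the simplest one, which here recovers the quadratic form of the underlying physical law.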
Analysis & Policy
Jeanna Smialek, Bloomberg
Worries about AI and automation leading to widespread unemployment are common, yet the precursor to that – current workers becoming more productive and able to do more work aided by AI-powered tools – has been curiously missing. The Fed has been looking into this, and has found that the effects of AI on productivity may be tricky to find in the numbers and may also take a while to be fully felt:
“In the footnotes of his speech last week, Clarida cited “Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics,” a study by Massachusetts Institute of Technology economist Brynjolfsson and co-authors Daniel Rock and Chad Syverson. In it, the trio suggest that the “most impressive” capabilities of AI haven’t yet diffused widely.”
Dani Deahl, The Verge
The EU is running an experimental trial of AI-based lie detectors at border crossing points in Hungary, Latvia and Greece. Given the poor error rates and bias documented in facial recognition systems, deploying such a system could be a problem. For now, the experimental trial cannot prohibit anyone from crossing the border and refers doubtful cases to a human agent.
Expert Opinions & Discussion within the field
Google has launched an AI for Social Good program and will fund the best ideas.
AI is a powerful tool for improving society. I'm excited about our AI for Social Good program, focusing on both research & empowering the ecosystem.
If you have ideas, submit to the AI Impact Challenge! We're funding the best ideas w/$25M. Learn more at https://t.co/B4oIYJhJCq
— Jeff Dean (@JeffDean), October 29, 2018
Stephen Merity, smerity.com
A great piece arguing that large compute and data are rarely decisive advantages in machine learning in the long run.
“For machine learning, history has shown compute and data advantages rarely matter in the long run. The ongoing trends indicate this will only become more true over time than less. You can still contribute to this field with limited compute and even data. It is especially true that you can get almost all the advances of the field with limited compute and data. Those limits may even be to your advantage.”
Alex Constantino, Skynet Today
We analyze coverage of, and contextualize, Google’s LYmph Node Assistant (LYNA), a recently announced AI system that can assist pathologists in cancer detection.
Jochen Görtler, Rebecca Kehlbeck, Oliver Deussen, Workshop on Visualization for AI Explainability
A great explainer of Gaussian Processes using interactive visualization.
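As a companion to the explainer, the core computation behind Gaussian-process regression fits in a few lines of numpy. This sketch assumes a squared-exponential kernel and noise-free observations stabilized with a small jitter term; the data and length-scale are arbitrary:

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) covariance between two 1-D input sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

# Observed points
X = np.array([-2.0, 0.0, 1.5])
y = np.sin(X)

# Test inputs where we want the posterior
Xs = np.linspace(-3.0, 3.0, 61)

K = rbf(X, X) + 1e-8 * np.eye(len(X))   # jitter for numerical stability
Ks = rbf(Xs, X)

mean = Ks @ np.linalg.solve(K, y)                   # posterior mean
cov = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)   # posterior covariance
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))     # pointwise uncertainty
```

The posterior mean interpolates the observations, while the pointwise standard deviation collapses to near zero at observed inputs and grows with distance from them — the behavior the interactive visualizations in the explainer make vivid.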
Pierre Barreau, TED
That’s all for this digest! If you liked this and are not yet subscribed, feel free to subscribe below!