Image credit: 'MIT Human-Centered Autonomous Vehicle' on YouTube by Lex Fridman
Our bi-weekly quick take on the most important recent media stories about AI, covering the period 24th September 2018 - 8th October 2018
Jonathan Vanian, Fortune.com
Google’s cloud engineers are working with Facebook’s PyTorch team to get PyTorch version 1.0 running on Google’s custom TPU machines.
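For readers curious what such an integration might look like in code, the collaboration centers on Google's XLA compiler. The sketch below is a hypothetical illustration of the kind of device-placement API an XLA bridge for PyTorch could expose; the `torch_xla` package name and its calls are assumptions for illustration, not a confirmed interface at the time of writing.

```python
# Hypothetical sketch: training a PyTorch model on a TPU via an XLA bridge.
# The torch_xla package and its API are assumptions based on the announced
# Google/Facebook collaboration, not a confirmed interface.
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm  # assumed package name

device = xm.xla_device()          # analogous to torch.device("cuda")
model = nn.Linear(10, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 10).to(device)
y = torch.randint(0, 2, (32,)).to(device)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
xm.optimizer_step(optimizer)      # assumed: steps the optimizer and syncs the XLA graph
```

The appeal of this design, if it ships, is that moving a model to a TPU would look just like moving it to a GPU, with the XLA compiler handling the hardware-specific work behind the scenes.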
Shelly Fan, Singularity Hub
Humans have the remarkable ability to integrate information from different memories and draw conclusions from them. DeepMind, along with collaborators, has conducted a study using fMRI images and specially designed algorithms to better understand how this mechanism works in humans. This Singularity Hub article does a particularly good job of explaining the study in an easy-to-understand way.
Tiernan Ray, ZDNet
A recent paper from Google Brain showcased the advantages of using reinforcement learning for best-effort computing. This article explains in simple terms what is at stake behind the deceptively simple task of sorting numbers that the paper tackles: software that trades absolute reliability for greater tolerance of errors and interruptions.
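To make the "best-effort" idea concrete, here is a toy Python sketch of our own (an illustration of the concept, not the paper's reinforcement-learning approach): a sorter that works under a time budget and, when interrupted, returns a partially sorted list instead of failing outright.

```python
import time

def best_effort_sort(items, budget_seconds):
    """Run bubble-sort passes until the time budget is exhausted.

    Unlike a conventional sort, running out of budget is not an error:
    the function returns the (possibly only partially sorted) list it
    has produced so far. This trades absolute reliability for tolerance
    of interruption -- the idea at the heart of best-effort computing.
    """
    deadline = time.monotonic() + budget_seconds
    items = list(items)
    for _ in range(len(items)):
        swapped = False
        for i in range(len(items) - 1):
            if time.monotonic() > deadline:
                return items  # budget exhausted: return the best effort so far
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:
            break  # already sorted, stop early
    return items
```

A caller that can tolerate approximate results gets an answer within its deadline every time; a caller that needs a fully sorted list can simply grant a larger budget.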
Business Wire
IEEE seeks to define a standardized rating system focused on transparency, accountability, and reduction of algorithmic bias in autonomous and intelligent systems. The goal is laudable, but it is unclear whether this standard will be a self-referential loop from the tech industry for the tech industry, or whether it can make an impact on general consumers of tech.
Tim Schneider and Naomi Rea, artnet news
A piece generated by a deep learning network is scheduled for sale for the first time in Christie's history. This article points out that the choice is controversial, as the collective behind the piece can claim neither technical novelty (it uses an out-of-the-box algorithm) nor creative originality (the dataset had already been used by others in exactly the same way). We find the hype around the sale undeserved, as many artists have been working with neural networks in much more involved ways.
John Biggs, TechCrunch
Cutting-edge AI algorithms and techniques are increasingly being democratized for anyone to easily use. The latest example: AI for helping with music composition. Though this has been done for a while in research labs, there is now an iPhone app, Amadeus Code, which makes the technology more accessible. As is so often the case, though, the headline of this article is overdramatic and directly contradicted by its own content:
When asked if AI will ever replace his favorite musicians, folks like Michael and Janet Jackson or George Gershwin, [the creator] laughed.
“Absolutely not. This AI will not tell you about its struggles and illuminate your inner worlds through real human storytelling, which is ultimately what makes music so intimate and compelling. Similarly to how the sampler, drum machine, multitrack recorder and many other creative technologies have done in the past, we see AI to be a creative tool for artists to push the boundaries of popular music. When these AI tools eventually find their place in the right creative hands, it will have the potential to create a new entire economy of opportunities,” he said.
Stephen Johnson, Big Think
Big Think reports that “Human-like A.I. will emerge in 5 to 10 years, say experts.” We find this claim over-optimistic. The experts in question were researchers at the “Joint Multi-Conference on Human-Level Artificial Intelligence,” who are likely more optimistic than the average researcher. Even in that favorable sample, only a minority agreed with the 5-10 year range, with 63% giving longer or uncertain estimates.
Richard Wike and Bruce Stokes, Pew Research Center
This article reports the pessimistic results of a Pew opinion survey, conducted in 9 countries, on the future impact of automation on the job market. In all countries, most people believe that in 50 years robots will do much of the work currently done by humans, while few believe that advances in automation will create new jobs (we would like to point out that this last opinion does not seem supported by historical data, as both factory machines and computers created a substantial part of what we think of as “modern jobs”). People are also worried that automation will worsen inequalities, and do not believe that it will improve the economy.
James Vincent, The Verge
The Verge dissects the lie behind Burger King’s latest series of ads, presented by the fast food firm as being “created by artificial intelligence” while actually written by humans. This is symptomatic of a recent trend of fake “created by AI” content, revealing the general public’s worrying overestimation of the state of the art in content generation.
Adam Smith, PC Magazine
California has banned bots that don’t disclose they are bots. This seems like a reasonable measure to promote transparency, but it remains to be seen whether Russian trolls will decide to follow California law.
Jocelyn Blore, OnlineEducation.com
This article discusses current numbers, data-based conclusions, and statistics about women in AI-related fields, and ends with case studies of tech institutions that successfully brought their ratios to around 50% women or more. The systematic backing of every claim with studies and the final practical advice are particularly useful.
Tamara Khandaker, Vice News
Vice News is worried about the use of AI in Canada's immigration system. AI bias is a hot topic these days, and it is of course reasonable to fear that AI used in screening immigrants could exhibit bias. However, we also know that humans are equally capable of bias, and in either case robust controls are required to identify and reduce it.
Nick Heath, TechRepublic
Demis Hassabis, founder of the Google research lab most famous for its Go victory, gave a speech outlining some predictions for AI. Demis, perhaps not surprisingly, is optimistic about the future of AI and its ability to confront problems like climate change and lead to Nobel Prize-winning scientific breakthroughs. However, he tempers his enthusiasm by noting that deep learning alone is not likely to lead to human-level artificial intelligence.
Adnan Darwiche, Communications of the ACM
Public perceptions about AI progress and its future are very important. The current misperceptions and associated fears are being nurtured by the absence of scientific, precise, and bold perspectives on what just happened, leaving much to the imagination.
While the headline might not reflect it, Prof. Adnan Darwiche of UCLA writes about how perceptions of current progress in AI can have a negative effect on the field. He believes that current and future generations of AI researchers should be well informed about the history of the field. While you might not agree with everything in it, the article is thought-provoking and will hopefully facilitate discussion.
while current AI technology is still quite limited, the impact it may have on automation, and hence society, may be substantial (such as in jobs and safety). This in turn calls for profound treatments at the technological, policy, and regulatory levels.
Sebastian Ruder, AYLIEN Blog
Sebastian Ruder, a research scientist at AYLIEN, takes a look at eight milestones from recent years in the field of NLP. He also covers important earlier milestones that laid the foundation for much of the recent work. Sebastian is great at explaining and summarizing topics, and this blog post does not disappoint.
Terence Parr and Prince Grover, explained.ai
A detailed explainer on generating good visualizations for decision trees, accompanied by a convenient Python package.
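We believe the accompanying package is dtreeviz; if so, visualizing a fitted scikit-learn tree looks roughly like the sketch below. Treat the exact call signature as an assumption based on the package's documented usage rather than a verified interface.

```python
# Hedged sketch: visualizing a scikit-learn decision tree with dtreeviz,
# the package we believe accompanies the explained.ai article. The exact
# call signature is an assumption, not a verified interface.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from dtreeviz.trees import dtreeviz

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3)
clf.fit(iris.data, iris.target)

viz = dtreeviz(
    clf,
    iris.data,
    iris.target,
    target_name="species",
    feature_names=iris.feature_names,
    class_names=list(iris.target_names),
)
viz.save("iris_tree.svg")  # writes an SVG rendering of the tree
```

The resulting diagrams annotate each split with the feature distribution at that node, which is what sets the article's visualizations apart from the stock scikit-learn tree plots.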
Pedro A. Ortega, Vishal Maini, DeepMind Safety Research Blog
DeepMind has recently set up a safety research team focused on technical AI safety. In its inaugural post, the team covers three broad categories: specification (defining the purpose of the system), robustness (designing systems to withstand perturbations), and assurance (monitoring and controlling system activity). This post makes for an easy-to-understand introduction to the field of technical AI safety.
Lex Fridman, MIT
That’s all for this digest! If you liked this and are not yet subscribed, feel free to subscribe below!