Wired, “DeepMind has trained an AI to unlock the mysteries of your brain”
Well, almost. In 2005, May-Britt and Edvard Moser discovered 'grid cells', a type of neuron that helps the brain keep track of its position as it moves through space. While the full function of grid cells is yet to be understood, researchers at DeepMind trained a navigation AI that spontaneously developed a similar firing pattern on its own. While this exciting research resulted in a Nature paper, the mysteries of the brain have yet to be solved. If we ignore the headline and manage to make it past the first few paragraphs, the piece actually does a pretty good job of telling us what really happened. Another victim of the infamous clickbait wars?
In a week filled with coverage of Google's Duplex, it was certainly hard to pick a single headline that encapsulates the panic in all its glory. It looks like Slate did it, exclaiming that 'Google Wants to Turn You Into a Cyborg'. Though we know some people who would be quite excited about that possibility, let's remember we are still talking about a couple of recorded phone calls to a hair salon and a restaurant. No cyborgs in sight yet, we're afraid to disappoint, but for a much fuller discussion of the biggest AI storm of the week (and the ethics it entails), check out our upcoming piece in the next couple of days!
The downright silly
Quartz, “Boston Dynamics is going to start selling its creepy robots in 2019”
Or so the first sentence tells us, quite literally: "The robot apocalypse has been tentatively scheduled for late 2019". The robots in focus are products of Boston Dynamics, now a SoftBank company. Where to start? Perhaps at the start: that first sentence is obviously hyperbolic, and given the amount of anxiety surrounding AI and robotics, that may not be the best tack to take, even in jest. Towards the end of the piece the author speculates, 'It's easy to see how a robot like this could be used in office security, or trained to hunt and kill us. (Oh wait, that was Black Mirror.)' It's easy to see why this piece earned itself a place in our downright silly section; the gap between a robot dog with 90 minutes of battery life and a Terminator-esque apocalypse is, well, significant.
Popular Science, “Self-driving cars should earn people’s trust with good communication”
Another self-driving project is quietly becoming a reality: this time it's Drive.ai's orange taxi-bus service, which will start running this July in Frisco, Texas. Given all the hype and panic that the Uber and Tesla crashes received in the weeks leading up to this announcement, the story was received more modestly than it could have been. Andrew Ng, the main powerhouse behind the initiative, wrote an accompanying Medium piece, 'Self-driving cars are here', which is a recommended read. But back to the coverage: what's interesting about this Popular Science piece is the angle it took. It's not so much fear-mongering as a sincere look at the challenge ahead and the effort Drive.ai is making to rise to the occasion. Meanwhile, the New York Times chose a slightly scarier headline, though the piece itself is certainly worth a read. Either way, this should be an interesting summer for self-driving-car enthusiasts and denouncers alike. We hope it will be a safe one.
South China Morning Post, “China looks to school kids to win the global AI race”
The global race for AI has been ongoing for a while, and news of China's efforts to close the talent gap with the US is slowly making its way across the Pacific. A new textbook was published last month, accompanying a secondary-school AI curriculum currently being piloted at 40 Chinese schools, mainly in urban centers. Behind the book and the program is SenseTime, the world's most valuable AI startup. The book is currently sold out, if you're curious. With Carnegie Mellon announcing a new dedicated AI undergraduate degree, and Facebook opening a new AI lab in Seattle, it looks like the game is on.
Science, “AI researchers allege that machine learning is alchemy”
Quartz, “Google’s engineers say that ‘magic spells’ are ruining AI research”
While the media and public discussion about AI is heating up, spiced with the occasional bitter criticism, it seems the controversies don't stop at the doors of AI researchers. A fervent internal debate about the field's scientific rigor and standards is happening at the same time, as notable AI experts voice concerns about the quality of research being published. An article in Science featured part of this discussion, pointing to a talk by Ali Rahimi, a Google AI researcher, who alleged that AI has become a form of 'alchemy'. A couple of weeks ago, he and a few colleagues presented a paper on the topic at the International Conference on Learning Representations in Vancouver. The allegations - that researchers often can't tell why certain algorithms work and others don't, for example - have sparked much debate in the field. And a worthy debate it is, including substantial comebacks, e.g. by Yann LeCun, who countered that AI is not alchemy but engineering, which is by definition messy. Check out the links for more information.
Technology Review, “A startup is pitching a mind-uploading service that is ‘100 percent fatal’”
If we loosen our definition of artificial intelligence just for this week, we cannot ignore the value proposition of a new Y Combinator startup: brain uploads are here. Maybe. If you're willing to be euthanized for it. Nectome, founded by MIT graduate Robert McIntyre, asks on its website, "What if we told you we could back up your mind?". What if indeed. The process is '100 percent fatal' and seems to be currently targeted at the terminally ill, who would end their lives and immediately undergo a brain-mapping process, in the expectation that future scientists could one day recover the individual's past life. Such a process would hopefully preserve experiences and memories, but no one can tell whether that goal could actually be achieved, given how little we know about consciousness. That did not stop 25 excited customers from joining a waiting list for the service, at the cost of a measly $10,000 deposit, fully refundable if the operation never takes place.
And not a word on Elon Musk and Grimes. Let's pretend to be adults here. Instead, here is a pair of fun memes we made to announce our most recent piece on the truly terrible 'Do You Trust This Computer' (gratuitous fear-mongering dressed up as a documentary):
They speak for themselves.