Image credit: Screencap via Tesla Autonomy Day
On April 22nd, two days prior to Tesla’s April earnings report, Elon Musk and several senior executives and directors of Tesla hosted a live event dubbed “Tesla Autonomy Day,” intended to bridge the gap between Tesla’s internal perception of its progress on self-driving vehicles and the external perception of it. To kick things off, Pete Bannon, VP of Silicon Engineering, showcased a chip designed specifically to accelerate the self-driving algorithms used by Tesla’s cars.
Next, Senior Director of AI Andrej Karpathy gave a substantive overview of the deep-learning-based computer vision systems Tesla uses in its pursuit of fully autonomous vehicles. Karpathy covered a lot of ground, from an introduction to neural networks all the way to the specific ways Tesla leverages “the fleet” to mine footage and improve its existing algorithms. Stuart Bowers, VP of Engineering, also spoke on some of the less technical aspects of computer vision and the engineering challenges of building the infrastructure to support self-driving, continuously learning vehicles.
For the majority of the event, Musk took a backseat role, introducing speakers and occasionally chiming in to emphasize or clarify points. However, he also offered forward-looking statements about the future of Tesla’s self-driving functionality, speaking at length about Tesla’s goal of getting fully autonomous robotaxis on the road.
Following the event, investors immediately seemed skeptical of Musk’s claims. Tesla shares dropped 3.9%, and one investor felt so strongly about the presentation that he personally wagered $10,000 that Tesla would not succeed in its plan to roll out fully autonomous robotaxis in 2020. He was certainly not alone in his feelings.
It wasn’t just investors. Many articles and blog posts have also surfaced examining the technical feasibility of fully autonomous driving in light of Musk’s comments on Autonomy Day.
Some researchers and AI experts were also eager to share their thoughts on Musk’s claims.
There’s clearly an air of skepticism in the technology community regarding Musk’s proposed timeline for fully autonomous vehicles.
In the aftermath of Autonomy Day, Tesla pushed an update to its Autopilot system in an effort to expand its functionality, adding automatic lane changes and the ability to take highway exit ramps. Consumer Reports summarizes the broadly negative reactions of Tesla drivers in this article.
While Musk has certainly succeeded in creating a buzz around Tesla as a force to be reckoned with in the realm of self-driving technology, it’s clear that, by and large, people aren’t fully buying into the more grandiose claims.
Tesla is exceptionally well positioned to tackle the problem of self-driving cars. Its world-class team of researchers and engineers, combined with the infrastructure in place to continuously mine training data from the fleet, puts the company in an incredibly advantageous position. That said, Autonomy Day left us with almost as many questions as answers.
While existing technologies like Tesla’s Autopilot and Cadillac’s Super Cruise may make the prospect of fully autonomous vehicles seem feasible, there’s a massive difference between these systems and what Tesla claims to be on the cusp of releasing to the public. Vehicle autonomy is commonly described on a scale of six levels, from level 0 (no automation at all) up to level 5 (100% autonomous in all circumstances, with zero need for human intervention); level 1 covers basic assistance features like adaptive cruise control or parking assist. With all of the hype in the news, a consumer would be forgiven for thinking we’re closer to the top of the scale than the bottom.

In reality, production autonomy sits around level 2: cars are capable of accelerating, braking, and steering in limited circumstances (e.g., on a highway), but still require a driver to remain 100% attentive to the road and intervene when necessary. For level 5 to be feasible (i.e., for the idea of robotaxis to even be close to plausible), cars need to drive themselves in virtually any circumstance (on and off the highway, in all weather conditions, etc.) without human intervention. As it stands, Tesla still requires full driver attention, even for relatively simple situations like staying within highway lanes. And as mentioned above, Tesla’s latest software rollout, which introduced automatic lane changes, has left many drivers disappointed, consistently reporting sub-human performance.
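The level taxonomy above comes from the SAE J3016 standard. As a quick reference, here is a minimal sketch in Python; the one-line summaries are paraphrases of the standard’s categories, not its exact wording:

```python
# Paraphrased summary of the SAE J3016 driving-automation levels.
# Descriptions are simplified one-liners, not quotes from the standard.
SAE_LEVELS = {
    0: "No automation: the human driver does everything",
    1: "Driver assistance: steering OR speed support (e.g. adaptive cruise)",
    2: "Partial automation: steering AND speed, driver must stay fully attentive",
    3: "Conditional automation: car drives in limited settings, human on standby",
    4: "High automation: no human needed within a bounded operating domain",
    5: "Full automation: drives anywhere, in any conditions, no human ever",
}

def needs_attentive_driver(level):
    """Levels 0-2 require a constantly attentive human driver;
    from level 3 up, the system handles the driving task in its domain."""
    return level <= 2
```

By this breakdown, today’s production systems (level 2) and the robotaxi vision (level 5) sit on opposite sides of the attentiveness boundary, which is the gap the article is pointing at.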
An interesting thing to note about Tesla’s Autopilot system is its reliance on deep-learning-based computer vision. Most, if not all, of Tesla’s competitors in the self-driving space use a specific type of sensor known as LIDAR to obtain depth information directly. In contrast, Musk rejects the idea that LIDAR is necessary at all. As he puts it: “once you solve vision, it’s worthless.” As obviously true as that statement may seem, the belief that vision has been “solved” is not universally held. Convolutional neural networks (CNNs), the source of the vast majority of computer vision breakthroughs in the last decade and a major topic of discussion at Autonomy Day, are not infallible. One of the best examples is the existence of adversarial attacks: methods of “beating” these algorithms by intentionally perturbing their inputs so that they misclassify objects. In one paper, researchers successfully fooled a CNN into classifying a stop sign as a “Speed Limit 45” sign. The implications of such an attack in the real world are self-evident and very concerning.
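To make the adversarial-attack idea concrete, here is a minimal, self-contained sketch in plain Python. It uses a toy linear classifier as a stand-in for a CNN (real attacks operate on deep networks and image pixels), and all weights and inputs below are invented for illustration. The core trick follows the fast-gradient-sign intuition: nudge each input feature by a small amount in exactly the direction that hurts the model’s score.

```python
def predict(w, b, x):
    """Toy binary classifier: returns 1 if the score w.x + b is positive, else 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return int(score > 0)

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def adversarial_perturb(w, x, eps):
    """FGSM-style step for a linear score: the gradient of the score with
    respect to the input is just w, so shifting each feature by eps against
    sign(w) lowers the score as much as possible for that budget."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

# Invented toy model and an input it classifies as class 1 (the "stop sign").
w = [1.0, -2.0, 0.5]
b = -0.1
x = [0.3, -0.2, 0.4]

print(predict(w, b, x))                     # 1: original prediction
x_adv = adversarial_perturb(w, x, eps=0.3)  # each feature moves by at most 0.3
print(predict(w, b, x_adv))                 # 0: a small perturbation flips the label
```

Physical attacks like the stop-sign stickers solve a harder optimization problem, since the perturbation must survive printing, viewing angles, and lighting changes, but the underlying principle is the same: small, targeted input changes can produce large, dangerous output changes.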
Another open question is how Musk plans to win regulatory approval to roll out a fleet of over one million robotaxis. While he did acknowledge that he doesn’t expect immediate approval nationwide, legislators are surely aware of the risk they’d incur by allowing fully self-driving cars to roam the streets when there have already been several Autopilot-related fatalities. And while Musk stated during the event that robotaxi riders would likely have to occupy the driver’s seat at first, this raises a question of its own: if these cars can’t be fully trusted to get a rider safely from point A to point B, why should they be trusted to drive themselves to point A in the first place?
Tesla is by all accounts an incredibly innovative company that is doing a great deal to advance the field of autonomous vehicles. But by oversimplifying an incredibly complex and difficult problem, Tesla risks weakening the trust of consumers and investors alike. The claim that there will be one million Tesla robotaxis on the streets before the end of 2020 is dubious, and rushing fully autonomous cars to market before they’re truly ready may hurt the industry in the long run. Inspiring the public and getting people excited about the future are noble goals, but they can be accomplished without setting unrealistic deadlines in an apparent attempt to win the hearts of investors. Besides, investors don’t seem to be buying it.