Elon Musk Biography Reveals Tesla’s Cutting-Edge Approach to Full Self-Driving v12 Development
Isaacson's Book Unveils Tesla's Shift to AI-Powered FSD v12 and Regulatory Challenges Ahead
Walter Isaacson’s highly anticipated biography of Elon Musk is poised for release on Tuesday, and a new preview of the book offers intriguing insights into Tesla’s development of the upcoming Full Self-Driving (FSD) version 12.
In a recent CNBC preview of his Musk biography, Isaacson delves into the pivotal role of artificial intelligence (AI) in the evolution of Tesla’s FSD v12, a significant shift that unfolded over the last several months. Isaacson underscores how Tesla’s approach to FSD v12 has veered away from the conventional “rules-based” methodology.
Notably, FSD v12 is slated to harness billions of real-world driving video frames to train its neural network, a departure from previous iterations, which relied on thousands of lines of hand-written code. In a conversation with Elon Musk last December, Dhaval Shroff, a Tesla Autopilot employee, likened the concept to ChatGPT but tailored for driving.
“It’s like ChatGPT, but for cars,” Shroff explained. “We process an enormous amount of data on how real human drivers acted in complex driving situations, and then we train a computer’s neural network to mimic that.”
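Shroff’s analogy describes what machine-learning practitioners call imitation learning, or behavior cloning: a model is trained to reproduce the actions human drivers took in recorded situations. As a loose illustration only (Tesla’s actual models, data, and training pipeline are not public), a toy behavior-cloning setup might look like the following, with a single linear layer standing in for the neural network and synthetic numbers standing in for video-derived features:

```python
# Toy behavior-cloning sketch (illustrative only; Tesla's real
# architecture and training data are not public).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: each row is a simplified "frame" feature vector
# (e.g., lane offset, road curvature); the label is the steering command
# a human driver actually issued in that situation.
X = rng.normal(size=(1000, 4))
true_w = np.array([0.5, -1.0, 0.25, 0.0])        # hidden human "policy"
y = X @ true_w + rng.normal(scale=0.01, size=1000)  # recorded human actions

# One linear layer stands in for the neural network; fitting it by least
# squares is the "mimic what the human did" step.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def policy(state):
    """Predict a steering command for a new state by imitating the data."""
    return state @ w

# The learned policy closely reproduces the demonstrated behavior.
err = np.mean((policy(X) - y) ** 2)
print(f"mean squared imitation error: {err:.4f}")
```

The point of the analogy is that, as with a large language model, the behavior is learned from examples rather than written down as explicit rules.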
Remarkably, Tesla’s transition to this “neural network planner” approach occurred relatively recently. By the onset of this year, however, the neural network had already analyzed 10 million video clips, drawn from the most skilled drivers the system had access to. Musk directed the employees at Tesla’s Buffalo, New York facility, who were responsible for scrutinizing the footage, to train the AI on actions akin to those of “a five-star Uber driver.” Shifting from a rules-based to a network-path-based approach enabled FSD to use human driving data to navigate obstacles, even if doing so occasionally required bending a few rules. Shroff illustrated the shift with a demo featuring trash bins, debris, and upturned traffic cones, all of which the car handled remarkably well.
“Here’s what happens when we move from rules-based to network-path-based,” Shroff said. “The car will never get into a collision if you turn this thing on, even in unstructured environments.”
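The distinction Shroff draws can be caricatured in a few lines of code. The sketch below is purely hypothetical (the scene encodings, function names, and lookup table are invented for illustration): a rules-based planner enumerates every behavior explicitly, while a network-path-based planner consults preferences learned from human demonstrations, which may include rule-bending maneuvers like nudging around a cone.

```python
# Hypothetical contrast between the two planning styles; not Tesla's code.

def rules_based_planner(obstacle_ahead: bool, lane_clear: bool) -> str:
    # Conventional approach: every behavior is written out as an
    # explicit, hand-coded rule.
    if obstacle_ahead and lane_clear:
        return "change_lane"
    if obstacle_ahead:
        return "stop"
    return "continue"

# A network-path-based planner instead chooses maneuvers a trained model
# scored highly on human demonstrations. A lookup table of "learned"
# preferences stands in for the neural network here.
LEARNED_PREFERENCES = {
    ("cone_in_lane", "shoulder_clear"): "nudge_around",  # bends a lane rule
    ("cone_in_lane", "shoulder_blocked"): "stop",
    ("clear", "shoulder_clear"): "continue",
}

def network_path_planner(scene: tuple) -> str:
    # Fall back to a safe action when the scene is unfamiliar.
    return LEARNED_PREFERENCES.get(scene, "stop")

print(rules_based_planner(True, False))                          # stop
print(network_path_planner(("cone_in_lane", "shoulder_clear")))  # nudge_around
```

The rules-based planner can only do what its authors anticipated, whereas the learned planner can pick up maneuvers, like easing around debris, that humans perform but no one ever encoded as a rule.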
Musk swiftly embraced the idea, as evidenced in a recent livestream showcasing Tesla’s FSD v12 software in Palo Alto, alongside Autopilot software director Ashok Elluswamy. Despite a minor incident where the car nearly ran a red light during the demonstration, Musk has consistently touted the forthcoming software version’s exceptional driving capabilities.
In any event, Musk contends that such moments highlight the necessity for self-driving software to continuously learn. Given its constant training on video data from real-world drivers’ cameras, it should theoretically become safer over time, according to Musk.
During its development, Musk also noted that the neural network required more than a million video clips before it started performing well. Nevertheless, he anticipates that still more data will further enhance FSD’s performance.
However, critics and regulators remain apprehensive about the idea of human drivers training AI-based driving systems. The National Highway Traffic Safety Administration (NHTSA) has repeatedly raised concerns regarding Tesla’s Autopilot and FSD beta systems.
According to Isaacson, Tesla intends to release FSD v12 once it receives regulatory approval. Concurrently, NHTSA is conducting an ongoing study to determine whether self-driving cars should be permitted to emulate human driving actions that occasionally bend traffic rules, such as edging forward at stop signs.
Musk expressed in April that he anticipates Tesla achieving full autonomy within a year. Nevertheless, he has been known to set ambitious targets for the software in the past.