Elon Musk’s ambitious claims about Tesla’s self-driving capabilities have come under intense scrutiny following a massive recall of over 2 million vehicles. This development has cast a spotlight on what critics and close observers have long argued: Tesla’s touted “self-driving” technology is far from autonomous and heavily reliant on human intervention.
Back in 2016, Musk boldly asserted that Tesla cars could “drive autonomously with greater safety than a person. Right now.” This claim, which significantly boosted Tesla’s stock and Musk’s wealth, is now unraveling. The recall reveals a critical truth that Tesla admits in its legal fine print: the technology requires constant human attention.
Tesla’s approach to driving automation has been one of the most consequential yet least understood issues in the tech and automotive industries. The company’s strategy recalls the Mechanical Turk of 1770, the famed chess-playing “automaton” whose performances depended on a human operator concealed inside the cabinet. The parallel is troubling because it plays out on public roads: humans are quietly required to supervise an incomplete system that is marketed as a machine.
The recall notice for Tesla’s vehicles highlights a problem not with the Autopilot technology itself, but with human behavior. When drivers use a system that handles steering, braking, and acceleration, they tend to become less attentive. That would not matter if Teslas were fully capable of safe autonomous driving and the company assumed legal responsibility for its software’s actions. Since neither is true, drivers must be ready to intervene at any moment, a requirement they have repeatedly failed to meet, with several high-speed accidents as the result.
The irony of automation, as Lisanne Bainbridge observed in her 1983 paper “Ironies of Automation,” is that the more a system automates, the less attentive its human supervisor becomes, precisely in the rare, time-critical moments, like preventing a crash, when attention matters most. This problem has been evident in Tesla’s Autopilot for years. The National Transportation Safety Board (NTSB) investigated several fatal crashes involving Autopilot and found that the drivers were not paying attention when their Teslas collided with unexpected obstacles.
Despite these findings, regulatory action was slow. The National Highway Traffic Safety Administration (NHTSA) only began a more thorough investigation in 2021 after multiple crashes involving Autopilot and emergency responder vehicles. Meanwhile, Musk continued to hype the self-driving technology, collecting deposits for a “Full Self-Driving” version of the system, despite clear evidence of its limitations and risks.
Since 2018, Tesla’s quarterly safety reports have claimed that Autopilot is safer than human drivers, a claim contested by road safety researchers. According to researcher Noah Goodall, once the figures are adjusted for factors like road type and driver age, Tesla’s alleged 43% reduction in crashes becomes an 11% increase.
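The mechanism behind that reversal is easiest to see with a toy calculation. The Python sketch below uses wholly invented numbers, not Goodall’s data, to show how an aggregate per-mile comparison can flatter a system whose miles are concentrated on the safest roads: the raw rates favor Autopilot, yet within each road type it fares worse.

```python
# Illustrative sketch only: all numbers are invented to show the mechanism
# (a Simpson's-paradox-style reversal), not Goodall's actual figures.

def per_million_miles(crashes, miles):
    """Crash rate per million vehicle miles traveled."""
    return crashes / miles * 1_000_000

# Hypothetical (crashes, miles) by road type. Autopilot miles are
# concentrated on freeways, where crash rates are lower for everyone.
autopilot = {"freeway": (20, 60_000_000), "surface": (2, 2_000_000)}
manual    = {"freeway": (15, 50_000_000), "surface": (60, 80_000_000)}

def aggregate_rate(data):
    """Pool all road types into a single crashes-per-mile figure."""
    crashes = sum(c for c, _ in data.values())
    miles = sum(m for _, m in data.values())
    return per_million_miles(crashes, miles)

# Raw comparison (the kind a headline safety claim makes): Autopilot
# looks roughly 38% safer, because its denominator is mostly easy miles.
print(f"raw: autopilot {aggregate_rate(autopilot):.2f} "
      f"vs manual {aggregate_rate(manual):.2f} crashes per M miles")

# Stratified comparison: hold road type fixed and the advantage flips.
for road in ("freeway", "surface"):
    ap = per_million_miles(*autopilot[road])
    mn = per_million_miles(*manual[road])
    print(f"{road}: autopilot {ap:.2f} vs manual {mn:.2f} "
          f"({(ap / mn - 1) * 100:+.0f}%)")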
The ideal design for an Autopilot-like system would have paired the car’s sensors with human judgment, an augmented “cyborg” arrangement in which each compensates for the other’s weaknesses. Instead, Tesla built a system that merely appeared self-driving, boosting profits and stock prices while compromising safety.
Tesla’s response to the recall is a software update, a limited fix that cannot match competitor systems built around infrared eye-tracking cameras or laser-mapped roads. The update is meant to remind drivers more insistently of their responsibility, countering the long-standing narrative that the system itself can ensure safety.
NHTSA’s recall, though a modest victory, adds to the growing recognition that Tesla’s claims about its technology are both untrue and unsafe. Musk long argued against adding driver monitoring on the grounds that it would introduce errors; NHTSA is now challenging that position, a shift that calls into question the future of Tesla’s self-driving technology and its implications for driver safety.





