Tesla's Self-Driving Safety Under Scrutiny After Insider Leak

19-12-2023 | By Robin Mitchell

Recently, an ex-employee of Tesla leaked internal data that casts doubt on the safety of the AutoPilot system. What exactly happened with the leak, why was this to be expected, and what does this mean for future self-driving systems?

White Tesla Model 3 cruising through Los Angeles, CA - Captured on 4th December 2022.

Tesla data leaked by ex-employee

Tesla’s AutoPilot has been touted as the world’s safest and most advanced self-driving system, capable of allowing its fleet of vehicles to park automatically, be summoned by owners, and even drive from coast to coast in the US. However, despite its name and the claims made for it, there is growing concern over its true capabilities and safety record, with numerous crashes having been reported and, in some cases, fatalities.

As most of the data gathered on Tesla-related crashes is kept secret from the public (being proprietary to Tesla), it has been hard for researchers to estimate the true risks associated with self-driving systems. However, a recent data leak from an ex-Tesla employee has not only cast doubt on the safety of the AutoPilot system but also revealed that certain safety protocols have not been followed.

The whistle-blower, Lukasz Krupski, provided a large amount of internal Tesla data (around 100GB) to the German newspaper Handelsblatt in May but has only recently revealed his identity. According to Krupski, he raised concerns regarding the AutoPilot system on numerous occasions while at Tesla, but they were ignored.

Krupski also noted that both customers and Tesla employees have repeatedly observed abnormal behaviour in Tesla vehicles, which often brake to avoid collisions with non-existent objects (an effect commonly referred to as phantom braking).

With regard to crash data, Tesla has claimed that AutoPilot results in one crash requiring an airbag every 5 million miles travelled, while Tesla drivers not using AutoPilot average one every 1.5 million miles, and the average US driver one every 600,000 miles. While these figures cannot be verified with the leaked data, what was discovered was that between 2015 and 2022 there were at least 1,500 reports of braking issues, 2,500 cases of self-acceleration, 139 cases of emergency stops, and 383 reports of phantom braking (bringing the total to around 4,500 individual issues).
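To put those claims into perspective, here is a quick back-of-the-envelope comparison (a sketch only, using the unverified mileage figures Tesla cites, not anything from the leaked data):

```python
# Rough comparison of the crash rates Tesla cites (unverified figures).
# Each value is the claimed number of miles driven per airbag-deployment crash.
miles_per_crash = {
    "AutoPilot engaged": 5_000_000,
    "Tesla, no AutoPilot": 1_500_000,
    "Average US driver": 600_000,
}

# Convert to crashes per million miles so the rates are directly comparable.
for group, miles in miles_per_crash.items():
    print(f"{group}: {1_000_000 / miles:.2f} crashes per million miles")

# Implied ratio: Tesla's figures would mean AutoPilot crashes roughly 8x less
# often per mile than the average US driver -- a claim no one outside Tesla
# has been able to verify.
print(f"Implied improvement over average driver: {5_000_000 / 600_000:.1f}x")
```

The exercise is not an endorsement of the figures; it simply shows how strong a claim is being made on the basis of data that only Tesla can verify.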

Further Insights into Tesla's AutoPilot Concerns

Adding to the complexity of the situation, whistleblower Lukasz Krupski's revelations have brought to light specific issues within Tesla's self-driving technology. Krupski's concerns, primarily about the safety of Tesla's AutoPilot on public roads, stem from his findings in the company's internal data, including numerous customer complaints about erratic braking behaviour and software reliability.

This phenomenon, known as "phantom braking," where Tesla vehicles unexpectedly apply brakes, has raised serious questions about the system's ability to accurately interpret road conditions. Such incidents not only pose a risk to Tesla drivers but also to other road users, highlighting the need for more rigorous safety measures and oversight.

While Tesla claims that its AutoPilot feature significantly reduces the likelihood of crashes, with statistics suggesting a much lower incidence rate compared to non-AutoPilot users, these figures have not been independently verified. This discrepancy between company claims and whistleblower reports underscores the necessity for transparent and independent safety assessments of autonomous driving technologies.

The legal and ethical implications of these revelations are far-reaching. The US Department of Justice, along with other regulatory bodies, is currently investigating Tesla's claims about its assisted driving features. These probes could lead to more stringent regulations and standards for self-driving technologies, ensuring that advancements in this field are matched with adequate safety protocols and ethical considerations.

Why was this to be expected?

Despite the numerous claims that have been made regarding self-driving, it has become apparent over the past few years that self-driving is an immensely difficult challenge to solve. While there is no doubt that Tesla may indeed have some of the most advanced self-driving AI systems in existence, that doesn’t mean that they are ready for real-world applications. 

To start, Tesla vehicles have been recalled on numerous occasions due to poor build quality, safety issues, and faulty software. In some cases, software updates have managed to solve the issue, but in other cases, vehicles had to be taken back to a garage for upgrades and repairs. Of course, all car manufacturers eventually recall a product line, as no manufacturer is perfect, but considering that Tesla makes up only around 0.3% of all cars in the US (based on 1 million Tesla vehicles and 300 million vehicles in the USA), the company accounted for 20% of all vehicles recalled in 2022.
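The disproportion is easy to quantify from the figures above (a rough illustration only, using the approximate fleet size and recall share cited here):

```python
# Rough illustration of the recall disproportion using the figures cited above.
tesla_fleet_share = 1_000_000 / 300_000_000  # ~0.33% of vehicles on US roads
tesla_recall_share = 0.20                    # ~20% of vehicles recalled in 2022 (as cited)

# How over-represented Tesla is in recalls relative to its share of the fleet.
over_representation = tesla_recall_share / tesla_fleet_share
print(f"Fleet share: {tesla_fleet_share:.2%}")                     # ~0.33%
print(f"Recall over-representation: ~{over_representation:.0f}x")  # ~60x
```

In other words, taken at face value, Tesla vehicles were recalled at roughly sixty times the rate its share of the fleet alone would predict.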

Another piece of evidence suggesting that Tesla is simply not ready to deploy fully self-driving vehicles is that it is actively moving away from sensors such as ultrasonic, radar, and LiDAR in favour of a camera-only system. The reasoning behind this decision came down to a simple argument: because humans drive using only their eyes, cars should be able to do the same with cameras alone.

Challenges in Tesla's Sensor Technology and Design Decisions

While this claim may sound logical, it is anything but. Humans drive with only a pair of eyes, yet bad weather, poor light, and blind spots all contribute to crashes. The same visual limitations will likely affect camera-only self-driving systems.

For this reason, every other manufacturer working on autonomous driving integrates a whole range of sensing technologies so that if one sensor fails or is degraded, others can take over. Simply put, it is arguably backward for a vehicle manufacturer to remove vital sensors with long-range capabilities in favour of cameras that can easily be fooled by visual artefacts and rely on good lighting and visibility.
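As a rough illustration of why that redundancy matters, the sketch below (a hypothetical example for illustration only, not Tesla's or any manufacturer's actual code) fuses independent range estimates from camera, radar, and LiDAR, weighting each by a confidence score so that a degraded sensor, such as a camera blinded by glare, simply contributes less to the final estimate:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorReading:
    """Range estimate (metres) to the nearest obstacle, with a confidence in [0, 1]."""
    distance_m: Optional[float]  # None if the sensor produced nothing usable
    confidence: float

def fuse_obstacle_distance(camera: SensorReading,
                           radar: SensorReading,
                           lidar: SensorReading) -> Optional[float]:
    """Confidence-weighted fusion: a degraded or failed sensor contributes little or nothing."""
    readings = [r for r in (camera, radar, lidar)
                if r.distance_m is not None and r.confidence > 0.0]
    if not readings:
        return None  # no usable data from any sensor
    total_weight = sum(r.confidence for r in readings)
    return sum(r.distance_m * r.confidence for r in readings) / total_weight

# Example: camera blinded by low sun, radar and LiDAR still reliable.
camera = SensorReading(distance_m=None, confidence=0.0)
radar = SensorReading(distance_m=41.8, confidence=0.9)
lidar = SensorReading(distance_m=42.3, confidence=0.95)
print(fuse_obstacle_distance(camera, radar, lidar))  # ~42.1 m despite the camera failure
```

With a camera-only stack, the same glare event would leave the system with no distance estimate at all, which is the heart of the redundancy argument.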

Finally, with regard to design decisions, one only has to look at the Cybertruck to recognise a poor engineering mindset. The Cybertruck has been advertised with bulletproof glass, steel plating, and a large battery pack for extended range. While such a vehicle may have its place in a warzone, it contains a design vulnerability that could be life-threatening to passengers.

Tesla vehicles have been known to catch fire at times due to their lithium-ion batteries, and in such a situation, it is critical that all occupants can get out. However, because the windows and doors are electrically operated, a loss of power can leave them impossible to open (this reportedly happened to a Tesla owner who nearly burned alive in his vehicle).

In most Tesla models, this is not a fatal flaw, as a window can be smashed open, providing an emergency exit in almost all directions. However, the Cybertruck's windows are bulletproof, meaning that in such an emergency, escaping the vehicle would be far harder, if not impossible. What this indicates is a poor mindset with regard to engineering practices and safety, and that is before even considering the safety of pedestrians, who face a rigid exterior with no crumple zones should they be hit.

Comparative Analysis: Tesla AutoPilot vs. Other Self-Driving Technologies

When evaluating the landscape of self-driving technology, it's insightful to compare Tesla's AutoPilot with its contemporaries. Companies like Waymo, Cruise (a subsidiary of General Motors), and the Ford-backed Argo AI have taken distinct paths in developing their autonomous driving systems, each with its own approach and challenges.

Waymo: Waymo, a subsidiary of Alphabet Inc., has focused heavily on integrating a comprehensive suite of sensors, including LiDAR, radar, and cameras. This redundancy ensures that the system has multiple data points to cross-reference, enhancing safety and reliability, especially in challenging weather conditions. Waymo's cautious and methodical approach contrasts with Tesla's reliance on a primarily camera-based system.

Cruise: Cruise's strategy involves creating an ecosystem where self-driving cars are part of a shared network, reducing the need for personal vehicle ownership. Their technology also relies on a combination of sensors, similar to Waymo, but with an added emphasis on urban driving scenarios. Cruise's focus on densely populated city environments presents a different set of challenges and solutions compared to Tesla's more generalist approach.

Ford's Argo AI: Argo AI, backed by Ford and Volkswagen until it was wound down in late 2022, developed its self-driving technology with a focus on both goods delivery and passenger transport. Its approach included high-definition maps, a robust suite of sensors, and significant testing in multiple cities. Argo AI's balanced focus on both delivery and passenger services offered a different perspective compared to Tesla's primarily consumer-focused AutoPilot.

These comparisons reveal that while Tesla's AutoPilot is undoubtedly advanced, its approach differs significantly from its competitors, particularly in sensor technology and application focus. Each company's strategy reflects its vision for the future of autonomous driving, and the diversity in these approaches is a testament to the complexity and potential of this rapidly evolving field.

What does this mean for future self-driving systems?

When taking all of this into account, it is highly likely that self-driving systems will be hit with numerous regulations before they can be rolled out. While Tesla may continue to advertise its technologies as a major feature, the truth is that these systems are simply not ready for the market. 

In all likelihood, it will be other major brands (such as Ford and Toyota) that crack the self-driving challenge before Tesla, as these brands not only have experience manufacturing vehicles at scale but also understand their industry far better than Tesla. However, it is also possible that Tesla could wind down its vehicle production as competitors take over and instead supply technologies such as AutoPilot and battery management as an OEM for other manufacturers to deploy in their vehicles.

But Tesla is undeniably testing and refining its services using customer data, which puts not only its customers but also the wider public at risk. For every Tesla crash (fatal or not), Tesla will likely have access to that data, which it then uses to further train its models.

Using such data could be considered highly immoral, as Tesla is effectively using the public to test its own services without consent from those affected by incidents. To make matters worse, it is highly likely that Tesla can find legal loopholes to absolve itself of responsibility in crashes, allowing it to continue developing its AI systems with minimal risk.


By Robin Mitchell

Robin Mitchell is an electronic engineer who has been involved in electronics since the age of 13. After completing a BEng at the University of Warwick, Robin moved into the field of online content creation, developing articles, news pieces, and projects aimed at professionals and makers alike. Currently, Robin runs a small electronics business, MitchElectronics, which produces educational kits and resources.