18-08-2022 | By Robin Mitchell
As Tesla strives to develop self-driving vehicles, public beta releases allow it to test its systems in real-world environments, but is this type of testing putting customers’ lives at risk? What challenges do self-driving vehicles present, how is Tesla using customers to train its models, and is this type of “guinea pig” testing immoral and dangerous?
Each year, Elon Musk suggests that self-driving vehicles are just around the corner, destined to power vehicles travelling through small tunnels at high speed as well as rovers on Mars. But for all these positive intentions, the truth is that a genuinely self-driving vehicle remains far from reality.
It is true that there are test vehicles capable of navigating city centres and reliably detecting and avoiding obstacles, but in most cases, these environments are tightly controlled by developers, which eliminates numerous unknowns. A classic example is the self-driving taxi service run by Waymo in California. Waymo’s vehicles are restricted to specific areas that have been thoroughly documented and mapped, and a video demonstration by YouTuber Veritasium showed that construction signs requiring vehicles to merge confused a Waymo vehicle so badly that it had to pull over and ask for assistance.
But why is self-driving such a difficult challenge to solve?
Simply put, driving is an incredibly complex task that combines visual processing, understanding, and reacting. Decisions must be made in rapidly changing situations that the driver may never have encountered before, while numerous factors such as environmental conditions and road changes must all be taken into account.
Self-driving cars cannot be powered by basic computational algorithms that run analytical models for every conceivable event. Instead, self-driving vehicles require artificial intelligence that is trained to recognise the most important situations while also learning how to handle evolving ones.
For example, a self-driving AI system doesn’t need to be told precisely what to do in every situation as it can understand that if something is blocking its path, it should take safe measures to either turn the vehicle around or carefully move around the obstacle. Furthermore, the AI should also be able to read road signs and recognise that map data may be old (i.e. new junctions, roundabouts, and roads).
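The impracticality of hand-writing a rule for every conceivable event can be shown with a toy calculation. The factor counts below are invented purely for illustration; the real driving state space is continuous and vastly larger, which only strengthens the point:

```python
# Toy illustration: the combinatorial explosion of hand-written driving rules.
# All factor counts below are made up for illustration only.
factors = {
    "weather": 5,    # e.g. clear, rain, fog, snow, ice
    "lighting": 3,   # day, dusk, night
    "road_type": 6,  # motorway, urban, rural, ...
    "obstacle": 10,  # pedestrian, cyclist, debris, ...
    "signage": 20,   # permanent signs plus temporary roadworks
    "traffic": 8,    # density and behaviour of other vehicles
}

combinations = 1
for count in factors.values():
    combinations *= count

print(combinations)  # 144000 distinct situations from just six coarse factors
```

Even six coarsely discretised factors yield 144,000 distinct situations, each needing its own explicit rule; a learned model generalises across them instead.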
Creating such an AI requires massive amounts of data, and this is where one of the biggest hurdles appears: real-world data. A self-driving car that crashes into other cars may generate valuable training data, but crashing is entirely unacceptable as a collection method. Connecting human-driven cars to the internet and streaming live data is another option, but it introduces its own challenges, such as connectivity and privacy.
Overall, the lack of data and the current state of AI technology mean that a self-driving car that is safe to operate cannot yet be produced.
Tesla is well known for its electric vehicles, but while most see Tesla as a vehicle manufacturer, it could be argued that Tesla is really a data company. Every Tesla vehicle carries an array of sensors, cameras, and onboard data loggers that stream data back to Tesla for the purpose of training self-driving AI. With every mile driven by Tesla vehicles, Tesla’s AI models improve, and this use of customer data creates a feedback loop that refines future versions of the Autopilot software.
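That feedback loop can be sketched in miniature. This is a hypothetical illustration of how fleet-learning pipelines work in general, not Tesla’s actual (proprietary) pipeline; every name and the retraining threshold here are invented:

```python
# Hypothetical sketch of a fleet data feedback loop (illustrative only;
# Tesla's real pipeline is proprietary and not publicly documented).
from dataclasses import dataclass, field

@dataclass
class FleetLearningLoop:
    model_version: int = 1
    logged_frames: list = field(default_factory=list)

    def log_drive(self, frame: str) -> None:
        # Each customer car streams sensor data back while driving.
        self.logged_frames.append(frame)

    def retrain_and_deploy(self, min_frames: int = 2) -> None:
        # Once enough data accumulates, a new model revision is trained
        # and pushed back out to the fleet as a firmware update.
        if len(self.logged_frames) >= min_frames:
            self.model_version += 1
            self.logged_frames.clear()

loop = FleetLearningLoop()
loop.log_drive("camera_frame_001")
loop.log_drive("camera_frame_002")
loop.retrain_and_deploy()
print(loop.model_version)  # 2
```

The key property the sketch captures is that customers supply the training data and receive the resulting model, closing the loop without Tesla ever driving the miles itself.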
At the same time, firmware updates allow customers to upgrade their vehicle’s capabilities over time, and those who sign up for beta testing programs can even gain access to features still in development. This beta testing gives Tesla real-world data that would otherwise be too difficult to obtain, providing an edge over its competitors in the field of autonomous driving.
In most circumstances, using customers to test beta versions of software before rolling them out can be an excellent way to test the waters. Potential bugs that might cause grief can be ironed out, and customer feedback can help improve existing services and add new features. However, in the case of self-driving, giving customers access to beta versions of software for the sake of gathering data and improving performance is not only dangerous but borderline immoral.
Recently, Tesla has been giving select beta customers early access to its Full Self-Driving system, and videos are already being posted of drivers using the software to navigate mountain roads and other dangerous environments. While drivers must stay alert and be ready to take control should something go wrong, it is highly likely that those testing the new features do not fully appreciate that so-called “Full Self-Driving” is closer to adaptive cruise control with automatic lane keeping than to an actual self-driving vehicle.
Worse, those testing the Full Self-Driving software may also not fully appreciate that it is a beta and, as such, can easily make mistakes. This is especially dangerous for other road users, who may end up in a collision caused by an error in the beta software.
Simply put, the beta release of “Full Self-Driving” will undoubtedly encourage illegal driving and endanger the lives of drivers, passengers, and other road users. If Tesla wants to test self-driving systems, it should do so in company-owned vehicles in environments that don’t endanger others.