02-09-2021 | | By Robin Mitchell
Recently, the US federal agency responsible for traffic safety announced that it will be launching a full investigation into the use of self-driving systems on public roads. What challenges do self-driving vehicles face, why is the US launching an investigation, and how could this affect the development of AI-based technologies in the future?
Developing self-driving vehicles is one of the biggest challenges automakers have faced to date: it requires combining sensor systems that can map the surrounding environment with an AI that can interpret all of that data in real time. The task is further complicated by the fact that people's definitions of "self-driving" vary and that driving a vehicle requires the driver to expect the unexpected.
To begin, a self-driving car must be able to see its surrounding environment. Humans do this almost entirely with two eyes: binocular vision lets us judge depth (i.e., the distance of surrounding objects), while sensitive peripheral vision lets us react quickly to movement at the edges of our field of view. Replicating this with two cameras on a car and combining their images for stereoscopic vision is incredibly complicated, especially considering that the data needs to be processed in real time.
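To illustrate the principle behind stereoscopic depth, here is a minimal sketch in Python. It uses the standard pinhole-camera relation Z = f·B/d (depth equals focal length times camera baseline divided by pixel disparity); the camera numbers are made-up illustrative values, not the specs of any real vehicle, and a production system would first have to solve the much harder problem of matching pixels between the two images to obtain the disparity at all.

```python
# Depth from stereo disparity: two cameras a baseline B apart see the
# same object shifted by d pixels between their images. With focal
# length f (in pixels), depth follows from Z = f * B / d.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return the depth in metres for one matched pixel pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (zero means the object is at infinity)")
    return focal_px * baseline_m / disparity_px

# Illustrative values: 700 px focal length, 0.3 m baseline, 21 px disparity.
print(depth_from_disparity(700, 0.3, 21))  # 10.0 (metres)
```

Note how the disparity shrinks as depth grows: distant objects shift by only a pixel or two, which is one reason real-time stereo vision at highway ranges is so demanding.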
Other sensor technologies that can increase a vehicle's awareness of its surroundings include LiDAR for distance mapping, RADAR for long-range object detection, and SONAR for short-range detection. GPS and other mapping technologies can also help a vehicle anticipate obstacles in the distance thanks to traffic reporting systems, allowing the car to be pre-emptive about the road ahead.
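One reason for carrying several overlapping sensors is that redundant readings can be fused into a single, more trustworthy estimate. The sketch below (not any automaker's actual code) shows a textbook inverse-variance weighting of two distance measurements, where the assumed noise figures for each sensor are purely illustrative:

```python
# Fuse redundant distance estimates by weighting each reading with the
# inverse of its assumed measurement variance: the more precise sensor
# dominates, but the noisier one still contributes.

def fuse_estimates(readings):
    """readings: list of (distance_m, variance) pairs -> fused distance in metres."""
    weights = [1.0 / var for _, var in readings]
    total = sum(w * dist for (dist, _), w in zip(readings, weights))
    return total / sum(weights)

# Illustrative values: LiDAR is precise at this range; RADAR is noisier
# but keeps working in fog or heavy rain when LiDAR degrades.
lidar = (50.2, 0.01)   # (distance in metres, variance)
radar = (49.0, 1.0)
print(round(fuse_estimates([lidar, radar]), 2))  # 50.19
```

The same idea scales up to full state estimators such as Kalman filters, where camera, LiDAR, and RADAR tracks are merged frame by frame.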
While such sensor technologies are relatively straightforward to implement, using their data to create a vehicle that can drive itself is another challenge entirely. For a computer to drive on a road, it must do more than recognise road boundaries and traffic signs. It needs to anticipate bad drivers, react to situations as they unfold, identify pedestrians who may be about to dash across the road, and respond to emergency vehicles.
As such, developers of these systems require advanced computing hardware, accurate sensors, and the freedom to experiment. Rules and regulations governing experimental vehicles already exist, but making them more restrictive would hurt the development of self-driving systems. This is why automakers need to be careful when releasing such technology to the public.
Recently, the US federal agency dealing with road safety, the National Highway Traffic Safety Administration (NHTSA), decided to launch an official investigation into Tesla's "Autopilot" system. While there have been plenty of Autopilot-related crashes, the increasing number of collisions with parked emergency vehicles is proving to be a particular cause for concern.
It is not uncommon to see emergency vehicles parked at the edge of a road, whether for a traffic stop or to attend an incident. What should not be common, however, is drivers ploughing into the backs of those vehicles, given that they almost always have bright flashing lights and are parked in the slowest lane.
However, the Autopilot system developed by Tesla has demonstrated that it can be easily fooled and can miss rather apparent obstacles. For example, a video last month showed Tesla's Autopilot mistaking a full moon for an amber traffic light. Furthermore, the federal agency stated that in all of the traffic incidents involving a Tesla and an emergency vehicle, either Autopilot or Traffic-Aware Cruise Control was in use.
How will such investigations affect the development of future self-driving systems?
Such investigations obviously create bad press as well as false narratives about the safety of these vehicles. That does not mean regulations should not exist, but their introduction will make developing such systems more complex. It should also be noted that many customers misuse Tesla's driver-assistance technology: such systems require the driver to keep their hands on the wheel and remain in complete control of the vehicle.
However, this does raise a question about the term "self-driving". It is obvious that Tesla vehicles cannot drive themselves; the Autopilot system is more of a neat trick for keeping a car in its lane. A true self-driving vehicle would not require a driver in the seat at all, which is not the case with Tesla vehicles. So why does Tesla advertise its cars with terms such as "self-driving" and "Autopilot"? Such terms give customers the impression that they can sit in the passenger's seat while the car does all the driving.
Overall, Tesla is making fundamental advances in the field of autonomous driving. Still, by attaching terms such as "self-driving" and "Autopilot" to its commercial vehicles, it is helping to create an environment that will encourage more regulation, thereby making it harder to advance and test such systems.