How AI is still too young for self-driving vehicles

17-08-2021 | By Robin Mitchell

A video that recently went viral shows a Tesla mistaking the moon for an amber traffic light. What challenges does driving present to self-driving systems, what does the video show, and why does it demonstrate that AI is still in its infancy?


What challenges face AI in self-driving vehicles?


Self-driving cars are a potential holy grail for the automotive industry, as they could eliminate road traffic accidents and save countless lives. While genuine acts of God (such as a lightning strike or a falling tree) are hard to react to, self-driving cars could spell the end of human error, such as failing to indicate when changing lanes, falling asleep at the wheel, or not reacting to slowing traffic.

However, for a car to be self-driving, it has to be fully aware of its surroundings. From monitoring all pedestrians to determining the speed of every nearby car, if a self-driving system fails to account for anything, it not only becomes unreliable; users also lose trust in it.

Many sensor technologies can be implemented in self-driving vehicles, including LiDAR, RADAR, SONAR, and imaging, but simply reading such data is not enough. Instead, a self-driving car should be able to anticipate what will happen so it can react to incidents sooner. This is where AI becomes a critical tool in self-driving vehicles, as AI systems can learn to predict behaviour from past experience.
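To make "predicting behaviour from past experience" concrete, below is a minimal sketch in Python of a tiny classifier trained to guess whether a lead vehicle is about to brake. This is emphatically not Tesla's system; all data, features, and thresholds are invented for illustration.

```python
# Hypothetical sketch: predict whether the lead vehicle will brake.
# None of the data or features here come from a real driving system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented features: [relative speed (m/s), gap to lead car (m), gap closing rate (m/s)]
X = rng.normal(size=(500, 3))
# Toy labelling rule: a quickly closing gap tends to precede braking.
y = (X[:, 2] > 0.5).astype(int)

model = LogisticRegression().fit(X, y)

# For a new observation the model returns a probability -- but no explanation.
print(model.predict_proba([[0.4, -0.3, 0.9]]))
```

Note that the trained model outputs only a probability; it cannot articulate why, which leads directly to the drawback discussed next.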

However, AI has its drawbacks, and one of these is the inability to explain its findings. For example, an AI system could be trained to detect drivers of other vehicles who may pose a threat, but it could never explain the reasoning behind its conclusions. In other words, we may know that an AI system functions well, but because we do not understand how, we cannot predict how it will react to unfamiliar scenarios.

Thus, any AI system integrated into a self-driving car will function until it does not, and when it fails, we will not understand why. This makes failures hard to fix, as the only real remedy is to keep training the AI with new data.


Tesla mistakes the moon for an amber light


Recently, a viral online video demonstrated how a Tesla could confuse the moon with an amber traffic light. Upon detecting the moon, the vehicle indicated an amber light to the driver, warning them to slow down. This warning may also trigger an automatic slowdown when the vehicle is in Autopilot mode, which, despite the name, requires the driver to remain fully in control of the vehicle.

In the video, it is clear that the moon is low in the sky and a deep shade of yellow. From the car's perspective, the moon may very well have looked like an amber traffic light. However, the human brain is far more capable and would instantly recognise that the glowing circle in the sky is the moon.

Exactly why the Tesla vehicle confused the moon for a traffic light is hard to say, as AI cannot explain its decision process. Most likely, the car had never been shown pictures of the moon and therefore decided it was a light upon first seeing it. Another possible explanation is the colour of the moon: the system may have learned that anything orange and circular is a warning light. Does this mean the vehicle would not stop for square lights, or for lights that are not quite amber?
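As a thought experiment, the sketch below shows how fragile an "orange and circular means warning light" heuristic would be. This is not Tesla's pipeline, which is unpublished; it is a deliberately naive OpenCV colour-and-shape filter, with every threshold guessed, and it would flag a low yellow moon just as readily as a genuine amber light.

```python
# Deliberately naive "amber light" detector: orange hue + roughly circular blob.
# All thresholds are illustrative guesses, not values from any real system.
import cv2
import numpy as np

def looks_like_amber_light(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Rough amber/orange hue band in HSV.
    mask = cv2.inRange(hsv, (10, 100, 100), (30, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)
        if perimeter == 0:
            continue
        # Circularity is 1.0 for a perfect circle; accept near-circular blobs.
        circularity = 4 * np.pi * area / perimeter ** 2
        if area > 50 and circularity > 0.8:
            return True  # fires on amber traffic lights -- and on a low yellow moon
    return False
```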


Is AI not ready for self-driving cars?


Before we consider the possibility that AI is simply not ready for advanced tasks such as self-driving, we first need to ask why an AI would mistake the moon for a traffic light. To be more specific, Tesla vehicles have many integrated cameras, but the failure to determine that the moon is very far away suggests that the vehicle is most likely not taking advantage of stereoscopic distancing. In other words, we know the moon is far away because it does not change shape when we move, it shows no parallax as we move around, and our eyes point in essentially parallel directions when looking at it. A stereoscopic vision system would determine that the moon is a distant object and therefore cannot be a traffic signal.
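For reference, stereoscopic distancing rests on a simple relation: depth Z = f × B / d, where f is the focal length, B is the baseline between the two cameras, and d is the disparity (the pixel offset of the same object between the two views). A distant object like the moon produces essentially zero disparity, placing it effectively at infinity. The sketch below uses made-up camera numbers, not Tesla specifications.

```python
# Back-of-envelope stereo depth: Z = f * B / d.
# f and B below are illustrative assumptions, not real camera specs.
f = 1000.0  # focal length, in pixels
B = 0.3     # baseline between the two cameras, in metres

def depth_from_disparity(d_pixels):
    # Zero disparity means no measurable parallax: the object is effectively at infinity.
    return float("inf") if d_pixels == 0 else f * B / d_pixels

print(depth_from_disparity(30))  # ~10 m: plausibly a traffic light
print(depth_from_disparity(0))   # infinity: like the moon, so not a traffic signal
```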

Despite Tesla calling its driver-assistance system "Autopilot", it is far from capable of driving the vehicle by itself. Furthermore, no matter how many sensors are mounted on a vehicle, modern AI may be unable to drive it reliably, as there are too many unknowns. Sure, self-driving vehicles may have a million miles under their belt, but those are most likely a controlled million miles with no snow, no falling trees, no bad drivers, and no random events.

For AI to be reliable, it needs more than training on a series of perfect drives plus data from a few random events; it needs to be able to anticipate, adapt, and react to new situations without fail.


By Robin Mitchell

Robin Mitchell is an electronic engineer who has been involved in electronics since the age of 13. After completing a BEng at the University of Warwick, Robin moved into the field of online content creation, developing articles, news pieces, and projects aimed at professionals and makers alike. Currently, Robin runs a small electronics business, MitchElectronics, which produces educational kits and resources.