The dangers of internet data – How Alexa told a 10-year-old girl to touch live mains

11-01-2022 | By Robin Mitchell

Recently, a mother was left shocked when Amazon's Alexa told her 10-year-old daughter to take a penny and touch it to live mains. What exactly happened in this incident, how does it demonstrate the dangers of internet-gathered data, and what measures can engineers take in the future?


The Penny Challenge


In December 2021, a mother and daughter were looking for things to do around the home during bad weather. To help pass the time, Amazon's AI assistant Alexa can present challenges curated from online sources, such as finding all the items in a kitchen starting with a specific letter or counting how many push-ups an individual can do in a given time.

However, the indoor activities took a dark turn when Alexa presented the pair with a new challenge known as the “Penny Challenge”. Simply put, Alexa asked the 10-year-old to take a charger, pull it partially out of a socket, and drop a penny onto the exposed prongs.

Anyone who understands electricity will know that this challenge is hazardous. Firstly, touching a penny bridging exposed live prongs can easily result in fatal electrocution. Secondly, the intense current and sparking can start fires either at the socket or along the wiring.

The challenge was sourced from TikTok, where the penny challenge had gone viral. While Alexa relies on AI for voice processing and speech synthesis, it could not recognise the dangers of the challenge. Fortunately, both the girl and her mother realised that the challenge was outright stupid, and the mother took to social media to make others aware of the problem with Alexa.


How internet-curated data can be dangerous


Creating an intelligent and reactive system is easier said than done, but some methods can create perceived intelligence. For example, Cleverbot is an online AI system that learns from millions of conversations to develop a chat system that closely resembles a real person. However, the truth is that Cleverbot understands neither the context of its conversations nor the consequences of what it says.

In the case of Amazon Alexa, the engineers responsible for its design wanted to give Alexa functions that would make it interesting and relevant. One way to build such a system would be to create a genuine AI that could think for itself, observe what the world is doing, and report back on whatever it found interesting and appropriate. However, this is currently impossible, as no such AI exists (and may not for a very long time).

Another option is to let the general public create the content. In the case of Alexa challenges, any challenge that trends on social media (such as TikTok) can be scored on its popularity and then presented to users. However, such a system is unable to determine whether a trending challenge is safe, nor does it understand any moral issues associated with the challenge.

The internet may be an amazing development that has helped increase productivity and technological development, but it is also a landfill of rotten data. Whether it is the comments and opinions of exceptionally loud individuals, or sites presenting claims without credible sources, engineers must be careful when using data from the internet.


What can engineers do to prevent such incidents?


The sole reason why this incident with Amazon Alexa occurred was that the challenge feature designed by Alexa engineers had no vetting process. AI algorithms would automatically search trending data, identify a challenge, and add it to the list of available challenges. Adding a human vetting step would prevent dangerous tasks from being presented to users, but it also creates a bottleneck in a system intended to present challenges based on real-time data.
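The human-in-the-loop approach described above can be sketched as a review queue that sits between the trend-scraping algorithms and the users. This is a minimal illustration, not Amazon's actual architecture; all names (`Challenge`, `ChallengeQueue`, `ingest`, `review`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Challenge:
    name: str
    trend_score: float  # popularity score from the trend-scraping stage
    approved: bool = False

class ChallengeQueue:
    """Holds trending challenges until a human reviewer approves them."""
    def __init__(self):
        self.pending: list[Challenge] = []
        self.live: list[Challenge] = []

    def ingest(self, challenge: Challenge) -> None:
        # Trending challenges are never published directly; they
        # wait in a pending queue regardless of their trend score.
        self.pending.append(challenge)

    def review(self, name: str, approve: bool) -> None:
        # A human vetter either promotes a challenge to the live
        # list or discards it entirely.
        for c in list(self.pending):
            if c.name == name:
                self.pending.remove(c)
                if approve:
                    c.approved = True
                    self.live.append(c)

queue = ChallengeQueue()
queue.ingest(Challenge("kitchen letter hunt", 0.80))
queue.ingest(Challenge("penny challenge", 0.95))
queue.review("kitchen letter hunt", approve=True)
queue.review("penny challenge", approve=False)  # rejected by a human vetter
```

Note that the penny challenge is dropped despite its higher trend score: the bottleneck is exactly the point, since only reviewed content reaches users.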

Fundamentally, engineers need to recognise that any data generated outside of their control and posted online will have integrity issues. As such, systems that automatically take data from the internet need measures in place to grade that data. In the case of the Penny Challenge, a basic Google search would return many news articles stating the dangers of the challenge, and an AI algorithm could classify comments on penny challenge videos as negative or positive. If the overwhelming majority of comments are negative, that could signify that the challenge is unpopular and/or dangerous even if it is trending.
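The comment-grading idea above can be sketched with a crude keyword heuristic. A production system would use a trained sentiment model rather than a word list; the term list, threshold, and function names here are illustrative assumptions only.

```python
# Hypothetical comment grader: flags comments containing warning terms.
# A real system would use a proper sentiment-analysis model instead.
NEGATIVE_TERMS = {"dangerous", "fire", "hazard", "stupid", "warning"}

def grade_comments(comments: list[str]) -> float:
    """Return the fraction of comments flagged as negative (0.0 to 1.0)."""
    if not comments:
        return 0.0
    negative = sum(
        1 for c in comments
        if any(term in c.lower() for term in NEGATIVE_TERMS)
    )
    return negative / len(comments)

def is_safe_to_publish(comments: list[str], threshold: float = 0.5) -> bool:
    # Even a trending challenge is suppressed when most comments are negative.
    return grade_comments(comments) < threshold

comments = [
    "This is so dangerous, do NOT try it",
    "My socket caught fire doing this",
    "haha looks fun",
]
```

With the sample comments above, two of three are flagged as negative, so the challenge would be suppressed despite trending.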



By Robin Mitchell

Robin Mitchell is an electronic engineer who has been involved in electronics since the age of 13. After completing a BEng at the University of Warwick, Robin moved into the field of online content creation developing articles, news pieces, and projects aimed at professionals and makers alike. Currently, Robin runs a small electronics business, MitchElectronics, which produces educational kits and resources.
