AI in Toys: A New Frontier or a Dangerous Path?

14-07-2023 | By Robin Mitchell

It is clear that AI, developing as rapidly as it is, will eventually find its way into children's toys, and while AI could give children unique interactions, it could also introduce some serious challenges. How could AI be integrated into toys, what dangers would it present, and should we proceed with developing such products?

Image: a creative child building robot cars at home.

How could AI be integrated into toys?

For the past decade, AI has mostly been the stuff of science fiction, with AI algorithms offering limited performance and demanding heavy processing. Practical AI was generally limited to large corporations with access to vast amounts of computing power, and even then, its function was rarely well known. For example, Google has long used such systems to improve its search algorithms, but the public has little insight into how these systems work, the data they gather, or their impact on Google's business.

Then, suddenly, the world was introduced to ChatGPT, and everything changed. Instead of machine learning systems being accessible only to engineers and researchers, anyone could take advantage of ChatGPT's abilities to write stories, complete emails, condense information, and even write code. This public access has helped spark a new technological revolution, with those adapting to the new environment seeing significant returns.

But there is an emerging application for AI that could help shape generations to come: AI-powered toys. If there is one thing most adults can admit, it's that, as kids, we wished our toy soldiers, Barbies, and dolls were alive and could hold conversations with us. While the imagination of a child is certainly unrivalled, the ability to play with toys that can play back would have been an unreal experience (of course, this idea featured in the iconic film Small Soldiers, and that didn't exactly end well).

An article from the Financial Times[1] discusses a new AI-powered toy that can adapt its behaviour based on interactions with the child. This toy represents a significant step forward in the field of AI toys, but it also raises important questions about the ethical implications of these technologies.

If used correctly, a toy powered by AI could provide a child with a playful experience that, over time, adjusts to the child's personality. This would make toys significantly more personal than those currently sold and, through a bond of friendship, keep them in use for far longer. In fact, an AI-powered toy designed to replicate a basic level of emotion could even help guide the child away from destructive behaviour, thereby making AI-powered toys therapeutic.

Another major benefit to toys having AI capabilities is that they could potentially interact with other toys, creating communities and interactive environments. For example, a child setting up a tea party with four different toys could see interactive conversations between the toys and the child, making the play far more real. This would also be beneficial for children who struggle with communication (such as those with autism), providing a mechanism to help develop coping strategies.

Finally, the power of AI systems like ChatGPT could provide children with unique experiences in storytelling and education. Children could ask questions, explore topics, and even request bespoke stories that make them the centre of the tale (as opposed to common fairy tales).

A Real-World Example of AI in Toys: Eilik

To illustrate the potential of AI in toys, let's take a look at a real-world example: Eilik, a little companion bot marketed as offering endless fun[4]. Eilik is a product of Energize Lab, an innovative startup focused on robotics technology. This AI-powered toy is designed to provide a rich and lifelike experience for children, with a wide range of expressions and dynamic animations.

Eilik's capabilities are continually updated via cross-platform software, making it smarter and more interactive over time. It's equipped with a specially designed servo motor, EM3, which allows for flexible and dexterous movements. This level of sophistication in a toy demonstrates the potential of AI integration into children's playthings.


However, it's important to note that while Eilik represents a significant advancement in the field of AI toys, it also underscores the need for careful design and regulation in this emerging field. As we continue to explore the possibilities of AI in toys, we must also consider the ethical implications and potential risks.

What danger would AI toys present?

Ignoring the fact that a toy with a personality and the ability to interact with people would be entirely creepy (especially if it starts turning on in the middle of the night), there are numerous complications associated with integrating AI into toys. 

The first, and probably the most concerning, is that AI integrated into toys will likely require an internet connection, as AI systems such as ChatGPT cannot be locally run (due to the computational resources needed). As such, all data obtained by the toy will be stored on a remote system, introducing countless issues with privacy. 

Toys that have microphones would be able to listen to conversations, and any cameras would provide unfiltered images. This would not only pose a serious threat to the child playing with the toy but to anyone near the toy, especially in a private environment.
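To make this data flow concrete, here is a minimal, hypothetical sketch in Python of what a connected toy's upload path might look like; the model name, payload shape, and redaction patterns are all invented for illustration. Even with local redaction of obvious identifiers, everything the microphone captures still leaves the home network, which is exactly the privacy surface described above:

```python
import re

# Illustrative-only patterns for stripping obvious personal details
# from a transcript before it is uploaded; real PII detection is far
# harder than two regular expressions.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-\s]?\d{3}[-\s]?\d{4}\b"),
}

def redact(transcript: str) -> str:
    """Replace recognisable personal identifiers with placeholders."""
    for label, pattern in PII_PATTERNS.items():
        transcript = pattern.sub(f"[{label}]", transcript)
    return transcript

def build_request(transcript: str) -> dict:
    """Payload a toy might send to a hypothetical cloud AI endpoint.
    Whatever survives redaction is stored on a remote system outside
    the family's control."""
    return {
        "model": "toy-chat",  # invented model name
        "messages": [{"role": "user", "content": redact(transcript)}],
    }
```

The point of the sketch is that redaction happens on-device but inference does not; any conversation the filter misses is retained by a third party.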

The second concern with AI integrated into toys is the potential psychological effect it could have on children. When it comes to children playing, it is their imagination that brings inanimate objects to life, and with age, this fades away. Furthermore, children are often capable of recognising that toys are not sentient, nor do they have feelings. 

But a toy with ChatGPT capabilities could make this distinction far more difficult, especially at younger ages. For example, a toy that breaks down could be seen as having died in the eyes of a child, and if each toy builds up its own unique experiences, simply replacing it would alienate the child (who would clearly notice that the replacement has different responses and attitudes).

A story from The Telegraph[2] provides a real-world example of this. In the article, a child forms a strong emotional attachment to a teddy bear equipped with AI. The child begins to view the toy as a real friend, demonstrating the potential psychological impact of these toys. This underscores the importance of careful design and regulation in this emerging field.

The third area of concern when integrating AI into toys is how the AI is trained. In the age of disinformation, it is very easy for an AI to carry biases that could find their way into a child's toy, especially if that AI is a publicly accessible system such as ChatGPT. For example, if a child were to ask a toy, "Who owns the South China Sea?", a toy manufactured in China could be programmed to make false claims about what China owns, as opposed to what is recognised under international maritime law. The same could also be said for political opinions and other questions of ethics affecting society as a whole.

Should AI be integrated into toys?

There are far more benefits and concerns arising from AI in toys than those discussed here, and all of them must be appropriately managed. However, as the technology currently stands, should we be integrating AI into toys?

Considering that AI is still in its infancy and can very easily be manipulated to produce harmful content, the safest course of action is to not integrate it at this time. Unless an offshoot of ChatGPT can be developed that is specifically designed with children in mind, the dangers posed by AI and the technologies used to implement it introduce far too much risk. 
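As a rough illustration of what a child-focused offshoot would need to add, the sketch below gates every model reply through a local safety check before the toy voices it; the blocklist, fallback phrase, and function names are all hypothetical, and a real system would need far more than keyword matching:

```python
# Hypothetical child-safety layer: no model reply reaches the toy's
# speaker without passing a local filter first.

BLOCKED_TOPICS = {"violence", "weapon", "gambling"}  # illustrative only
SAFE_FALLBACK = "Let's talk about something else! What's your favourite game?"

def is_child_safe(reply: str) -> bool:
    """Naive keyword check; a real filter would use a trained classifier."""
    words = set(reply.lower().split())
    return not (words & BLOCKED_TOPICS)

def toy_say(model_reply: str) -> str:
    """Never voice an unvetted reply; fall back to a stock phrase."""
    return model_reply if is_child_safe(model_reply) else SAFE_FALLBACK
```

The design choice worth noting is fail-closed behaviour: when the filter is unsure, the toy says something bland rather than repeating the model verbatim.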

According to a report by the UK Government[3], the integration of AI in consumer products, including toys, presents both opportunities and challenges. The report highlights the potential for AI to enhance product safety by predicting and preventing accidents. However, it also warns of the risks associated with data privacy and the unpredictability of AI systems. These considerations should be at the forefront of any discussions about integrating AI into children's toys.

Not only would children be vulnerable to the AI itself, but microphones used to listen to conversations and cameras used to identify faces would all pose a very serious threat with regard to privacy and safeguarding. 

If, however, these problems could be solved, AI in toys would undoubtedly have enormous potential in helping to raise future generations.

References

  1. Financial Times: AI-powered toys
  2. The Telegraph: AI ChatGPT toys
  3. UK Government: Impact of AI on Product Safety
  4. Energize Lab: Eilik companion bot

By Robin Mitchell

Robin Mitchell is an electronic engineer who has been involved in electronics since the age of 13. After completing a BEng at the University of Warwick, Robin moved into the field of online content creation, developing articles, news pieces, and projects aimed at professionals and makers alike. Currently, Robin runs a small electronics business, MitchElectronics, which produces educational kits and resources.