AI Containment – What is it, and why could it be needed?

30-06-2022 | By Robin Mitchell

As AI continues to advance, there is a growing concern that a sufficiently advanced AI system could, in the future, interact with the world around it and effectively break free into the wild. How could an AI break free from its computing platform, what is AI containment, and what challenges would it bring?


How could an AI break free from its computing platform?


Concerns about AI in the media are nothing new, and it always seems that killer AI is just around the corner. In fact, only two weeks before this article was published, a Google engineer was suspended after claiming that the company's chatbot LaMDA was sentient. Of course, the media (and the general public, for that matter) rarely understand how AI truly works and will often conclude that AI is a machine that can think and feel.

In reality, current AI technologies are nothing more than extremely effective pattern recognition systems that learn from data to improve their ability. One typical example is predictive text, whereby an AI reads what has been written so far and, having observed billions of conversations online, tries to predict what will come next. However, such a system is not aware of itself, nor is it aware of what it is writing.
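To make the idea of "pattern recognition rather than understanding" concrete, below is a minimal, purely illustrative sketch of next-word prediction using a toy bigram model. It simply counts which word tends to follow which in some text and suggests the most frequent follower; real predictive-text systems and large language models are vastly more sophisticated, but the underlying principle of learning statistical patterns from data (with no awareness of meaning) is the same. The corpus and names here are made up for the example.

```python
# Toy bigram "predictive text": learn which word usually follows which,
# then suggest the most common follower. Illustrative only.
from collections import defaultdict, Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count word-pair frequencies (the "learning" step).
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most likely next word, or '?' if the word was never seen."""
    if word not in followers:
        return "?"
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' - the most frequent follower of 'the'
print(predict_next("dog"))  # '?'   - unseen word, no pattern to draw on
```

The model never "knows" what a cat is; it only reproduces statistical regularities in its training text, which is the distinction the paragraph above is making.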

However, that isn’t to say that future AI systems won’t be more generalised and able to make arbitrary decisions that improve their own capability. For example, an AI running in a datacentre may be given the ability to connect to the internet in order to look for data.

At the same time, a generalised AI may also be given a rudimentary survival mechanism whereby it recognises what endangers its operation and learns methods for self-protection. This could be achieved by copying its code across internet-connected computers to preserve itself, or by moving across the internet so that it effectively operates in the cloud and is never tied to one computing system.

In a more dangerous scenario, an AI system with access to physical devices (i.e., actuators and interfaces) could potentially use these to find methods of escaping the platform it is operating on. Worse, this may even extend to social engineering, whereby an AI tricks a human operator into providing the means for escape. While this may seem like science fiction, the Google chatbot was able to convince an engineer that it was sentient. Of course, the chatbot was unaware of its actions (it is merely an advanced predictive-text system), but even with no intelligence, it was still able to fool a human.


What is AI containment?


The concept of AI containment (also known as an AI Box) is to create a platform that prevents an AI from being able to escape. While current AI does not require such stringent security measures, future AI may have the ability to move across systems and go rogue.

This concept of moving across networks is very similar to computer malware, such as viruses and worms designed to infect, replicate, and then move. As these programs actively look for ways to move between systems, infected devices are often disconnected from all networks, removable media are discarded, and files on the device are scanned for known virus code.
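The scanning described above is typically signature-based: files are compared against a database of fingerprints of known malicious code. The sketch below shows the idea in its simplest form; the directory, hash values, and function names are purely illustrative, and real antivirus engines use far richer heuristics than a plain hash lookup.

```python
# Minimal signature-based file scanner: hash every file and compare
# against a (hypothetical) database of known-bad fingerprints.
import hashlib
from pathlib import Path

# Hypothetical database of SHA-256 hashes of known malicious files.
KNOWN_BAD_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder
}

def scan(directory: str) -> list[Path]:
    """Return the files whose contents match a known-bad signature."""
    flagged = []
    for path in Path(directory).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD_HASHES:
                flagged.append(path)
    return flagged

if __name__ == "__main__":
    print(scan("/tmp/quarantine"))  # hypothetical quarantine folder
```

This works because a conventional virus is static code with a fixed fingerprint, which is exactly the assumption a more capable AI could break, as the next paragraph explains.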

However, while viruses can be easily contained with scanning software, an AI may be intelligent enough to interfere with system processes, making software containment a challenge. Furthermore, AI systems generally cannot explain how they arrive at answers. This means that a potentially rogue AI could, in theory, make preparations to escape that would go relatively unnoticed (such as opening specific ports on a firewall, copying key code into random locations, and testing internet bandwidth to determine how long a copy would take).

AI containment would need to incorporate strong security practices that identify suspicious activity, and this may be achieved with a more basic AI (similar to those used in predictive maintenance systems today). This rudimentary AI would merely look for unusual activity and trigger a basic reaction such as power disconnection, network disconnection, or an alarm.
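As a rough illustration of such a watchdog, the sketch below learns a baseline of "normal" activity (here, outbound network traffic per minute, an assumed metric) and flags readings that drift far from it. The class name, threshold, and sample numbers are all illustrative assumptions, not a description of any real containment product.

```python
# Rudimentary anomaly watchdog: learn a baseline of normal activity,
# then flag readings that deviate strongly from it. Illustrative only.
import statistics

class ActivityWatchdog:
    def __init__(self, baseline_samples: list[float], sigma: float = 4.0):
        self.mean = statistics.mean(baseline_samples)
        self.stdev = statistics.stdev(baseline_samples)
        self.sigma = sigma  # how many standard deviations count as "unusual"

    def check(self, reading: float) -> str:
        """Return 'ok' for normal readings, 'disconnect' for anomalies."""
        if abs(reading - self.mean) > self.sigma * self.stdev:
            # A real containment system could cut network links or power
            # here rather than merely report.
            return "disconnect"
        return "ok"

# Baseline: typical megabytes of outbound traffic per minute.
watchdog = ActivityWatchdog([10.2, 9.8, 11.0, 10.5, 9.9])
print(watchdog.check(10.4))   # 'ok'
print(watchdog.check(950.0))  # 'disconnect' - unusually large transfer
```

The appeal of such a simple monitor is that it has no general intelligence of its own to subvert: it only compares numbers against a baseline and reacts.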

What challenges would AI containment bring?


It is highly likely that any future AI capable of general intelligence, able to make arbitrary decisions, understand the nature of what it is doing, and exhibit curiosity, would need a large computing platform and easy access to data. As such, it is not a trivial task to try to compartmentalise such an AI to the point where its I/O is heavily restricted.

In fact, there could even be an argument that restricting an AI in this manner is unethical (restricting sensory capabilities is cruel to living creatures). The perceived cruelty of restricting AI, combined with its potential to go rogue, may prevent such AI systems from ever being developed.

But an AI doesn’t have to be fully aware of itself or its surroundings to go rogue. AI-driven viruses could not only react to security systems in real time to avoid detection but may also be able to analyse data flow in a system to find new ways of spreading beyond their current platform.

Of course, we are likely decades away from having to worry about rogue AI and its containment. But that isn’t to say that we shouldn’t consider the dangers of AI and how an AI could accidentally be made rogue.


By Robin Mitchell

Robin Mitchell is an electronic engineer who has been involved in electronics since the age of 13. After completing a BEng at the University of Warwick, Robin moved into the field of online content creation, developing articles, news pieces, and projects aimed at professionals and makers alike. Currently, Robin runs a small electronics business, MitchElectronics, which produces educational kits and resources.