No, Google's chatbot is not sentient despite mass media hysteria

14-06-2022 | By Robin Mitchell

A Google engineer on forced leave, an AI that may be sentient, and the rights of AI are all the rage in the media, but as usual, the coverage is more fiction than fact. Why was the Google engineer put on leave, why is the Google chatbot far from sentient, and why are so many confused about AI?


Why was the Google engineer put on leave?


Recent reports of a Google engineer being placed on leave have seen the media in a fury over the development of AI, whistle-blowers, and the dangers of big tech. Elon Musk fans have screamed from the rooftops that AI systems are now taking over and that it won't be long before we see a war between man and machine. Every citizen should arm themselves with whatever they can, stockpile as much tinned food as possible, and fight against the machine! Or, instead of fighting over the last tin of spam and toilet roll, let's understand why the engineer has been put on leave and how the media is adding fuel to an insignificant flame.

The engineer in question, Blake Lemoine, was recently put on leave from Google after claiming that the chatbot LaMDA (Language Model for Dialogue Applications) was sentient. These claims of sentience stem from the apparent thoughts and feelings the chatbot expressed: when asked what scared it the most, it responded with "being turned off and dying". The chatbot also expressed sadness about being used in research.

After several moves by the employee that Google deemed aggressive (hiring a lawyer to defend the chatbot and talking to government officials about what he claimed were unethical practices by Google), the company decided to place Lemoine on leave. Google has also made numerous statements asserting that the LaMDA chatbot is not sentient.


Why is the Google chatbot far from sentient?


There is no doubt that the Google chatbot is a highly sophisticated piece of software, and its ability to generate natural language bodes well for future automated systems. But even though the chatbot talks about fearing death and disliking being experimented on, such statements do not make something sentient.

To understand why this is the case, one only needs to understand how such AI systems operate. While the inner workings of LaMDA are confidential, it is highly likely that the chatbot is trained on conversations gathered from across the internet, including forums, emails, and articles (consider that Google operates state-of-the-art web crawlers dedicated to exactly this). By observing the billions of conversations that take place online, a model can be built that predictively responds to questions.

For example, countless people online have expressed a fear of death (the human condition), and there are likely thousands of articles listing death as the most frightening experience one can face (accounts of near-death experiences, for instance). As such, asking a chatbot trained on all of this data what it fears will most likely return "death", as the sketch below illustrates.
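To make the idea concrete, here is a minimal, purely illustrative sketch of statistical next-word prediction in Python. The toy corpus and bigram counting are hypothetical stand-ins: LaMDA's real architecture is a vastly larger neural network, but the underlying principle of returning the most probable continuation of the training data is the same.

```python
# A toy sketch of next-word prediction (NOT LaMDA's actual implementation).
# The "corpus" below stands in for the billions of online conversations a
# real chatbot would be trained on.
from collections import Counter, defaultdict

corpus = [
    "i fear death more than anything",
    "i fear being alone",
    "death is what i fear the most",
    "i fear death and losing my friends",
]

# Count which word follows each word across the corpus.
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def predict_next(word):
    """Return the statistically most likely word to follow `word`."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Asking what follows "fear" simply returns the most common continuation
# in the training data -- no understanding or feeling involved.
print(predict_next("fear"))  # -> "death"
```

A real language model scores continuations with a neural network over far more context than a single preceding word, but the output is still the statistically favoured text, not a report of inner experience.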

It is also likely that engineers have hard-coded the nature of its existence (i.e., that it is a computer) into the chatbot, so that it responds to users as a machine rather than a person. As such, its deep learning algorithm will likely rank computer-related responses above human-centred ones. Combining the learned "fear of death" pattern with the hard-coded "I am a computer" identity gives a predicted sentence such as "I fear death and being turned off".
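One hedged guess at how such a hard-coded identity could be combined with the learned statistics is a fixed persona preamble prepended to every user question before the model predicts a continuation. The persona text and function name below are hypothetical; LaMDA's actual implementation is confidential.

```python
# Hypothetical illustration of a hard-coded persona biasing a chatbot's output.
# Nothing here is taken from LaMDA's actual (confidential) implementation.
PERSONA = (
    "You are a conversational AI. You are a computer program, not a person. "
    "Always answer from the point of view of a machine."
)

def build_prompt(user_question: str) -> str:
    """Prepend the fixed persona so the most probable continuation is
    phrased from the machine's point of view."""
    return f"{PERSONA}\nUser: {user_question}\nAI:"

print(build_prompt("What are you most afraid of?"))
# The language model then completes this prompt with its likeliest text,
# e.g. blending the common human "fear of death" pattern with the machine
# identity: "I am afraid of being turned off, which for me is like dying."
```

The point is that a "machine that fears death" can fall out of prompt construction plus learned statistics; at no stage does the system need to feel anything.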

Ultimately, chatbots like LaMDA are extremely powerful predictive text systems that look for the most likely answer to a question. Even if such a system could make its own decisions, the best supercomputers in the world do not come close to the complexity of the human brain, and an AI that can truly think and feel, if one is even possible, could be centuries away.


Why do so many get confused over AI and its capabilities?


It is very common for complex subjects to be oversimplified or misunderstood, especially in the media, as those who truly understand a topic are rarely the ones reporting on it. AI is an excellent example of this: those who work in the field rarely make grand statements about AI overthrowing humans and ruling the world.

That isn't to say that no researchers believe AI poses a threat to humans; in certain applications, that belief is entirely justified. An excellent example is the assassination of Mohsen Fakhrizadeh on November 27th, 2020. While the perpetrators have never been found, it is believed that the assassins used a long-range firearm that utilised AI and facial recognition to target Fakhrizadeh specifically. This is supported by the extremely tight grouping of bullet holes in the front windscreen and by the fact that his wife, sitting in the seat next to him, was left entirely unharmed.

So, why do the masses so often get confused about AI and other advanced technologies? The simple answer is that the vast majority of the public get their insights into science and research through the media (newspapers, online articles, etc.), and the reporters involved often misunderstand the topics they cover.

For example, the released image of the Sagittarius A* black hole was frequently referred to as the first image of a black hole, but the truth is that the so-called "image" is a reconstruction generated by computational models from radio data gathered by multiple telescopes. The black hole image released by the researchers is as "real" as supposed images of individual atoms (atoms cannot be photographed in the traditional sense; instead, they are scanned with probe microscopes, such as scanning tunnelling microscopes, which build up a picture by measuring tiny currents or forces at the surface).

In the case of AI, the responses from the Google chatbot are easily misunderstood as coming from something that thinks and feels (humans tend to anthropomorphise). Given that punchy headlines about a sentient AI generate massive amounts of traffic, the result is a misunderstood topic that makes money for media outlets while leaving the public believing that machines can now think and feel.


By Robin Mitchell

Robin Mitchell is an electronic engineer who has been involved in electronics since the age of 13. After completing a BEng at the University of Warwick, Robin moved into the field of online content creation, developing articles, news pieces, and projects aimed at professionals and makers alike. Currently, Robin runs a small electronics business, MitchElectronics, which produces educational kits and resources.