Google's AI Demo Causes $100bn Loss in Share Value

20-02-2023 | By Robin Mitchell

Recently, Google made a costly mistake during a demonstration of its new AI, Bard, which is intended to compete with the established AI model ChatGPT, created by OpenAI. Due to a bug or some other issue, the demonstration spread misinformation, posing a real threat to factual accuracy. The error wiped roughly $100bn off the market value of Google’s parent company, Alphabet, and dealt a blow to confidence in Bard’s capabilities. In the world of AI, even a small mistake can have significant consequences. So, what exactly happened to cause the drop in value? How does this demonstrate Google’s challenges with AI development, and how does it demonstrate the dangers of AI systems?

Google demonstrates new AI, wipes $100bn off its share value

There is no doubt that Google is increasingly under threat from numerous tech companies making advances in their own respective fields. While Google started as a search engine, its rapid growth, large capital reserves, and huge pool of engineers allowed it to expand into other industries, including self-driving cars, social media, and cloud-based services. While some of these ventures have been highly profitable, many others have struggled to make a return on their investment, demonstrating that expanding sideways is far more difficult than many would think.

One area that Google has been working in heavily is AI, and it has made enormous achievements in this field, including virtual AI assistants that use natural language models to hold almost-human conversations with real people. However, other companies, such as OpenAI, have recently taken the world by storm with their own natural language models, such as ChatGPT. The fact that other companies are producing AI services faster than Google has many wondering whether Google is falling behind in the AI race.

So, to try and demonstrate its capabilities, Google recently revealed its latest AI, called Bard, along with examples of what it is able to produce when asked questions. However, in the advert, the AI was asked, “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?”, and the response suggested that the James Webb Space Telescope was the first to image an exoplanet. Unfortunately for Google, this is factually incorrect, as the European Southern Observatory’s Very Large Telescope captured the first image of an exoplanet in 2004.

Once this error was pointed out, investors quickly sold off shares in Google’s parent company, Alphabet, wiping roughly $100bn off its market value.

How does this demonstrate Google’s challenges with AI?

It is perfectly understandable for new tech demonstrations to include mistakes, like the famous BSOD that appeared live when Microsoft demonstrated USB support in Windows 98, or when Tesla demonstrated its armoured glass, only for it to crack twice when hit with a steel ball.

However, in the context of Google’s Bard, an AI engine designed to ease workloads by providing factually correct information has no room for even subtle errors in truth. Worse, Google’s marketing team took the misinformation as fact, didn’t fact-check what was provided, and published it to the world, further demonstrating the dangers of faulty AI. Thus, it comes as no surprise that Google saw significant shareholder panic.

Of course, OpenAI’s ChatGPT has its fair share of issues, but unlike Google’s Bard, it doesn’t claim to be accurate or complete. Instead, OpenAI tells its customer base to take ChatGPT’s results with a pinch of salt and has implemented numerous restrictions to prevent its misuse. ChatGPT is even limited to a dataset containing no information from after 2021, which raises the question of what response it would have given to the same prompt had it known what the James Webb Space Telescope has achieved to date.

Thus, we arrive at the inevitable conclusion that Google is undoubtedly facing issues with its AI development. It is difficult to determine the accuracy of the new AI, but considering that the one example Google chose to showcase was wrong, it is possible that Bard is riddled with issues resulting from poor datasets and an inability to distinguish between good and bad data.

A man walks past the Google office, the Chelsea campus, in New York City. Google LLC is a global technology company headquartered in Mountain View, California.

How does this demonstrate the dangers of AI systems?

Many look at AI and worry that it will eventually take over all jobs, and this worry is perfectly justified; AI can now be used to power online assistants, place calls, write blogs, draft articles, and even compose music. But while these concerns are primarily focused on employment, AI presents far bigger dangers, and one of these is truth.

Simply put, AI is only as good as the information it’s fed, and this undeniable fact has already led to numerous controversies in the engineering field involving sensitive subjects (such as women in the workplace, race, and culture). Considering that the internet is full of memes, jokes, and misinformation, it is clear that training an AI on internet resources is likely to produce a confused machine that cannot differentiate fact from fiction.
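To illustrate this “garbage in, garbage out” effect, consider the minimal sketch below. It is a toy experiment, not a description of how Bard or ChatGPT is actually trained: the same simple classifier is trained on synthetic data with increasing amounts of deliberately corrupted labels, then evaluated on an untouched test set to show how bad data drags down accuracy.

```python
# Toy "garbage in, garbage out" experiment (illustrative only, not a real AI training pipeline).
# The same classifier is trained with 0%, 20%, and 40% of its training labels flipped,
# then evaluated on a clean test set to compare how accuracy degrades with noisy data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "documents": feature vectors X with labels y (1 = fact, 0 = fiction)
X, y = make_classification(n_samples=5000, n_features=20, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

def train_and_score(noise_rate: float) -> float:
    """Flip a fraction of training labels to mimic misinformation, then train and evaluate."""
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise_rate
    y_noisy[flip] = 1 - y_noisy[flip]  # corrupt the selected labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    return accuracy_score(y_test, model.predict(X_test))

for rate in (0.0, 0.2, 0.4):
    print(f"label noise {rate:.0%}: test accuracy {train_and_score(rate):.3f}")
```

The model never sees corrupted test data, so any drop in accuracy comes purely from the misinformation it was trained on, which is exactly the point.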

But even if an AI is selectively trained on resources that are believed to be trustworthy, small mistakes can creep in and render the AI unreliable. For example, Google’s Bard was likely trained on news reports, and it would only take a single report getting the facts about the JWST and exoplanets wrong for that error to be learned. This is particularly problematic for research papers, which frequently include errors, many of which force the papers to be retracted, altered, and republished.
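One purely hypothetical way to guard against a single bad source is to accept a claim only when several independent sources corroborate it. The sketch below is illustrative only (it is not how Bard or any production system actually works), but it shows why a lone erroneous news report should never be enough to establish a “fact” on its own.

```python
# Hypothetical corroboration check (illustrative sketch, not a real fact-checking pipeline).
# A claim is accepted only if at least `quorum` independent sources support it,
# so a single erroneous report cannot establish a "fact" by itself.

def claim_is_corroborated(support_by_source: dict[str, bool], quorum: int = 2) -> bool:
    """Return True only if at least `quorum` sources independently support the claim."""
    return sum(support_by_source.values()) >= quorum

# Hypothetical sources for the claim "JWST took the first image of an exoplanet"
support = {
    "single_news_report": True,      # the lone report that gets the claim wrong
    "space_agency_release": False,   # correctly attributes the first image to the VLT (2004)
    "peer_reviewed_paper": False,
}

print(claim_is_corroborated(support))  # False -> the claim is rejected
```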

Overall, the real danger posed by AI isn’t that it will take over jobs and destroy humanity with an army of machines but that humans may come to rely on AI, whose reliability may be far from perfect. Over decades of use, these small mistakes can drip-feed into work done by AI (such as hardware design), and this could lead to a future riddled with bugs and issues.


By Robin Mitchell

Robin Mitchell is an electronic engineer who has been involved in electronics since the age of 13. After completing a BEng at the University of Warwick, Robin moved into the field of online content creation, developing articles, news pieces, and projects aimed at professionals and makers alike. Currently, Robin runs a small electronics business, MitchElectronics, which produces educational kits and resources.