Should governments interfere with AI development?

07-12-2021 | By Robin Mitchell

A recent report in the Guardian describes one individual’s view that AI development should be done for the betterment of society, not for profit. Why is AI such a powerful tool, what challenges does it face, and why should AI be kept as far away from governments as possible?


Why is AI such a powerful tool?


Undoubtedly, the most important invention of the 20th century was the transistor; the ability to create an electrically controlled switch that can be shrunk down to the atomic scale enables circuits of unimaginable complexity that power every aspect of modern life. While we are only 21 years into the 21st century, we can already see which new technologies are having the most significant impact on modern life.

Quantum computing is one technology that shows great promise, as it allows for decreased computation time on complex tasks (such as route finding). However, quantum computers remain confined to laboratories and high-end facilities, with no sign of availability to the general public. Another example of modern technology is flexible semiconductors, which could power a new generation of wearable devices for advanced medical uses. But of all modern technologies, it can be said that the most important invention of the 21st century (so far) is Artificial Intelligence.

AI is an incredibly powerful tool for the same reason that the human brain is powerful: pattern recognition. Simply put, an AI can analyse large amounts of information and train itself to see patterns in that data. Once trained, the AI can be presented with data it has never seen before and still recognise those patterns.
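This train-then-generalise loop can be sketched with a toy nearest-centroid classifier — a deliberately minimal stand-in for a real neural network, with made-up labels and feature values purely for illustration:

```python
import math

def train(samples):
    """Learn one centroid (average feature vector) per label."""
    centroids = {}
    for label, features in samples:
        sums, count = centroids.get(label, ([0.0] * len(features), 0))
        sums = [s + f for s, f in zip(sums, features)]
        centroids[label] = (sums, count + 1)
    return {label: [s / count for s in sums]
            for label, (sums, count) in centroids.items()}

def classify(centroids, features):
    """Assign unseen data to the nearest learned pattern."""
    return min(centroids,
               key=lambda label: math.dist(centroids[label], features))

# Training data: (label, [feature1, feature2]) pairs
training = [
    ("cat", [4.0, 1.0]), ("cat", [5.0, 1.5]),
    ("dog", [9.0, 6.0]), ("dog", [10.0, 7.0]),
]
model = train(training)

# A sample the model has never seen before
print(classify(model, [4.5, 1.2]))  # → cat
```

The point of the sketch is that nothing in `classify` was hand-written for cats or dogs specifically; the "knowledge" lives entirely in the patterns extracted from the training data.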

One typical example is face detection; an AI can be shown many different faces in different environments. Once trained, it can be used to identify new faces in new environments with a high degree of success.

Another example of AI is predictive maintenance in industrial environments. Industrial equipment is often costly, and maintenance after failure can be expensive. AI can be used to recognise how the machine should behave during regular operation and then detect minute changes in its behaviour to indicate that it requires maintenance.
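As a rough sketch of the predictive-maintenance idea (a real system would model many sensors and use far richer models; the vibration figures below are invented), a baseline of "normal" behaviour can be learned from healthy readings, and deviations flagged before outright failure:

```python
import statistics

def learn_baseline(readings):
    """Learn what 'normal' looks like from healthy-machine data."""
    return statistics.mean(readings), statistics.stdev(readings)

def needs_maintenance(baseline, reading, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from normal."""
    mean, stdev = baseline
    return abs(reading - mean) > threshold * stdev

# Vibration readings (arbitrary units) from a machine known to be healthy
healthy = [0.50, 0.52, 0.48, 0.51, 0.49, 0.50, 0.53, 0.47]
baseline = learn_baseline(healthy)

print(needs_maintenance(baseline, 0.51))  # within normal range → False
print(needs_maintenance(baseline, 0.90))  # drifting well outside it → True
```

Even this crude statistical version captures the appeal: the developer never has to enumerate failure modes, only supply examples of healthy operation.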

To summarise, AI is becoming a major tool as it does not require the developer to know about every possible situation that the AI may encounter. So long as the AI is trained well enough, it will be able to infer what is going on in any given dataset, eliminating the need for a billion if statements in code trying to describe every possible action the AI can take.


What challenges does AI face?


For such a capable technology, it is amazing how many challenges AI faces. When AI was first developed, these challenges mostly revolved around not having enough processing power or RAM to run neural nets. These days, AI faces moral challenges: how it should be implemented, what regulations should apply to it, and what counts as an acceptable source of training data.

For AI to be effective, it needs large amounts of data to learn from. For example, giving an AI a billion pictures of faces would allow that AI to detect faces in images and video with close to 100% accuracy. Getting a billion images of faces is easier than one would think; the internet is full of social media sites holding exactly this kind of data. But is it moral for an AI company to download this public data and then train its AI to detect faces?

This is similar to a challenge faced by Clearview AI, which has recently created a database of millions of users’ faces, linked to personal data and accessible to law enforcement. Any image of a potential criminal can be fed into the AI, which searches the database of faces and returns any matches, including names and links. While this may be acceptable in the USA, Clearview AI has been hit with a massive fine in the UK for breaching data protection laws and violating individuals’ privacy.

AI also poses a challenge through its ability to replace human workers in multiple fields. Many jobs are being replaced with automated systems, including warehouse packing, administrative tasks, and even human resources. In fact, the ability of AI to replace humans has become so powerful that even creative tasks such as writing can now be done by AI.

Morality is another challenge faced by AI. The best (or rather, worst) case of this is China, which has developed AI technology to monitor all of its citizens. Any citizen seen doing something society deems undesirable (e.g. littering, jaywalking, or expressing anti-government views) is automatically identified and has their social credit score lowered. Too low a social credit score sees citizens unable to use public transport or go on holiday, and can even see their details sent to friends telling them to keep their distance.


Why government should never get involved with AI development


AI is a powerful tool that faces many challenges. However, calls for governments and independent bodies to step in and create AI that helps society do not take into account how AI works, why it is becoming popular, and how unaccountable organisations can horribly abuse such systems.

The general argument in the article published by the Guardian is that AI development is dominated by large tech corporations, which control the direction of the technology and often sit on the boards of the very groups that want to regulate and monitor AI usage. It also argues that independent non-profit developers should be the ones to decide how AI is used and developed, and that governments should fund such development.

However, the article doesn’t address the fact that the private sector is held accountable, whereas governments and independent bodies rarely are.

Big tech developed AI because these are the only companies with enough data to create complex algorithms. Developing such technology is expensive, and thus the only way to fund that development is to use AI technology to generate revenue.

Clearview AI is an example of a small tech start-up that was able to use AI for extremely immoral purposes while being funded by governments. China is another example of a government using AI to impose draconian restrictions and rules on its population. In both cases, no higher power can dictate what these organisations can or cannot do. While Clearview AI may have been fined by the UK government, it is a US company that continues to work with US law enforcement and faces no restrictions.

Furthermore, the writer of the Guardian article fails to understand why developing AI technologies at an accelerated rate is vital for any nation: cyber defence.

Countries such as China can ignore morality entirely and develop AI systems with the full force of the Chinese government behind them, while countries such as the UK and US face limitations on what they can and cannot do. The best way for the West to develop AI tech is to take advantage of free markets and capitalism to find ways of profiting from AI systems, as this will lead to accelerated development and implementation. From there, governments can use these AI algorithms in their defence networks to ensure that they can always match the capabilities of other nations.

This article could continue for many thousands of words more. Still, to keep things short and concise: governments that are not answerable to anyone should never interfere with AI development. It is one thing to impose regulation to protect user data; it is another to fund AI development with incentives attached to that funding.



By Robin Mitchell

Robin Mitchell is an electronic engineer who has been involved in electronics since the age of 13. After completing a BEng at the University of Warwick, Robin moved into the field of online content creation developing articles, news pieces, and projects aimed at professionals and makers alike. Currently, Robin runs a small electronics business, MitchElectronics, which produces educational kits and resources.
