AI and Regulation – What engineers need to watch out for

18-10-2021 | By Sam Brown

In the last decade, AI has advanced in leaps and bounds and now finds itself in many everyday applications, including website traffic management, advertising, and predictive maintenance. What challenges does AI present from a moral perspective, what legislation will the EU be introducing, and why should engineers be careful with regard to AI?


What challenges does AI present?


Some key individuals worldwide have expressed their concern with AI and tell tales of how it will conquer humanity in a fashion similar to The Terminator. The truth is that AI is very unlikely to behave like a sadistic killer with the singular purpose of wiping out humanity. But that is not to say that AI won't present challenges, and it could run amok if not kept in check.

There are many aspects of AI that present challenges from engineering, privacy, and moral perspectives. Unless the community can come together to solve these issues, governments worldwide may start to regulate AI and potentially hinder its progress.

To start, one of AI's bigger challenges is its use in deciding the fate of an individual. An AI could very easily be used to predict an individual's behaviour, and these predictions could then determine credit ratings and insurance premiums. Yet such decisions are potentially life-changing for an individual, so one could argue that AI's lack of compassion and humanity would lead to unfair judgments based on prediction alone. The counterargument is that AI is fundamentally unable to hold an opinion and therefore cannot consciously discriminate; it simply learns from the data fed to it. That said, a model trained on skewed historical data will faithfully reproduce that skew in its predictions.
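
To make the "simply learns from the data fed to it" point concrete, here is a minimal sketch in Python using scikit-learn. Everything here is hypothetical: the feature names, the numbers, and the approval history are fabricated purely for illustration, not taken from any real credit system.

```python
# A minimal, hypothetical sketch: a toy credit-approval model trained on
# historical decisions can only echo whatever pattern that history contains.
# All data below is fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [income_in_thousands, years_at_address];
# label: 1 = approved, 0 = declined.
# This fabricated history approved high incomes almost exclusively.
X = np.array([[60, 5], [75, 2], [90, 8], [30, 10], [28, 7],
              [35, 1], [80, 3], [25, 4], [70, 6], [32, 2]])
y = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# The model holds no opinion; it simply extrapolates the historical pattern,
# so an applicant resembling past declines is declined again.
applicant = np.array([[29, 9]])          # low income, long-standing address
print(model.predict(applicant))          # declined, echoing the history
print(model.predict_proba(applicant))    # the probabilities behind it
```

The point is not that the model is malicious, but that it has no mechanism for questioning the fairness of the history it was trained on.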

Another major challenge surrounding AI is privacy. For AI to improve its ability to perform a task, it requires data, and when AI is used in marketing and advertising, that data takes the form of personal information such as name, age, gender, and nationality. Collecting large amounts of this data essentially opens every user up to potential infringements of privacy: the data could be sold to third parties or stolen by malicious actors without the users' permission.

Where AI becomes truly problematic, however, is in giving authorities the ability to profile citizens and restrict personal liberties. The prime example is the Chinese government deploying AI to track each citizen and assign them a social credit score based on how loyal they are to the ruling party. Those with low social credit are barred from public transport such as planes and trains, while those with very low scores are shamed publicly, and even their friends are warned not to go near them.



The EU will bring in AI legislation soon


Recently, the EU has been signalling its interest in introducing AI regulation to prevent AI from being used in high-risk categories that could infringe on individual liberties and privacy. According to information published by the EU, AI use will fall into one of three categories: Unacceptable Risk, High Risk, and Limited Risk.

Unacceptable Risk covers applications that involve subliminal and manipulative messaging, real-time biometric identification used by authorities, and any social scoring system. The EU intends to ban AI and any related tools outright in these areas.

High Risk covers applications that rank individuals for credit and other financial services; recruiting tools that pick candidates using AI, for example, are considered high-risk applications. This category also includes biometric systems used for identification in non-public spaces and any system used in the administration of justice.

The last category, Limited Risk, essentially covers any other application of AI that is considered to have minimal impact on society, data, and rights. Such applications include AI chatbots, spam filters, AI in video games, and inventory management.
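
For engineers wanting to keep track of where their projects might land, here is a hypothetical sketch (not an official EU tool) of how a team might encode the draft tiers as a simple internal checklist. The tier names come from the article; the mapping and descriptions are illustrative assumptions only.

```python
# A hypothetical internal compliance checklist encoding the draft EU tiers.
# The tier assignments below mirror the examples discussed in this article.
from enum import Enum

class AIRiskTier(Enum):
    UNACCEPTABLE = "banned outright under the draft rules"
    HIGH = "permitted only under strict obligations"
    LIMITED = "minimal impact; light transparency duties"

EXAMPLE_TIERS = {
    "social scoring system": AIRiskTier.UNACCEPTABLE,
    "real-time biometric ID by authorities": AIRiskTier.UNACCEPTABLE,
    "AI-based candidate screening": AIRiskTier.HIGH,
    "credit ranking of individuals": AIRiskTier.HIGH,
    "spam filter": AIRiskTier.LIMITED,
    "AI in video games": AIRiskTier.LIMITED,
}

for application, tier in EXAMPLE_TIERS.items():
    print(f"{application}: {tier.name} ({tier.value})")
```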


Why should engineers be careful with AI?


There is no doubt that AI has proven to be a valuable tool for predictive analysis and data management. Still, a growing number of applications abuse the power of AI for immoral and malicious purposes.

As previously discussed, the Chinese social credit system is a prime example of how AI can be horrendously abused. What makes the system worse is that its widespread use of AI and vision technology is self-improving: the longer the social credit system runs, the better the AI becomes. Thus, an immoral application that would otherwise not be seen in the West is now pushing Chinese AI technologies well beyond what the West is capable of.

Surprisingly, AI can be integrated into most applications, and it is this fact that makes it such a tempting tool for engineers. Instead of describing every possible decision a system can make, an AI can be trained from example data over time, which is arguably the easier task.
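
The trade-off can be seen in a short Python sketch, using the predictive maintenance example from the introduction. The thresholds, features, and readings below are fabricated assumptions for illustration; the contrast between the two approaches is the point, not the numbers.

```python
# A hedged illustration of hand-coding decisions versus learning them from
# data. All thresholds and readings are fabricated for illustration.
from sklearn.tree import DecisionTreeClassifier

# Option 1: describe every decision explicitly — brittle as cases multiply.
def rule_based_maintenance(vibration_mm_s: float, temp_c: float) -> bool:
    """Flag a machine for maintenance using hand-written thresholds."""
    return vibration_mm_s > 7.0 or temp_c > 85.0

# Option 2: let a model infer the decision boundary from labelled examples.
# Features: [vibration_mm_s, temp_c]; label: 1 = needs maintenance.
X = [[2.0, 60], [8.5, 70], [3.0, 90], [1.5, 55], [9.0, 88], [4.0, 65]]
y = [0, 1, 1, 0, 1, 0]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# A borderline machine: moderate vibration, elevated temperature.
print(rule_based_maintenance(6.0, 80))   # the hand-written rule says no
print(model.predict([[6.0, 80]]))        # the tree decides from the data
```

The rule-based version must be rewritten every time a new failure mode is discovered, while the trained version only needs more labelled examples, which is exactly the convenience that tempts engineers to reach for AI.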

However, just because an AI can do a task does not mean that it should. This is clearly seen in tracking systems, biometric data gathering, and other unnecessary applications that collect personal data simply for the sake of using AI. For example, a company could use an AI system to grant staff entry to a building based on facial recognition, yet a keycard would do the same job without needing to store personal biometric information.

If engineers are not careful to self-regulate when using AI, governments worldwide may start to introduce laws and regulations that make it harder to develop AI. The last thing engineers need is red tape, so don't fall into the same traps that the industry fell into during the rise of IoT.

By Sam Brown