AI in Finance: A Game Changer or a Potential Threat?

15-06-2023 | By Robin Mitchell

As AI continues to permeate various industries, concerns are escalating about its potential impacts and the significant societal shifts it could trigger, which may not always be beneficial. A new article published by the Telegraph highlights how AI is now being used in financial decisions, going as far as to pre-emptively block customers' online purchases in the belief that fraud was taking place. What challenges does AI pose to society as a whole, why is AI a potential danger to finance, and should AI be stopped in its tracks before it's too late?

What challenges does AI pose to society as a whole?

A colossal robot flicks a miniature man: a thought-provoking image of the intersection between AI technologies and the issue of unemployment.

There is no doubt that Artificial Intelligence (AI) has proven to be a very capable technology, with millions of active users on ChatGPT, its rapid integration into consumer devices and services, and the extraordinary amount of investment being poured into the technology. However, for all the benefits that AI has provided, there are growing concerns over its use and the effects it will have on society.

One of the major issues identified is the lack of understanding of AI among the general public and even among business leaders. To address this, it is crucial to promote education and awareness about AI. This can be achieved by integrating AI education into school curriculums, organizing workshops and seminars, and creating online resources that are easily accessible to the public.

For instance, the Brookings Institution discusses the importance of digital education and AI workforce development to equip employees with the skills needed in the 21st-century economy [3]. It also recommends the creation of a federal AI advisory committee to make policy recommendations [3].

Ethical Dilemmas and Bias

One of the foremost concerns about AI is its potential to not only perpetuate but also amplify existing biases and discrimination. Machine learning algorithms, which are at the core of AI systems, learn from vast amounts of data. If this data contains biases, such as gender or racial discrimination, AI can inadvertently reinforce and perpetuate these biases. This raises ethical dilemmas in various fields, including hiring practices, criminal justice, and lending decisions, where biased AI systems can exacerbate social inequalities and discrimination.
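To make the mechanism concrete, here is a minimal Python sketch using entirely synthetic, hypothetical data: the stand-in "model" never sees the protected group attribute, yet a correlated proxy feature (a postcode, in this toy setup) lets it reproduce the bias baked into the historical labels.

```python
# Hypothetical, synthetic data only: the model never sees the protected
# group, but a correlated proxy (postcode) lets it reproduce the bias
# baked into historical hiring labels.
import random
from collections import defaultdict

random.seed(42)

def record(group):
    experience = random.randint(0, 10)
    p_north = 0.9 if group == "A" else 0.1        # postcode correlates with group
    postcode = "N1" if random.random() < p_north else "S9"
    # Historical decision encodes human bias: group B penalised regardless of merit.
    hired = experience + (2 if group == "A" else -2) + random.gauss(0, 1) > 5
    return {"exp": experience, "postcode": postcode, "group": group, "hired": hired}

train = [record(g) for g in "AB" * 1000]

# Stand-in "model": majority historical outcome per (experience, postcode)
# bucket -- a proxy for any classifier fitted to these labels.
buckets = defaultdict(list)
for r in train:
    buckets[(r["exp"], r["postcode"])].append(r["hired"])

def predict(r):
    outcomes = buckets[(r["exp"], r["postcode"])]
    return sum(outcomes) > len(outcomes) / 2

test = [record(g) for g in "AB" * 1000]
for g in "AB":
    rows = [r for r in test if r["group"] == g]
    rate = sum(predict(r) for r in rows) / len(rows)
    print(f"group {g}: predicted hire rate {rate:.2f}")
```

Running this prints a noticeably higher predicted hire rate for group A than group B, even though the group label never appears among the model's inputs: the bias rides in on the proxy feature.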

As AI ethicist Timnit Gebru points out, 'We need to be very careful about how we use AI and ensure that it's used for the benefit of all, not just a privileged few.' [2]

Another significant issue is the set of ethical and regulatory concerns surrounding AI. To tackle this, it is important to establish clear guidelines and regulations for AI development and use, addressing issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions [3].

The Brookings Institution suggests encouraging greater data access for researchers without compromising users' personal privacy; taking bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms; and maintaining mechanisms for human oversight and control [3].

Job Displacement and Economic Inequality

The swift automation driven by AI technologies also presents considerable threats to the workforce. As AI systems and robots become more advanced, they can replace human workers in various industries, leading to widespread job displacement. While new job opportunities may emerge as a result of AI, there is a risk that the transition will exacerbate economic inequality, leaving certain groups of workers unemployed or with fewer employment options. This could further widen the gap between the haves and the have-nots, creating societal divisions.

A study by the OECD found that up to 14% of jobs across its 32 member countries are highly automatable, underscoring the potential scale of job displacement [1].

Privacy and Surveillance Concerns

The capacity of AI to collect, process, and analyze enormous volumes of data raises grave concerns about privacy. As AI systems become more integrated into our daily lives, there is a risk of constant surveillance and the erosion of personal privacy. Facial recognition technology, for example, has the potential to be misused for surveillance purposes, leading to a chilling effect on freedom of expression and individual autonomy.

A report by the Brookings Institution highlights the potential misuse of AI in surveillance, noting that 'AI has the potential to erode privacy and enable intrusive surveillance, with governments in countries such as China already using AI to monitor public spaces and track individuals' [3]. 


Autonomous Weapons and Warfare

The emergence of autonomous weapons powered by AI presents a serious threat to global security. Unlike human soldiers, AI-powered weapons lack the ability to exhibit compassion, empathy, and moral judgment. This raises the risk of unintended consequences, as well as the potential for lethal autonomous weapons falling into the wrong hands. The unchecked proliferation of such weapons could lead to an escalation of conflicts and warfare, endangering innocent lives and destabilizing international relations.

Dependence and Unreliability

As AI becomes more deeply integrated into critical infrastructure and decision-making processes, society's reliance on its functionality and accuracy intensifies. However, AI systems are not immune to errors, bugs, or malicious exploitation. Relying heavily on AI technology without comprehensive fail-safes could result in catastrophic failures, endangering lives and causing widespread disruption. Additionally, AI systems can be susceptible to manipulation or attacks, leading to misinformation and undermining trust in important institutions.

One sector where these issues are particularly relevant is finance.

Why is AI a potential danger to the finance sector?

Given its ability to discern patterns in massive datasets, it's logical for the financial industry to leverage AI, particularly in the insurance market, where premiums are determined based on statistical models. At the same time, the ability for AI to link datasets that otherwise would seem unrelated can also make AI a potentially powerful tool for investments. 

Such an AI would allow fund managers to potentially provide customers with better long-term returns by responding rapidly to changing markets. Furthermore, an AI could be far quicker at making trades, outcompeting even the fastest financial experts routinely hooked into stock exchanges. In fact, some have speculated that had financial AIs existed prior to the 2008 housing crisis, an automated system would have picked up on the financial anomalies that only a handful of individuals were able to spot.
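As a purely illustrative sketch, assuming nothing more than a synthetic random-walk price series (not real market data), the following Python snippet shows the kind of rule-based signal an automated system can re-evaluate on every tick, far faster than any human trader:

```python
# Illustrative only: a moving-average crossover signal evaluated over a
# synthetic random-walk price series standing in for a live market feed.
import random

random.seed(1)

prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] * (1 + random.gauss(0, 0.01)))

def sma(series, window):
    """Simple moving average of the last `window` points."""
    return sum(series[-window:]) / window

position = 0  # 0 = flat, 1 = long
for t in range(50, len(prices)):
    history = prices[: t + 1]
    fast = sma(history, 10)   # short-term trend
    slow = sma(history, 50)   # long-term trend
    if fast > slow and position == 0:
        position = 1
        print(f"t={t}: BUY  at {prices[t]:.2f}")
    elif fast < slow and position == 1:
        position = 0
        print(f"t={t}: SELL at {prices[t]:.2f}")
```

A real trading system would be vastly more sophisticated, but the point stands: a loop like this runs in microseconds per tick, while a human needs seconds to reach the same decision.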

However, despite these capabilities, integrating AI into financial systems is not without risks. There are emerging instances that underscore the potential dangers, one of which is particularly noteworthy.

One such example comes from 2020, when an AI used by a major credit-score provider noticed large numbers of online orders being placed during the COVID pandemic. Instead of recognising that these orders came from bored individuals stuck at home, the system flagged them as potential fraud, with no understanding of the wider social situation. As a result, thousands of transactions were blocked as the AI fought a non-existent wave of fraud, leaving many people without basic supplies that could only be ordered online.
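A toy reconstruction of that failure mode, using invented numbers rather than any real provider's data, shows how easily it arises: a monitor that flags any order volume far above its pre-pandemic baseline will condemn a perfectly legitimate lockdown surge, because its notion of "normal" was frozen before the world changed.

```python
# Toy reconstruction (synthetic numbers): a fraud monitor calibrated on
# pre-pandemic order volumes flags a legitimate lockdown surge because
# its baseline of "normal" never updated.
import statistics

pre_pandemic_daily_orders = [98, 102, 95, 110, 105, 99, 101, 97, 104, 100]
baseline = statistics.mean(pre_pandemic_daily_orders)
spread = statistics.stdev(pre_pandemic_daily_orders)

def looks_fraudulent(orders_today, z_threshold=3.0):
    """Flag any day whose volume sits far outside the old baseline."""
    z = (orders_today - baseline) / spread
    return z > z_threshold

# Lockdown begins: housebound customers genuinely order far more.
for day, volume in enumerate([103, 180, 260, 310, 340], start=1):
    flag = "FRAUD ALERT" if looks_fraudulent(volume) else "ok"
    print(f"day {day}: {volume} orders -> {flag}")
```

Every lockdown day after the first gets flagged despite nothing fraudulent happening; without a mechanism to re-learn its baseline, the system fights phantom fraud exactly as described above.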

Despite this issue having been fixed, other financial institutions continue to deploy AI for all kinds of activities, and this has led to growing worries about AI's power and how quickly its decisions can introduce problems. For example, future AIs could build profiles of individual customers and begin mapping their financial activities.

This could quickly lead to violations of privacy (through the blocking of supposedly suspicious activities) or actions that are otherwise unjustified (such as freezing accounts). Since AIs struggle to understand morals, it is also possible for such a system to become discriminatory through biased or manipulated data. For example, those from poorer households may be flagged as a credit risk even if they have never taken out a loan or mortgage.

But it's not just the financial industry that could see disastrous effects from AI; even criminal activity can be massively amplified by it. AI systems capable of replicating voices already exist, and these could be used to fool account holders into believing they are being contacted by a bank official, relative, or authority. From there, it would be relatively easy to extract account numbers, sort codes, passwords, and PINs.

Should AI be stopped in its tracks?

Considering the multitude of challenges that AI presents, some have proposed a temporary pause in the development of new AI technologies to allow researchers to analyze and discuss its potential impacts. Of course, it comes as no surprise that one of those who suggested a halt, Elon Musk, also decided to start his own research and development into commercial uses for AI.

For this reason alone, it is now practically impossible for a nation to hold back on technological development of any kind, as other nations will use the pause to their advantage, quickly developing the technology themselves. Instead, all researchers can do is carefully monitor how AI is being used and lobby government officials and companies when they spot decisions that may present a real threat. For example, integrating AI into weapons-capable drones could be identified as a potential danger due to the inability of AI to understand the moral impact of its decisions.

From an engineer's perspective, the best we can do for now is to consider the ethical implications of integrating AI into our designs and how that AI impacts society as a whole. AI integrated into sensors is highly unlikely to have a negative impact, but using AI to make life-changing decisions (such as in self-driving vehicles) can.

References:

[1] OECD (2019). "Artificial Intelligence in Society." OECD Publishing, Paris. Available at: https://www.oecd.org/finance/financial-markets/Artificial-intelligence-machine-learning-big-data-in-finance.pdf

[2] Gebru, T. (2021). "Ethical Considerations in AI: Bias and Discrimination."

[3] Brookings Institution (2021). "How Artificial Intelligence is Transforming the World." Available at: https://www.brookings.edu/research/how-artificial-intelligence-is-transforming-the-world/


By Robin Mitchell

Robin Mitchell is an electronic engineer who has been involved in electronics since the age of 13. After completing a BEng at the University of Warwick, Robin moved into the field of online content creation, developing articles, news pieces, and projects aimed at professionals and makers alike. Currently, Robin runs a small electronics business, MitchElectronics, which produces educational kits and resources.