30-07-2021 | By Robin Mitchell
Recently, the armed forces utilised AI to improve their understanding of a simulated battlefield, allowing for better preparation during live-fire exercises. What ethical concerns are there with AI in military operations, what did the AI do for the armed forces in the exercise, and should the use of AI be dissuaded?
AI has been at the centre of the technological revolution for the past ten years. The development of such algorithms owes much of its success to the massive amounts of data being generated by devices. While AI has proven useful in many applications, ranging from object identification to language processing, it is now starting to find its way into military applications.
AI operating in the commercial environment is generally a positive move as it helps to improve the efficiency of businesses, maximise profits, and predict potential disasters. However, its use in the military raises concerns about the ethics of using machine-made decisions in situations that could potentially see the loss of life.
Of course, AI is a generic term for specialised algorithms that look for patterns in data sets. Therefore, the use of AI in a military application could mean several different things. One application of AI in the military could be fleet positioning: identifying enemy ships and their predicted behaviour to provide a fleet with the best formation. Another, more insidious use of AI would be to identify enemy combatants and use robotic systems to target and neutralise them automatically. One of these implementations merely provides recommendations, while the other delegates the decision to extinguish human life to a computer algorithm that answers to no one.
Recently, the British Army utilised an AI system during a live-fire exercise in co-operation with other countries, including French, Danish, and Estonian forces. According to the report released by the UK government, the AI engine analyses the surrounding environment and terrain to give soldiers a better understanding of their surroundings.
The AI engine can look through extremely complex datasets gathered from the environment and present results to troops in real-time, who can then use this data to better manage their positions and movements when engaging with enemy combatants. The system was developed in a collaboration between the Ministry of Defence and industrial partners, and the focus was to ensure that the AI was built to work well with Army training methods.
Troops who used the system reported that it performed better than expected and suggested that the future of the UK Armed Forces lies in AI, which may predict enemy decisions better, provide reconnaissance, and relay real-time information during intense firefights. Furthermore, it was noted that the AI could operate both locally and in the cloud, providing soldiers with a wide range of technology solutions.
However, exact details surrounding the AI have not been made public. As such, it is hard to determine exactly what the AI system is doing. It is unlikely to provide command recommendations and is more likely to analyse the area to determine where enemy troops may decide to go depending on the troops' position. The AI could look at key features such as hills, walls, cliffs, and areas containing debris that could all be used to the enemy's advantage.
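Since the real system's workings are not public, any concrete description is guesswork. Purely as a hypothetical sketch, with invented terrain features and weights, this kind of analysis might score cells of a map grid by the cover their features offer and rank the positions an enemy is most likely to exploit:

```python
# Hypothetical illustration only: the feature names and weights below are
# invented for this sketch and do not reflect the actual system.
COVER_WEIGHTS = {
    "open": 0.0,    # no cover
    "debris": 0.5,  # partial cover
    "wall": 0.8,    # hard cover
    "hill": 0.6,    # elevation advantage
    "cliff": 0.3,   # blocks movement but offers little cover
}

def score_cells(terrain):
    """Return (score, (row, col)) for every cell, highest score first."""
    scored = []
    for r, row in enumerate(terrain):
        for c, feature in enumerate(row):
            scored.append((COVER_WEIGHTS.get(feature, 0.0), (r, c)))
    return sorted(scored, key=lambda s: s[0], reverse=True)

# A small example map: the top-ranked cells are where an adversary is most
# likely to take position, so they would warrant the closest watch.
terrain = [
    ["open", "wall", "open"],
    ["debris", "open", "hill"],
    ["open", "open", "cliff"],
]
likely_positions = score_cells(terrain)[:3]
```

A real system would of course fuse far richer data (elevation maps, sensor feeds, movement models), but the principle of weighting terrain features and surfacing the results to troops in real-time would be similar.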
There is no doubt that the future of military operations lies in AI technologies, due to AI's immense ability to ingest vast amounts of data and reach decisions quickly. AI will continue to grow in importance in cyberspace, where foreign nations launch attacks on crucial infrastructure such as electrical and water systems. Such threats, which may themselves be driven by AI, are often easier to defend against using an equally intelligent system.
However, using AI to protect national infrastructure is not the same as implementing AI into a system that can launch attacks. While drones and autonomous vehicles continue to become critical weapons of war, they are still piloted by humans, and a human gives the order to execute an attack. This need for a human to make the decision creates a chain of responsibility and accountability; handing that decision to an AI system creates an ethical conundrum.
The British Armed Forces' use of AI to analyse the battlefield is hardly an example of AI being misused. For one, the system is not responsible for actions taken by soldiers, and it does not tell soldiers who potential combatants are. Instead, it is most likely just providing information regarding the terrain and how best to move through it to minimise losses. Nevertheless, the use of AI in military applications will happen, and nothing can be done to stop it.