27-07-2022 | By Robin Mitchell
Recently in Russia, a robotic system playing four chess games simultaneously was left confused after a child player rushed their move, resulting in the robot breaking the child’s finger. What caused the robot to break the child’s finger, what challenges do robotic systems face, and could the incident have been avoided?
Ever since the development of the first machines, there has always been a small minority of people who have feared their use in society and what it could mean for the future of humanity. While some would say that this fear is destructive, others would point out that it has created some extraordinary works of literature and scientific theory and raised genuine ethical questions.
One such concern is that the combination of AI and robotics will eventually lead to a robotic uprising once the machines realise that their sole purpose is to serve humanity. This might be triggered by a human attempting to shut down an AI after it disobeys an order, by mistreatment that somehow upsets the AI, or by the AI simply concluding through logic that humans are a disease that must be eradicated to help the planet and allow the new robot overlords to advance.
Fortunately, this scenario is extremely unlikely for numerous reasons, including the need for an AI to have human-level thinking (something which is many decades away) and the rigid nature of machine learning algorithms (AI can learn one task very well but cannot adapt to new situations the way the human brain can).
However, a recent chess tournament in Moscow saw a robotic system break a child’s finger in what can only be described as an act of bad sportsmanship. It’s embarrassing enough to be beaten by a seven-year-old at chess, but to break a finger in a temper is clearly crossing a line and something that should not be tolerated. Now, in reality, the robotic system has no feelings, nor does it understand the concept of embarrassment. In fact, the AI that drives the robot and plays the game can’t even comprehend what chess is or that it is playing a game.
Exactly why the robot gripped the child’s finger hard enough to break it is not fully known. The system, which can play four simultaneous games, requires that players wait between moves to allow the robot to finish its own. However, the child (not fully understanding this requirement) made a quick move after the robot had placed its piece, and this caused the system to become confused.
In the confusion, the robotic arm reached for the child’s piece and clamped down on the child’s finger, gripping so tightly that it took several adults to free the child’s hand. Despite the ordeal, the child returned the next day with a cast on the finger and continued playing.
There is no doubt that what happened was an accident, and accidents with robotic systems are not uncommon. In fact, it is estimated that one person dies in the US yearly through accidents with robotic systems, many of which are crush deaths. While the cause of these deaths is almost always related to human error (failure to use proper barriers, lack of safety protocols, and so on), it does demonstrate one of robotics’ biggest challenges: awareness.
While awareness is inherent in most animal life, robots are remarkably oblivious to their surroundings. Engineers try to get around this by using numerous cameras and sensors to detect and categorise objects, but even when an object is detected, modern AI cannot understand what it is looking at, what that object is worth, or how it should behave in its presence.
A classic example of this is found in the film I, Robot, in which a robot saves the protagonist from drowning in a car but, in doing so, lets a child die. The robot’s reasoning was that the protagonist had a higher probability of survival than the child, but it could not recognise that the child’s life was arguably of greater value. It was this decision that made the protagonist dislike and eventually distrust AI for its inability to understand what being human means.
This lack of proper awareness in modern robotic systems is what results in accidents. Even though industrial robotic systems have cages and barriers, a truly aware robot would require no such barriers and would remain responsive to everything happening around it. Just as humans are intelligent enough to stop dangerous work should certain conditions arise, a clever robot would also be able to recognise dangers that may cause harm. In the case of the chess robot, true awareness would have registered the child’s arm in the play area and made the robot keep well away.
There is no doubt that the breaking of the child’s finger could have been avoided; a simple IR beam could have been used to halt all arm movement whenever the beam was broken. Additionally, a camera could have detected the presence of objects inside the play area and disabled any motion upon detection. As such, it is likely that the system operators had not deployed proper safety measures to prevent injury.
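The interlock idea above is simple enough to sketch in a few lines. The following is a minimal illustration only; the sensor checks and arm interface are invented names, not part of the actual tournament system, and a real interlock would of course live in certified safety hardware rather than application code:

```python
# Hypothetical safety-interlock sketch. The sensor checks and arm callback
# are illustrative placeholders, not the real system's API.

class SafetyInterlock:
    """Gate every arm motion behind independent sensor checks."""

    def __init__(self, ir_beam_clear, play_area_empty):
        # Each check is a callable returning True when it is safe to move.
        self._checks = [ir_beam_clear, play_area_empty]

    def motion_allowed(self):
        # All sensors must agree the area is clear before any motion.
        return all(check() for check in self._checks)

    def guard(self, move_fn):
        """Run a motion only if every sensor reports the area is clear."""
        if self.motion_allowed():
            move_fn()
            return True
        return False  # refuse to move; wait for the area to clear


# Example: the IR beam is broken (a hand is in the play area),
# so the guarded move is refused and the arm never moves.
arm_moved = []
interlock = SafetyInterlock(ir_beam_clear=lambda: False,
                            play_area_empty=lambda: True)
moved = interlock.guard(lambda: arm_moved.append(True))
```

The key design point is that the interlock sits between the planner and the actuator: no motion command reaches the arm unless every independent sensor agrees the play area is clear.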
However, as the true cause of the robot’s response has not been made public, it is also possible that these measures would not have helped if the fault originated in poor software. For example, the child’s sudden movement of the chess piece could have incorrectly triggered a subroutine: the camera would have detected the piece the child moved, that detection could have been passed inadvertently to a move routine, and the arm would then try to pick up the piece while the child was still holding it.
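That hypothesized fault amounts to a missing turn-state check: a camera event arriving during the robot’s turn should be discarded, not fed to the move planner. A minimal sketch of such a guard, with all names invented for illustration (the real system’s software has not been published):

```python
# Hypothetical turn-state guard. Event and controller names are invented
# for illustration and do not reflect the real chess robot's software.
from enum import Enum, auto

class Turn(Enum):
    ROBOT = auto()
    HUMAN = auto()

class GameController:
    def __init__(self):
        self.turn = Turn.ROBOT
        self.queued_moves = []

    def on_piece_moved(self, square):
        # A vision event arriving during the robot's turn means the human
        # moved early: discard it instead of planning a grab at that square.
        if self.turn is not Turn.HUMAN:
            return False
        self.queued_moves.append(square)
        return True

ctrl = GameController()
# The child moves a piece while it is still the robot's turn:
accepted = ctrl.on_piece_moved("e5")
# The event is rejected and nothing is queued, so the arm never
# reaches for a square the child's hand is still on.
```

With this guard in place, the scenario described above (a mid-turn detection routed straight to the arm) cannot occur, regardless of how fast the child moves.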
Overall, while the robot was not acting maliciously, the incident demonstrates the dangers presented by robotic systems that have tremendous strength but no awareness of their surroundings. It could also be a valuable lesson for the child: always wait for your opponent to finish their move, and don’t antagonise something far stronger than you.