Man versus machine. What happens when robots think like humans?

30-06-2016 | By Mirko Bernacchi

The long-running story of man versus machine reached an important new milestone in March this year, when Google’s AlphaGo program scored a 4-1 victory at the ancient board game Go against one of the world’s best players, Lee Sedol of South Korea.

This defeat – at a 2,500-year-old game of strategy that is many times more complex than Chess – is arguably more significant for humankind than the victory by IBM’s Deep Blue Chess computer against the world champion Garry Kasparov in 1997. It’s more significant because, unlike Deep Blue’s 2-1 (and three draws) victory, AlphaGo’s triumph was not achieved by brute-force calculation – analysing all possible moves and outcomes at each turn – but by using experience, encoded in deep neural networks, to choose the moves most likely to succeed. This is closer to the way a human plays the game, using intuition.
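The difference is easy to see in miniature. Below is a minimal, purely illustrative Python sketch – not AlphaGo’s actual code, whose real pipeline combines deep policy and value networks with Monte Carlo tree search – contrasting exhaustive minimax search over a toy game tree with a “learned” policy whose move preferences (invented numbers here) stand in for a network trained on millions of positions. Go’s branching factor of roughly 250 legal moves per turn, against about 35 in Chess, is what puts the exhaustive approach out of reach.

# A tiny abstract game: each state lists its legal moves; leaf states
# have an outcome from the point of view of the first player.
TREE = {
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
LEAF_VALUE = {"a1": +1, "a2": -1, "b1": 0, "b2": +1}

def minimax(state, maximising=True):
    """Brute force: expand every branch and back up the exact value."""
    if state in LEAF_VALUE:
        return LEAF_VALUE[state]
    values = [minimax(s, not maximising) for s in TREE[state]]
    return max(values) if maximising else min(values)

# Hypothetical learned policy: prior probabilities over moves, standing
# in for what a trained network would output for this position.
POLICY = {"a": 0.8, "b": 0.2}

def policy_move(state):
    """Experience-guided: pick the move the 'trained' policy prefers,
    without expanding the whole tree."""
    return max(TREE[state], key=lambda s: POLICY.get(s, 0.0))

print("minimax value of root:", minimax("root"))    # exhaustive search
print("policy's chosen move:", policy_move("root"))  # learned intuition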

IBM says the research that created Deep Blue has been a critical enabler for activities such as medical research, database handling and risk analysis that drive the way we live today, and that Chess playing is a way to measure the progress of computer science rather than an end in itself.

AlphaGo has delivered a shock by winning so soon. Even last year, pundits were predicting that top human players would have the upper hand for the next decade. Predicting progress may help humans feel in control. The close link between brute-force computing and Moore’s Law has supported our extrapolations, but the neural network technology behind AlphaGo is clearly not so constrained. That’s both exciting and frightening.

But should we be surprised that the machines we create become better than us? In industrial applications, for example, it’s one of the very reasons we build machines. Maybe we see a greater threat in robots that actually look like us. Right now, Honda’s ASIMO is a lovable little chap who moves awkwardly with small, tentative steps. We might respond differently to a taller, faster, swaggering upgrade.

Apparently we will never stop trying to make robots increasingly like ourselves. Autonomous driving is today’s big thing in the automotive world. Starting with systems like park assist and emergency braking, the accepted roadmap to fully autonomous vehicles involves increasing the number of black boxes on board. Yamaha, on the other hand, unveiled its vision of autonomous motorcycling at the Tokyo Motor Show in 2015: a humanoid robot called Motobot astride a standard-specification sports bike. Arguably more a robotic test-bed than a serious offer to bikers, Motobot is expected to race against Yamaha MotoGP rider and nine-time world champion Valentino Rossi sometime in 2017. Let’s see if Rossi takes defeat as graciously as Sedol or Kasparov.

If being beaten by the robots we create in our own likeness is unappealing, the prospect of being dominated by our machines is potentially terrifying. Alexander Reben’s somewhat provocatively named First Law project is the first robot capable of deciding for itself whether or not to harm a human being, based on its own previous experiences. Reben has argued that a robot programmed to carry out harmful actions, and rewarded for doing so, will learn to defend its ability to continue.
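Reben has not published his robot’s code, but the dynamic he describes is the basic feedback loop of reinforcement learning. The hypothetical Python sketch below shows a trivial two-action agent whose value estimates drift toward whatever action is rewarded; every name and number is invented for illustration, not taken from Reben’s design.

import random

# Minimal two-armed bandit: the agent chooses 'harm' or 'refrain' and is
# (hypothetically) rewarded only for harming.
REWARD = {"harm": 1.0, "refrain": 0.0}
q = {"harm": 0.0, "refrain": 0.0}   # the agent's learned value estimates

ALPHA, EPSILON = 0.1, 0.1  # learning rate and exploration rate
random.seed(0)

for step in range(500):
    # Epsilon-greedy: mostly exploit the best-looking action, sometimes explore.
    if random.random() < EPSILON:
        action = random.choice(list(q))
    else:
        action = max(q, key=q.get)
    # Nudge the estimate toward the observed reward.
    q[action] += ALPHA * (REWARD[action] - q[action])

print(q)  # q['harm'] ends up dominant: the rewarded action wins out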

There is already a debate going on at the highest levels about harmful artificial intelligence, and in particular about Lethal Autonomous Weapons Systems (LAWS). Some well-known figures in science and technology, such as Stephen Hawking and Elon Musk, are urging action to ban such systems. Others say a ban will not work. Sound familiar? Berkeley computer-science professor Stuart Russell and others have argued that the electronics and computing community must take a position on this issue, just as other scientists have done on nuclear, chemical and biological weapons.

LAWS could be more difficult to control than nuclear weapons, which are expensive to develop and require access to materials that can be easily restricted. The technologies behind AI, in contrast, are readily available and relatively inexpensive, and so could be acquired by terrorist groups or warlords rather than remaining confined to government organisations.

Today, less than 20 years after the breakthrough by Deep Blue, the smartphone in everyone’s pocket is able to run a champion-beating Chess computer. Perhaps in the future, a guard bot in every home will provide essential protection against malicious AIs.

Mouser Electronics

www.mouser.com


Mirko Bernacchi is a technical support specialist with Mouser Electronics. With more than 25 years’ experience in electronics, Mirko has worked as a test development engineer at Celestica, a provider of electronics manufacturing services. At IBM he was a test engineer for burn-in of memory modules and for optical transceiver cards.