Artificial Intelligence: will it be a blessing or curse?

by Martin Livermore
article from Wednesday 21 February 2018

FOR DECADES, writers and filmmakers have imagined a world in which computers and robots have advanced to a stage where they are, at least in some respects, more capable than their creators. Science fiction allows us to explore both the practical and moral implications of such changes, but we are now perhaps on the cusp of science fiction becoming science fact, when potential problems will become of more than just theoretical importance.

Many of the imagined worlds are dystopian and serve as a warning, and we should certainly remain alert to the unintended consequences of what we do. But artificial intelligence – the usual name for the technologies that will underpin our brave new world – is likely to bring enormous benefits. The debate has started, but it has already polarised, with some public figures (Stephen Hawking and Elon Musk among them) deeply pessimistic about the impact of AI.

Pessimism and risk avoidance seem to be the default position of many people today, and precaution is increasingly being codified into regulation, at least in Europe. But precaution has its costs and could endanger the very innovation on which our future may depend.

On the other hand, innovation doesn’t have to come from Europe or America as it usually has in the past. Many citizens of rich countries, despite real problems for those struggling on the lower rungs of the social scale, have lost the drive to improve and tackle challenges. Tomorrow’s game-changing developments may come from China, from India or from migrants benefitting from a rich country’s university education.

The deciding factor may be the regulatory and cultural environment, which can either foster or discourage such developments – though a ready acceptance of novelty and of things that improve lives may still militate against stagnation and the quashing of inventiveness in rich countries.

AI has the capacity to deliver both in abundance. The problem is that, as the capabilities of AI increase, they may slip out of human control in ways we cannot conceive. Already, large computer programs are so complex that even the programmers don't fully understand everything that goes on within the intricate lines of code they have written.

When machine learning is involved, as employed by the chess- or go-playing computers that can now beat any human player, really understanding what is going on in silico becomes even more difficult. It’s easy under these circumstances to believe that we may be capable of creating something approaching artificial consciousness, with unknown consequences.

Science fiction writers, of course, explored such issues decades ago. Perhaps most famously, Isaac Asimov put forward the Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

He later added a fourth law, the Zeroth Law, intended to precede the others:

  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

These have been mulled over, tweaked and had their potential consequences examined in a variety of fictional situations, but the truth is that there is (as yet, at least) no foolproof way to codify programmed behaviour that is incapable of causing harm. In practice, what we define as artificial intelligence is a spectrum of capabilities that we are only now beginning to tap into. And the first embodiments will be very far from the humanoid robots envisaged by Asimov and others.
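
To see why codifying such behaviour is harder than it looks, here is a minimal sketch, in Python, of Asimov-style precedence rules. It is purely illustrative: every function and field name is hypothetical, and the sketch only "works" because harm has been reduced to pre-labelled numbers; producing those labels for real situations is precisely the unsolved part.

    # A deliberately naive sketch of Asimov-style precedence rules (illustrative only).
    # All names and fields are hypothetical; deciding what "harm" actually means for a
    # machine in the real world is exactly what cannot be codified this simply.

    def harms_humanity(action):
        return action.get("humanity_harm", 0) > 0

    def harms_human(action):
        return action.get("human_harm", 0) > 0

    def violates_order(action):
        return not action.get("obeys_order", True)

    def endangers_self(action):
        return action.get("self_harm", 0) > 0

    def permitted(action):
        """Apply the laws in order of precedence; the first rule violated wins."""
        if harms_humanity(action):   # Zeroth Law
            return False
        if harms_human(action):      # First Law
            return False
        if violates_order(action):   # Second Law
            return False
        if endangers_self(action):   # Third Law
            return False
        return True

    print(permitted({"human_harm": 0, "obeys_order": True}))  # True
    print(permitted({"human_harm": 1}))                       # False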

Autonomous vehicles may not be far away. Modern cars already use sophisticated software to manage their engines, and processing capabilities (effectively the speed of data handling) have evolved to the stage where cars can theoretically operate safely on public roads. There are remaining problems, of course. Some are of a technical nature, such as ensuring the control systems can detect all likely hazards and are fail-safe. Others pose moral dilemmas.

In particular, although self-driving cars should in principle be safer than ones driven by real people, they will still encounter hazardous situations in which some kind of damage limitation is needed. Then, it may come to a choice between protecting the occupants and avoiding harm to other road users. Whereas a human driver would instinctively try to take evasive action, computers don’t have instincts, only programmed behaviour.
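
To make concrete what "programmed behaviour" means here, the sketch below shows, in a purely hypothetical way, how a damage-limitation choice ends up as an explicit weighting that someone must write down in advance. The manoeuvres, harm estimates and weights are invented for illustration and are not drawn from any real driving system.

    # Hypothetical illustration: damage limitation as an explicit, pre-programmed trade-off.
    # The manoeuvres, harm estimates and weights below are invented; no real system is implied.

    OCCUPANT_WEIGHT = 1.0        # someone has to choose these numbers before the event
    OTHER_ROAD_USER_WEIGHT = 1.0

    def expected_harm(option):
        """Combine the estimated harms into one score using the fixed weights."""
        return (OCCUPANT_WEIGHT * option["occupant_harm"]
                + OTHER_ROAD_USER_WEIGHT * option["other_harm"])

    def choose_manoeuvre(options):
        """Pick the option with the lowest weighted harm score."""
        return min(options, key=expected_harm)

    options = [
        {"name": "brake hard",          "occupant_harm": 0.2, "other_harm": 0.6},
        {"name": "swerve towards kerb", "occupant_harm": 0.5, "other_harm": 0.1},
    ]
    print(choose_manoeuvre(options)["name"])  # "swerve towards kerb" with equal weights

Whatever values are chosen, the moral judgement has been made by a person, in advance, and frozen into code.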

If someone were badly injured or killed in an incident involving an autonomous car, public reaction would be different from the reaction to a crash involving a human driver. Human error is accepted, but mistakes by machines are less forgivable. The parallel is with railway travel: the safety record is much better than for roads, yet the rare fatal accidents often lead to calls for expensive additional safety measures.

What we have to bear in mind is that, whatever the downsides of any particular application of AI, be it accidents involving autonomous cars or anything else, they ultimately have a human cause. People have designed the cars, built and installed the sensors and, most importantly, written the software to control everything. If something goes wrong, it is because a particular program has consequences that its human programmer failed to foresee.

And so it has always been with things we now take for granted. Because something is new, it is very difficult to foresee all potential problems. Nevertheless, we continue to make progress as a species, as long as we recognise and correct our mistakes.

Identifying potential major problems with AI, as Stephen Hawking and Elon Musk have done, should only serve to help us avoid them. It shouldn’t stop us trying to get the best out of innovation.

Martin Livermore writes for the Scientific Alliance, which advocates the use of rational scientific knowledge in the development of public policy. To subscribe to his regular newsletter please use this link.
