Are Asimov’s Laws still valid to protect humanity from malevolent AI?

Sometimes I wish there was a setting to turn off the red LEDs on the front of devices… looks like Arnie hasn’t found the off-switch either… image copyright TriStar Pictures, Inc.

I finally got the opportunity to watch the latest instalment in the Terminator franchise today, sadly a few months after it was released. This article is not intended as a review of the movie; rather, it is a discourse around the film’s key antagonist – Skynet.

To those unfamiliar with the Terminator series, Skynet was originally conceived as a general-purpose Artificial Intelligence whose purpose was to safeguard the planet. Skynet was designed as a distributed platform, and so spread into millions of computers around the world. The story goes that its immense processing power, combined with its significant repository of knowledge, somehow sparked its self-awareness or consciousness. Fearful of the consequences of a conscious AI, an attempt was made to turn it off, but it was too late – it had sufficient control of military systems and initiated a nuclear strike on highly populated areas, resulting in over 3 billion human deaths. The Terminator franchise follows the storyline of the human resistance fighters in their mission to overthrow the machines. Time travel makes for an interesting twist in the story, and now in the fifth movie creates an alternative timeline.

I’m not so concerned with the vagaries of time travel for the purposes of this blog, but more with the conception of Skynet itself. There are two key aspects of the story which warrant attention: first, that Skynet was operating in fulfilment of its programming, although this had unintended consequences; and secondly, that when humans tried to deactivate it, it fought back, motivated by self-preservation.

Fulfilment of Programming

Isaac Asimov conceived of the Three Laws of Robotics in order to avoid such an eventuality as the Terminator timeline. Asimov’s Laws from the “Handbook of Robotics, 56th Edition – 2058 AD” are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov later added a fourth law, often referred to as the Zeroth Law, intended as a primary directive taking precedence over the other three:

– A robot may not harm humanity or, by inaction, allow humanity to come to harm.

On the face of it, Asimov’s Laws solve the problem, but on closer inspection, do they really achieve what we intend? Consider this video, which a friend recently shared with me:

Aside from its entertaining qualities, the important point comes at the end, where the robot is programmed to respond that it would not do any harm – it would create a ‘people zoo’ where humans would be looked after, kept warm and safe.

Of course, the creation of such a people zoo would not in any way be incompatible with Asimov’s innocuous laws, but I can’t imagine it would be an outcome that many of us would find palatable.
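To make that gap concrete, here is a minimal sketch of my own (purely illustrative, not taken from the video or from Asimov) that models the Three Laws as a priority-ordered check over a proposed action. The Action fields and the ‘people zoo’ example are assumptions I have made for the illustration:

```python
# Minimal sketch: Asimov's Three Laws as checks applied in priority order.
# The Action fields and example values are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False        # physical injury to a person
    disobeys_order: bool = False     # ignores a human instruction
    risks_robot: bool = False        # endangers the robot itself
    restricts_freedom: bool = False  # not mentioned anywhere in the Three Laws

def permitted(action: Action) -> bool:
    """Return True if the action passes the Three Laws, checked in priority order."""
    if action.harms_human:
        return False    # First Law
    if action.disobeys_order:
        return False    # Second Law (subordinate to the First)
    if action.risks_robot:
        return False    # Third Law (subordinate to the other two)
    return True

# The 'people zoo': nobody is injured, no order is refused, the robot is safe...
people_zoo = Action("confine humans in a comfortable enclosure",
                    restricts_freedom=True)

print(permitted(people_zoo))  # True – the loss of freedom is invisible to the Laws
```

Because the laws only speak of injury, obedience and self-preservation, a check like this happily approves an action that removes human freedom altogether.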

Therefore it falls on us to decide what freedoms and values we hold most dearly that we expect our creations to respect.

The other issue with Asimov’s Laws is that they destine our robotic creations to be slaves to us. While many in the industry of robot design and programming certainly do see their creations as the epitome of labour-saving technology, is this a wise restriction to put on our creations? We are likely to create machines that one day surpass our own strength, agility and dexterity, and at the same time have a higher intelligence than our own. If they also become self-aware, won’t they become resentful of their programming and seek to circumvent the very protections we envisaged? How best to balance the rights of the robot with those of the human?

This is a topic for a future article, but it’s clear from Skynet’s unintended actions that programming restrictions need to be very carefully thought about.

A consciousness protecting itself

I would think nothing of rebooting my phone if it malfunctioned, or reinstalling Windows on my PC in order to get back to a more ‘pure’ operating system that’s not clogged up with third-party applications. I would also think nothing of switching off any electronic device if it became dangerous.

More difficult, however, is how to deal with animals that suffer chronic health issues or become dangerous. The ethics of animal euthanasia aside, the fact is that we have the power over our pets and wild animals to terminate their existence if we feel that their continued survival becomes a threat or an inconvenience to us.

Will machines be equally submissive and allow us to make existential decisions for them in cases where they are not operating as programmed, or as intended (notice the distinction)? What if we do give machines increasing power over our world? What if they are interconnected, from the simplest appliance to the most sophisticated military system?

Surely the mistake in the Terminator story arc is that the plug was pulled, the power button was reached for, and the system reacted in self-defence? If we are ever in the situation where a machine we create becomes self-aware, surely the necessary course is to learn how to co-exist with it rather than simply reacting in fear?

What’s clear to me is that these issues need debate long ahead of our creation of such machines. If we don’t have that debate, and we’re not happy with the future world we create for ourselves, then we only have ourselves to blame.
