Whether or not you believe that our future lies in the hands of robots, for some that trust was shaken last week when a VW factory worker in Germany was killed by the stationary robot he was setting up. It is reported that the 22-year-old contractor was suddenly grabbed by the machine and crushed against a metal plate whilst working.
While this tragic incident will now be the subject of a deep and thorough investigation, for some it has already raised a number of questions about safety and reliability, and about whether human error, rather than a malfunction of the machine, was to blame. Can we really trust robots?
Add to this the latest instalment in the Terminator film franchise and the current Channel 4 series ‘Humans’, in which a small band of realistic, human-like ‘synths’ with a ‘consciousness’ harm humans, and it appears that robots are getting a bit of a bad rap at the moment.
But, argues Dr Blay Whitby, philosopher and technology ethicist at the Centre for Cognitive Science at the University of Sussex, current robotic technology is not yet at a level “where their decision-making allows us to treat them as blameworthy”.
“This unfortunate (VW) accident is technically and morally comparable to a machine operator being crushed because he didn’t use the safety guard,” he said.
“In this case it’s more complex and therefore more forgivable because ‘the safety guard’ was provided by computer software and he was in the process of setting it up.”
Plus, as Dr Ron Chrisley, Director of the Centre for Cognitive Science at the University of Sussex, adds in a recent post on The Conversation:
‘it is strikingly similar to the first recorded case of a death involving an industrial robot 34 years ago.
These incidents have happened before and will happen again. Even if safety standards continue to rise and the chance of an accident happening in any given human/robotic interaction goes down, such events will become more frequent simply because of the ever-increasing number of robots.
This means it is important to understand this kind of incident properly, and a key part of doing so is using accurate and appropriate language to describe them. Although there is a sense in which it is legitimate to refer to the Baunatal incident as a case of “robot kills worker”, as many reports have done, it is misleading, verging on the irresponsible, to do so. It would be much better to express it as a case of “worker killed in robot accident”.’
As Dr Chrisley highlights, the more accurate wording does not grab the public’s attention in the same way, which, although it may seem a rather trivial point, could actually affect the development of robotic technology in the future:
‘insisting on getting this language right isn’t an academic exercise in pedantry. The stakes are high. For one thing, an unwarranted fear of robots could lead to another unnecessary “artificial intelligence winter”, a period where the technology ceases to receive research funding. This would delay or deny the considerable benefits robots can bring not just to industry but society in general.’
Dr Whitby said there also needed to be more awareness of robotic technology, and more public scrutiny of the ethical issues involved, as the world becomes more automated and decision-making is delegated to machines. So we cannot hand it all over; we are still very much part of the equation.
Dr Chrisley sums this up nicely: ‘If there was a “problem with the robot”, be it faulty materials, a misperforming circuit board, bad programming, poor design of installation or operational protocols, that problem – or not anticipating it – would still have been due to human error. Yes, there are industrial accidents where no human or group of humans is to blame. But we mustn’t be tempted by the appearance of agency in robots to absolve their human creators of responsibility. Not yet anyway.’