Noel Sharkey is a professor at Sheffield University in the UK, and he has just written an article for CNN. He works in robotics and artificial intelligence, and he is leading a call to ban the development of “autonomous” killing machines.
We might be picturing a killer robot here, and as many will know there are already plenty of unmanned systems in operation. Drones are very much in the press, but they are flown by a pilot, and the decision to kill someone is taken by a human, even if that human is several thousand miles from the action.
But Sharkey is concerned about the future development of systems that can be programmed for a task and then autonomously make decisions while carrying it out. He does not believe that a computer can make the types of decisions necessary in warfare, or at least not with the morality and judgement required.
There are two sides to the argument about robotics in war. One states that the mechanization of warfare would lead to fewer casualties, more precision, less danger for the troops and, all in all, a cleaner fight. There would be no more massacres of civilians carried out by soldiers taking retribution for an unrelated attack, fewer accidental deaths, and so on.
But on the other side, we are talking about machines making decisions that should incorporate humanity. How many deaths are justified for a particular objective? Is the death of an individual really of strategic advantage? What if the machines malfunction, or are taken over by hackers? Who can be held responsible for their actions? And aren’t we more likely to go to war if we can send machines and leave the boys at home?
All of these arguments are fought over within the robotics community, but we should remember that we have already travelled some way down the road of computerized and mechanized war. The anti-aircraft and missile defence systems being deployed in Asia today are no longer mechanical affairs; they are computerized systems that all but fire themselves, and they certainly do not require a person to aim them as in the old films.
Bomb disposal robots, unmanned vehicles and the like are already deployed, and mechanical spider troops that really do bring the idea of cyber war into the modern scenario are under development, as this article explains.
One problem is that of foresight: how can we make legislation today when we have no real idea of how, and how far, technology will advance in the foreseeable future? This type of robotics also often comes from, or aids, other developments, such as the robotic surgical machinery that I reviewed in a previous post. Infiltration and influence are everywhere.
If you would like to get an idea of how far we have come in terms of movement, take a look at this BBC video. A Boston company has produced a robot for military use (testing chemical suits) that moves remarkably like a human.
I have also written a couple of articles covering this issue on the Bassetti Foundation website. Read this article about recruiting robots for combat for an overview and follow the links.
Here you will also find an interview with robotics professor Ronald Arkin, in which he describes how looking for funding led him into designing robots that were paid for by the US military. The military is, of course, the largest investor in robotics research, a rather sobering thought given the current state of university funding.