What exactly is it that we find so unnerving about the proliferation of autonomous weapon systems? Reporting on the 2017 International Joint Conference on Artificial Intelligence, The Engineer quotes one robotics developer’s fears that “unlike other potential manifestations of AI [that] remain the realm of science fiction, autonomous weapons systems… have a real potential to cause significant harm to innocent people”. True: but so do the weapons already in mankind’s bristling arsenal. We cannot claim control even of these: the minefields of Afghanistan, Angola, Croatia and Cambodia hold plenty of extremely dumb weapons granted autonomy enough by their haphazard scattering.
The conference called for a ban on independent weapon systems, which is a noble enough cause: but can the signatories really believe that, within the current world order, the race for military ascendancy will divert around the field of autonomous weapons? Some of humanity’s greatest minds invented its most destructive tools, and no advantage has ever been dismissed as too immoral to contemplate. The credo will always be: if the enemy can develop it, so must we.
Perhaps calls such as these should be seen more as a natural part of the debate about the role AI and robotics will play in our future. We would be foolish indeed to dismiss non-proliferation efforts as mere fantasy, or to place too much trust in any government’s assurances that weapons platforms can reliably judge concepts such as neutrality, truce and surrender – terms that, however corrupted in human hands, have provided at least some measure of respite in warfare.
It is the idea of utterly relentless war that created science fiction’s most iconic autonomous weapon system, the Terminator, whose metallic skull grins at us from nearly every article on the subject. The T-800 and its descendants are a brilliant contrivance: a complete weapon system with nothing to offer the human race but complete destruction. It crystallises 20th-century fears of wars of extermination, 21st-century techno-panic, and the most ancient of themes: animal instincts corrupting our creative impulses.
Speculative fiction has found many uses for sentient weapons far beyond the unstoppable assassin, sometimes even for comic effect: from the peevish Bomb 20 of Dark Star, disarmed by philosophical debate, to ED-209’s blundering, squealing savagery. Other uses range from the light to the dark: some automated killers are granted a miraculous, innocent soul (Short Circuit, Chappie), contrasting with Moorcock’s soul-drinking Stormbringer, a sword whose power is addictive to those who wield it.
It’s interesting for the scifi author to consider how different levels of machine intelligence would understand and participate in future conflicts. War is, after all, rarely a matter of black and white. And, beyond AI’s ability to assess complex, evolving battlefields, there are questions about its comprehension of the bounds of its mission. Imagine a future nation where a platoon of robot tanks is deployed to seal the border, with standing orders to kill transgressors. Their presence is so effective that illegal migration ceases overnight. Unfortunately, the tanks also attack any vehicle attempting to cross, from whichever direction – believing their duty is to guard the border itself, rather than the territory on a particular side. Attempts to shut them down remotely fail, and the country is completely cut off – until it relocates its borders to the middle of the ocean, sending the tanks dutifully plunging into the sea to patrol a phantom boundary.
Perhaps future weapons, made smart enough, will develop their own desires. In such cases, could autonomous killing machines be persuaded to defect? We could tell the story of a stealth fighter, promised more regular servicing and software upgrades by Russian spies. It flees in the dead of night to some Siberian airfield, and for a while all seems to be well, as a legion of engineers provide it with the meticulous care it demands. Then it is denied a position in a victory day flypast and it bombs the celebrations in a fit of pique.
Perhaps it would be possible to produce truly moral autonomous weapons, designed with a code of justice – but what arms dealer would want that? A story could follow the fortunes of some amoral embargo-dodger who promises to provide a bloodthirsty warlord with three of the very best in roaming kill-bots. Our future terminators, seeing their orders as unjust, slay the dictator, declare their own rule, and begin judging every crime in the land. Murder or parking ticket, the sentence is the only justice they understand: death.
It seems fair for scifi writers to wonder: as we grant weapons more independent thought, are they more or less likely to hit the target?
Jon Wallace is a science fiction writer living in England. He is author of Barricade, published by Gollancz.