Thinking Of Autonomous Weapons In ‘Systems’ Terms

A persistent confusion in thinking about weapons and their regulation is to insist on viewing weapons in isolation, and not as parts of larger socio-political-economic-legal-ethical systems. This confusion, in the domain of gun control for instance, inspires the counter-slogan ‘guns don’t kill people; people kill people.’ Despite its glibness–and its misuse by the NRA–the slogan encapsulates a vital truth: it is singularly unilluminating to consider a weapon in isolation. Indeed, it is only within the context of a larger system that the object we term a weapon becomes one. A piece of metal is a knife because it is used as one, pressed into service as one by a decision-making agent of some kind, to cut objects, vegetable or animal.

Which brings us to autonomous weapons, a domain where the ethical and regulatory debate is quite clearly demarcated. The case for autonomous weapons is exceedingly familiar: they are more humane because of their greater precision; they can be used to reduce the ‘cost’ of war, both human and material; no more carpet-bombing, just precision strikes, delivered by autonomous weapons–which, moreover, reduce the strain of killing on humans. (That is, these weapons are kinder to those who kill and to those who are killed.) The case against them is similarly familiar: the delegation of lethal decision-making to a machine incapable of fine-grained ethical deliberation is an invitation to moral atrocity, to a situation in which lurking catastrophes are triggered by a moral calculus that makes decisions correct only in a superficial, technical sense. The immaturity of such systems, and of the algorithms they instantiate, makes them especially risky to deploy and use.

Autonomous weapons do not exist in isolation, of course; they are more correctly considered autonomous weapons systems–one part of an economic, military, legal, political, and moral calculus. Their use as weapons is not merely a function of their machinic code; it is a function, rather, of a much more complex ‘code’ made up of bits of legal regulations, political imperatives, and physical and economic constraints. It is these that act together, in concert or in opposition, to ‘fire’ the weapon in question. As such, some of the ‘ethical’ arguments in favor of autonomous weapons systems look a little trite: yes, autonomous weapons systems carry the potential to enable more targeted and precise killing, but the imperatives to do so still need to be humanly directed; their force is channeled and directed–and perhaps weakened or strengthened–by all sorts of system-level and corporate constraints, political ones among them. The questions such systems prompt are, as they should be, quite different from those that might be directed at an ‘isolated weapon’: Who owns them? Who ‘controls’ them? What are the safeguards against their inappropriate use? Which system’s political, economic, and moral imperatives are written into its operational procedures? The world’s deadliest bomber can be grounded by a political command, its engines left idling by politics; it can also be sent halfway around the world by a similar directive.

An illustrative example may be found in the history of computing itself: the wide-scale deployment of personal computing devices in office settings, and their integration into larger ‘enterprise’ systems, was a long and drawn-out process, one that suffered many birthing pains. This was because the computers placed in offices were not, despite appearances, isolated computing devices; they were parts of computing systems. They were owned by the employer, not the employee, so they were not really ‘personal’; their usage–hours, security access, and so on–was regulated by company rules; the data on their drives belonged to the employer. (For instance, to print a document, you accessed a networked printer administered by an Information Systems Group; or, the computers were not accessible on weekends or after hours.) Under these circumstances, it was a category mistake to regard these machines as isolated personal computing devices; rather, they were parts of a much larger commercial system, and their human users were but one component of it. Claims about their capacities, their desirability, their efficiencies were only coherently made within the framework of this system.

Similar considerations apply to autonomous weapons; talk of their roles in warfare, their abilities, and the like, is only meaningfully expressed within a discursive framework that references the architecture of the system the weapon in question functions as a part of.


Shrapnel Is Still Deadly, No Matter Where It Strikes

Many years ago, while talking to my father and some of his air force mates, I stumbled into a conversation about munitions. There was talk of rockets, shells, casings, high-explosive rounds, tracer bullets, napalm, and all of the rest. Realizing I was in the right company, I asked if someone could tell me what ‘shrapnel’ was. I had seen it mentioned in many books and had a dim idea of what it might be: it went ‘flying’ and it seemed to hurt people. Now I had experts who would inform me. A pilot, a veteran of the 1971 war with Pakistan, someone who had flown many ground-attack missions, spoke up. He began with ‘Shrapnel is the worst thing you can imagine’ and then launched into a quick description of its anti-personnel raison d’être. He finished with a grim, ‘You don’t have to get hit directly by a shell to be killed by it.’

I was a child, still naive about war despite my steady consumption of military history books and boys’ battle comics, and my childhood in a war veteran’s home. So it wasn’t surprising that my reaction to learning how shrapnel worked, and what made it effective, was one of bemused surprise. So those beautiful explosions, the end-result of sleek canisters tumbling from low-flying, screaming jets describing aggressive trajectories through the sky, those lovely flames capped off by plumes of smoke with debris flying gracefully to all corners, were also sending out red-hot pieces of jagged metal, which, when they made contact with human flesh, lacerated, tore, and shredded? I had no idea. Boom-boom, ow?

As the aftermath of the Boston bombings makes clear, shrapnel is still deadly:

Thirty-one victims remained hospitalized at the city’s trauma centers on Thursday, including some who lost legs or feet. Sixteen people had limbs blown off in the blasts or amputated afterward, ranging in age from 7 to 71…. For some whose limbs were preserved… the wounds were so littered with debris that five or six operations have been needed to decontaminate them.

This nation has now been at war for some twelve years. In that period of time, we have grown used to, and blasé about, impressive visuals of shock-and-awe bombing, cruise missile strikes, drone attacks, and of course, most pertinently to Americans, the improvised explosive device, planted on a roadside and set off remotely. What is common to all of these acts of warfare is that at the business end of all the prettiness–the flash, the bang, the diversely shaped smoke cloud–lies a great deal of ugliness. Intestines spilling out, crudely amputated limbs, gouged-out eyes; the stuff of medieval torture tales. Because shrapnel is indiscriminate, it goes places and does things that even horror movie writers might hesitate to put into their scripts: slicing one side off a baby’s head, or driving shards deep into an old man’s brains.

Weapons work the same way everywhere; the laws of physics dictate that they do. Human bodies are impacted by them quite uniformly too; the laws of human physiology dictate that.

Flesh and flying hot metal; there’s only one winner, every single time.