Thinking Of Autonomous Weapons In ‘Systems’ Terms

A persistent confusion in thinking about weapons and their regulation is to insist on viewing weapons in isolation, and not as part of larger socio-political-economic-legal-ethical systems. This confusion, in the domain of gun control for instance, inspires the counter-slogan 'guns don't kill people; people kill people.' Despite its glibness, and its misuse by the NRA, the slogan encapsulates a vital truth: it is singularly unilluminating to consider a weapon in isolation. Indeed, the object we term a weapon is one only within the context of a larger system that makes it one. A piece of metal is a knife because it is used as one, pressed into service as one by a decision-making agent of some kind, to cut objects, vegetable or animal.

Which brings us to autonomous weapons, a domain where the ethical and regulatory debate is quite clearly demarcated. The case for autonomous weapons is exceedingly familiar: they are more humane because of their greater precision; they can be used to reduce the 'cost' of war, both human and material; no more carpet-bombing, just precision strikes delivered by autonomous weapons, which, moreover, reduce the strain of killing on humans. (That is, these weapons are kinder to those who kill and to those who are killed.) The case against them is similarly familiar: the delegation of lethal decision-making to a machine incapable of fine-grained ethical deliberation is an invitation to moral atrocity, to a situation in which lurking catastrophes are triggered by a moral calculus whose decisions are only superficially correct in technical terms. The immaturity of such systems, and of the algorithms they instantiate, makes them especially risky to deploy and use.

Autonomous weapons do not exist in isolation, of course; they are more correctly considered autonomous weapons systems, one part of an economic, military, legal, political, and moral calculus. Their use as weapons is not merely a function of their machinic code; it is a function, rather, of a much more complex 'code' made up of bits of legal regulations, political imperatives, and physical and economic constraints. It is these that act together, in concert or in opposition, to 'fire' the weapon in question. As such, some of the 'ethical' arguments in favor of autonomous weapons systems look a little trite: yes, autonomous weapons systems carry the potential to enable more targeted and precise killing, but the imperatives to do so still need to be humanly directed; their force is channeled, directed, and perhaps weakened or strengthened, by all sorts of system-level and corporate constraints, political ones among them. The questions such systems prompt are, as they should be, quite different from those that might be directed at an 'isolated weapon': Who owns them? Who 'controls' them? What are the safeguards against their inappropriate use? Which system's political, economic, and moral imperatives are written into its operational procedures? The world's deadliest bomber can be grounded by a political command, its engines left idling by politics; it can also be sent halfway around the world by a similar directive.
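To put the point in a programmer's idiom: here is a minimal, purely illustrative sketch in Python, with entirely hypothetical names (EngagementContext, may_fire, and so on, none of them drawn from any real system), of how the 'firing' of an autonomous weapon might be modeled as the conjunction of an entire stack of system-level constraints, of which the machinic targeting code is only one component:

```python
# A purely illustrative sketch: the 'fire' decision modeled as the
# conjunction of system-level constraints. All names are hypothetical.

from dataclasses import dataclass


@dataclass
class EngagementContext:
    target_identified: bool    # the weapon's own sensing/targeting code
    legal_authorization: bool  # rules of engagement, treaty obligations
    political_directive: bool  # a standing or explicit political command
    logistics_available: bool  # physical/economic constraints (fuel, ammunition)


def may_fire(ctx: EngagementContext) -> bool:
    """The weapon 'fires' only if every layer of the larger system permits it.

    The machinic code (target_identified) is necessary but nowhere near
    sufficient; the remaining constraints are written by lawyers,
    politicians, and quartermasters, not by the weapon's programmers.
    """
    return (
        ctx.target_identified
        and ctx.legal_authorization
        and ctx.political_directive
        and ctx.logistics_available
    )


# The world's deadliest system can be 'grounded' by a single political bit:
ctx = EngagementContext(
    target_identified=True,
    legal_authorization=True,
    political_directive=False,  # engines left idling by politics
    logistics_available=True,
)
assert may_fire(ctx) is False
```

The sketch is crude by design: its point is simply that the conjunction, not the targeting clause alone, is what constitutes the 'weapon.'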

An illustrative example may be found in the history of computing itself: the wide-scale deployment of personal computing devices in office settings, and their integration into larger 'enterprise' systems, was a long and drawn-out process, one suffering many birthing pains. This was because the computers placed in offices were not, despite appearances, isolated computing devices; they were part of computing systems. They were owned by the employer, not the employee, so they were not really 'personal'; their usage (hours, security access, and the like) was regulated by company rules; the data on their drives belonged to the employer. (For instance, to print a document, you accessed a networked printer administered by an Information Systems Group; or the computers were not accessible on weekends or after hours.) Under these circumstances, it was a category mistake to regard these machines as isolated personal computing devices; rather, they were part of a much larger commercial system, and their human users were but one component of it. Claims about their capacities, their desirability, their efficiencies could only be coherently made within the framework of this system.

Similar considerations apply to autonomous weapons: talk of their roles in warfare, their abilities, and the like is only meaningful within a discursive framework that references the architecture of the system of which the weapon in question functions as a part.
