Thinking Of Autonomous Weapons In ‘Systems’ Terms

A persistent confusion in thinking about weapons and their regulation is to view weapons in isolation rather than as parts of larger socio-political-economic-legal-ethical systems. In the domain of gun control, for instance, this confusion inspires the counter-slogan ‘guns don’t kill people; people kill people.’ Despite its glibness–and its misuse by the NRA–the slogan encapsulates a vital truth: it is singularly unilluminating to consider a weapon in isolation. Indeed, it is only within the context of a larger system that the object we term a weapon becomes one. A piece of metal is a knife because it is used as one, pressed into service by a decision-making agent of some kind to cut objects, vegetable or animal.

Which brings us to autonomous weapons, a domain where the ethical and regulatory debate is quite clearly demarcated. The case for autonomous weapons is exceedingly familiar: they are more humane because of their greater precision; they can reduce the ‘cost’ of war, both human and material; no more carpet-bombing, just precision strikes delivered by autonomous weapons–which, moreover, reduce the strain of killing on humans. (That is, these weapons are kinder both to those who kill and to those who are killed.) The case against them is similarly familiar: the delegation of lethal decision-making to a machine incapable of fine-grained ethical deliberation is an invitation to moral atrocity, to a situation in which lurking catastrophes are triggered by a moral calculus whose decisions are only superficially technically correct. The immaturity of such systems, and of the algorithms they instantiate, makes them especially risky to deploy and use.

Autonomous weapons do not exist in isolation, of course; they are more correctly considered autonomous weapons systems–one part of an economic, military, legal, political, and moral calculus. Their use as weapons is not merely a function of their machinic code; it is a function, rather, of a much more complex ‘code’ made up of legal regulations, political imperatives, and physical and economic constraints. It is these that act together, in concert or in opposition, to ‘fire’ the weapon in question. As such, some of the ‘ethical’ arguments in favor of autonomous weapons systems look a little trite: yes, autonomous weapons systems carry the potential to enable more targeted and precise killing, but the imperatives to do so must still be human-directed; their force is channeled and directed–and perhaps weakened or strengthened–by all sorts of system-level constraints, political, corporate, and economic. The questions such systems prompt are, as they should be, quite different from those that might be directed at an ‘isolated weapon’: Who owns them? Who ‘controls’ them? What are the safeguards against their inappropriate use? Which system’s political, economic, and moral imperatives are written into its operational procedures? The world’s deadliest bomber can be grounded by a political command, its engines left idling by politics; it can also be sent halfway around the world by a similar directive.

An illustrative example may be found in the history of computing itself: the wide-scale deployment of personal computing devices in office settings, and their integration into larger ‘enterprise’ systems, was a long and drawn-out process, one suffering many birthing pains. This was because the computers placed in offices were not, despite appearances, isolated computing devices; they were part of computing systems. They were owned by the employer, not the employee, so they were not really ‘personal’; their usage–hours, security access, etc.–was regulated by company rules; the data on their drives belonged to the employer. (For instance, to print a document, you accessed a networked printer administered by an Information Systems Group; or the computers were not accessible on weekends or after hours.) Under these circumstances, it was a category mistake to regard these machines as isolated personal computing devices; rather, they were part of a much larger commercial system, of which their human users were one component. Claims about their capacities, their desirability, and their efficiencies could only coherently be made within the framework of this system.

Similar considerations apply to autonomous weapons; talk of their roles in warfare, their abilities, and the like is only meaningful within a discursive framework that references the architecture of the system the weapon in question functions as a part of.


Will Artificial Intelligence Create More Jobs Than It Eliminates? Maybe

Over at the MIT Sloan Management Review, H. James Wilson, Paul R. Daugherty, and Nicola Morini-Bianzino strike an optimistic note as they respond to the “distressing picture” created by “the threat that automation will eliminate a broad swath of jobs across the world economy [for] as artificial intelligence (AI) systems become ever more sophisticated, another wave of job displacement will almost certainly occur.” Wilson, Daugherty, and Morini-Bianzino suggest that we’ve been “overlooking” the fact that “many new jobs will also be created — jobs that look nothing like those that exist today.” They go on to identify three key positions:


This first category of new jobs will need human workers to teach AI systems how they should perform — and it is emerging rapidly. At one end of the spectrum, trainers help natural-language processors and language translators make fewer errors. At the other end, they teach AI algorithms how to mimic human behaviors.


The second category of new jobs — explainers — will bridge the gap between technologists and business leaders. Explainers will help provide clarity, which is becoming all the more important as AI systems’ opaqueness increases.


The final category of new jobs our research identified — sustainers — will help ensure that AI systems are operating as designed and that unintended consequences are addressed with the appropriate urgency.

Unfortunately for these claims, it is all too evident that these positions are themselves vulnerable to automation. That is, future trainers, explainers, and sustainers–if their job descriptions match the ones given above–will also be automated. Nothing in those descriptions indicates that humans are indispensable components of the learning processes they are currently facilitating.

Consider that the ‘empathy trainer’ Koko is a machine-learning system which “can help digital assistants such as Apple’s Siri and Amazon’s Alexa address people’s questions with sympathy and depth.” Currently, Koko is being trained by a human; but it will then go on to teach Siri and Alexa. Teachers are training future teachers, who will then go on to teach on their own. Moreover, there is no reason why chatbots, which currently need human trainers, cannot be trained–as they already are–on increasingly large corpora of natural-language texts, like the collections of the world’s libraries (which consist of human inputs).

Or consider that explainers may rely on an algorithmic system like “the Local Interpretable Model-Agnostic Explanations (LIME), which explains the underlying rationale and trustworthiness of a machine prediction”; human analysts use its output to “pinpoint the data that led to a particular result.” But why wouldn’t LIME’s functioning be enhanced to do the relevant ‘pinpointing’ itself, thus obviating the need for a human analyst?
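The worry that the ‘pinpointing’ itself could be automated is easy to illustrate. LIME’s core idea is to fit a simple, interpretable surrogate model to a black-box model’s behavior in the neighborhood of one instance, and read off which features drove the prediction. The sketch below shows that idea end to end with no analyst in the loop; it is a minimal illustration using numpy and scikit-learn rather than the lime library itself, and the `black_box` model and its three features are hypothetical stand-ins.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical black-box model: its prediction depends mostly on feature 0.
def black_box(X):
    return 3.0 * X[:, 0] + 0.1 * X[:, 1]

rng = np.random.default_rng(0)
x0 = np.array([1.0, 1.0, 1.0])  # the single instance whose prediction we want explained

# Perturb the instance and query the black box on the perturbed samples.
samples = x0 + rng.normal(scale=0.5, size=(500, 3))
preds = black_box(samples)

# Weight samples by proximity to x0, then fit a simple linear surrogate locally.
weights = np.exp(-np.sum((samples - x0) ** 2, axis=1))
surrogate = Ridge(alpha=1.0).fit(samples, preds, sample_weight=weights)

# The surrogate's coefficients 'pinpoint' the features behind the prediction.
ranked = np.argsort(-np.abs(surrogate.coef_))
print(ranked[0])  # prints 0: feature 0 drove the prediction
```

Every step here–perturbation, querying, fitting, ranking–is mechanical, which is precisely why the analyst’s pinpointing role looks automatable.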

Finally, note that “AI may become more self-governing….an AI prototype named Quixote…can learn about ethics by reading simple stories….the system is able to “reverse engineer” human values through stories about how humans interact with one another. Quixote has learned…why stealing is not a good idea….even given such innovations, human ethics compliance managers will play a critical role in monitoring and helping to ensure the proper operation of advanced systems.” Despite that closing reassurance, this sounds as though the roles and responsibilities for humans in this domain will decrease over time.

There is no doubt that artificial intelligence systems require human input, supervision, training, and maintenance; but those roles are diminishing, not increasing; short-term, local requirements for human skills will not necessarily translate into long-term, global ones. The economic impact of the coming robotics and artificial intelligence ‘revolution’ is still unclear.

‘Eva’: Love Can Be Skin-Deep (Justifiably)

Kike Maíllo’s Eva makes for an interesting contribution to the genre of robotics and artificial intelligence movies, one that has grown rapidly in recent times. That is because its central concern–the emulation of humanity by robots–while not particularly novel in itself, is portrayed in a familiar and yet distinctive form.

The most common objection to the personhood of the ‘artificially sentient,’ the ‘artificially intelligent,’ or ‘artificial agents’ and ‘artificial persons’ is couched in terms similar to the following: How could silicon and plastic ever feel, taste, hurt? There is no ‘I’ in these beings; no subject, no first person, no self. If such beings ever provoked our affection and concern, those reactions would remain entirely ersatz, for we know too much about their ‘insides,’ about how they work. Our ‘epistemic hegemony’ over these beings–their internals are transparent to us, their designers and makers–and the dissimilarity between their material substrate and ours render impossible their admission to our community of persons (those we consider worthy of our moral concern).

As Eva makes quite clear, such considerations ignore how our relationships with other human beings are actually constructed. We respond first to visible criteria, to observable behavior, to patterns of social interaction; we then seek internal correspondences–biological, physiological–to confirm our initial reactions and our establishment of social ties; we assume too, by way of abduction, an ‘inner world’ much like ours. But biological similarity is not determinative; if the visible behavior is not satisfactory, we do not hesitate to recommend banishment from the community of persons (by ostracism, institutionalization, imprisonment, etc.). And if visible behavior is indeed as rich, varied, and interactive as we imagine it should be for the formation of viable and rewarding relationships, then our desire to admit the being in question to the community of persons worthy of our moral care will withstand putative evidence of considerable difference in constitution and in the nature of ‘inner worlds.’ If Martians consisting solely of green goo on the inside were to land on our planet, treat our children with kindness–i.e., display kind behavior–and provide the right kinds of reasons, whether verbally or by way of display on an LED screen, when we asked them why they did so, only an irredeemable chauvinist would deny them admission to the community of moral persons.

Eva claims that a robot’s ‘mother’ and ‘father’–her human designers–may love her in much the same way they would love their human children. For she may bring joy to their lives in much the same way human children would; she may smile, laugh giddily, play pranks, gaze at them in adoration, demand their protection and care, respond to their affectionate embraces, and so on. In doing so, she provokes older, evolutionarily established instincts of ours. These reactions may strike us as so compelling that even a look ‘under the hood’ may not deter their expression. We might come to learn that extending such feelings of acceptance and care to beings we had not previously considered worthy of them makes new forms of life and relationships manifest. That doesn’t seem like such a bad bargain.