Thinking Of Autonomous Weapons In ‘Systems’ Terms

A persistent confusion in thinking about weapons and their regulation is to insist on viewing weapons in isolation, and not as parts of larger socio-political-economic-legal-ethical systems. In the domain of gun control, for instance, this confusion inspires the counter-slogan ‘guns don’t kill people; people kill people.’ Despite its glibness–and its misuse by the NRA–the slogan encapsulates a vital truth: it is singularly unilluminating to consider a weapon in isolation. Indeed, it is only within the context of a larger system that the object we term a weapon becomes one. A piece of metal is a knife because it is used as one, pressed into service as one by a decision-making agent of some kind, to cut objects, vegetable or animal.

Which brings us to autonomous weapons, a domain where the ethical and regulatory debate is quite clearly demarcated. The case for autonomous weapons is exceedingly familiar: they are more humane because of their greater precision; they can be used to reduce the ‘cost’ of war, both human and material; no more carpet-bombing, just precision strikes, delivered by autonomous weapons–which, moreover, reduce the strain of killing on humans. (That is, these weapons are kinder to those who kill and to those who are killed.) The case against them is similarly familiar: the delegation of lethal decision-making to a machine incapable of fine-grained ethical deliberation is an invitation to moral atrocity, to a situation in which lurking catastrophes are triggered by a moral calculus that makes decisions which are only superficially technically correct. The immaturity of such systems, and of the algorithms they instantiate, makes them especially risky to deploy and use.

Autonomous weapons do not exist in isolation, of course; they are more correctly considered autonomous weapons systems–one part of an economic, military, legal, political, and moral calculus. Their use as weapons is not merely a function of their machinic code; it is a function, rather, of a much more complex ‘code’ made up of bits of legal regulations, political imperatives, and physical and economic constraints. It is these that act together, in concert or in opposition, to ‘fire’ the weapon in question. As such, some of the ‘ethical’ arguments in favor of autonomous weapons systems look a little trite: yes, autonomous weapons systems carry the potential to enable more targeted and precise killing, but the imperatives to kill still need to be human-directed; their force is channeled and directed–and perhaps weakened or strengthened–by all sorts of system-level constraints, political, corporate, and otherwise. The questions such systems prompt are, as they should be, quite different from those that might be directed at an ‘isolated weapon’: Who owns them? Who ‘controls’ them? What are the safeguards on their inappropriate use? Which system’s political and economic and moral imperatives are written into its operational procedures? The world’s deadliest bomber can be grounded by a political command, its engines left idling by politics; it can also be sent half-way around the world by a similar directive.

An illustrative example may be found in the history of computing itself: the wide-scale deployment of personal computing devices in office settings, and their integration into larger ‘enterprise’ systems, was a long and drawn-out process, one suffering many birthing pains. This was because the computers that were placed in offices were not, despite appearances, isolated computing devices; they were part of computing systems. They were owned by the employer, not the employee, so they were not really ‘personal’; their usage–hours, security access, etc.–was regulated by company rules; the data on their drives belonged to the employer. (For instance, to print a document, you accessed a networked printer administered by an Information Systems Group; or, the computers were not accessible on weekends or after hours.) Under these circumstances, it was a category mistake to regard these machines as isolated personal computing devices; rather, they were part of a much larger commercial system, and their human users were one component of it. Claims about their capacities, their desirability, their efficiencies were only coherently made within the framework of this system.

Similar considerations apply to autonomous weapons; talk of their roles in warfare, their abilities, and the like, are only meaningfully expressed within a discursive framework that references the architecture of the system the weapon in question functions as a part of.


Blade Runner 2049: Our Slaves Will Set Us Free

Blade Runner 2049 is a provocative visual and aural treat. It sparked many thoughts, two of which I make note of here; the relationship between the two should be apparent.

  1. What is the research project called ‘artificial intelligence’ trying to do? Is it trying to make machines that can do the things which, if done by humans, would be said to require intelligence–regardless of the particular implementation? Is it trying to accomplish those tasks in the way that human beings do them? Or is it trying to find a non-biological method of reproducing human beings? These are three very different tasks. The first is a purely engineering task; the machine must accomplish the task regardless of the method–any route to the solution will do, so long as it is tractable and efficient. The second is cognitive science, inspired by Giambattista Vico: “the true and the made are convertible” (Verum et factum convertuntur), or “the true is precisely what is made” (Verum esse ipsum factum); we will only understand the mind, and possess a ‘true’ model of it, when we make it. The third is more curious (and related to the second)–it immediately implicates us in the task of making artificial persons. Perhaps by figuring out how the brain works, we can mimic human cognition, but this capacity might be placed in a non-human form made of silicon or plastic or some metal; the artificial persons project insists on a human form–the android or humanoid robot–and on replicating uniquely human capacities, including the moral and aesthetic ones. This would require the original cognitive science project to be extended to an all-encompassing project of understanding human physiology, so that its bodily functions can be replicated. Which immediately raises the question: why make artificial persons? We have a perfectly good way of making human replicants, and many people actually enjoy engaging in the process. So why make artificial persons this way? If the answer is to increase our knowledge of human beings’ workings, then we might well ask: To what end? To cure incurable diseases? To make us happier?
To release us from biological prisons so that we may, in some singularity-inspired fantasy, migrate our souls to these more durable containers? Or do we need them to be in human form, so that they can realistically–in all the right ways–fulfill all the functions we will require them to perform? For instance, as in Westworld, they could be our sex slaves, or as in Blade Runner, they could perform dangerous and onerous tasks that human beings are unwilling or unable to do. And, of course, prop up ecologically unstable civilizations like ours.
  2. It is a philosophical commonplace–well, at least to Goethe and Nietzsche, among others–that constraint is necessary for freedom; we cannot be free unless we are restrained, somehow, by law and rule and regulation and artifice. But is it necessary that we ourselves be restrained in order to be free? The Greeks figured out that one human being could be enslaved, lose his freedom, and that through this loss his owner, his master, could be free; as Hannah Arendt puts it in The Human Condition, the work of the slaves–barbarians and women–does ‘labor’ for the owner, keeping the owner alive, taking care of his biological necessity, and freeing him up to go to the polis and do politics in a state of freedom, in the company of other property-owning householders like him. So: the slave is necessary for freedom; either we enslave ourselves–suppress our appetites and desires and drives, and sublimate and channel them into the ‘right’ outlets–or we enslave someone else. (Freud noted glumly in Civilization and Its Discontents that civilization enslaves our desires.) If we cannot enslave humans, with all their capricious desires to be free, then we can enslave other creatures, perhaps animals, domesticating them to turn them into companions and food. And if we ever become technologically adept at reproducing those processes that produce humans or persons, we can make copies–replicants–of ourselves, artificial persons, that mimic us in all the right ways, and keep us free. These slaves, by being slaves, make us free.

Much more on Blade Runner 2049 anon.

Will Artificial Intelligence Create More Jobs Than It Eliminates? Maybe

Over at the MIT Sloan Management Review, H. James Wilson, Paul R. Daugherty, and Nicola Morini-Bianzino strike an optimistic note as they respond to the “distressing picture” created by “the threat that automation will eliminate a broad swath of jobs across the world economy [for] as artificial intelligence (AI) systems become ever more sophisticated, another wave of job displacement will almost certainly occur.” Wilson, Daugherty, and Morini-Bianzino suggest that we’ve been “overlooking” the fact that “many new jobs will also be created — jobs that look nothing like those that exist today.” They go on to identify three key positions:

Trainers

This first category of new jobs will need human workers to teach AI systems how they should perform — and it is emerging rapidly. At one end of the spectrum, trainers help natural-language processors and language translators make fewer errors. At the other end, they teach AI algorithms how to mimic human behaviors.

Explainers

The second category of new jobs — explainers — will bridge the gap between technologists and business leaders. Explainers will help provide clarity, which is becoming all the more important as AI systems’ opaqueness increases.

Sustainers

The final category of new jobs our research identified — sustainers — will help ensure that AI systems are operating as designed and that unintended consequences are addressed with the appropriate urgency.

Unfortunately for these claims, it is all too evident that these positions themselves are vulnerable to automation. That is, future trainers, explainers, and sustainers–if their job descriptions match the ones given above–will also be automated. Nothing in those descriptions indicates that humans are indispensable components of the learning processes they are currently facilitating.

Consider that the ‘empathy trainer’ Koko is a machine-learning system which “can help digital assistants such as Apple’s Siri and Amazon’s Alexa address people’s questions with sympathy and depth.” Currently Koko is being trained by a human, but it will then go on to teach Siri and Alexa. Teachers are teaching future teachers how to be teachers, after which they will teach on their own. Moreover, there is no reason why chatbots, which currently need human trainers, cannot be trained–as they already are–on increasingly large corpora of natural language texts, like the collections of the world’s libraries (which consist of human inputs).

Or consider that explainers may rely on an algorithmic system like “the Local Interpretable Model-Agnostic Explanations (LIME), which explains the underlying rationale and trustworthiness of a machine prediction”; human analysts use its output to “pinpoint the data that led to a particular result.” But why wouldn’t LIME’s functioning be enhanced to do the relevant ‘pinpointing’ itself, thus obviating the need for a human analyst?
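The kind of ‘pinpointing’ at issue can be sketched in miniature: perturb an input, observe how a black-box model’s prediction changes, and rank features by their influence, with no human analyst in the loop. (A toy illustration in plain Python; the model, feature names, and zeroing-out perturbation are all invented for this sketch. LIME itself instead fits a local linear surrogate model over many random perturbations.)

```python
# Toy perturbation-based 'explainer': rank features by how much
# zeroing each one changes a black-box model's score.
# (Hypothetical model; not LIME's actual algorithm.)

def black_box_model(features):
    # Stand-in for an opaque predictor: a fixed weighted sum.
    weights = {"income": 0.5, "debt": -0.8, "age": 0.1}
    return sum(weights[name] * value for name, value in features.items())

def pinpoint(features, model):
    """Return features sorted by the size of their local influence."""
    baseline = model(features)
    influence = {}
    for name in features:
        perturbed = dict(features, **{name: 0.0})  # zero out one feature
        influence[name] = abs(baseline - model(perturbed))
    return sorted(influence.items(), key=lambda kv: kv[1], reverse=True)

applicant = {"income": 1.0, "debt": 2.0, "age": 3.0}
print(pinpoint(applicant, black_box_model))  # 'debt' ranks as most influential
```

The point of the sketch is that the ranking step is itself mechanical: once the influence scores exist, nothing in the procedure requires a human to read them off.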

Finally, note that “AI may become more self-governing….an AI prototype named Quixote…can learn about ethics by reading simple stories….the system is able to “reverse engineer” human values through stories about how humans interact with one another. Quixote has learned…why stealing is not a good idea….even given such innovations, human ethics compliance managers will play a critical role in monitoring and helping to ensure the proper operation of advanced systems.” Despite that closing reassurance, this sounds like a domain in which the roles and responsibilities of humans will decrease over time.

There is no doubt that artificial intelligence systems require human inputs, supervision, training, and maintenance; but those roles are diminishing, not increasing; short-term, local requirements for human skills will not necessarily translate into long-term, global ones. The economic impact of the coming robotics and artificial intelligence ‘revolution’ is still unclear.

Westworld’s ‘Analysis Mode’ For Humans

In the course of a discussion about the various motivations underlying the character Robert Ford‘s actions in HBO’s Westworld, a friend raised the following query:

In what senses would it be good, and in which bad, if human beings could put one another into ‘analysis mode’ like techs can do with hosts in the show? If analysis mode involved emotional detachment, earnest self-reflectiveness, and transparency, but not unconditional obedience.

As a reminder:

Analysis Mode is a state which hosts enter and leave on command…While in Character Mode, hosts seem unaware of what has transpired when they were in Analysis Mode….This mode is used by staff to maintain, adjust, and to diagnose problems with hosts. In this mode, hosts can answer questions and perform actions, but do not appear to initiate conversation or actions….While in Analysis Mode, hosts often do not appear to make eye contact, much like an autistic human, or it could be described as the eyes being unfocused like someone who is day dreaming. However, there are also numerous times when hosts in Analysis Mode do make eye contact with their interviewers.

One effect of the kind of ‘analysis mode’ imagined above would be that humans would be able to transition into a more ‘honest’ interactive state: they could request clarification and explanations of actions and statements from those they interact with; some of the inexplicable nature of our fellow humans could be clarified thus. This immediately suggests that a) humans would not allow just anyone to place them in ‘analysis mode’ and b) there would be limits on the ‘level’ of analysis allowed. We rely on a great deal of masking in our interactions with others: rarely do we disclose our ‘true’ or ‘actual’ or ‘basic’ motives for an action; a great deal of artifice underwrites even our most ‘honest’ relationships. Indeed, it is not clear to me that such a capacity would permit our current social spaces to be constructed and maintained as they are; they rely for their current form on the ‘iceberg’ model–that which is visible serves to cover a far greater reservoir of the invisible. These considerations suggest that we might ask: Who would allow such access to themselves? Why would they do so? Under what circumstances? (Could you, for instance, just place an interlocutor, on the street, in the boardroom, into ‘analysis mode’?)

As might be obvious, what underwrites the suggestion above is the hope that underwrites various forms of psychotherapy, which, of course, is what ‘analysis mode’ sounds a lot like: that under persistent, guided, querying, we would make ourselves more transparent–to ourselves. Moreover, we could reduce the hurt and confusion which often results from our actions by ‘clarifying’ ourselves; by explaining why we did what we did. As the caveat about ‘unconditional obedience’ acknowledges, we generally do not allow therapeutic analysis to proceed in any direction, without limit (psychoanalysis puts this down to ‘unconscious resistance.’) The ‘bad’ here would be those usual results we imagine issuing from greater transparency: that our current relationships would not survive if we were really aware of each others’ motivations and desires.

‘Analysis mode’–understood in the way suggested above–would perhaps only be possible or desirable in a society comfortable with, and accustomed to, the greater access to each other that such interactions would produce.

‘Westworld’ And Our Constitutive Loneliness

The title sequence to HBO’s Westworld is visually and aurally beautiful, melancholic, and ultimately haunting: artifacts–whose artifice is clearly visible–take shape in front of us, manufactured and brought into being by sophisticated devices, presumably robotic ones just like them; their anatomies and shapes and forms and talents are human-like; and that is all we need to begin to empathize with them. Empathize with what? The emotions of these entities are ersatz; there is nothing and no one there. Or so we are told. But we don’t need those emotions and feelings to be ‘real’–whatever that means. We merely need a reminder–in any way, from any quarter–about the essential features of our existence, and we are off and running, sent off into that endless mope and funk that is our characteristic state of being.

The robot and the android–the ‘host’ in Westworld–are there to provide bodies to be raped, killed, and tortured by the park’s guests; we, the spectators, are supposed to be ashamed of our species, of our endless capacity for entertainment at the expense of the easily exploited, a capacity which finds its summum malum with a demographic that is controlled by us in the most profound way possible–for we control their minds and bodies. 1984‘s schemers had nothing on this. And the right set-up, the right priming for this kind of reaction, is provided by the title track–even more than by the many scenes which show hosts crying, moaning with pleasure, flying into a rage–for it places squarely in front of us our loneliness, our sense of being puppets at the beck and call of forces beyond our control. (The loneliness of the hosts being manufactured in the title sequence is enhanced by their placement against a black background; all around them, the darkness laps at the edges, held back only by the light emergent from the hosts’ bodies; we sense that their existence is fragile and provisional.)

We have long known that humans need only the tiniest suggestion of similarity and analogy to switch on their full repertoire of empathetic reactions; we smile at faces drawn on footballs; we invent personal monikers for natural landmarks that resemble anatomic features; we deploy a language rich with psychological predicates for such interactions as soon as we possibly can, and only abandon it with reluctance when we notice that more efficient languages are available. We are desperate to make contact with anyone or anything, desperate to extend our community, desperate for some reassurance against this terrible isolation we feel–even in, or perhaps especially in, the company of the ones we love, for they remind us, with their own unique and peculiar challenges, just how alone we actually are. We would not wish this situation on anyone else; not even on creatures whose ‘insides’ do not look like ours. The melancholia we feel when we listen to, and see, Westworld‘s title sequence tells us our silent warnings have gone unheeded; another being is among us, inaccessible to us, and to itself. And we have made it so; our greatest revenge was to visit the horrors of existence on another being.

‘Eva’: Love Can Be Skin-Deep (Justifiably)

Kike Maíllo’s Eva makes for an interesting contribution to the ever-growing–in recent times–genre of robotics and artificial intelligence movies. That is because its central concern–the emulation of humanity by robots–while not particularly novel in itself, is portrayed in a familiar and yet distinctive form.

The most common objection to the personhood of the ‘artificially sentient,’ the ‘artificially intelligent,’ or ‘artificial agents’ and ‘artificial persons’ is couched in terms similar to the following: How could silicon and plastic ever feel, taste, hurt? There is no ‘I’ in these beings; no subject, no first-person, no self. If such beings ever provoked our affection and concerns, those reactions would remain entirely ersatz. We know too much about their ‘insides,’ about how they work. Our ‘epistemic hegemony’ over these beings–their internals are transparent to us, their designers and makers–and the dissimilarity between their material substrate and ours renders impossible their admission to our community of persons (those we consider worthy of our moral concern.)

As Eva makes quite clear, such considerations ignore the reality of how our relationships with other human beings are actually constructed. We respond first to visible criteria, to observable behavior, to patterns of social interaction; we then seek internal correspondences–biological, physiological–for these, to confirm our initial reactions and establishments of social ties; we assume too, by way of abduction, an ‘inner world’ much like ours. But biological similarity is not determinative; if the visible behavior is not satisfactory, we do not hesitate to recommend banishment from the community of persons. (By ostracism, institutionalization, imprisonment, etc.) And if visible behavior is indeed as rich and varied and interactive as we imagine it should be for the formation of viable and rewarding relationships, then our desire to admit the being in question to the community of persons worthy of our moral care will withstand putative evidence that there is considerable difference in constitution and in the nature of ‘inner worlds.’ If Martians consisting solely of green goo on the inside were to land on our planet and treat our children with kindness, i.e., display kind behavior, and provide the right kinds of reasons–whether verbally or by way of display on an LED screen–when we asked them why they did so, only an irredeemable chauvinist would deny them admission to the community of moral persons.

Eva claims that a robot’s ‘mother’ and ‘father’–her human designers–may love her in much the same way they would love their human children. For she may bring joy to their lives in much the same way those children would; she may smile, laugh giddily, play pranks, gaze at them in adoration, demand their protection and care, respond to their affectionate embraces, and so on. In doing so, she provokes older, evolutionarily established instincts of ours. These reactions may strike us as so compelling that even a look ‘under the hood’ may not deter their expression. We might come to learn that extending such feelings of acceptance and care to beings we had not previously considered worthy of them might make new forms of life and relationships manifest. That doesn’t seem like such a bad bargain.

One Vision Of A Driverless Car Future: Eliminating Private Car Ownership

Most analysis of a driverless car future concentrates on the gains in safety: ‘robotic’ cars will adhere more closely to speed limits and other traffic rules and over a period of time, by eliminating human error and idiosyncrasies, produce a safer environment on our roads. This might be seen as an architectural modification of human driving behavior to produce safer driving outcomes–rather than making unsafe driving illegal, more expensive, or socially unacceptable, just don’t let humans drive.

But there are other problems–environmental degradation and traffic–that could be addressed by mature driverless car technologies. The key to their solution lies in moving away from private car ownership.

To see this, consider that at any given time, we have too many cars on the roads. Some are being driven; others are parked. If you own a car, you drive it from point to point, and park it when you are done using it: you drive to work in the morning, and your car sits in a lot all day. Eight hours later–at the end of an average work-day–you leave your office, drive home, and park the car again, ready to use it the next morning. Through the night, your car sits idle again, taking up space. If only someone else could use your car while you didn’t need it. They wouldn’t need to buy a separate car for themselves and add to the congestion on the highways. And in parking lots.

Why not simply replace privately owned, human-driven cars with a gigantic fleet of robotic taxis? When you need a car, you call for one. When you are done using it, you release it back into the pool. You don’t park it; it simply goes back to answering its next call. Need to go to work in the morning? Call a car. Run an errand with heavy lifting? Call a car. And so on. Cars shared in this fashion could thus eliminate the gigantic redundancy in car ownership that leads to choked highways, mounting smog and pollution, endless, futile construction of parking towers, and elaborate congestion pricing schemes. (The key phrase here is, of course, ‘mature driverless car technologies.’ If you need a car for an elaborate road-trip through the American West, perhaps you could place a longer, more expensive hold on it, so that it doesn’t drive off while you are taking a quick photo or two of a canyon.)
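The redundancy argument above can be made concrete with a toy dispatcher: riders call a car, use it, and release it back into the pool, so the fleet only needs to cover the peak number of overlapping trips, not the number of riders. (A deliberately minimal sketch in Python; the class, the fleet size, and the trip schedule are all invented for illustration, not a real fleet-management system.)

```python
# Toy shared-fleet dispatcher: call a car, use it, release it back
# into the pool. No car is ever 'parked'; an idle car is simply
# waiting for its next call.

class CarPool:
    def __init__(self, fleet_size):
        self.idle = list(range(fleet_size))  # car ids awaiting a call
        self.in_use = set()

    def call_car(self):
        if not self.idle:
            raise RuntimeError("no car available: fleet too small for demand")
        car = self.idle.pop()
        self.in_use.add(car)
        return car

    def release_car(self, car):
        self.in_use.remove(car)
        self.idle.append(car)  # straight back into the pool, no parking lot

# Three cars serve ten trips, because trips overlap at most three at a time;
# private ownership would have required ten cars for the same schedule.
pool = CarPool(fleet_size=3)
rush_hour = [pool.call_car() for _ in range(3)]  # peak concurrent demand
for car in rush_hour:
    pool.release_car(car)
for _ in range(7):  # seven more staggered, non-overlapping trips
    pool.release_car(pool.call_car())
print(len(pool.idle))  # all three cars are idle again, awaiting calls
```

The design choice doing the work here is that `release_car` returns the car to the pool instead of to a parking space: utilization replaces storage, which is exactly the redundancy the paragraph above describes.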

Such a future entails that there will be no more personal, ineffable, fetishized relationships with cars. They will not be your babies to be cared for and loved. Their upholstery will not remind you of days gone by. Your children will not feel sentimental about the clunker that was a part of their growing up. And so on. I suspect these sorts of attachments to the car will be very easily forgotten once we have reckoned with the sheer pleasure of not having to deal with driving tests–and the terrors of teaching our children how to drive–the DMV, buying car insurance, looking for parking, and best of all, other drivers.

I, for one, welcome our robotic overlords in this domain.