Thinking Of Autonomous Weapons In ‘Systems’ Terms

A persistent confusion in thinking about weapons and their regulation is to insist on viewing weapons in isolation, and not as part of larger socio-political-economic-legal-ethical systems. This confusion, in the domain of gun control for instance, inspires the counter-slogan ‘guns don’t kill people; people kill people.’ Despite its glibness–and its misuse by the NRA–the slogan encapsulates a vital truth: it is singularly unilluminating to consider a weapon in isolation. Indeed, it is only within the context of a larger system that the object we term a weapon becomes one. A piece of metal is a knife because it is used as one, pressed into service as one by a decision-making agent of some kind, to cut objects, vegetable or animal.

Which brings us to autonomous weapons, a domain where the ethical and regulatory debate is quite clearly demarcated. The case for autonomous weapons is exceedingly familiar: they are more humane because of their greater precision; they can be used to reduce the ‘cost’ of war, both human and material; no more carpet-bombing, just precision strikes, delivered by autonomous weapons–which, moreover, reduce the strain of killing on humans. (That is, these weapons are kinder to those who kill and to those who are killed.) The case against them is similarly familiar: the delegation of lethal decision-making to a machine incapable of fine-grained ethical deliberation is an invitation to moral atrocity, to a situation in which lurking catastrophes are triggered by a moral calculus that makes decisions which are only superficially technically correct. The immaturity of such systems, and of the algorithms they instantiate, makes them especially risky to deploy and use.

Autonomous weapons do not exist in isolation, of course; they are more correctly considered autonomous weapons systems–as one part of an economic, military, legal, political, and moral calculus. Their use as weapons is not merely a function of their machinic code; it is a function, rather, of a much more complex ‘code’ made up of bits of legal regulations, political imperatives, and physical and economic constraints. It is these that act together, in concert or in opposition, to ‘fire’ the weapon in question. As such, some of the ‘ethical’ arguments in favor of autonomous weapons systems look a little trite: yes, autonomous weapons systems carry the potential to enable more targeted and precise killing, but the imperatives to do so still need to be human-directed; their force is channeled and directed–and perhaps weakened or strengthened–by all sorts of system-level constraints, political and corporate ones among them. The questions such systems prompt are, as they should be, quite different from those that might be directed at an ‘isolated weapon’: Who owns them? Who ‘controls’ them? What are the safeguards on their inappropriate use? Which system’s political and economic and moral imperatives are written into its operational procedures? The world’s deadliest bomber can be grounded by a political command, its engines left idling by politics; it can also be sent half-way around the world by a similar directive.
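
To make the ‘complex code’ metaphor concrete, here is a minimal, purely illustrative sketch (every name and check below is hypothetical, modeled on no real weapons system) of how a weapon’s ‘firing’ might be gated by system-level constraints rather than by its targeting algorithm alone:

```python
# Hypothetical sketch only: each check is an invented stand-in for a legal,
# political, or economic constraint; none models any real weapons system.
from dataclasses import dataclass


@dataclass
class SystemContext:
    legally_authorized: bool          # compliant with applicable law and treaty
    politically_directed: bool        # a standing political command to engage
    within_rules_of_engagement: bool  # target and context satisfy the ROE
    logistically_feasible: bool       # fuel, munitions, maintenance in place


def may_fire(ctx: SystemContext) -> bool:
    # The 'code' that fires the weapon is the conjunction of these
    # system-level constraints, not the machinic code of the weapon alone.
    return all([
        ctx.legally_authorized,
        ctx.politically_directed,
        ctx.within_rules_of_engagement,
        ctx.logistically_feasible,
    ])


# The world's deadliest bomber, grounded by politics: capable, but idle.
print(may_fire(SystemContext(True, False, True, True)))  # False
```

The point of the sketch is only that the weapon’s own ‘decision’ is the last, and least interesting, conjunct in a much longer expression.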

An illustrative example may be found in the history of computing itself: the wide-scale deployment of personal computing devices in office settings, their integration into larger ‘enterprise’ systems, was a long and drawn-out process, one suffering many birthing pains. This was because the computers that were placed in offices were not, despite appearances, isolated computing devices; they were part of computing systems. They were owned by the employer, not the employee, so they were not really ‘personal’; their usage–hours, security access, and so on–was regulated by company rules; the data on their drives belonged to the employer. (For instance, to print a document, you accessed a networked printer administered by an Information Systems Group; or the computers were not accessible on weekends or after hours.) Under these circumstances, it was a category mistake to regard these machines as isolated personal computing devices; rather, they were part of a much larger commercial system, of which their human users were one component. Claims about their capacities, their desirability, their efficiencies were only coherently made within the framework of this system.

Similar considerations apply to autonomous weapons; talk of their roles in warfare, their abilities, and the like is only meaningful within a discursive framework that references the architecture of the system the weapon in question functions as a part of.


Blade Runner 2049: Our Slaves Will Set Us Free

Blade Runner 2049 is a provocative visual and aural treat. It sparked many thoughts, two of which I make note of here; the relationship between the two should be apparent.

  1. What is the research project called ‘artificial intelligence’ trying to do? Is it trying to make machines that can do the things which, if done by humans, would be said to require intelligence? Regardless of the particular implementation? Is it trying to accomplish those tasks in the way that human beings do them? Or is it trying to find a non-biological method of reproducing human beings? These are three very different tasks. The first is a purely engineering task; the machine must accomplish the task regardless of the method–any route to the solution will do, so long as it is tractable and efficient. The second is cognitive science, inspired by Giambattista Vico: “the true and the made are convertible” (Verum et factum convertuntur) or “the true is precisely what is made” (Verum esse ipsum factum); we will only understand the mind, and possess a ‘true’ model of it, when we make it. The third is more curious (and related to the second)–it immediately implicates us in the task of making artificial persons. Perhaps by figuring out how the brain works, we can mimic human cognition, but this capacity might be placed in a non-human form made of silicon or plastic or some metal; the artificial persons project insists on a human form–the android or humanoid robot–and on replicating uniquely human capacities, including the moral and aesthetic ones. This would require the original cognitive science project to be extended to an all-encompassing project of understanding human physiology so that its bodily functions can be replicated. Which immediately raises the question: why make artificial persons? We have a perfectly good way of making human replicants, and many people actually enjoy engaging in the process. So why make artificial persons this way? If the answer is to increase our knowledge of human beings’ workings, then we might well ask: To what end? To cure incurable diseases? To make us happier? To release us from biological prisons so that we may, in some singularity-inspired fantasy, migrate our souls to these more durable containers? Or do we need them to be in human form, so that they can realistically–in all the right ways–fulfill all the functions we will require them to perform? For instance, as in Westworld, they could be our sex slaves, or as in Blade Runner, they could perform dangerous and onerous tasks that human beings are unwilling or unable to do. And, of course, prop up ecologically unstable civilizations like ours.
  2. It is a philosophical commonplace–well, at least to Goethe and Nietzsche, among others–that constraint is necessary for freedom; we cannot be free unless we are restrained, somehow, by law and rule and regulation and artifice. But is it necessary that we ourselves be restrained in order to be free? The Greeks figured out that one man could be enslaved, lose his freedom, and that through this loss his owner, his master, could be free; as Hannah Arendt puts it in The Human Condition, the work of the slaves–barbarians and women–does ‘labor’ for the owner, keeping the owner alive, taking care of his biological necessity, and freeing him up to go to the polis and do politics in a state of freedom, in the company of other property-owning householders like him. So: the slave is necessary for freedom; either we enslave ourselves, suppress our appetites and desires and drives and sublimate and channel them into the ‘right’ outlets, or we enslave someone else. (Freud noted glumly in Civilization and its Discontents that civilization enslaves our desires.) If we cannot enslave humans, with all their capricious desires to be free, then we can enslave other creatures, perhaps animals, domesticating them to turn them into companions and food. And if we ever become technologically adept at reproducing those processes that produce humans or persons, we can make copies–replicants–of ourselves, artificial persons, that mimic us in all the right ways, and keep us free. These slaves, by being slaves, make us free.

Much more on Blade Runner 2049 anon.

Will Artificial Intelligence Create More Jobs Than It Eliminates? Maybe

Over at the MIT Sloan Management Review, H. James Wilson, Paul R. Daugherty, and Nicola Morini-Bianzino strike an optimistic note as they respond to the “distressing picture” created by “the threat that automation will eliminate a broad swath of jobs across the world economy [for] as artificial intelligence (AI) systems become ever more sophisticated, another wave of job displacement will almost certainly occur.” Wilson, Daugherty, and Morini-Bianzino suggest that we’ve been “overlooking” the fact that “many new jobs will also be created — jobs that look nothing like those that exist today.” They go on to identify three key positions:

Trainers

This first category of new jobs will need human workers to teach AI systems how they should perform — and it is emerging rapidly. At one end of the spectrum, trainers help natural-language processors and language translators make fewer errors. At the other end, they teach AI algorithms how to mimic human behaviors.

Explainers

The second category of new jobs — explainers — will bridge the gap between technologists and business leaders. Explainers will help provide clarity, which is becoming all the more important as AI systems’ opaqueness increases.

Sustainers

The final category of new jobs our research identified — sustainers — will help ensure that AI systems are operating as designed and that unintended consequences are addressed with the appropriate urgency.

Unfortunately for these claims, it is all too evident that these positions themselves are vulnerable to automation. That is, future trainers, explainers, and sustainers–if their job descriptions match the ones given above–will also be automated. Nothing in those descriptions indicates that humans are indispensable components of the learning processes they are currently facilitating.

Consider that the ‘empathy trainer’ Koko is a machine-learning system which “can help digital assistants such as Apple’s Siri and Amazon’s Alexa address people’s questions with sympathy and depth.” Currently, Koko is being trained by humans; it will then go on to teach Siri and Alexa. Teachers are teaching future teachers how to be teachers, after which those teachers will teach on their own. Moreover, there is no reason why chatbots, which currently need human trainers, cannot be trained–as they already are–on increasingly large corpora of natural language texts like the collections of the world’s libraries (which consist of human inputs.)
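
To see the bootstrapping at issue, consider a minimal sketch of supervised training (the messages and labels below are invented placeholders, and this is in no way Koko’s actual pipeline): human trainers label examples once; the resulting model then classifies new inputs with no human in the loop.

```python
# Hypothetical sketch: human trainers supply labeled examples once;
# the trained model then handles new messages on its own.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented stand-ins for human-curated training data.
messages = [
    "I feel so alone lately",
    "Nothing I do seems to matter",
    "What's the weather tomorrow?",
    "Set a timer for ten minutes",
]
needs_empathy = [1, 1, 0, 0]  # labels supplied by the human trainer

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, needs_empathy)

# After training, no human is required to handle a new message.
print(model.predict(["I feel like nothing matters lately"]))
```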

Or consider that explainers may rely on an algorithmic system like “the Local Interpretable Model-Agnostic Explanations (LIME), which explains the underlying rationale and trustworthiness of a machine prediction”; human analysts use its output to “pinpoint the data that led to a particular result.” But why wouldn’t LIME’s functioning be enhanced to do the relevant ‘pinpointing’ itself, thus obviating the need for a human analyst?
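
The LIME package is real and open-source; here is a minimal sketch of its use (with scikit-learn’s toy iris dataset and a random forest standing in for a production model), which suggests how much of the ‘pinpointing’ is already mechanized:

```python
# A minimal sketch of LIME usage; the dataset and classifier are
# toy stand-ins, not any production system.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

iris = load_iris()
clf = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)

explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=iris.target_names,
    discretize_continuous=True,
)

# Explain one prediction: which features pushed the model toward its answer?
exp = explainer.explain_instance(iris.data[0], clf.predict_proba, num_features=4)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")
```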

Finally, note that “AI may become more self-governing….an AI prototype named Quixote…can learn about ethics by reading simple stories….the system is able to “reverse engineer” human values through stories about how humans interact with one another. Quixote has learned…why stealing is not a good idea….even given such innovations, human ethics compliance managers will play a critical role in monitoring and helping to ensure the proper operation of advanced systems.” Despite that final claim, this sounds like the roles and responsibilities for humans in this domain will decrease, not increase, over time.

There is no doubt that artificial intelligence systems require human inputs, supervision, training, and maintenance; but those roles are diminishing, not increasing; short-term, local requirements for human skills will not necessarily translate into long-term, global ones. The economic impact of the coming robotics and artificial intelligence ‘revolution’ is still unclear.

Westworld’s ‘Analysis Mode’ For Humans

In the course of a discussion about the various motivations underlying the character Robert Ford‘s actions in HBO’s Westworld, a friend raised the following query:

In what senses would it be good, and in which bad, if human beings could put one another into ‘analysis mode’ like techs can do with hosts in the show? If analysis mode involved emotional detachment, earnest self-reflectiveness, and transparency, but not unconditional obedience.

As a reminder:

Analysis Mode is a state which hosts enter and leave on command…While in Character Mode, hosts seem unaware of what has transpired when they were in Analysis Mode….This mode is used by staff to maintain, adjust, and to diagnose problems with hosts. In this mode, hosts can answer questions and perform actions, but do not appear to initiate conversation or actions….While in Analysis Mode, hosts often do not appear to make eye contact, much like an autistic human, or it could be described as the eyes being unfocused like someone who is day dreaming. However, there are also numerous times when hosts in Analysis Mode do make eye contact with their interviewers.

One effect of the kind of ‘analysis mode’ imagined above would be that humans would be able to transition into a more ‘honest’ interactive state: they could request clarification and explanations of actions and statements from those they interact with; some of the inexplicable nature of our fellow humans could be clarified thus. This immediately suggests that a) humans would not allow just anyone to place them in ‘analysis mode’ and b) there would be limits on the ‘level’ of analysis allowed. We rely on a great deal of masking in our interactions with others: rarely do we disclose our ‘true’ or ‘actual’ or ‘basic’ motives for an action; a great deal of artifice underwrites even our most ‘honest’ relationships. Indeed, it is not clear to me that such a capacity would permit our current social spaces to be constructed and maintained as they are; they rely for their current form on the ‘iceberg’ model–that which is visible serves to cover a far greater reservoir of the invisible. These considerations suggest that we might ask: Who would allow such access to themselves? Why would they do so? Under what circumstances? (Could you, for instance, just place an interlocutor, on the street, in the boardroom, into ‘analysis mode’?)

As might be obvious, what underwrites the suggestion above is the hope that underwrites various forms of psychotherapy, which, of course, is what ‘analysis mode’ sounds a lot like: that under persistent, guided, querying, we would make ourselves more transparent–to ourselves. Moreover, we could reduce the hurt and confusion which often results from our actions by ‘clarifying’ ourselves; by explaining why we did what we did. As the caveat about ‘unconditional obedience’ acknowledges, we generally do not allow therapeutic analysis to proceed in any direction, without limit (psychoanalysis puts this down to ‘unconscious resistance.’) The ‘bad’ here would be those usual results we imagine issuing from greater transparency: that our current relationships would not survive if we were really aware of each others’ motivations and desires.

‘Analysis mode’–understood in the way suggested above–would perhaps only be possible or desirable in a society comfortable with, and accustomed to, the greater access to each other that such interactions would produce.

‘Westworld’ And The American West As Locale For Self-Reconfiguration

It is perhaps unsurprising that Westworld is Westworld; if American mythology is to be staged anywhere, the West is a natural locale. In the original Westworld, the West meant a zone in which certain kinds of adventures were facilitated: gun battles mostly, but also sex with perfect strangers who cared little for who you were and only wanted your money. In the new Westworld, an implicit motif of the first becomes more explicit: Westworld is where you go to find yourself–whoever and whatever that may be. In this new Westworld, the landscape, only background scenery in the old, now becomes more prominent; we are reminded again and again of its beauty, wildness, and implacable hostility and indifference. If you want to make a show about self-discovery, reconfiguration, journeys into and across space and time, the American West–for many historical and cultural reasons–is a good call. The physical spaces are vast, mapping neatly on to the immense unexplored spaces of the mind; the beauty is enthralling, sparking vision after vision in us of possibility, and also, as Rilke reminded us, bringing us closer to terror: those cliffs, those bluffs, those steep walls, that burning sun, the rattlesnakes, the dangers of other humans. The deployment of the American West also taps into a deeper mythology that self-discovery takes place away from other humans–in the wild. If we are to traverse our mind, then Westworld–like many other recountings of human experience before it–suggests we need tremendous physical spaces too. We could not do this in a crowded city. Those endless horizons and canopies of the sheltering sky are necessary for the suggestion of infinite possibility.

And then, there is the violence. The American West’s land is soaked in blood, in memories of a people decimated, of massacres, starvation, and rape. If you want to stage a modern-day genocide–and the continuing thirty-five-year-old slaughter of ‘hosts’ is most definitely a genocide, even if an eternally recurring one–then, again, the West is the correct locale. It is significant that in this version of the American West, there are very few Native Americans; there are some ‘greasers‘–cannon fodder, obviously–but very few ‘redskins.’ The makers of the show seem to have wisely decided that it was best to mostly write Native Americans out of the show rather than risk getting their depiction and usage wrong, which they almost certainly would have. (In the one episode in which Native Americans make an appearance, they are the stuff of nightmare, much as they must have been for the ‘pioneers,’ their imaginations inflamed by stories of how they had to keep their women safe from the depredations of the savages on the prairies.) This American West is one which has already been cleansed of the Native American; an alternative rendering of Westworld, one whose dark satire would have cut too close to the bone, would be one in which park visitors would get to shoot all the whoopin’ n’ hollerin’ Injuns they wanted.

MedievalWorld and SamuraiWorld would also allow for the exploration of themes pertaining to the possible sentience of robots, but their locales might not, at least for American audiences, suggest the possibilities of our own reconfiguration quite so well.

‘Westworld’ And Our Constitutive Loneliness

The title sequence to HBO’s Westworld is visually and aurally beautiful, melancholic, and ultimately haunting: artifacts–whose artifice is clearly visible–take shape in front of us, manufactured and brought into being by sophisticated devices, presumably robotic ones just like them; their anatomies and shapes and forms and talents are human-like; and that is all we need to begin to empathize with them. Empathize with what? The emotions of these entities is ersatz; there is nothing and no one there. Or so we are told. But we don’t need those emotions and feelings to be ‘real’–whatever that means. We merely need a reminder–in any way, from any quarter–about the essential features of our existence, and we are off and running, sent off into that endless mope and funk that is our characteristic state of being.

The robot and the android–the ‘host’ in Westworld–is there to provide bodies to be raped, killed, and tortured by the park’s guests; we, the spectators, are supposed to be ashamed of our species, of our endless capacity for entertainment at the expense of the easily exploited, a capacity which finds its summum malum with a demographic that is controlled by us in the most profound way possible–for we control their minds and bodies. 1984‘s schemers had nothing on this. And the right set-up, the right priming for this kind of reaction is provided by the title track–even more than the many scenes which show hosts crying, moaning with pleasure, flying into a rage–for it places squarely in front of us our loneliness, our sense of being puppets at the beck and call of forces beyond our control. (The loneliness of the hosts being manufactured in the title sequence is enhanced by their placement against a black background; all around them, the darkness laps at the edges, held back only by the light emergent from the hosts’ bodies; we sense that their existence is fragile and provisional.)

We have long known that humans need only the tiniest suggestion of similarity and analogy to switch on their full repertoire of empathetic reactions; we smile at faces drawn on footballs; we invent personal monikers for natural landmarks that resemble anatomic features; we deploy a language rich with psychological predicates for such interactions as soon as we possibly can, and only abandon it with reluctance when we notice that more efficient languages are available. We are desperate to make contact with anyone or anything, desperate to extend our community, to find reassurance in the face of this terrible isolation we feel–even in, or perhaps especially in, the company of the ones we love, who remind us, with their own unique and peculiar challenges, just how alone we actually are. We would not wish this situation on anyone else; not even on creatures whose ‘insides’ do not look like ours. The melancholia we feel when we listen to, and see, Westworld‘s title sequence tells us our silent warnings have gone unheeded; another being is among us, inaccessible to us, and to itself. And we have made it so; our greatest revenge was to visit the horrors of existence on another being.

RIP Hilary Putnam 1926-2016

During the period of my graduate studies in philosophy, it came to seem to me that William James‘ classic distinction between tough- and tender-minded philosophers had been reworked just a bit. The tough philosophers were still empiricists and positivists, but they had begun to show some of the same inclinations that the supposedly tender-minded in James’ distinction did: they wanted grand over-arching systems, towering receptacles into which all of reality could be neatly poured; they were enamored of reductionism; they had acquired new idols, like science (and metaphysical realism), and new tools, those of mathematics and logic.

Hilary Putnam was claimed as a card-carrying member of this tough-minded group: he was a logician, mathematician, computer scientist, and analytic philosopher of acute distinction. He wrote non-trivial papers on mathematics and computer science (the MRDP theorem, the Davis-Putnam algorithm), philosophy of language (the causal theory of reference), and philosophy of mind (functionalism, the multiple realizability of the mental)–the grand trifecta of the no-bullshit, hard-headed analytic philosopher, the one capable of handing your woolly, unclear, tender continental philosophy ass to you on a platter.

I read many of Putnam’s classic works as a graduate student; he was always a clear writer, even as he navigated the thickets of some uncompromisingly dense material. Along with Willard Van Orman Quine, he was clearly the idol of many analytic philosophers-in-training; we grew up on a diet of Quine-Putnam-Kripke. You thought of analytic philosophy, and you thought of Putnam. Whether it was this earth, or its twin, there he was.

I was already quite uncomfortable with analytic philosophy’s preoccupations, methods, and central claims as I finished my PhD; I had not yet become aware that the man I thought of as its standard-bearer had started to step down from that position before I even began graduate school. When I encountered him again, after I had finished my dissertation and my post-doctoral fellowship, I found a new Putnam.

This Putnam was a philosopher who had moved away from metaphysical realism and scientism, who had found something to admire in the American pragmatists, who had become enamored of the Wittgenstein of the Philosophical Investigations. He now dismissed the fact-value dichotomy and, indeed, now wrote on subjects that ‘tough-minded analytic philosophers’ from his former camp would not be caught dead writing about: political theory and religion in particular. He even fraternized with the enemy, drawing inspiration, for instance, from Jürgen Habermas.

My own distaste for scientism and my interest in pragmatism (of both the paleo and neo varietals) and the late Wittgenstein meant that the new Putnam was an intellectual delight for me. (His 1964 paper ‘Robots: Machines or Artificially Created Life?’ significantly influenced my thoughts as I wrote my book on a legal theory for autonomous artificial agents.) I read his later works with great relish and marveled at his tone of writing: he was ecumenical, gentle, tolerant, and, crucially, wise. He had lived and learned; he had traversed great spaces of learning, finding that many philosophical perspectives abounded, and he had, as a good thinker must, struggled to integrate them into his intellectual framework. He seemed to have realized that the most acute philosophical ideal of all was a constant taking on and trying out of ideas, seeing if they worked in consonance with your life projects and those of the ones you cared for (this latter group can be as broad as the human community.) I was reading a philosopher who seemed to be doing philosophy in the way I understood it, as a way of making sense of this world without dogma.

I never had any personal contact with him, so I cannot share stories or anecdotes, no tales of directed inspiration or encouragement. But I can try to gesture in the direction of the pleasure he provided in his writing and his always visible willingness to work through the challenges of this world, this endlessly complicated existence. Through his life and work he provided an ideal of the engaged philosopher.

RIP Hilary Putnam.