‘Westworld’ And Our Constitutive Loneliness

The title sequence to HBO’s Westworld is visually and aurally beautiful, melancholic, and ultimately haunting: artifacts–whose artifice is clearly visible–take shape in front of us, manufactured and brought into being by sophisticated devices, presumably robotic ones just like them; their anatomies and shapes and forms and talents are human-like; and that is all we need to begin to empathize with them. Empathize with what? The emotions of these entities are ersatz; there is nothing and no one there. Or so we are told. But we don’t need those emotions and feelings to be ‘real’–whatever that means. We merely need a reminder–in any way, from any quarter–of the essential features of our existence, and we are off and running, sent off into that endless mope and funk that is our characteristic state of being.

The robot and the android–the ‘host’ in Westworld–are there to provide bodies to be raped, killed, and tortured by the park’s guests; we, the spectators, are supposed to be ashamed of our species, of our endless capacity for entertainment at the expense of the easily exploited, a capacity which finds its summum malum with a demographic that is controlled by us in the most profound way possible–for we control their minds and bodies. 1984’s schemers had nothing on this. And the right set-up, the right priming for this kind of reaction, is provided by the title track–even more than the many scenes which show hosts crying, moaning with pleasure, flying into a rage–for it places squarely in front of us our loneliness, our sense of being puppets at the beck and call of forces beyond our control. (The loneliness of the hosts being manufactured in the title sequence is enhanced by their placement against a black background; all around them, the darkness laps at the edges, held back only by the light emergent from the hosts’ bodies; we sense that their existence is fragile and provisional.)

We have long known that humans need only the tiniest suggestion of similarity and analogy to switch on their full repertoire of empathetic reactions; we smile at faces drawn on footballs; we invent personal monikers for natural landmarks that resemble anatomical features; we deploy a language rich with psychological predicates for such interactions as soon as we possibly can, and abandon it only with reluctance when we notice that more efficient languages are available. We are desperate to make contact with anyone or anything, desperate to extend our community, to find reassurance against that terrible isolation we feel–even in, or perhaps especially in, the company of the ones we love, for they remind us, with their own unique and peculiar challenges, just how alone we actually are. We would not wish this situation on anyone else, not even on creatures whose ‘insides’ do not look like ours. The melancholia we feel when we listen to, and see, Westworld’s title sequence tells us our silent warnings have gone unheeded; another being is among us, inaccessible to us, and to itself. And we have made it so; our greatest revenge was to visit the horrors of existence on another being.

A Fond Remembrance Of A Canine Friend

My brother’s family lost their pet dog yesterday. ‘G’ was a dachshund, brought home a little over twelve years ago. I have never owned a pet and probably never will; I simply do not have the emotional wherewithal for the caretaking required. I have thus never developed a particularly close relationship with domestic animals; my interactions with most pets are fairly tentative, bounded ones. Not so with ‘G’. This was because, on those occasions when I was visiting my brother and his family, I spent extended time in his company. Over that time, I experienced at least one moment that gave me some insight into the kinds of relationships pets are able to develop with the families who take care of them.

During the winter of 2006/7, while on vacation, I came down with a mysterious stomach bug–a fever, the chills, the shakes, a spectacularly upset stomach, the works. The bug advanced through the evening as a fever took hold of my body, and then it struck hard at night. On waking up in the morning, I informed everyone–my brother and his family, and my wife–that I would not be joining them for the New Year’s Eve partying later that night. Then, I staggered back to bed and collapsed. I sought warmth and comfort in my blanket and my supine pose. A short while later, ‘G’ pushed open the door to my room–with his nose, I think–walked over to my bed, found himself a spot next to my feet, burrowed in, partially covering himself with my blanket, and settled down.

And there he stayed. For the entire day, well into the evening, in that darkened room. I drifted in and out of sleep, my body exhausted after the frequent interruptions–the trips to the toilet bowl–during the previous night. My head spun, my stomach churned, shivers ran up and down the length of my frame. And down by my feet, ‘G’s presence provided unexpected comfort and reassurance. His body was warm, solid, furry; I could feel him pressing against my legs through the blanket. It was a desperately needed anchoring in a bodily and mental state that felt desperately adrift.

I did not lack for human company that day. My family stopped in at intervals to bring me water and the little food I could keep down. As evening approached and night came on, I slowly regained some strength, enough to try to consume a bowl of watery soup. On seeing me sit up in bed, ‘G’, sensing the worst was over, roused himself, shook himself once or twice, and then left the room. He had done his bit.

From that time on, whenever I hear a pet owner speak about their pets in a language rich with intentionality and affect, I know exactly what they are talking about. I, too, sensed an animal draw near and take care.

Thanks ‘G’, rest in peace; we all loved you.

‘Eva’: Love Can Be Skin-Deep (Justifiably)

Kike Maíllo’s Eva makes for an interesting contribution to the genre of robotics and artificial intelligence movies, one that has grown rapidly in recent times. That is because its central concern–the emulation of humanity by robots–while not particularly novel in itself, is portrayed in a familiar and yet distinctive form.

The most common objection to the personhood of the ‘artificially sentient,’ the ‘artificially intelligent,’ or ‘artificial agents’ and ‘artificial persons’ is couched in terms similar to the following: How could silicon and plastic ever feel, taste, hurt? There is no ‘I’ in these beings; no subject, no first-person, no self. If such beings ever provoked our affection and concern, those reactions would remain entirely ersatz. We know too much about their ‘insides,’ about how they work. Our ‘epistemic hegemony’ over these beings–their internals are transparent to us, their designers and makers–and the dissimilarity between their material substrate and ours render impossible their admission to our community of persons (those we consider worthy of our moral concern).

As Eva makes quite clear, such considerations ignore how our relationships with other human beings are actually constructed. We respond first to visible criteria, to observable behavior, to patterns of social interaction; we then seek internal correspondences–biological, physiological–for these, to confirm our initial reactions and establishment of social ties; we assume too, by way of abduction, an ‘inner world’ much like ours. But biological similarity is not determinative; if the visible behavior is not satisfactory, we do not hesitate to recommend banishment from the community of persons (by ostracism, institutionalization, imprisonment, etc.). And if visible behavior is indeed as rich, varied, and interactive as we imagine it should be for the formation of viable and rewarding relationships, then our desire to admit the being in question to the community of persons worthy of our moral care will withstand putative evidence of considerable difference in constitution and in the nature of ‘inner worlds.’ If Martians consisting solely of green goo on the inside were to land on our planet and treat our children with kindness (i.e., display kind behavior), and provide the right kinds of reasons–whether verbally or by way of display on an LED screen–when we asked them why they did so, only an irredeemable chauvinist would deny them admission to the community of moral persons.

Eva claims that a robot’s ‘mother’ and ‘father’–her human designers–may love her in much the same way they would love their human children. For she may bring joy to their lives in much the same way human children would; she may smile, laugh giddily, play pranks, gaze at them in adoration, demand their protection and care, respond to their affectionate embraces, and so on. In doing so, she provokes older, evolutionarily established instincts of ours. These reactions may strike us as so compelling that even a look ‘under the hood’ may not deter their expression. We might come to learn that extending such feelings of acceptance and care to beings we had not previously considered so worthy makes new forms of life and relationships manifest. That doesn’t seem like such a bad bargain.

The ‘Trickery’ of Robots

Maggie Koerth-Baker reports on a case of supposed trickery (‘How Robots Can Trick You Into Loving Them’, The New York Times, 17 September 2013) that has come to light as robots become more ubiquitous and enter an increasing number of social spaces:

In the future, more robots will occupy that strange gray zone: doing not only jobs that humans can do but also jobs that require social grace. In the last decade, an interdisciplinary field of research called Human-Robot Interaction has arisen to study the factors that make robots work well with humans, and how humans view their robotic counterparts.

H.R.I. researchers have discovered some rather surprising things: a robot’s behavior can have a bigger impact on its relationship with humans than its design; many of the rules that govern human relationships apply equally well to human-robot relations; and people will read emotions and motivations into a robot’s behavior that far exceed the robot’s capabilities.

None of this should be surprising in the least. Human beings have always relied on a combination of relentless anthropomorphization and agency ascription to make sense of the world around them. In doing so, they have cared little for the ‘inside’ of the beings they encounter, and have instead concerned themselves with which interpretive framework enables them to enjoy more fruitful relationships with them. As Koerth-Baker notes, “Provided with the right behavioral cues, humans will form relationships with just about anything — regardless of what it looks like. Even a stick can trigger our social promiscuity.” Robots will be no different in this regard.

In a world full of action, we are inclined to find agents everywhere; the interesting bit comes when we have to individuate these agents–figure out where one ends and another begins–and decide what kind of ‘inner life’ to ascribe to them. Chances are, the more those agents resemble us, the more likely we are to ascribe a rich set of inner states to them. But as the stick example shows, a sufficiently rich behavioral repertoire might overcome even this inhibition.

The more fascinating question, of course, is whether this style of social interaction will become the preferred modality, displacing talk of the robot’s innards or design. Will humans describe a robot’s ‘beliefs’ and ‘desires’ as the causes of the actions it takes? Doing so would treat robots as the originators of their actions: in other words, they would be considered ‘true’ agents in the philosophical sense.

One prominent asymmetry should also become apparent in robot-human interaction: those who know a great deal about a robot’s innards–its engineering principles, its software, its internal design–will be less inclined to anthropomorphize it and ascribe social graces and capacities to it. They will sometimes find that the best explanations they can offer of the robot’s behavior are more expeditiously expressed in a language that refers to its physical composition or logical design. But this subset of users is likely to be a very small one, and as robots become more complex and capable of a more sophisticated range of behaviors, even these users may find the language of propositional attitudes more convenient for dealing with robots.

Eventually, we might come to treat robots as authorities when it comes to reporting on their own inner states. When that level of sophisticated interaction and behavior is possible, we’ll face a genuine conundrum: as far as social relationships are concerned, what, other than their innards, distinguishes them from the other reporters–human beings, for instance–whom we consider authorities in similar fashion?

Report on Concurring Opinions Symposium on Artificial Agents – II

Today, I’m continuing my wrap-up of the Concurring Opinions online symposium on A Legal Theory for Autonomous Artificial Agents. Below, I note the various responses to the book and point to my replies to them (Part I of this wrap-up was posted yesterday).

While almost all respondents seem to have seriously engaged with the book’s analysis, Ryan Calo wrote a disappointingly unengaged and, at times, patronizing post that ostensibly focused on the book’s methodological adoption of the intentional stance; it seemed to suggest that all we were doing was primitive anthropomorphizing. This was a fairly comprehensive misreading of the book’s argument, so I struggled to find anything to say in response. Calo also said he didn’t know whether an autonomous robot was like a hammer or not; this was a bizarre admission from someone concerned with the legal implications of robotics. I noted in one of my responses that answering that question can be aided by some intuition-tickling questions (like: Would NASA send a hammer to explore Mars? Can hammers drive?). Calo’s follow-up to my comment on his post was roughly along the lines of “We don’t know what to do with artificial agents.” Well, yes, but I thought the point was to evaluate the attempt mounted in our book. I didn’t quite understand the point of Calo’s responses: that we don’t have a comprehensive theory for artificial agents, i.e., that the book’s title is misleading? I could be persuaded into entering a guilty plea on that count. But the point of the book was to indicate how existing doctrines could be suitably modified to start accommodating artificial agents–that is how a legal theory is built up in a common law system.

Deborah DeMott (Duke), whose writings on the common law doctrine of agency were very useful in our analysis, offered a very good assessment of our attempt to apply that doctrine to artificial agents. While DeMott disagreed with the exactness of the fit, she seemed not to think it was completely off-base (she certainly found our attempt “lively and ingenious”!); in my response I attempted to clarify and defend some of the reasons why we thought agency doctrine would work with artificial agents.

Ken Anderson (American University, Volokh Conspiracy) then discussed our treatment of intentionality and deployment of the intentional stance, and queried whether we intended to use the intentional stance merely as a heuristic device or whether we were, in fact, making a broader claim for intentionality in general. In my response I noted that we wanted to do both: use it as a methodological stance, and in doing so, also point an investigative lens at our understanding of intentionality in general. Ken’s reaction was very positive; he thought the book had hit a “sweet spot” in not being excessively pie-in-the-sky while offering serious doctrinal recommendations.

Ian Kerr (Ottawa), in his response, didn’t feel the book went far enough in suggesting a workable theory for artificial agents; if I understood Ian correctly, his central complaint was that the theory relied too much on older legal categories and doctrines, and that artificial agents might need an entirely new set of legal frameworks. But Ian also felt the slow and steady march of the common law was the best way to handle the challenges posed by artificial agents. So, interestingly enough, I agree with Ian, and I think he should be less dissatisfied than he is; our book is merely a first attempt to leverage the common law and take steps towards a more comprehensive theory. In fact, given rapid developments in artificial agents, the law is largely going to be playing catch-up more than anything else.

Andrew Sutter then wrote a rich, critical response, one that took aim at the book’s rhetoric, its methodology, and its philosophical stance. I greatly enjoyed my jousting with Andrew during this symposium, and my responses to his post and to his subsequent comments, in which I attempted to clarify my philosophical stance and presuppositions, will show that.

Harry Surden (Colorado) wrote a very good post on two understandings of artificial intelligence’s objectives–intelligence as the replication of human cognitive capacities, achieved either by replicating human methods or via simulations that utilize other techniques–and how these could or would be crucial to the legal response to its achievements. My response to Surden acknowledged the importance of these distinctions and noted that they should also cause us to think about how we often ascribe to human cognition a certain standing that arises largely from a lack of understanding of its principles. (This also provoked an interesting discussion with AJ Sutter.)

Andrea Matwyshyn wrote an excellent, seriously engaged post that took head-on the fairly detailed and intricate arguments of Chapter 2 (where we offer a solution to the so-called contracting problem by arguing that artificial agents be considered the legal agents of their users). My response to Matwyshyn acknowledged the force of her various critical points while trying to expand on and elaborate the economic incentives motivating our claim that artificial agents should be considered non-identical with their creators and/or deployers.

Once again, I am grateful to Frank Pasquale and the folks over at Concurring Opinions for staging the symposium and to all the participants for their responses.