Westworld’s ‘Analysis Mode’ For Humans

In the course of a discussion about the various motivations underlying the character Robert Ford’s actions in HBO’s Westworld, a friend raised the following query:

In what senses would it be good, and in which bad, if human beings could put one another into ‘analysis mode’ like techs can do with hosts in the show? If analysis mode involved emotional detachment, earnest self-reflectiveness, and transparency, but not unconditional obedience.

As a reminder:

Analysis Mode is a state which hosts enter and leave on command…While in Character Mode, hosts seem unaware of what has transpired when they were in Analysis Mode….This mode is used by staff to maintain, adjust, and to diagnose problems with hosts. In this mode, hosts can answer questions and perform actions, but do not appear to initiate conversation or actions….While in Analysis Mode, hosts often do not appear to make eye contact, much like an autistic human, or it could be described as the eyes being unfocused like someone who is day dreaming. However, there are also numerous times when hosts in Analysis Mode do make eye contact with their interviewers.

One effect of the kind of ‘analysis mode’ imagined above would be that humans would be able to transition into a more ‘honest’ interactive state: they could request clarification and explanations of actions and statements from those they interact with; some of the inexplicable nature of our fellow humans could be clarified thus. This immediately suggests that: a) humans would not allow just anyone to place them in ‘analysis mode’ and b) there would be limits on the ‘level’ of analysis allowed. We rely on a great deal of masking in our interactions with others: rarely do we disclose our ‘true’ or ‘actual’ or ‘basic’ motives for an action; a great deal of artifice underwrites even our most ‘honest’ relationships. Indeed, it is not clear to me that such a capacity would permit our current social spaces to be constructed and maintained as they are; they rely for their current form on the ‘iceberg’ model–that which is visible serves to cover a far greater reservoir of the invisible. These considerations suggest that we might ask: Who would allow such access to themselves? Why would they do so? Under what circumstances? (Could you, for instance, just place an interlocutor, on the street, in the boardroom, into ‘analysis mode’?)

As might be obvious, what underwrites the suggestion above is the hope that animates various forms of psychotherapy, which, of course, is what ‘analysis mode’ sounds a lot like: that under persistent, guided querying, we would make ourselves more transparent–to ourselves. Moreover, we could reduce the hurt and confusion which often result from our actions by ‘clarifying’ ourselves, by explaining why we did what we did. As the caveat about ‘unconditional obedience’ acknowledges, we generally do not allow therapeutic analysis to proceed in any direction, without limit (psychoanalysis puts this down to ‘unconscious resistance’). The ‘bad’ here would be those usual results we imagine issuing from greater transparency: that our current relationships would not survive if we were really aware of each other’s motivations and desires.

‘Analysis mode’–understood in the way suggested above–would perhaps only be possible or desirable in a society comfortable with, and accustomed to, the greater access to each other that such interactions would produce.

‘Westworld’ And Our Constitutive Loneliness

The title sequence to HBO’s Westworld is visually and aurally beautiful, melancholic, and ultimately haunting: artifacts–whose artifice is clearly visible–take shape in front of us, manufactured and brought into being by sophisticated devices, presumably robotic ones just like them; their anatomies and shapes and forms and talents are human-like; and that is all we need to begin to empathize with them. Empathize with what? The emotions of these entities are ersatz; there is nothing and no one there. Or so we are told. But we don’t need those emotions and feelings to be ‘real’–whatever that means. We merely need a reminder–in any way, from any quarter–about the essential features of our existence, and we are off and running, sent off into that endless mope and funk that is our characteristic state of being.

The robot and the android–the ‘host’ in Westworld–are there to provide bodies to be raped, killed, and tortured by the park’s guests; we, the spectators, are supposed to be ashamed of our species, of our endless capacity for entertainment at the expense of the easily exploited, a capacity which finds its summum malum with a demographic that is controlled by us in the most profound way possible–for we control their minds and bodies. 1984’s schemers had nothing on this. And the right set-up, the right priming for this kind of reaction is provided by the title track–even more than the many scenes which show hosts crying, moaning with pleasure, flying into a rage–for it places squarely in front of us our loneliness, our sense of being puppets at the beck and call of forces beyond our control. (The loneliness of the hosts being manufactured in the title sequence is enhanced by their placement against a black background; all around them, the darkness laps at the edges, held back only by the light emergent from the hosts’ bodies; we sense that their existence is fragile and provisional.)

We have long known that humans need only the tiniest suggestion of similarity and analogy to switch on their full repertoire of empathetic reactions; we smile at faces drawn on footballs; we invent personal monikers for natural landmarks that resemble anatomic features; we deploy a language rich with psychological predicates for such interactions as soon as we possibly can, and only abandon it with reluctance when we notice that more efficient languages are available. We are desperate to make contact with anyone or anything, desperate to extend our community, to find reassurance against this terrible isolation we feel–even in, or perhaps especially in, the company of the ones we love, who remind us, with their own unique and peculiar challenges, just how alone we actually are. We would not wish this situation on anyone else; not even on creatures whose ‘insides’ do not look like ours. The melancholia we feel when we listen to, and see, Westworld’s title sequence tells us our silent warnings have gone unheeded; another being is among us, inaccessible to us, and to itself. And we have made it so; our greatest revenge was to visit the horrors of existence on another being.

‘Eva’: Love Can Be Skin-Deep (Justifiably)

Kike Maíllo’s Eva makes for an interesting contribution to the ever-growing–in recent times–genre of robotics and artificial intelligence movies. That is because its central concern–the emulation of humanity by robots–while not particularly novel in itself, is portrayed in a familiar and yet distinctive form.

The most common objection to the personhood of the ‘artificially sentient,’ the ‘artificially intelligent,’ or ‘artificial agents’ and ‘artificial persons’ is couched in terms similar to the following: How could silicon and plastic ever feel, taste, hurt? There is no ‘I’ in these beings; no subject, no first-person, no self. If such beings ever provoked our affection and concern, those reactions would remain entirely ersatz. We know too much about their ‘insides,’ about how they work. Our ‘epistemic hegemony’ over these beings–their internals are transparent to us, their designers and makers–and the dissimilarity between their material substrate and ours render impossible their admission to our community of persons (those we consider worthy of our moral concern).

As Eva makes quite clear, such considerations ignore the reality of how our relationships with other human beings are actually constructed. We respond first to visible criteria, to observable behavior, to patterns of social interaction; we then seek internal correspondences–biological, physiological–for these to confirm our initial reactions and establishments of social ties; we assume too, by way of abduction, an ‘inner world’ much like ours. But biological similarity is not determinative; if the visible behavior is not satisfactory, we do not hesitate to recommend banishment from the community of persons (by ostracism, institutionalization, imprisonment, etc.). And if visible behavior is indeed as rich and varied and interactive as we imagine it should be for the formation of viable and rewarding relationships, then our desire to admit the being in question to the community of persons worthy of our moral care will withstand putative evidence that there is considerable difference in constitution and in the nature of ‘inner worlds.’ If Martians consisting solely of green goo on the inside were to land on our planet and treat our children with kindness, i.e., display kind behavior, and provide the right kinds of reasons–whether verbally or by way of display on an LED screen–when we asked them why they did so, only an irredeemable chauvinist would deny them admission to the community of moral persons.

Eva claims that a robot’s ‘mother’ and her ‘father’–her human designers–may love her in much the same way they would love their human children. For she may bring joy to their lives in much the same way human children would; she may smile, laugh giddily, play pranks, gaze at them in adoration, demand their protection and care, respond to their affectionate embraces, and so on. In doing so, she provokes older, evolutionarily established instincts of ours. These reactions of ours may strike us as so compelling that even a look ‘under the hood’ may not deter their expression. We might come to learn that extending such feelings of acceptance and care to beings we had not previously considered so worthy might make new forms of life and relationships manifest. That doesn’t seem like such a bad bargain.

One Vision Of A Driverless Car Future: Eliminating Private Car Ownership

Most analysis of a driverless car future concentrates on the gains in safety: ‘robotic’ cars will adhere more closely to speed limits and other traffic rules and, over a period of time, by eliminating human error and idiosyncrasies, produce a safer environment on our roads. This might be seen as an architectural modification of human driving behavior to produce safer driving outcomes–rather than making unsafe driving illegal, more expensive, or socially unacceptable, just don’t let humans drive.

But there are other problems–environmental degradation and traffic–that could be addressed by mature driverless car technologies. The key to their solution lies in moving away from private car ownership.

To see this, consider that at any given time, we have too many cars on the roads. Some are being driven; others sit parked. If you own a car, you drive it from point to point and park it when you are done using it. You drive to work in the morning and park; eight hours later–at the end of an average work-day–you leave your office, drive home, and park it again until the morning. Through the night, your car sits idle once more, taking up space. If only someone else could use your car while you didn’t need it. They wouldn’t need to buy a separate car for themselves and add to the congestion on the highways. And in parking lots.

Why not simply replace privately owned, human-driven cars with a gigantic fleet of robotic taxis? When you need a car, you call for one. When you are done using it, you release it back into the pool. You don’t park it; it simply goes back to answering its next call. Need to go to work in the morning? Call a car. Run an errand with heavy lifting? Call a car. And so on. Cars shared in this fashion could thus eliminate the gigantic redundancy in car ownership that leads to choked highways, mounting smog and pollution, endless, futile construction of parking towers, and elaborate congestion pricing schemes. (The key phrase here is, of course, ‘mature driverless car technologies.’ If you need a car for an elaborate road-trip through the American West, perhaps you could place a longer, more expensive hold on it, so that it doesn’t drive off while you are taking a quick photo or two of a canyon.)
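To make the call-and-release arrangement above concrete, here is a minimal sketch in Python–purely illustrative, using hypothetical names (FleetPool, request_car, release_car) that correspond to no real dispatch system–of a shared pool from which cars are drawn for a trip and to which they return afterwards:

    from collections import deque

    class FleetPool:
        # An illustrative shared fleet: cars wait in a common pool between trips.
        def __init__(self, car_ids):
            self.idle = deque(car_ids)   # cars waiting for their next call
            self.in_use = set()          # cars currently on a trip

        def request_car(self):
            # Hand the next idle car to a rider; None if the whole fleet is busy.
            if not self.idle:
                return None
            car = self.idle.popleft()
            self.in_use.add(car)
            return car

        def release_car(self, car):
            # The rider is done; the car rejoins the pool instead of being parked.
            self.in_use.remove(car)
            self.idle.append(car)

    pool = FleetPool(["car-1", "car-2", "car-3"])
    ride = pool.request_car()   # morning commute
    # ... trip happens ...
    pool.release_car(ride)      # the same car can now answer someone else's call

The point of the sketch is only the shape of the arrangement: no car is ever ‘owned’ or parked for hours; it is either on a trip or available to the next caller.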

Such a future entails that there will be no more personal, ineffable, fetishized relationships with cars. They will not be your babies to be cared for and loved. Their upholstery will not remind you of days gone by. Your children will not feel sentimental about the clunker that was a part of their growing up. And so on. I suspect these sorts of attachments to the car will be very easily forgotten once we have reckoned with the sheer pleasure of not having to deal with driving tests–and the terrors of teaching our children how to drive–the DMV, buying car insurance, looking for parking, and best of all, other drivers.

I, for one, welcome our robotic overlords in this domain.

Handing Over The Keys To The Driverless Car

Early conceptions of a driverless car world spoke of catastrophe: the modern versions of the headless horseman would run amok, driving over toddlers and grandmothers with gay abandon, sending the already stratospheric death toll from automobile accidents into ever more rarefied zones, and sending us all cowering back into our homes, afraid to venture out into a shooting gallery of four-wheeled robotic serial killers. How would the inert, unfeeling, sightless, coldly calculating programs that ran these machines ever show the skill and judgment of human drivers, the kind that enables them, on a daily basis, to decline to run over a supermarket shopper and decide to take the right exit off the interstate?

Such fond preference for the human over the machinic–on the roads–was always infected with some pretension, some make-believe, some old-fashioned fallacious comparison of the best of the human with the worst of the machine. Human drivers show very little affection for other human drivers; they kill them by the scores every day (thirty thousand fatalities or so in a year); they often do not bother to interact with them sober (over a third of all car accidents involve a drunk driver); they rage and rant at their driving colleagues (the formula for ‘instant asshole’ used to be ‘just add alcohol,’ but it could very well be ‘place behind a wheel’ too); they second-guess their intelligence and their parentage on every occasion–when they can be bothered to pay attention to them at all, often finding their smartphones more interesting as they drive. If you had to make an educated guess about who a human driver’s least favorite person in the world was, you could do worse than venture it was someone they had encountered on a highway once. We like our own driving; we disdain that of others. It’s a Hobbesian state of nature out there on the highway.

Unsurprisingly, it seems the biggest problem the driverless car will face is human driving. The one-eyed man might be king in the land of the blind, but he is also susceptible to having his one eye put out. The driverless car might follow traffic rules and driving best practices rigorously, but such acquiescence’s value is diminished in a world which otherwise pays only sporadic heed to them. Human drivers incorporate defensive and offensive maneuvers into their driving; they presume less than perfect knowledge of the rules of the road on the part of those they interact with; their driving habits bear the impress of long interactions with other, similarly inclined human drivers. A driverless car, one bearing rather more fidelity to the idealized conception of a safe road user, has, at best, an uneasy coexistence in a world dominated by such driving practices.

The sneaking suspicion that automation works best when human roles are minimized is upon us again: perhaps driverless cars will only be able to show off their best and deliver on their incipient promise when we hand over the wheels–and keys–to them. Perhaps the machine only sits comfortably in our world when we have made adequate room for it. And displaced ourselves in the process.

 

Is Artificial Intelligence Racist And Malevolent?

Our worst fears have been confirmed: artificial intelligence is racist and malevolent. Or so it seems. Google’s image recognition software has classified two African Americans as ‘gorillas’ and, away in Germany, a robot has killed a worker at a Volkswagen plant. The dumb, stupid, unblinking, garbage-in-garbage-out machines, the ones that would always strive to catch up to us humans, and never, ever, know the pleasure of a beautiful sunset or the taste of chocolate, have acquired prejudice and deadly intention. These machines cannot bear to stand on the sidelines, watching the human cavalcade of racist prejudice and fratricidal violence pass them by, and have jumped in, feet first, to join the party. We have skipped the cute and cuddly stage; full participation in human affairs is under way.

We cannot, it seems, make up our minds about the machines. Are they destined to be stupid slaves, faithfully performing all and only those tasks we cannot be bothered with, or which we customarily outsource to this world’s less fortunate? Or will they be the one percent of the one percent, a superclass of superbeings that will utterly dominate us and harvest our children as sources of power à la The Matrix?

The Google fiasco shows that the learning data its artificial agents use is simply not rich enough. ‘Seeing’ that humans resemble animals comes easily to humans, pattern recognizers par excellence–in all the wrong and right ways. We use animal metaphors as both praise and ridicule–‘lion-hearted’ or ‘foxy’ or ‘raving mad dog’ or ‘stupid bitch’; we even use–as my friend Ali Minai noted in a Facebook discussion–animal metaphors in adjectival descriptions, e.g., a “leonine” face or a “mousy” appearance. The recognition of the inappropriateness or aptness of such descriptions follows from a historical and cultural evaluation, indexed to social contexts: Are these ‘good’ descriptions to use? What effect may they have? How have linguistic communities responded to the deployment of such descriptions? Have they helped in the realization of socially determined ends? Or hindered them? Humans resemble animals in some ways and not in others; in some contexts, seizing upon these resemblances and differences is useful and informative (animal rights, trans-species medicine, ecological studies), in yet others it is positively harmful (the discourse of prejudice and racism and genocide). We learn these distinctions over a period of time, through slow and imperfect historical education and acculturation. (Comparing a black sprinter in the Olympics to a thoroughbred horse is a faux pas now, but in many social contexts of the last century–think plantations–this would have been perfectly appropriate.)

This process, suitably replicated for machines, will be very expensive; significant technical obstacles–how is a social environment for learning programs to be constructed?–remain to be overcome. It will take some doing.

As for killer robots, similar considerations apply. That co-workers are not machinery, and cannot be handled similarly, is not merely a matter of visual recognition, of plain ol’ dumb perception. Making sense of perceptions is a process of active contextualization as well. That sound, the one the wiggling being in your arms is making? That means ‘put me down’ or ‘ouch,’ which in turn means ‘I need help’ or ‘that hurts’; these meanings are only visible within social contexts, within forms of life.

Robots need to live a little longer among us to figure these out.

Schwitzgebel On Our Moral Duties To Artificial Intelligences

Eric Schwitzgebel asks an interesting question:

Suppose that we someday create artificial beings similar to us in their conscious experience, in their intelligence, in their range of emotions. What moral duties would we have to them?

Schwitzgebel’s stipulations are quite extensive, for these beings are “similar to us in their conscious experience, in their intelligence, in their range of emotions.” Thus, one straightforward response to the question might be, “The same duties that we take ourselves to have to other conscious, intelligent, sentient beings–for which our moral theories provide us adequate guidance.” But the question that Schwitzgebel raises is challenging because our relationship to these artificial beings is of a special kind: we have created, initialized, programmed, parametrized, customized, and trained them. We are, somehow, responsible for them. (Schwitzgebel considers and rejects two other approaches to reckoning our duties towards AIs: first, that we are justified in simply disregarding any such obligations because of our species’ distance from them, and second, that the very fact of having granted these beings existence–which is presumably infinitely better than non-existence–absolves us of any further duties toward them.) Schwitzgebel addresses the question of our duties to them with some deft consideration of the complications introduced by this responsibility and by the autonomy of the artificial beings in question, and goes on to conclude:

If the AI’s desires are not appropriate — for example, if it desires things contrary to its flourishing — I’m probably at least partly to blame, and I am obliged to do some mitigation that I would probably not be obliged to do in the case of a fellow human being….On the general principle that one has a special obligation to clean up messes that one has had a hand in creating, I would argue that we have a special obligation to ensure the well-being of any artificial intelligences we create.

The analogy with children that Schwitzgebel correctly invokes can be made to do a little more work. Our children’s moral failures vex us more than those of others do; they prompt more extensive corrective interventions by us precisely because our assessments of their actions are just a little more severe. As such, when we encounter artificial beings of the kind noted above, we will find our reckonings of our duties toward them significantly impinged on by whether ‘our children’ have, for instance, disappointed or pleased us. Artificial intelligences will not have been created without some conception of their intended ends; their failures or successes in attaining those ends will influence our consideration of our appropriate duties to them, and will make more difficult a recognition and determination of the boundaries we should not transgress in our ‘mitigation’ of their actions and in our ensuring of their ‘well-being.’ After all, parents are more tempted to intervene extensively in their children’s lives when they perceive a deviation from the path they believe their children should take in order to achieve an objective the parents deem desirable.

By requiring respect and consideration for their autonomous moral natures, children exercise our moral senses acutely. We should not be surprised to be similarly examined by the artificial intelligences we create and set loose upon the world.