Blade Runner 2049: Our Slaves Will Set Us Free

Blade Runner 2049 is a provocative visual and aural treat. It sparked many thoughts, two of which I make note of here; the relationship between the two should be apparent.

  1. What is the research project called ‘artificial intelligence’ trying to do? Is it trying to make machines that can do the things which, if done by humans, would be said to require intelligence–regardless of the particular implementation? Is it trying to accomplish those tasks in the way that human beings do them? Or is it trying to find a non-biological method of reproducing human beings? These are three very different tasks. The first is a purely engineering task; the machine must accomplish the task regardless of the method–any route to the solution will do, so long as it is tractable and efficient. The second is cognitive science, inspired by Giambattista Vico: “the true and the made are convertible” (Verum et factum convertuntur) or “the true is precisely what is made” (Verum esse ipsum factum); we will only understand the mind, and possess a ‘true’ model of it, when we make it. The third is more curious (and related to the second)–it immediately implicates us in the task of making artificial persons. Perhaps by figuring out how the brain works, we can mimic human cognition, but this capacity might be placed in a non-human form made of silicon or plastic or some metal; the artificial persons project insists on a human form–the android or humanoid robot–and on replicating uniquely human capacities, including the moral and aesthetic ones. This would require the original cognitive science project to be extended to an all-encompassing project of understanding human physiology so that its bodily functions can be replicated. Which immediately raises the question: why make artificial persons? We have a perfectly good way of making human replicants, and many people actually enjoy engaging in this process. So why make artificial persons this way? If the answer is to increase our knowledge of human beings’ workings, then we might well ask: To what end? To cure incurable diseases? To make us happier?
To release us from biological prisons so that we may, in some singularity-inspired fantasy, migrate our souls to these more durable containers? Or do we need them to be in human form, so that they can realistically–in all the right ways–fulfill all the functions we will require them to perform? For instance, as in Westworld, they could be our sex slaves, or as in Blade Runner, they could perform dangerous and onerous tasks that human beings are unwilling or unable to do. And, of course, prop up ecologically unstable civilizations like ours.
  2. It is a philosophical commonplace–well, at least to Goethe and Nietzsche, among others–that constraint is necessary for freedom; we cannot be free unless we are restrained, somehow, by law and rule and regulation and artifice. But is it necessary that we ourselves be restrained in order to be free? The Greeks figured out that a man could be enslaved, lose his freedom, and that through this loss his owner, his master, could be free; as Hannah Arendt puts it in The Human Condition, the work of the slaves–barbarians and women–does ‘labor’ for the owner, keeping the owner alive, taking care of his biological necessity, and freeing him up to go to the polis and do politics in a state of freedom, in the company of other property-owning householders like him. So: the slave is necessary for freedom; either we enslave ourselves, suppressing our appetites and desires and drives, sublimating and channeling them into the ‘right’ outlets, or we enslave someone else. (Freud noted glumly in Civilization and its Discontents that civilization enslaves our desires.) If we cannot enslave humans, with all their capricious desires to be free, then we can enslave other creatures, perhaps animals, domesticating them to turn them into companions and food. And if we ever become technologically adept at reproducing those processes that produce humans or persons, we can make copies–replicants–of ourselves, artificial persons, that mimic us in all the right ways, and keep us free. These slaves, by being slaves, make us free.

Much more on Blade Runner 2049 anon.

Will Artificial Intelligence Create More Jobs Than It Eliminates? Maybe

Over at the MIT Sloan Management Review, H. James Wilson, Paul R. Daugherty, and Nicola Morini-Bianzino strike an optimistic note as they respond to the “distressing picture” created by “the threat that automation will eliminate a broad swath of jobs across the world economy [for] as artificial intelligence (AI) systems become ever more sophisticated, another wave of job displacement will almost certainly occur.” Wilson, Daugherty, and Morini-Bianzino suggest that we’ve been “overlooking” the fact that “many new jobs will also be created — jobs that look nothing like those that exist today.” They go on to identify three key positions:

Trainers

This first category of new jobs will need human workers to teach AI systems how they should perform — and it is emerging rapidly. At one end of the spectrum, trainers help natural-language processors and language translators make fewer errors. At the other end, they teach AI algorithms how to mimic human behaviors.

Explainers

The second category of new jobs — explainers — will bridge the gap between technologists and business leaders. Explainers will help provide clarity, which is becoming all the more important as AI systems’ opaqueness increases.

Sustainers

The final category of new jobs our research identified — sustainers — will help ensure that AI systems are operating as designed and that unintended consequences are addressed with the appropriate urgency.

Unfortunately for these claims, it is all too evident that these positions themselves are vulnerable to automation. That is, future trainers, explainers, and sustainers–if their job descriptions match the ones given above–will also be automated. Nothing in those descriptions indicates that humans are indispensable components of the learning processes they are currently facilitating.

Consider that the ‘empathy trainer’ Koko is a machine-learning system which “can help digital assistants such as Apple’s Siri and Amazon’s Alexa address people’s questions with sympathy and depth.” Currently, Koko is being trained by a human, but it will then go on to teach Siri and Alexa. Teachers are teaching future teachers how to be teachers, after which they will teach on their own. Moreover, there is no reason why chatbots, which currently need human trainers, cannot be trained–as they already are–on increasingly large corpora of natural language texts like the collections of the world’s libraries (which consist of human inputs).
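The kind of corpus-driven training described here can be illustrated in miniature. The sketch below builds a toy bigram language model from a two-sentence ‘corpus’ (the corpus and function names are invented for illustration); the same mechanism, vastly scaled up and refined, is how chatbots learn to produce text from human writings without a human trainer in the loop:

```python
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """Record, for each word, the words that follow it in the corpus."""
    model = defaultdict(list)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def generate(model, start, length=8, seed=0):
    """Walk the bigram table to produce new text."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = [
    "the teacher teaches the student",
    "the student becomes the teacher",
]
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

The ‘teacher’ here is the corpus itself: once the counts are collected, generation needs no further human involvement.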

Or consider that explainers may rely on an algorithmic system like “the Local Interpretable Model-Agnostic Explanations (LIME), which explains the underlying rationale and trustworthiness of a machine prediction”; human analysts use its output to “pinpoint the data that led to a particular result.” But why wouldn’t LIME’s functioning be enhanced to do the relevant ‘pinpointing’ itself, thus obviating the need for a human analyst?
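LIME’s core move–fitting a simple, interpretable model to a black box’s behavior in the neighborhood of one prediction–can be sketched in a few lines. This is a toy re-implementation of the idea, not the actual LIME library; the black-box function and all numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for an opaque model: its output depends strongly
    # on feature 0 and only weakly on feature 1.
    return np.sin(3 * X[:, 0]) + 0.1 * X[:, 1]

x0 = np.array([0.2, 0.5])  # the single instance to explain

# 1. Perturb the instance to probe the model's local behavior.
X = x0 + rng.normal(scale=0.1, size=(500, 2))
y = black_box(X)

# 2. Weight samples by proximity to x0 (closer = more influential).
w = np.exp(-np.sum((X - x0) ** 2, axis=1) / 0.02)

# 3. Fit a weighted linear surrogate; its coefficients 'pinpoint'
#    which features drove this particular prediction.
A = np.hstack([X - x0, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A * w[:, None] ** 0.5, y * w ** 0.5, rcond=None)

print("local feature attributions:", coef[:2])
```

The surrogate’s coefficients are the ‘explanation’–and, as the paragraph above suggests, nothing in this pipeline requires a human analyst to read them rather than another program.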

Finally, note that “AI may become more self-governing….an AI prototype named Quixote…can learn about ethics by reading simple stories….the system is able to “reverse engineer” human values through stories about how humans interact with one another. Quixote has learned…why stealing is not a good idea….even given such innovations, human ethics compliance managers will play a critical role in monitoring and helping to ensure the proper operation of advanced systems.” Despite that final claim, this suggests that the roles and responsibilities of humans in this domain will decrease over time.

There is no doubt that artificial intelligence systems require human inputs, supervision, training, and maintenance; but those roles are diminishing, not increasing; short-term, local requirements for human skills will not necessarily translate into long-term, global ones. The economic impact of the coming robotics and artificial intelligence ‘revolution’ is still unclear.

Westworld’s ‘Analysis Mode’ For Humans

In the course of a discussion about the various motivations underlying the character Robert Ford’s actions in HBO’s Westworld, a friend raised the following query:

In what senses would it be good, and in which bad, if human beings could put one another into ‘analysis mode’ like techs can do with hosts in the show? If analysis mode involved emotional detachment, earnest self-reflectiveness, and transparency, but not unconditional obedience.

As a reminder:

Analysis Mode is a state which hosts enter and leave on command…While in Character Mode, hosts seem unaware of what has transpired when they were in Analysis Mode….This mode is used by staff to maintain, adjust, and to diagnose problems with hosts. In this mode, hosts can answer questions and perform actions, but do not appear to initiate conversation or actions….While in Analysis Mode, hosts often do not appear to make eye contact, much like an autistic human, or it could be described as the eyes being unfocused like someone who is day dreaming. However, there are also numerous times when hosts in Analysis Mode do make eye contact with their interviewers.

One effect of the kind of ‘analysis mode’ imagined above would be that humans would be able to transition into a more ‘honest’ interactive state: they could request clarification and explanations of actions and statements from those they interact with; some of the inexplicable nature of our fellow humans could be clarified thus. This immediately suggests that: a) humans would not allow just anyone to place them in ‘analysis mode’ and b) there would be limits on the ‘level’ of analysis allowed. We rely on a great deal of masking in our interactions with others: rarely do we disclose our ‘true’ or ‘actual’ or ‘basic’ motives for an action; a great deal of artifice underwrites even our most ‘honest’ relationships. Indeed, it is not clear to me that such a capacity would permit our current social spaces to be constructed and maintained as they are; they rely for their current form on the ‘iceberg’ model–that which is visible serves to cover a far greater reservoir of the invisible. These considerations suggest that we might ask: Who would allow such access to themselves? Why would they do so? Under what circumstances? (Could you, for instance, just place an interlocutor, on the street, in the boardroom, into ‘analysis mode’?)

As might be obvious, what underwrites the suggestion above is the hope that underwrites various forms of psychotherapy, which, of course, is what ‘analysis mode’ sounds a lot like: that under persistent, guided, querying, we would make ourselves more transparent–to ourselves. Moreover, we could reduce the hurt and confusion which often results from our actions by ‘clarifying’ ourselves; by explaining why we did what we did. As the caveat about ‘unconditional obedience’ acknowledges, we generally do not allow therapeutic analysis to proceed in any direction, without limit (psychoanalysis puts this down to ‘unconscious resistance.’) The ‘bad’ here would be those usual results we imagine issuing from greater transparency: that our current relationships would not survive if we were really aware of each others’ motivations and desires.

‘Analysis mode’–understood in the way suggested above–would perhaps only be possible or desirable in a society comfortable with, and accustomed to, the greater access to each other that such interactions would produce.

‘Westworld’ And The American West As Locale For Self-Reconfiguration

It is perhaps unsurprising that Westworld is Westworld; if American mythology is to be staged anywhere, the West is a natural locale. In the original Westworld, the West meant a zone in which certain kinds of adventures were facilitated: gun battles mostly, but also sex with perfect strangers who cared little for who you were and only wanted your money. In the new Westworld, an implicit motif of the first becomes more explicit: Westworld is where you go to find yourself–whoever and whatever that may be. In this new Westworld, the landscape, only background scenery in the old, now becomes more prominent; we are reminded again and again of its beauty, wildness, and implacable hostility and indifference. If you want to make a show about self-discovery, reconfiguration, journeys into and across space and time, the American West–for many historical and cultural reasons–is a good call. The physical spaces are vast, mapping neatly on to the immense unexplored spaces of the mind; the beauty is enthralling, sparking vision after vision in us of possibility, and also, as Rilke reminded us, bringing us closer to terror: those cliffs, those bluffs, those steep walls, that burning sun, the rattlesnakes, the dangers of other humans. The deployment of the American West also taps into a deeper mythology that self-discovery takes place away from other humans–in the wild. If we are to traverse our mind, then Westworld–like many other recountings of human experience before it–suggests we need tremendous physical spaces too. We could not do this in a crowded city. Those endless horizons and canopies of the sheltering sky are necessary for the suggestion of infinite possibility.

And then, there is the violence. The American West’s land is soaked in blood, in memories of a people decimated, of massacres, starvation, and rape. If you want to stage a modern-day genocide–and the continuing thirty-five-year-old slaughter of ‘hosts’ is most definitely a genocide, even if an eternally recurring one–then, again, the West is the correct locale. It is significant that in this version of the American West, there are very few Native Americans; there are some ‘greasers’–cannon fodder, obviously–but very few ‘redskins.’ The makers of the show seem to have wisely decided that it was best to mostly write Native Americans out of the show rather than risk getting their depiction and usage wrong, which they almost certainly would have. (In the one episode in which Native Americans make an appearance, they are the stuff of nightmare, much as they must have been for the ‘pioneers,’ their imaginations inflamed by stories of how they had to keep their women safe from the depredations of the savages on the prairies.) This American West is one which has already been cleansed of the Native American; an alternative rendering of Westworld, one whose dark satire would have cut too close to the bone, would be one in which park visitors would get to shoot all the whoopin’ n’ hollerin’ Injuns they wanted.

MedievalWorld and SamuraiWorld would also allow for the exploration of themes pertaining to the possible sentience of robots, but their locales might not, at least for American audiences, suggest the possibilities of our own reconfiguration quite so well.

‘Westworld’ And Our Constitutive Loneliness

The title sequence to HBO’s Westworld is visually and aurally beautiful, melancholic, and ultimately haunting: artifacts–whose artifice is clearly visible–take shape in front of us, manufactured and brought into being by sophisticated devices, presumably robotic ones just like them; their anatomies and shapes and forms and talents are human-like; and that is all we need to begin to empathize with them. Empathize with what? The emotions of these entities are ersatz; there is nothing and no one there. Or so we are told. But we don’t need those emotions and feelings to be ‘real’–whatever that means. We merely need a reminder–in any way, from any quarter–about the essential features of our existence, and we are off and running, sent off into that endless mope and funk that is our characteristic state of being.

The robot and the android–the ‘host’ in Westworld–is there to provide bodies to be raped, killed, and tortured by the park’s guests; we, the spectators, are supposed to be ashamed of our species, of our endless capacity for entertainment at the expense of the easily exploited, a capacity which finds its summum malum with a demographic that is controlled by us in the most profound way possible–for we control their minds and bodies. 1984’s schemers had nothing on this. And the right set-up, the right priming for this kind of reaction, is provided by the title track–even more than the many scenes which show hosts crying, moaning with pleasure, flying into a rage–for it places squarely in front of us our loneliness, our sense of being puppets at the beck and call of forces beyond our control. (The loneliness of the hosts being manufactured in the title sequence is enhanced by their placement against a black background; all around them, the darkness laps at the edges, held back only by the light emergent from the hosts’ bodies; we sense that their existence is fragile and provisional.)

We have long known that humans need only the tiniest suggestion of similarity and analogy to switch on their full repertoire of empathetic reactions; we smile at faces drawn on footballs; we invent personal monikers for natural landmarks that resemble anatomic features; we deploy a language rich with psychological predicates for such interactions as soon as we possibly can, and only abandon it with reluctance when we notice that more efficient languages are available. We are desperate to make contact with anyone or anything, desperate to extend our community, to find reassurance in the face of this terrible isolation we feel–even in, or perhaps especially in, the company of the ones we love, for they remind us, with their own unique and peculiar challenges, just how alone we actually are. We would not wish this situation on anyone else; not even on creatures whose ‘insides’ do not look like ours. The melancholia we feel when we listen to, and see, Westworld’s title sequence tells us our silent warnings have gone unheeded; another being is among us, inaccessible to us, and to itself. And we have made it so; our greatest revenge was to visit the horrors of existence on another being.

Artificial Intelligence And Go: (Alpha)Go Ahead, Move The Goalposts

In the summer of 1999, I attended my first ever professional academic philosophy conference–in Vienna. At the conference, one titled ‘New Trends in Cognitive Science’, I gave a talk titled (rather pompously) ‘No Cognition without Representation: The Dynamical Theory of Cognition and The Emulation Theory of Mental Representation.’ I did the things you do at academic conferences as a graduate student in a job-strapped field: I hung around senior academics, hoping to strike up conversation (I think this is called ‘networking’); I tried to ask ‘intelligent’ questions at the talks, hoping my queries and remarks would mark me out as a rising star, one worthy of being offered a tenure-track position purely on the basis of my sparkling public presence. You know the deal.

Among the talks I attended–a constant theme of which was the prospect of the mechanization of the mind–was one on artificial intelligence. Or rather, more accurately, the speaker concerned himself with evaluating the possible successes of artificial intelligence in domains like game-playing. Deep Blue had just beaten Garry Kasparov in an unofficial machine-versus-human chess world championship in 1997, and such questions were no longer idle queries. In the wake of Deep Blue’s success, the usual spate of responses–to news of artificial intelligence’s advance in some domain–had ensued: Deep Blue’s success did not indicate any ‘true intelligence’ but rather pure ‘computing brute force’; a true test of intelligence awaited in other domains. (Never mind that beating a human champion in chess had always been held out as a kind of Holy Grail for game-playing artificial intelligence.)

So, during this talk, the speaker elaborated on what he took to be artificial intelligence’s true challenge: learning and mastering the game of Go. I did not fully understand the contrasts drawn between chess and Go, but they seemed to come down to two vital ones: human Go players relied a great deal–indeed, had to rely–on ‘intuition’, and on a ‘positional sizing-up’ that could not be reduced to an algorithmic process. Chess did not rely on intuition to the same extent; its board assessments were more amenable to algorithmic calculation. (Go’s much larger state space was also a problem.) Therefore, roughly, success in chess was not so surprising; the real challenge was Go, and that was never going to be mastered.
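The scale gap alluded to here can be made concrete with a standard back-of-the-envelope calculation; the branching factors and game lengths below are the conventional rough estimates (roughly 35 legal moves over 80 plies for chess, 250 moves over 150 plies for Go), not exact values:

```python
import math

def game_tree_magnitude(branching_factor, typical_game_length):
    """Base-10 exponent of b^d, a crude estimate of game-tree size."""
    return typical_game_length * math.log10(branching_factor)

chess = game_tree_magnitude(35, 80)    # roughly 10^123 positions to search
go = game_tree_magnitude(250, 150)     # roughly 10^360

print(f"chess ~ 10^{chess:.0f}, Go ~ 10^{go:.0f}")
```

The point of the comparison: Go’s game tree is not merely bigger but bigger by hundreds of orders of magnitude, which is why brute-force search of the Deep Blue variety was thought insufficient and ‘intuition’ seemed indispensable.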

Yesterday, Google’s DeepMind AlphaGo system beat the South Korean Go master Lee Se-dol in the first of an intended five-game series. Mr. Lee conceded defeat in three and a half hours. His pre-game mood was optimistic:

Mr. Lee had said he could win 5-0 or 4-1, predicting that computing power alone could not win a Go match. Victory takes “human intuition,” something AlphaGo has not yet mastered, he said.

Later, though, he said that “AlphaGo appeared able to imitate human intuition to a certain degree,” a fact which was borne out for him during the game, when AlphaGo made a move so unexpected and unconventional that he thought “it was impossible to make such a move.”

As Jean-Pierre Dupuy noted in his The Mechanization of Mind, a very common response to the ‘mechanization of mind’ is that such attempts merely simulate or imitate, and are mere fronts for machinic complexity–but these proposals seemingly never consider the possibility that the phenomenon they consider genuine or the model for imitation and simulation can only retain such a status as long as simulations and imitations remain flawed. As those flaws diminish, the privileged status of the ‘real thing’ diminishes in turn. A really good simulation, indistinguishable from the ‘real thing,’ should make us wonder why we grant it such a distinct station.

‘Eva’: Love Can Be Skin-Deep (Justifiably)

Kike Maíllo’s Eva makes for an interesting contribution to the ever-growing–in recent times–genre of robotics and artificial intelligence movies. That is because its central concern–the emulation of humanity by robots–while not particularly novel in itself, is portrayed in a familiar and yet distinctive form.

The most common objection to the personhood of the ‘artificially sentient,’ the ‘artificially intelligent,’ or ‘artificial agents’ and ‘artificial persons’ is couched in terms similar to the following: How could silicon and plastic ever feel, taste, hurt? There is no ‘I’ in these beings; no subject, no first-person, no self. If such beings ever provoked our affection and concern, those reactions would remain entirely ersatz. We know too much about their ‘insides,’ about how they work. Our ‘epistemic hegemony’ over these beings–their internals are transparent to us, their designers and makers–and the dissimilarity between their material substrate and ours render impossible their admission to our community of persons (those we consider worthy of our moral concern).

As Eva makes quite clear, such considerations ignore the reality of how our relationships with other human beings are actually constructed. We respond first to visible criteria, to observable behavior, to patterns of social interaction; we then seek internal correspondences–biological, physiological–for these, to confirm our initial reactions and establishments of social ties; we assume too, by way of abduction, an ‘inner world’ much like ours. But biological similarity is not determinative; if the visible behavior is not satisfactory, we do not hesitate to recommend banishment from the community of persons (by ostracism, institutionalization, imprisonment, etc.). And if visible behavior is indeed as rich and varied and interactive as we imagine it should be for the formation of viable and rewarding relationships, then our desire to admit the being in question to the community of persons worthy of our moral care will withstand putative evidence that there is considerable difference in constitution and the nature of ‘inner worlds.’ If Martians consisting solely of green goo on the inside were to land on our planet and treat our children with kindness, i.e., display kind behavior, and provide the right kinds of reasons–whether verbally or by way of display on an LED screen–when we asked them why they did so, only an irredeemable chauvinist would deny them admission to the community of moral persons.

Eva claims that a robot’s ‘mother’ and her ‘father’–her human designers–may love her in much the same way they would love their human children. For she may bring joy to their lives in much the same way a human child would; she may smile, laugh giddily, play pranks, gaze at them in adoration, demand their protection and care, respond to their affectionate embraces, and so on. In doing so, she provokes older, evolutionarily established instincts of ours. These reactions of ours may strike us as so compelling that even a look ‘under the hood’ may not deter their expression. We might come to learn that extending such feelings of acceptance and care to beings we had not previously considered so worthy might make new forms of life and relationships manifest. That doesn’t seem like such a bad bargain.