Blade Runner 2049: Our Slaves Will Set Us Free

Blade Runner 2049 is a provocative visual and aural treat. It sparked many thoughts, two of which I make note of here; the relationship between the two should be apparent.

  1. What is the research project called ‘artificial intelligence’ trying to do? Is it trying to make machines that can do the things which, if done by humans, would be said to require intelligence? Regardless of the particular implementation? Is it trying to accomplish those tasks in the way that human beings do them? Or is it trying to find a non-biological method of reproducing human beings? These are three very different tasks. The first is a purely engineering task; the machine must accomplish the task regardless of the method–any route to the solution will do, so long as it is tractable and efficient. The second is cognitive science, inspired by Giambattista Vico: “the true and the made are convertible” (Verum et factum convertuntur) or “the true is precisely what is made” (Verum esse ipsum factum); we will only understand the mind, and possess a ‘true’ model of it, when we make it. The third is more curious (and related to the second)–it immediately implicates us in the task of making artificial persons. Perhaps by figuring out how the brain works we can mimic human cognition, but this capacity might be placed in a non-human form made of silicon or plastic or some metal; the artificial persons project insists on a human form–the android or humanoid robot–and on replicating uniquely human capacities, including the moral and aesthetic ones. This would require the original cognitive science project to be extended to an all-encompassing project of understanding human physiology so that its bodily functions can be replicated. Which immediately raises the question: why make artificial persons? We have a perfectly good way of making human replicants, and many people actually enjoy engaging in this process. So why make artificial persons this way? If the answer is to increase our knowledge of human beings’ workings, then we might well ask: To what end? To cure incurable diseases? To make us happier? To release us from biological prisons so that we may, in some singularity-inspired fantasy, migrate our souls to these more durable containers? Or do we need them to be in human form, so that they can realistically–in all the right ways–fulfill all the functions we will require them to perform? For instance, as in Westworld, they could be our sex slaves, or as in Blade Runner, they could perform dangerous and onerous tasks that human beings are unwilling or unable to do. And, of course, prop up ecologically unstable civilizations like ours.
  2. It is a philosophical commonplace–well, at least to Goethe and Nietzsche, among others–that constraint is necessary for freedom; we cannot be free unless we are restrained, somehow, by law and rule and regulation and artifice. But is it necessary that we ourselves be restrained in order to be free? The Greeks figured out that a man could be enslaved, lose his freedom, and that through this loss his owner, his master, could be free; as Hannah Arendt puts it in The Human Condition, the work of the slaves–barbarians and women–does ‘labor’ for the owner, keeping the owner alive, taking care of his biological necessity, and freeing him up to go to the polis and do politics in a state of freedom, in the company of other property-owning householders like him. So: the slave is necessary for freedom; either we enslave ourselves, suppress our appetites and desires and drives and sublimate and channel them into the ‘right’ outlets, or we enslave someone else. (Freud noted glumly in Civilization and Its Discontents that civilization enslaves our desires.) If we cannot enslave humans, with all their capricious desires to be free, then we can enslave other creatures, perhaps animals, domesticating them to turn them into companions and food. And if we ever become technologically adept at reproducing those processes that produce humans or persons, we can make copies–replicants–of ourselves, artificial persons, that mimic us in all the right ways, and keep us free. These slaves, by being slaves, make us free.

Much more on Blade Runner 2049 anon.

The Lost Art Of Navigation And The Making Of New Selves

Giving, and following, driving directions was an art. A cartographic communication, conveyed and conducted by spoken description, verbal transcription, and subsequent decipherment. You asked for a route to a destination, and your partner in navigation issued a list of waypoints, landmarks, and driving instructions; you wrote these down (or, bravely, committed them to memory); then, as you drove, you compared those descriptions with actual, physical reality, checking to see if a useful correspondence obtained; then, ideally, you glided ‘home.’ A successful navigation was an occasion for celebration by both direction-giver and direction-follower; hermeneutical encounters like theirs deserved no less; before, there was the unknown space, forbidding and inscrutable; afterwards, there was a destination, magically clarified, made visible, and arrived at. There were evaluative scales to be found here: some were better at giving directions than others, their sequences of instructions clear and concise, explicit specifications expertly balanced by the exclusion of superfluous, confusing detail; not all were equally proficient at following directions, some mental compasses were easily confused by turns and intersections, some drivers’ equanimity was easily disturbed by difficult traffic and a missed exit or two. (Reading and making maps, of course, has always been an honorable and valuable skill in our civilizations.)

Which reminds us that driving while trying to navigate was often stressful and sometimes even dangerous (sudden attempts to take an exit, or to avoid taking one, cause crashes all the time). The anxiety of the lost driver has a peculiar phenomenological quality all its own, enhanced in terrifying ways by the addition of bad neighborhoods, fractious family members, darkness, hostile drivers in traffic. And so, Global Positioning System (GPS) navigators–with their pinpoint, precise routes colorfully, explicitly marked out–were always destined to find a grateful and receptive following. An interactive, dynamic, realistic map, updated in real time, is a Good Thing, even if the voices in which it issued its commands and directions were sometimes a little too brusque or persistent.
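An aside on mechanism, for the curious: under the hood, a navigator of this kind is doing a shortest-path search over a weighted road graph. The sketch below illustrates that core idea with Dijkstra’s algorithm; the toy road network and its node names are invented for the example, and real systems add live traffic weights, heuristics, and much else.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over a weighted road graph: the core of route-finding.
    graph maps node -> list of (neighbor, distance) pairs."""
    queue = [(0.0, start, [start])]   # (cost so far, node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, dist in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + dist, neighbor, path + [neighbor]))
    return float("inf"), []

# A toy road network (distances in miles; nodes and numbers invented).
roads = {
    "home":     [("exit_12", 2.0), ("main_st", 1.0)],
    "main_st":  [("exit_12", 0.5), ("downtown", 3.0)],
    "exit_12":  [("downtown", 2.0)],
    "downtown": [],
}
print(shortest_route(roads, "home", "downtown"))
# -> (3.5, ['home', 'main_st', 'exit_12', 'downtown'])
```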

But as has been obvious for a long time now, once you start offloading and outsourcing your navigation skills, you give them away for good. Dependency on the GPS is almost instantaneous and complete; very soon, you cannot drive anywhere without one. (Indeed, many cannot walk without one either.) The deskilling in this domain has been, like that in many others in which automation has found a dominant role, quite spectacular. Obviously, I speak from personal experience; I was only too happy to rely on GPS navigators when I drove, and now do not trust myself to follow even elementary verbal or written driving directions. I miss some of my old navigating skills, but I do not miss, even for a second, the anxiety, frustration, irritation, and desperation of feeling lost. An older set of experiences, an older part of me, is gone, melded and merged with a program, a console, an algorithm; the blessing is, expectedly, a mixed one. Over the years, I expect this process will continue; bits of me will be offloaded into an increasingly technologized environment, and a newer self will emerge.

Artificial Intelligence And Go: (Alpha)Go Ahead, Move The Goalposts

In the summer of 1999, I attended my first ever professional academic philosophy conference–in Vienna. At the conference, one titled ‘New Trends in Cognitive Science’, I gave a talk titled (rather pompously) ‘No Cognition without Representation: The Dynamical Theory of Cognition and The Emulation Theory of Mental Representation.’ I did the things you do at academic conferences as a graduate student in a job-strapped field: I hung around senior academics, hoping to strike up a conversation (I think this is called ‘networking’); I tried to ask ‘intelligent’ questions at the talks, hoping my queries and remarks would mark me out as a rising star, one worthy of being offered a tenure-track position purely on the basis of my sparkling public presence. You know the deal.

Among the talks I attended–a constant theme of which was the prospect of the mechanization of the mind–was one on artificial intelligence. Or rather, more accurately, the speaker concerned himself with evaluating the possible successes of artificial intelligence in domains like game-playing. Deep Blue had just beaten Garry Kasparov in an unofficial human-machine chess championship in 1997, and such questions were no longer idle queries. In the wake of Deep Blue’s success, the usual spate of responses–to news of artificial intelligence’s advance in some domain–had ensued: Deep Blue’s success did not indicate any ‘true intelligence’ but rather pure ‘computing brute force’; a true test of intelligence awaited in other domains. (Never mind that beating a human champion in chess had always been held out as a kind of Holy Grail for game-playing artificial intelligence.)

So, during this talk, the speaker elaborated on what he took to be artificial intelligence’s true challenge: learning and mastering the game of Go. I did not fully understand the contrasts drawn between chess and Go, but they seemed to come down to two vital ones: human Go players relied a great deal–indeed, had to rely–on ‘intuition’, and on a ‘positional sizing-up’ that could not be reduced to an algorithmic process. Chess did not rely on intuition to the same extent; its board assessments were more amenable to algorithmic calculation. (Go’s much larger state space was also a problem.) Therefore, roughly, success in chess was not so surprising; the real challenge was Go, and that was never going to be mastered.
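The state-space point, at least, is easy to make concrete. A crude measure of how hostile a game is to brute-force search is its game-tree size, roughly the branching factor raised to the length of a typical game. The back-of-the-envelope sketch below uses the standard rough estimates for the two games; the exact figures matter less than the gulf between them.

```python
import math

# Back-of-the-envelope game-tree sizes, using standard rough estimates:
# (average branching factor, typical game length in plies).
games = {
    "chess": (35, 80),
    "go":    (250, 150),
}

for name, (branching, plies) in games.items():
    # Game-tree size ~ branching ** plies; report it as a power of ten.
    exponent = plies * math.log10(branching)
    print(f"{name}: ~10^{exponent:.0f} possible games")

# chess: ~10^124 possible games
# go:    ~10^360 possible games
```

No amount of ‘computing brute force’ of the Deep Blue variety exhausts a space of 10^360 games; whatever a Go-playing program does, it cannot be that.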

Yesterday, Google’s DeepMind AlphaGo system beat the South Korean Go master Lee Se-dol in the first of an intended five-game series. Mr. Lee conceded defeat in three and a half hours. His pre-game mood was optimistic:

Mr. Lee had said he could win 5-0 or 4-1, predicting that computing power alone could not win a Go match. Victory takes “human intuition,” something AlphaGo has not yet mastered, he said.

Later, though, he said that “AlphaGo appeared able to imitate human intuition to a certain degree,” a fact borne out for him during the game, when AlphaGo “made a move so unexpected and unconventional” that he thought “it was impossible to make such a move.”

As Jean-Pierre Dupuy noted in The Mechanization of the Mind, a very common response to the ‘mechanization of mind’ is that such attempts merely simulate or imitate, and are mere fronts for machinic complexity–but these responses never seem to consider the possibility that the phenomenon they consider genuine, the model for imitation and simulation, can only retain such a status as long as simulations and imitations remain flawed. As those flaws diminish, the privileged status of the ‘real thing’ diminishes in turn. A really good simulation, indistinguishable from the ‘real thing,’ should make us wonder why we grant the original so distinct a station.

Don’t be a “Crabby Patty” About AI

Fredrik DeBoer has written an interesting post on artificial intelligence, one that is pessimistic about its prospects and skeptical about some of the claims made for its success. I disagree with some of its implicit premises and claims.

AI’s goals can be understood as being two-fold, depending on your understanding of the field. First, to make machines that can perform tasks which, if performed by humans, would be said to require “intelligence.” Second, to understand human cognitive processes and replicate them in a suitable architecture. The first enterprise is engineering; the second, cognitive science (Vico-style: “the true and the made are convertible”).

The first cares little about the particular implementation mechanism or the theoretical underpinning of task performance; success in task execution is all. If you can make a robot capable of brewing a cup of tea in kitchens strange and familiar, it does not matter what its composition, computational architecture, or control logics are; all that matters is that brewed cup of tea. The second cares little about the implementation medium–it could be silicon and plastic–but it does care about the mechanism employed; it must faithfully instantiate and realize an abstraction of a distinctly human cognitive process. Aeronautical engineers replicate the flight of feathered birds using aluminum and jet engines; they physically instantiate abstract principles of flight. The cognitive science version of AI seeks to perform a similar feat for human cognition; AI should validate our best science of mind.
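A toy illustration of the distinction (the task, and both implementations, are invented for this sketch): two pluralizers with identical task performance, one a memorized lookup table with no theory of English at all, the other mimicking the rough rules a human speaker might apply. The engineering stance cares only that the outputs match; the cognitive-science stance cares which mechanism resembles ours.

```python
# 'Engineering' route: a memorized lookup table; no theory of English at all.
LOOKUP = {"cat": "cats", "box": "boxes", "child": "children", "sheep": "sheep"}

def pluralize_lookup(noun):
    return LOOKUP[noun]

# 'Cognitive' route: rules of the rough kind a human speaker might apply,
# plus a short list of memorized exceptions.
def pluralize_rules(noun):
    irregulars = {"child": "children", "sheep": "sheep"}
    if noun in irregulars:
        return irregulars[noun]
    if noun.endswith(("s", "x", "z", "ch", "sh")):
        return noun + "es"
    return noun + "s"

# Identical performance on the task, entirely different mechanisms.
for noun in ["cat", "box", "child", "sheep"]:
    assert pluralize_lookup(noun) == pluralize_rules(noun)
```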

I take DeBoer’s critique of so-called “statistical” or “big-data” AI to be: you’re only working toward the first goal, not toward the second. That is a fair observation, but it does not establish the further conclusion that cognitive science is the “right” or the “only” way to realize artificial intelligence. Nor does it establish that engineering AI is a useless distraction in the task of understanding human cognition, or what artificial intelligence or even “real intelligence” might be. Cognitive science AI is not the only worthwhile paradigm for AI, nor the only intellectually useful one.

To see this, consider what the successes–even partial–of engineering AI tell us: intelligence is not one thing, it is many; intelligence is achievable both by mimicking human cognitive processes and by not doing so; in some cases, it is more easily achieved by the latter. The successes of engineering AI should tell us that the very phenomenon–intelligence–that we take ourselves to be studying in cognitive science isn’t well understood; they tell us the very thing being studied–“mind”–might not be a thing to begin with. (DeBoer rightly disdains the “mysterianism” in claims like “intelligence is an emergent property,” but he seems comfortable with the chauvinism of “intelligence achievable by non-human means isn’t intelligence.” A simulation of intelligence isn’t “only” a simulation; it forces us to reckon with the possibility that “real intelligence” might be “only a simulation.”)

What we call intelligence is a performative capacity; creatures that possess intelligence can do things in this world; the way humans accomplish those tasks is of interest, but so are other ways of doing so. They show us that many relationships to our environment can be described as “cognitive” or “mindful”; if giant lookup-table machines and human beings can both play chess and write poems, then that tells us something interesting about the nature of those capacities. If language comprehension can be achieved by statistical methods, then that tells us we should regard our own linguistic capacities in a different light; a speaking and writing wind-up toy should make us revisit the phenomenon of language anew: just what is this destination, reachable by such radically dissimilar routes–‘human cognition’ and ‘machine learning’?
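To make the ‘speaking wind-up toy’ vivid, here is a statistical language generator in miniature: a bigram model that ‘learns’ language as nothing but counts of which word follows which. It is a deliberately minimal sketch of the statistical approach, standing in for no particular real system; the training text is invented.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """'Learn' language as nothing more than word-succession statistics."""
    successors = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        successors[current].append(following)
    return successors

def babble(successors, word, length=8):
    """Generate text by sampling the learned statistics: no grammar, no meaning."""
    output = [word]
    for _ in range(length):
        choices = successors.get(word)
        if not choices:
            break
        word = random.choice(choices)
        output.append(word)
    return " ".join(output)

corpus = ("the slave keeps the master free "
          "the master keeps the slave enslaved "
          "the machine keeps the human free")
model = train_bigrams(corpus)
print(babble(model, "the"))  # e.g. 'the master keeps the slave enslaved the machine keeps'
```

Scale the counts up by many orders of magnitude and sharpen the sampling, and the question above gets harder, not easier, to wave away.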

DeBoer rightly points out the difficulties both AI methodologies face; I would go further and say that, given our level of (in)comprehension, we do not even possess much of a principled basis for so roundly dismissing the claims made by statistical or big-data AI. It might turn out that the presuppositions of cognitive science are altered by the successes of engineering AI, changing its methodologies and indicators of success; cognitive science might be looking in the wrong places for the wrong things.

Physical and Psychological Affordance

According to Wikipedia, ‘an affordance is a quality of an object, or an environment, which allows an individual to perform an action. For example, a knob affords twisting, and perhaps pushing, while a cord affords pulling.’ (A photograph of a tea set in the Wikipedia entry bears the caption, ‘The handles on this tea set provide an obvious affordance for holding.’) Later we learn that James J. Gibson introduced ‘affordance’ in his 1977 article “The Theory of Affordances”; he ‘defined affordances as all “action possibilities” latent in the environment, objectively measurable and independent of the individual’s ability to recognize them, but always in relation to the actor and therefore dependent on their capabilities.’

I do not now remember where I first encountered the term–probably in my readings of the embodied cognition literature in graduate school. It has always struck me as a marvelously evocative term, and one of those that almost immediately serves to illuminate the world in a different light. We are physical beings, minds and bodies united, caught up in a tightly coupled system of world and agent; the world provides us affordances for our particular modes of interaction with it; we modify the world, modifying its affordances, and change in response; and so on. The dynamic, mutually determining nature of this interaction stood clarified. Thinking of the world as equipped with affordances helped me envision the evolutionary filtration of the environment better; those creatures with traits suitable for the environment’s affordances were evolutionarily successful. Knobs and cords can only be twisted and pulled by those suitably equipped–mentally and physically–for doing so. Babies learn to walk in an environment that provides them the means for doing so–level, firm surfaces–and not others. An affordance-rich environment for walking, perhaps equipped with handles for grasping or helpful parents reaching out to provide support, facilitates the learning of walking. And so on.
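Gibson’s relational point–that an affordance belongs neither to the object alone nor to the agent alone, but to the pairing–fits in a few lines of code. The objects, agents, and capability names below are invented for the illustration; this is a conceptual sketch, not a claim about any existing model.

```python
# An affordance is relational: action possibilities latent in the object,
# but realized only relative to a particular actor's capabilities.

OBJECTS = {
    "knob":   {"grasp", "twist"},
    "cord":   {"grasp", "pull"},
    "handle": {"grasp"},
}

AGENTS = {
    "adult":  {"grasp", "twist", "pull"},
    "infant": {"grasp"},   # can grasp, cannot yet twist or pull
}

def affordances(agent, obj):
    """Actions the environment offers *this* actor: the intersection of what
    the object invites and what the agent can do."""
    return OBJECTS[obj] & AGENTS[agent]

print(affordances("adult", "knob"))    # {'grasp', 'twist'} (set order may vary)
print(affordances("infant", "knob"))   # {'grasp'}: same knob, fewer affordances
```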

But ‘affordance’ need not be understood in purely physical terms. We can think of the world of psychological actors as providing psychological affordances too. An agent with a particular psychological makeup is plausibly understood as providing for certain modes of interaction with it: a hostile youngster, bristling with resentment and suspicion of authority, restricts the space of possibilities for other agents to interact with him; the affordances he provides are minimal; others are more capacious in the affordances they provide. A psychological agent’s life can be viewed as a movement through a space of affordances; his trajectories through it are determined by his impingement on others and vice-versa; he finds his responses modified by those that the space allows or affords. As parents find out when they raise a child, theories of learning and rearing only go so far; the particular make-up of the pupil feeds back to the parent and can modify the rearing strategy; the child provides only some of the affordances that work with the child-rearing theory of choice. An inmate in jail is stuck in a very particular domain of psychological affordances; he will find his reactions modified accordingly.

Thinking of our exchanges with the world and other human beings in this light helps illuminate quite clearly our dependence on them, and our influence upon them; we are not solitary trailblazers; rather, at every step, we are pressed on, and push back. What emerges at every point, and at the end, bears the impress of these rich relationships with our environment, both physical and psychological.