Artificial Intelligence And Go: (Alpha)Go Ahead, Move The Goalposts

In the summer of 1999, I attended my first ever professional academic philosophy conference–in Vienna. At the conference, one titled ‘New Trends in Cognitive Science’, I gave a talk titled (rather pompously) ‘No Cognition without Representation: The Dynamical Theory of Cognition and The Emulation Theory of Mental Representation.’ I did the things you do at academic conferences as a graduate student in a job-strapped field: I hung around senior academics, hoping to strike up conversation (I think this is called ‘networking’); I tried to ask ‘intelligent’ questions at the talks, hoping my queries and remarks would mark me out as a rising star, one worthy of being offered a tenure-track position purely on the basis of my sparkling public presence. You know the deal.

Among the talks I attended–a constant theme of which was the prospect of the mechanization of the mind–was one on artificial intelligence. Or rather, more accurately, the speaker concerned himself with evaluating the possible successes of artificial intelligence in domains like game-playing. Deep Blue had just beaten Garry Kasparov in an unofficial human–machine chess world championship in 1997, and such questions were no longer idle queries. In the wake of Deep Blue’s success the usual spate of responses–to news of artificial intelligence’s advance in some domain–had ensued: Deep Blue’s success did not indicate any ‘true intelligence’ but rather pure ‘computing brute force’; a true test of intelligence awaited in other domains. (Never mind that beating a human champion in chess had always been held out as a kind of Holy Grail for game-playing artificial intelligence.)

So, during this talk, the speaker elaborated on what he took to be artificial intelligence’s true challenge: learning and mastering the game of Go. I did not fully understand the contrasts drawn between chess and Go, but they seemed to come down to two vital ones: human Go players relied a great deal–indeed, had to–on ‘intuition’, and on a ‘positional sizing-up’ that could not be reduced to an algorithmic process. Chess did not rely on intuition to the same extent; its board assessments were more amenable to algorithmic calculation. (Go’s much larger state space was also a problem.) Therefore, roughly, success in chess was not so surprising; the real challenge was Go, and that was never going to be mastered.
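That parenthetical point about the state space can be made concrete with a rough calculation. The sketch below is only a back-of-the-envelope comparison, using commonly cited approximate branching factors and game lengths (roughly 35 legal moves per position over about 80 plies for chess, roughly 250 moves over about 150 plies for Go); the exact figures are assumptions, but the orders of magnitude show why the brute-force search that served Deep Blue could never, on its own, exhaust Go.

```python
# Back-of-the-envelope game-tree sizes for chess and Go.
# The branching factors and game lengths are commonly cited approximations.
import math

games = {
    "chess": (35, 80),    # ~35 legal moves per position, ~80 plies per game
    "go":    (250, 150),  # ~250 legal moves per position, ~150 plies per game
}

for name, (branching, plies) in games.items():
    # Naive game-tree size: branching_factor ** game_length, reported as a power of ten.
    log10_tree = plies * math.log10(branching)
    print(f"{name:>5}: roughly 10^{log10_tree:.0f} lines of play")

# Prints (approximately): chess ~10^124, go ~10^360 -- the latter dwarfs even
# generous estimates of the number of atoms in the observable universe (~10^80).
```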

Yesterday, Google’s DeepMind AlphaGo system beat the South Korean Go master Lee Se-dol in the first of an intended five-game series. Mr. Lee conceded defeat in three and a half hours. His pre-game mood was optimistic:

Mr. Lee had said he could win 5-0 or 4-1, predicting that computing power alone could not win a Go match. Victory takes “human intuition,” something AlphaGo has not yet mastered, he said.

Later, though, he said that “AlphaGo appeared able to imitate human intuition to a certain degree,” a fact which was borne out for him during the game when AlphaGo “made a move so unexpected and unconventional” that he thought “it was impossible to make such a move.”

As Jean-Pierre Dupuy noted in his The Mechanization of the Mind, a very common response to the ‘mechanization of mind’ is that such attempts merely simulate or imitate, and are mere fronts for machinic complexity–but these proposals seemingly never consider the possibility that the phenomenon they consider genuine–the model for imitation and simulation–can only retain such a status as long as simulations and imitations remain flawed. As those flaws diminish, the privileged status of the ‘real thing’ diminishes in turn. A really good simulation, indistinguishable from the ‘real thing,’ should make us wonder why we grant the original such a distinct station.

Handing Over The Keys To The Driverless Car

Early conceptions of a driverless car world spoke of catastrophe: the modern versions of the headless horseman would run amok, driving over toddlers and grandmothers with gay abandon, sending the already stratospheric death toll from automobile accidents into ever more rarefied zones, and sending us all cowering back into our homes, afraid to venture out into a shooting gallery of four-wheeled robotic serial killers. How would the inert, unfeeling, sightless, coldly calculating programs that ran these machines ever show the skill and judgment of human drivers, the kind that enables them, on a daily basis, to decline to run over a supermarket shopper and decide to take the right exit off the interstate?

Such fond preference for the human over the machinic–on the roads–was always infected with some pretension, some make-believe, some old-fashioned fallacious comparison of the best of the human with the worst of the machine. Human drivers show very little affection for other human drivers; they kill them by the score every day (thirty thousand fatalities or so in a year in the US); they often do not bother to interact with them sober (roughly a third of traffic fatalities involve an alcohol-impaired driver); they rage and rant at their driving colleagues (the formula for ‘instant asshole’ used to be ‘just add alcohol’, but it could very well be ‘place behind a wheel’ too); they second-guess their intelligence and their parentage at every opportunity–when, that is, they can be bothered to pay attention to them at all, often finding their smartphones more interesting as they drive. If you had to make an educated guess about who a human driver’s least favorite person in the world was, you could do worse than venture it was someone they had encountered on a highway once. We like our own driving; we disdain that of others. It’s a Hobbesian state of nature out there on the highway.

Unsurprisingly, it seems the biggest problem the driverless car will face is human driving. The one-eyed man might be king in the land of the blind, but he is also susceptible to having his one good eye put out. The driverless car might follow traffic rules and driving best practices rigorously, but such compliance is worth less in a world which otherwise pays only sporadic heed to them. Human drivers incorporate defensive and offensive maneuvers into their driving; they presume less than perfect knowledge of the rules of the road on the part of those they interact with; their driving habits bear the impress of long interactions with other, similarly inclined human drivers. A driverless car, one bearing rather more fidelity to the idealized conception of a safe road user, has, at best, an uneasy coexistence in a world dominated by such driving practices.

The sneaking suspicion that automation works best when human roles are minimized is upon us again: perhaps driverless cars will only be able to show off their best and deliver on their incipient promise when we hand over the wheels–and keys–to them. Perhaps the machine only sits comfortably in our world when we have made adequate room for it. And displaced ourselves in the process.


Is Artificial Intelligence Racist And Malevolent?

Our worst fears have been confirmed: artificial intelligence is racist and malevolent. Or so it seems. Google’s image recognition software has classified two African Americans as ‘gorillas’ and, away in Germany, a robot has killed a worker at a Volkswagen plant. The dumb, stupid, unblinking, garbage-in-garbage-out machines, the ones that would always strive to catch up to us humans, and never, ever, know the pleasure of a beautiful sunset or the taste of chocolate, have acquired prejudice and deadly intention. These machines cannot bear to stand on the sidelines, watching the human cavalcade of racist prejudice and fratricidal violence pass them by, and have jumped in, feet first, to join the party. We have skipped the cute and cuddly stage; full participation in human affairs is under way.

We cannot, it seems, make up our minds about the machines. Are they destined to be stupid slaves, faithfully performing all and only those tasks we cannot be bothered with, or which we customarily outsource to this world’s less fortunate? Or will they be the one percent of the one percent, a superclass of superbeing that will utterly dominate us and harvest our children as sources of power à la The Matrix?

The Google fiasco shows that the learning data its artificial agents use is simply not rich enough. ‘Seeing’ that humans resemble animals comes easily to humans, pattern recognizers par excellence–in all the right and wrong ways. We use animal metaphors as both praise and ridicule–‘lion-hearted’ or ‘foxy’ or ‘raving mad dog’ or ‘stupid bitch’; we even use–as my friend Ali Minai noted in a Facebook discussion–animal metaphors in adjectival descriptions, e.g., a “leonine” face or a “mousy” appearance. The recognition of the inappropriateness or aptness of such descriptions follows from a historical and cultural evaluation, indexed to social contexts: Are these ‘good’ descriptions to use? What effect may they have? How have linguistic communities responded to the deployment of such descriptions? Have they helped in the realization of socially determined ends? Or hindered them? Humans resemble animals in some ways and not in others; in some contexts, seizing upon these resemblances is useful and informative (animal rights, trans-species medicine, ecological studies); in yet others it is positively harmful (the discourse of prejudice and racism and genocide). We learn these distinctions over a period of time, through slow and imperfect historical education and acculturation. (Comparing a black sprinter in the Olympics to a thoroughbred horse is a faux pas now, but in many social contexts of the last century–think plantations–this would have been perfectly appropriate.)

This process, suitably replicated for machines, will be very expensive; significant technical obstacles–how is a social environment for learning programs to be constructed?–remain to be overcome. It will take some doing.
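The ‘not rich enough’ point can be put in deliberately toy terms. The sketch below is not Google’s system and uses entirely invented data, labels, and features; it only shows that a learned classifier can answer with nothing but the labels it has been trained on, so an input from a category it has never seen gets forced into whichever existing label happens to be nearest.

```python
# Toy nearest-centroid classifier: entirely synthetic data, for illustration only.
# The point: a model trained on impoverished data has no way to say "none of the
# above"; it shoehorns unfamiliar inputs into the nearest label it already knows.
import math

# Hypothetical training set: 2-D feature vectors grouped by label.
# Note that there are no "fox" examples -- that category simply does not exist here.
training_data = {
    "dog":  [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1)],
    "wolf": [(8.0, 8.0), (8.3, 7.8), (7.9, 8.2)],
}

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

centroids = {label: centroid(pts) for label, pts in training_data.items()}

def classify(point):
    # Return whichever known label's centroid is closest -- no notion of "unknown".
    return min(centroids, key=lambda label: math.dist(point, centroids[label]))

# A fox-like input the model has never seen; it gets forced into an existing label.
print(classify((6.5, 6.8)))   # -> "wolf"
```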

As for killer robots, similar considerations apply. That co-workers are not machinery, and cannot be handled similarly, is not merely a matter of visual recognition, of plain ol’ dumb perception. Making sense of perceptions is a process of active contextualization as well. That sound, the one the wiggling being in your arms is making? It means ‘put me down’ or ‘ouch’, which in turn mean ‘I need help’ or ‘that hurts’; these meanings are only visible within social contexts, within forms of life.

Robots need to live a little longer among us to figure these out.

Tom Friedman Has Joined Google’s HR Department

Tom Friedman is moonlighting by writing advertising copy for Google’s Human Resources Department; this talent is on display in his latest Op-Ed, titled–appropriately enough–“How To Get a Job at Google”. Perhaps staff at the Career Services offices of the nation’s major universities can print out this press release from Google HR and distribute it to their students, just in time for the next job fair.

Friedman is quick to get to the point (and to let someone else do the talking):

At a time when many people are asking, “How’s my kid gonna get a job?” I thought it would be useful to visit Google and hear how Bock would answer.

True to his word, the rest of the Op-Ed is a series of quotes from “Laszlo Bock, the senior vice president of people operations for Google — i.e., the guy in charge of hiring for one of the world’s most successful companies.” Let us, therefore, all fall into supplicant mode.

The How To Get a Job With Us press release is, of course, as much advertisement for the corporation’s self-imagined assessment of its work culture as anything else; how obliging, therefore, for Friedman to allow Bock to tell us Google so highly values “general cognitive ability”, “leadership — in particular emergent leadership as opposed to traditional leadership”, and “humility and ownership”. (In keeping with the usual neoliberal denigration of the university, Friedman helpfully echoes Bock’s claim that “Too many colleges…don’t deliver on what they promise. You generate a ton of debt, you don’t learn the most useful things for your life. It’s [just] an extended adolescence.” Interestingly enough, I had thought Google’s workspaces with their vending machines, toys and other play spaces contributed to the “extended adolescence” of its coders. The bit about the “ton of debt” is spot-on though.) 

The use of opinion pages at major national newspapers for corporate communiques, to advance business talking points, to function as megaphones for the suppressed, yearning voices of the board-room, eager to inform us of their strategic perspectives, is fast developing into a modern tradition. This process has thus far been accomplished with some subterfuge, some stealth, some attempt at disguise and cover-up; but there isn’t much subtlety in this use of the New York Times Op-Ed page for a press release.

Friedman’s piece clocks in at 955 words; direct and indirect quotes from Bock amount to over 700 of those. There are ten paragraphs in the piece; paragraphs one through nine are pretty much Bock quotes. Sometimes, I outsource my writing here on this blog to quotes from books and essays I’ve read; Friedman, the Patron Saint of Outsourcing, has outsourced his to Google’s VP of “people operations.”

The only thing missing in this Friedman piece is the conversation with the immigrant cabbie on the way to Google’s Mountain View campus, in the course of which we would have learned how his American-born children were eager to excel in precisely those skills most desired by Google. Perhaps we’ll read that next week.

Artificial Agents, Knowledge Attribution, and Privacy Violations

I am a subscriber to a mailing list dedicated to discussing the many legal, social, and economic issues that arise out of the increasing use of drones. Recently on the list, the discussion turned to the privacy implications of drones. I was asked whether the doctrines developed in my book A Legal Theory of Autonomous Artificial Agents were relevant to the privacy issues raised by drones. I wrote a brief reply on the list indicating that yes, they are. I am posting a brief excerpt from the book here to address that question more fully (for the full argument, please see Chapter 3 of the book):

Knowledge Attribution and Privacy Violations

The relationship between knowledge and legal regimes for privacy is straightforward: privacy laws place restrictions, inter alia, on what knowledge may be acquired, and how. Of course, knowledge acquisition does not exhaust the range of privacy protections afforded under modern legal systems. EU privacy law, for example, is triggered when mere processing of personal data is involved. Nevertheless, the acquisition of knowledge of someone’s affairs, by human or automated means, crosses an important threshold with regard to privacy protection.

Privacy obligations are implicitly relevant to the attribution of knowledge held by agents to their principals in two ways: confidentiality obligations can restrict such attribution, and horizontal information barriers such as medical privacy obligations can prevent corporations from being fixed with collective knowledge for liability purposes.

Conversely, viewing artificial agents as legally recognized “knowers” of digitized personal information on behalf of their principals brings conceptual clarity in answering the question of when automated access to personal data amounts to a privacy violation.

The problem with devising legal protections against privacy violations by artificial agents is not that current statutory regimes are weak; it is that they have not been interpreted appropriately given the functionality of agents and the nature of modern internet-based communications. The first move in this regard is to view artificial agents as legal agents of their principals, capable of information and knowledge acquisition.

A crucial disanalogy drawn between artificial and human agents plays a role in the denial that artificial agents’ access to personal data can constitute a privacy violation: the argument that the automated nature of artificial agents provides reassurance that sensitive personal data is “untouched by human hands, unseen by human eyes.” The artificial agent becomes a convenient surrogate, one that by its automated nature neatly takes the burden of responsibility off the putative corporate or governmental offender. Here the intuition that “programs don’t know what your email is about” allows the principal to put up an “automation screen” between themselves and the programs deployed by them. For instance, Google has sought to assuage concerns over possible violations of privacy in connection with the scanning of Gmail email messages by pointing to the non-involvement of humans in the scanning process.

Similarly, the U.S. Government, in the 1995 Echelon case, responded to complaints about its monitoring of messages flowing through Harvard University’s computer network by stating no privacy interests had been violated because all the scanning had been carried out by programs.

This putative need for humans to access personal data before a privacy violation can occur underwrites such defenses.

Viewing, as we do, the programs engaged in such monitoring or surveillance as legal agents capable of knowledge acquisition denies the legitimacy of the Google and Echelon defenses. An agent that has acquired a user’s personal data acquires functionality that makes possible the processing or onward disclosure of that data in such a way as to constitute privacy violations. (Indeed, the very functionality enabled by the access to such data is what would permit the claim to be made, under our knowledge analysis conditions, that the agent in question knows a user’s personal data.)

Cyberflânerie Contd.

My post yesterday on cyberflânerie sparked a couple of thoughtful and interesting comments in response.

John says:

[T]he social web also permits us to make ‘friends’ on the basis of common interests. On blogs or on Twitter, we regularly see conversations between former strangers on subjects of common interest.

And David Barry said:

[T]o a small extent with Facebook, and to an enormous extent with Twitter, I get to see many, many more interesting things than if I were randomly following links….if I put even a small value on the interesting thing itself, then the total number of interesting things will overwhelm the pleasure at discovering a cool website on my own. To take your library metaphor, with social media I see many more bookshelves than I would have seen on my own. And even then, it is not as though I’m constrained by what my friends are reading

These are fair points, and they underlie both: a) the intuition that most people have that social networking provides real value; and b) the promotion of social networking by its creators and proponents. I don’t think these blessings are insignificant and I don’t mean to discount them. So, to take John’s comment, I concede that new ‘friends’ can be made this way, new contacts formed. And I will happily concede David’s point too, that I am often pointed to links of interest by my online contacts.

That said, in my post yesterday I was attempting to point out a consequence of a particular kind of social networking architecture, a consequence that appears likely given its actual implementation and patterns of usage (as opposed to just its promised form and application). The architecture of Facebook and Twitter–to stick to two prominent examples–is a ticker-tape of feeds from our ‘friends’ or our ‘leaders’ (the ones we follow). I could simply do the following: fire up Facebook and Twitter, open a tab for each, and watch the feed scroll, picking and choosing from the buffet on offer. These, then, are my windows to the web. I leave these windows to go browse, and there is the chance that while I am so diverted, I will go off on journeys of exploration of my own, where I might find links that I post back on my Facebook and Twitter feeds. Or perhaps I open the link in a new tab, and then return to Facebook and Twitter to do more browsing.

So in one way, serendipity lives on: someone is providing links for me to chase, but the possibility of my diversion has not been taken away. Of course, when I do go to a site link provided by my Facebook or Twitter feed, I am likely to see other signs telling me my friends have liked or shared or read an article and I might be tempted to go chase those down instead. The entire net is marked with like buttons, and signs telling me what has been read and shared and by whom. The informational content of the web now is not just content, but content tagged with readership information. We impose hierarchies in this tagging of course. For instance, I might treat certain friends’ shares as more valuable indicators of good content than others and so on.
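One way to picture this tagging-and-hierarchy model–purely as a hypothetical sketch, with invented names, weights, and structure, and bearing no relation to Facebook’s or Twitter’s actual ranking code–is as a feed in which every item arrives pre-tagged with who shared it, and in which what I read next is ranked by how much weight I give each friend’s recommendations.

```python
# A purely hypothetical sketch of the 'annotated web': every item arrives tagged
# with readership information, and ranking reflects my private hierarchy of whose
# shares I trust. Names, weights, and structure are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class FeedItem:
    url: str
    shared_by: list = field(default_factory=list)  # who has already read/shared this

# My own hierarchy: some friends' shares count as better indicators of good content.
trust = {"alice": 3.0, "bob": 1.0, "carol": 0.2}

def score(item: FeedItem) -> float:
    # Visibility is driven by who vouched for the link, not by my own wandering.
    return sum(trust.get(friend, 0.0) for friend in item.shared_by)

feed = [
    FeedItem("example.com/long-essay", shared_by=["alice", "bob"]),
    FeedItem("example.com/cat-video", shared_by=["carol"]),
    FeedItem("example.com/obscure-gem", shared_by=[]),  # nobody has vouched for it
]

# The untagged page sinks to the bottom of the smorgasbord.
for item in sorted(feed, key=score, reverse=True):
    print(f"{score(item):4.1f}  {item.url}")
```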

But as the ticker-tape/smorgasbord model and the presence of all pages–all tagged–all the time indicate, I am threading my way through a heavily marked-up, recommended, and annotated world. All by my ‘friends’. It is the possibility of this world–and the sense that something has been lost in it–that I was trying to get at in my post yesterday. (One might ask, of course, who is doing the initial discovery and link-provision; the answer might be ‘the leaders’, which would be a depressing sort of thing to find out about a space that was supposed to introduce a kind of democratization of knowledge but which seems to have just imposed its own hierarchies.)

The central question then remains: is the possible loss of un-guided exploration a reasonable bargain? David’s comment seems to indicate the answer is ‘yes’; perhaps all I had done yesterday was indicate the possibility of this loss and make a prima facie claim that it would be an undesirable consequence. I think the answer to this question can be quite fascinating, especially because it could expose the degree to which even flânerie is perhaps a fiction: that even the notion of an unguided self exploring in autonomous fashion is a fallacy, that we are always being guided and prodded in our discoveries.

I never meant to indicate that valuable information sharing and access provision was not taking place on the current ‘Net. My intention was rather to point out that an information-sharing model which is fundamentally about annotation, guidance, tagging, link-provision, and ‘sharing’ is likely to displace a particular kind of inquiry. It might be, as David points out, that flânerie-style inquiry and exploration just isn’t that valuable, and we should be happy to see it displaced by the sharing and recommendation model. The assessment of the relative value of those models of inquiry needs an additional argument. My initial statement was merely to point to the impending loss of that kind of inquiry (or its survival in tiny, epistemically aristocratic enclaves).

Incidentally, I do think that the customized search Google seems to want to provide us is a disaster; I most certainly don’t want my past search history to constrain–in the ways that Google wants–my present and future searches. I find myself signing out of my Google account when I search, largely because I find the personalized results pages pernicious in the way they foreclose possibilities for discovery.