Handing Over The Keys To The Driverless Car

Early conceptions of a driverless car world spoke of catastrophe: the modern versions of the headless horseman would run amok, driving over toddlers and grandmothers with gay abandon, sending the already stratospheric death toll from automobile accidents into ever more rarefied zones, and sending us all cowering back into our homes, afraid to venture out into a shooting gallery of four-wheeled robotic serial killers. How would the inert, unfeeling, sightless, coldly calculating programs that ran these machines ever show the skill and judgment of human drivers, the kind that enables them, on a daily basis, to decline to run over a supermarket shopper and decide to take the right exit off the interstate?

Such fond preference for the human over the machinic–on the roads–was always infected with some pretension, some make-believe, some old-fashioned fallacious comparison of the best of the human with the worst of the machine. Human drivers show very little affection for other human drivers; they kill them by the scores every day (thirty thousand fatalities or so in a year); they often do not bother to interact with them sober (over a third of all car accidents involve a drunk driver); they rage and rant at their driving colleagues (the formula for ‘instant asshole’ used to be ‘just add alcohol’ but it could very well be ‘place behind a wheel’ too); they second-guess their intelligence and their parentage on every occasion when they can be bothered to pay attention to them at all, often finding their smartphones more interesting as they drive. If you had to make an educated guess about who a human driver’s least favorite person in the world was, you could do worse than venture it was someone they had encountered on a highway once. We like our own driving; we disdain that of others. It’s a Hobbesian state of nature out there on the highway.

Unsurprisingly, it seems the biggest problem the driverless car will face is human driving. The one-eyed might be king in the land of the blind, but he is also susceptible to having his one good eye put out. The driverless car might follow traffic rules and driving best practices rigorously, but the value of such compliance is diminished in a world that otherwise pays only sporadic heed to them. Human drivers incorporate defensive and offensive maneuvers into their driving; they presume less than perfect knowledge of the rules of the road on the part of those they interact with; their driving habits bear the impress of long interactions with other, similarly inclined human drivers. A driverless car, one bearing rather more fidelity to the idealized conception of a safe road user, has, at best, an uneasy coexistence in a world dominated by such driving practices.

The sneaking suspicion that automation works best when human roles are minimized is upon us again: perhaps driverless cars will only be able to show off their best and deliver on their incipient promise when we hand over the wheel–and the keys–to them. Perhaps the machine only sits comfortably in our world when we have made adequate room for it. And displaced ourselves in the process.

 

Is Artificial Intelligence Racist And Malevolent?

Our worst fears have been confirmed: artificial intelligence is racist and malevolent. Or so it seems. Google’s image recognition software has classified two African Americans as ‘gorillas’ and, away in Germany, a robot has killed a worker at a Volkswagen plant. The dumb, stupid, unblinking, garbage-in-garbage-out machines, the ones that would always strive to catch up to us humans, and never, ever, know the pleasure of a beautiful sunset or the taste of chocolate, have acquired prejudice and deadly intention. These machines cannot bear to stand on the sidelines, watching the human cavalcade of racist prejudice and fratricidal violence pass them by, and have jumped in, feet first, to join the party. We have skipped the cute and cuddly stage; full participation in human affairs is under way.

We cannot, it seems, make up our minds about the machines. Are they destined to be stupid slaves, faithfully performing all and only those tasks we cannot be bothered with, or which we customarily outsource to this world’s less fortunate? Or will they be the one percent of the one percent, a superclass of superbeings that will utterly dominate us and harvest our children as sources of power à la The Matrix?

The Google fiasco shows that the learning data its artificial agents use is simply not rich enough. ‘Seeing’ that humans resemble animals comes easily to humans, pattern recognizers par excellence–in all the right and wrong ways. We use animal metaphors as both praise and ridicule–‘lion-hearted’ or ‘foxy’ or ‘raving mad dog’ or ‘stupid bitch’; we even use–as my friend Ali Minai noted in a Facebook discussion–animal metaphors in adjectival descriptions, e.g., a “leonine” face or a “mousy” appearance. The recognition of the aptness or inappropriateness of such descriptions follows from a historical and cultural evaluation, indexed to social contexts: Are these ‘good’ descriptions to use? What effect may they have? How have linguistic communities responded to the deployment of such descriptions? Have they helped in the realization of socially determined ends? Or hindered them? Humans resemble animals in some ways and not in others; in some contexts, seizing upon these resemblances and differences is useful and informative (animal rights, trans-species medicine, ecological studies); in others, it is positively harmful (the discourse of prejudice, racism, and genocide). We learn these distinctions over time, through slow and imperfect historical education and acculturation. (Comparing a black sprinter in the Olympics to a thoroughbred horse is a faux pas now, but in many social contexts of the last century–think plantations–it would have been considered perfectly appropriate.)

This process, suitably replicated for machines, will be very expensive; significant technical obstacles–how is a social environment for learning programs to be constructed?–remain to be overcome. It will take some doing.
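To make the point about impoverished learning data a little more concrete, here is a deliberately toy sketch (the feature vectors, labels, and nearest-neighbor rule are all invented for illustration, and bear no resemblance to Google's actual system). A classifier can only answer in terms of the examples it has been shown, so sparse, unrepresentative training data yields confident but badly inapt labels:

```python
import math

# Entirely made-up feature vectors standing in for image embeddings.
# One category is covered by several examples, the other by just one.
training_data = [
    ((0.90, 0.10), "person"),
    ((0.85, 0.15), "person"),
    ((0.80, 0.20), "person"),
    ((0.10, 0.90), "animal"),  # a barely covered category
]

def classify(features):
    """Return the label of the single nearest training example."""
    nearest = min(training_data, key=lambda item: math.dist(item[0], features))
    return nearest[1]

# An input that sits cleanly in neither cluster still gets a confident label,
# because the classifier can only answer in terms of what it was shown.
print(classify((0.40, 0.60)))  # -> "animal", however inapt the label
```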

As for killer robots, similar considerations apply. That co-workers are not machinery, and cannot be handled similarly, is not merely a matter of visual recognition, of plain ol’ dumb perception. Making sense of perceptions is a process of active contextualization as well. That sound, the one the wiggling being in your arms is making? That means ‘put me down’ or ‘ouch’, which in turn mean ‘I need help’ or ‘that hurts’; these meanings are only visible within social contexts, within forms of life.

Robots need to live a little longer among us to figure these out.

Schwitzgebel On Our Moral Duties To Artificial Intelligences

Eric Schwitzgebel asks an interesting question:

Suppose that we someday create artificial beings similar to us in their conscious experience, in their intelligence, in their range of emotions. What moral duties would we have to them?

Schwitzgebel’s stipulations are quite extensive, for these beings are “similar to us in their conscious experience, in their intelligence, in their range of emotions.” Thus, one straightforward response to the question might be, “The same duties that we take ourselves to have to other conscious, intelligent, sentient beings–for which our moral theories provide us adequate guidance.” But the question Schwitzgebel raises is challenging because our relationship to these artificial beings is of a special kind: we have created, initialized, programmed, parametrized, customized, and trained them. We are, somehow, responsible for them. (Schwitzgebel considers and rejects two other approaches to reckoning our duties towards AIs: first, that we are justified in simply disregarding any such obligations because of our species’ distance from them; and second, that the very fact of having granted these beings existence–which is presumably infinitely better than non-existence–absolves us of any further duties toward them.) Schwitzgebel addresses the question of our duties to them with some deft consideration of the complications introduced by this responsibility and by the autonomy of the artificial beings in question, and goes on to conclude:

If the AI’s desires are not appropriate — for example, if it desires things contrary to its flourishing — I’m probably at least partly to blame, and I am obliged to do some mitigation that I would probably not be obliged to do in the case of a fellow human being….On the general principle that one has a special obligation to clean up messes that one has had a hand in creating, I would argue that we have a special obligation to ensure the well-being of any artificial intelligences we create.

The analogy with children that Schwitzgebel correctly invokes can be made to do a little more work. Our children’s moral failures vex us more than those of others do; they prompt more extensive corrective interventions by us precisely because our assessments of their actions are just a little more severe. As such, when we encounter artificial beings of the kind noted above, we will find our reckonings of our duties toward them significantly impinged on by whether ‘our children’ have, for instance, disappointed or pleased us. Artificial intelligences will not have been created without some conception of their intended ends; their failures or successes in attaining them will influence a consideration of our appropriate duties to them and will make more difficult a recognition and determination of the boundaries we should not transgress in our ‘mitigation’ of their actions and in our ensuring their ‘well-being.’ After all, parents are more tempted to extensively intervene in their child’s life when they perceive a deviation from a path they believe their children should take in order to achieve an objective the parent deems desirable.

By requiring respect and consideration for their autonomous moral natures, children exercise our moral senses acutely. We should not be surprised to be similarly examined by the artificial intelligences we create and set loose upon the world.

Beyonce And The Singularity

A couple of decades ago, I strolled through Washington Square Park on a warm summer night, idly observing the usual hustle and bustle of students, tourists, drunks, buskers, hustlers, stand-up comedians, and, sadly, folks selling oregano instead of honest-to-goodness weed. As I did so, I noticed a young man holding up flyers and yelling, “Legalize Marijuana! Impeach George Bush!” [Sr., not Jr., though either would have done just fine.] I walked over and asked for a flyer. Was a new political party being floated with these worthy objectives as central platform issues? Was there a political movement afoot, one worthy of my support? Was a meeting being called?

The flyers were for a punk rock band’s live performance the following night–at a club, a block or so away. Clickbait, you see, is as old as the hills.

Clickbait works. From the standard ‘You won’t believe what this twelve-year-old did to get his divorced parents back together’ to ‘Ten signs your daughter is going to date a loser in high school’, to ‘Nine ways you are wasting money every day’ – they all work. You are intrigued; you click; the hit-count goes up; little counters spin; perhaps some unpaid writer gets paid as a threshold is crossed; an advertiser forks out money to the site posting the link. Or something like that. It’s all about the hits; they keep the internet engine running; increasing their number justifies any means.

Many a writer finds out that the headlines for their posts have been changed to something deemed more likely to bring in readers. They often do not agree with these changes–especially when irate readers complain about their misleading nature. This becomes especially pernicious when trash talk about a piece of writing spreads–based not on its content, but on its headline, one not written by the author, but dreamed up by a website staffer instructed to do anything–anything!–to increase the day’s hit-count.

A notable personal instance of this phenomenon occurred with an essay I wrote for The Nation a little while ago. My original title for the essay was Programs, Not Just People, Can Violate Your Privacy. I argued that smart programs could violate privacy just as humans could, and that the standard defense used by their deployers–“Don’t worry, no humans are reading your email”–was deliberately and dangerously misleading. I then went on to suggest granting a limited form of legal agency to these programs–so that their deployers could be understood as their legal principals and hence attributed their knowledge and made liable for their actions. I acknowledged that the grant of personhood was a legal move that would also solve this problem, but that was not the main thrust of my argument–the grant of legal agency, enough to invoke agency law, would do.

My essay went online as Programs Are People, Too. It was a catchy title, but it was clickbait, and it created predictable misunderstanding: many readers–and non-readers–simply assumed I was arguing for greater ‘legal rights’ for programs, and immediately put me down as some kind of technophilic anti-humanist. Ironically, someone arguing for the protection of user rights online was pegged as arguing against them. The title alone was enough to convince them of it. I had thought my original title was more accurate, and it certainly seemed catchy enough to me. Not so, apparently, for the folks who ran The Nation’s site. C’est la vie.

As for Beyonce, I have no idea what she thinks about the singularity.

Artificial Intelligence And ‘Real Understanding’

Speculation about–and vigorous contestations of–the possibility of ever realizing artificial intelligence have been stuck in a dreary groove ever since the Dartmouth conference: wildly optimistic predictions about the temporal proximity of the day machines (and the programs they run) will attain human levels of intelligence; followed by skeptical critique and taunting reminders of landmarks and benchmarks not attained; triumphant announcements of once-Holy Grails attained (playing chess, winning Jeopardy, driving a car, take your pick); rejoinders that these were not especially holy or unattainable to begin with; accusations of goal-post moving; reminders, again, of ‘quintessentially human’ benchmarks not attained; rejoinders of ‘just you wait’; and so on. Ad nauseam doesn’t begin to describe it.

Gary Marcus’s skepticism about artificial intelligence is thus following a well-worn path. And like many of those who have traveled this path, he relies on easy puncturing of over-inflated pretension and on pointing out the real ‘challenging problems like understanding natural language.’ And this latter ability–[to] “read a newspaper as well as a human can”–unsurprisingly, is what ‘true AI’ should aspire to. There is always some ability that characterizes real, I mean really real, AI. All else is but ersatz, mere aspiration, a missing of the mark, an approximation. This ability is essential to our reckoning of a being as intelligent.

Because this debate’s contours are so well-worn, my response is also drearily familiar. If those who design and build artificial intelligence are to be accused of over-simplification, then those critiquing AI rely too much on mysterianism. On closer inspection, the statement “the deep-learning system doesn’t really understand anything” treats “understanding” as some kind of mysterious monolithic composite, whereas, as Marcus has himself indirectly noted elsewhere, it consists of a variety of visibly manifested linguistic competencies. (Their discreteness, obviously, makes them more amenable to piecemeal solution; the technical challenge of integrating them into the same system still remains.)

‘Understanding’ often does a great deal of work for those who would dismiss the natural language competencies of artificial intelligence: “the program summarized the story for me, but it didn’t really understand anything.” It also does work in running together two views of AI–wholesale emulation, including the structure and implementation of human cognitive architecture, versus mere successful emulation of task competence. As an interlocutor of mine once noted during a symposium on my book A Legal Theory for Autonomous Artificial Agents:

Statistical and probability-based machine-learning models (often combined with logical-knowledge based rules about the world) often produce high-quality and effective results (not quite up to the par of nuanced human translators at this point), without any assertion that the computers are engaging in profound understanding with the underlying “meaning” of the translated sentences or employing processes whose analytical abilities approach human-level cognition.

My response then was:

What is this ‘profound understanding’ that we speak of? Turns out that when we want to cash out the meaning of this term we seek refuge again in complex, inter-related displays of understanding: He showed me he understood the book by writing about it; or he showed me he understood the language because he did what I asked him to do; he understands the language because he affirms certain implications and rejects others….

[D]o I simply show by usage and deployment of a language within a particular language-using community that I understand the meanings of the sentences of that language?….If an artificial agent is so proficient, then why deny it the capacity for understanding meanings? Why isn’t understanding the meaning of a sentence understood as a multiply-realizable capacity?

Marcus is right to concentrate on competence in particular natural language tasks, but he needs a) to disdain a reductive view that takes one particular set of competences to be characteristic of something as poorly defined as human intelligence, and b) to not disdain the attainment of these competencies on the grounds that they do not emulate human cognitive structure.

Programs as Agents, Persons, or just Programs?

Last week, The Nation published my essay “Programs are People, Too.” In it, I argued for treating smart programs as the legal agents of those who deploy them, a legal change I suggest would be more protective of our privacy rights.

Among some of the responses I received was one from a friend, JW, who wrote:

[You write: But automation somehow deludes some people—besides Internet users, many state and federal judges—into imagining our privacy has not been violated. We are vaguely aware something is wrong, but are not quite sure what.]
 
I think we are aware that something is wrong and that it is less wrong.  We already have an area of the law where we deal with this, namely, dog sniffs.  We think dog sniffs are less injurious than people rifling through our luggage; indeed, the law refers to those sniffs as “sui generis.”  And I think they *are* less injurious, just like it doesn’t bother me that google searches my email with an algorithm.  This isn’t to say that it’s unreasonable for some people to be bothered by it, but I do think people are rightly aware that it is different and less intrusive than if some human were looking through their email.
 
We don’t need to attribute personhood to dogs to feel violated by police bringing their sniffing up to our house for no reason, but at the same time we basically accept their presence in airports.  And what bothers us isn’t what’s in the dog’s mind, but in the master’s.  If a police dog smelled around my house, made an alert, but no police officer was there to interpret the alert, I’m not sure it would bother me.  
 
Similarly, even attributing intentional states to algorithms as sophisticated as a dog, I don’t think their knowledge would bother me until it was given to some human (what happens when they are as sophisticated as humans is another question).  
 
I’m not sure good old fashioned Fourth Amendment balancing can’t be instructive here.  Do we have a reasonable expectation of privacy in x? What are the governmental interests at stake and how large of an intrusion is being made into the reasonable expectation of privacy?  
 

JW makes two interesting points. First, is scanning or reading by programs of our personal data really injurious to privacy in the way a human’s reading is? Second, is the legal change I’m suggesting even necessary?

Second point first. Treating smart programs as legal persons is not necessary to bring about the changes I’m suggesting in my essay. Plain old legal agency without legal personhood will do just fine. Most legal jurisdictions require legal agents to be persons too, but this has not always been the case. Consider the following passage, which did not make it to the final version of the online essay:

If such a change—to full-blown legal personhood and legal agency—is felt to be too much, too soon, then we could also grant programs a limited form of legal agency without legal personhood. There is a precedent for this too: slaves in Roman times, despite not being persons in the eyes of the law, were allowed to enter into contracts for their masters, and were thus treated as their legal intermediaries. I mention this precedent because the legal system might prefer that the change in legal status of artificial agents be an incremental one; before they become legal persons and thus full legal subjects, they could ‘enjoy’ this form of limited legal subjecthood. As a society we might find this status uncomfortable enough to want to change their status to legal persons if we think its doctrinal and political advantages—like those alluded to here—are significant enough.

Now to JW’s first point. Is a program’s access to my personal data less injurious than a human’s? I don’t think so. Programs can do things with data: they can act on it. The opening example in my essay demonstrates this quite well:

Imagine the following situation: Your credit card provider uses a risk assessment program that monitors your financial activity. Using the information it gathers, it notices your purchases are following a “high-risk pattern”; it does so on the basis of a secret, proprietary algorithm. The assessment program, acting on its own, cuts off the use of your credit card. It is courteous enough to email you a warning. Thereafter, you find that actions that were possible yesterday—like making electronic purchases—no longer are. No humans at the credit card company were involved in this decision; its representative program acted autonomously on the basis of pre-set risk thresholds.

Notice in this example that for my life to be impinged on by the agency/actions of others, it was not necessary that a single human being be involved. We so often interact with the world through programs that they command considerable agency in our lives. Our personal data is valuable to us because control of it may make a difference to our lives; if programs can use the data to do so then our privacy laws should regulate them too–explicitly.
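To make that concrete, here is a hypothetical sketch of the kind of autonomous decision described in the example; every name, rule, and threshold below is invented for illustration and is not drawn from any real provider's system:

```python
from dataclasses import dataclass

RISK_THRESHOLD = 0.8  # pre-set by the provider; opaque to the customer

@dataclass
class Purchase:
    merchant_category: str
    amount: float
    country: str

def risk_score(purchases):
    """Stand-in for the provider's secret, proprietary scoring algorithm."""
    score = 0.0
    for p in purchases:
        if p.country != "home":
            score += 0.3
        if p.amount > 1000:
            score += 0.3
        if p.merchant_category == "electronics":
            score += 0.2
    return min(score, 1.0)

def suspend_card():
    print("card suspended")        # placeholder for the real side effect

def send_email(message):
    print("email sent:", message)  # courteous, but after the fact

def review_account(purchases):
    # The program acts on its own, on the basis of pre-set risk thresholds;
    # no human at the credit card company is involved in the decision.
    if risk_score(purchases) >= RISK_THRESHOLD:
        suspend_card()
        send_email("Your card has been suspended due to unusual activity.")

review_account([Purchase("electronics", 1500.0, "abroad"),
                Purchase("electronics", 1200.0, "abroad")])
```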

Let us return to JW’s sniffer dog example and update it. The dog is a robotic one; it uses sophisticated scanning technology to detect traces of cocaine on a passenger’s bag. When it does so, the nametag and passport photo associated with the bag are automatically transmitted to a facial recognition system, which establishes a match and immediately sets off a series of alarms: perhaps my bank accounts are closed, perhaps my sophisticated car is immobilized, and so on. No humans need be involved in this decision; I may find my actions curtailed without any human having taken a single action. We don’t need “a police officer to interpret the alert.” (But I’ve changed his dog to a robotic dog, haven’t I? Yes, because the programs I am considering are, in some dimensions, considerably smarter than a sniffer dog. They are much, much dumber in others.)

In speaking of the sniffer dog, JW says “I don’t think their knowledge would bother me until it was given to some human.” But as our examples show, a program could make the knowledge available to other programs, which could take actions too.
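A minimal sketch of that program-to-program flow, assuming hypothetical placeholder functions for the scan, the identity match, and the downstream actions:

```python
def scan_bag(bag_id):
    """Stand-in for the robotic dog's chemical scan."""
    return bag_id == "bag-042"  # pretend this bag carries a trace of cocaine

def match_owner(bag_id):
    """Stand-in for the nametag/passport lookup and facial recognition match."""
    return "passenger-1138"

def freeze_accounts(passenger_id):
    print(f"bank accounts frozen for {passenger_id}")

def immobilize_car(passenger_id):
    print(f"car immobilized for {passenger_id}")

def checkpoint(bag_id):
    # No officer in the loop: the 'alert' passes from program to program,
    # and other programs act on it.
    if scan_bag(bag_id):
        passenger = match_owner(bag_id)
        freeze_accounts(passenger)
        immobilize_car(passenger)

checkpoint("bag-042")
```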

Indeed, programs could embarrass us too: imagine a society in which sex offenders are automatically flagged in public by publishing their photos on giant television screens in Times Square. Scanning programs intercept an email of mine, in which I have sent photos–of my toddler daughter bathing with her pre-school friend–to my wife. They decide on the basis of this data that I am a sex offender and flag me as such. Perhaps I’m only ‘really’ embarrassed when humans ‘view’ my photo but the safeguards for accessing data and its use need to be placed ‘upstream.’

Humans aren’t the only ones taking actions in this world of ours; programs are agents too. It is their agency that makes their access to our data interesting and possibly problematic. The very notion of an autonomous program would be considerably less useful if they couldn’t act on their own, interact with each other, and bring about changes.

Lastly, JW also raises the question of whether we have a reasonable expectation of privacy in our email–stored as it is on our providers’ servers. Thanks to the terrible third-party doctrine, the Supreme Court has decided we do not. But this notion is ripe for overruling in these days of cloud computing. The legal changes I propose–justified on legal and normative grounds–should not be held up by bad law. But even if this doctrine were to stand, it would not affect my arguments in the essay, which conclude that data in transit, which is subject to the Wiretap Act, is still something in which we may find a reasonable expectation of privacy.

Don’t be a “Crabby Patty” About AI

Fredrik DeBoer has written an interesting post on the prospects for artificial intelligence, one that is pessimistic about those prospects and skeptical about some of the claims made for the field’s successes. I disagree with some of its implicit premises and claims.

AI’s goals can be understood as two-fold, depending on your understanding of the field. First, to make machines that can perform tasks which, if performed by humans, would be said to require “intelligence”. Second, to understand human cognitive processes and replicate them in a suitable architecture. The first enterprise is engineering; the second, cognitive science (Vico-style: “the true and the made are convertible”).

The first cares little about the particular implementation mechanism or the theoretical underpinning of task performance; success in task execution is all. If you can make a robot capable of brewing a cup of tea in kitchens strange and familiar, it does not matter what its composition, computational architecture, or control logic is; all that matters is the brewed cup of tea. The second cares little about the implementation medium – it could be silicon and plastic – but it does care about the mechanism employed; it must faithfully instantiate and realize an abstraction of a distinctly human cognitive process. Aeronautical engineers replicate the flight of feathered birds using aluminum and jet engines; they physically instantiate abstract principles of flight. The cognitive science version of AI seeks to perform a similar feat for human cognition; AI should validate our best science of mind.

I take DeBoer’s critique of so-called “statistical” or “big-data” AI to be: you’re only working toward the first goal, not toward the second. That is a fair observation, but it does not establish the further conclusion that cognitive science is the “right” or the “only” way to realize artificial intelligence. Nor does it establish that engineering AI is a useless distraction from the task of understanding human cognition, or artificial intelligence, or even what “real intelligence” might be. Cognitive science AI is not the only worthwhile paradigm for AI, nor the only intellectually useful one.

To see this, consider what the successes–even partial–of engineering AI tell us: intelligence is not one thing, it is many; intelligence is achievable both by mimicking human cognitive processes and by not doing so; in some cases, it is more easily achieved by the latter. The successes of engineering AI should tell us that the very phenomenon–intelligence–we take ourselves to be studying in cognitive science isn’t well understood; they tell us the very thing being studied–“mind”–might not be a thing to begin with. (DeBoer rightly disdains the “mysterianism” in claims like “intelligence is an emergent property,” but he seems comfortable with the chauvinism of “intelligence achievable by non-human means isn’t intelligence.” A simulation of intelligence isn’t “only” a simulation; it forces us to reckon with the possibility that “real intelligence” might be “only a simulation.”)

What we call intelligence is a performative capacity; creatures that possess intelligence can do things in this world; the way humans accomplish those tasks is of interest, but so are other ways of doing so. These other ways show us that many relationships to our environment can be described as “cognitive” or “mindful”; if giant lookup machines and human beings can both play chess and write poems, then that tells us something interesting about the nature of those capacities. If language comprehension can be achieved by statistical methods, then that tells us we should regard our own linguistic capacities in a different light; a speaking and writing wind-up toy should make us revisit the phenomenon of language anew: just what is this destination, reachable by such radically dissimilar routes as ‘human cognition’ and ‘machine learning’?

DeBoer rightly points out the difficulties both AI methodologies face; I would go further and say that, given our level of (in)comprehension, we do not even possess much of a principled basis for so roundly dismissing the claims made by statistical or big-data AI. It might turn out that the presuppositions of cognitive science will be altered by the successes of engineering AI, thus changing its methodologies and indicators of success; cognitive science might be looking in the wrong places for the wrong things.