Handing Over The Keys To The Driverless Car

Early conceptions of a driverless car world spoke of catastrophe: the modern versions of the headless horseman would run amok, driving over toddlers and grandmothers with gay abandon, sending the already stratospheric death toll from automobile accidents into ever more rarefied zones, and sending us all cowering back into our homes, afraid to venture out into a shooting gallery of four-wheeled robotic serial killers. How would the inert, unfeeling, sightless, coldly calculating programs that ran these machines ever show the skill and judgment of human drivers, the kind that enables them, on a daily basis, to decline to run over a supermarket shopper and decide to take the right exit off the interstate?

Such fond preference for the human over the machinic–on the roads–was always infected with some pretension, some make-believe, some old-fashioned fallacious comparison of the best of the human with the worst of the machine. Human drivers show very little affection for other human drivers; they kill them by the scores every day (thirty thousand fatalities or so in a year); they often do not bother to interact with them sober (over a third of all car accidents involve a drunk driver); they rage and rant at their driving colleagues (the formula for ‘instant asshole’ used to be ‘just add alcohol,’ but it could very well be ‘place behind a wheel’ too); they second-guess their intelligence and their parentage on every occasion, when they can be bothered to pay attention to them at all, often finding their smartphones more interesting as they drive. If you had to make an educated guess at who a human driver’s least favorite person in the world was, you could do worse than venture it was someone they had encountered on a highway once. We like our own driving; we disdain that of others. It’s a Hobbesian state of nature out there on the highway.

Unsurprisingly, it seems the biggest problem the driverless car will face is human driving. The one-eyed might be king in the land of the blind, but he is also susceptible to having his eye put out. The driverless car might follow traffic rules and driving best practices rigorously, but such compliance is worth less in a world which otherwise pays only sporadic heed to them. Human drivers incorporate defensive and offensive maneuvers into their driving; they presume less than perfect knowledge of the rules of the road on the part of those they interact with; their driving habits bear the impress of long interactions with other, similarly inclined human drivers. A driverless car, one bearing rather more fidelity to the idealized conception of a safe road user, has, at best, an uneasy coexistence in a world dominated by such driving practices.

The sneaking suspicion that automation works best when human roles are minimized is upon us again: perhaps driverless cars will only be able to show their best and deliver on their incipient promise when we hand over the wheel–and the keys–to them. Perhaps the machine only sits comfortably in our world when we have made adequate room for it. And displaced ourselves in the process.


Report on Concurring Opinions Symposium on Artificial Agents – II

Today, I’m continuing my wrap-up of the Concurring Opinions online symposium on A Legal Theory for Autonomous Artificial Agents. Below, I’ll note the various responses to the book and point to my replies to them (Part I of this wrap-up was posted yesterday).

While almost all respondents seem to have seriously engaged with the book’s analysis, Ryan Calo wrote a disappointingly unengaged and, at times, patronizing post that ostensibly focused on the book’s methodological adoption of the intentional stance; it seemed to suggest that all we were doing was primitive anthropomorphizing. This was a pretty comprehensive misreading of the book’s argument, so I struggled to find anything to say in response. Calo also said he didn’t know whether an autonomous robot was like a hammer or not; this was a bizarre admission coming from someone who is concerned with the legal implications of robotics. I noted in one of my responses that figuring out the answer to that question can be aided by some intuition-tickling questions (like: Would NASA send a hammer to explore Mars? Can hammers drive?). Calo’s follow-up to my comment was roughly along the lines of “We don’t know what to do with artificial agents.” Well, yes, but I thought the point was to evaluate the attempt currently mounted in our book? I didn’t quite understand the point of Calo’s responses: that we don’t have a comprehensive theory for artificial agents, i.e., that the book’s title is misleading? I could be persuaded into mounting a guilty plea for that. But the point of the book was to indicate how existing doctrines could be suitably modified to start accommodating artificial agents; that is how a legal theory gets built up in a common law system.

Deborah DeMott (Duke), whose writings on the common law doctrine of agency were very useful to our analysis in the book, offered a very good assessment of our attempt to apply that doctrine to artificial agents. While DeMott disagreed with the exactness of the fit, she seemed not to think it was completely off-base (she certainly found our attempt “lively and ingenious”!); in my response I attempted to clarify and defend some of our reasons for thinking agency doctrine would work with artificial agents.

Ken Anderson (American University, Volokh Conspiracy) then discussed our treatment of intentionality and deployment of the intentional stance, and queried whether we intended to use the intentional stance merely as a heuristic device or whether we were, in fact, making a broader claim for intentionality in general. In my response I noted that we wanted to do both: use it as a methodological stance, and in doing so, also point an investigative lens at our understanding of intentionality in general. Ken’s reaction was very positive; he thought the book had hit a “sweet spot” in not being excessively pie-in-the-sky while offering serious doctrinal recommendations.

Ian Kerr (Ottawa), in his response, didn’t feel the book went far enough in suggesting a workable theory for artificial agents; if I understood Ian correctly, his central complaint was that the theory relied too much on older legal categories and doctrines and that artificial agents might need an entirely new set of legal frameworks. But Ian also felt the slow and steady march of the common law was the best way to handle the challenges posed by artificial agents. So, interestingly enough, I agree with Ian, and I think he should be less dissatisfied than he is: our book is merely a first attempt to leverage the common law to work towards a more comprehensive theory. In fact, given the rapid pace of development in artificial agents, the law is largely going to be playing catch-up more than anything else.

Andrew Sutter then wrote a critical, rich response, one that took aim at the book’s rhetoric, its methodology, and its philosophical stance. I greatly enjoyed my jousting with Andrew during this symposium, and my response to his post–and to his subsequent comments–in which I attempted to clarify my philosophical stance and presuppositions, will show that.

Harry Surden (Colorado) wrote a very good post on two understandings of artificial intelligence’s objectives–intelligence as the replication of human cognitive capacities, achieved either by replicating human methods or by simulations that utilize other techniques–and how these could or would be crucial in the legal response to AI’s achievements. My response to Surden acknowledged the importance of these distinctions and noted that they should also cause us to think about how we often ascribe to human cognition a certain standing that arises largely from a lack of understanding of its principles. (This also provoked an interesting discussion with AJ Sutter.)

Andrea Matwyshyn wrote an excellent, seriously engaged post that took head-on the fairly detailed and intricate arguments of Chapter 2 (where we offer a solution to the so-called contracting problem by arguing that artificial agents be considered legal agents of their users). My response to Matwyshyn acknowledged the force of her various critical points while trying to expand and elaborate on the economic incentives motivating our claim that artificial agents should be considered non-identical with their creators and/or deployers.

Once again, I am grateful to Frank Pasquale and the folks over at Concurring Opinions for staging the symposium and to all the participants for their responses.