One Vision Of A Driverless Car Future: Eliminating Private Car Ownership

Most analysis of a driverless car future concentrates on the gains in safety: ‘robotic’ cars will adhere more closely to speed limits and other traffic rules and, over time, by eliminating human error and idiosyncrasy, produce a safer environment on our roads. This might be seen as an architectural modification of human driving behavior to produce safer driving outcomes: rather than making unsafe driving illegal, more expensive, or socially unacceptable, simply don’t let humans drive.

But there are other problems–environmental degradation and traffic–that could be addressed by mature driverless car technologies. The key to their solution lies in moving away from private car ownership.

To see this, consider that at any given time, we have too many cars on the roads. Some are being driven; others sit parked. If you own a car, you drive it from point to point and park it when you are done using it. You drive to work and park; eight hours later, at the end of an average workday, you leave your office, drive home, and park again, and then use the car in the morning. Through the night, your car sits idle once more, taking up space. If only someone else could use your car while you didn’t need it. They wouldn’t need to buy a separate car for themselves and add to the congestion on the highways. And in parking lots.

Why not simply replace privately owned, human-driven cars with a gigantic fleet of robotic taxis? When you need a car, you call for one. When you are done using it, you release it back into the pool. You don’t park it; it simply goes back to answering its next call. Need to go to work in the morning? Call a car. Run an errand with heavy lifting? Call a car. And so on. Cars shared in this fashion could thus eliminate the gigantic redundancy in car ownership that leads to choked highways, mounting smog and pollution, endless, futile construction of parking towers, and elaborate congestion pricing schemes. (The key phrase here is, of course, ‘mature driverless car technologies.’ If you need a car for an elaborate road-trip through the American West, perhaps you could place a longer, more expensive hold on it, so that it doesn’t drive off while you are taking a quick photo or two of a canyon.)

Such a future entails that there will be no more personal, ineffable, fetishized relationships with cars. They will not be your babies to be cared for and loved. Their upholstery will not remind you of days gone by. Your children will not feel sentimental about the clunker that was a part of their growing up. And so on. I suspect these sorts of attachments to the car will be very easily forgotten once we have reckoned with the sheer pleasure of not having to deal with driving tests–and the terrors of teaching our children how to drive, the DMV, buying car insurance, looking for parking, and best of all, other drivers.

I, for one, welcome our robotic overlords in this domain.

Artificial Agents and the Law: Legal Personhood in Good Time

The Concurring Opinions online symposium on A Legal Theory for Autonomous Artificial Agents is under way, and most respondents thus far are taking on the speculative portion of the book (where we suggest that legal personhood for autonomous artificial agents is philosophically and legally coherent and might be advanced in the future). The incremental arguments constructed in Chapters 2 and 3 for considering artificial agents as legal agents for the purposes of contracting and knowledge attribution have not yet been taken on. As a result, it has sometimes seemed that we are suggesting a far greater change to existing legal doctrines than the modest changes we actually do suggest. Those modest changes could, admittedly, go on to have widespread ramifications, but for the time being, that is all we propose.

I’m not surprised that most respondents to the book thus far have chosen to concentrate on the ‘sexier’ argument in Chapter 5. In any case, these comments are thoughtful and thought-provoking, and they have already generated some very interesting discussion. Indeed, some of the objections raised are going to require some very careful responses from me.

Still, the concentration on the legal personhood aspect of the doctrine we suggest might create one confusion: the impression that in this book we are advocating personhood for artificial agents. Not just legal personhood, but personhood tout court. This is especially ironic, as we have deliberately chosen the most incremental changes possible, in keeping with the law’s generally conservative treatment of proposed changes to doctrine.

Here is what we say in the introduction about the argument for legal personhood:

<start quote>

In Chapter 5, we explore the potential for according sophisticated artificial agents with legal personality. In order to provide a discursive framework, we distinguish between dependent and independent legal persons. We conclude that the conditions for each kind of legal personality could, in principle, be met by artificial agents in the right circumstances. [emphasis added] We suggest objections to such a status for them are based on a combination of human chauvinism and a misunderstanding of the notion of a legal person [more often than not, this is the conflation of “human” with “legal person”]. We note the result-oriented nature of the jurisprudence surrounding legal personality, and surmise legal personality for artificial agents will follow on their attaining a sufficiently rich and complex positioning within our network of social and economic relationships. The question of legal personality for artificial agents will be informed by a variety of pragmatic, philosophical and extra-legal concepts; philosophically unfounded chauvinism about human uniqueness should not and would not play a significant role in such deliberations.

<end quote>

The “result-oriented nature of the jurisprudence surrounding legal personality” is actually such as to suggest that artificial agents might even be considered legal persons for the purposes of contracting now. But for the time being, I think, we can get the same outcomes just by treating them as legal agents without personhood. Which is why we advocate that change first, and suggest we wait till they attain “a sufficiently rich and complex positioning within our network of social and economic relationships.”

Artificial Agents, Knowledge Attribution, and Privacy Violations

I am a subscriber to a mailing list dedicated to discussing the many legal, social, and economic issues that arise out of the increasing use of drones. Recently on the list, the discussion turned to the privacy implications of drones. I was asked whether the doctrines developed in my book A Legal Theory for Autonomous Artificial Agents were relevant to the privacy issues raised by drones. I wrote a brief reply on the list indicating that yes, they are. I am posting a brief excerpt from the book here to address that question more fully (for the full argument, please see Chapter 3 of the book):

Knowledge Attribution and Privacy Violations

The relationship between knowledge and legal regimes for privacy is straightforward: privacy laws place restrictions, inter alia, on what knowledge may be acquired, and how. Of course, knowledge acquisition does not exhaust the range of privacy protections afforded under modern legal systems. EU privacy law, for example, is triggered when mere processing of personal data is involved. Nevertheless, acquisition of knowledge of someone’s affairs, by human or automated means, crosses an important threshold with regard to privacy protection.

Privacy obligations are implicitly relevant to the attribution of knowledge held by agents to their principals in two ways: confidentiality obligations can restrict such attribution and horizontal information barriers such as medical privacy obligations can prevent corporations being fixed with collective knowledge for liability purposes.

Conversely, viewing artificial agents as legally recognized “knowers” of digitized personal information on behalf of their principals brings conceptual clarity in answering the question of when automated access to personal data amounts to a privacy violation.

The problem with devising legal protections against privacy violations by artificial agents is not that current statutory regimes are weak; it is that they have not been interpreted appropriately given the functionality of agents and the nature of modern internet-based communications. The first move in this regard is to regard artificial agents as legal agents of their principals capable of information and knowledge acquisition.

A crucial disanalogy drawn between artificial and human agents plays a role in the denial that artificial agents’ access to personal data can constitute a privacy violation: the argument that the automated nature of artificial agents provides reassurance that sensitive personal data is “untouched by human hands, unseen by human eyes.” The artificial agent becomes a convenient surrogate, one that by its automated nature neatly takes the burden of responsibility off the putative corporate or governmental offender. Here the intuition that “programs don’t know what your email is about” allows the principal to put up an “automation screen” between themselves and the programs deployed by them. For instance, Google has sought to assuage concerns over possible violations of privacy in connection with scanning of Gmail email messages by pointing to the non-involvement of humans in the scanning process.

Similarly, the U.S. Government, in the 1995 Echelon case, responded to complaints about its monitoring of messages flowing through Harvard University’s computer network by stating no privacy interests had been violated because all the scanning had been carried out by programs.

This putative need for humans to access personal data before a privacy violation can occur underwrites such defenses.

Viewing, as we do, the programs engaged in such monitoring or surveillance as legal agents capable of knowledge acquisition denies the legitimacy of the Google and Echelon defenses. An agent that has acquired a user’s personal data acquires functionality that makes possible the processing or onward disclosure of that data in such a way as to constitute privacy violations. (Indeed, the very functionality enabled by the access to such data is what would permit the claim to be made, under our knowledge analysis conditions, that the agent in question knows a user’s personal data.)

Artificial Agents and the Law: Some Preliminary Considerations

As I noted here last week, the Concurring Opinions blog will be hosting an online symposium on my book A Legal Theory for Autonomous Artificial Agents. There has already been some discussion over at the blog; I’m hoping that once the book has been read and its actual arguments engaged with, we can have a more substantive discussion.

Last week, I spoke on the book at Bard College, to a mixed audience of philosophy, computer science, and cognitive science faculty and students. The question-and-answer session was quite lively and our conversations continued over dinner later.  Some of the questions that were directed at me are quite familiar to me by now: Why make any change in the legal status of artificial agents? That is, why elevate them from non-entities in the ontology of the law to the status of legal agents, or beyond? When it comes to assigning responsibility, why not simply make the designers or deployers of agents responsible for all acts of the artificial agents?  How can an artificial agent, which lacks the supposedly distinctively-human characteristics of <insert consciousness, free-will, rationality, autonomy, subjectivity, phenomenal experience here> ever be considered an “agent” or a “person”? Aren’t you abusing language when you say that a program or a robot can be attributed knowledge? How can those kinds of things ever “know” anything? Who is doing the knowing?  

I’ll be addressing questions like these (I’m reasonably sure of that) over at the online symposium, which starts tomorrow. For the time being, I’d like to make a couple of general remarks. 

The modest changes in legal doctrine proposed in our book are largely driven by two considerations. 

First, existing legal doctrine in a couple of domains, most especially contracting, which kicks off our discussion and serves as the foundation for the eventual development of the book’s thesis, is placed under considerable strain by its current treatment of highly sophisticated artificial agents. We could maintain current contracting doctrine as it is, but we would run the risk of rendering that doctrine increasingly implausible. This might be seen as a reasonable price to pay so that we can maintain our intuitions about the kinds of beings we take artificial agents to be. I’d suggest this kind of retention of intuitions becomes increasingly untenable when we see the disparateness of the entities that are placed in the same legal category. (Are autonomous robots really just the same as tools like hammers?)

Second, a change in legal doctrine can sometimes bring about better outcomes for us. One of the crucial arguments in our Chapter 2 (one I really hope readers engage with) is an assessment of the economic dimension of electronic contracting by artificial agents considered as legal agents. I share the skepticism of those in the legal academy who hold that economic analysis of law should not drive all doctrinal change, but in this case, I’d suggest the risk allocation does work out better. An even stronger argument can be made when it comes to privacy: in Chapter 3, the dismissal of the Google defense (“if humans don’t read your email, your privacy is not violated”) is enabled precisely by treating artificial agents as legal agents.

Much more on this in the next few days.

Concurring Opinions Online Symposium on A Legal Theory for Autonomous Artificial Agents

Remember that New York Times article about all the legal headaches that Google’s autonomous cars are causing? Well, if you found that interesting, you should read on.

On February 14-16, the Concurring Opinions blog will host an online symposium dedicated to a discussion of my book A Legal Theory for Autonomous Artificial Agents. (Many thanks to Frank Pasquale for organizing this; Concurring Opinions’ online symposiums are quite a treat; in the past the blog has put on symposiums on Tim Wu’s Master Switch and Jonathan Zittrain’s Future of the Internet.) You can find a preview of the book at Amazon. David Coady recently helped launch the book in Melbourne, Australia with some rather witty and personal opening remarks; well worth a read (full disclosure: David is an old friend of mine).

As of now, the stellar line-up of participants includes Ken Anderson, Ryan Calo, James Grimmelmann, Sonia Katyal, Ian Kerr, Andrea Matwyshyn, Deborah DeMott, Paul Ohm, Ugo Pagallo, Lawrence Solum, Ramesh Subramanian and Harry Surden. The quality, breadth and range of scholarship included in that list is quite awe-inspiring. I look forward to reading their responses and discussing the book’s arguments and analysis with them.

The following is the Introduction to the book (Chapter 2 of the book was published in the Illinois Journal of Law, Technology and Policy, and can be found online at SSRN; I will post more excerpts from the book in the next couple of weeks):

Social and economic interactions today increasingly feature a new category of being: the artificial agent. It buys and sells goods; determines eligibility for legal entitlements like healthcare benefits; processes applications for visas and credit cards; collects, acquires and processes financial information; trades on stock markets; and so on. We use language inflected with intentions in describing our interactions with an artificial agent, as when we say “the shopping cart program wants to know my shipping address.” This being’s competence at settling into our lives, in taking on our tasks, leads us to attribute knowledge and motivations, and to delegate responsibility, to it. Its abilities, often approximating human ones and sometimes going beyond them, make it the object of fear and gratitude: it might spy on us, or it might relieve us of tedium and boredom.

The advances in the technical sophistication and autonomous functioning of these systems represent a logical continuation of our social adoption of technologies of automation. Agent programs represent just one end of a spectrum of technologies that automate human capacities and abilities, extend our cognitive apparatus, and become modeled enhancements of ourselves. More than ever before, it is coherent to speak of computer programs and hardware systems as agents working on our behalf. The spelling checker that corrects this page as it is written is a lexicographic agent that aids in our writing, as much an agent as the automated trading system of a major Wall Street brokerage or the PR2 robot, a prototype personal robotic assistant (Markoff 2009). While some delegations of our work to such agents are the oft-promised ones of alleviating tedious labor, others are ethically problematic, as in robots taking on warfare roles (Singer 2009). Yet others enable a richer, wider set of social and economic interconnections in our networked society, especially evident in e-commerce (Papazoglu 2001).

As we increasingly interact with these artificial agents in unsupervised settings, with no human mediators, their seeming autonomy and increasingly sophisticated functionality and behavior raise legal and philosophical questions. For as the number of interactions mediated by artificial agents increases, as they become actors in literal, metaphorical and legal senses, it is ever more important to understand, and do justice to, the artificial agent’s role within our networks of social, political and economic relations. What is the standing of these entities in our socio-legal framework? What is the legal status of the commercial transactions they enter into? What legal status should artificial agents have? Should they be mere things, tools, and instrumentalities? Do they have any rights, duties, obligations? What are the legal strategies to make room for these future residents of our polity and society? The increasing sophistication, use, and social embedding of computerized agents makes the coherent answering of older questions raised by mechanical automation ever more necessary.

Carving out a niche for a new category of legal actor is a task rich with legal and philosophical significance. The history of jurisprudence addressing doctrinal changes in the law suggests legal theorizing to accommodate artificial agents will inevitably find its pragmatic deliberations colored by philosophical musings over the nature and being of these agents. Conversely, the accommodation, within legal doctrines, of the artificial agent, will influence future philosophical theorizing about such agents, for such accommodation will invariably include conceptual and empirical assessments of their capacities and abilities. This interplay between law and philosophy is not new: philosophical debates on personhood, for instance, cannot proceed without an acknowledgement of the legal person, just as legal discussions on tort liability are grounded in a philosophical understanding of responsibility and causation.

This book seeks to advance interdisciplinary legal scholarship in answer to the conundrums posed by this new entity in our midst. Drawing upon both contemporary and classical legal and philosophical analysis, we attempt to develop a prescriptive legal theory to guide our interactions with artificial agents, whether as users or operators entering contracts, acquiring knowledge or causing harm through agents, or as persons to whom agents are capable of causing harm in their own right. We seek to apply and extend existing legal and philosophical theories of agency, knowledge attribution, liability, and personhood, to the many roles artificial agents can be expected to play and the legal challenges they will pose while so doing. We emphasize legal continuity, while seeking to refocus on deep existing questions in legal theory.

The artificial agent is here to stay; our task is to accommodate it in a manner that does justice to our interests and its abilities.