Report on Concurring Opinions Symposium on Artificial Agents – II

Today, I’m continuing my wrap-up of the Concurring Opinions online symposium on A Legal Theory for Autonomous Artificial Agents. Below, I note the various responses to the book and point to my replies to them (Part I of this wrap-up was posted yesterday).

While almost all respondents seem to have seriously engaged with the book’s analysis, Ryan Calo wrote a disappointingly unengaged, and at times patronizing, post that ostensibly focused on the book’s methodological adoption of the intentional stance; it seemed to suggest that all we were doing was primitive anthropomorphizing. This was a fairly comprehensive misreading of the book’s argument, so I struggled to find anything to say in response. Calo also said he didn’t know whether an autonomous robot was like a hammer or not; this was a bizarre admission from someone concerned with the legal implications of robotics. I noted in one of my responses that answering that question can be aided by some intuition-tickling questions (Like: Would NASA send a hammer to explore Mars? Can hammers drive?). Calo’s follow-up to my comment ran roughly along the lines of “We don’t know what to do with artificial agents.” Well, yes, but I thought the point was to evaluate the attempt currently mounted in our book. I didn’t quite understand the thrust of Calo’s responses: that we don’t have a comprehensive theory for artificial agents, i.e., that the book’s title is misleading? I could be persuaded to enter a guilty plea on that count. But the point of the book was to indicate how existing doctrines could be suitably modified to start accommodating artificial agents; that is how a legal theory gets built up in a common law system.

Deborah DeMott (Duke), whose writings on the common law doctrine of agency were very useful in our analysis, offered a very good assessment of our attempt to apply that doctrine to artificial agents. While DeMott disagreed with the exactness of the fit, she did not seem to think it completely off-base (she certainly found our attempt “lively and ingenious”!); in my response I attempted to clarify and defend some of our reasons for thinking agency doctrine would work with artificial agents.

Ken Anderson (American University, Volokh Conspiracy) then discussed our treatment of intentionality and deployment of the intentional stance, and queried whether we intended to use the intentional stance merely as a heuristic device or whether we were, in fact, making a broader claim for intentionality in general. In my response I noted that we wanted to do both: use it as a methodological stance, and in doing so, also point an investigative lens at our understanding of intentionality in general. Ken’s reaction was very positive; he thought the book had hit a “sweet spot” in not being excessively pie-in-the-sky while offering serious doctrinal recommendations.

Ian Kerr (Ottawa), in his response, didn’t feel the book went far enough in suggesting a workable theory for artificial agents; if I understood Ian correctly, his central complaint was that the theory relied too much on older legal categories and doctrines, and that artificial agents might need an entirely new set of legal frameworks. But Ian also felt the slow and steady march of the common law was the best way to handle the challenges posed by artificial agents. So, interestingly enough, I agree with Ian, and I think he should be less dissatisfied than he is; our book is merely a first attempt to leverage the common law and work towards a more comprehensive theory. In fact, given rapid developments in artificial agents, the law is largely going to be playing catch-up more than anything else.

Andrew Sutter then wrote a critical, rich response, one that took aim at the book’s rhetoric, its methodology, and its philosophical stance. I greatly enjoyed jousting with Andrew during this symposium, as my response to his post, and to his subsequent comments, in which I attempted to clarify my philosophical stance and presuppositions, will show.

Harry Surden (Colorado) wrote a very good post on two understandings of artificial intelligence’s objectives: intelligence as the replication of human cognitive capacities either by replicating human methods of achieving them or via simulations that utilize other techniques. He examined how these could or would be crucial in the legal response to AI’s achievements. My response to Surden acknowledged the importance of these distinctions and noted that they should also make us think about how we often grant human cognition a special standing that arises largely from a lack of understanding of its principles. (This also provoked an interesting discussion with AJ Sutter.)

Andrea Matwyshyn wrote an excellent, seriously engaged post that took head-on the fairly detailed and intricate arguments of Chapter 2 (where we offer a solution to the so-called contracting problem by arguing that artificial agents be considered legal agents of their users). My response to Matwyshyn acknowledged the force of her various critical points while trying to expand and elaborate on the economic incentives motivating our claim that artificial agents should be considered non-identical with their creators and/or deployers.

Once again, I am grateful to Frank Pasquale and the folks over at Concurring Opinions for staging the symposium and to all the participants for their responses.

Artificial Agents and the Law: Legal Personhood in Good Time

The Concurring Opinions online symposium on A Legal Theory for Autonomous Artificial Agents is under way, and most respondents thus far are taking on the speculative portion of the book (where we suggest that legal personhood for autonomous artificial agents is philosophically and legally coherent and might be advanced in the future). The incremental arguments constructed in Chapters 2 and 3 for considering artificial agents as legal agents for the purposes of contracting and knowledge attribution have not yet been taken on. As a result, it has sometimes seemed that we are suggesting a far greater change to existing legal doctrines than the modest changes we actually do suggest. Those modest changes could, admittedly, go on to have widespread ramifications, but for the time being, that is all we propose.

I’m not surprised that most respondents thus far have chosen to concentrate on the ‘sexier’ argument in Chapter 5. In any case, the comments are thoughtful and thought-provoking, and they have already generated some very interesting discussion. Indeed, some of the objections raised are going to require very careful responses on my part.

Still, the concentration on the legal personhood aspect of the doctrine we suggest might create one confusion: that in this book we are advocating personhood for artificial agents, not just legal personhood but personhood tout court. This is especially ironic, as we deliberately chose the most incremental changes possible, in keeping with the law’s generally conservative treatment of proposed doctrinal changes.

Here is what we say in the introduction about the argument for legal personhood:

<start quote>

In Chapter 5, we explore the potential for according sophisticated artificial agents with legal personality. In order to provide a discursive framework, we distinguish between dependent and independent legal persons. We conclude that the conditions for each kind of legal personality could, in principle, be met by artificial agents in the right circumstances. [emphasis added] We suggest objections to such a status for them are based on a combination of human chauvinism and a misunderstanding of the notion of a legal person [more often than not, this is the conflation of “human” with “legal person”]. We note the result-oriented nature of the jurisprudence surrounding legal personality, and surmise legal personality for artificial agents will follow on their attaining a sufficiently rich and complex positioning within our network of social and economic relationships. The question of legal personality for artificial agents will be informed by a variety of pragmatic, philosophical and extra-legal concepts; philosophically unfounded chauvinism about human uniqueness should not and would not play a significant role in such deliberations.

<end quote>

The “result-oriented nature of the jurisprudence surrounding legal personality” is actually such as to suggest that artificial agents might even be considered legal persons for the purposes of contracting now. But for the time being, I think, we can get the same outcomes simply by treating them as legal agents without personhood. That is why we advocate that change first, and suggest we wait until they attain “a sufficiently rich and complex positioning within our network of social and economic relationships.”

Artificial Agents and the Law: Some Preliminary Considerations

As I noted here last week, the Concurring Opinions blog will be hosting an online symposium on my book A Legal Theory for Autonomous Artificial Agents. There has already been some discussion over at the blog; I’m hoping that once the book has been read and its actual arguments engaged with, we can have a more substantive discussion.

Last week, I spoke on the book at Bard College, to a mixed audience of philosophy, computer science, and cognitive science faculty and students. The question-and-answer session was quite lively and our conversations continued over dinner later. Some of the questions directed at me are quite familiar by now: Why make any change in the legal status of artificial agents? That is, why elevate them from non-entities in the ontology of the law to the status of legal agents, or beyond? When it comes to assigning responsibility, why not simply make the designers or deployers of agents responsible for all their acts? How can an artificial agent, which lacks the supposedly distinctively human characteristics of <insert consciousness, free will, rationality, autonomy, subjectivity, phenomenal experience here>, ever be considered an “agent” or a “person”? Aren’t you abusing language when you say that a program or a robot can be attributed knowledge? How can those kinds of things ever “know” anything? Who is doing the knowing?

I’ll be addressing questions like these (I’m reasonably sure of that) over at the online symposium, which starts tomorrow. For the time being, I’d like to make a couple of general remarks. 

The modest changes in legal doctrine proposed in our book are largely driven by two considerations. 

First, existing legal doctrine in a couple of domains, most especially contracting, which kicks off our discussion and serves as the foundation for the eventual development of the book’s thesis, is placed under considerable strain by its current treatment of highly sophisticated artificial agents. We could maintain current contracting doctrine as is, but we would run the risk of rendering it increasingly implausible. This might be seen as a reasonable price to pay for maintaining our intuitions about the kinds of beings we take artificial agents to be. I’d suggest this retention of intuitions becomes increasingly untenable when we see the disparateness of the entities placed in the same legal category. (Are autonomous robots really just the same as tools like hammers?)

Second, a change in legal doctrine can sometimes bring about better outcomes for us. One of the crucial arguments in our Chapter 2 (one I really hope readers engage with) is an economic assessment of electronic contracting by artificial agents considered as legal agents. I share the skepticism of those in the legal academy who hold that economic analysis of law should not drive all doctrinal change, but in this case, I’d suggest the risk allocation does work out better. An even stronger argument can be made when it comes to privacy: in Chapter 3, the dismissal of the Google defense (“if humans don’t read your email, your privacy is not violated”) is enabled precisely by treating artificial agents as legal agents.

Much more on this in the next few days.

Concurring Opinions Online Symposium on A Legal Theory for Autonomous Artificial Agents

Remember that New York Times article about all the legal headaches that Google’s autonomous cars are causing? Well, if you found that interesting, you should read on.

On February 14-16, the Concurring Opinions blog will host an online symposium dedicated to a discussion of my book A Legal Theory for Autonomous Artificial Agents. (Many thanks to Frank Pasquale for organizing this; Concurring Opinions’ online symposiums are quite a treat; in the past the blog has put on symposiums on Tim Wu’s Master Switch and Jonathan Zittrain’s Future of the Internet.) You can find a preview of the book at Amazon. David Coady recently helped launch the book in Melbourne, Australia with some rather witty and personal opening remarks; well worth a read (full disclosure: David is an old friend of mine).

As of now, the stellar line-up of participants includes Ken Anderson, Ryan Calo, James Grimmelmann, Sonia Katyal, Ian Kerr, Andrea Matwyshyn, Deborah DeMott, Paul Ohm, Ugo Pagallo, Lawrence Solum, Ramesh Subramanian and Harry Surden. The quality, breadth and range of scholarship in that list is quite awe-inspiring. I look forward to reading their responses and discussing the book’s arguments and analysis with them.

The following is the Introduction to the book (Chapter 2 of the book was published in the Illinois Journal of Law, Technology and Policy, and can be found online at SSRN; I will post more excerpts from the book in the next couple of weeks):

Social and economic interactions today increasingly feature a new category of being: the artificial agent. It buys and sells goods; determines eligibility for legal entitlements like healthcare benefits; processes applications for visas and credit cards; collects, acquires and processes financial information; trades on stock markets; and so on. We use language inflected with intentions in describing our interactions with an artificial agent, as when we say “the shopping cart program wants to know my shipping address.” This being’s competence at settling into our lives, in taking on our tasks, leads us to attribute knowledge and motivations, and to delegate responsibility, to it. Its abilities, often approximating human ones and sometimes going beyond them, make it the object of fear and gratitude: it might spy on us, or it might relieve us of tedium and boredom.

The advances in the technical sophistication and autonomous functioning of these systems represent a logical continuation of our social adoption of technologies of automation. Agent programs represent just one end of a spectrum of technologies that automate human capacities and abilities, extend our cognitive apparatus, and become modeled enhancements of ourselves. More than ever before, it is coherent to speak of computer programs and hardware systems as agents working on our behalf. The spelling checker that corrects this page as it is written is a lexicographic agent that aids in our writing, as much an agent as the automated trading system of a major Wall Street brokerage, and the PR2 robot, a prototype personal robotic assistant (Markoff 2009). While some delegations of our work to such agents are the oft-promised ones of alleviating tedious labor, others are ethically problematic, as in robots taking on warfare roles (Singer 2009). Yet others enable a richer, wider set of social and economic interconnections in our networked society, especially evident in e-commerce (Papazoglu 2001).

As we increasingly interact with these artificial agents in unsupervised settings, with no human mediators, their seeming autonomy and their increasingly sophisticated functionality and behavior raise legal and philosophical questions. For as the number of interactions mediated by artificial agents increases, as they become actors in literal, metaphorical and legal senses, it is ever more important to understand, and do justice to, the artificial agent’s role within our networks of social, political and economic relations. What is the standing of these entities in our socio-legal framework? What is the legal status of the commercial transactions they enter into? What legal status should artificial agents have? Should they be mere things, tools, and instrumentalities? Do they have any rights, duties, obligations? What are the legal strategies for making room for these future residents of our polity and society? The increasing sophistication, use, and social embedding of computerized agents makes the coherent answering of older questions raised by mechanical automation ever more necessary.

Carving out a niche for a new category of legal actor is a task rich with legal and philosophical significance. The history of jurisprudence addressing doctrinal changes in the law suggests legal theorizing to accommodate artificial agents will inevitably find its pragmatic deliberations colored by philosophical musings over the nature and being of these agents. Conversely, the accommodation, within legal doctrines, of the artificial agent, will influence future philosophical theorizing about such agents, for such accommodation will invariably include conceptual and empirical assessments of their capacities and abilities. This interplay between law and philosophy is not new: philosophical debates on personhood, for instance, cannot proceed without an acknowledgement of the legal person, just as legal discussions on tort liability are grounded in a philosophical understanding of responsibility and causation.

This book seeks to advance interdisciplinary legal scholarship in answer to the conundrums posed by this new entity in our midst. Drawing upon both contemporary and classical legal and philosophical analysis, we attempt to develop a prescriptive legal theory to guide our interactions with artificial agents, whether as users or operators entering contracts, acquiring knowledge or causing harm through agents, or as persons to whom agents are capable of causing harm in their own right. We seek to apply and extend existing legal and philosophical theories of agency, knowledge attribution, liability, and personhood, to the many roles artificial agents can be expected to play and the legal challenges they will pose while so doing. We emphasize legal continuity, while seeking to refocus on deep existing questions in legal theory.

The artificial agent is here to stay; our task is to accommodate it in a manner that does justice to our interests and its abilities.