Personhood for Non-Humans (including Artificial Agents)

As these articles in recent issues of the New York Times (here and here) and the holding of the Personhood Beyond the Human conference indicate, personhood for non-humans is a live issue, both philosophical and legal. As I noted during the Concurring Opinions online symposium on my book A Legal Theory for Autonomous Artificial Agents last year (an additional link to discussions is here), this includes personhood for artificial agents. Rather than repeat the arguments I made during that symposium, let me just quote–self-indulgently, at a little length–from the conclusion of my book:

The most salutary effect of our discussions thus far on the possibility of personhood for artificial agents might have been to point out the conceptual difficulties in ascriptions of personhood—especially acute in accounts of personhood based on psychological characteristics that might give us both too many persons and too few—and its parasitism on our social needs. The grounding of the person in social needs and legal responsibilities suggests personhood is socially determined, its supposed essence nominal, subject to revision in light of different usages of person. Recognizing personhood may consist of a set of customs and practices, and so while paradigmatic conceptions of persons are based on human beings…the various connections of the concept of person with legal roles concede personhood is a matter of interpretation of the entities in question, explicitly dependent on our relationships and interactions with them.

Personhood thus emerges as a relational, organizing concept that reflects a common form of life and common felt need. For artificial agents to become legal persons, a crucial determinant would be the formation of genuinely interesting relationships, both social and economic, for it is the complexity of the agent’s relational interactions that will be of crucial importance.

Personhood is a status marker of a class of agents we, as a species, are interested in and care about. Such recognition is a function of a rich enough social organization that demands such discourse as a cohesive presence and something that enables us to make the most sense of our fellow beings. Beings that do not possess the capacities to enter into a sufficiently complex set of social relationships are unlikely to be viewed as moral or legal persons by us. Perhaps when the ascription of second-order intentionality becomes a preferred interpretationist strategy in dealing with artificial agents, relationships will be more readily seen as forming between artificial agents and others, and legal personhood will be more likely to be assigned.

Fundamentally, the question of extending legal personality to a particular category of thing remains one of assessing its social importance….The evaluation of the need for legal protection for the entity in question is sensitive, then, to the needs of the community. The entity in question might interact with, and impinge on, social, political, and legal institutions in such a way that the only coherent understanding of its social role emerges by treating it as a person.

The question of legal personality suggests the candidate entity’s presence in our networks of legal and social meanings has attained a level of significance that demands reclassification. An entity is a viable candidate for legal personality in this sense if it fits within our networks of social, political, and economic relations in such a way it can coherently be a subject of legal rulings.

Thus, the real question is whether the scope and extent of artificial agent interactions have reached such a stage. Answers will reveal what we take to be valuable and useful in our future society as well, for we will be engaged in determining what roles artificial agents should be playing for us to be convinced the question of legal personality has become a live issue. Perhaps artificial agents can only become persons if they enter into social relationships that go beyond purely commercial agentlike relationships to genuinely personal relationships (as with medical care robots or companion robots). And even in e-commerce settings, an important part of forming deeper commercial relationships will be whether trust will arise between human and artificial agents; users will need to be convinced “an agent is capable of reliably performing required tasks” and that it will pursue their interests rather than those of a third party.

Autopoietic legal theory, which emphasizes the circularity of legal concepts, suggests, too, that artificial agents’ interactions will play a crucial role in the determination of legal personality….If it is a sufficient condition for personality that an entity engage in legal acts, then an artificial agent participating in the formation of contracts becomes a candidate for legal personality by virtue of its participation in those transactions.

Personhood may be acquired in the form of capacities and sensibilities acquired through initiation into the traditions of thought and action embodied in language and culture; personhood may be the result of the maturation of beings, an attainment that depends on the creation of an evolving intersubjectivity. Artificial agents may be more convincingly thought of as persons as their role within our lives increases and as we develop such intersubjectivity with them. As our experience with children shows, we slowly come to accept them as responsible human beings. Thus we might come to consider artificial agents as legal persons for reasons of expedience, while ascriptions of full moral personhood, independent legal personality, and responsibility might await the attainment of more sophisticated capacities on their part.

Conclusion

While artificial agents are not close to being regarded as moral persons, they are coherently becoming subjects of the intentional stance, and may be thought of as intentional agents. They take actions that they initiate, and their actions can be understood as originating in their own reasons. An artificial agent with the right sorts of capacities—most importantly, that of being an intentional system—would have a strong case for legal personality, a case made stronger by the richness of its relationships with us and by its behavioral patterns. There is no reason in principle that artificial agents could not attain such a status, given their current capacities and the arc of their continued development in the direction of increasing sophistication.

The discussion of contracting suggested the capabilities of artificial agents, doctrinal convenience and neatness, and the economic implications of various choices would all play a role in future determinations of the legal status of artificial agents. Such “system-level” concerns will continue to dominate for the near future. Attributes such as the practical ability to perform cognitive tasks and the ability to control money, and considerations such as cost-benefit analysis, will further influence the decision whether to accord legal personality to artificial agents. Such cost-benefit analysis will need to pay attention to whether agents’ principals will have enough economic incentive to use artificial agents in an increasing array of transactions that grant agents more financial and decision-making responsibility, whether principals will be able, both technically and economically, to grant agents adequate capital assets to be full economic and legal players in tomorrow’s marketplaces, whether the use of such artificial agents will require the establishment of special registers or the taking out of insurance to cover losses arising from malfunction in contractual settings, and even the peculiar and specialized kinds and costs of litigation that the use of artificial agents will involve. Factors such as efficient risk allocation, whether it is necessary to introduce personality in order to explain all relevant phenomena, and whether alternative explanations gel better with existing theory will also carry considerable legal weight in deliberations over personhood. Most fundamentally, such an analysis will evaluate the transaction costs and economic benefits of introducing artificial agents as full legal players in a sphere not used to an explicit acknowledgment of their role.

Many purely technical issues remain unresolved as yet….Economic considerations might ultimately be the most important in any decision whether to accord artificial agents legal personality. Seldom is a law proposed today in an advanced democracy without some semblance of a utilitarian argument that its projected benefits would outweigh its estimated costs. As the range and nature of electronic commerce transactions handled by artificial agents grow and diversify, these considerations will increasingly come into play. Our discussion of the contractual liability implications of the agency law approach to the contracting problem was a partial example of such an analysis.

Whatever the resolution of the arguments considered above, the issue of legal personality for artificial agents may not come ready-formed into the courts, or the courts may be unable or unwilling to do more than take a piecemeal approach, as in the case of extending constitutional protections to corporations. Rather, a system for granting legal personality may need to be set out by legislatures, perhaps through a registration system or “Turing register.”

A final note on these entities that challenge us by their advancing presence in our midst. Philosophical discussions on personal identity often take recourse in the pragmatic notion that ascriptions of personal identity to human beings are of most importance in a social structure where that concept plays the important legal role of determining responsibility and agency. We ascribe a physical and psychological coherence to a rapidly changing object, the human being, because otherwise very little social interaction would make sense. Similarly, it is unlikely that, in a future society where artificial agents wield significant amounts of executive power, anything would be gained by continuing to deny them legal personality. At best it would be a chauvinistic preservation of a special status for biological creatures like us. If we fall back repeatedly on making claims about human uniqueness and the singularity of the human mind and moral sense in a naturalistic world order, then we might justly be accused of being an “autistic” species, unable to comprehend the minds of other types of beings.

Note: Citations removed above.
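
An illustrative aside on the “Turing register” idea: the book deliberately leaves the design of such a register open, but if we combine the considerations quoted above (capital assets, insurance against malfunction in contractual settings, the kinds of transactions an agent may enter into), a hypothetical register entry might look something like the Python sketch below. Every field and name here is my own invention, purely for illustration; nothing in the book specifies a schema.

from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class TuringRegisterEntry:
    """A hypothetical record in a legislature-created register of
    artificial agents accorded (dependent) legal personality."""
    agent_id: str          # unique registered identifier
    principal: str         # human or corporate principal of record
    capital_assets: float  # assets set aside to satisfy judgments
    insured: bool          # insured against losses from malfunction
    registered_on: date    # date of entry onto the register
    permitted_transactions: List[str]  # transaction types the agent may enter

entry = TuringRegisterEntry(
    agent_id="AA-00042",
    principal="Acme Trading LLC",
    capital_assets=250_000.0,
    insured=True,
    registered_on=date(2012, 4, 1),
    permitted_transactions=["electronic contracting", "payment authorization"],
)
print(entry)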

Robot Graders: A Professor’s Delight?

Over at Concurring Opinions, Deven Desai makes note of an interesting study–whose details I have not yet had the time to investigate–underwritten by the William and Flora Hewlett Foundation and conducted by a team of “experts in educational measurement and assessment, led by Dr. Mark Shermis, dean of the College of Education at The University of Akron.” The study claims to have found that:

A direct comparison between human graders and software designed to score student essays achieved virtually identical levels of accuracy, with the software in some cases proving to be more reliable [I am not quite sure what ‘reliable’ means here]
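
A guess at what “reliable” might mean here: in automated essay-scoring studies, reliability is usually cashed out as agreement between scorers, so software is called “more reliable” when its scores agree with a resolved or consensus score more consistently than a second human grader’s do. Agreement on ordinal essay scores is commonly measured with a statistic such as quadratic weighted kappa, which penalizes large disagreements more heavily than near-misses. A minimal sketch in Python, with made-up scores (nothing below is drawn from the study itself):

import numpy as np

def quadratic_weighted_kappa(a, b, min_score=0, max_score=6):
    """Agreement between two raters on an ordinal scale: 1.0 is perfect
    agreement, 0.0 is chance-level, with big disagreements weighted more."""
    a = np.asarray(a) - min_score
    b = np.asarray(b) - min_score
    n = max_score - min_score + 1
    # Observed joint distribution of score pairs
    observed = np.zeros((n, n))
    for i, j in zip(a, b):
        observed[i, j] += 1
    observed /= len(a)
    # Distribution expected by chance, from each rater's marginals
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # Quadratic penalty for the distance between the two scores
    weights = (np.arange(n)[:, None] - np.arange(n)[None, :]) ** 2 / (n - 1) ** 2
    return 1 - (weights * observed).sum() / (weights * expected).sum()

# Made-up scores on a 0-6 scale: two human raters and a machine scorer
human_1 = [4, 3, 5, 2, 4, 3, 6, 1]
human_2 = [4, 2, 5, 3, 4, 3, 5, 1]
machine = [4, 3, 5, 2, 4, 2, 6, 1]

print("human-human agreement:  ", quadratic_weighted_kappa(human_1, human_2))
print("human-machine agreement:", quadratic_weighted_kappa(human_1, machine))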

The reaction of at least one kind of college professor is, I suspect, likely to be: Hallelujah, no more grading! Another kind will mutter and grumble about the invasion of a domain of faculty privilege, the mechanization of a humanist skill, the loss to students of vital professorial feedback and so on. I’m not quite sure which camp I fall into.

The reason for that ambivalent response is that I find the business of grading papers (student writing assignments) genuinely perplexing. I’ve now been grading papers, on and off, for some fifteen years. (That is how long I have been teaching philosophy, first as a graduate teaching fellow, and then later, of course, as a full-time faculty member; before that my teaching was centered on computer science classes and there was little writing to grade.) In that time, I have never had a teaching assistant to help me with grading, but neither have I had to teach a class with more than thirty students in it. But twenty or so six-page or four-page papers–the standard length of my assignments, of which I assign three in a typical philosophy class–is still plenty of work.

And that is so because fifteen years on, I’m still not quite sure how to provide good feedback to my students. I find writing to be very hard work; I struggle with it constantly; I still remain terrified by the blank page. More to the point, when confronted by a piece of writing that doesn’t ‘read well,’ I don’t quite know how to instruct someone other than me in the business of how to make it better. There is an exaggeration here, of course; I can point out problems in relevance (‘You haven’t addressed the question I asked!’); I can note elementary mistakes in spelling and grammar; I can point to mangled sentences and constructions that don’t make sense. And so on. But at the end of this process it still seems like there is something that I haven’t managed to convey to my students. It is for this reason that I urge my students to consult with writing tutors, to have their papers read by their friends (or even their parents, if they have time!).

The long and short of it is that I continue to find writing a bit of a mystery, and given that I find it so intractable, I find the task of teaching someone else how to do it to be particularly insuperable.

Any help would be much appreciated. Bring on the robotic graders!

Kraftwerk Makes Us Tell The Truth: We Are The Robots?

Kraftwerk’s The Robots has been an electro-pop classic ever since its release–on Kraftwerk’s classic seventh album, The Man-Machine, in 1978. My brother and I discovered Kraftwerk at roughly the same time, and, like many other schoolboys, quickly became entranced by its revolutionary blend of synthesizers, vocoders, and electronic percussion. Some thirty years on, I still get a kick out of strapping on the earphones for The Robots (and turning up the volume to eleven); I don’t dance to it but the temptation never quite goes away. (I’ve only seen The Robots performed live once, when I saw Kraftwerk at Sydney’s Enmore Theater in January 2003.)

Besides triggering the urge to flop around in slightly demented fashion, there are two juvenile fantasies of mine that The Robots gives comfort and succor to. One: as part of a grand book tour for A Legal Theory for Autonomous Artificial Agents, I would make a presentation centered on the book that would feature The Robots playing in the background as I walked on stage (I don’t need smoke machines or lasers). And two: wouldn’t it be pretty nifty if I could get an e-book version that would play The Robots when the book file was first opened? Trust me, I spend time thinking about this stuff.

(In the summer of 2006, I played The Robots for Brooklyn high-school students at the conclusion of a summer ‘camp’ that had introduced them to, among other things, robotics and cryptography. I had taught the cryptography track but thought the young folks that had worked on robotics kits would appreciate both the track and the fact that computers and music were connected in ways other than downloading. I’m not sure it went down all that well; most of the students in attendance found the sound perplexing, so at least for that generation, or that demographic, the track had neither aged well nor come across as relevant.)

There are many good versions of The Robots out there; this soundboard recording from a Birmingham (UK) concert on 15 July 1991, from the Dynamo Deutschland CDs, is particularly good. The Russian lines “Я твой слуга” (Ya tvoi sluga, I’m your servant) and “Я твой работник” (Ya tvoi rabotnik, I’m your worker) come across particularly clearly; the lyrics in this live version are also slightly, interestingly, different from the standard lyrics. (Since this is a soundboard recording, there are also some irritating sections where concert-goers can be heard talking!)

The most memorable part of this live track is the sing-along chants, between 1:55 and 2:15, for the chorus “We are the Robots”. In performing the sing-along so vigorously, the Birmingham concert-goers perhaps make two kinds of statements: first, an acknowledgement, in this hyper-corporatized and industrialized age, of the enduring relevance of the two lines in Russian quoted above; and second, a vocalized bridging of the gap between the robots and themselves, perhaps even a joining of communities. The former is appropriately disturbing, but the latter at least can be optimistically read as a denial of difference. (As I often sought to remind my interlocutors during the recent online symposium on my book, we are often more like robots than we might imagine.)

Report on Concurring Opinions Symposium on Artificial Agents – II

Today, I’m continuing my wrap-up of the Concurring Opinions online symposium on A Legal Theory for Autonomous Artificial Agents. I’ll be noting below the various responses to the book and pointing to my responses to them (Part I of this wrap-up was posted yesterday).

While almost all respondents seem to have seriously engaged with the book’s analysis, Ryan Calo wrote a disappointingly unengaged and, at times, patronizing post that ostensibly focused on the book’s methodological adoption of the intentional stance; it seemed to suggest that all we were doing was primitive anthropomorphizing. This was a pretty comprehensive misread of the book’s argument, so I struggled to find anything to say in response. Calo also said he didn’t know whether an autonomous robot was like a hammer or not; this was a bizarre admission coming from someone who is concerned with the legal implications of robotics. I noted in one of my responses that figuring out the answer to that question can be aided by some intuition-tickling questions (like: Would NASA send a hammer to explore Mars? Can hammers drive?). Calo’s follow-up post to my comment on his post was roughly along the lines of “We don’t know what to do with artificial agents.” Well, yes, but I thought the point was to evaluate the attempt currently mounted in our book? I didn’t quite understand the point of Calo’s responses: that we don’t have a comprehensive theory for artificial agents, i.e., that the book’s title is misleading? I could be persuaded into mounting a guilty plea to that. But the point of the book was to indicate how existing doctrines could be suitably modified to start accommodating artificial agents; that is how a legal theory will be built up in a common law system.

Deborah DeMott (Duke), whose writings on the common law doctrines of agency were very useful to our analysis in the book, offered a very good assessment of our attempt to apply that doctrine to artificial agents. While DeMott disagreed with the exactness of the fit, she seemed not to think that it was completely off-base (she certainly found our attempt “lively and ingenious”!); in my response I attempted to clarify and defend some of the reasons why we thought agency doctrine would work with artificial agents.

Ken Anderson (American University, Volokh Conspiracy) then discussed our treatment of intentionality and deployment of the intentional stance, and queried whether we intended to use the intentional stance merely as a heuristic device or whether we were, in fact, making a broader claim for intentionality in general. In my response I noted that we wanted to do both: use it as a methodological stance, and in doing so, also point an investigative lens at our understanding of intentionality in general. Ken’s reaction was very positive; he thought the book had hit a “sweet spot” in not being excessively pie-in-the-sky while offering serious doctrinal recommendations.

Ian Kerr (Ottawa), in his response, didn’t feel the book went far enough in suggesting a workable theory for artificial agents; if I understood Ian correctly, his central complaint was that the theory relied too much on older legal categories and doctrines and that artificial agents might need an entirely new set of legal frameworks. But Ian also felt the slow and steady march of the common law was the best way to handle the challenges posed by artificial agents. So, interestingly enough, I agree with Ian; and I think Ian should be less dissatisfied than he is: our book is merely a first attempt to leverage the common law and take these steps toward a more comprehensive theory. In fact, given rapid developments in artificial agents, the law is largely going to be playing catch-up more than anything else.

Andrew Sutter then wrote a critical, rich response, one that took aim at the book’s rhetoric, its methodology, and its philosophical stance. I greatly enjoyed my jousting with Andrew during this symposium, and my response to his post–and to his subsequent comments–in which I attempted to clarify my philosophical stance and presuppositions, will show that.

Harry Surden (Colorado) wrote a very good post on two understandings of artificial intelligence’s objectives–intelligence as the replication of human cognitive capacities either by replicating human methods of achieving them or via simulations that utilize other techniques–and how these could or would be crucial in the legal response to its achievements. My response to Surden acknowledged the importance of these distinctions and noted that this should also cause us to think about how we often ascribe to human cognition a certain standing that arises largely because of a lack of understanding of its principles. (This also provoked an interesting discussion with AJ Sutter.)

Andrea Matwyshyn wrote an excellent, seriously engaged post that took head-on the fairly detailed and intricate arguments of Chapter 2 (where we offer a solution to the so-called contracting problem by arguing that artificial agents be considered legal agents of their users). My response to Matwyshyn acknowledged the force of her various critical points while trying to expand on and elaborate the economic incentives motivating our claim that artificial agents should be considered as non-identical with their creators and/or deployers.

Once again, I am grateful to Frank Pasquale and the folks over at Concurring Opinions for staging the symposium and to all the participants for their responses.

Report on Concurring Opinions Symposium on Artificial Agents – I

The Concurring Opinions online symposium on my recently released book A Legal Theory for Autonomous Artificial Agents (University of Michigan Press, 2011) wrapped up yesterday. The respondents to the book blogged on it from Tuesday till Thursday last week; from Friday till Monday I spent most of my time putting together replies to the excellent responses offered by the participants; I also replied to comments made by blog readers (two of whom, Patrick S. O’Donnell and AJ Sutter, provided very thoughtful and critical commentary).

Frank Pasquale (Seton Hall) organized the symposium and announced it on the blog on February 2nd. The symposium was kicked off by Sonia Katyal (Fordham), who responded to the book’s argument for legal personhood for artificial agents. While positive in her response, Katyal was curious about whether a strong enough case for legal personhood had been made yet (compared to the historical case for corporations, for instance). (This was useful in helping me think about how such a legal-empirical case could be made for artificial agents’ legal personhood, something I alluded to in my response.)

James Grimmelmann (New York Law School) then followed up with a post that addressed the law’s response to complex systems and pointed out that responding to the presence of artificial agents could or would draw upon some of those patterns of response. (Sonia and James had started things a little early, so my introductory post on artificial agents showed up after theirs!) James also wrote a follow-up to his first piece, which further elaborated on some of law’s strategies for dealing with complexity, pointing out that the grant of personhood was not inevitable. These posts were very useful in illustrating the law’s pragmatic stance towards the presence of complex systems. (Danielle Citron (Maryland), incidentally, wrote a reminder of how automated decision making has been causing a headache for administrative law; in the original version of our book we had begun work on a chapter that addressed this but left it on the cutting-room floor; it would be good to resurrect that at some point.)

Lawrence Solum (Georgetown and Illinois), who has been writing at the intersection of philosophy and law for many years, then wrote a post suggesting that some dimensions of the problem of artificial agents’ legal personhood could be illustrated by a thought experiment involving zombies. (I followed this thought experiment with another of my own: how would we respond to extraterrestrials that petitioned for legal personhood?)

Frank Pasquale then pointed out how bots were being used for political campaigning and could be said to be contributing to political speech; this was really quite a provocative and fascinating post and I regret not having addressed it over at CO in my responses. I will do so soon here.

Ugo Pagallo (Georgetown and Turin), staying with the legal personhood theme, then questioned several aspects of our personhood argument (while agreeing with our agency analysis in earlier parts of the book). In my response to Ugo, I suggested we were in greater agreement than it might have originally seemed. Ramesh Subramanian (Yale ISP and Quinnipiac), meanwhile, took the argument for legal personhood seriously, and wondered more broadly about what some of its futuristic implications could be.

I will have another post tomorrow with summaries and descriptions of the various responses and the discussions that followed. This was an exhausting and invigorating experience in more ways than one.

Artificial Agents and the Law: Legal Personhood in Good Time

The Concurring Opinions online symposium on A Legal Theory for Autonomous Artificial Agents is under way, and most respondents thus far are taking on the speculative portion of the book (where we suggest that legal personhood for autonomous artificial agents is philosophically and legally coherent and might be advanced in the future). The incremental arguments constructed in Chapters 2 and 3 for considering artificial agents as legal agents for the purposes of contracting and knowledge attribution have not yet been taken on. Sometimes, as a result, it has seemed that we are suggesting a far greater change to existing legal doctrines than the modest changes we actually do suggest. Those modest changes could, admittedly, go on to have widespread ramifications, but still, for the time being, that’s all we do.

I’m not surprised that most respondents to the book thus far have chosen to concentrate on the ‘sexier’ argument in Chapter 5. In any case, these comments are thoughtful and thought-provoking, and they have already generated some very interesting discussion. Indeed, some of the objections raised are going to require some very careful responses from my side.

Still, the concentration on the legal personhood aspect of the doctrine we suggest might create one confusion: that in this book we are advocating personhood for artificial agents. Not just legal personhood, but personhood tout court. This is especially ironic, as we have deliberately chosen the most incremental changes possible, in keeping with the law’s generally conservative treatment of proposed changes to legal doctrine.

Here is what we say in the introduction about the argument for legal personhood:

In Chapter 5, we explore the potential for according sophisticated artificial agents legal personality. In order to provide a discursive framework, we distinguish between dependent and independent legal persons. We conclude that the conditions for each kind of legal personality could, in principle, be met by artificial agents in the right circumstances. [emphasis added] We suggest objections to such a status for them are based on a combination of human chauvinism and a misunderstanding of the notion of a legal person [more often than not, this is the conflation of “human” with “legal person”]. We note the result-oriented nature of the jurisprudence surrounding legal personality, and surmise legal personality for artificial agents will follow on their attaining a sufficiently rich and complex positioning within our network of social and economic relationships. The question of legal personality for artificial agents will be informed by a variety of pragmatic, philosophical and extra-legal concepts; philosophically unfounded chauvinism about human uniqueness should not and would not play a significant role in such deliberations.

The “result-oriented nature of the jurisprudence surrounding legal personality” is actually such as to suggest that artificial agents might even be considered legal persons for the purposes of contracting now. But for the time being, I think, we can get the same outcomes just by treating them as legal agents without personhood. Which is why we advocate that change first, and suggest we wait till they attain “a sufficiently rich and complex positioning within our network of social and economic relationships.”

Artificial Agents and the Law: Some Preliminary Considerations

As I noted here last week, the Concurring Opinions blog will be hosting an online symposium on my book A Legal Theory for Autonomous Artificial Agents. There has already been some discussion over at the blog; I’m hoping that once the book has been read and its actual arguments engaged with, we can have a more substantive discussion.

Last week, I spoke on the book at Bard College, to a mixed audience of philosophy, computer science, and cognitive science faculty and students. The question-and-answer session was quite lively and our conversations continued over dinner later.  Some of the questions that were directed at me are quite familiar to me by now: Why make any change in the legal status of artificial agents? That is, why elevate them from non-entities in the ontology of the law to the status of legal agents, or beyond? When it comes to assigning responsibility, why not simply make the designers or deployers of agents responsible for all acts of the artificial agents?  How can an artificial agent, which lacks the supposedly distinctively-human characteristics of <insert consciousness, free-will, rationality, autonomy, subjectivity, phenomenal experience here> ever be considered an “agent” or a “person”? Aren’t you abusing language when you say that a program or a robot can be attributed knowledge? How can those kinds of things ever “know” anything? Who is doing the knowing?  

I’ll be addressing questions like these (I’m reasonably sure of that) over at the online symposium, which starts tomorrow. For the time being, I’d like to make a couple of general remarks. 

The modest changes in legal doctrine proposed in our book are largely driven by two considerations. 

First, existing legal doctrine, in a couple of domains, most especially contracting, which kicks off our discussion and serves as the foundation for the eventual development of the book’s thesis, is placed under considerable strain by its current treatment of highly sophisticated artificial agents. We could maintain current contracting doctrine as is, but we would run the risk of increasing its implausibility. This might be seen as a reasonable price to pay so that we can maintain our intuitions about the kinds of beings we take artificial agents to be. I’d suggest this kind of retention of intuitions becomes increasingly untenable when we see the disparateness of the entities placed in the same legal category. (Are autonomous robots really just the same as tools like hammers?)

Second, a change in legal doctrine can sometimes bring about better outcomes for us. One of the crucial arguments in our Chapter 2 (one I really hope readers engage with) is an assessment, along the economic dimension, of electronic contracting by artificial agents considered as legal agents. I share the skepticism of those in the legal academy who hold that economic analysis of law should not drive all doctrinal change, but in this case, I’d suggest the risk allocation does work out better. And I think an even stronger argument can be made when it comes to privacy. In Chapter 3, the dismissal of the Google defense (“if humans don’t read your email, your privacy is not violated”) is enabled precisely by treating artificial agents as legal agents.

Much more on this in the next few days.