Beyonce And The Singularity

A couple of decades ago, I strolled through Washington Square Park on a warm summer night, idly observing the usual hustle and bustle of students, tourists, drunks, buskers, hustlers, stand-up comedians, and, sadly, folks selling oregano instead of honest-to-goodness weed. As I did so, I noticed a young man holding up flyers and yelling, “Legalize Marijuana! Impeach George Bush!” [Sr., not Jr., though either would have done just fine.] I walked over and asked for a flyer. Was a new political party being floated with these worthy objectives as central platform issues? Was there a political movement afoot, one worthy of my support? Was a meeting being called?

The flyers were for a punk rock band’s live performance the following night–at a club, a block or so away. Clickbait, you see, is as old as the hills.

Clickbait works. From the standard ‘You won’t believe what this twelve-year-old did to get his divorced parents back together’ to ‘Ten signs your daughter is going to date a loser in high school’ to ‘Nine ways you are wasting money every day’–they all work. You are intrigued; you click; the hit-count goes up; little counters spin; perhaps some unpaid writer gets paid as a threshold is crossed; an advertiser forks out money to the site posting the link. Or something like that. It’s all about the hits; they keep the internet engine running; increasing their number justifies any means.

Many a writer finds out that the headline for their post has been changed to something deemed more likely to bring in readers. They often do not agree with these changes–especially when irate readers complain about their misleading nature. This becomes especially pernicious when trash-talking about a piece of writing spreads–based not on its content, but on its headline, one not written by the author, but dreamed up by a website staffer instructed to do anything–anything!–to increase the day’s hit-count.

A notable personal instance of this phenomenon occurred with an essay I wrote for The Nation a little while ago. My original title for the essay was Programs, Not Just People, Can Violate Your Privacy. I argued that smart programs could violate privacy just like humans could, and that the standard defense used by their deployers–“Don’t worry, no humans are reading your email”–was deliberately and dangerously misleading. I then went on to suggest granting a limited form of legal agency to these programs–so that their deployers could be understood as their legal principals and hence attributed their knowledge and made liable for their actions. I acknowledged that the grant of personhood was a legal move that would also solve this problem, but that was not the main thrust of my argument–the grant of legal agency to invoke agency law would be enough.

My essay went online as Programs Are People, Too. It was a catchy title, but it was clickbait. And it created predictable misunderstanding: many readers–and non-readers–simply assumed I was arguing for greater ‘legal rights’ for programs, and immediately put me down as some kind of technophilic anti-humanist. Ironically, someone arguing for the protection of user rights online was pegged as arguing against them. The title was enough to convince them of it. I had thought my original title was more accurate, and it certainly seemed catchy enough to me. Not so, apparently, for the folks who ran The Nation’s site. C’est la vie.

As for Beyonce, I have no idea what she thinks about the singularity.

Programs as Agents, Persons, or just Programs?

Last week, The Nation published my essay “Programs are People, Too.” In it, I argued for treating smart programs as the legal agents of those who deploy them, a legal change I suggest would be more protective of our privacy rights.

Among some of the responses I received was one from a friend, JW, who wrote:

[You write: But automation somehow deludes some people—besides Internet users, many state and federal judges—into imagining our privacy has not been violated. We are vaguely aware something is wrong, but are not quite sure what.]
 
I think we are aware that something is wrong and that it is less wrong. We already have an area of the law where we deal with this, namely, dog sniffs. We think dog sniffs are less injurious than people rifling through our luggage; indeed, the law refers to those sniffs as “sui generis.” And I think they *are* less injurious, just like it doesn’t bother me that Google searches my email with an algorithm. This isn’t to say that it’s unreasonable for some people to be bothered by it, but I do think people are rightly aware that it is different and less intrusive than if some human were looking through their email.
 
We don’t need to attribute personhood to dogs to feel violated by police bringing their sniffing dogs up to our house for no reason, but at the same time we basically accept their presence in airports. And what bothers us isn’t what’s in the dog’s mind, but in the master’s. If a police dog smelled around my house, made an alert, but no police officer was there to interpret the alert, I’m not sure it would bother me.
 
Similarly, even attributing intentional states to algorithms as sophisticated as a dog, I don’t think their knowledge would bother me until it was given to some human (what happens when they are as sophisticated as humans is another question).  
 
I’m not sure good old-fashioned Fourth Amendment balancing can’t be instructive here. Do we have a reasonable expectation of privacy in x? What are the governmental interests at stake, and how large an intrusion is being made into the reasonable expectation of privacy?
 

JW makes two interesting points. First, is the scanning or reading of our personal data by programs really injurious to privacy in the way a human’s reading is? Second, is the legal change I’m suggesting even necessary?

Second point first. Treating smart programs as legal persons is not necessary to bring about the changes I’m suggesting in my essay. Plain old legal agency without legal personhood will do just fine. Most legal jurisdictions require legal agents to be persons too, but this has not always been the case. Consider the following passage, which did not make it to the final version of the online essay:

If such a change—to full-blown legal personhood and legal agency—is felt to be too much, too soon, then we could also grant programs a limited form of legal agency without legal personhood. There is a precedent for this too: slaves in Roman times, despite not being persons in the eyes of the law, were allowed to enter into contracts for their masters, and were thus treated as their legal intermediaries. I mention this precedent because the legal system might prefer that the change in legal status of artificial agents be an incremental one; before they become legal persons and thus full legal subjects, they could ‘enjoy’ this form of limited legal subjecthood. As a society we might find this status uncomfortable enough to want to change their status to legal persons if we think its doctrinal and political advantages—like those alluded to here—are significant enough.

Now to JW’s first point. Is a program’s access to my personal data less injurious than a human’s? I don’t think so. Programs can do things with data: they can act on it. The opening example in my essay demonstrates this quite well:

Imagine the following situation: Your credit card provider uses a risk assessment program that monitors your financial activity. Using the information it gathers, it notices your purchases are following a “high-risk pattern”; it does so on the basis of a secret, proprietary algorithm. The assessment program, acting on its own, cuts off the use of your credit card. It is courteous enough to email you a warning. Thereafter, you find that actions that were possible yesterday—like making electronic purchases—no longer are. No humans at the credit card company were involved in this decision; its representative program acted autonomously on the basis of pre-set risk thresholds.

Notice in this example that for my life to be impinged on by the agency/actions of others, it was not necessary that a single human being be involved. We so often interact with the world through programs that they command considerable agency in our lives. Our personal data is valuable to us because control of it may make a difference to our lives; if programs can use the data to do so then our privacy laws should regulate them too–explicitly.
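To make the ‘no humans involved’ point concrete, here is a minimal, purely hypothetical sketch of such a pipeline in Python. Every name and number in it (the threshold, the scoring rule, the helper functions) is invented for illustration; real providers’ systems are proprietary and doubtless far more elaborate.

    # A hypothetical sketch of an autonomous risk-assessment program.
    # Nothing here describes any real credit card provider's system.

    RISK_THRESHOLD = 0.8  # pre-set by the deployer; the cardholder never sees it

    def risk_score(transactions):
        """Stand-in for a secret, proprietary scoring algorithm."""
        flagged = [t for t in transactions
                   if t["amount"] > 1000 or t["country"] != "home"]
        return len(flagged) / max(len(transactions), 1)

    def suspend_card(card_id):
        print(f"Card {card_id} suspended: high-risk pattern detected.")

    def send_warning_email(address):
        print(f"Courtesy warning emailed to {address}.")

    def monitor(account):
        # The program, acting on its own, compares a score to a threshold
        # and curtails the cardholder's actions. No human reviews the decision.
        if risk_score(account["transactions"]) > RISK_THRESHOLD:
            suspend_card(account["card_id"])
            send_warning_email(account["email"])

    monitor({"card_id": "1234", "email": "user@example.com",
             "transactions": [{"amount": 1500, "country": "abroad"},
                              {"amount": 2000, "country": "home"}]})

The ‘decision’ that closes off my electronic purchases is nothing more than a threshold comparison, executed and acted upon entirely on the program’s side.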

Let us return to JW’s sniffer dog example and update it. The dog is a robotic one; it uses sophisticated scanning technology to detect traces of cocaine on a passenger’s bag. When it does so, the nametag/passport photo associated with the bag is automatically transmitted to a facial recognition system, which establishes a match, and immediately sets off a series of alarms: perhaps my bank accounts are closed, perhaps my sophisticated car is immobilized, and so on. No humans need be involved in this decision; I may find my actions curtailed without any human having taken a single action. We don’t need “a police officer to interpret the alert.” (But I’ve changed his dog to a robotic dog, haven’t I? Yes, because the programs I am considering are, in some dimensions, considerably smarter than a sniffer dog. They are much, much dumber in others.)
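The updated example can be sketched just as minimally; again, every function below is a hypothetical stand-in, not a description of any real system. What it makes visible is that the dog’s ‘alert’ is interpreted and acted upon by other programs, not by an officer.

    # Hypothetical sketch: one program's 'knowledge' flows directly to
    # other programs, which act on it without human involvement.

    def detect_traces(bag):
        """Stand-in for the robotic dog's chemical scanner."""
        return "cocaine" in bag["trace_residues"]

    def identify_passenger(bag):
        """Stand-in for a facial-recognition match on the bag's nametag photo."""
        return bag["passport_photo_id"]

    def trigger_sanctions(passenger_id):
        # Downstream systems consume the upstream alert automatically.
        print(f"Accounts frozen, vehicle immobilized: passenger {passenger_id}")

    def checkpoint(bag):
        if detect_traces(bag):  # the alert is raised...
            trigger_sanctions(identify_passenger(bag))  # ...and acted upon, program to program

    checkpoint({"trace_residues": ["cocaine"], "passport_photo_id": "P-001"})

Each program in the chain consumes the output of the one before it; no officer ‘interprets’ anything.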

In speaking of the sniffer dog, JW says “I don’t think their knowledge would bother me until it was given to some human.” But as our examples show, a program could make the knowledge available to other programs, which could take actions too.

Indeed, programs could embarrass us too: imagine a society in which sex offenders are automatically flagged in public by publishing their photos on giant television screens in Times Square. Scanning programs intercept an email of mine, in which I have sent photos–of my toddler daughter bathing with her pre-school friend–to my wife. They decide on the basis of this data that I am a sex offender and flag me as such. Perhaps I’m only ‘really’ embarrassed when humans ‘view’ my photo, but the safeguards for accessing data and its use need to be placed ‘upstream.’

Humans aren’t the only ones taking actions in this world of ours; programs are agents too. It is their agency that makes their access to our data interesting and possibly problematic. The very notion of an autonomous program would be considerably less useful if such programs couldn’t act on their own, interact with each other, and bring about changes.

Lastly, JW also raises the question of whether we have a reasonable expectation of privacy in our email–stored on our providers’ servers. Thanks to the terrible third-party doctrine, the Supreme Court has decided we do not. But this notion is ripe for overruling in these days of cloud computing. Our legal changes–on legal and normative grounds–should not be held up by bad law. But even if this doctrine were to stand, it would not affect my arguments in the essay, which conclude that data in transit, which is subject to the Wiretap Act, is still something in which we may find a reasonable expectation of privacy.

Personhood for Non-Humans (including Artificial Agents)

As these articles in recent issues of the New York Times (here and here) and the convening of the Personhood Beyond the Human conference indicate, personhood for non-humans is a live issue, both philosophical and legal. As I noted during the Concurring Opinions online symposium on my book A Legal Theory for Autonomous Artificial Agents last year (an additional link to the discussions is here), this includes personhood for artificial agents. Rather than repeat the arguments I made during that symposium, let me just quote–self-indulgently, at a little length–from the conclusion of my book:

The most salutary effect of our discussions thus far on the possibility of personhood for artificial agents might have been to point out the conceptual difficulties in ascriptions of personhood—especially acute in accounts of personhood based on psychological characteristics that might give us both too many persons and too few—and its parasitism on our social needs. The grounding of the person in social needs and legal responsibilities suggests personhood is socially determined, its supposed essence nominal, subject to revision in light of different usages of person. Recognizing personhood may consist of a set of customs and practices, and so while paradigmatic conceptions of persons are based on human beings…the various connections of the concept of person with legal roles concede personhood is a matter of interpretation of the entities in question, explicitly dependent on our relationships and interactions with them.

Personhood thus emerges as a relational, organizing concept that reflects a common form of life and common felt need. For artificial agents to become legal persons, a crucial determinant would be the formation of genuinely interesting relationships, both social and economic, for it is the complexity of the agent’s relational interactions that will be of crucial importance.

Personhood is a status marker of a class of agents we, as a species, are interested in and care about. Such recognition is a function of a rich enough social organization that demands such discourse as a cohesive presence and something that enables us to make the most sense of our fellow beings. Beings that do not possess the capacities to enter into a sufficiently complex set of social relationships are unlikely to be viewed as moral or legal persons by us. Perhaps when the ascription of second-order intentionality becomes a preferred interpretationist strategy in dealing with artificial agents, relationships will be more readily seen as forming between artificial agents and others and legal personhood is more likely to be assigned.

Fundamentally, the question of extending legal personality to a particular category of thing remains one of assessing its social importance….The evaluation of the need for legal protection for the entity in question is sensitive, then, to the needs of the community. The entity in question might interact with, and impinge on, social, political, and legal institutions in such a way that the only coherent understanding of its social role emerges by treating it as a person.

The question of legal personality suggests the candidate entity’s presence in our networks of legal and social meanings has attained a level of significance that demands reclassification. An entity is a viable candidate for legal personality in this sense if it fits within our networks of social, political, and economic relations in such a way it can coherently be a subject of legal rulings.

Thus, the real question is whether the scope and extent of artificial agent interactions have reached such a stage. Answers will reveal what we take to be valuable and useful in our future society as well, for we will be engaged in determining what roles artificial agents should be playing for us to be convinced the question of legal personality has become a live issue. Perhaps artificial agents can only become persons if they enter into social relationships that go beyond purely commercial agentlike relationships to genuinely personal relationships (like medical care robots or companion robots). And even in e-commerce settings, an important part of forming deeper commercial relationships will be whether trust will arise between human and artificial agents; users will need to be convinced “an agent is capable of reliably performing required tasks” and will pursue their interests rather than those of a third party.

Autopoietic legal theory, which emphasizes the circularity of legal concepts, suggests too, that artificial agents’ interactions will play a crucial role in the determination of legal personality….If it is a sufficient condition for personality that an entity engage in legal acts, then, an artificial agent participating in the formation of contracts becomes a candidate for legal personality by virtue of its participation in those transactions.

Personhood may be acquired in the form of capacities and sensibilities acquired through initiation into the traditions of thought and action embodied in language and culture; personhood may be the result of the maturation of beings, whose attainment depends on the creation of an evolving intersubjectivity. Artificial agents may be more convincingly thought of as persons as their role within our lives increases and as we develop such intersubjectivity with them. As our experience with children shows, we slowly come to accept them as responsible human beings. Thus we might come to consider artificial agents as legal persons for reasons of expedience, while ascriptions of full moral personhood, independent legal personality, and responsibility might await the attainment of more sophisticated capacities on their part.

Conclusion

While artificial agents are not close to being regarded as moral persons, they are coherently becoming subjects of the intentional stance, and may be thought of as intentional agents. They take actions that they initiate, and their actions can be understood as originating in their own reasons. An artificial agent with the right sorts of capacities—most importantly, that of being an intentional system—would have a strong case for legal personality, a case made stronger by the richness of its relationships with us and by its behavioral patterns. There is no reason in principle that artificial agents could not attain such a status, given their current capacities and the arc of their continued development in the direction of increasing sophistication.

The discussion of contracting suggested the capabilities of artificial agents, doctrinal convenience and neatness, and the economic implications of various choices would all play a role in future determinations of the legal status of artificial agents. Such “system-level” concerns will continue to dominate for the near future. Attributes such as the practical ability to perform cognitive tasks, the ability to control money, and considerations such as cost benefit analysis, will further influence the decision whether to accord legal personality to artificial agents. Such cost-benefit analysis will need to pay attention to whether agents’ principals will have enough economic incentive to use artificial agents in an increasing array of transactions that grant agents more financial and decision-making responsibility, whether principals will be able, both technically and economically, to grant agents adequate capital assets to be full economic and legal players in tomorrow’s marketplaces, whether the use of such artificial agents will require the establishment of special registers or the taking out of insurance to cover losses arising from malfunction in contractual settings, and even the peculiar and specialized kinds and costs of litigation that the use of artificial agents will involve. Factors such as efficient risk allocation, whether it is necessary to introduce personality in order to explain all relevant phenomena, and whether alternative explanations gel better with existing theory, will also carry considerable legal weight in deliberations over personhood. Most fundamentally, such an analysis will evaluate the transaction costs and economic benefits of introducing artificial agents as full legal players in a sphere not used to an explicit acknowledgment of their role.

Many purely technical issues remain unresolved as yet….Economic considerations might ultimately be the most important in any decision whether to accord artificial agents legal personality. Seldom is a law proposed today in an advanced democracy without some semblance of a utilitarian argument that its projected benefits would outweigh its estimated costs. As the range and nature of electronic commerce transactions handled by artificial agents grows and diversifies, these considerations will increasingly come into play. Our discussion of the contractual liability implications of the agency law approach to the contracting problem was a partial example of such an analysis.

Whatever the resolution of the arguments considered above, the issue of legal personality for artificial agents may not come ready-formed into the courts, or the courts may be unable or unwilling to do more than take a piecemeal approach, as in the case of extending constitutional protections to corporations. Rather, a system for granting legal personality may need to be set out by legislatures, perhaps through a registration system or “Turing register.”

A final note on these entities that challenge us by their advancing presence in our midst. Philosophical discussions on personal identity often take recourse in the pragmatic notion that ascriptions of personal identity to human beings are of most importance in a social structure where that concept plays the important legal role of determining responsibility and agency. We ascribe a physical and psychological coherence to a rapidly changing object, the human being, because otherwise very little social interaction would make sense. Similarly, it is unlikely that, in a future society where artificial agents wield significant amounts of executive power, anything would be gained by continuing to deny them legal personality. At best it would be a chauvinistic preservation of a special status for biological creatures like us. If we fall back repeatedly on making claims about human uniqueness and the singularity of the human mind and moral sense in a naturalistic world order, then we might justly be accused of being an “autistic” species, unable to comprehend the minds of other types of beings.

Note: Citations removed above.

The Personhood Beyond the Human Conference

This weekend (Dec 7-8) I am attending the Personhood Beyond the Human conference at Yale University. Here is a description of the conference’s agenda:

The event will focus on personhood for nonhuman animals, including great apes, cetaceans, and elephants, and will explore the evolving notions of personhood by analyzing them through the frameworks of neuroscience, behavioral science, philosophy, ethics, and law….Special consideration will be given to discussions of nonhuman animal personhood, both in terms of understanding the history, science, and philosophy behind personhood, and ways to protect animal interests through the establishment of legal precedents and by increasing public awareness.

I will be speaking on Sunday afternoon. Here is an abstract for my talk:

Personhood for Artificial Agents: What it teaches us about animals’ rights

For the past few years, I have presented arguments based on my book, A Legal Theory for Autonomous Artificial Agents, which suggest that legal and perhaps even moral and metaphysical personhood for artificial agents is not a conceptual impossibility. In some cases, a form of dependent legal personality might even be possible in today’s legal frameworks for such entities. As I have presented these arguments, I have encountered many objections to them. In this talk, I will examine some of these objections, as they have taught me a great deal about how personhood for artificial agents is relevant to the question of human beings’ relationships with animals. I will conclude with the claims that a) advocating personhood for artificial agents should not be viewed as an anti-humanistic perspective and b) rather, it should allow us to assess the question of animals’ rights more sympathetically.

Steven Wise, the most prominent animal rights lawyer in the US, will be speaking today and sharing some rather interesting news about some very important lawsuits filed by his organization, the Nonhuman Rights Project, on behalf of great apes, arguing for their legal personhood. (Some information can be found here, and there is heaps more at the website, obviously.)

If you are in the area, do stop on by.

Report on Concurring Opinions Symposium on Artificial Agents – II

Today, I’m continuing my wrap-up of the Concurring Opinions online symposium on A Legal Theory for Autonomous Artificial Agents. Below, I note the various responses to the book and point to my responses to them (Part I of this wrap-up was posted yesterday).

While almost all respondents seem to have seriously engaged with the book’s analysis, Ryan Calo wrote a disappointingly unengaged, and at times patronizing, post that ostensibly focused on the book’s methodological adoption of the intentional stance; it seemed to suggest that all we were doing was primitive anthropomorphizing. This was a pretty comprehensive misread of the book’s argument, so I struggled to find anything to say in response. Calo also said he didn’t know whether an autonomous robot was like a hammer or not; this was a bizarre admission coming from someone who is concerned with the legal implications of robotics. I noted in one of my responses that figuring out the answer to that question can be aided by some intuition-tickling questions (Like: Would NASA send a hammer to explore Mars? Can hammers drive?). Calo’s follow-up post to my comment on his post was roughly along the lines of “We don’t know what to do with artificial agents.” Well, yes, but I thought the point was to evaluate the attempt currently mounted in our book. I didn’t quite understand the point of Calo’s responses: that we don’t have a comprehensive theory for artificial agents, i.e., that the book’s title is misleading? I could be persuaded into mounting a guilty plea for that. But the point of the book was to indicate how existing doctrines could be suitably modified to start accommodating artificial agents–that is how a legal theory gets built up in a common law system.

Deborah DeMott (Duke), whose writings on the common law doctrines of agency were very useful in our analysis in the book, offered a very good analysis of our attempts to apply that doctrine to artificial agents. While DeMott disagreed with the exactness of the fit, she seemed not to think that it was completely off-base (she certainly found our attempt “lively and ingenious”!); in my response I attempted to clarify and defend some of our reasons for why we thought agency doctrine would work with artificial agents.

Ken Anderson (American University, Volokh Conspiracy) then discussed our treatment of intentionality and deployment of the intentional stance, and queried whether we intended to use the intentional stance merely as a heuristic device or whether we were, in fact, making a broader claim for intentionality in general. In my response I noted that we wanted to do both: use it as a methodological stance, and in doing so, also point an investigative lens at our understanding of intentionality in general. Ken’s reaction was very positive; he thought the book had hit a “sweet spot” in not being excessively pie-in-the-sky while offering serious doctrinal recommendations.

Ian Kerr (Ottawa), in his response, didn’t feel the book went far enough in suggesting a workable theory for artificial agents; if I understood Ian correctly, his central complaint was that the theory relied too much on older legal categories and doctrines and that artificial agents might need an entirely new set of legal frameworks. But Ian also felt the slow and steady march of the common law was the best way to handle the challenges posed by artificial agents. So, interestingly enough, I agree with Ian, and I think he should be less dissatisfied than he is; our book is merely a first attempt to leverage the common law and work towards a more comprehensive theory. In fact, given rapid developments in artificial agents, the law is largely going to be playing catch-up more than anything else.

Andrew Sutter then wrote a critical, rich response, one that took aim at the book’s rhetoric, its methodology, and its philosophical stance. I greatly enjoyed my jousting with Andrew during this symposium, and my response to his post–and to his subsequent comments–in which I attempted to clarify my philosophical stance and presuppositions, will show that.

Harry Surden (Colorado) wrote a very good post on two understandings of artificial intelligence’s objectives–intelligence as the replication of human cognitive capacities, achieved either by replicating human methods or via simulations that utilize other techniques–and how these could or would be crucial in the legal response to its achievements. My response to Surden acknowledged the importance of these distinctions and noted that they should also cause us to think about how we often accord human cognition a certain standing that arises largely from a lack of understanding of its principles. (This also provoked an interesting discussion with AJ Sutter.)

Andrea Matwyshyn wrote an excellent, seriously engaged post that took head-on the fairly detailed and intricate arguments of Chapter 2 (where we offer a solution for the so-called contracting problem by arguing that artificial agents be considered legal agents of their users). My response to Matwyshyn acknowledged the force of her various critical points while trying to expand on and elaborate the economic incentivizing motivation for our claim that artificial agents should be considered as non-identical with their creators and/or deployers.

Once again, I am grateful to Frank Pasquale and the folks over at Concurring Opinions for staging the symposium and to all the participants for their responses.

Report on Concurring Opinions Symposium on Artificial Agents – I

The Concurring Opinions online symposium on my recently released book A Legal Theory for Autonomous Artificial Agents (University of Michigan Press, 2011) wrapped up yesterday. The respondents to the book blogged on it from Tuesday till Thursday last week; from Friday till Monday I spent most of my time putting together responses to the participants’ excellent posts; I also replied to comments made by blog readers (two of whom, Patrick S. O’Donnell and AJ Sutter, provided very thoughtful and critical commentary).

Frank Pasquale (Seton Hall) organized the symposium and announced it on the blog on February 2nd. The symposium was kicked off by Sonia Katyal (Fordham), who responded to the book’s argument for legal personhood for artificial agents. While positive in her response, Katyal was curious about whether a strong enough case for legal personhood had been made yet (compared to the historical case for corporations, for instance). (This was useful in helping me think about how such a legal-empirical case could be made for artificial agents’ legal personhood, something I alluded to in my response.)

James Grimmelmann (New York Law School) then followed up with a post that addressed the law’s response to complex systems and pointed out that responding to the presence of artificial agents could or would draw upon some of those patterns of response. (Sonia and James had started things a little early, so my introductory post on artificial agents showed up after theirs!) James also wrote a follow-up to his first piece, which further elaborated on some of the law’s strategies for dealing with complexity, pointing out that the grant of personhood was not inevitable. These posts were very useful in illustrating the law’s pragmatic stance towards the presence of complex systems. (Danielle Citron (Maryland), incidentally, wrote a reminder of how automated decision making has been causing a headache for administrative law; in the original version of our book we had begun work on a chapter that addressed this but left it on the cutting-room floor; it would be good to resurrect that at some point.)

Lawrence Solum (Georgetown and Illinois), who has been writing at the intersection of philosophy and law for many years, then wrote a post suggesting that some dimensions of the problem of artificial agents’ legal personhood could be illustrated by a thought experiment involving zombies.  (I drew upon this thought experiment with another one of my own: how would we respond to extraterrestrials that petitioned for legal personhood?)

Frank Pasquale then pointed out how bots were being used for political campaigning and could be said to be contributing to political speech; this was really quite a provocative and fascinating post and I regret not having addressed it over at CO in my responses. I will do so soon here.

Ugo Pagallo (Georgetown and Turin), staying with the legal personhood theme, then questioned several aspects of our personhood argument (while agreeing with our agency analysis in earlier parts of the book). In my response to Ugo, I suggested we were in greater agreement than it might have originally seemed. Ramesh Subramanian (Yale ISP and Quinnipiac), meanwhile, took the argument for legal personhood seriously, and wondered more broadly about what some of its futuristic implications could be.

I will have another post tomorrow with summaries and descriptions of the various responses and the discussions that followed. This was an exhausting and invigorating experience in more ways than one.

Artificial Agents and the Law: Legal Personhood in Good Time

The Concurring Opinions online symposium on A Legal Theory for Autonomous Artificial Agents is under way, and most respondents thus far are taking on the speculative portion of the book (where we suggest that legal personhood for autonomous artificial agents is philosophically and legally coherent and might be advanced in the future). The incremental arguments constructed in Chapters 2 and 3 for considering artificial agents as legal agents for the purposes of contracting and knowledge attribution have not yet been taken on. Sometimes, as a result, it has seemed that we are suggesting a far greater change to existing legal doctrines than the modest changes we actually do suggest. Those modest changes could, admittedly, go on to have widespread ramifications, but still, for the time being, that’s all we suggest.

I’m not surprised that most respondents to the book thus far have chosen to concentrate on the ‘sexier’ argument in Chapter 5. In any case, these comments are thoughtful and thought-provoking, and they have already generated some very interesting discussion. Indeed, some of the objections raised are going to require some very careful responses from my side.

Still, the concentration on the legal personhood aspect of the doctrine we suggest might create one misunderstanding: that in this book we are advocating personhood for artificial agents. Not just legal personhood, but in fact personhood tout court. This is especially ironic, as we have deliberately chosen the most incremental changes in doctrine possible, in keeping with the law’s generally conservative treatment of proposed changes to legal doctrine.

Here is what we say in the introduction about the argument for legal personhood:

<start quote>

In Chapter 5, we explore the potential for according sophisticated artificial agents legal personality. In order to provide a discursive framework, we distinguish between dependent and independent legal persons. We conclude that the conditions for each kind of legal personality could, in principle, be met by artificial agents in the right circumstances. [emphasis added] We suggest objections to such a status for them are based on a combination of human chauvinism and a misunderstanding of the notion of a legal person [more often than not, this is the conflation of “human” with “legal person”]. We note the result-oriented nature of the jurisprudence surrounding legal personality, and surmise legal personality for artificial agents will follow on their attaining a sufficiently rich and complex positioning within our network of social and economic relationships. The question of legal personality for artificial agents will be informed by a variety of pragmatic, philosophical and extra-legal concepts; philosophically unfounded chauvinism about human uniqueness should not and would not play a significant role in such deliberations.

<end quote>

The “result-oriented nature of the jurisprudence surrounding legal personality” is actually such as to suggest that artificial agents might even be considered legal persons for the purposes of contracting now. But for the time being, I think, we can get the same outcomes just by treating them as legal agents without personhood. Which is why we advocate that change first, and suggest we wait till they attain “a sufficiently rich and complex positioning within our network of social and economic relationships.”