As these articles in recent issues of the New York Times (here and here) and the holding of the Personhood Beyond the Human conference indicate, personhood for non-humans is a live issue, both philosophical and legal. As I noted during the Concurring Opinions online symposium on my book A Legal Theory for Autonomous Artificial Agents last year (an additional link to discussions is here), this includes personhood for artificial agents. Rather than repeat the arguments I made during that symposium, let me just quote, self-indulgently and at a little length, from the conclusion of my book:
The most salutary effect of our discussions thus far on the possibility of personhood for artificial agents might have been to point out the conceptual difficulties in ascriptions of personhood—especially acute in accounts of personhood based on psychological characteristics that might give us both too many persons and too few—and its parasitism on our social needs. The grounding of the person in social needs and legal responsibilities suggests personhood is socially determined, its supposed essence nominal, subject to revision in light of different usages of person. Recognizing personhood may consist of a set of customs and practices, and so while paradigmatic conceptions of persons are based on human beings…the various connections of the concept of person with legal roles concede personhood is a matter of interpretation of the entities in question, explicitly dependent on our relationships and interactions with them.
Personhood thus emerges as a relational, organizing concept that reflects a common form of life and commonly felt need. For artificial agents to become legal persons, a crucial determinant would be the formation of genuinely interesting relationships, both social and economic, for it is the complexity of the agent’s relational interactions that will be of crucial importance.
Personhood is a status marker of a class of agents we, as a species, are interested in and care about. Such recognition is a function of a rich enough social organization that demands such discourse as a cohesive presence and something that enables us to make the most sense of our fellow beings. Beings that do not possess the capacities to enter into a sufficiently complex set of social relationships are unlikely to be viewed as moral or legal persons by us. Perhaps when the ascription of second-order intentionality becomes a preferred interpretationist strategy in dealing with artificial agents, relationships will be more readily seen as forming between artificial agents and others, and legal personhood more likely to be assigned.
Fundamentally, the question of extending legal personality to a particular category of thing remains one of assessing its social importance….The evaluation of the need for legal protection for the entity in question is sensitive, then, to the needs of the community. The entity in question might interact with, and impinge on, social, political, and legal institutions in such a way that the only coherent understanding of its social role emerges by treating it as a person.
The question of legal personality suggests the candidate entity’s presence in our networks of legal and social meanings has attained a level of significance that demands reclassification. An entity is a viable candidate for legal personality in this sense if it fits within our networks of social, political, and economic relations in such a way it can coherently be a subject of legal rulings.
Thus, the real question is whether the scope and extent of artificial agent interactions have reached such a stage. Answers will reveal what we take to be valuable and useful in our future society as well, for we will be engaged in determining what roles artificial agents should be playing for us to be convinced the question of legal personality has become a live issue. Perhaps artificial agents can only become persons if they enter into social relationships that go beyond purely commercial agentlike relationships to genuinely personal relationships (like medical care robots or companion robots). And even in e-commerce settings, an important part of forming deeper commercial relationships will be whether trust will arise between human and artificial agents; users will need to be convinced “an agent is capable of reliably performing required tasks” and will pursue their interests rather than those of a third party.
Autopoietic legal theory, which emphasizes the circularity of legal concepts, suggests, too, that artificial agents’ interactions will play a crucial role in the determination of legal personality….If it is a sufficient condition for personality that an entity engage in legal acts, then an artificial agent participating in the formation of contracts becomes a candidate for legal personality by virtue of its participation in those transactions.
Personhood may be acquired in the form of capacities and sensibilities acquired through initiation into the traditions of thought and action embodied in language and culture; personhood may be the result of the maturation of beings, an attainment that depends on the creation of an evolving intersubjectivity. Artificial agents may be more convincingly thought of as persons as their role within our lives increases and as we develop such intersubjectivity with them. As our experience with children shows, we slowly come to accept them as responsible human beings. Thus we might come to consider artificial agents as legal persons for reasons of expedience, while ascriptions of full moral personhood, independent legal personality, and responsibility might await the attainment of more sophisticated capacities on their part.
While artificial agents are not close to being regarded as moral persons, they are coherently becoming subjects of the intentional stance, and may be thought of as intentional agents. They take actions that they initiate, and their actions can be understood as originating in their own reasons. An artificial agent with the right sorts of capacities—most importantly, that of being an intentional system—would have a strong case for legal personality, a case made stronger by the richness of its relationships with us and by its behavioral patterns. There is no reason in principle that artificial agents could not attain such a status, given their current capacities and the arc of their continued development in the direction of increasing sophistication.
The discussion of contracting suggested the capabilities of artificial agents, doctrinal convenience and neatness, and the economic implications of various choices would all play a role in future determinations of the legal status of artificial agents. Such “system-level” concerns will continue to dominate for the near future. Attributes such as the practical ability to perform cognitive tasks, the ability to control money, and considerations such as cost-benefit analysis, will further influence the decision whether to accord legal personality to artificial agents. Such cost-benefit analysis will need to pay attention to whether agents’ principals will have enough economic incentive to use artificial agents in an increasing array of transactions that grant agents more financial and decision-making responsibility; whether principals will be able, both technically and economically, to grant agents adequate capital assets to be full economic and legal players in tomorrow’s marketplaces; whether the use of such artificial agents will require the establishment of special registers or the taking out of insurance to cover losses arising from malfunction in contractual settings; and even the peculiar and specialized kinds and costs of litigation that the use of artificial agents will involve. Factors such as efficient risk allocation, whether it is necessary to introduce personality in order to explain all relevant phenomena, and whether alternative explanations gel better with existing theory, will also carry considerable legal weight in deliberations over personhood. Most fundamentally, such an analysis will evaluate the transaction costs and economic benefits of introducing artificial agents as full legal players in a sphere not used to an explicit acknowledgment of their role.
Many purely technical issues remain unresolved as yet….Economic considerations might ultimately be the most important in any decision whether to accord artificial agents legal personality. Seldom is a law proposed today in an advanced democracy without some semblance of a utilitarian argument that its projected benefits would outweigh its estimated costs. As the range and nature of electronic commerce transactions handled by artificial agents grows and diversifies, these considerations will increasingly come into play. Our discussion of the contractual liability implications of the agency law approach to the contracting problem was a partial example of such an analysis.
Whatever the resolution of the arguments considered above, the issue of legal personality for artificial agents may not come ready-formed into the courts, or the courts may be unable or unwilling to do more than take a piecemeal approach, as in the case of extending constitutional protections to corporations. Rather, a system for granting legal personality may need to be set out by legislatures, perhaps through a registration system or “Turing register.”
A final note on these entities that challenge us by their advancing presence in our midst. Philosophical discussions on personal identity often take recourse in the pragmatic notion that ascriptions of personal identity to human beings are of most importance in a social structure where that concept plays the important legal role of determining responsibility and agency. We ascribe a physical and psychological coherence to a rapidly changing object, the human being, because otherwise very little social interaction would make sense. Similarly, it is unlikely that, in a future society where artificial agents wield significant amounts of executive power, anything would be gained by continuing to deny them legal personality. At best it would be a chauvinistic preservation of a special status for biological creatures like us. If we fall back repeatedly on making claims about human uniqueness and the singularity of the human mind and moral sense in a naturalistic world order, then we might justly be accused of being an “autistic” species, unable to comprehend the minds of other types of beings.
Note: Citations removed above.