‘Eva’: Love Can Be Skin-Deep (Justifiably)

Kike Maíllo’s Eva makes for an interesting contribution to the ever-growing–in recent times–genre of robotics and artificial intelligence movies. That is because its central concern–the emulation of humanity by robots–while not particularly novel in itself, is portrayed in a familiar and yet distinctive form.

The most common objection to the personhood of the ‘artificially sentient,’ the ‘artificially intelligent,’ or ‘artificial agents’ and ‘artificial persons’ is couched in terms similar to the following: How could silicon and plastic ever feel, taste, hurt? There is no ‘I’ in these beings; no subject, no first-person, no self. If such beings ever provoked our affection and concern, those reactions would remain entirely ersatz. We know too much about their ‘insides,’ about how they work. Our ‘epistemic hegemony’ over these beings–their internals are transparent to us, their designers and makers–and the dissimilarity between their material substrate and ours render impossible their admission to our community of persons (those we consider worthy of our moral concern).

As Eva makes quite clear, such considerations ignore the reality of how our relationships with other human beings are actually constructed. We respond first to visible criteria, to observable behavior, to patterns of social interaction; we then seek internal correspondences–biological, physiological–for these, to confirm our initial reactions and establishment of social ties; we assume too, by way of abduction, an ‘inner world’ much like ours. But biological similarity is not determinative; if the visible behavior is not satisfactory, we do not hesitate to recommend banishment from the community of persons (by ostracism, institutionalization, imprisonment, etc.). And if visible behavior is indeed as rich and varied and interactive as we imagine it should be for the formation of viable and rewarding relationships, then our desire to admit the being in question to the community of persons worthy of our moral care will withstand putative evidence of considerable difference in constitution and in the nature of ‘inner worlds.’ If Martians consisting solely of green goo on the inside were to land on our planet and treat our children with kindness–i.e., display kind behavior–and provide the right kinds of reasons–whether verbally or by way of display on an LED screen–when we asked them why they did so, only an irredeemable chauvinist would deny them admission to the community of moral persons.

Eva claims that a robot’s ‘mother’ and ‘father’–her human designers–may love her in much the same way they would love their human children. For she may bring joy to their lives in much the same way human children would; she may smile, laugh giddily, play pranks, gaze at them in adoration, demand their protection and care, respond to their affectionate embraces, and so on. In doing so, she provokes older, evolutionarily established instincts of ours. These reactions may strike us as so compelling that even a look ‘under the hood’ may not deter their expression. We might come to learn that extending such feelings of acceptance and care to beings we had not previously considered so worthy can make new forms of life and relationships manifest. That doesn’t seem like such a bad bargain.

Schwitzgebel On Our Moral Duties To Artificial Intelligences

Eric Schwitzgebel asks an interesting question:

Suppose that we someday create artificial beings similar to us in their conscious experience, in their intelligence, in their range of emotions. What moral duties would we have to them?

Schwitzgebel’s stipulations are quite extensive, for these beings are “similar to us in their conscious experience, in their intelligence, in their range of emotions.” Thus, one straightforward response to the question might be, “The same duties that we take ourselves to have to other conscious, intelligent, sentient beings–for which our moral theories provide us adequate guidance.” But the question Schwitzgebel raises is challenging because our relationship to these artificial beings is of a special kind: we have created, initialized, programmed, parametrized, customized, and trained them. We are, somehow, responsible for them. (Schwitzgebel considers and rejects two other approaches to reckoning our duties toward AIs: first, that we are justified in simply disregarding any such obligations because of our species’ distance from them, and second, that the very fact of having granted these beings existence–which is presumably infinitely better than non-existence–absolves us of any further duties toward them.) Schwitzgebel then addresses the question of our duties to them–with some deft consideration of the complications introduced by this responsibility and by the autonomy of the artificial beings in question–and concludes:

If the AI’s desires are not appropriate — for example, if it desires things contrary to its flourishing — I’m probably at least partly to blame, and I am obliged to do some mitigation that I would probably not be obliged to do in the case of a fellow human being….On the general principle that one has a special obligation to clean up messes that one has had a hand in creating, I would argue that we have a special obligation to ensure the well-being of any artificial intelligences we create.

The analogy with children that Schwitzgebel correctly invokes can be made to do a little more work. Our children’s moral failures vex us more than those of others do; they prompt more extensive corrective interventions by us precisely because our assessments of their actions are just a little more severe. As such, when we encounter artificial beings of the kind noted above, we will find our reckonings of our duties toward them significantly impinged on by whether ‘our children’ have, for instance, disappointed or pleased us. Artificial intelligences will not have been created without some conception of their intended ends; their failures or successes in attaining those ends will influence our consideration of our appropriate duties to them, and will make more difficult the recognition and determination of the boundaries we should not transgress in our ‘mitigation’ of their actions and in our ensuring of their ‘well-being.’ After all, a parent is more tempted to intervene extensively in a child’s life when the child is perceived to deviate from a path the parent believes it should take in order to achieve an objective the parent deems desirable.

By requiring respect and consideration for their autonomous moral natures, children exercise our moral senses acutely. We should not be surprised to be similarly examined by the artificial intelligences we create and set loose upon the world.

The Personhood Beyond the Human Conference

This weekend (Dec 7-8) I am attending the Personhood Beyond the Human conference at Yale University. Here is a description of the conference’s agenda:

The event will focus on personhood for nonhuman animals, including great apes, cetaceans, and elephants, and will explore the evolving notions of personhood by analyzing them through the frameworks of neuroscience, behavioral science, philosophy, ethics, and law….Special consideration will be given to discussions of nonhuman animal personhood, both in terms of understanding the history, science, and philosophy behind personhood, and ways to protect animal interests through the establishment of legal precedents and by increasing public awareness.

I will be speaking on Sunday afternoon. Here is an abstract for my talk:

Personhood for Artificial Agents: What it teaches us about animals’ rights

For the past few years, I have presented arguments based on my book, A Legal Theory for Autonomous Artificial Agents, which suggest that legal, and perhaps even moral and metaphysical, personhood for artificial agents is not a conceptual impossibility. In some cases, a form of dependent legal personality might even be possible for such entities within today’s legal frameworks. As I have presented these arguments, I have encountered many objections to them. In this talk, I will examine some of these objections, for they have taught me a great deal about how personhood for artificial agents is relevant to the question of human beings’ relationships with animals. I will conclude with the claims that a) advocating personhood for artificial agents should not be viewed as an anti-humanistic perspective, and b) rather, it should allow us to assess the question of animals’ rights more sympathetically.

Steven Wise, the most prominent animal rights lawyer in the US, will be speaking today and sharing some rather interesting news about some very important lawsuits filed by his organization, the Nonhuman Rights Project, on behalf of great apes, arguing for their legal personhood. (Some information can be found here, and there is heaps more at the website, obviously.)

If you are in the area, do stop on by.