Schwitzgebel On Our Moral Duties To Artificial Intelligences

Eric Schwitzgebel asks an interesting question:

Suppose that we someday create artificial beings similar to us in their conscious experience, in their intelligence, in their range of emotions. What moral duties would we have to them?

Schwitzgebel’s stipulations are quite extensive, for these beings are “similar to us in their conscious experience, in their intelligence, in their range of emotions.” Thus, one straightforward response to the question might be, “The same duties that we take ourselves to have to other conscious, intelligent, sentient beings–for which our moral theories provide us adequate guidance.” But the question that Schwitzgebel raises is challenging because our relationship to these artificial beings is of a special kind: we have created, initialized, programmed, parametrized, customized and trained them. We are, somehow, responsible for them. (Schwitzgebel considers and rejects two other approaches to reckoning our duties towards AIs: first, that we are justified in simply disregarding any such obligations because of our species’ distance from them, and second, that the very fact of having granted these beings existence–which is presumably infinitely better than non-existence–absolves us of any further duties toward them.) Schwitzgebel addresses the question of our duties to these beings with some deft consideration of the complications introduced by our responsibility for them and by their autonomy, and goes on to conclude:

If the AI’s desires are not appropriate — for example, if it desires things contrary to its flourishing — I’m probably at least partly to blame, and I am obliged to do some mitigation that I would probably not be obliged to do in the case of a fellow human being….On the general principle that one has a special obligation to clean up messes that one has had a hand in creating, I would argue that we have a special obligation to ensure the well-being of any artificial intelligences we create.

The analogy with children that Schwitzgebel correctly invokes can be made to do a little more work. Our children’s moral failures vex us more than those of others do; they prompt more extensive corrective interventions by us precisely because our assessments of their actions are just a little more severe. As such, when we encounter artificial beings of the kind noted above, we will find our reckonings of our duties toward them significantly impinged on by whether ‘our children’ have, for instance, disappointed or pleased us. Artificial intelligences will not have been created without some conception of their intended ends; their failures or successes in attaining those ends will influence a consideration of our appropriate duties to them and will make more difficult a recognition and determination of the boundaries we should not transgress in our ‘mitigation’ of their actions and in our ensuring their ‘well-being.’ After all, parents are more tempted to intervene extensively in their children’s lives when they perceive a deviation from a path they believe their children should take in order to achieve an objective the parents deem desirable.

By requiring respect and consideration for their autonomous moral natures, children exercise our moral senses acutely. We should not be surprised to be similarly examined by the artificial intelligences we create and set loose upon the world.
