Artificial Agents and the Law: Some Preliminary Considerations

As I noted here last week, the Concurring Opinions blog will be hosting an online symposium on my book A Legal Theory for Autonomous Artificial Agents. There has already been some discussion over at the blog; I’m hoping that once the book has been read and its arguments engaged with, we can have a more substantive conversation.

Last week, I spoke on the book at Bard College, to a mixed audience of philosophy, computer science, and cognitive science faculty and students. The question-and-answer session was quite lively, and our conversations continued over dinner later. Some of the questions directed at me are by now quite familiar: Why make any change in the legal status of artificial agents? That is, why elevate them from non-entities in the ontology of the law to the status of legal agents, or beyond? When it comes to assigning responsibility, why not simply make the designers or deployers of artificial agents responsible for all their acts? How can an artificial agent, which lacks the supposedly distinctively human characteristics of <insert consciousness, free will, rationality, autonomy, subjectivity, phenomenal experience here>, ever be considered an “agent” or a “person”? Aren’t you abusing language when you attribute knowledge to a program or a robot? How can those kinds of things ever “know” anything? Who is doing the knowing?

I’ll be addressing questions like these (I’m reasonably sure of that) over at the online symposium, which starts tomorrow. For the time being, I’d like to make a couple of general remarks. 

The modest changes in legal doctrine proposed in our book are largely driven by two considerations. 

First, existing legal doctrine in a couple of domains, most especially contracting, which opens our discussion and serves as the foundation for the eventual development of the book’s thesis, is placed under considerable strain by its current treatment of highly sophisticated artificial agents. We could maintain current contracting doctrine as is, but we would run the risk of making that doctrine increasingly implausible. This might be seen as a reasonable price to pay for preserving our intuitions about the kinds of beings we take artificial agents to be. But I’d suggest that retaining those intuitions becomes increasingly untenable once we notice the disparity among the entities placed in the same legal category. (Are autonomous robots really just the same as tools like hammers?)

Second, a change in legal doctrine can sometimes bring about better outcomes for us. One of the crucial arguments in Chapter 2 (one I really hope readers engage with) is an economic assessment of electronic contracting by artificial agents considered as legal agents. I share the skepticism of those in the legal academy who hold that economic analysis of law should not drive all doctrinal change, but in this case, I’d suggest the resulting risk allocation does work out better. An even stronger argument can be made when it comes to privacy: in Chapter 3, the dismissal of the Google defense (“if humans don’t read your email, your privacy is not violated”) is enabled precisely by treating artificial agents as legal agents.

Much more on this in the next few days.
