Today, I’m continuing my wrap-up of the Concurring Opinions online symposium on A Legal Theory for Autonomous Artificial Agents. Below, I note the various responses to the book and point to my replies to them (Part I of this wrap-up was posted yesterday).
While almost all respondents engaged seriously with the book’s analysis, Ryan Calo wrote a disappointingly unengaged and, at times, patronizing post that ostensibly focused on the book’s methodological adoption of the intentional stance; it seemed to suggest that all we were doing was primitive anthropomorphizing. This was a fairly comprehensive misreading of the book’s argument, so I struggled to find anything to say in response. Calo also said he didn’t know whether an autonomous robot was like a hammer or not; this was a bizarre admission coming from someone concerned with the legal implications of robotics. I noted in one of my responses that figuring out the answer to that question can be aided by some intuition-tickling questions (like: Would NASA send a hammer to explore Mars? Can hammers drive?). Calo’s follow-up to my comment on his post ran roughly along the lines of “We don’t know what to do with artificial agents.” Well, yes, but I thought the point was to evaluate the attempt mounted in our book. I didn’t quite understand the point of Calo’s responses: that we don’t have a comprehensive theory for artificial agents, i.e., that the book’s title is misleading? I could be persuaded into entering a guilty plea on that count. But the point of the book was to indicate how existing doctrines could be suitably modified to start accommodating artificial agents; that is how a legal theory gets built up in a common law system.
Deborah DeMott (Duke), whose writings on the common law doctrine of agency were very useful to our analysis in the book, offered a very good assessment of our attempt to apply that doctrine to artificial agents. While DeMott disagreed with the exactness of the fit, she did not seem to think it was completely off-base (she certainly found our attempt “lively and ingenious”!); in my response I attempted to clarify and defend some of the reasons why we thought agency doctrine would work for artificial agents.
Ken Anderson (American University, Volokh Conspiracy) then discussed our treatment of intentionality and our deployment of the intentional stance, and queried whether we intended to use the intentional stance merely as a heuristic device or whether we were, in fact, making a broader claim about intentionality. In my response I noted that we wanted to do both: use it as a methodological stance and, in doing so, also point an investigative lens at our understanding of intentionality in general. Ken’s reaction was very positive; he thought the book had hit a “sweet spot” in offering serious doctrinal recommendations without being excessively pie-in-the-sky.
Ian Kerr (Ottawa), in his response, didn’t feel the book went far enough in suggesting a workable theory for artificial agents; if I understood Ian correctly, his central complaint was that the theory relied too heavily on older legal categories and doctrines, and that artificial agents might need an entirely new set of legal frameworks. But Ian also felt the slow and steady march of the common law was the best way to handle the challenges posed by artificial agents. So, interestingly enough, I agree with Ian, and I think he should be less dissatisfied than he is; our book is merely a first attempt to leverage the common law and take these steps toward a more comprehensive theory. In fact, given the rapid development of artificial agents, the law is largely going to be playing catch-up more than anything else.
Andrew Sutter then wrote a rich, critical response, one that took aim at the book’s rhetoric, its methodology, and its philosophical stance. I greatly enjoyed jousting with Andrew during this symposium; my responses to his post and to his subsequent comments, in which I attempted to clarify my philosophical stance and presuppositions, should show that.
Harry Surden (Colorado) wrote a very good post on two understandings of artificial intelligence’s objectives: replicating human cognitive capacities either by replicating human methods of achieving them, or via simulations that use other techniques. He considered how these distinctions could, or would, be crucial in the legal response to AI’s achievements. My response to Surden acknowledged the importance of these distinctions and noted that they should also make us think about how we often grant human cognition a special standing that arises largely from a lack of understanding of its principles. (This also provoked an interesting discussion with AJ Sutter.)
Andrea Matwyshyn wrote an excellent, seriously engaged post that took on, head-on, the fairly detailed and intricate arguments of Chapter 2 (where we address the so-called contracting problem by arguing that artificial agents be considered legal agents of their users). My response to Matwyshyn acknowledged the force of her various critical points while trying to expand on and elaborate the economic incentives behind our claim that artificial agents should be considered non-identical with their creators and/or deployers.
Once again, I am grateful to Frank Pasquale and the folks over at Concurring Opinions for staging the symposium and to all the participants for their responses.