I am posting today from the University of Miami Law School, which is staging the We Robot 2012 conference. I presented and discussed Patrick Hubbard's (University of South Carolina Law School) Regulation of Liability for Risks of Physical Injury From "Sophisticated Robots". Presenting someone else's work is a difficult challenge; as an academic, I have perfected the dark arts of bullshitting about my own work, but doing so about someone else's is far harder. I tried my best to present Patrick's work as comprehensively and fairly as possible, and to raise some questions that could spur discussion. (I will place the slides online very soon so you can see what I got up to.)
One of the points I raised in response to Patrick's claim that robots displaying 'emergent behavior' would occasion changes in tort doctrine was: how should we understand such emergence? Might we need to see whether robots display, for instance, stability, homeostasis, and evolvability, all often held to be features of living systems, which are paradigmatic examples of entities that display emergent behavior? Would robots be judged to display emergent behavior if their behavior was a function not just of their parts but also of the holistic and relational properties of the system? I also asked Patrick how the law should understand autonomy, given that some philosophical definitions of autonomy, like Kant's, would rule out some humans as autonomous. (Earlier in the morning, during the discussion of another talk, I suggested a related benchmark that could be useful: draw upon the suggestion made in Daniel Dennett's The Case for Rorts that robots could be viewed as intentional agents when we trust them as authorities in reporting on their inner states, that is, when their programmers and designers lose epistemic hegemony.) An interesting section of the discussion that followed my presentation centered on how useful analogizing robots to animals, children, or other kinds of entities was likely to be, and, if useful, which analogies would work best. (This kind of analogizing was done in Chapter 4 of A Legal Theory of Autonomous Artificial Agents.)
Earlier in the day, in a discussion of automated law enforcement, perhaps carried out by fleets of Robocops, I was glad to note that one of its positive outcomes was highlighted: such automation could bring about a reduction of bias in law enforcement. In my comment following the talk, I noted that a fleet of Robocops aware of the Fourth Amendment might be very welcome news for all those who were the targets of the almost seven hundred thousand stop-and-frisk searches in New York City.
As was noted in the morning's discussions, some common threads have already emerged: the suggestion that robots are 'just tools' (which I continue to find bizarre); the not-so-clear distinction between, and reliance on, true and apparent autonomy; and concerns about the need to avoid 'projecting' human will and agency onto robots and treating them like people (i.e., the so-called 'android fallacy'). I personally don't think warnings about the android fallacy are very useful: contemporary robots are not sophisticated enough to be persons, and there is no impossibility proof against their becoming sophisticated enough in the future.
Hopefully, I will have another–much more detailed–report from this very interesting and wonderfully well-organized conference tomorrow. (I really haven’t done justice to the rich discussions and presentations yet; for that I need a little more time.)