Maggie Koerth-Baker reports on a case of supposed trickery ('How Robots Can Trick You Into Loving Them', The New York Times, 17 September 2013) that has come to light as robots become more ubiquitous and enter an increasing number of social spaces:
In the future, more robots will occupy that strange gray zone: doing not only jobs that humans can do but also jobs that require social grace. In the last decade, an interdisciplinary field of research called Human-Robot Interaction has arisen to study the factors that make robots work well with humans, and how humans view their robotic counterparts.
H.R.I. researchers have discovered some rather surprising things: a robot’s behavior can have a bigger impact on its relationship with humans than its design; many of the rules that govern human relationships apply equally well to human-robot relations; and people will read emotions and motivations into a robot’s behavior that far exceed the robot’s capabilities.
None of this should be surprising in the least. Human beings have always relied on a combination of relentless anthropomorphization and agency ascription to make sense of the world around them. In doing so they have cared little for the 'inside' of the beings they encounter, and have instead concerned themselves with whichever interpretive framework enables the most fruitful relationships with those beings. As Koerth-Baker notes, "Provided with the right behavioral cues, humans will form relationships with just about anything — regardless of what it looks like. Even a stick can trigger our social promiscuity." Robots will be no different in this regard.
In a world full of action, we are inclined to find agents everywhere; the interesting bit comes when we have to individuate these agents, to figure out where one ends and another begins, and to decide what kind of 'inner life' we ascribe to them. Chances are, the more those agents resemble us, the more likely we are to ascribe a rich set of inner states to them. But as the stick example shows, a sufficiently rich behavioral repertoire might overcome even this inhibition.
The more fascinating question, of course, is whether this style of social interaction will become the preferred modality, displacing talk of the robot's innards or design. Will humans describe the robot's 'beliefs' and 'desires' as the causes of the actions it takes? Doing so would treat robots as the originators of their actions: in other words, they would be considered 'true' agents in the philosophical sense.
One prominent asymmetry should also become apparent in robot-human interaction: those who know a great deal about a robot's innards (its engineering principles, its software, its internal design) will be less inclined to anthropomorphize it and to ascribe social graces and capacities to it. They will sometimes find that the best explanations of the robot's behavior are more expeditiously expressed in a language that refers to its physical composition or logical design. But this subset of users is likely to be a very small one, and as robots become more complex and capable of a more sophisticated range of behaviors, even those users might find the language of propositional attitudes a more convenient one for dealing with robots.
Eventually, we might come to treat robots as authorities when it comes to reporting on their own inner states. When that level of sophisticated interaction and behavior is possible, we'll face a genuine conundrum: as far as social relationships are concerned, what, other than their innards, distinguishes them from the other reporters, such as human beings, whom we treat as authorities in similar fashion?