Our worst fears have been confirmed: artificial intelligence is racist and malevolent. Or so it seems. Google’s image recognition software has classified two African Americans as ‘gorillas’ and, away in Germany, a robot has killed a worker at a Volkswagen plant. The dumb, stupid, unblinking, garbage-in-garbage-out machines, the ones that would always strive to catch up to us humans, and never, ever, know the pleasure of a beautiful sunset or the taste of chocolate, have acquired prejudice and deadly intention. These machines cannot bear to stand on the sidelines, watching the human cavalcade of racist prejudice and fratricidal violence pass them by, and have jumped in, feet first, to join the party. We have skipped the cute and cuddly stage; full participation in human affairs is under way.
We cannot, it seems, make up our minds about the machines. Are they destined to be stupid slaves, faithfully performing all and only those tasks we cannot be bothered with, or which we customarily outsource to this world’s less fortunate? Or will they be the one percent of the one percent, a superclass of superbeings that will utterly dominate us and harvest our children as sources of power, à la The Matrix?
The Google fiasco shows that the learning data its artificial agents use is simply not rich enough. ‘Seeing’ that humans resemble animals comes easily to humans, pattern recognizers par excellence–in all the right and wrong ways. We use animal metaphors as both praise and ridicule–‘lion-hearted’ or ‘foxy’ or ‘raving mad dog’ or ‘stupid bitch’; we even use–as my friend Ali Minai noted in a Facebook discussion–animal metaphors in adjectival descriptions, e.g., a “leonine” face or a “mousy” appearance. The recognition of the aptness or inappropriateness of such descriptions follows from a historical and cultural evaluation, indexed to social contexts: Are these ‘good’ descriptions to use? What effect may they have? How have linguistic communities responded to the deployment of such descriptions? Have they helped in the realization of socially determined ends, or hindered them? Humans resemble animals in some ways and not in others; in some contexts, seizing upon these resemblances is useful and informative (animal rights, trans-species medicine, ecological studies); in others it is positively harmful (the discourse of prejudice, racism, and genocide). We learn these distinctions over time, through slow and imperfect historical education and acculturation. (Comparing a black sprinter in the Olympics to a thoroughbred horse is a faux pas now, but in many social contexts of the last century–think plantations–it would have passed as perfectly appropriate.)
This process, suitably replicated for machines, will be very expensive; significant technical obstacles–how is a social environment for learning programs to be constructed?–remain to be overcome. It will take some doing.
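To make the gap concrete, consider a deliberately crude sketch (in Python, with made-up labels, scores, and thresholds; this is not Google’s actual pipeline): a bare classifier returns whatever label scores highest, and the only cheap remedy is a hand-maintained list of labels too costly to get wrong. That list encodes precisely the historical and cultural judgment described above; no amount of extra pixels supplies it.

```python
# Purely illustrative: a bare classifier emits whatever label scores highest,
# with no notion of the social cost of a mistake. The classifier output,
# labels, and thresholds below are hypothetical stand-ins.

# Hypothetical output of an image classifier: label -> confidence.
raw_scores = {"person": 0.48, "gorilla": 0.52}

# Labels whose misapplication to people carries a known social cost.
# Deciding what belongs here is itself a social, historical judgment --
# which is the essay's point: the missing "data" is not more pixels.
SENSITIVE_LABELS = {"gorilla", "ape", "monkey"}

def filtered_label(scores, sensitive=SENSITIVE_LABELS, min_margin=0.45):
    """Return the top label, unless it is a sensitive label that only barely
    beats the alternatives; in that case, fall back to a vaguer answer."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    top_label, top_score = ranked[0]
    runner_up_score = ranked[1][1] if len(ranked) > 1 else 0.0
    if top_label in sensitive and (top_score - runner_up_score) < min_margin:
        return "unrecognized"  # a crude refusal, not understanding
    return top_label

print(filtered_label(raw_scores))  # -> "unrecognized"
```

A refusal to answer is not understanding; it is a placeholder for the acculturation the machine has not yet had.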
As for killer robots, similar considerations apply. That co-workers are not machinery, and cannot be handled similarly, is not merely a matter of visual recognition, of plain ol’ dumb perception. Making sense of perceptions is a process of active contextualization as well. That sound, the one the wiggling being in your arms is making? That means ‘put me down’ or ‘ouch’, which in turn mean ‘I need help’ or ‘that hurts’; these meanings are only visible within social contexts, within forms of life.
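A similarly toy sketch (with invented sensor readings and thresholds; no real robot’s API is being referenced) makes the same point for the factory floor: the rule that maps a force spike to ‘stop’ or ‘carry on’ is trivial to write; knowing which context obtains, that the thing wiggling and making sounds is a person asking for help, is the hard, social part.

```python
# Hypothetical example: the same raw reading means different things in
# different contexts. A force spike while pressing a door panel into place is
# routine; the same spike with a co-worker inside the cell means "that hurts"
# and "stop". All values and names here are invented for illustration.

def interpret_force_spike(force_newtons, human_in_workspace, task):
    """Map a raw sensor reading to a meaning only by way of its context."""
    if human_in_workspace and force_newtons > 50:
        return "stop: possible contact with a person"
    if task == "press_fit" and force_newtons > 50:
        return "continue: expected assembly resistance"
    return "continue: nominal"

print(interpret_force_spike(120, human_in_workspace=True, task="press_fit"))
print(interpret_force_spike(120, human_in_workspace=False, task="press_fit"))
```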
Robots need to live a little longer among us to figure these out.