Nicholas Carr offers us some interesting and thoughtful worries about automation in The Atlantic (‘All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines,’ 23 October 2013). These worries center largely on de-skilling: as automation grows ever more sophisticated, and evidence suggests it is pushing into domains once thought inaccessible to it, humans will lose the precious know-how associated with those domains, so that when the technology fails, as it inevitably will, we run the risk of catastrophe. Carr’s examples are alarming; he highlights the ‘substitution fallacy’ at work in standard defenses of automation; most usefully, he points out that as automation proceeds, all too many humans will become merely its monitors; and he concludes:
Whether it’s a pilot on a flight deck, a doctor in an examination room, or an Inuit hunter on an ice floe, knowing demands doing. One of the most remarkable things about us is also one of the easiest to overlook: each time we collide with the real, we deepen our understanding of the world and become more fully a part of it. While we’re wrestling with a difficult task, we may be motivated by an anticipation of the ends of our labor, but it’s the work itself—the means—that makes us who we are. Computer automation severs the ends from the means. It makes getting what we want easier, but it distances us from the work of knowing. As we transform ourselves into creatures of the screen, we face an existential question: Does our essence still lie in what we know, or are we now content to be defined by what we want? If we don’t grapple with that question ourselves, our gadgets will be happy to answer it for us.
This is a good question to ask. I want to complicate the picture somewhat by raising some questions of my own.
1. Does Carr want to suggest we roll back the advancing tide of automation? Should we demarcate some areas of human expertise as ‘too human’ or ‘too important’ to be automated? Should we discourage research on automated driving, navigation systems, spell checking and the like? Should we make a list of ‘core human cognitive capacities’ and then discourage research on automating these? How would we ‘discourage’ such research? By law, the market, or social norming? What would such judgments be based on? Do we have a set of values that would animate them and that we could rely on?
2. There is a flip-side to the de-skilling blamed on automation: a tremendous increase in human knowledge and technical capacities has been required to create and implement the systems that so alarm us. Where does this knowledge and its associated power reside? As a society, we are witnessing the creation of a new elite of knowledgeable producers, those who make the gadgets that Carr worries are making us dumb. Are we, along with economic inequality, also creating cognitive inequality? Can the technical knowledge gained by work on automation help us alleviate the problems associated with de-skilling?
This latter consideration suggests that perhaps the real problem is not automation per se but automation in a radically inegalitarian and economically skewed society like ours, one whose economic and moral priorities neither permit an adequate amelioration of the effects of automation nor afford a life rich enough to allow those de-skilled by automation in some domains to develop and apply their talents elsewhere.