Fredrik DeBoer has written an interesting post on artificial intelligence, one pessimistic about the field’s prospects and skeptical of some of the claims made for its successes. I disagree with some of its implicit premises and claims.
AI’s goals, depending on how you understand the field, are two-fold. First, to make machines that can perform tasks which, if performed by humans, would be said to require “intelligence.” Second, to understand human cognitive processes and replicate them in a suitable architecture. The first enterprise is engineering; the second, cognitive science (Vico-style: “the true and the made are convertible”).
The first cares little about the particular implementation mechanism or the theoretical underpinning of task performance; success in task execution is all. If you can make a robot capable of brewing a cup of tea in kitchens strange and familiar, it does not matter what its composition, computational architecture, or control logic are; all that matters is the brewed cup of tea. The second cares little about the implementation medium – it could be silicon and plastic – but it does care about the mechanism employed; that mechanism must faithfully instantiate and realize an abstraction of a distinctly human cognitive process. Aeronautical engineers replicate the flight of feathered birds using aluminum and jet engines; they physically instantiate abstract principles of flight. The cognitive science version of AI seeks to perform a similar feat for human cognition; AI should validate our best science of mind.
I take DeBoer’s critique of so-called “statistical” or “big-data” AI to be: you’re only working toward the first goal, not the second. That is a fair observation, but it does not establish the further conclusion that cognitive science is the “right” or the “only” way to realize artificial intelligence. Nor does it establish that engineering AI is a useless distraction from the task of understanding human cognition, or from understanding what artificial intelligence, or even “real intelligence,” might be. Cognitive science AI is not the only worthwhile paradigm for AI, nor the only intellectually useful one.
To see this, consider what the successes–even partial ones–of engineering AI tell us: intelligence is not one thing but many; intelligence is achievable both by mimicking human cognitive processes and by not doing so; in some cases, it is more easily achieved by the latter. The successes of engineering AI should tell us that the very phenomenon–intelligence–we take ourselves to be studying in cognitive science isn’t well understood; they tell us the very thing being studied–“mind”–might not be a thing to begin with. (DeBoer rightly disdains the “mysterianism” in claims like “intelligence is an emergent property,” but he seems comfortable with the chauvinism of “intelligence achievable by non-human means isn’t intelligence.” A simulation of intelligence isn’t “only” a simulation; it forces us to reckon with the possibility that “real intelligence” might be “only a simulation.”)
What we call intelligence is a performative capacity; creatures that possess intelligence can do things in this world. The way humans accomplish those tasks is of interest, but so are other ways of doing so: they show us that many relationships to our environment can be described as “cognitive” or “mindful.” If giant-lookup machines and human beings can both play chess and write poems, then that tells us something interesting about the nature of those capacities. If language comprehension can be achieved by statistical methods, then we should regard our own linguistic capacities in a different light; a speaking and writing wind-up toy should make us revisit the phenomenon of language anew: just what is this destination, reachable by such radically dissimilar routes–“human cognition” and “machine learning”?
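To make vivid just how un-humanlike the “statistical” route can be, consider a deliberately toy sketch (a bigram model; nothing like the scale of real systems, and the corpus and names below are purely illustrative): it continues text from nothing but co-occurrence counts, with no grammar, no semantics, and no model of mind.

```python
# A toy "language by statistics" sketch: a bigram model that continues a
# prompt using only word co-occurrence counts. The corpus and names are
# illustrative; real statistical systems differ vastly in scale.
import random
from collections import defaultdict

def train_bigrams(corpus: str) -> dict:
    """Record, for each word in the corpus, the words observed to follow it."""
    words = corpus.split()
    counts = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        counts[prev].append(nxt)
    return counts

def continue_text(counts: dict, start: str, length: int = 10) -> str:
    """Extend `start` by repeatedly sampling one of the observed next words."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog slept on the rug"
model = train_bigrams(corpus)
print(continue_text(model, "the"))  # prints a short, statistically plausible continuation
```

Whatever such a machine is doing, it is not doing it our way; the question is what to make of the fact that vastly scaled-up descendants of this idea perform tasks we unhesitatingly describe as linguistic.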
DeBoer rightly points out the difficulties both AI methodologies face; I would go further and say that, given our level of (in)comprehension, we do not even possess much of a principled basis for so roundly dismissing the claims made by statistical or big-data AI. The presuppositions of cognitive science might yet be altered by the successes of engineering AI, changing its methodologies and its indicators of success; cognitive science might be looking in the wrong places for the wrong things.