Fredrik DeBoer has written an interesting post on artificial intelligence, one that is pessimistic about the field’s prospects and skeptical of some of the claims made for its success. I disagree with some of its implicit premises and claims.
AI’s goals can be understood as two-fold, depending on your understanding of the field. First, to make machines that can perform tasks which, if performed by humans, would be said to require “intelligence”. Second, to understand human cognitive processes and replicate them in a suitable architecture. The first enterprise is engineering; the second, cognitive science (Vico-style: “the true and the made are convertible”).
The first cares little about the particular implementation mechanism or the theoretical underpinning of task performance; success in task execution is all. If you can make a robot capable of brewing a cup of tea in kitchens strange and familiar, it does not matter what its composition, computational architecture, or control logic are; all that matters is the brewed cup of tea. The second cares little about the implementation medium (it could be silicon and plastic) but cares a great deal about the mechanism employed; it must faithfully instantiate and realize an abstraction of a distinctly human cognitive process. Aeronautical engineers replicate the flight of feathered birds using aluminum and jet engines; they physically instantiate abstract principles of flight. The cognitive science version of AI seeks to perform a similar feat for human cognition; AI should validate our best science of mind.
I take DeBoer’s critique of so-called “statistical” or “big-data” AI to be: you’re only working toward the first goal, not toward the second. That is a fair observation, but it does not establish the further conclusion that cognitive science is the “right” or the “only” way to realize artificial intelligence. Nor does it establish that engineering AI is a useless distraction in the task of understanding human cognition, or what artificial intelligence, or even “real intelligence”, might be. Cognitive science AI is neither the only worthwhile paradigm for AI nor the only intellectually useful one.
To see this, consider what the successes, even partial ones, of engineering AI tell us: intelligence is not one thing but many; intelligence is achievable both by mimicking human cognitive processes and by other means; in some cases, it is more easily achieved by the latter. The successes of engineering AI should tell us that the very phenomenon we take ourselves to be studying in cognitive science, intelligence, isn’t well understood; they tell us the very thing being studied, “mind”, might not be a thing to begin with. (DeBoer rightly disdains the “mysterianism” in claims like “intelligence is an emergent property” but he seems comfortable with the chauvinism of “intelligence achievable by non-human means isn’t intelligence.” A simulation of intelligence isn’t “only” a simulation; it forces us to reckon with the possibility that “real intelligence” might be “only a simulation.”)
What we call intelligence is a performative capacity; creatures that possess intelligence can do things in this world. The way humans accomplish those tasks is of interest, but so are other ways of doing so; they show us that many relationships to our environment can be described as “cognitive” or “mindful”. If giant lookup machines and human beings can both play chess and write poems, then that tells us something interesting about the nature of those capacities. If language comprehension can be achieved by statistical methods, then we should regard our own linguistic capacities in a different light; a speaking and writing wind-up toy should make us revisit the phenomenon of language anew: just what is this destination, reachable by such radically dissimilar routes as “human cognition” and “machine learning”?
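(To make that “wind-up toy” concrete, here is a minimal sketch, in Python, of the sort of purely statistical machinery at issue: a toy bigram generator that emits language-like strings while comprehending nothing. The tiny corpus and the function names are hypothetical, chosen only for illustration; serious statistical models are vastly larger, but they differ from this sketch in scale, not in kind.)

    import random
    from collections import defaultdict

    def train_bigrams(text):
        """Record, for each word, the words observed to follow it."""
        words = text.split()
        table = defaultdict(list)
        for current, nxt in zip(words, words[1:]):
            table[current].append(nxt)
        return table

    def generate(table, seed, length=12):
        """Emit words by repeatedly sampling an observed successor;
        no grammar, no meaning, only transition statistics."""
        out = [seed]
        for _ in range(length - 1):
            successors = table.get(out[-1])
            if not successors:
                break
            out.append(random.choice(successors))
        return " ".join(out)

    # A hypothetical scrap of text standing in for a massive corpus.
    corpus = "the mind is a process the process is not a place the mind can write"
    model = train_bigrams(corpus)
    print(generate(model, "the"))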
DeBoer rightly points out the difficulties both AI methodologies face; I would go further and say that, given our level of (in)comprehension, we do not even possess much of a principled basis for so roundly dismissing the claims made by statistical or big-data AI. It might turn out that the presuppositions of cognitive science are altered by the successes of engineering AI, thus changing its methodologies and indicators of success; cognitive science might be looking in the wrong places for the wrong things.
I don’t have any sort of problem with “expert systems” work, Samir, which is your first category (which I think later on you call ‘engineering AI’). That sort of work is valuable and interesting, although the claims for it have less tendency to be true than claims about any other field of tech. By several times over.
But your attempt at analogizing what expert systems are doing is also missing the point. Engineers building airplanes are not replicating bird flight. They are flying humans around, yes, but they are in no way replicating bird flight. A replication of bird flight would be an amazingly useful technology; think about the versatility and maneuverability of a bird, and how useful it would be to design tech that could fly in this fashion. (Very different from what we use planes for, of course).
We can’t do it, of course (not nearly). We’re not that clever yet.
Calling expert systems “AI” is like calling planes “artificial birds”. A plane is fantastic. It can do a lot of things; it is in no way a bird. Similarly, expert systems are fantastic. They can do a lot of things (a very few of them even well); they are in no way “intelligence”. That doesn’t matter to the success of the expert systems at their tasks. But it does matter to those who want to make them better, because there’s no doubt that intelligence could do the job a heck of a lot better.
Craig: I don’t take aircraft to be “replicating” bird flight. I take them to be instantiating flight in a different way. Because we aren’t committed to merely replicating bird flight, we can do much more than birds can: we can fly faster, longer, higher, and so on. Bird flight can do things that jets can’t, of course, and replicating that would be useful. But it’s not what engineers should limit themselves to. Planes aren’t “artificial birds”; they are systems that fly, i.e., they do the same things birds do, by relying on some of the same principles but not in exactly the way birds do. This limits them but also lets them go beyond.
Also of interest: http://www.concurringopinions.com/archives/2012/02/autonomous-agents-and-extension-of-law-policymakers-should-be-aware-of-technical-nuances.html
– What we call intelligence is a performative capacity; creatures that possess intelligence can do things in this world; the way humans accomplish those tasks is of interest, but so are other ways of doing so.
Do you mean by this that AI could lead to new understandings of intelligence and performative capability? If so, this seems to me a more compelling outcome than attempting to replicate the abilities of an animal that congratulates itself with an illusion of rationality while staggering through its days in a haze of cognitive biases.
Jeff:
Thanks for the comment. Yes, exactly. To me, the replication of our cognitive capacities, if done in a way that doesn’t seem ‘human’, is more interesting than perfect reproduction of all things human. The latter project seems more interestingly accomplished by having sex. We are a species all too easily convinced of its superiority; engineering AI helps debunk some of those self-serving myths.
Reblogged this on Recent Items and commented:
Samir Chopra’s succinct response to Fredrik DeBoer’s rangy post about AI. Taken with their comments, these posts have an engineering focus that helps to adumbrate many issues of cognition that will interest anyone working on symbolic systems and language (less so about consciousness though).
On a related note: https://samirchopra.com/2013/03/07/the-mind-is-not-a-place-or-an-object/
I’m not so sure that DeBoer was arguing that “engineering AI is a useless distraction in the task of understanding human cognition or what artificial intelligence or even ‘real intelligence’ might be”. I think what he was arguing was that this kind of industrial AI is overemphasized and not obviously relevant to the cognitive science enterprise. If I had a trillion dollars and needed to come up with a prioritized list of research projects to fund in order to try to crack the cognitive science problem, it’s not obvious that machine learning on massive data sets even makes the top 5. It’s not worthless, but it has disproportionate mindshare because it happens to be lucrative (since it offers tools for solving difficult computational problems).
Some brief responses:
1. Engineering AI is relevant: it shows the phenomena cogsci is studying are achievable in a variety of media, by a variety of techniques.
2. The two approaches are not competitive but complementary; they can proceed independently.
3. Engineering AI is solving conceptual problems, not merely computational ones.