RIP Hilary Putnam 1926–2016

During my graduate studies in philosophy, it came to seem to me that William James’ classic distinction between tough- and tender-minded philosophers had been reworked just a bit. The tough philosophers were still empiricists and positivists, but they had begun to show some of the same inclinations as the supposedly tender-minded in James’ distinction: they wanted grand, over-arching systems, towering receptacles into which all of reality could be neatly poured; they were enamored of reductionism; they had acquired new idols, like science (and metaphysical realism), and new tools, those of mathematics and logic.

Hilary Putnam was claimed as a card-carrying member of this tough-minded group: he was a logician, mathematician, computer scientist, and analytic philosopher of acute distinction. He wrote non-trivial papers on mathematics and computer science (the MRDP theorem, the Davis–Putnam algorithm), philosophy of language (the causal theory of reference), and philosophy of mind (functionalism, the multiple realizability of the mental)–the grand trifecta of the no-bullshit, hard-headed analytic philosopher, the one capable of handing your woolly, unclear, tender continental philosophy ass to you on a platter.

I read many of Putnam’s classic works as a graduate student; he was always a clear writer, even as he navigated the thickets of some uncompromisingly dense material. Along with Willard Van Orman Quine, he was clearly the idol of many analytic philosophers-in-training; we grew up on a diet of Quine-Putnam-Kripke. You thought of analytic philosophy, and you thought of Putnam. Whether it was this earth or its twin, there he was.

I was already quite uncomfortable with analytic philosophy’s preoccupations, methods, and central claims as I finished my PhD; what I did not know then was that the man I thought of as its standard-bearer had begun to step down from that position before I even started graduate school. When I encountered him again, after I had finished my dissertation and my post-doctoral fellowship, I found a new Putnam.

This Putnam was a philosopher who had moved away from metaphysical realism and scientism, who had found something to admire in the American pragmatists, who had become enamored of the Wittgenstein of the Philosophical Investigations. He now dismissed the fact-value dichotomy and, indeed, wrote on subjects that ‘tough-minded analytic philosophers’ from his former camp would not be caught dead writing about: political theory and religion in particular. He even fraternized with the enemy, drawing inspiration, for instance, from Jürgen Habermas.

My own distaste for scientism and my interest in pragmatism (of both the paleo and neo varieties) and the late Wittgenstein meant that the new Putnam was an intellectual delight for me. (His 1964 paper ‘Robots: Machines or Artificially Created Life?’ significantly influenced my thoughts as I wrote my book on a legal theory for autonomous artificial agents.) I read his later works with great relish and marveled at the tone of his writing: he was ecumenical, gentle, tolerant, and, crucially, wise. He had lived and learned; he had traversed great spaces of learning, finding that many philosophical perspectives abounded, and he had, as a good thinker must, struggled to integrate them into his intellectual framework. He seemed to have realized that the most acute philosophical ideal of all was a constant taking on and trying out of ideas, seeing if they worked in consonance with your life projects and those of the ones you cared for (a group that can be as broad as the human community). I was reading a philosopher who seemed to be doing philosophy in the way I understood it, as a way of making sense of this world without dogma.

I never had any personal contact with him, so I cannot share stories or anecdotes, no tales of direct inspiration or encouragement. But I can try to gesture at the pleasure his writing provided, and at his always visible willingness to work through the challenges of this world, this endlessly complicated existence. Through his life and work he provided an ideal of the engaged philosopher.

RIP Hilary Putnam.

Artificial Intelligence And ‘Real Understanding’

Speculation about–and vigorous contestation of–the possibility of ever realizing artificial intelligence has been stuck in a dreary groove ever since the Dartmouth conference: wildly optimistic predictions about the temporal proximity of the day when machines (and the programs they run) will attain human levels of intelligence; followed by skeptical critique and taunting reminders of landmarks and benchmarks not attained; triumphant announcements of once-Holy Grails attained (playing chess, winning Jeopardy, driving a car, take your pick); rejoinders that these were not especially holy or unattainable to begin with; accusations of goal-post moving; reminders, again, of ‘quintessentially human’ benchmarks not attained; rejoinders of ‘just you wait’; and so on. Ad nauseam doesn’t begin to describe it.

Gary Marcus’ skepticism about artificial intelligence is thus following a well-worn path. And like many of those who have traveled this path, he relies on easy puncturing of over-inflated pretension and on pointing to the real ‘challenging problems like understanding natural language.’ And this latest ability–[to] “read a newspaper as well as a human can”–is, unsurprisingly, what ‘true AI’ should aspire to. There is always some ability that characterizes real, I mean really real, AI. All else is but ersatz, mere aspiration, a missing of the mark, an approximation. This ability is essential to our reckoning of a being as intelligent.

Because this debate’s contours are so well-worn, my response is also drearily familiar. If those who design and build artificial intelligence are to be accused of over-simplification, then those critiquing AI rely too much on mysterianism. On closer inspection, the statement “the deep-learning system doesn’t really understand anything” treats “understanding” as some kind of mysterious monolithic composite, whereas, as Marcus has himself indirectly noted elsewhere, it consists of a variety of visibly manifested linguistic competencies. (Their discreteness, obviously, makes them more amenable to piecemeal solution; the technical challenge of integrating them into the same system still remains.)

‘Understanding’ often does a great deal of work for those who would dismiss the natural language competencies of artificial intelligence: “the program summarized the story for me, but it didn’t really understand anything.” It also figures in the running together of two views of AI–wholesale emulation of the human cognitive architecture, including its structure and implementation, and mere successful emulation of task competence. As an interlocutor of mine once noted during a symposium on my book A Legal Theory for Autonomous Artificial Agents:

Statistical and probability-based machine-learning models (often combined with logical knowledge-based rules about the world) often produce high-quality and effective results (not quite up to par with nuanced human translators at this point), without any assertion that the computers are engaging in profound understanding of the underlying “meaning” of the translated sentences or employing processes whose analytical abilities approach human-level cognition.

My response then was:

What is this ‘profound understanding’ that we speak of? It turns out that when we want to cash out the meaning of this term, we seek refuge again in complex, inter-related displays of understanding: he showed me he understood the book by writing about it; he showed me he understood the language because he did what I asked him to do; he understands the language because he affirms certain implications and rejects others….

[D]o I simply show, by usage and deployment of a language within a particular language-using community, that I understand the meanings of the sentences of that language?… If an artificial agent is so proficient, then why deny it the capacity for understanding meanings? Why isn’t understanding the meaning of a sentence understood as a multiply-realizable capacity?

Marcus is right to concentrate on competence in particular natural language tasks, but he needs to a) disdain a reductive view that takes one particular set of competencies to be characteristic of something as poorly defined as human intelligence, and b) not disdain the attainment of these competencies on the grounds that they do not emulate human cognitive structure.