Steven Pinker Should Read Some Nietzsche For Himself

Steven Pinker does not like Nietzsche. The following exchange–in an interview with the Times Literary Supplement–makes this clear:

Question: Which author (living or dead) do you think is most overrated?

Pinker: Friedrich Nietzsche. It’s easy to see why his sociopathic ravings would have inspired so many repugnant movements of the twentieth and twenty-first centuries, including fascism, Nazism, Bolshevism, the Ayn Randian fringe of libertarianism, and the American alt-Right and neo-Nazi movements today. Less easy to see is why he continues to be a darling of the academic humanities. True, he was a punchy stylist, and, as his apologists note, he extolled the individual superman rather than a master race. But as Bertrand Russell pointed out in A History of Western Philosophy, the intellectual content is slim: it “might be stated more simply and honestly in the one sentence: ‘I wish I had lived in the Athens of Pericles or the Florence of the Medici’.”

The answers that Pinker seeks–in response to his plaintive query–are staring him right in the face. To wit, ‘we’ study Nietzsche with great interest because:

1. If indeed it is true that Nietzsche’s ‘ravings…inspired so many repugnant movements’–and these ‘movements’ have not been without considerable import–then surely we owe it to ourselves to read him and find out why they did so. Pinker thinks ‘It’s easy to see why,’ but surely he would not begrudge students reading Nietzsche for themselves to find out why? Moreover, Nietzsche served as the inspiration for a great deal of twentieth-century literature too–Thomas Mann is but one of the many authors to be so influenced. These connections are worth exploring as well.

2. As Pinker notes with some understatement, Nietzsche was a ‘punchy stylist.’ (I mean, that is like saying Muhammad Ali was a decent boxer, but let’s let that pass for a second.) Well, folks in the humanities–in departments like philosophy, comparative literature, and others–often study things like style, rhetoric, and argumentation; they might be interested in seeing how these are employed to produce the ‘sociopathic ravings’ that have had such impact on our times. Moreover, Nietzsche’s writings employ many different literary styles; the study of those is also of interest.

3. Again, as Pinker notes, Nietzsche ‘extolled the individual superman rather than a master race,’ which then prompts the question of why the Nazis were able to co-opt him in some measure. This is a question of historical, philosophical, and cultural interest; the kinds of things folks in humanities departments like to study. And if Nietzsche did develop some theory of the “individual superman,” what was it? The humanities are surely interested in this topic too.

4. Lastly, for the sake of his credibility, Pinker should find a more serious history of philosophy than Bertrand Russell’s A History of Western Philosophy, which is good only as a light read–it was written very quickly, as a popular work for purely commercial purposes, and was widely reviled in its time for its sloppy history. There is some good entertainment in there, but a serious introduction to the philosophers noted in it can only begin with their own texts. If Pinker wants to concentrate on secondary texts, he can read Frederick Copleston’s Friedrich Nietzsche: Philosopher of Culture; this work, written by a man largely unsympathetic to Nietzsche’s views–one who indeed finds them morally repugnant–still treats them as worthy of serious consideration and analysis. So much so that Copleston thought it worthwhile to write a book about them. Maybe Pinker should confront some primary texts himself. He might understand the twentieth century better.

Jerry Fodor And Philosophical Practice

I wrote a short post on Facebook today, making note of the passing away of Jerry Fodor:

Much as I admired Fodor’s writing chops, I deplored the way he did philosophy. The stories of his ‘put-downs’ and sarcastic, ironic, ‘devastating’ objections, questions, or responses in seminars always left me feeling like this was not how I understood philosophy as a practice. The admiration all those around me extended to Fodor was a significant component in me feeling alienated from philosophy during graduate school. (It didn’t help that in the one and only paper I wrote on Fodor–in refuting his supposed critique of Quine‘s inscrutability of reference claim–I found him begging the question rather spectacularly.) I had no personal contact with him, so I cannot address that component of him; all I can say is that from a distance, he resembled too many other academic philosophers: very smart folk, but not people I felt I could work with or for, or converse with to figure out things together.

In response, a fellow philosopher wrote to me:

[H]onestly that was my impression of Fodor also….while I too didn’t ever even meet him in person, I thought much of his rhetoric was nasty and unfair, that he routinely caricatured positions of others and then sort of pranced around about how he had totally refuted them, and that he basically ignored criticism…he was very far from what I would take to be a model for the profession….I got the impression that pretty much every other philosopher he mentioned was just a foil – produce a sort of comic book version of them to show how much better his view was.

There has been plenty of praise for Fodor on social media, much of which made note of precisely the style I pointed out above, albeit in admiring tones. In their obit for Fodor, The London Review of Books paid attention to similar issues:

Jerry Fodor, who died yesterday, wrote thirty pieces for the LRB….Many of them were on philosophy of mind…more often than not, lucidly explaining how the books under review had got it all wrong….His literary criticism included a withering review of a pair of ‘amply unsuccessful’ novels about apes; and he had this to say of Steven Pinker’s view of Hamlet in his demolition of psychological Darwinism:

And here [Pinker] is on why we like to read fiction: ‘Fictional narratives supply us with a mental catalogue of the fatal conundrums we might face someday and the outcomes of strategies we could deploy in them. What are the options if I were to suspect that my uncle killed my father, took his position, and married my mother?’ Good question. Or what if it turns out that, having just used the ring that I got by kidnapping a dwarf to pay off the giants who built me my new castle, I should discover that it is the very ring that I need in order to continue to be immortal and rule the world? It’s important to think out the options betimes, because a thing like that could happen to anyone and you can never have too much insurance.

Unsurprisingly, this quote from Fodor was cited as a ‘sick burn’ on Twitter–as an example of his ‘genteel trash talk.’ But a second’s reading of Pinker, and of the response above by Fodor, shows that Fodor is again operating at his worst here. The paragraph cited is a deliberately obtuse and highly superficial reading of Pinker’s claim. Do we have to think about the specific events in Hamlet in order to ponder the ethical dilemmas that the play showcases for us? Is this why people have the emotional responses they do to Hamlet? Or is it because they are able to recognize and internalize the intractability of the issues that Hamlet raises? Do we need to specifically think about rings, dwarfs, and giants in order to specifically ponder the abstract problems that lie at the heart of the tale Fodor cites? Indeed, the many folks who have read these stories over the years seem–in their emotional responses–to have been perfectly capable of separating their concrete particulars from the concepts they traffic in. Fodor does not bother to offer a charitable reading of Pinker; he sets off immediately to scorn and ridicule. This kind of philosophy, and this kind of writing, earns plenty of applause from those who imagine philosophy to be a contact sport. But it does little to advance philosophical thinking on the issues at play.

Pigliucci And Shaw On The Allegedly Useful Reduction

Massimo Pigliucci critiques the uncritical reductionism that the conflation of philosophy and science brings in its wake, using as a jumping-off point Tamsin Shaw’s essay in the New York Review of Books, which addresses psychologists’ claims “that human beings are not rational, but rather rationalizing, and that one of the things we rationalize most about is ethics.” Pigliucci notes that Shaw’s targets–Jonathan Haidt, Steven Pinker, Paul Bloom, Joshua Greene, and a number of others–“make the same kind of fundamental mistake [a category mistake], regardless of the quality of their empirical research.”

Pigliucci highlights Shaw’s critique of Joshua Greene’s claims that “neuroscientific data can test ethical theories, concluding that there is empirical evidence for utilitarianism.” Shaw had noted that:

Greene interpreted these results in the light of an unverifiable and unfalsifiable story about evolutionary psychology … Greene inferred … that the slower mechanisms we see in the brain are a later development and are superior because morality is properly concerned with impersonal values … [But] the claim here is that personal factors are morally irrelevant, so the neural and psychological processes that track such factors in each person cannot be relied on to support moral propositions or guide moral decisions. Greene’s controversial philosophical claim is simply presupposed; it is in no way motivated by the findings of science. An understanding of the neural correlates of reasoning can tell us nothing about whether the outcome of this reasoning is justified.

At this point Pigliucci intervenes:

Let me interject here with my favorite analogy to explain why exactly Greene’s reasoning doesn’t hold up: mathematics. Imagine we subjected a number of individuals to fMRI scanning of their brain activity while they are in the process of tackling mathematical problems. I am positive that we would conclude the following…

There are certain areas, and not others, of the brain that lit up when a person is engaged with a mathematical problem.

There is probably variation in the human population for the level of activity, and possibly even the size or micro-anatomy, of these areas.

There is some sort of evolutionary psychological story that can be told for why the ability to carry out simple mathematical or abstract reasoning may have been adaptive in the Pleistocene (though it would be much harder to come up with a similar story that justifies the ability of some people to understand advanced math, or to solve Fermat’s Last Theorem).

But none of the above will tell us anything at all about whether the subjects in the experiment got the math right. Only a mathematician — not a neuroscientist, not an evolutionary psychologist — can tell us that.

Correct. Now imagine an ambitious neuroscientist who claims his science has really, really advanced, and indeed, imaging technology has improved so much that Pigliucci’s first premise above should be changed to:

There are certain areas, and not others, of the brain that lit up when a person is working out the correct solution to a particular mathematical problem.

So, contra Pigliucci’s claim above, neuroscience will tell us a great deal about whether the subjects in the experiment got the math right. Our funky imaging science and technology makes that possible now. At this stage, the triumphant reductionist says, “We’ve reduced the doing of mathematics to doing neuroscience; when you think you are doing mathematics, all that is happening is that a bunch of neurons are firing in the following patterns and particular parts of your brain are lighting up. We can now tell an evolutionary psychology story about why the ability to reason correctly may have been adaptive.”

But we may ask: Would the presence of such technology mean we should stop doing mathematics? Have we learned, as a result of such imaging studies, how to do mathematics correctly? We know that when our brains are in particular states, they can be interpreted as doing mathematical problems–‘this activity means you are doing a math problem in this fashion.’ A mathematician looks at proofs; a neuroscientist would look at the corresponding brain scans. We know when one corresponds to the other. This is perhaps useful for comparing math-brain-states with poetry-brain-states, but it won’t tell us how to write poetry or proofs for theorems. It does not tell us how humans would produce those proofs (or those brain states in their brains). If a perverse neuroscientist were to suggest that the right way to do mathematics now would be to aim to put your brain into the states suggested by the imaging machines, we would note that we already have a perfectly good way of learning how to do good mathematics: learning from masters’ techniques, as found in books, journals, and notebooks.

In short, the reduction of a human activity–math–to its corresponding brain activity achieves precisely nothing when it comes to the doing of the activity. It aids our understanding of that activity in some regards–as in, how does its corresponding brain activity compare to other corresponding brain activities for other actions–and not in others. Some aspects of this reduction will strike us as perfectly pointless, given the antecedent accomplishments of mathematics and mathematicians.

Not all possible reduction is desirable or meaningful.