Massimo Pigliucci critiques the uncritical reductionism that the conflation of philosophy and science brings in its wake, using as a jumping-off point Tamsin Shaw’s essay in the New York Review of Books, which addresses psychologists’ claims “that human beings are not rational, but rather rationalizing, and that one of the things we rationalize most about is ethics.” Pigliucci notes that Shaw’s targets (Jonathan Haidt, Steven Pinker, Paul Bloom, Joshua Greene, and a number of others) “make the same kind of fundamental mistake [a category mistake], regardless of the quality of their empirical research.”

Pigliucci highlights Shaw’s critique of Joshua Greene’s claims that “neuroscientific data can test ethical theories, concluding that there is empirical evidence for utilitarianism.” Shaw had noted that:

Greene interpreted these results in the light of an unverifiable and unfalsifiable story about evolutionary psychology … Greene inferred … that the slower mechanisms we see in the brain are a later development and are superior because morality is properly concerned with impersonal values … [But] the claim here is that personal factors are morally irrelevant, so the neural and psychological processes that track such factors in each person cannot be relied on to support moral propositions or guide moral decisions. Greene’s controversial philosophical claim is simply presupposed; it is in no way motivated by the findings of science. An understanding of the neural correlates of reasoning can tell us nothing about whether the outcome of this reasoning is justified.

At this point Pigliucci intervenes:

Let me interject here with my favorite analogy to explain why exactly Greene’s reasoning doesn’t hold up: mathematics. Imagine we subjected a number of individuals to fMRI scanning of their brain activity while they are in the process of tackling mathematical problems. I am positive that we would conclude the following…

There are certain areas, and not others, of the brain that light up when a person is engaged with a mathematical problem.

There is probably variation in the human population for the level of activity, and possibly even the size or micro-anatomy, of these areas.

There is some sort of evolutionary psychological story that can be told for why the ability to carry out simple mathematical or abstract reasoning may have been adaptive in the Pleistocene (though it would be much harder to come up with a similar story that justifies the ability of some people to understand advanced math, or to prove Fermat’s Last Theorem).

But *none* of the above will tell us anything at all about whether the subjects in the experiment got the math right. Only a mathematician — not a neuroscientist, not an evolutionary psychologist — can tell us that.

Correct. Now imagine an ambitious neuroscientist who claims his science has really, really advanced, and indeed, imaging technology has improved so much that Pigliucci’s first premise above should be changed to:

There are certain areas, and not others, of the brain that light up when a person is working out the *correct* solution to a particular mathematical problem.

So, contra Pigliucci’s claim above, neuroscience *will* tell us a great deal about whether the subjects in the experiment got the math right. Our funky imaging science and technology makes that possible now. At this stage, the triumphant reductionist says, “We’ve reduced the doing of mathematics to doing neuroscience; when you think you are doing mathematics, all that is happening is that a bunch of neurons are firing in the following patterns and particular parts of your brain are lighting up. We can now tell an evolutionary psychology story about why the ability to reason correctly may have been adaptive.”

But we may ask: Should the presence of such technology mean we should stop doing mathematics? Have we learned, as a result of such imaging studies, *how* to do mathematics correctly? We know that when our brains are in particular states, they can be interpreted as doing mathematical problems–‘this activity means you are doing a math problem in this fashion.’ A mathematician looks at proofs; a neuroscientist would look at the corresponding brain scans. We know when one corresponds to the other. This is perhaps useful for comparing math-brain-states with poetry-brain-states, but it won’t tell us how to write poetry or proofs of theorems. It does not tell us how humans would produce those proofs (or those brain states in their brains). If a perverse neuroscientist were to suggest that the right way to do math now would be to aim to put your brain into the states suggested by the imaging machines, we would note that we already have a perfectly good way of learning how to do good mathematics: learning from masters’ techniques, as found in books, journals, and notebooks.

In short, the reduction of a human activity–math–to its corresponding brain activity achieves precisely nothing when it comes to the *doing* of the activity. It aids our *understanding* of that activity in some regards–as in, how does its corresponding brain activity compare to other corresponding brain activities for other actions–and not in others. Some aspects of this reduction will strike us as perfectly pointless, given the antecedent accomplishments of mathematics and mathematicians.

Not all possible reduction is desirable or meaningful.

The amended premise is interesting. To suppose there is such a contraption by which we might ascertain the correctness of a solution to a particular math problem would likely strike most scientists as perverse. Correct me if I’m wrong (I have no special training in this field, so I’m just working on my basic understanding of this subject), but interpreting the function of this machine in this light seems to implicitly endorse a particular view of mathematics, i.e., that either there is some subterranean mental process by which correctness/incorrectness is imputed to a mathematical solution or there is something perceptually recognizable about a correct mathematical solution that has an underlying mental correlate. I’m not sure either of those views is obvious or without philosophical baggage (the first seems to commit one to the view that correctness in mathematics is substantially internal and mind-dependent; the second raises questions about what, exactly, is being perceived or recognized).

Even if analogous technology were developed vis-à-vis ethics, it seems to me that the technology would require tacit acceptance of certain antecedent ethical ideas.

Sorry for the late and rushed reply. Yes, I don’t think either of those views is uncontroversial. I assumed the existence of such a ‘contraption’ as a kind of stipulation: even if it were to exist, we’d still be left needing a kind of ‘mathematical assessment’ of the solution. More on this very soon. Sorry again for being so tardy.