That Elusive Mark By Which To Distinguish Good People From Bad

In Journey to the End of the Night, Céline’s central character, Ferdinand Bardamu, is confronted with incontrovertible evidence of moral goodness in Sergeant Alcide–who is nobly working away in a remote colonial outpost to financially support a niece who is all but a stranger to him. That night, as Bardamu gazes at the sleeping Alcide, now once again, in inactivity, utterly unremarkable and indistinguishable from others who serve like him, he thinks to himself:

There ought to be some mark by which to distinguish good people from bad.

There isn’t, of course. But that hasn’t stopped mankind from continuing to hold on to this forlorn hope in the face of the stubborn difficulty of making moral judgements and evaluations about our fellow humans. Sometimes we seek to evaluate fellow humans on the basis of simple tests of conformance to a pre-established, clearly specified moral code or decision procedure; sometimes we drop all pretence of sophisticated ethical analysis and take refuge in literal external marks.

These external marks and identifiers have varied across space, time, and culture. Sometimes shades of skin pigmentation have been established as the distinguishing marker of goodness; sometimes it is the shape of the skull that has been taken to be the desired marker; sometimes national or ethnic origin; sometimes religious affiliation. (If that religious affiliation is visible by means of an external marker–like a turban, for instance–then so much the better. West Pakistani troops conducting genocide in East Pakistan in 1971 were fond of asking Bengali civilians to drop their pants and expose their genitals;¹ the uncircumcised ones were led off to be shot; their bodies had revealed them to be of the wrong religion, and that was all that mattered as the West Pakistani Army sought to cleanse East Pakistan of those subversive elements that threatened the Pakistani polity.)

Confronted with this history of failure to find the distinguishing external mark of goodness, perhaps emblazoned on our foreheads by the cosmic branding authority, hope has turned elsewhere, inwards. Perhaps the distinguishing mark is not placed outside on our bodies but will be found inside us–in some innard or other. Perhaps there is ‘bad blood’ in some among us, or, even worse, some might have ‘bad brains.’ Unsurprisingly, we have turned to neuroscience to help us with moral decisions: here is a brain state found in mass murderers and criminals; innocents do not seem to have it; our penal and moral decisions have received invaluable assistance. But as a growing litany of problems with neuroscientific inference suggests, these identifications of brain states, their correlations with particular behavior, and the explanations that result rest on shaky foundations.

In the face of this determination to seek simple markers for moral judgement, my ‘There isn’t, of course’ seems rather glib; it fails to acknowledge the endless frustration and difficulty of decision-making in the moral domain–and the temptation to seek refuge in the clearly visible.

Note: R. J. Rummel, Death by Government, page 323.

Neuroscience’s Inference Problem And The Perils Of Scientific Reduction

In Science’s Inference Problem: When Data Doesn’t Mean What We Think It Does, while reviewing Jerome Kagan’s Five Constraints on Predicting Behavior, James Ryerson writes:

Perhaps the most difficult challenge Kagan describes is the mismatching of the respective concepts and terminologies of brain science and psychology. Because neuroscientists lack a “rich biological vocabulary” for the variety of brain states, they can be tempted to borrow correlates from psychology before they have shown there is in fact a correlation. On the psychology side, many concepts can be faddish or otherwise short-lived, which should make you skeptical that today’s psychological terms will “map neatly” onto information about the brain. If fMRI machines had been available a century ago, Kagan points out, we would have been searching for the neurological basis of Pavlov’s “freedom reflex” or Freud’s “oral stage” of development, no doubt in vain. Why should we be any more confident that today’s psychological concepts will prove any better at cutting nature at the joints?

In a review of Theory and Method in the Neurosciences (Peter K. Machamer, Rick Grush, Peter McLaughlin (eds), University of Pittsburgh Press, 2001), I made note¹ of related epistemological concerns:

When experiments are carried out, neuroscientists continue to run into problems. The level of experimental control available to practitioners in other sciences is simply not available to them, and the theorising that results often seems to be on shaky ground….The localisation techniques that are amongst the most common in neuroscience rely on experimental methods such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and magnetoencephalography (MEG). [In PET] a radioactive tracer consisting of labelled water or glucose analogue molecules is injected into a subject, who is then asked to perform a cognitive task under controlled conditions. Increased blood flow or glucose metabolism in an area of the brain concentrates the tracer there, and its decay emits positrons and gamma rays that can be detected. It is then assumed that this area is responsible for the cognitive function performed by the subject. The problem with this assumption, of course, is that the increased blood flow might occur in one area, and the relevant neural activity might occur in another, or in no particular area at all….this form of investigation, rather than pointing to the modularity and functional decomposability of the brain, merely assumes it.
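
The worry in that last sentence can be made vivid with a toy simulation. In the sketch below (plain Python; the region names, the numbers, and the ‘vascular spillover’ mechanism are all invented for illustration, not drawn from any real imaging study), the ‘neural work’ happens in region B, but blood flow downstream of B produces a stronger task-correlated signal in region A–so a naive subtraction analysis would ‘localise’ the function to the wrong place:

```python
# Toy illustration of the localisation inference problem described above.
# Region B does the (hypothetical) neural work; region A merely receives
# increased blood flow downstream of B. A naive task-vs-rest contrast then
# points at A. All names and numbers here are invented for illustration.
import random
from statistics import mean

random.seed(0)

n_trials = 200
task_on = [random.random() < 0.5 for _ in range(n_trials)]

signal_a, signal_b = [], []
for on in task_on:
    work_b = (1.0 if on else 0.0) + random.gauss(0, 0.5)  # 'real' activity, in B
    flow_a = 1.5 * work_b + random.gauss(0, 0.2)          # vascular spillover into A
    signal_b.append(work_b)
    signal_a.append(flow_a)

def task_contrast(signal):
    """Mean signal during task trials minus mean signal during rest trials."""
    on_vals = [s for s, t in zip(signal, task_on) if t]
    off_vals = [s for s, t in zip(signal, task_on) if not t]
    return mean(on_vals) - mean(off_vals)

# Region A shows the *larger* task contrast, so a naive analysis would
# localise the cognitive function to A, even though B did the work.
print(f"task contrast, region A: {task_contrast(signal_a):.2f}")
print(f"task contrast, region B: {task_contrast(signal_b):.2f}")
```

The contrast alone cannot distinguish ‘this region performs the function’ from ‘this region merely co-varies with whatever does’; that distinction has to be supplied by assumptions brought to the data.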

The fundamental problem–implicit and explicit in Kagan’s book and my little note above–is the urge to ‘reduce’ psychology to neuroscience, to reduce mind to brain, to eliminate psychological explanations and language in favor of neuroscientific ones, which will introduce precise scientific language in place of imprecise psychological descriptions. This urge to eliminate one level of explanation in favor of a ‘better, lower, more basic, more fundamental’ one is, to put it bluntly, scientistic hubris, and the various challenges Kagan outlines in his book bear out the foolishness of this enterprise. It results in explanations and theories that rest on unstable foundations: optimistic correlations and glib assumptions are the least of it. Worst of all, it contributes to a blindness: what is visible at the level of psychology is not visible at the level of neuroscience. Knowledge should enlighten, not render us myopic.

Note: In Metascience, 11(1), March 2002.

Pigliucci And Shaw On The Allegedly Useful Reduction

Massimo Pigliucci critiques the uncritical reductionism that the conflation of philosophy and science brings in its wake, using as a jumping-off point Tamsin Shaw’s essay in the New York Review of Books, which addresses psychologists’ claims “that human beings are not rational, but rather rationalizing, and that one of the things we rationalize most about is ethics.” Pigliucci notes that Shaw’s targets “Jonathan Haidt, Steven Pinker, Paul Bloom, Joshua Greene and a number of others….make the same kind of fundamental mistake [a category mistake], regardless of the quality of their empirical research.”

Pigliucci highlights Shaw’s critique of Joshua Greene’s claims that “neuroscientific data can test ethical theories, concluding that there is empirical evidence for utilitarianism.” Shaw had noted that:

Greene interpreted these results in the light of an unverifiable and unfalsifiable story about evolutionary psychology … Greene inferred … that the slower mechanisms we see in the brain are a later development and are superior because morality is properly concerned with impersonal values … [But] the claim here is that personal factors are morally irrelevant, so the neural and psychological processes that track such factors in each person cannot be relied on to support moral propositions or guide moral decisions. Greene’s controversial philosophical claim is simply presupposed; it is in no way motivated by the findings of science. An understanding of the neural correlates of reasoning can tell us nothing about whether the outcome of this reasoning is justified.

At this point Pigliucci intervenes:

Let me interject here with my favorite analogy to explain why exactly Greene’s reasoning doesn’t hold up: mathematics. Imagine we subjected a number of individuals to fMRI scanning of their brain activity while they are in the process of tackling mathematical problems. I am positive that we would conclude the following…

There are certain areas, and not others, of the brain that light up when a person is engaged with a mathematical problem.

There is probably variation in the human population for the level of activity, and possibly even the size or micro-anatomy, of these areas.

There is some sort of evolutionary psychological story that can be told for why the ability to carry out simple mathematical or abstract reasoning may have been adaptive in the Pleistocene (though it would be much harder to come up with a similar story that justifies the ability of some people to understand advanced math, or to solve Fermat’s Last Theorem).

But none of the above will tell us anything at all about whether the subjects in the experiment got the math right. Only a mathematician — not a neuroscientist, not an evolutionary psychologist — can tell us that.

Correct. Now imagine an ambitious neuroscientist who claims his science has really, really advanced, and indeed, imaging technology has improved so much that Pigliucci’s first premise above should be changed to:

There are certain areas, and not others, of the brain that light up when a person is working out the correct solution to a particular mathematical problem.

So, contra Pigliucci’s claim above, neuroscience will tell us a great deal about whether the subjects in the experiment got the math right. Our funky imaging science and technology makes that possible now. At this stage, the triumphant reductionist says, “We’ve reduced the doing of mathematics to doing neuroscience; when you think you are doing mathematics, all that is happening is that a bunch of neurons are firing in the following patterns and particular parts of your brain are lighting up. We can now tell an evolutionary psychology story about why the ability to reason correctly may have been adaptive.”
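
Notice, though, what this imagined breakthrough quietly presupposes. To build or validate a decoder for ‘working out the correct solution,’ someone must already have determined, mathematically, which solutions are correct: the ground truth flows from mathematics into the neuroscience, not the other way around. A schematic sketch makes the dependence explicit (every function, feature, and datum below is a hypothetical placeholder, not a real pipeline):

```python
# Schematic sketch of how the imagined 'correctness decoder' would have to
# be built. The labels that make it trainable are supplied by a mathematical
# check, so the decoder presupposes mathematics rather than replacing it.
# All names and data here are hypothetical placeholders.

def verify_solution(problem: int, answer: int) -> bool:
    """Mathematical ground truth: is `answer` the square of `problem`?"""
    return answer == problem * problem

# Hypothetical recordings: (brain_features, problem posed, subject's answer).
recordings = [
    ([0.9, 0.1], 7, 49),    # subject answered correctly
    ([0.2, 0.8], 7, 48),    # subject answered incorrectly
    ([0.8, 0.3], 12, 144),  # subject answered correctly
]

# The training labels come from mathematics, not from the brain data.
labels = [verify_solution(problem, answer) for _, problem, answer in recordings]
print(labels)  # [True, False, True]

# decoder.fit(features, labels) would go here; whatever classifier we chose,
# its notion of 'correct' was fixed by the mathematical check above.
```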

But we may ask: Should the presence of such technology mean we should stop doing mathematics? Have we learned, as a result of such imaging studies, how to do mathematics correctly? We know that when our brains are in particular states, they can be interpreted as doing mathematical problems–‘this activity means you are doing a math problem in this fashion.’ A mathematician looks at proofs; a neuroscientist would look at the corresponding brain scans. We know when one corresponds to another. This is perhaps useful for comparing math-brain-states with poetry-brain-states, but it won’t tell us how to write poetry or proofs for theorems. It does not tell us how humans would produce those proofs (or those brain states in their brains). If a perverse neuroscientist were to suggest that the right way to do maths now would be to aim to put your brain into the states suggested by the imaging machines, we would note that we already have a perfectly good way of learning how to do good mathematics: learning from masters’ techniques, as found in books, journals, and notebooks.

In short, the reduction of a human activity–math–to its corresponding brain activity achieves precisely nothing when it comes to the doing of the activity. It aids our understanding of that activity in some regards–as in how its corresponding brain activity compares with the brain activities corresponding to other actions–and not in others. Some aspects of this reduction will strike us as perfectly pointless, given the antecedent accomplishments of mathematics and mathematicians.

Not all possible reduction is desirable or meaningful.

What the Brain Can Tell Us About Art (and Literature)

In ‘What the Brain Can Tell Us About Art’ (New York Times, April 12, 2013), Eric R. Kandel writes:

Alois Riegl….understood that art is incomplete without the perceptual and emotional involvement of the viewer. Not only does the viewer collaborate with the artist in transforming a two-dimensional likeness on a canvas into a three-dimensional depiction of the world, the viewer interprets what he or she sees on the canvas in personal terms, thereby adding meaning to the picture….In addition to our built-in visual processes, each of us brings to a work of art our acquired memories: we remember other works of art that we have seen. We remember scenes and people that have meaning to us and relate the work of art to those memories. In order to see what is painted on a canvas, we have to know beforehand what we might see in a painting. These insights into perception served as a bridge between the visual perception of art and the biology of the brain.

Kandel’s focus in his article is on visual art, but these considerations apply equally to the printed word. Here are the passages excerpted above, with very slight emendation:

Literature is incomplete without the perceptual and emotional involvement of the reader. Not only does the reader collaborate with the author in transforming two-dimensional printed words on a page into an imaginative depiction of the world, the reader interprets what he or she sees on the page in personal terms, thereby adding meaning to the text….In addition to our acquired reading abilities, each of us brings to a work of literature our acquired memories: we remember other works of literature that we have read. We remember scenes and people who have meaning to us and relate the work of literature to those memories. In order to read what is printed on a page, we have to know beforehand what we might read in the text.

So, we get the collaborative theory of the reader: a literary work is brought to life by the reader; it acquires meaning in the act of reading. This ensures that the work serves as raw material for an act of active engagement by the reader, who brings a history of reading, a corpus of memories, and thus an inclination and disposition toward the text. The more you read, the more you bring to every subsequent act of reading; the more you engage with humans, the more varied the archetypes and templates of the human experience you have playing in your mind as you read.

The classic work, then, which endures over time and acquires a new set of readers in each successive generation, becomes so because it remains reinterpretable on an ongoing basis; newer bodies of text and human histories surround it, and it acquires new meanings from them. We are still unable to analyze this phenomenon, to determine what makes a particular text receptive to such reimaginings over time; its success is the only indicator that it has what it takes to acquire the status of a classic.

Op-Eds and the Social Context of Science

A few years ago, I taught the third of four special interdisciplinary seminars that students of the CUNY Honors College are required to complete during the course of their degrees. The CHC3 seminar is titled Science and Technology in New York City, a moniker that is open to, and subject to, broad interpretation by any faculty member who teaches it. In my three terms of teaching it, I used it to introduce to my students–many of whom were science majors and planned to go on to graduate work in the sciences–among other things, the practice of science and the development and deployment of technology in urban spaces. This treatment almost invariably required me to introduce the notion of a social history of science, among whose notions are that science does not operate independently of its social context, that scientists are social and political actors, that scientific laboratories are social and political spaces, not just repositories for scientific equipment, and that scientific theories, ‘advances,’ and ‘truths’ bear the mark of historical contingencies and developments. (One of my favorite discussion-inducing examples was to point to the amazing pace of scientific and technological progress in the years from 1939 to 1945 and ask: What could have brought this about?)

If I were teaching that class this semester, I would have brought in Philip M. Boffey’s Op-Ed (‘The Next Frontier Is Inside Your Brain’, New York Times, February 23) for a classroom discussion activity. I would have pointed out to my students that the practice of science requires funding, sometimes from private sources, sometimes from governmental ones. This funding does not happen without contestation; it requires justification, because funds are limited and there are invariably more requests for funding than can be satisfied, and sometimes because there is skepticism about the scientific worth of the work proposed. So the practice of science has a rhetorical edge to it; its practitioners–and those who believe in the value of their work–must convince, persuade, and argue. They must establish the worth of what they do to the society that plays host to them.

Boffey’s Op-Ed, then, would have served as a classic example of this aspect of the practice of science. It aims to build public support for research projects in neuroscience because, as Boffey notes at the very outset:

The Obama administration is planning a multiyear research effort to produce an “activity map” that would show in unprecedented detail the workings of the human brain, the most complex organ in the body. It is a breathtaking goal at a time when Washington, hobbled by partisan gridlock and deficit worries, seems unable to launch any major new programs.

This effort — if sufficiently financed — could develop new tools and techniques that would lead to a much deeper understanding of how the brain works. [link in original]

And then Boffey is off and running. For Congressmen need to be convinced; perhaps petitions will have to be signed; perhaps other competitors who also hope to be ‘sufficiently financed’ need to be shown to be less urgent. And what better place to present these arguments than the nation’s media outlets–perhaps its most prominent newspaper?

The scientist as polemicist is one of the many roles a scientist may be called on to play in the course of scientific work. Sometimes his work may be done, in part, by those who have already been persuaded by him. Boffey’s arguments, his language, his framing of the importance of the forthcoming legislation would, I think, all serve to show my imagined students this very important component of the practice of science.