That Elusive Mark By Which To Distinguish Good People From Bad

In Journey to the End of the Night, Céline‘s central character, Ferdinand Bardamu, is confronted with incontrovertible evidence of moral goodness in Sergeant Alcide–who is nobly working away in a remote colonial outpost to financially support a niece who is little more than a perfect stranger to him. That night, as Bardamu gazes at the sleeping Alcide–now once again, in inactivity, utterly unremarkable and indistinguishable from others who serve like him–he thinks to himself:

There ought to be some mark by which to distinguish good people from bad.

There isn’t, of course. But that hasn’t stopped mankind from continuing to hold on to this forlorn hope in the face of the stubborn difficulty of making moral judgements and evaluations about our fellow humans. Sometimes we seek to evaluate fellow humans on the basis of simple tests of conformance to a pre-established, clearly specified, moral code or decision procedure; sometimes we drop all pretence of sophisticated ethical analysis and take refuge in literal external marks.

These external marks and identifiers have varied across space, time, and cultures. Sometimes shadings of skin pigmentation have been established as the distinguishing marker of goodness; sometimes it is the shape of the skull that has been taken to be the desired marker; sometimes national or ethnic origin; sometimes religious affiliation. (If that religious affiliation is visible by means of an external marker–like a turban, for instance–then so much the better. West Pakistani troops conducting genocide in East Pakistan in 1971 were fond of asking Bengali civilians to drop their pants and expose their genitals;¹ the uncircumcised ones were led off to be shot; their bodies had revealed them to be of the wrong religion, and that was all that mattered as the West Pakistani Army sought to cleanse East Pakistan of those subversive elements that threatened the Pakistani polity.)

Confronted with this history of failure to find the distinguishing external mark of goodness, perhaps emblazoned on our foreheads by the cosmic branding authority, hope has turned elsewhere, inwards. Perhaps the distinguishing mark is not placed outside on our bodies but will be found inside us–in some innard or other. Perhaps there is ‘bad blood’ in some among us, or even worse, some might have ‘bad brains.’ Unsurprisingly, we have turned to neuroscience to help us with moral decisions: here is a brain state found in mass murderers and criminals; innocents do not seem to have it; our penal and moral decisions have received invaluable assistance. But as a growing litany of problems with neuroscientific inference suggests, these identifications of brain states, their correlations with particular behaviors, and the explanations that result all rest on shaky foundations.

In the face of this determination to seek simple markers for moral judgement, my ‘There isn’t, of course’ seems rather glib; it fails to acknowledge the endless frustration and difficulty of decision-making in the moral domain–and the temptation to seek refuge in the clearly visible.

Note: R. J. Rummel, Death by Government, p. 323.

Talking About Natural Law With Children

Last Thursday, thanks to New York City public schools taking a ‘mid-winter break,’ my daughter accompanied me to Brooklyn College and sat in on two classes. My students, as might be expected, were friendly and welcoming; my daughter, for her part, conducted herself exceedingly well, taking a seat, occupying herself with drawing on a piece of paper, and often just paying attention to the class discussion. She did not interrupt me even once; I only had to ask her to pipe down a bit when she began humming a little ditty to herself. After the second class–philosophy of law, which featured a discussion of St. Thomas Aquinas and natural law theory–had ended, I asked her what she thought the class was about. She replied, “It was about good and bad.” This was a pretty good answer, but things got better the next day.

On Friday, as we drove to the gym for my workout and my daughter’s climbing session, I picked up the conversation again, asking my daughter what she made of the class discussion and whether she had found it interesting. She said she had; so I pressed on, and the following conversation resulted:

“Let me ask you something. Would you always obey the law?”

“Yes.”

“What if the law told you to do something bad?”

“I would do it.”

“Why? Why would you do something bad?”

“Because I don’t want to go to jail.”

“You know, I’ve been to jail twice. For breaking the law.”

“Why?”

“Well, one time, I was angry with one country for attacking people and dropping bombs on them, so I went to their embassy and protested by lying down on the street. When the police told me to move, I didn’t, and so they arrested me and put me in jail for a day. Another time, I protested our university not paying the teachers enough money for their work, and I was arrested again for protesting in the same way.” [Strictly speaking, this is a bad example of civil disobedience; I wasn’t breaking a law I thought unjust; rather, I was breaking a law to make a point about the unjustness of other actions.]

“Did they feed you in jail?”

“Yes, they did.”

“Oh, that’s good.”

“Well, so what do you think? Would you break the law if it told you to do something bad?”

“No.”

“Why not? The law is asking you to do something bad.”

“What if I was wrong?”

“What do you mean?”

“What if I was wrong, and it wasn’t bad, and the policeman put me in jail?”

“What if you were sure that you were being asked to do something bad?”

“Then I wouldn’t do it.”

“Why?”

“Because I don’t want to do bad things.”

“But isn’t breaking the law a bad thing?”

“Yes.”

“So, why are you breaking the law?”

“Because it’s asking me to do a bad thing.”

At this point, we were close to our turn-off for the gym and our parking spot, and so our conversation ended. A couple of interesting takeaways from it:

1. We see the social construction of a legal order here in the making; at the age of five, my daughter has already internalized the idea that breaking the law is a ‘bad thing’ and that bad things happen to those who break the law. She can also identify the enforcers of the law. This has already created a normative hold on her; she was inclined to obey the law even if it asked her to do something bad because she was worried about the consequences.

2. My daughter displayed an interesting humility about her moral intuitions; she wasn’t sure of whether her thinking of some act as ‘bad’ was infallible. What if she was wrong about that judgment?

Note: My reporting of the conversation above might be a little off; I’m reproducing it from memory.

Steven Pinker Should Read Some Nietzsche For Himself

Steven Pinker does not like Nietzsche. The following exchange–in an interview with the Times Literary Supplement–makes this clear:

Question: Which author (living or dead) do you think is most overrated?

Pinker: Friedrich Nietzsche. It’s easy to see why his sociopathic ravings would have inspired so many repugnant movements of the twentieth and twenty-first centuries, including fascism, Nazism, Bolshevism, the Ayn Randian fringe of libertarianism, and the American alt-Right and neo-Nazi movements today. Less easy to see is why he continues to be a darling of the academic humanities. True, he was a punchy stylist, and, as his apologists note, he extolled the individual superman rather than a master race. But as Bertrand Russell pointed out in A History of Western Philosophy, the intellectual content is slim: it “might be stated more simply and honestly in the one sentence: ‘I wish I had lived in the Athens of Pericles or the Florence of the Medici’.”

The answers that Pinker seeks–in response to his plaintive query–are staring him right in the face. To wit, ‘we’ study Nietzsche with great interest because:

1. If indeed it is true that Nietzsche’s ‘ravings…inspired so many repugnant movements’–and these ‘movements’ have not been without considerable import–then surely we owe it to ourselves to read him and find out why they did so. Pinker thinks ‘it’s easy to see why,’ but surely he would not begrudge students reading Nietzsche for themselves to find out why? Moreover, Nietzsche served as the inspiration for a great deal of twentieth-century literature too–Thomas Mann is but one of the many authors to be so influenced. These connections are worth exploring as well.

2. As Pinker notes with some understatement, Nietzsche was a ‘punchy stylist.’ (I mean, that is like saying Muhammad Ali was a decent boxer, but let’s let that pass for a second.) Well, folks in the humanities–in departments like philosophy, comparative literature, and others–often study things like style, rhetoric, and argumentation; they might be interested in seeing how these are employed to produce the ‘sociopathic ravings’ that have had such impact on our times. Moreover, Nietzsche’s writings employ many different literary styles; the study of those is also of interest.

3. Again, as Pinker notes, Nietzsche ‘extolled the individual superman rather than a master race,’ which then prompts the question of why the Nazis were able to co-opt him in some measure. This is a question of historical, philosophical, and cultural interest; the kinds of things folks in humanities departments like to study. And if Nietzsche did develop some theory of the “individual superman,” what was it? The humanities are surely interested in this topic too.

4. Lastly, for the sake of his credibility, Pinker should find a more serious history of philosophy than Bertrand Russell’s A History of Western Philosophy, which is good as a light read–it was written very quickly, as a popular work for purely commercial purposes, and was widely reviled in its time for its sloppy history. There is some good entertainment in there; but a serious introduction to the philosophers covered in it can only begin with their own texts. If Pinker wants to concentrate on secondary texts, he can read Frederick Copleston’s Friedrich Nietzsche: Philosopher of Culture; this work, written by a man who was largely unsympathetic to Nietzsche’s views and indeed found him morally repugnant, still treats them as worthy of serious consideration and analysis. So much so that Copleston thought it worthwhile to write a book about them. Maybe Pinker should confront some primary texts himself. He might understand the twentieth century better.

‘The Usefulness Of Dread’ Is Up At Aeon Magazine

My essay, ‘The Usefulness of Dread,’ is up at Aeon Magazine today.

Dear ‘Fellow’ Indians, Please Spell My Fucking Name Correctly

It’s ‘Samir’, not ‘Sameer.’ That, really, should be enough. Here is the correct spelling of someone’s name; please abide by it. But Indians will simply not comply. I’m a middle-aged man, about to hit fifty-one in a few weeks’ time, and my entire life, Indians have been systematically misspelling and butchering my name with this horrendous orthography. All are equally guilty: strangers, family, and friends. I can excuse those who have only heard my name and written to me–for, after all, ‘Samir’ is pronounced ‘Sameer,’ and for those used to spelling phonetically, this might suggest itself as a plausible spelling. But what excuse do those have who have seen my name in print, who indeed are corresponding with me by email and have seen my name in the message header? Or, even worse, what excuse do members of my family and my many friends of many years have, who continue to misspell my name? Some of these folks have known me for over thirty years, some for over twenty. The prize must go to those who begin an email correspondence with me using the correct spelling and then, a few messages later, decide they have had enough and start using ‘Sameer’ instead. On the many occasions I’ve tried to issue corrections, my pleas have been greeted with some bemusement, and never have I been granted the courtesy of a simple mea culpa.

‘Samir’ is, of course, a common name in the Arab world (especially, I believe, in Egypt, Lebanon, and Palestine). There, it means “jovial, loyal, or charming companion.” (I’ve rarely had this description used for me.) The Arabic spelling is سمير; the English spelling is as indicated (and preferred by me). In India, where it means “gust of wind or gentle breeze”–though my friends prefer to think of me as “hot air”–the Hindi spelling is समीर, while both ‘Samir’ and ‘Sameer’ are used as English spellings. That is, in India, the spelling ‘Samir’ is not unknown, though perhaps just a little less common than ‘Sameer.’ To reiterate, Indians simply have no excuse for their misspelling of my name.

Americans cannot pronounce my name correctly; I’ve slowly grown used to this frustrating state of affairs where I’m referred to as ‘Shamir,’ ‘Smear,’ ‘Sameyer,’ and so on. (Pride of place, though, must go to the Irish lad who called me ‘Izmir.’ No, no, call me Ishmael. Please. It shares more vowels with my name.) I suppose it’s the price that an immigrant must pay: lose your ‘homeland,’ lose your name, and so on. I’ll deal with it. (Though it will remain a mystery to me that people capable of mastering the pronunciation of ‘Arkansas’ and ‘Massachusetts’ cannot flex their linguistic muscles for a much simpler word; perhaps my ‘foreignness’ trips up their tongues.) With one rare, recent exception, Americans don’t misspell my name; once they see my name in print, they spell it correctly. Indians pronounce my name correctly; how could they not? But they can’t spell it. I wonder if those Indian kids who win the spelling bees year after year in the US could pull it off. Or perhaps their parents’ sins have been visited on them, and they too, would mangle my name.

I will make sure, in my will, to include the provision that no Indian should be allowed anywhere near the writing of my epitaph; I have no faith they will get the spelling right.

US Elections Invite External Intervention, As They Well Might

The Robert Mueller indictment of thirteen Russians for ‘interfering’ in the American elections of 2016 confirms the bad news: those elections were ‘influenced’–in some shape or form–by non-Americans. The extent of this ‘influence’ is unclear–whether it decisively swung the election to Donald Trump or not–but be that as it may, one fact remains established: among the various forces aiming to influence American voters’ minds as they exercised their electoral franchise were non-American ones. It is unclear whether the Russian Internet Research Agency coordinated with the Kremlin or with the Trump campaign, but it did ‘participate’ in the American electoral process.

One might well ask: why not? The entire world looks on with bated breath as an American president is elected; some wonder whether their country will benefit from US largess, yet others whether they will need to scurry for cover as cruise missiles, drones, and aircraft carriers are sent their way. Russians are not immune to such concern; they, like many of the world’s citizens, are just as keen to see their national interests protected by the new US administration. They too have favorites: they would rather see one candidate elected than another. This is as true for American ‘friends’ as it is for ‘foes,’ precisely because those nations too, have varied interests and inclinations, which line up in varied and interesting ways behind different American candidates. Those ‘interests and inclinations’ too, jostle for representation in the American elections.

The US involves and implicates itself in the affairs of many sovereign nations; it places conditions on the aid it sends them; it too, is interested in who gets elected and where (or who comes to power through a coup); the American record of influencing elections and the choice of political leaders and administrations the world over is well known. (Consider just Iraq and Iran as examples.) The US cannot reasonably expect that such involvement and implication will remain unilateral; it especially cannot expect that the rest of the world will not express its interest in American elections by attempting to influence American voters’ choices. For instance, it is not at all unreasonable to expect that leading newspapers like the Guardian or Der Spiegel might write editorials endorsing particular American candidates and expressing sentiments like “We hope the American people will elect X; X‘s policies speak to the establishment of world peace, something that we here in country Y are most eager for.”

American elections have, by virtue of their increased prominence in the American political calendar, also become worldwide entertainment events; they invite punters to lay bets; they drive up the ratings of many television stations and websites–worldwide–on the nights of the presidential debates and the election results. Americans are proud of this: look, the whole world is watching as we elect our leaders. Well, those folks want to participate too; they know the folks getting elected could make them lose their jobs, or worse, their lives. American election campaigns are conducted on the Internet, a global platform for communication and information transfer. This invites participation of a kind not possible in yesteryear, when non-Americans could only look on from afar as Americans debated among themselves about whom to vote for; now, on Facebook and Twitter and many other internet forums, those same folks can converse with Americans and participate in the American electoral process. Americans are used to this kind of participation and influencing on an informal basis: our European and South American and Asian and African friends often exclaim loudly how they hope we will elect X, not Y.

A global player, one as powerful and important as the US, one used to ‘participating’ in the affairs of the world, invites a corresponding participation in its policies; the world has long thought it would be nice if it got a say in electing the American president, because of the reach and extent of American power. With American elections now ‘opened’ to the world–thanks to the Internet–that participation has begun.

Gide’s Immoralist And The Existential Necessity Of The Colony

The immoralist at the heart of André Gide‘s The Immoralist, Michel, does not travel just anywhere; he travels to French colonies like Algeria and Tunisia; the boys whom he meets, is attracted to, and falls in love with are not just any boys; they are Muslim Arab boys. He is old; they are young. He is white; they are brown. He is sick and tubercular; they are young and exuberant, bursting at the seams with health and vitality. Their blood is redder, and flows more freely; Michel’s blood is black, and hideous, and disgusting. He is diseased, but as he spends time among his new companions, whose bodies and nakedness underneath their clothes he cannot take his eyes off of, his health improves and he begins to describe the arc of a journey to greater health and well-being, away from disease; he begins a journey from flirting with death to welcoming life in all its fullness. The language that Gide uses to describe Michel’s journey or passage is richly symbolic and metaphorical, and invites multiple interpretations, mingling as it does these descriptions of the physical with those of the mental, so that we are tempted to see Michel’s journey from bad to good health as his journey from being ‘a lost soul’ to being ‘a found self’; that much is straightforward.

But why place this journey in colonized lands, why make the vehicles of Michel’s transformation and self-discovery the colonized, the subjugated, the colonial subject? For one, we can see the colonizer use both the land and the peoples of the colony as his experiential space for self-discovery; it becomes one more of the services or functions that the colonized provides; besides markets, it provides an avenue and domain for self-construction; it becomes one more of the means by which the colonizer comes to realize himself. Because the colonized inhabits a world in which the colonizer has been, as it were, ‘marketed’, Michel finds, in the colonies and in the gaze of the colonial subject, one component of his identity: how a Frenchman is understood by those he has colonized. If the colonial identity is an indissoluble part of what it means to be a Frenchman in the twentieth century, then Michel has done the right thing by traveling to a French colony; it is there that he will find out what a Frenchman truly is.

But this salvation need not be individual; all of French culture and Western civilization may be redeemed in the colonies; it is where a decadent, dying civilization looks to be revitalized, to literally be brought back to life. French and Western civilization has become old and tubercular; its blood is polluted. But the Muslim Arab world is younger, even if immature; it promises a new vision of life to a culture on its deathbed and drags it back from its flirtation with death.

The colony is a material and spiritual and existential necessity; it extends the life of the colonizer; the journey to a new form of life for the colonizer begins there.

Neuroscience’s Inference Problem And The Perils Of Scientific Reduction

In Science’s Inference Problem: When Data Doesn’t Mean What We Think It Does, while reviewing Jerome Kagan‘s Five Constraints on Predicting Behavior, James Ryerson writes:

Perhaps the most difficult challenge Kagan describes is the mismatching of the respective concepts and terminologies of brain science and psychology. Because neuroscientists lack a “rich biological vocabulary” for the variety of brain states, they can be tempted to borrow correlates from psychology before they have shown there is in fact a correlation. On the psychology side, many concepts can be faddish or otherwise short-lived, which should make you skeptical that today’s psychological terms will “map neatly” onto information about the brain. If fMRI machines had been available a century ago, Kagan points out, we would have been searching for the neurological basis of Pavlov’s “freedom reflex” or Freud’s “oral stage” of development, no doubt in vain. Why should we be any more confident that today’s psychological concepts will prove any better at cutting nature at the joints?

In a review of Theory and Method in the Neurosciences (Peter K. Machamer, Rick Grush, Peter McLaughlin (eds), University of Pittsburgh Press, 2001), I made note¹ of related epistemological concerns:

When experiments are carried out, neuroscientists continue to run into problems. The level of experimental control available to practitioners in other sciences is simply not available to them, and the theorising that results often seems to be on shaky ground….The localisation techniques that are amongst the most common in neuroscience rely on experimental methods such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and magnetoencephalography (MEG). [In PET] a radioactive tracer consisting of labelled water or glucose analogue molecules is injected into a subject, who is then asked to perform a cognitive task under controlled conditions. The tracer decays and emits positrons and gamma rays that increase the blood flow or glucose metabolism in an area of the brain. It is now assumed that this area is responsible for the cognitive function performed by the subject. The problem with this assumption, of course, is that the increased blood flow might occur in one area, and the relevant neural activity might occur in another, or in no particular area at all….this form of investigation, rather than pointing to the modularity and functional decomposability of the brain, merely assumes it.

The fundamental problem–implicit and explicit in Kagan’s book and my little note above–is the urge to ‘reduce’ psychology to neuroscience, to reduce mind to brain, to eliminate psychological explanations and language in favor of neuroscientific ones, which will introduce precise scientific language in place of imprecise psychological descriptions. This urge to eliminate one level of explanation in favor of a ‘better, lower, more basic, more fundamental’ one is, to put it bluntly, scientistic hubris, and the various challenges Kagan outlines in his book bear out the foolishness of this enterprise. It results in explanations and theories that rest on unstable foundations: optimistic correlations and glib assumptions are the least of it. Worst of all, it contributes to a blindness: what is visible at the level of psychology is not visible at the level of neuroscience. Knowledge should enlighten, not render us myopic.

Note: In Metascience 11(1), March 2002.

An Ode To The Semicolon

I discovered semicolons in the fall of 1992. I had asked–on a lark of sorts–to read a term paper written by my then-girlfriend, who was taking a class in literary theory at New York University. In it, I noticed a ‘new’ form of punctuation; I had seen the semicolon before, but I had not seen it pressed so artfully into service. Here and there, my girlfriend had used it to mark off clauses–sometimes two, sometimes three–within a sentence; her placement turned one sentence into two, with a pause more pronounced than that induced by a comma. The two separated clauses acquired a dignity they did not previously possess; there was now a dramatic transition from one to the other, as opposed to the blurring, the running on, induced by the comma. I had not read writing like this before; it read differently; it spoke to a level of sophistication in expression that seemed aspirational to me. I immediately resolved to use the semicolon in my own writing.

And so I did; I plunged enthusiastically into the business of sprinkling semicolons over my writing; they sprang up like spring wildflowers all over my prose, academic or not. Like my girlfriend, I did not stop at a mere pair of clauses; triplets and sometimes quadruplets were common. Indeed, the more the merrier; why not just string all of them along?

Needless to say, my early enthusiasm for semicolon deployment necessitated a pair of corrections. (My girlfriend herself offered one; my ego was not too enlarged to make me reject her help.) One was to use the semicolon properly: that is, to use it as a separator only when there were in fact separate clauses to be separated, and not just when a mere comma would have sufficed. The other, obviously, was to cut down just a tad on the number of clauses I was stringing together. Truth be told, there was something exhilarating about adding on one clause after another to a rapidly growing sentence, throwing in semicolon after semicolon, watching the whole dramatic edifice take shape on the page. Many editors of mine have offered interventions in this domain; I’ve almost always disagreed with their edits when they delete semicolons I’ve inserted in my writing. To my mind, their edits ran the clauses together too much and produced clunkier sentences in the process.

I don’t think there is any contest; the semicolon is my favorite piece of punctuation. The period is depressing; it possesses too much finality. The comma is a poser; it clutters up sentences, and very few people ever become comfortable with, or competent in, using it. (I often need to read aloud passages of prose I’ve written in order to get my comma placement right.) The colon is a little too officious. (My ascription of personalities to punctuation marks comes naturally to a synesthete like me.) The semicolon combines the best of all three, typographically and syntactically. It looks good; it works even better. What’s not to like?

A Conversation On Religious Experience

A couple of summers ago, a friend and I waited at a parking lot by Cottonwood Pass in Colorado for a ride back to Buena Vista. Bad weather had forced us off the Colorado Trail, and we now needed transportation to the nearest lodging venue. A pair of daytrippers, a middle-aged couple, appeared, walking back to their parked vehicle, done with their viewing and photography for the day. We walked over and made our request: could we please hitch a ride with them? They were not going back all the way to Buena Vista, but only to a campground nearby; would that work? We said it would; we would find another ride from there. In we hopped, and off we went.

As we drove, introductions were made; we were from Brooklyn, our benefactors were visiting from Texas, sans children and grandchildren. When asked what I ‘did,’ I said I was a professor of philosophy at the City University of New York. Intrigued, they asked which classes I taught; when I listed philosophy of religion as one of them, they asked me if I was religious. I said that I wasn’t religious in belief or practice, but was very interested in the problems and questions that arose in that domain of philosophy. I asked my newly made friends if they were religious; they both said they were devout Christians. I asked them if their faith had been a matter of being born into a devout family, and they both replied that while they had been born into churchgoing families, each had had an individual ‘experience’ that had cemented their faith, given it a new dimension, and indeed caused them to say they had ‘found Christ’ only later in life–in each case, following a profound ‘crisis’ of one sort or another. When Christ ‘had come’ to them, they had ‘felt’ it immediately, and had never had any doubt ‘in their hearts’ from that point on. I listened, fascinated. I made note of the fact that I taught a section on ‘religious experiences’ in my class, and mentioned William James’s and St. Teresa of Avila’s theoretical and personal accounts of the phenomenon.

When my new friends were done recounting the story of their journey to their faith, they asked me again if I was sure I wasn’t religious. I said that I was quite sure I had no theistic belief–even as I remained fascinated by religion as a social, cultural, psychological, and intellectual phenomenon, and by the nature of religious feeling–which is why, of course, I had inquired into the nature of their religious belief and how they had come by it. In response, my friends said they were ‘relieved’ to hear of my attitude; they frequently skirted the subject in conversation with ‘strangers’ because they didn’t want anyone to feel they were proselytizing. I assured them I didn’t think they were, and that I had found the conversation singularly illuminating.

We had driven on past the campground that was supposed to be our destination; our friends said they found our conversation worth continuing; they would drop us off in Buena Vista. Rain clouds were still threatening, so this offer was most welcome. Soon, we found ourselves on Buena Vista’s Main Street, our destination for the day. We alighted, grabbed our backpacks, posed for a photograph or two, and then bade them farewell; I asked for their names, but did not write them down, and so have forgotten them. But not that conversation; its warmth and openness were refreshing and, in the context of the US and its endless ‘religious wars,’ a genuine novelty.