Iris Murdoch On Interpreting Our Messages To Ourselves

In Iris Murdoch’s The Black Prince (1973), Bradley Pearson wonders about his “two recent encounters with Rachel and how calm and pleased I had felt after the first one, and how disturbed and excited I now felt after the second one”:

Was I going to “fall in love” with Rachel? Should I even play with the idea, utter the words to myself? Was I upon the brink of some balls-up of catastrophic dimensions, some real disaster? Or was this perhaps in an unexpected form the opening itself of my long-awaited “break through,” my passage into another world, into the presence of the god? Or was it just nothing, the ephemeral emotions of an unhappily married middle-aged woman, the transient embarrassment of an elderly puritan who had for a very long time had no adventures at all? [Penguin Classics edition, New York, 2003, p. 134]

Pearson is right to be confused and perplexed. The ‘messages’ we receive from ‘ourselves’ at moments like these–ethical dilemmas being but the most vivid–can be counted upon to be garbled in some shape or fashion. The communication channel is noisy; and the identity of the communicating party at ‘the other end’ is obscure. Intimations may speak to us–as they do to Pearson–of both the sordid and the sublime, for we are possessed, in equal measure, by both the devilish and the divine; these intimations promise glory, but they also threaten extinction. What meaning are we to ascribe to them? What action are we to take at their bidding? A cosmic shrug follows, and we are left to our own devices all over again. ‘Listen to your heart’ is as useless as any other instruction in this domain, for ‘the heart’ also speaks in confusing ways; its supposed desires are as complex, as confusing, as those of any other part of ourselves. Cognitive dissonance is not an aberration, a pathological state of affairs; it is the norm for creatures as divided as us, as only superficially visible to ourselves, as possessed by the unconscious. (Freud’s greatest contribution to moral psychology and literature was to raise the disturbing possibility that it would be unwise to expect coherence–moral or otherwise–from agents as internally divided, as self-opaque, as us.)

We interpret these messages, these communiqués, from ourselves with tactics and strategies and heuristics that are an unstable mixture of the expedient, the irrational, the momentarily pleasurable; we deal with ‘losses’ and ‘gains’ as best we can, absorbing the ‘lessons’ they impart with some measure of impatience; we are unable to rest content and must move on, for life presses in on us at every turn, generating new crises, each demanding resolution. Our responses can only satisfice, only be imperfect.

The Clash were right, thus, to wonder, to be provoked into an outburst of song, by the question of whether they should ‘stay or go.’ We do not express our indecision quite as powerfully and vividly as they do, but we feel the torment it engenders in our own particular way.

That Elusive Mark By Which To Distinguish Good People From Bad

In Journey to the End of the Night, Céline’s central character, Ferdinand Bardamu, is confronted with incontrovertible evidence of moral goodness in Sergeant Alcide–who is nobly working away in a remote colonial outpost to financially support a niece who is little more than a perfect stranger to him. That night, as Bardamu gazes at the sleeping Alcide, now once again, in inactivity, utterly unremarkable and indistinguishable from others who serve like him, he thinks to himself:

There ought to be some mark by which to distinguish good people from bad.

There isn’t, of course. But that hasn’t stopped mankind from continuing to hold on to this forlorn hope in the face of the stubborn difficulty of making moral judgements and evaluations about our fellow humans. Sometimes we seek to evaluate fellow humans on the basis of simple tests of conformance to a pre-established, clearly specified, moral code or decision procedure; sometimes we drop all pretence of sophisticated ethical analysis and take refuge in literal external marks.

These external marks and identifiers have varied across space, time, and cultures. Sometimes shadings of skin pigmentation have been established as the distinguishing marker of goodness; sometimes it is the shape of the skull that has been taken to be the desired marker; sometimes national or ethnic origin; sometimes religious affiliation. (If that religious affiliation is visible by means of an external marker–like a turban, for instance–then so much the better. West Pakistani troops conducting genocide in East Pakistan in 1971 were fond of asking Bengali civilians to drop their pants and expose their genitals;¹ the uncircumcised ones were led off to be shot; their bodies had revealed them to be of the wrong religion, and that was all that mattered as the West Pakistani Army sought to cleanse East Pakistan of those subversive elements that threatened the Pakistani polity.)

Confronted with this history of failure to find the distinguishing external mark of goodness, perhaps emblazoned on our foreheads by the cosmic branding authority, hope has turned elsewhere, inwards. Perhaps the distinguishing mark is not placed outside on our bodies but will be found inside us–in some innard or other. Perhaps there is ‘bad blood’ in some among us, or even worse, some might have ‘bad brains.’ Unsurprisingly, we have turned to neuroscience to help us with moral decisions: here is a brain state found in mass murderers and criminals; innocents do not seem to have it; our penal and moral decisions have received invaluable assistance. But as a growing litany of problems with neuroscientific inference suggests, these identifications of brain states and their correlations with particular behavior, and the explanations that result, rest on shaky foundations.

In the face of this determination to seek simple markers for moral judgement my ‘There isn’t, of course’ seems rather glib; it fails to acknowledge the endless frustration and difficulty of decision-making in the moral domain–and the temptation to seek refuge in the clearly visible.

Note: R. J. Rummel, Death by Government, p. 323.

Steven Pinker Should Read Some Nietzsche For Himself

Steven Pinker does not like Nietzsche. The following exchange–in an interview with the Times Literary Supplement–makes this clear:

Question: Which author (living or dead) do you think is most overrated?

Pinker: Friedrich Nietzsche. It’s easy to see why his sociopathic ravings would have inspired so many repugnant movements of the twentieth and twenty-first centuries, including fascism, Nazism, Bolshevism, the Ayn Randian fringe of libertarianism, and the American alt-Right and neo-Nazi movements today. Less easy to see is why he continues to be a darling of the academic humanities. True, he was a punchy stylist, and, as his apologists note, he extolled the individual superman rather than a master race. But as Bertrand Russell pointed out in A History of Western Philosophy, the intellectual content is slim: it “might be stated more simply and honestly in the one sentence: ‘I wish I had lived in the Athens of Pericles or the Florence of the Medici’.”

The answers that Pinker seeks–in response to his plaintive query–are staring him right in the face. To wit, ‘we’ study Nietzsche with great interest because:

1. If indeed it is true that Nietzsche’s ‘ravings…inspired so many repugnant movements’–and these ‘movements’ have not been without considerable import–then surely we owe it to ourselves to read him and find out why they did so. Pinker thinks ‘It’s easy to see why,’ but surely he would not begrudge students reading Nietzsche for themselves to find out why? Moreover, Nietzsche served as the inspiration for a great deal of twentieth-century literature too–Thomas Mann is but one of the many authors to be so influenced. These connections are worth exploring as well.

2. As Pinker notes with some understatement, Nietzsche was a ‘punchy stylist.’ (I mean, that is like saying Muhammad Ali was a decent boxer, but let’s let that pass for a second.) Well, folks in the humanities–in departments like philosophy, comparative literature, and others–often study things like style, rhetoric, and argumentation; they might be interested in seeing how these are employed to produce the ‘sociopathic ravings’ that have had such impact on our times. Moreover, Nietzsche’s writings employ many different literary styles; the study of those is also of interest.

3. Again, as Pinker notes, Nietzsche ‘extolled the individual superman rather than a master race,’ which then prompts the question of why the Nazis were able to co-opt him in some measure. This is a question of historical, philosophical, and cultural interest; the kinds of things folks in humanities departments like to study. And if Nietzsche did develop some theory of the “individual superman,” what was it? The humanities are surely interested in this topic too.

4. Lastly, for the sake of his credibility, Pinker should find a more serious history of philosophy than Bertrand Russell’s A History of Western Philosophy, which is good only as a light read–it was written very quickly, as a popular work for purely commercial purposes, and was widely reviled in its time for its sloppy history. There is some good entertainment in there; but a serious introduction to the philosophers treated there can only begin with their own texts. If Pinker wants to concentrate on secondary texts, he can read Frederick Copleston’s Friedrich Nietzsche: Philosopher of Culture; its author, largely unsympathetic to Nietzsche’s views and indeed finding him morally repugnant, still considered them worthy of serious consideration and analysis–so much so that he thought it worthwhile to write a book about them. Maybe Pinker should confront some primary texts himself. He might understand the twentieth century better.

Neuroscience’s Inference Problem And The Perils Of Scientific Reduction

In Science’s Inference Problem: When Data Doesn’t Mean What We Think It Does, while reviewing Jerome Kagan’s Five Constraints on Predicting Behavior, James Ryerson writes:

Perhaps the most difficult challenge Kagan describes is the mismatching of the respective concepts and terminologies of brain science and psychology. Because neuroscientists lack a “rich biological vocabulary” for the variety of brain states, they can be tempted to borrow correlates from psychology before they have shown there is in fact a correlation. On the psychology side, many concepts can be faddish or otherwise short-lived, which should make you skeptical that today’s psychological terms will “map neatly” onto information about the brain. If fMRI machines had been available a century ago, Kagan points out, we would have been searching for the neurological basis of Pavlov’s “freedom reflex” or Freud’s “oral stage” of development, no doubt in vain. Why should we be any more confident that today’s psychological concepts will prove any better at cutting nature at the joints?

In a review of Theory and Method in the Neurosciences (Peter K. Machamer, Rick Grush, Peter McLaughlin (eds), University of Pittsburgh Press, 2001), I made note¹ of related epistemological concerns:

When experiments are carried out, neuroscientists continue to run into problems. The level of experimental control available to practitioners in other sciences is simply not available to them, and the theorising that results often seems to be on shaky ground….The localisation techniques that are amongst the most common in neuroscience rely on experimental methods such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and magnetoencephalography (MEG). [In PET] a radioactive tracer consisting of labelled water or glucose analogue molecules is injected into a subject, who is then asked to perform a cognitive task under controlled conditions. The tracer decays and emits positrons and gamma rays that increase the blood flow or glucose metabolism in an area of the brain. It is now assumed that this area is responsible for the cognitive function performed by the subject. The problem with this assumption, of course, is that the increased blood flow might occur in one area, and the relevant neural activity might occur in another, or in no particular area at all….this form of investigation, rather than pointing to the modularity and functional decomposability of the brain, merely assumes it.

The fundamental problem–implicit and explicit in Kagan’s book and my little note above–is the urge to ‘reduce’ psychology to neuroscience, to reduce mind to brain, to eliminate psychological explanations and language in favor of neuroscientific ones, which would introduce precise scientific language in place of imprecise psychological descriptions. This urge to eliminate one level of explanation in favor of a ‘better, lower, more basic, more fundamental’ one is, to put it bluntly, scientistic hubris, and the various challenges Kagan outlines in his book bear out the foolishness of this enterprise. It results in explanations and theories that rest on unstable foundations: optimistic correlations and glib assumptions are the least of it. Worst of all, it contributes to a blindness: what is visible at the level of psychology is not visible at the level of neuroscience. Knowledge should enlighten, not render us myopic.

Note: In Metascience, 11(1): March 2002.

Virginia Woolf On Autobiography And Not Writing ‘Directly About The Soul’

In Inspiration and Obsession in Life and Literature (New York Review of Books, 13 August 2015), Joyce Carol Oates writes:

[Virginia] Woolf suggests the power of a different sort of inspiration, the sheerly autobiographical—the work created out of intimacy with one’s own life and experience….What is required, beyond memory, is a perspective on one’s own past that is both a child’s and an adult’s, constituting an entirely new perspective. So the writer of autobiographical fiction is a time traveler in his or her life and the writing is often, as Woolf noted, “fertile” and “fluent”:

I am now writing as fast & freely as I have written in the whole of my life; more so—20 times more so—than any novel yet. I think this is the proof that I was on the right path; & that what fruit hangs in my soul is to be reached there…. The truth is, one can’t write directly about the soul. Looked at, it vanishes: but look [elsewhere] & the soul slips in.

I will freely confess to being obsessed by autobiography and memoir. Three planned book projects of mine, each in varying stages of early drafting and note-taking, are autobiographical, even as I can see more similar ventures in the offing; another book, Shapeshifter: The Evolution of a Cricket Fan, currently contracted to Temple University Press, is a memoir; yet another book, Eye on Cricket, has many autobiographical passages; and of course, I often write quasi-autobiographical, memoirish posts on this blog. In many ways, my reasons for finding myself most comfortable in this genre echo Woolf’s: I find my writing within its confines to be at its most ‘fertile’ and ‘fluent’–if it ever approaches those marks at all; I write ‘fast’ and ‘freely’ when I write about recollections and the lessons learned therein; I find that combining my past sensations and memories with present and accumulated judgments and experiences results in a fascinating, more-than-stereoscopic perspective that I often find genuinely illuminating and revealing. (Writing memoirs is tricky business, as all who write them know. No man is an island and all that, and so our memoirs implicate the lives of others, as they must; those others might not appreciate their inclusion in our imperfect, incomplete, slanted, agenda-driven, literary recountings of them. Still, it is a risk many are willing to take.)

Most importantly, writing here, or elsewhere, on autobiographical subjects creates a ‘couch’ and a ‘clinic’ of sorts; I am the patient and I am the therapist; as I write, the therapeutic recounting and analysis and story-retelling kick off; the end of a writing session has, at its best moments, brought with it moments of clarity and insight about myself to the most important of quarters: moi. More than anything else, this therapeutic function of autobiographical writing confirms yet another of Woolf’s claims: that “one can’t write directly about the soul. Looked at, it vanishes.” Sometimes, one must look at the blank page, and hope to find the soul take shape there instead.


Ramachandra Guha On The Lack Of Modern Indian Histories

In India After Gandhi: The History of the World’s Largest Democracy (HarperCollins, New York, 2007), Ramachandra Guha writes:

Of his recent history of postwar Europe, Tony Judt writes that ‘a book of this kind rests, in the first instance, on the shoulders of other books’. He notes that ‘for the brief sixty-year period of Europe’s history since the end of the Second World War – indeed, for this period above all – the secondary literature in English is inexhaustible’. The situation in India is all too different. Here the gaps in our knowledge are colossal. The Republic of India is a union of twenty-eight states, some larger than France. Yet not even the bigger or more important of these states have had their histories written. In the 1950s and 60s India pioneered a new approach to foreign policy, and to economic policy and planning as well. Authoritative or even adequate accounts of these experiments remain to be written. India has produced entrepreneurs of great vision and dynamism – but the stories of the institutions they built and the wealth they created are mostly unwritten. Again, there are no proper biographies of some of the key figures in our modern history: such as Sheikh Abdullah or Master Tara Singh or M. G. Ramachandran, ‘provincial’ leaders each of whose province is the size of a large European country. [p. 13; links added]

Guha’s analysis here is, sadly enough, almost wholly correct. Guha’s own ‘opus,’ cited above, runs to over 800 pages, and yet it is barely more than a sampler, an appetizer, a pointer to the many corners of modern Indian history that remain unexplored: in the face of a historical project as imposing as modern India’s, even such large works can do little more than gesture at their own insignificance. I’m not a historian by trade (and professional historians have accused me of being an amateur), but even my ‘casual’ efforts have brought me up against the lacunae in historical scholarship that Guha writes about. In the realm of military history, for instance, my co-author Jagan Mohan and I found–while working on our books on the 1965 and 1971 air wars between India and Pakistan–few if any published works on Indian military history, and had to rely largely on personal accounts–autobiographical and biographical–with all of their inherent frailties as sources of information. Official archival stores were hard to access, their points of entry blocked sometimes by official legal strictures, sometimes by bureaucratic inflexibility. To add final insult to injury, there simply wasn’t the readership–the all-critical market for publishers–for historical works such as ours. Quite simply, the failure that Guha speaks of was manifest at every level of the historical enterprise: actual histories were hard to come by; historical sources were meager; interest in histories and antiquities was only marginal. Under these conditions, the production of written history seemed intractable at best.

This state of affairs is especially peculiar in the context of the Indian popular imagination–one which finds its national pride grounded in the tremendous antiquity of India’s civilizations and cultures. It offers a stark reminder that the nationalist imagination all too often outruns the actual national enterprise.

Robert Morrison And Antoine Panaioti’s Nietzsche And The Buddha

Two recent books on Nietzsche and Buddhism–Robert Morrison’s Nietzsche and Buddhism: A Study in Nihilism and Ironic Affinities, and Antoine Panaioti’s Nietzsche and Buddhist Philosophy–do an exemplary job of examining, sympathetically and rigorously, some related questions of enduring philosophical interest: What is the relationship between Nietzsche’s writings and Buddhism? What were Nietzsche’s views on Buddhism? Was he grossly mistaken in his reading–if any–of Buddhist texts?

The answers these two texts provide are roughly similar.

First, Nietzsche had mixed views on Buddhism: he praised it for sounding the same alarm he was sounding to a decadent culture confronting the loss of its most cherished ideals and ‘fictions’; he criticized it for what he saw as its nihilistic, world-denying aspects. This latter viewpoint, as both Morrison and Panaioti are at pains to point out, rests on a systematic misunderstanding of key Buddhist concepts and theories. Nietzsche was handicapped in this regard–ironically, for a philologist–by his lack of fluency in the Indian languages, Sanskrit and Pali, essential for reading original Buddhist texts; he had to rely, perforce, on indirect access to the Buddhist corpus. Some of this indirect access, notably, was provided by Schopenhauer, who extracted from Buddhism a pessimism that Nietzsche ultimately found untenable and defeatist.

Second, Nietzsche and Buddhism share several points of resonance or ‘affinities.’ Both are committed to: a no-self theory of the self that denies the substantiality of an enduring self, which they describe as a ‘delusion’ and which serves to underwrite many other species of pernicious theorizing; a metaphysics that eschews ‘substance’–indeed, the no-self theory of the self serves to underwrite a no-object theory of objects or no-substantiality theory of substance (Buddhism employs the notion of “co-dependent arising” to deny independent, non-contingent existence to any thing or substance); and a rigorous practice of self-overcoming or self-mastery, a key component of which is the mastery of perspectives free of the various illusions and delusions that contribute to ‘world weariness’ or ‘pointless suffering.’ Moreover, both can be understood in ‘medical’ or ‘therapeutic’ terms; both aim, through their philosophizing, to ‘cure’ a certain kind of perplexity that has led to intellectual and physical ill-health. And both do it with an emphasis on practice, on modifying and altering the very ways in which we think and live.

Both Morrison and Panaioti know the relevant literatures exceedingly well; they’ve clearly mastered the Nietzschean corpus and engaged rigorously with original Buddhist texts. (Both seem to be fluent in Pali and Sanskrit and often contest older translations of technical terms in these languages.) They write clearly and do a wonderful job of making difficult Buddhist material more accessible. Morrison does this to a greater extent, engaging in several attempts to provide new interpretations of Buddhist terms and theses–not all of which will find approval among scholars of Buddhism, though they will applaud the rigor of the attempt.

Much academic writing these days is sterile and unreadable; these two books provide a much-needed counterexample to that claim.