Learning From Freud: Addiction, Distraction, Schedules

In An Anatomy of an Addiction: Sigmund Freud, William Halsted and The Miracle Drug Cocaine, Howard Markel writes:

At some point in every addict’s life comes the moment when what started as a recreational escape devolves into an endless reserve of negative physical, emotional, and social consequences. Those seeking recovery today call this drug-induced nadir a “bottom.”…The bottom that Sigmund experienced featured far more than the physical and mental ravages of consuming too much cocaine….Most recovering addicts insist that two touchstones of a successful recovery are daily routines and rigorous accountability.

As Sherwin Nuland noted in his review of Markel:

Around 1896, Freud began to follow a constant pattern of awakening before 7 each morning and filling every moment until the very late evening hours with the demands of his ever enlarging practice…writing, lecturing, meeting with colleagues and ruminating over the theories he enunciated in such articulate literary style.

Markel goes on:

It appears unlikely that Sigmund used cocaine after 1896, during the years when he mapped out and composed his best-known and most influential works, significantly enriched and revised the techniques of psychoanalysis and…attempted to ‘explain some of the great riddles of human existence.’

Because I consider myself an excessively and easily distracted person, one who finds that his distraction makes him miserable, I was struck by the description of the ‘drug-induced nadir’ that Markel refers to. In noting my own state of distraction, I wrote:

Like many users of the Internet I suffer terribly from net-induced attention deficit disorder, that terrible affliction that causes one to ceaselessly click on ‘Check Mail’ buttons, switch between a dozen tabs, log-in-log-out, reload, and perhaps worst of all, seek my machine immediately upon waking in the mornings.

The effect of this distraction on me is not dissimilar to that experienced by other sufferers: I sometimes feel a beehive has taken up residence in my cranium; my attention span is limited to ludicrously short periods; my reading skills have suffered; writing, always a painful and onerous task, has become even more so. Because of the failure to attend to tasks at hand, my to-do, to-read, to-write, to-attend-to lists grow longer and cast ever more accusing glances my way. Worse, their steadily increasing stature ensures that picking a starting point from any of them becomes a task fraught with ever-greater anxiety: as I begin one task, I become aware that several others are crying out for my attention, causing me to either hurry through the one I have started, or worse, to abandon it, and take up something else.

And:

I experience distraction as a fraying at the edges, a coming apart at the seams, a sundering of the center–whichever description you want to use, it’s all that in my feverish imaginings and experiencing of it.

Since my primary mode of distraction is ‘Net distraction, I’d like to offer another description of it. I sometimes use ‘screeching’ or ‘scratching’ in trying to describe the activity inside my cranium that makes me want to stand up and run away from reading or writing–and check mail or reload a page. All too quickly, when working on a computer, I need ‘release’, and the act of moving the mouse so that something else appears on my screen promises relief. A change of screens, that’s all it is. And ironically, I can never take in whatever it is that I switch to. My mind is too blank at that moment, still perhaps processing residual irritation. Then, seething with rapidly accumulating anxiety about my still-on-the-burner work, I switch back. A little later, the ‘scratching’ begins again. I jump in response. Repeat ad nauseam.

And then, I thought about some of the techniques I’ve used to try to combat these states of mind and being:

In the spring of 2009, as I sought to make a book deadline, I first tried to impose internet fasts on myself; I was only intermittently successful. I pulled off a few eight-hour abstentions, starting at 10AM and going till 6PM. I found them tremendously productive: I got long stretches of writing accomplished, and on my breaks, for diversion, read through a stack of unread periodicals. But I found it too hard; and soon, my resolve faltered, and I returned to the bad old days.

This past spring and summer, in an effort to inject some discipline into my writing habits, I began working in forty-five minute blocks; I would set a timer on my phone and resolve to work for that period without interruption. For a few weeks, this method worked astonishingly well. And then, again, my resolve decayed, and I slowly began to drift back to the constantly interrupted writing session, a nightmare of multiple tabs open at once, each monitored for update and interruption.

Or:

I have tried many strategies for partial or total withdrawal: timed writing periods (ranging from 30 minutes to an hour); eight-hour fasts (I pulled off several of these in 2009…to date, this remains my most successful, if not repeated since, intervention; since then, somehow, it has been all too easy to convince myself that when I work, I should stay online because, you know, I might need to ‘look something up’); weekend sabbaths (only accomplished once, when I logged off on a Friday night, and logged back on on Sunday morning); evening abstentions (i.e., logging off at the end of a workday and not logging back on when I reached home). None of these strategies has survived, despite each one of them bringing succor of a sort.

And I went on to conclude:

I do realize, as many others have, that all of this sounds most like an incurable, pernicious addiction.

I take some solace in the fact that the strategies I have adopted–even if unsuccessful–at least put me in some very good company.

The Pencil Eraser As Proustian Madeleine

I prepare for classes by reading the texts I have assigned. As I read, on occasion, I make notes in the margins or underline words and sentences. Not too vigorously or extensively, because I still suffer from old scruples and niceties having to do with a fetishistic respect for the printed word; it took me a long time to get over my hesitancy about marking up books. In graduate school, it took some doing for me to mark up even printed or photocopied versions of journal articles; I always preferred to make notes in a separate notebook. I cannot, still, mark up a book with a pen, but I have mustered up enough courage and wherewithal to mark up text with a pencil. (Suitably sharpened.)

Now, writing with a pencil is a curious sensation. I hardly ever write any more with one. So the mere contact of hand on pencil, pressing down on paper, feeling, watching, and sensing lead marks appear on paper and arrange themselves into shapes pregnant with meaning is an interesting enough experience. But it gets better.

Sometimes I make notes that are incorrect. Sometimes I call out an author in the margins with a ‘!’ or a ‘?’–even an odd ‘?!’ here and there–and sometimes with a more elaborate expression of surprise, disbelief, or skepticism, and then find out, a few paragraphs later, that I spoke too soon. In those cases, I shamefacedly return to the margin and erase my note, my mark-up.

When I first did so, I noticed a curious sensation manifest itself, one even more peculiar than the sensation manifested by my writing with a pencil. When I erase pencil marks, I apply eraser to paper and scrub, hard. Then, I brush off the accumulated residue of paper, lead and eraser material, sometimes blowing it off the paper. These archaic bodily sensations, these antique bodily memories, this embodied set of memories, these were all part of my normal arsenal of daily sensations and bodily interactions at a particular point in my life–schooldays mostly, of course, but also time spent at home with my books. Locked up in that eraser and my use of it–just like it is in pieces of music–is a kind of Proustian madeleine then.

Using an eraser summoned up memories of notes taken in classrooms, of sharpening pencils, of homeworks completed late at night, of correcting drafts of essays and exams, of bemoaning the appearance of black smudges in place of neat handwriting, of painfully wringing my hand after a furious bout of scribbling and erasing. (Confession: I never used the term erasing when I was in school; then, I used a ‘rubber’ and rubbed. And yes, that nomenclature is a leading contender for the most hilarious difference between American English and the English spoken elsewhere.)

So it was via that eraser that an older being suddenly emerged and poked its head around. There are always multiple selves in us; many lie dormant; and some, if subjected to the right stimulus, the right madeleine, can remind us of their presence, and of the time when they were formed and made their way into our being.

Teaching Wittgenstein And Making The Familiar Unfamiliar

I’m teaching Wittgenstein this semester–for the first time ever–to my Twentieth-Century Philosophy class. My syllabus requires my students to read two long excerpts from the Tractatus Logico-Philosophicus and Philosophical Investigations; bizarrely enough, in my original version of that exalted contract with my students, I had allotted one class meeting to a discussion of the section from the Tractatus. Three classes later, we are still not done; as you can tell, it has been an interesting challenge thus far.

The style of the Tractatus is notorious for the difficulties it can create for the unprepared. Many students find its terseness, its statement in quasi-mathematical form as a series of seeming definitions, lemmas, theorems and corollaries–as part of a presentation of a grand total of seven propositions–off-putting and abstruse. Yet others find in it a curious beauty, a poetic statement, stark and austere, pregnant with meaning and suggestion. The content of the Tractatus can be forbidding too. Many philosophical doctrines–the picture theory of language, the truth-functional account of propositions, logical atomism and the correspondence theory of truth, the verification theory of meaning, the ‘no-sense’ theory of ethical and emotive statements–may be found here, varying in their level of implicit or explicit statement. A special vocabulary is employed, and the meanings of many of the special terms of art–‘facts’ for instance–have to be unpacked carefully.

I have read Wittgenstein before, and indeed, did my dissertation with a logician, Rohit Parikh, who doubled as a Wittgenstein scholar. (This excellent paper by Juliet Floyd explores the several dimensions of his appropriation of Wittgensteinian themes in his work.) For several years during graduate school I attended a discussion and reading group, conducted by Parikh, which often veered off into conversations on Wittgensteinian themes. Years after I completed my dissertation, I realized that many of its fundamental presuppositions and descriptions bore a similar stamp. But I had never taught Wittgenstein.

Now I have. And so, yet again, I’ve been reminded of how radically different my relationship to a philosophical text or doctrine becomes once I’ve had occasion to teach the material. I read differently; I critique differently, trying to anticipate the ambiguities my students might encounter; I notice more in the text, I seize on more. And then, in the classroom, as I work directly through the reading with my students, my relationship with it changes yet again.

Sometimes, my teaching has consisted of making a few opening statements, previewing the theories and theses to be presented, and then turning to the text to find their statements within. I invite students to point me to particular propositions that they have found thought-provoking and/or difficult. At times, I have read aloud sections in class, stopping to offer and receive–along with the class–explications and exegesis. I’ve used the ‘reading-aloud-in-class’ method before; in that case, for Leibniz and Kant. What I wrote then about that particular method of approaching a philosophical text still holds:

First, more careful exegesis becomes possible, and little subtle shadings of meaning which could be brushed over in a high-level synoptic discussion are noticed and paid attention to (by both myself and my students). Second, students become aware that reading the text closely pays dividends; when one sentence in the text becomes the topic of an involved discussion, they become aware of how pregnant with meanings these texts can be. Third, the literary quality of the writing (more evident in Leibniz and Freud than in Kant) becomes more visible; I often stop and flag portions of the text as having been particularly well-expressed or framed. The students become aware that these arguments can be evaluated in more than one dimension: analytical and artistic perhaps.

This method is exhausting, and that is an understatement. There is the obvious physical strain, of course, but doing this kind of close reading is also intellectually taxing. There is more to explain, more to place in context.

Now, with Wittgenstein and the Tractatus, I am struck again by how the seemingly familiar takes on a little of its older novelty.

On Not Recommending One’s Choices

Recently, all too often, I catch myself saying something like the following: “I took decision X, and I have my fair share of regrets and self-congratulation about it, but I would not recommend X to anyone,” or “In all honesty, I couldn’t recommend that you take decision X as I did.” Or something like that: I took this path, and I’ve reconciled myself to it, but I cannot recommend that you do the same–even with the express caveat to be prepared for mixed blessings, which would seem to provide all the ‘cover’ needed. (The kinds of decisions I have in mind include some of the most momentous of my life: immigrating, choosing a graduate education and then an academic career, entering a monogamous relationship, and having a child.)

Some of this hesitancy is, I think, quite straightforward. Many of these reasons–cultural, intellectual, psychological–are familiar and infected with a favorable assessment of ourselves and others. We are reluctant to preach and proselytize; we are modest, and think it inappropriate to convey the impression of having gotten things right; we do not want to oversell the good and we do not want to understate the bad–we do not want to brag, we do not want to whine; we want others to take on the terrible responsibility we felt when we took those decisions; we value the boundaries of the autonomous protective space that others have built up around themselves (see: ‘reluctance to preach…’ above). And lastly, I think, a less exalted, but related, reason: we do not want to be saddled with the burden of having pointed out the path to someone; we do not want to be ‘blamed’ when things go wrong. (There are dozens of web sites, or at least pages, dedicated to getting ‘modern, sensitive’ parents to overcome their reluctance to preach to their kids, urging them to ‘just do it’ and ‘say something’; don’t be afraid of being a ‘hypocrite’ or a ‘preacher’ if your child’s safety is at stake, and so on.)

I experience my hesitancy as grounded in all these reasons, of course. But there is also another, quite fundamental, ground for my–and possibly others’–failure to preach. I am never quite sure if my interlocutor and I are talking about precisely the same thing: too many dimensions and facets of their existential choices remain hidden, unclear, or ambiguous to me. I do not know whether all the paths of conduct that are entailed by these decisions are understood as such by them; I do not know if they mean, or refer to, the same objects and states of affairs as I do. These differences, always minor in the context of conversations with most of those we know, acquire an added facet when we encounter something like a truly crucial choice–made by someone else, another possessor of a unique, only partially accessible perspective.

That is, much like in another state of ignorance that I described in an earlier post about not interfering with others’ self-conceptions, I am reluctant to act for fear of blundering into an unknown space with inadequate navigational aid.

Meritocracies, Rankings, Curricula: A Personal Take On Academic Philosophy

Some six years ago, shortly after I had been appointed to its faculty, the philosophy department at the CUNY Graduate Center began revising its long-standing curriculum; part of its expressed motivation for doing so was to bring its curriculum into line with those of “leading” and “top-ranked” programs. As part of this process, it invited feedback from its faculty members. As a graduate of the Graduate Center’s Ph.D program, I thought I was well placed to offer some hopefully useful feedback on its curriculum, and so, I wrote to the faculty mailing list, doing just that. Some of the issues raised in my email are, I think, still relevant to academic philosophy. Not everybody agreed with its contents; some of my cohort didn’t, but in any case, perhaps this might provoke some discussion.

Here, reproduced almost verbatim, is that email:

Perhaps I can throw my tuppence in the pile, by offering some remarks based on my experiences as a graduate of this Ph.D program, and by commenting on whether changing the curriculum to bring it into line with “leading” or “top-ranked” programs is a worthwhile motivation or not.

Firstly, I question the need to bring our curriculum into line with that of “leading” programs. I remain unconvinced that rankings of philosophy programs are a serious indicator of the education they provide. In the bad old days, rankings of philosophy programs seemed to be a set of indices that reflected the star power of the faculty. When NYU’s Ph.D program went “live”, its ranking magically jumped to 2 or 3, without that department having produced a single Ph.D, or having given any indicator whatsoever that their graduates were “better philosophers” than the ones produced by ours.

While the Leiters of this world have made their Reports more comprehensive, it is still not clear to me that the rankings are saying anything worthwhile about how well they *prepare* their students. If we had some solid data for saying that a particular curriculum is a significant causal factor in the philosophical acumen of its graduates, then I’m all for major change. Without that I’m a little reluctant to tinker so extensively.

A significant set of reasons why graduates of XYZ University (please replace with your favorite top-ranked department) are able to get good jobs is because they have had:

a) better financial support and are able to concentrate more on coursework and writing projects;

b) more institutional support for research activities like visiting conferences and building up a solid professional network;

c) more ‘star faculty’ at their disposal who are then able to tap into their rich network of professional contacts, write the important letters, make the important phone calls after the APA and ensure things like invited chapters in edited collections and the like.

The academy, like most other institutions in this world of ours, follows the Matthew Principle: those that have, get more.

I attended classes at NYU and Columbia, and interacted with graduate students from many of the programs in this region. My cohort was second to none in their philosophical chops. I never thought, “If only our curriculum was structured differently, then we’d be the ones with eight interviews at the APA’s Eastern Division Meeting.”

What we lacked the most perhaps was some sense of professionalization in our discipline. We spent most of our time wondering how we would graduate given our financial situation, how we would clean up those incompletes that had accumulated, and so on. Many of us were not bold enough to send papers to professional conferences or journals. We started to think about publications a little late in the game. This is what needs to change the most in my opinion.

I have a feeling some of this already has. I see more students from this program publishing in professional journals and conferences, learning the vagaries of the reviewing process, and most fundamentally, starting to see themselves as professors in training. May this process continue.

We can most help our graduates by making sure they produce scholarly output by the time they graduate. A publication in a top-ranked journal or two, possibly as the result of a semester-long mentored study with a faculty member. Done right, this could be of considerable value to the faculty member as well. It seems this idea (or some variant thereof) is on the table, and I’m all for it.

My experience with the Grad Center’s curriculum was largely positive. I enjoyed the Core courses and the broad grounding they provided in the central issues of the discipline. If I had a complaint–and this was echoed by many of my cohort–it was that the classes were often quite ahistorical. Some or most of the reading lists/syllabi were almost exclusively 20th century in content. I would be in favor of standardizing core reading lists so as to make them more comprehensive and rigorous, but I’m not overly optimistic that any sort of consensus would be reached.

My exam experiences were mixed. I enjoyed studying for the written and oral exams because again, I felt I gained a synoptic perspective on the discipline. Of the exams, the oral exam was the most useful. I felt one of the written exams had become a little silly because its questions had become predictable. And the other exam was so out in left field that I felt blindsided by the lack of a definitive reading list. But this problem has been taken care of–I believe–thanks to structured reading lists. I’m not against getting rid of the comprehensives because the education they aim to impart can be provided by other requirements.

I did my 60 credits for coursework as follows: six cores (Metaphysics, Epistemology, Philosophy of Language, Ethics, Logic, Social and Political Philosophy); one independent study in Mathematical Methods for Physicists at NYU; one class on Space and Time at Columbia; one class on Film and the City at the GC; and eleven other classes from our Departmental offerings. I felt my education was well-rounded, and that I had numerous opportunities to specialize in many different fields. At no stage in my Ph.D or during the job hunt, did I feel the curriculum had been a problem.

I wished more professors had urged me to convert my term papers into conference presentations, or to take the germ of an idea in there and explore it further, possibly for a conference presentation or a journal article.  That’s what I felt was missing.

As always, I would be very interested in comments.

Philosophical Silencing: A Follow-Up

In response to my post on an act of philosophical silencing, Wesley Buckwalter wrote the following comment (over at the NewAPPS blog, where I cross-posted):

As you know, I was the gentleman that made that remark in a private facebook thread with a close friend. If I recall correctly, people in that thread were asking about whether certain kinds of thought experiments were typically referred to as “Gettier Cases”. I said that they were, despite how inaccurate or uninformative it might be to do so, in part because of the alternative traditions you cite. I’m sorry you interpreted my remark as silencing my friends on facebook. Personally I believe that philosophers should abandon the notion of “Gettier cases” and that the practice of labeling thought experiments in this way should be discouraged. If you are interested, I have recently argued for this in two articles here (http://philpapers.org/rec/BLOGCA) and here (http://philpapers.org/rec/TURKAL).

Many thanks to Wesley for his clarification. His initial comment, which I cited, did not acknowledge the content of the other comment I had quoted, and neither did it mention the presence of “alternative traditions” as a reason for the stance that he takes in the first of the two papers he refers me to. Those papers, if I remember correctly, were not cited in the thread. So, in the comment he had initially made, it had seemed to me that the amendment offered by the first commenter had not been taken on board. (In the Gettier case paper, Wesley refers to the following article–Turri, John. 2012. In Gettier’s Wake. In S. Hetherington (Ed.) Epistemology: The Key Thinkers. Continuum Press–as citing the Indian philosopher Sriharsa as someone who has offered similar examples. I am obviously very glad to see such an acknowledgment made in a published work.)

Let me go on to say that the attitude I was interested in highlighting, even if not instantiated in this particular token, is an existent type. (As you can tell, I was trained as an Anglo-American analytical philosopher.) Which is why I was not interested in naming individuals but in pointing to the existence of an intellectual stance. To the commenter Chris, who thinks he was ‘misled’, let me direct the following question: What were you misled about? That an unnamed individual indulged in silencing, or that the silencing of academic conversations about alternative philosophical traditions exists in academic philosophy? Perhaps my excessive familiarity with such acts of silencing, thanks to twenty-three years of utter failure in provoking a conversation about Indian philosophy, led me to the kind of conclusions I drew. I don’t think the conclusion to be drawn in response to my original post is that all is good, there is nothing to see here, and that we should just move on.

I started studying philosophy twenty-three years ago. In that time, I’ve only managed to provoke conversations about alternative philosophical traditions with the following demographics: one graduate school friend of mine who asked me a few questions about Indian philosophy while we were drinking beers, one senior professor who teaches Buddhism (among other things), my dissertation adviser (an Indian) who is a practicing Buddhist, and the attendees at a conference on Eastern philosophy a few years ago. In that same period, I’ve initiated several conversations about Indian philosophy, and have had them all shot down with varying degrees of skepticism and disdain. My worst mistake was to try to talk about Buddhist theories of relational consciousness with the members of a class on consciousness who were going down the usual Nagel-Block-Rosenthal-Ramachandran-Churchland et al route.

I realized over the years that most people I talked to in philosophical academia conflated ‘Eastern philosophy’ with ‘mysticism’. In response, I would sometimes point to the ‘harder’ schools: Samkhya and Lokayata (or Carvaka). The latter, in particular, was materialist in its orientation; perhaps that would appeal to the hard-edged analytical types I hung out with, the ones so enamored of science? Sometimes I would try to talk about Nyaya; you know, logic and inference, and all that good stuff that analytical types like and love? No dice. It never worked. I was perceived as either indulging in a kind of facile ‘We’ve done it all before!’–perhaps like someone invoking the glories of the Nubian empire in a modern conversation about technological and cultural achievements–or dragging in wishy-washy pale imitations of the real thing. (Logic only started with Frege, Russell, and Wittgenstein, dontcha know?)

But, of course, those traditions were not the only ones so dismissed. Within ‘Western philosophy’ I have heard graduate students who had never read Foucault dismiss him as ‘useless’, describe feminist theory as fundamentally misguided, and the less said about critical race theory, the better.

A few weeks ago, I posted a photograph of an old family friend, a former professor of philosophy, with the following caption:

A photo of my brother and myself with Dr. Dhirendra Sharma, a man I deeply admire and respect. He is the author of _The Negative Dialectics: A Study of the Negative Dialecticism in Indian Philosophy_, _The Differentiation Theory of Meaning in Indian Logic_, a critic of India’s nuclear program back in the 1970s, (when he was writing about “appropriate technology”), an environmental activist working to preserve the Garhwal Himalayas, and going back further, an anti-Vietnam war activist when he had tenure at Michigan State. He is now in his 80s, fit as a fiddle, bright as ever. I aspire to his health and wisdom.

Posting that photograph reminded me of an incident that occurred during my thirtieth birthday. On that day, many of my graduate school friends showed up to help me celebrate. Some of us moved to my room to drink beer and smoke cigarettes. I then owned one of Professor Sharma’s books and I took it down from the shelves and thrust it toward one of my friends. Because it featured ‘meaning’ in its title, and because all of us, as analytical types, seemed suitably obeisant toward philosophy of language, I thought it might get someone interested in opening it and taking a look. Instead, it was contemptuously waved off, even when I desperately said that it invoked distinctions that were reminiscent of the Fregean distinction between sense and reference. No one accepted the book held out, and it remained unopened.

Silencing exists.

An Act Of Philosophical Silencing

A few months ago, I noticed an interesting and telling interaction between a group of academic philosophers. A Facebook friend posted a little note about how one of her students had written to her about having encountered a so-called “Gettier case”: the student had acquired a true belief for invalid reasons. In the email, the student described how he/she had been told the ‘right time’ by a broken clock. The brief discussion that broke out in response to my friend’s note featured a comment from someone noting that the broken clock example is originally due to Bertrand Russell. A little later, a participant in the discussion offered the following comment:

Even though the clock case is due to Russell, it’s worth noting that “Gettier” cases were present in Nyāya philosophy in India well before Russell, for instance in the work of Gaṅgeśa, circa 1325 CE. The example is of someone inferring that there is fire on a faraway mountain based on the presence of smoke (a standard case of inference in Indian philosophy), but the smoke is actually dust. As it turns out, though, there is a fire on the mountain. See the Tattva-cintā-maṇi or “Jewel of Reflection on the Truth of Epistemology.” [links added]

In response to this, one gentleman wrote:

[T]here are countless cases that are standardly referred to as gettier kinds despite author, radical diversity, historical inaccuracy

I found this response peculiar, and yet, interestingly revealing.

Naming a particular fact-pattern, one used in a standard pedagogical example, as a “Gettier case” is not an innocent act. It is fraught with significance. It attaches the name of a person, an individual philosopher, to an entire range of philosophical cases used to illustrate epistemological principles. That person, that philosopher, does not come unattached; his name brings in its train an entire philosophical tradition and serves to stamp its institutions and its personnel with the imprimatur of philosophical innovator, as worthwhile contributors to a hallowed–and well-established and recognized–tradition. Because of this naming process, in part, an entire area of philosophical work is marked off and stamped with a certain kind of ownership.

Of even more interest to me is the response I made note of. A philosophical discussion is underway, proceeding along familiar, well-worn lines. Names of well-known philosophers from well-known traditions roll off everyone’s lips. Then, an interjection is made: politely pointing out that the nomenclature in use has an etymology that is not always acknowledged. This reminder is provided, I repeat, politely. There is no snark, and pointers to references are provided for the interested reader. It is the very model of a respectful academic contribution to a philosophical discussion; I dare say I’d call it a useful philosophical contribution for the interested scholar of philosophy.

The response to this contribution–the first one, before any welcoming acknowledgments can be made–is, roughly, to cease and desist. There’s a conversation going on; it’s following the usual well-worn path, and you’d like us to look elsewhere? The nerve. There is no acknowledgment of an alternative tradition.

This is what silencing looks like.

Addendum: In response to my post, Professor Alan Richardson of the University of British Columbia wrote to me saying:

I find it interesting that the stopped clock example, which Russell mentions in a sentence of his 1948 Human Knowledge (on p 154 of the 1948 Simon and Schuster edition) would have been known to Russell (indeed to have been derived by Russell, one imagines) from Lewis Carroll’s little 1898 essay “The Two Clocks.”

Here’s a version of the Carroll essay from the web. 

So, Russell’s example gets subsumed under “Gettier cases” and what I have to think is the inspiration for it (the Carroll essay) goes missing. Yes, just another example of “the Matthew Effect” but given what your post was about, it seemed interesting enough.

A Paradigmatic Example Of A Philosophical Dickhead

Over at the Rough Ground, Bharath Vallabha has an interesting and critical post on the institutional biases implicit and explicit in the ranking of philosophers. He takes as target a recent poll that ranked the Top Twenty Anglophone Philosophers. Vallabha notes the list’s most prominently featured institutions and philosophical traditions and its narrow emphases, and goes on to conclude:

At its root what “Anglophone philosophy” picks out is not a language or even a philosophical tradition (like Logical Positivism or Ordinary Language Philosophy), but simply the network of departments which are considered to form a unit. Therefore “Anglophone philosophy” is just another way of saying: “doing philosophy this way, what we do, at these departments.”

Unsurprisingly, given Vallabha’s recent persistence in making this–and related–critiques of the institutions of academic philosophy, his post provokes the following comment from ‘Anonymous’:

I think you’re getting to be a broken record on this topic. We get that you think it’s all about the sociology of institutions and connections, and not intellectual content, but have you really argued that or just asserted it? Can you name an Indian philosopher in the last fifty years who wrote in English and explain why his (or her) work was important, indeed, as important as any of those in the top 20 or the top 30 on the list?

When Vallabha responds, offering reasons for his critique and citing as an example of an ‘important philosopher’ J. N. Mohanty (a philosopher I can bet good money Anonymous has never heard of), Anonymous comes back with a series of rapid-fire questions. (Picture, if you will, this querulous questioner at an academic seminar.)

You have not explained why Mohanty’s work was important, other than saying it bridges traditions. Why is that important? And how is that comparable to, for example, Kripke’s contributions to modal logic and our thinking about meaning and reference? What important philosophical theses are due to Mohanty? Can you state them for us?

Anonymous’ opening comment was a very good example of a very particular style of doing philosophy–one I am intimately familiar with thanks to my experience attending philosophy colloquia. Here, Anonymous opens with an accusation of ‘overkill’; no reasons are given for this characterization. Instead, it is assumed that emphasis and persistence are philosophical sins (especially when they concern the ‘sociology of institutions’, an unimportant issue to be sure). Then, interestingly enough, for someone familiar enough with the content of Vallabha’s posts to say they sound like ‘a broken record,’ he asks, “Have you argued it or just asserted it?” Perhaps Anonymous can tell us why–i.e., offer reasons why he thinks Vallabha is only asserting and not arguing. Then, we have some aggressive interrogation, mixed in with a healthy dose of disbelief: “Can you name an Indian philosopher…” Clearly, if an Indian philosopher writes a post critical of the Anglo-American tradition in philosophy, it must be because he is upset about Indian philosophers being left off some exalted list. And, of course, if Anonymous hasn’t heard of them, they don’t exist.

There is a masterful engagement here with the content of Vallabha’s post, one that suggests a really well-trained philosophical mind–one perhaps keenly honed on modal logic and repeated readings of Naming and Necessity. Having thrown these rhetorical firecrackers (and thus, in his own exalted mind, having scattered the advancing forces of critique directed at the high temples of Anglophone philosophy), Anonymous does not stick around to offer, gasp, reasons for his skepticism and disdain in response to questions put to him by Martin Shuster:

Anonymous – can you explain why modal logic or thoughts about meaning/reference are important? Why are they more important than thinking about how to bring together disparate traditions and people, or indeed whatever (other) issues Mohanty thought about? Thinking about how to bring together traditions might do much to alleviate human suffering, and one could argue then, that it is far more valuable than advances in modal logic or theories of reference. The thing is, either philosophy–understood here in the broadest sense as self reflection and critical thought–is important or it is not. If it is, then there is no way to decide, in advance, which issues are more important than others, and therefore which figures.

I’ve written a couple of posts before on the discursive environment in academic philosophy. They were titled On The Lack of Women in Philosophy: The Dickhead Theory and The Dickhead Theory of Academic Philosophy, Revisited. In those posts, I was indulging in some hand-waving, referring to a class of academic philosophers without naming names or citing paradigmatic examples. Well, I still don’t have names, but I do have a paradigmatic example.

Anonymous is a dickhead. And he–I use that pronoun advisedly–is not alone.

‘Don’t Call Me A Philosopher’

I cringe, I wince, when I hear someone refer to me as a ‘philosopher.’ I never use that description for myself. Instead, I prefer locutions like, “I teach philosophy at the City University of New York”, or “I am a professor of philosophy.” This is especially the case if someone asks me, “Are you a philosopher?” In that case, my reply begins, “Well, I am a professor of philosophy…”. Once, one of my undergraduate students asked me, “Professor, what made you become a philosopher?” And I replied, “Well, I don’t know if I would go so far as to call myself a philosopher, though I did get a Ph.D in it, and…”. You get the picture.

I’m not sure why this is the case. I think folks who have Ph.Ds in mathematics or physics or economics, and who teach those subjects and produce academic works in those domains, have no hesitation in calling themselves mathematicians or physicists or economists.

Part of the problem, of course, is that in our day and age, in our culture, ‘philosopher’ has come to stand for some kind of willful pedant, a non-productive member of society, content to not contribute to the Gross Domestic Product but to merely stand on the sidelines and take potshots at those who actually produce value. The hair-splitter, the boringly didactic drone. (Once, shortly after a friend and I had finished watching Once Were Warriors, we began a discussion of its merits. As I began pointing out that the director’s explicit depiction of violence toward women might have been necessary to drive home a broader point about the degradation of Maori culture, my friend interrupted, “There you go, being philosophical again! Can’t you just keep things simple?”).

But this modern disdain for the ‘philosopher’, this assessment of her uselessness, her unemployability, is not the only reason that I shrink from being termed one. There is another pole of opinion that I tend toward: ‘philosopher’ sounds a little too exalted, a little too lofty; it sounds insufferably pompous. It comes packaged with too many pretensions, too many claims to intellectual rectitude and hygiene. Far too often, that title has served as cover for too many sorts of intellectual prejudice. To describe myself thus, or to allow someone else to do so, would be to permit a placement on a pedestal of sorts, one I’m not comfortable occupying. (This situation has not been helped by the fact that when someone has described me thus in company, others have giggled and said “Oh, you’re a philosopher now?” – as if I had rather grandiosely allowed such a title to be assigned to me.)

This discomfort arises in part from my self-assessment of intellectual worth, of course. I do not think I am sufficiently well-read in the philosophical literature; there are huge, gaping, gaps in my education. I remain blithely unaware of the contours of many philosophical debates and traditions; the number of classics that I keep reminding myself I have to stop merely quoting and citing and actually read just keeps on growing. I do not write clearly or prolifically enough. And so on. (Some of these feelings should be familiar to many of my colleagues in the academic world.)

For the time being, I’m happy enough to make do with the opportunity that I’ve been offered to be able to read, write, and teach philosophy. The titles can wait.

The Philosophical Education Of Scientists

Yesterday, in my Twentieth Century Philosophy class, we worked our way through Bertrand Russell‘s essay on “Appearance and Reality” (excerpted, along with “The Value of Philosophy” and “Knowledge by Acquaintance and Knowledge by Description” from Russell’s ‘popular’ work The Problems of Philosophy.) I introduced the class to Russell’s notion of physical objects being inferences from sense-data, and then went on to his discussions of idealism, materialism, and realism as metaphysical responses to the epistemological problems created by such an understanding of objects. This discussion led to the epistemological stances–rationalism and empiricism–that these metaphysical positions might generate. (There was also a digression into the distinction between necessary and contingent truths.)

At one point, shortly after I had made a statement to the effect that science could be seen as informed by materialist, realist, and empiricist conceptions of its metaphysical and epistemological presuppositions, I blurted out, “Really, scientists who think philosophy is useless and irrelevant to their work are stupid and ungrateful.” This was an embarrassingly intemperate remark to have made in a classroom, and sure enough, it provoked some amused twittering from my students, waking up many who had, until then, been paying only partial attention to my ramblings.

While I always welcome approving responses from my students to my usual lame attempts at humor, my remark was too harshly phrased. But I don’t think it is false in at least one sense. Too many scientists remain ignorant of the philosophical presuppositions of their enterprise, and are not only proud of this ignorance, but bristle when they are reminded of them. Too many think claims of scientific knowledge are only uselessly examined for their foundations; too many assume metaphysics and physics don’t mix. And all too many seem to consider their scientific credentials as being burnished by making a withering attack on the intellectual competence of philosophers and the intellectual sterility of their work. Of course, many will do so by making a philosophical argument of some sort: perhaps that philosophical questioning of the foundations of science is in principle irrelevant to scientific practice.

I get some of the scientists’ impatience. Who likes pedantry and hair-splitting? And yes, many philosophers are embarrassingly ignorant about actual scientific theory and practice. But not most of them. I wonder: Did these scientists never take a class on the history of science? Did they never study the process by which theories come to be advanced, challenged, modified, rejected, formed anew?

I have long advocated–not in any particular public forum, but in some private conversations–that the Philosophy of Science class taught by philosophy departments should really be a History and Philosophy of Science class. You can’t study the history of science without ‘doing’ the philosophy of science, and you can’t study the philosophy of science without knowing something about its history. One can only hope that those who study science with an eye to becoming its practitioners would at least be exposed to a similar curricular requirement. (I made a similar point in a post that was triggered by the Lawrence Krauss-David Albert dispute a while ago.)

Incidentally, I’m genuinely curious: Is it just me or does it seem that this kind of ‘scientific’ rejection of the philosophical enterprise is a modern–i.e., late twentieth-century onwards–disease?