Brave Analytic Philosophers Use Trump Regime To Settle Old Academic Scores

Recently, Daniel Dennett took the opportunity to, as John Protevi put it, “settle some old academic scores.” He did this by making the following observation in an interview with The Guardian:

I think what the postmodernists did was truly evil. They are responsible for the intellectual fad that made it respectable to be cynical about truth and facts. You’d have people going around saying: “Well, you’re part of that crowd who still believe in facts.”

Roughly, postmodernism brought you Donald Trump. If only Trump voters hadn’t read so much Deleuze or Derrida or Spivak, we wouldn’t be in the mess we are now. Dennett has now been joined in this valiant enterprise of Defending Truth and Knowledge by Timothy Williamson, who makes the following remarks in an interview with The Irish Times:

No philosophical manoeuvre can stop politicians telling lies. But some philosophical manoeuvres do help politicians obscure the distinction between truth and falsity.

When I visited Lima, a woman interviewed me for YouTube. She had recently interviewed a ‘postmodernist’ philosopher. When she pointed at a black chair and asked ‘Is that chair black or white?’ he replied ‘Things are not so simple’.

The more philosophers take up such obscurantist lines, the more spurious intellectual respectability they give to those who try to confuse the issues in public debate when they are caught out in lies. Of course, many things in public affairs are genuinely very complicated, but that’s all the more reason not to bring in bogus complexity….

Obviously it wasn’t mainly postmodernism or relativism that won it for Trump, indeed those philosophical views are presumably more widespread amongst his liberal opponents than amongst his supporters, perhaps most of whom have never heard of them. Still, those who think it somehow intolerant to classify beliefs as true or false should be aware that they are making it easier for people like Trump, by providing them with a kind of smokescreen.

In the course of an informal Facebook discussion, I made the following responses to Dennett’s remarks (which I described as ‘very silly’):

[We] could just as well lay the blame on the very idea of truth. Perhaps if truth weren’t exalted so much, we wouldn’t have so many people claiming that they should be followed just because what they said was the truth. Especially because many lies really are better for us than some truths. Perhaps we would have been better off seeing what worked for us, rather than obsessing about naming things as true or false.

Fascist insurgencies like the ones here in our country are not relying on post-modern critiques of truth and fact to prop up their claims; they need only rely on something far simpler: the fact that talking of truth and facts grants them an aura of respectability. To elevate (or demote) this political debate to a matter of metaphysics and epistemology is to play their game, because we will find these pillars of ours actually rest on sand. Far better to point out to proponents of ‘alternative facts’ that these facts will not help them send their kids to school or cure their illnesses. Let us not forget that these ‘facts’ help them in many ways now: they find them a community, make them secure, give vent to their anger, and so on. I’ve never liked the way everyone is jumping up and down about how some great methodological crisis is upon us in this new era, as if it were entirely unprecedented. People have been using ‘fake news’ and ‘alternative facts’ all through history, and using them to achieve political ends.

On a related note, Ali Minai responds to another set of claims against ‘relativism’ made in an article in The Chronicle of Higher Education by Alan Jay Levinovitz:

In fact, it is non-relativism that has generally been the weapon of choice for authoritarians. The weaponization of “alternative facts” may be aided by relativism but its efficacy still relies on the opposite attitude. It works only when its targets accept “alternative facts” as actually true.

What these responses to the Defenders of Truth Against Relativism make quite clear are the following propositions:

  1. So-called ‘postmodern’ critiques are, more often than not, the political weapons of choice for those critiquing authoritarian regimes: they serve as a theoretical grounding for claims against ‘dominant’ or ‘totalizing’ narratives that issue from such regimes.
  2. Non-relativism or absolutism about truth is the preferred theoretical, argumentative, and rhetorical platform for authoritarians. Let us not forget that science was a challenge to the absolutism about truth that revealed religions claimed to profess; the Enlightenment brought many ‘alternative facts’ in its wake. Those worked; their ‘truth’ was established by their working. All the mathematical proofs and telescope gazings would have been useless had the science and technology built on them not ‘worked.’
  3. When fascists and authoritarians present ‘alternative facts’ and reject established knowledge claims, they do not present their alternative claims as ‘false’ because ‘truth’ is to be disdained; rather, they take an explicitly orthodox line in claiming ‘truth’ for their claims. ‘Truth’ is still valuable; it is still ‘correspondence’ to facts that matters.

The target of the critiques above, then, is misplaced several times over. (Moreover, Williamson’s invocation of the philosopher who could not give a ‘straightforward’ answer to his interlocutor is disingenuous at best. What if the ‘postmodernist’ philosopher wanted to make a point about colorblindness, or primary or secondary qualities? I presume Williamson would have no trouble with an analytic philosopher ‘complicating’ matters in such fashion. What would Williamson say to Wilfrid Sellars, who might, as part of his answer, say, “To call that chair ‘black’ would be to show mastery of the linguistic concept ‘black’ in the space of reasons”? Another perfectly respectable philosophical answer, which Williamson would not find objectionable. Williamson’s glib answer to the question of whether the definition of truth offered by Aristotle is correct is just that; surely, he would not begrudge the reams of scholarship produced in exploring the adequacy of the ‘correspondence theory of truth,’ what it leaves out, and indeed, the many devastating critiques leveled at it? The bogus invocation of ‘bogus complexity’ serves no one here.)

Critiques like Williamson and Dennett’s are exercises in systematic, dishonest misunderstandings of the claims made by their supposed targets. They refuse to note that it is the valorization of truth that does all the work for the political regimes they critique, that it is the disagreement about political ends that leads to the retrospective hunt for the ‘right, true, facts’ that will enable the desired political end. There is not a whiff of relativism in the air.

But such a confusion is only to be expected when the epistemology that Williamson and Dennett take themselves to be defending rests on a fundamental confusion itself: an incoherent notion of ‘correspondence to the facts’ and a refusal to acknowledge that beliefs are just rules for actions–directed to some end.

Honey And Me And Quining Qualia

I grew up loathing honey. I preferred jams: plum, orange, apple, ‘mixed fruit,’ guava, mango, marmalade. Toasted bread with thick white cream and jam; never honey. Honey was just a little ‘sickly-sweet;’ its taste was a ‘little off.’ It crossed some permissible boundary of ‘sweetness’ and became cloying; it sent shudders through me. I couldn’t wait to get a drink of water, washing out the offending affect. My taste was inexplicable; I could not make sense of it when I made my reluctance to consume honey known. I stood by, a mere onlooker, as others around me sang paeans to its glory.

But then, just as mysteriously, shortly after I moved to the US, I began adoring honey. The ‘taste of honey’ was now a glorious treat, the right attribute of a nectar of sorts. I liked honey with crackers and cheese, on toasted bagels, in iced tea, lemonade–all of it. Sugar seemed a crude sweetener, its ‘taste’ not ‘complex’ enough; honey gave off the right airs of sophistication. Had I, in ‘growing up,’ finally found, in this new maturity, the right apparatus to process honey’s ‘taste’? Or was the honey just ‘better’?

Time rolled by; I found myself growing distant from honey again. Its ‘taste’ lost its standing on the pedestal I had erected for it, and now mingled with the masses. I grew suspicious of sugar and sweeteners and things that gave you insulin spikes; like many men north of the forties, I possessed a new-found rectitude at the dinner table, the salad bar, the diner counter. Honey’s ‘taste’ acquired connotations and allusions; honey entered the precinct marked ‘treats,’ its contents to be pilfered with care. The contrast with all else I ate grew, marking every encounter with honey with a distinctive shock of sorts. The ‘taste of honey’ ain’t what it used to be, no sir.

A curious business, then, this ‘taste’ of honey. Talking about ‘the taste of honey’:

presumes that we can isolate [it] from everything else that is going on….What counts as the way [honey tasted to me] can be distinguished, one supposes, from what is a mere accompaniment, contributory cause, or byproduct of this ‘central’ way. One dimly imagines taking [my tasting experiences] and stripping them down gradually to the essentials, leaving their common residuum, the way [honey tasted to me] at various times….The mistake is not in supposing that we can in practice ever or always perform this act of purification with certainty, but the more fundamental mistake of supposing that there is such a residual property to take seriously [Daniel Dennett, ‘Quining Qualia’, in Consciousness in Contemporary Science, edited by A. J. Marcel and E. Bisiach, Oxford University Press, (1988)].

If such thoughts are correct, then there was no ‘taste of honey’–always indexed by ‘to me’–there were only various experiences: ‘tasting-honey-during-my-childhood-years;’ ‘tasting-honey-after-I-migrated;’ ‘tasting-honey-as-a-forty-something’–the ‘taste of honey’–the way honey seems to me–is not something that can be drawn apart from these. There’s no articulable qualitative experience, independent of the surrounding ‘context.’

We’ve known this for other supposed qualia too, of course. That shortness of breath, that pounding in your chest, that fire in your legs, those reminders of your determination and outward-bound spirit that herald the glory to come as you ascend a steep switchback with a cool wind raking your brow and the aroma of pine trees wafting by, if transplanted to a hospital ward with the sick visible and the smell of disinfectant in your nostrils, become ‘unbearable agony.’ There is no separable ‘pain’ here; just a different assemblage of my ‘world-sensation’, experienced differently thanks to its arrangement and presentation and internal relationships. We don’t experience the world as a bunch of separate parcels of sensation and phenomenal experience; the world comes to us as a package, with each component receiving its ‘meaning’ by its placement within the ‘field,’ by its relationships within it. What we notice, taste, see, smell, hear is a function of the arrangement of this field, and of course, of our histories and anticipations (our ‘interests’) which have performed this arrangement.

We Robot 2012 – Day One

I am posting today from the University of Miami Law School, which is staging the We Robot 2012 conference. I presented and discussed Patrick Hubbard’s (University of South Carolina Law School) Regulation of Liability for Risks of Physical Injury From “Sophisticated Robots”. Presenting someone else’s work is a difficult challenge; thanks to being an academic, I have perfected the dark arts of bullshitting about my own work, but doing so about someone else’s work is far more difficult. I tried my best to present Patrick’s work as comprehensively and fairly as possible and to raise some questions that could spur on some discussion. (I will place the slides online very soon so you can see what I got up to.)

One of the points I raised in response to Patrick’s claim that robots that displayed ‘emergent behavior’ would occasion changes in tort doctrine was: How should we understand such emergence? Might we need to see if robots, for instance, displayed stability, homeostasis, and evolvability–all often held to be features of living systems, paradigmatic examples of entities that display emergent behavior? Would robots be judged to display emergent behavior if it was not just a function of their parts but also of the holistic and relational properties of the system? I also asked Patrick how the law should understand autonomy, given that some philosophical definitions of autonomy–like Kant’s, for instance–would rule out some humans as being autonomous. (Earlier in the morning, during discussions in another talk, I suggested another related benchmark that could be useful: draw upon the suggestion made in Daniel Dennett’s The Case for Rorts that robots could be viewed as intentional agents when we trust robots as authorities in reporting on their inner states, when their programmers and designers lose epistemic hegemony.) An interesting section of the discussion that followed my presentation centered on how useful analogizing robots to animals or children or other kinds of entities was likely to be, and, if useful, which analogies could work best. (This kind of analogizing was done in Chapter 4 of A Legal Theory of Autonomous Artificial Agents.)

Earlier in the day, in discussing automated law enforcement–perhaps done by fleets of Robocops–I was glad to note that one of its positive outcomes was highlighted: that such automation could bring about a reduction of bias in law enforcement. In my comment following the talk, I noted that a fleet of Robocops aware of the Fourth Amendment might be very welcome news for all those who were the targets of the almost seven hundred thousand Stop-n-Frisk searches in New York City.

As was noted in discussions in the morning, some common threads have already emerged: the suggestion that robots are ‘just tools’ (which I continue to find bizarre), the not-so-clear distinction between–and reliance on–true and apparent autonomy, and the concerns about the need to avoid ‘projecting’ human will and agency onto robots and treating them like people (i.e., the need to avoid the so-called ‘android fallacy’). I personally don’t think warnings about the android fallacy are very useful; contemporary robots are not sophisticated enough to be people, and there is no impossibility proof against them being sophisticated enough to be persons in the future.

Hopefully, I will have another–much more detailed–report from this very interesting and wonderfully well-organized conference tomorrow. (I really haven’t done justice to the rich discussions and presentations yet; for that I need a little more time.)

The Practice of Science According to Article Abstracts and Headers

Sometimes close reading of article headers can pay rich dividends. On Monday morning, my Philosophy of Biology class and I were slated to discuss a debate crucial to understanding adaptationist paradigms: the role of body plan (Bauplan) constraints in restricting an organism’s occupancy of possible points in developmental space, which complicates our understanding of the supposed ubiquity and optimific qualities of adaptation. This cluster of debates was kicked off by the Spandrels of San Marco controversy (which later morphed into the Gould-Dennett dustup).

For reading, I had assigned the original Gould-Lewontin article, “The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme”, and Chapter 10 of Dennett’s Darwin’s Dangerous Idea. The class discussion on Monday provided a very good example of how a crucial debate in science and the philosophy of science could be put into a broader context. I began the class by putting up on the projection screen the first page of the G-L article (from the link above); in the seventy-five minutes of class, we did not get beyond a discussion of the title and the abstract; unpacking the meta-data of the article was extraordinarily useful.

As my students and I noted, this was a reproduced scholarly article, one originally published by a reputable source of scientific knowledge–The Royal Society of London; this led to a consideration of the relative worth of different sources of scientific knowledge and the standards that might evolve for the publication and promulgation of scientific advances, and relatedly, to the role of copyright law in scientific settings. The fact that this article was now available on the Internet spoke to another set of criteria affecting its current availability. We noted that while author affiliations were not available, we could look them up to find out that in this case, the two scientists worked at a very reputable institution; furthermore, the order of the names at least indicated to us that they might have considered alphabetical ordering of their names as a way to brush past the issue of supposed priority in authorship.

With this preliminary analysis out of the way, we looked at the abstract itself, whose opening lines establish it as the opening volley of a polemical battle the authors seek to engage:

An adaptationist programme has dominated evolutionary thought in England and the United States during the past forty years. It is based on faith in the power of natural selection as an optimizing agent.

The first sentence clearly lays out the target of the argument to follow; the second provocatively uses the word ‘faith’ to establish what the authors take to be problematic about the target of their critique.

And then, we were off into a consideration of the article’s arguments as foreshadowed in the abstract. But importantly, we were no longer thinking about them in isolation from the larger social and political setting of the science and the debates within it (and their rhetorical aspects). At the least, our little close reading of a piece of scientific knowledge had made clear many of the institutional features in a domain of scientific knowledge that underwrite and prop up its claims, and yes, its evolution over time.

Adaptation, Abstraction

This spring semester, teaching Philosophy of Biology–especially the Darwinian model of adaptation and environmental filtration–has reminded me of the philosophical subtleties of the notions of ‘abstract model’ and ‘abstraction’. More generally, it has reminded me that philosophy of science achieves particularly sharp focus in the philosophy of biology, and that classroom discussions are edifying in crucial ways.

In its most general form, the Darwinian theory of adaptation by ‘natural selection’ states that adaptation results if:

  1. There is reproduction with some inheritance of traits in the next generation.
  2. In each generation, among the inherited traits there is always some variation.
  3. The inherited variants differ in their fitness, in their adaptedness to the environment.

In teaching this version (taken from Richard Lewontin, ‘Adaptation,’ Scientific American 239: 212-228, as reprinted in Rosenberg and Shea’s Philosophy of Biology), I point out how much this concise statement of the theory leaves unspecified–the entity reproducing, ‘traits,’ the mechanisms of reproduction and inheritance, the sources of variance, the nature of ‘fitness’, the extent of the environment, and the mechanisms and characteristics of the adaptation–even as it provides an explanatory framework of great power and scope. (This under-specification also allows the model to be stated in terms of interactors and replicators.)

The generality of the Darwinian specification reminds us of the practicing mathematician’s adage that the sparsest, barest definitions result in the richest, most interesting theorems. In this case, the theory works with a diversity of hereditary mechanisms and sources of variation, and does not require or imply any particular one. Rather, it merely requires that there be some mechanism for heredity and some source of variation in heritable traits for every generation in every line of beings. I think it’s a fair bet to say that if there were any appreciative reactions in class to this discussion of the theory, they were grounded in a grasp of the theory’s generality.

Getting clear about the abstraction of the Darwinian model is crucial in understanding why it does not issue teleological explanations, why it cannot be understood as ‘progressive’, and why it is plausibly extensible to different levels of theoretical explanation in more than one domain of application. Later, our descriptions of blind variation and selective retention as algorithmic processes enabled another reckoning with the model’s abstraction, this time in terms of its substrate neutrality. (Discussing this with my students reminded me of teaching the multiply-realizable computational model of the mind in classes on the philosophical foundations of artificial intelligence, especially as our discussion segued into an attempt to understand the abstract notion of computation.) In general, I sought to clarify why the model specified above is an ‘abstract’ one and what relationship its abstraction has to its generality and its explanatory scope.
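That substrate neutrality is easy to make vivid in code. Here is a minimal sketch (my own, purely illustrative, and not drawn from any of the texts discussed above; the names evolve, fitness, and vary are placeholders I have introduced) that demands only reproduction with inheritance, some variation, and differential fitness, while leaving the ‘substrate’ entirely open:

    import random
    from typing import Callable, List, TypeVar

    T = TypeVar("T")  # the 'substrate': any representation of a heritable individual

    def evolve(population: List[T],
               fitness: Callable[[T], float],   # adaptedness to an unspecified environment
               vary: Callable[[T], T],          # some source of heritable variation
               generations: int) -> List[T]:
        """Blind variation and selective retention, with every mechanism left open."""
        for _ in range(generations):
            # Selective retention: fitter variants leave more offspring on average.
            weights = [fitness(individual) for individual in population]
            parents = random.choices(population, weights=weights, k=len(population))
            # Reproduction with inheritance and some variation.
            population = [vary(parent) for parent in parents]
        return population

    # An arbitrary choice of substrate (bit strings), purely for illustration.
    if __name__ == "__main__":
        pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(30)]

        def flip(individual):
            # Offspring inherit each 'trait', with a small chance of mutation per locus.
            return [bit ^ (random.random() < 0.05) for bit in individual]

        final = evolve(pop, fitness=lambda ind: 1 + sum(ind), vary=flip, generations=50)
        print(max(sum(ind) for ind in final))

Swapping in a different representation of individuals, a different mutation scheme, or a different fitness function leaves the loop untouched; that indifference is the substrate neutrality at issue.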

Unsurprisingly, at moments in my exposition, I found myself rediscovering my admiration for the theory’s Spartan outlines. I was pleasantly surprised, too, by how sophisticated my students’ interjections and questions became as they attempted to take on and apply the theory; they forced me to think on my feet in addressing them. More than anything else, their class responses reminded me that a particularly important species of learning takes place in the course of teaching.