Honey And Me And Quining Qualia

I grew up loathing honey. I preferred jams: plum, orange, apple, ‘mixed fruit,’ guava, mango, marmalade. Toasted bread with thick white cream and jam; never honey. Honey was just a little ‘sickly-sweet;’ its taste was a ‘little off.’ It crossed some permissible boundary of ‘sweetness’ and became cloying; it sent shudders through me. I couldn’t wait to get a drink of water to wash out the offending affect. My taste was inexplicable; I could not make sense of it when I made my reluctance to consume honey known. I stood by, a mere onlooker, as others around me sang paeans to its glory.

But then, just as mysteriously, shortly after I moved to the US, I began adoring honey. The ‘taste of honey’ was now a glorious treat, the rightful attribute of a nectar of sorts. I liked honey with crackers and cheese, on toasted bagels, in iced tea and lemonade–all of it. Sugar seemed a crude sweetener, its ‘taste’ not ‘complex’ enough; honey gave off the right airs of sophistication. Had I, in ‘growing up,’ finally found, in this new maturity, the right apparatus to process honey’s ‘taste’? Or was the honey just ‘better’?

Time rolled by; I found myself growing distant from honey again. Its ‘taste’ lost its standing on the pedestal I had erected for it, and now mingled with the masses. I grew suspicious of sugar and sweeteners and things that gave you insulin spikes; like many men north of forty, I possessed a new-found rectitude at the dinner table, the salad bar, the diner counter. Honey’s ‘taste’ acquired connotations and allusions; honey entered the precinct marked ‘treats,’ its contents to be pilfered with care. The contrast with all else I ate grew, marking every encounter with honey with a distinctive shock of sorts. The ‘taste of honey’ ain’t what it used to be, no sir.

A curious business then, this ‘taste’ of honey. Talking about ‘the taste of honey’:

presumes that we can isolate [it] from everything else that is going on….What counts as the way [honey tasted to me] can be distinguished, one supposes, from what is a mere accompaniment, contributory cause, or byproduct of this ‘central’ way. One dimly imagines taking [my tasting experiences] and stripping them down gradually to the essentials, leaving their common residuum, the way [honey tasted to me] at various times….The mistake is not in supposing that we can in practice ever or always perform this act of purification with certainty, but the more fundamental mistake of supposing that there is such a residual property to take seriously [Daniel Dennett, ‘Quining Qualia’, in Consciousness in Contemporary Science, edited by A. J. Marcel and E. Bisiach, Oxford University Press, 1988].

If such thoughts are correct, then there was no ‘taste of honey’–always indexed by ‘to me’–there were only various experiences: ‘tasting-honey-during-my-childhood-years,’ ‘tasting-honey-after-I-migrated,’ ‘tasting-honey-as-a-forty-something.’ The ‘taste of honey’–the way honey seemed to me–was not something that could be drawn apart from these. There is no articulable qualitative experience independent of the surrounding ‘context.’

We’ve known this for other supposed qualia too, of course. That shortness of breath, that pounding in your chest, that fire in your legs, those reminders of your determination and outward-bound spirit that herald the glory to come as you ascend a steep switchback, a cool wind raking your brow and the aroma of pine trees wafting by–all of it, if transplanted to a hospital ward with the sick visible and the smell of disinfectant in your nostrils, becomes ‘unbearable agony.’ There is no separable ‘pain’ here; just a different assemblage of my ‘world-sensation,’ experienced differently thanks to its arrangement and presentation and internal relationships. We don’t experience the world as a bunch of separate parcels of sensation and phenomenal experience; the world comes to us as a package, with each component receiving its ‘meaning’ from its placement within the ‘field,’ from its relationships within it. What we notice, taste, see, smell, and hear is a function of the arrangement of this field, and of course, of our histories and anticipations (our ‘interests’) which have performed this arrangement.

RIP Hilary Putnam 1926-2016

During the period of my graduate studies in philosophy, it came to seem to me that William James’ classic distinction between tough- and tender-minded philosophers had been reworked just a bit. The tough philosophers were still empiricists and positivists, but they had begun to show some of the same inclinations that the supposedly tender-minded in James’ distinction did: they wanted grand over-arching systems, towering receptacles into which all of reality could be neatly poured; they were enamored of reductionism; they had acquired new idols, like science (and metaphysical realism), and new tools, those of mathematics and logic.

Hilary Putnam was claimed as a card-carrying member of this tough-minded group: he was a logician, mathematician, computer scientist, and analytic philosopher of acute distinction. He wrote non-trivial papers on mathematics and computer science (the MRDP theorem, the Davis-Putnam algorithm), philosophy of language (the causal theory of reference), and philosophy of mind (functionalism, the multiple realizability of the mental)–the grand trifecta of the no-bullshit, hard-headed analytic philosopher, the one capable of handing your woolly, unclear, tender continental philosophy ass to you on a platter.

I read many of Putnam’s classic works as a graduate student; he was always a clear writer, even as he navigated the thickets of some uncompromisingly dense material. Along with Willard Van Orman Quine, he was clearly the idol of many analytic philosophers-in-training; we grew up on a diet of Quine-Putnam-Kripke. You thought of analytic philosophy, and you thought of Putnam. Whether it was this earth, or its twin, there he was.

I was already quite uncomfortable with analytic philosophy’s preoccupations, methods, and central claims as I finished my PhD; what I had not realized was that the man I thought of as its standard-bearer had started to step down from that position before I even began graduate school. When I encountered him again, after I had finished my dissertation and my post-doctoral fellowship, I found a new Putnam.

This Putnam was a philosopher who had moved away from metaphysical realism and scientism, who had found something to admire in the American pragmatists, who had become enamored of the Wittgenstein of the Philosophical Investigations. He now dismissed the fact-value dichotomy and, indeed, wrote on subjects that ‘tough-minded analytic philosophers’ from his former camp would not be caught dead writing about: political theory and religion in particular. He even fraternized with the enemy, drawing inspiration, for instance, from Jürgen Habermas.

My own distaste for scientism and my interest in pragmatism (of both the paleo and neo varietals) and the late Wittgenstein meant that the new Putnam was an intellectual delight for me. (His 1964 paper ‘Robots: Machines or Artificially Created Life?’ significantly influenced my thoughts as I wrote my book on a legal theory for autonomous artificial agents.) I read his later works with great relish and marveled at his tone of writing: he was ecumenical, gentle, tolerant, and crucially, wise. He had lived and learned; he had traversed great spaces of learning, finding that many philosophical perspectives abounded, and he had, as a good thinker must, struggled to integrate them into his intellectual framework. He seemed to have realized that the most acute philosophical ideal of all was a constant taking on and trying out of ideas, seeing if they worked in consonance with your life projects and those of the ones you cared for (a group that can be as broad as the human community). I was reading a philosopher who seemed to be doing philosophy in the way I understood it, as a way of making sense of this world without dogma.

I never had any personal contact with him, so I cannot share stories or anecdotes, no tales of direct inspiration or encouragement. But I can try to gesture in the direction of the pleasure he provided in his writing and his always visible willingness to work through the challenges of this world, this endlessly complicated existence. Through his life and work he provided an ideal of the engaged philosopher.

RIP Hilary Putnam.

Artificial Intelligence And Go: (Alpha)Go Ahead, Move The Goalposts

In the summer of 1999, I attended my first-ever professional academic philosophy conference–in Vienna. At the conference, one titled ‘New Trends in Cognitive Science’, I gave a talk titled (rather pompously) ‘No Cognition without Representation: The Dynamical Theory of Cognition and The Emulation Theory of Mental Representation.’ I did the things you do at academic conferences as a graduate student in a job-strapped field: I hung around senior academics, hoping to strike up conversation (I think this is called ‘networking’); I tried to ask ‘intelligent’ questions at the talks, hoping my queries and remarks would mark me out as a rising star, one worthy of being offered a tenure-track position purely on the basis of my sparkling public presence. You know the deal.

Among the talks I attended–a constant theme of which was the prospect of the mechanization of the mind–was one on artificial intelligence. Or rather, more accurately, the speaker concerned himself with evaluating the possible successes of artificial intelligence in domains like game-playing. Deep Blue had just beaten Garry Kasparov in an unofficial human-machine chess world championship in 1997, and such questions were no longer idle queries. In the wake of Deep Blue’s success, the usual spate of responses–to news of artificial intelligence’s advance in some domain–had ensued: Deep Blue’s success did not indicate any ‘true intelligence’ but rather pure ‘computing brute force’; a true test of intelligence awaited in other domains. (Never mind that beating a human champion in chess had always been held out as a kind of Holy Grail for game-playing artificial intelligence.)

So, during this talk, the speaker elaborated on what he took to be artificial intelligence’s true challenge: learning and mastering the game of Go. I did not fully understand the contrasts drawn between chess and Go, but they seemed to come down to two vital ones: human Go players relied a great deal–indeed had to–on ‘intuition,’ and on a ‘positional sizing-up’ that could not be reduced to an algorithmic process. Chess did not rely on intuition to the same extent; its board assessments were more amenable to algorithmic calculation. (Go’s much larger state space was also a problem.) Therefore, roughly, success in chess was not so surprising; the real challenge was Go, and that was never going to be mastered.

Yesterday, Google DeepMind’s AlphaGo system beat the South Korean Go master Lee Se-dol in the first game of an intended five-game series. Mr. Lee conceded defeat in three and a half hours. His pre-game mood was optimistic:

Mr. Lee had said he could win 5-0 or 4-1, predicting that computing power alone could not win a Go match. Victory takes “human intuition,” something AlphaGo has not yet mastered, he said.

Later, though, he said that “AlphaGo appeared able to imitate human intuition to a certain degree,” a fact borne out for him during the game, when AlphaGo “made a move so unexpected and unconventional” that he thought “it was impossible to make such a move.”

As Jean-Pierre Dupuy noted in his The Mechanization of the Mind, a very common response to the ‘mechanization of mind’ is that such attempts merely simulate or imitate, that they are mere fronts for machinic complexity. But these responses never seem to consider the possibility that the phenomenon they deem genuine–the model for the imitation and simulation–can only retain that privileged status as long as the simulations and imitations remain flawed. As those flaws diminish, the privileged status of the ‘real thing’ diminishes in turn. A really good simulation, indistinguishable from the ‘real thing,’ should make us wonder why we grant the latter such a distinct station.

The ‘Lone Killer’ And The Mentally Ill World

The invocation of mental illness and lamentations over ‘the state of the American mental health system’ are an inevitable accompaniment to news stories about lone white gunmen who carry out massacres. (Cf. the Charleston massacre.) With that in mind, the following wise remarks by Helen De Cruz are worth pondering:

People are not just motivated by inner mental states, but also by context. That context is one where violence against a subpopulation of the US is condoned and actively perpetuated by police, and one in which it’s normal to have effective killing machines – things that are meant to kill people by functional design, so no analogies with cars please – lying around in your everyday environment. We are embodied, contextual creatures whose actions are influenced by those things at least as much as our internal mental states.

[N]isbett…demonstrated nicely in several experiments how westerners overemphasize personal, internal mental states to explain actions, at the expense of broader cultural context. That’s how westerners keep on seeing white male shooters as lone, unconnected individuals with mental problems (and all the stigmatizing of people with mental disabilities that follows from that), rather than people who live in a culture that normalizes having killing machines lying around and that accepts violence and racism against Black people on a daily basis. [link added]

One of the worst illusions generated by the language of mental states is that it suggests disembodied minds moving through an external landscape, with a full description of the state conveying enough information to predict and understand the behavior of the agent in question. But as De Cruz points out, we are much more; we are agents in tightly embedded, mutually co-determining relationships with our environments. A state is a static thing, but we are dynamic cognizers; we act upon the world around us, and are acted upon by it.

The world that acted upon Dylann Storm Roof has been adequately described above by De Cruz. A mind at variance with our assessments of ‘normal’ might be particularly susceptible to the violence that world enabled and facilitated. It is not too hard to imagine that a different world–a kinder world, a less racist world, one not overrun by deadly weapons and racist rhetoric, and not infected by a systemic prejudice against entire subclasses of Roof’s fellow humans–might not have produced the massacre it did this week. The fragile, insecure sensibility that was Roof’s might not have been so easily pushed to breaking point in a world whose airwaves were not saturated with the messages of hate he had so clearly internalized.

The world that Dylann Storm Roof leaves behind is one in which nine families have been devastated, their hearts and minds made susceptible to anger and despair; it is also one which lays out a template of action for other killers who might be similarly motivated; and lastly, most dangerously of all perhaps, it is one which could play host to a vengeful mind, determined to seek retribution. This is the new environment, this is the new context, through which we–the ‘mentally ill’ included–must move now.

We cannot disown the mentally ill; they are of this world and in this world. They are ours.

Academic Writing In Philosophy: On Finding Older Writing Samples

Yesterday, while cleaning up an old homepage of mine, I found some old papers written while I was in graduate school. Overcome by curiosity–and rather recklessly, if I may say so–I converted them from their old PostScript format to PDF and took a closer look.

The first is titled ‘No Cognition Without Representation’; its abstract reads:

A critical look at the emulation theory of representation [due to Rick Grush] and its claims to have shown a) the dynamical thesis of cognition to be incomplete and b) to have provided a necessary condition on cognition.

The second is titled ‘Quantum Mechanical Explanation, Nonseparability and Causality’; its abstract reads:

Does using non-separable processes (as quantum mechanical processes might be understood) in scientific explanations violate some crucial methodological principle? I argue that the answer is no.

The third is titled ‘Folk Psychology, Connectionism and Constraints on Believers’; its abstract reads:

An examination of the argument that connectionism leads to eliminativist conclusions about the mind; I argue further that often, constraints placed on believers by proponents of folk psychology seem to be arbitrary.

The fourth is titled ‘Contextualism, Skepticism and Kinds of Possibilities’; its abstract reads:

A sympathetic examination of contextualist claims to have solved the skeptical puzzle.

As might be expected, as I looked through these papers (written between 1994 and 1999), I experienced some mixed feelings. One of them–the first above–was presented at a conference and featured in its proceedings; I later submitted it to Philosophical Psychology and was asked to revise and resubmit, but never got around to it; a publication opportunity missed. I was advised to rework the conclusion of the third paper into a longer piece and submit it to a journal; again, I was overcome by lassitude. Clearly, I was not overly eager to add lines to my CV–a rather self-indulgent attitude.

Far more interesting, I think, was my reaction on reading my writing and its so-called ‘style’: I write very much like a generic Anglo-American analytic philosopher. There is a forensic quality to my analysis; I pick arguments apart with some care and precision, deploying the tools of the trade that I had learned, not just by reading journal articles but also by observing verbal disputation at philosophy colloquia (a paper I wrote on Michael Slote‘s From Morality to Virtue was found particularly devastating by my professor; he suggested I had ‘really gone to town on Slote’); I use standard turns of phrase; like all good ‘analytic types’ I sprinkle abbreviations and faux mathematical symbols throughout; my writing has little ornamentation or flourish; it is also not distinctive in any interesting way.

By that stage in my education–as I worked through the large amount of coursework required in my program–it is apparent I had started to learn some of the tricks of the trade: writing in a knowing voice, subconsciously taking on the verbal mannerisms and tics of the writing that I had been exposed to. I was seeking to blend in, to become part of this new group I was seeking admission to; emulation seemed like the best way to do so. There is little doubt in my mind that had I continued to travel in roughly the same philosophical neighborhoods as above–philosophy of mind, philosophy of science, and epistemology–I would have settled into a writing groove, perhaps churning out papers on what I saw as the latest trends and topics in philosophy. (Each of the four topics above was in ‘vogue’ in the 1990s.)

Success–such as it is–in academic writing can very often be a matter of writing in a way that does not induce too much dissonance or discomfort in your referees, your peers; they were, very often, trained just as you were. They regulate membership and admission; to be heard, you often must sound like them.

Physical and Psychological Affordance

According to Wikipedia, ‘an affordance is a quality of an object, or an environment, which allows an individual to perform an action. For example, a knob affords twisting, and perhaps pushing, while a cord affords pulling.’ (A photograph of a tea set in the Wikipedia entry bears the caption, ‘The handles on this tea set provide an obvious affordance for holding.’) Later we learn that James J. Gibson introduced ‘affordance’ in his 1977 article “The Theory of Affordances,” where he ‘defined affordances as all “action possibilities” latent in the environment, objectively measurable and independent of the individual’s ability to recognize them, but always in relation to the actor and therefore dependent on their capabilities.’

I do not now remember where I first encountered the term–probably in my readings of the embodied cognition literature in graduate school. It has always struck me as a marvelously evocative term, one of those that almost immediately casts the world in a different light. We are physical beings, minds and bodies united, caught up in a tightly coupled system of world and agent; the world provides us affordances for our particular modes of interaction with it; we modify the world, modifying its affordances, and change in response; and so on. The dynamic, mutually determining nature of this interaction stood clarified. Thinking of the world as equipped with affordances helped me better envision the evolutionary filtration of the environment: those creatures with traits suited to the environment’s affordances were evolutionarily successful. Knobs and cords can only be twisted and pulled by those suitably equipped–mentally and physically–for doing so. Babies learn to walk in an environment that provides them the means for doing so–level, firm surfaces–and not in others. An affordance-rich environment for walking, perhaps equipped with handles for grasping or helpful parents reaching out to provide support, facilitates the learning of walking. And so on.

But ‘affordance’ need not be understood in purely physical terms. We can think of the world of psychological actors as providing psychological affordances too. An agent with a particular psychological makeup is plausibly understood as providing for certain modes of interaction with it: a hostile youngster, bristling with resentment and suspicion of authority, restricts the space of possibilities for other agents to interact with him; the affordances he provides are minimal; others are more capacious in the affordances they provide. A psychological agent’s life can be viewed as a movement through a space of affordances; his trajectories through it are determined by his impingement on others and vice-versa; he finds his responses modified by those that the space allows or affords. As parents find out when they raise a child, theories of learning and rearing only go so far; the particular make-up of the pupil feeds back to the parent and can modify the rearing strategy; the child provides only some of the affordances that work with the child-rearing theory of choice. An inmate in jail is stuck in a very particular domain of psychological affordances; he will find his reactions modified accordingly.

Thinking of our exchanges with the world and other human beings in this light illuminates quite clearly our dependence on them and our influence upon them; we are not solitary trailblazers; rather, at every step, we are pressed upon, and push back. What emerges at every point, and at the end, bears the impress of these rich relationships with our environment, both physical and psychological.

The Mind is not a Place or an Object

Last week, I participated in an interdisciplinary panel discussion at the Minding the Body: Dualism and its Discontents Conference (held at the CUNY Graduate Center and organized by the English Students Association). The other participants in the panel were Patricia Ticineto-Clough (Sociology), Gerhard Joseph (English), and Jason Tougaw (English). As might have been expected with that group of participants, the discussion was pretty wide-ranging; I’m not going to attempt to recapitulate it here. I do, however, want to (very) informally make note of one remark I made in the question-and-answer session that followed, which touched upon the frequently mentioned, discussed, and puzzled-over relationship between the brain and the mind. This discussion was sparked, in part, by Jason Tougaw’s remark that he had ‘noticed a recurrent phenomenon in contemporary literature [especially the so-called ‘neuronovel’]: scenes in which brains (or other body parts) are touched or explored for signs of immaterial elements of self: mind, consciousness, affect, emotion, imagination, desire.’

In response to this perennially entertained scientific, philosophical, and literary possibility of ‘locating’ the mind in the material or ‘identifying’ the mind with it, I said it seemed to me that these prospects traded on a confusion: treating the mind as a place or an object, rather than as a term used to describe an entity’s capacities. The term ‘mind’ is perhaps best understood as having been coined in order to mark out particular kinds of entities that were able to enter into very distinct sorts of relationships with their environments. This ascription in our own human case goes from the ‘inside’ to the ‘outside,’ as it were, beginning with mental states perceived from the first-person perspective, but it is then extended by analogy to other creatures that show patterns of behavior like ours. These relationships display modes of interaction that stand out, for instance, for their rich adaptiveness and flexibility, and show themselves to be receptive to a particular vocabulary of description, explanation, and prediction: we might term them ‘mindful’ interactions. So creatures capable of mindful interactions are said to ‘possess’ a mind or ‘have minds.’ But this does not mean that they need be radically similar to us. A different environment and a different entity could conceivably generate the same kind of interactions, perhaps ones arrived at by a slow, imperfect evolutionary process like ours. These entities might have brains like ours, or they might not; they might have bodies like ours, or they might not; they might have biologies like ours, or not. And so on.

Understood in this way, the term ‘mind’ has come to represent over the years what those creatures capable of ‘mindful’ interactions with their environs ‘have.’ But speaking of the mind as something we ‘have’ sends us off and running, looking for it. And since we have bodies with components that seem distinctly articulable, it became natural to try to identify the mind with one of those components or locations. But this, to repeat, is a confusion.

To say that something has a mind is to describe that entity’s capacities, its relationship with its environment, and our modes of understanding, predicting and responding to its behavior.