Neuroscience’s Inference Problem And The Perils Of Scientific Reduction

In “Science’s Inference Problem: When Data Doesn’t Mean What We Think It Does,” while reviewing Jerome Kagan’s Five Constraints on Predicting Behavior, James Ryerson writes:

Perhaps the most difficult challenge Kagan describes is the mismatching of the respective concepts and terminologies of brain science and psychology. Because neuroscientists lack a “rich biological vocabulary” for the variety of brain states, they can be tempted to borrow correlates from psychology before they have shown there is in fact a correlation. On the psychology side, many concepts can be faddish or otherwise short-lived, which should make you skeptical that today’s psychological terms will “map neatly” onto information about the brain. If fMRI machines had been available a century ago, Kagan points out, we would have been searching for the neurological basis of Pavlov’s “freedom reflex” or Freud’s “oral stage” of development, no doubt in vain. Why should we be any more confident that today’s psychological concepts will prove any better at cutting nature at the joints?

In a review of Theory and Method in the Neurosciences (Peter K. Machamer, Rick Grush, Peter McLaughlin (eds), University of Pittsburgh Press, 2001), I made note¹ of related epistemological concerns:

When experiments are carried out, neuroscientists continue to run into problems. The level of experimental control available to practitioners in other sciences is simply not available to them, and the theorising that results often seems to be on shaky ground….The localisation techniques that are amongst the most common in neuroscience rely on experimental methods such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and magnetoencephalography (MEG). [In PET] a radioactive tracer consisting of labelled water or glucose analogue molecules is injected into a subject, who is then asked to perform a cognitive task under controlled conditions. The tracer decays, emitting positrons and gamma rays; areas of the brain with increased blood flow or glucose metabolism take up more of the tracer and so show up in the scan. It is then assumed that such an area is responsible for the cognitive function performed by the subject. The problem with this assumption, of course, is that the increased blood flow might occur in one area, and the relevant neural activity might occur in another, or in no particular area at all….this form of investigation, rather than pointing to the modularity and functional decomposability of the brain, merely assumes it.
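To make the inferential leap concrete, here is a deliberately naive sketch–invented data, no resemblance to any actual imaging pipeline–of what correlation-based ‘localization’ amounts to: pick the region whose signal best tracks the task, and declare it ‘responsible.’ The correlation is all the analysis ever delivers; the attribution of function is an added assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
task = np.tile([0.0, 0.0, 1.0, 1.0], 25)            # task off/on in blocks
n_regions = 10
signals = rng.normal(size=(n_regions, task.size))    # baseline noise for each 'region'
signals[3] += 0.8 * task                              # region 3 happens to co-vary with the task

correlations = [np.corrcoef(sig, task)[0, 1] for sig in signals]
winner = int(np.argmax(correlations))
print(f"Region {winner} correlates most strongly with the task (r = {correlations[winner]:.2f})")
# The analysis reports only a correlation; the further claim that this
# region *performs* the cognitive function is an assumption, not a finding.
```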

The fundamental problem–implicit and explicit in Kagan’s book and my little note above–is the urge to ‘reduce’ psychology to neuroscience, to reduce mind to brain, to eliminate psychological explanations and language in favor of neuroscientific ones, which would introduce precise scientific language in place of imprecise psychological descriptions. This urge to eliminate one level of explanation in favor of a ‘better, lower, more basic, more fundamental’ one is, to put it bluntly, scientistic hubris, and the various challenges Kagan outlines in his book bear out the foolishness of this enterprise. It results in explanations and theories that rest on unstable foundations: optimistic correlations and glib assumptions are the least of it. Worst of all, it contributes to a blindness: what is visible at the level of psychology is not visible at the level of neuroscience. Knowledge should enlighten, not render us myopic.

Note: In Metascience, 11(1): March 2002.

The Fragile Digital World Described By Zeynep Tufekci Invites Smashing

In “The Looming Digital Meltdown” (New York Times, January 7th), Zeynep Tufekci writes,

We have built the digital world too rapidly. It was constructed layer upon layer, and many of the early layers were never meant to guard so many valuable things: our personal correspondence, our finances, the very infrastructure of our lives. Design shortcuts and other techniques for optimization — in particular, sacrificing security for speed or memory space — may have made sense when computers played a relatively small role in our lives. But those early layers are now emerging as enormous liabilities. The vulnerabilities announced last week have been around for decades, perhaps lurking unnoticed by anyone or perhaps long exploited.

This digital world is intertwined with, works for, and is used by an increasingly problematic social, economic, and political post-colonial and post-imperial world, one riven by political crisis and economic inequality, playing host to an increasingly desperate polity sustained and driven, all too often, by a rage and anger grounded in humiliation and shame. Within this world, all too many have had their noses rubbed in the dirt of their colonial and subjugated pasts, reminded again and again and again of how they are backward and poor and dispossessed and shameful, of how they need to play ‘catch up,’ to show that they are ‘modern’ and ‘advanced’ and ‘developed’ in all the right ways. The technology of the digital world has always been understood as the golden road to the future; it is what will make the journey to the land of the developed possible. Bridge the technological gap; all will be well. This digital world also brought with it the arms of the new age: the viruses, the trojan horses, the malware, the new weapons promising to reduce the gaping disparity–in the size of conventional and nuclear arsenals–between the rich and the poor, between North and South, between East and West, a disparity that allows certain countries to bomb others with impunity, from up close or from afar. The ‘backward world,’ the ‘poor,’ the ‘developing countries’ have understood that besides nuclear weapons, digital weapons can also keep them safe, by threatening to bring the digital worlds of their opponents to their knees–perhaps with the malware that knocks out a reactor, or a city’s electric supply, or something else.

The marriage of a nihilistic anger with the technical nous of the digital weapon maker and the security vulnerabilities of the digital world is a recipe for disaster. This world, this glittering world, its riches all dressed up and packaged and placed out of reach, invites resentful assault. The digital world, the basket into which we have placed all our eggs, invites smashing; and a nihilistic hacker might just be the person to do it. An arsenal of drones and cruise missiles and ICBMs will not be much of a defense against the insidious Trojan Horse, artfully placed to do the most damage to a digital installation. Self-serving security experts, all hungering for the highly-paid consulting gig, have long talked up this threat; but their greed does not make the threat any less real.

Neil deGrasse Tyson And The Perils Of Facile Reductionism

You know the shtick by now–or at least, twitterers and tweeters do. Every few weeks, Neil deGrasse Tyson, one of America’s most popular public ‘scientific’ intellectuals, decides that it is time to describe some social construct in scientific language to show how ‘arbitrary’ and ‘made-up’ it all is–compared to the sheer factitude, the amazing reality-grounded non-arbitrariness of scientific knowledge. Consider, for instance, this latest gem, now provoking ridicule from those who found its issuance predictable and tired:

Not that anybody’s asked, but New Years Day on the Gregorian Calendar is a cosmically arbitrary event, carrying no Astronomical significance at all.

A week earlier, Tyson had tweeted:

Merry Christmas to the world’s 2.5 billion Christians. And to the remaining 5 billion people, including Muslims Atheists Hindus Buddhists Animists & Jews, Happy Monday.

Tyson, I think, imagines that he is bringing science to the masses; that he is dispelling the ignorance cast by the veil of imprecise, arbitrary, subjective language that ‘ordinary folk’ use by directing their attention to scientific language, which, when used, shows how ridiculous those ‘ordinary folk’ affectations are. Your birthday? Just a date. That date? A ‘cosmically arbitrary event.’ Your child’s laughter? Just sound waves colliding with your eardrum. That friendly smile beamed at you by your schoolmate? Just facial muscles being stretched. And so on. It’s really easy; almost mechanical. I could, if I wanted, set up a bot-run Neil deGrasse Tyson parody account on Twitter and just issue these every once in a while. Easy pickings.
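Just how mechanical? Here is a throwaway sketch of such a parody generator–purely hypothetical, with invented templates and pairings, and no actual Twitter account or API behind it:

```python
import random

# Invented pairings of everyday things with flat physical redescriptions.
REDESCRIPTIONS = {
    "your birthday": "a cosmically arbitrary point on one orbit of the Earth",
    "a child's laughter": "pressure waves colliding with your eardrum",
    "a friendly smile": "facial muscles being stretched",
    "great literature": "chemical pigment on treated wood pulp",
}

TEMPLATE = "Not that anybody's asked, but {thing} is just {redescription}."

def parody_tweet() -> str:
    """Pick a random everyday thing and redescribe it 'scientifically'."""
    thing, redescription = random.choice(list(REDESCRIPTIONS.items()))
    return TEMPLATE.format(thing=thing, redescription=redescription)

print(parody_tweet())
```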

Does Tyson imagine that he is engaging in some form of ‘scientific communication’ here, bringing science to the masses? Does he imagine he is introducing greater precision and fidelity to truth in our everyday conversation and discourse, cleaning up the degraded Augean stables of internet chatter? He might think so, but what Tyson is actually engaged in is displaying the perils of facile reductionism and the scientism it invariably accompanies and embellishes; anything can be redescribed in scientific language, but that does not mean such redescription is necessary or desirable or even moderately useful. All too often such redescription results in not talking about the ‘same thing’ anymore. (All that great literature? Just ink on paper! You know, a chemical pigment on a piece of treated wood pulp.)

There are many ways of talking about the world; science is one of them. Science lets us do many things; other ways of talking about the world let us do other things. Scientific language is a tool; it lets us solve some problems really well; other languages–like those of poetry, psychology, literature, legal theory–help us solve others. The views of this world they introduce show us many things; different objects appear in different views depending on the language adopted. As a result, we are ‘multi-scopic’ creatures; at any time, we entertain multiple perspectives on this world and work with them, shifting between them as our wants and needs require. To figure out what clothes to wear today, I consulted the resources of meteorology; to get a fellow human being to come to my aid, I used elementary folk psychology, not neuroscience; to crack a joke and break the ice with co-workers, I relied on humor that deployed imaginary entities. Different tasks; different languages; different tools. This is the basis of the pragmatic attitude, which underwrites the science that Tyson claims to revere.

Tyson has famously dissed philosophy of science and just philosophy in general; his tweeting shows that he would greatly benefit from a philosophy class or two himself.

Contra Cathy O’Neil, The ‘Ivory Tower’ Does Not ‘Ignore Tech’

In ‘Ivory Tower Cannot Keep On Ignoring Tech,’ Cathy O’Neil writes:

We need academia to step up to fill in the gaps in our collective understanding about the new role of technology in shaping our lives. We need robust research on hiring algorithms that seem to filter out people with mental health disorders…we need research to ensure that the same mistakes aren’t made again and again. It’s absolutely within the abilities of academic research to study such examples and to push against the most obvious statistical, ethical or constitutional failures and dedicate serious intellectual energy to finding solutions. And whereas professional technologists working at private companies are not in a position to critique their own work, academics theoretically enjoy much more freedom of inquiry.

There is essentially no distinct field of academic study that takes seriously the responsibility of understanding and critiquing the role of technology — and specifically, the algorithms that are responsible for so many decisions — in our lives. That’s not surprising. Which academic department is going to give up a valuable tenure line to devote to this, given how much academic departments fight over resources already?

O’Neil’s piece is an unfortunate continuation of a trend of castigating academia for its lack of social responsibility while ignoring the work academics do in precisely those domains where their absence is supposedly felt.

In her Op-Ed, O’Neil ignores science and technology studies, a field of study that “takes seriously the responsibility of understanding and critiquing the role of technology,” and many of whose members are engaged in precisely the kind of studies she thinks should be undertaken at this moment in the history of technology. Moreover, there are fields of academic study such as philosophy of science, philosophy of technology, and the sociology of knowledge, all of which take very seriously the task of examining and critiquing the conceptual foundations of science and technology; such inquiries are not merely elucidatory; they are very often critical and skeptical. Such disciplines, then, produce work that makes both descriptive and prescriptive claims about the practice of science, and about the social, political, and ethical values that underwrite what may seem like purely ‘technical’ decisions pertaining to design and implementation. The humanities are not alone in this regard: most computer science departments now require a class in ‘Computer Ethics’ as part of the requirements for their major (indeed, I designed one such class here at Brooklyn College, and taught it for a few semesters). And of course, legal academics have, in recent years, started to pay attention to these fields and to incorporate them in their writings on ‘algorithmic decision making,’ ‘algorithmic control,’ and so on. (The work of Frank Pasquale and Danielle Citron is notable in this regard.) If O’Neil is interested, she could dig deeper into the philosophical canon and read works by critical theorists like Herbert Marcuse and Max Horkheimer, who mounted rigorous critiques of scientism, reductionism, and positivism. Lastly, O’Neil could read my co-authored work Decoding Liberation: The Promise of Free and Open Source Software, a central claim of which is that transparency, not opacity, should be the guiding principle for software design and deployment. I’d be happy to send her a copy if she so desires.

Will Artificial Intelligence Create More Jobs Than It Eliminates? Maybe

Over at the MIT Sloan Management Review, H. James Wilson, Paul R. Daugherty, and Nicola Morini-Bianzino strike an optimistic note as they respond to the “distressing picture” created by “the threat that automation will eliminate a broad swath of jobs across the world economy [for] as artificial intelligence (AI) systems become ever more sophisticated, another wave of job displacement will almost certainly occur.” Wilson, Daugherty, and Morini-Bianzino suggest that we’ve been “overlooking” the fact that “many new jobs will also be created — jobs that look nothing like those that exist today.” They go on to identify three new categories of jobs–trainers, explainers, and sustainers:


This first category of new jobs will need human workers to teach AI systems how they should perform — and it is emerging rapidly. At one end of the spectrum, trainers help natural-language processors and language translators make fewer errors. At the other end, they teach AI algorithms how to mimic human behaviors.


The second category of new jobs — explainers — will bridge the gap between technologists and business leaders. Explainers will help provide clarity, which is becoming all the more important as AI systems’ opaqueness increases.


The final category of new jobs our research identified — sustainers — will help ensure that AI systems are operating as designed and that unintended consequences are addressed with the appropriate urgency.

Unfortunately for these claims, it is all too evident that these positions themselves are vulnerable to automation. That is, future trainers, explainers, and sustainers–if their job descriptions match the ones given above–will also be automated. Nothing in those descriptions indicates that humans are indispensable components of the learning processes they are currently facilitating.

Consider that the ‘empathy trainer’ Koko is a machine-learning system which “can help digital assistants such as Apple’s Siri and Amazon’s Alexa address people’s questions with sympathy and depth.” Currently, Koko is being trained by humans, but it will then go on to teach Siri and Alexa: teachers are teaching future teachers how to be teachers, after which the latter will teach on their own. Moreover, there is no reason why chatbots, which currently need human trainers, cannot be trained–as they already are–on increasingly large corpora of natural language texts like the collections of the world’s libraries (which consist of human inputs).
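To illustrate the general point–and only that; the sketch below is not Koko, Siri, or Alexa, and its tiny ‘corpus’ is invented–here is a toy bigram model that ‘learns’ to produce text from nothing but a body of prior human writing, with no human trainer in the loop:

```python
import random
from collections import defaultdict

# A stand-in 'corpus' of prior human writing; real systems train on vastly more.
corpus = (
    "i am sorry you are feeling low . you are not alone . "
    "i am here to listen . you are heard ."
).split()

# Learn bigram transitions: which words follow which.
model = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    model[current_word].append(next_word)

def generate(start="i", length=8):
    """Generate text by walking the learned word-to-word transitions."""
    words = [start]
    for _ in range(length):
        followers = model.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate())
```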

Or consider that explainers may rely on an algorithmic system like “the Local Interpretable Model-Agnostic Explanations (LIME), which explains the underlying rationale and trustworthiness of a machine prediction”; human analysts use its output to “pinpoint the data that led to a particular result.” But why wouldn’t LIME’s functioning be enhanced to do the relevant ‘pinpointing’ itself, thus obviating the need for a human analyst?
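The open-source lime package implements the LIME technique named above; the sketch below, which substitutes a stand-in scikit-learn model and dataset since the article specifies neither, shows that the ‘pinpointing’ assigned to the human analyst–reading off the features that drove a particular prediction–is already a programmatic step:

```python
# Requires: pip install lime scikit-learn numpy
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()                                   # stand-in dataset
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(data.data, data.target)                               # stand-in model

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    discretize_continuous=True,
)

# Explain one prediction, then 'pinpoint' the influential features
# programmatically -- the step the article reserves for a human analyst.
explanation = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```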

Finally, note that “AI may become more self-governing….an AI prototype named Quixote…can learn about ethics by reading simple stories….the system is able to ‘reverse engineer’ human values through stories about how humans interact with one another. Quixote has learned…why stealing is not a good idea….even given such innovations, human ethics compliance managers will play a critical role in monitoring and helping to ensure the proper operation of advanced systems.” This sounds, rather, like the roles and responsibilities for humans in this domain will decrease over time.

There is no doubt that artificial intelligence systems require human inputs, supervision, training, and maintenance; but those roles are diminishing, not increasing; short-term, local requirements for human skills will not necessarily translate into long-term, global ones. The economic impact of the coming robotics and artificial intelligence ‘revolution’ is still unclear.

Richard Dawkins’ Inconsistent Reliance On Pragmatism

A very popular YouTube video featuring Richard Dawkins is titled ‘Science Works, Bitches.’ It periodically makes the rounds on social media; as it does, Dawkins acolytes–in the video and on social media–applaud him as he ‘smacks down’ a questioner who inquires into the ‘justification’ for the scientific method. (A familiar enough question; for instance, science relies on induction, but the justification for induction is that it has worked in the past, which is itself an inductive argument, so how do you break out of this circle without relying on some kind of ‘faith’?) Dawkins’ response is ‘It works, bitches!’ Science’s claim to rationality rests on its proven track record–going to the moon, curing disease, and so on; this is an entirely pragmatic claim with which I’m in total agreement. The success of inductive claims is part of our understanding and definition of rationality; rationality does not exist independent of our practices; they define it.

Still, the provision of this answer also reveals Dawkins’ utter dishonesty when it comes to the matter of his sustained attacks on religion over the years. For the open-mindedness and the acknowledgment of the primacy of practice on display in this answer are nowhere visible in his attitude toward religion.

Dawkins is entirely correct in noting that science is superior to religion when it comes to the business of solving certain kinds of problems. You want to make things fly; you rely on science. You want to go to the moon; you rely on science. You want to cure cancer; you rely on science. Rely on religion for any of these things and you will fail miserably. But Dawkins is simply unwilling to accept as an answer from a religious person that the justification for his or her faith is that ‘it works’ when it comes to providing a ‘solution’ for a ‘problem’ that is not of the kind specified above. At those moments, Dawkins demands a kind of ‘rational’ answer that he is himself unwilling to–and indeed, cannot–provide for science.

Consider a religious person who, when asked to ‘justify’ faith, responds, ‘It works for me when it comes to achieving the end or outcome of making me happy [or more contented, more accepting of my fate, reconciled to the death of loved ones or to my own death; the list goes on].’ Dawkins’ response would be that this is a pathetic, delusional comfort, that it is based on fairy tales and poppycock. Here too, Dawkins would demand that the religious person accept scientific answers to these questions and scientific resolutions of these ‘problems.’ Here, Dawkins would be unable to accept the pragmatic nature of the religious person’s answer that faith ‘works’ for them. Here, Dawkins would demand a ‘justified, rational, grounded in evidence’ answer; that is, he would impose standards that he is unwilling to place on the foundations of scientific reasoning.

As I noted above, pragmatism is the best justification for science and the scientific method; science works best to achieve particular ends. Dawkins is entirely right to note that religion cannot answer the kinds of questions or solve the kinds of problems science can; but he should be prepared to admit the possibility that there are questions to which religion offers answers that ‘work’ for its adherents–in preference to the alternatives. Pragmatism demands we accept this answer too; you can’t lean on pragmatism to defend science and then abandon it in your attacks on religion. That’s scientism. Which is a load of poppycock.

No, Aristotle Did Not ‘Create’ The Computer

For the past few days, an essay titled “How Aristotle Created The Computer” (The Atlantic, March 20, 2017, by Chris Dixon) has been making the rounds. It begins with the following claim:

The history of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century. Mathematical logic was pioneered by philosopher-mathematicians, most notably George Boole and Gottlob Frege, who were themselves inspired by Leibniz’s dream of a universal “concept language,” and the ancient logical system of Aristotle.

Dixon then goes on to trace this ‘history of ideas,’ showing how the development–and increasing formalization and rigor–of logic contributed to the development of computer science and the first computing devices. Along the way, Dixon makes note of the contributions–direct and indirect–of Claude Shannon, Alan Turing, George Boole, Euclid, Rene Descartes, Gottlob Frege, David Hilbert, Gottfried Leibniz, Bertrand Russell, Alfred Whitehead, Alonzo Church, and John Von Neumann. This potted history is exceedingly familiar to students of the foundations of computer science–a demographic that includes computer scientists, philosophers, and mathematical logicians–but presumably that is not the audience Dixon is writing for; those students might wonder why Augustus De Morgan and Charles Peirce do not feature in it. Given this temporally extended history, with its many contributors and their diverse contributions, why does the article carry the headline “How Aristotle Created the Computer”? Aristotle did not create the computer or anything like it; he made important contributions to a fledgling field, one that took several more centuries to develop into maturity. (The contributions to this field by logicians and systems of logic of alternative philosophical traditions, like the Indian one, are, as per usual, studiously ignored in Dixon’s history.) And as a philosopher, I cannot resist asking: what do you mean by ‘created’? What counts as ‘creating’?

The easy answer is that it is clickbait. Fair enough. We are by now used to the idiocy of the misleading clickbait headline, one designed to ‘attract’ more readers by making the piece seem more ‘interesting’; authors very often have little choice in this matter, and have to watch helplessly as hit-hungry editors mangle the impact of the actual content of their work. (As in this case?) But it is worth noting this headline’s contribution to the pernicious notion of the ‘creation’ of the computer and to the idea that it is possible to isolate a singular figure as its creator–a clear hangover of a religious sentiment that things that exist must have creation points, ‘beginnings,’ and creators. It is yet another contribution to the continued mistaken recounting of the history of science as a story of ‘towering figures.’ (Incidentally, I do not agree with Dixon that the history of computers is “better understood as a history of ideas”; that history is, instead, an integral component of the history of computing in general, which also includes a social history and an economic one; telling a history of computing as a history of objects is a perfectly reasonable thing to do when we remember that actual, functioning computers are physical instantiations of abstract notions of computation.)
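One small illustration of that last point–abstract logic made physical: Shannon’s celebrated insight was that Boole’s algebra could be realized in switching circuits, and a one-bit half-adder built from nothing but Boolean operations shows the bridge in miniature. The sketch below is illustrative only, not a hardware model:

```python
def half_adder(a: int, b: int):
    """Add two one-bit values using only Boolean operations; return (sum, carry)."""
    sum_bit = a ^ b    # XOR gate
    carry_bit = a & b  # AND gate
    return sum_bit, carry_bit

# Truth table: Boolean logic alone suffices for one-bit binary arithmetic.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```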

To end on a positive note, here are some alternative headlines: “Philosophy and Mathematics’ Contributions To The Development of Computing”; “How Philosophers and Mathematicians Helped Bring Us Computers”; or “How Philosophical Thinking Makes The Computer Possible.” None of these are as ‘sexy’ as the original headline, but they are far more informative and accurate.

Note: What do you think of my clickbaity headline for this post?