Contra Corey Pein, Computer Science Is A Science

In this day and age, sophisticated critique of technology and science is much needed. What we don’t need are critiques like this long piece in the Baffler by Corey Pein, which, I think, is trying to mount a critique of the lack of ethics education in computer science curricula but seems most concerned with asserting that computer science is not a science–relying, it appears, on the premise that “Silicon Valley activity and propaganda” = “computer science.” I fail to understand how a humanistic point is made by asserting the ‘unscientific’ nature of a purported science, but your mileage may vary. Anyway, on to Pein.


Space Exploration And The Invisible Women

Yesterday being a snow day in New York City–for school-going children and college professors alike–I spent it with my daughter at home. Diversion was necessary, and so I turned to an old friend–the growing stock of quite excellent documentaries on Netflix–for aid. My recent conversations with my daughter have touched on the topic of space exploration–itself prompted by a discussion of the Man on the Moon, which had led me to point out that actual men had been to the moon, by rocket, and indeed, had walked on it. A space exploration documentary it would be. We settled on the BBC’s ‘Rocket Men’ and off we went; I wanted to show my daughter the Apollo 11 mission in particular, as I have fond memories of watching a documentary on its flight with my parents when I was a five-year-old myself.

As the documentary began, I experienced a familiar sinking feeling: my daughter and I were going to be watching something ‘notable,’ ‘historical,’ a human achievement of some repute, and yet again, we would find few women featured prominently. Indeed, as the title itself suggests, the documentary is about men: the astronauts, the rocket scientists, the mission control specialists. The only women visible are those watching rockets blast off or worrying about the fates of their family members in them. The same thing used to happen when we watched music videos, once I had introduced my daughter to ‘guitar heroes’ as a spur to her guitar lessons. After a couple of weeks of watching the likes of Neil Young, Jimi Hendrix, Jimmy Page et al., my daughter asked me, “Don’t girls play the guitar?” Well, of course they do, and so off we went, to check out Joan Jett, Nancy Wilson, Lita Ford, Chrissie Hynde, the Deal sisters, and many others.

It had been an easy trap to fall into. In the case of music, I had a blind spot myself. In the case of space exploration, the problem lay elsewhere: there were no women pilots qualified for the astronaut program, as the initial selection of the astronaut corps came from the armed forces. Both instances, though, were united by their embedding in a culture in which women were less visible, less recognized, less likely to be promoted to the relevant pantheon. After all, as in literature and art and philosophy, women have been present in numbers that speak to their ability to surmount the social barriers placed in their paths, and yet they have still been rendered invisible because of the failure to see them and their contributions to their chosen fields of endeavor.

As I watched a video of the first seven American astronauts being introduced at a press conference, I felt I had to say something to my daughter, to explain to her why no women were to be seen in this cavalcade of handsome crew-cut men wearing aviator sunglasses. So I launched into a brief digression, explaining the selection process and why women couldn’t have been selected. My daughter listened with some bemusement and asked if things were still that way now. I said, no, but there’s work to be done. And then we returned to watching the Gemini and Apollo missions. Afterwards, I walked over to my computer and pulled up the Wikipedia entries for Valentina Tereshkova and Sally Ride and Kalpana Chawla and showed them to my daughter, promising her that we would watch documentaries on them too. She seemed suitably enthused.

Neuroscience’s Inference Problem And The Perils Of Scientific Reduction

In Science’s Inference Problem: When Data Doesn’t Mean What We Think It Does, while reviewing Jerome Kagan‘s Five Constraints on Predicting Behavior, James Ryerson writes:

Perhaps the most difficult challenge Kagan describes is the mismatching of the respective concepts and terminologies of brain science and psychology. Because neuroscientists lack a “rich biological vocabulary” for the variety of brain states, they can be tempted to borrow correlates from psychology before they have shown there is in fact a correlation. On the psychology side, many concepts can be faddish or otherwise short-lived, which should make you skeptical that today’s psychological terms will “map neatly” onto information about the brain. If fMRI machines had been available a century ago, Kagan points out, we would have been searching for the neurological basis of Pavlov’s “freedom reflex” or Freud’s “oral stage” of development, no doubt in vain. Why should we be any more confident that today’s psychological concepts will prove any better at cutting nature at the joints?

In a review of Theory and Method in the Neurosciences (Peter K. Machamer, Rick Grush, Peter McLaughlin (eds), University of Pittsburgh Press, 2001), I made note¹ of related epistemological concerns:

When experiments are carried out, neuroscientists continue to run into problems. The level of experimental control available to practitioners in other sciences is simply not available to them, and the theorising that results often seems to be on shaky ground….The localisation techniques that are amongst the most common in neuroscience rely on experimental methods such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and magnetoencephalography (MEG). [In PET] a radioactive tracer consisting of labelled water or glucose analogue molecules is injected into a subject, who is then asked to perform a cognitive task under controlled conditions. The tracer decays and emits positrons and gamma rays, which are detected where blood flow or glucose metabolism has increased in an area of the brain. It is then assumed that this area is responsible for the cognitive function performed by the subject. The problem with this assumption, of course, is that the increased blood flow might occur in one area, and the relevant neural activity might occur in another, or in no particular area at all….this form of investigation, rather than pointing to the modularity and functional decomposability of the brain, merely assumes it.

The fundamental problem–implicit and explicit in Kagan’s book and my little note above–is the urge to ‘reduce’ psychology to neuroscience, to reduce mind to brain, to eliminate psychological explanations and language in favor of neuroscientific ones, which will introduce precise scientific language in place of imprecise psychological descriptions. This urge to eliminate one level of explanation in favor of a ‘better, lower, more basic, more fundamental’ one is, to put it bluntly, scientistic hubris, and the various challenges Kagan outlines in his book bear out the foolishness of this enterprise. It results in explanations and theories that rest on unstable foundations: optimistic correlations and glib assumptions are the least of it. Worst of all, it contributes to a blindness: what is visible at the level of psychology is not visible at the level of neuroscience. Knowledge should enlighten, not render us myopic.

Note: In Metascience, 11(1): March 2002.

The Fragile Digital World Described By Zeynep Tufekci Invites Smashing

In “The Looming Digital Meltdown” (New York Times, January 7th), Zeynep Tufekci writes,

We have built the digital world too rapidly. It was constructed layer upon layer, and many of the early layers were never meant to guard so many valuable things: our personal correspondence, our finances, the very infrastructure of our lives. Design shortcuts and other techniques for optimization — in particular, sacrificing security for speed or memory space — may have made sense when computers played a relatively small role in our lives. But those early layers are now emerging as enormous liabilities. The vulnerabilities announced last week have been around for decades, perhaps lurking unnoticed by anyone or perhaps long exploited.

This digital world is intertwined with, works for, and is used by, an increasingly problematic social, economic, and political post-colonial and post-imperial world, one riven by political crisis and economic inequality, playing host to an increasingly desperate polity sustained and driven, all too often, by a rage and anger grounded in humiliation and shame. Within this world, all too many have had their noses rubbed in the dirt of their colonial and subjugated pasts, reminded again and again and again of how they are backward and poor and dispossessed and shameful, of how they need to play ‘catch up,’ to show that they are ‘modern’ and ‘advanced’ and ‘developed’ in all the right ways. The technology of the digital world has always been understood as the golden road to the future; it is what will make the journey to the land of the developed possible. Bridge the technological gap; all will be well. This digital world also brought with it the arms of the new age: the viruses, the Trojan horses, the malware, the new weapons promising to reduce the gaping disparity between the rich and the poor, between North and South, between East and West–when it comes to the size of their conventional and nuclear arsenals, a disparity that allows certain countries to bomb yet others with impunity, from up close or from afar. The ‘backward world,’ the ‘poor,’ the ‘developing countries’ have understood that besides nuclear weapons, digital weapons can also keep them safe, by threatening to bring the digital worlds of their opponents to their knees–perhaps the malware that knocks out a reactor, or a city’s electric supply, or something else.

The marriage of a nihilistic anger with the technical nous of the digital weapon maker and the security vulnerabilities of the digital world is a recipe for disaster. This world, this glittering world, its riches all dressed up and packaged and placed out of reach, invites resentful assault. The digital world, the basket in which so many of this world’s eggs have been placed, invites smashing; and a nihilistic hacker might just be the person to do it. An arsenal of drones and cruise missiles and ICBMs will not be much of a defense against the insidious Trojan horse, artfully placed to do the most damage to a digital installation. Self-serving security experts, all hungering for the highly-paid consulting gig, have long talked up this threat; but their greed does not make the threat any less real.

Neil deGrasse Tyson And The Perils Of Facile Reductionism

You know the shtick by now–or at least, twitterers and tweeters do. Every few weeks, Neil deGrasse Tyson, one of America’s most popular public ‘scientific’ intellectuals, decides that it is time to describe some social construct in scientific language to show how ‘arbitrary’ and ‘made-up’ it all is–compared to the sheer factitude, the amazing reality-grounded non-arbitrariness, of scientific knowledge. Consider, for instance, this latest gem, now predictably provoking ridicule from those who found its issuance predictable and tired:

Not that anybody’s asked, but New Years Day on the Gregorian Calendar is a cosmically arbitrary event, carrying no Astronomical significance at all.

A week earlier, Tyson had tweeted:

Merry Christmas to the world’s 2.5 billion Christians. And to the remaining 5 billion people, including Muslims Atheists Hindus Buddhists Animists & Jews, Happy Monday.

Tyson, I think, imagines that he is bringing science to the masses; that he is dispelling the ignorance cast by the veil of imprecise, arbitrary, subjective language that ‘ordinary folk’ use by directing their attention to scientific language, which when used, shows how ridiculous those ‘ordinary folk’ affectations are. Your birthday? Just a date. That date? A ‘cosmically arbitrary event.’ Your child’s laughter? Just sound waves colliding with your eardrum. That friendly smile beamed at you by your school mate? Just facial muscles being stretched. And so on. It’s really easy; almost mechanical. I could, if I wanted, set up a bot-run Neil deGrasse Tyson Parody account on Twitter, and just issue these every once in a while. Easy pickings.

Does Tyson imagine that he is engaging in some form of ‘scientific communication’ here, bringing science to the masses? Does he imagine he is introducing greater precision and fidelity to truth in our everyday conversation and discourse, cleaning up the degraded Augean stables of internet chatter? He might think so, but what Tyson is actually engaged in is displaying the perils of facile reductionism and the scientism it invariably accompanies and embellishes; anything can be redescribed in scientific language, but that does not mean such redescription is necessary or desirable or even moderately useful. All too often such redescription results in not talking about the ‘same thing’ any more. (All that great literature? Just ink on paper! You know, a chemical pigment on a piece of treated wood pulp.)

There are many ways of talking about the world; science is one of them. Science lets us do many things; other ways of talking about the world let us do other things. Scientific language is a tool; it lets us solve some problems really well; other languages–like those of poetry, psychology, literature, legal theory–help us solve others. The views of the world they introduce show us many things; different objects appear in different views depending on the language adopted. As a result, we are ‘multi-scopic’ creatures; at any time, we entertain multiple perspectives on this world and work with them, shifting between them as our wants and needs require. To figure out what clothes to wear today, I consulted the resources of meteorology; to get a fellow human being to come to my aid, I used elementary folk psychology, not neuroscience; to crack a joke and break the ice with co-workers, I relied on humor that deployed imaginary entities. Different tasks; different languages; different tools; this is the basis of the pragmatic attitude, which underwrites the science that Tyson claims to revere.

Tyson has famously dissed philosophy of science and just philosophy in general; his tweeting shows that he would greatly benefit from a philosophy class or two himself.

Contra Cathy O’Neil, The ‘Ivory Tower’ Does Not ‘Ignore Tech’

In ‘Ivory Tower Cannot Keep On Ignoring Tech,’ Cathy O’Neil writes:

We need academia to step up to fill in the gaps in our collective understanding about the new role of technology in shaping our lives. We need robust research on hiring algorithms that seem to filter out people with mental health disorders…we need research to ensure that the same mistakes aren’t made again and again. It’s absolutely within the abilities of academic research to study such examples and to push against the most obvious statistical, ethical or constitutional failures and dedicate serious intellectual energy to finding solutions. And whereas professional technologists working at private companies are not in a position to critique their own work, academics theoretically enjoy much more freedom of inquiry.

There is essentially no distinct field of academic study that takes seriously the responsibility of understanding and critiquing the role of technology — and specifically, the algorithms that are responsible for so many decisions — in our lives. That’s not surprising. Which academic department is going to give up a valuable tenure line to devote to this, given how much academic departments fight over resources already?

O’Neil’s piece is an unfortunate continuation of a trend of castigating academia for its lack of social responsibility, all the while ignoring the work academics do in precisely those domains where their absence is supposedly felt.

In her Op-Ed, O’Neil ignores science and technology studies, a field of study that “takes seriously the responsibility of understanding and critiquing the role of technology,” and many of whose members are engaged in precisely the kind of studies she thinks should be undertaken at this moment in the history of technology. Moreover, there are fields of academic study such as philosophy of science, philosophy of technology, and the sociology of knowledge, all of which take very seriously the task of examining and critiquing the conceptual foundations of science and technology; such inquiries are not merely elucidatory, they are very often critical and skeptical. Such disciplines, then, produce work that makes both descriptive and prescriptive claims about the practice of science, and about the social, political, and ethical values that underwrite what may seem like purely ‘technical’ decisions pertaining to design and implementation. The humanities are not alone in this regard: most computer science departments now require a class in ‘Computer Ethics’ as part of the requirements for their major (indeed, I designed one such class here at Brooklyn College, and taught it for a few semesters). And of course, legal academics have, in recent years, started to pay attention to these fields and incorporated them into their writings on ‘algorithmic decision making,’ ‘algorithmic control,’ and so on. (The work of Frank Pasquale and Danielle Citron is notable in this regard.) If O’Neil is interested, she could dig deeper into the philosophical canon and read works by critical theorists like Herbert Marcuse and Max Horkheimer, who mounted rigorous critiques of scientism, reductionism, and positivism. Lastly, O’Neil could read my co-authored work Decoding Liberation: The Promise of Free and Open Source Software, a central claim of which is that transparency, not opacity, should be the guiding principle for software design and deployment. I’d be happy to send her a copy if she so desires.

Will Artificial Intelligence Create More Jobs Than It Eliminates? Maybe

Over at the MIT Sloan Management Review, H. James Wilson, Paul R. Daugherty, and Nicola Morini-Bianzino strike an optimistic note as they respond to the “distressing picture” created by “the threat that automation will eliminate a broad swath of jobs across the world economy [for] as artificial intelligence (AI) systems become ever more sophisticated, another wave of job displacement will almost certainly occur.” Wilson, Daugherty, and Morini-Bianzino suggest that we’ve been “overlooking” the fact that “many new jobs will also be created — jobs that look nothing like those that exist today.” They go on to identify three key positions:

Trainers

This first category of new jobs will need human workers to teach AI systems how they should perform — and it is emerging rapidly. At one end of the spectrum, trainers help natural-language processors and language translators make fewer errors. At the other end, they teach AI algorithms how to mimic human behaviors.

Explainers

The second category of new jobs — explainers — will bridge the gap between technologists and business leaders. Explainers will help provide clarity, which is becoming all the more important as AI systems’ opaqueness increases.

Sustainers

The final category of new jobs our research identified — sustainers — will help ensure that AI systems are operating as designed and that unintended consequences are addressed with the appropriate urgency.

Unfortunately for these claims, it is all too evident that these positions themselves are vulnerable to automation. That is, future trainers, explainers, and sustainers–if their job descriptions match the ones given above–will also be automated. Nothing in those descriptions indicates that humans are indispensable components of the learning processes they are currently facilitating.

Consider that the ‘empathy trainer’ Koko is a machine-learning system which “can help digital assistants such as Apple’s Siri and Amazon’s Alexa address people’s questions with sympathy and depth.” Currently, Koko is being trained by a human; it will then go on to teach Siri and Alexa. Teachers are teaching future teachers how to be teachers, after which the latter will teach on their own. Moreover, there is no reason why chatbots, which currently need human trainers, cannot be trained–as they already are–on increasingly large corpora of natural language texts like the collections of the world’s libraries (which consist of human inputs).
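To make that point concrete, here is a deliberately toy sketch of corpus-driven training–my own illustration, not anything described in the article: a word-level Markov model ‘learns’ to generate text from whatever corpus it is handed, with no trainer in the loop beyond the humans who originally wrote the corpus.

```python
import random
from collections import defaultdict

def train(corpus_text, order=2):
    """Fit a word-level Markov model from raw text; no human trainer in the loop."""
    words = corpus_text.split()
    transitions = defaultdict(list)
    for i in range(len(words) - order):
        # Record which word follows each 'order'-length window of words.
        transitions[tuple(words[i:i + order])].append(words[i + order])
    return transitions

def generate(transitions, length=20):
    """Generate text by sampling the learned word transitions."""
    state = random.choice(list(transitions.keys()))
    output = list(state)
    for _ in range(length):
        followers = transitions.get(tuple(output[-len(state):]))
        if not followers:
            break
        output.append(random.choice(followers))
    return " ".join(output)

# Any corpus of natural-language text will do; the larger, the better.
corpus = "the cat sat on the mat and the dog sat on the rug and the cat saw the dog"
print(generate(train(corpus)))
```

Swap the toy corpus for a library-scale one and the Markov model for a modern learning architecture, and the shape of the process is the same: the system learns from recorded human output rather than from a live human trainer.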

Or consider that explainers may rely on an algorithmic system like “the Local Interpretable Model-Agnostic Explanations (LIME), which explains the underlying rationale and trustworthiness of a machine prediction”; human analysts use its output to “pinpoint the data that led to a particular result.” But why wouldn’t LIME’s functioning be enhanced to do the relevant ‘pinpointing’ itself, thus obviating the need for a human analyst?
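A brief aside on what that ‘pinpointing’ already looks like in practice. The sketch below uses the open-source lime Python package (the reference implementation of the technique) with a scikit-learn classifier standing in for the opaque model; the article does not specify any particular setup, so the dataset and model here are illustrative assumptions only.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# An opaque model whose individual predictions we want explained.
data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# LIME fits a local, interpretable surrogate around a single prediction.
explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Which features pushed this one instance toward its predicted class?
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Most of the analyst’s work here is reading a ranked list of feature weights the library has already computed; it is not hard to imagine that last step being folded into the pipeline as well.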

Finally, note that “AI may become more self-governing….an AI prototype named Quixote…can learn about ethics by reading simple stories….the system is able to “reverse engineer” human values through stories about how humans interact with one another. Quixote has learned…why stealing is not a good idea….even given such innovations, human ethics compliance managers will play a critical role in monitoring and helping to ensure the proper operation of advanced systems.” This sounds like the roles and responsibilities for humans in this domain will decrease over time.

There is no doubt that artificial intelligence systems require human inputs, supervision, training, and maintenance; but those roles are diminishing, not increasing; short-term, local requirements for human skills will not necessarily translate into long-term, global ones. The economic impact of the coming robotics and artificial intelligence ‘revolution’ is still unclear.