Contra Cathy O’Neil, The ‘Ivory Tower’ Does Not ‘Ignore Tech’

In ‘Ivory Tower Cannot Keep On Ignoring Tech,’ Cathy O’Neil writes:

We need academia to step up to fill in the gaps in our collective understanding about the new role of technology in shaping our lives. We need robust research on hiring algorithms that seem to filter out people with mental health disorders…we need research to ensure that the same mistakes aren’t made again and again. It’s absolutely within the abilities of academic research to study such examples and to push against the most obvious statistical, ethical or constitutional failures and dedicate serious intellectual energy to finding solutions. And whereas professional technologists working at private companies are not in a position to critique their own work, academics theoretically enjoy much more freedom of inquiry.

There is essentially no distinct field of academic study that takes seriously the responsibility of understanding and critiquing the role of technology — and specifically, the algorithms that are responsible for so many decisions — in our lives. That’s not surprising. Which academic department is going to give up a valuable tenure line to devote to this, given how much academic departments fight over resources already?

O’Neil’s piece continues an unfortunate trend: castigating academia for its lack of social responsibility, all the while ignoring the work academics do in precisely those domains where their absence is supposedly felt.

In her Op-Ed, O’Neil ignores science and technology studies, a field of study that “takes seriously the responsibility of understanding and critiquing the role of technology,” and many of whose members are engaged in precisely the kind of studies she thinks should be undertaken at this moment in the history of technology. Moreover, there are fields of academic study such as the philosophy of science, the philosophy of technology, and the sociology of knowledge, all of which take very seriously the task of examining and critiquing the conceptual foundations of science and technology; such inquiries are not merely elucidatory, they are very often critical and skeptical. Such disciplines, then, produce work that makes both descriptive and prescriptive claims about the practice of science, and about the social, political, and ethical values that underwrite what may seem like purely ‘technical’ decisions pertaining to design and implementation. The humanities are not alone in this regard: most computer science departments now require a class in ‘Computer Ethics’ as part of the requirements for their major (indeed, I designed one such class here at Brooklyn College and taught it for a few semesters). And of course, legal academics have, in recent years, started to pay attention to these fields and to incorporate them into their writings on ‘algorithmic decision making,’ ‘algorithmic control,’ and so on. (The work of Frank Pasquale and Danielle Citron is notable in this regard.)

If O’Neil is interested, she could dig deeper into the philosophical canon and read works by critical theorists like Herbert Marcuse and Max Horkheimer, who mounted rigorous critiques of scientism, reductionism, and positivism in their works. Lastly, O’Neil could read my co-authored work Decoding Liberation: The Promise of Free and Open Source Software, a central claim of which is that transparency, not opacity, should be the guiding principle for software design and deployment. I’d be happy to send her a copy if she so desires.

Will Artificial Intelligence Create More Jobs Than It Eliminates? Maybe

Over at the MIT Sloan Management Review, H. James Wilson, Paul R. Daugherty, and Nicola Morini-Bianzino strike an optimistic note as they respond to the “distressing picture” created by “the threat that automation will eliminate a broad swath of jobs across the world economy [for] as artificial intelligence (AI) systems become ever more sophisticated, another wave of job displacement will almost certainly occur.” Wilson, Daugherty, and Morini-Bianzino suggest that we’ve been “overlooking” the fact that “many new jobs will also be created — jobs that look nothing like those that exist today.” They go on to identify three key positions:

Trainers

This first category of new jobs will need human workers to teach AI systems how they should perform — and it is emerging rapidly. At one end of the spectrum, trainers help natural-language processors and language translators make fewer errors. At the other end, they teach AI algorithms how to mimic human behaviors.

Explainers

The second category of new jobs — explainers — will bridge the gap between technologists and business leaders. Explainers will help provide clarity, which is becoming all the more important as AI systems’ opaqueness increases.

Sustainers

The final category of new jobs our research identified — sustainers — will help ensure that AI systems are operating as designed and that unintended consequences are addressed with the appropriate urgency.

Unfortunately for these claims, it is all too evident that these positions themselves are vulnerable to automation. That is, future trainers, explainers, and sustainers–if their job descriptions match the ones given above–will also be automated. Nothing in those descriptions indicates that humans are indispensable components of the learning processes they are currently facilitating.

Consider that the ‘empathy trainer’ Koko is a machine-learning system which “can help digital assistants such as Apple’s Siri and Amazon’s Alexa address people’s questions with sympathy and depth.” Currently, Koko is being trained by humans; it will then go on to teach Siri and Alexa. Teachers are teaching future teachers how to be teachers, after which those future teachers will teach on their own. Moreover, there is no reason why chatbots, which currently need human trainers, cannot be trained–as they already are–on increasingly large corpora of natural language texts like the collections of the world’s libraries (which consist of human inputs).

Or consider that explainers may rely on an algorithmic system like “the Local Interpretable Model-Agnostic Explanations (LIME), which explains the underlying rationale and trustworthiness of a machine prediction”; human analysts use its output to “pinpoint the data that led to a particular result.” But why wouldn’t LIME’s functioning be enhanced to do the relevant ‘pinpointing’ itself, thus obviating the need for a human analyst?
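To make that ‘pinpointing’ concrete, here is a minimal sketch of the kind of workflow a human ‘explainer’ might run today, using the open-source Python lime package; the scikit-learn dataset and random-forest model are illustrative choices of mine, not anything drawn from the article.

```python
# A minimal sketch of an 'explainer' workflow (assumes scikit-learn and lime are installed).
# The dataset, model, and parameters below are illustrative, not from the article.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

# Train an opaque model whose individual predictions we want explained.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME fits a simple surrogate model locally around one prediction and reports
# which features pushed that prediction one way or the other.
explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)

# The 'pinpointing' an analyst would read off: feature -> weight pairs.
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The question in the paragraph above stands: nothing in this loop requires that the final step, reading off and reporting the weights, be done by a human.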

Finally, note that “AI may become more self-governing….an AI prototype named Quixote…can learn about ethics by reading simple stories….the system is able to “reverse engineer” human values through stories about how humans interact with one another. Quixote has learned…why stealing is not a good idea….even given such innovations, human ethics compliance managers will play a critical role in monitoring and helping to ensure the proper operation of advanced systems.” This sounds like the roles and responsibilities for humans in this domain will decrease over time.

There is no doubt that artificial intelligence systems require human inputs, supervision, training, and maintenance; but those roles are diminishing, not increasing; short-term, local requirements for human skills will not necessarily translate into long-term, global ones. The economic impact of the coming robotics and artificial intelligence ‘revolution’ is still unclear.

Richard Dawkins’ Inconsistent Reliance On Pragmatism

A very popular video on YouTube featuring Richard Dawkins is titled ‘Science Works, Bitches.’ It periodically makes the rounds on social media; as it does, Dawkins acolytes–in the video and on social media–applaud him as he ‘smacks down’ a questioner who inquires into the ‘justification’ for the scientific method. (A familiar enough question; for instance, science relies on induction, but the justification for induction is that it has worked in the past, which is itself an inductive argument, so how do you break out of this circle without relying on some kind of ‘faith’?) Dawkins’ response is ‘It works, bitches!’ Science’s claim to rationality rests on its proven track record–going to the moon, curing disease, and so on; this is an entirely pragmatic claim with which I’m in total agreement. The success of inductive claims is part of our understanding and definition of rationality; rationality does not exist independently of our practices; they define it.

Still, the provision of this answer also reveals Dawkins’ utter dishonesty when it comes to the matter of his sustained attacks on religion over the years. For the open-mindedness and the acknowledgment of the primacy of practice that are on display in this answer are nowhere visible in his attitude toward religion.

Dawkins is entirely correct in noting that science is superior to religion when it comes to the business of solving certain kinds of problems. You want to make things fly; you rely on science. You want to go to the moon; you rely on science. You want to cure cancer; you rely on science. Rely on religion for any of these things and you will fail miserably. But Dawkins will be simply unwilling to accept as an answer from a religious person that the justification for his or her faith is that ‘it works’ when it comes to providing a ‘solution’ for a ‘problem’ that is not of the kind specified above. At those moments, Dawkins will demand a kind of ‘rational’ answer that he is himself unwilling to–and indeed, cannot–provide for science.

Consider a religious person who, when asked to ‘justify’ faith, responds ‘It works for me when it comes to achieving the end or the outcome of making me happy [or more contented, more accepting of my fate, reconciling myself to the death of loved ones or my own death; the list goes on.]’ Dawkins’ response would be that this is a pathetic, delusional comfort, one based on fairy tales and poppycock. Here too, Dawkins would demand that the religious person accept scientific answers to these questions and scientific resolutions of these ‘problems.’ Here, Dawkins would be unable to accept the pragmatic nature of the religious person’s answer that faith ‘works’ for them. Here, Dawkins would demand a ‘justified, rational, grounded in evidence’ answer; that is, he would impose standards that he is unwilling to place on the foundations of scientific reasoning.

As I noted above, pragmatism is the best justification for science and the scientific method; science works best to achieve particular ends. Dawkins is entirely right to note that religion cannot answer the kinds of questions or solve the kinds of problems science can; but he should also be prepared to admit the possibility that there are questions to which religion offers answers that ‘work’ for its adherents–in preference to the alternatives. Pragmatism demands we accept this answer too; you can’t lean on pragmatism to defend science, and then abandon it in your attacks on religion. That’s scientism. Which is a load of poppycock.

No, Aristotle Did Not ‘Create’ The Computer

For the past few days, an essay titled “How Aristotle Created The Computer” (The Atlantic, March 20, 2017, by Chris Dixon) has been making the rounds. It begins with the following claim:

The history of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century. Mathematical logic was pioneered by philosopher-mathematicians, most notably George Boole and Gottlob Frege, who were themselves inspired by Leibniz’s dream of a universal “concept language,” and the ancient logical system of Aristotle.

Dixon then goes on to trace this ‘history of ideas,’ showing how the development–and increasing formalization and rigor–of logic contributed to the development of computer science and the first computing devices. Along the way, Dixon makes note of the contributions–direct and indirect–of Claude Shannon, Alan Turing, George Boole, Euclid, Rene Descartes, Gottlob Frege, David Hilbert, Gottfried Leibniz, Bertrand Russell, Alfred Whitehead, Alonzo Church, and John Von Neumann. This potted history is exceedingly familiar to students of the foundations of computer science–a demographic that includes computer scientists, philosophers, and mathematical logicians–but presumably that is not the audience Dixon is writing for; those students might wonder why Augustus De Morgan and Charles Peirce do not feature in it. Given this temporally extended history, with its many contributors and their diverse contributions, why does the article carry the headline “How Aristotle Created the Computer”? Aristotle did not create the computer or anything like it; he did make important contributions to a fledgling field, one that took several more centuries to develop into maturity. (The contributions to this field by logicians and systems of logic of alternative philosophical traditions like the Indian one are, as per usual, studiously ignored in Dixon’s history.) And as a philosopher, I cannot resist asking: what do you mean by ‘created’? What counts as ‘creating’?

The easy answer is that it is clickbait. Fair enough. We are by now used to the idiocy of the misleading clickbait headline, one designed to ‘attract’ more readers by making the piece seem more ‘interesting’; authors very often have little choice in this matter, and have to watch helplessly as hit-hungry editors mangle the impact of the actual content of their work. (As in this case?) But it is worth noting this headline’s contribution to the pernicious notion of the ‘creation’ of the computer and to the idea that it is possible to isolate a singular figure as its creator–a clear hangover of a religious sentiment that things that exist must have creation points, ‘beginnings,’ and creators. It is yet another contribution to the continued mistaken recounting of the history of science as a story of ‘towering figures.’ (Incidentally, I do not agree with Dixon that the history of computers is “better understood as a history of ideas”; that history is, instead, an integral component of the history of computing in general, which also includes a social history and an economic one; telling the history of computing as a history of objects is a perfectly reasonable thing to do when we remember that actual, functioning computers are physical instantiations of abstract notions of computation.)
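The last point can be illustrated with a toy sketch of my own (not Dixon’s): a one-bit ‘half adder’ written entirely with the Boolean connectives Boole formalized. A physical circuit realizes the same two expressions with an XOR gate and an AND gate, which is the sense in which hardware instantiates abstract logic.

```python
# A toy half adder: binary addition expressed purely in Boolean logic.
# Hardware implements the same two expressions with an XOR gate and an AND gate.
def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    """Add two one-bit values, returning (sum_bit, carry_bit)."""
    sum_bit = (a or b) and not (a and b)  # exclusive-or, built from and/or/not
    carry_bit = a and b
    return sum_bit, carry_bit

for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"{int(a)} + {int(b)} -> carry={int(c)}, sum={int(s)}")
```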

To end on a positive note, here are some alternative headlines: “Philosophy and Mathematics’ Contributions To The Development of Computing”; “How Philosophers and Mathematicians Helped Bring Us Computers”; or “How Philosophical Thinking Makes The Computer Possible.” None of these are as ‘sexy’ as the original headline, but they are far more informative and accurate.

Note: What do you think of my clickbaity headline for this post?

Epistemology and ‘The Leftovers’

Imagine that an extremely improbable event occurs, one for which there was no warning; your best theories of the world assigned it a near-zero probability (indeed, so low was this probability that calculating it would have been a waste of time). This event is inexplicable–no explanations for it are forthcoming, and it cannot be fitted into the explanatory frameworks employed by your current conceptual schemes. What effect would this have on your theory of knowledge, your epistemology, the beliefs you form, and the justifications you consider acceptable for them?

This question is raised with varying degrees of explicitness in HBO’s The Leftovers–which deals with the aftermath of the sudden disappearance of approximately two percent of the earth’s population. ‘The Departure’ selected its ‘victims’ at random; no pattern appeared to connect the victims to each other. The ‘departures’ all happened at the same time, and they left no trace. There is no sign of them anymore; two percent of the world’s population has been vaporized. Literally.

The Leftovers is not a very good show, and I’m not sure I will watch it anymore (two seasons have been enough). It did, however, afford me an opportunity to engage in the philosophical reflection I note above.

One phenomenon that should manifest itself in the aftermath of an event like ‘The Departure’ would be the formation of all kinds of ‘cults,’ groups united by beliefs formerly considered improbable but which now find a new lease on life because the metaphysical reasonableness of the world has taken such a beating. Critics of these cults would find that the solid foundations of their previous critiques had disappeared; if ‘The Departure’ could happen, then so could a great deal else. The Leftovers features some cults and their ‘gullible’ followers but does little of any great interest with them–lost opportunities abound in this show, perhaps an entirely unsurprising denouement given that its creators were responsible for the atrocity called Lost.

As one of the characters notes in the second season, ‘The Departure’ made the holding of ‘false beliefs’ more respectable than it had ever been. And as yet another character notes in the first season, that old knockdown maneuver, the one used to dismiss an implausible claim made by someone else–‘the laws of nature won’t allow that’–is simply not available anymore. Science used to tell us that its knowledge was defeasible; but now that that dreaded moment is upon us–now that we confront evidence of the universe’s non-uniformity, irregularity, and non-conformance with scientific laws–what are we to do? In The Leftovers a scientific effort gets underway to determine whether geographical location was determinative of the victims’ susceptibility to being ‘departured,’ but it seems like grasping at straws, a pathetic and hopeless attempt to shoehorn ‘The Departure’ into extant scientific frameworks.

So, in the aftermath of ‘The Departure,’ we reside in a zone of epistemic confusion: we do not know how to assign probabilities to our beliefs anymore, for the definitions of ‘likely’ and ‘unlikely’ seem to have been radically altered. That old ‘you never know’ has taken on a far more menacing tone. Only the resumption of the ‘normal’ stream of events for a sufficiently long period of time can heal this epistemic and metaphysical rupture; it will be a while before our sense of this world’s apparent predictability returns. But even then, every argument about the plausibility or implausibility of some epistemic claim will take place in the shadow of that catastrophic disruption of ‘reality’; the reasonableness of this world will always appear just a tad suspect.

Stephen Jay Gould’s Weak Argument For Science And Religion’s ‘Separate Domains’

Stephen Jay Gould’s famous ‘Two Separate Domains’ argues, roughly, that religion and science operate in different domains of inquiry, and as such do not conflict with each other:

We get the age of rocks, and religion retains the rock of ages; we study how the heavens go, and they determine how to go to heaven.

Or, science gets the descriptive and the quantitative, religion gets the prescriptive and the qualitative. Facts on one side; values on the other.

‘Two Separate Domains’ is an essay I read some years ago; yesterday, I discussed it with my philosophy of religion class. On this revisitation, I was struck by how weak and narrowly focused Gould’s arguments are.

Most crucially, Gould is almost entirely concerned with responding to a very particular religious tradition: Christianity. Moreover, within that, he takes himself to be pushing back against that species of Protestant fundamentalism which would indulge in literal interpretations of the Bible to promulgate creationism:

I do not doubt that one could find an occasional nun who would prefer to teach creationism in her parochial school biology class or an occasional orthodox rabbi who does the same in his yeshiva, but creationism based on biblical literalism makes little sense in either Catholicism or Judaism for neither religion maintains any extensive tradition for reading the Bible as literal truth rather than illuminating literature, based partly on metaphor and allegory…and demanding interpretation for proper understanding. Most Protestant groups, of course, take the same position—the fundamentalist fringe notwithstanding.

Later in the essay, Gould concentrates on responding to a pair of Papal statements on the subject of evolution–Pius XII’s 1950 encyclical and John Paul II’s 1996 address–the differences between which (the latter takes on board the scientific evidence for evolution) Gould takes as evidence of the Church’s flexibility in responding to scientific findings in a manner that preserves its own ‘non-overlapping magisteria.’

Several problems now present themselves. First, there is a diversity of hermeneutical approaches in different religious traditions, with varying reliance on metaphorical, allegorical, literal, or historically contextualized readings, and these generate conflicts of varying degrees with the content of scientific statements. (As a student in my class said, getting rid of literal interpretations in Islam would remove, for many followers, their reason for believing in the Koran’s claims.) Second, Gould relies on an untenable fact-value distinction: science’s empirical claims are infused with value-laden choices, and religion’s value-laden claims rest on empirical foundations (neither domain of inquiry offers purely descriptive or purely prescriptive claims; the two are thus entangled). Third, and perhaps most crucially in my opinion, Gould’s task is made considerably easier–at least apparently, in this essay–by his concentrating on a religious tradition which has a central church–the Catholic–with an authoritative head, the Pope, who issues documents that articulate a position representative of the religious institution and that can be expected to serve as instruction for its many followers’ practices and beliefs. That is, that religion’s practices can be usefully understood as being guided by such institutions, persons, and writings–they are representative of it. Such is obviously not the case with many other religious traditions, and I simply cannot see Gould’s strategy working for Islam or Judaism or Hinduism. (Buddhism is another matter altogether.)

Gould’s irenic stance is admirable, but I cannot see that the strategy adopted in this essay advances his central thesis very much.

Pigliucci And Shaw On The Allegedly Useful Reduction

Massimo Pigliucci critiques the uncritical reductionism that the conflation of philosophy and science brings in its wake, using as a jumping-off point Tamsin Shaw’s essay in the New York Review of Books, which addresses psychologists’ claims “that human beings are not rational, but rather rationalizing, and that one of the things we rationalize most about is ethics.” Pigliucci notes that Shaw’s targets “Jonathan Haidt, Steven Pinker, Paul Bloom, Joshua Greene and a number of others…make the same kind of fundamental mistake [a category mistake], regardless of the quality of their empirical research.”

Pigliucci highlights Shaw’s critique of Joshua Greene’s claims that “neuroscientific data can test ethical theories, concluding that there is empirical evidence for utilitarianism.” Shaw had noted that:

Greene interpreted these results in the light of an unverifiable and unfalsifiable story about evolutionary psychology … Greene inferred … that the slower mechanisms we see in the brain are a later development and are superior because morality is properly concerned with impersonal values … [But] the claim here is that personal factors are morally irrelevant, so the neural and psychological processes that track such factors in each person cannot be relied on to support moral propositions or guide moral decisions. Greene’s controversial philosophical claim is simply presupposed; it is in no way motivated by the findings of science. An understanding of the neural correlates of reasoning can tell us nothing about whether the outcome of this reasoning is justified.

At this point Pigliucci intervenes:

Let me interject here with my favorite analogy to explain why exactly Greene’s reasoning doesn’t hold up: mathematics. Imagine we subjected a number of individuals to fMRI scanning of their brain activity while they are in the process of tackling mathematical problems. I am positive that we would conclude the following…

There are certain areas, and not others, of the brain that lit up when a person is engaged with a mathematical problem.

There is probably variation in the human population for the level of activity, and possibly even the size or micro-anatomy, of these areas.

There is some sort of evolutionary psychological story that can be told for why the ability to carry out simple mathematical or abstract reasoning may have been adaptive in the Pleistocene (though it would be much harder to come up with a similar story that justifies the ability of some people to understand advanced math, or to solve Fermat’s Last Theorem).

But none of the above will tell us anything at all about whether the subjects in the experiment got the math right. Only a mathematician — not a neuroscientist, not an evolutionary psychologist — can tell us that.

Correct. Now imagine an ambitious neuroscientist who claims his science has really, really advanced, and indeed, imaging technology has improved so much that Pigliucci’s first premise above should be changed to:

There are certain areas, and not others, of the brain that lit up when a person is working out the correct solution to a particular mathematical problem.

So, contra Pigliucci’s claim above, neuroscience will tell us a great deal about whether the subjects in the experiment got the math right. Our funky imaging science and technology makes that possible now. At this stage, the triumphant reductionist says, “We’ve reduced the doing of mathematics to doing neuroscience; when you think you are doing mathematics, all that is happening is that a bunch of neurons are firing in the following patterns and particular parts of your brain are lighting up. We can now tell an evolutionary psychology story about why the ability to reason correctly may have been adaptive.”

But we may ask: Should the presence of such technology mean we should stop doing mathematics? Have we learned, as a result of such imaging studies, how to do mathematics correctly? We know that when our brains are in particular states, they can be interpreted as doing mathematical problems–‘this activity means you are doing a math problem in this fashion.’ A mathematician looks at proofs; a neuroscientist would look at the corresponding brain scans. We know when one corresponds to the other. This is perhaps useful for comparing math-brain-states with poetry-brain-states, but it won’t tell us how to write poetry or proofs for theorems. It does not tell us how humans would produce those proofs (or those brain states in their brains). If a perverse neuroscientist were to suggest that the right way to do mathematics now would be to aim to put your brain into the states suggested by the imaging machines, we would note that we already have a perfectly good way of learning how to do good mathematics: learning from masters’ techniques, as found in books, journals, and notebooks.

In short, the reduction of a human activity–math–to its corresponding brain activity achieves precisely nothing when it comes to the doing of the activity. It aids our understanding of that activity in some regards–for instance, in letting us compare its corresponding brain activity with the brain activity corresponding to other actions–and not in others. Some aspects of this reduction will strike us as perfectly pointless, given the antecedent accomplishments of mathematics and mathematicians.

Not all possible reduction is desirable or meaningful.