Will Artificial Intelligence Create More Jobs Than It Eliminates? Maybe

Over at the MIT Sloan Management Review, H. James Wilson, Paul R. Daugherty, and Nicola Morini-Bianzino strike an optimistic note as they respond to the “distressing picture” created by “the threat that automation will eliminate a broad swath of jobs across the world economy [for] as artificial intelligence (AI) systems become ever more sophisticated, another wave of job displacement will almost certainly occur.” Wilson, Daugherty, and Morini-Bianzino suggest that we’ve been “overlooking” the fact that “many new jobs will also be created — jobs that look nothing like those that exist today.” They go on to identify three key positions:


This first category of new jobs — trainers — will need human workers to teach AI systems how they should perform, and it is emerging rapidly. At one end of the spectrum, trainers help natural-language processors and language translators make fewer errors. At the other end, they teach AI algorithms how to mimic human behaviors.


The second category of new jobs — explainers — will bridge the gap between technologists and business leaders. Explainers will help provide clarity, which is becoming all the more important as AI systems’ opaqueness increases.


The final category of new jobs our research identified — sustainers — will help ensure that AI systems are operating as designed and that unintended consequences are addressed with the appropriate urgency.

Unfortunately for these claims, it is all too evident that these positions themselves are vulnerable to automation. That is, future trainers, explainers, and sustainers–if their job descriptions match the ones given above–will also be automated. Nothing in those descriptions indicates that humans are indispensable components of the learning processes they are currently facilitating.

Consider that the ‘empathy trainer’ Koko is a machine-learning system which “can help digital assistants such as Apple’s Siri and Amazon’s Alexa address people’s questions with sympathy and depth.” Currently Koko is being trained by a human, but it will then go on to teach Siri and Alexa. Teachers are teaching future teachers how to be teachers, after which those future teachers will teach on their own. Moreover, there is no reason why chatbots, which currently need human trainers, cannot be trained–as they already are–on increasingly large corpora of natural-language texts like the collections of the world’s libraries (which consist of human inputs).

Or consider that explainers may rely on an algorithmic system like “the Local Interpretable Model-Agnostic Explanations (LIME), which explains the underlying rationale and trustworthiness of a machine prediction”; human analysts use its output to “pinpoint the data that led to a particular result.” But why wouldn’t LIME’s functioning be enhanced to do the relevant ‘pinpointing’ itself, thus obviating the need for a human analyst?
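The core move LIME makes is simple enough to sketch in a few lines. What follows is a toy illustration of that idea–perturb an instance, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients act as per-feature 'explanations'–not the actual LIME library implementation; the function name and the sample 'black box' are invented for the example.

```python
import numpy as np

def lime_style_weights(predict_fn, instance, n_samples=500, width=1.0, seed=0):
    """Toy sketch of LIME's core idea: perturb an instance, query the
    black-box model, and fit a proximity-weighted linear surrogate whose
    coefficients serve as per-feature 'explanations'."""
    rng = np.random.default_rng(seed)
    # Sample perturbations around the instance to be explained
    samples = instance + rng.normal(scale=width, size=(n_samples, instance.size))
    preds = predict_fn(samples)
    # Exponential proximity kernel: nearby perturbations count more
    dists = np.linalg.norm(samples - instance, axis=1)
    kernel = np.exp(-(dists ** 2) / (2 * width ** 2))
    # Weighted least squares via row scaling: solve (sqrt(k) X) b = sqrt(k) y
    X = np.column_stack([np.ones(n_samples), samples])
    sw = np.sqrt(kernel)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], preds * sw, rcond=None)
    return coef[1:]  # drop the intercept; one weight per feature

# A toy 'black box' that in fact depends almost entirely on feature 0
black_box = lambda X: 3.0 * X[:, 0] + 0.01 * X[:, 1]
w = lime_style_weights(black_box, np.array([1.0, 1.0]))
# w[0] lands near 3.0 and w[1] near 0.01, flagging feature 0 as decisive
```

Nothing in this pipeline requires a human: once the surrogate's weights are computed, the 'pinpointing' of influential features is itself algorithmic, which is precisely the worry raised above.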

Finally, note that “AI may become more self-governing….an AI prototype named Quixote…can learn about ethics by reading simple stories….the system is able to “reverse engineer” human values through stories about how humans interact with one another. Quixote has learned…why stealing is not a good idea….even given such innovations, human ethics compliance managers will play a critical role in monitoring and helping to ensure the proper operation of advanced systems.” Despite the reassuring note at the end, this suggests that the roles and responsibilities of humans in this domain will decrease over time.

There is no doubt that artificial intelligence systems require human inputs, supervision, training, and maintenance; but those roles are diminishing, not increasing. Short-term, local requirements for human skills will not necessarily translate into long-term, global ones. The economic impact of the coming robotics and artificial intelligence ‘revolution’ is still unclear.

Richard Dawkins’ Inconsistent Reliance On Pragmatism

A very popular video on YouTube featuring Richard Dawkins is titled ‘Science Works, Bitches.’ It periodically makes the rounds on social media; as it does, Dawkins acolytes–in the video and on social media–applaud him as he ‘smacks down’ a questioner who inquires into the ‘justification’ for the scientific method. (A familiar enough question; for instance, science relies on induction, but the justification for induction is that it has worked in the past, which is itself an inductive argument, so how do you break out of this circle without relying on some kind of ‘faith’?) Dawkins’ response is ‘It works, bitches!’ Science’s claim to rationality rests on its proven track record–going to the moon, curing disease, etc.; this is an entirely pragmatic claim with which I’m in total agreement. The success of inductive claims is part of our understanding and definition of rationality; rationality does not exist independently of our practices; they define it.

Still, the provision of this answer also reveals Dawkins’ utter dishonesty when it comes to the matter of his sustained attacks on religion over the years. For the open-mindedness and the acknowledgment of the primacy of practice that is on display in this answer is nowhere visible in his attitude toward religion.

Dawkins is entirely correct in noting that science is superior to religion when it comes to the business of solving certain kinds of problems. You want to make things fly; you rely on science. You want to go to the moon; you rely on science. You want to cure cancer; you rely on science. Rely on religion for any of these things and you will fail miserably. But Dawkins will be simply unwilling to accept as an answer from a religious person that the justification for his or her faith is that ‘it works’ when it comes to providing a ‘solution’ for a ‘problem’ that is not of the kind specified above. At those moments, Dawkins will demand a kind of ‘rational’ answer that he is himself unwilling to–and indeed, cannot–provide for science.

Consider a religious person who, when asked to ‘justify’ faith, responds ‘It works for me when it comes to achieving the end or the outcome of making me happy [or more contented, more accepting of my fate, reconciling myself to the death of loved ones or my own death; the list goes on].’ Dawkins’ response to this would be that this is a pathetic, delusional comfort, that it is based on fairy tales and poppycock. Here too, Dawkins would demand that the religious person accept scientific answers to these questions and scientific resolutions of these ‘problems.’ Here, Dawkins would be unable to accept the pragmatic nature of the religious person’s answer that faith ‘works’ for them. Here, Dawkins would demand a ‘justified, rational, grounded in evidence’ answer; that is, an imposition of standards that he is unwilling to place on the foundations of scientific reasoning.

As I noted above, pragmatism is the best justification for science and the scientific method; science works best to achieve particular ends. Dawkins is entirely right to note that religion cannot answer the kinds of questions or solve the kinds of problems science can; he should be prepared to admit the possibility that there are questions to which religion offers answers that ‘work’ for its adherents–in preference to other alternatives. Pragmatism demands we accept this answer too; you can’t lean on pragmatism to defend science, and then abandon it in your attacks on religion. That’s scientism. Which is a load of poppycock.

No, Aristotle Did Not ‘Create’ The Computer

For the past few days, an essay titled “How Aristotle Created The Computer” (The Atlantic, March 20, 2017, by Chris Dixon) has been making the rounds. It begins with the following claim:

The history of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century. Mathematical logic was pioneered by philosopher-mathematicians, most notably George Boole and Gottlob Frege, who were themselves inspired by Leibniz’s dream of a universal “concept language,” and the ancient logical system of Aristotle.

Dixon then goes on to trace this ‘history of ideas,’ showing how the development–and increasing formalization and rigor–of logic contributed to the development of computer science and the first computing devices. Along the way, Dixon makes note of the contributions–direct and indirect–of: Claude Shannon, Alan Turing, George Boole, Euclid, Rene Descartes, Gottlob Frege, David Hilbert, Gottfried Leibniz, Bertrand Russell, Alfred Whitehead, Alonzo Church, and John Von Neumann. This potted history is exceedingly familiar to students of the foundations of computer science–a demographic that includes computer scientists, philosophers, and mathematical logicians–but presumably that is not the audience that Dixon is writing for; those students might wonder why Augustus De Morgan and Charles Peirce do not feature in it. Given this temporally extended history, with its many contributors and their diverse contributions, why does the article carry the headline “How Aristotle Created the Computer”? Aristotle did not create the computer or anything like it; he did make important contributions to a fledgling field, which took several more centuries to develop into maturity. (The contributions to this field by logicians and systems of logic of alternative philosophical traditions like the Indian one are, as per usual, studiously ignored in Dixon’s history.) And as a philosopher, I cannot resist asking, “what do you mean by ‘created’”? What counts as ‘creating’?

The easy answer is that it is clickbait. Fair enough. We are by now used to the idiocy of the misleading clickbait headline, one designed to ‘attract’ more readers by making it more ‘interesting;’ authors very often have little choice in this matter, and very often have to watch helplessly as hit-hungry editors mangle the impact of the actual content of their work. (As in this case?) But it is worth noting this headline’s contribution to the pernicious notion of the ‘creation’ of the computer and to the idea that it is possible to isolate a singular figure as its creator–a clear hangover of a religious sentiment that things that exist must have creation points, ‘beginnings,’ and creators. It is yet another contribution to the continued mistaken recounting of the history of science as a story of ‘towering figures.’ (Incidentally, I do not agree with Dixon that the history of computers is “better understood as a history of ideas”; that history is, instead, an integral component of the history of computing in general, which also includes a social history and an economic one; telling a history of computing as a history of objects is a perfectly reasonable thing to do when we remember that actual, functioning computers are physical instantiations of abstract notions of computation.)

To end on a positive note, here are some alternative headlines: “Philosophy and Mathematics’ Contributions To The Development of Computing”; “How Philosophers and Mathematicians Helped Bring Us Computers”; or “How Philosophical Thinking Makes The Computer Possible.” None of these are as ‘sexy’ as the original headline, but they are far more informative and accurate.

Note: What do you think of my clickbaity headline for this post?

Epistemology and ‘The Leftovers’

Imagine that an extremely improbable event occurs, one for which there was no warning; your best theories of the world assigned it a near-zero probability (indeed, so low was this probability that calculating it would have been a waste of time). This event is inexplicable–no explanations for it are forthcoming, and it cannot be fitted into the explanatory frameworks employed by your current conceptual schemes. What effect would this have on your theory of knowledge, your epistemology, the beliefs you form, and the justifications you consider acceptable for them?

This question is raised with varying degrees of explicitness in HBO’s The Leftovers–which deals with the aftermath of the sudden disappearance of approximately two percent of the earth’s population. ‘The Departure’ selected its ‘victims’ at random; no pattern appeared to connect the victims to each other. The ‘departures’ all happened at the same time, and they left no trace. There is no sign of them anymore; two percent of the world’s population has been vaporized. Literally.

The Leftovers is not a very good show, and I’m not sure I will watch it any more (two seasons were enough). It did, however, afford me an opportunity to engage in the philosophical reflection I note above.

One phenomenon that should manifest itself in the aftermath of an event like ‘The Departure’ would be the formation of all kinds of ‘cults,’ groups united by beliefs formerly considered improbable but which now find a new lease on life because the metaphysical reasonableness of the world has taken such a beating. Critics of these cults would find that the solid foundations of their previous critiques had disappeared; if ‘The Departure’ could happen, then so could a great deal else. The Leftovers features some cults and their ‘gullible’ followers but does little of any great interest with them–lost opportunities abound in this show, perhaps an entirely unsurprising denouement given that its creators were responsible for the atrocity called Lost.

As one of the characters notes in the second season, ‘The Departure’ made the holding of ‘false beliefs’ more respectable than it had ever been. And as yet another character notes in the first season, that old knockdown maneuver, the one used to dismiss an implausible claim made by someone else, that ‘the laws of nature won’t allow that,’ is simply not available anymore. Science used to tell us that its knowledge was defeasible, but now that that dreaded moment–the arrival of evidence of the universe’s non-uniformity, irregularity, and non-conformance with scientific laws–is upon us, what are we to do? In The Leftovers a scientific effort gets underway to determine if geographical location was determinative of the victims’ susceptibility to being ‘departured,’ but it seems like this is grasping at straws, a pathetic and hopeless attempt to shoehorn ‘The Departure’ into extant scientific frameworks.

So, in the aftermath of ‘The Departure,’ we reside in a zone of epistemic confusion: we do not know how to assign probabilities to our beliefs anymore, for the definitions of ‘likely’ and ‘unlikely’ seem to have been radically altered. That old ‘you never know’ has taken on a far more menacing tone. Only the resumption of the ‘normal’ stream of events for a sufficiently long period of time can heal this epistemic and metaphysical rupture; it will be a while before our sense of this world’s apparent predictability returns. But even then, every argument about the plausibility or the implausibility of some epistemic claim will take place in the shadow of that catastrophic disruption of ‘reality;’ the reasonableness of this world will always appear just a tad suspect.

Stephen Jay Gould’s Weak Argument For Science And Religion’s ‘Separate Domains’

Stephen Jay Gould’s famous ‘Two Separate Domains’ argues, roughly, that religion and science operate in different domains of inquiry, and as such do not conflict with each other:

We get the age of rocks, and religion retains the rock of ages; we study how the heavens go, and they determine how to go to heaven.

Or, science gets the descriptive and the quantitative, religion gets the prescriptive and the qualitative. Facts on one side; values on the other.

‘Two Separate Domains’ is an essay I read some years ago; yesterday, I discussed it with my philosophy of religion class. On this revisitation, I was struck by how weak and narrowly focused Gould’s arguments are.

Most crucially, Gould is almost entirely concerned with responding to a very particular religious tradition: Christianity. Moreover, within that, he takes himself to be pushing back against that species of Protestant fundamentalism which would indulge in literal interpretations of the Bible to promulgate creationism:

I do not doubt that one could find an occasional nun who would prefer to teach creationism in her parochial school biology class or an occasional orthodox rabbi who does the same in his yeshiva, but creationism based on biblical literalism makes little sense in either Catholicism or Judaism for neither religion maintains any extensive tradition for reading the Bible as literal truth rather than illuminating literature, based partly on metaphor and allegory…and demanding interpretation for proper understanding. Most Protestant groups, of course, take the same position—the fundamentalist fringe notwithstanding.

Later in the essay, Gould concentrates on responding to a pair of Papal encyclicals on the subject of evolution, issued by Pius XII in 1950 and John Paul II in 1996, the differences between which–the latter takes on board the scientific evidence for evolution–Gould takes as evidence for the flexibility of the Church to respond to scientific findings in a manner which preserves its own ‘non-overlapping magisteria.’

Several problems now present themselves. First, there is a diversity of hermeneutical approaches in different religious traditions, with varying reliance on metaphorical, allegorical, literal, or historically contextualized readings, which generate conflicts of various degrees with the content of scientific statements. (As a student in my class said, getting rid of literal interpretations in Islam would remove, for many followers, their reason for believing in the Koran’s claims.) Second, Gould relies on an untenable fact-value distinction. But science’s empirical claims are infused with value-laden choices, and religion’s value-laden claims rest on empirical foundations (neither domain of inquiry offers purely descriptive or purely prescriptive claims; the two are thus entangled). Third, and perhaps most crucially in my opinion, Gould’s task is made considerably easier–at least apparently, in this essay–by concentrating on a religious tradition which has a central church–the Catholic–with an authoritative head, the Pope, who issues documents which articulate a position representative of the religious institution, and which can be expected to serve as instruction for its many followers’ practices and beliefs. That is, that religion’s practices can be usefully understood as being guided by such institutions, persons, and writings, which are representative of it. Such is obviously not the case with many other religious traditions, and I simply cannot see Gould’s strategy working for Islam or Judaism or Hinduism. (Buddhism is another matter altogether.)

Gould’s irenic stance is admirable, but I cannot see that the strategy adopted in this essay advances his central thesis very much.

Pigliucci And Shaw On The Allegedly Useful Reduction

Massimo Pigliucci critiques the uncritical reductionism that the conflation of philosophy and science brings in its wake, using, as a jumping-off point, Tamsin Shaw’s essay in the New York Review of Books, which addresses psychologists’ claims “that human beings are not rational, but rather rationalizing, and that one of the things we rationalize most about is ethics.” Pigliucci notes that Shaw’s targets “Jonathan Haidt, Steven Pinker, Paul Bloom, Joshua Greene and a number of others….make the same kind of fundamental mistake [a category mistake], regardless of the quality of their empirical research.”

Pigliucci highlights Shaw’s critique of Joshua Greene’s claims that “neuroscientific data can test ethical theories, concluding that there is empirical evidence for utilitarianism.” Shaw had noted that:

Greene interpreted these results in the light of an unverifiable and unfalsifiable story about evolutionary psychology … Greene inferred … that the slower mechanisms we see in the brain are a later development and are superior because morality is properly concerned with impersonal values … [But] the claim here is that personal factors are morally irrelevant, so the neural and psychological processes that track such factors in each person cannot be relied on to support moral propositions or guide moral decisions. Greene’s controversial philosophical claim is simply presupposed; it is in no way motivated by the findings of science. An understanding of the neural correlates of reasoning can tell us nothing about whether the outcome of this reasoning is justified.

At this point Pigliucci intervenes:

Let me interject here with my favorite analogy to explain why exactly Greene’s reasoning doesn’t hold up: mathematics. Imagine we subjected a number of individuals to fMRI scanning of their brain activity while they are in the process of tackling mathematical problems. I am positive that we would conclude the following…

There are certain areas, and not others, of the brain that lit up when a person is engaged with a mathematical problem.

There is probably variation in the human population for the level of activity, and possibly even the size or micro-anatomy, of these areas.

There is some sort of evolutionary psychological story that can be told for why the ability to carry out simple mathematical or abstract reasoning may have been adaptive in the Pleistocene (though it would be much harder to come up with a similar story that justifies the ability of some people to understand advanced math, or to solve Fermat’s Last Theorem).

But none of the above will tell us anything at all about whether the subjects in the experiment got the math right. Only a mathematician — not a neuroscientist, not an evolutionary psychologist — can tell us that.

Correct. Now imagine an ambitious neuroscientist who claims his science has really, really advanced, and indeed, imaging technology has improved so much that Pigliucci’s first premise above should be changed to:

There are certain areas, and not others, of the brain that lit up when a person is working out the correct solution to a particular mathematical problem.

So, contra Pigliucci’s claim above, neuroscience will tell us a great deal about whether the subjects in the experiment got the math right. Our funky imaging science and technology makes that possible now. At this stage, the triumphant reductionist says, “We’ve reduced the doing of mathematics to doing neuroscience; when you think you are doing mathematics, all that is happening is that a bunch of neurons are firing in the following patterns and particular parts of your brain are lighting up. We can now tell an evolutionary psychology story about why the ability to reason correctly may have been adaptive.”

But we may ask: Should the presence of such technology mean we should stop doing mathematics? Have we learned, as a result of such imaging studies, how to do mathematics correctly? We know that when our brains are in particular states, they can be interpreted as doing mathematical problems–‘this activity means you are doing a math problem in this fashion.’ A mathematician looks at proofs; a neuroscientist would look at the corresponding brain scans. We know when one corresponds to another. This is perhaps useful for comparing math-brain-states with poetry-brain-states, but it won’t tell us how to write poetry or proofs for theorems. It does not tell us how humans would produce those proofs (or those brain states in their brains). If a perverse neuroscientist were to suggest that the right way to do maths now would be to aim to put your brain into the states suggested by the imaging machines, we would note that we already have a perfectly good way of learning how to do good mathematics: learning from masters’ techniques, as found in books, journals, and notebooks.

In short, the reduction of a human activity–math–to its corresponding brain activity achieves precisely nothing when it comes to the doing of the activity. It aids our understanding of that activity in some regards–as in, how does its corresponding brain activity compare to other corresponding brain activities for other actions–and not in others. Some aspects of this reduction will strike us as perfectly pointless, given the antecedent accomplishments of mathematics and mathematicians.

Not all possible reduction is desirable or meaningful.

Contra Damon Linker, ‘Leftist Intellectuals’ Are Not ‘Disconnected From Reality’

Over at The Week, Damon Linker accuses ‘the Left’ of being disconnected from reality, basing this charge on his reading of two recent pieces by Corey Robin and Jedediah Purdy. (It begins with a charge that is all too frequently leveled at the Bernie Sanders campaign: that its political plans are political fantasies.) What really offends Linker is that ‘left-wing intellectuals’–who presumably should know better–are trafficking in the same ‘disconnected from reality’ ramblings.

I don’t think they are. Rather, they are doing the exact opposite of what Linker claims, and in this spectacular misreading of them, Linker only indicts his own disconnect from the actual historical realities of how ideas and actions–especially political ones–interact.

First, Linker suggests that Robin thinks that indifference to political reality is a virtue. As he notes:

In a provocative essay for The Chronicle of Higher Education, “How Intellectuals Create a Public,” Robin argues that “the problem with our public intellectuals today is that they are writing for readers who already exist, as they exist,” as opposed to “summoning” a new world, a new public, a new reality, into being.

In his essay, Robin offered a critique of Cass Sunstein‘s libertarian paternalism, suggesting that it merely further reifies an existing political reality, leaving everything as it was before; later Robin goes on to suggest that Ta-Nehisi Coates is afflicted by a kind of ‘impossibilism’ about the possibility of the “politics of a mass mobilization.” (Robin’s take on Coates deserves far more considered analysis than I can provide here. More on that anon.) Linker then, by linking to Marx’s famous quote in the Theses on Feuerbach about the need for philosophers to change the world and not just interpret it, insinuates that Robin is just being an impractical Marxist in accusing Sunstein and Coates of producing “an all too accurate reflection of the world we live in.” (Incidentally, this trope “You sound like Marx; you’re impractical!” is profoundly unimaginative. I’m surprised it still works for people.) For Linker, the production of this facsimile is a virtue; for Robin, in the case of Sunstein, it speaks of a limited imagination (in the case of Coates, I think, again, that matters are very different).

What makes Linker’s critique of Robin especially bizarre is that from the very outset of his essay, Robin is talking about action, activity, making and remaking, interacting with this world, reshaping and reconfiguring it–through ideas and beliefs, expressed through writing, sent out into this world in an effort to change people’s minds, to make them see the world differently. This is about as far from being disconnected from reality as you can imagine; Robin is not advocating a retreat to the ivory tower, to write complacently for a pre-existent audience that will force the author into the templates of its demands; rather he is suggesting that the author, the intellectual, by the form and content of his ideas–as expressed in his writings–can change and alter those templates and bid his readers follow different trajectories of both thought and action.

As Robin says:

[The public intellectual] is…the literary equivalent of the epic political actor, who sees her writing as a transformative mode of action, a thought-deed in the world.

This is as ‘reality-based’ as you can get, and you only get to doubt that if you, perhaps like Linker, seemingly doubt the power of ideas and beliefs; you know, those things the American pragmatists called ‘rules for action.’ Let’s forget about religion for a second, and simply consider a couple of examples Robin provides: Rachel Carson‘s Silent Spring and Michelle Alexander‘s The New Jim Crow. The former produced an environmental movement; the latter has galvanized a nation-wide movement against mass incarceration.

As Robin goes on to note:

By virtue of the demands they make upon the reader, they force a reckoning. They summon a public into being — if nothing else a public conjured out of opposition to their writing. Democratic publics are always formed in opposition and conflict: “to form itself,” wrote Dewey, “the public has to break existing political forms.”

The role for the public intellectual that Robin envisages is the breaking of existing political forms–philosophers of culture like Nietzsche suggested doing this with a hammer; we’ll have to settle for our word processors. Far from being disconnected from reality, Robin is suggesting an active engagement with the world; these engagements, Linker might be surprised to know, take many forms, ranging from the grubby and sordid to the elevated and sublime. Sometimes those forms of engagement are literary, sometimes physical, sometimes performative, sometimes emotional.

The problem is that Linker’s imagination is limited; he is himself cut off from the very reality he claims to be in touch with. Robin’s vision, by extending further than Linker’s, might be informing him that there are more things in this world than he might have allowed for.

Linker then moves on to Purdy, summarizing his claims as follows:

[P]olitics and economics have been “denaturalized” in our time, and that even nature itself is undergoing the same process….all appeals to permanent, intrinsic truths or standards by parties involved in political, economic, or environmental debates have become unconvincing. Nothing is natural in the normative sense — no political or economic arrangement, and not even any specific construal of the natural world and its meanings.

All such appeals to nature are in fact conventional, artificial constructs of the human mind imposed upon the world.

Linker suggests that Purdy draws a ‘radical’ conclusion from this:

a wonderful opportunity [which] holds out the possibility of a collective “world-shaping project” that would bring about a radical democratization of politics and economics, and of the relation of both to the natural world.

Linker now fulminates:

The problem with this way of describing the world is not merely that it’s wrong. (As long as human beings have physical bodies that can thrive, be injured, and die, and as long as they live out their lives in a physical world that obeys natural laws disclosed by science, politics and economics will be hemmed in by constraints and obstacles that stand in the way of any number of potential “world-shaping projects.”)

Purdy’s claims are not particularly ‘radical’; instead they build on a rich tradition of deflationary claims about the pretensions of absolutist theorizing about metaphysics, ethics, and politics. Linker should know–if he’s read any philosophy of science or history of science–that science richly interacts with politics and economics and law. Thus the very science that Linker so valorizes is in fact something co-constructed with the society in which its practices are embedded. The politics and economics of this world impinge on the science it practices; a radical remaking of our politics and economics will also remake the science we practice. Not the truths it discovers, but what it deems important to research, investigate, and pursue as objects of knowledge. Science is “hemmed in by constraints and obstacles that stand in the way of any number of potential “world-shaping projects.” Want to build that accelerator? Sorry; we don’t have the funds. Want to go to Mars? Same problem. Want to do stem-cell research? Sorry, no can do. The religious folk won’t stand for it.

If Linker simply wishes to say that our physical bodies and the world limit our physical actions, then he’s stating the obvious. What he misses, as he did with Robin above, is that Purdy is speaking of an untrammeled imagination, which hitherto has been restricted and confined to pre-existing categories of thought and possibilities. It is the ‘construals’ of the world that have been limited; change those and you change your sense of what is possible for your interactions with this world. We’ll always bump up against the hard, unforgiving edge of something or the other; but we don’t even know, so long as we are confined by existing construals, what and where those edges are.

And then, Linker levels that old canard:

The even bigger problem with Purdy’s account of things is that it renders political evaluation and judgment impossible. As Will Wilkinson writes in a brilliant critique of the essay, “Appeals to value only make sense…against a background of belief about how things really are. If our best ideas about the way the world works can’t put a boundary around political contestation, then leaving the lead in Flint’s drinking water makes as much sense as taking it out.”

The kind of anti-metaphysical claims that Nietzsche made, the kind of radical undermining he conducted of morality, did not render moral evaluation impossible. Au contraire, it bade us examine the foundations of our moralities to see whose interests were represented therein. We, moral subjects, could radically reconfigure those values by dint of our actions. By, you know, our politics, our imaginations, our actions, our writings.

Accusing intellectuals of being disconnected from reality is a tired, old, reactionary political trick. It is an ideological maneuver, one that merely indicts the one making the charge of preferring their own fantastic vision of the world.