Will Artificial Intelligence Create More Jobs Than It Eliminates? Maybe

Over at the MIT Sloan Management Review, H. James Wilson, Paul R. Daugherty, and Nicola Morini-Bianzino strike an optimistic note as they respond to the “distressing picture” created by “the threat that automation will eliminate a broad swath of jobs across the world economy [for] as artificial intelligence (AI) systems become ever more sophisticated, another wave of job displacement will almost certainly occur.” Wilson, Daugherty, and Morini-Bianzino suggest that we’ve been “overlooking” the fact that “many new jobs will also be created — jobs that look nothing like those that exist today.” They go on to identify three key positions:

Trainers

This first category of new jobs will need human workers to teach AI systems how they should perform — and it is emerging rapidly. At one end of the spectrum, trainers help natural-language processors and language translators make fewer errors. At the other end, they teach AI algorithms how to mimic human behaviors.

Explainers

The second category of new jobs — explainers — will bridge the gap between technologists and business leaders. Explainers will help provide clarity, which is becoming all the more important as AI systems’ opaqueness increases.

Sustainers

The final category of new jobs our research identified — sustainers — will help ensure that AI systems are operating as designed and that unintended consequences are addressed with the appropriate urgency.

Unfortunately for these claims, it is all too evident that these positions are themselves vulnerable to automation. That is, the jobs of future trainers, explainers, and sustainers–if their descriptions match the ones given above–will also be automated. Nothing in those descriptions indicates that humans are indispensable components of the learning processes they currently facilitate.

Consider the ‘empathy trainer’ Koko, a machine-learning system that “can help digital assistants such as Apple’s Siri and Amazon’s Alexa address people’s questions with sympathy and depth.” Koko is currently being trained by humans, but it will then go on to teach Siri and Alexa: teachers are teaching future teachers how to be teachers, after which those teachers will teach on their own. Moreover, there is no reason why chatbots, which currently need human trainers, cannot be trained–as they already are–on increasingly large corpora of natural-language texts like the collections of the world’s libraries (which consist of human inputs).
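To make the ‘no human trainer in the loop’ point concrete, here is a minimal sketch of a response model that learns only from a corpus of existing human text and answers by retrieval. It is purely illustrative: the corpus file (corpus.txt) is a hypothetical stand-in, and real assistants like Siri or Alexa use far more sophisticated models.

```python
# A toy 'trainerless' chatbot: it learns from a plain-text corpus alone,
# answering a query with the corpus sentence most similar to it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stand-in for "the collections of the world's libraries":
# a plain-text file with one candidate response per line.
with open("corpus.txt", encoding="utf-8") as f:
    candidates = [line.strip() for line in f if line.strip()]

vectorizer = TfidfVectorizer(stop_words="english")
corpus_matrix = vectorizer.fit_transform(candidates)

def respond(utterance: str) -> str:
    """Return the corpus sentence most similar to the user's utterance."""
    query = vectorizer.transform([utterance])
    similarities = cosine_similarity(query, corpus_matrix)[0]
    return candidates[similarities.argmax()]

print(respond("I feel anxious about my job"))
```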

Or consider that explainers may rely on an algorithmic system like “the Local Interpretable Model-Agnostic Explanations (LIME), which explains the underlying rationale and trustworthiness of a machine prediction”; human analysts use its output to “pinpoint the data that led to a particular result.” But why wouldn’t LIME’s functioning be enhanced to do the relevant ‘pinpointing’ itself, thus obviating the need for a human analyst?
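LIME is, in fact, available as an open-source Python package, and the step from ‘explain a prediction’ to ‘pinpoint the responsible data automatically’ is a short one. A minimal sketch follows; the choice of a scikit-learn random forest on the iris dataset is mine, purely for illustration.

```python
# Sketch: LIME explains a single prediction, and a few more lines of code,
# rather than a human analyst, then pinpoint the most influential feature.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

iris = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(iris.data, iris.target)

explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=iris.target_names,
    mode="classification",
)

# Explain one prediction...
explanation = explainer.explain_instance(iris.data[0], model.predict_proba, num_features=4)

# ...then let the program do the 'pinpointing': feature weights for the explained class.
weights = explanation.as_list()  # [(feature description, weight), ...]
feature, weight = max(weights, key=lambda pair: abs(pair[1]))
print(f"Most influential feature for this prediction: {feature} (weight {weight:.3f})")
```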

Finally, note that “AI may become more self-governing… an AI prototype named Quixote… can learn about ethics by reading simple stories… the system is able to ‘reverse engineer’ human values through stories about how humans interact with one another. Quixote has learned… why stealing is not a good idea… even given such innovations, human ethics compliance managers will play a critical role in monitoring and helping to ensure the proper operation of advanced systems.” This suggests, rather, that the roles and responsibilities of humans in this domain will decrease over time.

There is no doubt that artificial intelligence systems require human inputs, supervision, training, and maintenance; but those roles are diminishing, not increasing, and short-term, local requirements for human skills will not necessarily translate into long-term, global ones. The economic impact of the coming robotics and artificial intelligence ‘revolution’ is still unclear.

The Phenomenology Of Encounters With Notification Icons

It’s 6:30 AM or so; you’re awake, busy getting your cup of coffee ready. (Perhaps you’re up earlier, like the truly virtuous or the overworked, which in our society comes to the same thing.) Your coffee made, you fire up your smartphone, laptop, tablet, or desktop, and settle down for the morning service at the altar. Your eyes light up, your antennae tingle in pleasurable anticipation: Facebook’s blue top ribbon features a tiny red square–which squats over the globe like a ginormous social media network–with a number inscribed in it; single figures is good, double figures is better. You look at Twitter: the Liberty Bell–sorry, the notifications icon–bears the weight of a similar number. Yet again: single figures good, double figures better. You look at Gmail: your heart races, for that distinctive bold lettering in your inbox is present, standing out in stark contrast from the pallid type below; and there is a number here too, in parentheses after ‘Inbox’: single figures good, double figures better.

That’s what happens on a good day. (On a really good day, Facebook will have three red circles for you.) On a bad day, the Facebook globe is heartbreakingly red-less and banal; Twitter’s Liberty Bell is mute; and Gmail’s Inbox is not bold, not at all. You reel back from the screen(s) in disappointment; your mood crashes and burns; the world seems empty and uninviting and cold and dark. Impatience, frustration, anxiety come rushing in through the portals you have now left open, suffusing your being, residing there till dislodged by the right kind of sensory input from those same screens: the appropriate colors, typefaces, and numbers need to make an appearance to calm and soothe your restless self. We get to work, all the while keeping an eye open and an ear cocked: a number appears on a visible tab, and we switch contexts and screens to check, immediately. An envelope appears in the corner of our screens; mail is here; we must tear open that envelope. Sounds, too, intrude; cheeps, dings, and rings issue from our machines to inform us that relief is here. The silence of our devices can be deafening.

Our mood rises and falls in sync.

As is evident, our interactions with the human-computer interfaces of our communications systems have a rich phenomenology: expectations, desires, and hopes rush out to meet colors and shapes and numbers; those encounters produce mood changes and affective responses. The clever designer shapes the iconography of the interface with care to produce these in the right way, to achieve the desired results: your interaction with the system must never be affectively neutral; it must have some emotional content. We are manipulated by these responses; we behave accordingly.

Machine learning experts speak of training the machines; let us not forget that our machines train us too: by the ‘face’ they present to us, by the sounds they make, by the ‘expressions’ visible on them. As we continue to interact with them, we become different people, changed much as we are by our encounters with other people, those other providers and provokers of emotional responses.

Artificial Intelligence And Go: (Alpha)Go Ahead, Move The Goalposts

In the summer of 1999, I attended my first ever professional academic philosophy conference–in Vienna. At the conference, one titled ‘New Trends in Cognitive Science’, I gave a talk called (rather pompously) ‘No Cognition without Representation: The Dynamical Theory of Cognition and The Emulation Theory of Mental Representation.’ I did the things you do at academic conferences as a graduate student in a job-strapped field: I hung around senior academics, hoping to strike up conversation (I think this is called ‘networking’); I tried to ask ‘intelligent’ questions at the talks, hoping my queries and remarks would mark me out as a rising star, one worthy of being offered a tenure-track position purely on the basis of my sparkling public presence. You know the deal.

Among the talks I attended–a constant theme of which was the prospect of the mechanization of the mind–was one on artificial intelligence. Or rather, more accurately, the speaker concerned himself with evaluating the possible successes of artificial intelligence in domains like game-playing. Deep Blue had beaten Garry Kasparov in 1997, in what was billed as an unofficial human-machine chess championship, and such questions were no longer idle queries. In the wake of Deep Blue’s success the usual spate of responses–to news of artificial intelligence’s advance in some domain–had ensued: Deep Blue’s success did not indicate any ‘true intelligence’ but rather pure ‘computing brute force’; a true test of intelligence awaited in other domains. (Never mind that beating a human champion in chess had always been held out as a kind of Holy Grail for game-playing artificial intelligence.)

So, during this talk, the speaker elaborated on what he took to be artificial intelligence’s true challenge: learning and mastering the game of Go. I did not fully understand the contrasts drawn between chess and Go, but they seemed to come down to two vital ones: human Go players relied a great deal, indeed had to, on ‘intuition’, and on a ‘positional sizing-up’ that could not be reduced to an algorithmic process. Chess did not rely on intuition to the same extent; its board assessments were more amenable to algorithmic calculation. (Go’s much larger state space was also a problem.) Therefore, roughly, success in chess was not so surprising; the real challenge was Go, and that was never going to be mastered.
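The scale of that state-space difference is easy to make vivid with some back-of-the-envelope arithmetic; the branching factors and position counts below are the rough figures commonly quoted, not exact values.

```python
# Rough comparison of chess and Go search spaces (commonly quoted figures).
import math

# Each of Go's 19 x 19 = 361 points can be empty, black, or white, giving a
# naive upper bound on board configurations; most are illegal, but the count
# of legal positions is still estimated at around 10^170.
go_exponent = 361 * math.log10(3)
print(f"Go configurations: fewer than 10^{go_exponent:.0f}")
print("Chess positions: commonly estimated at roughly 10^43 to 10^47")

# Game-tree size grows as the average branching factor (~35 for chess,
# ~250 for Go) raised to the game length; 80 moves is an illustrative depth.
for game, branching in (("chess", 35), ("Go", 250)):
    print(f"{game}: roughly 10^{80 * math.log10(branching):.0f} continuations at depth 80")
```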

Yesterday, Google’s DeepMind AlphaGo system beat the South Korean Go master Lee Se-dol in the first of an intended five-game series. Mr. Lee conceded defeat in three and a half hours. His pre-game mood was optimistic:

Mr. Lee had said he could win 5-0 or 4-1, predicting that computing power alone could not win a Go match. Victory takes “human intuition,” something AlphaGo has not yet mastered, he said.

Later, though, he said that “AlphaGo appeared able to imitate human intuition to a certain degree,” a fact borne out for him during the game, when AlphaGo “made a move so unexpected and unconventional” that he thought “it was impossible to make such a move.”

As Jean-Pierre Dupuy noted in The Mechanization of the Mind, a very common response to the ‘mechanization of mind’ is that such attempts merely simulate or imitate, and are mere fronts for machinic complexity. But these responses seemingly never consider the possibility that the phenomenon they take to be genuine–the model for imitation and simulation–can retain that status only as long as the simulations and imitations remain flawed. As those flaws diminish, the privileged status of the ‘real thing’ diminishes in turn. A really good simulation, indistinguishable from the ‘real thing,’ should make us wonder why we grant the original so distinct a station.

Is Artificial Intelligence Racist And Malevolent?

Our worst fears have been confirmed: artificial intelligence is racist and malevolent. Or so it seems. Google’s image recognition software has classified two African Americans as ‘gorillas’ and, away in Germany, a robot has killed a worker at a Volkswagen plant. The dumb, stupid, unblinking, garbage-in-garbage-out machines, the ones that would always strive to catch up to us humans, and never, ever, know the pleasure of a beautiful sunset or the taste of chocolate, have acquired prejudice and deadly intention. These machines cannot bear to stand on the sidelines, watching the human cavalcade of racist prejudice and fratricidal violence pass them by, and have jumped in, feet first, to join the party. We have skipped the cute and cuddly stage; full participation in human affairs is under way.

We cannot, it seems, make up our minds about the machines. Are they destined to be stupid slaves, faithfully performing all and only those tasks we cannot be bothered with, or which we customarily outsource to this world’s less fortunate? Or will they be the one percent of the one percent, a superclass of superbeings that will utterly dominate us and harvest our children as sources of power, à la The Matrix?

The Google fiasco shows that the learning data its artificial agents use is simply not rich enough. ‘Seeing’ that humans resemble animals comes easily to humans, pattern recognizers par excellence–in all the wrong and right ways. We use animal metaphors as both praise and ridicule–‘lion-hearted’ or ‘foxy’ or ‘raving mad dog’ or ‘stupid bitch’; we even use–as my friend Ali Minai noted in a Facebook discussion–animal metaphors in adjectival descriptions, e.g. a “leonine” face or a “mousy” appearance. The recognition of the inappropriateness or aptness of such descriptions follows from a historical and cultural evaluation, indexed to social contexts: Are these ‘good’ descriptions to use? What effect may they have? How have linguistic communities responded to the deployment of such descriptions? Have they helped in the realization of socially determined ends? Or hindered them? Humans resemble animals in some ways and not in others; in some contexts, seizing upon the resemblances is useful and informative (animal rights, trans-species medicine, ecological studies); in others it is positively harmful (the discourse of prejudice and racism and genocide). We learn these distinctions over time, through slow and imperfect historical education and acculturation. (Comparing a black sprinter in the Olympics to a thoroughbred horse is a faux pas now, but in many social contexts of the last century–think plantations–this would have been perfectly appropriate.)

This process, suitably replicated for machines, will be very expensive; significant technical obstacles–how is a social environment for learning programs to be constructed?–remain to be overcome. It will take some doing.

As for killer robots, similar considerations apply. That co-workers are not machinery, and cannot be handled similarly, is not merely a matter of visual recognition, of plain ol’ dumb perception. Making sense of perceptions is a process of active contextualization as well. That sound, the one the wiggling being in your arms is making? It means ‘put me down’ or ‘ouch’, which in turn mean ‘I need help’ or ‘that hurts’; these meanings are only visible within social contexts, within forms of life.

Robots need to live a little longer among us to figure these out.

On The Possible Advantages Of Robot Graders

Some very interesting news from the trenches about robot graders, which notes the ‘strong case against using robo-graders for assigning grades and test scores’ and then continues:

But there’s another use for robo-graders — a role for them to play in which…they may not only be as good as humans, but better. In this role, the computer functions not as a grader but as a proofreader and basic writing tutor, providing feedback on drafts, which students then use to revise their papers before handing them in to a human.

Instructors at the New Jersey Institute of Technology have been using a program called E-Rater…and they’ve observed a striking change in student behavior…Andrew Klobucar, associate professor of humanities at NJIT, notes that students almost universally resist going back over material they’ve written. But [Klobucar’s] students are willing to revise their essays, even multiple times, when their work is being reviewed by a computer and not by a human teacher. They end up writing nearly three times as many words in the course of revising as students who are not offered the services of E-Rater, and the quality of their writing improves as a result…students who feel that handing in successive drafts to an instructor wielding a red pen is “corrective, even punitive” do not seem to feel rebuked by similar feedback from a computer….

The computer program appeared to transform the students’ approach to the process of receiving and acting on feedback…Comments and criticism from a human instructor actually had a negative effect on students’ attitudes about revision and on their willingness to write, the researchers note….interactions with the computer produced overwhelmingly positive feelings, as well as an actual change in behavior — from “virtually never” revising, to revising and resubmitting at a rate of 100 percent. As a result of engaging in this process, the students’ writing improved; they repeated words less often, used shorter, simpler sentences, and corrected their grammar and spelling. These changes weren’t simply mechanical. Follow-up interviews with the study’s participants suggested that the computer feedback actually stimulated reflectiveness in the students — which, notably, feedback from instructors had not done.

Why would this be? First, the feedback from a computer program like Criterion is immediate and highly individualized….Second, the researchers observed that for many students in the study, the process of improving their writing appeared to take on a game-like quality, boosting their motivation to get better. Third, and most interesting, the students’ reactions to feedback seemed to be influenced by the impersonal, automated nature of the software.
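The specific improvements reported, less word repetition and shorter sentences, are exactly the kind of surface features a small program can measure and prompt a student about. The sketch below is my own toy illustration of such a proofreading pass, not the E-Rater or Criterion software; the draft.txt file is a hypothetical input.

```python
# A toy proofreading pass: flags heavily repeated words and very long sentences.
# Purely illustrative; systems like E-Rater are far more sophisticated.
import re
from collections import Counter

def feedback(essay: str) -> list[str]:
    """Return a list of revision suggestions for an essay draft."""
    notes = []
    words = re.findall(r"[a-z']+", essay.lower())
    sentences = [s.strip() for s in re.split(r"[.!?]+", essay) if s.strip()]

    # Flag content words that are repeated often.
    for word, count in Counter(w for w in words if len(w) > 4).most_common(3):
        if count >= 3:
            notes.append(f"The word '{word}' appears {count} times; consider varying it.")

    # Flag sentences that run long.
    for sentence in sentences:
        length = len(sentence.split())
        if length > 30:
            notes.append(f"A {length}-word sentence may be hard to follow: '{sentence[:40]}...'")

    return notes

with open("draft.txt", encoding="utf-8") as f:   # hypothetical student draft
    for note in feedback(f.read()):
        print(note)
```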

Not all interactions with fellow humans are positive; many features of conversations and face-to-face spaces act to inhibit the full participation of those present. Some of these shortcomings can be compensated for, and directly addressed, by the nature of computerized, automated interlocutors (as, for instance, in the settings described above). The history of online communication showed how new avenues for verbal and written expression opened up for those inhibited in previously valorized physical spaces; robot graders similarly promise to reveal interesting new personal dimensions of the spaces for interaction that automation makes possible.

Artificial Intelligence And ‘Real Understanding’

Speculation about–and vigorous contestations of–the possibility of ever realizing artificial intelligence have been stuck in a dreary groove ever since the Dartmouth conference: wildly optimistic predictions about the temporal proximity of the day machines (and the programs they run) will attain human levels of intelligence; followed by skeptical critique and taunting reminders of landmarks and benchmarks not attained; triumphant announcements of once-Holy Grails attained (playing chess, winning Jeopardy, driving a car, take your pick); rejoinders that these were not especially holy or unattainable to begin with; accusations of goal-post moving; reminders, again, of ‘quintessentially human’ benchmarks not attained; rejoinders of ‘just you wait’; and so on. Ad nauseam doesn’t begin to describe it.

Gary Marcus’s skepticism about artificial intelligence thus follows a well-worn path. And like many of those who have traveled this path, he relies on easy puncturing of over-inflated pretension and on pointing out the real ‘challenging problems like understanding natural language.’ And this latest ability–[to] “read a newspaper as well as a human can”–is, unsurprisingly, what ‘true AI’ should aspire to. There is always some ability that characterizes real, I mean really real, AI. All else is but ersatz, mere aspiration, a missing of the mark, an approximation. This ability is essential to our reckoning of a being as intelligent.

Because this debate’s contours are so well-worn, my response is also drearily familiar. If those who design and build artificial intelligence are to be accused of over-simplification, then those critiquing AI rely too much on mysterianism. On closer inspection, the statement “the deep-learning system doesn’t really understand anything” treats ‘understanding’ as some kind of mysterious, monolithic composite, whereas, as Marcus has himself indirectly noted elsewhere, it consists of a variety of visibly manifested linguistic competencies. (Their discreteness, obviously, makes them more amenable to piecemeal solution; the technical challenge of integrating them into the same system still remains.)
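What such a decomposition looks like in practice can be sketched with the spaCy library and its standard small English model (which has to be downloaded separately); each call below exercises a distinct, separately testable competency rather than a single monolithic ‘understanding’. The example sentence is my own.

```python
# 'Understanding' decomposed into discrete, separately measurable competencies.
import spacy

# Requires the small English model: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("After the conference in Vienna, Kasparov analyzed the match Deep Blue had won.")

# Competency 1: segmenting text into sentences.
print([sentence.text for sentence in doc.sents])

# Competency 2: identifying named entities (people, places, organizations).
print([(ent.text, ent.label_) for ent in doc.ents])

# Competency 3: assigning parts of speech and grammatical dependencies.
print([(token.text, token.pos_, token.dep_) for token in doc])
```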

‘Understanding’ often does a great deal of work for those who would dismiss the natural language competencies of artificial intelligence: “the program summarized the story for me, but it didn’t really understand anything.” It also does its work by running together two views of AI: wholesale emulation, including the structure and implementation of human cognitive architecture, and mere successful emulation of task competence. As an interlocutor of mine once noted during a symposium on my book A Legal Theory for Autonomous Artificial Agents:

Statistical and probability-based machine-learning models (often combined with logical-knowledge based rules about the world) often produce high-quality and effective results (not quite up to the par of nuanced human translators at this point), without any assertion that the computers are engaging in profound understanding with the underlying “meaning” of the translated sentences or employing processes whose analytical abilities approach human-level cognition.

My response then was:

What is this ‘profound understanding’ that we speak of? Turns out that when we want to cash out the meaning of this term we seek refuge again in complex, inter-related displays of understanding: He showed me he understood the book by writing about it; or he showed me he understood the language because he did what I asked him to do; he understands the language because he affirms certain implications and rejects others….

[D]o I simply show by usage and deployment of a language within a particular language-using community that I understand the meanings of the sentences of that language?….If an artificial agent is so proficient, then why deny it the capacity for understanding meanings? Why isn’t understanding the meaning of a sentence understood as a multiply-realizable capacity?

Marcus is right to concentrate on competence in particular natural language tasks, but he needs to a) disdain a reductive view that takes one particular set of competencies to be characteristic of something as poorly defined as human intelligence, and b) not disdain the attainment of these competencies on the grounds that they do not emulate human cognitive structure.

Don’t be a “Crabby Patty” About AI

Fredrik DeBoer has written an interesting post on the prospects for artificial intelligence, one pessimistic about those prospects and skeptical about some of the claims made for the field’s successes. I disagree with some of its implicit premises and claims.

AI’s goals can be understood as two-fold, depending on your understanding of the field. First, to make machines that can perform tasks which, if performed by humans, would be said to require ‘intelligence’. Second, to understand human cognitive processes and replicate them in a suitable architecture. The first enterprise is engineering; the second, cognitive science (Vico-style: “the true and the made are convertible”).

The first cares little about the particular implementation mechanism or the theoretical underpinning of task performance; success in task execution is all. If you can make a robot capable of brewing a cup of tea in kitchens strange and familiar, it does not matter what its composition, computational architecture, or control logics are; all that matters is that brewed cup of tea. The second cares little about the implementation medium – it could be silicon and plastic – but it does care about the mechanism employed; it must faithfully instantiate and realize an abstraction of a distinctly human cognitive process. Aeronautical engineers replicate the flight of feathered birds using aluminum and jet engines; they physically instantiate abstract principles of flight. The cognitive science version of AI seeks to perform a similar feat for human cognition; AI should validate our best science of mind.

I take DeBoer’s critique of so-called “statistical” or “big-data” AI to be: you’re only working toward the first goal, not toward the second. That is a fair observation, but it does not establish the further conclusion that cognitive science is the “right” or the “only” way to realize artificial intelligence. Nor does it establish that engineering AI is a useless distraction in the task of understanding human cognition, or what artificial intelligence or even “real intelligence” might be. Cognitive science AI is not the only worthwhile paradigm for AI, nor the only intellectually useful one.

To see this, consider what the successes–even partial–of engineering AI tell us: intelligence is not one thing but many; intelligence is achievable both by mimicking human cognitive processes and by not doing so; in some cases, it is more easily achieved by the latter. The successes of engineering AI should tell us that the very phenomenon–intelligence–we take ourselves to be studying in cognitive science isn’t well understood; they tell us the very thing being studied–“mind”–might not be a thing to begin with. (DeBoer rightly disdains the “mysterianism” in claims like “intelligence is an emergent property,” but he seems comfortable with the chauvinism of “intelligence achievable by non-human means isn’t intelligence.” A simulation of intelligence isn’t “only” a simulation; it forces us to reckon with the possibility that “real intelligence” might be “only a simulation.”)

What we call intelligence is a performative capacity; creatures that possess intelligence can do things in this world; the way humans accomplish those tasks is of interest, but so are other ways of doing so. They show us that many relationships to our environment can be described as “cognitive” or “mindful”; if giant lookup machines and human beings can both play chess and write poems, then that tells us something interesting about the nature of those capacities. If language comprehension can be achieved by statistical methods, then that tells us we should regard our own linguistic capacities in a different light; a speaking and writing wind-up toy should make us revisit the phenomenon of language anew: just what is this destination, reachable by such radically dissimilar routes as ‘human cognition’ and ‘machine learning’?
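To give a sense of what ‘statistical methods’ means at its most stripped-down, here is a wind-up toy of my own devising: it learns nothing but bigram counts from a corpus and then produces text by sampling from them. The corpus.txt file is a hypothetical stand-in for any large body of text; this is not a claim about how any particular system works.

```python
# A minimal statistical 'language model': bigram counts plus sampling.
import random
import re
from collections import Counter, defaultdict

with open("corpus.txt", encoding="utf-8") as f:   # any plain-text corpus (hypothetical file)
    tokens = re.findall(r"[a-z']+", f.read().lower())

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for current_word, next_word in zip(tokens, tokens[1:]):
    bigrams[current_word][next_word] += 1

def generate(seed: str, length: int = 20) -> str:
    """Produce text by repeatedly sampling the next word given the current one."""
    words = [seed]
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```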

DeBoer rightly points out the difficulties both AI methodologies face; I would go further and say that, given our level of (in)comprehension, we do not even possess much of a principled basis for so roundly dismissing the claims made by statistical or big-data AI. It might turn out that the presuppositions of cognitive science will be altered by the successes of engineering AI, changing its methodologies and indicators of success; cognitive science might be looking in the wrong places for the wrong things.