Will Artificial Intelligence Create More Jobs Than It Eliminates? Maybe

Over at the MIT Sloan Management Review, H. James Wilson, Paul R. Daugherty, and Nicola Morini-Bianzino strike an optimistic note as they respond to the “distressing picture” created by “the threat that automation will eliminate a broad swath of jobs across the world economy,” for, “as artificial intelligence (AI) systems become ever more sophisticated, another wave of job displacement will almost certainly occur.” Wilson, Daugherty, and Morini-Bianzino suggest that we’ve been “overlooking” the fact that “many new jobs will also be created — jobs that look nothing like those that exist today.” They go on to identify three key positions:


This first category of new jobs will need human workers to teach AI systems how they should perform — and it is emerging rapidly. At one end of the spectrum, trainers help natural-language processors and language translators make fewer errors. At the other end, they teach AI algorithms how to mimic human behaviors.


The second category of new jobs — explainers — will bridge the gap between technologists and business leaders. Explainers will help provide clarity, which is becoming all the more important as AI systems’ opaqueness increases.


The final category of new jobs our research identified — sustainers — will help ensure that AI systems are operating as designed and that unintended consequences are addressed with the appropriate urgency.

Unfortunately for these claims, it is all too evident that these positions are themselves vulnerable to automation. That is, future trainers, explainers, and sustainers, if their job descriptions match the ones given above, will also be automated. Nothing in those descriptions indicates that humans are indispensable components of the learning processes they currently facilitate.

Consider that the ‘empathy trainer’ Koko is a machine-learning system which “can help digital assistants such as Apple’s Siri and Amazon’s Alexa address people’s questions with sympathy and depth.” Currently Koko is being trained by a human; it will then go on to teach Siri and Alexa. Teachers are teaching future teachers how to teach, after which those teachers will teach on their own. Moreover, there is no reason why chatbots, which currently need human trainers, cannot be trained, as they already are, on increasingly large corpora of natural-language texts, such as the collections of the world’s libraries (which consist of human inputs).

Or consider that explainers may rely on an algorithmic system like “the Local Interpretable Model-Agnostic Explanations (LIME), which explains the underlying rationale and trustworthiness of a machine prediction”; human analysts use its output to “pinpoint the data that led to a particular result.” But why wouldn’t LIME’s functioning be enhanced to do the relevant ‘pinpointing’ itself, thus obviating the need for a human analyst?
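The idea behind LIME can be made concrete with a toy sketch. This is not the actual LIME library, and the function name `local_explanation` is illustrative; it is a minimal one-dimensional version of LIME’s core move, assuming the standard description of the technique: sample perturbations around one input, weight them by proximity, and fit a simple weighted linear surrogate whose slope ‘pinpoints’ how the black box behaves locally.

```python
import math
import random

def local_explanation(predict, x0, n_samples=500, width=0.3, seed=0):
    """Toy, 1-D version of LIME's idea: sample points near x0, weight
    them by proximity to x0, and fit a weighted linear surrogate.
    The slope summarizes the black box's local behaviour."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, width) for _ in range(n_samples)]
    ys = [predict(x) for x in xs]
    # Proximity kernel: perturbations closer to x0 count for more.
    ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    # Closed-form weighted least squares for y = intercept + slope * x.
    slope = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
             / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
    intercept = my - slope * mx
    return intercept, slope

# A 'black box' the explainer treats as opaque; near x0 = 0,
# sin(x) behaves like x, so the fitted local slope should be close to 1.
intercept, slope = local_explanation(math.sin, x0=0.0)
```

Note that nothing in this loop requires a human: the surrogate is fitted, and the influential inputs identified, entirely by the algorithm, which is exactly the point made above about the analyst’s role.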

Finally, note that “AI may become more self-governing….an AI prototype named Quixote…can learn about ethics by reading simple stories….the system is able to “reverse engineer” human values through stories about how humans interact with one another. Quixote has learned…why stealing is not a good idea….even given such innovations, human ethics compliance managers will play a critical role in monitoring and helping to ensure the proper operation of advanced systems.” Despite the reassurance of that final sentence, this sounds very much like a domain in which the roles and responsibilities of humans will decrease over time.

There is no doubt that artificial intelligence systems require human inputs, supervision, training, and maintenance; but those roles are diminishing, not increasing: short-term, local requirements for human skills will not necessarily translate into long-term, global ones. The economic impact of the coming robotics and artificial intelligence ‘revolution’ is still unclear.

Robert Mundell On Why The Market Is Feminine

Robert Mundell received the Nobel Prize in Economics in 1999 for his work on “monetary dynamics and optimum currency areas.” (He is currently professor of economics at Columbia University.) For as long as I can remember, I’ve owned a copy of his little primer, Man and Economics (McGraw-Hill, 1968; another edition bears the subtitle The Science of Choice). Somehow, I’ve never gotten around to reading it. In this regard, Mundell’s book is exactly like many other books on my shelves. But on Friday, I finally began to make my way through its pages, curious to see what it held within.

Man and Economics was released in 1968, so I expected some aspects of its discussions of choice, supply, demand, inflation, money, currency rates, recession and unemployment, the gold standard, and so on, to be just a little dated. As I read on, I noticed that what really gave the book’s vintage away was its choice of illustrative examples.

To wit, men are earners and women are spenders. The man brings home money, the woman spends it. This division and classification is then used to illustrate problems of liquidity, budget balancing, and so on. For instance:

There is a certain unevenness in spending and earning patterns. The husband may be paid for his services only once every week, fortnight, or month. Typically, the husband will deposit his salary in the bank every month, while the wife will go about the business of shopping every day or perhaps once a week. In this case, the cash balance will be high at the beginning of the month and gradually fall toward zero…toward the end of the month. Discipline is required at the beginning of the month, since it would be most unwise for the wife to spend a whole month’s income on rent, groceries, and other needs in the first two weeks. If this discipline is not present the family will suffer from a liquidity crisis toward the end of the month. Experience (or intrafamily strife) will teach the wife the expenditure pattern over time that is feasible with a given income, or the husband the income that is needed to maintain a certain expenditure.

But that’s not all.

Consider, for instance, Mundell’s description of the language used to describe currency markets:

The language used by foreign-exchange dealers and operators responsible for supervising a market in which the government has a great stake may strike the reader as unusual. One speaks of the “feel of the market,” its “depth, breadth, and resiliency,” “strategy of penetration,” “getting in and out,” “slackness,” “looseness,” and the market “drying up.” It is the language of market intervention, but it all sounds like a scenario for a grand seduction. Indeed, one distinguished dealer from a very important central bank likens intervention to an exercise in applied psychology and manipulation of a market to the management of a woman. When it is troubled, it must be caressed; when it is quiet, it should be left alone; and when it gets hysterical, it has to be slapped. In that sense, the market is feminine.

Markets are temperamental creatures indeed.

Is Economics a Science?

Eric Maskin, 2007 Nobel Prize winner in Economics, responds to Alex Rosenberg and Tyler Curtain’s characterization of economics:

They claim that a scientific discipline is to be judged primarily on its predictions, and on that basis, they suggest, economics doesn’t qualify as a science.

Prediction is certainly a valuable goal in science, but not the only one. Explanation is also important, and there are plenty of sciences that do a lot of explaining and not much predicting. Seismology, for example, has taught us why earthquakes occur, but doesn’t tell Californians when they’ll be hit by “the big one.”

And through meteorology we know essentially how hurricanes form, even though we can’t say where the next storm will arise.

In the same way, economic theory provides a good understanding of how financial derivatives are priced….But that doesn’t mean that we know whether the derivatives market will crash this year.

Perhaps one day earthquakes, hurricanes and financial crashes will all be predictable. But we don’t have to wait until then for seismology, meteorology and economics to become sciences; they already are.

Maskin’s examples should really indicate that seismology and meteorology do make predictions; they just happen to be probabilistic ones, like ‘there will almost certainly be an earthquake measuring 7 on the Richter scale in California in the next hundred years’ or ‘this summer’s Atlantic hurricane season will most likely see more hurricanes in the Caribbean than last year’s.’ It is on the basis of such rough-and-ready predictions and the historical record (and, of course, the extra-scientific assumption that the laws of physics will endure) that building codes in the relevant regions have changed.

Still, Maskin is on to something: most careless characterizations of science attribute far too many essential features to science.

Consider, for instance, a definition of science that says a scientific discipline necessarily relies on experimentation and produces law-like statements about nature. The former requirement would exclude cosmology, the latter biology. (Rosenberg and Curtain have been careful enough not to talk about laws or experimentation in their description of the ‘essential’ features of science.)

The model of science that Rosenberg and Curtain work with is, unsurprisingly enough, based on physics. Furthermore, the examples they use–predicting the orbit of a satellite around Mars, the explanation of chemical reactions in terms of underlying atomic structure, predictions of eclipses and tides, the prevention of bridge collapses and power failures–are derived from the same terrain.  In general, there seems to be much consensus that a putative candidate for scientific status succeeds the more closely a description of it matches that of paradigmatic theoretical and experimental physics. As this similarity fades, more work has to be done to include that discipline in the scientific cluster.

It is still not clear to me that economics is a science. But that is not because it fails to meet some ‘essential feature’ of science; rather, it is because we still lack a complete understanding of what makes a discipline a science. This is the persistent difficulty of the characterization problem in the philosophy of science: most definitions of science, as any undergraduate in a philosophy of science class quickly comes to realize, fail to do justice to scientific practice through history and to the actual content of scientific knowledge.

The debate about whether economics is a science is interesting chiefly because it shows the prestige associated with scientific knowledge: a successful classification as a science entails greater acceptance and entrenchment of a discipline’s claims and, concomitantly, greater support, possibly financial, for its continued practice.

In the marketplace of competing knowledge claims, this is the truly important issue at hand.