Will Artificial Intelligence Create More Jobs Than It Eliminates? Maybe

Over at the MIT Sloan Management Review, H. James Wilson, Paul R. Daugherty, and Nicola Morini-Bianzino strike an optimistic note as they respond to the “distressing picture” created by “the threat that automation will eliminate a broad swath of jobs across the world economy [for] as artificial intelligence (AI) systems become ever more sophisticated, another wave of job displacement will almost certainly occur.” Wilson, Daugherty, and Morini-Bianzino suggest that we’ve been “overlooking” the fact that “many new jobs will also be created — jobs that look nothing like those that exist today.” They go on to identify three key positions:

Trainers

This first category of new jobs will need human workers to teach AI systems how they should perform — and it is emerging rapidly. At one end of the spectrum, trainers help natural-language processors and language translators make fewer errors. At the other end, they teach AI algorithms how to mimic human behaviors.

Explainers

The second category of new jobs — explainers — will bridge the gap between technologists and business leaders. Explainers will help provide clarity, which is becoming all the more important as AI systems’ opaqueness increases.

Sustainers

The final category of new jobs our research identified — sustainers — will help ensure that AI systems are operating as designed and that unintended consequences are addressed with the appropriate urgency.

Unfortunately for these claims, it is all too evident that these positions themselves are vulnerable to automation. That is, future trainers, explainers, and sustainers–if their job descriptions match the ones given above–will also be automated. Nothing in those descriptions indicates that humans are indispensable components of the learning processes they are currently facilitating.

Consider that the ‘empathy trainer’ Koko is a machine-learning system which “can help digital assistants such as Apple’s Siri and Amazon’s Alexa address people’s questions with sympathy and depth.” Currently Koko is being trained by a human, but it will then go on to teach Siri and Alexa. Teachers are teaching future teachers how to be teachers, after which the latter will teach on their own. Moreover, there is no reason why chatbots, which currently need human trainers, cannot be trained–as they already are–on increasingly large corpora of natural language texts, like the collections of the world’s libraries (which consist of human inputs).
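To make the corpus-trained point concrete, here is a toy sketch in Python, entirely hypothetical and not a description of Koko, Siri, or Alexa, of a conversational generator ‘trained’ on nothing but a body of human-written text, with no human trainer correcting individual responses:

```python
import random
from collections import defaultdict

# Toy word-level Markov chain: the "training" consists solely of reading a
# corpus of human-written text and recording which word tends to follow
# which. No human corrects individual responses.

def train(corpus_text):
    """Build a next-word table from raw text."""
    words = corpus_text.split()
    table = defaultdict(list)
    for current, following in zip(words, words[1:]):
        table[current].append(following)
    return table

def generate(table, seed_word, length=20):
    """Produce a reply by sampling the learned next-word table."""
    word, output = seed_word, [seed_word]
    for _ in range(length):
        candidates = table.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

# Stand-in for "the collections of the world's libraries."
corpus = "the library holds human words and human words teach the machine to answer the reader"
model = train(corpus)
print(generate(model, "human"))
```

The ‘teacher’ here is simply the corpus; scaled up to library-sized collections, and to far more sophisticated models, the same logic shrinks the role of the human trainer.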

Or consider that explainers may rely on an algorithmic system like “the Local Interpretable Model-Agnostic Explanations (LIME), which explains the underlying rationale and trustworthiness of a machine prediction”; human analysts use its output to “pinpoint the data that led to a particular result.” But why wouldn’t LIME’s functioning be enhanced to do the relevant ‘pinpointing’ itself, thus obviating the need for a human analyst?
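For a sense of how much of the relevant ‘pinpointing’ such a system can already do on its own, here is a minimal sketch using the open-source lime Python package together with scikit-learn; the dataset and classifier are illustrative choices of mine, not anything described in the article:

```python
# pip install lime scikit-learn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train an opaque model whose individual predictions we want explained.
data = load_iris()
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(data.data, data.target)

# LIME fits a simple surrogate model around a single prediction and
# reports which features pushed that prediction toward its result.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    data.data[0], clf.predict_proba, num_features=4
)

# The 'pinpointing': feature conditions and their weights for this prediction.
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Whether the remaining interpretive work, deciding what to do about the explanation, stays with a human analyst is exactly the question raised above.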

Finally, note that “AI may become more self-governing….an AI prototype named Quixote…can learn about ethics by reading simple stories….the system is able to “reverse engineer” human values through stories about how humans interact with one another. Quixote has learned…why stealing is not a good idea….even given such innovations, human ethics compliance managers will play a critical role in monitoring and helping to ensure the proper operation of advanced systems.” This suggests that the roles and responsibilities of humans in this domain will decrease, not increase, over time.

There is no doubt that artificial intelligence systems require human inputs, supervision, training, and maintenance; but those roles are diminishing, not increasing: short-term, local requirements for human skills will not necessarily translate into long-term, global ones. The economic impact of the coming robotics and artificial intelligence ‘revolution’ is still unclear.

One Vision Of A Driverless Car Future: Eliminating Private Car Ownership

Most analysis of a driverless car future concentrates on the gains in safety: ‘robotic’ cars will adhere more closely to speed limits and other traffic rules and, over time, by eliminating human error and idiosyncrasy, produce safer roads. This might be seen as an architectural modification of human driving behavior to produce safer driving outcomes–rather than making unsafe driving illegal, more expensive, or socially unacceptable, just don’t let humans drive.

But there are other problems–environmental degradation and traffic–that could be addressed by mature driverless car technologies. The key to their solution lies in moving away from private car ownership.

To see this, consider that at any given time we have too many cars on the roads: some are being driven, while others sit parked. If you own a car, you drive it to work, park it, and leave it idle for the length of the work-day. Eight hours later you leave your office, drive home, and park the car again until the morning. Through the night, it sits idle once more, taking up space. If only someone else could use your car while you didn’t need it; they wouldn’t need to buy a separate car for themselves and add to the congestion on the highways. And in parking lots.

Why not simply replace privately owned, human-driven cars with a gigantic fleet of robotic taxis? When you need a car, you call for one. When you are done using it, you release it back into the pool. You don’t park it; it simply goes back to answering its next call. Need to go to work in the morning? Call a car. Run an errand with heavy lifting? Call a car. And so on. Cars shared in this fashion could thus eliminate the gigantic redundancy in car ownership that leads to choked highways, mounting smog and pollution, endless, futile construction of parking towers, and elaborate congestion pricing schemes. (The key phrase here is, of course, ‘mature driverless car technologies.’ If you need a car for an elaborate road-trip through the American West, perhaps you could place a longer, more expensive hold on it, so that it doesn’t drive off while you are taking a quick photo or two of a canyon.)
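For concreteness, here is a purely hypothetical sketch of the dispatch logic such a shared fleet implies: request, release, and the longer paid ‘hold.’ Every name and number is invented for illustration and comes from no real system:

```python
from dataclasses import dataclass, field

# Hypothetical robotic-taxi pool: riders request a car, release it back
# into the pool when done, or pay to place a longer hold on it (say, for
# that road trip through the American West). All names are invented.

@dataclass
class Fleet:
    available: set = field(default_factory=set)
    in_use: dict = field(default_factory=dict)   # car -> rider
    held: dict = field(default_factory=dict)     # car -> (rider, hours)

    def request(self, rider):
        if not self.available:
            raise RuntimeError("no cars free; the fleet is undersized")
        car = self.available.pop()
        self.in_use[car] = rider
        return car

    def hold(self, car, rider, hours, rate_per_hour=5.0):
        # A longer, more expensive hold, so the car doesn't drive off.
        self.held[car] = (rider, hours)
        return hours * rate_per_hour

    def release(self, car):
        # The car rejoins the pool and simply answers its next call.
        self.in_use.pop(car, None)
        self.held.pop(car, None)
        self.available.add(car)

fleet = Fleet(available={"car-1", "car-2"})
mine = fleet.request("me")
print(fleet.hold(mine, "me", hours=8))  # cost of keeping it through a hike
fleet.release(mine)
```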

Such a future entails that there will be no more personal, ineffable, fetishized relationships with cars. They will not be your babies, to be cared for and loved. Their upholstery will not remind you of days gone by. Your children will not feel sentimental about the clunker that was a part of their growing up. And so on. I suspect these attachments to the car will be easily forgotten once we have reckoned with the sheer pleasure of not having to deal with driving tests, the terrors of teaching our children how to drive, the DMV, buying car insurance, looking for parking, and, best of all, other drivers.

I, for one, welcome our robotic overlords in this domain.

Is Artificial Intelligence Racist And Malevolent?

Our worst fears have been confirmed: artificial intelligence is racist and malevolent. Or so it seems. Google’s image recognition software has classified two African Americans as ‘gorillas’ and, over in Germany, a robot has killed a worker at a Volkswagen plant. The dumb, stupid, unblinking, garbage-in-garbage-out machines, the ones that would always strive to catch up to us humans and never, ever know the pleasure of a beautiful sunset or the taste of chocolate, have acquired prejudice and deadly intention. These machines cannot bear to stand on the sidelines, watching the human cavalcade of racist prejudice and fratricidal violence pass them by, and have jumped in, feet first, to join the party. We have skipped the cute and cuddly stage; full participation in human affairs is under way.

We cannot, it seems, make up our minds about the machines. Are they destined to be stupid slaves, faithfully performing all and only those tasks we cannot be bothered with, or which we customarily outsource to this world’s less fortunate? Or will they be the one percent of the one percent, a superclass of superbeings that will utterly dominate us and harvest our children as sources of power, à la The Matrix?

The Google fiasco shows that the learning data its artificial agents use is simply not rich enough. ‘Seeing’ that humans resemble animals comes easily to humans, pattern recognizers par excellence–in all the right and wrong ways. We use animal metaphors as both praise and ridicule–‘lion-hearted’ or ‘foxy’ or ‘raving mad dog’ or ‘stupid bitch’; we even use–as my friend Ali Minai noted in a Facebook discussion–animal metaphors in adjectival descriptions, e.g., a “leonine” face or a “mousy” appearance. The recognition of the inappropriateness or aptness of such descriptions follows from a historical and cultural evaluation, indexed to social contexts: Are these ‘good’ descriptions to use? What effect may they have? How have linguistic communities responded to the deployment of such descriptions? Have they helped in the realization of socially determined ends? Or hindered them? Humans resemble animals in some ways and not in others; in some contexts, seizing upon these resemblances is useful and informative (animal rights, trans-species medicine, ecological studies), in others it is positively harmful (the discourse of prejudice and racism and genocide). We learn these distinctions over time, through slow and imperfect historical education and acculturation. (Comparing a black sprinter in the Olympics to a thoroughbred horse is a faux pas now, but in many social contexts of the last century–think plantations–it would have been perfectly appropriate.)

This process, suitably replicated for machines, will be very expensive; significant technical obstacles–how is a social environment for learning programs to be constructed?–remain to be overcome. It will take some doing.

As for killer robots, similar considerations apply. That co-workers are not machinery, and cannot be handled similarly, is not merely a matter of visual recognition, of plain ol’ dumb perception. Making sense of perceptions is a process of active contextualization as well. That sound, the one the wiggling being in your arms is making? It means ‘put me down’ or ‘ouch,’ which in turn mean ‘I need help’ or ‘that hurts’; these meanings are only visible within social contexts, within forms of life.

Robots need to live a little longer among us to figure these out.

A Rankings Tale (That Might Rankle)

This is a story about rankings. Not of philosophy departments but of law schools. It is only tangentially relevant to the current, ongoing debate in the discipline about the Philosophical Gourmet Report. Still, some might find it of interest. So, without further ado, here goes.

Half a dozen years ago, shortly after my book Decoding Liberation: The Promise of Free and Open Source Software had been published, and after I had begun work on developing the outlines of a legal theory for artificial intelligence, I considered applying to law school. For these projects, I had taught myself a bit of copyright, patent, and trade secret law; I had studied informational privacy, torts, contracts, knowledge attribution, and agency law; but all of this was auto-didactic. Perhaps a formal education in law would help my further forays into legal theory (which continue to this day). Living in New York City meant I would have access to some top-class departments–NYU, Columbia, Yale–some of whose scholars would also make for good collaborators in my chosen field of study. I decided to go the whole hog: the LSAT and all the rest. (Yes, I know it sounds ghastly, but somehow I overcame my instinctive revulsion at the prospect of taking that damn test.)

An application to law school requires recommendation letters. I anticipated no difficulty with this. I knew a few legal scholars–professors at law schools–who were familiar with my work, and I hoped they would write letters for me, perhaps describing the work I had produced up to that point. The response was gratifying; my acquaintances all said they’d be happy to write me letters. I went ahead with the rest of my application package, even as I had begun to feel that law school was an impractical proposition, thanks to its expense. Taking out loans would have meant a second mortgage, and that seemed a rather bizarre burden to take on.

In any case, I took the LSAT. I did not do particularly well. I used to be good at standardized tests back in my high school and undergraduate days, but not anymore. My score was a rather mediocre 163 (in the 90th percentile), clearly insufficient for admission to any of the departments I was interested in applying to. Still, I reasoned, perhaps the admissions committees would look past that score. Perhaps they’d consider my logical acumen adequately demonstrated by my publications in The Journal of Philosophical Logic; perhaps a doctorate in philosophy would show evidence of my ability to parse arguments and write; and I did have a contract for a book on legal theory. Perhaps all that would outweigh this little lacuna.

One of my letter writers, a professor at Columbia Law School, invited me to have coffee with him to talk about my decision to go to law school. When we did so, he told me he had written me an excellent letter but he wondered whether law school was a good idea. He urged me to reconsider my decision, saying I would do better to stay on my auto-didactic path (and besides, the expenses were not inconsiderable). I said I had started to have second thoughts about the whole business and had not yet made up my mind. He then asked me my LSAT score. When I told him, he guffawed: I did not stand a snowball’s chance in hell of getting into the departments I was interested in. But, surely, I said, with a letter and a good word from you, and my publication record, I stood a chance. He guffawed again. Let me tell you a story, he said.

A few years prior, he had met a bright young computer science student, a graduate of a top engineering school with an excellent GPA, who had wanted to study law at Columbia. He was interested in patent law, and had–if I remember correctly–even written a few essays on software patents, mounting a critique of existing regimes and outlining alternatives to them. He had asked my current interlocutor to write him a recommendation letter for Columbia. There was just one problem: his LSAT score was in the low 160s. Just like mine, not good enough for Columbia. Time to talk to the Dean, to see if perhaps an exception could be made in his case. The Dean was flabbergasted: there was no way such an exception could be made. But, my letter writer protested, this student met the profile of an ideal Columbia Law student, especially given his interests: he had a stellar undergraduate record in a relevant field, he had shown an aptitude for law, and he had overcome personal adversity to make it through college (his family was from a former Soviet republic, and he had immigrated with them to the US a few years earlier, after suffering considerable economic hardship). Couldn’t an exception be made in this case?

The Dean listened with some sympathy but said his hands were tied. Admitting a student with such an LSAT score would do damage to their ‘LSAT numbers’–the ones US News and World Report used for law school rankings. Admitting a student with an LSAT score in the low 160s would mean finding someone with a score in the high 170s to make sure the ‘LSAT numbers’–their median value, for instance–remained unaffected. God forbid, if the ‘LSAT numbers’ were hit hard enough, NYU might overtake Columbia in the rankings the next year. The fate of a Dean who had allowed NYU to slip past Columbia in the USNWR rankings did not bear thinking about. Sorry, there was little he could do. Ask your admittedly excellent student to apply elsewhere.

Nothing quite made up my mind not to go to law school like that story did. Still, my application was complete; test scores and letters were in. So I applied. And was rejected at every single school I applied to.

Artificial Intelligence And ‘Real Understanding’

Speculation about–and vigorous contestations of–the possibility of ever realizing artificial intelligence have been stuck in a dreary groove ever since the Dartmouth conference: wildly optimistic predictions about the temporal proximity of the day when machines (and the programs they run) will attain human levels of intelligence; followed by skeptical critique and taunting reminders of landmarks and benchmarks not attained; triumphant announcements of once-Holy Grails attained (playing chess, winning Jeopardy, driving a car, take your pick); rejoinders that these were not especially holy or unattainable to begin with; accusations of goal-post moving; reminders, again, of ‘quintessentially human’ benchmarks not attained; rejoinders of ‘just you wait’; and so on. Ad nauseam doesn’t begin to describe it.

Gary Marcus’s skepticism about artificial intelligence thus follows a well-worn path. And like many of those who have traveled this path, he relies on the easy puncturing of over-inflated pretension and on pointing out the real ‘challenging problems like understanding natural language.’ And this latest ability–[to] “read a newspaper as well as a human can”–unsurprisingly, is what ‘true AI’ should aspire to. There is always some ability that characterizes real, I mean really real, AI. All else is but ersatz, mere aspiration, a missing of the mark, an approximation. This ability is essential to our reckoning of a being as intelligent.

Because this debate’s contours are so well-worn, my response is also drearily familiar. If those who design and build artificial intelligence are to be accused of over-simplification, then those critiquing AI rely too much on mysterianism. On closer look, the statement “the deep-learning system doesn’t really understand anything” treats “understanding” as some kind of mysterious monolithic composite, whereas, as Marcus has himself indirectly noted elsewhere, it consists of a variety of visibly manifested linguistic competencies. (Their discreteness, obviously, makes them more amenable to piecemeal solution; the technical challenge of integrating them into the same system still remains.)

‘Understanding’ often does a great deal of work for those who would dismiss the natural language competencies of artificial intelligence: “the program summarized the story for me but it didn’t really understand anything.” It also does its work by running together two views of AI: wholesale emulation of human cognitive architecture, structure and implementation included, versus mere successful emulation of task competence. As an interlocutor of mine once noted during a symposium on my book A Legal Theory for Autonomous Artificial Agents:

Statistical and probability-based machine-learning models (often combined with logical-knowledge based rules about the world) often produce high-quality and effective results (not quite up to the par of nuanced human translators at this point), without any assertion that the computers are engaging in profound understanding with the underlying “meaning” of the translated sentences or employing processes whose analytical abilities approach human-level cognition.

My response then was:

What is this ‘profound understanding’ that we speak of? Turns out that when we want to cash out the meaning of this term we seek refuge again in complex, inter-related displays of understanding: He showed me he understood the book by writing about  it; or he showed me he understood the language because he did what I asked him to do; he understands the language because he affirms certain implications and rejects others….

[D]o I simply show by usage and deployment of a language within a particular language-using community that I understand the meanings of the sentences of that language?….If an artificial agent is so proficient, then why deny it the capacity for understanding meanings? Why isn’t understanding the meaning of a sentence understood as a multiply-realizable capacity?

Marcus is right to concentrate on competence in particular natural language tasks, but he needs to a) disdain a reductive view that takes one particular set of competences to be characteristic of something as poorly defined as human intelligence, and b) not disdain the attainment of these competencies on the grounds that they do not emulate human cognitive structure.

Programs as Agents, Persons, or just Programs?

Last week, The Nation published my essay “Programs are People, Too.” In it, I argued for treating smart programs as the legal agents of those who deploy them, a legal change I suggest would be more protective of our privacy rights.

Among some of the responses I received was one from a friend, JW, who wrote:

[You write: But automation somehow deludes some people—besides Internet users, many state and federal judges—into imagining our privacy has not been violated. We are vaguely aware something is wrong, but are not quite sure what.]
 
I think we are aware that something is wrong and that it is less wrong.  We already have an area of the law where we deal with this, namely, dog sniffs.  We think dog sniffs are less injurious than people rifling through our luggage; indeed, the law refers to those sniffs as “sui generis.”  And I think they *are* less injurious, just like it doesn’t bother me that google searches my email with an algorithm.  This isn’t to say that it’s unreasonable for some people to be bothered by it, but I do think people are rightly aware that it is different and less intrusive than if some human were looking through their email.
 
We don’t need to attribute personhood to dogs to feel violated by police bringing their sniffing up to our house for no reason, but at the same time we basically accept their presence in airports.  And what bothers us isn’t what’s in the dog’s mind, but in the master’s.  If a police dog smelled around my house, made an alert, but no police officer was there to interpret the alert, I’m not sure it would bother me.  
 
Similarly, even attributing intentional states to algorithms as sophisticated as a dog, I don’t think their knowledge would bother me until it was given to some human (what happens when they are as sophisticated as humans is another question).  
 
I’m not sure good old fashioned Fourth Amendment balancing can’t be instructive here.  Do we have a reasonable expectation of privacy in x? What are the governmental interests at stake and how large of an intrusion is being made into the reasonable expectation of privacy?  
 

JW makes two interesting points. First, is scanning or reading by programs of our personal data really injurious to privacy in the way a human’s reading is? Second, is the legal change I’m suggesting even necessary?

Second point first. Treating smart programs as legal persons is not necessary to bring about the changes I’m suggesting in my essay. Plain old legal agency without legal personhood will do just fine. Most legal jurisdictions require legal agents to be persons too, but this has not always been the case. Consider the following passage, which did not make it to the final version of the online essay:

If such a change—to full-blown legal personhood and legal agency—is felt to be too much, too soon, then we could also grant programs a limited form of legal agency without legal personhood. There is a precedent for this too: slaves in Roman times, despite not being persons in the eyes of the law, were allowed to enter into contracts for their masters, and were thus treated as their legal intermediaries. I mention this precedent because the legal system might prefer that the change in legal status of artificial agents be an incremental one; before they become legal persons and thus full legal subjects, they could ‘enjoy’ this form of limited legal subjecthood. As a society we might find this status uncomfortable enough to want to change their status to legal persons if we think its doctrinal and political advantages—like those alluded to here—are significant enough.

Now to JW’s first point. Is a program’s access to my personal data less injurious than a human’s? I don’t think so. Programs can do things with data: they can act on it. The opening example in my essay demonstrates this quite well:

Imagine the following situation: Your credit card provider uses a risk assessment program that monitors your financial activity. Using the information it gathers, it notices your purchases are following a “high-risk pattern”; it does so on the basis of a secret, proprietary algorithm. The assessment program, acting on its own, cuts off the use of your credit card. It is courteous enough to email you a warning. Thereafter, you find that actions that were possible yesterday—like making electronic purchases—no longer are. No humans at the credit card company were involved in this decision; its representative program acted autonomously on the basis of pre-set risk thresholds.

Notice in this example that for my life to be impinged on by the agency/actions of others, it was not necessary that a single human being be involved. We so often interact with the world through programs that they command considerable agency in our lives. Our personal data is valuable to us because control of it may make a difference to our lives; if programs can use the data to do so then our privacy laws should regulate them too–explicitly.
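To make that agency concrete, here is a minimal and entirely hypothetical sketch of the kind of autonomous decision loop the credit card example describes; the threshold, scoring rules, and names are invented for illustration, not drawn from any real provider:

```python
from dataclasses import dataclass

# Hypothetical autonomous risk agent: it scores recent activity against a
# pre-set threshold and then acts -- suspending the card and sending a
# warning email -- with no human in the loop. All names and numbers are invented.

@dataclass
class Transaction:
    amount: float
    merchant_category: str
    foreign: bool

HIGH_RISK_CATEGORIES = {"wire_transfer", "gift_cards"}
RISK_THRESHOLD = 0.7

def risk_score(transactions):
    """Crude stand-in for a secret, proprietary risk model."""
    score = 0.0
    for t in transactions:
        if t.merchant_category in HIGH_RISK_CATEGORIES:
            score += 0.3
        if t.foreign:
            score += 0.2
        if t.amount > 1000:
            score += 0.2
    return min(score, 1.0)

def monitor(account_id, transactions, suspend_card, send_warning_email):
    score = risk_score(transactions)
    if score >= RISK_THRESHOLD:
        suspend_card(account_id)               # the program acts on its own
        send_warning_email(account_id, score)  # courteous, after the fact
    return score

# Example run: the card is suspended without any human involved.
monitor(
    "acct-42",
    [Transaction(1500.0, "wire_transfer", foreign=True)],
    suspend_card=lambda acct: print(f"{acct}: card suspended"),
    send_warning_email=lambda acct, s: print(f"{acct}: warning emailed (risk {s:.1f})"),
)
```

The point of the sketch is only that the suspension runs before any human is consulted; the ‘knowledge’ does not have to reach a person for the consequences to land on one.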

Let us return to JW’s sniffer dog example and update it. The dog is a robotic one; it uses sophisticated scanning technology to detect traces of cocaine on a passenger’s bag. When it does so, the nametag and passport photo associated with the bag are automatically transmitted to a facial recognition system, which establishes a match and immediately sets off a series of alarms: perhaps my bank accounts are closed, perhaps my sophisticated car is immobilized, and so on. No humans need be involved in this decision; I may find my actions curtailed without any human having taken a single action. We don’t need “a police officer to interpret the alert.” (But I’ve changed his dog to a robotic dog, haven’t I? Yes, because the programs I am considering are, in some dimensions, considerably smarter than a sniffer dog. They are much, much dumber in others.)

In speaking of the sniffer dog, JW says “I don’t think their knowledge would bother me until it was given to some human.” But as our examples show, a program could make the knowledge available to other programs, which could take actions too.

Indeed, programs could embarrass us too: imagine a society in which sex offenders are automatically flagged in public by publishing their photos on giant television screens in Times Square. Scanning programs intercept an email of mine, in which I have sent photos–of my toddler daughter bathing with her pre-school friend–to my wife. They decide on the basis of this data that I am a sex offender and flag me as such. Perhaps I’m only ‘really’ embarrassed when humans ‘view’ my photo but the safeguards for accessing data and its use need to be placed ‘upstream.’

Humans aren’t the only ones taking actions in this world of ours; programs are agents too. It is their agency that makes their access to our data interesting and possibly problematic. The very notion of an autonomous program would be considerably less useful if they couldn’t act on their own, interact with each other, and bring about changes.

Lastly, JW also raises the question of whether we have a reasonable expectation of privacy in our email, stored as it is on our providers’ servers. Thanks to the terrible third-party doctrine, the Supreme Court has decided we do not. But this notion is ripe for over-ruling in these days of cloud computing. Our legal changes–on legal and normative grounds–should not be held up by bad law. But even if the doctrine were to stand, it would not affect my arguments in the essay, which conclude that data in transit–subject to the Wiretap Act–is still something in which we may find a reasonable expectation of privacy.

Acts of Kindness: Writing to Writers, Especially Academic Ones

A couple of years ago, after reading Neil Gross’s excellent biography of Richard Rorty, I sent him a short note of appreciation, telling him how much I had enjoyed his book. Gross wrote back; he was clearly pleasantly surprised to have received my email.

I mention this correspondence because it is an instance of an act that I ought to indulge in far more often but almost never do: writing to let an author–especially an academic one!–know you enjoyed his or her work.

Most academic writing is read by only a few readers: some co-workers in a related field of research, some diligent graduate students, perhaps the odd deluded, excessively indulgent family member. (I am not counting those unfortunate spouses, like mine, who have been pressed into extensive editorial service for unfinished work. These worthies deserve our unstinting praise and are rightly, and generously, acknowledged in our works.) Many, many academic trees fall in the forest with no one to hear them.

This state of affairs holds for many other kinds of writers, of course. Online, even if we know someone is reading our writing, we might not know whether they thought it was any good; we might note the number of hits on our blogs but remain unaware of whether our words resonated with any of our readers. The unfortunate converse is true: comment spaces tell us, loudly and rudely, just how poor our arguments are, how pointless our analysis, how ineffective our polemicizing. There is no shortage of critique, not at all.

It is commonplace to tell academic writers that their work needs to be made relevant and accessible. Fair enough. I think, though, that our tribe would greatly benefit from some positive reader feedback when these standards–besides the usual scholarly ones–are met. Academics often write to one another, indicating their interest in a common field of study, the value of their correspondent’s writing, and sometimes asking for copies of papers. To these existing epistolary relationships I suggest we add the merely appreciative note: I enjoyed your writing, and here is why.

These notes are not mere acts of kindness, a dispensing of charity as it were. They encourage and sustain a useful species of human activity. They create an atmosphere, I think, conducive to scholarship and to further striving toward excellence. They make a writer want more of the same.

I know we’re all busy, but the next time you read something you like, see if you can send the writer a little thank-you note. You don’t have to do it all the time, but sometimes wouldn’t hurt.

Go ahead: reach out and touch someone.

Note: I was prompted to write this post by receiving an email from a doctoral student at Cambridge who had just read my A Legal Theory for Autonomous Artificial Agents and found it useful in his work on legal personality. The almost absurd pleasure I took in reading his email was a wistful reminder of just how much we crave this sort of contact.