Dear Legal Academics, Please Stop Misusing The Word ‘Algorithms’

Everyone is concerned about ‘algorithms.’ Especially legal academics: law review articles, conferences, and symposia all bear testimony to this claim. Algorithms and transparency; the tyranny of algorithms; how algorithms can deprive you of your rights; and so on. Algorithmic decision-making is problematic; so is algorithmic credit scoring; or algorithmic stock trading. You get the picture: something new and dangerous called the ‘algorithm’ has entered the world, and it is causing havoc. Legal academics are on the case (and they might even occasionally invite philosophers and computer scientists to pitch in with this relief effort).

There is a problem with this picture. ‘Algorithms’ is the wrong word to describe the object of legal academics’ concern. An algorithm is “an unambiguous specification of how to solve a class of problems” or a step-by-step procedure which terminates with a solution to a given problem. These problems can be of many kinds: mathematical or logical ones are not the only ones, for a cake-baking recipe is also an algorithm, as are instructions for crossing a street. Algorithms can be deterministic or non-deterministic; they can be exact or approximate; and so on. But, and this is their especial feature, algorithms are abstract specifications; they are distinct from any concrete implementation of them.

Computer programs are one kind of implementation of algorithms, but not the only one. The algorithm for long division can be implemented by pencil and paper; it can also be automated on a hand-held calculator; and of course, you can write a program in C or Python or any other language of your choice and then run it on a hardware platform of your choice. The algorithms that make up the TCP protocol can be programmed to run over an Ethernet network; in principle, they could also be implemented by carrier pigeon. Different implementation, different ‘program,’ different material substrate. For the same algorithm there are good implementations and bad ones (the algorithm might specify the right answer for any given input, while a flawed implementation introduces errors and fails to deliver it); some implementations are incomplete; some are more efficient and effective than others. Human beings can implement algorithms; so can well-trained animals. Which brings us to computers and the programs they run.
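The distinction can be made concrete with a toy example. Euclid’s algorithm for the greatest common divisor is an abstract specification; the Python function below is merely one implementation of it, and a pencil-and-paper trace or a C program would implement the very same algorithm:

```python
# Euclid's algorithm, as an abstract specification:
#   while b is not zero, replace (a, b) with (b, a mod b); the answer is a.
# The function below is one concrete implementation of that specification;
# pencil and paper, or a program in another language, would be others.
def gcd(a: int, b: int) -> int:
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # → 6
```

The same abstract procedure, implemented badly (say, with the operands swapped in the update step), would be a flawed implementation of a correct algorithm.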

The reason automation and the computers that deliver it to us are interesting and challenging–conceptually and materially–is that they implement algorithms in interestingly different ways, via programs on machines. They are faster; much faster. The code that runs on computers can be obscured–because human-readable text programs are transformed into machine-readable binary code before execution–thus making study, analysis, and critique of the algorithm in question well nigh impossible, especially when it is protected by a legal regime as proprietary information. The programs are relatively permanent; they can be easily copied. This kind of implementation of an algorithm is shared and distributed; its digital outputs can be stored indefinitely. These affordances are not present in other, non-automated implementations of algorithms.
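The point about obscured code can be made concrete even without invoking binary machine code. Python’s standard `dis` module shows the opcode stream a readable function is compiled into before execution; the toy ‘scoring’ rule below is made up purely for illustration:

```python
import dis

def score(income, debt):
    # A toy, made-up scoring rule, readable at a glance in source form.
    return income * 2 - debt * 3

# The same rule as the interpreter sees it: a stream of opcodes.
# This is still far more legible than compiled machine code, yet it is
# already several steps removed from the prose-like source text.
dis.dis(score)
```

Each further lowering–bytecode to machine code, machine code locked behind a trade-secret regime–widens the gap between the algorithm and those who would scrutinize it.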

The use of ‘algorithm’ in the context of the debate over the legal regulation of automation is misleading. It is the ‘automation’ and ‘computerized implementation’ of an algorithm for credit scoring that is problematic; it is so because of specific features of its implementation. The credit scoring algorithm is, of course, proprietary; moreover, its programmed implementation is proprietary too, a trade secret. The credit scoring algorithm might be a complex mathematical algorithm readable by a few humans; its machine code is only readable by a machine. Had the same algorithm been implemented by hand, by human clerks sitting in an open office, carrying out their calculations by pencil and paper, we would not have the same concerns. (That process could also be made opaque, but opacity would be harder to accomplish.) Conversely, a non-algorithmic, non-machinic process–a human one, say–that shared those features would be subject to the same normative constraints.

None of the concerns currently expressed about ‘the rule/tyranny of algorithms’ would be as salient were the algorithms not being automated on computing systems; our concerns about them would be significantly attenuated. It is not the step-by-step solution–the ‘algorithm’–to a credit scoring problem that is the problem; it is its obscurity, its speed, its placement on a platform supposed to be infallible, a jewel of a socially respected ‘high technology.’

Of course, the claim is often made that algorithmic processes are replacing non-algorithmic–‘intuitive, heuristic, human, inexact’–solutions and processes; that is true, but again, the concern over this replacement would not be the same, qualitatively or quantitatively, were these algorithmic processes not being computerized and automated. It is the ‘disappearance’ into the machine of the algorithm that is the genuine issue at hand here.

Will Artificial Intelligence Create More Jobs Than It Eliminates? Maybe

Over at the MIT Sloan Management Review, H. James Wilson, Paul R. Daugherty, and Nicola Morini-Bianzino strike an optimistic note as they respond to the “distressing picture” created by “the threat that automation will eliminate a broad swath of jobs across the world economy [for] as artificial intelligence (AI) systems become ever more sophisticated, another wave of job displacement will almost certainly occur.” Wilson, Daugherty, and Morini-Bianzino suggest that we’ve been “overlooking” the fact that “many new jobs will also be created — jobs that look nothing like those that exist today.” They go on to identify three key positions:

Trainers

This first category of new jobs will need human workers to teach AI systems how they should perform — and it is emerging rapidly. At one end of the spectrum, trainers help natural-language processors and language translators make fewer errors. At the other end, they teach AI algorithms how to mimic human behaviors.

Explainers

The second category of new jobs — explainers — will bridge the gap between technologists and business leaders. Explainers will help provide clarity, which is becoming all the more important as AI systems’ opaqueness increases.

Sustainers

The final category of new jobs our research identified — sustainers — will help ensure that AI systems are operating as designed and that unintended consequences are addressed with the appropriate urgency.

Unfortunately for these claims, it is all too evident that these positions themselves are vulnerable to automation. That is, future trainers, explainers, and sustainers–if their job descriptions match the ones given above–will also be automated. Nothing in those descriptions indicates that humans are indispensable components of the learning processes they are currently facilitating.

Consider that the ‘empathy trainer’ Koko is a machine-learning system which “can help digital assistants such as Apple’s Siri and Amazon’s Alexa address people’s questions with sympathy and depth.” Currently Koko is being trained by a human; but it will then go on to teach Siri and Alexa. Teachers are teaching future teachers how to be teachers, after which the latter will teach on their own. Moreover, there is no reason why chatbots, which currently need human trainers, cannot be trained–as they already are–on increasingly large corpora of natural-language texts like the collections of the world’s libraries (which consist of human inputs).

Or consider that explainers may rely on an algorithmic system like “the Local Interpretable Model-Agnostic Explanations (LIME), which explains the underlying rationale and trustworthiness of a machine prediction”; human analysts use its output to “pinpoint the data that led to a particular result.” But why wouldn’t LIME’s functioning be enhanced to do the relevant ‘pinpointing’ itself, thus obviating the need for a human analyst?
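The core idea behind LIME can be sketched without the library itself: perturb an input, query the black-box model, and fit a simple weighted linear model in the input’s neighborhood; the surrogate’s coefficients then do the ‘pinpointing’ of influential features. The black-box function below is a made-up stand-in, and this is a sketch of the technique, not LIME’s actual implementation:

```python
import numpy as np

def black_box(X):
    # A made-up 'opaque' scoring model: only the first feature matters.
    return 3.0 * X[:, 0] + 0.0 * X[:, 1]

def local_surrogate(x, predict, n_samples=500, width=1.0, seed=0):
    """Fit a proximity-weighted linear model around x (a LIME-style sketch)."""
    rng = np.random.default_rng(seed)
    # Sample perturbed inputs in a neighborhood of x and query the model.
    X = x + rng.normal(scale=width, size=(n_samples, x.size))
    y = predict(X)
    # Weight samples by proximity to x (an RBF kernel).
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * width ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([X, np.ones((n_samples, 1))])
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * W, y * W[:, 0], rcond=None)
    return coef[:-1]  # per-feature local influence (intercept dropped)

coefs = local_surrogate(np.array([1.0, 2.0]), black_box)
# The surrogate attributes the prediction almost entirely to feature 0.
```

Nothing in this pipeline requires a human in the loop; the ‘pinpointing’ is itself an algorithm, which is precisely why the explainer role looks automatable.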

Finally, note that “AI may become more self-governing….an AI prototype named Quixote…can learn about ethics by reading simple stories….the system is able to “reverse engineer” human values through stories about how humans interact with one another. Quixote has learned…why stealing is not a good idea….even given such innovations, human ethics compliance managers will play a critical role in monitoring and helping to ensure the proper operation of advanced systems.” This sounds as if the roles and responsibilities for humans in this domain will decrease over time.

There is no doubt that artificial intelligence systems require human inputs, supervision, training, and maintenance; but those roles are diminishing, not increasing; short-term, local requirements for human skills will not necessarily translate into long-term, global ones. The economic impact of the coming robotics and artificial intelligence ‘revolution’ is still unclear.

The Lost Art Of Navigation And The Making Of New Selves

Giving, and following, driving directions was an art. A cartographic communication, conveyed and conducted by spoken description, verbal transcription, and subsequent decipherment. You asked for a route to a destination, and your partner in navigation issued a list of waypoints, landmarks, and driving instructions; you wrote these down (or, bravely, committed them to memory); then, as you drove, you compared those descriptions with actual, physical reality, checking to see if a useful correspondence obtained; then, ideally, you glided ‘home.’ A successful navigation was occasion for celebration by both direction-giver and direction-follower; hermeneutical encounters like theirs deserved no less; before, there was the unknown space, forbidding and inscrutable; afterwards, there was a destination, magically clarified, made visible, and arrived at. There were evaluative scales here to be found: some were better at giving directions than others, their sequences of instructions clear and concise, explicit specifications expertly balanced by the exclusion of superfluous, confusing detail; not all were equally proficient at following directions, some mental compasses were easily confused by turns and intersections, some drivers’ equanimity was easily disturbed by difficult traffic and a missed exit or two. (Reading and making maps, of course, has always been an honorable and valuable skill in our civilizations.)

Which reminds us that driving while trying to navigate was often stressful and sometimes even dangerous (sudden attempts to take an exit, or to avoid taking one, cause crashes all the time). The anxiety of the lost driver has a peculiar phenomenological quality all its own, enhanced in terrifying ways by the addition of bad neighborhoods, fractious family members, darkness, hostile drivers in traffic. And so, Global Positioning System (GPS) navigators–with their pinpoint, precise routes colorfully, explicitly marked out–were always destined to find a grateful and receptive following. An interactive, dynamic map, updated in real time, is a Good Thing, even if the voices in which it issues its commands and directions are sometimes a little too brusque or persistent.

But as has been obvious for a long time now, once you start offloading and outsourcing your navigation skills, you give them away for good. Dependency on the GPS is almost instantaneous and complete; very soon, you cannot drive anywhere without one. (Indeed, many cannot walk without one either.) The deskilling in this domain has been, like that in many others in which automation has found a dominant role, quite spectacular. Obviously, I speak from personal experience; I was only too happy to rely on GPS navigators when I drove, and now do not trust myself to follow even elementary verbal or written driving directions. I miss some of my old navigation skills, but I do not miss, even for a second, the anxiety, frustration, irritation, and desperation of feeling lost. An older set of experiences, an older part of me, is gone, melded and merged with a program, a console, an algorithm; the blessing is, expectedly, a mixed one. Over the years, I expect this process will continue; bits of me will be offloaded into an increasingly technologized environment, and a newer self will emerge.

Handing Over The Keys To The Driverless Car

Early conceptions of a driverless car world spoke of catastrophe: the modern versions of the headless horseman would run amok, driving over toddlers and grandmothers with gay abandon, sending the already stratospheric death toll from automobile accidents into ever more rarefied zones, and sending us all cowering back into our homes, afraid to venture out into a shooting gallery of four-wheeled robotic serial killers. How would the inert, unfeeling, sightless, coldly calculating programs that ran these machines ever show the skill and judgment of human drivers, the kind that enables them, on a daily basis, to decline to run over a supermarket shopper and decide to take the right exit off the interstate?

Such fond preference for the human over the machinic–on the roads–was always infected with some pretension, some make-believe, some old-fashioned fallacious comparison of the best of the human with the worst of the machine. Human drivers show very little affection for other human drivers; they kill them by the scores every day (thirty thousand fatalities or so in a year); they often do not bother to interact with them sober (over a third of all car accidents involve a drunk driver); they rage and rant at their driving colleagues (the formula for ‘instant asshole’ used to be ‘just add alcohol,’ but it could very well be ‘place behind a wheel’ too); they second-guess their intelligence and their parentage on every occasion–when they can be bothered to pay attention to them at all, often finding their smartphones more interesting as they drive. If you had to make an educated guess about who a human driver’s least favorite person in the world was, you could do worse than venture it was someone they had encountered on a highway once. We like our own driving; we disdain that of others. It’s a Hobbesian state of nature out there on the highway.

Unsurprisingly, it seems the biggest problem the driverless car will face is human driving. The one-eyed might be king in the land of the blind, but he is also susceptible to having his eyes put out. The driverless car might follow traffic rules and driving best practices rigorously but such acquiescence’s value is diminished in a world which otherwise pays only sporadic heed to them. Human drivers incorporate defensive and offensive maneuvers into their driving; they presume less than perfect knowledge of the rules of the road on the part of those they interact with; their driving habits bear the impress of long interactions with other, similarly inclined human drivers. A driverless car, one bearing rather more fidelity to the idealized conception of a safe road user, has at best, an uneasy coexistence in a world dominated by such driving practices.

The sneaking suspicion that automation works best when human roles are minimized is upon us again: perhaps driverless cars will only be able to show off their best and deliver on their incipient promise when we hand over the wheels–and keys–to them. Perhaps the machine only sits comfortably in our world when we have made adequate room for it. And displaced ourselves in the process.


CP Snow On ‘The Rich And The Poor’

In 1959, while delivering his soon-to-be-infamous Rede Lectures on ‘The Two Cultures’ at Cambridge University, C. P. Snow–in the third section, titled ‘The Rich and the Poor’–said,

[T]he people in the industrialised countries are getting richer, and those in the non-industrialised countries are at best standing still: so that the gap between the industrialised countries and the rest is widening every day. On the world scale this is the gap between the rich and the poor….Life for the overwhelming majority of mankind has always been nasty, brutish and short. It is so in the poor countries still.

This disparity between the rich and the poor has been noticed. It has been noticed, most acutely and not unnaturally, by the poor. Just because they have noticed it, it won’t last for long. Whatever else in the world we know survives to the year 2000, that won’t. Once the trick of getting rich is known, as it now is, the world can’t survive half rich and half poor. It’s just not on. [C. P. Snow, The Two Cultures, Cambridge University Press, Canto Classics, p. 42]

Well, well, what extraordinary, almost touching, optimism.

Sir Charles did not understand, or care to, perhaps, the extraordinary pertinacity of the rich, those in power, their capacity to manipulate political and economic systems, their almost total control of consciousness and imagination, their ability to promulgate the central principles of the ideology that drives the economic inequality of this world to ever higher levels. (Snow was certainly correct that the world would not remain ‘half rich and half poor’ – that fraction, an always inaccurate one, tilts more now in the direction of the one percent–ninety-nine percent formulation made famous by Occupy Wall Street.)

Snow was also, as many commentators pointed out at the time in critical responses to his lectures, in the grip of an untenable optimism about the ameliorating effects of the scientific and industrial revolutions on both the world of nature and man: as their effects spread, bringing in their wake material prosperity and intellectual enlightenment, old social and political structures would give way. But science and technology can comfortably co-exist with reactionary politics; they can be easily deployed to prop up repressive regimes; they can be just as easily used to sustain economic and political injustice as to remedy it. There is ample evidence for these propositions in the behavior of modern governments, which, for instance, deploy the most sophisticated tools of electronic surveillance to keep their citizens under watch, acquiescent and obedient. And automation, that great savior of human labor, which was supposed to make our lives less ‘nasty and brutish,’ might instead, when it takes root in such unequal societies, put all workers out to pasture.

But let us allow ourselves to be captured by the hope shown in Snow’s lectures that such radical inequality as was on display in 1959 and thereafter, cannot be a stable state of affairs. Then we might still anticipate that at some point in the future, armed with–among other tools–the right scientific and technical spanners to throw into the wheels of the political and economic juggernaut that runs over them, the poor will finally rise up.

Nicholas Carr on Automation’s Perils

Nicholas Carr offers us some interesting and thoughtful worries about automation in The Atlantic (‘All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines,’ 23 October 2013). These worries center largely on de-skilling: as automation grows ever more sophisticated–and evidence suggests it is pushing into domains once thought to be inaccessible to it–humans will lose the precious know-how associated with those domains, setting themselves up for catastrophe once the technology fails, as it inevitably will. Carr’s examples are alarming; he highlights the use of the ‘substitution fallacy’ in standard defenses of automation; most usefully, he points out that as automation proceeds, all too many humans will become merely its monitors; and finally, he concludes:

Whether it’s a pilot on a flight deck, a doctor in an examination room, or an Inuit hunter on an ice floe, knowing demands doing. One of the most remarkable things about us is also one of the easiest to overlook: each time we collide with the real, we deepen our understanding of the world and become more fully a part of it. While we’re wrestling with a difficult task, we may be motivated by an anticipation of the ends of our labor, but it’s the work itself—the means—that makes us who we are. Computer automation severs the ends from the means. It makes getting what we want easier, but it distances us from the work of knowing. As we transform ourselves into creatures of the screen, we face an existential question: Does our essence still lie in what we know, or are we now content to be defined by what we want? If we don’t grapple with that question ourselves, our gadgets will be happy to answer it for us.

This is a good question to ask. I want to complicate the picture somewhat by raising some questions of my own.

1. Does Carr want to suggest we roll back the advancing tide of automation? Should we demarcate some areas of human expertise as ‘too human’ or ‘too important’ to be automated? Should we discourage research on automated driving, navigation systems, spell checking, and the like? Should we make a list of ‘core human cognitive capacities’ and then discourage research on automating them? How would we ‘discourage’ such research? By law, the market, or social norming? What would such judgments be based on? Do we have a set of values that would animate them and that we could rely on?

2. There is a flip-side to the de-skilling blamed on automation: a tremendous increase in human knowledge and technical capacities has been required to create and implement the systems that so alarm us. Where does this knowledge and its associated power reside? As a society, we are witnessing the creation of a new elite of knowledgeable producers, those who make the gadgets that Carr worries are making us dumb. Are we, along with economic inequality, also creating cognitive inequality? Can the technical knowledge gained by work on automation help us alleviate the problems associated with de-skilling?

This latter consideration suggests that perhaps the real problem with automation is not automation per se but automation in a radically inegalitarian and economically skewed society like ours, one whose economic and moral priorities do not permit an adequate amelioration of the effects of automation, or permit a rich enough life that may allow those de-skilled by automation in some domains to develop and apply their talents elsewhere.

If Machines Do All The ‘Work’, What Will Humans Do?

At The Atlantic, Moshe Vardi wonders about the consequences of machine intelligence. Vardi’s article features the subtitle ‘If machines are capable of doing any work that humans can do, then what will humans do?’ and is occasioned by the following:

While the loss of millions of jobs over the past few years has been attributed to the Great Recession, whose end is not yet in sight, it now seems that technology-driven productivity growth is at least a major factor.

As Vardi notes, worries about the loss of employment caused by growth in technological innovation are not new and have often been met by varieties of techno-optimism: ‘new technologies will create new jobs!’ Such optimism includes that of Keynes, who

[I]magined 2030 as a time in which most people worked only 15 hours a week, and would occupy themselves mostly with leisure activities.

Vardi is not reassured:

I do not find this to be a promising future. First, if machines can do almost all of our work, then it is not clear that even 15 weekly hours of work will be required. Second, I do not find the prospect of leisure-filled life appealing. I believe that work is essential to human well-being. Third, our economic system would have to undergo a radical restructuring to enable billions of people to live lives of leisure.

But a life full of leisure is only problematic if we conceive of leisure in extremely impoverished ways: perhaps watching television sitcoms endlessly, sitting around twiddling our thumbs, working through one bag of potato chips after another. Why is leisure somehow imagined to be neither intellectually nor physically taxing? Why couldn’t leisure involve physical recreation, reading and writing books, proving theorems, painting, or writing poems? Can all these only be done for gainful employment? Perhaps the problem with a world ‘run’ by machines that relieve us of ‘work’ while leaving us free to pursue ‘leisure’ is not the presence of machines, but the absence of a richer vision of the human life.

Of course, the worry about an automated future really seems to be that if humans aren’t ‘working’ they aren’t getting ‘paid,’ or rather, they aren’t making ‘money’ to ‘support’ themselves. So this vision of the human future is only frightening if we imagine humans made destitute by machines doing all the work. But then those humans are not going to be in a position to pursue ‘leisure.’ They’ll be too busy robbing, stealing, scrounging and begging to feed their families and themselves. They’ll be ‘working’ pretty hard.

Vardi’s third point is the one he should truly be worried about. The problem is not one of work or leisure. The problem is reconfiguring a political economy centered on massive automation to ensure human beings will not be destitute. Work and leisure are traditionally opposed to each other because we cannot fill our time with pleasurable, leisurely activities (understood broadly, as above) without being economically deprived. A world in which the economic needs of man are taken care of by machines, leaving us free to do non-coerced work, does not sound unpleasant to me; if the automated economy of tomorrow makes it possible for us to do less work-for-wages to meet our needs, our leisure time may be devoted to pursuing our intellectual and physical goals. The real problem is that the economy of tomorrow is only too likely to be like the economy of today: massive, skewed concentrations of wealth in the hands of monopolists. We won’t have much time for leisure in that one.