Everyone is concerned about ‘algorithms.’ Especially legal academics: law review articles, conferences, and symposia all bear testimony to this claim. Algorithms and transparency; the tyranny of algorithms; how algorithms can deprive you of your rights; and so on. Algorithmic decision making is problematic; so is algorithmic credit scoring; so is algorithmic stock trading. You get the picture: something new and dangerous called the ‘algorithm’ has entered the world, and it is causing havoc. Legal academics are on the case (and they might even occasionally invite philosophers and computer scientists to pitch in with this relief effort).
There is a problem with this picture: ‘algorithm’ is the wrong word to describe the object of legal academics’ concern. An algorithm is “an unambiguous specification of how to solve a class of problems,” a step-by-step procedure that terminates with a solution to a given problem. These problems can be of many kinds, and not only mathematical or logical ones: a cake-baking recipe is an algorithm, as are instructions for crossing a street. Algorithms can be deterministic or non-deterministic; they can be exact or approximate; and so on. But, and this is their crucial feature, algorithms are abstract specifications; they are distinct from any concrete implementation.
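The abstract/concrete distinction can be made vivid with a standard illustration (mine, not the essay’s): Euclid’s algorithm for the greatest common divisor is an abstract, step-by-step specification; the Python function below is merely one concrete implementation of it, on a par with carrying out the same steps by hand.

```python
def gcd(a: int, b: int) -> int:
    """One concrete implementation of Euclid's abstract algorithm:
    while b is nonzero, replace (a, b) with (b, a mod b);
    when b reaches zero, a holds the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

The same specification could just as well be executed with pencil and paper; nothing about the algorithm itself requires a computer.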
Computer programs are one kind of implementation of an algorithm, but not the only kind. The algorithm for long division can be carried out with pencil and paper; it can be automated on a hand-held calculator; and, of course, you can write a program in C or Python or any other language of your choice and run it on a hardware platform of your choice. The algorithms that make up the TCP protocol can be programmed to run over an Ethernet network; in principle, they could also be implemented by carrier pigeon. Different implementation, different ‘program,’ different material substrate; the same algorithm throughout. And for the same algorithm there are good implementations and bad ones: the algorithm might specify the right answer for any particular input, while a flawed implementation, because of its errors, fails to produce it. Some implementations are incomplete; some are more efficient and effective than others. Human beings can implement algorithms; so can well-trained animals. Which brings us to computers and the programs they run.
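The point about good and bad implementations of one and the same algorithm can be seen in a toy example (again mine, not the essay’s): both functions below purport to implement the same abstract procedure, summing the integers from 1 to n, but an off-by-one error makes the second a flawed implementation of a correct algorithm.

```python
def sum_to_n(n: int) -> int:
    # Faithful implementation: add each integer from 1 through n.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_to_n_flawed(n: int) -> int:
    # Flawed implementation of the same algorithm: range(1, n)
    # silently omits n itself, an off-by-one error.
    total = 0
    for i in range(1, n):
        total += i
    return total

print(sum_to_n(10))         # 55: the algorithm's right answer
print(sum_to_n_flawed(10))  # 45: the implementation's wrong one
```

The algorithm is blameless in the second case; the defect lives entirely in the implementation.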
The reason automation and the computers that deliver it are conceptually and materially interesting and challenging is that they implement algorithms in distinctively different ways, via programs running on machines. They are faster; much faster. The code that runs on computers can be obscured–because human-readable text programs are transformed into machine-readable binary code before execution–making study, analysis, and critique of the algorithm in question well-nigh impossible, especially when it is protected by a legal regime as proprietary information. These implementations are relatively permanent and easily copied; they can be shared and distributed, and their digital outputs can be stored indefinitely. These affordances are not present in other, non-automated implementations of algorithms.
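Even within a single language one can demonstrate the transformation from readable source to machine-oriented code (a sketch of the general point only; the scoring rule is a hypothetical of my own, not any actual proprietary system): compiling a small function in Python yields a code object whose instruction stream is a raw run of bytes, not prose.

```python
source = """
def score(income, debt):
    return income * 2 - debt  # a toy, hypothetical 'scoring' rule
"""

# compile() turns the human-readable text into a code object.
module_code = compile(source, "<toy>", "exec")

# Fish the function's own code object out of the module's constants.
fn_code = next(c for c in module_code.co_consts if hasattr(c, "co_code"))

# The executable form is an opaque sequence of bytes, not readable text.
print(type(fn_code.co_code))  # <class 'bytes'>
print(fn_code.co_code[:8])    # e.g. b'...': meaningless without a disassembler
```

A compiled C binary, shorn of its source, is more opaque still; the clerk’s pencil-and-paper worksheet has no analogous hidden layer.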
The use of ‘algorithm’ in the debate over the legal regulation of automation is therefore misleading. It is the automation, the computerized implementation, of an algorithm for credit scoring that is problematic, and it is problematic because of specific features of that implementation. The credit scoring algorithm is, of course, proprietary; moreover, its programmed implementation is proprietary too, a trade secret. The credit scoring algorithm might be a complex mathematical procedure readable by a few humans; its machine code is readable only by a machine. Had the same algorithm been implemented by hand, by human clerks sitting in an open office and carrying out their calculations with pencil and paper, we would not have the same concerns. (That process could also be made opaque, but doing so would be harder.) Conversely, a non-algorithmic, non-machinic process–a human one, say–would be subject to the same normative constraints.
None of the concerns currently expressed about ‘the rule/tyranny of algorithms’ would be as salient were the algorithms not automated on computing systems; our concerns about them would be significantly attenuated. It is not the step-by-step solution to the credit scoring problem–the ‘algorithm’–that is troubling; it is its obscurity, its speed, and its placement on a platform presumed infallible, a jewel of socially respected ‘high technology.’
Of course, the claim is often made that algorithmic processes are replacing non-algorithmic–‘intuitive, heuristic, human, inexact’–solutions and processes. That is true, but again, the concern over this replacement would not be the same, qualitatively or quantitatively, were these algorithmic processes not computerized and automated. It is the ‘disappearance’ of the algorithm into the machine that is the genuine issue here.