Contra Corey Pein, Computer Science Is A Science

In this day and age, sophisticated critique of technology and science is much needed. What we don’t need are critiques like this long piece in The Baffler by Corey Pein, which, I think, is trying to mount a critique of the lack of ethics education in computer science curricula but seems most concerned with asserting that computer science is not a science, relying, I think, on the premise that “Silicon Valley activity and propaganda” = “computer science.” I fail to understand how a humanistic point is made by asserting the ‘unscientific’ nature of a purported science, but your mileage may vary. Anyway, on to Pein.


Dear Legal Academics, Please Stop Misusing The Word ‘Algorithms’

Everyone is concerned about ‘algorithms.’ Especially legal academics: law review articles, conferences, and symposia all bear testimony to this claim. Algorithms and transparency; the tyranny of algorithms; how algorithms can deprive you of your rights; and so on. Algorithmic decision making is problematic; so is algorithmic credit scoring; so is algorithmic stock trading. You get the picture; something new and dangerous called the ‘algorithm’ has entered the world, and it is causing havoc. Legal academics are on the case (and they might even occasionally invite philosophers and computer scientists to pitch in with this relief effort).

There is a problem with this picture. ‘Algorithms’ is the wrong word to describe the object of legal academics’ concern. An algorithm is “an unambiguous specification of how to solve a class of problems,” or a step-by-step procedure that terminates with a solution to a given problem. These problems can be of many kinds, and not only mathematical or logical ones: a cake-baking recipe is also an algorithm, as are instructions for crossing a street. Algorithms can be deterministic or non-deterministic; they can be exact or approximate; and so on. But, and this is their especial feature, algorithms are abstract specifications; they lack concrete implementations.

Computer programs are one kind of implementation of algorithms, but not the only one. The algorithm for long division can be implemented by pencil and paper; it can also be automated on a hand-held calculator; and of course, you can write a program in C or Python or any other language of your choice and then run the program on a hardware platform of your choice. The algorithm implementing the TCP protocol can be programmed to run over an Ethernet network; in principle, it could also be implemented by carrier pigeon. Different implementation, different ‘program,’ different material substrate. For the same algorithm, there are good implementations and bad implementations (the algorithm might specify the right answer for every input, but a flawed implementation introduces errors and fails to deliver it); some implementations are incomplete; some are more efficient and effective than others. Human beings can implement algorithms; so can well-trained animals. Which brings us to computers and the programs they run.
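To make the distinction concrete, here is a minimal sketch of my own (it is not drawn from the legal literature under discussion; the procedure is just the schoolbook one, and it assumes positive integers): the same abstract algorithm for integer division, ‘repeatedly subtract the divisor and count the subtractions,’ implemented twice in Python, once correctly and once with a bug.

    # The same abstract algorithm, division by repeated subtraction, written twice.
    def divide(dividend, divisor):
        """Correct implementation: returns (quotient, remainder)."""
        quotient, remainder = 0, dividend
        while remainder >= divisor:   # step: can we still subtract?
            remainder -= divisor      # step: subtract the divisor
            quotient += 1             # step: count the subtraction
        return quotient, remainder

    def divide_flawed(dividend, divisor):
        """Flawed implementation of the same algorithm: the loop test is wrong."""
        quotient, remainder = 0, dividend
        while remainder > divisor:    # bug: stops too early when remainder == divisor
            remainder -= divisor
            quotient += 1
        return quotient, remainder

    print(divide(17, 5))         # (3, 2), just what pencil-and-paper work yields
    print(divide_flawed(15, 5))  # (2, 5), wrong; the implementation, not the algorithm, errs

The abstract specification is identical in both cases; a clerk with pencil and paper, a calculator, and these few lines all count as implementations of it, and it is only the flawed function, not the algorithm, that is at fault for the wrong answer.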

The reason automation and the computers that deliver it to us are interesting and challenging, conceptually and materially, is that they implement algorithms in interestingly different ways via programs on machines. They are faster; much faster. The code that runs on computers can be obscured, because human-readable text programs are transformed into machine-readable binary code before execution, thus making study, analysis, and critique of the algorithm in question well nigh impossible, especially when it is protected by a legal regime as proprietary information. Computerized implementations are relatively permanent and easily copied; they can be shared and distributed; their digital outputs can be stored indefinitely. These affordances are not present in other, non-automated implementations of algorithms.
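A small illustration of that obscuring transformation, using Python’s standard dis module only because it makes the intermediate form easy to display; the function and its arithmetic are invented for the example, and compiled, proprietary binaries sit much further from readable source than this does.

    # My own toy example: watch readable source become the lower-level form
    # that actually gets executed. Python's standard 'dis' module displays it.
    import dis

    def add_sales_tax(price):
        # Perfectly readable source; the 7% rate is invented for the example.
        return price * 1.07

    dis.dis(add_sales_tax)
    # Prints the bytecode instructions the interpreter actually runs, already
    # harder to follow than the two lines above. Optimized, stripped machine
    # code in a proprietary binary is harder still, and a legal regime of
    # trade secrecy may forbid inspecting it at all.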

The use of ‘algorithm’ in the context of the debate over the legal regulation of automation is misleading. It is the ‘automation’ and ‘computerized implementation’ of an algorithm for credit scoring that is problematic; it is so because of specific features of its implementation. The credit scoring algorithm is, of course, proprietary; moreover, its programmed implementation is proprietary too, a trade secret. The credit scoring algorithm might be a complex mathematical algorithm readable by a few humans; its machine code is readable only by a machine. Had the same algorithm been implemented by hand, by human clerks sitting in an open office, carrying out their calculations by pencil and paper, we would not have the same concerns. (This process could also be made opaque, but that would be harder to accomplish.) Conversely, a non-algorithmic, non-machinic process, a human one, say, would be subject to the same normative constraints.
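To see how little of the worry resides in the ‘algorithm’ itself, consider a toy, entirely hypothetical credit-scoring rule of my own devising; no actual scoring model is being described here.

    # An entirely made-up scoring rule: the features and weights are invented.
    WEIGHTS = {"income": 0.4, "years_employed": 0.3, "missed_payments": -0.3}

    def credit_score(applicant):
        # A weighted sum of normalized features; nothing a clerk with pencil
        # and paper could not compute.
        return sum(WEIGHTS[feature] * applicant[feature] for feature in WEIGHTS)

    applicant = {"income": 0.8, "years_employed": 0.5, "missed_payments": 0.1}
    print(round(credit_score(applicant), 2))  # 0.44

The arithmetic is perfectly legible; the familiar concerns arise when a rule like this disappears into a fast, proprietary, machine-readable implementation applied at scale.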

None of the concerns currently expressed about ‘the rule/tyranny of algorithms’ would be as salient were the algorithms not being automated on computing systems; our concerns about them would be significantly attenuated. It is not the step-by-step solution to a credit scoring problem (the ‘algorithm’) that troubles us; it is its obscurity, its speed, its placement on a platform supposed to be infallible, a jewel of a socially respected ‘high technology.’

Of course, the claim is often made that algorithmic processes are replacing non-algorithmic (‘intuitive, heuristic, human, inexact’) solutions and processes; that is true, but again, the concern over this replacement would not be the same, qualitatively or quantitatively, were these algorithmic processes not being computerized and automated. It is the ‘disappearance’ of the algorithm into the machine that is the genuine issue at hand here.

Contra Cathy O’Neil, The ‘Ivory Tower’ Does Not ‘Ignore Tech’

In ‘Ivory Tower Cannot Keep On Ignoring Tech,’ Cathy O’Neil writes:

We need academia to step up to fill in the gaps in our collective understanding about the new role of technology in shaping our lives. We need robust research on hiring algorithms that seem to filter out people with mental health disorders…we need research to ensure that the same mistakes aren’t made again and again. It’s absolutely within the abilities of academic research to study such examples and to push against the most obvious statistical, ethical or constitutional failures and dedicate serious intellectual energy to finding solutions. And whereas professional technologists working at private companies are not in a position to critique their own work, academics theoretically enjoy much more freedom of inquiry.

There is essentially no distinct field of academic study that takes seriously the responsibility of understanding and critiquing the role of technology — and specifically, the algorithms that are responsible for so many decisions — in our lives. That’s not surprising. Which academic department is going to give up a valuable tenure line to devote to this, given how much academic departments fight over resources already?

O’Neil’s piece is an unfortunate continuation of the trend of castigating academia for its lack of social responsibility, all the while ignoring the work academics do in precisely those domains where their absence is supposedly felt.

In her Op-Ed, O’Neil ignores science and technology studies, a field of study that “takes seriously the responsibility of understanding and critiquing the role of technology,” and many of whose members are engaged in precisely the kind of studies she thinks should be undertaken at this moment in the history of technology. Moreover, there are fields of academic study such as the philosophy of science, the philosophy of technology, and the sociology of knowledge, all of which take very seriously the task of examining and critiquing the conceptual foundations of science and technology; such inquiries are not merely elucidatory, they are very often critical and skeptical. These disciplines, then, produce work that makes both descriptive and prescriptive claims about the practice of science, and about the social, political, and ethical values that underwrite what may seem like purely ‘technical’ decisions pertaining to design and implementation.

The humanities are not alone in this regard; most computer science departments now require a class in ‘Computer Ethics’ as part of the requirements for their major (indeed, I designed one such class here at Brooklyn College, and taught it for a few semesters). And of course, legal academics have, in recent years, started to pay attention to these fields and to incorporate them into their writings on ‘algorithmic decision making,’ ‘algorithmic control,’ and so on. (The work of Frank Pasquale and Danielle Citron is notable in this regard.) If O’Neil is interested, she could dig deeper into the philosophical canon and read works by critical theorists like Herbert Marcuse and Max Horkheimer, who mounted rigorous critiques of scientism, reductionism, and positivism in their works. Lastly, O’Neil could read my co-authored work Decoding Liberation: The Promise of Free and Open Source Software, a central claim of which is that transparency, not opacity, should be the guiding principle for software design and deployment. I’d be happy to send her a copy if she so desires.

Self-Policing In Response To Pervasive Surveillance

On Thursday night, in the course of conversation with some of my Brooklyn College colleagues, I confessed to having internalized a peculiar sort of ‘chilling effect’ induced by a heightened sensitivity to our modern surveillance state. To wit, I said something along the lines of “I would love to travel to Iran and Pakistan, but I’m a little apprehensive about the increased scrutiny that would result.” When pressed to clarify by my companions, I made some rambling remarks that roughly amounted to the following. Travel to Iran and Pakistan, Islamic nations highly implicated in various foreign policy imbroglios with the US and often accused of supporting terrorism, is closely monitored by national law enforcement and intelligence agencies (the FBI, CIA, and NSA); I expected to encounter some uncomfortable moments on my arrival back in the US thanks to questioning by customs and immigration officers (with a first name like mine, which is not Arabic in origin but is, in point of fact, a very common and popular name in the Middle East, I would expect nothing less). Moreover, given the data point that my wife is Muslim, I would expect such attention to be heightened (data mining algorithms would establish a ‘networked’ connection between us, and given my wife’s own problems when flying, I would expect such a connection to appear all the more ‘suspicious’); thereafter, I could expect increased scrutiny every time I traveled (and perhaps in other walks of life, given the extent of data sharing between various governmental agencies).

It is quite possible that all of the above sounds extremely paranoid and misinformed, and my worries a little silly, but I do not think they are entirely without glimmers of truth. The technical details are not too wildly off the mark; the increased scrutiny after travel is a common occurrence for many travelers deemed ‘suspicious’ for unknown reasons; and so on. The net result is a curious sort of self-policing on my part: as I look to make travel plans for the future, I will, with varying degrees of self-awareness about my motivations, prefer other destinations and locales. I will have allowed myself to be subjected to an invisible set of constraints not actually experienced (except indirectly, in part, as in my wife’s experiences when flying).

This sort of ‘indirect control’ might be pervasive surveillance’s most pernicious effect.

Note: My desire to travel to Iran and Pakistan is grounded in some fairly straightforward desires: Iran is a fascinating and complex country, host to an age-old civilization, with a rich culture and a thriving intellectual and academic scene; Pakistan is of obvious interest to someone born in India, but even more so to someone whose ethnic background is Punjabi, for part of the partitioned Punjab is now in Pakistan (as I noted in an earlier post about my ‘passing for Pakistani,’ “my father’s side of the family hails from a little village–now a middling town–called Dilawar Cheema, now in Pakistan, in Gujranwala District, Tehsil Wazirabad, in the former West Punjab.”)

Please, Can We Make Programming Cool?

Is any science as desperate as computer science to be really, really liked? I ask because, not for the first time and certainly not the last, I am confronted with yet another report of an effort to make computer science ‘cool’, one that tries, in fact, to make its central component, programming, cool.

The presence of technology in the lives of most teenagers hasn’t done much to entice more of them to become programmers. So Hadi Partovi has formed a nonprofit foundation aimed at making computer science as interesting to young people as smartphones, Instagram and iPads. Mr. Partovi…founded Code.org with the goal of increasing the teaching of computer science in classrooms and sparking more excitement about the subject among students….Code.org’s initial effort will be a short film…that will feature various luminaries from the technology industry talking about how exciting and accessible programming is….It also isn’t clear that Code.org’s film will succeed where modern technologies themselves have failed: in getting young people excited about programming.

I don’t know what being cool means for programming, but if it means convincing potential converts that those who program don’t need to think logically or algorithmically or in a structured fashion, or that somehow programming can be made to, you know, just flow with no effort, that it can be all fun and games, then, like all other efforts before it, Mr. Partovi’s effort is doomed.

Here is why. Programming is hard. It’s not easy and never will be. When you write programs you will hit walls, you will be frustrated, you will tear your hair out, you will be perplexed. Sometimes things will go right and programs will run beautifully, but it will often take a long time to get things working. When programs work, it is incredibly satisfying, which is why some people enjoy programming and find beauty and power in it. Programming can be used to create shiny toys and things that go pow and zoom and sometimes kapow, but it will never be shiny or go pow and zoom or even kapow. Before the pot of gold is reached, there is a fairly tedious rainbow to be traversed.

Writers write and produce potboilers, pulp fiction, romances, great novels, comedies, screenplays, essays, creative non-fiction, a dazzling array that entertains, beguiles, and fascinates. But is writing fun? FUCK NO. It’s horrible. Yes, you produce great sentences, and yes, sometimes things fall into place and you see a point made on the page, which comes out just the way you wanted it to. But all too soon, it’s over and you are facing a blank page again. Writing produces glamorous stuff, but it is very far from glamorous itself; it is tedious, slow, and very likely to induce self-loathing. The folks who write do not try to make writing out to be accessible or fun. Because it isn’t. You do it because you can find moments of beauty in it, and because you can solve the puzzle of finding the right combination of words that best say what you wanted to say. Programming is like that. Very much so. Word processors can get as flashy as they want; they won’t make writing easier. The slickest programming tools won’t make programming easier either.