Contra Corey Pein, Computer Science Is A Science

In this day and age, sophisticated critique of technology and science is much needed. What we don’t need are critiques like this long piece in The Baffler by Corey Pein, which, I think, is trying to mount a critique of the lack of ethics education in computer science curricula but seems most concerned with asserting that computer science is not a science, relying, as far as I can tell, on the premise that “Silicon Valley activity and propaganda” = “computer science.” I fail to understand how a humanistic point is made by asserting the ‘unscientific’ nature of a purported science, but your mileage may vary. Anyway, on to Pein.


No, Aristotle Did Not ‘Create’ The Computer

For the past few days, an essay titled “How Aristotle Created The Computer” (The Atlantic, March 20, 2017, by Chris Dixon) has been making the rounds. It begins with the following claim:

The history of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century. Mathematical logic was pioneered by philosopher-mathematicians, most notably George Boole and Gottlob Frege, who were themselves inspired by Leibniz’s dream of a universal “concept language,” and the ancient logical system of Aristotle.

Dixon then goes on to trace this ‘history of ideas,’ showing how the development–and increasing formalization and rigor–of logic contributed to the development of computer science and the first computing devices. Along the way, Dixon makes note of the contributions–direct and indirect–of Claude Shannon, Alan Turing, George Boole, Euclid, Rene Descartes, Gottlob Frege, David Hilbert, Gottfried Leibniz, Bertrand Russell, Alfred Whitehead, Alonzo Church, and John von Neumann. This potted history is exceedingly familiar to students of the foundations of computer science–a demographic that includes computer scientists, philosophers, and mathematical logicians–but presumably that is not the audience Dixon is writing for; those students might wonder why Augustus De Morgan and Charles Peirce do not feature in it. Given this temporally extended history, with its many contributors and their diverse contributions, why does the article carry the headline “How Aristotle Created the Computer”? Aristotle did not create the computer or anything like it; he did make important contributions to a fledgling field, one that took many more centuries to develop into maturity. (The contributions to this field by logicians and systems of logic from alternative philosophical traditions, like the Indian one, are, as usual, studiously ignored in Dixon’s history.) And as a philosopher, I cannot resist asking: what do you mean by ‘created’? What counts as ‘creating’?

The easy answer is that it is clickbait. Fair enough. We are by now used to the idiocy of the misleading clickbait headline, one designed to ‘attract’ more readers by making the piece seem more ‘interesting’; authors very often have little choice in the matter, and have to watch helplessly as hit-hungry editors mangle the impact of the actual content of their work. (As in this case?) But it is worth noting this headline’s contribution to the pernicious notion of the ‘creation’ of the computer and to the idea that it is possible to isolate a singular figure as its creator–a clear hangover of a religious sentiment that things that exist must have creation points, ‘beginnings,’ and creators. It is yet another contribution to the continued mistaken recounting of the history of science as a story of ‘towering figures.’ (Incidentally, I do not agree with Dixon that the history of computers is “better understood as a history of ideas”; that history is, instead, an integral component of the history of computing in general, which also includes a social history and an economic one. Telling the history of computing as a history of objects is a perfectly reasonable thing to do when we remember that actual, functioning computers are physical instantiations of abstract notions of computation.)

To end on a positive note, here are some alternative headlines: “Philosophy and Mathematics’ Contributions To The Development of Computing”; “How Philosophers and Mathematicians Helped Bring Us Computers”; or “How Philosophical Thinking Makes The Computer Possible.” None of these are as ‘sexy’ as the original headline, but they are far more informative and accurate.

Note: What do you think of my clickbaity headline for this post?

Artificial Intelligence And Go: (Alpha)Go Ahead, Move The Goalposts

In the summer of 1999, I attended my first ever professional academic philosophy conference–in Vienna. At the conference, one titled ‘New Trends in Cognitive Science’, I gave a talk titled (rather pompously) ‘No Cognition without Representation: The Dynamical Theory of Cognition and The Emulation Theory of Mental Representation.’ I did the things you do at academic conferences as a graduate student in a job-strapped field: I hung around senior academics, hoping to strike up conversation (I think this is called ‘networking’); I tried to ask ‘intelligent’ questions at the talks, hoping my queries and remarks would mark me out as a rising star, one worthy of being offered a tenure-track position purely on the basis of my sparkling public presence. You know the deal.

Among the talks I attended–a constant theme of which was the prospect of the mechanization of the mind–was one on artificial intelligence. Or, more accurately, the speaker concerned himself with evaluating the possible successes of artificial intelligence in domains like game-playing. Deep Blue had beaten Garry Kasparov in 1997, in what was effectively an unofficial human-versus-machine chess world championship, and such questions were no longer idle queries. In the wake of Deep Blue’s success, the usual spate of responses to news of artificial intelligence’s advance in some domain had ensued: Deep Blue’s success did not indicate any ‘true intelligence’ but rather pure ‘computing brute force’; a true test of intelligence awaited in other domains. (Never mind that beating a human champion in chess had always been held out as a kind of Holy Grail for game-playing artificial intelligence.)

So, during this talk, the speaker elaborated on what he took to be artificial intelligence’s true challenge: learning and mastering the game of Go. I did not fully understand the contrasts drawn between chess and Go, but they seemed to come down to two vital ones: human Go players relied a great deal–indeed, had to rely–on ‘intuition’, and on a ‘positional sizing-up’ that could not be reduced to an algorithmic process. Chess did not rely on intuition to the same extent; its board assessments were more amenable to algorithmic calculation. (Go’s much larger state space was also a problem.) Therefore, roughly, success in chess was not so surprising; the real challenge was Go, and that was never going to be mastered.
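To put the ‘much larger state space’ point in rough numbers, here is a minimal back-of-the-envelope sketch in Python comparing idealized game-tree sizes. The branching factors and game lengths it uses are commonly cited approximations, not figures from the talk or from the AlphaGo coverage:

```python
# A rough, illustrative comparison of game-tree sizes for chess and Go.
# The branching factors (~35 for chess, ~250 for Go) and typical game
# lengths (~80 and ~150 plies) are commonly cited approximations.

import math

def log10_game_tree_size(branching_factor: float, plies: int) -> float:
    """Return log10 of an idealized game-tree size, branching_factor ** plies."""
    return plies * math.log10(branching_factor)

chess = log10_game_tree_size(branching_factor=35, plies=80)
go = log10_game_tree_size(branching_factor=250, plies=150)

print(f"Chess game tree: roughly 10^{chess:.0f} positions")  # ~10^124
print(f"Go game tree:    roughly 10^{go:.0f} positions")     # ~10^360
```

The comparison is only meant to show why the brute-force search that served chess programs so well could not simply be scaled up to Go.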

Yesterday, Google’s DeepMind AlphaGo system beat the South Korean Go master Lee Se-dol in the first of an intended five-game series. Mr. Lee conceded defeat in three and a half hours. His pre-game mood was optimistic:

Mr. Lee had said he could win 5-0 or 4-1, predicting that computing power alone could not win a Go match. Victory takes “human intuition,” something AlphaGo has not yet mastered, he said.

Later, though, he said that “AlphaGo appeared able to imitate human intuition to a certain degree,” a fact borne out for him during the game, when “AlphaGo made a move so unexpected and unconventional that he thought ‘it was impossible to make such a move.’”

As Jean-Pierre Dupuy noted in The Mechanization of Mind, a very common response to the ‘mechanization of mind’ is that such attempts merely simulate or imitate, that they are mere fronts for machinic complexity. But these responses seemingly never consider the possibility that the phenomenon they take to be genuine–the model being imitated or simulated–can only retain that status as long as the simulations and imitations remain flawed. As those flaws diminish, the privileged status of the ‘real thing’ diminishes in turn. A really good simulation, indistinguishable from the ‘real thing,’ should make us wonder why we grant the original such a distinct station.

John Cheever On Computer Programming

In The Wapshot Chronicle (Harper and Row, New York, 1957), John Cheever writes:

There was a demand that year for Tapers and he pointed this out to Coverly as his best bet. The government would pay half of Coverly’s tuition at the MacIlhenney Institute. It was a four-month course and if he passed his exams he would be taken into government service at seventy-five dollars a week. Advised and encouraged by his friend, Coverly enrolled in some night classes on Taping. This involved the translation of physics experiments into the symbols–or tape–that could be fed into a computation machine….

The first lecture was an orientation talk on cybernetics or automation, and if Coverly, with his mildly rueful disposition, had been inclined to find any irony in his future relationship to a thinking machine, he was swiftly disabused. Then they got to work on memorizing the code.

This was like learning a language and a rudimentary one. Everything was done by rote. They were expected to memorize fifty symbols a week. They were quizzed for fifteen minutes at the opening of each class and were given speed tests at the end of the two-hour period. After a month of this the symbols–like the study of any language–had begun to dominate Coverly’s thinking, and walking on the street he had gotten into the habit of regrouping numbers on license plates, prices in store windows and numerals on clocks so that they could be fed into a machine….[pp. 155]

Coverly passed his Civil Service examination and was qualified as a Taper. [pp. 164]

These little excerpts are notable for several reasons:

  1. I have never seen the programming of computers in that early period of computing history referred to as ‘taping’; neither have I seen programmers referred to as ‘tapers.’ I have not been able to find instances of this nomenclature elsewhere. (I have, of course, seen human calculators referred to as ‘computers.’)
  2. Cheever’s descriptions of ‘taping’ and the process of learning a ‘programming language’ are not elementary; I wonder if he had some experience with computers and working on them. (Incidentally, the method of instruction–the memorization of a set numerical quota of symbols every day or week–reminds me of a story a Taiwanese friend once told me about how Chinese is taught to young children in elementary schools.)
  3. Cheever refers to Coverly being employed at “one of the rocket-launching stations where Tapers were employed.” The Wapshot Chronicle was published in 1957; NASA only came into being in 1958, so the activities Cheever would have been referring to would presumably have been those of the US Air Force’s Atlas missile program or perhaps an experimental project run by NASA’s predecessor, the National Advisory Committee for Aeronautics (NACA).
  4. Cheever indicates that Coverly was “qualified as a Taper” on passing a Civil Service examination; I wonder whether there was such an examination and if so, who administered it, what it was called, what its contents were, etc.
  5. I wonder if such a reference to computer programming is among the first–if not the first–in post-war mainstream fiction. (By which I mean works not classified as science fiction.)

An Old Flame (No, Not That Kind)

Writing about the adversarial disputation styles present in academic philosophy reminded me of the time I lost my temper at someone who worked in the same department as me. (I avoid the term ‘colleague’ advisedly; this dude was anything but.) At the time, I was in the computer science department at Brooklyn College, and had for a long time been the subject of a series of personal attacks by a senior professor in the department. He made insulting remarks at department meetings about my research and my work on the curriculum committee, attacked me during my promotion interview, and, of course, made many, many snide, offensive remarks over the departmental mailing list. (I was not alone in being his target; many other members of my department had been attacked by him as well.)

Finally, after he made yet another crude comment on the mailing list about my work, matters came to a head. I lost my temper and wrote back:

Ok, it’s been mildly diverting for a while. But I’ve tired of dealing with your sub-literate philistine self.

First, I don’t care what your middle name is. I made one up; you want me to be careful in how I address you? When all I am subjected to is more of the stinking piles of meshugna hodgepodge that is periodically deposited in my inbox?

Secondly, you bore me. You are excessively pompous, and your actions and pronouncements reek of a disturbing misanthropy. You are a legend in your own mind, and nowhere else. You pontificate excessively, lack basic reading skills and are constitutionally incapable of constructing an argument. You suffer under the delusion that your laughable savant-like talents actually have something to do with intelligence. You strut around, convinced that you make sense, while what you really should do is pay less attention to those voices in your head.

Thirdly, while I could take some time to construct a rebuttal of your useless ramblings, I’d rather spend some time insulting you in public. That’s what you like to do, so why don’t I just play along for a bit? But only as long as you don’t bore me excessively. When it gets to that point, I’ll have my SPAM filter mark your emails as SPAM and toss them in the trash where they belong. I like a little light amusement once in a while, and you occasionally provide it. It’s cheap, low-brow entertainment. I think [senior professors] should be good for more than cheap entertainment but you have set your sights very low, so I should humor you for a bit before I go back to work. It’s the least I can do for a ‘colleague’.

I used to flame self-deluded folks like you for fun back in the good ol’ Usenet days; if you want to join in and stick a bulls-eye on your forehead, be my guest. I miss the days of flaming Penn State undergrads who ran to post ramblings like yours five minutes after they had received their first BITNET accounts. But those guys could read at least, so flaming them was fun. With you, I’m not sure. Maybe you should go write a grant, schmooze with a grants program officer, or take a journal editor out for lunch. Or perhaps take a history lesson in computer science. One thing you do need is an education. In manners, first and foremost, but once you are done with that, I’ll send you a list of other subjects you need to catch up on. There’s a whole world out there. Try it sometime.

When you can construct a flame, get back to me, bring an asbestos suit, and I’ll get to work. But please, try to entertain me. If I am to be subjected to foolishness, I want to be entertained as well.  You’re a bit like Borat without the satire or irony. Or humor. Or entertainment value. In short, (stop me if you’ve heard this before), mostly, you just bore me.

Now, I command you: entertain me. Write an email that makes sense. Otherwise, run along. I’ve got serious research to do.

This might seem like fun. But it wasn’t. It was draining and dispiriting. I had been provoked, and I had fallen for it.

Won’t get fooled again.

Changing Philosophical Career Paths

I began my academic philosophy career as a ‘logician.’ I wrote a dissertation on belief revision, and was advised by a brilliant logician, Rohit Parikh, someone equally comfortable in the departments of computer science, philosophy and mathematics. Belief revision (or ‘theory change’ if you prefer) is a topic of interest to mathematicians, logicians, and computer scientists. Because of the last-named demographic, I was able to apply for, and be awarded, a post-doctoral fellowship with a logics for artificial intelligence group in Sydney, Australia. (In my two years on the philosophy job market, I sent out one hundred and fourteen applications, and scored precisely zero interviews. My visits to the APA General Meeting in 1999 and 2001 were among the most dispiriting experiences of my life; I have never attended that forum again. The scars run too deep.)

During my post-doctoral fellowship, I continued to write on belief revision but I also co-authored papers on belief merging (which has the same formal structure as social choice theory), non-monotonic logic, and dynamic logic (in the area known as ‘reasoning about actions.’) Some papers went into the Journal of Philosophical Logic and related journals; yet others went into the refereed proceedings of the most important artificial intelligence conferences. Because of my publication record and because of my stellar job hunt numbers in the philosophy job market, I decided to apply for a computer science job at Brooklyn College. I interviewed, and got the job; I was assured I could teach in the philosophy department as well. (So, in philosophy, my numbers were 0-114; in computer science 1-1.)

A few years later, around 2005 or so, I stopped working in logic. I declined invitations to conferences; I dropped out of co-authored projects; I put many unfinished projects on the back-burner. (Some of them still strike me as very interesting, and if a promising graduate student ever wanted to work on them, I would consider advising him or her.) This was not a very smart move professionally: I had finally, after five years of work, acquired a cohort of like-minded researchers (and friends); I had become a member of academic and professional networks; some of my work had become known to people in the field; I had managed to secure funding for release time, conference visits, and academic visitors; my work was generating problems, like the ones mentioned above, which could be worked on by graduate students; and so on. It was an especially unwise move given my impending tenure and promotion review in 2007; I would have to begin anew in a new field and make headway there.

But I just couldn’t work as a logician any more. I wasn’t an incompetent logician but I wasn’t as good a logician as the folks I regularly interacted with in the course of my work. Working in logic didn’t come easily to me; I had to work harder than most to get anything done in it. I had also realized that while I enjoyed puzzling out the philosophical and conceptual implications of the various models for belief change, belief merging, and reasoning about actions that I worked on, I did not so much enjoy working on producing rigorous proofs of the various propositions these models entailed. I did not feel, as it were, a ‘flow’ in my work. (I continued to enjoy teaching related material; I taught discrete mathematics, artificial intelligence, and the theory of computation for the computer science department, and loved every minute of it. Well, I exaggerate, but you catch my drift.)

So I turned my mind to something else. I have no regrets about my decision. And I have no regrets about having spent six years working in the areas I did work in prior to my ‘departure.’ I learned a great deal of logic; I grew to appreciate the work of mathematicians and logicians and theoretical computer scientists; straddling disciplines was a deeply edifying experience; I am happy I did not spend all my time talking to philosophers. But I couldn’t stay there.

I continued to teach in both departments, and then finally, in 2010, I transferred from the computer science department to the philosophy department. I was finally ‘home.’

I wonder if readers of this blog have changed their career paths for reasons similar or dissimilar to mine. Perhaps you found yourself no longer engaged by the central problems in your area of interest; or perhaps it was something else altogether. I am curious, and interested.

Please, Can We Make Programming Cool?

Is any science as desperate as computer science to be really, really liked? I ask because, not for the first time and certainly not the last, I am confronted with yet another report of an effort to make computer science ‘cool’–trying, in fact, to make its central component–programming–cool.

The presence of technology in the lives of most teenagers hasn’t done much to entice more of them to become programmers. So Hadi Partovi has formed a nonprofit foundation aimed at making computer science as interesting to young people as smartphones, Instagram and iPads. Mr. Partovi…founded Code.org with the goal of increasing the teaching of computer science in classrooms and sparking more excitement about the subject among students….Code.org’s initial effort will be a short film…that will feature various luminaries from the technology industry talking about how exciting and accessible programming is….It also isn’t clear that Code.org’s film will succeed where modern technologies themselves have failed: in getting young people excited about programming.

I don’t know what being cool means for programming, but if it means convincing potential converts that those who program don’t need to think logically or algorithmically or in a structured fashion, or that somehow programming can be made to, you know, just flow with no effort, that it can be all fun and games, then, like all the efforts before it, Mr. Partovi’s effort is doomed.

Here is why. Programming is hard. It’s not easy and never will be. When you write programs you will hit walls, you will be frustrated, you will tear your hair out, you will be perplexed. Sometimes things will go right and programs will run beautifully, but it will often take a long time to get things working. When programs work, it is incredibly satisfying, which is why some people enjoy programming and find beauty and power in it. Programming can be used to create shiny toys and things that go pow and zoom and sometimes kapow, but it will never be shiny or go pow and zoom or even kapow. Before the pot of gold is reached, there is a fairly tedious rainbow to be traversed.

Writers write and produce potboilers, pulp fiction, romances, great novels, comedies, screenplays, essays, creative non-fiction, a dazzling array that entertains, beguiles, and fascinates. But is writing fun? FUCK NO. It’s horrible. Yes, you produce great sentences, and yes, sometimes things fall into place, and you see a point made on the page, which comes out just the way you wanted it to. But all too soon, it’s over and you are facing a blank page again. Writing produces glamorous stuff but it is very far from being that; it is tedious, slow, and very likely to induce self-loathing. The folks who write do not try to make writing accessible or fun. Because it isn’t. You do it because you can find moments of beauty in it and because you can solve the puzzle of trying to find the right combination of words that say best what you wanted to say. Programming is like that. Very much so. Word processors can get as flashy as they want, they won’t make writing easier. The slickest programming tools won’t make programming easier either.