Contra Corey Pein, Computer Science Is A Science

In this day and age, sophisticated critique of technology and science is much needed. What we don’t need is critiques like this long piece in the Baffler by Corey Pein, which, I think, is trying to mount a critique of the lack of ethics education in computer science curricula but seems most concerned with asserting that computer science is not a science, relying, as far as I can tell, on the premise that “Silicon Valley activity and propaganda” = “computer science.” I fail to understand how a humanistic point is made by asserting the ‘unscientific’ nature of a purported science, but your mileage may vary. Anyway, on to Pein.

Leaving Facebook: You Can Run, But You Can’t Hide

I first quit Facebook in 2010, in response to a talk Eben Moglen gave at NYU about Facebook’s privacy-destroying ways; one of his most memorable lines was:

The East German Stasi used to have to deploy a fleet of undercover agents and wiretaps to find out what people did, who they met, what they ate, which books they read; now we just have a bunch of Like buttons and people tell a data monetizing corporation the same information for free.

That talk–in which Moglen referred to Mark Zuckerberg as a ‘thug’–also inspired a couple of young folk, then in attendance, to start Diaspora, an alternative social network in which users would own their data. I signed up for Diaspora soon after it kicked off; I also signed up for Google+. I returned to Facebook in 2012, a few months after starting my blog, because it was the only way I could see to distribute my posts. Diaspora and Google+ never ‘took off’; a certain kind of ‘first-mover’ status, and its associated network effects, had made sure there was little social networking happening on those alternative platforms.

Since then, I’ve stayed on Facebook, sharing photos, bragging about my daughter and my various published writings, and so on. I use the word ‘bragging’ advisedly; no matter how much you dress it up, that’s what I’ve been doing. But it has been a horrible experience in many ways: distraction, lowered self-esteem, and envy have been only its most prominent residues. Moreover, to have substantive discussions on Facebook, you must write. A lot. I’d rather write somewhere else, like here, or work on my books and essays. So, I desperately want to leave, to work on my writing. But, ironically, as a writer, I feel I have to stay on. Folks who have already accomplished a great deal offline can afford to stay off; those of us struggling to make a mark, to be noticed, have to stay. (Consider that literary agents now want non-fiction writers to demonstrate that they have a ‘social media presence’; that they have a flourishing Facebook and Twitter presence, which will make the marketing of their writings easier.) I know, I know; as a writer, I should work on my craft, produce my work, and not worry about anything else. I know the wisdom of that claim; reconciling it with the practical demands of this life is an ongoing challenge.

So, let’s say ‘we,’ the user ‘community’ on Facebook, decide to leave, and we find an alternative social network platform. I’m afraid little will have changed unless the rest of the world also changes: the world in which data is monetized for profit, and in which a social, moral, and economic principle makes all other values subservient to the making of profit. The problem isn’t Facebook. We could migrate to another platform; sure. They need to survive in this world, the one run by capital and cash; right. So they need to monetize data; ours. They will. Money has commodified all relationships, including the ones with social network platforms. So long as data is monetizable, we will face the ‘Facebook problem.’

Dear Legal Academics, Please Stop Misusing The Word ‘Algorithms’

Everyone is concerned about ‘algorithms.’ Especially legal academics; law review articles, conferences, and symposia all bear testimony to this claim. Algorithms and transparency; the tyranny of algorithms; how algorithms can deprive you of your rights; and so on. Algorithmic decision making is problematic; so are algorithmic credit scoring and algorithmic stock trading. You get the picture; something new and dangerous called the ‘algorithm’ has entered the world, and it is causing havoc. Legal academics are on the case (and they might even occasionally invite philosophers and computer scientists to pitch in with this relief effort).

There is a problem with this picture. ‘Algorithms’ is the wrong word to describe the object of legal academics’ concern. An algorithm is “an unambiguous specification of how to solve a class of problems,” or a step-by-step procedure that terminates with a solution to a given problem. These problems can be of many kinds; mathematical or logical ones are not the only ones, for a cake-baking recipe is also an algorithm, as are instructions for crossing a street. Algorithms can be deterministic or non-deterministic; they can be exact or approximate; and so on. But, and this is their especial feature, algorithms are abstract specifications; an algorithm, as such, is not tied to any concrete implementation.

Computer programs are one kind of implementation of algorithms, but not the only one. The algorithm for long division can be implemented with pencil and paper; it can also be automated on a hand-held calculator; and, of course, you can write a program in C or Python or any other language of your choice and then run it on a hardware platform of your choice. An algorithm implementing the TCP protocol can be programmed to run over an Ethernet network; in principle, it could also be implemented by carrier pigeon. Different implementation, different ‘program,’ different material substrate. For the same algorithm, there are good implementations and bad implementations (the algorithm might specify the right answer for every input, while a flawed implementation of it introduces errors and does not); some implementations are incomplete; some are more efficient and effective than others. Human beings can implement algorithms; so can well-trained animals. Which brings us to computers and the programs they run.
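
Before turning to those, a small illustration of the distinction: below is a minimal sketch, in Python, of the schoolbook long-division procedure. The function name and the details are mine, one arbitrary choice among many; a clerk with pencil and paper, or a calculator’s circuitry, would be realizing the same abstract algorithm.

```python
def long_division(dividend, divisor):
    """One concrete implementation of the abstract long-division algorithm:
    bring down one digit at a time, ask how many times the divisor fits,
    record that digit of the quotient, and carry the remainder forward."""
    if divisor <= 0 or dividend < 0:
        raise ValueError("this sketch handles non-negative dividends and positive divisors")
    quotient, remainder = 0, 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)   # bring down the next digit
        fits = remainder // divisor               # how many times does the divisor fit?
        quotient = quotient * 10 + fits           # record that digit of the quotient
        remainder -= fits * divisor               # carry the remainder forward
    return quotient, remainder

print(long_division(1234, 7))  # (176, 2), the same answer a patient clerk would reach
```

Nothing about the algorithm dictates this particular realization; written in C, punched into a calculator, or carried out by hand, it remains the same step-by-step procedure.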

The reason automation and the computers that deliver it to us are interesting and challenging–conceptually and materially–is that they implement algorithms in interestingly different ways, via programs running on machines. They are faster; much faster. The code that runs on computers can be obscured–because human-readable text programs are transformed into machine-readable binary code before execution–thus making study, analysis, and critique of the algorithm in question well nigh impossible, especially when it is protected by a legal regime as proprietary information. They are relatively permanent; they can be easily copied. This kind of implementation of an algorithm can be shared and distributed; its digital outputs can be stored indefinitely. These affordances are not present in other, non-automated implementations of algorithms.
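
To make that obscuring transformation a little more concrete, here is a hedged illustration using Python’s own bytecode as a rough stand-in for the compiled machine code described above; the toy ‘scoring’ function is invented here purely for illustration.

```python
import dis

def score(income, debt):
    # A toy 'scoring' rule, written as readable source text that one could
    # inspect, question, and critique.
    return 700 + income // 1000 - debt // 500

# What actually executes is a lower-level, far less readable form; dis.dis
# prints CPython's bytecode for the function, a rough analogue of the machine
# code a compiled program ships as. Distributed only in such a form (and
# protected as proprietary), the logic is much harder to study.
dis.dis(score)
```

The source text reads like an argument one can contest; the form that actually runs does not.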

The use of ‘algorithm’ in the context of the debate over the legal regulation of automation is misleading. It is the automated, computerized implementation of an algorithm for credit scoring that is problematic, and it is so because of specific features of that implementation. The credit scoring algorithm is, of course, proprietary; moreover, its programmed implementation is proprietary too, a trade secret. The credit scoring algorithm might be a complex mathematical algorithm readable by a few humans; its machine code is readable only by a machine. Had the same algorithm been implemented by hand, by human clerks sitting in an open office, carrying out their calculations with pencil and paper, we would not have the same concerns. (That process could also be made opaque, but doing so would be harder.) Conversely, a non-algorithmic, non-machinic process–a human one, say–would be subject to the same normative constraints.

None of the concerns currently expressed about ‘the rule/tyranny of algorithms’ would be as salient were the algorithms not being automated on computing systems; our worries about them would be significantly attenuated. It is not the step-by-step solution to a credit scoring problem–the ‘algorithm’–that is the problem; it is its obscurity, its speed, its placement on a platform supposed to be infallible, a jewel of a socially respected ‘high technology.’

Of course, the claim is often made that algorithmic processes are replacing non-algorithmic–‘intuitive, heuristic, human, inexact’–solutions and processes; that is true, but again, the concern over this replacement would not be the same, qualitatively or quantitatively, were these algorithmic processes not being computerized and automated. It is the ‘disappearance’ of the algorithm into the machine that is the genuine issue at hand here.

Death Of A Password

Time to bid farewell to an old, dear, and familiar friend, a seven-character one whose identity was inscribed, as if by magic, on my fingertips, which flew over the keyboard to bring it to life, time and time again. The time has come for me to lay it to rest, after years and years of yeoman service as a gatekeeper and sentry sans pareil. For years it guarded my electronic stores, my digital repositories of files and email messages. It made sure no interlopers trespassed on these vital treasures, perhaps to defile and destroy, or worse, to embarrass me by firing off missives to all and sundry in the world signed by me, and invoking the wrath of those offended and displeased upon my head. Its ‘design’ was simple: the artful placement of a special character between a pair of triplet letters, which served to produce a colloquial term referring to a major rock band. (Sorry for being coy, but I have hopes of resurrecting this password at some point in the future, when the madness for overly secure and yet utterly useless passwords has subsided.) Once devised, this password worked like magic; it was easy to remember, and I never forgot it, no matter how dire the circumstances.

Once my life became sufficiently complicated to require more than one computer account, as an increasing number of aspects of my life moved online, this password was pressed into double and, later, triple and quadruple duty: email clients, utility billing accounts, mortgage payments, online streaming sites, and all the rest. I knew this was a security risk of sorts, but I persisted; like many other computer users, I dreaded having to learn new, increasingly complicated passwords, and of course, I was just plain lazy. And yet, I was curiously protective of my password; I never shared it with anyone, not even a cohabiting girlfriend. My resistance broke down once I got married; my life was now even more intertwined with another person, our affairs messily tangled up; we often needed access to each other’s computer accounts. And so, it came to be: I shared my password with my wife. I wondered, as I wrote it down for her, whether she’d notice my little verbal trick, my little attempt to be clever. Much to my disappointment, she did not; she was all business; all she wanted was a string of letters that would let her retrieve a piece of information she needed.

The end, when it came, was prompted by a series of mishaps and by increasingly onerous security policies: my Twitter account was hacked, and many new online accounts imposed requirements that my old password could not meet. With some reluctance, I began adopting a series of new passwords, slowly consolidating them into a pair of alphanumeric combinations. My older password still worked, but on ever fewer accounts. Finally, another security breach was the last straw; I had been caught, and found wanting; the time had come to move on. So I did. But not without the odd backward glance or two, back at an older and simpler time.

The Fragile Digital World Described By Zeynep Tufekci Invites Smashing

In “The Looming Digital Meltdown” (New York Times, January 7th), Zeynep Tufekci writes,

We have built the digital world too rapidly. It was constructed layer upon layer, and many of the early layers were never meant to guard so many valuable things: our personal correspondence, our finances, the very infrastructure of our lives. Design shortcuts and other techniques for optimization — in particular, sacrificing security for speed or memory space — may have made sense when computers played a relatively small role in our lives. But those early layers are now emerging as enormous liabilities. The vulnerabilities announced last week have been around for decades, perhaps lurking unnoticed by anyone or perhaps long exploited.

This digital world is intertwined with, works for, and is used by an increasingly problematic social, economic, and political post-colonial and post-imperial world, one riven by political crisis and economic inequality, playing host to an increasingly desperate polity sustained and driven, all too often, by a rage and anger grounded in humiliation and shame. Within this world, all too many have had their noses rubbed in the dirt of their colonial and subjugated pasts, reminded again and again and again of how they are backward and poor and dispossessed and shameful, of how they need to play ‘catch up,’ to show that they are ‘modern’ and ‘advanced’ and ‘developed’ in all the right ways. The technology of the digital world has always been understood as the golden road to the future; it is what will make the journey to the land of the developed possible. Bridge the technological gap; all will be well. This digital world also brought with it the arms of the new age: the viruses, the trojan horses, the malware, the new weapons promising to reduce the gaping disparity between the rich and the poor, between North and South, between East and West–when it comes to the size of their conventional and nuclear arsenals, a disparity that allows certain countries to bomb yet others with impunity, from up close or from afar. The ‘backward world,’ the ‘poor,’ the ‘developing countries’ have understood that, besides nuclear weapons, digital weapons can also keep them safe, by threatening to bring the digital worlds of their opponents to their knees–perhaps with the malware that knocks out a reactor, or a city’s electric supply, or something else.

The marriage of a nihilistic anger with the technical nous of the digital weapon maker and the security vulnerabilities of the digital world is a recipe for disaster. This world, this glittering world, its riches all dressed up and packaged and placed out of reach, invites resentful assault. The digital world, the basket into which this world has placed all its eggs, invites smashing; and a nihilistic hacker might just be the person to do it. An arsenal of drones and cruise missiles and ICBMs will not be much of a defense against the insidious Trojan horse, artfully placed to do the most damage to a digital installation. Self-serving security experts, all hungering for the highly paid consulting gig, have long talked up this threat; but their greed does not make the threat any less real.

Ken Englehart’s Exceedingly Lame Argument Against Net Neutrality

Over at the New York Times, Ken Englehart, “a lawyer specializing in communications law, is a senior adviser for StrategyCorp, an adjunct professor at Osgoode Hall Law School and a senior fellow at the C. D. Howe Institute,” offers us an astonishing argument suggesting we not worry about the FCC’s move to repeal Net Neutrality. It roughly consists of saying, “Don’t worry, corporations will do right by you.” Englehart accepts that the concerns raised by opponents of the FCC–“getting rid of neutrality regulation will lead to a ‘two-tier’ internet: Internet service providers will start charging fees to websites and apps, and slow down or block the sites that don’t pay up…users will have unfettered access to only part of the internet, with the rest either inaccessible or slow”–have some merit, for he makes note of abuses by ISPs that confirm just those fears. But he just does not think we need worry that ISPs will abuse their new powers:

[T]hese are rare examples, for a reason: The public blowback was fierce, scaring other providers from following suit. Second, blocking competitors to protect your own services is anticompetitive conduct that might well be stopped by antitrust laws without any need for network neutrality regulations.

How reassuring. “Public blowback” seems unlikely to have any effect on the behavior of folks who run quasi-monopolies. Moreover, the idea that we should trust our ISPs not to indulge in behavior that “might well be stopped by antitrust laws” also sounds unlikely to assuage any concerns pertaining to the abuse of ISP powers. It gets better, of course:

Net-neutrality defenders also worry that some service providers could slow down high-data peer-to-peer traffic, like BitTorrent. And again, it has happened, most notably in 2007, when Comcast throttled some peer-to-peer file sharing.

But it’s still good:

So why am I not worried? I worked for a telecommunications company for 25 years, and whatever one may think about corporate control over the internet, I know that it simply is not in service providers’ interests to throttle access to what consumers want to see. Neutral broadband access is a cash cow; why would they kill it?

Because service providers will make all the money they need by providing faster services to premium customers and not give a damn about the plebes?

But don’t worry:

[T]here’s still competition: Some markets may have just one cable provider, but phone companies offer increasingly comparable internet access — so if the cable provider slowed down or blocked some sites, the phone company could soak up the affected customers simply by promising not to do so.

Or they could collude, with both charging high prices because they know customers have nowhere to go?

Is this the best defenders of the FCC can do? The old ‘market pressures will make corporations behave’ pony trick? Englehart’s cleverest trick, I will admit, is the aside that “the current net neutrality rule was put in place by the Obama administration.” That’s a good dog-whistle to blow. Anything done by the Obama administration is worth repealing by anyone connected with this administration. And their cronies, like Englehart.

Contra Cathy O’Neil, The ‘Ivory Tower’ Does Not ‘Ignore Tech’

In ‘Ivory Tower Cannot Keep On Ignoring Tech,’ Cathy O’Neil writes:

We need academia to step up to fill in the gaps in our collective understanding about the new role of technology in shaping our lives. We need robust research on hiring algorithms that seem to filter out people with mental health disorders…we need research to ensure that the same mistakes aren’t made again and again. It’s absolutely within the abilities of academic research to study such examples and to push against the most obvious statistical, ethical or constitutional failures and dedicate serious intellectual energy to finding solutions. And whereas professional technologists working at private companies are not in a position to critique their own work, academics theoretically enjoy much more freedom of inquiry.

There is essentially no distinct field of academic study that takes seriously the responsibility of understanding and critiquing the role of technology — and specifically, the algorithms that are responsible for so many decisions — in our lives. That’s not surprising. Which academic department is going to give up a valuable tenure line to devote to this, given how much academic departments fight over resources already?

O’Neil’s piece is an unfortunate continuation of a trend of castigating academia for its lack of social responsibility while ignoring the work academics do in precisely those domains where their absence is supposedly felt.

In her Op-Ed, O’Neil ignores science and technology studies, a field of study that “takes seriously the responsibility of understanding and critiquing the role of technology,” and many of whose members are engaged in precisely the kind of studies she thinks should be undertaken at this moment in the history of technology. Moreover, there are fields of academic study such as the philosophy of science, the philosophy of technology, and the sociology of knowledge, all of which take very seriously the task of examining and critiquing the conceptual foundations of science and technology; such inquiries are not merely elucidatory, they are very often critical and skeptical. Such disciplines, then, produce work that makes both descriptive and prescriptive claims about the practice of science, and about the social, political, and ethical values that underwrite what may seem like purely ‘technical’ decisions pertaining to design and implementation.

The humanities are not alone in this regard: most computer science departments now require a class in ‘Computer Ethics’ as part of the requirements for their major (indeed, I designed one such class here at Brooklyn College, and taught it for a few semesters). And of course, legal academics have, in recent years, started to pay attention to these fields and have incorporated them into their writings on ‘algorithmic decision making,’ ‘algorithmic control,’ and so on. (The work of Frank Pasquale and Danielle Citron is notable in this regard.) If O’Neil is interested, she could dig deeper into the philosophical canon and read works by critical theorists like Herbert Marcuse and Max Horkheimer, who mounted rigorous critiques of scientism, reductionism, and positivism in their works. Lastly, O’Neil could read my co-authored work Decoding Liberation: The Promise of Free and Open Source Software, a central claim of which is that transparency, not opacity, should be the guiding principle for software design and deployment. I’d be happy to send her a copy if she so desires.