Report On Brooklyn College Teach-In On ‘Web Surveillance And Security’

Yesterday, as part of ‘The Brooklyn College Teach-In & Workshop Series on Resistance to the Trump Agenda,’ I facilitated a teach-in on the topic of ‘web surveillance and security.’ During my session I made note of some of the technical and legal issues at play in these domains, and how technology and law have conspired to ensure that: a) we live in a regime of constant, pervasive surveillance; b) current legal protections–including the disastrous ‘third-party doctrine’ and the rubber-stamping of governmental surveillance ‘requests’ by FISA courts–are simply inadequate to safeguard our informational and decisional privacy; c) there is no daylight between the government and large corporations in their use and abuse of our personal information. (I also pointed my audience to James Grimmelmann’s excellent series of posts on protecting digital privacy, which began the day after Donald Trump was elected and continued right up to the inauguration. In those posts, Grimmelmann links to ‘self-defense’ resources provided by the Electronic Frontier Foundation and Ars Technica.)

I began my talk by describing how the level of surveillance desired by secret police organizations of the past–like the East German Stasi–was now available to the NSA, CIA, and FBI, because of social networking systems; our voluntary provision of every detail of our lives to these systems is a spook’s delight. For instance, the photographs we upload to Facebook will, eventually, make their way into the gigantic corpus of learning data used by law enforcement agencies’ facial recognition software.

During the ensuing discussion I remarked that traditional activism directed at increasing privacy protections–or the enacting of ‘self-defense’ measures–should be part of a broader strategy aimed at reversing the so-called ‘asymmetric panopticon’: citizens need to demand ‘surveillance’ in the other direction, back at government and corporations. For the former, this would mean pushing back against the current classification craze, which sees an increasing number of documents marked ‘Secret,’ ‘Top Secret,’ or some other risible security level–and which results in absurd sentences being levied on those who, like Chelsea Manning, violate such constraints; for the latter, this entails demanding that corporations offer greater transparency about their data collection, usage, and analysis–and are not able to easily rely on the protection of trade secret law in claiming that these techniques are ‘proprietary.’ This ‘push back,’ of course, relies on changing the nature of the discourse surrounding governmental and corporate secrecy, which is all too often able to offer facile arguments that link secrecy and security or secrecy and business strategy. In many ways, this might be the most onerous challenge of all; all too many citizens are still persuaded by the ludicrous ‘if you’ve done nothing illegal you’ve got nothing to hide’ and ‘knowing everything about you is essential for us to keep you safe (or sell you goods)’ arguments.

Note: After I finished my talk and returned to my office, I received an email from one of the attendees who wrote:

No, Aristotle Did Not ‘Create’ The Computer

For the past few days, an essay titled “How Aristotle Created The Computer” (The Atlantic, March 20, 2017, by Chris Dixon) has been making the rounds. It begins with the following claim:

The history of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century. Mathematical logic was pioneered by philosopher-mathematicians, most notably George Boole and Gottlob Frege, who were themselves inspired by Leibniz’s dream of a universal “concept language,” and the ancient logical system of Aristotle.

Dixon then goes on to trace this ‘history of ideas,’ showing how the development–and increasing formalization and rigor–of logic contributed to the emergence of computer science and the first computing devices. Along the way, Dixon makes note of the contributions–direct and indirect–of: Claude Shannon, Alan Turing, George Boole, Euclid, René Descartes, Gottlob Frege, David Hilbert, Gottfried Leibniz, Bertrand Russell, Alfred North Whitehead, Alonzo Church, and John von Neumann. This potted history is exceedingly familiar to students of the foundations of computer science–a demographic that includes computer scientists, philosophers, and mathematical logicians–but presumably that is not the audience Dixon is writing for; those students might wonder why Augustus De Morgan and Charles Peirce do not feature in it. Given this temporally extended history, with its many contributors and their diverse contributions, why does the article carry the headline “How Aristotle Created the Computer”? Aristotle did not create the computer or anything like it; he did make important contributions to a fledgling field, one that took several more centuries to develop into maturity. (The contributions to this field by logicians and systems of logic from alternative philosophical traditions, like the Indian one, are, as per usual, studiously ignored in Dixon’s history.) And as a philosopher, I cannot resist asking: what do you mean by ‘created’? What counts as ‘creating’?

The easy answer is that it is clickbait. Fair enough. We are by now used to the idiocy of the misleading clickbait headline, one designed to ‘attract’ more readers by making it more ‘interesting;’ authors very often have little choice in this matter, and must then watch helplessly as hit-hungry editors mangle the impact of the actual content of their work. (As in this case?) But it is worth noting this headline’s contribution to the pernicious notion of the ‘creation’ of the computer and to the idea that it is possible to isolate a singular figure as its creator–a clear hangover of a religious sentiment that things that exist must have creation points, ‘beginnings,’ and creators. It is yet another contribution to the continued mistaken recounting of the history of science as a story of ‘towering figures.’ (Incidentally, I do not agree with Dixon that the history of computers is “better understood as a history of ideas”; that history is, instead, an integral component of the history of computing in general, which also includes a social history and an economic one; telling a history of computing as a history of objects is a perfectly reasonable thing to do when we remember that actual, functioning computers are physical instantiations of abstract notions of computation.)

To end on a positive note, here are some alternative headlines: “Philosophy and Mathematics’ Contributions To The Development of Computing”; “How Philosophers and Mathematicians Helped Bring Us Computers”; or “How Philosophical Thinking Makes The Computer Possible.” None of these are as ‘sexy’ as the original headline, but they are far more informative and accurate.

Note: What do you think of my clickbaity headline for this post?

The ‘True Image Of A Writer’ And Online Writing

Shortly after I first began writing on the ‘Net–way back in 1988–I noticed that there was, very often, a marked contrast between the online and offline personas of some of the writers I encountered online. (I am referring to a small subset of the writers I read online; these were folks who worked with me in campus research labs but also wrote actively on Usenet newsgroups.) One of the most stunning contrasts was provided by a pair of young men, who were both brilliant programmers, but also afflicted with terrible stutters. Conversations with them were invariably affairs requiring a great deal of patience on the part of their interlocutors; their stutters very frequently derailed their attempts to communicate coherently. (I had suffered from a stutter myself once, as a pre-teen, so I instantly sympathized with them, even as I did my best to decipher their speech at times.)

This was not the case online. Both wrote brilliantly and voluminously online; they wrote long and short pieces; they wrote on politics and technical matters alike with style and verve; they possessed a caustic sense of humor and were not afraid to put it on display. Quite simply, they were different persons online. One of them met his future wife online; she wrote from South America; he from New Jersey; she fell in love with ‘him,’ with his online persona, and traveled to the US to meet him; when she met him in person and encountered his stutter for the first time, she–as she put it herself later–realized it was too late, because she had already fallen in love with him. The unpleasant converse of the situation I describe here is the internet troll, the keyboard warrior, who ‘talks big’ online, and uses the online forum as an outlet for his misanthropy and aggression–all the while being a singularly meek and timid and physically uninspiring person offline. The very anonymity that makes the troll possible is, of course, what lets the silenced and intimidated speak up online. Without exaggeration, my memory of these gentlemen, and of the many other instances I observed of shy and reticent folks finding their voices online, has informed my resistance to facile claims that traditional, in-class, face-to-face education is invariably superior to online education. Any modality of instruction that could provide a voice to the voiceless was doing something right.

In Moments of Reprieve: A Memoir of Auschwitz (Penguin, New York, 1986), Primo Levi writes:

Anyone who has the opportunity to compare the true image of a writer with what can be deduced from his writings knows how frequently they do not coincide. The delicate investigator of movements of the spirit, vibrant as an oscillating circuit, proves to be a pompous oaf, morbidly full of himself, greedy for money and adulation, blind to his neighbor’s suffering. The orgiastic and sumptuous poet, in Dionysiac communion with the universe, is an abstinent, abstemious little man, not by ascetic choice but by medical prescription.

The ‘true image of the writer’ is an ambiguous notion; online writing has made it just a little more so.

Note: I wonder if Levi had Nietzsche in mind in his second example above.

Drones And The Beautiful World They Reveal

Over the past year or so, I have, on multiple occasions, sat down with my toddler daughter to enjoy BBC’s epic nature documentary series Planet Earth. Narrated by the incomparable David Attenborough, it offers up hour-long packages of visual delight in stunning high-definition: giant waterfalls, towering mountains and icebergs, gigantic flocks of birds, roaring volcanoes and river rapids, deep canyons, majestic creatures of all kinds; the eye-candy is plentiful, and it is dished out in large portions. Indeed, I’ve been moved to remark that watching it in the company of my daughter–and sensing her delight as we do so–has been one of the highlights of my parental responsibilities.

Filming a documentary like Planet Earth–the most expensive nature documentary series the BBC had ever commissioned–takes time, money, and technical aid. The featurettes for the various episodes explain how they were filmed: sometimes using a cinebulle, sometimes “the Heligimbal, a powerful, gyro-stabilised camera mounted beneath a helicopter.” Now comes news that Planet Earth II, the second installment of the series, will deploy even more advanced technology:

The BBC…has not only shot the whole thing in UHD, but it also used the latest camera stabilisation, remote recording, and aerial drone technology, too.

The use of drones makes perfectly good sense. Drones can be commandeered into remote and difficult-to-access territories and zones with great ease and precision; they can be made to wait for the perfect shot for long periods of time; they can generate huge amounts of visual image data, which can then be sorted through to select the best images; without a doubt, their use will result in the previously hidden–and beautiful–coming to light. Perhaps they will descend into the craters of volcanoes; perhaps they will hover above herds of animals, tracking their every move to record and reveal the mysteries of migration; perhaps they will enable closer looks at the dynamics of waterfalls and whirlpools; perhaps they will fly amidst flocks of birds.

Their use will remind us once again of the mixed blessings of technology. Drones can be used for surveillance, for privacy invasions, for violations of human rights; they can be used to conduct warfare from on high, sending down deadly munitions directed at civilians; they can also be used to reveal the beauties of this world in a manner that reminds us, yet again, that our planet is a beautiful place, one worth preserving for the sake of future generations. Technology facilitates the exploitation of nature but also, hopefully, its conservation and sensible stewardship, thanks to the beauty of the images brought back to us by the drones we use. The use of drones in Planet Earth II may refine our aesthetic sensibilities further: many of our aesthetic superlatives are drawn from nature, and nature’s contours will now be revealed in ever greater detail, with more of its aspects brought front and center. And so, as we have never stopped noticing, even as technology makes the world more understandable, it reveals ever greater mysteries. Technology may make the world mundane, quantify it all the better to tame it, but it may also reveal facets of the world we were previously blind to, rendering some sensibilities duller and yet others more acute.

Black Mirror’s Third Season Nosedives In The First Episode

Black Mirror used to be the real deal: a television show that brought us clever, scary satire about the brave new dystopic, over-technologized world that we are already living in. It was creepy; it was brutal in its exposure of human frailty in the face of technology’s encroachment on our sense of self and our personal relationships. We are fast becoming–indeed, we already are–slaves to our technology in ways that are warping our moral and psychological being; we are changing, and not always in ways that are pleasant.

That old Black Mirror is no longer so–at least, if the first episode of the rebooted third season is any indication. (Netflix has made the show its own; six new episodes became available yesterday.) In particular, the show has been ‘Americanized’–in the worst way possible, by being made melodramatic. This has been accomplished by violating one of the cardinal principles of storytelling: show, don’t tell.

Season three’s first episode–‘Nosedive’–takes our current fears about social media and elevates them in the context of a ratings scheme for the offline social world–complete with likes and indexed scores of social likeability, based on instant assessments of everyone by everyone as they interact with each other in various social settings. See a person, interact with them, rate them; then, draw on your cumulative indexed score to secure social benefits. Or, be locked out of society because your score, your social quotient, the number that reflects how others see you, is too low.

The stuff of nightmares, you’ll agree. Except that ‘Nosedive’ doesn’t pull it off. Its central character, Lacie Pound, a young woman overly anxious about her social ranking, commits to attending a social encounter that will hopefully raise her social quotient, thus enabling her to qualify for a loan discount and a dream apartment; but the journey to that encounter, and her actual presence there, prove a catastrophe that has exactly the opposite effect. In the hands of the right director and writer this could have been a devastating tale.

But the makers of ‘Nosedive’ are not content to let the story and the characters speak for themselves. Instead, they beat us over the head with gratuitous moralizing, largely by inserting two superfluous characters: a brother who seems to exist merely to lecture the young woman about her misguided subscription to current social media fashions, and a kindly old outcast woman–with a low social quotient, natch–who suggests there is more to life than getting the best possible ranking. These characters are irritating and misplaced; they drag the story down, telling us much that only needed to be shown, sonorously droning on about how the show is meant to be understood. It is as if the show’s makers did not trust their viewers to make the kinds of inferences they think we should be making.

The old Black Mirror was austere and grim; its humor was black. This new season’s first episode was confused in tone: almost as if it felt its darkness needed to be leavened by some heavy-handed relief. I’ll keep watching for now; perhaps the gloom will return.

The Phenomenology Of Encounters With Notification Icons

It’s 6:30 AM or so; you’re awake, busy getting your cup of coffee ready. (Perhaps you’re up earlier, like the truly virtuous or the overworked, which in our society comes to the same thing.) Your coffee made, you fire up your smartphone, laptop, tablet, or desktop, and settle down for the morning service at the altar. Your eyes light up, your antennae tingle in pleasurable anticipation: Facebook’s blue top ribbon features a tiny red square–squatting over the globe icon much as the ginormous social media network squats over the actual globe–with a number inscribed in it; single figures is good, double figures is better. You look at Twitter: the Liberty Bell–sorry, the notifications icon–bears the weight of a similar number. Yet again: single figures good, double figures better. You look at GMail: your heart races, for that distinctive bold lettering in your inbox is present, standing out in stark contrast from the pallid type below; and there is a number here too, in parentheses after ‘Inbox’: single figures good, double figures better.

That’s what happens on a good day. (On a really good day, Facebook will have three red circles for you.) On a bad day, the Facebook globe is heartbreakingly red-less and banal; Twitter’s Liberty Bell is mute; and GMail’s Inbox is not bold, not at all. You reel back from the screen(s) in disappointment; your mood crashes and burns; the world seems empty and uninviting and cold and dark. Impatience, frustration, and anxiety come rushing in through the portals you have now left open, suffusing your being, residing there till dislodged by the right kind of sensory input from those same screens: the appropriate colors, typefaces, and numbers need to make an appearance to calm and soothe your restless self. We get to work, all the while keeping an eye open and an ear cocked: a number appears on a visible tab, and we switch contexts and screens to check, immediately. An envelope appears on the corner of our screens; mail is here; we must tear open that envelope. Sounds, too, intrude; cheeps, dings, and rings issue from our machines to inform us that relief is here. The silence of our devices can be deafening.

Our mood rises and falls in sync.

As is evident, our interactions with the human-computer interfaces of our communications systems have a rich phenomenology: expectations, desires, and hopes rush out to meet colors and shapes and numbers; these encounters produce mood changes and affective responses. The clever designer shapes the iconography of the interface with care to produce these responses in the right way, to achieve the desired results: your interaction with the system must never be affectively neutral; it must have some emotional content. We are manipulated by these responses; we behave accordingly.

Machine learning experts speak of training the machines; let us not forget that our machines train us too. By the ‘face’ they present to us, by the sounds they make, by the ‘expressions’ visible on them. As we continue to interact with them, we become different people, changed much like we are by our encounters with other people, those other providers and provokers of emotional responses.

My First Phone Number

I grew up–till the age of eleven–without a telephone in my household. A phone line was a rarity–expensive, and hard to obtain thanks to a long waiting list–even for the Indian middle class, and in any case my family lived for the most part on air force stations. But even when we lived in the city, we made do without a phone. If you wanted to talk to someone, you visited them. Without calling. Sometimes they were at home, sometimes they weren’t. It was an acceptable uncertainty of sorts. If you just had to make a phone call–on the occasion of an emergency, for instance–you relied on a neighbor’s generosity to share their phone line with you. A phone was a big deal; only the select few had one.

Shortly after my father retired from the air force and started a small business, he ‘applied’ for a phone line (these applications were processed by the governmental telecommunications authority, which ‘awarded’ lines on the basis of need); his application specified that the phone would be a necessary accessory to his business, thus hopefully placing it higher in the prioritized queue of potential owners. News of the success of this application–a few months later–was greeted with some incredulity at home; was it really true that we would soon have that magical instrument at home, one that would let us simply pick up the receiver, dial a few numbers, and talk to friends and family?

Apparently so. Soon enough, a technician showed up to install our phone; cables were run along walls, a phone jack mysteriously appeared, and then, incredibly enough, so did the phone set itself, complete with a black handset–the kind I had seen people cradling up against their ears–and a rotary dial. The moment of truth was here. Our family, our household, would now have a new address, a new association: our phone number.

Unsurprisingly perhaps, I still remember it: 61-69-42. I break it up that way because that’s how I remembered it: Six One, Six Nine, Four Two. My mother was the first user of the phone; she called her mother to let her know the news. My father went next, calling an old friend. My brother and I had no one to call; we had never bothered to ask for anyone’s phone number at school. We didn’t call our friends; why would we need their numbers? Indeed, I did not even know who among my friends owned a phone.

But the next day at school, I came to know who did. I told my classmates I had a new phone number, proudly rattling off its magical digits–there had been no need for me to write them down, they were instantly memorable–even as I asked for theirs and encouraged them to call me. Some did; conversations on the phone–some of which ran over an hour–now suddenly emerged as a magical new form of interaction with folks I had previously only known in the flesh.

Some thirty-eight years later, I hardly ever talk on the phone. Email and text messages rule the roost; when I do talk on the phone, I’m a model of efficiency. A quick exchange of information, and I’m done. Just like the phone displaced older forms of communication, it has been impolitely shoved aside by newer ones. No one’s grieving; we’re too busy being socially networked.