Resisting Big Data: Interfering With ‘Collaboration,’ Nonconsensually

Consider the various image-sharing databases online: Facebook’s photo stores, Instagram, Flickr. These contain trillions of photographs, petabytes of fragile digital data, growing daily, without limit; every day, millions of users worldwide upload the images they capture on their phones and cameras to the cloud, there to be stored, processed, enhanced, shared, tagged, commented on. And to be used as learning data for facial recognition software–the stuff that identifies your ‘friends’ in your photos in case you want to tag them.

This gigantic corpus of data is a mere court-issued order away from being used by the nation’s law enforcement agencies to train their own facial surveillance software–to be used, for instance, in public space cameras, port-of-entry checks, correctional facilities, prisons, and so on. (FISA courts can be relied upon to issue warrants in response to any law enforcement agency requests; and internet service providers and media companies respond with great alacrity to government subpoenas.) Openly used and deployed, that is. With probability one, the NSA, FBI, and CIA have already ‘scraped’ these image data stores, using a variety of methods, and used them in the manner indicated. We have actively participated and collaborated, and continue to do so, in the construction of the world’s largest and most sophisticated image surveillance system. We supply the data by which we may be identified; those who want to track our movements and locations use this data to ‘train’ their artificial agents to surveil us, to report on us if we misbehave, trespass, or don’t conform to whichever spatial or physical or legal or ‘normative’ constraint happens to direct us at any given instant. The ‘eye’ watches; it relies for its accuracy on what we have ‘told’ it, through our images and photographs.
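The mechanics of this ‘collaboration’ are worth spelling out. Here is a minimal sketch, in Python, of how user-supplied tags turn shared photos into exactly the labeled examples a supervised face recognizer needs; all names and data structures are invented for illustration and correspond to no platform’s actual schema:

```python
# Hypothetical sketch: user-supplied tags become labeled training data
# for a face recognizer. Everything here is illustrative.
from dataclasses import dataclass


@dataclass
class Photo:
    image_id: str
    uploader: str
    tags: list  # user-supplied (person, bounding_box) pairs


def build_training_set(photos):
    """Collect (image_id, region, identity) triples: precisely the
    labeled examples a supervised face recognizer is trained on."""
    examples = []
    for photo in photos:
        for person, box in photo.tags:
            examples.append((photo.image_id, box, person))
    return examples


photos = [
    Photo("img_001", "alice", [("bob", (10, 10, 64, 64))]),
    Photo("img_002", "carol", [("bob", (5, 20, 60, 60)),
                               ("dave", (80, 15, 50, 50))]),
]

training = build_training_set(photos)
# Every tag a user adds is one more labeled example of someone's face --
# including the faces of people who never consented to being tagged.
```

Note that the uploader and the tagged person need not be the same: in this toy corpus, ‘bob’ has been turned into training data twice without uploading anything himself.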

Now imagine a hacktivist programmer who writes a Trojan horse that infiltrates such photo stores and destroys all their data–permanently, for backups are also taken out. This is a ‘feat’ that is certainly technically possible; encryption will not prevent a drive from being formatted; and security measures of all kinds can be breached. Such an act of ‘hacktivism’ would be destructive; it would cause the loss of much ‘precious data’: memories and recollections of lives and the people who live them, all gone, irreplaceable. Such an act of destruction would be justified, presumably, on the grounds that to do so would be to cripple a pernicious system of surveillance and control. Remember that your photos don’t train image recognition systems to recognize just you; they also train them not to recognize someone else as you; our collaboration does not just hurt us, it hurts others; we are complicit in the surveillance and control of others.

I paint this admittedly unlikely scenario to draw attention to a few interesting features of our data collection and analysis landscape: a) we participate, by conscious action and political apathy, in the construction and maintenance of our own policing; b) we are asymmetrically exposed because our surveillers enjoy maximal secrecy while we can draw on none; c) collective, organized resistance is so difficult to generate that the most effective political action might be a quasi-nihilist act of loner ‘civil disobedience’–if you do not cease and desist from ‘collaborating,’ the only choice left to others still concerned about their freedom from surveillance might be to nonconsensually interrupt such collaboration.

Report On Brooklyn College Teach-In On ‘Web Surveillance And Security’

Yesterday, as part of ‘The Brooklyn College Teach-In & Workshop Series on Resistance to the Trump Agenda,’ I facilitated a teach-in on the topic of ‘web surveillance and security.’ During my session I made note of some of the technical and legal issues at play in these domains, and of how technology and law have conspired to ensure that: a) we live in a regime of constant, pervasive surveillance; b) current legal protections–including the disastrous ‘third-party doctrine’ and the rubber-stamping of governmental surveillance ‘requests’ by FISA courts–are simply inadequate to safeguard our informational and decisional privacy; c) there is no daylight between the government and large corporations in their use and abuse of our personal information. (I also pointed my audience to James Grimmelmann’s excellent series of posts on protecting digital privacy, which began the day after Donald Trump was elected and continued right up to the inauguration. In those posts, Grimmelmann links to ‘self-defense’ resources provided by the Electronic Frontier Foundation and Ars Technica.)

I began my talk by describing how the level of surveillance desired by secret police organizations of the past–like the East German Stasi, for instance–was now available to the NSA, CIA, and FBI, because of social networking systems; our voluntary provision of every detail of our lives to these systems is a spook’s delight. For instance, the photographs we upload to Facebook will, eventually, make their way into the gigantic corpus of learning data used by law enforcement agencies’ facial recognition software.

During the ensuing discussion I remarked that traditional activism directed at increasing privacy protections–or the enacting of ‘self-defense’ measures–should be part of a broader strategy aimed at reversing the so-called ‘asymmetric panopticon’: citizens need to demand ‘surveillance’ in the other direction, back at government and corporations. For the former, this would mean pushing back against the current classification craze, which sees an increasing number of documents marked ‘Secret,’ ‘Top Secret,’ or some other risible security level–and which results in absurd sentences being levied on those who, like Chelsea Manning, violate such constraints; for the latter, this entails demanding that corporations offer greater transparency about their data collection, usage, and analysis–and are not able to easily rely on the protection of trade secret law in claiming that these techniques are ‘proprietary.’ This ‘push back,’ of course, relies on changing the nature of the discourse surrounding governmental and corporate secrecy, which is all too often able to offer facile arguments that link secrecy and security or secrecy and business strategy. In many ways, this might be the most onerous challenge of all; all too many citizens are still persuaded by the ludicrous ‘if you’ve done nothing illegal you’ve got nothing to hide’ and ‘knowing everything about you is essential for us to keep you safe (or sell you goods)’ arguments.

Note: After I finished my talk and returned to my office, I received an email from one of the attendees who wrote:


Drones And The Beautiful World They Reveal

Over the past year or so, I have, on multiple occasions, sat down with my toddler daughter to enjoy BBC’s epic nature documentary series Planet Earth. Narrated by the incomparable David Attenborough, it offers up hour-long packages of visual delight in stunning high-definition: giant waterfalls, towering mountains and icebergs, gigantic flocks of birds, roaring volcanoes and river rapids, deep canyons, majestic creatures of all kinds; the eye-candy is plentiful, and it is dished out in large portions. While watching it, I’ve been moved to remark that my co-viewing of it in the company of my daughter–and sensing her delight as we do so–has been one of the highlights of my parental responsibilities.

Filming a documentary like Planet Earth–the most expensive nature documentary series ever made–takes time and money and technical aid. The featurettes for the various episodes explain how they were filmed: sometimes using a cinebulle, sometimes “the Heligimbal, a powerful, gyro-stabilised camera mounted beneath a helicopter.” Now comes news that Planet Earth II, the second installment of the series, will deploy even more advanced technology:

The BBC…has not only shot the whole thing in UHD, but it also used the latest camera stabilisation, remote recording, and aerial drone technology, too.

The use of drones should make perfectly good sense. Drones can be directed into remote and difficult-to-access territories and zones with great ease and precision; they can be made to wait for the perfect shot for long periods of time; they can generate huge amounts of visual image data which can then be sorted through to select the best images; without a doubt, their usage will result in the previously hidden–and beautiful–coming to light. Perhaps they will descend into the craters of volcanoes; perhaps they will hover above herds of animals, tracking their every move to record and reveal the mysteries of migration; perhaps they will enable closer looks at the dynamics of waterfalls and whirlpools; perhaps they will fly amidst flocks of birds.

Their use will remind us once again of the mixed blessings of technology. Drones can be used for surveillance, for privacy invasions, for violations of human rights; they can be used to conduct warfare from on high, sending down deadly munitions directed at civilians; they can also be used to reveal the beauties of this world in a manner that reminds us, yet again, that our planet is a beautiful place, one worth preserving for the sake of future generations. Technology facilitates the exploitation of nature but also, hopefully, its conservation and sensible stewardship, thanks to the beauty of the images brought back to us by the drones we use. The use of drones in Planet Earth II may refine our aesthetic sensibilities further: many of our aesthetic superlatives are drawn from nature, and nature’s contours will now be revealed in ever greater detail, with more aspects brought front and center. And so, as we have never stopped noticing, even as technology makes the world more understandable, it reveals ever greater mysteries. Technology may make the world mundane, quantify it all the better to tame it, but it may also reveal facets of the world we were previously blind to, rendering some sensibilities duller and yet others more acute.

Self-Policing In Response To Pervasive Surveillance

On Thursday night, in the course of conversation with some of my Brooklyn College colleagues, I confessed to having internalized a peculiar sort of ‘chilling effect’ induced by a heightened sensitivity to our modern surveillance state. To wit, I said something along the lines of “I would love to travel to Iran and Pakistan, but I’m a little apprehensive about the increased scrutiny that would result.” When pressed to clarify by my companions, I made some rambling remarks that roughly amounted to the following. Travel to Iran and Pakistan–Islamic nations highly implicated in various foreign policy imbroglios with the US and often accused of supporting terrorism–is closely monitored by national law enforcement and intelligence agencies (the FBI, CIA, NSA); I expected to encounter some uncomfortable moments on my arrival back in the US thanks to questioning by customs and immigration officers (with a first name like mine–which is not Arabic in origin but is, in point of fact, a very common and popular name in the Middle East–I would expect nothing less). Moreover, given that my wife is Muslim, I would expect such attention to be heightened (data mining algorithms would establish a ‘networked’ connection between us, and given my wife’s own problems when flying, I would expect such a connection to make me appear more ‘suspicious’); thereafter, I could expect increased scrutiny every time I traveled (and perhaps in other walks of life, given the extent of data sharing between various governmental agencies).

It is quite possible that all of the above sounds extremely paranoid and misinformed, and my worries a little silly, but I do not think it is wholly without glimmers of truth. The technical details are not too wildly off the mark; increased scrutiny after travel is a common occurrence for many travelers deemed ‘suspicious’ for unknown reasons; and so on. The net result is a curious sort of self-policing on my part: as I look to make travel plans for the future I will, with varying degrees of self-awareness about my motivations, prefer other destinations and locales. I will have allowed myself to be subject to an invisible set of constraints not actually experienced (except indirectly, in part, as in my wife’s experiences when flying).

This sort of ‘indirect control’ might be pervasive surveillance’s most pernicious effect.

Note: My desire to travel to Iran and Pakistan is grounded in some fairly straightforward desires: Iran is a fascinating and complex country, host to an age-old civilization, with a rich culture and a thriving intellectual and academic scene; Pakistan is of obvious interest to someone born in India, but even more so to someone whose ethnic background is Punjabi, for part of the partitioned Punjab is now in Pakistan (as I noted in an earlier post about my ‘passing for Pakistani,’ “my father’s side of the family hails from a little village–now a middling town–called Dilawar Cheema, now in Pakistan, in Gujranwala District, Tehsil Wazirabad, in the former West Punjab.”)

Are There No Ethically Uncompromised Lunches In The Universe?

Once upon a time a farmer told his neighbors that they could use his land for ‘free’–as a kind of community recreational space. His neighbors were told they could set up little stalls, where they could play music, show off their handicrafts, display family photo albums, and of course, walk over to their friends’ spaces and chat with them. A large sign in small print that hung outside the entrance to the field informed the farmer’s neighbors how they should behave when they were on the premises. Most families stopped briefly to read the sign, but intimidated by the number of words on it, and by the twisted prose, which appeared to have been composed by committee, they moved on, trusting their neighbor to do well by them.

The community meeting and recreational space soon bloomed; the number of stalls grew rapidly. The local residents got to know each other much better and many enjoyed the opportunity to inspect the personal details of their neighbors’ homes and lives. Indeed, a visit to the ‘meeting space’ became an integral part of most people’s routines; stop in for a bit, ‘check in,’ say hi to a few folk, show off your new baby, brag about your car, your vacation, and so on.

The local folk often wondered why the farmer had been so ‘generous.’ What was he getting in exchange for this ‘gift’? Cynics talked about the impossibility of free lunches, and sure enough, it was becoming clear there wasn’t one to be had in this ‘community space.’ For the benevolent farmer was indeed exacting a price of sorts.

The farmer had many business associates who wanted to sell the locals their goods–fertilizer for their fields, goods that could be gifted to their children on their birthdays, clothes to be worn at their weddings, and so on. To find out what the locals’ tastes were would have required conducting expensive, tedious market surveys; the farmer’s business associates would have had to go from door to door, asking people to fill out forms. But in this space, the farmer’s neighbors happily gave this information away without being asked. And the reason this information was ‘given away’ was that it was ‘being taken’ as well.

Hidden cameras and microphones recorded their comings and goings and sundry activities: who they met; what they ate at their friends’ stalls, and indeed, what they ate at home, for the locals proudly showed photos of their food at their stalls; what clothes they wore; who their ‘best friends’ were; who they talked to for medical advice; who they asked for help when the going was tough; what kind of music they listened to (and played for their neighbors by way of demonstration). (You could build some walls around your stall, but most people, finding their construction too onerous, just went in for a wall-less design.)

When news of the hidden cameras and microphones broke, some of the locals were irate. They didn’t like the idea of being ‘spied on’ and worried that the local potentate, always eager to exert his control over the land just a little more efficiently, would find this sort of information very useful. Yet others thought that the local robber barons, who controlled the potentate in any case, would grow more powerful as a result of knowing so much about them. And some complained that the hidden microphones sometimes reported their conversations and displays to the farmer, who cracked down on them if he didn’t like what they said or what they showed off.

But others hushed their concerns, using that ancient piece of wisdom, which the robber barons themselves had promulgated: How can you look a ‘free’ gift horse in the mouth? You got to use this space for ‘free,’ didn’t you? When the locals said that they hadn’t signed on for this surveillance, yet others told them to read the sign on the entrance carefully, and if they didn’t like it, to leave, and to take their stalls with them. So some did, even as they said the sign on the entrance was vague and unspecific. Yet others, finding that the space had become an indispensable arena for communication on matters pertaining to the local village and shire, stayed on.

But many continued to ask themselves: Was it a fair ‘deal’? Indeed, was it a deal at all? Had the farmer really behaved like a neighbor in spying on his neighbors after he had invited them to use his land for ‘free’? Did the non-existence of free lunches in the universe entail that those lunches had to be ethically compromised too?

Facebook and Writers’ Status Messages

My last post on Facebook led me to think a bit more about its–current and possible–integration into our lives, especially those conducted online.

As ‘net users are by now aware, almost any site you visit on the ‘net features a Facebook button so that you can indicate whether you ‘Like’ the page and thus, share it with your ‘Friends.’ Of course, in so doing, you also leave a digital trail of sorts, indicating what you have read, what music you have listened to, which videos you have viewed, which jokes you found funny, and so on. As Eben Moglen put it rather memorably at a talk at NYU a few years ago, (and I quote from memory):

In the old days, the East German Stasi used to have to follow people, bug them, intimidate their friends to find out what they read, what they got up to in their spare time. Now, we have ‘Like’ buttons that do the same for us.

The surveillance, the generation of data detailing our habits, our inclinations, our predilections, is indeed quite efficient; it is made all the more so by having outsourced it to those being surveilled, by dint of the provision of simple tools for doing so.

I personally do not get very creeped out by the notion of hitting ‘Like’ on an article that I enjoyed reading–though, struck by Moglen’s remark, I have not done so even once since returning to Facebook in 2010. I do, however, find it very creepy that Netflix asks me if I would like to share my movie viewing preferences with my friends on Facebook; that seems excessively invasive.

In any case, I do not think the limits of this kind of ‘integration’ of Facebook with the information we consume and the software we use have yet been reached.

Here is at least one more possible avenue for Facebook’s designers to consider. Many ‘net users access it via an ‘always-on’ connection. Thus, even when they are not actively using an Internet application–when they are working in, say, a word processor or a spreadsheet–they are still connected to the ‘net. In the not so distant future, these programs could be designed–through close cooperation between Facebook and the software vendor in question–to supply information about our usage of these applications to our ‘Friends.’ On a real-time basis.

Thus, for instance, when I opened a file in my word processor, my ‘Friends’ would be so informed; they would then learn how long I had continued editing, how many breaks I took (and of course, if those breaks were taken online, they would be told which pages I had opened, and how long I had spent there), and so on. Our software would come with this feature turned on; you would have to opt out or customize your sharing.

This way, all those status messages we are often treated to on Facebook: ‘Hooray, first draft complete!’ or ‘Finally got five hundred words written today’ or ‘I just can’t seem to get anything written today’ could be automated. Extremely convenient, don’t you think? Examples like this–for other kinds of applications–can be readily supplied, I’m sure.
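The imagined feature can be sketched in a few lines of Python; everything here–the class names, the sharing-on-by-default setting, the event format–is invented for illustration:

```python
# Illustrative sketch of the hypothetical 'always-on' reporting feature:
# a word processor that broadcasts editing activity to one's 'Friends'.
# All APIs and names here are invented.
import time


class ActivityReporter:
    def __init__(self, share=True):  # sharing is on by default: opt-out
        self.share = share
        self.events = []

    def report(self, event):
        if self.share:
            # In the imagined system this record would be pushed,
            # in real time, to one's 'Friends' feeds.
            self.events.append((time.time(), event))


class WordProcessor:
    def __init__(self, reporter):
        self.reporter = reporter

    def open_file(self, name):
        self.reporter.report(f"opened {name}")

    def save(self, name, words):
        self.reporter.report(f"saved {name}: {words} words so far")


reporter = ActivityReporter()  # note: the user never opted in
doc = WordProcessor(reporter)
doc.open_file("draft.txt")
doc.save("draft.txt", 500)
# reporter.events now holds timestamped feed items; a status message like
# 'Finally got five hundred words written today' could be generated
# straight from these records.
```

The design choice doing the real work is the default: because `share=True` unless the user intervenes, the surveillance is, as in the Facebook examples above, outsourced to the surveilled.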

Nice Try NSA-Defenders (Not!)

There are two very bad arguments and one rather illiterate confusion making the rounds in the wake of the NSA surveillance scandal. I’ll consider each of them briefly.

First, we have the ‘it was legal’ argument: the surveillance was sanctioned by the Patriot Act, approved by FISA courts, Congress was in the loop, etc. Now, the elementary distinction between legality and morality, between what the law permits and proscribes and what we might consider the right thing to do, is just that: elementary. The undergraduates in my Philosophy of Law classes don’t need to be introduced to the distinction between natural law and positive law, or to the assigned readings which inquire into our supposed obligations to the law, to understand and know this difference. Their lived lives have given them ample proof of this gap, as have the most basic history lessons. (Slavery is everyone’s favorite example, but many more can be found rather easily.) Indeed, why would we ever have impassioned debates about ‘bad laws’ that need to be revised if the ‘it’s legal’ argument were such a clincher?

Furthermore, the folks complaining about the NSA surveillance are not just complaining about the legality of this eavesdropping and surveillance: they are suggesting the application of some laws is an onerous imposition on them, one that grants the government too much power. They are suggesting this is a moment when the laws of the land require revisitation. This is especially true of the obnoxious Patriot Act. (In another context, consider the draconian Digital Millennium Copyright Act.) Or consider that FISA courts routinely approve all requests made to them, and that the NSA has seven days in which to mine data before it applies for a warrant. All of this is legal. Is it problematic? We could talk about it, so long as we aren’t shut up by the ‘it’s legal’ argument.

Second, we have the vampire ‘if you have nothing to hide, then what do you have to worry about’ argument–it simply refuses to die. No matter how many times it is explained that privacy is not about the hiding of secrets but about the creation of a space within which a certain kind of human flourishing can take place, this hoary non sequitur is dragged out and flogged for all it is worth. But let me try real quick: we need privacy because without it, very basic forms of life would not be possible. An important example is the personal relationship. For these to be built, maintained, and enriched, privacy is required. We do not generate and sustain intimacy–emotional and sexual–under observation and analysis; we do so far away from the madding crowd. I am not doing anything illegal or secretive in the maintenance of my personal relationships, but I would still like their details to be private. Hopefully, that’s clear. (Who am I kidding?)

Lastly, there is a dangerous conflation between paper records and electronic records. For instance, David Simon, the latest to join the ‘relax, it’s legal and being done to protect us’ brigade, runs an analogy with the Baltimore wiretaps carried out by the local police and concludes:

Here, too, the Verizon data corresponds to the sheets and sheets of printouts of calls from the Baltimore pay phones, obtainable with a court order and without any demonstration of probable cause against any specific individual.

Except that it doesn’t. Those ‘sheets and sheets’ do not correspond to the billions of digital records obtained from Verizon, which can be stored indefinitely and subjected to data analysis in a way that the hard-copy data cannot.
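The difference is easy to demonstrate. A minimal sketch, with invented phone numbers, of why machine-readable call records are unlike ‘sheets and sheets’ of printouts: a few lines of code turn them into a queryable social graph, an analysis that scales to billions of records but is infeasible against paper:

```python
# Sketch: machine-readable call metadata becomes a searchable social
# graph. The records and numbers here are invented for illustration.
from collections import defaultdict

call_records = [
    ("555-0101", "555-0202"),
    ("555-0101", "555-0303"),
    ("555-0202", "555-0303"),
    ("555-0404", "555-0101"),
]

# Build an undirected contact graph from (caller, callee) pairs.
contacts = defaultdict(set)
for caller, callee in call_records:
    contacts[caller].add(callee)
    contacts[callee].add(caller)

# A single lookup now reveals a subscriber's entire contact network --
# no probable cause, no leafing through printouts.
network_of = dict(contacts)
```

Nothing in this sketch requires the content of any call; the point, as above, is that the analytic power lies in the indefinite storage and instant queryability of the metadata itself.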

These arguments will be made again and again in this context; might as well get some brief refutations out there.