Leaving Facebook: You Can Run, But You Can’t Hide

I first quit Facebook in 2010, in response to a talk Eben Moglen gave at NYU about Facebook’s privacy-destroying ways; one of his most memorable lines was:

The East German Stasi used to have to deploy a fleet of undercover agents and wiretaps to find out what people did, who they met, what they ate, which books they read; now we just have a bunch of Like buttons and people tell a data monetizing corporation the same information for free.

That talk–in which Moglen referred to Mark Zuckerberg as a ‘thug’–also inspired a couple of young folks, then in attendance, to start Diaspora, an alternative social network in which users would own their data. I signed up for Diaspora soon after it kicked off; I also signed up for Google+. I returned to Facebook in 2012, a few months after starting my blog, because it was the only way I could see to distribute my posts. Diaspora and Google+ never ‘took off’; a certain kind of ‘first-mover’ status, and its associated network effects, had made sure there was little social networking happening on those alternative platforms.

Since then, I’ve stayed on Facebook, sharing photos, bragging about my daughter and my various published writings, and so on. I use the word ‘bragging’ advisedly; no matter how much you dress it up, that’s what I’ve been doing. But it has been a horrible experience in many ways: distraction, lowered self-esteem, and envy have been but its most prominent residues. Moreover, to have substantive discussions on Facebook, you must write. A lot. I’d rather write somewhere else, like here, or work on my books and essays. So, I desperately want to leave, to work on my writing. But, ironically, as a writer, I feel I have to stay on. Folks who have already accomplished a great deal offline can afford to stay off; those of us struggling to make a mark, to be noticed, have to stay on. (Consider that literary agents now want non-fiction writers to demonstrate that they have a ‘social media presence’–a flourishing Facebook and Twitter following that will make the marketing of their writings easier.) I know, I know; as a writer, I should work on my craft, produce my work, and not worry about anything else. I know the wisdom of that claim; reconciling it with the practical demands of this life is an ongoing challenge.

So, let’s say ‘we,’ the user ‘community’ on Facebook, decide to leave, and we find an alternative social networking platform. I’m afraid little will have changed unless the rest of the world changes too: the world in which data is monetized for profit, and in which a social, moral, and economic principle makes all values subservient to the making of profit. The problem isn’t Facebook. We could migrate to another platform; sure. They need to survive in this world, the one run by capital and cash; right. So they need to monetize data; ours. They will. Money has commodified all relationships, including the ones we have with social networking platforms. So long as data is monetizable, we will face the ‘Facebook problem.’

Report On Brooklyn College Teach-In On ‘Web Surveillance And Security’

Yesterday, as part of ‘The Brooklyn College Teach-In & Workshop Series on Resistance to the Trump Agenda,’ I facilitated a teach-in on the topic of ‘web surveillance and security.’ During my session I made note of some of the technical and legal issues that are at play in these domains, and how technology and law have conspired to ensure that: a) we live in a regime of constant, pervasive surveillance; b) current legal protections–including the disastrous ‘third-party doctrine‘ and the rubber-stamping of governmental surveillance ‘requests’ by FISA courts–are simply inadequate to safeguard our informational and decisional privacy; c) there is no daylight between the government and large corporations in their use and abuse of our personal information. (I also pointed my audience to James Grimmelmann‘s excellent series of posts on protecting digital privacy, which began the day after Donald Trump was elected and continued right up to the inauguration. In those posts, Grimmelmann links to ‘self-defense’ resources provided by the Electronic Frontier Foundation and Ars Technica.)

I began my talk by describing how the level of surveillance desired by secret police organizations of the past–like the East German Stasi, for instance–was now available to the NSA, CIA, and FBI, because of social networking systems; our voluntary provision of every detail of our lives to these systems is a spook’s delight. For instance, the photographs we upload to Facebook will, eventually, make their way into the gigantic corpus of learning data used by law enforcement agencies’ facial recognition software.

During the ensuing discussion I remarked that traditional activism directed at increasing privacy protections–or the enacting of ‘self-defense’ measures–should be part of a broader strategy aimed at reversing the so-called ‘asymmetric panopticon‘: citizens need to demand ‘surveillance’ in the other direction, back at government and corporations. For the former, this would mean pushing back against the current classification craze, which sees an increasing number of documents marked ‘Secret,’ ‘Top Secret,’ or some other risible security level–and which results in absurd sentences being levied on those who, like Chelsea Manning, violate such constraints; for the latter, this entails demanding that corporations offer greater transparency about their data collection, usage, and analysis–and are not able to easily rely on the protection of trade secret law in claiming that these techniques are ‘proprietary.’ This ‘push back,’ of course, relies on changing the nature of the discourse surrounding governmental and corporate secrecy, which is all too often able to offer facile arguments that link secrecy and security or secrecy and business strategy. In many ways, this might be the most onerous challenge of all; all too many citizens are still persuaded by the ludicrous ‘if you’ve done nothing illegal you’ve got nothing to hide’ and ‘knowing everything about you is essential for us to keep you safe (or sell you goods)’ arguments.

Note: After I finished my talk and returned to my office, I received an email from one of the attendees who wrote:


Programs as Agents, Persons, or just Programs?

Last week, The Nation published my essay “Programs are People, Too“. In it, I argued for treating smart programs as the legal agents of those that deploy them, a legal change I suggest would be more protective of our privacy rights.

Among some of the responses I received was one from a friend, JW, who wrote:

[You write: But automation somehow deludes some people—besides Internet users, many state and federal judges—into imagining our privacy has not been violated. We are vaguely aware something is wrong, but are not quite sure what.]
 
I think we are aware that something is wrong and that it is less wrong.  We already have an area of the law where we deal with this, namely, dog sniffs.  We think dog sniffs are less injurious than people rifling through our luggage; indeed, the law refers to those sniffs as “sui generis.”  And I think they *are* less injurious, just like it doesn’t bother me that Google searches my email with an algorithm.  This isn’t to say that it’s unreasonable for some people to be bothered by it, but I do think people are rightly aware that it is different and less intrusive than if some human were looking through their email.  
 
We don’t need to attribute personhood to dogs to feel violated by police bringing their sniffing up to our house for no reason, but at the same time we basically accept their presence in airports.  And what bothers us isn’t what’s in the dog’s mind, but in the master’s.  If a police dog smelled around my house, made an alert, but no police officer was there to interpret the alert, I’m not sure it would bother me.  
 
Similarly, even attributing intentional states to algorithms as sophisticated as a dog, I don’t think their knowledge would bother me until it was given to some human (what happens when they are as sophisticated as humans is another question).  
 
I’m not sure good old fashioned Fourth Amendment balancing can’t be instructive here.  Do we have a reasonable expectation of privacy in x? What are the governmental interests at stake and how large of an intrusion is being made into the reasonable expectation of privacy?  
 

JW makes two interesting points. First, is the scanning or reading of our personal data by programs really as injurious to our privacy as a human’s reading of it? Second, is the legal change I’m suggesting even necessary?

Second point first. Treating smart programs as legal persons is not necessary to bring about the changes I’m suggesting in my essay. Plain old legal agency without legal personhood will do just fine. Most legal jurisdictions require legal agents to be persons too, but this has not always been the case. Consider the following passage, which did not make it to the final version of the online essay:

If such a change—to full-blown legal personhood and legal agency—is felt to be too much, too soon, then we could also grant programs a limited form of legal agency without legal personhood. There is a precedent for this too: slaves in Roman times, despite not being persons in the eyes of the law, were allowed to enter into contracts for their masters, and were thus treated as their legal intermediaries. I mention this precedent because the legal system might prefer that the change in legal status of artificial agents be an incremental one; before they become legal persons and thus full legal subjects, they could ‘enjoy’ this form of limited legal subjecthood. As a society we might find this status uncomfortable enough to want to change their status to legal persons if we think its doctrinal and political advantages—like those alluded to here—are significant enough.

Now to JW’s first point. Is a program’s access to my personal data less injurious than a human’s? I don’t think so. Programs can do things with data: they can act on it. The opening example in my essay demonstrates this quite well:

Imagine the following situation: Your credit card provider uses a risk assessment program that monitors your financial activity. Using the information it gathers, it notices your purchases are following a “high-risk pattern”; it does so on the basis of a secret, proprietary algorithm. The assessment program, acting on its own, cuts off the use of your credit card. It is courteous enough to email you a warning. Thereafter, you find that actions that were possible yesterday—like making electronic purchases—no longer are. No humans at the credit card company were involved in this decision; its representative program acted autonomously on the basis of pre-set risk thresholds.

Notice in this example that for my life to be impinged on by the agency and actions of others, it was not necessary that a single human being be involved. We so often interact with the world through programs that they command considerable agency in our lives. Our personal data is valuable to us because control of it may make a difference to our lives; if programs can use that data to make such a difference, then our privacy laws should regulate them too–explicitly.
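To make the shape of that kind of autonomous decision concrete, here is a purely illustrative sketch in Python of a risk-assessment loop like the one imagined above. Every name, rule, and threshold here is hypothetical–a toy stand-in for a proprietary model, not any actual provider’s code–but it shows how a program can act on pre-set thresholds with no human in the loop.

from dataclasses import dataclass
from typing import List

@dataclass
class Purchase:
    amount: float
    merchant_category: str
    country: str

# Pre-set by the provider; no human reviews individual cases.
RISK_THRESHOLD = 0.8

def risk_score(purchases: List[Purchase]) -> float:
    """Toy stand-in for a secret, proprietary risk model."""
    score = 0.0
    for p in purchases:
        if p.amount > 1000:
            score += 0.3
        if p.country != "home":
            score += 0.2
        if p.merchant_category in {"gambling", "crypto"}:
            score += 0.4
    return min(score, 1.0)

def suspend_card() -> None:
    print("Card suspended: high-risk pattern detected.")

def send_warning_email() -> None:
    print("Email sent: your card has been suspended.")

def monitor(purchases: List[Purchase]) -> None:
    """The program acts on its own; a courtesy email is the only human contact."""
    if risk_score(purchases) >= RISK_THRESHOLD:
        suspend_card()
        send_warning_email()

if __name__ == "__main__":
    monitor([Purchase(1500, "crypto", "abroad"),
             Purchase(900, "gambling", "abroad")])

The point of the sketch is not the arithmetic; it is that the decision to curtail my actions is taken entirely inside the program, on criteria I cannot inspect.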

Let us return to JW’s sniffer dog example and update it. The dog is a robotic one; it uses sophisticated scanning technology to detect traces of cocaine on a passenger’s bag. When it does so, the nametag and passport photo associated with the bag are automatically transmitted to a facial recognition system, which establishes a match and immediately sets off a series of alarms: perhaps my bank accounts are closed, perhaps my sophisticated car is immobilized, and so on. No humans need be involved in this decision; I may find my actions curtailed without any human having taken a single action. We don’t need “a police officer to interpret the alert.” (But I’ve changed his dog to a robotic dog, haven’t I? Yes, because the programs I am considering are, in some dimensions, considerably smarter than a sniffer dog. They are much, much dumber in others.)
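A minimal sketch of that program-to-program chain, again in Python with entirely hypothetical components, shows how one program’s ‘alert’ can be consumed directly by other programs, with consequences following before any human interprets anything.

from typing import List

def robotic_sniff(bag_id: str) -> bool:
    """Stand-in for a scanner that flags trace residue on a bag."""
    return bag_id == "BAG-1042"  # pretend this bag tested positive

def match_passenger(bag_id: str) -> str:
    """Stand-in for a facial-recognition / passport lookup keyed to the bag tag."""
    return {"BAG-1042": "passenger-7781"}.get(bag_id, "unknown")

def downstream_actions(passenger_id: str) -> List[str]:
    """Other programs act on the match: accounts, vehicle, watchlists."""
    return [
        f"bank: freeze accounts of {passenger_id}",
        f"vehicle: immobilize car registered to {passenger_id}",
        f"watchlist: add {passenger_id}",
    ]

if __name__ == "__main__":
    bag = "BAG-1042"
    if robotic_sniff(bag):
        passenger = match_passenger(bag)
        for action in downstream_actions(passenger):
            print(action)  # each step taken by a program, none by a human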

In speaking of the sniffer dog, JW says “I don’t think their knowledge would bother me until it was given to some human.” But as our examples show, a program could make the knowledge available to other programs, which could take actions too.

Indeed, programs could embarrass us too: imagine a society in which sex offenders are automatically flagged in public by publishing their photos on giant television screens in Times Square. Scanning programs intercept an email of mine, in which I have sent photos–of my toddler daughter bathing with her pre-school friend–to my wife. They decide on the basis of this data that I am a sex offender and flag me as such. Perhaps I’m only ‘really’ embarrassed when humans ‘view’ my photo, but the safeguards for accessing data and its use need to be placed ‘upstream.’

Humans aren’t the only ones taking actions in this world of ours; programs are agents too. It is their agency that makes their access to our data interesting and possibly problematic. The very notion of autonomous programs would be considerably less useful if they couldn’t act on their own, interact with each other, and bring about changes.

Lastly, JW also raises the question of whether we have a reasonable expectation of privacy in our email–stored on our service providers’ servers. Thanks to the terrible third-party doctrine, the Supreme Court has decided we do not. But this notion is ripe for overruling in these days of cloud computing. Our legal changes–on legal and normative grounds–should not be held up by bad law. But even if this doctrine were to stand, it would not affect my arguments in the essay, which conclude that data in transit, which is subject to the Wiretap Act, is still something in which we may find a reasonable expectation of privacy.