Report On Brooklyn College Teach-In On ‘Web Surveillance And Security’

Yesterday, as part of ‘The Brooklyn College Teach-In & Workshop Series on Resistance to the Trump Agenda,’ I facilitated a teach-in on the topic of ‘web surveillance and security.’ During my session I noted some of the technical and legal issues at play in these domains, and how technology and law have conspired to ensure that: a) we live in a regime of constant, pervasive surveillance; b) current legal protections–including the disastrous ‘third-party doctrine’ and the rubber-stamping of governmental surveillance ‘requests’ by FISA courts–are simply inadequate to safeguard our informational and decisional privacy; c) there is no daylight between the government and large corporations in their use and abuse of our personal information. (I also pointed my audience to James Grimmelmann’s excellent series of posts on protecting digital privacy, which began the day after Donald Trump was elected and continued right up to the inauguration. In those posts, Grimmelmann links to ‘self-defense’ resources provided by the Electronic Frontier Foundation and Ars Technica.)

I began my talk by describing how the level of surveillance desired by secret police organizations of the past–like the East German Stasi, for instance–was now available to the NSA, CIA, and FBI, because of social networking systems; our voluntary provision of every detail of our lives to these systems is a spook’s delight. For instance, the photographs we upload to Facebook will, eventually, make their way into the gigantic corpus of learning data used by law enforcement agencies’ facial recognition software.

During the ensuing discussion I remarked that traditional activism directed at increasing privacy protections–or the enacting of ‘self-defense’ measures–should be part of a broader strategy aimed at reversing the so-called ‘asymmetric panopticon’: citizens need to demand ‘surveillance’ in the other direction, back at government and corporations. For the former, this would mean pushing back against the current classification craze, which sees an increasing number of documents marked ‘Secret,’ ‘Top Secret,’ or some other risible security level–and which results in absurd sentences being levied on those who, like Chelsea Manning, violate such constraints; for the latter, it entails demanding that corporations offer greater transparency about their data collection, usage, and analysis–and that they not be able to rely so easily on the protection of trade secret law in claiming that these techniques are ‘proprietary.’ This push back, of course, relies on changing the nature of the discourse surrounding governmental and corporate secrecy, a discourse all too often dominated by facile arguments that link secrecy with security or secrecy with business strategy. In many ways, this might be the most onerous challenge of all; all too many citizens are still persuaded by the ludicrous ‘if you’ve done nothing illegal you’ve got nothing to hide’ and ‘knowing everything about you is essential for us to keep you safe (or sell you goods)’ arguments.

Note: After I finished my talk and returned to my office, I received an email from one of the attendees who wrote:

 

Programs as Agents, Persons, or just Programs?

Last week, The Nation published my essay “Programs are People, Too.” In it, I argued for treating smart programs as the legal agents of those who deploy them, a legal change I suggest would be more protective of our privacy rights.

Among some of the responses I received was one from a friend, JW, who wrote:

[You write: But automation somehow deludes some people—besides Internet users, many state and federal judges—into imagining our privacy has not been violated. We are vaguely aware something is wrong, but are not quite sure what.]
 
I think we are aware that something is wrong and that it is less wrong.  We already have an area of the law where we deal with this, namely, dog sniffs.  We think dog sniffs are less injurious than people rifling through our luggage; indeed, the law refers to those sniffs as “sui generis.”  And I think they *are* less injurious, just like it doesn’t bother me that Google searches my email with an algorithm.  This isn’t to say that it’s unreasonable for some people to be bothered by it, but I do think people are rightly aware that it is different and less intrusive than if some human were looking through their email.  
 
We don’t need to attribute personhood to dogs to feel violated by police bringing their sniffing up to our house for no reason, but at the same time we basically accept their presence in airports.  And what bothers us isn’t what’s in the dog’s mind, but in the master’s.  If a police dog smelled around my house, made an alert, but no police officer was there to interpret the alert, I’m not sure it would bother me.  
 
Similarly, even attributing intentional states to algorithms as sophisticated as a dog, I don’t think their knowledge would bother me until it was given to some human (what happens when they are as sophisticated as humans is another question).  
 
I’m not sure good old fashioned Fourth Amendment balancing can’t be instructive here.  Do we have a reasonable expectation of privacy in x? What are the governmental interests at stake and how large of an intrusion is being made into the reasonable expectation of privacy?  
 

JW makes two interesting points. First, is the scanning or reading of our personal data by programs really as injurious to privacy as a human’s reading of it? Second, is the legal change I’m suggesting even necessary?

Second point first. Treating smart programs as legal persons is not necessary to bring about the changes I’m suggesting in my essay. Plain old legal agency without legal personhood will do just fine. Most legal jurisdictions require legal agents to be persons too, but this has not always been the case. Consider the following passage, which did not make it to the final version of the online essay:

If such a change—to full-blown legal personhood and legal agency—is felt to be too much, too soon, then we could also grant programs a limited form of legal agency without legal personhood. There is a precedent for this too: slaves in Roman times, despite not being persons in the eyes of the law, were allowed to enter into contracts for their masters, and were thus treated as their legal intermediaries. I mention this precedent because the legal system might prefer that the change in legal status of artificial agents be an incremental one; before they become legal persons and thus full legal subjects, they could ‘enjoy’ this form of limited legal subjecthood. As a society we might find this status uncomfortable enough to want to change their status to legal persons if we think its doctrinal and political advantages—like those alluded to here—are significant enough.

Now to JW’s first point. Is a program’s access to my personal data less injurious than a human’s? I don’t think so. Programs can do things with data: they can act on it. The opening example in my essay demonstrates this quite well:

Imagine the following situation: Your credit card provider uses a risk assessment program that monitors your financial activity. Using the information it gathers, it notices your purchases are following a “high-risk pattern”; it does so on the basis of a secret, proprietary algorithm. The assessment program, acting on its own, cuts off the use of your credit card. It is courteous enough to email you a warning. Thereafter, you find that actions that were possible yesterday—like making electronic purchases—no longer are. No humans at the credit card company were involved in this decision; its representative program acted autonomously on the basis of pre-set risk thresholds.

Notice in this example that for my life to be impinged on by the agency/actions of others, it was not necessary that a single human being be involved. We so often interact with the world through programs that they command considerable agency in our lives. Our personal data is valuable to us because control of it may make a difference to our lives; if programs can use the data to do so then our privacy laws should regulate them too–explicitly.
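To make the structure of that example concrete, here is a minimal sketch in Python (with entirely invented names, thresholds, and scoring rules; this is not any actual provider’s system) of a program that suspends a card the moment a pre-set risk threshold is crossed, with no human in the loop:

```python
# Hypothetical sketch of an autonomous risk-assessment agent. All names,
# thresholds, and scoring rules are invented for illustration; this is not
# any real provider's algorithm.

from dataclasses import dataclass

RISK_THRESHOLD = 0.8  # pre-set by the provider; no human reviews individual cases


@dataclass
class Purchase:
    merchant: str
    amount: float
    country: str


def risk_score(history: list[Purchase]) -> float:
    """Toy 'proprietary' scoring rule: weight foreign and large purchases."""
    if not history:
        return 0.0
    foreign = sum(p.country != "US" for p in history) / len(history)
    large = sum(p.amount > 1000 for p in history) / len(history)
    return 0.6 * foreign + 0.4 * large


def block_card() -> None:
    print("card blocked")  # stand-in for the provider's internal API


def send_email(to: str, body: str) -> None:
    print(f"email to {to}: {body}")  # stand-in for an outbound mail service


def assess_account(history: list[Purchase], email: str) -> bool:
    """Suspend the card and notify the holder if the score crosses the threshold."""
    score = risk_score(history)
    if score >= RISK_THRESHOLD:
        block_card()  # the program acts on its own
        send_email(email, f"Your card has been suspended (risk score {score:.2f}).")
        return True
    return False


# Example: a burst of large foreign purchases trips the threshold automatically.
assess_account(
    [Purchase("electronics", 2500.0, "EE"), Purchase("jewelry", 4000.0, "EE")],
    "cardholder@example.com",
)
```

Every consequential step in this sketch is taken by code acting on data about a person; the human appears only as the recipient of the courtesy email.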

Let us return to JW’s sniffer dog example and update it. The dog is a robotic one; it uses sophisticated scanning technology to detect traces of cocaine on a passenger’s bag. When it does so, the nametag and passport photo associated with the bag are automatically transmitted to a facial recognition system, which establishes a match and immediately sets off a series of alarms: perhaps my bank accounts are closed, perhaps my sophisticated car is immobilized, and so on. No humans need be involved in this decision; I may find my actions curtailed without any human having taken a single action. We don’t need “a police officer to interpret the alert.” (But I’ve changed his dog to a robotic dog, haven’t I? Yes, because the programs I am considering are, in some dimensions, considerably smarter than a sniffer dog. They are much, much dumber in others.)

In speaking of the sniffer dog, JW says “I don’t think their knowledge would bother me until it was given to some human.” But as our examples show, a program could make the knowledge available to other programs, which could take actions too.
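Purely as illustration (every function, data source, and downstream system here is hypothetical), the updated example can be read as a chain of programs handing data to one another, each acting on it in turn, with no officer anywhere in the loop:

```python
# Hypothetical program-to-program pipeline for the robotic sniffer-dog example.
# Every function, data source, and downstream system is invented; the point is
# the chain of automated actions, not any real system's API.

def scan_bag(bag_id: str, trace_db: dict[str, bool]) -> bool:
    """Robotic 'dog': reports whether trace chemicals were detected on the bag."""
    return trace_db.get(bag_id, False)


def match_passenger(bag_id: str, tag_db: dict[str, str]) -> str | None:
    """Stand-in for a facial-recognition match against the bag's name tag."""
    return tag_db.get(bag_id)


def freeze_bank_accounts(passenger: str) -> None:
    print(f"accounts frozen for {passenger}")  # stand-in for a bank's API


def immobilize_vehicle(passenger: str) -> None:
    print(f"vehicle immobilized for {passenger}")  # stand-in for a telematics API


def handle_alert(bag_id: str, trace_db: dict[str, bool], tag_db: dict[str, str]) -> None:
    """One program's alert is consumed by other programs, which act on their own."""
    if not scan_bag(bag_id, trace_db):
        return
    passenger = match_passenger(bag_id, tag_db)
    if passenger is None:
        return
    # No officer interprets the alert; downstream programs simply act on it.
    freeze_bank_accounts(passenger)
    immobilize_vehicle(passenger)


# Example: a single positive scan cascades into consequences for the passenger.
handle_alert("BAG-42", trace_db={"BAG-42": True}, tag_db={"BAG-42": "A. Passenger"})
```

Whether the knowledge ever reaches “some human” is beside the point; the curtailment of my actions happens upstream of any human attention.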

Indeed, programs could embarrass us too: imagine a society in which sex offenders are automatically flagged in public, their photos published on giant television screens in Times Square. Scanning programs intercept an email of mine, in which I have sent photos–of my toddler daughter bathing with her pre-school friend–to my wife. They decide on the basis of this data that I am a sex offender and flag me as such. Perhaps I’m only ‘really’ embarrassed when humans ‘view’ my photo, but the safeguards for accessing data and controlling its use need to be placed ‘upstream.’

Humans aren’t the only ones taking actions in this world of ours; programs are agents too. It is their agency that makes their access to our data interesting and possibly problematic. The very notion of an autonomous program would be considerably less useful if such programs couldn’t act on their own, interact with each other, and bring about changes.

Lastly, JW also raises the question of whether we have a reasonable expectation of privacy in our email, stored as it is on our providers’ servers. Thanks to the terrible third-party doctrine, the Supreme Court has decided we do not. But this notion is ripe for overruling in these days of cloud computing. Our legal changes–urged on legal and normative grounds–should not be held up by bad law. But even if this doctrine were to stand, it would not affect my arguments in the essay, which conclude that data in transit, which is subject to the Wiretap Act, is still something in which we may find a reasonable expectation of privacy.

Orin Kerr Thinks Executive Branch Searches of The Press Are a ‘Non-Story’

Orin Kerr suggests the story of the US Department of Justice seizing AP phone records isn’t one, wraps up with a flourish, hands out a few pokes at anti-government paranoia, and then asks a series of what he undoubtedly takes to be particularly incisive and penetrating questions:

Based on what we know so far, then, I don’t see much evidence of an abuse. Of course, I realize that some VC readers strongly believe that everything the government does is an abuse: All investigations are abuses unless there is proof beyond a reasonable doubt to the contrary. To not realize this is to be a pro-government lackey. Or even worse, Stewart Baker. But I would ask readers inclined to see this as an abuse to identify exactly what the government did wrong based on what we know so far. Was the DOJ wrong to investigate the case at all? If it was okay for them to investigate the case, was it wrong for them to try to find out who the AP reporters were calling? If it was okay for them to get records of who the AP reporters were calling, was it wrong for them to obtain the records from the personal and work phone numbers of all the reporters whose names were listed as being involved in the story and their editor? If it was okay for them to obtain the records of those phone lines, was the problem that the records covered two months — and if so, what was the proper length of time the records should have covered?

I get that many people will want to use this story as a generic “DOJ abuse” story and not look too closely at it. And I also understand that those who think leaks are good things will see investigations of leaks as inherently bad. But at least based on what we know so far, I don’t yet see a strong case that collecting these records was an abuse of the investigative process.

This summation and dismissal of the ‘non-story’ of a major news organization having its phone records seized by the legal wing of the executive branch is remarkable for its straightforward intention to treat the questions above as merely rhetorical: of course the DOJ is not ‘wrong’ to aggressively investigate, using all means at its disposal, whistleblowers who provide information to the press. It should therefore seek to identify them by relying on problematic doctrines of search and seizure of personal information provided to third parties. These searches should be broad and extensive, casting as wide a net as possible.

In this conception of executive power, there is a visible asymmetry: the threat might be perceived dimly, but the response is clear and powerful, with few limits on its application.

For Kerr, therefore, it is a ‘non-story’ when a massive exertion of executive branch power is directed at a component of the polity vital to its information gathering and reporting functions, one of whose central functions has been exercising vigilance and oversight on that same power; it is a ‘non-story’ when exercises of executive power directed toward dubious ends such as prosecuting whistleblowers might result in an attenuated and impaired  domain of political discourse. This is of little concern to Kerr in his reckonings of whether legal propriety has been kept, of whether there has been an ‘abuse of the investigative process.’ But how could there be one, when the context of the ‘investigative process’ matters so little?