Paul Ryan Wants A Fig Leaf From Donald Trump

Over at The Nation John Nichols makes note of Paul Ryan’s undignified ‘dance’ with Donald Trump:

Ryan says he is “not ready” to formally endorse Trump’s unpopular presidential candidacy. Trump says he is “not ready” to embrace Ryan’s unpopular austerity agenda. But after speaking with Ryan, Republican National Committee chairman Reince Priebus says the speaker is prepared to “work through” these differences in order to “get there” on an endorsement of the billionaire. Blah. Blah. Blah.

Ryan is being portrayed in much of the media as an honorable Republican who is courageously refusing to board the Trump train….Ryan’s maneuvering has very little to do with honor and courage, and very much to do with ego and political positioning.

As I noted here a little while ago, the Republican Party will not find it too difficult to absorb and assimilate Donald Trump. But to do so, it needs some cover, even if purely nominal. It would like Donald Trump to put down his megaphone and use a dog whistle instead, for instance. It would like Donald Trump to stop making it so hard for them to speak up on his behalf, a difficulty made especially galling by the knowledge these Republican folk have that in Hillary Clinton they have their dream electoral opposition: a figure universally reviled on the American Right who can be effortlessly linked with Bill Clinton, another bogeyman for the party faithful. The Republican Party knows the Clinton candidacy, that seeming inevitability, can be beaten by its tried-and-trusted combinations of obfuscation, sabre rattling, seemingly outward-directed xenophobia, and loud, persistent dog-whistling. Surely there is no need to pick new fights with new enemies here?

Little is required for party unity in the present situation. Trump would only need to sound ‘presidential’ on a couple of occasions, and those pronouncements would easily become the foci of attention for Republicans. They would allow Republicans to point to a ‘maturing,’ ‘evolving,’ Trump, and allow them to virtuously insist on conversations about ‘the issues.’ Such conversations are not possible when every news cycle brings further reports about acerbic Trump responses to official Republican condemnation.

So this picture that Ryan seeks to paint for us, of a courageous Speaker holding the fort for technocratic conservatives against the advancing forces of an unsophisticated, nativist populism, needs emendation; his desperate wails–‘A fig leaf, a fig leaf, my kingdom for a fig leaf!’–indicate a wholly disparate set of desires and motivations. As Nichols notes, Ryan says he ‘wants to’ back Trump and has indicated that he hopes ‘to be a part of this unifying process.’ Ryan’s ‘wants’ and ‘hopes’ are self-serving; he does not want to be deposed by Trump, to lose his political career to this unregenerate parvenu. Ryan seeks not to become politically irrelevant, not to be shoved aside by the Trump Express that has scorched a new path through the Republican marshaling yard. Because Ryan is the one who seeks to be a survivor, he will grasp at any lifeline thrown to him. Perhaps this coming week’s meeting with Donald Trump will provide him with one such.

Beyonce And The Singularity

A couple of decades ago, I strolled through Washington Square Park on a warm summer night, idly observing the usual hustle and bustle of students, tourists, drunks, buskers, hustlers, stand-up comedians, and, sadly, folks selling oregano instead of honest-to-goodness weed. As I did so, I noticed a young man holding up flyers and yelling, ‘Legalize Marijuana! Impeach George Bush!’ [Sr., not Jr., though either would have done just fine.] I walked over and asked for a flyer. Was a new political party being floated with these worthy objectives as central platform issues? Was there a political movement afoot, one worthy of my support? Was a meeting being called?

The flyers were for a punk rock band’s live performance the following night–at a club, a block or so away. Clickbait, you see, is as old as the hills.

Clickbait works. From the standard ‘You won’t believe what this twelve-year-old did to get his divorced parents back together’ to ‘Ten signs your daughter is going to date a loser in high school’, to ‘Nine ways you are wasting money every day’ – they all work. You are intrigued; you click; the hit-count goes up; little counters spin; perhaps some unpaid writer gets paid as a threshold is crossed; an advertiser forks out money to the site posting the link. Or something like that. It’s all about the hits; they keep the internet engine running; increasing their number justifies any means.

Many a writer finds out that the headlines for their posts have been changed to something deemed more likely to bring in readers. They often do not agree with these changes–especially when irate readers complain about their misleading nature. This becomes especially pernicious when trash talk about a piece of writing spreads–based not on its content, but on its headline, one not written by the author, but dreamed up by a website staffer instructed to do anything–anything!–to increase the day’s hit-count.

A notable personal instance of this phenomenon occurred with an essay I wrote for The Nation a little while ago. My original title for the essay was: Programs, Not Just People, Can Violate Your Privacy. I argued that smart programs could violate privacy just as humans could, and that the standard defense used by their deployers–“Don’t worry, no humans are reading your email”–was deliberately and dangerously misleading. I then went on to suggest granting a limited form of legal agency to these programs–so that their deployers could be understood as their legal principals and hence attributed their knowledge and made liable for their actions. I acknowledged that the grant of personhood was a legal move that would also solve this problem, but that was not the main thrust of my argument–the grant of legal agency to invoke agency law would be enough.

My essay went online as Programs Are People, Too. It was a catchy title, but it was clickbait. And it created predictable misunderstanding: many readers–and non-readers–simply assumed I was arguing for greater ‘legal rights’ for programs, and immediately put me down as some kind of technophilic anti-humanist. Ironically, someone arguing for the protection of user rights online was pegged as arguing against them. The title was enough to convince them of it. I had thought my original title was more accurate, and it certainly seemed catchy enough to me. Not so, apparently, for the folks who ran The Nation‘s site. C’est la vie.

As for Beyonce, I have no idea what she thinks about the singularity.

Programs as Agents, Persons, or just Programs?

Last week, The Nation published my essay “Programs are People, Too.” In it, I argued for treating smart programs as the legal agents of those who deploy them, a legal change I suggest would be more protective of our privacy rights.

Among some of the responses I received was one from a friend, JW, who wrote:

[You write: But automation somehow deludes some people—besides Internet users, many state and federal judges—into imagining our privacy has not been violated. We are vaguely aware something is wrong, but are not quite sure what.]
 
I think we are aware that something is wrong and that it is less wrong.  We already have an area of the law where we deal with this, namely, dog sniffs.  We think dog sniffs are less injurious than people rifling through our luggage; indeed, the law refers to those sniffs as “sui generis.”  And I think they *are* less injurious, just like it doesn’t bother me that Google searches my email with an algorithm.  This isn’t to say that it’s unreasonable for some people to be bothered by it, but I do think people are rightly aware that it is different and less intrusive than if some human were looking through their email.
 
We don’t need to attribute personhood to dogs to feel violated by police bringing their sniffing up to our house for no reason, but at the same time we basically accept their presence in airports.  And what bothers us isn’t what’s in the dog’s mind, but in the master’s.  If a police dog smelled around my house, made an alert, but no police officer was there to interpret the alert, I’m not sure it would bother me.  
 
Similarly, even attributing intentional states to algorithms as sophisticated as a dog, I don’t think their knowledge would bother me until it was given to some human (what happens when they are as sophisticated as humans is another question).  
 
I’m not sure good old fashioned Fourth Amendment balancing can’t be instructive here.  Do we have a reasonable expectation of privacy in x? What are the governmental interests at stake and how large of an intrusion is being made into the reasonable expectation of privacy?  
 

JW makes two interesting points. First, is scanning or reading by programs of our personal data really injurious to privacy in the way a human’s reading is? Second, is the legal change I’m suggesting even necessary?

Second point first. Treating smart programs as legal persons is not necessary to bring about the changes I’m suggesting in my essay. Plain old legal agency without legal personhood will do just fine. Most legal jurisdictions require legal agents to be persons too, but this has not always been the case. Consider the following passage, which did not make it to the final version of the online essay:

If such a change—to full-blown legal personhood and legal agency—is felt to be too much, too soon, then we could also grant programs a limited form of legal agency without legal personhood. There is a precedent for this too: slaves in Roman times, despite not being persons in the eyes of the law, were allowed to enter into contracts for their masters, and were thus treated as their legal intermediaries. I mention this precedent because the legal system might prefer that the change in legal status of artificial agents be an incremental one; before they become legal persons and thus full legal subjects, they could ‘enjoy’ this form of limited legal subjecthood. As a society we might find this status uncomfortable enough to want to change their status to legal persons if we think its doctrinal and political advantages—like those alluded to here—are significant enough.

Now to JW’s first point. Is a program’s access to my personal data less injurious than a human’s? I don’t think so. Programs can do things with data: they can act on it. The opening example in my essay demonstrates this quite well:

Imagine the following situation: Your credit card provider uses a risk assessment program that monitors your financial activity. Using the information it gathers, it notices your purchases are following a “high-risk pattern”; it does so on the basis of a secret, proprietary algorithm. The assessment program, acting on its own, cuts off the use of your credit card. It is courteous enough to email you a warning. Thereafter, you find that actions that were possible yesterday—like making electronic purchases—no longer are. No humans at the credit card company were involved in this decision; its representative program acted autonomously on the basis of pre-set risk thresholds.

Notice in this example that for my life to be impinged on by the agency/actions of others, it was not necessary that a single human being be involved. We so often interact with the world through programs that they command considerable agency in our lives. Our personal data is valuable to us because control of it may make a difference to our lives; if programs can use the data to do so then our privacy laws should regulate them too–explicitly.
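The scenario above can be sketched as a toy program. Everything in this sketch–the names, the threshold, the scoring rule–is hypothetical, a stand-in for the secret, proprietary algorithm the example describes; the point is only that the program acts on the data with no human in the loop:

```python
# Hypothetical sketch of an autonomous risk-assessment program.
# All names and rules are invented for illustration.

RISK_THRESHOLD = 0.8  # pre-set by the deployer; opaque to the cardholder


def risk_score(purchases):
    """Toy stand-in for the issuer's secret, proprietary algorithm."""
    flagged = sum(1 for p in purchases if p["amount"] > 500 or p["foreign"])
    return flagged / max(len(purchases), 1)


def send_email(address, body):
    print(f"to {address}: {body}")  # stand-in for a real mail API


def monitor(account):
    """Scan recent activity and act autonomously on a pre-set threshold."""
    if risk_score(account["recent_purchases"]) >= RISK_THRESHOLD:
        account["card_active"] = False  # the program acts on its own
        send_email(account["email"], "Your card has been suspended.")


account = {
    "email": "cardholder@example.com",
    "card_active": True,
    "recent_purchases": [
        {"amount": 900, "foreign": True},
        {"amount": 1200, "foreign": True},
    ],
}
monitor(account)  # no human reviews the decision; the card is now suspended
```

The consequential step–disabling the card–happens entirely inside the program; a human appears nowhere in the decision path.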

Let us return to JW’s sniffer dog example and update it. The dog is a robotic one; it uses sophisticated scanning technology to detect traces of cocaine on a passenger’s bag. When it does so, the nametag/passport photo associated with the bag is automatically transmitted to a facial recognition system, which establishes a match and immediately sets off a series of alarms: perhaps my bank accounts are closed, perhaps my sophisticated car is immobilized, and so on. No humans need be involved in this decision; I may find my actions curtailed without any human having taken a single action. We don’t need “a police officer to interpret the alert.” (But I’ve changed his dog to a robotic dog, haven’t I? Yes, because the programs I am considering are, in some dimensions, considerably smarter than a sniffer dog. They are much, much dumber in others.)
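This updated pipeline can also be sketched in miniature. Every function and data structure here is invented for illustration; the point is only that one program’s alert can cascade into actions by other programs, with no human interpreting anything along the way:

```python
# Hypothetical sketch of the robotic sniffer-dog pipeline.
# All names, databases, and systems are invented for illustration.

def chemical_scan(bag):
    """Robotic 'sniff': detect target residues on a bag."""
    return "cocaine" in bag["trace_residues"]


def face_match(passport_photo, database):
    """Toy facial-recognition lookup: photo id -> person, if known."""
    return database.get(passport_photo)


def pipeline(bag, database, bank, vehicles):
    """Run scan -> match -> automatic downstream actions; no human involved."""
    if not chemical_scan(bag):
        return []
    person = face_match(bag["passport_photo"], database)
    if person is None:
        return []
    bank[person] = "frozen"            # one downstream program acts...
    vehicles[person] = "immobilized"   # ...and then another
    return ["accounts frozen", "car immobilized"]


database = {"photo-123": "A. Traveler"}
bank = {"A. Traveler": "open"}
vehicles = {"A. Traveler": "running"}
bag = {"trace_residues": ["cocaine"], "passport_photo": "photo-123"}

taken = pipeline(bag, database, bank, vehicles)
```

The alert never reaches a human; the “knowledge” flows from one program to the next, and each acts on it.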

In speaking of the sniffer dog, JW says “I don’t think their knowledge would bother me until it was given to some human.” But as our examples show, a program could make the knowledge available to other programs, which could take actions too.

Indeed, programs could embarrass us too: imagine a society in which sex offenders are automatically flagged in public by publishing their photos on giant television screens in Times Square. Scanning programs intercept an email of mine, in which I have sent photos–of my toddler daughter bathing with her pre-school friend–to my wife. They decide on the basis of this data that I am a sex offender and flag me as such. Perhaps I’m only ‘really’ embarrassed when humans ‘view’ my photo, but the safeguards for accessing data and its use need to be placed ‘upstream.’

Humans aren’t the only ones taking actions in this world of ours; programs are agents too. It is their agency that makes their access to our data interesting and possibly problematic. The very notion of an autonomous program would be considerably less useful if such programs couldn’t act on their own, interact with each other, and bring about changes.

Lastly, JW also raises the question of whether we have a reasonable expectation of privacy in our email–stored on our providers’ servers. Thanks to the terrible third-party doctrine, the Supreme Court has decided we do not. But this notion is ripe for overruling in these days of cloud computing. The legal changes I propose–justified on legal and normative grounds–should not be held up by bad law. But even if the third-party doctrine were to stand, it would not affect my arguments in the essay, which conclude that data in transit, which is subject to the Wiretap Act, is still something in which we may find a reasonable expectation of privacy.