The Phenomenology Of Encounters With Notification Icons

It’s 6:30 AM or so; you’re awake, busy getting your cup of coffee ready. (Perhaps you’re up earlier like the truly virtuous or the overworked, which in our society comes to the same thing.) Your coffee made, you fire up your smartphone, laptop, tablet, or desktop, and settle down for the morning service at the altar. Your eyes light up, your antennae tingle in pleasurable anticipation: Facebook’s blue top ribbon features a tiny red square–which squats over the globe like a ginormous social media network–with a number inscribed in it; single figures is good, double figures is better. You look at Twitter: the Liberty Bell–sorry, the notifications icon–bears the weight of a similar number. Yet again: single figures good, double figures better. You look at Gmail: your heart races, for that distinctive bold lettering in your inbox is present, standing out in stark contrast from the pallid type below; and there is a number here too, in parentheses after ‘Inbox’: single figures good, double figures better.

That’s what happens on a good day. (On a really good day, Facebook will have three red circles for you.) On a bad day, the Facebook globe is heartbreakingly red-less and banal; Twitter’s Liberty Bell is mute; and Gmail’s Inbox is not bold, not at all. You reel back from the screen(s) in disappointment; your mood crashes and burns; the world seems empty and uninviting and cold and dark. Impatience, frustration, anxiety come rushing in through the portals you have now left open, suffusing your being, residing there till dislodged by the right kind of sensory input from those same screens: the appropriate colors, typefaces, and numbers need to make an appearance to calm and soothe your restless self. We get to work, all the while keeping an eye open and an ear cocked: a number appears on a visible tab, and we switch contexts and screens to check, immediately. An envelope appears in the corner of our screens; mail is here; we must tear open that envelope. Sounds, too, intrude; cheeps, dings, and rings issue from our machines to inform us that relief is here. The silence of our devices can be deafening.

Our mood rises and falls in sync.

As is evident, our interactions with the human-computer interfaces of our communications systems have a rich phenomenology: expectations, desires, and hopes rush towards their encounters with colors and shapes and numbers; these encounters produce mood changes and affective responses. The clever designer shapes the iconography of the interface with care to produce these responses in the right way, to achieve the desired results: your interaction with the system must never be affectively neutral; it must have some emotional content. We are manipulated by these responses; we behave accordingly.

Machine learning experts speak of training the machines; let us not forget that our machines train us too: by the ‘face’ they present to us, by the sounds they make, by the ‘expressions’ visible on them. As we continue to interact with them, we become different people, changed much as we are by our encounters with other people, those other providers and provokers of emotional responses.

Artificial Agents, Knowledge Attribution, and Privacy Violations

I am a subscriber to a mailing list dedicated to discussing the many legal, social, and economic issues that arise out of the increasing use of drones. Recently on the list, the discussion turned to the privacy implications of drones. I was asked whether the doctrines developed in my book A Legal Theory of Autonomous Artificial Agents were relevant to the privacy issues raised by drones. I wrote a brief reply on the list indicating that yes, they are. I am posting a brief excerpt from the book here to address that question more fully (for the full argument, please see Chapter 3 of the book):

Knowledge Attribution and Privacy Violations

The relationship between knowledge and legal regimes for privacy is straightforward: privacy laws place restrictions, inter alia, on what knowledge may be acquired, and how. Of course, knowledge acquisition does not exhaust the range of privacy protections afforded under modern legal systems. EU privacy law, for example, is triggered when mere processing of personal data is involved. Nevertheless, acquisition of knowledge of someone’s affairs, by human or automated means, crosses an important threshold with regard to privacy protection.

Privacy obligations are implicitly relevant to the attribution of knowledge held by agents to their principals in two ways: confidentiality obligations can restrict such attribution, and horizontal information barriers, such as medical privacy obligations, can prevent corporations from being fixed with collective knowledge for liability purposes.

Conversely, viewing artificial agents as legally recognized “knowers” of digitized personal information on behalf of their principals brings conceptual clarity to the question of when automated access to personal data amounts to a privacy violation.

The problem with devising legal protections against privacy violations by artificial agents is not that current statutory regimes are weak; it is that they have not been interpreted appropriately given the functionality of artificial agents and the nature of modern internet-based communications. The first move, then, is to regard artificial agents as legal agents of their principals, capable of information and knowledge acquisition.

A crucial disanalogy drawn between artificial and human agents plays a role in the denial that artificial agents’ access to personal data can constitute a privacy violation: the argument that the automated nature of artificial agents provides reassurance that sensitive personal data is “untouched by human hands, unseen by human eyes.” The artificial agent becomes a convenient surrogate, one that by its automated nature neatly takes the burden of responsibility off the putative corporate or governmental offender. Here the intuition that “programs don’t know what your email is about” allows principals to put up an “automation screen” between themselves and the programs they deploy. For instance, Google has sought to assuage concerns over possible violations of privacy in connection with the scanning of Gmail messages by pointing to the non-involvement of humans in the scanning process.

Similarly, the U.S. Government, in the 1995 Echelon case, responded to complaints about its monitoring of messages flowing through Harvard University’s computer network by stating that no privacy interests had been violated because all the scanning had been carried out by programs.

This putative need for humans to access personal data before a privacy violation can occur underwrites such defenses.

Viewing, as we do, the programs engaged in such monitoring or surveillance as legal agents capable of knowledge acquisition denies the legitimacy of the Google and Echelon defenses. An agent that has acquired a user’s personal data acquires functionality that makes possible the processing or onward disclosure of that data in ways that constitute privacy violations. (Indeed, the very functionality enabled by access to such data is what would permit the claim, under the conditions of our knowledge analysis, that the agent in question knows a user’s personal data.)