I am a subscriber to a mailing list dedicated to discussing the many legal, social, and economic issues that arise out of the increasing use of drones. Recently on the list, the discussion turned to the privacy implications of drones. I was asked whether the doctrines developed in my book A Legal Theory of Autonomous Artificial Agents were relevant to the privacy issues raised by drones. I wrote a brief reply on the list indicating that yes, they are. I am posting a brief excerpt from the book here to address that question more fully (for the full argument, please see Chapter 3 of the book):
Knowledge Attribution and Privacy Violations
The relationship between knowledge and legal regimes for privacy is straightforward: privacy laws place restrictions, inter alia, on what knowledge may be acquired, and how. Of course, knowledge acquisition does not exhaust the range of privacy protections afforded under modern legal systems. EU privacy law, for example, is triggered when mere processing of personal data is involved. Nevertheless, the acquisition of knowledge of someone’s affairs, by human or automated means, crosses an important threshold with regard to privacy protection.
Privacy obligations are implicitly relevant to the attribution of knowledge held by agents to their principals in two ways: confidentiality obligations can restrict such attribution, and horizontal information barriers, such as medical privacy obligations, can prevent corporations from being fixed with collective knowledge for liability purposes.
Conversely, viewing artificial agents as legally recognized “knowers” of digitized personal information on behalf of their principals brings conceptual clarity in answering the question of when automated access to personal data amounts to a privacy violation.
The problem with devising legal protections against privacy violations by artificial agents is not that current statutory regimes are weak; it is that they have not been interpreted appropriately given the functionality of agents and the nature of modern internet-based communications. The first step in this regard is to treat artificial agents as legal agents of their principals, capable of information and knowledge acquisition.
A crucial disanalogy drawn between artificial and human agents plays a role in the denial that artificial agents’ access to personal data can constitute a privacy violation: the argument that the automated nature of artificial agents provides reassurance that sensitive personal data is “untouched by human hands, unseen by human eyes.” The artificial agent becomes a convenient surrogate, one that by its automated nature neatly takes the burden of responsibility off the putative corporate or governmental offender. Here the intuition that “programs don’t know what your email is about” allows principals to put up an “automation screen” between themselves and the programs they deploy. For instance, Google has sought to assuage concerns over possible privacy violations in connection with the scanning of Gmail messages by pointing to the non-involvement of humans in the scanning process.
Similarly, the U.S. Government, in the 1995 Echelon case, responded to complaints about its monitoring of messages flowing through Harvard University’s computer network by stating that no privacy interests had been violated because all the scanning had been carried out by programs.
This putative need for humans to access personal data before a privacy violation can occur underwrites such defenses.
Viewing, as we do, the programs engaged in such monitoring or surveillance as legal agents capable of knowledge acquisition denies the legitimacy of the Google and Echelon defenses. An agent that has acquired a user’s personal data gains functionality that makes possible the processing or onward disclosure of that data in ways that constitute privacy violations. (Indeed, the very functionality enabled by access to such data is what would permit the claim, under our knowledge analysis conditions, that the agent in question knows a user’s personal data.)