Drones And The Beautiful World They Reveal

Over the past year or so, I have, on multiple occasions, sat down with my toddler daughter to enjoy the BBC’s epic nature documentary series Planet Earth. Narrated by the incomparable David Attenborough, it offers up hour-long packages of visual delight in stunning high definition: giant waterfalls, towering mountains and icebergs, gigantic flocks of birds, roaring volcanoes and river rapids, deep canyons, majestic creatures of all kinds; the eye candy is plentiful, and it is dished out in large portions. While watching it, I’ve been moved to remark that viewing it in my daughter’s company–and sensing her delight as we do so–has been one of the highlights of my parental responsibilities.

Filming a documentary like Planet Earth, the most expensive nature documentary ever made, takes time, money, and technical aid. The featurettes for the various episodes explain how they were filmed: sometimes using a cinebulle, sometimes “the Heligimbal, a powerful, gyro-stabilised camera mounted beneath a helicopter.” Now comes news that Planet Earth II, the second installment of the series, will deploy even more advanced technology:

The BBC…has not only shot the whole thing in UHD, but it also used the latest camera stabilisation, remote recording, and aerial drone technology, too.

The use of drones makes perfectly good sense. Drones can be commandeered into remote and difficult-to-access territories and zones with great ease and precision; they can wait for the perfect shot for long periods of time; they can generate huge amounts of visual image data, which can then be sorted through to select the best images; without a doubt, their usage will bring the previously hidden–and beautiful–to light. Perhaps they will descend into the craters of volcanoes; perhaps they will hover above herds of animals, tracking their every move to record and reveal the mysteries of migration; perhaps they will enable closer looks at the dynamics of waterfalls and whirlpools; perhaps they will fly amidst flocks of birds.

Their use will remind us once again of the mixed blessings of technology. Drones can be used for surveillance, for privacy invasions, for violations of human rights; they can be used to conduct warfare from on high, sending down deadly munitions directed at civilians; they can also be used to reveal the beauties of this world in a manner that reminds us, yet again, that our planet is a beautiful place, one worth preserving for the sake of future generations. Technology facilitates the exploitation of nature, but also, hopefully, its conservation and sensible stewardship, thanks to the beauty of the images the drones bring back to us. The use of drones in Planet Earth II may refine our aesthetic sensibilities further: many of our aesthetic superlatives are drawn from nature, and nature’s contours will now be revealed in ever greater detail, with more aspects brought front and center. And so, as we have never stopped noticing, even as technology makes the world more understandable, it reveals ever greater mysteries. Technology may make the world mundane, quantifying it all the better to tame it, but it may also reveal facets of the world we were previously blind to, rendering some sensibilities duller and yet others more acute.

Sanctimony, Hypocrisy, Nuclear Weapons, and Drones

A couple of days ago, on this blog, I wrote a post attempting to refute the charge of ‘selective outrage’ that is often leveled against critics of Israeli policies in the current conflict in Gaza. In it, I pointed out that the accusation of hypocrisy made against the proponent of a claim does not affect the claim’s logical force, but must still be reckoned with for its rhetorical impact. Today, I want to note how accusations of hypocrisy often derail American attempts to provide moral instruction and leadership to the rest of the world.

Consider, for instance, Barack Obama’s statements during a White House briefing session yesterday:

President Barack Obama somberly warned on Friday that a forthcoming Senate Intelligence Committee report will show that the United States “tortured some folks” before he took office. But he dismissed “sanctimonious” calls to punish any individuals responsible and rejected calls for CIA Director John Brennan’s resignation.

In response, on a Facebook comment space, I wrote:

Why, oh why, is the world so strangely reluctant to accept our leadership in all things moral?

Many US presidents–and their administrations–before Barack Obama–and his staff–have used the bully pulpit provided by their office to deliver countless sonorous lectures to the rest of the world on the ethical and moral values that should underwrite its political policies. They, and many Americans, have often wondered why these instructions are not taken more seriously, and are instead responded to with a febrile mix of resentment, rage, and sometimes outright violence. These reactions then provoke the plaintive suggestions that such behavioral patterns are merely the ressentiment of the weak, or, perhaps more ambitiously, an expression of an underlying hatred of the American way of life and its unique freedoms.

The answer is considerably less complicated. As I noted in a post on the problem of nuclear disarmament and nonproliferation:

Perhaps the biggest stumbling-block to nonproliferation has been the failure of the ‘non-proliferation complex’ to internalize a simple truth:

[I]f smaller states are to be discouraged from acquiring a bomb, nuclear states will need to take real steps towards disarmament. Otherwise, non-nuclear states will regard their demands as self-serving and hypocritical – reason enough to think about creating an arsenal of their own. [from Campbell Craig and Jan Ruzicka, ‘Who’s In, Who’s Out‘, London Review of Books, 23 February 2012, Vol. 34, No. 4, pp. 37–38]

The self-serving hypocrisy of nuclear weapon states, and its implicit acceptance by the ‘complex,’ is a long-running farce, depressingly well known to most. This hypocrisy is the single most important factor in ensuring that non-proliferation is a non-starter; it ensures the non-proliferation manifesto is foundationally malformed.

Nuclear nonproliferation is a very good idea, as is nuclear disarmament; both can be backed up by very good economic, political, and moral arguments, and many of these have been made by very eloquent spokespersons. Their efforts, however, have always been handicapped because, all too often, the arguments were deployed by the self-serving, sanctimonious, hypocritical members of the Nuclear Weapons Club, which merely seemed to be serving double helpings of ‘pull up the ladder, I’m aboard.’ (I can personally testify that during my university years, as a young hothead, despite having internalized quite well the arguments against India’s going nuclear for its domestic energy needs–on grounds of inappropriate technological fit, especially–I was left almost speechless with rage on reading American lectures on the same topic; these also, for good measure, very often suggested that Indians were simply incapable of managing technology of such sophistication.)

Barack Obama warns us against sanctimony, blithely unaware of his own. His listeners, however, are not. They are also aware that when he ponders the question of which country would tolerate missiles being rained down on it from on high, he is conveniently forgetting about things that fly in the sky and rhyme with ‘phone.’

We Robot 2012 – UAVs and a Pilot-Free World

Day Two at the We Robot 2012 conference at the University of Miami Law School.

Amir Rahmani‘s presentation Micro Aerial Vehicles: Opportunity or Liability? prompted a set of thoughts sparked by the idea of planes not flown by human beings, and in turn, the idea of an aviator-free world. It has been some 109 years since Kitty Hawk, and in that time we have come to the point where we might seriously consider the idea of all aircraft being exclusively robotic. (I should hasten to add that I doubt man will ever stop flying, but, at the least, a very significant attenuation of the role of the pilot looks likely. Peter W. Singer’s Wired for War notes, for instance, that UAV operations in Afghanistan, which account for a significant percentage of all aerial missions in that theater, are carried out by desk pilots working from home bases in the US. The culture that has sprung up around that community is interestingly different from that of pilots who fly combat aircraft from front-line bases.) While I generally welcome the idea of a ‘robotic uprising,’ i.e., a greater role for robots in our society as a means of spurring greater introspection about ourselves and our place in this world, in this domain I find the idea of a pilot-free world curiously melancholic. And it is entirely unsurprising that such a thought is sparked by a set of deeply personal interests: after all, I did grow up on air force bases, watching jets take off, and admiring, as only young boys can, all those impossibly dashing, crew-cut, sunglasses-wearing aviators (then, they were exclusively men; now, women have joined the ranks of armed forces aviators as well).

The twentieth century might have been the century of the pilot, with all the imaginative possibilities associated with the image of man borne aloft on wings, above this grubby world, into the skies, placed in a position, as John Gillespie Magee put it, to ‘reach out and touch the face of God.’ It was a century that saw the rich flowering of a literature born from the radically different viewpoint that aviation afforded its practitioners (and those who admired them). Antoine de Saint-Exupéry was a product of that century, as was Michael Collins (whose Carrying The Fire still remains one of the most literate and passionate books about aviation and manned space flight).

So my concern here is not so much the loss of employment for pilots, a rather mundane economic worry. Rather, it is the idea that a whole domain of creative imagination might be lost. Hopefully, new creative possibilities will spring into being. Perhaps the little flying still done by humans in the future will generate a new form of literature, one that sees the aviator not as a ‘worker’ flying airliners or a ‘soldier’ flying combat aircraft, but returns, perhaps, to the original role of the aviator as an adventurer testing and flying radically new craft. Perhaps. More on this possibility later.

Artificial Agents, Knowledge Attribution, and Privacy Violations

I am a subscriber to a mailing list dedicated to discussing the many legal, social, and economic issues that arise out of the increasing use of drones. Recently on the list, the discussion turned to the privacy implications of drones. I was asked whether the doctrines developed in my book A Legal Theory of Autonomous Artificial Agents were relevant to the privacy issues raised by drones. I wrote a brief reply on the list indicating that yes, they are. I am posting a brief excerpt from the book here to address that question more fully (for the full argument, please see Chapter 3 of the book):

Knowledge Attribution and Privacy Violations

The relationship between knowledge and legal regimes for privacy is straightforward: privacy laws place restrictions, inter alia, on what knowledge may be acquired, and how. Of course, knowledge acquisition does not exhaust the range of privacy protections afforded under modern legal systems; EU privacy law, for example, is triggered when mere processing of personal data is involved. Nevertheless, the acquisition of knowledge of someone’s affairs, by human or automated means, crosses an important threshold with regard to privacy protection.

Privacy obligations are implicitly relevant to the attribution of knowledge held by agents to their principals in two ways: confidentiality obligations can restrict such attribution, and horizontal information barriers, such as medical privacy obligations, can prevent corporations from being fixed with collective knowledge for liability purposes.

Conversely, viewing artificial agents as legally recognized “knowers” of digitized personal information on behalf of their principals brings conceptual clarity in answering the question of when automated access to personal data amounts to a privacy violation.

The problem with devising legal protections against privacy violations by artificial agents is not that current statutory regimes are weak; it is that they have not been interpreted appropriately, given the functionality of agents and the nature of modern internet-based communications. The first move in this regard is to treat artificial agents as legal agents of their principals, capable of information and knowledge acquisition.

A crucial disanalogy drawn between artificial and human agents plays a role in the denial that artificial agents’ access to personal data can constitute a privacy violation: the argument that the automated nature of artificial agents provides reassurance that sensitive personal data is “untouched by human hands, unseen by human eyes.” The artificial agent becomes a convenient surrogate, one that by its automated nature neatly takes the burden of responsibility off the putative corporate or governmental offender. Here the intuition that “programs don’t know what your email is about” allows the principal to put up an “automation screen” between themselves and the programs they deploy. For instance, Google has sought to assuage concerns over possible violations of privacy in connection with the scanning of Gmail messages by pointing to the non-involvement of humans in the scanning process.

Similarly, in the 1995 Echelon case, the U.S. Government responded to complaints about its monitoring of messages flowing through Harvard University’s computer network by stating that no privacy interests had been violated because all the scanning had been carried out by programs.

This putative need for humans to access personal data before a privacy violation can occur underwrites such defenses.

Viewing, as we do, the programs engaged in such monitoring or surveillance as legal agents capable of knowledge acquisition denies the legitimacy of the Google and Echelon defenses. An agent that has acquired a user’s personal data gains functionality that makes possible the processing or onward disclosure of that data in ways that constitute privacy violations. (Indeed, the very functionality enabled by access to such data is what would permit the claim, under our knowledge analysis conditions, that the agent in question knows a user’s personal data.)