Ken Englehart’s Exceedingly Lame Argument Against Net Neutrality

Over at the New York Times, Ken Englehart, “a lawyer specializing in communications law, is a senior adviser for StrategyCorp, an adjunct professor at Osgoode Hall Law School and a senior fellow at the C. D. Howe Institute,” offers us an astonishing argument suggesting we not worry about the FCC’s move to repeal Net Neutrality. It roughly consists of saying “Don’t worry, corporations will do right by you.” Englehart accepts that the concerns raised by opponents of the FCC–that “getting rid of neutrality regulation will lead to a ‘two-tier’ internet: Internet service providers will start charging fees to websites and apps, and slow down or block the sites that don’t pay up…users will have unfettered access to only part of the internet, with the rest either inaccessible or slow”–have some merit, for he makes note of abuses by ISPs that confirm just those fears. But he just does not think we need worry that ISPs will abuse their new powers:

[T]hese are rare examples, for a reason: The public blowback was fierce, scaring other providers from following suit. Second, blocking competitors to protect your own services is anticompetitive conduct that might well be stopped by antitrust laws without any need for network neutrality regulations.

How reassuring. “Public blowback” seems unlikely to have any effect on the behavior of folks who run quasi-monopolies. Moreover, the idea that we should trust our ISPs not to indulge in behavior that “might well be stopped by antitrust laws” also seems unlikely to assuage any concerns pertaining to the abuse of ISP powers. It gets better, of course:

Net-neutrality defenders also worry that some service providers could slow down high-data peer-to-peer traffic, like BitTorrent. And again, it has happened, most notably in 2007, when Comcast throttled some peer-to-peer file sharing.

But it’s still good:

So why am I not worried? I worked for a telecommunications company for 25 years, and whatever one may think about corporate control over the internet, I know that it simply is not in service providers’ interests to throttle access to what consumers want to see. Neutral broadband access is a cash cow; why would they kill it?

Because service providers will make all the money they need by providing faster services to premium customers and not give a damn about the plebes?

But don’t worry:

[T]here’s still competition: Some markets may have just one cable provider, but phone companies offer increasingly comparable internet access — so if the cable provider slowed down or blocked some sites, the phone company could soak up the affected customers simply by promising not to do so.

Or they could collude, with both charging high prices because they know customers have nowhere to go?

Is this the best defenders of the FCC can do? The old ‘market pressures will make corporations behave’ pony trick? Englehart’s cleverest trick, I will admit, is the aside that “the current net neutrality rule was put in place by the Obama administration.” That’s a good dog-whistle to blow. Anything done by the Obama administration is worth repealing by anyone connected with this administration. And their cronies, like Englehart.

Contra Cathy O’Neil, The ‘Ivory Tower’ Does Not ‘Ignore Tech’

In ‘Ivory Tower Cannot Keep On Ignoring Tech,’ Cathy O’Neil writes:

We need academia to step up to fill in the gaps in our collective understanding about the new role of technology in shaping our lives. We need robust research on hiring algorithms that seem to filter out people with mental health disorders…we need research to ensure that the same mistakes aren’t made again and again. It’s absolutely within the abilities of academic research to study such examples and to push against the most obvious statistical, ethical or constitutional failures and dedicate serious intellectual energy to finding solutions. And whereas professional technologists working at private companies are not in a position to critique their own work, academics theoretically enjoy much more freedom of inquiry.

There is essentially no distinct field of academic study that takes seriously the responsibility of understanding and critiquing the role of technology — and specifically, the algorithms that are responsible for so many decisions — in our lives. That’s not surprising. Which academic department is going to give up a valuable tenure line to devote to this, given how much academic departments fight over resources already?

O’Neil’s piece is an unfortunate continuation of a trend: castigating academia for its lack of social responsibility while ignoring the work academics do in precisely those domains where their absence is supposedly felt.

In her Op-Ed, O’Neil ignores science and technology studies, a field of study that “takes seriously the responsibility of understanding and critiquing the role of technology,” and many of whose members are engaged in precisely the kind of studies she thinks should be undertaken at this moment in the history of technology. Moreover, there are fields of academic study such as the philosophy of science, the philosophy of technology, and the sociology of knowledge, all of which take very seriously the task of examining and critiquing the conceptual foundations of science and technology; such inquiries are not merely elucidatory, they are very often critical and skeptical. Such disciplines, then, produce work that makes both descriptive and prescriptive claims about the practice of science, and about the social, political, and ethical values that underwrite what may seem like purely ‘technical’ decisions pertaining to design and implementation. The humanities are not alone in this regard: most computer science departments now require a class in ‘Computer Ethics’ as part of the requirements for their major (indeed, I designed one such class here at Brooklyn College, and taught it for a few semesters). And of course, legal academics have, in recent years, started to pay attention to these fields and incorporated them into their writings on ‘algorithmic decision-making,’ ‘algorithmic control,’ and so on. (The work of Frank Pasquale and Danielle Citron is notable in this regard.) If O’Neil is interested, she could dig deeper into the philosophical canon and read works by critical theorists like Herbert Marcuse and Max Horkheimer, who mounted rigorous critiques of scientism, reductionism, and positivism. Lastly, O’Neil could read my co-authored work Decoding Liberation: The Promise of Free and Open Source Software, a central claim of which is that transparency, not opacity, should be the guiding principle for software design and deployment. I’d be happy to send her a copy if she so desires.

The Distinct Relief Of Being (Partially) ‘Off-Line’

I’ve been off blogging for a while, and for good reason: I’d been traveling and did not bother to try to stay online during my travels. Interestingly enough, had I bothered to exert myself ever so slightly in this regard, I could have maintained a minimal presence online here at this blog by posting a quick photo or two–you know, the ones that let you know what you are missing out on, or perhaps even a couple of sentences on my various journeys–which might even have risen above the usual ‘oh my god, my mind is blown’ reactions to spectacular landscapes; network connectivity has improved, and we are ever more accessible even as we venture forth into the ‘outdoors’; after all, doesn’t it seem obligatory for travelers to remote ends of the earth to keep us informed on every weekly, daily, hourly increment in their progress?  (Some five years ago, I’d enforced a similar hiatus on this blog; then, staying offline was easier as my cellphone signal-finding rarely found purchase on my road-trip through the American West.)

But indolence, and even more importantly, relief at the cessation of the burden of staying ‘online’ and ‘updated’ and ‘current’ and ‘visible,’ kicked in all too soon; and my hand drifted from the wheel, content to let this blog’s count of days without a new post rack up ever so steadily, and for my social media ‘updates’ to become ever more sporadic: I posted no links on Facebook, and only occasionally dispensed some largesse to my ‘friends’ in the form of a ‘like’ or a ‘love’; my tweeting came to a grinding halt. Like many others who have made note of the experience of going ‘off-line’ in some shape or form, I experienced relief of a very peculiar and particular kind. I continued to check email obsessively; I sent text messages to my family and video chatted with my wife and daughter when we were separated from each other. Nothing quite brought home the simultaneous remoteness and connectedness of my location in northwest Iceland like being able to chat in crystal-clear video, from a location eight arc-minutes south of the Arctic Circle, with my chirpy daughter back in Brooklyn. This connectedness helps keep us safe, of course; while hiking alone in Colorado, I was able to inform my local friends of my arrivals at summits, my time of commencing return, and then my arrival back at the trailhead; for that measure of anxiety reduction, I’m truly grateful.

Now, I’m back, desk-bound again. Incomplete syllabi await completion; draft book manuscripts call me over to inspect their discombobulated state; unanswered email stacks rise ominously; textbook order reminders frown at me.  It will take some time for me to plow my way out from under this pile; writing on this blog will help reduce the inevitable anxiety that will accompany me on these salvage operations. (Fortunately, I have not returned overweight and out-of-shape; thanks to my choice of activities on my travels, those twin post-journey curses have not been part of my fate this summer.)

On to the rest of the summer and then, the fall.

Proprietary Software And Our Hackable Elections

Bloomberg reports that:

Russia’s cyberattack on the U.S. electoral system before Donald Trump’s election was far more widespread than has been publicly revealed, including incursions into voter databases and software systems in almost twice as many states as previously reported. In Illinois, investigators found evidence that cyber intruders tried to delete or alter voter data. The hackers accessed software designed to be used by poll workers on Election Day, and in at least one state accessed a campaign finance database….the Russian hackers hit systems in a total of 39 states

In Decoding Liberation: The Promise of Free and Open Source Software, Scott Dexter and I wrote:

Oversight of elections, considered by many to be the cornerstone of modern representational democracies, is a governmental function; election commissions are responsible for generating ballots; designing, implementing, and maintaining the voting infrastructure; coordinating the voting process; and generally insuring the integrity and transparency of the election. But modern voting technology, specifically that of the computerized electronic voting machine that utilizes closed software, is not inherently in accord with these norms. In elections supported by these machines, a great mystery takes place. A citizen walks into the booth and “casts a vote.” Later, the machine announces the results. The magical transformation from a sequence of votes to an electoral decision is a process obscure to all but the manufacturers of the software. The technical efficiency of the electronic voting process becomes part of a package that includes opacity and the partial relinquishing of citizens’ autonomy.

This “opacity” has always meant that the software used to, quite literally, keep our democracy running has its quality and operational reliability vetted, not by the people or their chosen representatives, but only by the vendor selling the code to the government. There is no possibility of, say, a fleet of ‘white-hat’ hackers–concerned citizens–putting the voting software through its paces, checking for the security vulnerabilities and points of failure that hostile ‘black-hat’ hackers, working for a foreign entity like, say, Russia, could exploit. These concerns are not new.
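To make the point concrete: with open code, ‘putting the voting software through its paces’ can be as mundane as anyone writing and running a test against the published tallying routine. The sketch below is purely illustrative–a hypothetical, radically simplified tally function and the kind of audit check a concerned citizen could run if the source were public; it is not drawn from any actual voting system.

```python
# Illustrative only: a hypothetical, simplified vote-tallying routine
# of the kind that, if published, anyone could inspect and test.
from collections import Counter

def tally(ballots):
    """Count one vote per ballot; reject ballots cast for unlisted candidates."""
    valid = {"Alice", "Bob"}  # hypothetical candidate list
    if any(b not in valid for b in ballots):
        raise ValueError("ballot for unlisted candidate")
    return Counter(ballots)

# A citizen-run audit: totals must equal the number of ballots cast,
# and the result must not depend on the order in which ballots are counted.
ballots = ["Alice", "Bob", "Alice", "Alice", "Bob"]
result = tally(ballots)
assert sum(result.values()) == len(ballots)
assert tally(list(reversed(ballots))) == result
print(result)  # Counter({'Alice': 3, 'Bob': 2})
```

With closed, proprietary code, even checks this trivial are impossible for anyone outside the vendor.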

Dexter and I continue:

The plethora of problems attributed to the closed nature of electronic voting machines in the 2004 U.S. presidential election illustrates the ramifications of tolerating such an opaque process. For example, 30 percent of the total votes were cast on machines that lacked ballot-based audit trails, making accurate recounts impossible….these machines are vulnerable to security hacks, as they rely in part on obscurity….Analyses of code very similar to that found in these machines reported that the voting system should not be used in elections as it failed to meet even the most minimal of security standards.

There is a fundamental political problem here:

The opaqueness of these machines’ design is a secret compact between governments and manufacturers of electronic voting machines, who alone are privy to the details of the voting process.

The solution, unsurprisingly, is one that calls for greater transparency; the use of free and open source software–which can be copied, modified, shared, distributed by anyone–emerges as an essential requirement for electronic voting machines.

The voting process and its infrastructure should be a public enterprise, run by a non-partisan Electoral Commission with its operational procedures and functioning transparent to the citizenry. Citizens’ forums demand open code in electoral technology…that vendors “provide election officials with access to their source code.” Access to this source code provides the polity an explanation of how voting results are reached, just as publicly available transcripts of congressional sessions illustrate governmental decision-making. The use of FOSS would ensure that, at minimum, technology is held to the same standards of openness.

So long as our voting machines run secret, proprietary software, our electoral process remains hackable–not just by Russian hackers but also by anyone who wishes to subvert the process to help realize their own political ends.

Will Artificial Intelligence Create More Jobs Than It Eliminates? Maybe

Over at the MIT Sloan Management Review, H. James Wilson, Paul R. Daugherty, and Nicola Morini-Bianzino strike an optimistic note as they respond to the “distressing picture” created by “the threat that automation will eliminate a broad swath of jobs across the world economy [for] as artificial intelligence (AI) systems become ever more sophisticated, another wave of job displacement will almost certainly occur.” Wilson, Daugherty, and Morini-Bianzino suggest that we’ve been “overlooking” the fact that “many new jobs will also be created — jobs that look nothing like those that exist today.” They go on to identify three key positions:

Trainers

This first category of new jobs will need human workers to teach AI systems how they should perform — and it is emerging rapidly. At one end of the spectrum, trainers help natural-language processors and language translators make fewer errors. At the other end, they teach AI algorithms how to mimic human behaviors.

Explainers

The second category of new jobs — explainers — will bridge the gap between technologists and business leaders. Explainers will help provide clarity, which is becoming all the more important as AI systems’ opaqueness increases.

Sustainers

The final category of new jobs our research identified — sustainers — will help ensure that AI systems are operating as designed and that unintended consequences are addressed with the appropriate urgency.

Unfortunately for these claims, it is all too evident that these positions themselves are vulnerable to automation. That is, future trainers, explainers, and sustainers–if their job descriptions match the ones given above–will also be automated. Nothing in those descriptions indicates that humans are indispensable components of the learning processes they are currently facilitating.

Consider that the ‘empathy trainer’ Koko is a machine-learning system which “can help digital assistants such as Apple’s Siri and Amazon’s Alexa address people’s questions with sympathy and depth.” Currently, Koko is being trained by a human, but it will then go on to teach Siri and Alexa. Teachers are teaching future teachers how to be teachers, after which those teachers will teach on their own. Moreover, there is no reason why chatbots, which currently need human trainers, cannot be trained–as they already are–on increasingly large corpora of natural language texts, like the collections of the world’s libraries (which consist of human inputs).

Or consider that explainers may rely on an algorithmic system like “the Local Interpretable Model-Agnostic Explanations (LIME), which explains the underlying rationale and trustworthiness of a machine prediction”; human analysts use its output to “pinpoint the data that led to a particular result.” But why wouldn’t LIME’s functioning be enhanced to do the relevant ‘pinpointing’ itself, thus obviating the need for a human analyst?
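To see how little of the ‘pinpointing’ already requires a human, consider a minimal sketch using the open-source Python lime package; the dataset and model here are stand-ins chosen for brevity, not the systems the Sloan piece describes.

```python
# A minimal sketch of the kind of 'pinpointing' LIME already automates:
# it ranks, for a single prediction, the features that drove the result.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain one prediction; the ranked feature weights are machine-generated.
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The human analyst’s remaining task–reading off which features mattered–is exactly the part the tool already produces on its own.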

Finally, note that “AI may become more self-governing….an AI prototype named Quixote…can learn about ethics by reading simple stories….the system is able to “reverse engineer” human values through stories about how humans interact with one another. Quixote has learned…why stealing is not a good idea….even given such innovations, human ethics compliance managers will play a critical role in monitoring and helping to ensure the proper operation of advanced systems.” This sounds like the roles and responsibilities for humans in this domain will decrease over time.

There is no doubt that artificial intelligence systems require human inputs, supervision, training, and maintenance; but those roles are diminishing, not increasing, and short-term, local requirements for human skills will not necessarily translate into long-term, global ones. The economic impact of the coming robotics and artificial intelligence ‘revolution’ is still unclear.

Report On Brooklyn College Teach-In On ‘Web Surveillance And Security’

Yesterday, as part of ‘The Brooklyn College Teach-In & Workshop Series on Resistance to the Trump Agenda,’ I facilitated a teach-in on the topic of ‘web surveillance and security.’ During my session I made note of some of the technical and legal issues at play in these domains, and of how technology and law have conspired to ensure that: a) we live in a regime of constant, pervasive surveillance; b) current legal protections–including the disastrous ‘third-party doctrine‘ and the rubber-stamping of governmental surveillance ‘requests’ by FISA courts–are simply inadequate to safeguard our informational and decisional privacy; c) there is no daylight between the government and large corporations in their use and abuse of our personal information. (I also pointed my audience to James Grimmelmann‘s excellent series of posts on protecting digital privacy, which began the day after Donald Trump was elected and continued right up to the inauguration. In those posts, Grimmelmann links to ‘self-defense’ resources provided by the Electronic Frontier Foundation and Ars Technica.)

I began my talk by describing how the level of surveillance desired by secret police organizations of the past–like the East German Stasi, for instance–was now available to the NSA, CIA, and FBI, because of social networking systems; our voluntary provision of every detail of our lives to these systems is a spook’s delight. For instance, the photographs we upload to Facebook will, eventually, make their way into the gigantic corpus of learning data used by law enforcement agencies’ facial recognition software.
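To give a sense of how low the technical bar is, here is a minimal sketch using the open-source face_recognition Python library; the file names are hypothetical placeholders, and this illustrates the general technique, not any agency’s actual system.

```python
# Illustrative sketch: matching a voluntarily uploaded profile photo
# against faces found in another image. File names are placeholders.
import face_recognition

# Encode the face in a photo a user has uploaded to a social network.
profile_image = face_recognition.load_image_file("uploaded_profile_photo.jpg")
profile_encoding = face_recognition.face_encodings(profile_image)[0]

# Encode every face found in some other image, e.g. a crowd photo.
crowd_image = face_recognition.load_image_file("crowd_photo.jpg")
crowd_encodings = face_recognition.face_encodings(crowd_image)

# Compare each face in the crowd photo against the uploaded profile photo.
for i, encoding in enumerate(crowd_encodings):
    match = face_recognition.compare_faces([profile_encoding], encoding)[0]
    print(f"face {i}: {'match' if match else 'no match'}")
```

A few dozen lines of freely available code, plus the photos we hand over ourselves, is all the raw material such matching requires; at scale, the same idea feeds the training corpora of far more capable systems.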

During the ensuing discussion I remarked that traditional activism directed at increasing privacy protections–or the enacting of ‘self-defense’ measures–should be part of a broader strategy aimed at reversing the so-called ‘asymmetric panopticon‘: citizens need to demand ‘surveillance’ in the other direction, back at government and corporations. For the former, this would mean pushing back against the current classification craze, which sees an increasing number of documents marked ‘Secret,’ ‘Top Secret,’ or some other risible security level–and which results in absurd sentences being levied on those who, like Chelsea Manning, violate such constraints; for the latter, this entails demanding that corporations offer greater transparency about their data collection, usage, and analysis–and are not able to easily rely on the protection of trade secret law in claiming that these techniques are ‘proprietary.’ This ‘push back,’ of course, relies on changing the nature of the discourse surrounding governmental and corporate secrecy, which is all too often able to offer facile arguments that link secrecy and security, or secrecy and business strategy. In many ways, this might be the most onerous challenge of all; all too many citizens are still persuaded by the ludicrous ‘if you’ve done nothing illegal you’ve got nothing to hide’ and ‘knowing everything about you is essential for us to keep you safe (or sell you goods)’ arguments.

Note: After I finished my talk and returned to my office, I received an email from one of the attendees who wrote: