The Distinct Relief Of Being (Partially) ‘Off-Line’

I’ve been off blogging for a while, and for good reason: I’d been traveling and did not bother to try to stay online during my travels. Interestingly enough, had I exerted myself ever so slightly in this regard, I could have maintained a minimal presence online here at this blog by posting a quick photo or two–you know, the ones that let you know what you are missing out on–or perhaps even a couple of sentences on my various journeys, which might even have risen above the usual ‘oh my god, my mind is blown’ reactions to spectacular landscapes; network connectivity has improved, and we are ever more accessible even as we venture forth into the ‘outdoors’; after all, doesn’t it seem obligatory for travelers to remote ends of the earth to keep us informed of every weekly, daily, hourly increment in their progress? (Some five years ago, I’d enforced a similar hiatus on this blog; then, staying offline was easier, as my cellphone rarely found a signal on my road trip through the American West.)

But indolence and, even more importantly, relief at the cessation of the burden of staying ‘online’ and ‘updated’ and ‘current’ and ‘visible’ kicked in all too soon; my hand drifted from the wheel, content to let this blog’s count of days without a new post rack up ever so steadily, and to let my social media ‘updates’ become ever more sporadic: I posted no links on Facebook; I only occasionally dispensed some largesse to my ‘friends’ in the form of a ‘like’ or a ‘love’; my tweeting came to a grinding halt. Like many others who have made note of the experience of going ‘off-line’ in some shape or form, I experienced relief of a very peculiar and particular kind. I continued to check email obsessively; I sent text messages to my family and video chatted with my wife and daughter when we were separated from each other. Nothing quite brought home the simultaneous remoteness and connectedness of my location in northwest Iceland like being able to chat in crystal-clear video, from a spot eight arc-minutes south of the Arctic Circle, with my chirpy daughter back in Brooklyn. This connectedness helps keep us safe, of course; while hiking alone in Colorado, I was able to inform my local friends of my arrivals at summits, my times of commencing the return, and then my arrival back at the trailhead; for that measure of anxiety reduction, I’m truly grateful.

Now, I’m back, desk-bound again. Syllabi await completion; draft book manuscripts call me over to inspect their discombobulated state; unanswered email stacks rise ominously; textbook order reminders frown at me. It will take some time for me to plow my way out from under this pile; writing on this blog will help reduce the inevitable anxiety that will accompany these salvage operations. (Fortunately, I have not returned overweight and out of shape; thanks to my choice of activities on my travels, those twin post-journey curses have not been part of my fate this summer.)

On to the rest of the summer and then, the fall.

Proprietary Software And Our Hackable Elections

Bloomberg reports that:

Russia’s cyberattack on the U.S. electoral system before Donald Trump’s election was far more widespread than has been publicly revealed, including incursions into voter databases and software systems in almost twice as many states as previously reported. In Illinois, investigators found evidence that cyber intruders tried to delete or alter voter data. The hackers accessed software designed to be used by poll workers on Election Day, and in at least one state accessed a campaign finance database….the Russian hackers hit systems in a total of 39 states

In Decoding Liberation: The Promise of Free and Open Source Software, Scott Dexter and I wrote:

Oversight of elections, considered by many to be the cornerstone of modern representational democracies, is a governmental function; election commissions are responsible for generating ballots; designing, implementing, and maintaining the voting infrastructure; coordinating the voting process; and generally insuring the integrity and transparency of the election. But modern voting technology, specifically that of the computerized electronic voting machine that utilizes closed software, is not inherently in accord with these norms. In elections supported by these machines, a great mystery takes place. A citizen walks into the booth and “casts a vote.” Later, the machine announces the results. The magical transformation from a sequence of votes to an electoral decision is a process obscure to all but the manufacturers of the software. The technical efficiency of the electronic voting process becomes part of a package that includes opacity and the partial relinquishing of citizens’ autonomy.

This “opacity” has always meant that the software used to, quite literally, keep our democracy running has its quality and operational reliability vetted, not by the people or their chosen representatives, but only by the vendor selling the code to the government. There is no possibility of, say, a fleet of ‘white-hat’ hackers–concerned citizens–putting the voting software through its paces, checking for security vulnerabilities and points of failure–the kinds that hostile ‘black-hat’ hackers, working for a foreign entity like, say, Russia, could exploit. These concerns are not new.

Dexter and I continue:

The plethora of problems attributed to the closed nature of electronic voting machines in the 2004 U.S. presidential election illustrates the ramifications of tolerating such an opaque process. For example, 30 percent of the total votes were cast on machines that lacked ballot-based audit trails, making accurate recounts impossible….these machines are vulnerable to security hacks, as they rely in part on obscurity….Analyses of code very similar to that found in these machines reported that the voting system should not be used in elections as it failed to meet even the most minimal of security standards.

There is a fundamental political problem here:

The opaqueness of these machines’ design is a secret compact between governments and manufacturers of electronic voting machines, who alone are privy to the details of the voting process.

The solution, unsurprisingly, is one that calls for greater transparency; the use of free and open source software–which can be copied, modified, shared, and distributed by anyone–emerges as an essential requirement for electronic voting machines.

The voting process and its infrastructure should be a public enterprise, run by a non-partisan Electoral Commission with its operational procedures and functioning transparent to the citizenry. Citizens’ forums demand open code in electoral technology…that vendors “provide election officials with access to their source code.” Access to this source code provides the polity an explanation of how voting results are reached, just as publicly available transcripts of congressional sessions illustrate governmental decision-making. The use of FOSS would ensure that, at minimum, technology is held to the same standards of openness.

So long as our voting machines run secret, proprietary software, our electoral process remains hackable–not just by Russian hackers but also by anyone who wishes to subvert the process to help realize their own political ends.
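What would the public vetting made possible by open code even look like? Here is a toy sketch in Python–my illustration, not anything from Decoding Liberation, and nothing like real election software–of the simplest such check: when the tallying logic is published, any citizen can re-run it against the voter-verified paper record and confirm that the machine’s totals hold up.

```python
# A toy illustration (hypothetical names throughout): with open code,
# anyone can re-run the published tallying logic against the paper
# audit trail and confirm that the machine's totals hold up.
from collections import Counter

def tally(ballots, candidates):
    """Count one vote per ballot, ignoring anything not on the ballot listing."""
    counts = Counter()
    for ballot in ballots:
        if ballot in candidates:
            counts[ballot] += 1
    return counts

def recount_matches(machine_record, paper_record, candidates):
    """An open recount: totals from both records must agree exactly."""
    return tally(machine_record, candidates) == tally(paper_record, candidates)

candidates = {"Candidate A", "Candidate B"}
machine = ["Candidate A", "Candidate B", "Candidate A"]
paper = ["Candidate A", "Candidate B", "Candidate A"]
print(recount_matches(machine, paper, candidates))  # True; a discrepancy prints False
```

The point of the sketch is not its (trivial) logic but its visibility: with closed software, even a check this simple is unavailable to the citizenry.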

Will Artificial Intelligence Create More Jobs Than It Eliminates? Maybe

Over at the MIT Sloan Management Review, H. James Wilson, Paul R. Daugherty, and Nicola Morini-Bianzino strike an optimistic note as they respond to the “distressing picture” created by “the threat that automation will eliminate a broad swath of jobs across the world economy [for] as artificial intelligence (AI) systems become ever more sophisticated, another wave of job displacement will almost certainly occur.” Wilson, Daugherty, and Morini-Bianzino suggest that we’ve been “overlooking” the fact that “many new jobs will also be created — jobs that look nothing like those that exist today.” They go on to identify three key positions:

Trainers

This first category of new jobs will need human workers to teach AI systems how they should perform — and it is emerging rapidly. At one end of the spectrum, trainers help natural-language processors and language translators make fewer errors. At the other end, they teach AI algorithms how to mimic human behaviors.

Explainers

The second category of new jobs — explainers — will bridge the gap between technologists and business leaders. Explainers will help provide clarity, which is becoming all the more important as AI systems’ opaqueness increases.

Sustainers

The final category of new jobs our research identified — sustainers — will help ensure that AI systems are operating as designed and that unintended consequences are addressed with the appropriate urgency.

Unfortunately for these claims, it is all too evident that these positions themselves are vulnerable to automation. That is, future trainers, explainers, and sustainers–if their job descriptions match the ones given above–will also be automated. Nothing in those descriptions indicates that humans are indispensable components of the learning processes they are currently facilitating.

Consider that the ‘empathy trainer’ Koko is a machine-learning system which “can help digital assistants such as Apple’s Siri and Amazon’s Alexa address people’s questions with sympathy and depth.” Currently Koko is being trained by a human; but it will then go on to teach Siri and Alexa. Teachers are teaching future teachers how to be teachers, after which those teachers will teach on their own. Moreover, there is no reason why chatbots, which currently need human trainers, cannot be trained–as they already are–on increasingly large corpora of natural language texts like the collections of the world’s libraries (which consist of human inputs).
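To make the worry concrete, consider how little human supervision even a crude corpus-driven text generator needs. The sketch below is a deliberately toy bigram model–my illustration, not anything Koko, Apple, or Amazon actually uses–whose only ‘trainer’ is the text it is fed:

```python
# A deliberately crude sketch: a bigram model 'trained' on raw text alone.
# No human labels anything here; the corpus is the only teacher.
import random
from collections import defaultdict

def train(corpus_text):
    """Build a table mapping each word to the words observed to follow it."""
    words = corpus_text.split()
    followers = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def generate(followers, seed, length=10):
    """Emit text by repeatedly sampling a plausible next word."""
    word, output = seed, [seed]
    for _ in range(length):
        if word not in followers:
            break
        word = random.choice(followers[word])
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the cat slept on the rug"
model = train(corpus)
print(generate(model, "the"))  # e.g. "the cat slept on the mat and the cat sat on"
```

Scale the corpus up from one sentence to the world’s libraries and the human ‘trainer’ recedes further still.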

Or consider that explainers may rely on an algorithmic system like “the Local Interpretable Model-Agnostic Explanations (LIME), which explains the underlying rationale and trustworthiness of a machine prediction”; human analysts use its output to “pinpoint the data that led to a particular result.” But why wouldn’t LIME’s functioning be enhanced to do the relevant ‘pinpointing’ itself, thus obviating the need for a human analyst?
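For readers who want to see what an analyst’s use of LIME looks like in practice, here is a minimal sketch using the open-source Python lime package with a scikit-learn classifier; the tiny dataset and model are stand-ins of my own devising:

```python
# Minimal sketch: LIME explaining a text classifier's prediction.
# Assumes `pip install lime scikit-learn`; the toy dataset is a stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

texts = ["great product, works well", "terrible, broke at once",
         "works perfectly, love it", "awful quality, returned it"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# A simple classifier whose predictions we want explained.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

# LIME perturbs the input and fits a local, interpretable surrogate model,
# then reports which words most influenced this particular prediction.
explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    "works well, great quality", pipeline.predict_proba, num_features=4)
print(explanation.as_list())  # (word, weight) pairs an analyst can inspect
```

Notice that the final, human step–reading off the (word, weight) pairs and ‘pinpointing’ the influential data–is exactly the step that looks most automatable.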

Finally, note that “AI may become more self-governing….an AI prototype named Quixote…can learn about ethics by reading simple stories….the system is able to “reverse engineer” human values through stories about how humans interact with one another. Quixote has learned…why stealing is not a good idea….even given such innovations, human ethics compliance managers will play a critical role in monitoring and helping to ensure the proper operation of advanced systems.” This suggests that the roles and responsibilities of humans in this domain will decrease over time.

There is no doubt that artificial intelligence systems require human inputs, supervision, training, and maintenance; but those roles are diminishing, not increasing; short-term, local requirements for human skills will not necessarily translate into long-term, global ones. The economic impact of the coming robotics and artificial intelligence ‘revolution’ is still unclear.

Report On Brooklyn College Teach-In On ‘Web Surveillance And Security’

Yesterday, as part of ‘The Brooklyn College Teach-In & Workshop Series on Resistance to the Trump Agenda,’ I facilitated a teach-in on the topic of ‘web surveillance and security.’ During my session I made note of some of the technical and legal issues at play in these domains, and how technology and law have conspired to ensure that: a) we live in a regime of constant, pervasive surveillance; b) current legal protections–including the disastrous ‘third-party doctrine’ and the rubber-stamping of governmental surveillance ‘requests’ by FISA courts–are simply inadequate to safeguard our informational and decisional privacy; c) there is no daylight between the government and large corporations in their use and abuse of our personal information. (I also pointed my audience to James Grimmelmann’s excellent series of posts on protecting digital privacy, which began the day after Donald Trump was elected and continued right up to the inauguration. In those posts, Grimmelmann links to ‘self-defense’ resources provided by the Electronic Frontier Foundation and Ars Technica.)

I began my talk by describing how the level of surveillance desired by secret police organizations of the past–like the East German Stasi, for instance–was now available to the NSA, CIA, and FBI, because of social networking systems; our voluntary provision of every detail of our lives to these systems is a spook’s delight. For instance, the photographs we upload to Facebook will, eventually, make their way into the gigantic corpus of learning data used by law enforcement agencies’ facial recognition software.

During the ensuing discussion I remarked that traditional activism directed at increasing privacy protections–or the enacting of ‘self-defense’ measures–should be part of a broader strategy aimed at reversing the so-called ‘asymmetric panopticon’: citizens need to demand ‘surveillance’ in the other direction, back at government and corporations. For the former, this would mean pushing back against the current classification craze, which sees an increasing number of documents marked ‘Secret,’ ‘Top Secret,’ or some other risible security level–and which results in absurd sentences being imposed on those who, like Chelsea Manning, violate such constraints; for the latter, it entails demanding that corporations offer greater transparency about their data collection, usage, and analysis–and that they not be able to rely so easily on the protection of trade secret law in claiming that these techniques are ‘proprietary.’ This ‘push back,’ of course, relies on changing the nature of the discourse surrounding governmental and corporate secrecy, which is all too often able to offer facile arguments that link secrecy and security, or secrecy and business strategy. In many ways, this might be the most onerous challenge of all; all too many citizens are still persuaded by the ludicrous ‘if you’ve done nothing illegal you’ve got nothing to hide’ and ‘knowing everything about you is essential for us to keep you safe (or sell you goods)’ arguments.

Note: After I finished my talk and returned to my office, I received an email from one of the attendees who wrote:

No, Aristotle Did Not ‘Create’ The Computer

For the past few days, an essay titled “How Aristotle Created The Computer” (The Atlantic, March 20, 2017, by Chris Dixon) has been making the rounds. It begins with the following claim:

The history of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century. Mathematical logic was pioneered by philosopher-mathematicians, most notably George Boole and Gottlob Frege, who were themselves inspired by Leibniz’s dream of a universal “concept language,” and the ancient logical system of Aristotle.

Dixon then goes on to trace this ‘history of ideas,’ showing how the development–and increasing formalization and rigor–of logic contributed to the development of computer science and the first computing devices. Along the way, Dixon makes note of the contributions–direct and indirect–of: Claude Shannon, Alan Turing, George Boole, Euclid, René Descartes, Gottlob Frege, David Hilbert, Gottfried Leibniz, Bertrand Russell, Alfred Whitehead, Alonzo Church, and John von Neumann. This potted history is exceedingly familiar to students of the foundations of computer science–a demographic that includes computer scientists, philosophers, and mathematical logicians–but presumably that is not the audience Dixon is writing for; those students might wonder why Augustus De Morgan and Charles Peirce do not feature in it. Given this temporally extended history, with its many contributors and their diverse contributions, why does the article carry the headline “How Aristotle Created the Computer”? Aristotle did not create the computer or anything like it; he did make important contributions to a fledgling field, one that took several more centuries to develop into maturity. (The contributions to this field by logicians and systems of logic of alternative philosophical traditions, like the Indian one, are, as per usual, studiously ignored in Dixon’s history.) And as a philosopher, I cannot resist asking: what do you mean by ‘created’? What counts as ‘creating’?

The easy answer is that it is clickbait. Fair enough. We are by now used to the idiocy of the misleading clickbait headline, one designed to ‘attract’ more readers by making an article seem more ‘interesting’; authors very often have little choice in this matter, and very often have to watch helplessly as hit-hungry editors mangle the impact of the actual content of their work. (As in this case?) But it is worth noting this headline’s contribution to the pernicious notion of the ‘creation’ of the computer, and to the idea that it is possible to isolate a singular figure as its creator–a clear hangover of a religious sentiment that things that exist must have creation points, ‘beginnings,’ and creators. It is yet another contribution to the continued mistaken recounting of the history of science as a story of ‘towering figures.’ (Incidentally, I do not agree with Dixon that the history of computers is “better understood as a history of ideas”; that history is, instead, an integral component of the history of computing in general, which also includes a social history and an economic one; telling a history of computing as a history of objects is a perfectly reasonable thing to do when we remember that actual, functioning computers are physical instantiations of abstract notions of computation.)
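A small aside that may make the ‘physical instantiation’ point vivid (my illustration, not Dixon’s): the algebra Boole formalized is precisely what a computer’s circuits realize, down to a one-bit adder built from nothing but logical connectives.

```python
# A toy illustration: Boole's algebra is what computing hardware instantiates.
# A half-adder built from nothing but Boolean operations.
def half_adder(a: bool, b: bool):
    """Add two one-bit values using only logical connectives."""
    sum_bit = a != b        # XOR: the sum without the carry
    carry = a and b         # AND: the carry-out
    return sum_bit, carry

for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"{int(a)} + {int(b)} = carry {int(c)}, sum {int(s)}")
```

Chain enough of these together and you have arithmetic; no single ‘creator’ required.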

To end on a positive note, here are some alternative headlines: “Philosophy and Mathematics’ Contributions To The Development of Computing”; “How Philosophers and Mathematicians Helped Bring Us Computers”; or “How Philosophical Thinking Makes The Computer Possible.” None of these are as ‘sexy’ as the original headline, but they are far more informative and accurate.

Note: What do you think of my clickbaity headline for this post?

The ‘True Image Of A Writer’ And Online Writing

Shortly after I first began writing on the ‘Net–way back in 1988–I noticed that there was, very often, a marked contrast between the online and offline personas of some of the writers I encountered online. (I am referring to a small subset of the writers I read online; these were folks who worked with me in campus research labs but also wrote actively on Usenet newsgroups.) One of the most stunning contrasts was provided by a pair of young men, both brilliant programmers, but also afflicted with terrible stutters. Conversations with them were invariably affairs requiring a great deal of patience on the part of their interlocutors; their stutters very frequently derailed their attempts to communicate coherently. (I had suffered from a stutter myself once, as a pre-teen, so I instantly sympathized with them, even as I did my best, at times, to decipher their speech.)

This was not the case online. Both wrote brilliantly and voluminously online; they wrote long and short pieces; they wrote on politics and technical matters alike with style and verve; they possessed a caustic sense of humor and were not afraid to put it on display. Quite simply, they were different persons online. One of them met his future wife online; she wrote from South America; he from New Jersey; she fell in love with ‘him,’ with his online persona, and traveled to the US to meet him; when she met him in person and encountered his stutter for the first time, she–as she put it herself later–realized it was too late, because she had already fallen in love with him. The unpleasant converse of the situation I describe here is the internet troll, the keyboard warrior, who ‘talks big’ online, and uses the online forum as an outlet for his misanthropy and aggression–all the while being a singularly meek and timid and physically uninspiring person offline. The very anonymity that makes the troll possible is, of course, what lets the silenced and intimidated speak up online. Without exaggeration, my memory of these gentlemen, and of the many other instances I observed of shy and reticent folks finding their voices online, has informed my resistance to facile claims that traditional, in-class, face-to-face education is invariably superior to online education. Any modality of instruction that could provide a voice to the voiceless was doing something right.

In Moments of Reprieve: A Memoir of Auschwitz (Penguin, New York, 1986) Primo Levi writes:

Anyone who has the opportunity to compare the true image of a writer with what can be deduced from his writings knows how frequently they do not coincide. The delicate investigator of movements of the spirit, vibrant as an oscillating circuit, proves to be a pompous oaf, morbidly full of himself, greedy for money and adulation, blind to his neighbor’s suffering. The orgiastic and sumptuous poet, in Dionysiac communion with the universe, is an abstinent, abstemious little man, not by ascetic choice but by medical prescription.

The ‘true image of the writer’ is an ambiguous notion; online writing has made it just a little more so.

Note: I wonder if Levi had Nietzsche in mind in his second example above.