Tesla’s ‘Irma Update’ Shows The Dangers Of Proprietary Software

By now, you know the story. Tesla magically (remotely) updated the software of its cars during Hurricane Irma:

Tesla remotely sent a free software update to some drivers across Florida over the weekend, extending the battery capacity of cars and giving extra range to those fleeing Hurricane Irma.

According to reports, the update temporarily unlocked the full-battery potential for 75-kilowatt-hour Model S sedans and Model X SUVs, adding around 30 to 40 miles to their range.

“Cars with a 75-kilowatt-hour battery pack were previously software limited to 210 miles of driving range per single charge and will now get 249 miles, the full range capacity of the battery,” the company wrote on a blog.

As is evident from this description, the software regulating battery life is ‘autonomous’ of the user: the user cannot change it or tweak it in any way to reflect changing needs or driving conditions (like, say, the need to drive to a distant point in order to escape a potentially life-threatening change in the weather). In short, the software that runs on Tesla’s cars is not ‘free’–not in the sense that you have to pay money for it, but in the sense that you cannot do what you, as the user of the software, might want to do with it: share it, copy it, modify it. If users need ‘help,’ they must wait for the benevolent corporation to come to their aid.
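The mechanism at issue is easy to picture: the usable capacity is not a hardware limit but a value held in vendor-controlled software. A purely illustrative sketch–all names and numbers here are invented, and none of this reflects Tesla’s actual code–might look like this:

```python
# Purely illustrative: a software-enforced battery cap of the kind
# described above. All names and numbers are hypothetical.

PHYSICAL_CAPACITY_KWH = 75.0   # what the hardware can actually store


class BatteryController:
    def __init__(self):
        # The cap lives in vendor-controlled firmware/configuration,
        # not in anything the owner can edit.
        self._software_cap_kwh = 60.0

    def usable_capacity(self):
        # The owner only ever sees the capped value.
        return min(PHYSICAL_CAPACITY_KWH, self._software_cap_kwh)

    def apply_vendor_update(self, new_cap_kwh):
        # Only the vendor's remote, signed update path calls this.
        self._software_cap_kwh = new_cap_kwh


controller = BatteryController()
print(controller.usable_capacity())   # the owner's everyday limit
controller.apply_vendor_update(75.0)  # the remote 'Irma update'
print(controller.usable_capacity())   # full range, at the vendor's pleasure
```

The point of the sketch is the asymmetry: the owner can observe `usable_capacity()` but has no handle on `apply_vendor_update()`; with free software, that boundary would be the owner’s to redraw.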

We, as software users, are used to this state of affairs. Most of the software we use is indeed not ‘free’ in this sense: the source code is kept a trade secret and cannot be inspected to figure out how it does what it does; the binary executables are copyrighted and cannot be copied; and the software’s algorithms are patented. You cannot read the code, you cannot change it to better reflect your needs, and you cannot make copies of something you ‘own’ to give to others who might need it. As software users eventually come to realize, you don’t ‘own’ proprietary software in the traditional sense of the term; you license it for a limited period of time, subject to many constraints, some reasonable, others not.

In an interview with 3:AM Magazine about my book Decoding Liberation: The Promise of Free and Open Source Software, I made note of some of the political implications of the way software is regulated by law. The following exchange sums up the issues at play:

3:AM: One aspect of the book that was particularly interesting to me was your vision of a world full of code, a cyborg world where ‘distinctions between human and machine evanesce’ and where ‘personal and social freedoms in this domain are precisely the freedoms granted or restricted by software.’ Can you say something about what you argued for there?

SC: I think what we were trying to get at was that it seemed the world was increasingly driven by software, which underwrote a great deal of the technology that extends us and makes our cyborg selves possible. In the past, our cyborg selves were constructed by things like eyeglasses, pencils, abacuses and the like—today, by smartphones, wearable computers, tablets and other devices like them. These are all driven by software. So our extended mind, our extended self, is very likely to be largely a computational device. Who controls that software? Who writes it? Who can modify it? Look at us today, tethered to our machines, unable to function without them, using software written by someone else. How free can we be if we don’t have some very basic control over this technology? If the people who write the software are the ones who have exclusive control over it, then I think we are giving up some measure of freedom in this cyborg society. Remember that we can enforce all sorts of social control over people by writing it into the machines that they use for all sorts of things. Perhaps our machines of tomorrow will come with porn filters embedded in the code that we cannot remove; perhaps with code in the browsers that mark off portions of the Net as forbidden territory, perhaps our reading devices will not let us read certain books, perhaps our smartphones will not let us call certain numbers, perhaps prosthetic devices will not function in ‘no-go zones’, perhaps the self-driving cars of tomorrow will not let us drive faster than a certain speed; the control possibilities are endless. The more technologized we become and the more control we hand over to those who can change the innards of the machines, the less free we are. What are we to do? Just comply? This all sounds very sci-fi, but then, so would most of contemporary computing to folks fifty years ago. We need to be in charge of the machines that we use, that are our extensions.

We, in short, should be able to hack ourselves.

Tesla’s users were not free during Irma; they were at the mercy of the company, which in this case, came to their aid. Other users, of other technologies, might not be so fortunate; they might not be the masters of their destiny.

Proprietary Software And Our Hackable Elections

Bloomberg reports that:

Russia’s cyberattack on the U.S. electoral system before Donald Trump’s election was far more widespread than has been publicly revealed, including incursions into voter databases and software systems in almost twice as many states as previously reported. In Illinois, investigators found evidence that cyber intruders tried to delete or alter voter data. The hackers accessed software designed to be used by poll workers on Election Day, and in at least one state accessed a campaign finance database….the Russian hackers hit systems in a total of 39 states

In Decoding Liberation: The Promise of Free and Open Source Software, Scott Dexter and I wrote:

Oversight of elections, considered by many to be the cornerstone of modern representational democracies, is a governmental function; election commissions are responsible for generating ballots; designing, implementing, and maintaining the voting infrastructure; coordinating the voting process; and generally insuring the integrity and transparency of the election. But modern voting technology, specifically that of the computerized electronic voting machine that utilizes closed software, is not inherently in accord with these norms. In elections supported by these machines, a great mystery takes place. A citizen walks into the booth and “casts a vote.” Later, the machine announces the results. The magical transformation from a sequence of votes to an electoral decision is a process obscure to all but the manufacturers of the software. The technical efficiency of the electronic voting process becomes part of a package that includes opacity and the partial relinquishing of citizens’ autonomy.

This “opacity” has always meant that the software used to, quite literally, keep our democracy running has its quality and operational reliability vetted not by the people, or their chosen representatives, but only by the vendor selling the code to the government. There is no possibility of, say, a fleet of ‘white-hat’ hackers–concerned citizens–putting the voting software through its paces, checking for the security vulnerabilities and points of failure that hostile ‘black-hat’ hackers, working for a foreign entity like, say, Russia, could exploit. These concerns are not new.

Dexter and I continue:

The plethora of problems attributed to the closed nature of electronic voting machines in the 2004 U.S. presidential election illustrates the ramifications of tolerating such an opaque process. For example, 30 percent of the total votes were cast on machines that lacked ballot-based audit trails, making accurate recounts impossible….these machines are vulnerable to security hacks, as they rely in part on obscurity….Analyses of code very similar to that found in these machines reported that the voting system should not be used in elections as it failed to meet even the most minimal of security standards.

There is a fundamental political problem here:

The opaqueness of these machines’ design is a secret compact between governments and manufacturers of electronic voting machines, who alone are privy to the details of the voting process.

The solution, unsurprisingly, is one that calls for greater transparency; the use of free and open source software–which can be copied, modified, shared, and distributed by anyone–emerges as an essential requirement for electronic voting machines.

The voting process and its infrastructure should be a public enterprise, run by a non-partisan Electoral Commission with its operational procedures and functioning transparent to the citizenry. Citizens’ forums demand open code in electoral technology…that vendors “provide election officials with access to their source code.” Access to this source code provides the polity an explanation of how voting results are reached, just as publicly available transcripts of congressional sessions illustrate governmental decision-making. The use of FOSS would ensure that, at minimum, technology is held to the same standards of openness.

So long as our voting machines run secret, proprietary software, our electoral process remains hackable–not just by Russian hackers but by anyone who wishes to subvert the process to realize their own political ends.

Apple’s ‘Code Is Speech’ Argument, The DeCSS Case, And Free Software

In its ongoing battle with federal law enforcement agencies over its refusal to unlock the iPhone, Apple has mounted a ‘Code is Speech’ defense, arguing that “the First Amendment prohibits the government from compelling Apple to make code.” This has provoked some critical commentary, including an article by Neil Richards, which argues that Apple’s argument is “dangerous.”

Richards alludes to some previous legal wrangling over the legal status of computer code, but does not name names. Here is an excerpt from my book Decoding Liberation: The Promise of Free and Open Source Software (co-authored with Scott Dexter) that makes note of a relevant court decision and offers arguments for treating code as speech protected under the First Amendment. (To fully flesh out these arguments in their appropriate contexts, do read Chapters 4 and 5 of Decoding Liberation. I’d be happy to mail PDFs to anyone interested.)

A Rankings Tale (That Might Rankle)

This is a story about rankings. Not of philosophy departments but of law schools. It is only tangentially relevant to the current, ongoing debate in the discipline about the Philosophical Gourmet Report. Still, some might find it of interest. So, without further ado, here goes.

Half a dozen years ago, shortly after my book Decoding Liberation: The Promise of Free and Open Source Software had been published, and after I had begun work on attempting to develop the outlines of a legal theory for artificial intelligence, I considered applying to law school. For these projects, I had taught myself a bit of copyright, patent, and trade secret law; I had studied informational privacy, torts, contracts, knowledge attribution, agency law; but all of this was auto-didactic. Perhaps a formal education in law would help my further forays into legal theory (which continue to this day). Living in New York City meant I could have access to some top-class departments–NYU, Columbia, Yale–some of whose scholars would also make for good collaborators in my chosen field of study. I decided to go the whole hog: the LSAT and all the rest. (Yes, I know it sounds ghastly, but somehow I overcame my instinctive revulsion at the prospect of taking that damn test.)

An application for law school requires recommendation letters. I anticipated no difficulty with this. I knew a few legal scholars–professors at law schools–who were familiar with my work, and I hoped they would write letters for me, perhaps describing the work I had produced till that point in time. The response was gratifying; my acquaintances all said they’d be happy to write me letters. I went ahead with the rest of my application package, even as I had begun to feel that law school looked like an impractical proposition–thanks to its expense. Taking out loans would have meant a second mortgage, and that seemed a rather bizarre burden to take on.

In any case, I took the LSAT. I did not do particularly well. I used to be good at standardized tests back in my high school and undergraduate days, but not anymore. My score was a rather mediocre 163 (in the 90th percentile), clearly insufficient for admission to any of the departments I was interested in applying to. Still, I reasoned, perhaps the admissions committees would look past that score. Perhaps they’d consider my logical acumen as being adequately demonstrated by my publications in The Journal of Philosophical Logic; perhaps a doctorate in philosophy would show evidence of my ability to parse arguments and write; and I did have a contract for a book on legal theory. Perhaps that would outweigh this little lacuna.

One of my letter writers, a professor at Columbia Law School, invited me to have coffee with him to talk about my decision to go to law school. When we did so, he told me he had written me an excellent letter but he wondered whether law school was a good idea. He urged me to reconsider my decision, saying I would do better to stay on my auto-didactic path (and besides, the expenses were not inconsiderable). I said I had started to have second thoughts about the whole business and had not yet made up my mind. He then asked me my LSAT score. When I told him, he guffawed: I did not stand a snowball’s chance in hell of getting into the departments I was interested in. But, surely, I said, with a letter and a good word from you, and my publication record, I stood a chance. He guffawed again. Let me tell you a story, he said.

A few years prior, he had met a bright young computer science student, a graduate of a top engineering school, with an excellent GPA, who had wanted to study law at Columbia. He was interested in patent law, and had–if I remember correctly–even written a few essays on software patents, mounting a critique of existing regimes and outlining alternatives to them. He had asked my current interlocutor to write him a recommendation letter for Columbia. There was just one problem: his LSAT score was in the low 160s. Just like mine, not good enough for Columbia. Time to talk to the Dean, to see if perhaps an exception could be made in his case. The Dean was flabbergasted: there was no way such an exception could be made. But, my letter writer protested, this student met the profile for an ideal Columbia Law student, especially given his interests: he had a stellar undergraduate record in a relevant field, he had shown an aptitude for law, and he had overcome personal adversity to make it through college (his family was from a former Soviet republic, and he had immigrated with them to the US a few years before, after suffering considerable economic hardship). Couldn’t an exception be made in this case?

The Dean listened with some sympathy but said his hands were tied. Admitting a student with such an LSAT score would do damage to their ‘LSAT numbers’–the ones the US News and World Report used for law school rankings. Admitting a student with an LSAT score in the low 160s would mean finding someone with a score in the high 170s to make sure the ‘LSAT numbers’–their median value, for instance–remained unaffected. God forbid, if the ‘LSAT numbers’ were hit hard enough, NYU might overtake Columbia in the rankings next year. The fate of a Dean who had allowed NYU to slip past Columbia in the USNWR rankings did not bear thinking about. Sorry, there was little he could do. Ask your admittedly excellent student to apply elsewhere.
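The Dean’s arithmetic is worth making explicit: a reported statistic like the median holds steady only if an admit below it is offset by one above it. A toy illustration (the scores here are invented, not Columbia’s actual numbers):

```python
# Toy illustration of why a low-scoring admit must be 'offset' to
# protect a reported median LSAT. All numbers are invented.
import statistics

admitted = [168, 170, 171, 172, 174]   # hypothetical admitted class
print(statistics.median(admitted))      # 171

# Admit the low-160s applicant alone, and the median slips:
print(statistics.median(admitted + [162]))        # 170.5

# Pair that admit with a high-170s score, and the median is restored:
print(statistics.median(admitted + [162, 178]))   # 171
```

Hence the Dean’s insistence on hunting down a high-170s score for every low-160s exception: the rankings see only the published ‘LSAT numbers,’ not the applicants behind them.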

Nothing quite made up my mind not to go to law school like that story did. Still, my application was complete; test scores and letters were in. So I applied. And was rejected at every single school I applied to.