Over at Concurring Opinions, Deven Desai makes note of an interesting study–whose details I have not yet had the time to investigate–underwritten by the William and Flora Hewlett Foundation and conducted by a team of “experts in educational measurement and assessment, led by Dr. Mark Shermis, dean of the College of Education at The University of Akron.” The study claims to have found that,
A direct comparison between human graders and software designed to score student essays achieved virtually identical levels of accuracy, with the software in some cases proving to be more reliable [I am not quite sure what ‘reliable’ means here]
The reaction of at least one kind of college professor is, I suspect, likely to be: Hallelujah, no more grading! Another kind will mutter and grumble about the invasion of a domain of faculty privilege, the mechanization of a humanist skill, the loss to students of vital professorial feedback and so on. I’m not quite sure which camp I fall into.
The reason for that ambivalent response is that I find the business of grading papers (student writing assignments) genuinely perplexing. I’ve now been grading papers, on and off, for some fifteen years. (That is how long I have been teaching philosophy, first as a graduate teaching fellow, and then later, of course, as a full-time faculty member; before that my teaching was centered on computer science classes and there was little writing to grade.) In that time, I have never had a teaching assistant to help me with grading, but neither have I had to teach a class with more than thirty students in it. But twenty or so four- or six-page papers–the standard lengths of my assignments, of which I assign three in a typical philosophy class–is still plenty of work.
And that is so because, fifteen years on, I’m still not quite sure how to provide good feedback to my students. I find writing to be very hard work; I struggle with it constantly; I still remain terrified by the blank page. More to the point, when confronted by a piece of writing that doesn’t ‘read well,’ I don’t quite know how to instruct someone other than myself in the business of making it better. There is an exaggeration here, of course; I can point out problems of relevance (‘You haven’t addressed the question I asked!’); I can note elementary mistakes in spelling and grammar; I can point to mangled sentences and constructions that don’t make sense. And so on. But at the end of this process it still seems like there is something I haven’t managed to convey to my students. It is for this reason that I urge my students to consult with writing tutors, and to have their papers read by their friends (or even their parents, if they have time!).
The long and short of it is that I continue to find writing a bit of a mystery, and given that I find it so intractable, I find the task of teaching someone else how to do it nearly insuperable.
Any help would be much appreciated. Bring on the robotic graders!
I heard a radio piece about this. Here’s the link:
http://www.npr.org/blogs/alltechconsidered/2012/04/24/151308789/for-automatic-essay-graders-efficiency-trumps-accuracy
If I remember right: The computer did well with technical aspects of writing: grammar, subject-verb agreement, complexity of sentence structure, richness of word choice. But it had a horrible time with truth. “World War II was fought between 1861 and 1865 by zombie zebras” could very well pass muster with the computer. It would likely not with a human … unless it was a creative writing class, and then there is no objective scale for taste or style.
Peter,
Thanks for the link!
That’s sort of what I would expect from these kinds of graders – it might be nice to have them as preliminary screeners, possibly flagging other, more significant problems.