Some very interesting news from the trenches about robot graders: the piece makes a ‘strong case against using robo-graders for assigning grades and test scores’ but then goes on to observe:
But there’s another use for robo-graders — a role for them to play in which…they may not only be as good as humans, but better. In this role, the computer functions not as a grader but as a proofreader and basic writing tutor, providing feedback on drafts, which students then use to revise their papers before handing them in to a human.
Instructors at the New Jersey Institute of Technology have been using a program called E-Rater…and they’ve observed a striking change in student behavior…Andrew Klobucar, associate professor of humanities at NJIT, notes that students almost universally resist going back over material they’ve written. But [Klobucar’s] students are willing to revise their essays, even multiple times, when their work is being reviewed by a computer and not by a human teacher. They end up writing nearly three times as many words in the course of revising as students who are not offered the services of E-Rater, and the quality of their writing improves as a result…students who feel that handing in successive drafts to an instructor wielding a red pen is “corrective, even punitive” do not seem to feel rebuked by similar feedback from a computer….
The computer program appeared to transform the students’ approach to the process of receiving and acting on feedback…Comments and criticism from a human instructor actually had a negative effect on students’ attitudes about revision and on their willingness to write, the researchers note….interactions with the computer produced overwhelmingly positive feelings, as well as an actual change in behavior — from “virtually never” revising, to revising and resubmitting at a rate of 100 percent. As a result of engaging in this process, the students’ writing improved; they repeated words less often, used shorter, simpler sentences, and corrected their grammar and spelling. These changes weren’t simply mechanical. Follow-up interviews with the study’s participants suggested that the computer feedback actually stimulated reflectiveness in the students — which, notably, feedback from instructors had not done.
Why would this be? First, the feedback from a computer program like Criterion is immediate and highly individualized….Second, the researchers observed that for many students in the study, the process of improving their writing appeared to take on a game-like quality, boosting their motivation to get better. Third, and most interesting, the students’ reactions to feedback seemed to be influenced by the impersonal, automated nature of the software.
Not all interactions with fellow humans are positive; many features of conversation and face-to-face settings inhibit the full participation of those present. Some of these shortcomings can be compensated for, even directly addressed, by computerized, automated interlocutors (as in the classrooms described above). The history of online communication showed how new avenues for verbal and written expression opened to those inhibited in the physical spaces previously valorized; robot graders similarly promise to reveal interesting new personal dimensions of automation’s spaces for interaction.