Wednesday, May 04, 2005

So that's how they score those essays, or just add monkeys?

The online edition of the St. Louis Post-Dispatch reported on Sunday on the use of a software program to grade essays written for a sociology class. The professor, at the University of Missouri at Columbia, created his own software so students could submit their essays for grading by a computer. He does state that he and his TAs still grade all the final essays, but even then, he encourages students to run their papers through the machine first for a better chance at an A. Of course, those in favor of such systems praise their efficiency and the speed at which they produce a result. Those against argue that something as complicated as grading an essay is better left to humans, given the subjective nature of the task. After all, every writer is an individual, or so many writing teachers and experts argue. What caught my eye, and prompted the title of this post, was this little passage:

>>When the University of California at Davis tried out such technology a couple years back, lecturer Andy Jones decided to try to trick "e-Rater."

Prompted to write on workplace injuries, Jones instead input a letter of recommendation, substituting "risk of personal injury" for the student's name. "My thinking was, 'This is ridiculous, I'm sure it will get a zero,'" he said.

He got a five out of six.

A second time around, Jones scattered "chimpanzee" throughout the essay, guessing unusual words would yield him a higher score.

He got a six.

In Brent's class, sophomore Brady Didion submitted drafts of his papers numerous times to ensure his final version included everything the computer wanted.

"What you're learning, really, is how to cheat the program," he said.<<



So, add a few monkeys here and there, and the very smart computer decides you have a nicely elevated vocabulary and gives you a higher score. I know I am oversimplifying a bit, but the fact that you can fool the computer so easily says a lot about why this may not be such a hot idea. While it is accepted in composition studies that educators can broadly agree on what makes a good paper, I don't quite think a machine can detect the nuances and subtleties of an individual piece of writing.

True, teachers create rubrics and other basic scales to decide how good a paper is. I used to use rubrics in some of my classes back in the day, but those rubrics were often discussed in class as part of the learning process, negotiated somewhat with the students, and through that practice students learned to recognize good writing. This took time; then again, anything worth doing right takes time. And maybe that is what makes me wonder about using software to replace something that takes time and effort, all for the sake of efficiency and savings. Sure, you can feed a rubric to a machine and program it to look for certain traits, but unless the computer can actually "read" the essay rather than scan it for those traits, the results may not be the most desirable. Anyhow, this seems like something to keep an eye on as more and more administrators and bosses look to cut corners with automation for the sake of saving money.
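To make the "add monkeys" point concrete, here is a minimal sketch in Python of what a scorer that only scans for surface traits might look like. To be clear, this is not how e-Rater or the professor's program actually works; the score_essay function, its thresholds, and its word list are all invented for illustration, assuming a scorer that rewards nothing but essay length and the share of long, uncommon words:

```python
import re

# Words too common to count as "impressive" vocabulary in this toy model.
COMMON_WORDS = {
    "the", "a", "an", "and", "or", "of", "to", "in", "is", "are",
    "was", "were", "it", "that", "this", "on", "for", "with", "as",
    "can", "get",
}

def score_essay(text: str, max_score: int = 6) -> int:
    """Score an essay 0..max_score using only surface features."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0
    # Feature 1: length. Longer essays look "more developed".
    length_points = min(len(words) / 100, 2.0)
    # Feature 2: share of unusual words (long and not on the common list).
    unusual = sum(1 for w in words if len(w) >= 8 and w not in COMMON_WORDS)
    vocab_points = min(unusual / len(words) * 20, 4.0)
    return round(min(length_points + vocab_points, max_score))

plain = "The job site is risky. Workers can fall and get hurt. " * 10
padded = plain + "chimpanzee " * 15  # scatter an exotic word, as Jones did

print(score_essay(plain))   # prints 1: short, common words only
print(score_essay(padded))  # prints 4: the "rare" word inflates the score
```

Nothing about the padded essay is better as writing, yet the score jumps, because the program never "reads" anything; it just counts.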

As a note, the link to the article leads to the newspaper's website; the usual caveats about articles becoming inaccessible after a while apply.
