Computer essay graders put students at disadvantage
Published: Tuesday, April 24, 2012
Updated: Tuesday, April 24, 2012 00:04
Students anxiously awaiting results on papers during finals week may be wishing professors could grade them faster, yet a new technological development may provide the perfect example of “be careful what you wish for.”
Computer essay graders are now more efficient and cost-effective than humans, setting U.S. education on a dangerous course.
A recent study by the University of Akron concluded that computer-generated essay scores are just as reliable as, and in some cases more reliable than, those of human graders, and are produced much faster.
E-Rater, an automated reader created by the Educational Testing Service (ETS), “can grade 16,000 essays in 20 seconds,” according to the New York Times. Pearson Education Inc. says human readers working at maximum capacity can score 30 essays in an hour.
The robot graders save time, and therefore save teachers and standardized testing companies money, but they can’t verify facts or fully comprehend the meaning behind the words they scan for structural and technical errors.
Massachusetts Institute of Technology professor Les Perelman found that the e-Rater does not consider content, but rather length, sentence structure and word usage. An e-Rater can’t tell if information is factual and “doesn’t care if you say the War of 1812 started in 1945,” he told the Times.
With Florida teachers’ salaries tied to their students’ test scores, thanks to a bill signed by Gov. Rick Scott in 2011, those desperate for a pay raise can easily coach students to outsmart the system instead of worrying about actual content, reducing the value of a student’s education.
University students may also rightly be concerned by the idea that a test essay dictating their college prospects, and thus their futures, receives only 1/800 of a second of thought.
If e-Raters were used to grade college papers, even larger questions would be raised about the worth of a college education as the cost of tuition increases every year.
According to the Times, the e-Rater has already been used to grade essays for the Graduate Record Exam (GRE). While e-Rater creators said GRE samples are also read by humans, the website for the Graduate Management Admission Test, gmat-pacer.com, states that “human graders are aware that there is a computer double-checking their work, and they are more likely to follow the e-Rater’s strict grading parameters.”
The site urges students to be “particularly concerned with structure,” use “buzzwords” and “be a conformist … the e-Rater is not programmed to appreciate individuality, humor or poetic inspiration.”
Though such programs can help identify technical issues in students’ writing, they can in no way replace the careful thought and logic that can only come from an intelligent human being. Individual thought is a gift that educators shouldn’t discard so quickly.