Who Grades What?
In his exposé on the grading of essays for standardized tests, Making the Grades, Todd Farley tells of his experiences as an essay grader for one of the largest testing companies. It's a great read and a scary story of just how badly things can go awry. When you consider how many students take these tests and how many essays and open-ended questions they write, it's hard to imagine how they can all be evaluated fairly.
Mr. Farley admits to many faults in the grading systems and, having been an English teacher myself for so many years, I tend to believe his version of what happens rather than the testing company's denials. I might have had as many as 150 students in my class schedule each year, and if I assigned essay/written work to all of them perhaps three times a week (usually it was more), that meant I would have some 450 papers to grade. Actually, it was more, but that's a good benchmark. I wouldn't have to grade all those papers that week, but I always felt my students needed feedback sooner rather than later.
As a result, for a lot of the work I required, I created all kinds of streamlined grading schemes--looking for specific information, writing requirements, or more general elements for each assignment--that let me grade the less complex work quickly and efficiently. Some assignments require more attention and time to be fairly scored, of course, but even averaging 4 minutes a paper, 450 papers meant some thirty hours of grading per week. Now, I didn't spend that much time every week, but it probably happened at least ten times a year, maybe more.
Now, picture the test scoring room for standardized tests. From Mr. Farley's description, dozens of paid scorers sit all day--an eight-hour workday--reading and scoring tests. They might score more than 250 in a day. That's less than two minutes an essay for some high-stakes test scores--scores that might determine whether or not a student will graduate from high school.
Scorers have limited training unless they are already trained teachers. But Mr. Farley tells us many of the scorers hired were not teachers, and some even had limited English language skills. They were taught to look for key words, phrases, and basic elements easy to spot in a minute or so. This is a form of holistic grading, and it can work--provided the grader has a good grasp of the material being evaluated. It breaks down when the topics are varied and the scorers are no experts in the field.
Essentially, it's not fair. Sorry, but anyone grading hundreds of essays in one sitting is going to get a bit cross-eyed, to say nothing of cross, after a few hours. It's almost impossible to maintain a fair standard and do an effective job.
It wouldn't matter so much if the scores meant nothing, but that's not the way of standardized, mandated testing today. In some states, teachers' job performance is evaluated by how well their students do on the tests. Students' ability to graduate might hinge on test scores. And school funding and overall success can depend on the scores.
It's just not fair.
If you want to read Mr. Farley's book--and trust me, if you are at all interested in the standardized testing controversy, it's well worth it--you can find Making the Grades at Amazon.
Also, a Google search will turn up several interviews with Mr. Farley. Another expert on the subject, Diane Ravitch, is an excellent source of insight into the world of high-stakes testing.
If you care, take some time to investigate.