I hate standardized assessments. There is simply no way that one-size-fits-all batch assessments can ever allow for an individual student’s strengths to shine through.
At the same time, examinations and tests are so ubiquitous and so entrenched that it may take a long time for us to outgrow them. But while we are still mired in the era of standardized testing, surely we need to find ways to make them work better?
Specifically, it is my contention that we need to eliminate the marker bias, inconsistency and gross incompetence that so often go along with these assessments – particularly in the realm of school-leaving examinations.
And I reckon the best way to do this is by turning to technology. If we are going to try to measure our students’ skills and aptitudes objectively, and if these benchmarks are indeed so important, shouldn’t we ensure that the marking of these homogenized tests is equally objective?
It’s a bit like using cars to get around. We know they’re bad for the environment. We know there has to be a better way. But until that way comes along, the best option is to keep improving car technology, making cars less harmful and more efficient. (While secretly hoping that someone somewhere invents a viable replacement. My bet is on Elon Musk.)
Some teachers will by now have skipped down to the comments section at the bottom of this post to assure me that they always try to be fair and impartial in the setting and marking of their assessments. While I am sure they try to be, I am sure not one of them has ever set and marked an assessment that meets all of these criteria:
- The assessment has absolutely no errors in grammar, formatting, mark allocation and cognitive taxonomical weighting.
- The marking tool is 100% comprehensive.
- None of the questions favor the aptitudes, linguistic abilities and/or backgrounds of any one student above any others.
- Test sections feature a stimulating mix of resource materials.
- No student is on the receiving end of a marker who has their own internal biases, beliefs and opinions – be they those of the politically correct variety, those concerning the assessor’s own moral code or even their religious views.
- No student is ever hard done by because they know more than the marker.
- The test itself and all marking are rigorously and reliably moderated.
- Teachers do not mark when they are tired or grumpy so as to prevent their mood from affecting their marking.
- Credit is properly assigned to insightful answers.
- (Most importantly) No student is put at a disadvantage because of a conflict of character with the teacher or because of the teacher’s preconceived notions of the ‘capabilities’ of that student – or even as a means of psychological manipulation by the teacher.
It takes a very honest teacher to admit to having committed one or more of these assessment sins. Those who will not admit it are lying – and there are many, many liars. Plain and simple.
The Marking Conveyor Belt and the Three I’s
The big problem really comes in with high-stakes testing. The kind of examinations which are externally set, moderated and assessed. And the most notorious of these is the school-leaving exam.
Assessors of these collective, uniform exams have to mark their way through hundreds of submissions. They usually sit in a room with piles of exams around them and mark for days on end until those piles disappear. And often they are under-motivated, under-qualified, over-tired and not in the mood to look for the three ‘i’s’: insight, independent thinking and innovative thought. They just want to switch their brains to autopilot and send as many assessments through the marking conveyor belt as they can.
The tragedy is that everyone knows this. Yet nothing is done about it. So long as the average mark is sound, who cares if a few students are hard done by? What you lose on the swings, you gain on the roundabouts.
So, since the marking is entirely mechanical and soulless anyway, I would like to suggest that we replace markers with an automatic system. Something along the lines of the Optical Mark Recognition (OMR) systems (or multiple-choice ‘bubble sheets’) used by many European and American assessment bodies. There are a few simple reasons why I think OMRs are a better option than teachers with red pens:
1) Errors are vastly reduced.
2) The system is faster and more efficient (and cheaper).
3) Assessments will be (almost) entirely objective.
4) Any errors which do occur are easily rectified.
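To make the objectivity claim concrete, here is a minimal sketch of the scoring stage of an OMR pipeline. Everything here is hypothetical and simplified: it assumes the scanner has already converted each bubble sheet into a list of selected options, and the names (`ANSWER_KEY`, `score_sheet`) are illustrative, not drawn from any real assessment system.

```python
# Hypothetical sketch: once a scanner has read each bubble sheet into a list
# of selected options, grading is a deterministic comparison with the key.
ANSWER_KEY = ["B", "D", "A", "C", "B"]  # one correct option per question

def score_sheet(responses):
    """Return (marks, flagged), where flagged lists question numbers with
    ambiguous answers (blank or multiply-marked) for human review."""
    marks = 0
    flagged = []
    for number, (given, correct) in enumerate(zip(responses, ANSWER_KEY), start=1):
        if given is None or len(given) != 1:
            flagged.append(number)  # blank or double-marked: escalate, don't guess
        elif given == correct:
            marks += 1
    return marks, flagged

# Every sheet is scored by exactly the same rule, so identical answer sets
# always receive identical marks - no mood, bias or fatigue involved.
print(score_sheet(["B", "D", "A", "C", "B"]))    # → (5, [])
print(score_sheet(["B", "AC", None, "C", "D"]))  # → (2, [2, 3])
```

Note the design choice in the sketch: ambiguous marks are flagged for a human rather than guessed at, which is where the “easily rectified” point above comes from – the errors the system does make are visible and localized.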
The only objection to the use of OMRs that I have seen is that students must pick the correct answer from a limited set of options, and thus no higher-order thinking can occur. This is simply untrue. Higher-order skills CAN be tested with limited-option answers; the questions just have to be very carefully set.
And since so much independent thought is already missed by the incompetent, fatigued or uninterested markers of high-stakes assessments, machine marking can only be an improvement.