In preparation for this week’s session with Dr Sue Martin on norm- and criterion-referenced assessment, we were asked to assess an assignment submitted by a previous student on this very topic (I assume it was written a few years ago). This was a really interesting task and I enjoyed it much more than I thought I was going to. I initially didn’t feel that I was qualified to pass judgement on it, but reflecting on the peer-assessment activities I ask the ICM students to take part in (and the frustration I feel when they express the exact same feelings as a reason for not participating) encouraged me to put those thoughts aside.
My feedback on the assignment was as follows:
This assignment demonstrates extensive knowledge of the issues pertinent to norm- and criterion-referenced assessment. The depth of analysis is sufficient in places and lacking in others; on future assignments you might like to consider relying less on bulleted lists. Such lists do not lend themselves easily to analysis or criticism of the ideas presented, or even reporting of the sources of those ideas (see Page 2 for an example). Citing your sources will give you the opportunity to demonstrate your ability to critically analyse the ideas and conclusions of others.
The content of the assignment is clearly relevant to the question and the structure leads to a well-reasoned conclusion. You have drawn on your own experience where required. The section justifying criterion-referencing could have benefited from comparisons and connections with the relevant literature, as you have done to good effect in the following section.
Where you have cited academic sources, you have used them well to support the arguments presented. I would actually have been interested to see more evidence of conflict and contradiction in the sources you used; for such a complex topic your argument and conclusion are both very ‘neat’. You may have benefited from a deeper exploration of the uncertainties, assumptions and values underlying the conclusions made.
You have used a selection of relevant and recent literature and have relied mainly on books. In future assignments you may benefit from exploring a wider range of journal articles and recent papers; you may find that this enables you to unearth conflicting views, and conclusions that you feel more able or willing to challenge and question. This will enable you to demonstrate that you have explored and analysed a range of options, and provide the fuel for you to present ideas or responses that are truly original.
I was satisfied with the depth of the feedback I felt able to give, and felt that I’d framed it in a positive way with a formative emphasis. However, I felt entirely unable to allocate a percentage mark. I don’t feel that this was necessarily due to lack of marking experience or confidence in my judgement; I suspect there is some degree of norm-referencing hard-wiring at work, which makes us feel the need to compare a piece of work with another in order to make sense of the subjective statements made in the assessment criteria! Perhaps that’s what this exercise was designed to show…? If so, it was very effective!
This exercise gave me a fresh perspective on the subjectivity of assessment criteria, and how susceptible they might be to the values and priorities of different markers. For example, one of the first elements in the MA assessment criteria is depth of analysis. What constitutes deep analysis? How long is a piece of string? I have a sheet of paper above my desk that presents two lists – one of questions as tools for critical thinking, and one of questions as tools for reflective thinking. They come in handy when I need to dig deep, or even when I’m just trying to work out how I feel about something. They also came in handy when assessing this assignment, as I could pick out the questions that seemed relevant and look for evidence that the student had answered them – or even evidence that those questions had been pondered. In most cases there wasn’t much evidence of this, but common sense tells me that this is probably true of the majority of M-level assignment submissions. I suspect that many students are thinking critically and reflectively, but find it difficult to provide evidence of this while trying to present a scholarly and ‘well-reasoned’ piece of work. The phrase ‘well-reasoned’ implies an argument where all the pieces fit together neatly in support of a conclusion. Incorporating doubt and conflict into that picture presents quite a challenge.
So – we have a situation where I felt that there wasn’t much evidence of deep analysis in this assignment. But on the other hand, if it had been placed alongside several other students’ assignments and looked rather good in comparison, would this justify awarding it a high mark? On yet another hand, I may be off-centre in my standards of judgement as to what constitutes ‘deep analysis’.
Something else I thought about when assessing this assignment was the degree of description that is necessary when writing an M-level assignment. What level of knowledge and understanding can or should one assume of the reader? Did the features of criterion- and norm-referenced assessment have to be listed in the (descriptive) way they were, or were those characteristics established and agreed upon in the literature to such an extent that they could have been tucked away in an appendix, leaving some more leeway in the word count for in-depth analysis and debate?
It’s funny – I’ve read so often about how powerful it is to get students to engage with the marking criteria through peer- and self-assessment exercises, believed it, and even implemented the theory in the courses I design, but I’d never actually engaged in it myself to this degree. It really is powerful. I feel much better equipped now to write my own assignments…!