Writing the rules of the grading game (part I): The grade changes the child

These three blogs (part I here, part II, part III) are based on a talk I gave at Headteachers’ Roundtable Summit in March 2019. My thoughts on this topic have been extensively shaped by conversations with Ben White, a psychology teacher in Kent. Neither of us yet knows what we think!

Teachers are rarely trained in how to give formal, summative feedback to students and their parents – following a test, in a school report, or in a conversation at parents’ evening. We instinctively form views of how a student is doing in relation to others we teach, yet when we report attainment we frequently translate these relative perspectives into some sort of strange scale that is hard to interpret.

“Stuart is performing at a Level 2W”
“Mark achieved a Grade B+”
“James scored 68% in the class test”
“Laura has been awarded a Grade Triangle”

Grades, levels, percentages, arbitrary notions of expected, targets, progress measures, GCSE grades given in year 7, flightpaths… Why do you make it so complicated when the truth about attainment is really quite simple?

I want to explore the arguments of the ‘truth-advocates’ out there. There are those who think we should simply tell students what we can validly infer from the end-of-term test we write and administer in our school. Teacher-bloggers such as Mark Enser and Matthew Benyohai rightly point out that when we provide marks without norm-referenced information about how others have performed, our so-called grade actually gives students very little information (of course, students scramble around class to figure out this cohort-referenced information themselves). A ‘Grade B’ isn’t inherently a phrase that contains meaning; its meaning for students arises through knowledge of the distribution of grades awarded. The ‘cleanest’ version of cohort-referencing – just handing out class or school rankings – seems to be quite commonplace in certain secondary school subject departments today, according to this Teacher Tapp sample. However, are there considerations beyond validity of inference that we should weigh before handing out rankings or grades?

A second set of ‘truth-advocates’, those who favour school rankings, prioritise attainment feedback’s role in an educational process through which they hope to change student behaviours. You can read Deborah Hawkins’ blog about how termly rank order assessments are used at Glenmoor and Winton Academies to induce student effort. In creating a termly ranking game, they acknowledge the challenge schools face in persuading students to spend time on their game – the game of getting good at maths or French – rather than on the other games of life: being popular, playing sport, pursuing new relationships, and so on. Creating a game in which students are induced to work harder to climb up the bell curve of achievement is potentially so powerful because we care a great deal about our place in social hierarchies. The economist Adam Smith (1759) once said, “rank among our equals, is, perhaps, the strongest of all our desires”. Biological research has shown that high rank is often associated with high concentrations of serotonin, a neurotransmitter in the brain that enhances feelings of well-being. What’s more, social comparisons are an indispensable part of bonding among adolescents.

It is easy to find academic studies that back up the grading policies of schools that report rankings or cohort-referenced grades. For example, a school in Spain experimented with giving students ‘grade-curving’ information alongside the grades they had always received (e.g. supplementing the news of receiving a Grade C with information that the class average was a Grade B+). The provision of this cohort-referenced information led to a 5% increase in students’ grades, and the effect was significant for both low and high attainers. When the information was removed the following year, the effect disappeared. Similarly, a randomised trial on more than 1,000 sixth graders in Swedish primary schools found student performance was significantly higher with relative grading than with standard absolute grading. These positive effects of cohort-referencing are mirrored in numerous university field and lab experiments (e.g. here and here).

So, why don’t we all follow the ‘truth-advocates’ and give students clear, cohort or nationally-referenced feedback on how they are doing, allowing them to compete with their peers? We avoid this feedback, of course, because we are nervous about how our students will respond to it. Whether we are conscious of it or not, we all have a mental model of how we hope reporting attainment might change student behaviours. It is these implicit mental models that explain why we might tell a half-truth to a student or parent, assuring them they are doing fine when the opposite is true. Mental models explain what meaning we hope to convey when we tell a student they have 68% in a test, or why we allow students a few minutes to compare their marked papers with others in class. They explain why we send quite odd ‘tracking’ data home to parents, and are privately quite content that it doesn’t allow them to infer whether their child is doing better or worse than average.

In the blog posts that follow, I hope to persuade you of the importance of developing a clear mental model of how your students might respond to learning their attainment. I am collating a messy empirical literature from education, behavioural psychology and economics, one which lacks a consistent theoretical footing from which to build generalisable findings. Having read these studies, I think it is most useful for teachers to develop mental models that emphasise changes to students’ beliefs about themselves. Beliefs often fulfil important psychological and functional needs of the individual [£]. This literature emphasises three dimensions to describe how grading feedback affects student behaviour:[i]

  1. The effect on student beliefs about their attainment
  2. The effect on student beliefs about their ability to learn
  3. The effect on students’ willingness to play the game you want them to play

Talking about attainment is something that no teacher can avoid, and choosing to use fuzzy and ambiguous language with parents and students is not a value-neutral approach (for reasons that will become clear by blog three). Neither is telling children the whole truth about how they are performing. For so much of our time as teachers we talk about how the child can change the grade they receive. In these posts, we will be talking about how the grade can change the child!

You can find part II here.


[i] Note – this differs somewhat from the favoured feedback model of educationalists – Kluger and DeNisi’s (1996) Feedback Intervention Theory – which seems particularly pertinent to predicting mechanistic responses to task-based feedback but isn’t well-aligned with the disciplinary traditions of the empirical research I am reviewing here.

10 thoughts on “Writing the rules of the grading game (part I): The grade changes the child”

  1. Joseph Eamer

    This is a really interesting idea: competition and game playing are really important for motivating students, and “winning a game” is a great way to engage otherwise less willing learners. In terms of ranking, the danger is that whilst those at the top end become more motivated, those at the bottom could become the opposite when they may not actually be performing that badly. This is certainly true for setted subjects (which I think should mostly be avoided). I went to a private school and was in the bottom set at maths and thought I was no good at that subject at all, which shaped my A-level choices and ultimately my career. Having said that, the concept is certainly something I want to explore in my department, and I wonder if ranking could be linked to GCSE grades? After all, they are essentially set as a rank after exam percentages are all collated, and comparing pupils to this percentage ladder could carry some of the benefits described above. We do give GCSE grades at KS3 but are careful to link these to grade descriptors as well as percentages etc. As I said, this is a really interesting idea with what appears to be a lot of good evidence to back it up. Motivation is definitely something we need to tackle in our school and this could be very useful!

  2. I think this is really interesting. There seems to be contradictory evidence for the efficacy or otherwise of attainment grouping, for instance “Factors deterring schools from mixed attainment teaching practice” (Taylor et al., 2017), which suggests room for further study.

  3. Pingback: Trying to learn from my mistakes about assessment and reporting to parents. – Thoughts on Teaching

  4. Pingback: Writing the rules of the grading game (part II): The games children play – Becky Allen

  5. Pingback: Effective assessment to close the gap… – classroomBUZZ

  6. Rebecca, is there a way of using ‘rank changes’ as the motivator, especially for parents?
    If a student is going up, then parents will be trying to keep them motivated; if they are going down, they get the good old ‘hurry up’.
    The issue then becomes how much ‘noise’, as you put it in another blog, should be allowed.

  7. Pingback: Flightpaths, are they relevant in our new World? (Key Stage 2 & 3) – Higginsonmaths

  8. Pingback: Should education be more like a game? – Becky Allen

  9. Pingback: Writing the rules of the grading game (part III): There is no value-neutral approach to giving feedback – Becky Allen

  10. Pingback: Ofsted, the problem, the sequel | Faith in Learning
