These three blogs (part I here, part II, part III) are based on a talk I gave at Headteachers’ Roundtable Summit in March 2019. My thoughts on this topic have been extensively shaped by conversations with Ben White, a psychology teacher in Kent. Neither of us yet knows what we think!
Teachers are rarely trained in how to give formal, summative feedback to students and their parents – following a test, in a school report, or in a conversation at parents’ evening. We instinctively form views of how a student is doing in relation to others we teach, yet when we report attainment we frequently translate these relative perspectives into some sort of strange scale that is hard to interpret.
“Stuart is performing at a Level 2W”
“Mark achieved a Grade B+”
“James scored 68% in the class test”
“Laura has been awarded a Grade Triangle”
Grades, levels, percentages, arbitrary notions of expected, targets, progress measures, GCSE grades given in year 7, flightpaths… Why do we make it so complicated when the truth about attainment is really quite simple?
I want to explore the arguments of the ‘truth-advocates’ out there. There are those who think we should simply tell students what we can validly infer from the end-of-term test we write and administer in our school. Teacher-bloggers such as Mark Enser and Matthew Benyohai rightly point out that when we provide marks without norm-referencing information about how others have performed, our so-called grade actually provides students with little information (of course, students scramble around class to figure out this cohort-referenced information themselves). A ‘Grade B’ isn’t inherently a phrase that contains meaning; its meaning for students arises through knowledge of the distribution of grades awarded. The ‘cleanest’ version of cohort-referencing – just handing out class or school rankings – seems to be quite commonplace in certain secondary school subject departments today, according to this Teacher Tapp sample. However, are there considerations beyond validity of inference that we should weigh before handing out rankings or grades?
The second set of ‘truth-advocates’ who like school rankings prioritise attainment feedback’s role in an educational process where they are hoping to change student behaviours. You can read Deborah Hawkins’ blog about how termly rank order assessments are used at Glenmoor and Winton Academies to induce student effort. By creating a termly ranking game, they recognise the challenges schools face in trying to persuade students to spend time on their game – the game of getting good at maths or French – rather than the other games of life – being popular, playing sport, pursuing new relationships, and so on. Creating a game where students are induced to work harder to climb up the bell curve of achievement is potentially so powerful because we care a great deal about our place in social hierarchies. The economist Adam Smith (1759) once wrote that “rank among our equals, is, perhaps, the strongest of all our desires”. Biological research has shown that high rank is often associated with high concentrations of serotonin, a neurotransmitter in the brain that enhances feelings of well-being. What’s more, social comparisons are an indispensable part of bonding among adolescents.
It is easy to find academic studies that back up the grading policies of schools that report rankings or cohort-referenced grades. For example, a school in Spain experimented with giving students ‘grade-curving’ information, alongside the grades they had always received (e.g. supplementing the news of receiving a Grade C with information that the class average was a Grade B+). The provision of this cohort-referenced information led to an increase of 5% in students’ grades and the effect was significant for both low and high attainers. When the information was removed the following year, the effect disappeared. Similarly, a randomised trial on more than 1,000 sixth graders in Swedish primary schools found student performance was significantly higher with relative grading than with standard absolute grading. These positive effects of cohort-referencing are mirrored in numerous university field and lab experiments (e.g. here and here).
So, why don’t we all follow the ‘truth-advocates’ and give students clear, cohort or nationally-referenced feedback on how they are doing, allowing them to compete with their peers? We avoid this feedback, of course, because we are nervous about how our students will respond to it. Whether we are conscious of it or not, we all have a mental model of how we hope reporting attainment might change student behaviours. It is these implicit, mental models that explain why we might tell a half-truth to a student or parent, assuring them they are doing fine when the opposite is true. Mental models explain what meaning we hope to convey when we tell a student they have 68% in a test or why we allow students a few minutes to compare their marked papers with others in class. They explain why we send quite odd ‘tracking’ data home to parents, and are privately quite content that it doesn’t allow them to infer whether their child is doing better or worse than average.
In the blog posts that follow, I hope to persuade you of the importance of developing a clear mental model of how your students might respond to learning their attainment. I am collating a messy empirical literature from education, behavioural psychology and economics, one that lacks a consistent theoretical footing from which to build generalisable findings. Having read these studies, I think it is most useful for teachers to develop mental models that emphasise changes to students’ beliefs about themselves. Beliefs often fulfil important psychological and functional needs of the individual [£]. This literature emphasises three dimensions to describe how grading feedback affects student behaviour:[i]
- The effect on student beliefs about their attainment
- The effect on student beliefs about their ability to learn
- The effect on their willingness to play the game you want them to play
Talking about attainment is something that no teacher can avoid, and choosing to use fuzzy and ambiguous language with parents and students is not a value-neutral approach (for reasons that will become clear by blog three). Neither is telling children the whole truth about how they are performing. For so much of our time as teachers we talk about how the child can change the grade they receive. In these posts, we will be talking about how the grade can change the child!
You can find part II here.
[i] Note – this differs somewhat from the favoured feedback model of educationalists – Kluger and DeNisi’s (1996) Feedback Intervention Theory – which seems particularly pertinent to predicting mechanistic responses to task-based feedback but isn’t well-aligned with the disciplinary traditions of the empirical research I am reviewing here.