It’s not easy to contribute to a government report with recommendations when your modus operandi is explaining what’s gone wrong in schools, then declaring it tricky to fix. But making data work better in schools is what I, alongside a dozen teachers, heads, inspectors, unions and government officials, was asked to write about.
Our starting point was observing the huge range of current practice in schools, from the minimalist approach of spending little time collecting and analysing data through to the large multi-academy trust with automated systems of monitoring everything right down to termly item-level test scores.
Whilst we could all agree that these extremes – the ‘minimalist’ and ‘automated’ models of data management – were making quite good trade-offs between time invested and inferences made, something was going very wrong somewhere in the middle of the data continuum. These are the schools without the resources and infrastructure to automate data collection, and so they require teachers and senior leaders to spend hours each week submitting unstandardised data for little gain.
And herein lies one problem… in the past we’ve told schools to collect data and use it again and again in as many systems as possible: to report to RSCs, Ofsted, governing boards, parents, pupils and in teacher performance management. But this assumes that data is impartial – that it measures the things we mean it to measure, with precision and without bias. On the contrary, data is collected by humans and so is shaped by the purposes to which the person collecting it believes it will be put.
Our problems with data are not just lack of time. We could spend every day in schools collecting and compiling test score data in wonderful automated systems, but we’d still be constrained in how we were able to interpret the data. When I talk to heads, I often use the table below to frame a conversation about how little the tests they are using can actually tell them.
| Purpose | Teacher-designed test used in one school | Teacher-designed test used in 5 schools | Commercial standardised test |
|---|---|---|---|
| Record which sub-topics students understand | Somewhat | Somewhat | Rarely |
| Report whether a student is likely to cope with next year’s curriculum | Depends on test and subject | Depends on test and subject | Depends on test and subject |
| Measure pupil attainment, relative to school peers | Yes, noisily | Yes, noisily | Yes, noisily |
| Measure pupil progress, relative to school peers | Only for more extreme cases | Only for more extreme cases | Only for more extreme cases |
| Check if a student is ‘on track’ against national standards | No | Not really | Under some conditions |
| Department or teacher performance management | No | No | Under unlikely conditions |
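The ‘only for more extreme cases’ row on progress is worth unpacking, because it follows from simple arithmetic about measurement error: when each test sitting carries noise, the difference between two sittings carries the noise of both, which can easily swamp genuine progress. Below is a minimal, purely illustrative Python sketch of this point; the spread and error figures are assumptions chosen for illustration, not numbers from the report.

```python
import random

# Illustrative simulation: why noisy tests detect pupil progress only
# "for more extreme cases". All magnitudes below are assumed, not from
# the report.
random.seed(1)
N = 200                 # pupils in a cohort
TRUE_SD = 10.0          # spread of true attainment across pupils
ERROR_SD = 5.0          # measurement error of a single test sitting
PROGRESS_SD = 2.0       # spread of genuine progress between sittings

true_progress, measured_progress = [], []
for _ in range(N):
    start = random.gauss(0, TRUE_SD)            # true attainment, time 1
    progress = random.gauss(0, PROGRESS_SD)     # genuine progress made
    test1 = start + random.gauss(0, ERROR_SD)   # observed score, time 1
    test2 = start + progress + random.gauss(0, ERROR_SD)  # observed, time 2
    true_progress.append(progress)
    measured_progress.append(test2 - test1)     # 'progress' on a tracker

def corr(xs, ys):
    """Pearson correlation, computed from scratch to stay dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

print(f"correlation(true, measured progress) = "
      f"{corr(true_progress, measured_progress):.2f}")
# With these assumptions the correlation is weak (around 0.3): the error
# in a score *difference* is sqrt(2) times the single-test error, so only
# pupils with extreme measured changes can confidently be said to have
# made unusually high or low progress.
```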
A lot of data currently compiled by schools is pretty meaningless because of the inherent challenges in measuring what has been learnt. Everyone involved in writing the report agreed that meaningless data is meaningless. Actually, it’s worse than meaningless because of the costs involved in collecting it. And it’s worse than meaningless if teachers then feel under pressure from data that doesn’t reflect their efforts, their teaching quality, or what students have learnt.
Education is a strange business, and it doesn’t tend to work out well when we try to transplant ideas from other industries. Teachers aren’t just sceptical about data because they are opposed to close monitoring; they simply know the numbers on a spreadsheet are a rather fuzzy image of what children are capable of at any point in time. If only we could implant electronic devices inside children’s brains to monitor exactly how they had responded to the introduction of a new idea or exactly what they could accurately recall on any topic of our choosing! This might sound a ludicrous extension of the desire to collect better attainment data, but it serves as a reminder of how incredibly complex – messy, even – the job of trying to educate a child is.
The challenge for the group who wrote this report is that research doesn’t help us decide whether some of the most common data practices in schools are helpful or a hindrance. For example, there is no study that demonstrates introducing a data tracking system tends to raise academic achievement; equally, there is no study that demonstrates it does not! Similarly, whilst the use of target grades is now widespread in secondary schools, their use as motivational devices has not yet been evaluated. Given that the education community appears so divided in its perceptions of the value of data processes in schools, it seems that careful scientific research in a few key areas is now the only way we can move forward.
More research needed! How could an academic conclude anything else?
Read our report by clicking on the link below:
“More research needed” – yes, but also the insights from industry regarding meaningless data in significantly more deterministic circumstances. See W. Edwards Deming and the Red Bead experiment.
Professor Plomin’s new book Blueprint will, when it speaks to teachers, blow a great hole in what the government, Ofsted, and maybe head teachers think that education can achieve. It will surely show that data collection is mostly a costly waste of time. And I say this as a school governor.
Hi. Very interesting!
In particular this: “whilst use of target grades is now widespread in secondary schools, their use as motivational devices has not yet been evaluated.”
Do you know of any research from education or other fields on this, please? In particular, whether high target grades (e.g. FFT5) motivate and lead to better progress? When it comes to target grades, does ‘shooting for the moon’ work?
This report is outstanding. Using it as my starting point for Governor training in Warwickshire.
Zbig