Don’t let ‘perfect’ become the enemy of ‘better’ in the revision of accountability metrics

Last week, Ed Dorrell wrote a strange editorial in TES called ‘Why attaching excluded pupils’ results to their school won’t work’. I say it was strange because he failed to address the major impediment to including off-rolled pupils in accountability metrics (i.e. finding them… for that, read on). There is no doubt that there are some complicated choices, trade-offs to consider, and new sets of undesirable behaviours that would arise from making schools accountable for the pupils they teach. However, we are not starting from a neutral position: everyone agrees that the current accountability measures reward schools that find a way to lose students from their roll AND that this is an increasing problem.

We have to act, and do so with the following principle in mind. When we construct accountability metrics, our primary goal is that they encourage schools to behave as we want them to – in their admissions, expulsions, and the nature of their education provision. Ensuring the accountability metric fairly represents school quality should be second order, as tough as that feels to heads. (Why? I’ll write about it another time, but essentially the mechanisms by which greater precision in estimates of school quality feeds through to improved educational standards are pretty blunt.)

The question of how we should count a student when they leave the school roll or arrive part-way through school should be viewed through the lens of the school behaviours we’d like to invoke. We want the school to maximise the educational success of the community, including that student. We want them to remove students if they are disrupting others (and thus lowering likely GCSE scores). If schools do remove them, we want them to take an interest in ensuring that student then transfers to another school or alternative provision, rather than encouraging them to be ‘home schooled’. If the student is not disrupting the school community, and is more likely to be successful at their school than elsewhere, then barring specific circumstances (e.g. breaking the law or serious school rules), we want them to retain the student.

Ed poses a set of questions that suggest it is ‘unfeasibly complicated and impossible’ to explain and police a measure of Progress 8 that weights student results according to which of the 15 terms of secondary education they had spent in their secondary school (or equivalent in a middle school system). He is correct in the sense that there are literally an infinite number of choices we have to make – as there were when we made decisions about how to create the current Progress 8, incidentally. But choice from an infinite choice set is not ‘unfeasibly complicated and impossible’. All we need to do is pick an algorithm – any algorithm – that produces BETTER behaviours than we currently have. Again, whether or not it represents precisely how ‘good’ the school is isn’t our primary consideration.

Here are four alternative choices:

  1. Status quo = each student present in the spring of year 11 (term 14 of 15) is weighted with a value of 1. The behavioural response is well known – there is a strong incentive to find a way to remove from the school roll any student who is likely to have a strongly negative progress score.
  2. Year 7 base option = schools are held accountable for the results of those who were admitted, regardless of whether they complete their education there. The advantage is that this produces a strong incentive to maximise the exam outcomes of each student who is enrolled, whether on roll or off roll. The disadvantage is that students admitted after year 7 will not count and so there is no need to maximise their GCSE outcomes. That said, it would encourage schools to feel comfortable in accepting previously excluded students from other schools as a fresh start, knowing that they will not be penalised in their performance table metrics.
  3. FFT Education Datalab proposal = each student is weighted according to the number of terms they spent at the school. This means schools would need to consider the best interests of every student that passes through the school, whether they are still on-roll or not. However, it does create an incentive to accelerate moving off-roll any student that is struggling. Would this produce large numbers of students being moved off roll during years 7 and 8 in a manner that is worse than current practice? This is a judgement call.
  4. Ever seen option = every student who appears at any stage in the school is weighted with a value of 1 (this is the unweighted version of the FFT Education Datalab proposal). This fixes the problem with the weighted method whereby off-rolling early is better than off-rolling late. However, it doesn’t fix the current incentive to avoid taking on previously excluded students from other schools to give them a fresh start.

All other options (e.g. ever seen since year 9; weighting ks4 terms more than ks3 terms; etc…) can be viewed as a minor deviation from the above in terms of the types of behaviour they induce.
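To make the comparison concrete, the four weighting schemes above can be sketched as simple functions of a pupil’s first and last term on roll. This is only an illustration: the term numbering (1–15) and the function names are mine, not official definitions, and real Progress 8 calculations involve many details omitted here.

```python
# Hypothetical sketch of the four weighting options, assuming 15 terms of
# secondary education numbered 1-15, and that we know each pupil's first
# and last term on roll at the school in question.

TOTAL_TERMS = 15

def weight_status_quo(first_term, last_term):
    """Option 1: weight 1 only if the pupil is on roll in spring of year 11 (term 14)."""
    return 1.0 if first_term <= 14 <= last_term else 0.0

def weight_year7_base(first_term, last_term):
    """Option 2: weight 1 only if the pupil was admitted at the start of year 7 (term 1)."""
    return 1.0 if first_term == 1 else 0.0

def weight_terms_attended(first_term, last_term):
    """Option 3 (FFT Education Datalab): weight proportional to terms spent at the school."""
    return (last_term - first_term + 1) / TOTAL_TERMS

def weight_ever_seen(first_term, last_term):
    """Option 4: weight 1 for any pupil who was ever on roll."""
    return 1.0

# A pupil admitted in year 7 but off-rolled at the end of year 9 (term 9):
print(weight_status_quo(1, 9))      # 0.0 - vanishes from the measure entirely
print(weight_terms_attended(1, 9))  # 0.6 - still counts for 9/15 of a pupil
print(weight_ever_seen(1, 9))       # 1.0 - counts in full
```

The behavioural incentives fall out of the weights directly: under the status quo the off-rolled pupil contributes nothing, under term-weighting the contribution shrinks the earlier the pupil leaves, and under ‘ever seen’ it never shrinks at all.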

These adjustments to a progress score DO NOT ‘disproportionately punish the majority of schools – those who strive to get the best out of even the most challenging students and for whom exclusion is a last resort’. Quite the opposite; they are less punishing to those schools who take on previously excluded students to give them a second chance in mainstream education.

As FFT Education Datalab showed, the vast majority of schools who have pupils come and go as normal should not worry since any modification to Progress 8 will not materially affect them. It is only high-mobility schools where there are substantial differences between the types of students who leave the school and the types of students who arrive at the school that are likely to be affected.

None of this is difficult to implement and progress measure tweaks are entirely independent of issues around the commissioning of alternative provision. The termly pupil census means we can straightforwardly calculate progress figures and match pupils to their examinations data.
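As a rough illustration of why this is straightforward, here is a sketch of a term-weighted school progress measure built from termly census snapshots. The data shapes are invented for the example (per-term lists of pupil–school pairs, and a dict of progress scores keyed by pupil); the real census and examination datasets are richer, but the matching logic is the same.

```python
# Minimal sketch: derive terms-on-roll per (pupil, school) from termly census
# snapshots, then compute a term-weighted mean progress score per school.
from collections import Counter

def terms_on_roll(census_snapshots):
    """Count how many termly census snapshots each (pupil, school) pair appears in."""
    counts = Counter()
    for snapshot in census_snapshots:          # one snapshot per term
        for pupil_id, school_id in snapshot:
            counts[(pupil_id, school_id)] += 1
    return counts

def weighted_school_progress(census_snapshots, progress_by_pupil, total_terms=15):
    """Term-weighted mean progress per school (the FFT-style measure, option 3)."""
    counts = terms_on_roll(census_snapshots)
    totals, weights = {}, {}
    for (pupil_id, school_id), n_terms in counts.items():
        if pupil_id not in progress_by_pupil:
            continue  # no matched exam record (e.g. the pupil left the country)
        w = n_terms / total_terms
        totals[school_id] = totals.get(school_id, 0.0) + w * progress_by_pupil[pupil_id]
        weights[school_id] = weights.get(school_id, 0.0) + w
    return {school: totals[school] / weights[school] for school in totals}
```

For example, a pupil who spends nine terms at school X and six at school Y contributes with weight 0.6 to X and 0.4 to Y, so neither school can simply shed responsibility for their result.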

That said, Ed’s piece fails to identify the one major impediment to holding schools accountable for pupils they taught, even after they left. There are two situations where we do not want to hold them accountable for a student who receives no GCSE examination results: (1) where that student left the country; and (2) where the student died or was unable to complete their education for serious medical reasons. When students disappear from all school and examination records, central government does not know the reason why, because we have no census which covers children outside of education. School accountability isn’t a good enough reason to set up a full annual census of children, using GP records as a starting point. But, given the rise in ‘home schooling’ where parents are not even present to educate the teenager, there are very clear safeguarding reasons why it is time to look again at introducing one.

In the meantime, I don’t see concerns about death and migration as so material that we should continue with the damaging incentives set up by Progress 8 which currently allows substantial off-rolling without consequence.

Remember: there isn’t a world in which accountability is perfect, but there are many accountability measures that are better than the status quo.

Who fails wins? The impact of failing an Ofsted Inspection

CMPO Viewpoint

Rebecca Allen and Simon Burgess

What is the best way to deal with under-performing schools? This is a key policy concern for an education system. There clearly has to be a mechanism for identifying such schools. But what should then be done with schools which are highlighted as failing their pupils? There are important trade-offs to be considered: rapid intervention may be an over-reaction to a freak year of poor performance, but a more measured approach may condemn many cohorts of students to under-achieve.

This is the issue that Ofsted tackles. Its inspection system identifies failing schools and supervises their recovery. How effective is this? Is it even positive, or does labelling a school as failing push it to ever lower outcomes for its students?

It’s not clear what to expect. Ofsted inspections are often dreaded, and a fail judgement seen as being disastrous. It has been argued it triggers…
