If CPD is so important, then why is so much of it so bad?

Towards the end of last year I took part in a debate about the quality of CPD. I was asked to take one side of the argument, so this is my deliberately one-sided perspective on it. The wonderful people of edu-twitter helped me compile the bizarre examples of CPD that you’ll read below.

Everybody remembers their worst ever INSET day, don’t they? I remember being excited to take part in one early on in my PGCE. This was in the early 2000s, so the National Strategies were landing in Key Stage 3. The Local Education Authority’s maths officer came to the school to help every teacher embed numeracy in the Key Stage 3 curriculum. It was a strange morning for us, as a department (business/economics) who taught no Key Stage 3 and yet weren’t allowed to get on with other things. It was also a strange morning for the maths department, who presumably had already embedded numeracy in their curriculum. And I’d guess at least half the other departments were pretty annoyed to have to think of a contrived way to mention numbers during their lessons. Whose fault was it that so few teachers got anything out of that day? Should the headteacher have allowed it to happen? Should the LEA have known better than to blithely follow DfE ideas about numeracy at KS3? And where was teacher agency in the decision about allocating INSET time to this?

I asked teachers on twitter to name their worst ever CPD, and it quickly became clear they had examples far worse than mine:

It was one which postulated the theory that we have 5 brains, one of which developed when humans were running away from dinosaurs. I kid you not.— David Williams (@davowillz) October 20, 2018

A day on packtypes. We all answered questions about ourselves and others to find out what dog we were. Then we made bunting to go around school to show our packtypes. I have since read the research behind it but the watered down approach did not communicate anything useful— Mrs Cadman (@Y6Mrs) October 20, 2018

The worst I can recall was about 25 years ago at least on ‘Instrumental Enrichment’. I had no idea what it meant and even less after the CPD.— Gradgrind (@ThomGradgrind) October 20, 2018

It was something to do with animals representing different types of learners. The CPD was ten minutes and we were then given a week’s deadline to ‘embed the animals in schemes of learning’.— Deborah Hawkins. (@debbs_198) October 20, 2018

In groups, we had to spend an hour making a 2 minute, creative video about one of the schools values. It was shown to the other staff on the day, but no discernible purpose – even when I asked.— Stuart Garner (@SJGarner76) October 20, 2018

All of the PD events on questioning… Where no actual questioning takes place. And instead mind numbing droning for 1-2 hours. #HowNotToTeach— Mr. OatesSoSimple (@MrOatesSoSimple) October 20, 2018

Now, twitter trades on anecdotes, so let’s turn to a slightly larger pool of teacher opinion on Teacher Tapp. Fewer than a third of classroom teachers agree with the following statement: ‘Time and resources allocated to professional development are used in ways that enhance teachers’ instructional capabilities.’ (Of course, the vast majority of senior leaders agreed with the statement!) We consistently see that secondary school classroom teachers are the most negative about their experiences of CPD, perhaps because so little of it is subject-specific. For example, 40% of them feel that CPD has had little or NO impact on their classroom practice. Many classroom teachers also report that abandoning INSET days would have no effect on their teaching!

This presents us with a serious question to answer: If professional development is so important, then why is so much CPD so bad?

I think it is reasonable to suggest that the people who commission training, particularly in secondary schools, either don’t know how to find good provision or are severely resource-constrained, which in turn leads them to deliver quite generic whole-school training.

However, I’d like us to consider an alternative explanation: that ‘we’, as a profession, don’t really know how to do CPD. This reminds me of the arguments about why the NHS does not fund greater mental health provision. Of course it is, in part, because it lacks the funds. But it is also because we haven’t worked out any scalable, cost-effective means of treating mental illness yet. Just as doctors don’t reliably know how to make unhappy people happier, education professionals don’t reliably know how to make teachers more effective.

One promising vein of research has been into instructional coaching, where a number of randomised controlled trials have estimated positive programme effects. Does this mean all teachers should have instructional coaches who are trained to use observational rubrics to give feedback? No. Coaching is a great, but expensive, way to get teachers from being not OK at teaching to being good. Simple rubrics can’t support coaches in taking teachers from good to great, because what works at that level is likely to be highly sensitive to the curriculum and the demographics of the students.

Moreover, even for inexperienced teachers, the expense means we have to ration the treatment. As with one-to-one therapy or tuition, we withhold it from most teachers and hope they figure out how to get better through a cheaper method – such as reading books or trial and error.

We have some of the best brains in the education system thinking about how best to spend CPD money (e.g. David Weston, the people at the Institute for Teaching). However, I wonder whether there are even more important things to worry about first. I’d start with worrying about whether schools can provide teachers with a stable curriculum, with assignment to appropriate classes, with a healthy approach to workload that gives teachers the space to think about their teaching, and of course a culture where teachers can teach and are encouraged to get better at teaching on their own.

When I think of the incredible teachers I’ve watched over the past few years, I wonder how they got so good at teaching. When I’ve had the opportunity to ask them, they have never volunteered that formal professional development courses made a material contribution. Of course, these teachers who made it to great without losing morale or leaving the profession first are, alas, the exception. But what makes us think that the courses which played no part in their own improvement will be helpful to others who want to get there too?

I think that supporting teachers in getting better at teaching is critical to teacher morale and the long-term health of the schooling system. This is the central thesis of our book, The Teacher Gap.

I just don’t know whether we’ve found our medicine.


It was fun arguing with people from across the world about how we get professional development right. Somebody on twitter pointed me to this gem of a quote that will always make me smile:

I hope I die during an in-service session because the transition between life and death would be so subtle.

Helen Timperley (2011) quoting an anonymous teacher

New grammar school rules, OK?

Sigh! New year, new grammar school paper. This time it is a HEPI paper by an ex-civil servant, Iain Mansfield, who has turned his hand to quantitative social research, starting with one of the most complex questions it is possible to devise. Thankfully I missed most of the commentary on it (a bout of tonsillitis). If you didn’t, Lindsey Macmillan et al. have written one response. However, tonight I am better, and from my quick read of the HEPI report it is immediately clear how poor much of the analysis is. Sigh again! I am determined that we don’t have to go through this annual charade ad infinitum.

So I’ve got two rules that I think would help calm the debate and raise the quality of the argument we are having about the age at which we should allow academic selection.

Rule 1: You can’t publish your own research on the question of the causal impact of academic selection until you have passed a test showing you understand why the existing literature is so complex. Seriously, there is a reason why academics are forced to summarise the existing literature before they are allowed to publish their own findings! You need to be able to answer questions like:

  • From the following set of papers, which explicitly (a) acknowledge and (b) attempt to deal with the fact that over 20% of students at grammar schools live in a different local authority?
  • What are the consequences of ignoring the fact that 12% of students at grammar schools transferred from private primaries? Name five challenges that researchers face in incorporating private schools into analysis.
  • Contrast at least two different approaches to constructing the set of pseudo secondary moderns that have been used in the literature to date. What are the pros and cons of these approaches?
  • Manning and Pischke argued that the Galindo-Rueda and Vignoles paper was invalid by invoking what seemed to be a neat falsification test. What was the test, and under what sorts of assumptions would it have been valid?

Rule 2: You cannot be a public commentator on a piece of ‘research’ about the causal impact of grammar schools unless you can first answer the following questions about the research you plan to comment on.

  • Have the authors acknowledged that large numbers of grammar school pupils live in different, and usually non-selective, local authorities, and do you understand how they have dealt with this problem?
  • Have the authors acknowledged that the presence of grammar schools distorts the nature of local private schools? Have they dealt with this (e.g. how are private school students included in their analysis groups)?
  • When considering the impact of selective areas as a whole, how do they define non-selective schools in selective areas, i.e. the group of schools that students are going to who fail the 11+? (Top tip – most are not categorised as secondary moderns in DfE databases.)
  • What is their counterfactual to living in a selective local authority? How have they ensured the types of families and students who are living in the counterfactual areas are similar?

There are plenty of public commentators who are capable of reading research carefully enough to meet Rule 2. You can’t screen them by their job title or the fancy letters before or after their name. That’s why we need new rules. And these rules only apply to impact analysis of the sort that HEPI published today. I am very happy to have people writing and talking qualitatively about the system; these perspectives are also important. Moreover, there are many, many questions where basic exploratory analysis is really interesting. It’s what I like to do most days. The causal impact of academic selection at the age of 11 isn’t one of them.

Sorry if this all sounds exclusive but… well… grammar schools are exclusive and they get great results. That’s the point! So let’s make the conversation about their impact on the system a little more sophisticated and maybe we’ll get a better result too.

Poor attainment data often comes too late!

It’s time to get positive about data. The right kind of data.

In my blogpost on the question of why we cannot easily measure progress, I explained why short, one-hour tests are rarely reliable enough to tell us anything interesting about whether or not a student has made sufficient progress over the course of a year. This is a source of worry for schools because measuring and reporting pupil progress is hard-baked into our school accountability system. My response about what to do was to tell teachers not to worry too much about progress since attainment is the thing we almost always want to know about anyway. If you still think that ‘progress’ is a meaningful numerical construct, I’d urge you to take a look at Tom Sherrington’s blog post on the matter.

I’ve since become even more convinced that measuring pupil progress is worse than irrelevant through conversations with Ben White, who pointed out to me that intervening on progress data is frequently unjust and disadvantages those who have historically struggled at school.

Suppose you find two students who get 47% in your end of Year 7 history test. It isn’t a great score and suggests they haven’t learnt many parts of the year’s curriculum sufficiently well. Will you intervene to give either of them support? The response in many secondary schools nowadays would be to interpret the 47% in relation to their Key Stage 2 data. For the student who achieved good scaled scores at age 11 of around 107, the 47% suggests they are not on track to achieve their predicted GCSE results and so will make a negative contribution to Progress 8. They are therefore marked down for intervention support. The other student left primary school with scaled scores around 94, so despite their poor historical knowledge at the end of Year 7, they are still on track to achieve their own predicted GCSE results. No intervention necessary here.

It strikes Ben (and me) as deeply unjust that those who, for whatever reasons (chance, tutoring, a high-quality primary school, etc.), get high Key Stage 2 scores are then more entitled to support than those who have identical attainment now, but who once held lower Key Stage 2 scores. It would seem to be entrenching pre-existing inequalities in attainment. For me, the only justification for this kind of behaviour is some sort of genetic determinism, where SATs scores are treated as a proxy for IQ and we should make no special efforts to help students break free of the pre-determined flightpaths we’ve set up for them. Aside from questions of social justice, it makes no sense to expect pupil attainment to follow these predictable trajectories – they simply won’t, regardless of how much you wish they would.
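
To see how mechanical this injustice is, here is a minimal sketch of the decision rule, assuming a made-up linear ‘flightpath’ from KS2 scaled scores to an expected end-of-year score. The real Progress 8 calculation is more complicated; every number and name below is invented for illustration.

```python
# Toy illustration -- NOT the real Progress 8 methodology. The 'flightpath'
# mapping and all thresholds below are invented.

def expected_score(ks2_scaled: float) -> float:
    """Hypothetical flightpath: map a KS2 scaled score (100 = national
    average) to an expected end-of-Year-7 test percentage."""
    return 50 + 1.5 * (ks2_scaled - 100)

def flag_for_intervention(current_pct: float, ks2_scaled: float) -> bool:
    """Flag a pupil only if they fall short of their own KS2-based target."""
    return current_pct < expected_score(ks2_scaled)

# Two pupils with identical attainment now (47%) but different KS2 scores:
for name, ks2 in [("Pupil A", 107), ("Pupil B", 94)]:
    print(name, "flagged:", flag_for_intervention(47, ks2))
# Pupil A (target ~60.5%) is flagged; Pupil B (target ~41%) is not --
# despite identical knowledge of the Year 7 curriculum.
```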

But all of that is an aside and doesn’t address the question of what we should do if we find out that a student hasn’t learnt much / has made poor progress / has fallen behind peers / has low attainment [delete as appropriate according to your conception of what you are trying to measure]. The trouble is, by the time an end-of-year test reveals that attainment is poor, the damage has already been done and is very hard to undo.

The response of most tracking systems to this problem is simply to collect attainment data more frequently, thus bringing forward the point where the damage can be spotted. The problem with this – apart from the destruction of teachers’ lives through test marking and data drops – is that it is very hard to spot the emergence of falling behind after just six weeks of lessons. Remember that we have uncertainty about ‘true’ attainment at each testing point, so it is very hard to use a one-hour test to distinguish genuine difficulties in learning that are causing a student to slip behind their peers from a one-off poorer score. If you intervene on everyone who shows poor progress in each six-week testing period then you’ll over-intervene with those who don’t really need outside-class support, spreading your resource too thinly rather than concentrating it on the smaller group who really do need help.
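
A quick simulation illustrates the point, under invented numbers for a pupil’s stable underlying attainment and the measurement noise of a short test:

```python
# A minimal simulation of why short tests cannot reliably detect 'falling
# behind' over six weeks. All numbers are invented for illustration.
import random

random.seed(1)
TRIALS = 10_000
TRUE_SCORE = 60   # pupil's stable underlying attainment (%)
NOISE_SD = 8      # plausible measurement error on a one-hour test
FLAG_BELOW = 50   # school flags any test score below this

false_alarms = sum(
    (TRUE_SCORE + random.gauss(0, NOISE_SD)) < FLAG_BELOW
    for _ in range(TRIALS)
)
print(f"Flagged despite no real problem: {false_alarms / TRIALS:.1%}")
# With these numbers, roughly 1 test in 10 triggers an intervention for a
# pupil whose underlying attainment never changed -- so across six data
# drops a year, about half of such pupils get flagged at least once.
```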

There is an alternative. The most forward-thinking leadership teams in schools I have met start by planning what sorts of actions they need information for. Starting with this perspective yields a desire to seek out leading indicators that suggest a student might need some support, before the damage to attainment kicks in. Matthew Evans has a nice blog post where he describes how and why he is trying to prioritise ‘live’ data collection over ‘periodic’ data. Every school’s circumstances are slightly different, but the cycle of learning isn’t so unique. Here is some data that really could lead to actionable changes to improve learning in schools (a toy flagging sketch follows the list):

  1. Which parents do I need to send letters or request meetings about poor school attendance? Data needed = live attendance records. See Stephen Tierney’s blog on how to write an effective letter home to parents.
  2. Which classes do I need to observe to review why school behaviour systems are not proving effective and support the teacher in improving classroom behaviour? Data needed = live behaviour records, logged as a simple code as incidents occur. (Combined with asking teachers how you can help, of course!)
  3. Which students now need an accelerated assessment of why they are not coping with the classroom environment, perhaps across several classrooms? Data needed = combining live behaviour records with periodic student or staff surveys of effort in class, attitudes to learning, levels of distraction. Beware! A music teacher should not be expected to do this for 400 students or for 20 individual classes. Concentrate on deep assessment of newly arrived year groups with simple ‘cause for concern’ calls for established students.
  4. How many students must I create provision for who have specific deficiencies in prior knowledge or skills that will make classes inaccessible? Data needed = periodic assessments of a set of narrowly defined skills – e.g. at the start of secondary school these might be fluency in number bonds, multiplication, arithmetic routines, clear handwriting, sufficiently fast reading speed, basic spelling and grammar. SATs and CAT tests are very poor proxies for these competencies that do not allow for efficiently targeted interventions.
  5. Which students might need alternative provision in place to complete homework? Data needed = live homework records if they are collected, or a periodic survey of homework completion. If centralised systems do not exist, do not ask every teacher to enter a data point for every student they teach when a simple ‘cause for concern’ call will suffice. Many schools are now organising an early parents’ evening to bring families where homework is an issue into school to find out why. For parents who themselves did not enjoy school, this early conversation might be enough for them to feel motivated to support their own children in completing homework. Otherwise, silent study facilities should be put in place.
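
As promised, here is a minimal sketch of what leading-indicator flagging might look like for item 1 (attendance). The data structure, names and the 90% threshold are all assumptions made for illustration, not a recommendation of any particular system.

```python
# Flag pupils whose live attendance is drifting, before end-of-year
# attainment data can reveal the damage. Threshold and names are invented.
from typing import Dict, List

def pupils_to_contact(attendance: Dict[str, List[bool]],
                      threshold: float = 0.90,
                      window: int = 20) -> List[str]:
    """Return pupils whose attendance rate over the last `window` school
    days (True = present) has fallen below `threshold`."""
    flagged = []
    for pupil, record in attendance.items():
        recent = record[-window:]
        if recent and sum(recent) / len(recent) < threshold:
            flagged.append(pupil)
    return flagged

# Example: one pupil slipping, one fine (last 10 school days shown).
records = {
    "pupil_a": [True] * 7 + [False] * 3,  # 70% recently -- contact home
    "pupil_b": [True] * 10,               # 100% -- no action needed
}
print(pupils_to_contact(records, window=10))  # ['pupil_a']
```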

Measuring attainment is like a rain collection device that tells us how much it has rained in the past. An action-orientated data collection approach requires us to create barometers – devices that tell us we may have a problem before the damage is done.

Attainment data is useful for retrospective monitoring, but less useful for helping senior leaders choose what to do next. Of course, this doesn’t mean that teachers should neglect to check that students seem to be learning what is expected of them in day-to-day lessons. But for management, it simply isn’t straightforward to generate frequent, reliable, summative assessment data across most subjects. And even if it were, once the attainment data reveals that a student or class has a problem, the problem has already been going on for some time. Attainment data is a lagging indicator that a student or staff member had a problem. Poor attainment data often comes too late. The trick is to sniff out the leading indicators that tell leaders where to step in before the damage is done.

Meaningless data is meaningless

It’s not easy to contribute to a government report with recommendations when your modus operandi is explaining what’s gone wrong in schools and then declaring it tricky to fix. But making data work better in schools is what I, alongside a dozen teachers, heads, inspectors, union representatives and government officials, was asked to write about.

Our starting point was observing the huge range of current practice in schools, from the minimalist approach of spending little time collecting and analysing data through to the large multi-academy trust with automated systems of monitoring everything right down to termly item-level test scores.

Whilst we could all agree that these extremes – the ‘minimalist’ and ‘automated’ models of data management – were making quite good trade-offs between time invested and inferences made, something was going very wrong somewhere in the middle of the data continuum. These are the schools without the resources and infrastructure to automate all data collection, and which therefore require teachers and senior leaders to spend hours each week submitting unstandardised data for little gain.

And herein lies one problem… in the past we’ve told schools to collect data once and use it again and again in as many systems as possible: to report to RSCs, Ofsted, governing boards, parents and pupils, and in teacher performance management. But this assumes that data is impartial – that it measures the things we mean it to measure, with precision and without bias. On the contrary, data is collected by humans and so is shaped by the purposes to which the people collecting it believe it will be put.

Our problems with data are not just lack of time. We could spend every day in schools collecting and compiling test score data in wonderful automated systems, but we’d still be constrained in how we were able to interpret the data. When I talk to heads, I often use the table below to frame a conversation about how little the tests they are using can actually tell them.

| Purpose | Teacher-designed test used in one school | Teacher-designed test used in 5 schools | Commercial standardised test |
| --- | --- | --- | --- |
| To record which sub-topics students understand | Somewhat | Somewhat | Rarely |
| Report whether student is likely to cope with next year’s curriculum | Depends on test and subject | Depends on test and subject | Depends on test and subject |
| Measure pupil attainment, relative to school peers | Yes, noisily | Yes, noisily | Yes, noisily |
| Measure pupil progress, relative to school peers | Only for more extreme cases | Only for more extreme cases | Only for more extreme cases |
| Check if student is ‘on track’ against national standards | No | Not really | Under some conditions |
| Department or teacher performance management | No | No | Under unlikely conditions |

A lot of data currently compiled by schools is pretty meaningless because of the inherent challenges in measuring what has been learnt. Everyone involved in writing the report agreed that meaningless data is meaningless. Actually, it’s worse than meaningless because of the costs involved in collecting it. And it’s worse than meaningless if teachers then feel under pressure from data that doesn’t reflect their efforts, their teaching quality, or what students have learnt.

Education is a strange business, and it doesn’t tend to work out well when we try to transplant ideas from other industries. Teachers aren’t just sceptical about data because they are opposed to close monitoring; they simply know the numbers on a spreadsheet are a rather fuzzy image of what children are capable of at any point in time. If only we could implant electronic devices inside children’s brains to monitor exactly how they had responded to the introduction of a new idea or exactly what they could accurately recall on any topic of our choosing! This might sound a ludicrous extension of the desire to collect better attainment data, but it serves as a reminder of how incredibly complex – messy, even – the job of trying to educate a child is.

The challenge for the group who wrote this report is that research doesn’t help us decide whether some of the most common data practices in schools are helpful or a hindrance. For example, there is no study that demonstrates introducing a data tracking system tends to raise academic achievement; equally, there is no study that demonstrates it does not! Similarly, whilst the use of target grades is now widespread in secondary schools, their use as motivational devices has not yet been evaluated. Given that the education community appears so divided in its perceptions about the value of data processes in schools, it seems that careful scientific research in a few key areas is now the only way we can move forward.

More research needed! How could an academic conclude anything else?


Read our report by clicking on the link below:

Making data work

The pupil premium is not working (part III): Can within-classroom inequalities ever be closed?

On Saturday 8th September 2018 I gave a talk to researchED London about the pupil premium. It was too long for my 40-minute slot, and the written version is similarly far too long for one post. So I am posting my argument in three parts [pt I is here and pt II is here].

I used to think social inequalities in educational outcomes could be substantially reduced by ensuring everyone had equal access to our best schools. That is why I devoted so many years to researching school admissions. Our schools are socially stratified, and those serving disadvantaged communities are more likely to have unqualified, inexperienced and non-specialist teachers. We should fix this, but even if we do, these inequalities in access to experienced teachers are nowhere near stark enough to make a substantial dent in the attainment gap. In a rare paper to address this exact question, Graham Hobbs found that just 7% of social class differences in educational achievement at age 11 can be accounted for by differences in the effectiveness of schools attended.

Despite wishing it weren’t true for the past 15 years of my research career, I have to accept that inequalities in our schooling system largely emerge between children who are sitting in the same classroom. If you want to argue with me that it doesn’t happen in your own classroom, then I urge you to read the late Graham Nuthall’s book, The Hidden Lives of Learners, to appreciate why you are (probably) largely unaware of individual student learning taking place. This makes uncomfortable reading for teachers and presents something of an inconvenience to policy-makers because it gives us few obvious levers to close the attainment gap.

So, what should we do? We could declare it all hopeless because social inequalities in attainment are inevitable. Perhaps they arise through powerful biological and environmental forces that are beyond the capabilities of schools to overcome. If you read a few papers about genetics and IQ it is easy to start viewing schools as a ‘bit part’ in the production of intelligence. However, at least for me, there is a ray of hope. For these studies can only tell us how genetic markers have correlated with educational success in the past, without reference to the environmental circumstances that have allowed these relationships to emerge. Similarly, children’s home lives heavily influence attainment, but how we organise our schools and classrooms is an important moderator of how and why that influence emerges. Kris Boulton has written that he now views ‘ability’ as something that determines a child’s sensitivity to methods of instruction; so the question for us should be which classroom instructional approaches help those children most at risk of falling behind.

Having made it this far through my blogs, I suspect you are hoping for an answer as to what we should do about the attainment gap. I don’t have one, but I am sure that if there were any silver bullets – universal advice that works in all subjects across all age ranges – we would have stumbled on them by now. Instead, I’d like to take the final words to persuade you that our developing understanding of the human mind provides teachers with a useful language for thinking about why attainment gaps emerge within their own classrooms. Whether or not they choose to do anything about that is another matter entirely.

Focusing on inequalities in cognitive function rather than socio-economic status

In earlier blogs I have argued that noting the letters ‘PP’ on seating plans does not provide teachers with useful information for classroom instruction. Labelling students by their educational needs is helpful (and essential for secondary teachers who encounter hundreds of children each week), and I think paying attention to variation in cognitive function within a class has far more value than noting pupil premium status. Cognitive functions are top-down processes, initiated from the pre-frontal cortex of the brain, that are required for deliberate thought processes such as forming goals, planning ahead, carrying out a goal-directed plan, and performing effectively.

The neuroscience of socio-economic status is a new but rapidly growing field and SES-related disparities have already been consistently observed for working memory, inhibitory control, cognitive flexibility and attention. There is much that is still to be understood about why these inequalities emerge, but for a teacher faced with a class to teach, their origins are not particularly important. What matters is that they use instructional methods that give students in their class the best possible chances of success, given the variation in cognitive function they will possess.

Implications for the classroom

Unfortunately, translating this knowledge about social inequalities in cognitive function into actionable classroom practice is difficult and depends rather on the subject and age of the children you teach. Maths teacher-bloggers find cognitive load theory insightful; teachers of other subjects less so. This is because developing strategies to overcome limitations in working memory through crystallised knowledge is more productive in hierarchical knowledge domains (maths, languages, handwriting, etc.) where the benefits of accumulating knowledge and fluency in a few key areas spill across the entire curriculum.

That said, I think social inequalities in attention and inhibitory control affect almost all classroom settings. Attention is the ability to focus on particular pieces of information by engaging in a selection process that allows for further processing of incoming stimuli. Again, this is a young field but there are studies (e.g. here and here) that suggest it is a very important mediator in the relationship between socio-economic status and intelligence.

When you see a child who is not paying attention in class, what are they attending to? Graham Nuthall’s New Zealand studies showed how students live in a personal and social world of their own in the classroom:

They whispered to each other and passed notes. They spread rumours about girlfriends and boyfriends, they organised their after-school social life, continued arguments that started in the playground. They cared more about how their peers evaluated their behaviour than they cared about the teacher’s judgement… Within these standard patterns of whole-class management, students learn how to manage and carry out their own private and social agendas. They learn how and when the teacher will notice them and how to give the appearance of active involvement. They get upset and anxious if they notice that the teacher is keeping more than a passing eye on them.

We tend to assume that attentiveness is an attribute of the child, rather than something it is our job to manipulate. Teacher and psychology researcher Mike Hobbiss says we should instead view ‘paying attention’ as an outcome of instructional methods. In a blog post he urges us to create classroom conditions that are likely to engender focused attention by making our stimuli as attractive as possible and by reducing other distractors. We could do this by having students face the front, controlling low-level disruption, removing mobile phones and fancy stationery, and so on. And since attention is limited (and more so in some children than others), he points out that: ‘capturing attention is not in itself the aim. The goal is to provide the optimal conditions so that attention is captured by the exact stimuli that we have identified as most valuable’.

There are a number of very successful schools I have visited where shutting down the choices about what students get to pay attention to during class is clearly the principal instrument for success. I am glad I have visited them, despite the state of cognitive dissonance they induce in me. On the one hand, I am excited to see schools where the quality of student work is beyond anything I thought it was possible to achieve at scale. On the other hand, their culture violates all my preconceptions about what school should be like. Childhood is for living, as well as for learning, and I find it uncomfortable to imagine my own children experiencing anything other than the messy classrooms of educational, social and interpersonal interactions that I did.

However, I do now think that we have to face up to the trade-offs that exist in the way we organise our classrooms. If we care about closing the attainment gap and we accept the relationship between SES and cognitive function, then surely our first port of call should be to create classroom environments and instructional programmes that prioritise the needs of those who are most constrained by their cognitive function? In many respects, we are still working out what this means for the classroom, but I’m pretty sure that being laissez-faire about what students can choose to pay attention to in class is likely to widen the attainment gap.

Graham Nuthall was not particularly optimistic about disrupting the cultural rituals of our classroom practice to improve what children are able to learn. He believed these rituals persist across generations because we learn about what it means to be a teacher through our own schooling as a child. We have deeply embedded values about the kinds of experiences we want our students to have in our classrooms. For him, the cultural values of teachers are the glue that maintains our schooling system as it is, with the consequence that it entrenches the attainment gaps we’ve always had.

Conclusion

The pupil premium, as a bundle of cash that sits outside general school funding with associated monitoring and reporting requirements, isn’t helping us close the attainment gap. We should just roll it into general school funding, preserving the steep social gradient in funding levels that we currently have. When we teach children from households that are educationally disengaged there is a lot we can do to help by way of pastoral and cultural support. This costs money and monitoring test scores isn’t the right way to check this provision is appropriate.

We shouldn’t ring-fence funds for pupil premium students, not least because they may not be the lowest-income or most educationally disadvantaged students in the school. We should stop measuring and monitoring school attainment gaps because it is a largely statistically meaningless exercise that doesn’t help us identify what is and isn’t working in our school. In any case, ‘gaps’ matter little to students from poorer backgrounds; absolute levels of attainment do.

I understand the argument that marking ‘PP’ on a seating plan or generating a ‘PP’ report introduces a language and focus around helping the most disadvantaged in the school. I have argued that this language is of little value if it distorts optimal decision-making and takes the focus away from effective classroom practice. Instead, by focusing on disadvantage in the classroom – that is, cognitive functions that place students at an educational disadvantage – we have the opportunity to better understand how our choice of instructional methods maximises the chances of success for those most at risk of falling behind. I very much doubt it enables us to close the attainment gap, but I like to think it will help us achieve more success than we’ve had so far.

I am not naive about how hard this is: our teachers have amongst the highest contact hours in the OECD, and this has to change if they are to have the time to modify how they teach. But more importantly, we have to decide that changing classroom practice is something we want to do, even if it disrupts our long-held cultural ideals of what education should look like.

The pupil premium is not working (part II): Reporting requirements drive short-term, interventionist behaviour

On Saturday 8th September 2018 I gave a talk to researchED London about the pupil premium. It was too long for my 40-minute slot, and the written version is similarly far too long for one post. So I am posting my argument in three parts [pt I is here and pt III is here].

Most school expenditure sustains a standardised model of education where 30 children are placed in a room with a teacher (and a teaching assistant if you are lucky). Now, for the government to sustain its pupil premium strategy, it makes schools evidence the impact of their pupil premium spending on attainment. But it’s hard to build evidence for that impact if you’re just spending the cash sustaining a well-established, standardised model. (Unless… you segregate all the pupil premium children into one classroom first… though you really shouldn’t, and I have only come across one school so far that is mad enough to do that.)

Instead, in their efforts to close the gap between students sitting in the same classroom, schools ‘target’ pupil premium students with activities and interventions that sit outside the standard whole-class activities of a school: tutoring, withdrawal from class with teaching assistants, breakfast clubs, extracurricular activities, and so on. Intervention-type activities suit this short-termist funding stream, which is entirely dependent on whether pupil premium eligible students enrol or not. The chart below shows that over half of the 2,500 teachers answering the Teacher Tapp survey app reported that targeted interventions were provided to pupil premium students, a group that I’ve argued do not have a well-defined set of social or educational needs.

[Chart: Teacher Tapp survey responses on targeted interventions provided to pupil premium students]

In the classroom too, pupil premium students frequently receive different treatment. 63% of teachers say they are required to monitor their progress more closely than that of other students; 18% say they mark their books first; two-thirds of secondary teachers are required to mark the status of pupil premium students on their seating plans.

[Chart: Teacher Tapp survey responses on in-class treatment of pupil premium students]

You could argue that all this is, at worst, inefficient both in its choice of activities and targeting of pupils. But headteachers frequently explain to me the ethical dilemmas this raises in their own schools, where pupils in greater need are excluded from clubs or provision in a manner that can be impossible to explain to parents without identifying those who are disadvantaged.

History teacher, Tom Rogers, has written several posts explaining how the pupil premium has pushed ethical boundaries too far. Here he explains:

[Screenshot: Tom Rogers writing in TES on the ethics of pupil premium targeting]

In another post, he describes how it affects classroom teachers:

[Screenshot: Tom Rogers writing in TES on how the pupil premium affects classroom teachers]

At this stage I know there will be some school leaders and consultants thinking “Yes, but you don’t have to do any of these things. You can spend the money supporting interventions and high quality teaching for all those who need them”. In a sense they are right: the pupil premium hypothecation is only notional and nobody asks to see an audit trail of the expenditure. But if this is our best argument for sustaining the pupil premium as it is, then surely we should just roll it into the general schools funding formula with all the other money that disproportionately flows to schools serving disadvantaged communities?

In any case, it takes a brave headteacher and governing body to explain to Ofsted that they choose to spend their pupil premium funding on non-pupil premium students in need. After all, newspaper articles such as this by Louise Tickle in the Guardian constantly remind them that expenditure must raise the attainment of pupil premium children:

[Screenshot: Louise Tickle article in the Guardian on pupil premium spending]

Ofsted comment on pupil premium expenditure and attainment more often than not, even during short inspections. In a sample of 663 Ofsted reports we reviewed from the 2017/18 academic year, 51% mention the pupil premium and well over half of these assert that inspectors can see the monies are being spent effectively!

Where their comments are critical of pupil premium expenditure, they rarely make concrete recommendations that could be useful to anyone, except to the industry of consultants and conferences that help schools solve the riddle of how to spend the pupil premium. These are example quotes from inspection reports (with the one mentioning external review appearing regularly):

  • The school does not meet requirements on the publication of information about the pupil premium spending plan on its website
  • The leaders and managers do not focus sharply enough on evaluating the amount of progress in learning made by the various groups of pupils at the school, particularly the pupils eligible for the pupil premium …
  • An external review of the school’s use of the pupil premium funding should be undertaken in order to assess how this aspect of leadership and management may be improved

Governors are expected to take a central role in relation to monitoring this pot of money (one-third of Ofsted’s pupil premium comments mention governors). Not only must they be trained in how to monitor and evaluate their attainment gap, they should be capable of examining what interventions have been shown to work and be able to analyse pupil attainment data ‘forensically’ (according to an EEF employee quoted in this article).

What should money for disadvantaged pupils be spent on if we want to close the gap?

I have argued that the pupil premium is constructed in a way that encourages interventionist rather than whole class approaches to education improvement, and it does so for a group of students without a well specified set of needs.

Schools that serve more disadvantaged communities do need considerably more money to operate. Their students frequently have greater pastoral needs, and they face higher costs of dealing with safeguarding, attendance and behaviour. Equally, we want these schools to provide rich cultural experiences that their students might not otherwise be able to afford. And yet, many of these things we’d like schools to spend money on aren’t central to the question of how we should spend money to raise attainment (remember, the pupil premium is supposed to be used to raise attainment).

Beyond the obvious provision to help make home life matter less to education (e.g. attendance and homework support), we struggle to make highly evidenced and concrete recommendations, in part because ‘money’ has a poor track record in raising educational standards in general. The Education Endowment Foundation was established alongside the pupil premium with the expectation they would identify effective programmes or widgets that schools could then spend money on. Unfortunately, most trials have shown that programmes are no more effective than existing school practice, and in any case free school meal eligible children do not disproportionately benefit from them.

And if we turn to the bigger picture, there is a large literature on the relationship between money spent and pupil outcomes. This isn’t the place to review the literature, but studies (particularly UK ones) frequently show that money does not matter to pupil attainment as much as we think it should. I wish it did, for that would give us a policy lever to improve education.

Money changes the way we educate. It changes the way that education feels to those involved and it changes the diversity of experiences we can give students in school, but that is a different thing to saying it directly affects how students learn.

The curious question is why money and attainment are not more tightly linked.

I don’t think governments help themselves here when they ring-fence money or give it an expiry date which prevents schools making efficient expenditure decisions. And, as discussed earlier in relation to EEF trials, we simply do not have good evidence that it is possible to go and purchase off-the-shelf programmes that are demonstrably effective.

But equally, schools don’t always spend money in a way that increases test scores because they have other considerations, not least making the lives of their staff more manageable. We know from the IFS paper that the majority of the increase in cash over the Labour Government period (which disproportionately went to disadvantaged schools) was spent on expanding the team of teachers who rarely teach (the senior leadership team), teaching assistants, and general wages.

Equally, from a Teacher Tapp question asked last week, we know teachers in secondary schools would choose to spend money on more classroom teachers, presumably to reduce class sizes. Primary school teachers would elect to have more teaching assistants. Both smaller class sizes and teaching assistants are resources that make the lives of teachers more manageable, but evidence says they have little immediate impact on pupil attainment. They certainly do support the long-term health of the teaching profession, which I believe is the most important determinant of pupil attainment in the future (see my Teacher Gap book). But this money does not buy us better pupil attainment today.

[Chart: Teacher Tapp survey responses on how teachers would choose to spend additional funding]

To be clear, as a parent whose own children are educated in one of the most poorly funded counties in England, I am gravely concerned about how the current funding crisis is damaging both the quality of the experiences they have and the well-being of their teachers. But equally, as a researcher in this field, I would not be able to give a school well-evidenced advice about how to use money to close the attainment gap. I think this is because improved classroom instruction isn’t something it is easy to buy. Is it possible to teach in a way that disproportionately benefits those in the classroom from disadvantaged backgrounds? This is the question that we will turn to in Part III.

What’s coming up…

Part III asks whether within-classroom inequalities can ever be closed

(Punchline for the nervous… No, I don’t think the pupil premium should be removed. I suggest it should be rolled into general school funding.)

The pupil premium is not working (part I): Do not measure attainment gaps

On Saturday 8th September 2018 I gave a talk to researchED London about the pupil premium. It was too long for my 40-minute slot, and the written version is similarly far too long for one post. So I am posting my argument in three parts [pt II is here and pt III is here].

Every education researcher I have met shares a desire to work out how we can support students from disadvantaged backgrounds as they navigate the education system. I wrote my PhD thesis about why school admissions help middle class families get ahead. No politician is crazy enough to do anything about that; but they have been brave enough to put their money where their mouth is, using cash to try to close the attainment gap. This series of blog posts explains why I think the pupil premium hasn’t worked and why it diverts the education system away from things that might work somewhat better. I suggest it is time to re-focus our energies on constructing classrooms that give the greatest chance of success to those most likely to fall behind.

Money, money, money…

We think of attaching money to free school meal students as a Coalition policy, but the decision to substantially increase the amount going to schools serving disadvantaged communities came during the earlier Labour Government. The charts below come from an IFS paper that shows how increases in funding were tilted towards more disadvantaged schools from 1999 onwards. The subsequent ‘pupil premium’ (currently £1,320 for primary and £935 for secondary pupils) really was just the icing on the cake.

[Charts: IFS analysis of school funding increases, tilted towards disadvantaged schools from 1999 onwards]

However, the icing on the cake turned out to have a slightly bitter taste, for it came with pretty onerous expenditure and reporting requirements:

  1. The money must be spent on pupil premium students, and not simply placed into the general expenditure bucket
  2. Schools must develop and publish a strategy for spending the money
  3. Governors and Ofsted must check that the strategy is sound and that the school tracks the progress of the pupil premium students to show they are closing the attainment gap

The pupil premium does not target our lowest income students

Using free school meal eligibility as an element in a school funding formula is a perfectly fine idea, but translating this into a hypothecated grant attached to an actual child makes no sense. The first reason is that free school meal eligibility does not identify the poorest children in our schools. This was well known by researchers at the time the pupil premium was introduced, thanks to a paper by Hobbs and Vignoles showing that a large proportion of free school meal eligible children (between 50% and 75%) were not in the lowest-income households (see chart below from their paper). One reason is that the very act of receiving the means-tested benefits and tax credits that entitle a child to free school meals can raise household income above that of the ‘working poor’.

[Chart: Hobbs and Vignoles analysis of free school meal eligibility and household income]

Poverty is a poor proxy for educational and social disadvantage

Even if free school meal eligibility perfectly captured our poorest children, it would still make little sense to direct resources to these children, since poverty is a poor proxy for the thing that teachers and schools care about: the educational and social disadvantage of families. Children from households that are time-poor, and where the adults haven’t themselves experienced success at school, often do need far more support to succeed at school, not least because:

  • Their household financial and time investment in their child’s education is frequently lower
  • Their child’s engagement in school and motivation could be lower
  • The child’s cognitive function might lead them to struggle (of which more in part 3)

These are social, rather than income, characteristics of the family.

Pupil premium students do not have homogeneous needs

There are pupil premium students who experience difficulties with attendance and behaviour; there are pupil premium students who do not. There are non-pupil premium students who experience difficulties with attendance and behaviour; there are those who do not. Categorising students as a means of allocating resources in schools is very sensible, if done along educationally meaningful lines (e.g. the group who do not read at home with their parents; the group who cannot write fluently; the group who are frequently late to school). Categorising students as pupil premium or not is a bizarre way to make decisions about who gets access to scarce resources in schools.

Yes, there are mean differences by pupil premium status in attendance, behaviour and attainment. However, the group means mask the extent to which pupil premium students are almost as different from each other as they are from the non-pupil premium group of students. The DfE chart below highlights this nicely.

[Chart: DfE distribution of attainment by pupil premium status]

In his book, Factfulness, the late, great Hans Rosling implores us not to overuse this type of analysis of group mean averages to make inferences about the world. He explains that ‘gap stories’ are almost always a gross over-simplification. They encourage us to stereotype groups of people who are not as dissimilar from others as the mean average would have us believe.

Why do we like these ‘gap stories’? We like them because we humans enjoy the pattern-forming that group analysis facilitates, and having formed the gap story, we are naturally drawn to thinking of pupil cases that conform to the stereotypes.

Your school’s gap depends on your non-PP demographic

I’ve explained how the pupil premium group in schools do not have a homogeneous background and set of needs. Students not eligible for the pupil premium are even more diverse.

When we ask schools to monitor and report their pupil premium attainment gap, the size of the gap is largely a function of the demographic make-up of the non-pupil premium students at the school. Non-pupil premium students include the children of bus drivers and bankers; it is harder to ‘close the gap’ if yours are the latter. Many schools that boast a ‘zero’ gap (as did one where I was once a governor) simply recruit all their pupils from one housing estate where all the residents are equally financially stretched and socially struggling, though some are not free school meal eligible. Schools that serve truly diverse communities are always going to struggle on this kind of accountability metric.

Tracking whether or not ‘the gap’ has closed over time is largely meaningless, even at the national level

There are dozens of published attainment gap charts out there, all vaguely showing the same thing: the national attainment gap isn’t closing, or it isn’t closing that much. None of them are worth dwelling on too much since the difference between average FSM and non-FSM attainment is very sensitive to two things that are entirely unrelated to what students know:

  1. We regularly change the tests and other assessments that we bundle into attainment measures at ages 5, 7, 11 and 16. This includes everything from excluding qualifications and changing the coursework or teacher-assessment mix to rescaling the mapping of GCSE grades to numerical values. Generally speaking, changes that disproportionately benefit higher-attaining students widen the gap (a toy illustration follows this list).
  2. The group of students labelled as pupil premium at any point in time is affected by the economic cycle, by changes in benefit entitlements and by changes to the list of benefits that attract free school meals. For example, recessions tend to close the gap because they temporarily bring onto free school meals children whose parents are more attached to the labour market.
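
To make the first point concrete, here is a toy calculation showing how rescaling the grade-to-points mapping moves the measured gap even when every student’s grades stay exactly the same. Both mappings and the grade distributions are invented.

```python
# Same grades, different points mappings, different 'gap'. Everything here
# is invented for illustration.
from statistics import mean

fsm_grades     = ["C", "C", "D", "B", "E"]   # hypothetical FSM group
non_fsm_grades = ["B", "A", "C", "A*", "B"]  # hypothetical non-FSM group

old_points = {"A*": 58, "A": 52, "B": 46, "C": 40, "D": 34, "E": 28}
# A rescaling that stretches the top grades, disproportionately
# benefiting higher attainers:
new_points = {"A*": 70, "A": 60, "B": 50, "C": 40, "D": 32, "E": 24}

for label, points in [("old", old_points), ("new", new_points)]:
    gap = (mean(points[g] for g in non_fsm_grades)
           - mean(points[g] for g in fsm_grades))
    print(f"{label} mapping: gap = {gap:.1f} points")
# old mapping: gap = 10.8 points; new mapping: gap = 16.8 points --
# the 'gap' widens with identical grades underneath.
```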

It is also worth noting that FSM eligibility falls continuously from age 4 onwards as parents gradually choose (or are forced) to return to the labour market. This means comparisons of FSP, KS1, KS2 and KS4 gaps aren’t interesting.

Don’t mind your own school gap

Your school’s attainment gap, whether compared with other schools, compared with your own school over time, or compared across Key Stages, cannot tell you the things you might think it can, for all the reasons listed above.

Moreover, it isn’t possible for a school to conduct the impact analysis required by the DfE and Ofsted to ‘prove’ that their pupil premium strategy is working, for all the usual reasons. Sample sizes in schools are usually far too small to make any meaningful inferences about the impact of expenditure, and no school ever gets to see the counterfactual (what would have happened without the money).
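
A back-of-envelope calculation shows the scale of the problem, assuming for illustration a test score standard deviation of 15 points and a year group containing 20 pupil premium and 40 other pupils:

```python
# Standard error of a difference in group means -- a standard formula,
# with invented numbers for the school.
from math import sqrt

sd, n_pp, n_other = 15, 20, 40
se_gap = sd * sqrt(1 / n_pp + 1 / n_other)
print(f"standard error of the gap: {se_gap:.1f} points")
print(f"95% margin of error: +/- {1.96 * se_gap:.1f} points")
# Roughly +/- 8 points: any year-to-year 'movement' in the school's gap
# smaller than this is indistinguishable from sampling noise, before we
# even worry about the missing counterfactual.
```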

What’s coming up…

Part II explains how reporting requirements drive short-term, interventionist behaviour

Part III asks whether within-classroom inequalities can ever be closed

(Punchline for the nervous… No, I don’t think the pupil premium should be removed. I suggest it should be rolled into general school funding.)