When would you like to be in a smaller class: age 5 or age 15?

Question: What links GCSE Design and Technology* with my 4 year old’s class size?

Answer: Money. And the choices we’ve made about how to spend it.

We’ve made the strangest resourcing choices in England, although they are so ingrained in our societal norms that it is hard for us to recognise them. Children start school at age 4 – younger than in most other countries in the world – and from day one we place them into classes that are enormous by international standards. It is common for us to have reception classes of 30 – OECD data reports the average in state-funded primary schools is 27.1. In other OECD countries, primary class sizes are typically around 21 students.

Now, of course we aren’t the only country to have large primary classes, but we are distinctive in that our class sizes shrink considerably as children get older. Class size is one of the primary drivers of school funding demands across phases of education. Nobody else makes the same relative funding choices as us.

[Chart: ClassSizes]

In most countries class sizes tend to grow as students get older. Perhaps they form a judgement that older children are able to cope in larger classes. Perhaps they feel that smaller classes are needed for pedagogical approaches more widely used with younger children.

(Now, it’s true that a Reception Year class nearly always has a full-time teaching assistant, but they aren’t trained teachers and it doesn’t compensate for limited physical space that becomes easily jammed as 4 year-olds shuffle between activities. By the age of 8 the full-time teaching assistants are long gone in most schools.)

[Chart: ClassSizesRank]

So what are we buying with the cash we’ve saved through these large primary classes? If I told you secondary class sizes are, on average, under 21 students (OECD figures again), would you believe me? Many secondary teachers I speak to are surprised by this low figure because it doesn’t resonate with the classes they teach. The cash doesn’t reduce class sizes across the board. Instead, it is used to buy us two different things.

First, many schools run tiny lower attainment sets in core subjects. This makes sense for these students, who are struggling to access the GCSE curriculum, and I am pleased schools make this resourcing decision. However, we have to ask how our resource deployment contributed to them arriving unable to access the secondary curriculum at age 11 in the first place. Is it optimal to deliver tiny maths class sizes at age 14 (as we do) or at age 8? I couldn’t tell you, but this is a testable question (and evidence on the benefits of small class sizes suggests deploying them in younger age groups is optimal).

Second, we are relatively unusual by international standards because our elective curriculum starts from age 14 (or even 13). For most schools this means delivering partly empty classes, since students rarely select optional subjects in neat multiples of 30. If you try to preserve free choice, but require class sizes to rise as has happened during austerity, then schools inevitably abandon subjects that I personally think are important, e.g. languages and music.

Most people (except Michael Fordham and me) seem to like subject choice. Students love giving up subjects (or teachers!) they dislike. Teachers like losing students who are uninterested in their subject. As someone who is neither a teacher nor a student any more, I am intrigued by the arguments teachers make for ever earlier curriculum restriction. The new GCSE Geography curriculum may indeed be so deep and complex that it requires a three year programme of study, and yet geography doesn’t appear to be so important that all our future citizens have a right to study it to age 16 (or indeed 14 in many schools now)!

This is not a ‘GCSE reform is due’ blogpost – I’d just like us to talk more about all the trade-offs we make when we allow subject choice at Key Stage 4. Giving greater depth to optional subjects through long GCSE programmes comes with two major costs. It removes study time in the subjects for those who do not continue it after Key Stage 3 and it requires us to fund the smaller class sizes that inevitably arise.

I know… I know… you are thinking you’d like to preserve the status quo in secondaries of small Key Stage 4 class sizes and reduce class sizes in infants. But resourcing education is all about trade-offs, and often these trade-offs need to happen within education, rather than between education and other parts of the economy. If we want smaller classes in infants then we have to think about whether we are prepared to give up anything else to achieve it.

This isn’t an adjustment that the education system could ever make on its own because it means taking cash away from some schools and giving it to others. Why would secondary teachers sign up for a reform that delivers larger class sizes that include students who would give up studying their subject given half the chance? I’m not saying we should enact this re-distributive policy, but I’d like us to have a fuller conversation about what sort of evidence would help us to make optimal resourcing decisions across phases.


*…or any other GCSE optional subject

It’s not (just) what teachers know, it’s who teachers know

I have been talking to many teachers and school leaders recently about what information needs to be recorded, whether in a markbook or in a centralised system, for a teacher to teach effectively. The answer is, partly, that it depends on what information the teacher is able to hold in their head, without the need for taking notes! A primary school teacher who spends 25 hours a week with the same 30 children has a rather easier job here than the secondary school music teacher who sees over 300 students pass through their classroom each week.

I recently dropped in to see a school where the need for written documentation was about as low as it is possible for it to be: a one-form entry primary school, very low family mobility, stable teaching staff, a headteacher who can know all the students and their parents by name. We spoke for some time about what he thinks his teachers need to ‘know’ about students to do their job – the importance of nuanced views of what a child can already do, how difficult students are likely to find a new task they encounter, and how best to engage the child in learning. We spoke about how he thinks teachers accumulate this information about their students over the course of a year, and what is lost when students move to a new class in September.

He then mentioned in passing that they had decided to keep a number of classes with the same teacher last September. ‘So powerful to start the year already knowing your students!’, he said. In reply, I told him about a recent US research study that backs up his intuition. American educators call the policy of keeping students with the same teacher for a second academic year looping – what a great phrase! The study (£) in elementary schools showed small academic gains from keeping students with the same teacher for a second year. It is important to note that the effect size here isn’t massive, but in education policy we are almost always in the business of marginal gains.

Of course, the popularity of looping rather depends on having a pool of consistently effective teachers. In my family, we often still talk about one person’s disastrous three-year ‘looping’ experience as an infant pupil with an ineffective teacher. Looping practice for up to eight years in Steiner (Waldorf) Schools is said to lead to parents removing their children en masse from one class if they aren’t happy with their teacher.

Studies on the benefits of looping serve to remind us about the importance of teachers knowing their classes. Secondary schools make an effort to loop in years 10 and 11, but perhaps they should seek to extend this looping back into the younger years too. Other practices, such as ensuring form tutors also get to teach their classes or minimising the incidence of ‘split’ classes, would seem to be important too but are increasingly hard to achieve where tight budgets leave no flexibility in timetabling arrangements. More controversially, this evidence highlights one difficulty with job share arrangements in primary schools, where the part-time teachers necessarily take longer to get to know their class at the start of the year.

Other commentators have rightly drawn parallels with another study where elementary school teachers specialised in (usually) two of maths, science, English or social science and taught these subjects across multiple classes. The effect of this subject specialism was to lower pupil achievement. The author reported that “… teacher specialization, if anything, decreases student achievement, decreases student attendance, and increases student behavioural problems.”

Now, this wasn’t proper subject specialism that included training to become specialists: headteachers at each school simply helped to identify who should be allocated to specialise in which subject. That said, I interpret this second study as showing that we might want to think again about the trade-offs between having teachers who are subject experts, able to benefit from both disciplinary expertise and repeating the same lessons, and teachers who are experts in the students they teach. It is inevitably hard to be an expert in both.

In England, our schooling careers are U-shaped with respect to whether teachers know us well or not. Our youngest and oldest students benefit from a few teachers who get to know them very well. By contrast, between the ages of 11 and 14, students troop between a dozen different teachers each week. Are we sure we always get the trade-offs right? For example, how did we decide it was optimal to have one generalist teacher for ten year-olds, followed by ten subject specialists for eleven year-olds? Did middle schools who chose to run a core part of each teaching day with the form teacher get something right? And whilst history teachers love not having to teach religious studies and physics teachers love not having to teach biology, how far should a school fractionalise a pupil’s timetable before it becomes damaging to their academic and pastoral experience? These are empirical questions that we cannot yet answer.

It is great that so much policy energy has been focused on a more sophisticated understanding of the curriculum, of what makes subject knowledge domains distinctive and of what this implies for subject-specialist pedagogy. We should harness this sophisticated understanding of the number of hours it would take to train a teacher to deliver a particular curriculum, to a particular age group, with resources we may or may not have provided for them, to think deeply about whether we’ve always got the trade-offs right between becoming specialists in subjects or specialists in children.

If CPD is so important, then why is so much of it so bad?

Towards the end of last year I took part in a debate about the quality of CPD. I was asked to take one side of the argument, so this is my deliberately one-sided perspective on it. The wonderful people of edu-twitter helped me compile the bizarre examples of CPD that you’ll read below.

Everybody remembers their worst ever INSET day, don’t they? I remember being excited to take part in one early on in my PGCE. This was in the early 2000s, so the National Strategies were landing in Key Stage 3. The Local Education Authority’s maths officer came to the school to help every teacher embed numeracy in the Key Stage 3 curriculum. It was a strange morning for us, as a department (business/economics) who taught no Key Stage 3 and yet weren’t allowed to get on with other things. It was also a strange morning for the maths department, who presumably had already embedded numeracy in their curriculum. And I’d guess at least half the other departments were pretty annoyed to have to think of a contrived way to mention numbers during their lessons. Whose fault was it that so few teachers got anything out of that day? Should the headteacher have allowed it to happen? Should the LEA have known better than to blithely follow DfE ideas about numeracy at KS3? And where was teacher agency in the decision about allocating INSET time to this?

I asked teachers on twitter to name their worst ever CPD, and it quickly became clear they had examples far worse than mine:

It was one which postulated the theory that we have 5 brains, one of which developed when humans were running away from dinosaurs. I kid you not.— David Williams (@davowillz) October 20, 2018

A day on packtypes. We all answered questions about ourselves and others to find out what dog we were. Then we made bunting to go around school to show our packtypes. I have since read the research behind it but the watered down approach did not communicate anything useful— Mrs Cadman (@Y6Mrs) October 20, 2018

The worst I can recall was about 25 years ago at least on ‘Instrumental Enrichment’. I had no idea what it meant and even less after the CPD.— Gradgrind (@ThomGradgrind) October 20, 2018

It was something to do with animals representing different types of learners. The CPD was ten minutes and we were then given a week’s deadline to ’embed the animals in schemes of learning’.— Deborah Hawkins. (@debbs_198) October 20, 2018

In groups, we had to spend an hour making a 2 minute, creative video about one of the schools values. It was shown to the other staff on the day, but no discernible purpose – even when I asked.— Stuart Garner (@SJGarner76) October 20, 2018

All of the PD events on questioning… Where no actual questioning takes place. And instead mind numbing droning for 1-2 hours. #HowNotToTeach— Mr. OatesSoSimple (@MrOatesSoSimple) October 20, 2018

Now twitter trades on anecdotes so let’s turn to a slightly larger pool of teacher opinion on Teacher Tapp. Less than a third of classroom teachers agree with the following statement: ‘Time and resources allocated to professional development are used in ways that enhance teachers’ instructional capabilities.’ (Of course, the vast majority of senior leaders agreed with the statement!) We consistently see that it is secondary school classroom teachers that are the most negative about their experiences of CPD, perhaps because so little is subject specific. For example, 40% of them feel that CPD has had little or NO impact on their classroom practice. Many classroom teachers also report that abandoning INSET days would have no effect on their teaching!

This presents us with a serious question to answer: If professional development is so important, then why is so much CPD so bad?

I think it is reasonable to suggest that the people who commission training, particularly in secondary schools, either don’t know how to find good provision or are severely resource constrained, which in turn leads them to deliver quite generic whole school training.

However, I’d like us to consider an alternative explanation: that ‘we’, as a profession, don’t really know how to do CPD. This reminds me of the arguments as to why the NHS does not fund greater mental health provision. Of course it is, in part, because it lacks the funds. But it is also because we haven’t worked out any scalable, cost-effective means of treatment yet. Just as doctors don’t reliably know how to make unhappy people happier, education professionals don’t reliably know how to make teachers more effective.

One promising vein of research has been into instructional coaching, where a number of randomised controlled trials have estimated positive programme effects. Does this mean all teachers should have instructional coaches who are trained in how to use the observational rubrics to give feedback? No. Coaching is a great, but expensive, way to get teachers from being not OK at teaching to being good. Simple rubrics can’t support coaches in taking teachers from good to great because this is likely to be highly sensitive to the curriculum and demographics of students.

Moreover, even for inexperienced teachers, the expense means we have to ration the treatment. As with one-to-one therapy or tuition, we withhold it from most teachers and hope they figure out how to get better through a cheaper method – such as reading books or trial and error.

We have some of the best brains in the education system thinking about how best to spend CPD money (e.g. David Weston, the people at Institute for Teaching). However, I wonder whether there are even more important things to worry about first. I’d start with worrying about whether schools can provide teachers with a stable curriculum, with assignment to appropriate classes, with a healthy approach to workload to give teachers the space to think about their teaching, and of course a culture where teachers can teach and are encouraged to get better at teaching on their own.

When I think of the incredible teachers I’ve watched over the past few years, I wonder how they got so good at teaching. When I’ve had the opportunity to ask them, they have never volunteered that formal professional development courses made a material contribution. Of course, these teachers who made it to great without losing morale or leaving first are, alas, the exception. But what makes us think that the courses they didn’t use to successfully get better at teaching will be helpful to others who want to get there too?!?

I think that supporting teachers in getting better at teaching is critical to teacher morale and the long-term health of the schooling system. This is the central thesis of our book, The Teacher Gap.

I just don’t know whether we’ve found our medicine.


It was fun arguing with people from across the world about how we get professional development right. Somebody on twitter pointed me to this gem of a quote that will always make me smile:

I hope I die during an in-service session because the transition between life and death would be so subtle.

Helen Timperley (2011) quoting an anonymous teacher

New grammar school rules, OK?

Sigh! New year, new grammar school paper. This time a HEPI paper by an ex-Civil Servant, Iain Mansfield, who has turned his hand to quantitative social research, starting with one of the most complex questions it is possible to devise. Thankfully I missed most of the commentary on it (bout of tonsillitis). If you didn’t, Lindsey Macmillan et al. have written one response. However, tonight I am better, and from my quick read of the HEPI report it is immediately clear how poor much of the analysis is. Sigh again! I am determined that we don’t have to go through this annual charade ad infinitum.

So I’ve got two rules that I think would help calm the debate and raise the quality of the argument that we are having about what age we should allow academic selection.

Rule 1: You can’t publish research yourself on the question of the causal impact of academic selection until you have passed a test to show you understand why the existing literature is so complex. Seriously, there is a reason why academics are forced to summarise the existing literature before they are allowed to publish their own findings! You need to be able to answer questions like:

  • From the following set of papers, which explicitly (a) acknowledge and (b) attempt to deal with the fact that over 20% of students at grammar schools live in a different local authority?
  • What are the consequences of ignoring the fact that 12% of students at grammar schools transferred from private primaries? Name five challenges that researchers face in incorporating private schools into analysis.
  • Contrast at least two different approaches to constructing the set of pseudo secondary-moderns that have been used in the literature to-date. What are the pros and cons of these approaches?
  • Manning and Pischke argued the Galindo-Rueda and Vignoles paper was invalid by invoking what seemed to be a neat falsification test. What was the test, and under what sorts of assumptions would it have been valid?

Rule 2: You cannot be a public commentator on a piece of ‘research’ about the causal impact of grammar schools unless you can first answer the following questions about the research you plan to comment on.

  • Have the authors acknowledged that large numbers in grammar schools live in different, and usually non-selective, local authorities and do you understand how they have dealt with this problem?
  • Have the authors acknowledged that the presence of grammar schools distorts the nature of local private schools? Have they dealt with this (e.g. how are private school students included in their analysis groups)?
  • When considering the impact of selective areas as a whole, how do they define non-selective schools in selective areas, i.e. the group of schools that students are going to who fail the 11+? (Top tip – most are not categorised as secondary moderns in DfE databases.)
  • What is their counterfactual to living in a selective local authority? How have they ensured the types of families and students who are living in the counterfactual areas are similar?

There are plenty of public commentators who are capable of reading research carefully enough that they can meet Rule 2. You can’t screen them by their job title or fancy letters before or after their name. That’s why we need new rules. And these rules only apply to impact analysis of the sort that HEPI published today. Very happy to have people writing and talking qualitatively about the system. These perspectives are also important. Moreover, there are many, many questions where basic exploratory analysis is really interesting. It’s what I like to do most days. The causal impact of academic selection at the age of 11 isn’t one of them.

Sorry if this all sounds exclusive but… well… grammar schools are exclusive and they get great results. That’s the point! So let’s make the conversation about their impact on the system a little more sophisticated and maybe we’ll get a better result too.

Poor attainment data often comes too late!

It’s time to get positive about data. The right kind of data.

In my blogpost on the question of why we cannot easily measure progress, I explained why short, one-hour tests are rarely reliable enough to tell us anything interesting about whether or not a student has made sufficient progress over the course of a year. This is a source of worry for schools because measuring and reporting pupil progress is hard-baked into our school accountability system. My response about what to do was to tell teachers not to worry too much about progress since attainment is the thing we almost always want to know about anyway. If you still think that ‘progress’ is a meaningful numerical construct, I’d urge you to take a look at Tom Sherrington’s blog post on the matter.

I’ve since become even more convinced that measuring pupil progress is worse than irrelevant through conversations with Ben White, who pointed out to me that intervening on progress data is frequently unjust and disadvantages those who have historically struggled at school. Suppose you find two students who get 47% in your end of year 7 history test. It isn’t a great score and suggests they haven’t learnt many parts of the year’s curriculum sufficiently well. Will you intervene to give either of them support? The response in many secondary schools nowadays would be to interpret the 47% in relation to their Key Stage 2 data. For the student who achieved good scaled scores at age 11 of around 107, the 47% suggests they are not on track to achieve predicted GCSE results and so will make a negative contribution to Progress 8. They are therefore marked down for intervention support. The other student left primary school with scaled scores around 94, so despite their poor historical knowledge at the end of Year 7, they are still on track to achieve their own predicted GCSE results. No intervention necessary here.

It strikes Ben (and me) as deeply unjust that those who, for whatever reasons (chance, tutoring, high quality primary school, etc…) get high Key Stage 2 scores are then more entitled to support than those who have identical attainment now, but who once held lower Key Stage 2 scores. It would seem to be entrenching pre-existing inequalities in attainment. For me, the only justification for this kind of behaviour is some sort of genetic determinism, where their SATs scores are treated as a proxy for IQ and we should make no special efforts to help students break free of the pre-determined flightpaths we’ve set up for them. Aside from questions of social justice, it makes no sense to expect pupil attainment to follow these predictable trajectories – they simply won’t, regardless of how much you wish they would.
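The logic above can be made concrete with a minimal sketch. The two flagging rules and the KS2-to-expected-score mapping below are entirely invented for illustration (real tracking systems use their own predicted-grade tables), but they show how two students with identical attainment today receive different intervention decisions once ‘progress’ enters the picture:

```python
# Hypothetical sketch: attainment-based vs progress-based intervention flags.
# The threshold and the expected-score mapping are invented for illustration.

def flag_by_attainment(score_pct, threshold=50):
    """Flag any student whose current test score is below the threshold."""
    return score_pct < threshold

def flag_by_progress(score_pct, ks2_scaled):
    """Flag only students scoring below the level 'predicted' by their KS2 data.

    The mapping here is made up: it linearly converts KS2 scaled scores
    (roughly 80-120) into an expected test percentage.
    """
    expected = (ks2_scaled - 80) * 2   # e.g. 107 -> 54%, 94 -> 28%
    return score_pct < expected

# Two students, both scoring 47% on the end of year 7 history test.
for label, ks2 in [("KS2 scaled 107", 107), ("KS2 scaled 94", 94)]:
    print(label,
          "| attainment flag:", flag_by_attainment(47),
          "| progress flag:", flag_by_progress(47, ks2))
```

Under the attainment rule both students are flagged for support; under the progress rule only the student with the higher KS2 score is, which is exactly the injustice described above.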

But all of that is an aside and doesn’t address the question of what we should do if we find out that a student hasn’t learnt much / has made poor progress / has fallen behind peers / has low attainment [delete as appropriate according to your conception of what you are trying to measure]. The trouble is, by the time we find out that attainment data is poor in an end-of-year test the damage has already been done and it is very hard to un-do.

The response of most tracking systems to this problem is simply to collect attainment data more frequently, thus bringing forward the point where the damage can be spotted. The problem with this – apart from the destruction of teachers’ lives through test marking and data dropping – is that it is very hard to spot the emergence of falling behind after just six weeks of lessons. Remember we have uncertainty about ‘true’ attainment at each testing point, so it is very hard to use a one-hour test to distinguish genuine difficulties in learning that are causing a student to slip behind their peers from a one-off poorer score. If you intervene on everyone who shows poor progress in each six week testing period then you’ll over-intervene with those who don’t really need outside class support, thus spreading your resource too thinly rather than concentrating on the smaller group who really do need help.
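A toy simulation makes the over-intervention point vivid. Every number here is invented (the proportion genuinely falling behind, the ‘true’ scores, the size of the measurement noise), but the structure is the argument: with a noisy one-hour test, a cut-off rule flags large numbers of students who are actually fine.

```python
# Toy simulation (all parameters invented) of flagging students from a
# single noisy test: many flagged students did not need support.
import random

random.seed(1)
N = 1000
P_TRULY_BEHIND = 0.10   # assume 10% of students are genuinely slipping
NOISE_SD = 8            # measurement noise on a 0-100 one-hour test

flagged = 0
false_positives = 0
for _ in range(N):
    behind = random.random() < P_TRULY_BEHIND
    true_score = 40 if behind else 60      # 'true' attainment this term
    observed = true_score + random.gauss(0, NOISE_SD)
    if observed < 50:                      # intervene on anyone under 50
        flagged += 1
        if not behind:
            false_positives += 1

print(f"Flagged {flagged} of {N} students; "
      f"{false_positives} of them did not really need support.")
```

With these (made-up) parameters, roughly as many fine-but-unlucky students get flagged as genuinely struggling ones, which is the thin-spreading of resource described above.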

There is an alternative. The most forward-thinking leadership teams in schools I have met start by planning what sorts of actions they need information for. Starting with this perspective yields a desire to seek out leading indicators that suggest a student might need some support, before the damage to attainment kicks in. Matthew Evans has a nice blog post where he describes how and why he is trying to prioritise ‘live’ data collection over ‘periodic’ data. Every school’s circumstances are slightly different but the cycle of learning isn’t so unique. Here is some data that really could lead to some actionable changes to improve learning in schools:

  1. Which parents do I need to send letters or request meetings about poor school attendance? Data needed = live attendance records. See Stephen Tierney’s blog on how to write an effective letter home to parents.
  2. Which classes do I need to observe to review why school behaviour systems are not proving effective and support the teacher in improving classroom behaviour? Data needed = live behaviour records, logged as a simple code as incidents occur. (Combined with asking teachers how you can help, of course!)
  3. Which students now need an accelerated assessment of why they are not coping with the classroom environment, perhaps across several classrooms? Data needed = combining live behaviour records with periodic student or staff surveys of effort in class, attitudes to learning, levels of distraction. Beware! A music teacher should not be expected to do this for 400 students or for 20 individual classes. Concentrate on deep assessment of newly arrived year groups with simple ‘cause for concern’ calls for established students.
  4. How many students must I create provision for who have specific deficiencies in prior knowledge or skills that will make classes inaccessible? Data needed = periodic assessments of a set of narrowly defined skills – e.g. at the start of secondary school these might be fluency in number bonds, multiplication, arithmetic routines, clear handwriting, sufficiently fast reading speed, basic spelling and grammar. SATs and CAT tests are very poor proxies for these competencies that do not allow for efficiently targeted interventions.
  5. Which students might need alternative provision in place to complete homework? Data needed = live homework records if they are collected, or a periodic survey of homework completion. If centralised systems do not exist, do not ask every teacher to enter a data point for every student they teach when a simple ‘cause for concern’ call will suffice. Many schools are now organising an early parents evening to bring families where homework is an issue into school to find out why. For parents who themselves did not enjoy school, this early conversation might be enough for them to feel motivated to support their own children in completing homework. Otherwise, silent study facilities should be put in place.
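The first item in the list above – letters home triggered by live attendance records – is simple enough to sketch. The data, names and 90% threshold below are all invented for illustration; the point is that a leading indicator like this requires no test, no marking, and no data drop:

```python
# Minimal sketch (invented data and threshold): flag families for a letter
# home once a student's live attendance rate falls below 90%.

attendance_log = {                         # sessions: True = present
    "Aisha": [True] * 18 + [False] * 2,    # 18/20 = exactly 90%
    "Ben":   [True] * 15 + [False] * 5,    # 15/20 = 75%
    "Chloe": [True] * 20,                  # 20/20 = 100%
}

THRESHOLD = 0.90

def needs_letter(sessions, threshold=THRESHOLD):
    """Return True if the attendance rate has fallen below the threshold."""
    rate = sum(sessions) / len(sessions)
    return rate < threshold

to_contact = [name for name, log in attendance_log.items()
              if needs_letter(log)]
print(to_contact)   # only Ben; Aisha sits exactly on 90% so is not flagged
```

Unlike a termly test score, this indicator is available the morning after the absences happen, before any damage to attainment shows up.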

Measuring attainment is like a rain collection device that tells us how much it has rained in the past. An action-orientated data collection approach requires us to create barometers – devices that tell us we may have a problem before the damage is done.

Attainment is useful for retrospective monitoring, but is less useful for choosing optimal actions by senior leadership. Of course, this doesn’t mean that teachers should neglect to check that students seem to be learning what is expected of them in day-to-day lessons. But for management it simply isn’t straightforward to generate frequent, reliable, summative assessment data across most subjects. And even if they could, once the attainment data reveals that a student or class has a problem, it has already been going on for some time. Attainment data is a lagged indicator that a student or staff member had a problem. Poor attainment data often comes too late. The trick is to sniff out the leading indicators that tell leaders where to step in before the damage is done.

Meaningless data is meaningless

It’s not easy to contribute to a government report with recommendations when your modus operandi is explaining what’s gone wrong in schools, then declaring it tricky to fix. But making data work better in schools is what I, alongside a dozen teachers, heads, inspectors, unions and government officials, was asked to write about.

Our starting point was observing the huge range of current practice in schools, from the minimalist approach of spending little time collecting and analysing data through to the large multi-academy trust with automated systems of monitoring everything right down to termly item-level test scores.

Whilst we could all agree that these extremes – the ‘minimalist’ and ‘automated’ models of data management – were making quite good trade-offs between time invested and inferences made, something was going very wrong somewhere in the middle of the data continuum. These are the schools without the resources and infrastructure to automate all data collection, and so they require teachers and senior leaders to spend hours each week submitting un-standardised data for few gains.

And herein lies one problem… in the past we’ve told schools to collect data and use it again and again in as many systems as possible: to report to RSCs, Ofsted, governing boards, parents, pupils and in teacher performance management. But this assumes that data is impartial – that it measures the things we mean it to measure with precision and without bias. On the contrary, data is collected by humans and so is shaped by the purposes to which that human believes it will be put.

Our problems with data are not just lack of time. We could spend every day in schools collecting and compiling test score data in wonderful automated systems, but we’d still be constrained in how we were able to interpret the data. When I talk to heads, I often use the table below to frame a conversation about how little the tests they are using can actually tell them.

| Purpose | Teacher-designed test used in one school | Teacher-designed test used in 5 schools | Commercial standardised test |
| --- | --- | --- | --- |
| To record which sub-topics students understand | Somewhat | Somewhat | Rarely |
| Report whether student is likely to cope with next year’s curriculum | Depends on test and subject | Depends on test and subject | Depends on test and subject |
| Measure pupil attainment, relative to school peers | Yes, noisily | Yes, noisily | Yes, noisily |
| Measure pupil progress, relative to school peers | Only for more extreme cases | Only for more extreme cases | Only for more extreme cases |
| Check if student is ‘on track’ against national standards | No | Not really | Under some conditions |
| Department or teacher performance management | No | No | Under unlikely conditions |

A lot of data currently compiled by schools is pretty meaningless because of the inherent challenges in measuring what has been learnt. Everyone involved in writing the report agreed that meaningless data is meaningless. Actually, it’s worse than meaningless because of the costs involved in collecting it. And it’s worse than meaningless if teachers then feel under pressure from data that doesn’t reflect their efforts, their teaching quality, or what students have learnt.

Education is a strange business, and it doesn’t tend to work out well when we try to transplant ideas from other industries. Teachers aren’t just sceptical about data because they are opposed to close monitoring; they simply know the numbers on a spreadsheet are a rather fuzzy image of what children are capable of at any point in time. If only we could implant electronic devices inside children’s brains to monitor exactly how they had responded to the introduction of a new idea or exactly what they could accurately recall on any topic of our choosing! This might sound like a ludicrous extension of the desire to collect better attainment data, but it serves as a reminder of how incredibly complex – messy, even – the job of trying to educate a child is.

The challenge for the group who wrote this report is that research doesn’t help us decide whether some of the most common data practices in schools are helpful or a hindrance. For example, there is no study that demonstrates introducing a data tracking system tends to raise academic achievement; equally, there is no study that demonstrates it does not! Similarly, whilst use of target grades is now widespread in secondary schools, their use as motivational devices has not yet been evaluated. Given that the education community appears so divided in their perceptions about the value of data processes in schools, it seems that careful scientific research in a few key areas is now the only way we can move forward.

More research needed! How could an academic conclude anything else?


Read our report by clicking on the link below:

Making data work

The pupil premium is not working (part III): Can within-classroom inequalities ever be closed?

On Saturday 8th September 2018 I gave a talk to researchED London about the pupil premium. It was too long for my 40-minute slot, and the written version is similarly far too long for one post. So I am posting my argument in three parts [pt I is here and pt II is here].

I used to think social inequalities in educational outcomes could be substantially reduced by ensuring everyone had equal access to our best schools. That is why I devoted so many years to researching school admissions. Our schools are socially stratified and those serving disadvantaged communities are more likely to have unqualified, inexperienced and non-specialist teachers. We should fix this, but even if we do, these inequalities in access to experienced teachers are nowhere near stark enough to make a substantial dent on the attainment gap. In a rare paper to address this exact question, Graham Hobbs found just 7% of social class differences in educational achievement at age 11 can be accounted for by differences in the effectiveness of schools attended.

Despite wishing it weren’t true for the past 15 years of my research career, I have to accept that inequalities in our schooling system largely emerge between children who are sitting in the same classroom. If you want to argue with me that it doesn’t happen in your own classroom, then I urge you to read the late Graham Nuthall’s book, The Hidden Lives of Learners, to appreciate why you are (probably) largely unaware of individual student learning taking place. This makes uncomfortable reading for teachers and presents something of an inconvenience to policy-makers because it gives us few obvious levers to close the attainment gap.

So, what should we do? We could declare it all hopeless because social inequalities in attainment are inevitable. Perhaps they arise through powerful biological and environmental forces that are beyond the capabilities of schools to overcome. If you read a few papers about genetics and IQ it is easy to start viewing schools as a ‘bit part’ in the production of intelligence. However, at least for me, there is a ray of hope. For these studies can only tell us how genetic markers have been correlated with educational success in the past, without reference to the environmental circumstances that allowed these relationships to emerge. Similarly, children’s home lives heavily influence attainment, but how we organise our schools and classrooms is an important moderator of how and why that influence emerges. Kris Boulton has written that he now views ‘ability’ as something that determines a child’s sensitivity to methods of instruction; so the question for us should be what classroom instructional approaches help those children most at risk of falling behind.

Having made it this far through my blogs, I suspect you are hoping for an answer as to what we should do about the attainment gap. I don’t have one, but I am sure that if there were any silver bullets – universal advice that works in all subjects across all age ranges – we would have stumbled on them by now. Instead, I’d like to take the final words to persuade you that our developing understanding of the human mind provides teachers with a useful language for thinking about why attainment gaps emerge within their own classrooms. Whether or not they choose to do anything about that is another matter entirely.

Focusing on inequalities in cognitive function rather than socio-economic status

In earlier blogs I have argued that noting the letters ‘PP’ on seating plans does not provide teachers with useful information for classroom instruction. Labelling students by their educational needs is helpful (and essential for secondary teachers who encounter hundreds of children each week), and I think paying attention to variation in cognitive function within a class has far more value than noting pupil premium status. Cognitive functions are top-down processes, initiated from the pre-frontal cortex of the brain, that are required for deliberate thought processes such as forming goals, planning ahead, carrying out a goal-directed plan, and performing effectively.

The neuroscience of socio-economic status is a new but rapidly growing field and SES-related disparities have already been consistently observed for working memory, inhibitory control, cognitive flexibility and attention. There is much that is still to be understood about why these inequalities emerge, but for a teacher faced with a class to teach, their origins are not particularly important. What matters is that they use instructional methods that give students in their class the best possible chances of success, given the variation in cognitive function they will possess.

Implications for the classroom

Unfortunately, translating this knowledge about social inequalities in cognitive function into actionable classroom practice is difficult and rather depends on the subject and age of children you teach. Maths teacher-bloggers find cognitive load theory insightful; other subjects less so. This is because developing strategies to overcome limitations in working memory through crystallised knowledge is more productive in hierarchical knowledge domains (maths, languages, handwriting, etc) where the benefits of accumulating knowledge and fluency in a few key areas spill across the entire curriculum.

That said, I think social inequalities in attention and inhibitory control affect almost all classroom settings. Attention is the ability to focus on particular pieces of information by engaging in a selection process that allows for further processing of incoming stimuli. Again, this is a young field but there are studies (e.g. here and here) that suggest it is a very important mediator in the relationship between socio-economic status and intelligence.

When you see a child who is not paying attention in class, what are they attending to? Graham Nuthall’s New Zealand studies showed how students live in a personal and social world of their own in the classroom:

They whispered to each other and passed notes. They spread rumours about girlfriends and boyfriends, they organised their after-school social life, continued arguments that started in the playground. They cared more about how their peers evaluated their behaviour than they cared about the teacher’s judgement… Within these standard patterns of whole-class management, students learn how to manage and carry out their own private and social agendas. They learn how and when the teacher will notice them and how to give the appearance of active involvement. They get upset and anxious if they notice that the teacher is keeping more than a passing eye on them.

We tend to assume that attentiveness is an attribute of the child, rather than something it is our job to manipulate. Teacher and psychology researcher, Mike Hobbiss, says we should instead view ‘paying attention’ as the outcome of instruction methods. In a blog post he urges us to create classroom conditions that are likely to engender the effect of focused attention by making our stimuli as attractive as possible and by reducing other distractors. We could do this by having students face the front, by controlling low-level disruption, and by removing mobile phones and fancy stationery materials, and so on. And since attention is limited (and more so in some children than others), he points out that: ‘capturing attention is not in itself the aim. The goal is to provide the optimal conditions so that attention is captured by the exact stimuli that we have identified as most valuable’.

There are a number of very successful schools I have visited where shutting down the choices about what students get to pay attention to during class is clearly the principal instrument for success. I am glad I have visited them, despite the state of cognitive dissonance they induce in me. On the one hand, I am excited to see schools where the quality of student work is beyond anything I thought it was possible to achieve at scale. On the other hand, their culture violates all my preconceptions about what school should be like. Childhood is for living, as well as for learning, and I find it uncomfortable to imagine my own children experiencing anything other than the messy classrooms of educational, social and interpersonal interactions that I did.

However, I do now think that we have to face up to the trade-offs that exist in the way we organise our classrooms. If we care about closing the attainment gap and we accept the relationship between SES and cognitive function, then surely our first port of call should be to create classroom environments and instructional programmes that prioritise the needs of those who are most constrained by their cognitive function? In many respects, we are still working out what this means for the classroom, but I’m pretty sure that being laissez-faire about what students can choose to pay attention to in class is likely to widen the attainment gap.

Graham Nuthall was not particularly optimistic about disrupting the cultural rituals of our classroom practice to improve what children are able to learn. He believed these rituals persist across generations because we learn about what it means to be a teacher through our own schooling as a child. We have deeply embedded values about the kinds of experiences we want our students to have in our classrooms. For him, the cultural values of teachers are the glue that maintains our schooling system as it is, with the consequence that it entrenches the attainment gaps we’ve always had.

Conclusion

The pupil premium, as a bundle of cash that sits outside general school funding with associated monitoring and reporting requirements, isn’t helping us close the attainment gap. We should just roll it into general school funding, preserving the steep social gradient in funding levels that we currently have. When we teach children from households that are educationally disengaged there is a lot we can do to help by way of pastoral and cultural support. This costs money and monitoring test scores isn’t the right way to check this provision is appropriate.

We shouldn’t ring fence funds for pupil premium students, not least because they may not be the lowest-income or most educationally disadvantaged students in the school. We should stop measuring or monitoring school attainment gaps because it is a largely statistically meaningless exercise that doesn’t help us identify what is and isn’t working in our school. In any case, ‘gaps’ matter little to students from poorer backgrounds; absolute levels of attainment do.
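To see why school-level gap monitoring is so noisy, consider a toy simulation (a minimal sketch; the cohort sizes, score scale and ‘true’ gap of 5 points are illustrative assumptions, not real data). With only a dozen or so pupil premium students in a cohort, the measured gap swings from year to year even when the underlying gap never changes:

```python
import random

def simulated_gap(n_pp=15, n_other=85, true_gap=5.0, sd=15.0, seed=None):
    """Simulate one cohort's measured PP attainment gap.

    Scores are drawn from normal distributions whose means differ by
    `true_gap` points; the 'measured' gap is the difference in sample means.
    All parameters are illustrative, not calibrated to any real assessment.
    """
    rng = random.Random(seed)
    pp = [rng.gauss(50.0 - true_gap, sd) for _ in range(n_pp)]
    other = [rng.gauss(50.0, sd) for _ in range(n_other)]
    return sum(other) / n_other - sum(pp) / n_pp

# Ten simulated cohorts, all with an identical true gap of 5 points.
gaps = [simulated_gap(seed=s) for s in range(10)]
print([round(g, 1) for g in gaps])
```

The standard error of the measured gap here is roughly four points – comparable in size to the gap itself – so a school could ‘close’ or ‘double’ its gap purely by chance, which is why reacting to year-on-year movements in this statistic tells us little about what is working.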

I understand the argument that marking ‘PP’ on a seating plan or generating a ‘PP’ report introduces a language and focus around helping the most disadvantaged in the school. I have argued that this language is of little value if it distorts optimal decision-making and takes the focus away from effective classroom practice. Instead, by focusing on disadvantage in the classroom – that is, cognitive functions that place students at an educational disadvantage – we have the opportunity to better understand how our choice of instructional methods maximises the chances of success for those most at risk of falling behind. I very much doubt it enables us to close the attainment gap, but I like to think it will help us achieve more success than we’ve had so far.

I am not unrealistic about how hard this is: our teachers have amongst the highest contact hours in the OECD and this has to change if they are to have the time to modify how they teach. But more importantly, we have to decide that changing classroom practice is something we want to do, even if it disrupts our long-held cultural ideals of what education should look like.