Poor attainment data often comes too late!

It’s time to get positive about data. The right kind of data.

In my blogpost on the question of why we cannot easily measure progress, I explained why short, one-hour tests are rarely reliable enough to tell us anything interesting about whether or not a student has made sufficient progress over the course of a year. This is a source of worry for schools because measuring and reporting pupil progress is hard-baked into our school accountability system. My response about what to do was to tell teachers not to worry too much about progress since attainment is the thing we almost always want to know about anyway. If you still think that ‘progress’ is a meaningful numerical construct, I’d urge you to take a look at Tom Sherrington’s blog post on the matter.

I’ve since become even more convinced, through conversations with Ben White, that measuring pupil progress is worse than irrelevant: he pointed out to me that intervening on progress data is frequently unjust and disadvantages those who have historically struggled at school. Suppose you find two students who get 47% in your end of Year 7 history test. It isn’t a great score and suggests they haven’t learnt many parts of the year’s curriculum sufficiently well. Will you intervene to give either of them support? The response in many secondary schools nowadays would be to interpret the 47% in relation to their Key Stage 2 data. For the student who achieved good scaled scores at age 11, of around 107, the 47% suggests they are not on track to achieve their predicted GCSE results and so will make a negative contribution to Progress 8. They are therefore marked down for intervention support. The other student left primary school with scaled scores around 94, so despite their poor historical knowledge at the end of Year 7, they are still on track to achieve their own predicted GCSE results. No intervention necessary here.

It strikes Ben (and me) as deeply unjust that those who, for whatever reason (chance, tutoring, a high-quality primary school, and so on), got high Key Stage 2 scores are then more entitled to support than those who have identical attainment now but who once held lower Key Stage 2 scores. It would seem to entrench pre-existing inequalities in attainment. For me, the only justification for this kind of behaviour is some sort of genetic determinism, where SATs scores are treated as a proxy for IQ and we should make no special effort to help students break free of the pre-determined flightpaths we’ve set up for them. Aside from questions of social justice, it makes no sense to expect pupil attainment to follow these predictable trajectories: it simply won’t, regardless of how much you wish it would.
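
To make the unfairness concrete, here is a minimal sketch of the kind of flight-path logic described above. The mapping from a KS2 scaled score to an “expected” Year 7 percentage is invented purely for illustration; it is not any real school’s or tracking system’s formula.

```python
# Illustrative only: the flight-path mapping below is invented for this
# example and is not a real Progress 8 or school tracking calculation.

def expected_year7_pct(ks2_scaled: float) -> float:
    """Hypothetical flight path: convert a KS2 scaled score into an
    'expected' end-of-Year-7 test percentage."""
    return (ks2_scaled - 80) * 2.0   # purely illustrative linear mapping

def flag_for_intervention(current_pct: float, ks2_scaled: float) -> bool:
    """Flag a student only if they fall below their own flight path."""
    return current_pct < expected_year7_pct(ks2_scaled)

for name, ks2 in [("High KS2 student", 107), ("Low KS2 student", 94)]:
    print(name, flag_for_intervention(47, ks2))
# High KS2 student True   -> marked down for intervention support
# Low KS2 student False   -> no support, despite identical attainment
```

Notice that the rule never asks what either student actually knows now; it only asks whether they are sitting on their own pre-set trajectory.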

But all of that is an aside and doesn’t address the question of what we should do if we find out that a student hasn’t learnt much / has made poor progress / has fallen behind peers / has low attainment [delete as appropriate according to your conception of what you are trying to measure]. The trouble is, by the time an end-of-year test reveals that attainment is poor, the damage has already been done and it is very hard to undo.

The response of most tracking systems to this problem is simply to collect attainment data more frequently, thus bringing forward the point at which the damage can be spotted. The problem with this – apart from the destruction of teachers’ lives through test marking and data drops – is that it is very hard to spot a student falling behind after just six weeks of lessons. Remember that we are uncertain about ‘true’ attainment at each testing point, so a one-hour test can rarely distinguish a genuine learning difficulty that is causing a student to slip behind their peers from a one-off poorer score. If you intervene with everyone who shows poor progress in each six-week testing period, you will over-intervene with those who don’t really need support outside class, spreading your resource too thinly rather than concentrating it on the smaller group who really do need help.
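
A quick simulation illustrates the scale of the problem. The numbers here are assumptions chosen for illustration only (a measurement error of around 8 percentage points on a one-hour test, a genuine six-week slippage of 3 points), not estimates from real data.

```python
import random

random.seed(1)
NOISE_SD = 8     # assumed measurement error of a one-hour test (percentage points)
TRUE_DROP = 3    # assumed genuine slippage over a six-week block
N = 1000         # simulated students, half of whom are genuinely slipping

false_flags = 0  # flagged for intervention despite not slipping
missed = 0       # genuinely slipping but not flagged
for i in range(N):
    slipping = i < N // 2
    true_change = -TRUE_DROP if slipping else 0.0
    # both the baseline test and the six-week test carry their own noise,
    # so the observed *change* between them is doubly noisy
    observed = true_change + random.gauss(0, NOISE_SD) - random.gauss(0, NOISE_SD)
    flagged = observed < 0   # crude rule: any observed fall triggers intervention
    if flagged and not slipping:
        false_flags += 1
    if not flagged and slipping:
        missed += 1

print(f"falsely flagged: {false_flags}/{N // 2}, missed: {missed}/{N // 2}")
```

With numbers like these, roughly half of the students who are not slipping show an apparent fall, while a substantial fraction of those who genuinely are slipping show no fall at all: exactly the over- and under-intervention problem described above.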

There is an alternative. The most forward-thinking leadership teams I have met in schools start by planning what sorts of actions they need information for. Starting from this perspective creates a desire to seek out leading indicators that suggest a student might need some support, before the damage to attainment kicks in. Matthew Evans has a nice blog post where he describes how and why he is trying to prioritise ‘live’ data collection over ‘periodic’ data. Every school’s circumstances are slightly different, but the cycle of learning isn’t so unique. Here is some data that really could lead to actionable changes that improve learning in schools (a rough sketch of how some of these live records might be turned into flags follows the list):

  1. Which parents do I need to send letters to, or request meetings with, about poor school attendance? Data needed = live attendance records. See Stephen Tierney’s blog on how to write an effective letter home to parents.
  2. Which classes do I need to observe to review why school behaviour systems are not proving effective and support the teacher in improving classroom behaviour? Data needed = live behaviour records, logged as a simple code as incidents occur. (Combined with asking teachers how you can help, of course!)
  3. Which students now need an accelerated assessment of why they are not coping with the classroom environment, perhaps across several classrooms? Data needed = combining live behaviour records with periodic student or staff surveys of effort in class, attitudes to learning, levels of distraction. Beware! A music teacher should not be expected to do this for 400 students or for 20 individual classes. Concentrate on deep assessment of newly arrived year groups with simple ‘cause for concern’ calls for established students.
  4. How many students must I create provision for because specific deficiencies in prior knowledge or skills will make classes inaccessible to them? Data needed = periodic assessments of a set of narrowly defined skills – e.g. at the start of secondary school these might be fluency in number bonds, multiplication and arithmetic routines, clear handwriting, sufficiently fast reading, and basic spelling and grammar. SATs and CAT tests are very poor proxies for these competencies and do not allow for efficiently targeted interventions.
  5. Which students might need alternative provision in place to complete homework? Data needed = live homework records if they are collected, or a periodic survey of homework completion. If centralised systems do not exist, do not ask every teacher to enter a data point for every student they teach when a simple ‘cause for concern’ call will suffice. Many schools now organise an early parents’ evening to bring families where homework is an issue into school to find out why. For parents who did not themselves enjoy school, this early conversation might be enough for them to feel motivated to support their own children in completing homework. Otherwise, silent study facilities should be put in place.
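
As flagged above, here is a rough sketch of what turning live records into actionable flags might look like. The data structures and thresholds (90% attendance, five behaviour incidents) are assumptions for illustration, not recommendations, and any real version would sit on top of whatever management information system the school already uses.

```python
from collections import Counter

def attendance_letters(sessions, threshold=0.90):
    """sessions: student -> list of True/False (present/absent), logged live.
    Returns students whose attendance rate has fallen below the threshold."""
    return [student for student, log in sessions.items()
            if sum(log) / len(log) < threshold]

def classes_to_observe(incidents, threshold=5):
    """incidents: list of (class_code, behaviour_code) tuples logged as they occur.
    Returns classes with enough incidents to warrant an observation."""
    counts = Counter(class_code for class_code, _ in incidents)
    return [class_code for class_code, n in counts.items() if n >= threshold]

# Example: two students over twenty registration sessions
sessions = {"Student A": [True] * 18 + [False] * 2,   # 90% attendance
            "Student B": [True] * 16 + [False] * 4}   # 80% attendance
print(attendance_letters(sessions))                   # ['Student B']
```

The point is not the code itself but the shape of the question: each function answers “who do I need to act on this week?” rather than “what grade is this student predicted?”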

Measuring attainment is like installing a rain gauge: it tells us how much it has rained in the past. An action-orientated approach to data collection requires us to build barometers – devices that warn us we may have a problem before the damage is done.

Attainment data is useful for retrospective monitoring, but it is less useful for choosing the best actions for senior leaders to take. Of course, this doesn’t mean that teachers should neglect to check that students seem to be learning what is expected of them in day-to-day lessons. But for management it simply isn’t straightforward to generate frequent, reliable, summative assessment data across most subjects. And even if it were, by the time the attainment data reveals that a student or class has a problem, that problem has already been going on for some time. Attainment data is a lagging indicator that a student or staff member had a problem. Poor attainment data often comes too late. The trick is to sniff out the leading indicators that tell leaders where to step in before the damage is done.
