What if we cannot measure pupil progress?

Testing and recording what students know and can do in a subject has always been part of our education system, especially in secondary schools where teachers simply cannot hold in their heads accurate information about the hundreds of students they encounter each week. However, measuring progress – the change in attainment between two points in time – seems to be a rather more recent trend. The system – headteachers, inspectors, advisors – often wants to measure something quite precise: has a child learnt enough in a subject this year, relative to other children who had the same starting point?

The talks I have given recently at ResearchED Durrington and Northern Rocks set out why relatively short, standardised tests that are designed to be administered within a 45-minute or one-hour lesson are rarely going to be reliable enough to infer much about individual pupil progress. There is a technical paper and a blog post that outline some of the work we have been conducting on the EEF test database, which led us to start thinking about how these tests are used in schools. This blog post simply sets out a few conclusions to help schools make reasonable inferences from test data.

We can say a lot about attainment, even if progress is poorly measured

No test measures attainment precisely and short tests are inevitably less reliable than long tests. The typical lesson-long tests used by schools at the end of a term or year are reliable enough to infer approximately where a student sits on a bell curve that scores all test-takers from the least good to the best in the subject. This works OK, provided all the students are studying the same curriculum in approximately the same order (a big issue in some subjects)!

Let’s take a student who scored 109 in a maths test at the start of the year. We cannot use that single score to assert that they must be better at maths than someone scoring 108 or 107. However, it is a good bet that they are better at maths than someone scoring 99. This is really useful information about maths attainment.

When we use standardised tests to measure relative progress, we often look to see whether a student has moved up (good) or down (bad) the bell curve. This student scored 114 at the end of the year. On the face of it, this looks like they have made good progress and learnt more than similar students over the course of the year. However, 109 is a noisy measure of what they knew at the start of the year and 114 is a noisy measure of what they knew at the end of the year. Neither test is reliable enough to say whether this individual pupil's progress is actually better or worse than should be expected, given their starting point.
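To put rough numbers on this, here is a minimal back-of-the-envelope sketch. The figures are illustrative assumptions, not the published statistics of any particular test: a standardised scale with an SD of 15 and a test reliability of about 0.9.

```python
import math

# Illustrative assumptions (not any particular test's published figures):
# standardised scores with SD 15 and a reliability of about 0.9 per sitting.
sd = 15
reliability = 0.90

# Standard error of measurement for a single test sitting
sem = sd * math.sqrt(1 - reliability)      # ~4.7 points

# An observed gain is the difference of two noisy scores, so its
# standard error is larger than either score's SEM.
se_gain = math.sqrt(2) * sem               # ~6.7 points

observed_gain = 114 - 109                  # the pupil in the example above
ci_low = observed_gain - 1.96 * se_gain
ci_high = observed_gain + 1.96 * se_gain

print(f"SEM of a single score:      {sem:.1f} points")
print(f"SE of the measured gain:    {se_gain:.1f} points")
print(f"95% interval for true gain: {ci_low:.1f} to {ci_high:.1f} points")
# The interval comfortably includes zero (and negative values), so the
# five-point rise tells us very little about this pupil's relative progress.
```

On these assumptions, the plausible range for the true gain runs from roughly -8 to +18 points, which is why the rise from 109 to 114 cannot be read as evidence of above-expected progress.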


Dylan Wiliam (2010) explains that the challenge of measuring annual test score growth occurs because “the progress of individual students is slow compared to the variability of achievement within the age cohort”. This means that a school will typically find that only a minority of their pupils record a test score growth statistically significantly different from zero.
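A rough simulation makes the same point at cohort level. Everything here is an assumption chosen for illustration: standardised scores with an SD of 15, a reliability of about 0.9 for each sitting, and true year-on-year movement up or down the bell curve of around three points for a typical pupil.

```python
import numpy as np

rng = np.random.default_rng(2019)

# Illustrative assumptions only: SD 15, reliability ~0.9 per sitting,
# and genuine relative movement of ~3 points per year for a typical pupil.
n_pupils = 10_000
sem = 15 * np.sqrt(1 - 0.90)                       # measurement error per sitting, ~4.7 points
true_relative_growth = rng.normal(0, 3, n_pupils)  # real movement up/down the bell curve

# Observed growth is the true movement plus the noise from two separate sittings
observed_growth = (true_relative_growth
                   + rng.normal(0, sem, n_pupils)
                   - rng.normal(0, sem, n_pupils))

se_growth = np.sqrt(2) * sem                       # standard error of an observed gain
significant = np.abs(observed_growth) > 1.96 * se_growth
print(f"Pupils whose measured growth differs 'significantly' from zero: {significant.mean():.0%}")
# Typically well under 10%, even though every pupil here genuinely moved
# up or down the distribution by some amount.
```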

Aggregation is the friend of reliability

You can make a test more reliable by making it longer, sat over multiple papers, but this isn’t normally compatible with the day-to-day business of teaching and learning. However, teachers who regularly ask students to complete class quizzes and homework have the opportunity to compile a battery of data on how well a student is attaining. Although teachers will understandably worry that this ‘data’ isn’t as valid as a well-designed test, intelligently aggregating test and classwork data is likely to lead to a more reliable inference about a pupil’s attainment than relying on the short end-of-term test alone. (Of course, this ‘rough aggregation’ is exactly what teachers used to do when discussing attainment with parents, before pupil tracking was transferred from the teacher markbook to the centralised tracking software!)
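The gain from aggregation can be illustrated with the Spearman-Brown formula, which gives the reliability of the average of k parallel measures. Treating quizzes and homework as 'parallel' forms of the same assessment is generous, and the 0.70 reliability per assessment is a made-up figure, so treat this as an optimistic sketch rather than a calculation about any real class.

```python
import math

def spearman_brown(r_single: float, k: int) -> float:
    """Reliability of the average of k parallel measures, each with reliability r_single."""
    return k * r_single / (1 + (k - 1) * r_single)

# Suppose each short quiz, homework or test has a reliability of 0.70
# (an invented figure), reported on a scale with an SD of 15 points.
for k in (1, 3, 6, 10):
    r = spearman_brown(0.70, k)
    sem = 15 * math.sqrt(1 - r)
    print(f"{k:2d} assessment(s) -> composite reliability {r:.2f}, SEM ~{sem:.1f} points")
```

Even on these rough assumptions, pooling half a dozen imperfect measures produces a composite that is considerably more reliable than any single short test.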

Teacher accountability is the enemy of inference

Teachers always mediate tests in schools. They might help write the test, see it in advance, warn pupils or parents about the impending test, give guidance on revision, advise pupils about the consequences of doing badly, and so on. If the tests are high-stakes for teachers (i.e. used in performance management) and yet low-stakes for the pupils, it can become difficult for the MAT or school to ensure tests are sat in standardised conditions.

For example, if some teachers see the test in advance, they might distort their advice about revision topics in a way that improves test performance without improving pupils' knowledge of the wider domain. Moreover, some teachers may have an incentive to try to raise the stakes for pupils in an attempt to increase test persistence. The impact of the testing environment and perceptions of test stakes has been widely studied in the psychometric literature. In short, we need to be sure that standardised tests (of a standardised curriculum) are sat in standardised conditions where students and teachers have standardised perceptions of the importance of the test. For headteachers to make valid inferences across classrooms, or across schools, they need to be clear that they understand how the stakes are being framed for all students taking the test, even those who are not in their own school!

I think this presents a genuine problem for teacher accountability. One of the main reasons we calculate progress figures is to try to hold teachers to account for what they are doing, but the very act of raising the stakes for teachers (and not necessarily for pupils) can create variable test environments that threaten our ability to measure progress reliably!

The longer a test is in place, the more it risks distorting curriculum

A test can only ever sample the wider subject knowledge domain you are interested in assessing. This creates a problem: as teachers become more familiar with the test, they will 'bend' their teaching towards the test items. Once this happens, the test itself becomes a poor proxy for the true subject knowledge domain. There are situations where this can seriously damage pupil learning. For example, many primary teachers report that one very popular standardised test is rather weak on arithmetic compared to SATs; given how important automaticity in arithmetic is, let's hope no year 3, 4 or 5 teachers are being judged on their class performance in this test!

Our best hopes for avoiding serious curriculum distortion (or assessment washback) are two-fold. First, lower the stakes for teachers (see above). Second, make the test less well-known or less predictable for teachers. In the extreme, we hear of schools that employ external consultants to write end-of-year tests so that the class teachers cannot see them in advance. More realistically, frequently changing the content of the test can help minimise curriculum distortion, but is clearly time-consuming to organise. Furthermore, if the test changes each year then subject departments cannot straightforwardly monitor whether year group cohorts are doing better or worse than previous years.

None of this is a good reason not to make extensive use of tests in class!

Sitting tests and quizzes is an incredibly productive way to learn. Retrieval during a test aids later retention. Testing can produce better organisation of knowledge or schemas. As a consequence of this, testing can even facilitate retrieval of material that was not tested and can improve transfer of knowledge to new contexts.

Tests can be great for motivation. They encourage students to study! They can improve metacognitive monitoring to help students make sense of what they know (and don't yet know).

Tests can aid teacher planning and curriculum design. They can identify gaps in knowledge and provide useful feedback to instructors. Planning a series of assessments forces us to clarify what we intend students to learn and to remember in one month, one year, three years, five years, and so on.

Are we better off pretending we can measure progress?

I’m no longer sure that anybody is creating reliable termly or annual pupil progress data by subject. (If you think you are then please tell me how!) Perhaps we don’t really need to have accurate measures of pupil progress to carry on teaching in our classrooms. Education has survived for a long time without them. Perhaps SLT and Ofsted don’t really mind if we aren’t measuring pupil progress, so long as we all pretend we are. Pretending we are measuring pupil progress creates pressure on teachers through the accountability system. Perhaps that’s all we want, even if the metrics are garbage.

Moreover, I don't know whether the English education system can live in a world where we know that we cannot straightforwardly measure pupil progress. But I am persuaded by this wonderful blogpost (written some time ago) by headteacher Matthew Evans that we must come to terms with this reality. Like many other commentators on school accountability, he draws an analogy with the film The Matrix, in which Neo must decide whether to swallow the red or the blue pill:

Accepting that we probably can’t tell if learning is taking place is tantamount to the factory manager admitting that he can’t judge the quality of the firm’s product, or the football manager telling his players that he doesn’t know how well they played. The blue pill takes us to a world in which leaders lead with confidence, clarity and certainty. That’s a comfortable world for everyone, not just the leader.

He goes on to argue, however, that we must swallow the red pill, because:

However grim and difficult reality is, at least it is authentic. To willingly deceive ourselves, or be manipulated by a deceitful other (like Descartes’ demon), is somehow to surrender our humanity.

And so, what if we all – teachers, researchers, heads, inspectors – accept that we are not currently measuring pupil progress?

What then?


How an economist would decide the what, when and how of reception year

Clare Sealy has written an amazing blog post explaining why rising 5s need to learn through a mixture of explicit teaching, whole class collective experiences, and play-based encounters. The early years isn’t an area of research for me, but it is a field I spend a lot of time thinking and reading about simply because my own children and those of my friends are currently so young.

Clare’s blog describes the controversies around the question of how we should educate in the reception year. However, I think questions of what and when we should teach young children are equally contentious. Reception year has moved from something that lasted only a few months for many (e.g. me) a generation ago to a de facto compulsory year of schooling and I’d like us* to conduct more empirical research on when it makes sense to teach complex skills such as reading and writing to children.

As an economist, whilst I am supporting my own children in learning new skills (potty training, arithmetic, reading, getting dressed etc…), I wonder why we don’t talk more about the opportunity costs involved in the decisions we make in reception year. What other opportunities must we give up when we decide to teach 4 year olds to read, or to learn some French words, or their number bonds to 20, or to learn a repertoire of songs by heart, or how to identify trees by their leaves?

Economists naturally think in terms of costs and benefits – here our costs are time costs. For example, we choose to potty train children about a year later than parents did a generation ago. Why? By delaying, we can invest far fewer hours in the process – hours that we then get to spend doing other things. We can afford this delay because disposable nappies are now cheap enough to use for extended periods of time. Equally, we now invest hundreds of hours ensuring children can read the word 'mat' by the time they complete the reception year. That has great benefits for the child, but it has also cost them time which they were then not able to spend doing other things that are promoted in other cultures, such as numeracy or memorising a repertoire of songs, dances and poems.

If an economist were asked how reception year should be organised, they would want some data on these time-investment trade-offs. For example, in the case of teaching a child to read through an explicit phonics programme, they would want to know exactly how the age at which a child starts learning affects the number of teaching hours that need to be invested. The chart below illustrates a trivial example of this. Suppose my daughter started a phonics programme at age 5 and it took her 300 hours of teaching time to complete. How many hours would it have taken if we'd started at age 3? 750 hours? 1,000 hours? Would it be worth it? What about if we'd delayed until she was 7? 150 or 200 hours? Would these time gains make it worth delaying? Now suppose we drew a similar chart for a child who comes from a less book-rich home. Would the chart be steeper (i.e. the gap in time investment needed, compared to my daughter, closing somewhat over time) or flatter? I think we can be fairly sure the curve would be steeper for boys than girls, but by how much? These charts wouldn't tell you when you should teach phonics, but they would make explicit one bit of evidence we need to decide at what age we should start teaching children to read.

[Chart: 'Tradeoffs' – hypothetical teaching hours needed, by starting age]

Now, suppose we have two goals – learning to read well enough to pass the phonics test and achieving fluency in number bonds to and within 20. We can choose to start a phonics programme at the age of 4, but that doesn't leave enough time to achieve fluency in number bonds as well. Which should we prioritise with the younger children and which should we concentrate on later? An economist would say it depends on the shape of the curves. I suspect (based on my sample size of 2 children) that arithmetic has a flatter curve than reading, so that the time investment for learning number bonds at age 4 is not vastly higher than it is at age 5.
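To make the shape of the argument concrete, here is the same idea in code with entirely invented numbers (mine, for illustration only): a steep curve for reading and a flatter one for number bonds.

```python
# Entirely fictional figures, in the spirit of the chart above: hypothetical
# teaching hours needed to reach a fixed standard, by the age at which we start.
hours_needed = {
    "phonics/reading":    {3: 1000, 4: 600, 5: 300, 6: 200, 7: 150},  # steep curve
    "number bonds to 20": {4: 120, 5: 100, 6: 90, 7: 80},             # flatter curve
}

for skill, curve in hours_needed.items():
    cheapest = min(curve.values())
    print(skill)
    for age in sorted(curve):
        extra = curve[age] - cheapest
        print(f"  start at age {age}: ~{curve[age]} hours ({extra:+d} hours vs the cheapest start)")
```

On these made-up figures, starting phonics at 4 rather than 7 costs an extra 450 teaching hours, whereas starting number bonds early costs relatively little – which is exactly the kind of comparison the curves are meant to expose.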

These curves are fictional – I don’t know what they look like for real children. But I’d feel far more comfortable explaining to a foreigner why we teach our children to read soon after they are four if I knew the time trade-offs involved in this decision.

Economics is the coldest of the social sciences, but this analysis places every hour of children’s precious lives at its heart. It reminds us that we should take care in balancing the gains from learning new skills against the costly time investments of teaching new stuff to young children. And it reminds us that, whilst it might be more efficient to teach handwriting to four year olds through more explicit and formal methods, this fact alone doesn’t mean we should do it. We should also weigh up the relative time investments involved in choosing to bring forward the teaching of a new skill to the reception year, rather than deferring it to year 1 or 2. Indeed, I suspect much of the raging argument about how we should organise the reception year gets confused by private disagreements about the what and when.

(This is a slightly trivial New Year blog post that summarises everything economics has to say about the reception year. Economists shouldn’t decide what the reception year looks like. Don’t let them.)

—————————–

* not me

—————————–

Still reading? OK, here is the indulgent bit where I tell you about my personal views on the reception year:

  • I only did a few weeks in reception class and I did OK in life – I can’t help feeling that if it were so critical to start things young then other countries would be doing it too
  • Child-initiated play was great for my eldest in playgroup, where the adult-child ratios were high; it was pretty sub-optimal in reception year where there were necessarily frequent child-on-child interactions that could not be mediated by an adult, producing endless social/emotional issues. The thought of having to put my youngest through reception year doesn’t fill me with joy for this reason
  • We aren’t ever going to get larger physical spaces and more adults in reception classes. With that in mind, my dream reception year for my children would be 2-3 hours a day at school for collective activities (singing, learning poems, games) and structured work at tables, then back to pre-school for lunch and afternoon play.

Making Oxbridge entry matter less

Yet again, universities are under the spotlight for their admission processes. On the one hand, of course we need to do all we can to get under-represented groups into our elite universities. On the other, we could ask why it is so important that they get into these universities in the first place. I'd[i] argue that this is largely because educational achievement goes unmeasured at the end of degrees, and so the name of the university attended still acts as a (poor) signal of IQ/knowledge/effort [delete as appropriate] to employers.

One of the many reasons for this is that degree class inflation is out-of-control, with places such as the University of Surrey now awarding a first-class degree to over 40% of their students. Degree classifications clearly no longer reflect genuine attainment, either for cohorts passing through the system in different years or indeed across different institutions.

The consequence is that young people are hugely incentivised to apply to highly selective courses, rather than to ones with high-quality teaching, because this is the only way they can signal their intellect in the labour market. For this reason, incidentally, the TEF alone cannot degrade the market quality of an LSE degree.

We could fix all these problems by introducing a common core examination in all degree subjects, set externally by learned societies. All students would sit it, say, two-thirds of the way through their degree, thus allowing specialised final-year examinations to continue. Performance in this exam, by subject, would determine the number of first-class, upper-second, lower-second and third-class degrees the department is allowed to award that year. It would not determine the degree class of the individual student.
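As a sketch of how the quota mechanism might work (my illustration only: the departments, scores and the 20/40/25/15 national split are all invented, not part of the proposal as stated):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical common-exam scores for three departments in the same subject.
departments = {
    "Dept A": rng.normal(62, 10, 200),
    "Dept B": rng.normal(55, 10, 300),
    "Dept C": rng.normal(58, 12, 150),
}

# National cut-offs: top 20% of all common-exam scores -> firsts,
# next 40% -> upper seconds, next 25% -> lower seconds, bottom 15% -> thirds.
all_scores = np.concatenate(list(departments.values()))
cut_first, cut_upper, cut_lower = np.percentile(all_scores, [80, 40, 15])

for name, scores in departments.items():
    firsts = np.mean(scores >= cut_first)
    uppers = np.mean((scores >= cut_upper) & (scores < cut_first))
    lowers = np.mean((scores >= cut_lower) & (scores < cut_upper))
    thirds = np.mean(scores < cut_lower)
    print(f"{name}: may award {firsts:.0%} firsts, {uppers:.0%} 2:1s, "
          f"{lowers:.0%} 2:2s, {thirds:.0%} thirds")
```

The point is that the common exam fixes each department's permitted proportions; which individual students receive which class would still be decided by the department's own papers.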

Agreeing a common core of the curriculum would be more controversial in some subjects than in others. We should try this first in subjects where this is not controversial: the sciences, maths, economics, and so on.[ii]

This degree design would still leave the majority of time free for esoteric topics set by the university (e.g. 50% of the first two years and 100% of the final year), which could combine papers into a degree classification in any way it chooses. It would simply be restricted in the proportion of different classifications it could award, based on the common exam results.

The alternative is that we introduce some sort of IQ-style SAT entrance examination that in turn determines how degrees can be set. But this does not incentivise universities to ensure that students are learning anything.

Establishing robust and comparable degree classifications would help fix the extraordinary stratification of universities in the eyes of employers. Getting into Oxbridge rather than, say, Nottingham undoubtedly gives people an easy ride in the labour market. As someone who got one of these free passes to pretend I am clever, I used to think this was justified. I changed my mind when I had the chance to interview 17-year-olds myself.

A decade or so ago I was roped into interviewing undergraduate applicants at an Oxbridge college, not because anyone particularly valued my opinion but because some newspaper scandals meant the college didn't want Fellows interviewing alone. The experience completely overturned my view that university admissions were efficiently selecting students by ability.

We handed out about seven offers in the subject in each of the three years I helped out. Three went to candidates who performed exceptionally well at interview and had great AS point scores; the other four were given rather arbitrarily from a long list of over a dozen candidates who did well at interview and on paper. I could see the consequences of the offers we made because I supervised first-year students. Those who performed exceptionally well at interview often turned out not to be genuinely interested in or motivated by their subject. The interview didn't help those from disadvantaged backgrounds, in my experience, who clearly hadn't been prepared. And the 'thinking skills' test that we introduced during the time I interviewed was clearly not tutor-proof; we observed striking mark inflation as it moved from a pilot to a known test, with companies offering preparation.

There are weak students studying at Oxbridge; there are outstanding students studying at Nottingham. The latter group, even if they are awarded a first, find it much harder to signal their talent to employers, who understandably place little store by degree classification. If we ensured genuine comparability in achievement across universities, then the university attended needn't act as a signal of anything at all.


[i] Well, technically most of this argument comes from a conversation with a very smart man who is not in the position to make these arguments publicly at the moment!

[ii] The question of how degrees should be awarded across subjects is a question for another time, but one that is debated frequently by school examination boards. Essentially, there are principles that can be applied to achieve this where subjects have similar academic characteristics; deciding the national degree awarding proportions is almost impossible for art, music, nursing and so on. School examination boards also deal with questions about how to maintain comparability over time, etc…

If Engelmann taught swimming

I have been thinking about social inequalities and education for the past decade and feel like I'm walking a well-trodden path that has a hopeful ending. Perhaps by telling you where it leads I can help you reach a productive destination more quickly.

I’ve spent my whole research career thinking that our best hope for fixing educational inequalities is to shuffle children, teachers and money across schools and/or classrooms. That is why I have spent so much time writing about school admissions and choice, measuring school performance, school funding and the pupil premium, and the allocation of teachers to schools and to classes within schools.

We have made essentially zero progress in England in closing the attainment gap between children who live in poorer and richer households. Zero. It is easy to feel despondent about this and wonder whether no solutions lie within the education system.

But two things that have happened over the summer – my daughter learning to swim and listening to Kris Boulton talk – have given me renewed hope.

***

We have a daughter who is a low ability swimmer. Like other families, every Saturday morning we'd bargain over whose turn it was to take her to her lesson. One half-term became two. Then three. Then four. And yet still she was in Beginners One. She was fine about this – she liked the classes and only casually noted that other children were learning to swim and moving up to the next class.

Other parents said, ‘Don’t worry. Everyone learns to swim in the end. She’ll get there.’ And I knew they were right – she would get there eventually and we should just accept it’ll take her longer than other children. But then someone suggested we try another swimming class. So we did. And from the moment she got into that new pool with a new instructor it was like watching some sort of miracle. By the end of the first lesson she was doing something approaching proper swimming and by the fourth lesson she was good enough to practise on her own at the local pool with us.

Did she just ‘need time’? Was it chance that it ‘just clicked’ on that particular day? Would she ever have learnt to swim in Beginners One at the old place? Was she really a ‘low ability’ swimmer?

As I was mulling over this small miracle whilst swimming in the local leisure pool on a Saturday (not a tranquil experience conducive to deep thought), I remembered Kris Boulton's strange picture of classroom desks, each labelled with the probability that the child sitting there learns a concept. This is a photo of him presenting at researchED, but you can also hear about it on Mr Barton's maths podcast or read his blogpost.

[Photo: Kris Boulton presenting at researchED]

Kris thinks the problem with the accepted educational wisdom is that it deems most instructional methods fine because some children always 'get it'. From this observation, people then deduce that the other children in the same classroom must be failing for reasons outside the classroom – poverty, genetics, and so on. If you read his blogs, Kris doesn't deny that these other things are present, but he views them all as factors that increase a child's sensitivity to the instructional method chosen.

Kris has come around to this way of thinking through his study of the work of Zig Engelmann. Engelmann isn’t popular in many educational circles for his commitment to Direct Instruction. But you don’t have to be a fan of D.I. (I’m not particularly) to admire the scientific approach he has taken to constantly refining his programmes of instruction. And at the heart of the approach he takes is the following belief:

The best instructional methods will close the gap between those students who have a high chance of understanding a new concept and those who have a lower chance of understanding it.

This! This way of thinking about inequalities in rates of learning is simply not part of the narrative for many policy-makers and researchers. There are some children who will ‘get it’, regardless of instructional method used (the Autumn-born middle-class girls in infants and the kids in my daughter’s first swimming class who raced through and onto Beginners Two within a term). Then there are those for whom the probability that they learn the new concept is highly sensitive to methods of instruction. My daughter wasn’t a ‘low ability’ swimmer; she was just a novice swimmer who was more sensitive to instructional methods than others for whatever reason.

I don’t know whether Zig Engelmann has ever thought about swimming instruction, and I don’t know what he would make of the methods used to teach my daughter in the second swim school. Who knows whether her swim instructor has given much thought to questions of sequencing and the benefits of what Kris calls atomisation in his blogpost. I’m confident that she is not following a Direct Instruction script! But just imagine if the method of instruction she has devised through years of experience could be codified, at least in part, so that other instructors could follow it too.

There will always be differences in how easily humans are able to learn new concepts, but I’m more convinced than ever that we can reduce the size of these gaps in rates of learning by paying close attention to the instructional methods we use. An instructional method doesn’t work if only some children can succeed by it. Let’s work on developing methods that give every child the highest possible chance of succeeding.

Coda

I showed this post to Kris and he wrote:

Engelmann has applied his ideas to physical activity, including tying shoelaces, doing up buttons, and I think some aspects of sport.  I had an excellent instructor for Cuban Salsa a few years back. She created three 10 week courses, at differing levels, and broke everything up into different moves, from small components up to more complex combinations. One evening several of us went out to a dance event that had a lesson with a different instructor – he spent most of the time saying ‘No-one can really teach you how to move, you just have to feel the music.’ Utterly useless.

(I think those last two words are his less polite way of saying that he is a novice dancer who is very sensitive to instructional methods!)

Kris is a little further on in his journey than me, as he explains here. He believes so strongly that this is how we reduce inequalities in rates of learning that he is joining Up Learn, a company dedicated to the same belief, which is putting the theory into practice and believes it can use it to guarantee an A or A* to every A Level student who learns through its programme.

We don’t need better sorting hats to improve social mobility

This is roughly the talk I gave at a Policy Exchange fringe at Conservative Party Conference in 2016


I don't like the words 'social mobility' because they are so slippery as to give carte blanche to politicians to do exactly as they please.

We appear to have entered an era where social mobility policies involve the creation of new sorting hats. Educational sorting hats can be useful at the right time and place in life, to funnel some students into elite universities and others into technical training programmes, for example. But they often have pre-determined destinations in mind for the individuals who are pulled out, rather than leaving everyone to receive the kind of broad, academic education that enables citizens to have options throughout life.

This might make sense post-18 where some students are starting to push against the boundaries of what it is possible for them to achieve and where work-place preparation becomes important. (I have personal experience of these boundaries, having started a degree in maths at Cambridge before changing subject to something I found easier.)

But the idea that at age 11, children are ready to be put through the sorting hat that decides the type of education they deserve and will suit them for the sort of job we have in mind for them is deeply regressive. It is fundamentally in conflict (as are UTCs, incidentally) with the Govian belief in an academic education for all. One that gives every child the freedom to be everything they want to be. Is this not a truly Conservative ideal?

But to return to the sorting hat that Nick Timothy would like to introduce. At no stage has he, or other proponents, articulated how they would like the hat to sort. Is it IQ? Academic achievement so far in life? Likely future academic achievement? Whatever it is, he must know we cannot accurately measure it at age 11 and that some set of children will be ‘wrongly’ sorted, depending on what day of the week and on what test paper is sat.

Whatever system is devised, we pit sections of society against each other in ways that are divisive. We want August born children to have a decent chance of gaining a place so we allow them a lower qualifying score than September born children. Is this fair? And why are boys allowed lower qualifying scores than hard-working girls? How can it be fair to the working poor that the children of families on benefits in Birmingham now have dedicated spaces for them at grammar schools? And if we make spaces for the working poor too, as the PM has intimated they might, what about the ‘just about coping OK’ families or the ‘coping fine but no way we have the money for private schools’ families?

Proponents have expressed a desire to use grammar schools to support the white working class whose children are indeed struggling more than any others. But to do this we will have to require higher qualifying scores for Asian and black students, all of whom have excellent success rates at passing the eleven plus. At what point does this become racial discrimination?

Thankfully, I would argue, we do not need to worry about the complex questions that are intrinsic to putting eleven-year-olds through any sorting hat.

We do not need this sorting hat at age 11 because we want the same thing for everyone: a general, academic education that leaves them free to make choices about the kind of path in life they want to take as a citizen.

And there is nothing we propose to do with children in these grammar schools that we do not want every other child to experience.

[At the end of the session, Nick Gibb admitted he wanted to introduce a grammar school into a place like Knowsley because the teachers there have refused to improve schools and implement the EBacc. So our only choice is to abandon the 80% so that we can save the 20%. How sad. In response, I told him about a little place he might know that once had the kind of schools that caused the middle classes to flee the city. We invested lots of money, devised school improvement schemes and created programmes to plug gaps in teacher recruitment. That little place was Inner London.]

I’m data blogging elsewhere…

Nearly all my blog posts here have involved data or reviews of fairly quantitative research. Except for one of the most popular, which was just a polemic about keeping school buildings open for longer. (I guess people aren’t so interested in cold, hard facts about education policy after all.)

From now on, all my ‘normal’ data posts are going to be on the Education Datalab blog. I won’t be reposting them here, not least because it isn’t WordPress powered (long story…). Do take a look – I’ve already blogged on non-specialist teachers and free schools, amongst other things. If you are a Twitter-type you can keep in touch with our posts by following @edudatalab.

I’m going to reserve blogging here for things that are utterly unsupported by data and maybe even some non-education policy things. New posts to follow very soon…

Economics of Education

Economists analyse the production of education in this world where resources such as the capital invested in buildings or technology and the labour of the teacher workforce are necessarily scarce. This scarcity of resources means that policymakers must decide:

  1. How much to spend on each stage of education (i.e. what to produce);
  2. How to provide educational services in a way that maximises its benefits to society (i.e. how to produce education); and
  3. Who should have access to each stage of education (i.e. for whom is education provided).

Economic theory can help policymakers both by establishing facts about the education system and by clarifying the value judgments that inform decision-making. The part of economics that is concerned with establishing facts about the world is called positive economics. It asks questions such as 'can we improve the quality of teachers by increasing pay?' or 'will smaller class sizes raise pupil attainment?' Normative economics asks questions that require value judgments, such as 'is it fair to charge higher education students tuition fees?'

A social welfare framework underpins the dominant approach in the economics of education. According to this framework, society should strive to arrange for educational services to be produced and distributed in a manner that is both efficient and equitable. Efficiency means that educational outputs are maximised, given a set of constrained resources. Equity means that services are distributed according to some principle of social justice or fairness.

Individuals and governments often face hard choices because of the scarce resources they possess. For example, expanding higher education and increasing provision of early years care might both appear to be policies that have the potential to improve the well-being of society overall, but which should a government prioritise? Economists describe the costs of taking a particular action as opportunity costs, because the greatest cost of expanding higher education might be the lost benefits of not undertaking the next best alternative policy, such as increased provision of early years care.

Short history of the economics of education

Economists are normally associated with ensuring that profit-making companies and the overall economy function well, but they have slowly expanded their interests to new spheres of society. The origin of the economics of education as a significant field within economics dates back to the theoretical and empirical developments made by American economists such as Gary Becker and Jacob Mincer in the 1960s. Their work introduced the idea of education as human capital, and they attempted to calculate the economic returns to acquiring education.

Over the past decade there has been an enormous growth of interest by economists in education policy, both in the UK and across the world. This has been accompanied by a growing political interest in market-based reforms across the public sector. These reforms include the devolution of financial planning to front-line institutions such as hospitals and schools, and giving consumers of public services a choice about which provider to use. Economists from other fields, such as labour economics, have been attracted by the growing availability of large-scale datasets that allow complex statistical analysis of the impact of particular policy initiatives. Examples of these data include the National Pupil Database in England, which has collected annual information on the background characteristics and Key Stage attainment of all pupils in state-maintained schools since 2002, and PISA, an international survey of the skills of 15-year-olds across many industrialised countries.

What is the economic paradigm?

Economic theory makes certain assumptions about human behaviour in order to make predictions about the effects of policy changes. The starting point of economic analysis is that individuals and institutions are rational agents (often given the name homo economicus), operating with self-interested intent as they make decisions about providing or participating in education. Individuals are assumed to set themselves a goal of maximising their own well-being (or utility), given fixed preferences or tastes for education and a well-defined money constraint. Many economic theories assume that human brains possess perfect computational powers to process all the information needed to make optimal choices at all times!
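In textbook notation, and only as a stylised restatement of that assumption, the individual's problem is usually written as choosing an amount of education e and other consumption c to solve

```latex
% stylised textbook formulation (illustrative, not tied to any specific model)
\max_{e,\,c} \; U(e, c)
\quad \text{subject to} \quad
p_e \, e + c \le m ,
```

where U is the utility function representing those fixed preferences, p_e is the price of a unit of education (including its opportunity cost) and m is the money constraint.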

These economic models of individuals and institutions present a simplified version of reality and for this reason they have been criticised by many sociologists and psychologists who claim they bear little resemblance to the real world. However, economists would argue that these simplifying assumptions are necessary to make precise predictions about the likely impact of policies in this complex world we live in. To put it another way, economists do not really believe that humans are so selfish and simple in their motivations; they just believe that this simplification is a useful analytical tool to help us understand the world better.

Books where you can learn more…

Checchi, Daniele (2008) The Economics of Education: Human Capital, Family Background and Inequality,  Cambridge: Cambridge University Press. [This book provides a comprehensive overview of most of the work currently being carried out in the field]

Le Grand, J., Propper, C. and Smith, S. (2008) The Economics of Social Problems, London: Palgrave Macmillan. [An introduction to economic theory applied to a wide variety of social problems, including education]

Machin, S. and Vignoles, A. (2005) What’s the Good of Education? Princeton: Princeton University Press. [A non-technical UK perspective on the economics of education]

Barr, N. (2004) Economics of the Welfare State, Oxford: Oxford University Press. [If you have studied a little economics before, this book gives an excellent overview of relevant economic theory]

If you can get access to academic journal articles, you might find this an interesting starting point:  Machin, S. (2008) The new economics of education: methods, evidence and policy, Journal of Population Economics, 21 pp 1-19.

Where to find relevant journal articles…

Articles by economists about education policy are published in specialist journals such as:

They also publish in other economic journals, such as:

Other articles can be found across geography, statistics and education journals, such as:

Where to find discussion papers…

IDEAS at RePEc.  Almost all economics of education journal articles are first made available as working papers here.

Institute for the Study of Labor.  The German think-tank IZA publishes many papers in the field of economics of education.

Centre for Market and Public Organisation, University of Bristol.  This is one of the largest centres for research in the economics of education in the UK.

Centre for Economics of Education.  CEE is a multi-institutional research centre based at the Centre for Economic Performance (LSE), IFS and IOE.

Institute for Fiscal Studies.  IFS publishes reports about education spending in the UK.

National Bureau of Economic Research.  NBER publishes many US working papers in the field of economics of education.  However, you need a subscription to download the actual papers.

National Center for the Study of Privatization in Education.  NCSPE is based at Teachers College, Columbia University.