Five provocations on AI and schooling

“The future is an unknown, but a somewhat predictable unknown. To look to the future we must first look back upon the past. That is where the seeds of the future were planted. I never think of the future. It comes soon enough.”

Albert Einstein

ChatGPT-4 gave me this quote when I sought insights on the unpredictability of the future. It misattributed it to Elon Musk. The future clearly isn’t here yet and thankfully we still have older technologies like Google search to help us write.

As we navigate technological change, our only certainty is that we will be wrong. Around the turn of the millennium, I worked as a media sector analyst in an investment bank, making predictions about how the Internet would disrupt the large media companies – Sky, NTL, Granada, TF1, Pearson, and so on. Our predictions included things like ordering pizzas through television sets, a notion that seems quaint in today’s smartphone-dominated world. We speculated on the multi-channel threats to traditional TV channels, not foreseeing the rise of platforms like YouTube and TikTok. We were right that we were on a journey of disruption, but we could not foresee exactly where we were heading.

And so today, while we can make educated guesses about the general trajectory of new technologies like large language models and other AI, precise predictions about their impact on schooling remain elusive. This uncertainty stems not only from the evolving nature of technology itself but also from the unpredictable responses of complex organisations and individuals.

I imagine we’ll see substantial improvements in individual learning platforms over the next decade, but it is the complex responses of teachers and the schooling system that are of more interest to me. In a previous post, I touched upon the general direction of these changes. Here, I aim to present five provocations about AI and schooling – comments that are intended to provoke thought and that I think will be important, even if we don’t know precisely what the future looks like or how fast we are going to get there.

1. AI challenges teachers’ ideas of ‘intelligence’ and knowledge

Consider this scenario: I stand before you to deliver a talk, and a stream of interesting, relevant, and insightful words flows from my mouth. You’d likely infer that I am intelligent and knowledgeable. But what if those words were read verbatim from a large language model (LLM)? Does that imply intelligence on the part of the LLM, or do we have a higher notion of what it means to be smart that stretches beyond these rather sophisticated auto-complete tools?

The usefulness of LLM responses is typically welcomed by the general public, who may not frequently need to engage with the theoretical meaning of knowledge and understanding. They are consequentialists – if it looks useful, it is useful. However, for teachers, it is disorientating that LLMs display a remarkable capacity for coherently arranging information, despite lacking an explicit understanding of the underlying knowledge, ideas, rules or structures. This is because teachers’ conceptions of knowledge, learning, and intelligence within the subjects they teach are grounded in explicit schemas with well-defined idea relationships. LLMs, oblivious to these schemas, nonetheless seem capable of generating text that often mirrors what a subject expert might produce.

The codified knowledge schemas and curriculum plans that teachers create feel truthful, concrete and real, even though we know little about how they relate to learning and understanding in students’ brains. The inclination to regard the probabilistic approach of LLMs in generating seemingly intelligent sentences as less sophisticated compared to the structured theories and schemas teachers possess about their subjects is understandable. However, it is important to recognise that the knowledge within our students’ minds is complex and largely elusive, existing in a constant state of change. It is continuously constructed and reconstructed, mirroring the probabilistic nature of memory and understanding.

2. AI risks fracturing the teaching profession

As I wrote in my previous post, LLMs produce factually inaccurate information, but in relatively predictable ways. Teachers are likely to find the greatest success in using them to create resources, plan and even mark work in well-codified, language-based subjects where there exists a generally agreed-upon and consistent knowledge base in the public sphere, and ideally where this knowledge base is consistent across English-speaking countries. For this reason, I suspect the utility of LLMs is likely to prove highest in modern languages, sciences, geography and history.

Technological advances have already introduced disparities in the workload of teachers, particularly in the often unseen tasks of planning and marking done outside the classroom. For instance, a maths department can operate with a fully purchased curriculum, resources that include auto-marking homework platforms, and standardised assessments. This might raise the question: do teachers in other departments resent these workload-reducing opportunities available in maths? Surprisingly, not significantly. This is partly because, due to severe teacher shortages, many maths teachers were originally trained in different subjects. Currently, the challenging circumstances in maths departments overshadow any potential envy.

However, professional fracture could become an issue as other subjects begin to leverage similar technologies to ease the teacher’s workload. Imagine LLMs drastically reducing costs and enhancing the efficiency of curriculum resources, independent study platforms, and auto-marking systems in well-codified, language-based subjects like sciences and the humanities. This could create a stark contrast in the workload intensity between teachers of these subjects and those teaching in fields like creative arts, design and technology, English literature, and physical education.

While we might turn a blind eye to these workload disparities, as they largely occur out of sight in teachers’ homes, another potential divide looms – the autonomy granted to teachers. As I mentioned in my last post, computer-assisted instruction has already been shown to be more effective than the least effective maths teachers, and in certain subjects we should only expect this technology to improve. The ethical questions around what autonomy teachers should be granted will play out very differently according to teachers’ subjects and phases.

When does the teaching profession, currently unified in terms of conditions and standards, start to fragment due to these technological disparities?

Moreover, who will hold back technological innovations that can help students, in order to preserve the coherence and the interests of the profession?

3. 4-in-10 teachers are using large language models for planning, but are the lessons better or worse?

Recent data from Teacher Tapp indicates that approximately 40% of teachers are now using LLMs to aid their work, primarily in lesson planning. This trend highlights teachers’ appreciation for the dialogic method of lesson planning and resource creation that LLMs offer. The approach aligns well with teachers’ needs to understand their subject matter and pedagogical choices during lesson preparation. Indeed, the more technologists try to create AI wrappers that deliver best-in-class LLM-driven lesson plans, the less teachers will find them useful.

For those who have experimented with generating lesson ideas using tools like ChatGPT, the experience often yields creative and unique concepts. This could encourage teachers to diversify their teaching methods and break away from routine lesson plans. While such experimentation can be engaging and potentially effective, it also poses challenges that could undermine learning. It requires careful selection of ideas that genuinely enhance learning, a task that is not always straightforward.

A more critical concern is the impact of frequent changes in lesson routines. Constantly introducing new activities and approaches, in pursuit of novelty, risks disrupting the established habits and routines of both teachers and students. This could lead to a scenario where even seasoned teachers resemble novices in terms of the diversity of activities they employ in the classroom. While this approach might be interesting and enjoyable, we should not automatically equate it with improved learning outcomes.

4. AI-assisted marking will be easier to achieve at scale in commercial organisations

In my previous post, I explored scenarios where LLMs could be beneficial in assisting with marking, a task often least favoured by teachers. We are likely to achieve success in using LLMs to support marking in:

  • Well-codified, language-based subjects
  • Assessment tasks with precisely defined rubrics
  • Situations where models can be trained on previous student responses
  • Low-stakes situations where the value of rapid feedback outweighs the reliability risks

It seems to me that commercial companies will develop more reliable marking tools than individual teachers, schools or multi-academy trusts (MATs) can. This is because LLMs seem to need substantial training and guidance before they can mark student work. Only commercial companies will achieve the economies of scale needed to gather thousands of human-marked scripts to train LLMs, or to invest the time in writing individual rubrics. (This isn’t to say that some individual teachers won’t succeed in creating LLM-enhanced marking methods – indeed, we’re already seeing such innovations.)

Why is this significant? If teachers want auto-marking of homework as a workload-reduction tool, then a school needs to ensure its curriculum aligns with the homework platform. This has never been an issue in maths, where the hierarchical nature of the subject’s knowledge architecture means there is little curriculum variation across schools. However, in subjects like history and geography, we could see greater curriculum alignment as schools choose to purchase commercial learning platforms to support independent study. Thus, a de facto national curriculum of study could emerge, led by commerce rather than policy-makers.

5. One day, AI could yield a radical re-imagining of the school timetable

I want to talk about the implications of pupils using LLMs to complete their homework, without rehearsing how and why this is an issue. As others have written extensively: yes, students can and likely will use these tools to complete homework assignments (not least because they are children, and because schooling is, in part, an act of coercion). And yes, even in a world with AI, the importance of students engaging in complex language-based tasks, like essay writing, remains undisputed. The challenge lies in reconciling this with the reality of unsupervised homework, which points to a possible need for supervised study sessions in schools. The logistics and costs of such sessions are certainly up for debate, and there are many straightforward ways they could be introduced by extending the school day, should the government make funding available. Here, however, I’d like to consider how they might be revolutionary.

Imagine that, instead of a student completing five one-hour taught lessons followed by one or two hours of supervised independent study time, these independent study hours were distributed throughout the school day. Those who have ever tried to create a school timetable might see how helpful these study periods could be.

I hope to write about the problems created by the secondary school timetable one day in a separate post. For now, I’ll just say that it represents a constrained two-sided matching problem, necessarily preventing optimal teacher-class pairings. Incorporating independent study sessions introduces slack into the matching model, improving the chances of effective teacher-class matches and opening doors to more radical changes. These could include cross-year group classes, systematic approaches to catch-up sessions, or even a reduction in teacher-led instruction time for subjects where blended learning is more effective.
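The matching idea above can be made concrete with a toy sketch. Assigning teachers to classes in a single timetable period is a bipartite matching problem: each class needs a qualified teacher, and each teacher can take only one class at a time. The code below (a minimal illustration with invented names, not a real timetabling system) uses the standard augmenting-path approach to find a maximum matching, then shows how adding a supervised-study slot as an extra option introduces slack and lets every class be placed.

```python
# Toy model: one timetable period, classes matched to qualified teachers.
# All names and numbers are invented for illustration.

def max_matching(prefs):
    """Maximum bipartite matching via augmenting paths (Kuhn's algorithm).

    prefs maps each class to the list of teachers qualified to take it.
    Returns a {teacher: class} dict for the largest feasible assignment.
    """
    match = {}  # teacher -> class currently assigned

    def try_assign(cls, seen):
        for t in prefs[cls]:
            if t in seen:
                continue
            seen.add(t)
            # Take the teacher if free, or if their current class can move.
            if t not in match or try_assign(match[t], seen):
                match[t] = cls
                return True
        return False

    for cls in prefs:
        try_assign(cls, set())
    return match

# Three classes, but only two suitably qualified teachers this period:
prefs = {
    "9A-maths": ["Ms_Khan"],
    "9B-maths": ["Ms_Khan"],       # clashes with 9A over the same teacher
    "9C-history": ["Mr_Jones"],
}
print(len(max_matching(prefs)))    # 2 – one maths class cannot be placed

# A supervised-study slot acts as slack: any class may take it instead.
# (Modelled crudely here as one extra "teacher"; a real study hall could
# of course supervise several classes at once.)
prefs_with_study = {c: ts + ["study_hall"] for c, ts in prefs.items()}
print(len(max_matching(prefs_with_study)))  # 3 – every class is placed
```

Even in this tiny example, one study slot turns an infeasible period into a feasible one; at the scale of a real timetable, that slack is what opens the door to the more radical rearrangements described above.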

Challenging the entrenched ‘grammar’ of schooling – fixed hours, traditional mixes of teacher-led versus independent study, and year group constraints – is daunting. I’m still a little equivocal about whether we can and should do it. But schooling is necessarily a resource-constrained and sub-optimal learning experience for any individual student. If technology provides us with new opportunities to improve the learning environment, then I think we should explore how more radical disruptions could take place.

2 thoughts on “Five provocations on AI and schooling”


  2. Thanks for these thoughts – I have been arguing for at least two decades that the technology challenges the 1-1-30 (1 teacher, 1 room and 30 children) and the 9-4 model of schooling, but it is the ‘conservatism’, particularly of assessment and achievement metrics, that has held changes back. I thought that things might change post-pandemic but there has been a lot of “snapback” to how things were. G-AI (and A-GI) might be the ‘can opener’ that challenges these things, as well as the rather dualistic novice-expert model. Maybe we are now living in those “interesting times”?
