Training the human arbiters of taste and truth

Let me share a secret with you. Chat-GPT, an AI model, generated the title of this blog post. I fed it my blog text and asked it for ten possible titles. Four of them would have been a little misleading. Five were precisely correct, and yet felt clichéd or otherwise slightly unattractive to my human ear. The one I chose, however, was better than any title I might have written myself.

I, the Human, am still the arbiter of truth and of taste.

Technologists lack models of learning

As an arbiter, I possess the objective knowledge and subjective taste necessary to make choices aligned with what humans perceive as accurate and pleasing. I developed these abilities through my life experiences, including during my own education. Educators often despair when students rely on AI like Chat-GPT to complete their homework, as they know it hinders those students' journey towards developing knowledge and taste.

Teachers equally despair as they hear technologists argue that education should simply adapt to accommodate AI. This disconnect between teachers and technologists arises because technologists lack expertise in cognition and learning. We could blame technologists for this, but I think educators are often poor at making their case because they too possess poor explicit models of learning. Teachers know what learning looks like within the domain they teach, having developed a set of heuristics and rules of thumb to ensure that their own students can learn. However, this doesn't amount to a well-codified model of learning. (This is why teachers across different subjects find it hard to articulate why they organise their lessons and assessments the way they do.) Perhaps it is inevitable, given the complexity and multi-disciplinary nature of learning, that teachers will find it hard to communicate exactly when new technologies do and don't pose a threat to the development of learning. But the consequence of poor articulation is that the teaching profession faces the constant risk of well-meaning outsiders undermining the learning process.

Students lack models of learning

I worry less about technologists’ beliefs about learning than I do about students’ beliefs.

Let me share another secret with you. I often used to skip my homework as a child, simply copying it from my friends during form-time. Like modern-day student users of Chat-GPT, I was myopic when it came to my studies, and usually far more motivated by things other than my classes. But equally importantly, I lacked an understanding of my own learning process; I didn’t know why I should complete my homework.

I still learn new things all the time as an adult, but I now appreciate the value of cognitive struggle in acquiring knowledge. So, when presented with exercises on platforms like Coursera, I refrain from skipping to the answers because I know this would impede my learning. Cognitive challenges nearly always feel costly in the moment, so, whether adult or child, we only push ourselves to engage in them when we know there is a reward waiting for us in the future.

On a personal level, I am surprised by how little my 12-year-old daughter understands about why she should complete assigned homework tasks. I suspect she is typical in this regard. Given the threat posed by Chat-GPT and similar technologies, it feels like teachers need to develop a language for educating students about the mechanisms of learning and the potential returns to cognitive struggle. (I know many of you reading this will feel you already do this, but you are not the majority of the half a million teachers in England.)

Development of taste and truth in the workplace

I actually feel the challenges posed by AI in the classroom are much easier to address than those encountered in the workplace. In traditional white-collar apprenticeship models, junior employees often learn by working alongside senior employees, gradually developing expertise while completing lower-level tasks. The model works because the junior employee has a productive value that reflects their contributions to the firm (and thus their wage), even whilst expertise is being developed.

Many of these lower-level white-collar tasks – desk research, initial drafts, coding and compiling data – can now (or soon) be easily carried out by AI. At Teacher Tapp, for example, Laura and I no longer personally write the first drafts of the week's survey questions; another team member completes the task, which benefits the company while progressing the individual towards becoming an arbiter of taste and truth. Is this the right model for us now that we know Chat-GPT can take the stream of question suggestions from app users and write the first draft of survey questions for us to review?

There is an opportunity for so many firms to cut costs and rely on AI to handle these lower-level jobs. In the short term, industries become more efficient. But in the long term, the underinvestment in human skills leads to serious difficulties. There are no shortcuts to developing expertise in culturally situated and complex domains. It is only through writing thousands of subpar survey questions that one learns to consistently produce good ones that are both pleasing and accurate to humans. Humanity requires investment in the future generation of arbiters of truth and taste, even if such investment requires firms to make inefficient short-term decisions.

Solutions within schools

As someone who has taught in secondary schools, I think it is crucial to acknowledge the serious threats posed by Chat-GPT and similar AI tools. However, I firmly believe that these challenges are more solvable within the educational system than those faced outside, where the current model of learning as a by-product of productive work is at risk due to the increasing influence of AI.

Within schools, we have the advantage of employing coercion as a tool to ensure students complete their work under specific conditions. The case for implementing supervised conditions for homework completion, with schools open later each day, has become more compelling recently. However, I feel it has also become more important to explicitly teach students about the process of learning, the purpose of tasks, and the significance of engaging in productive struggle. To achieve this, we must support teachers in making their internal learning models more explicit, enabling effective communication with students (and those technologists who want to impose ‘innovations’ in education).

The future of workplaces where AI takes over low-level white-collar work is harder to imagine. Perhaps we will no longer require human arbiters of truth and taste in many fields, such as surveying, statistics, law or accountancy. In which case, all hail the new masters. More likely, I suspect, there will always be individuals who choose to invest their lifetimes in honing the craft of creating ideas and crafting sentences that surpass our current imagination. These individuals will become the artisans of their generation, akin to the luthiers, blacksmiths, and tailors of today, forging a path that embraces creativity and craftsmanship in a world dominated by AI.

(NB. I asked Chat-GPT for help in writing that last sentence because I'd run out of ideas for how to finish this post. Chat-GPT has many flaws, but it never suffers from writer's block.)

Where to read more…