Does AI Make Teaching More “Normal”? Reflections on Normativity, Coaching and Responsible Design
- Adam Sturdee
- Feb 17
- 4 min read

Recently, we had the privilege of a searching, generous conversation with colleagues from a leading UK university about the future of AI in teacher development. They were enthusiastic about the potential of transcript-based lesson analysis to deepen reflection. They were also clear-eyed about the risks. Their questions centred on something subtle but profound:
If AI works by identifying patterns, does it quietly push us towards the middle of the bell curve?
And if so, what might that mean for inclusive practice, culturally responsive pedagogy, dialogic classrooms, or work with pupils with SEND?
These are exactly the questions we should be asking.
The “Bell Curve” Concern
Large language models are statistical systems. They learn patterns from vast amounts of text. It is reasonable to worry that what is statistically common could become treated as professionally desirable. In education, that would be dangerous. Teaching is relational and contextual. What works in a Year 11 revision lesson may not work in a Year 7 philosophical enquiry. What works for a neurotypical cohort may not work for a class with significant additional needs. What works in one community may not translate to another. If AI were to flatten this diversity into a single model of “good teaching”, it would fail the profession.
But here is the crucial distinction: AI does not automatically enforce an average. It responds to the frame it is given. Normativity does not emerge simply from mathematics. It emerges from design.
Reflection Engine, Not Grading Engine
One of the strongest themes in our discussions was the role of judgement. We are clear that Starlight is designed as a reflection engine, not a grading engine. That distinction matters. When teachers upload a lesson to Starlight, they are not submitting themselves to inspection. They are generating a hypothesis about their practice. We frame outputs as contestable. The feedback points to specific moments in the transcript and invites professional judgement: agree, disagree, nuance. The aim is not to replace coaching conversations, but to make them more specific, timely and scalable. If AI becomes prescriptive, it becomes reductive. If it becomes interrogable, it becomes developmental.
Design Choices Matter More Than Algorithms
The university colleagues rightly raised the risk of standardisation. Our response is that the safeguard is not primarily technical, but architectural and cultural. Three design principles guide us:
1. The Lens Is Controllable
Schools can set organisational templates that weight specific priorities. A school can emphasise adaptive teaching, inclusion, dialogic talk, retrieval practice, behaviour routines, or any combination. Teachers can also apply their own personal lens.
“Effective practice” is therefore not treated as universal. It is contextualised.
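To make the idea concrete, here is a minimal sketch of how a weighted, combinable lens might work. This is purely illustrative: the class names, weight scheme, and merge rule are assumptions for the sake of the example, not Starlight's actual data model or API.

```python
from dataclasses import dataclass, field

@dataclass
class Lens:
    """A set of coaching priorities, each with a relative weight.

    Hypothetical names throughout -- not Starlight's real schema.
    """
    weights: dict[str, float] = field(default_factory=dict)

def combine(school: Lens, teacher: Lens) -> Lens:
    """Layer a teacher's personal lens on top of a school template.

    The teacher's weights override or extend the school's, and the
    result is normalised so priorities express a share of analytic
    focus rather than an absolute score.
    """
    merged = {**school.weights, **teacher.weights}
    total = sum(merged.values())
    return Lens({k: v / total for k, v in merged.items()})

# A school emphasising adaptive teaching and dialogic talk...
school = Lens({"adaptive_teaching": 2.0, "dialogic_talk": 1.0})
# ...and a teacher who adds a personal focus on retrieval practice.
teacher = Lens({"retrieval_practice": 1.0})

lens = combine(school, teacher)
# "Effective practice" is now a weighted, local frame -- adaptive
# teaching carries half the focus here, not a universal average.
```

The design point is the merge-then-normalise step: no single priority list is treated as the model of good teaching; every analysis runs through a frame the school and teacher have chosen.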
2. Outputs Are Hypotheses
We increasingly design feedback to be falsifiable. Rather than broad claims, the system anchors observations in transcript evidence. Teachers can interrogate it through the built-in chat feature and ask, “What led you to that conclusion?” This keeps professional judgement at the centre.
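A small sketch of what "falsifiable" means structurally: an observation is only valid if it cites transcript evidence a teacher can check and contest. Again, this is a hypothetical illustration of the principle, not Starlight's implementation; all names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """A feedback claim anchored to transcript evidence.

    evidence holds (line_number, quoted_utterance) pairs, so the
    teacher can ask "What led you to that conclusion?" and get a
    checkable answer, not an unfalsifiable verdict.
    """
    claim: str
    evidence: list[tuple[int, str]]

    def is_grounded(self) -> bool:
        # An observation with no transcript anchor should never be
        # emitted: ungrounded claims cannot be agreed with, disagreed
        # with, or nuanced.
        return len(self.evidence) > 0

obs = Observation(
    claim="Wait time after open questions was very short",
    evidence=[(42, "Teacher: Anyone? ... Okay, moving on.")],
)
```

The constraint itself is the design choice: by refusing to surface claims without evidence, the system stays interrogable rather than prescriptive.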
3. Data Supports Inquiry, Not Compliance
Quantitative indicators can be powerful, but they can also distort behaviour. We are cautious about how metrics are used. Patterns are prompts for reflection, not targets for enforcement. The moment AI becomes a leaderboard, it starts to shape behaviour in unintended ways.
Ethical Questions About Voice and Consent
Another theme in our discussion concerned ethics, particularly around audio recording.
In UK schools, the lawful basis for transcript analysis typically sits within public task. Starlight operates as a data processor, with schools as data controllers, and we do not create biometric identifiers or voiceprints. Audio is used for transcription and discourse analysis, not identification. That said, legality is not the same as trust. Clear communication, transparent governance and responsible use are non-negotiable.
AI in education must be both compliant and credible.
The Real Risk: Metrics Over Meaning
If there is a genuine gravitational pull towards normativity in AI systems, it lies less in language modelling and more in metric design. Reduce teaching to a score and you narrow practice. Gamify reflection without depth and you risk superficial engagement.
We are acutely aware of this tension. Sustained, high-quality reflection is the goal, not volume of uploads or compliance with indicators.
Shaping Starlight With the Profession
What encouraged us most in the conversation was not the critique itself, but the spirit in which it was offered. Starlight has grown through partnership. From pilot schools to multi-academy trusts, from P4C practitioners to senior leaders, the product is being shaped by educators who care deeply about professional autonomy.
Our recent work on transcript-based lesson analysis (TBLA) has also been accepted for presentation at the BERA Teacher Education and Development Conference 2026, where we will explore how AI-supported reflection can scale coaching without sliding into surveillance or evaluation.
You can read more about our approach here:
What is Starlight? → https://www.starlightmentor.com
Exploring transcript-based lesson analysis → https://www.coaching.software
From pilot to full adoption → https://www.coaching.software/post/proven-value-full-adoption-across-all-pilot-schools
The Hard Question We Must Keep Asking
Does AI risk making teaching more “normal”? It could, if designed carelessly. But it could also do the opposite. It could surface blind spots, widen perspectives, and make reflective practice accessible to every teacher, not just those with regular coaching access. The difference lies in intent, design and governance. We are grateful for the challenge from our university colleagues. It sharpens our thinking. It disciplines our language. It reminds us that innovation without humility is dangerous.
AI will shape education. The only question is whether we shape it thoughtfully in return.
🎥 Subscribe to our channel here: https://www.youtube.com/@Star21-ai
🌐 Read more on our blog: www.coaching.software
💡 Explore the platform: www.starlightmentor.com
🐦 Follow us on X: @star21starlight
🔗 Connect with me on LinkedIn: https://www.linkedin.com/in/adam-sturdee-b0695b35a/
The Insight Engine is written by Adam Sturdee, co-founder of Starlight, the UK’s first AI-powered coaching platform, and a senior leader with responsibility for teaching, learning and coaching. This blog is part of a wider mission to support educators through meaningful reflection, not performance metrics. It documents the journey of building Starlight from the ground up, and explores how AI, when shaped with care, can reduce workload, surface insight, and help teachers think more deeply about their practice. Rooted in the belief that growth should be private, professional, and purposeful, The Insight Engine offers ideas and stories that put insight—not judgment—at the centre of development.


