Why Your School’s AI Policy Should Prioritise Professional Learning First
- Adam Sturdee
- 4 days ago
- 4 min read

Across the UK, schools are drafting AI policies at pace. Most begin with student use. Safeguarding. Academic integrity. Detection tools. These are important. But they are not the right starting point. If AI policy in schools does not prioritise professional learning first, it risks becoming restrictive rather than transformative. This matters because AI is not simply another classroom tool. It is a cognitive technology. It shapes thinking. And if teachers and leaders do not understand it deeply themselves, they cannot guide pupils wisely.
AI as Augmented Intelligence, Not Automation
One of the quiet dangers in AI adoption is framing it as labour-saving software.
AI can reduce workload. It can draft documents, summarise research, and generate resources. But its deeper value lies elsewhere. Used well, AI acts as augmented intelligence.
It helps professionals:
- Surface patterns in their own practice
- Reflect on their language and questioning
- Stress-test curriculum design
- Clarify thinking
When AI is positioned in policy as augmentation rather than replacement, it strengthens professional judgement instead of undermining it. That framing should sit at the heart of any serious AI governance approach.
Why Staff Experience Must Come First
In many schools, teachers are cautiously experimenting with generative AI. Some are using it for planning. Others are exploring administrative efficiency. What is often missing is explicit policy permission and encouragement.
A strong AI policy should state clearly that:
- Staff will be supported to develop AI literacy.
- Experimentation is expected and structured.
- AI can be used as a professional learning tool.
This is not a soft addition. It is strategic. If teachers only encounter AI through cases of student misuse, they will associate it with risk. If they encounter it through structured reflection and insight, they will associate it with growth.
From Compliance to Coherence
Too many AI policies drift into compliance language. They describe restrictions. They outline monitoring. They emphasise prohibition. These elements may be necessary. But they do not build capability.
A coherent AI policy should instead answer three questions:
- How does AI support our educational mission?
- How will AI strengthen professional practice?
- How will we govern AI ethically and transparently?
When professional learning is central, the policy becomes developmental rather than defensive.
Transcript-Based Insight and Reflective Practice
One area where AI’s professional learning potential becomes visible is transcript-based reflection. When teachers can privately review patterns in their own classroom dialogue, questioning balance, or feedback language, AI becomes a mirror rather than a monitor.
This is important. Reflection should be private, professional and purposeful. It should not become performative or data-driven for its own sake.
Tools like Starlight were designed with that principle in mind. AI-generated transcripts and coaching insights are sent directly to the teacher. The focus is on growth, not grading. An AI policy that recognises this kind of use signals trust.
Use AI to Draft the Policy Itself
There is a final leadership point here. If AI is powerful enough to shape learning, it is powerful enough to shape policy thinking.
Leaders should:
- Ask AI what makes an effective AI policy.
- Generate contrasting policy structures.
- Identify potential blind spots.
- Test ethical scenarios.
Then apply professional judgement. This models responsible AI use at senior level. It treats AI as a thinking partner, not an authority. In doing so, schools demonstrate that AI literacy is not confined to ICT lessons. It is embedded in leadership practice.
Governance Still Matters
None of this removes the need for clear governance.
AI policy must address:
- Data protection and safeguarding alignment
- Transparency of systems
- Bias mitigation
- Review cycles
But governance without professional learning leads to fear. Governance alongside professional learning leads to maturity.
If you’d like to explore how to design an AI policy rooted in values rather than fear, I’ve written a short leadership guide here: https://www.adamsturdee.com/post/how-to-write-a-strong-ai-policy-for-your-school-a-principles-first-approach
The Strategic Choice
The real decision facing schools is not whether to adopt AI. It is whether AI will be experienced primarily as surveillance or as support. Policy plays a decisive role in that choice. If your AI policy in schools begins with professional learning, positions AI as augmented intelligence, and embeds ethical clarity, you create the conditions for responsible innovation.
If it does not, AI use in your school will remain fragmented, misunderstood and reactive.
Leadership matters here.
Spark Insight with Starlight and strengthen professional judgement at every level.
🎥 Subscribe to our channel here: https://www.youtube.com/@Star21-ai
🌐 Read more on our blog: www.coaching.software
💡 Explore the platform: www.starlightmentor.com
🐦 Follow us on X: @star21starlight
🔗 Connect with me on LinkedIn: https://www.linkedin.com/in/adam-sturdee-b0695b35a/
The Insight Engine is written by Adam Sturdee, co-founder of Starlight, the UK’s first AI-powered coaching platform, and a senior leader with responsibility for teaching, learning and coaching. This blog is part of a wider mission to support educators through meaningful reflection, not performance metrics. It documents the journey of building Starlight from the ground up, and explores how AI, when shaped with care, can reduce workload, surface insight, and help teachers think more deeply about their practice. Rooted in the belief that growth should be private, professional, and purposeful, The Insight Engine offers ideas and stories that put insight—not judgment—at the centre of development.