Hedge AI: Governed AI for Schools

A single platform with three interfaces: scaffolded student help, teacher tools, and school-level oversight designed for minors. Real governance. Real safeguarding. Real control.

Based at the Translation & Innovation Hub at Imperial College London's White City Campus, building the next generation of safe AI systems backed by academic research.

Why Schools Need Governed AI

Most AI tools were built for the open internet. Schools are fundamentally different. You have minors under your duty of care, statutory safeguarding obligations, academic integrity to uphold, and accountability frameworks to navigate. The gap between consumer AI and school requirements is enormous.

Students are already using AI tools, often without appropriate guidance or oversight. Blanket bans don’t work. They push usage underground, remove any possibility of teaching responsible AI use, and leave schools exposed to greater risk.

Hedge AI adds a comprehensive governance layer around classroom AI so schools can adopt it without turning AI into a safeguarding gamble. We’ve built the infrastructure that sits between powerful AI models and your students, implementing the controls, permissions, and oversight that education requires.

This isn’t a chatbot with a content filter bolted on. It’s a purpose-built system that understands task modes, learning scaffolds, role-based permissions, audit trails, and safeguarding workflows—because it was designed by educators who understand your constraints.

What You Get in One System

For students

A GPT-like chat experience that actively supports learning rather than shortcuts it. Students receive hints, worked steps, conceptual checks, and structured feedback instead of copy-paste answers.

Students can also access textbooks, learning materials, and (where available) virtual demonstrations of lab practicals simply by asking Hedge AI to search for them.

The system encourages reasoning, requires working, and adapts its scaffolding based on the school’s task mode settings.

For Teachers

Practical tools that cut repetitive workload whilst helping you manage classroom AI use without becoming a compliance officer.

Answer a good question once, publish it to the class knowledge bank, and reduce repeated explanations.

Spot misconception clusters across your cohort. Set task modes that align with your pedagogy and assessment requirements.

On request: track student grades and generate reports.

For School Leaders

Policy controls, risk-handling workflows, and auditable oversight so AI use is consistent, safe, and accountable across the entire school.

Configure permissions by year group, subject, and context.

Receive safeguarding alerts. Access trend reporting. Maintain audit trails that satisfy inspection requirements.

Student & Teacher Experience

Student Interface

What Students Can Do:

  • Ask questions as they would to a tutor
  • Get help that teaches: hints first, then method, then checks
  • Improve drafts with feedback on clarity, structure, and reasoning
  • Generate practice questions and revision plans
  • Access the class Q&A bank for teacher-approved explanations

How Student AI Behaves:

Hedge AI is designed to support learning, not shortcut it. The assistant encourages working and reasoning, explains concepts using examples, asks clarifying questions when students are vague, can be configured to require sources for factual claims, and refuses harmful content categories. Most importantly, it follows your school’s task modes.
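To make the idea of task modes concrete, here is a minimal toy sketch (an assumption for illustration, not Hedge AI's implementation) of how a response pipeline might escalate scaffolding from hints to method, while withholding full solutions in assessed contexts:

```python
# Illustrative only: a toy scaffolding ladder, not Hedge AI's actual logic.
SCAFFOLD_LADDER = ["hint", "worked_step", "concept_check", "full_method"]

def next_support_level(task_mode: str, attempts: int) -> str:
    """Pick the level of help to offer, based on mode and prior attempts."""
    if task_mode == "assessed":
        # Assessed work: never go beyond conceptual checks.
        allowed = SCAFFOLD_LADDER[:3]
    else:
        allowed = SCAFFOLD_LADDER
    # Escalate one rung per genuine attempt, capped at the top allowed rung.
    return allowed[min(attempts, len(allowed) - 1)]
```

For example, a student on homework mode starts with a hint and can eventually reach a full method, while the same question in assessed mode tops out at a concept check.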

Teacher Interface

What Teachers Can Do:

  • Triage student questions in a Q&A inbox
  • Publish good answers to the class knowledge bank, reducing repetition
  • See where students are struggling across the class
  • Set task modes for homework, revision, and assessed work
  • Use staff AI tools to draft repetitive writing and planning outputs

Workload Support Examples:

Use Hedge AI to draft (then you approve): parent emails and class updates, trip letters and event communications, lesson scaffolds and differentiated practice sets, rubrics and feedback templates, meeting notes and summaries.

Academic Integrity:

We don’t sell magic detection of “who used AI.” That’s unreliable and it creates false accusations that damage relationships. Hedge AI supports integrity by design: task modes that restrict answer dumping in assessed contexts, structured prompts that ask for steps and reflection, and options for process evidence workflows where appropriate. Prevention beats detection every time.

Hedge AI is designed for school use by minors. That means safety, oversight, and accountability are first-class requirements—not features added later. Our approach to safeguarding is built into the system architecture, not retrofitted as an afterthought.

Safeguarding & Privacy: Built for Students

Safety Behaviour

Hedge AI is designed to refuse or safely handle high-risk categories: sexual content involving minors, explicit sexual content, graphic violence, self-harm encouragement, extremist recruitment content, and harassment and bullying guidance.

In high-risk situations, the goal isn’t to “chat it away.” The goal is to respond safely and follow your school’s safeguarding process.

Oversight Model

Schools need oversight. Students need trust. Hedge AI supports a balanced model: students use the tool for learning support, teachers see class-level trends and the Q&A content they choose to publish, and authorised roles receive risk alerts and can review relevant content when necessary, with every access logged.
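As a hedged illustration of the "every access is logged" principle (an assumed design, with role names and log fields invented for this example), a content review by an authorised role might be gated and recorded like this:

```python
# Illustrative sketch: role-gated content review with an audit trail.
# Role names and the log format are assumptions for this example.
from datetime import datetime, timezone

AUTHORISED_ROLES = {"safeguarding_lead", "headteacher"}
audit_log: list[dict] = []

def review_content(role: str, user: str, record_id: str, reason: str) -> bool:
    """Allow authorised roles to review flagged content; log every attempt."""
    allowed = role in AUTHORISED_ROLES
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "record": record_id,
        "reason": reason,
        "granted": allowed,
    })
    return allowed
```

Note that denied attempts are logged too, so the audit trail records who tried to view what, and why, regardless of outcome.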

We don’t recommend blanket surveillance. It breaks trust and increases liability.

Governance Reporting

Typical oversight includes: usage rates by year group and subject, common learning gaps surfaced by questions, safeguarding risk categories and response outcomes, and policy settings and changes over time.

Leadership visibility without surveillance.

Hedge AI is built so schools can govern AI use without turning the classroom into surveillance. Students learn responsibly. Teachers maintain control. Leaders have accountability.

Teacher Training & Feedback

Schools do not need motivational talks about AI. They need teachers who understand how AI actually behaves, where it fails, and how to use it in ways that protect learning and safeguarding.

Hedge AI runs professional teacher training sessions alongside our platform. These sessions serve two purposes:

  • to help teachers use AI confidently and responsibly in the classroom, and
  • to gather structured feedback and future demands from schools, grounded in real classroom constraints.

This is how Hedge AI stays aligned with how schools actually work.

Why we run teacher training

AI is already in schools, whether schools plan for it or not. Banning it entirely does not work. Allowing it without structure creates safeguarding and learning risks.

Teacher training is the missing layer.

Our sessions focus on practical understanding, not hype:

  • How large language models generate answers
  • Why hallucinations happen and how to spot them
  • Where AI sounds confident but is wrong
  • How “answer dumping” undermines learning
  • How to design classroom routines that keep students thinking


What teachers gain from the sessions

A clear mental model of AI

Teachers learn what AI is good at, what it is bad at, and what it should never be trusted to do without verification.

We cover:

  • hallucinations and confident errors
  • shallow but plausible explanations
  • bias, omissions, and oversimplification
  • why AI often gives the appearance of understanding without depth

This helps teachers make informed decisions instead of guessing.

Classroom routines that actually work

We focus on routines teachers can use immediately:

  • using hint-first and step-by-step prompts
  • separating homework help from assessed work
  • requiring sources and verification for factual content
  • turning student questions into reusable class resources

The goal is not to let AI replace teaching, but to support it without lowering standards.

Practical staff use cases

We also cover responsible staff-facing uses of AI, where it genuinely saves time:

  • Drafting communications and newsletters (staff-approved)
  • Planning events, trips, and administrative workflows
  • Creating rubrics and differentiated practice
  • Reducing repetitive admin


How we collect feedback and new demands

With your consent, training sessions are used as a structured feedback loop.

Structured feedback capture

During and after sessions, we ask teachers focused questions:

  • What task are you trying to do?
  • What goes wrong with current tools or processes?
  • Where does AI help, and where does it cause problems?
  • What would a meaningful improvement look like in practice?

This keeps feedback grounded in real workflows, not abstract ideas.


Cross-school comparison

Different schools have different policies, constraints, and cultures. We compare feedback across schools to identify:

  • what problems are common and worth solving at scale
  • what issues are policy-driven and non-negotiable
  • what can be handled through modes, settings, or governance rather than new features

This prevents one-off solutions that do not generalise.

A roadmap schools can trust

Feedback from training sessions directly informs Hedge AI’s roadmap.

We prioritise:

  • changes that improve safeguarding and governance
  • features that support learning without encouraging shortcuts
  • controls that schools can explain clearly to staff, students, and parents

We aim for deliberate evolution, not constant churn.


Hedge AI


Translation & Innovation Hub

Imperial College London

84 Wood Ln

London

W12 0BZ

Transforming the future of safe AI practice in schools

General Enquiries: enquiries@hedge-ai.co.uk

Booking Demos: collab@hedge-ai.co.uk

© 2026 Hedge AI. All rights reserved.