How to Create an AI Governance Policy for Your School District
By John Lyman
A tech director called me two weeks ago. Her superintendent had just come out of a board meeting where three different parents asked about the district's AI policy. The superintendent's answer: "We're working on it." The tech director's question to me: "How do I actually build one?"
If that sounds familiar, you're not alone. Most districts know they need an AI governance policy. What they don't have is a clear process for creating one — especially one that's practical enough for teachers to follow, specific enough to satisfy the board, and flexible enough to survive the next twelve months of AI evolution. This guide gives you that process: seven steps, from stakeholder input to board adoption, built for superintendents and tech directors who need to move now.
Before You Start: Set the Right Expectations
An AI governance policy is not a technology plan. It's not a 40-page document that lives on a shelf. And it's not a ban.
A good school district AI policy answers four questions: Who can use AI tools? For what purposes? With what data? Under what oversight? Everything else — vendor lists, training plans, incident procedures — lives in supporting documents. The policy itself should be something any teacher, parent, or board member can read in under five minutes and walk away knowing where the district stands.
Step 1: Assemble Your Governance Team
Timeline: Week 1
You need a small, cross-functional team — five to eight people, not a massive committee. Include your tech director, a curriculum leader, a building principal, a teacher representative, a special education coordinator, and someone from legal or compliance. If your district has a parent advisory council, invite a representative.
This team doesn't write the policy from scratch. Their job is to define boundaries, pressure-test language, and ensure the policy reflects reality across buildings. Assign one person — typically the tech director or assistant superintendent — as the policy owner responsible for drafting, revisions, and shepherding the document through board approval.
Step 2: Audit Your Current AI Landscape
Timeline: Weeks 1–2
You can't govern what you can't see. Before you write a single line of policy, map what's already happening. Run a short staff survey asking which AI tools are in use, for what tasks, and whether students interact with them directly. Review your IT logs for AI-related domains. Check your edtech inventory for tools that have quietly added AI features since your last review.
The output is a simple inventory: AI tools currently in use, organized by category (instructional, administrative, student-facing) with notes on whether each has a signed data privacy agreement (DPA) or formal approval. This inventory becomes the foundation for your AI governance policy's vendor vetting framework, and it gives you concrete examples to reference when drafting.
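If you want a head start on the log review, a short script can flag hits against a watchlist of AI-related domains in a proxy or DNS log export. Here's a minimal sketch, assuming a CSV export with a "domain" column; the watchlist entries and the file name are illustrative placeholders, not an official list:

```python
import csv
from collections import Counter

# Illustrative watchlist -- swap in the domains relevant to your district.
AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "perplexity.ai", "character.ai",
}

def flag_ai_traffic(log_path: str) -> Counter:
    """Count requests to watchlisted AI domains in a proxy/DNS log
    export that has a 'domain' column (an assumed format)."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            # Match the domain itself or any subdomain of it.
            if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                hits[domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in flag_ai_traffic("proxy_log.csv").most_common():
        print(f"{domain}: {count} requests")
```

The point isn't surveillance; it's visibility. A list of the top ten AI domains your network actually touches makes the survey results much harder to dismiss.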
Step 3: Define Your District's AI Use Categories
Timeline: Week 2
Not all AI use carries the same risk. Your policy needs to distinguish between categories clearly. A practical framework uses three tiers.
Tier 1 — Open Use. AI tools used for personal professional productivity with no student data involved. Examples: generating discussion questions, brainstorming project ideas. No individual approval required, but general acceptable use guidelines apply.
Tier 2 — Approved Use. AI tools that interact with student data or are used directly by students. These require formal district approval, a signed DPA, and FERPA/COPPA compliance verification. A teacher cannot adopt a Tier 2 tool on their own — it goes through vendor vetting.
Tier 3 — Prohibited Use. Uses the district does not permit. Examples: entering IEP data, disciplinary records, or mental health notes into any AI tool; using AI for high-stakes decisions about placement or grading without human review; students under 13 using tools without COPPA compliance.
This tiered approach gives staff a decision framework they can apply in real time. The question shifts from "Can I use this?" to "Which tier does this fall into?"
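If your team wants to make the decision framework concrete, the logic is simple enough to write down directly. Here's a minimal sketch in Python, assuming three yes/no inputs; the function and parameter names are illustrative, not part of the policy itself:

```python
def classify_ai_use(involves_student_data: bool,
                    student_facing: bool,
                    prohibited_category: bool) -> str:
    """Map a proposed AI use to a tier under the three-tier framework.

    prohibited_category covers the Tier 3 list: IEP, disciplinary, or
    mental health data; high-stakes decisions without human review;
    under-13 use without COPPA compliance.
    """
    if prohibited_category:
        return "Tier 3: Prohibited"
    if involves_student_data or student_facing:
        return "Tier 2: Approved Use (vendor vetting, signed DPA, FERPA/COPPA check)"
    return "Tier 1: Open Use (general acceptable use guidelines apply)"

# Example: a teacher brainstorming discussion questions, no student data.
print(classify_ai_use(involves_student_data=False,
                      student_facing=False,
                      prohibited_category=False))  # Tier 1: Open Use (...)
```

Notice the order of the checks: prohibited uses are screened first, so a tool can never talk its way into Tier 2 by looking student-friendly. That's the same order staff should apply mentally.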
Step 4: Draft the Core Policy Document
Timeline: Weeks 3–4
Keep it to one to three pages. Include seven sections.
Purpose and scope. One paragraph stating that the policy governs AI tool use by staff and students district-wide.
Guiding principles. Three to five anchor statements — student data privacy is non-negotiable; AI supplements but does not replace professional judgment; equity of access is a priority; transparency with families is a default.
Use categories. Reference the three-tier framework from Step 3, with specifics on what falls in each tier and the approval process.
Roles and responsibilities. The tech director approves Tier 2 tools. Principals ensure building-level compliance. Teachers complete required training before using AI with students. The governance team reviews the policy annually.
Vendor requirements. All Tier 2 tools must have a signed DPA addressing FERPA, COPPA, data ownership, retention, deletion, and model training use.
Incident response. Suspected misuse or unauthorized data sharing is reported to the tech director and investigated under your existing data breach procedures.
Review cycle. Annual at minimum, more frequently as the AI landscape evolves.
Step 5: Build Your Vendor Vetting Checklist
Timeline: Week 4
The policy says Tier 2 tools require approval. The checklist is how that approval works in practice. Keep it to seven questions: Does the tool comply with FERPA? COPPA for users under 13? Is there a signed DPA covering data handling, retention, and deletion? Does the vendor use student data to train its models? Is the vendor aligned with the Student Data Privacy Consortium (SDPC)? Can the district export and delete all data on termination? Does the vendor provide transparency into AI decision-making?
A vendor that answers all seven clearly and in writing is worth piloting. A vendor that can't answer two or more is one to pass on. This checklist isn't exhaustive due diligence; it's a first-pass filter that keeps ungoverned tools out while letting vetted tools through efficiently.
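As a sketch of how that first-pass rule works in practice, here's one way to encode it; the question labels come straight from the checklist above, and the function name and the one-unclear-answer handling are assumptions you'd adapt:

```python
VETTING_QUESTIONS = [
    "FERPA compliant",
    "COPPA compliant for users under 13",
    "Signed DPA covering data handling, retention, and deletion",
    "Does NOT use student data to train models",
    "SDPC-aligned",
    "District can export and delete all data on termination",
    "Provides transparency into AI decision-making",
]

def first_pass(answers: dict[str, bool]) -> str:
    """Apply the Step 5 filter: all seven clear -> pilot;
    two or more unclear -> pass. Missing answers count as unclear."""
    unclear = [q for q in VETTING_QUESTIONS if not answers.get(q, False)]
    if not unclear:
        return "Worth piloting"
    if len(unclear) >= 2:
        return "Pass on this vendor. Unclear: " + "; ".join(unclear)
    return "Follow up before deciding: " + unclear[0]

# Example: vendor is solid except on model training and SDPC alignment.
answers = {q: True for q in VETTING_QUESTIONS}
answers["Does NOT use student data to train models"] = False
answers["SDPC-aligned"] = False
print(first_pass(answers))  # Pass on this vendor. Unclear: ...
```

Whether you run this as code or as a row in a spreadsheet, the discipline is the same: every answer in writing, every "we'll get back to you" counted as unclear.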
Step 6: Plan Role-Specific Training
Timeline: Weeks 4–5
The AI governance policy only works if people understand it. Training should be role-specific.
Teachers need to know what the tiers mean in practice, what data can and cannot enter AI tools, and how to talk to students about responsible use. Give them hands-on time with approved Tier 2 tools.
Building administrators need to know how to monitor compliance, respond to teacher requests about new tools, handle parent questions, and escalate suspected misuse.
Counselors and special education staff need to understand why IEP data, behavioral records, and mental health information are always Tier 3 — never entered into any AI tool, regardless of privacy claims.
Board members need the policy rationale, the risk landscape, and the metrics you'll use to prove it's working.
Deliver training before the policy takes effect. A policy that launches without training gets ignored until someone runs into a problem.
Step 7: Present to the Board and Communicate to Families
Timeline: Weeks 5–6
Bring the board three things: the policy, the vendor vetting checklist, and the training plan. Frame it around three messages — AI is already in our schools and this brings it under governance; the policy protects student data while enabling responsible innovation; the policy is a living document reviewed annually.
After adoption, communicate to families. A one-page parent-friendly summary — what the policy covers, why it matters, and how to ask questions — goes a long way. Post it on your website and include it in your next superintendent communication. Transparency with families isn't a nice-to-have; it's one of your guiding principles, and this is where you demonstrate it.
Keeping the Policy Alive
A policy that sits untouched for two years is worse than no policy: it gives the illusion of governance without the substance. Build in three maintenance practices.
Quarterly vendor reviews. Check your approved tools list against actual usage.
Annual policy review. Update tiers, vendor requirements, and training protocols as AI evolves.
Incident tracking. Keep a simple log of AI-related questions, issues, and near-misses. It tells you where the policy is working and where it has gaps, and it's also a powerful board reporting artifact: "Here's what we caught, here's how we responded, and here's what we've improved."
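The log itself doesn't need special software; a shared CSV works. Here's a minimal sketch, assuming a single shared file; the file name, columns, and example entry are placeholders you'd adapt:

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("ai_incident_log.csv")
FIELDS = ["date", "building", "type", "tool", "summary", "resolution"]

def log_incident(building: str, entry_type: str, tool: str,
                 summary: str, resolution: str = "open") -> None:
    """Append one row to the shared incident log, writing a header
    row first if the file doesn't exist yet."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(), "building": building,
            "type": entry_type, "tool": tool,
            "summary": summary, "resolution": resolution,
        })

# Hypothetical entry: a question, not a breach. Log those too.
log_incident("Lincoln MS", "question", "ChatGPT",
             "Teacher asked whether essay grading is Tier 2 or Tier 3")
```

Logging questions and near-misses, not just violations, is what makes the annual review useful: the entries tell you exactly which parts of the policy people find confusing.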
Your Next Step
You don't need to build this alone. Book a Cognitive Audit call and we'll map your district's current AI landscape, identify your highest-risk gaps, and help you draft a board-ready AI governance policy in 30 days. Or take the AI Readiness Quiz to see where your district stands before you start.