
The 5 Questions Every Superintendent Must Ask Before Adopting AI Tools

By John Lyman

AI governance · K-12 · superintendent · FERPA · COPPA · vendor vetting

A superintendent forwarded me a demo last week. An edtech vendor had just added an AI feature to their platform — automated feedback on student writing, personalized reading recommendations, the usual promises. The question was simple: Should we pilot this?

My response: Ask these five questions first.

Every week, another vendor rolls out AI capabilities. The pressure to adopt is real — and so are the risks. K-12 AI adoption isn't just a technology decision; it's a governance decision. Districts that move without clear answers often find themselves in front of the board explaining a data incident, or in front of parents explaining why their child's work was used to train a vendor's model. The districts that thrive aren't the ones that move fastest. They're the ones that move deliberately.

Before you sign a pilot agreement, seek board approval, or issue a purchase order, make sure you can answer the following — and document the answers. A one-page summary of these five questions and your district's answers doubles as a school district AI policy checkpoint and a defensible record if anyone asks how you're vetting AI tools.

1. Does This Tool Comply with FERPA and COPPA?

This isn't optional. FERPA protects education records; COPPA protects children under 13 online. Most AI tools that process student data — writing samples, quiz responses, behavioral notes — trigger one or both. Superintendent AI decisions that skip this step have led to headlines no district wants: student data in training sets, consent violations, and corrective action plans from state education agencies. FERPA AI compliance isn't a checkbox; it's a requirement.

What to ask the vendor: Where does our data go? Is it used to train their model? Do they have a Data Privacy Agreement (DPA) or Student Data Privacy Agreement? Are they aligned with the Student Data Privacy Consortium (SDPC) framework?

Red flags: "We use industry-standard security." "We're compliant." Vague answers. No written DPA. If they can't produce a clear, signed agreement that specifies how student data is handled, stored, and retained — and whether it's used for model training — don't move forward.

Green lights: SDPC-aligned vendor, explicit DPA, documented compliance with FERPA and COPPA.

2. Who Owns the Student Data Generated by the AI?

When a student uses an AI writing assistant, who owns the outputs? The prompts? The metadata about how the tool was used? In practice, vendors sometimes claim rights to "anonymized" or "aggregated" data — data that is still derived from your students' use. That creates long-term risk: your district loses control over how that data is used, sold, or repurposed. AI tool vetting for schools has to include a clear answer on ownership.

What to ask: Does the vendor claim ownership of any student-generated content or usage data? Can the district export and delete all student data upon request? What happens when the contract ends?

Red flags: Vendor claims ownership of aggregated or anonymized data derived from student use. Unclear data deletion procedures.

Green lights: District retains full ownership. Vendor provides data export and deletion within a defined timeframe (e.g., within 30 days of contract termination).

3. Can We Audit and Explain AI Decisions to Parents?

When a parent asks, "Why did this tool recommend that for my child?" — can you answer? Explainability isn't optional in K-12. Parents and guardians have a right to understand how tools that affect their child's education work. If the vendor says the logic is proprietary and they can't share it, you're one FOIA request or parent meeting away from a credibility problem. Districts that adopt AI without an audit trail also struggle when something goes wrong — there's no way to reconstruct what happened or why.

What to ask: Does the vendor provide transparency into how the AI makes decisions? Can you explain to a parent why a recommendation was made? Is there an audit trail?

Red flags: Black-box AI. "Proprietary algorithm." No explainability.

Green lights: Vendor provides documentation on how recommendations are generated. Audit logs available. Human review or override options for high-stakes decisions.

4. Does Our Board Policy Cover AI Use?

Many districts have acceptable use policies written before ChatGPT existed. Do yours cover AI? A school district AI policy that's silent on AI leaves every adoption decision in a gray zone. When a teacher signs up for a free AI tool or a department pilot goes live without a formal process, the board has no line of sight — and no clear authority. That's how shadow AI spreads: well-intentioned use with no governance. Updating your AUP or technology policy to explicitly address AI tools is a one-time lift that pays off every time a new tool is proposed.

What to ask: Does your current AUP or technology policy explicitly address AI tools? Who is authorized to adopt new AI applications? What's the process — principal approval? Tech director? Board vote?

Red flags: No policy. Policy is silent on AI. Adoption happens ad hoc.

Green lights: Board-adopted AI use policy. Clear governance: who can use what, for what purposes, with what data. Procurement process includes AI-specific vetting.

5. How Do We Train Staff on Responsible Use Before Rollout?

The best policy in the world fails if no one knows it exists. K-12 AI adoption without training leaves teachers guessing: Is it okay to put student work into a free AI tool? Can I use AI to draft parent emails? What do I say when a student turns in AI-assisted work? Role-specific training — different for teachers, counselors, and admins — closes that gap. It also gives you a record that the district took reasonable steps to ensure responsible use, which matters if a complaint or incident ever lands on the board's desk.

What to ask: Do we have role-specific training — for teachers, counselors, admins — before we deploy? Who delivers it? How do we measure that staff understand the boundaries?

Red flags: "We'll do a quick training." "Teachers are tech-savvy, they'll figure it out."

Green lights: Structured training aligned with policy. Scenarios: "A student asks you to use ChatGPT on an assignment — what do you say?" Documentation that training was completed.


Your Next Step

These five questions aren't meant to slow you down. They're meant to protect your district, your students, and your community. The vendors who can answer them clearly are the ones worth piloting. Documenting your answers gives you a repeatable school district AI policy checkpoint for every new tool — and a clear story for your board and parents when they ask how you're governing AI.

Book a free 15-minute Cognitive Audit call to assess your district's AI readiness, or take the AI Readiness Quiz to see where you stand.
