AI in K-12 and Higher Ed: What School IT Leaders Need to Plan For

The honest AI-in-education conversation most vendors aren't having — infrastructure, identity, data boundaries, and the pieces that land on IT's desk whether you're ready or not.

John Lane 2022-08-09 7 min read

If you run IT for a school district, a community college, or a university, you have already had the AI conversation. Probably several times. Usually with a board member or a superintendent who read something over the weekend and wants to know "what our AI strategy is." The honest answer — "we don't have one yet, and neither does anyone else" — is not politically viable.

This post is what we actually tell our education customers when we sit down with them. It is not a product pitch. It is the list of things that will land on your IT team's desk over the next 18 to 24 months whether you're ready or not, and what you can do about each one.

The Stuff That's Already Happening

Before we talk about strategy, let's be honest about the current state. Students and teachers are already using AI. ChatGPT, Claude, Gemini, Copilot — they're in every browser, on every phone, and increasingly built into the productivity suites you already license. The question is no longer whether AI touches your learning environment. It's whether it touches it in a way you can see, govern, and support.

Shadow AI is the default

Every district and campus we work with has the same pattern: an acceptable use policy that technically prohibits AI, a teaching staff that uses it daily to write lesson plans and grade rubrics, and a student body that uses it for every writing assignment. Nobody is telling the IT team about it. The gap between policy and practice is bigger than anything we've seen since the BYOD wave a decade ago, and the solution is the same: stop pretending it isn't happening and build a sanctioned, governed path.

Microsoft and Google are already integrating

Copilot for Microsoft 365 and Gemini for Google Workspace are rolling into the productivity suites schools already license. These are not opt-in experiments anymore — they're showing up in Word, Outlook, Docs, and Gmail by default. If you haven't reviewed the data processing, the retention, and the tenant-level controls for these features, you need to put that at the top of the list. Student data is flowing into these integrations whether your AUP addresses them or not.

The Five Things That Land on IT's Desk

1. A data classification exercise you can no longer avoid

AI tools are fundamentally pattern-matching over data, so "what data is allowed in which tool" and "whose data is it" become the central governance questions. You need to know:

  • What constitutes student PII under FERPA in the tools you're licensing.
  • Whether each AI tool's processing happens in a tenant you control or a shared service you don't.
  • Whether training on your data is opted in, opted out, or negotiable.
  • Whether prompt contents are retained, logged, or used for model improvement.

Microsoft Copilot for M365, properly configured with commercial data protection settings, does not train on your tenant data and keeps prompts within your compliance boundary. The default consumer ChatGPT experience offers neither guarantee. This is a meaningful distinction, and it's on you to know which one your users are on.
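
One way to keep that distinction enforceable is to write the classification policy down as data instead of prose. Here's a minimal sketch in Python — the tool names and classification labels are illustrative, and the real map should come out of your own DPA reviews:

```python
# A data-classification-to-tool policy map. Tool names and classification
# labels are illustrative; populate the real map from your own DPA reviews.
from enum import Enum

class DataClass(Enum):
    PUBLIC = "public"              # published curriculum, public web content
    INTERNAL = "internal"          # staff documents, lesson plans
    STUDENT_PII = "student_pii"    # FERPA-covered educational records
    UNDER13_PII = "under13_pii"    # COPPA-covered data for students under 13

# Which classifications each sanctioned tool is cleared to receive.
TOOL_POLICY = {
    "copilot-m365-tenant": {DataClass.PUBLIC, DataClass.INTERNAL, DataClass.STUDENT_PII},
    "gemini-workspace-tenant": {DataClass.PUBLIC, DataClass.INTERNAL},
    "consumer-chatgpt": {DataClass.PUBLIC},
}

def is_allowed(tool: str, data_class: DataClass) -> bool:
    """True if the tool is cleared for this classification; unknown tools get nothing."""
    return data_class in TOOL_POLICY.get(tool, set())

print(is_allowed("consumer-chatgpt", DataClass.STUDENT_PII))  # False
```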

2. Identity and single sign-on for AI tools

Every AI tool your district adopts is another identity integration. The good news is the big ones all support SAML/OIDC and can be integrated with Entra ID, Google Workspace, or your existing identity provider. The bad news is the long tail — the twenty different AI-adjacent tools teachers are signing up for with personal emails — is a governance nightmare.

The practical step is to make SSO mandatory for any AI tool used on district resources, and to build a short approved list. Teachers can request additions. The approval process is fast but non-negotiable. This is not about saying no — it's about ensuring that when a student's records are flowing into a tool, the district knows about it.
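
On the integration side, most of what a vendor's SSO setup form asks for comes straight out of your IdP's OIDC discovery document. A small sketch against Entra ID — the tenant name is a placeholder:

```python
# Fetch the IdP's OIDC discovery document — the values a vendor's SSO setup
# form typically asks for come from here. "contoso.onmicrosoft.com" is a
# placeholder; substitute your own Entra ID tenant.
import requests

TENANT = "contoso.onmicrosoft.com"
DISCOVERY_URL = (
    f"https://login.microsoftonline.com/{TENANT}"
    "/v2.0/.well-known/openid-configuration"
)

config = requests.get(DISCOVERY_URL, timeout=10).json()
print("Issuer:            ", config["issuer"])
print("Authorization URL: ", config["authorization_endpoint"])
print("Token URL:         ", config["token_endpoint"])
```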

3. Regulatory mapping across FERPA, COPPA, and state law

COPPA applies to students under 13. FERPA applies to educational records. And by the end of 2025, nearly every state will have its own student data privacy law, many with AI-specific provisions. The burden of reading these and mapping them to your tool inventory will fall on IT, because nobody else in the organization has the technical fluency to do it.

Get ahead of this by maintaining a live spreadsheet (or better, a real SaaS management tool) that tracks every AI-touching application, its data processing addendum, its student data flow, and its parental consent posture. You will be asked for this inventory at some point. Having it ready is worth the effort.
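
If you do start with a spreadsheet, it's worth keeping the schema honest from day one. A minimal sketch of that inventory as structured records written to CSV — every field name and sample entry here is illustrative:

```python
# The inventory as structured records written to CSV, so it can live in a
# spreadsheet until a real SaaS management tool takes over. Field names and
# sample entries are illustrative.
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class AIToolRecord:
    name: str
    vendor: str
    dpa_signed: bool           # data processing addendum on file?
    student_data_flow: str     # e.g. "none", "rosters", "submitted work"
    parental_consent: str      # e.g. "not required", "obtained", "pending"
    sso_enforced: bool
    last_review: str           # ISO date of last compliance review

inventory = [
    AIToolRecord("Copilot for M365", "Microsoft", True, "submitted work",
                 "not required", True, "2025-01-15"),
    AIToolRecord("Consumer ChatGPT", "OpenAI", False, "unknown",
                 "pending", False, "2025-01-15"),
]

with open("ai_tool_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(AIToolRecord)])
    writer.writeheader()
    writer.writerows(asdict(rec) for rec in inventory)
```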

4. Network and bandwidth impact

This one is boring and usually overlooked. AI-heavy usage in the classroom — video generation, image analysis, long-context chat, voice interaction — is bandwidth-intensive. It's not the same kind of load as streaming video; it's bursty, latency-sensitive, and often concurrent across an entire class of students. Review your district's internet circuits and your building-to-internet paths. The days of 100 Mbps per building are ending fast.

For districts running E-rate eligible circuits, the good news is there's funding for upgrades. For higher ed, the good news is you probably already have more bandwidth than you're using. The issue is usually the WAN between buildings or the wireless in classrooms, not the core pipe.
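
A back-of-envelope model is enough to see why this load is different. The per-student figures below are rough planning assumptions, not vendor numbers:

```python
# Peak-load arithmetic for one building. Per-student figures are rough
# planning assumptions, not vendor specifications.
WORKLOADS_MBPS = {
    "text chat (streamed tokens)": 0.05,
    "voice interaction": 0.3,
    "image upload / analysis bursts": 2.0,
}

STUDENTS_PER_CLASS = 30
CONCURRENT_CLASSES = 20  # classes active at once in the building

for workload, per_student in WORKLOADS_MBPS.items():
    peak = per_student * STUDENTS_PER_CLASS * CONCURRENT_CLASSES
    print(f"{workload:32s} ~{peak:6.0f} Mbps building peak")
# Image-heavy work across 20 concurrent classes alone bursts past 1 Gbps.
```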

5. Teacher and staff enablement

This is the one IT leaders hate to hear: you're probably also going to own AI training for staff, because nobody else will. The curriculum department will want to focus on pedagogy, HR will focus on compliance, and "how do I actually use this tool safely" will land on IT or the instructional technology team.

Build a 90-minute training. Cover: which tools are approved, what data is safe to put into them, how to cite AI assistance, and the difference between the tenant-protected version and the open internet version. Make it annual, make it mandatory for any staff member who wants access to the approved AI tools, and document completion.

What Higher Ed Specifically Needs to Plan For

Universities and colleges face a few issues K-12 mostly doesn't.

Research data. Faculty are going to want to use AI on datasets covered by IRB, HIPAA, or grant agreements. The answer is usually private cloud inference — running models in a tenant you control, on compute you trust — rather than sending research data to a public API. This is a real cost and a real infrastructure project. Plan for it.
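
Most private inference stacks (vLLM, a dedicated cloud tenant) expose an OpenAI-compatible API, which keeps the switch from public endpoints mostly a configuration change for researchers. A sketch — the endpoint, key, and model name are all placeholders:

```python
# Pointing an OpenAI-compatible client at inference you control instead of a
# public API. The base_url, api_key, and model name are placeholders for
# whatever your private deployment (vLLM, a dedicated cloud tenant) exposes.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.research.example.edu/v1",  # internal endpoint (placeholder)
    api_key="issued-by-your-gateway",                # not a public vendor key
)

response = client.chat.completions.create(
    model="campus-model",  # placeholder name on the private deployment
    messages=[{"role": "user", "content": "Summarize this de-identified abstract: ..."}],
)
print(response.choices[0].message.content)
```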

Academic integrity infrastructure. AI detection is a lost cause as a technical project. We have yet to see a detection tool we'd stake a disciplinary decision on. The honest answer is that academic integrity policy has to evolve to describe permitted use, not to police prohibited use. IT's role is to make the permitted tools easy to use and the unauthorized ones harder, not to build a surveillance system that will fail.

Campus-hosted LLMs for research and coursework. A growing number of institutions are running their own inference infrastructure — either commodity GPUs on-prem or dedicated tenants with a hyperscaler — to give students and researchers access to capable models without the data leaving the university. This is a reasonable project for an institution with a real ML or CS program, and a vanity project for everyone else. Know which you are before committing the capital.
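
Part of knowing which you are is the capital math. A rough VRAM heuristic — model weights at a given precision plus headroom for KV cache — makes the hardware tier obvious before anyone prices GPUs:

```python
# Weights ≈ parameters × bytes per parameter, plus ~20% headroom for
# KV cache and activations. A planning heuristic, not a procurement spec.
def vram_estimate_gb(params_billion: float, bytes_per_param: float,
                     overhead: float = 1.2) -> float:
    return params_billion * bytes_per_param * overhead

for label, bytes_pp in [("FP16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"70B model at {label}: ~{vram_estimate_gb(70, bytes_pp):.0f} GB")
# FP16 (~168 GB) means multi-GPU; 4-bit (~42 GB) fits a single 48 GB card.
```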

The Strategy Conversation With the Board

When the board asks "what's our AI strategy," the honest answer is a version of this:

  1. We are governing the AI tools that already exist in our tenant (Copilot, Gemini) with the privacy settings and AUPs that apply to them.
  2. We have an approved-tools list that we maintain, and a process for teachers to request additions.
  3. We are training staff annually on safe use and data handling.
  4. We are tracking the regulatory landscape and updating our policies quarterly.
  5. We are budgeting for bandwidth and identity integration work that AI tool growth requires.

This is not exciting. It does not contain the word "transform." It will, however, keep you out of the newspaper for the wrong reasons, and it scales as the actual technology matures.

Three Takeaways

  1. The AI-in-schools conversation is a governance and infrastructure conversation, not a product conversation. Your job is not to pick a tool. It's to build the guardrails around whatever tools the learning community wants to use.
  2. Data classification, SSO, and an approved-tools list solve 80 percent of the real risk. The rest is training and bandwidth.
  3. Don't confuse campus-hosted LLM projects with strategy. Running your own inference is a capital project with a specific use case. It's rarely the thing that solves your AI governance problem.

Talk with us about your infrastructure

Schedule a consultation with a solutions architect.
