AI is becoming ubiquitous in the higher education space, and with increased usage comes increased risk. For example, shadow AI usage, or the use of AI tools that have not been vetted or approved by the institution, poses unique challenges for college and university security offices. In a 2025 EDUCAUSE survey of staff and faculty at higher education institutions, 42% of respondents indicated that their institution did not have any AI-related acceptable use policies at the time of the survey. Without clear policies on which AI tools to use and how to use them, the risks of shadow AI and data mismanagement increase. While AI can be a great tool for students, faculty, and staff, it is imperative that campus security offices collaborate with their administration to implement policies that promote the safe and secure use of these tools.
For guidance on creating or refining campus-focused, research-informed AI acceptable use policies, we reached out to Maggie Abate for her thoughts. Maggie is a Data Engineer and Research Liaison, bridging the important work happening at REN-ISAC and OmniSOC with the critical research efforts of Indiana University’s Kelley School of Business Data Science and AI Lab (DSAIL). She is a self-described “avid AI enthusiast” who works where cybersecurity meets AI. Maggie uses her expertise to empower members with access to innovative technologies, thereby enhancing their capabilities and driving progress. “In my free time,” Maggie told us, “you can find me vibe-coding an app like it’s a mixtape.”
—
Q: What role do AI acceptable use policies (AUPs) play in a college or university’s cybersecurity posture?
M.Abate: Think of the AI AUP as the guardrail that lets us move fast without falling off a cliff. It clarifies what data can go into which tools, who’s allowed to use them, and under what conditions. Done well, it shrinks “shadow AI,” hooks AI activity into existing controls (SSO/MFA, logging, incident response), and gives faculty and students safe, approved ways to experiment—so innovation continues without putting the institution or its people at risk.
Q: What should schools consider when building a secure AI acceptable use policy?
M.Abate: Start with data and work outward. Map AI use to your data classifications (FERPA, PHI/PII, research/IP), set minimum standards for tools, and require access controls that match risk (SSO/MFA, least privilege). Add monitoring (DLP/CASB, audit logs) and spell out allowed vs. prohibited inputs with a simple exception path. Plain-English rule: don’t paste restricted data into AI tools unless that tool is explicitly approved for that data type.
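One way to operationalize the data-type matrix and the “don’t paste restricted data” rule Maggie describes is to publish the approved-tools list in a machine-readable form that portals, browser extensions, or help-desk scripts can consult. The Python sketch below is purely illustrative: the tool names, classification labels, and the is_use_allowed helper are placeholders invented for this article, not part of any specific product or policy.

```python
# Illustrative sketch only: tool names and data classifications are hypothetical
# placeholders. Substitute your institution's approved-tools list and its own
# data classification scheme.

# Data classifications, ordered roughly from least to most sensitive.
CLASSIFICATIONS = ["public", "internal", "ferpa", "phi_pii", "restricted_research"]

# Approved-tools matrix: which classifications each vetted tool may receive.
APPROVED_TOOLS = {
    "campus-licensed-chatbot": {"public", "internal", "ferpa"},
    "consumer-chatbot-free-tier": {"public"},
    "on-prem-research-llm": {"public", "internal", "restricted_research"},
}

def is_use_allowed(tool: str, data_classification: str) -> bool:
    """Return True only if the tool is vetted AND approved for this data type."""
    if data_classification not in CLASSIFICATIONS:
        raise ValueError(f"Unknown classification: {data_classification}")
    # Unvetted tools default to "not allowed" -- the shadow AI case.
    return data_classification in APPROVED_TOOLS.get(tool, set())

if __name__ == "__main__":
    print(is_use_allowed("campus-licensed-chatbot", "ferpa"))       # True
    print(is_use_allowed("consumer-chatbot-free-tier", "phi_pii"))  # False
    print(is_use_allowed("unvetted-browser-plugin", "public"))      # False
```

A lookup like this can serve as the machine-readable companion to the published one-pager, so the exception path has a clear place to record newly approved tool-and-data combinations.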
Q: How should campus security teams communicate with faculty, staff, and students about the need for secure AI use?
M.Abate: Keep it simple, visible, and continuous. Share a one-page “Ask before you paste” guide and run short, role-based micro-trainings with real examples (prompt injection, fake “download this model” links). Create champions in departments, hold open office hours, and keep a quick feedback loop so people can request new tools or report issues without friction.
Q: If a college or university were starting from zero on AI policy, where do you suggest they begin?
M.Abate: Define scope and principles, then inventory your data flows across teaching, research, and admin. Set a baseline for any AI tool you’ll allow (no training on institutional data, retention limits, logging, SSO/MFA), publish an approved-tools list with a data-type matrix, and launch with a one-pager plus bite-size training. Iterate based on usage patterns and incident learnings rather than waiting for a “perfect” policy.
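Similarly, the minimum baseline Maggie lists (no training on institutional data, retention limits, logging, SSO/MFA) can be captured as a structured checklist that a review team completes for each candidate tool. Again, the field names, the 30-day retention default, and the example vendor record below are illustrative assumptions, not a prescribed schema.

```python
# Illustrative sketch only: field names and the example record are hypothetical.
from dataclasses import dataclass

@dataclass
class AIToolReview:
    name: str
    no_training_on_institutional_data: bool  # vendor contract forbids training on your data
    retention_days_max: int                  # prompt/output retention limit, in days
    audit_logging: bool                      # usage logs available to the security office
    sso_mfa: bool                            # integrates with campus SSO/MFA

def meets_baseline(tool: AIToolReview, retention_limit_days: int = 30) -> bool:
    """Check a candidate tool against a minimum institutional baseline."""
    return (
        tool.no_training_on_institutional_data
        and tool.retention_days_max <= retention_limit_days
        and tool.audit_logging
        and tool.sso_mfa
    )

if __name__ == "__main__":
    candidate = AIToolReview(
        name="example-vendor-assistant",
        no_training_on_institutional_data=True,
        retention_days_max=14,
        audit_logging=True,
        sso_mfa=False,  # fails the baseline until SSO/MFA is in place
    )
    print(meets_baseline(candidate))  # False
```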
Q: What are the risks of not having a comprehensive AI use policy for your campus?
M.Abate: You get data leakage, compliance trouble (FERPA/PHI/PII), and potential loss of IP—plus a surge of shadow AI that’s hard to monitor or investigate. Decisions end up made by unvetted tools, incident response gets murky without logs, and the reputational risk climbs with every misstep. In short: higher likelihood of harm and fewer ways to contain it.
Q: Beyond policy, what else can a campus security office do to promote secure AI use?
M.Abate: Focus on practice, not just paper. Run ongoing “AI safety” drills like a phishing program, publish quick newsletter tips, and make approved tools the default, easy path while quietly de-prioritizing risky alternatives. Build a community of practice with researchers and IT, and run tabletop exercises (prompt injection, model supply chain compromise) so people know what to do before something goes wrong.
—
Approaches to AI governance and policy will continue to evolve as both the tools and the ways we use them do. REN-ISAC members can leverage connections with other member representatives to lead discussions on building AI policies for their campuses and promoting responsible AI use. It’s a big task, but there are actions and initiatives your security office can take now to set your campus community up for success.
Further reading:
Jorstad, J.A. (2025, March 26). Opinion: Defining a strong cybersecurity ecosystem for higher ed. Government Technology. https://www.govtech.com/education/higher-ed/opinion-defining-a-strong-cybersecurity-ecosystem-for-higher-ed
Kwong, F. (2025, October 9). The shadow AI threat: Why higher ed must wake up to risks before headlines hit. Campus Technology. https://campustechnology.com/articles/2025/10/09/the-shadow-ai-threat-why-higher-ed-must-wake-up-to-risks-before-the-headlines-hit.aspx?Page=1
Robert, J., & McCormack, M. (2025). 2025 EDUCAUSE AI landscape study: Into the digital AI divide. https://www.educause.edu/content/2025/2025-educause-ai-landscape-study/use-cases (EDUCAUSE log-in required.)
Walker, C. (2024, June 12). Data security best practices for AI tools in higher education. EdTech. https://edtechmagazine.com/higher/article/2024/06/data-security-best-practices-ai-tools-higher-education