IU does not currently have any specific policies around the use of generative AI, other than policies related to data security and privacy.
According to Teaching@IU, Microsoft Copilot is IU's preferred generative AI service for faculty and staff; it can handle data classified up to the University-Internal level. When using public tools, data use is more restricted, even when the data are anonymized. See Acceptable uses of generative AI services at IU. Because understanding how data is classified can be complicated, IU provides a Data Classification Matrix to help faculty make appropriate decisions about the data they use.
There are resources developed by educators, for educators, to help formulate AI syllabus policy statements. For example:
This document, developed by Dr. Torrey Trust, provides concrete examples of what is and is not allowed in her class, and why. For instance, she explains that students can use AI to:
- help make information easier to understand (e.g., explaining technical or academic jargon, providing concrete examples of an abstract idea).
- make learning more accessible and digitally accessible for disabled individuals.
However, students cannot use generative AI tools to:
- respond to a discussion forum prompt, or
- analyze data and submit the data analysis as their own.
Tricia Bertram Gallant, Ph.D., suggests using Furze's AI assessment scale to have students identify the appropriate level of AI use/integration for each specific assignment/assessment in the class, as well as for various learning tasks students may generally engage in. This way, students are active participants in the process of responsible AI use: https://docs.google.com/document/d/1wJ3ZDOEvHkJ6licAinjBxkoX-_PKfvfU/edit?usp=drive_link&ouid=110175068244706505541&rtpof=true&sd=true
She also has a guide that helps faculty apply Furze et al.'s scale to their own assessments: https://docs.google.com/document/d/1xpYesTrDKtSpHKy7cVnheHxUBgsABL_s/edit
Relatedly, the Center for Teaching and Learning at the University of Massachusetts Amherst has developed an AI policy flowchart with questions that help faculty consider how to provide options that might increase the likelihood that students will follow a generative AI course policy: https://www.umass.edu/ctl/how-do-i-consider-options-may-increase-likelihood-students-will-follow-my-generative-ai-course. They suggest treating the flowchart as an iterative process: start with what you think your course policy might be, work through the decision points, and rework your policy accordingly. Temple University also has a decision tree intended to help faculty decide whether students should use AI in a course: https://sites.temple.edu/edvice/2023/06/14/a-survival-guide-to-ai-and-teaching-pt-3-should-i-allow-my-students-to-use-generative-ai-tools-decision-tree/. Finally, this resource asks what counts as cheating when AI use is allowed in the classroom: https://ditchthattextbook.com/ai-cheating/