You’ve heard some buzz about generative AI (genAI), and you’ve learned to mitigate some of the concerns. Students will be tempted to use generative AI on assignments that feel either too high stakes or too low stakes, so our goal as instructors should be to be transparent and to demonstrate to our students why the assessments we ask them to complete are worth their time. By being clear about how our assessments benefit their learning and future careers, we motivate our students’ learning.
An additional way to be transparent with our students is to set a clear and explicit policy on AI use. Do not leave your students wondering if or how they are allowed to use AI; instead, demonstrate for them in class how you would like them to use it. Studies show this kind of transparency reduces academic misconduct; it will also benefit your classroom climate by reducing anxiety around AI use. As Bowen and Watson suggest, we can even use AI to help us get the language and tone of our policy right (91-92).
Bowen and Watson devote an entire chapter of their groundbreaking book on AI to writing a policy for your course (132-146). Important considerations include explicitly (transparently) stating why you adopted the policy you did (how does AI affect your field or your personal teaching methods?), finding shared values (consider co-creating your policy with your students), stating your intention to help students develop AI literacy (in step with your department’s goals and vision), and bringing renewed clarity to how you define plagiarism in your course (“new policies… demand new discussions,” 137).
Because the details matter, and because AI use will look different across disciplines, US institutions have largely left the implementation of AI policy up to individual instructors. This creates several good news/bad news scenarios. The good news is that you are being treated as an expert and allowed to decide for yourself how best to implement AI use in your courses. The bad news is that this is happening in real time and will require more labor from you. Another issue is that students may encounter a bewilderingly disparate array of AI policies, even within a single semester: one of their instructors might ban AI use while another requires it. So be sure to include your policy in your syllabus, publish it, and draw attention to it repeatedly during your classes. With great power comes great responsibility, and it will be up to you to make sure your students know how you want them to use AI in your course.
But, as always, we are here to help! If you would like to discuss any questions or concerns about creating an AI policy for your course, contact the CITL or your local teaching and learning center for a consultation. You can also read Bowen and Watson online, read our previous blog posts on AI, or browse an extensive (if not always exemplary) collection of example policies online. Another (perhaps better) reference is our own Justin Hodgson’s course policy example.
References:
“About Microsoft Copilot at IU.” IU Knowledge Base.
Bowen, Jose Antonio, and C. Edward Watson. Teaching with AI: A Practical Guide to a New Era of Human Learning. Johns Hopkins UP, 2024.
Eaton, Lance. Syllabi Policies for AI Generative Tools. 2023.
Hodgson, Justin. Generative AI, ChatGPT, & Syllabi: An Ethics of Practice. 2023.