Now that you are transparent about how you would like students to use generative AI in your class, you’ve moved on to teaching them AI literacy. Along with demonstrating how generative AI likes to hallucinate, you’re teaching them the 21st-century skill of prompt engineering.
But something goes wrong! You diligently use our recommended formula of [role] + [context] + [task] + [format], but the answer Copilot suggests is incorrect or insufficient. What do you do? Stay calm and don’t panic! Depending on the issue, there are additional strategies for getting Copilot to give you a good answer. For example, Bowen and Watson’s excellent Teaching with AI suggests that you can feed Copilot an example by pointing it toward, or uploading, a known-good reference (a PowerPoint, PDF, or website) from a source you trust. If I want to make sure that Copilot gives me good suggestions for improving my rubric, I tell it to consult the Cult of Pedagogy website when it generates its list of five suggestions.
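To illustrate what the formula looks like in practice, here is a made-up prompt (not one drawn from Bowen and Watson) that you could adapt to your own course: “You are an experienced instructional designer [role]. I teach a 200-student introductory composition course and use the attached rubric to grade weekly reflections [context]. Suggest five concrete revisions that would make the rubric’s criteria clearer to first-year students [task], presented as a numbered list with a one-sentence rationale for each [format].” Swapping in your own role, course details, task, and preferred output format is usually enough to get a noticeably better first answer.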
To learn how faculty are using Copilot to solve problems and improve their teaching, sign up for our next AI Faculty Showcase on April 10th. For more on writing prompts and how generative AI can impact your teaching, you can self-enroll in our “GenAI in the Classroom” modules, or watch a previous Faculty Showcase on how instructors are creatively using AI to support teaching large-enrollment courses. Contact the CITL with questions or for a personal teaching consultation on how generative AI can improve your teaching and learning outcomes.