How Will Generative AI Change My Course (GenAI Checklist)?

With the growing prevalence of generative AI applications and the ongoing discussions surrounding their integration into higher education, it can be overwhelming to contemplate their impact on your courses, learning materials, and field. As we navigate these new technologies, it is crucial to reflect on how generative AI can either hinder or enhance your teaching methods. CATL has created a checklist designed to help you consider how generative artificial intelligence (GAI) products may affect your courses and learning materials (syllabi, learning outcomes, and assessments).

Each step provides guidance on how to make strategic course adaptations and set course expectations that address these tools. As you work through the checklist, you may find yourself revisiting previous steps as you reconsider your course’s specifics and your understanding of GAI.

Checklist for Assessing the Impact of Generative AI on Your Course

View an abridged, printable version of the checklist to work through on your own.

Step One: Experiment with Generative AI

  • Experiment with GAI tools. Test Copilot (available to UWGB faculty, staff, and students) by inputting your own assignment prompts and assessing how well it completes them.
  • Research the potential benefits, concerns, and use cases regarding generative AI to gain a sense of the potential applications and misuses of this technology.

Step Two: Review Your Learning Outcomes

  • Reflect on your course learning outcomes. A good place to start is by reviewing this resource on AI and Bloom’s Taxonomy, which considers AI capabilities at each learning level. Which outcomes lend themselves well to the use of generative AI, and which outcomes emphasize your students’ distinctive human skills? Keep this in mind as you move on to steps three and four, as the way students demonstrate achievement of your learning outcomes may need to be revised.

Step Three: Assess the Extent of GAI Use in Class

  • Assess to what extent your course or discipline will be influenced by AI advancements. Are experts in your discipline already collaborating with GAI tools? Will current or future careers in your field work closely with these technologies? If so, consider what that means for your responsibility to prepare students to use generative AI effectively and ethically.
  • Determine the extent of usage appropriate for your course. Will you allow students to use GAI all the time or not at all? If students can use it, is it appropriate only for certain assignments/activities with guidance and permission from the instructor? If students can use GAI, how and when should they cite their use of these technologies (MLA, APA, Chicago)? Be specific and clear with your students.
  • Revisit your learning outcomes (step two). After assessing the impact of advancements in generative AI on your discipline and determining how the technology will be used (or not used) in your course, return to your learning outcomes and reassess if they align with course changes/additions you may have identified in this step.

Step Four: Review Your Assignments/Assessments

  • Evaluate your assignments to determine how AI can be integrated to support learning outcomes. The previous steps asked you to consider the relevance of AI to your field and its potential impact on students’ future careers. How are professionals in your discipline using AI, and how might you include AI-related skills in your course? What types of skills will students need to develop independently of AI, such as creativity, interpersonal skills, judgment, metacognitive reflection, and contextual reasoning? Can using AI for some parts of an assignment free up students’ time to focus more on the parts that develop these skills?
  • Revisit this resource on AI capabilities versus distinctive human skills as they relate to the levels of Bloom’s Taxonomy.
  • Define AI’s role in your course assignments and activities. As in step three, you’ll want to be clear with your students on how AI may be used for specific course activities. Articulate which parts of an assignment students can use AI assistance for and which parts students need to complete without AI. If AI use doesn’t benefit an assignment, explain to your students why it’s excluded and how the assignment will develop relevant skills that AI can’t assist with. If you find AI is beneficial, consider how you will support your students’ usage for tasks like editing, organizing information, brainstorming, and formatting. In your assignment instructions, explain how students should cite or otherwise disclose their use of AI.
  • Apply the TILT framework to your assignments to help students understand the value of the work and the criteria for success.

Step Five: Update Your Syllabus

  • Add a syllabus statement outlining the guidelines you’ve determined pertaining to generative AI in your course. You can refer to our syllabus snippets for examples of generative AI-related syllabus statements.
  • Include your revised or new learning outcomes in your syllabus and consider how you will emphasize the importance of those course outcomes for students’ career/skill development.
  • Discuss your guidelines and expectations for generative AI usage with students on day one of class, in addition to including them in your syllabus. Inviting your students to provide feedback on course AI guidelines can help increase their understanding and buy-in.

Step Six: Seek Support and Resources

  • Engage with your colleagues to exchange experiences and practices for incorporating or navigating generative AI.
  • Stay informed about advancements and applications of generative AI technology.

Checklist for Assessing the Impact of Generative AI on Your Course © 2024 by Center for the Advancement of Teaching and Learning is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International

Want More Resources?

Visit the CATL blog, The Cowbell, for more resources related to generative AI in higher education.

Need Help?

CATL is available to offer assistance and support at every step of the checklist presented above. Contact CATL for a consultation or by email at CATL@uwgb.edu if you have questions, concerns, or feel apprehensive about working through this checklist.


My Resistance (and Maybe Yours): Help Me Explore Generative AI

Article by Tara DaPra, CATL Instructional Development Consultant & Associate Teaching Professor of English & Writing Foundations

I went to the OPID conference in April to learn from colleagues across the Universities of Wisconsin who know much more than I do about Generative AI. I was looking for answers, for insight, and maybe for a sense that it’s all going to be okay.

I picked up a few small ideas. One group of presenters disclosed that AI had revised their PowerPoint slides for concision, something that, let’s be honest, most presentations could benefit from. Bryan Kopp, an English professor at UW-La Crosse, opened his presentation “AI & Social Inequity” by plainly stating that discussions of AI are discussions of power. He went on to describe his senior seminar that explored these social dynamics and offered the reassurance that we can figure this thing out with our students.

I also heard a lot of noise: AI is changing everything! Students are already using it! Other students are scared, so you have to give them permission. But don’t make them use it, which means after learning how to teach it, and teaching them how to use it, you must also create an alternate assessment. And you have to use it, too! But you can’t use it to grade or write LORs or in any way compromise FERPA. Most of all, don’t wait! You’re already sooo behind!

In sum: AI is everywhere. It’s in your car, inside the house, in your pocket. And (I think?) it’s coming for your refrigerator and your grocery shopping.

I left the conference with a familiar ache behind my right shoulder blade. This is the place where stress lives in my body, the place of “you really must” and “have to.” And my body is resisting.

I am not an early adopter. I let the first gen of any new tech tool come and go, waiting for the bugs to be worked out, to see if it will survive the Hype Cycle. This year, my syllabus policy on AI essentially read, “I don’t know how to use this thing, so please just don’t.” Though, in my defense, the fact that I even had a policy on Generative AI might actually make me an early adopter, since a recent national survey of provosts found only 20% were at the helm of institutions with a formal, published policy on the use of AI.

So I still don’t have very many answers, but I am remembering to breathe through my resistance, which has helped me develop a few questions: How can I break down this big scary thing into smaller pieces? How might I approach these tools with a sense of play? How can I experiment in the classroom with students? How can I help them understand the limitations of AI and the essential nature of their human brains, their human voices?

To those ends, I’d like to hear from you. Send me your anxieties, your moral outrage, your wildest hopes and dreams. What have you been puzzling over this year? Have you found small ways to use Generative AI in your teaching or writing? Have your ethical questions shifted or deepened? And should I worry that maybe, in about two hundred years, AI is going to destroy us all?

This summer and next year, CATL will publish additional materials and blog posts exploring Generative AI. CATL has already covered some of the “whats,” and will continue to do so, as AI changes rapidly. But, just as we understand that to motivate students, we must also talk about “the why,” we must make space for these questions ourselves. In the meantime, as I explore these questions, I’m leaning into human companionship, as members of my unit (Applied Writing & English) will read Co-Intelligence: Living and Working with AI by Ethan Mollick. We’re off contract this summer, so it’s not required, but, you know, we have to figure this out. So if we must, let’s at least do it over dinner.

What is Generative Artificial Intelligence (GAI)? Exploring AI Tools and Their Relationship with Education

Generative Artificial Intelligence (GAI) and machine-generated content have become prominent in educational discussions. Amidst technical jargon and concerns about the impact on traditional learning, writing, and other facets of education, understanding what these tools are and what they can do can be overwhelming. This toolbox guide provides insights into some commonly used generative AI tools and explains how they are changing the landscape of higher education.

What is Generative AI?

CATL created a short video presentation in Fall 2023 that provides instructors with an introduction to generative AI tools. The video and the linked PowerPoint slides below can help you understand how generative AI tools work, their capabilities, and their limitations. Please note that minor errors in the video’s identification of tools have been corrected below in the ‘Common Generative AI Tools’ section.

Introduction to Generative AI – CATL Presentation Slides (PDF)

Microsoft Copilot – UWGB Supported GAI Tool

Microsoft Copilot is the tool recommended for UWGB instructors and students because of its safety, equitable access, and GBIT technical support. Using Microsoft Copilot with your UWGB account eliminates the need to create a personal account, which would require providing personal information during sign-up. Learn more about Copilot below.

  • Microsoft has created its own AI called Copilot using a customized version of OpenAI’s large language model and many of the features of ChatGPT. Users can interact with the AI through a chatbot, a compose feature, or the Microsoft Edge browser. Microsoft is also rolling out Copilot-powered features in many of its Office 365 products, but these features are currently only available for an additional subscription fee.
  • Faculty, staff, and students can access Copilot (which is built on GPT-4 and the former Bing Chat) with their UWGB account. Visit www.copilot.microsoft.com to try out Copilot or watch our short video on how to log in using a different browser. After you log in with UWGB credentials, a green shield and the word “protected” should appear on the screen. The specifics of what is and is not protected can be complicated, but this Microsoft document is intended to provide guidance. Regardless of potential protections, FERPA- and HIPAA-protected information (student or employee) should not be entered.
[Image: The Microsoft Copilot home page as of May 2024]

Common Generative AI Tools

Since OpenAI released ChatGPT in November 2022, various companies have developed their own generative AI applications, either built on OpenAI’s models or in direct competition with them. Learn more about a few common, browser-based generative AI tools below.

  • ChatGPT is an AI-powered chatbot created by OpenAI. The "GPT" in "ChatGPT" stands for Generative Pre-trained Transformer.
  • ChatGPT previously required users to sign up for an account and verify it with a phone number, but it can now be used without one. The basic chatbot features are available with or without an account (currently GPT-3.5), while a paid account unlocks more advanced models and features (currently GPT-4). For more information or to try it yourself, visit chatgpt.com.
  • Google has created its own AI tool called Gemini (formerly Google Bard). Similar to ChatGPT and Copilot, Gemini can generate content based on users’ inputs. Outputs may also include sources fetched from Google.
  • Using Gemini requires a free Google account. If you have a personal Google account, you can try out Gemini at gemini.google.com.

Note that we are also learning more about potential access to Adobe Express and Firefly (including their image generation features) with UWGB login credentials, at least for employees. Watch this space for additional details as they become available.

What Can Generative AI Tools Do?

The generative AI tools we’ve discussed so far are all trained on large datasets, and they produce outputs based on patterns in that data. User prompts and feedback can be used to improve their outputs and models, so these tools are constantly evolving. Explore below to learn about some use cases and limitations of text-based generative AI tools.

Generative AI tools can be used in a multitude of ways. Some common use cases for text-based generative AI tools include: 

  • Language generation: Users can ask the AI to write essays, poems, emails, research papers, PowerPoint presentations, or code snippets on a given topic.
  • Information retrieval: Users can ask the AI simple questions like “explain the rules of football to me” or “what is the correct way to use a semicolon?”
  • Language translation: Users can use the AI to translate words or phrases into different languages.
  • Text summarization: Users can ask the AI to condense long texts, including lecture notes or entire books, into shorter summaries.
  • Idea generation: Users can use the AI to brainstorm and generate ideas for a story, research outline, email, or cover letter.
  • Editorial assistance: Users can input their own writing and then ask the AI to provide feedback or rewrite it to make it more concise or formal.
  • Code generation: Users can ask the AI to generate code snippets, scripts, or even full programs in various programming languages based on specific requirements or prompts (see the example after this list).
  • Image generation: Users can ask the AI to create images or visual content from text descriptions, including illustrations, designs, or conceptual art.
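
To make the code-generation use case above concrete, here is the kind of output a prompt like “write a Python function that counts word frequencies in a text file” might produce. This is an illustrative sketch only; the prompt, the function name, and the sample file are hypothetical, and real outputs will vary by tool, model version, and phrasing.

    # Hypothetical AI-generated response to the prompt:
    # "Write a Python function that counts word frequencies in a text file."
    from collections import Counter

    def word_frequencies(path: str) -> Counter:
        """Return a Counter mapping each lowercase word to how often it appears."""
        with open(path, encoding="utf-8") as f:
            text = f.read().lower()
        # Split on whitespace and strip surrounding punctuation from each token
        words = (w.strip('.,;:!?"()[]') for w in text.split())
        return Counter(w for w in words if w)

    # Example usage: print the five most common words in a (hypothetical) file
    print(word_frequencies("sample.txt").most_common(5))

Outputs like this can look polished while still containing subtle errors, which is one reason the limitations described below matter.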

These tools are constantly evolving and improving, but in their current state, many have the following limitations:

  • False or hallucinated responses: Most AI-powered text generators produce the responses they deem most likely based on complex algorithms and probability, and the most likely answer is not always the correct one (a toy sketch of this sampling process follows this list). As a result, AI may produce outputs that are misleading or incorrect. When asked complex questions, it may also generate an output that is grammatically correct but logically nonsensical or contradictory. These incorrect responses are sometimes called AI “hallucinations.”
  • Limited frame of reference: Outputs are generated based on the user’s input and the data that the AI has been trained on. When asked about current events or information not widely circulated on the internet, an AI may produce outputs that are not accurate, relevant, or current because its frame of reference is limited to its training data.
  • Citation: Although the idea behind generative AI is to generate unique responses, there have been documented cases in which an AI has produced outputs containing unchanged, copyrighted content from its dataset. Even when an AI produces a unique response, some tools are unable to verify the accuracy of their outputs or provide sources supporting their claims. AI tools have also been known to produce inaccurate citations, and can even hallucinate citations to sources that do not exist.
  • Machine learning bias: AI tools may produce outputs that are discriminatory or harmful due to pre-existing bias in the data they have been trained on.
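
The “likely answers” behavior described in the first bullet above is easier to see with a toy sketch of next-token sampling, the basic mechanism by which language models choose each word. The vocabulary and probabilities below are invented for illustration and enormously simplified compared to a real model, but they show why a fluent, confident output can still be factually wrong:

    import random

    # Toy next-token distribution for the prompt "The capital of Australia is".
    # These probabilities are invented for illustration; a real model has tens
    # of thousands of tokens and learns its probabilities from training data.
    next_token_probs = {
        "Canberra": 0.55,   # correct and most likely
        "Sydney": 0.35,     # wrong but statistically plausible (a "hallucination")
        "Melbourne": 0.10,
    }

    def sample_token(probs: dict) -> str:
        # Models pick tokens by sampling from a probability distribution,
        # not by looking up facts, so plausible-but-wrong tokens get chosen.
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    print("The capital of Australia is", sample_token(next_token_probs))

Run repeatedly, this sketch asserts the wrong capital roughly one time in three, with no signal that anything went wrong; that is, in miniature, why AI outputs always need human verification.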

The potential for GAI tools seems almost endless — writing complete essays, creating poetry, summarizing books and large texts, creating games, translating languages, analyzing data, and more. GAI tools can interpret and analyze language much as human beings can. These tools have become more conversational and adaptive with each update, making it increasingly difficult to discern between what is generated by an AI and what is produced by a human. Because the machine-learning models they are built on imitate the way humans learn, their accuracy and utility will only continue to improve over time.

What Does This Mean for Educators?

The existence of this technology raises questions about which tasks will be completed in whole or in part by machines in the future and what that means for our learning outcomes, assessments, and even disciplines. Some experts are discussing to what extent it should become part of the educational enterprise to teach students how to write effective AI prompts and use GAI tools to produce work that balances quality with efficiency. Other instructors are considering integrating lessons on AI ethics or information literacy into their teaching. Meanwhile, organizations like Inside Higher Ed have rushed to conduct research and surveys on current and prospective AI usage in higher ed, outlining the benefits and challenges of generative AI for leaders looking to make informed decisions about AI guidance and policy.

Next Steps for UWGB Instructors

The Universities of Wisconsin have issued official guidance on the use of generative AI, but the extent to which courses will engage with this technology is largely left up to the individual instructor. Instructors may wish to mitigate, support, or even elevate students’ use of generative AI depending on their discipline and courses.

Those interested in using these tools in the classroom should familiarize themselves with these considerations for using generative AI, especially regarding a tool’s accuracy, privacy, and security. As with any tool we incorporate into our teaching, we must be thoughtful about how and when to use AI and then provide students with proper scaffolding, framing, and guardrails to encourage responsible and effective usage.

Still, even those who don’t want to incorporate this technology into their courses right now can’t ignore its existence. All instructors, regardless of their philosophy on AI, are highly encouraged to consider how generative AI will impact their assessments, incorporate explicit guidance on AI tool usage in their syllabi, and continue to engage in conversations around these topics with their colleagues, chairs, and deans.

Learn More

Explore even more CATL resources related to AI in education.

If you have questions, concerns, or ideas specific to generative AI tools in education and the classroom, please email us at catl@uwgb.edu or set up a consultation!

Dispelling Common Instructor Misconceptions about AI

Staying updated on the rapidly evolving world of generative artificial intelligence (GAI) can be challenging, especially with new information and advancements arriving in quick succession. As tools like ChatGPT have taken the world by storm, many educators have developed divergent (and strong!) views about these technologies. It can be easy to get swept up in the hype or the doom and gloom of the media storm – overselling or underselling these technologies drives clicks, after all – but that also leads to the spread of misinformation as we try to cope with all the change.

In a previous blog post, we introduced generative AI technologies, their capabilities, and potential implications for higher education. Now, in this post, we will dig deeper into some important considerations regarding AI by exploring common misconceptions that some instructors may hold. While some educators are enthusiastic about incorporating AI into their teaching methodologies, others may harbor doubts, apprehensions, or simply lack interest in exploring these tools. Regardless of one’s stance, it is crucial that we all develop an understanding of how these technologies work so we can have healthy and productive conversations about GAI’s place in higher education.

Misconception #1: GAI is not relevant either to my discipline or to my work.

Reality: GAI is already integrated into many of the tools we use daily and will continue to become more prevalent in our work as technology evolves. 

Whether we teach nursing, accounting, chemistry, or writing, we use tools like personal computers, email, and the internet nearly every day. Generative AI is proving to be much the same, and companies like Google, Microsoft, and Meta are already integrating it into many of these everyday tools. Google now provides AI-generated summaries at the top of search results. Microsoft Teams offers a feature for recapping meetings using GAI and is experimenting with GAI-powered analytics tools in Excel and Word. Meta has integrated AI into the search bar of Instagram and Facebook. Canvas may have some upcoming AI integrations as well. Some of us may wish to put the genie back in the bottle, but this technology is not going away.

Misconception #2: The content that GAI produces is not very good, so I don’t have to worry about it.

Reality: GAI outputs will continue to evolve, improve, and become harder to discern from human-created content.

A lot of time, energy, and money is being invested in generative AI, which means we can expect AI-generated content to continue to advance rapidly. In fact, many GAI tools are designed to continually progress and improve upon previous models. Although identifying some AI-generated content may be easy now, we should assume that this will only become more difficult as the technology evolves and becomes better at mimicking human-created content. Generative AI tools have been described as performing like a “C average” student, but with additional development and thoughtful prompting, they may be capable of A-level work.

Misconception #3: I don’t plan on using AI in my courses, so I don’t need to learn about it or talk about it with my students or colleagues.

Reality: All instructors should engage in dialogue on the impact of AI in education and/or in their field.

Even if you don’t plan on using AI in your courses, it is still important to learn about these technologies and consider their impact on your discipline and higher education. Consider discussing AI technology and its implications with your department, colleagues, and students. In what ways will generative AI tools change the nature of learning outcomes and even careers in your discipline? How are other instructors responding? In what ways can instructors support each other as they each grapple with these questions?

Not sure where to start? Use CATL’s checklist for assessing the impact of generative AI on your course to understand how this technology might affect your students and learning outcomes, regardless of whether you plan to use AI in your courses or not.

Misconception #4: I’m permitting/prohibiting all AI use in my course, so I don’t need to provide further instructions for my students.

Reality: All instructors should clearly outline expectations for students’ use/non-use of AI in the course syllabus and assignment directions.

Whether you have a “red-light,” “yellow-light,” or “green-light” approach to AI use in your class, it is important to provide students with clear expectations and guidelines. Be specific in your syllabi and assignment descriptions about where and when you will allow or prohibit the use of these tools or features. Make sure your guidelines are consistent with official guidance from the Universities of Wisconsin and UW-Green Bay, communications from our Provost’s Office, and any additional recommendations from your chair or dean. CATL has developed syllabus snippets on generative AI usage that you are welcome to use, adapt, or borrow from for inspiration. Be as transparent as possible and recognize that students will be encouraged to check with you if they cannot find affirmative permission to use GAI in a specific way.

Misconception #5: All my students are already using AI and know how it works.

Reality: Many students do not have much experience with this technology yet and will need guidance on how to use it effectively and ethically. Students also have inequitable access.

While there is certainly a growing number of students who have started experimenting with GAI, instructors may be surprised at how many students have used these tools little, if at all. Even when students do have experience using GAI, we cannot assume that they understand how to use it effectively or know when its use is ethically problematic. Furthermore, while some students have access to high-speed internet, a personal computer, and paid access to their favorite GAI tool, others may have spotty or no web access and may rely on a cell phone as their only means of working on a course.

If you are permitting students to use GAI tools in your class, provide them with guidance on how they can partner with these tools to meet course outcomes, rather than using them as a shortcut for critical thinking. Encourage students to analyze the outputs produced by GAI and make assessments about where these tools are useful and where they fall short (e.g., Are the outputs accurate? Are they specific and relevant? What may be missing?). Classes should also engage in discussions about the importance of citing or disclosing the use of AI. UWGB’s librarians are a great resource if you would like help developing a lesson plan around information literacy, GAI “hallucinations,” or GAI citations in specific styles, such as APA. In terms of equitable access to GAI, while it may not be possible to control for all variables, one way you can help level the playing field is by having your students use Microsoft Copilot through their UWGB accounts. You could also have them document how they have used the tool (e.g., what prompts they used).

Misconception #6: If I use AI-generated content in my courses, I am not responsible for inaccuracies in the output.

Reality: If you use AI-generated content to develop your courses, you are ultimately responsible for verifying the accuracy of the information and providing credible sources.

GAI is prone to mistakes; therefore, it is up to human authors and editors to take responsibility for content generated in part or in whole by AI. Exercise caution when using GAI tools, because the information they provide may not always be accurate. GAI developers like OpenAI are upfront about GAI’s potential to hallucinate, so it’s best to vet outputs against trusted sources. Be sure to also watch out for potential bias in outputs, as these tools are trained on human-generated data that can contain biases. If you use GAI to develop course materials, you should disclose or cite that usage in the same format your students would use. It is also best practice to talk about these issues with students: they are ultimately responsible for the content they submit, and they should know, for example, that GAI grading that appears “unbiased” actually carries with it the biases of those who trained it.

Misconception #7: I can rely on AI detection tools to catch students who are using GAI inappropriately.

Reality: AI detection tools are unreliable, subject to bias, and provide no meaningful evidence for cases of academic dishonesty.

As research continues to come out about AI detectors, one thing is certain: they are unreliable at best. AI writing can easily fly under the radar with careful prompting (e.g., “write like a college sophomore and vary the sentence length” or “write like these examples”). Even more concerning is the bias present in AI detection, such as the disproportionately high rate of false positives for human writing by non-native English speakers. And unlike plagiarism detection, which is easy to verify and understand, the process of AI detection is a black box – instructors receive a score, but no rationale for how the tool made its assessment. These concerns have led many universities to ban the use of AI detectors entirely.

Instructors are encouraged to consider ways of fostering academic integrity and critical thinking rather than trying to police student behavior with AI detectors. If you’d still like to try an AI detection tool, know that its reports are not enough to constitute evidence of academic misconduct and should be treated only as a signal that additional review may be necessary. In most cases, the logical next step will be an open, non-confrontational conversation with the student to learn more about their thought process and any tools they may have used. Think, too, about the potential consequences of falsely accusing a student of academic misconduct. The threat of failing an assignment, or even a course, could erode a student’s trust in you or their department, jeopardize a scholarship that keeps them in school, and so on. The unreliability and lack of transparency of AI detection can also increase anxiety among students who are not engaging in academic misconduct.

Misconception #8: I can input any information into an AI tool as long as it is relevant to my job duties.

Reality: Instructors need to exercise caution when handling student data to avoid violating UWGB policy and federal law (e.g., privacy laws such as FERPA).

Many GAI tools are trained on user inputs, so we must exercise caution when considering what information is appropriate to use in a prompt. Even when a product claims that it doesn’t retain prompt information, there is still potential for data breaches or bugs that inadvertently put users’ data at risk. It is crucial that you never put students’ personally identifiable information (PII) into an AI-powered tool, as this may violate the Family Educational Rights and Privacy Act (FERPA). The same goes for work emails and documents that may contain sensitive information.

Misconception #9: AI advancement means the end of professors/teaching/higher education.

Reality: AI has many potential applications related to education, but CATL does not see them replacing human-led instruction.

Don’t get caught up in the smoke. Although the capabilities of generative AI can seem scary or worrying at first, that is a natural reaction to any major technological breakthrough. Education has experienced many shifts from technological advancements in the past, from the calculator to the internet, and has adapted and evolved alongside these technologies. It will take some time for higher education to embrace AI, but we can do our part by continuing to learn more about these technologies and asking important questions about their long-term impacts. Do you have questions or concerns about how AI will impact your course materials and assessments? Schedule a consultation with us – CATL is here to help!