Dispelling Common Instructor Misconceptions about AI

Staying up to date on the rapidly evolving world of generative artificial intelligence (GAI) can be challenging, especially with new advancements arriving in rapid succession. As tools like ChatGPT have taken the world by storm, many educators have developed divergent (and strong!) views about these technologies. It can be easy to get swept up in the hype or the doom and gloom of the media storm (overselling or underselling these technologies drives clicks, after all), but that churn also spreads misinformation as we try to cope with all the change.

In a previous blog post, we introduced generative AI technologies, their capabilities, and their potential implications for higher education. In this post, we dig deeper into some important considerations regarding AI by exploring common misconceptions that instructors may hold. While some educators are enthusiastic about incorporating AI into their teaching, others harbor doubts or apprehensions, or simply lack interest in exploring these tools. Regardless of one’s stance, it is crucial that we all develop an understanding of how these technologies work so we can have healthy and productive conversations about GAI’s place in higher education.

Misconception #1: GAI is not relevant either to my discipline or to my work.

Reality: GAI is already integrated into many of the tools we use daily and will continue to become more prevalent in our work as technology evolves. 

Whether we teach nursing, accounting, chemistry, or writing, we use tools like personal computers, email, and the internet nearly every day. Generative AI is proving to be much the same: companies like Google, Microsoft, and Meta are already integrating it into many of the tools we rely on. Google now provides AI-generated summaries at the top of search results. Microsoft Teams offers a feature for recapping meetings using GAI and is experimenting with GAI-powered analytics tools in Excel and Word. Meta has integrated AI into the search bar of Instagram and Facebook. Canvas may have some upcoming AI integrations as well. Some of us may wish to put the genie back in the bottle, but this technology is not going away.

Misconception #2: The content that GAI produces is not very good, so I don’t have to worry about it.

Reality: GAI outputs will continue to evolve, improve, and become harder to discern from human-created content.

A lot of time, energy, and money is being invested in generative AI, which means we can expect AI-generated content to continue advancing rapidly. In fact, many GAI tools are designed to continually progress and improve upon previous models. Although identifying some AI-generated content may be easy now, we should assume that this will only become more difficult as the technology evolves and gets better at mimicking human-created content. Generative AI tools have been described as “C average” students, but with additional development and thoughtful prompting, they may be capable of A-level work.

Misconception #3: I don’t plan on using AI in my courses, so I don’t need to learn about it or talk about it with my students or colleagues.

Reality: All instructors should engage in dialogue on the impact of AI in education and/or in their field.

Even if you don’t plan on using AI in your courses, it is still important to learn about these technologies and consider their impact on your discipline and higher education. Consider discussing AI technology and its implications with your department, colleagues, and students. In what ways will generative AI tools change the nature of learning outcomes, and even careers, in your discipline? How are other instructors responding? In what ways can instructors support each other as they grapple with these questions?

Not sure where to start? Use CATL’s checklist for assessing the impact of generative AI on your course to understand how this technology might affect your students and learning outcomes, regardless of whether you plan to use AI in your courses.

Misconception #4: I’m permitting/prohibiting all AI use in my course, so I don’t need to provide further instructions for my students.

Reality: All instructors should clearly outline expectations for students’ use/non-use of AI in the course syllabus and assignment directions.

Whether you have a “red-light,” “yellow-light,” or “green-light” approach to AI use in your class, it is important to provide students with clear expectations and guidelines. Be specific in your syllabi and assignment descriptions about where and when you will allow or prohibit the use of these tools or features. Make sure your guidelines are consistent with official guidance from the Universities of Wisconsin and UW-Green Bay, communications from our Provost’s Office, and any additional recommendations from your chair or dean. CATL has developed syllabus snippets on generative AI usage that you are welcome to use, adapt, or borrow from for inspiration. Be as transparent as possible, and recognize that students are encouraged to check with you if they cannot find affirmative permission to use GAI in a specific way.

Misconception #5: All my students are already using AI and know how it works.

Reality: Many students do not have much experience with this technology yet and will need guidance on how to use it effectively and ethically. Students also have inequitable access to these tools.

While a growing number of students have certainly started experimenting with GAI, instructors may be surprised at how many have used these tools little, if at all. Even when students do have experience with GAI, we cannot assume that they understand how to use it effectively or know when its use is ethically problematic. Furthermore, while some students have access to high-speed internet, a personal computer, and paid access to their favorite GAI tool, others may have spotty or no web access and may be relying on a cell phone as their only means of working on a course.

If you are permitting students to use GAI tools in your class, provide them with guidance on how they can partner with these tools to meet course outcomes rather than using them as a shortcut for critical thinking. Encourage students to analyze the outputs GAI produces and assess where these tools are useful and where they fall short (e.g., Are the outputs accurate? Are they specific and relevant? What may be missing?). Classes should also discuss the importance of citing or disclosing the use of AI. UWGB’s librarians are a great resource if you would like help developing a lesson plan around information literacy, GAI “hallucinations,” or GAI citations in specific styles, such as APA. As for equitable access to GAI, while it may not be possible to control for all variables, one way you can help level the playing field is to have your students use Microsoft Copilot through their UWGB accounts. You could also have them document how they have used the tool (e.g., what prompts they used).

Misconception #6: If I use AI-generated content in my courses, I am not responsible for inaccuracies in the output.

Reality: If you use AI-generated content to develop your courses, you are ultimately responsible for verifying the accuracy of the information and providing credible sources.

GAI is prone to mistakes, so it is up to human authors and editors to take responsibility for content generated in part or in whole by AI. Exercise caution when using GAI tools, as the information they provide is not always accurate; developers like OpenAI are upfront about GAI’s potential to hallucinate, so it is best to vet outputs against trusted sources. Watch out, too, for potential bias in outputs, as these tools are trained on human-generated data that can contain biases. If you use GAI to develop course materials, you should disclose or cite that usage in the same format your students would use. It is also best practice to talk about these issues with students. They are ultimately responsible for the content they submit, and they should know, for example, that GAI output that appears “unbiased” actually carries the biases of those who trained it.

Misconception #7: I can rely on AI detection tools to catch students who are using GAI inappropriately.

Reality: AI detection tools are unreliable, subject to bias, and provide no meaningful evidence for cases of academic dishonesty.

As research on AI detectors continues to come out, one thing is certain: they are unreliable at best. AI writing can easily fly under the radar with careful prompting (e.g., “write like a college sophomore and vary the sentence length” or “write like these examples”). Even more concerning is the bias present in AI detection, such as the disproportionately high rate of false positives for human writing by non-native English writers. And unlike plagiarism detection, which is easy to verify and understand, AI detection is a black box: instructors receive a score but no rationale for how the tool made its assessment. These concerns have led many universities to ban the use of AI detectors entirely.

Instructors are encouraged to consider ways of fostering academic integrity and critical thinking rather than trying to police student behavior with AI detectors. If you would still like to try an AI detection tool, know that its reports do not constitute evidence of academic misconduct and should be treated only as a signal that additional review may be necessary. In most cases, the logical next step is an open, non-confrontational conversation with the student to learn more about their thought process and any tools they may have used. Think, too, about the potential consequences of falsely accusing a student of academic misconduct. The threat of failing an assignment, or even a course, could damage a student’s trust in you or their department, jeopardize a scholarship that is keeping them in school, and so on. The unreliability and opacity of AI detection can heighten anxiety even among students who are not engaging in academic misconduct.

Misconception #8: I can input any information into an AI tool as long as it is relevant to my job duties.

Reality: Instructors need to exercise caution when handling student data to avoid violating UWGB policy and federal law (e.g., privacy laws such as FERPA).

Many GAI tools are trained on user inputs, so we must exercise caution when considering what information is appropriate to include in a prompt. Even when a product claims that it does not retain prompt information, there is still potential for data breaches or bugs that inadvertently put users’ data at risk. It is crucial that you never put students’ personally identifiable information (PII) into an AI-powered tool, as doing so may violate the Family Educational Rights and Privacy Act (FERPA). The same goes for work emails and documents that may contain sensitive information.

Misconception #9: AI advancement means the end of professors/teaching/higher education.

Reality: AI has many potential applications related to education, but CATL does not see these tools replacing human-led instruction.

Don’t get caught up in the doom and gloom. Although the capabilities of generative AI can seem scary or worrying at first, that is a natural reaction to any major technological breakthrough. Education has weathered many shifts driven by technological advancements, from the calculator to the internet, and has adapted and evolved alongside these technologies. It will take some time for higher education to embrace AI, but we can do our part by continuing to learn about these technologies and asking important questions about their long-term impacts. Do you have questions or concerns about how AI will impact your course materials and assessments? Schedule a consultation with us; CATL is here to help!

Events on AI, Machine-Generated Content, and ChatGPT (Feb. 10, Feb. 17, Mar. 24 & Apr. 7, 2023)

Have you heard the term “ChatGPT” and wondered what everyone was talking about? Are you thinking about how artificial intelligence and machine-generated content could help you as a teacher or complicate your ability to assess true student learning? Experts from across UW-Green Bay are coming together to help you! Please read on to learn more about the sessions being offered in Spring 2023.

ChatGPT Workshop (Feb. 10 & 17, 8 – 9:30 a.m.)

We are excited to announce that the Cofrin School of Business, with support from CATL, is hosting a workshop on ChatGPT! Come learn about ChatGPT by OpenAI. Join CSB faculty in this interactive workshop to experience the most advanced chatbot and discuss implications for teaching and learning.

The workshop is moderated by Oliver Buechse, Executive in Residence, Cofrin School of Business. It will be offered on two different Fridays, Feb. 10 and 17, from 8 – 9:30 a.m. in the Willie D. Davis Finance and Investment Lab on the first floor of Wood Hall. The workshops are free and open to all UWGB employees.

If you need an accommodation for any of the sessions that are a part of the “ChatGPT Workshop,” please contact Kathryn Marten (martenk@uwgb.edu).

AI, Teaching, & Learning Series (Feb. 17, Mar. 24, & Apr. 7, 11:40 a.m. – 12:30 p.m.)

UW-Green Bay Libraries, CATL, The Learning Center, and UWGB faculty are all coming together to offer a series of three workshops on machine-generated content applications and artificial intelligence tools, such as ChatGPT, and their potential impacts on teaching and learning. Participants will have the option to attend this series in person or via Zoom.

Teaching and Learning in the Time of ChatGPT | Friday, Feb. 17, 11:40 a.m. – 12:30 p.m.

UW-Green Bay instructors with expertise in artificial intelligence and machine learning will introduce us to AI content-generating tools, like ChatGPT, and their potential uses and pitfalls. Join other instructors for an engaging discussion about the impact on teaching and learning and a brief opportunity to test the tools for yourself.

Writing Assignments and Artificial Intelligence | Friday, Mar. 24, 11:40 a.m. – 12:30 p.m.

ChatGPT and other text-generating tools have raised concerns among instructors whose curricula rely upon writing assignments, from creative writing to lab reports and research papers. In this session, we’ll focus on the implications of these tools for writing pedagogy, assessment, and curriculum design.

Designing and Managing Authentic Assessments | Friday, Apr. 7, 11:40 a.m. – 12:30 p.m.

Some students will inevitably use artificial intelligence and text-generating tools, but there are strategies instructors can use to alleviate stress around assessing student learning. In this session, we will explore approaches to planning and developing authentic assessments that help students actively engage in their learning. This session will also offer resources to help instructors navigate the issues surrounding artificial intelligence and discuss ways to create assessments that embrace or acknowledge the use of AI and text-generating tools.

If you need an accommodation for any of the sessions that are a part of the “AI, Teaching & Learning Series,” please contact Kate Farley (farleyk@uwgb.edu).