Sample Assignments for Different Approaches to GAI Use

In a previous CATL article, we recommended using the traffic light model to guide students on the appropriate use of generative AI (GAI) in assignments and course activities. Assuming you’ve already included a policy on GAI in your syllabus, it’s also important to provide clear instructions in your assignment descriptions. Below are some sample assignment descriptions that use the traffic light approach and graphics. Instructors will vary on whether they want to use the visuals or simply explain in words; if you choose to use the stoplight visuals, be sure to provide an accompanying description of what they mean for your specific assignment. While tailored to specific subjects, these samples share common strategies.

Consider the following general suggestions when designing your assignments:

  • Be clear and specific about GAI use in your syllabi and assignments. Clearly outline when and how GAI can be used for assignments and activities. Avoid ambiguity so students know exactly what’s expected. For example, if brainstorming is allowed but not writing, specify that distinction.
  • Include GAI usage disclaimers in assignment directions. Regularly remind students by adding a GAI disclaimer at the beginning of assignment instructions. This will make them accustomed to looking for guidance on AI use before starting their work.
  • Explain the rationale for AI use or nonuse. Help students understand the reasoning behind when GAI can or cannot be used. This can reinforce the learning objectives and clarify the purposes behind your guidelines.
  • Clarify the criteria for evaluating AI collaboration. Specify how assignments will be graded concerning AI use. If students need to acknowledge or cite their AI usage, provide specific instructions on how they should do so.
  • Define which AI tools students can use. Should students stick to Microsoft Copilot (available to them with their UWGB account, so they don’t have to provide personal information to a third party or pay a subscription fee) or can they use others like ChatGPT?
  • Use the TILT framework. Leading with transparent design for assignments and activities helps students clearly understand the purpose, tasks, and assessment criteria. This framework can also help instructors clarify how GAI should be used and assessed in assignments.

Sample Assignment Instructions on AI Use

Red Light Approach: No GAI Use Permitted Assignment Example

The example below is for a writing emphasis course in which the assignment’s purpose is to evaluate students’ own writing. For this assignment, GAI tools are not allowed. The instructor includes an explanation in the assignment description to further clarify its purpose.

Yellow Light Approach: GAI Use Permitted for Specific Tasks/Tools Examples

The yellow-light approach can be hard to define, depending on what you want students to practice and develop for a given assignment. We’ve provided two samples below that each take a slightly different approach, but both clearly label which tools may be used, for which tasks, and why.

Green Light Approach: All GAI Use Permitted

Instructors may choose to take a green light approach to AI for all assignments or allow AI use for selected assignments. The example below takes a low-stakes approach, permitting full AI use to encourage experimentation. Even with this method, instructors should provide clear assignment expectations.

Learn More

Explore even more CATL resources related to AI in education.

Generative Artificial Intelligence (GAI) and Acknowledging or Citing Use

UW-Green Bay’s libraries have an excellent student-facing webpage on how to acknowledge or formally cite the use of GAI. This blog is intended to supplement that resource with information more specific to instructors. Professors will be vital in helping students understand both the ethics and practicalities of transparency when employing GAI tools in our work. Please keep the following caveats in mind as you explore this resource.

  • As with all things GAI, new developments are rapid and commonplace, which means everyone needs to be on the alert for changes.
  • Instructors are the ones who decide their specific course policies on disclosing or citing GAI. The information below provides some options for formatting acknowledgments, but they are not exhaustive.
  • Providing acknowledgment for the use of GAI may seem straightforward, but it is actually a very nuanced topic. Questions about copyright implications, whether AI can be considered an “author,” and the ethics of relationships between large AI entities and publishing houses are beyond the scope of this blog. Know, though, that such issues are being discussed.
  • Please remember that it is not only important for students to acknowledge or cite the use of GAI. Instructors need to do so with their use of it, as well.

Acknowledgment or Citation of GAI

There is a difference between acknowledging the use of GAI with a simple statement at the end of a paper, requiring students to submit a full transcript of their GAI chat in an appendix, and providing a formal citation in APA, MLA, or Chicago styles.

  • UWGB Libraries have some excellent acknowledgment examples on their page.
  • UWM’s library page provides basic templates for citations intended to be consistent with APA, MLA, and Chicago styles.
  • There are also lengthy blog explanations and detailed citation examples available directly from APA, MLA, and the Chicago Manual of Style.

Regardless of the specific format being used, the information likely to be required to acknowledge or cite GAI includes:

  1. The name of the GAI tool (e.g., Copilot, ChatGPT)
    Microsoft Copilot, OpenAI’s ChatGPT-4o (May 23, 2024 version), etc.
  2. The specific use of the GAI tool
    “to correct grammar and reduce the length in one paragraph of a 15-page paper”
  3. The precise prompts entered (initial and follow-up)
    “please reduce this paragraph by 50 words and correct grammatical errors”; follow-up prompt: “now cut 50 words from this revised version”
  4. The specific output and how it was used (perhaps even a full transcript)
    “specific suggestions, some of which were followed, of words to cut and run-on sentences to revise”
  5. The date the content was created
    August 13, 2024
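Pulling these five elements together, a student acknowledgment might look like the following sketch. The details below simply reuse the illustrative tool, prompts, and date from the list above; the exact wording and format will depend on your course’s citation style and disclosure requirements.

```text
Acknowledgment: I used Microsoft Copilot (August 13, 2024) to correct grammar
and reduce the length of one paragraph of this 15-page paper. Initial prompt:
"please reduce this paragraph by 50 words and correct grammatical errors";
follow-up prompt: "now cut 50 words from this revised version." I followed
some of the tool's suggestions for words to cut and run-on sentences to
revise. A full transcript of the session is available upon request.
```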

Ultimately, instructors decide what format is best for their course based on their field of study, the nature and extent of GAI use permitted, and the purpose of the assignment. It is important to proactively provide specific information to students about assignments. Professors who are particularly interested in whether students are using GAI effectively may focus on the prompts used or even ask for the full transcript of a session. If, in a specific assignment, the instructor is more interested in students learning their discipline’s citation style, they might ask for a formal citation in APA format. Although the decision is up to the professor, they should tell students in advance and strongly encourage them to keep a separate Word document for each of their classes in which they save any GAI chats (including prompts and output) and their dates. That way students have records to go back to; this matters because Copilot with data protection does not save the content of sessions.

What Messages Might I Give to Students about Using, Disclosing, or Citing GAI?

Instructors should consider how they will apply this information about acknowledgments and citations in their own classes. CATL encourages you to do the following in your work with students.

  1. Decide on a policy for acknowledging/citing GAI use for each course assignment and communicate it in your syllabus and any applicable handouts, Canvas pages, etc.
  2. Reinforce for students that GAI makes mistakes. Students are ultimately responsible for the accuracy of the work they submit and for not using others’ intellectual property without proper acknowledgment. They should be encouraged to check on the actual existence of any sources cited by a GAI tool because they are sometimes “hallucinated,” not genuine.
  3. Talk to students about the peer review and publication processes and what those mean for source credibility compared to the “scraping” process used to train GAI models.
  4. Explain that GAI is not objective. It can contain bias. It has been created by humans and trained on data primarily produced by humans, which means it can reflect their very real biases.
  5. Communicate that transparency in GAI use is critical. Instructors should be clear with their students about when and how they may use GAI to complete specific assignments. At the same time, one of the best ways instructors can share the importance of transparency and attribution is through modeling it themselves (e.g., an instructor disclosing that they used Copilot to create a case study for their course and modeling how to format the disclosure).
  6. Remind students that even if the specific format varies, the information they are most likely to have to produce for a disclosure/acknowledgment or citation is: a) the name of the tool, b) the specific use of the tool, c) the prompts used, d) the output produced, and e) the date of use.
  7. Finally, encourage students to copy and paste all GAI interaction information, including the entire chat history, into a Word document for your course and to save it for future reference. One advantage of Microsoft Copilot with data protections is that it does not retain chat histories. That’s wonderful from a security perspective, but it makes it impossible to re-create that information once a session has ended. Students should also know that even GAI tools that save interactions and use them to train their models are unlikely to reproduce the same output even if the same prompt is entered.

Indicating Generative AI Assignment Permissions with the Traffic Light Model (Red Light, Yellow Light, Green Light)

CATL recommends using the red, yellow, and green light approach to clearly label what level of generative AI (GAI) use is permitted for each of your course assignments. The traffic light graphics are useful, but each assignment’s instructions should also supplement them with precise written guidance. In general, you should include: a) whether GAI use is permitted, b) what tasks it can (e.g., brainstorming topic ideas) and can’t (e.g., creating text) be used on, c) how it should be cited (if applicable), and d) a rationale for why it can or can’t be used. We have provided brief examples below, but keep in mind that lengthy assignments involving complex GAI use might require much more detailed instructions of a page or more. Note that the text in brackets [ ] provides examples of words that might go there; you will need to choose and insert your own text.

Red Light Approach: No GAI Use Permitted

A red traffic light illuminated with an “x” symbol.
Collaboration with any GAI tool is forbidden for this activity. This assignment’s main goal is to develop your own [e.g., writing, coding] skills. Generative AI tools cannot be used because doing so would not help you build those skills or your confidence in them.

Yellow Light Approach: GAI Use Permitted for Specific Tasks and/or Using Specific Tools

A yellow traffic light illuminated with an “!” symbol.
You may use the GAI tool Copilot – and only Copilot – for specific tasks in this assignment, but not for all of them. You may use it to [brainstorm a research topic], but not for [writing or editing your research proposal]. You will need to properly cite or disclose your generative AI use in [e.g., APA Style]. If you are unsure or confused about what GAI use is permitted, please reach out to me.

OR

You may use GAI tools on this assignment to [e.g., create the budget for your grant proposal], but not to do anything else, such as creating text, constructing your persuasive arguments, or editing your writing. You will need to properly cite or disclose your generative AI use in [e.g., APA Style]. Although other tools are permitted, you are strongly encouraged to use Microsoft Copilot with data protections for reasons of security, equity, and access to GBIT technical support.

Green Light Approach: All GAI Use Permitted

A green traffic light illuminated with a checkmark symbol.
You are encouraged to use GAI tools for this assignment. Any generative AI use will need to be disclosed and cited using the methods described in your syllabus. For this assignment, you may use GAI tools to [e.g., brainstorm; create questions, text, or code; organize information; build arguments; and edit]. You will need to properly cite or disclose how and where you used generative AI in [e.g., APA Style]. If you would like feedback on your GAI tool use or have questions, please reach out to me.


Outlining When and How Students May Use GAI

An instructor may want to outline specific tasks when using the traffic light approach. Consider some of the examples below.

You may use AI to “[task(s)]”, but not to “[task(s)]”:

  • Analyze Data
  • Brainstorm Ideas, Thesis Statements, etc.
  • Build Arguments
  • Conduct Peer Review
  • Create Discussion Posts
  • Create Questions
  • Create Study Guides
  • Develop Thesis Statements
  • Edit Content
  • Format Documents/Presentations
  • Generate Citations
  • Generate New Text, Code, Art, etc.
  • Generate Research Questions
  • Generate Samples/Examples
  • Organize Information
  • Provide Explanations/Definitions
  • Research a Topic
  • Search for Research Articles
  • Summarize Text/Literature/Article
  • Write Self-Reflections

Learn More

Explore even more CATL resources related to AI in education:

If you have questions, concerns, or ideas specific to generative AI tools in education and the classroom, please email us at catl@uwgb.edu or set up a consultation!

My Resistance (and Maybe Yours): Help Me Explore Generative AI

Article by Tara DaPra, CATL Instructional Development Consultant & Associate Teaching Professor of English & Writing Foundations

I went to the OPID conference in April to learn from colleagues across the Universities of Wisconsin who know much more than I do about Generative AI. I was looking for answers, for insight, and maybe for a sense that it’s all going to be okay.

I picked up a few small ideas. One group of presenters disclosed that AI had revised their PowerPoint slides for concision, something that, let’s be honest, most presentations could benefit from. Bryan Kopp, an English professor at UW-La Crosse, opened his presentation “AI & Social Inequity” by plainly stating that discussions of AI are discussions of power. He went on to describe his senior seminar that explored these social dynamics and offered the reassurance that we can figure this thing out with our students.

I also heard a lot of noise: AI is changing everything! Students are already using it! Other students are scared, so you have to give them permission. But don’t make them use it, which means after learning how to teach it, and teaching them how to use it, you must also create an alternate assessment. And you have to use it, too! But you can’t use it to grade or write LORs or in any way compromise FERPA. Most of all, don’t wait! You’re already sooo behind!

In sum: AI is everywhere. It’s in your car, inside the house, in your pocket. And (I think?) it’s coming for your refrigerator and your grocery shopping.

I left the conference with a familiar ache behind my right shoulder blade. This is the place where stress lives in my body, the place of “you really must” and “have to.” And my body is resisting.

I am not an early adopter. I let the first gen of any new tech tool come and go, waiting for the bugs to be worked out, to see if it will survive the Hype Cycle. This year, my syllabus policy on AI essentially read, “I don’t know how to use this thing, so please just don’t.” Though, in my defense, the fact that I even had a policy on Generative AI might actually make me an early adopter, since a recent national survey of provosts found only 20% were at the helm of institutions with formal, published policy on the use of AI.

So I still don’t have very many answers, but I am remembering to breathe through my resistance, which has helped me develop a few questions: How can I break down this big scary thing into smaller pieces? How might I approach these tools with a sense of play? How can I experiment in the classroom with students? How can I help them understand the limitations of AI and the essential nature of their human brains, their human voices?

To those ends, I’d like to hear from you. Send me your anxieties, your moral outrage, your wildest hopes and dreams. What have you been puzzling over this year? Have you found small ways to use Generative AI in your teaching or writing? Have your ethical questions shifted or deepened? And should I worry that maybe, in about two hundred years, AI is going to destroy us all?

This summer and next year, CATL will publish additional materials and blog posts exploring Generative AI. CATL has already covered some of the “whats,” and will continue to do so, as AI changes rapidly. But, just as we understand that to motivate students, we must also talk about “the why,” we must make space for these questions ourselves. In the meantime, as I explore these questions, I’m leaning into human companionship, as members of my unit (Applied Writing & English) will read Co-Intelligence: Living and Working with AI by Ethan Mollick. We’re off contract this summer, so it’s not required, but, you know, we have to figure this out. So if we must, let’s at least do it over dinner.

What is Generative Artificial Intelligence (GAI)? Exploring AI Tools and Their Relationship with Education

Generative Artificial Intelligence (GAI) and machine-generated content have become prominent in educational discussions. Amidst technical jargon and concerns about the impact on traditional learning, writing, and other facets of education, understanding what these tools are and what they can do can be overwhelming. This toolbox guide provides insights into some commonly used generative AI tools and explains how they are changing the landscape of higher education.

What is Generative AI?

CATL created a short video presentation in Fall 2023 that provides instructors with an introduction to generative AI tools. The video and the linked PowerPoint slides below can help you understand how generative AI tools work, their capabilities, and their limitations. Please note, minor parts of the tool identification in the video have been corrected below in the ‘Common Generative AI Tools’ section. 

Introduction to Generative AI – CATL Presentation Slides (PDF)

Microsoft Copilot – UWGB Supported GAI Tool

Microsoft Copilot is the recommended tool for UWGB instructors and students for reasons of safety, equity, and GBIT technical support. Using Microsoft Copilot with your UWGB account means you do not need to create a personal account with a third party, which would require providing personal information during sign-up. Learn more about Copilot below.

  • Microsoft has created its own AI called Copilot using a customized version of OpenAI’s large language model and many of the features of ChatGPT. Users can interact with the AI through a chatbot, a compose feature, or the Microsoft Edge browser. Microsoft is also rolling out Copilot-powered features in many of its Office 365 products, but these features are currently only available for an additional subscription fee.
  • Faculty, staff, and students can access Copilot (which uses both ChatGPT 4.0 and Bing Chat) with their UWGB account. Visit www.copilot.microsoft.com to try out Copilot or watch our short video on how to log in using a different browser. After logging in with UWGB credentials, you should see a green shield and “protected” on the screen. The specifics of what is and is not protected can be complicated, but this Microsoft document is intended to provide guidance. Regardless of potential protections, FERPA- and HIPAA-protected information (student or employee) should not be entered.
The Microsoft Copilot home page as of May 2024

Common Generative AI Tools

Since OpenAI released ChatGPT in November 2022, various companies have developed their own generative AI applications based on or in direct competition with OpenAI’s framework. Learn more about a few common, browser-based generative AI tools below.

  • ChatGPT is an AI-powered chatbot created by OpenAI. The "GPT" in "ChatGPT" stands for Generative Pre-trained Transformer.
  • ChatGPT previously required users to sign up for an account and verify with a phone number, but it can now be used without one. The basic chatbot features are available with or without an account (currently ChatGPT 3.5), while more advanced models and features require a paid account (currently ChatGPT 4.0). For more information or to try it yourself, visit chatgpt.com.
  • Google has created its own AI tool called Gemini (formerly Google Bard). Similar to ChatGPT and Copilot, Gemini can generate content based on users’ inputs. Outputs may also include sources fetched from Google.
  • Using Gemini requires a free Google account. If you have a personal Google account, you can try out Gemini at gemini.google.com.


Note that we are also learning more about potential access to Adobe Express and Firefly (including their image generation features) with UWGB login credentials, at least for employees. Watch this space for additional details as they become available.

What Can Generative AI Tools Do?

The generative AI tools we’ve discussed so far are all trained on large datasets and produce outputs based on patterns in that data. User prompts and feedback can be used to improve their outputs and models, so these tools are constantly evolving. Explore below to learn about some use cases and limitations of text-based generative AI tools.

Generative AI tools can be used in a multitude of ways. Some common use cases for text-based generative AI tools include: 

  • Language generation: Users can ask the AI to write essays, poems, emails, research papers, PowerPoint presentations, or code snippets on a given topic.
  • Information retrieval: Users can ask the AI simple questions like “explain the rules of football to me” or “what is the correct way to use a semicolon?”.
  • Language translation: Users can use the AI to translate words or phrases into different languages.  
  • Text summarization: Users can ask the AI to condense long texts, including lecture notes or entire books, into shorter summaries.
  • Idea generation: Users can use the AI to brainstorm and generate ideas for a story, research outline, email, or cover letter. 
  • Editorial assistance: Users can input their own writing and then ask the AI to provide feedback or rewrite it to make it more concise or formal.
  • Code generation: Users can ask the AI to generate code snippets, scripts, or even full programs in various programming languages based on specific requirements or prompts.
  • Image generation: Users can ask the AI to create images or visual content from text descriptions, including illustrations, designs or conceptual art.

These tools are constantly evolving and improving, but in their current state, many have the following limitations:

  • False or hallucinated responses: Most AI-powered text generators produce responses that they deem are likely answers based on complex algorithms and probability, which is not always the correct answer. As a result, AI may produce outputs that are misleading or incorrect. When asking AI complex questions, it may also generate an output that is grammatically correct but logically nonsensical or contradictory. These incorrect responses are sometimes called AI "hallucinations."
  • Limited frame of reference: Outputs are generated based on the user's input and the data that the AI has been trained on. When asking an AI about current events or information not widely circulated on the internet, it may produce outputs that are not accurate, relevant, or current because its frame of reference is limited to data that it has been trained on. 
  • Citation: Although the idea behind generative AI is to generate unique responses, there have been documented cases in which an AI has produced outputs containing unchanged, copyrighted content from its dataset. Even when an AI produces a unique response, some tools are unable to verify the accuracy of their outputs or provide sources supporting their claims, and AI tools have been known to produce inaccurate or entirely hallucinated citations.
  • Machine learning bias: AI tools may produce outputs that are discriminatory or harmful due to pre-existing bias in the data they have been trained on.

The potential for GAI tools seems almost endless: writing complete essays, creating poetry, summarizing books and large texts, creating games, translating languages, analyzing data, and more. GAI tools can interpret and analyze language much as human beings can, and they have become more conversational and adaptive with each update, making it difficult to discern what is generated by an AI from what is produced by a human. Because the machine-learning models they are built on imitate the way humans learn, their accuracy and utility will only continue to improve over time.

What Does This Mean for Educators?

The existence of this technology raises questions about which tasks will be completed in whole or in part by machines in the future and what that means for our learning outcomes, assessments, and even disciplines. Some experts are discussing to what extent it should become part of the educational enterprise to teach students how to write effective AI prompts and use GAI tools to produce work that balances quality with efficiency. Other instructors are considering integrating lessons on AI ethics or information literacy into their teaching. Meanwhile, organizations like Inside Higher Ed have rushed to conduct research and surveys on current and prospective AI usage in higher ed, outlining benefits and challenges of generative AI for leaders looking to make informed decisions about AI guidance and policy.

Next Steps for UWGB Instructors

The Universities of Wisconsin have issued official guidance on the use of generative AI, but the extent to which courses will engage with this technology is largely left up to the individual instructor. Instructors may wish to mitigate, support, or even elevate students’ use of generative AI depending on their discipline and courses.

Those interested in using these tools in the classroom should familiarize themselves with these considerations for using generative AI, especially regarding a tool’s accuracy, privacy, and security. As with any tool we incorporate into our teaching, we must be thoughtful about how and when to use AI and then provide students with proper scaffolding, framing, and guardrails to encourage responsible and effective usage.

Still, even those who don’t want to incorporate this technology into their courses right now can’t ignore its existence. All instructors, regardless of their philosophy on AI, are highly encouraged to consider how generative AI will impact their assessments, incorporate explicit guidance on AI tool usage in their syllabi, and continue to engage in conversations around these topics with their colleagues, chairs, and deans.

Learn More

Explore even more CATL resources related to AI in education:

If you have questions, concerns, or ideas specific to generative AI tools in education and the classroom, please email us at catl@uwgb.edu or set up a consultation!