Sample Assignments for Different Approaches to GAI Use

In a previous CATL article, we recommended using the traffic light model to guide students on the appropriate use of generative AI (GAI) in assignments and course activities. Assuming you’ve already included a policy on GAI in your syllabus, it’s also important to provide clear instructions in your assignment descriptions. Below are some examples of assignment descriptions that use the traffic light approach and graphic. Instructors will vary on whether they want to use the visual or simply explain in words. If you choose to use the traffic light visuals, be sure to provide an accompanying description of what each light means for your specific assignment. While tailored to specific subjects, these samples share common strategies.

Consider the following general suggestions when designing your assignments:

  • Be clear and specific about GAI use in your syllabi and assignments. Clearly outline when and how GAI can be used for assignments and activities. Avoid ambiguity so students know exactly what’s expected. For example, if brainstorming is allowed but not writing, specify that distinction.
  • Include GAI usage disclaimers in assignment directions. Regularly remind students by adding a GAI disclaimer at the beginning of assignment instructions. This will make them accustomed to looking for guidance on AI use before starting their work.
  • Explain the rationale for AI use or nonuse. Help students understand the reasoning behind when GAI can or cannot be used. This can reinforce the learning objectives and clarify the purposes behind your guidelines.
  • Clarify the criteria for evaluating AI collaboration. Specify how assignments will be graded concerning AI use. If students need to acknowledge or cite their AI usage, provide specific instructions on how they should do so.
  • Define which AI tools students can use. Should students stick to Microsoft Copilot (available to them with their UWGB account, so they don’t have to provide personal information to a third party or pay a subscription fee) or can they use others like ChatGPT?
  • Use the TILT framework. Leading with transparent design for assignments and activities helps students clearly understand the purpose, tasks, and assessment criteria. This framework can also help instructors clarify how GAI should be used and assessed in assignments.

Sample Assignment Instructions on AI Use

Red Light Approach: No GAI Use Permitted Assignment Example

The example below is for a writing-emphasis course in which the assignment’s purpose is to evaluate students’ own writing. For this assignment, GAI tools are not allowed. The instructor includes an explanation in the description to further clarify the assignment’s purpose.

Yellow Light Approach: GAI Use Permitted for Specific Tasks/Tools Examples

The yellow light approach can be hard to define, depending on what you want students to practice and develop in a given assignment. We’ve provided two samples below that each take a slightly different approach, but both clearly label which tools can be used, for which tasks, and why.

Green Light Approach: All GAI Use Permitted

Instructors may choose to take a green light approach to AI for all assignments or allow AI use for selected assignments. The example below takes a low-stakes approach, permitting full AI use to encourage experimentation. Even with this method, instructors should provide clear assignment expectations.

Learn More

Explore even more CATL resources related to AI in education.

Generative Artificial Intelligence (GAI) and Acknowledging or Citing Use

UW-Green Bay’s libraries have an excellent student-facing webpage on how to acknowledge or formally cite the use of GAI. This blog is intended to supplement that resource with information more specific to instructors. Professors will be vital in helping students understand both the ethics and practicalities of transparency when employing GAI tools in their work. Please keep the following caveats in mind as you explore this resource.

  • As with all things GAI, new developments are rapid and commonplace, which means everyone needs to be on the alert for changes.
  • Instructors are the ones who decide their specific course policies on disclosing or citing GAI. The information below provides some options for formatting acknowledgments, but they are not exhaustive.
  • Providing acknowledgment for the use of GAI may seem straightforward, but it is actually a very nuanced topic. Questions about copyright implications, whether AI can be considered an “author,” and the ethics of relationships between large AI entities and publishing houses are beyond the scope of this blog. Know, though, that such issues are being discussed.
  • Please remember that it is not only important for students to acknowledge or cite the use of GAI. Instructors need to do so with their use of it, as well.

Acknowledgment or Citation of GAI

There is a difference between acknowledging the use of GAI with a simple statement at the end of a paper, requiring students to submit a full transcript of their GAI chat in an appendix, and providing a formal citation in APA, MLA, or Chicago styles.

  • UWGB Libraries have some excellent acknowledgment examples on their page.
  • UWM’s library page provides basic templates for citations intended to be consistent with APA, MLA, and Chicago styles.
  • There are also lengthy blog explanations and detailed citation examples available directly from APA, MLA, and the Chicago Manual of Style.

Regardless of the specific format being used, the information likely to be required to acknowledge or cite GAI includes:

  1. The name of the GAI tool (e.g., Copilot, ChatGPT)
    Microsoft Copilot, OpenAI’s ChatGPT-4o (May 23, 2024 version), etc.
  2. The specific use of the GAI tool
    “to correct grammar and reduce the length in one paragraph of a 15-page paper”
  3. The precise prompts entered (initial and follow-up)
    “please reduce this paragraph by 50 words and correct grammatical errors”; follow-up prompt: “now cut 50 words from this revised version”
  4. The specific output and how it was used (perhaps even a full transcript)
    “specific suggestions, some of which were followed, of words to cut and run-on sentences to revise”
  5. The date the content was created
    August 13, 2024

Ultimately, instructors decide what format is best for their course based on their field of study, the nature and extent of GAI use permitted, and the purpose of the assignment. It is important to proactively provide specific information to students about each assignment. Professors who are particularly interested in whether students are using GAI effectively may focus on the prompts used or even ask for the full transcript of a session. If, in a specific assignment, the instructor is more interested in students learning their discipline’s citation style, they might instead ask for a formal citation in, say, APA format. Whatever the decision, instructors should tell students in advance and strongly encourage them to keep a separate Word document for each of their classes in which they save any GAI chats (including prompts and output) and the date of use. That way students have records to go back to, which is especially important because Copilot with data protection does not save the content of sessions.

What Messages Might I Give to Students about Using, Disclosing, or Citing GAI?

Instructors should consider how they will apply this information about acknowledgments and citations in their own classes. CATL encourages you to do the following in your work with students.

  1. Decide on a policy for acknowledging/citing GAI use for each course assignment and communicate it in your syllabus and any applicable handouts, Canvas pages, etc.
  2. Reinforce for students that GAI makes mistakes. Students are ultimately responsible for the accuracy of the work they submit and for not using others’ intellectual property without proper acknowledgment. They should be encouraged to check on the actual existence of any sources cited by a GAI tool because they are sometimes “hallucinated,” not genuine.
  3. Talk to students about the peer review and publication processes and what those mean for source credibility compared to the “scraping” process used to train GAI models.
  4. Explain that GAI is not objective. It can contain bias. It has been created by humans and trained on data primarily produced by humans, which means it can reflect their very real biases.
  5. Communicate that transparency in GAI use is critical. Instructors should be clear with their students about when and how they may use GAI to complete specific assignments. At the same time, one of the best ways instructors can share the importance of transparency and attribution is through modeling it themselves (e.g., an instructor disclosing that they used Copilot to create a case study for their course and modeling how to format the disclosure).
  6. Remind students that even if the specific format varies, the information they are most likely to have to produce for a disclosure/acknowledgment or citation is: a) the name of the tool, b) the specific use of the tool, c) the prompts used, d) the output produced, and e) the date of use.
  7. Finally, encourage students to copy and paste all GAI interaction information, including the entire chat history, into a Word document for each course and to save it for future reference. One advantage of Microsoft Copilot with data protections is that it does not retain chat histories. That’s wonderful from a security perspective, but it makes it impossible to re-create that information once a session has ended. Students should also know that even GAI tools that save interactions and use them to train their models are unlikely to reproduce a session even if the same prompt is entered.

Indicating Generative AI Assignment Permissions with the Traffic Light Model (Red Light, Yellow Light, Green Light)

CATL recommends using the red, yellow, and green light approach to clearly label what level of generative AI (GAI) use is permitted for each of your course assignments. The traffic lights will be useful, but students will also need precise written instructions on each assignment to supplement them. In general, you should include: a) whether GAI use is permitted, b) what tasks it can (e.g., brainstorming topic ideas) and can’t (e.g., creating text) be used for, c) how it should be cited (if applicable), and d) a rationale for why it can or can’t be used. We have provided brief examples below, but keep in mind that lengthy assignments involving complex GAI use might require much more detailed instructions of a page or more. Note that the text in brackets [ ] provides examples of words that might go there; you will need to choose and insert your own text.

Red Light Approach: No GAI Use Permitted

[A red traffic light illuminated with an “x” symbol.] Collaboration with any GAI tool is forbidden for this activity. This assignment’s main goal is to develop your own [e.g., writing, coding] skills. Generative AI tools cannot be used because relying on them will not help you develop those skills and your confidence in those abilities.

Yellow Light Approach: GAI Use Permitted for Specific Tasks and/or Using Specific Tools

[A yellow traffic light illuminated with an “!” symbol.] You may use the GAI tool Copilot – and only Copilot – for specific tasks in this assignment, but not for all of them. You may use GAI tools to [brainstorm a research topic], but not for [writing or editing your research proposal]. You will need to properly cite or disclose your generative AI use in [e.g., APA Style]. If you are unsure or confused about what GAI use is permitted, please reach out to me.

OR

You may use GAI tools on this assignment to [e.g., create the budget for your grant proposal], but not to do anything else, such as create text, construct your persuasive arguments, or edit your writing. You will need to properly cite or disclose your generative AI use in [e.g., APA Style]. Although other tools are permitted, you are strongly encouraged to use Microsoft Copilot with data protections for reasons of security, equity, and access to GBIT technical support.

Green Light Approach: All GAI Use Permitted

[A green traffic light illuminated with a checkmark symbol.] You are encouraged to use GAI tools for this assignment. Any generative AI use will need to be disclosed and cited using the methods described in your syllabus. For this assignment, you may use GAI tools to [e.g., brainstorm, create questions, text, or code, organize information, build arguments, and edit]. You will need to properly cite or disclose how and where you used generative AI in [e.g., APA Style]. If you would like feedback on your GAI tool use or have questions, please reach out to me.


Outlining When and How Students May Use GAI

An instructor may want to outline specific tasks when using the traffic light approach. Consider some of the examples below.

You may use AI to “[task(s)]”, but not to “[task(s)]”:

  • Analyze Data
  • Brainstorm Ideas, Thesis Statements, etc.
  • Build Arguments
  • Conduct Peer Review
  • Create Discussion Posts
  • Create Questions
  • Create Study Guides
  • Develop Thesis Statements
  • Edit Content
  • Format Documents/Presentations
  • Generate Citations
  • Generate New Text, Code, Art, etc.
  • Generate Research Questions
  • Generate Samples/Examples
  • Organize Information
  • Provide Explanations/Definitions
  • Research a Topic
  • Search for Research Articles
  • Summarize Text/Literature/Article
  • Write Self-Reflections

Learn More

Explore even more CATL resources related to AI in education:

If you have questions, concerns, or ideas specific to generative AI tools in education and the classroom, please email us at catl@uwgb.edu or set up a consultation!

Creating Intentional and Impactful Multiple-Choice Assessments

Multiple-choice quizzes are one of the most common forms of assessment in higher education, as they can be used in courses of nearly every discipline and level. Multiple-choice questions are also one of the quickest and easiest forms of assessment to grade, especially when administered through Canvas or another platform that supports auto-grading. Still, like any assessment method, there are some contexts that are well-suited for multiple-choice questions and others that are not. In this toolbox article, we will provide some evidence-based guidance on when to leverage multiple-choice assessments and how to do so effectively.

Strengths and Weaknesses of Multiple-Choice Assessments

Multiple-choice assessments are a useful tool, but every tool has its limitations. As you weigh the strengths and weaknesses of this format, remember to consider your course’s learning outcomes in relation to your assessments. Then, once you’ve considered how your assessments align with your outcomes, determine if those outcomes are well-suited to a multiple-choice assessment.

Objectivity

Multiple-choice assessments are a form of objective assessment. For a typical multiple-choice item, there is no partial credit — each answer option is either fully correct or fully incorrect, which is what makes auto-grading possible. This objectivity is useful for assessing outcomes in which students need to complete a task with a concrete solution, such as defining discipline-specific terminology, solving a mathematical equation, or recalling the details of a historical event.

The tradeoff of this objectivity is that “good” multiple-choice questions are often difficult to write. Since multiple-choice questions presume that there is only one correct answer, instructors must be careful to craft distractors (incorrect answer options) that cannot be argued as “correct.” Likewise, the question stem should be phrased so that there is a definitively correct solution. For example, if a question is based on an opinion, theory, or framework, then the stem should explicitly reference this idea to reduce subjectivity.

Example of Subjective vs. Objective Question Stem

____ needs are the most fundamental for an individual's overall wellbeing.

  • A) Cognitive
  • B) Self-Esteem
  • C) Self-Actualization
  • D) Physiological

(Answer: D)

According to Maslow's hierarchy of needs, ____ needs are the most fundamental for an individual's overall wellbeing.

  • A) Cognitive
  • B) Self-Esteem
  • C) Self-Actualization
  • D) Physiological

(Answer: D)

This version of the question stem clarifies that this question is based on a framework, Maslow's hierarchy of needs, which increases the question's objectivity, and therefore its reliability and validity for assessment.

Another caution regarding the objectivity of multiple-choice questions is that answers to these test items can often be found through outside resources — students’ notes, the textbook, a friend, Google, generative AI, etc. — which has important implications for online testing. Experts in online education advise against trying to police or surveil students, and instead encourage instructors to design their online assessments to be open-book (Norton Guide to Equity-Minded Teaching, p. 106).

Open-book multiple-choice questions can still be useful learning tools, especially in frequent, low-stakes assessments or when paired with a few short answer questions. Fully auto-graded multiple-choice quizzes can function as “mastery” quizzes, in which a student has unlimited attempts but must get above a certain threshold (e.g., 90%, 100%) to move on. Using low-stakes, open-note practice tests can be an effective form of studying, and in many cases may be better for retrieval than students studying on their own.

You can also customize your Canvas quiz settings to control other conditions, such as time. Classic Quizzes and New Quizzes include options that add a layer of difficulty to repeatable multiple-choice assessments, such as time limits, shuffled questions or answer choices, and the use of question banks. These settings, when used with low-stakes assessments with multiple attempts, can help students practice meeting the course’s learning outcomes before larger summative assessments.

Versatility

Multiple-choice assessments sometimes get a bad reputation for being associated with rote memorization and lower order thinking skills, but in reality, they can be used to assess skills at every level of Bloom’s taxonomy. This includes higher order thinking skills, such as students’ ability to analyze a source, evaluate data, or make decisions in complex situations.

For example, you could present students with a poem or graph and then use a multiple-choice question to assess a student’s ability to analyze and interpret the example. Or, alternatively, you could create a question stem that includes a short scenario and then ask students to pick the best response or conclusion from the answer choices.

Examples of Multiple-Choice Items That Assess Higher Order Thinking Skills

[The poem is included here.]

The chief purpose of stanza 9 is to:

  • A)  Delay the ending to make the poem symmetrical.
  • B)  Give the reader a realistic picture of the return of the cavalry.
  • C)  Provide material for extending the simile of the bridge to a final point.
  • D)  Return the reader to the scene established in stanza 1.

(Answer: D)

This item tests higher order thinking skills because it requires test-takers to apply what they know about literary devices and analyze a poem in order to discriminate the best answer.

Source: Burton, S. J., et al. (2001). How to Prepare Better Multiple Choice Test Items: Guidelines for University Faculty.

[A line graph showing the relationship between time and heart rate for two groups of individuals administered a drug in a clinical trial; the y-axis ranges from 70 to 90 beats/min and the x-axis runs from the baseline heart rate to 5 minutes after the drug was administered.]

The graph above illustrates the change in heart rate over time for two different groups that were administered a drug for a clinical study. After studying the graph, a student concluded that there was a large increase in heart rate around the one-minute mark, even though the results of the study determined that patients' heart rates remained relatively stable over the duration of five minutes. Which aspect of the graph most likely misled the student when they drew their conclusion?

  • A)  The baseline for y-axis starts at 70 beats/min, rather than 0 beats/min.
  • B)  The y-axis is in beats/min, rather than beats/hour.
  • C)  The graph lacks a proper title.
  • D)  The graph includes datasets from two groups, instead of just one.

(Answer: A)

This item tests higher order thinking skills because it requires test-takers to analyze a graph and evaluate which answer choice might lead someone to draw a misleading conclusion from the graph.

Source: In, J. & Lee, S. (2017) Statistical data presentation. Korean J Anesthesiol, 70 (3): 267–276.


A nurse is making a home visit to a 75-year-old male client who has had Parkinson's disease for the past five years. Which finding has the greatest implication for the patient's care?

  • A)  The client's wife tells the nurse that the grandchildren have not been able to visit for over a month.
  • B)  The nurse notes that there are numerous throw rugs throughout the client's home.
  • C)  The client has a towel wrapped around his neck that the wife uses to wipe her husband's face.
  • D)  The client is sitting in an armchair, and the nurse notes that he is gripping the arms of the chair.

(Answer: B)

This item tests higher order thinking skills because it requires test-takers to apply what they know about Parkinson's disease and then evaluate the answer choices to determine which observation is the most relevant to the patient's care in the scenario.

Source: Morrison, S. and Free, K. W. (2001). Writing multiple-choice test items that promote and measure critical thinking. Journal of Nursing Education, 40 (1), 17-24.

Multiple-choice questions can also be adjusted for difficulty by tweaking the homogeneity of the answer choices. In other words, the more similar the distractors are to the correct answer, the more difficult the multiple-choice question will be. When selecting distractors, pick answer choices that seem appropriately plausible for the skill level of students in your course, such as common student misconceptions. Using appropriately difficult distractors will help increase your assessments’ reliability.

Despite this versatility, there are still some skills — such as students’ ability to explain a concept, display their thought process, or perform a task — that are difficult to assess with multiple-choice questions alone. In these cases, there are other forms of assessment that are better suited for these outcomes, whether it be through a written assignment, a presentation, or a project-based activity. Regardless of your discipline, there are likely some areas of your course that suit multiple-choice assessments better than others. The key is to implement multiple-choice assessments thoughtfully and intentionally with an emphasis on how this format can help students meet the course’s learning outcomes.

Making Multiple-Choice Assessments More Impactful

Once you have weighed the pros and cons of multiple-choice assessments and decided that this format fits your learning outcomes and assessment goals, there are some additional measures you can take to make your assessments more effective learning opportunities. By setting expectations and allowing space for practice, feedback, and reflection, you can help students get the most out of multiple-choice assessments.

Set Expectations for the Assessment

In line with the Transparency in Learning and Teaching (TILT) framework, disclosing your expectations is important for student success. Either in the Canvas quiz description or verbally in class (or both), explain to students the multiple-choice assessment’s purpose, task, and criteria. For example, is the assessment a low-stakes practice activity, a high-stakes exam, or something in between? What topics and learning outcomes will the assessment cover? What should students expect in terms of the number/type of questions and a time limit, if there is one? Will students be allowed to retake any part of the assessment for partial or full credit? Clarifying these types of questions beforehand helps students understand the stakes and goal of the assessment so they can prepare accordingly.

Provide Opportunities for Practice and Feedback

To help reduce test-taking anxiety and aid with long-term retrieval, make sure to provide students with ample practice before high-stakes assessments. Try to use practice assessments to model the format and topics that will be addressed on major assessments. If you are using a certain platform to conduct your assessments, like Canvas quizzes or a textbook publisher, consider having students use that same platform for these practice assessments so they can feel comfortable using the technology in advance of major assessments as well.

Research also indicates that providing feedback after an assessment is key for long-term retention. Interestingly, this is not only true for answers that students got wrong, but also in cases when a student arrives at the correct answer but with a low degree of confidence. Without assessment feedback, students may just check their quiz grade and move on, rather than taking the time to process their results and understand how they can improve.

You can include immediate and automatic qualitative feedback for quiz questions through Canvas Classic Quizzes and New Quizzes. Feedback (or “answer comments”) can be added to individual answer options or to an entire multiple-choice item. For example, you can add a pre-formulated explanation underneath an answer choice on why that distractor is a common misconception. If a student has incorrectly selected that answer choice, they can read that feedback after submitting their quiz attempt to learn why their choice was incorrect.

Create Space for Reflection

[A bar graph showing a positive relationship between final test scores and learning conditions that include practice tests with feedback.]

Source: Roediger III, H. L., & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention.

As indicated in the chart above, delayed feedback is potentially even more effective for long-term retention than immediate feedback. Consider reserving some time in class to debrief after important assessments and address students’ remaining questions. For asynchronous online courses, you could record a short post-test video in which you comment on trends you saw in students’ scores and clear up common misconceptions.

If you want to go a step further, you can also have students complete a self-reflective activity, also known as an exam wrapper, like a post-test survey or written reflection. Self-reflective activities like these have been shown to increase students’ overall performance in class by helping them learn how to reflect on their own study and performance habits, in addition to the positive effects on information retention mentioned earlier.

Questions?

Need some help designing your next multiple-choice assessment? Want to learn more about mastery quizzes, Canvas quiz settings, or exam wrappers? CATL is here to help! Reach out to us at CATL@uwgb.edu or schedule a consultation and we can help you brainstorm assessment solutions that fit your course’s needs. Or, if you’re ready to start building your assessment, check out this related guide for tips on writing more effective multiple-choice questions.

Steps Towards Assuring Academic Integrity

Article by Nathan Kraftcheck.

A common concern I hear when meeting with new distance education instructors is how to prevent cheating and plagiarism: how can they ensure the rigor of their assessments? Although there is no 100% successful strategy one can adopt, neither is there a 100% successful strategy for eliminating cheating during in-person assessments (Watson & Sottile, 2010). However, we still strive to limit academic dishonesty to the best of our abilities. I’ve provided some practices below that you may find useful for reducing both students’ opportunity to commit academic misconduct and their motivation to do so in your class.

Quiz and exam building strategies to save time and reduce cheating

One way in which online instructors reduce time spent grading is through online quizzes and exams. Systems like Canvas have allowed for automatic grading of certain question types for many years (open-ended manually graded questions are also available). Depending on an instructor’s goals, course objectives, and discipline, automatically scored online quizzes and exams can be a useful tool.

  • As a formative learning activity in itself—a way for students to check their own learning in low-stakes assessment.
  • As a replacement for other low-stakes work. This can be useful in alleviating the discussion board fatigue that many students cited during the Fall of 2020.
  • As a method to assess foundational knowledge that is necessary for future work in the major or program.
  • As a manageable way to assess a large number of students.

When including online quizzes and exams in a course, students’ ability to look up the answers is always a concern. Instructors may be tempted to direct students not to use external materials when taking their assessment. Unfortunately, just asking students not to use such material is unlikely to find much success. A more common, practical approach is to design the quiz or exam around the fact that many students will use external resources when possible.

  • Allow multiple attempts.
  • Draw questions from a pool of possible questions. Each student will have a randomized experience this way, and, depending on how large the pool is, some students may not see the same version of a question (assuming different wording across questions that measure the same concept). This lets students engage with the same concept framed differently, helping them work on the underlying concept instead of mastering a specific question’s language.
  • Shuffle the order of possible answers.
  • Limit how long the quiz or exam is open if you want to mimic a closed-book assessment; don't allow students time to look up all the answers.
  • Let students see the correct answers after the quiz or exam is no longer available. This can help if you'd like to allow your students to use their quiz or exam as a study guide.
  • Only show one question at a time. This can limit a student's ability to look up multiple questions at once and to share the questions with a friend.
  • Set availability and due dates for your quizzes or exams.
  • Modify your questions slightly from semester to semester. Do this for 100% of your publisher-provided questions; assume copies of publisher questions and their answers are available online for students to look up.

For more detailed information on these items, please look at this page for guidance.

Less high-stakes assessment and more low-stakes assessment

It’s fairly common for teaching and learning centers to promote an increase in the use of lower-stakes assessments and a decrease in higher-stakes ones. This might seem counterintuitive because more assessments could mean more opportunities to cheat, right? It may also seem like additional work, since students would be required to take assessments more frequently. However, there is good reason to advocate for more frequent, smaller assessments.

  • Students have a better understanding of how well they grasp discrete topics.
  • Students will know earlier if they’re not doing well, instead of at the first midterm.
  • Students learn from recalling information—a quiz can be more effective than just studying.
  • Students learn more through repeated assessment in comparison to one assessment (Brame & Biel, 2015; Roediger & Butler, 2011).

“Done” / “not done” grading

For low-stakes formative student assignments, consider adopting a done/not done approach. This could take the form of any assignment you can quickly assess for completion—for instance, a brief reflective or open-ended written submission or discussion post. By keeping the activity brief and focused, you can quickly mark it done or not done while still prompting students to re-engage with class topics by making meaning from what they’ve learned.

The University of Waterloo suggests reflective prompts such as:

  • What new insights did I develop as a result of doing this work?
  • How has my perspective changed after doing this assignment?
  • What challenges to my current thinking did this work present?
  • How does work in this course connect with work in another course?
  • What concepts do I still need to study more? Where are the disconnects in my learning?

Working up to larger projects and papers

For larger, summative projects that by their nature carry significant weight in the final course grade, consider breaking the project into smaller steps (Ahmad & Sheikh, 2016).

For a written paper assignment, an instructor could:

  • Start by asking students to select a topic within parameters you provide and find source material to support it.
  • Have students submit their topic description and sources as an assignment. Grade as complete/incomplete and provide guidance on the topic and/or sources if necessary.
  • Ask students to create an annotated bibliography.
  • Have students submit the annotated bibliography as an assignment. Grade as complete/incomplete.
  • Create a discussion in Canvas where students can talk about their topics (assuming they're all somewhat related). Grade as complete/incomplete.
  • Ask students to create a rough draft. Use peer review in Canvas to offload some grading and build in student-to-student communication and collaboration.
  • Have students submit a second or final draft. Grade with a rubric.

By asking students to select their own topic and work through the writing process step by step, the instructor can see the process a student takes through the paper’s development. Students also have less incentive to procrastinate—and thus less pressure to plagiarize someone else’s work (Elias, 2020)—since they’ve already done most of the work along the way. The approach further discourages plagiarism by de-emphasizing the finished product: the possible points are distributed across multiple activities (Carnegie Mellon University).

Built-in flexibility

Dropping lowest scores

Another way to reduce the appeal of cheating in courses is to offer some flexibility in grading (Ostafichuk, Frank & Jaeger). This can take many forms, of course, but one common to classes using a learning management system like Canvas is to drop the lowest score in a grouping of similar activities. As an example, an online instructor might have a Canvas Assignment Group containing all of their graded quizzes. The instructor can then create a rule for that Assignment Group which will tell Canvas how to calculate scores of the activities inside. For example, the instructor might set the Assignment Group to exclude each student’s lowest score in that group when calculating the final grade. The quiz score that is dropped varies from student to student, but each would have their lowest score dropped.
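For instructors curious what that Assignment Group rule does mathematically, the calculation simply discards each student's lowest score before averaging. A minimal sketch of the arithmetic (the function name is hypothetical; this is not Canvas's actual code):

```python
def average_with_drops(scores, drop_lowest=1):
    """Average a student's scores after dropping their lowest n scores."""
    if drop_lowest >= len(scores):
        raise ValueError("cannot drop every score")
    kept = sorted(scores)[drop_lowest:]  # discard the lowest score(s)
    return sum(kept) / len(kept)

# Each student has their own lowest quiz excluded:
average_with_drops([70, 85, 90, 95])  # the 70 is dropped
average_with_drops([88, 92, 60, 80])  # the 60 is dropped
```

The dropped score differs per student, but the rule—exclude the minimum—is the same for everyone, which is what makes it feel fair.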

Student options

Building off the concept of dropping the lowest score from a group of assignments, you can also use this functionality to build in student choice. For example, if you have two or more discussion activities in your class that meet the same learning objective, consider letting students select the one they want to participate in and “drop” the others.

Late work leeway

Another option for flexibility is to set the “due” dates for your graded activities but leave the “available until” date empty, or set it to the absolute last date and time you would accept submissions. Students can then submit work beyond the due date and have it flagged as “late,” which reduces the temptation to cheat for a student who has procrastinated or otherwise fallen behind in their coursework. You can also create late-work grading policies within Canvas that automatically deduct a percentage of possible points on a daily or weekly basis.

Make use of rubrics

Research has shown that rubrics are effective tools for highlighting the most important elements of an assignment, setting student expectations for the quality and depth of submitted work, and simplifying the grading process for instructors (Kearns). By making a rubric available ahead of time, students get another opportunity to see how their work will be graded and which crucial elements they should include. Rubrics can also address equity issues, leveling the playing field between students whose prior education has prepared them to succeed in college and those whose has not (Stevens & Levi, 2006). Some instructors have found that using rubrics reduces their time spent grading, possibly because of the focused nature of what is being assessed (Cornell University; Duquesne University).

Canvas has built-in rubrics that can be attached to any graded activity. They are used mostly in Canvas Assignments and Discussions; they can also be added to quizzes, but there they serve only as guidance for students and aren’t used in the actual grading process. To learn more about rubrics, take a look at the rubrics guide from Boston College; Canvas also offers a more task-oriented guide.

What do you think?

What techniques have you found useful in limiting academic misconduct in your classes? Let us know by dropping a comment below!