How Will Generative AI Change My Course (GenAI Checklist)?

With the growing prevalence of generative AI applications and the ongoing discussions surrounding their integration in higher education, it can be overwhelming to contemplate their impact on your courses, learning materials, and field. As we navigate these new technologies, it is crucial to reflect on how generative AI can either hinder or enhance your teaching methods. CATL has created a checklist designed to help instructors consider how generative artificial intelligence (GAI) products may affect their courses and learning materials (syllabi, learning outcomes, and assessments).

Each step provides guidance on how to make strategic course adaptations and set course expectations that address these tools. As you go through the checklist, you may find yourself revisiting previous steps as you reconsider your course specifics and understanding of GAI.

Checklist for Assessing the Impact of Generative AI on your Course

View an abridged, printable version of the checklist to work through on your own.

Step One: Experiment with Generative AI

  • Experiment with GAI tools. Test Copilot (available to UWGB faculty, staff, and students) by inputting your own assignment prompts and assessing how well it completes them.
  • Research the potential benefits, concerns, and use cases regarding generative AI to gain a sense of the potential applications and misuses of this technology.

Step Two: Review Your Learning Outcomes

  • Reflect on your course learning outcomes. A good place to start is by reviewing this resource on AI and Bloom’s Taxonomy, which considers AI capabilities at each learning level. Which outcomes lend themselves well to the use of generative AI, and which emphasize your students’ distinctive human skills? Keep this in mind as you move on to steps three and four, as the way students demonstrate achieved learning outcomes may need to be revised.

Step Three: Assess the Extent of GAI Use in Class

  • Assess to what extent your course or discipline will be influenced by AI advancements. Are experts in your discipline already collaborating with GAI tools? Will current or future careers in your field work closely with these technologies? If so, consider what that means about your responsibility to prepare students for using generative AI effectively and ethically.
  • Determine the extent of usage appropriate for your course. Will you allow students to use GAI all the time or not at all? If students can use it, is it appropriate only for certain assignments/activities with guidance and permission from the instructor? If students can use GAI, how and when should they cite their use of these technologies (MLA, APA, Chicago)? Be specific and clear with your students.
  • Revisit your learning outcomes (step two). After assessing the impact of advancements in generative AI on your discipline and determining how the technology will be used (or not used) in your course, return to your learning outcomes and reassess if they align with course changes/additions you may have identified in this step.

Step Four: Review Your Assignments/Assessments

  • Evaluate your assignments to determine how AI can be integrated to support learning outcomes. The previous steps asked you to consider the relevance of AI to your field and its potential impact on students’ future careers. How are professionals in your discipline using AI, and how might you include AI-related skills in your course? What types of skills will students need to develop independently of AI, such as creativity, interpersonal skills, judgement, metacognitive reflection, and contextual reasoning? Can using AI for some parts of an assignment free up students’ time to focus more on the parts that develop these skills?
  • Revisit this resource on AI capabilities versus distinctive human skills as they relate to the levels of Bloom’s Taxonomy.
  • Define AI’s role in your course assignments and activities. As in step three, you’ll want to be clear with your students on how AI may be used for specific course activities. Articulate which parts of an assignment students can use AI assistance for and which parts students need to complete without AI. If AI use doesn’t benefit an assignment, explain to your students why it’s excluded and how the assignment work will develop relevant skills that AI can’t assist with. If you find AI is beneficial, consider how you will support your students’ usage for tasks like editing, organizing information, brainstorming, and formatting. In your assignment instructions, explain how students should cite or otherwise disclose their use of AI.
  • Apply the TILT framework to your assignments to help students understand the value of the work and the criteria for success.

Step Five: Update Your Syllabus

  • Add a syllabus statement outlining the guidelines you’ve determined pertaining to generative AI in your course. You can refer to our syllabus snippets for examples of generative AI-related syllabi statements.
  • Include your revised or new learning outcomes in your syllabus and consider how you will emphasize the importance of those course outcomes for students’ career/skill development.
  • Address and discuss your guidelines and expectations for generative AI usage with students on day one of class and put them in your syllabus. Inviting your students to provide feedback on course AI guidelines can help increase their understanding and buy-in.

Step Six: Seek Support and Resources

  • Engage with your colleagues to exchange experiences and practices for incorporating or navigating generative AI.
  • Stay informed about advancements and applications of generative AI technology.

Checklist for Assessing the Impact of Generative AI on Your Course © 2024 by Center for the Advancement of Teaching and Learning is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International

Want More Resources?

Visit the CATL blog, The Cowbell, for more resources related to generative AI in higher education.

Need Help?

CATL is available to offer assistance and support at every step of the checklist presented above. Contact CATL to schedule a consultation or email us at CATL@uwgb.edu if you have questions, concerns, or feel apprehensive about working through this checklist.


Learning Outcomes that Lead to Student Success 

What are learning outcomes and why do you need them?

There’s a famous misquote attributed to Lewis Carroll: “If you don’t know where you’re going, any road will get you there.” The same is true in our courses: if you don’t know what you want your students to learn, it doesn’t really matter how or what you teach them. Every instructor wants to ensure student success, but if we as instructors don’t have accurate, well-thought-out learning outcomes, what does success mean in our classes? Creating learning outcomes should be a collaborative process in which the instructors responsible for teaching a course come together to craft these statements around the most important learning in the course, balancing critical thinking with foundational knowledge and keeping an eye toward what makes a learning outcome an achievable goal.

Learning outcome creation

Before you create course learning outcomes

  • If your course is part of a program, you should ensure that the learning outcomes mesh with the rest of the program to meet all program learning outcomes.
  • Plan collaboratively with colleagues teaching the same course. According to Higher Learning Commission (HLC) Criterion 3.A, all sections of the same course should have the same learning outcomes.
  • With colleagues, determine and list the most important learning or skills that will take place in this course.
  • Whittle down the list if it is too large. Consider what you and your colleagues can reasonably accomplish during the semester.
  • Pay attention to the conversation around Generative AI. What your students need to know and do may change because of the rapid development of AI.

Considerations as you create your learning outcomes

  1. Keep assessment, and therefore your verb choices, at the forefront of your mind. As you write learning outcomes, you want to ensure that they contain actions that can be demonstrated. Asking students to “understand” something is difficult to demonstrate; asking them to “explain” it instead is an action that can be performed and measured in various ways.
  2. Keep Bloom’s Taxonomy next to you as you create. It makes sense to use a taxonomy when writing outcomes. In Bloom’s model, skills and verbs at the bottom of the pyramid are less complex and less intellectually demanding than those at the top, but they may still be entirely appropriate, especially for lower-level courses. The skills at the top of the pyramid require more critical thinking, and it is useful and acceptable to draw verbs and abilities from all levels of the pyramid. If you are teaching an upper-level course, however, you don’t want to draw all your verbs and skills from the knowledge level of Bloom’s Taxonomy; you should be using some of the higher levels as well. The chart below can serve as a guide as you create your learning outcomes; note that generative AI developments may make the original chart problematic in different ways. There are alternatives to Bloom’s as well.

    Alternatives to Bloom’s Taxonomy levels and verbs.
    Newtonsneurosci, CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons
  3. Use SMART goals as well. In addition to drawing on Bloom’s Taxonomy as you write your learning outcomes, we encourage you to make sure that your learning outcomes follow the SMART goals model. SMART goals were developed in 1981 by George T. Doran, who noticed that most business goals were not written in a way that could be implemented effectively.

SMART is an acronym we can use to describe the attributes of effective learning outcomes for your students. Please note that you will find different versions of the acronym in the SMART goal model, but these are the terms CATL uses to discuss learning outcomes:

    • Specific – target a specific area, skill, or knowledge
    • Measurable – progress is quantifiable
    • Attainable – able to be achieved or realistic
    • Relevant – applicable to the students in the class
    • Time-based – achieved in a specific timeframe, such as a semester

Example: By the end of the semester (T), students will be able to diagram (M) the process of photosynthesis (S, A) in this biology class (R).

Learning outcomes are more likely to be meaningful if they can meet all of the qualifiers in the SMART acronym. Think specifics as you create your learning outcome. If you can’t tell if your learning outcome meets one of the qualifiers, you should rework it until it does.

Review your learning outcomes

Your next step as a team should be to review your learning outcomes. Compare them to the SMART model and Bloom’s Taxonomy or any other relevant model you might be using. If it helps, consider these examples. First, “Students will improve their understanding of passive voice.” On the surface, it might look like a reasonable goal, but questions quickly arise: What does it mean to improve? Where did the student start from? When does this need to be done by? The goal offers no answers to those questions.

How about this one? “By the end of the semester, all students will receive a 100% score on their math notation quiz.” For context, this is a Writing Foundations course. That raises the question: is this outcome relevant to this group of students? Is 100% a reasonable and attainable goal?

Consider these questions as a guide when creating SMART goals. A more reasonable goal for this group of writing students is that, by the end of the semester, students will be able to identify scholarly research and use it accurately and effectively in their writing projects 80% of the time. One part of the review process is ensuring your outcomes are SMART, but there are additional elements to consider, including the questions below.

  • Can you identify the verb in your learning outcome?
  • If your students master the skills in your learning outcomes, will they be satisfactorily prepared to go to another course that teaches the next level of this material?
  • If this is a course in a series, have you checked to be sure that your outcomes make sense with the previous and next courses?
  • Has your unit done curriculum mapping for its goals, and do your course outcomes align with that mapping?

Put it all together

Creating learning outcomes that reflect the learning necessary to achieve mastery in a course can be an arduous process. It should be a collaborative process as well. We encourage you to reach out to the CATL team if you would like guidance or help walking through Bloom’s Taxonomy and the SMART goal model. We are always available to help!

Resources on creating learning outcomes

Writing Effective Multiple-Choice Questions

Writing good multiple-choice questions is challenging. Tricky or verbose questions can reduce the test item’s reliability and validity, while a poor selection of answer choices can make a question either far too easy or incredibly difficult. A question that suffers several common pitfalls might even work against the learning outcomes it is trying to measure. Fortunately, researchers and assessment experts have identified some common guidelines for creating more equitable and reliable multiple-choice assessments. In this guide, we’ll walk through seven tips for writing more effective multiple-choice test items.

The scope of this guide is focused specifically on authoring multiple-choice questions. If you’d like to dig into when to use multiple-choice assessments, as well as recommendations for scaffolding, testing in online environments, and providing feedback, check out this other CATL blog post on general considerations for creating impactful multiple-choice assessments.

Before getting into the tips on writing questions, we’ll review the anatomy of a multiple-choice question and outline some common language that we use throughout this guide.


The Anatomy of a Multiple-Choice Item

Throughout this article, we will use the following terms and definitions when referring to the parts of a multiple-choice question:

  • Item: A question and its answer choices as a unit
  • Stem: The posited question that respondents are asked to answer; often phrased as a question, but can also be a statement (e.g., fill in the blank)
  • Alternatives: A list of suggested answers that appear after the question stem; composed of several incorrect answer options and one (or more) correct or best answer(s)
  • Distractor: An incorrect alternative

[Figure: An example multiple-choice item, its stem, and its alternatives. The top portion is labelled as the “stem,” and the answer choices A through F are labelled “alternatives,” with “A” serving as the answer and “B–F” serving as distractors. Source: Vanderbilt Center for Teaching and Learning]

Tips for Writing Effective Multiple-Choice Questions

Most of the recommendations in this guide have been adapted from How to Prepare Better Multiple-Choice Test Items: Guidelines for University Faculty (Burton et al., 1991) and Developing and Validating Multiple-Choice Test Items (Haladyna, 2004). We’ve distilled these long-form documents into a few simple guidelines that align with current recommendations from experts at other centers for teaching and learning (see “Additional Resources”). If you are interested in learning more about the research behind these suggestions, we encourage you to check out one or both of the resources linked above.

Tip #1: Tie each item to a learning outcome

In order to maximize an assessment’s validity and reliability, each multiple-choice item should be clearly aligned with one of the assessment’s learning outcomes (and, by extension, the course learning outcomes). Generally, it is recommended that each item be tied to only one outcome. However, items that assess higher order thinking and present complex problems or scenarios may assess more than one outcome.

Tip #2: Create a specific, clear, and succinct stem

A straightforward, clear, and concise stem free from extraneous information increases a multiple-choice item’s reliability. When writing and revising your question stems, it is a good practice to ask yourself if there is a simpler or more direct way to rephrase a question. Overly wordy stems rely on students’ reading comprehension, which is usually not one of the intended outcomes of the assessment. Likewise, confusing or ambiguous stems can be accidentally misleading. Ideally, a student who has mastered the target outcome should be able to answer the question posited even without the alternatives present.

For millennia, humanity has been entranced by the ebb and flow of the tides. Many past civilizations believed the ocean's waters were controlled by monsters, spirits, or gods, but today we know the scientific laws and theories that explain the tides. These movements are influenced, in part, by the gravitational force of the sun, the earth’s rotation, shoreline geography, and weather patterns, but all of these pale in comparison to the effects of:

  • a)  El Niño
  • b)  The gravitational force of the moon
  • c)  The ozone layer
  • d)  Deep-sea trenches

(Answer: B)

Why it doesn’t work: The extra information in the question stem makes it difficult for the test-taker to discern the question that is being posed. The question itself is also worded ambiguously.

Earth’s tides are influenced primarily by:

  • a)  El Niño
  • b)  The gravitational force of the moon
  • c)  The ozone layer
  • d)  Deep-sea trenches

(Answer: B)

Why it works: The question stem has been revised to remove all unnecessary information and it now poses a simple, straightforward question.

Tip #3: Avoid using negatives in question phrasing

It is usually best to avoid negative phrasing in question stems, such as asking students to identify which alternative does not belong. Negatively phrased stems tend to be less reliable in assessing students’ learning than stems that ask students to identify the correct answer. The exception to this guideline is in cases when knowing what not to do is key, such as questions related to safety protocols. If you do choose to include a negative qualifier, use bold or italics to emphasize the negative word and make sure that you don’t create a double negative with any of the alternatives.

Which of the following is not a quality of an active listener? 

  • a)  Not talking over others 
  • b)  Making eye contact with the speaker 
  • c)  Asking clarifying questions 
  • d)  Mentally planning a rebuttal while the other person is speaking 

(Answer: D)

Why it doesn’t work: The question stem is phrased in the negative and the negative qualifier is not emphasized, making the question less reliable. Additionally, one of the alternatives also contains the word “not,” creating a double negative with the question stem. 

True or false? An active listener…

  • Refrains from talking over others (T/F)
  • Makes eye contact with the speaker (T/F)
  • Asks clarifying questions (T/F)
  • Mentally plans a rebuttal while the other person is speaking (T/F)

(Answers: T, T, T, F)

Why it works: The question stem has been rephrased to avoid the word “not,” and the answer choices have been turned into four separate true/false statements, each also reworded to avoid “not,” so that every statement can be assessed separately.

Which of the following is not a recommended action to protect yourself during an earthquake if you are inside a building?

  • a) Drop to your hands and knees
  • b) Take shelter under a sturdy nearby desk or table
  • c) Crawl to the nearest exit
  • d) Cover your head and neck with your arms

(Answer: C)

Why it works: In this scenario, knowing what not to do during an earthquake is one of the learning outcomes, so it is appropriate to use a negative qualifier. The negative qualifier in the stem, “not,” has also been emphasized with bold and italics to draw attention to it.

Tip #4: Use plausible distractors

Good distractors need to appear plausible to students who have not met the target learning outcome, but not so tricky that they could be argued as correct answers by a test-taker who has met it. When you are writing a multiple-choice question, it is often useful to write the stem first, then the correct answer. Once you have decided on these two pieces, formulate 2-4 distractors based on common student misconceptions. If you can’t think of another “good” distractor for a set of alternatives, it is usually better to have fewer alternatives than to include extra alternatives just for the sake of consistency.

George Washington Carver is best known for his work as a(n) ______.

  • a)  Agricultural scientist
  • b)  Extraterrestrial expert
  • c)  Basket-weaver
  • d)  Juggler

(Answer: A)

Why it doesn’t work: The distractors are so absurd and far removed from the topic of the question that even a student who knows nothing about George Washington Carver could discern the correct answer, making the test item neither reliable nor valid.

George Washington Carver is best known for his work as a(n) ______.

  • a)  Agricultural scientist
  • b)  Electrical engineer
  • c)  Microbiologist
  • d)  Politician

(Answer: A)

Why it works: The distractors seem plausible, creating a question that will more accurately assess students’ knowledge of George Washington Carver.

Tip #5: Use homogeneous phrasing and formatting for alternatives

Small typos, inconsistencies in tenses or phrasing, or changes in text formatting can accidentally provide clues about which alternatives are the distractors and which are correct answers. Savvy test-takers can pick up on these inconsistencies and use this information to deduce the correct answer even if they have not achieved mastery for the desired outcome, so keep an eye out for these things as you proofread your exam. If you notice formatting inconsistencies in your Canvas quizzes, you can use the Rich Content Editor to remove all formatting and set the selected text to Canvas’s defaults.

What three parts of speech can an adverb modify?

  • a)  Verbs, adjectives, and other adverbs
  • b)  Noun, adjective and preposition
  • c)  Verb, noun and conjunction
  • d)  Adjective, adverb and exclamation

(Answer: A)

Why it doesn’t work: Answer choice “A,” the correct answer, is in a different font from the other alternatives. Additionally, the distractors use the singular version of each part of speech, rather than the plural, and omit the Oxford comma before “and.” These inconsistencies hint to students that “A” is the odd one out.

What three parts of speech can an adverb modify?

  • a)  Verbs, adjectives, and other adverbs
  • b)  Nouns, adjectives, and prepositions
  • c)  Verbs, nouns, and conjunctions
  • d)  Adjectives, adverbs, and exclamations

(Answer: A)

Why it works: The distractors have been revised to look consistent with the correct answer, creating a question that assesses students’ knowledge of parts of speech, rather than their eye for detail.

Tip #6: Avoid using none-of-the-above or all-of-the-above as alternatives

Questions that provide “all of the above” or “none of the above” as alternatives are generally less reliable for assessing outcomes than a multiple-choice question with mutually exclusive alternatives. The table below outlines the use cases for “all of the above” and “none of the above” along with why they are flawed for reliable assessment in each instance. If you can’t think of another distractor while drafting a question, remember that it is okay for some questions to have fewer alternatives.

Use of “all of the above” and “none of the above” 

  • “All of the above” as the answer: can be identified by noting that two of the other alternatives are correct
  • “All of the above” as a distractor: can be eliminated by noting that one of the other alternatives is incorrect
  • “None of the above” as the answer: measures the ability to recognize incorrect answers rather than correct answers
  • “None of the above” as a distractor: does not appear plausible to some students

(Adapted from How to Prepare Better Multiple-Choice Test Items: Guidelines for University Faculty, Brigham Young University)

Tip #7: Create questions with only one correct alternative

Like none-of-the-above and all-of-the-above alternatives, asking students to identify multiple correct alternatives is a less reliable form of assessment than an item with only one correct answer. Multiple-response questions also rely on confusing grading calculations, since selecting an incorrect alternative “cancels out” a correct selection (this Canvas guide goes into more detail about how Multiple Answer questions are auto-graded). And, in questions with more incorrect than correct answers, students can still score points by selecting no answers at all!
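To make that “cancels out” arithmetic concrete, here is a minimal sketch (in Python) of two ways a platform might compute partial credit on a multiple-response item. The function names and formulas are illustrative assumptions for this guide, not a description of exactly how Canvas or any other platform grades.

```python
# Illustrative partial-credit math for a multiple-response ("check all that apply") item.
# Both formulas are assumptions for demonstration, not any specific platform's algorithm.

def cancel_out_score(selected: set, correct: set, points: float = 1.0) -> float:
    """Each wrong selection cancels out a correct one; the score never drops below zero."""
    right = len(selected & correct)
    wrong = len(selected - correct)
    return points * max(right - wrong, 0) / len(correct)

def per_option_score(selected: set, correct: set, options: set, points: float = 1.0) -> float:
    """Every option is scored as its own true/false decision, so leaving the question
    blank still earns credit for every distractor left unchecked."""
    hits = sum((opt in selected) == (opt in correct) for opt in options)
    return points * hits / len(options)

# A four-option item with two correct answers (like the COVID-19 example below).
options, key = {"a", "b", "c", "d"}, {"a", "d"}

print(cancel_out_score({"a", "c"}, key))      # 0.0 -- the wrong pick cancels the right pick
print(per_option_score(set(), key, options))  # 0.5 -- a blank response still earns half credit
```

Either way, the resulting partial-credit scores are hard for students (and instructors) to interpret, which is one reason the single-answer and true/false formats recommended below tend to be more reliable.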

A straightforward multiple-choice item with only one correct answer and mutually exclusive alternatives is a more reliable way of discerning whether a student truly knows a concept or is guessing. Another option is to turn a multiple-answer question into a series of true/false questions, which will provide a more reliable picture of students’ understanding and a more valid grade for their efforts.

Check all that apply. COVID-19:

  • a)  Is an infectious disease
  • b)  Is spread primarily through fungal spores
  • c)  Can be treated with antibiotics
  • d)  Can infect people of all ages

(Answer: A and D)

Why it doesn’t work: Because of the way multiple-response questions are graded, they are less reliable than individual multiple-choice or true/false questions.

True or false? COVID-19:

  • Is an infectious disease (T/F)
  • Is spread primarily through fungal spores (T/F)
  • Can be treated with antibiotics (T/F)
  • Can infect people of all ages (T/F)

(Answers: T, F, F, T)

Why it works: Each statement is assessed individually, allowing for more granular and accurate scoring.

Questions?

Want more tips for writing multiple-choice questions? Looking for someone to help brainstorm outcome-aligned questions with? CATL is here for you! Reach out any time to set up a meeting or send us your questions at CATL@uwgb.edu.

Additional Resources

Creating Intentional and Impactful Multiple-Choice Assessments

Multiple-choice quizzes are one of the most common forms of assessment in higher education, as they can be used in courses of nearly every discipline and level. Multiple-choice questions are also one of the quickest and easiest forms of assessment to grade, especially when administered through Canvas or another platform that supports auto-grading. Still, like any assessment method, there are some contexts that are well-suited for multiple-choice questions and others that are not. In this toolbox article, we will provide some evidence-based guidance on when to leverage multiple-choice assessments and how to do so effectively.

Strengths and Weaknesses of Multiple-Choice Assessments

Multiple-choice assessments are a useful tool, but every tool has its limitations. As you weigh the strengths and weaknesses of this format, remember to consider your course’s learning outcomes in relation to your assessments. Then, once you’ve considered how your assessments align with your outcomes, determine if those outcomes are well-suited to a multiple-choice assessment.

Objectivity

Multiple-choice assessments are a form of objective assessment. For a typical multiple-choice item, there is no partial credit — each answer option is either fully correct or fully incorrect, which is what makes auto-grading possible. This objectivity is useful for assessing outcomes in which students need to complete a task with a concrete solution, such as defining discipline-specific terminology, solving a mathematical equation, or recalling the details of a historical event.

The tradeoff of this objectivity is that “good” multiple-choice questions are often difficult to write. Since multiple-choice questions presume that there is only one correct answer, instructors must be careful to craft distractors (incorrect answer options) that cannot be argued as “correct.” Likewise, the question stem should be phrased so that there is a definitively correct solution. For example, if a question is based on an opinion, theory, or framework, then the stem should explicitly reference this idea to reduce subjectivity.

Example of Subjective vs. Objective Question Stem

____ needs are the most fundamental for an individual's overall wellbeing.

  • A) Cognitive
  • B) Self Esteem
  • C) Self-Actualization
  • D) Physiological

(Answer: D)

According to Maslow's hierarchy of needs, ____ needs are the most fundamental for an individual's overall wellbeing.

  • A) Cognitive
  • B) Self Esteem
  • C) Self-Actualization
  • D) Physiological

(Answer: D)

This version of the question stem clarifies that this question is based on a framework, Maslow's hierarchy of needs, which increases the question's objectivity, and therefore its reliability and validity for assessment.

Another caution regarding the objectivity of multiple-choice questions is that answers to these test items can often be found through outside resources — students’ notes, the textbook, a friend, Google, generative AI, etc. — which has important implications for online testing. Experts in online education advise against trying to police or surveil students, and instead encourage instructors to design their online assessments to be open-book (Norton Guide to Equity-Minded Teaching, p. 106).

Open-book multiple-choice questions can still be useful learning tools, especially in frequent, low-stakes assessments or when paired with a few short answer questions. Fully auto-graded multiple-choice quizzes can function as “mastery” quizzes, in which a student has unlimited attempts but must get above a certain threshold (e.g., 90%, 100%) to move on. Using low-stakes, open-note practice tests can be an effective form of studying, and in many cases may be better for retrieval than students studying on their own.
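As a rough sketch of the mastery pattern described above, the snippet below auto-scores an objective attempt against an answer key and checks it against a pass threshold. The answer key, attempt format, and 90% cutoff are hypothetical examples for illustration, not a Canvas feature or API.

```python
# Minimal sketch of auto-grading plus a mastery threshold (all values are hypothetical).

ANSWER_KEY = {"Q1": "b", "Q2": "d", "Q3": "a", "Q4": "c", "Q5": "b"}
MASTERY_THRESHOLD = 0.90  # fraction of items a student must get right to move on

def score_attempt(attempt: dict) -> float:
    """Return the fraction of items answered correctly (no partial credit)."""
    correct = sum(attempt.get(q) == answer for q, answer in ANSWER_KEY.items())
    return correct / len(ANSWER_KEY)

def has_mastered(attempt: dict) -> bool:
    """True if this attempt meets the threshold; otherwise the student simply tries again."""
    return score_attempt(attempt) >= MASTERY_THRESHOLD

attempt = {"Q1": "b", "Q2": "d", "Q3": "a", "Q4": "c", "Q5": "a"}
print(score_attempt(attempt))  # 0.8
print(has_mastered(attempt))   # False -- below 90%, so another attempt is needed
```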

You can also customize your Canvas quiz settings to control other conditions, such as time. Classic Quizzes and New Quizzes include options that add a layer of difficulty to repeatable multiple-choice assessments, such as time limits, shuffled questions or answer choices, and the use of question banks. These settings, when used with low-stakes assessments with multiple attempts, can help students practice meeting the course’s learning outcomes before larger summative assessments.

Versatility

Multiple-choice assessments sometimes get a bad reputation for being associated with rote memorization and lower order thinking skills, but in reality, they can be used to assess skills at every level of Bloom’s taxonomy. This includes higher order thinking skills, such as students’ ability to analyze a source, evaluate data, or make decisions in complex situations.

For example, you could present students with a poem or graph and then use a multiple-choice question to assess a student’s ability to analyze and interpret the example. Or, alternatively, you could create a question stem that includes a short scenario and then ask students to pick the best response or conclusion from the answer choices.

Examples of Multiple-Choice Items That Assess Higher Order Thinking Skills

[The poem is included here.]

The chief purpose of stanza 9 is to:

  • A)  Delay the ending to make the poem symmetrical.
  • B)  Give the reader a realistic picture of the return of the cavalry.
  • C)  Provide material for extending the simile of the bridge to a final point.
  • D)  Return the reader to the scene established in stanza 1.

(Answer: D)

This item tests higher order thinking skills because it requires test-takers to apply what they know about literary devices and analyze a poem in order to determine the best answer.

Source: Burton, S. J., et al. (1991). How to Prepare Better Multiple-Choice Test Items: Guidelines for University Faculty.

[Figure: A line graph showing the relationship between time and heart rate for two groups of individuals that were administered a drug for a clinical trial; the y-axis runs from 70 to 90 beats/min and the x-axis runs from the baseline heart rate to 5 minutes after the drug was administered.]

The graph above illustrates the change in heart rate over time for two different groups that were administered a drug for a clinical study. After studying the graph, a student concluded that there was a large increase in heart rate around the one-minute mark, even though the results of the study determined that patients' heart rates remained relatively stable over the duration of five minutes. Which aspect of the graph most likely misled the student when they drew their conclusion?

  • A)  The baseline for the y-axis starts at 70 beats/min, rather than 0 beats/min.
  • B)  The y-axis is in beats/min, rather than beats/hour.
  • C)  The graph lacks a proper title.
  • D)  The graph includes datasets from two groups, instead of just one.

(Answer: A)

This item tests higher order thinking skills because it requires test-takers to analyze a graph and evaluate which answer choice might lead someone to draw a misleading conclusion from the graph.

Source: In, J. & Lee, S. (2017) Statistical data presentation. Korean J Anesthesiol, 70 (3): 267–276.


A nurse is making a home visit to a 75-year-old male client who has had Parkinson's disease for the past five years. Which finding has the greatest implication for the patient's care?

  • A)  The client's wife tells the nurse that the grandchildren have not been able to visit for over a month.
  • B)  The nurse notes that there are numerous throw rugs throughout the client's home.
  • C)  The client has a towel wrapped around his neck that the wife uses to wipe her husband's face.
  • D)  The client is sitting in an armchair, and the nurse notes that he is gripping the arms of the chair.

(Answer: B)

This item tests higher order thinking skills because it requires test-takers to apply what they know about Parkinson's disease and then evaluate the answer choices to determine which observation is the most relevant to the patient's care in the scenario.

Source: Morrison, S. and Free, K. W. (2001). Writing multiple-choice test items that promote and measure critical thinking. Journal of Nursing Education, 40 (1), 17-24.

Multiple-choice questions can also be adjusted for difficulty by tweaking the homogeneity of the answer choices. In other words, the more similar the distractors are to the correct answer, the more difficult the multiple-choice question will be. When selecting distractors, pick answer choices that seem appropriately plausible for the skill level of students in your course, such as common student misconceptions. Using appropriately difficult distractors will help increase your assessments’ reliability.

Despite this versatility, there are still some skills — such as students’ ability to explain a concept, display their thought process, or perform a task — that are difficult to assess with multiple-choice questions alone. In these cases, there are other forms of assessment that are better suited for these outcomes, whether it be through a written assignment, a presentation, or a project-based activity. Regardless of your discipline, there are likely some areas of your course that suit multiple-choice assessments better than others. The key is to implement multiple-choice assessments thoughtfully and intentionally with an emphasis on how this format can help students meet the course’s learning outcomes.

Making Multiple-Choice Assessments More Impactful

Once you have weighed the pros and cons of multiple-choice assessments and decided that this format fits your learning outcomes and assessment goals, there are some additional measures you can take to make your assessments more effective learning opportunities. By setting expectations and allowing space for practice, feedback, and reflection, you can help students get the most out of multiple-choice assessments.

Set Expectations for the Assessment

In line with the Transparency in Learning and Teaching (TILT) framework, disclosing your expectations is important for student success. Either in the Canvas quiz description or verbally in class (or both), explain to students the multiple-choice assessment’s purpose, task, and criteria. For example, is the assessment a low-stakes practice activity, a high-stakes exam, or something in between? What topics and learning outcomes will the assessment cover? What should students expect in terms of the number/type of questions and a time limit, if there is one? Will students be allowed to retake any part of the assessment for partial or full credit? Clarifying these types of questions beforehand helps students understand the stakes and goal of the assessment so they can prepare accordingly.

Provide Opportunities for Practice and Feedback

To help reduce test-taking anxiety and aid with long-term retrieval, make sure to provide students with ample practice before high-stakes assessments. Try to use practice assessments to model the format and topics that will be addressed on major assessments. If you are using a certain platform to conduct your assessments, like Canvas quizzes or a textbook publisher, consider having students use that same platform for these practice assessments so they can feel comfortable using the technology in advance of major assessments as well.

Research also indicates that providing feedback after an assessment is key for long-term retention. Interestingly, this is true not only for answers that students got wrong, but also for cases in which a student arrives at the correct answer with a low degree of confidence. Without assessment feedback, students may just check their quiz grade and move on, rather than taking the time to process their results and understand how they can improve.

You can include immediate and automatic qualitative feedback for quiz questions through Canvas Classic Quizzes and New Quizzes. Feedback (or “answer comments”) can be added to individual answer options or to an entire multiple-choice item. For example, you can add a pre-formulated explanation underneath an answer choice on why that distractor is a common misconception. If a student has incorrectly selected that answer choice, they can read that feedback after submitting their quiz attempt to learn why their choice was incorrect.

Create Space for Reflection

[Figure: A bar graph showing a positive relationship between final test scores and learning conditions that include practice tests with feedback. Source: Roediger III, H. L., & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention.]

As indicated in the chart above, delayed feedback is potentially even more effective for long-term retention than immediate feedback. Consider reserving some time in class to debrief after important assessments and address students’ remaining questions. For asynchronous online courses, you could record a short post-test video in which you comment on trends you saw in students’ scores and clear up common misconceptions.

If you want to go a step further, you can also have students complete a self-reflective activity, also known as an exam wrapper, like a post-test survey or written reflection. Self-reflective activities like these have been shown to increase students’ overall performance in class by helping them learn how to reflect on their own study and performance habits, in addition to the positive effects on information retention mentioned earlier.

Questions?

Need some help designing your next multiple-choice assessment? Want to learn more about mastery quizzes, Canvas quiz settings, or exam wrappers? CATL is here to help! Reach out to us at CATL@uwgb.edu or schedule a consultation and we can help you brainstorm assessment solutions that fit your course’s needs. Or, if you’re ready to start building your assessment, check out this related guide for tips on writing more effective multiple-choice questions.

Register for the 2024 Instructional Development Institute (IDI) on Jan. 9, 2024

Welcome to the UW-Green Bay Instructional Development Institute (IDI) registration and main information page!

Conference Overview

The Instructional Development Institute will take place on Tuesday, Jan. 9, 2024, and is hosted by the Center for the Advancement of Teaching and Learning (CATL) and the Instructional Development Council. The 2024 IDI is a one-day, completely virtual and free teaching and learning conference that will feature live presentations by expert faculty, staff, and UWGB community members on the theme of “Thriving in Higher Education.” We are pleased to have Dr. Kevin Gannon, author of the book Radical Hope, as the conference keynote speaker. In addition to his address on the conference theme, Dr. Gannon will lead two workshops, one focused on sustainable online teaching practices and the other centered on fostering belonging in both virtual and face-to-face learning environments. The 2024 IDI has concluded and registration is closed. Please contact CATL (CATL@uwgb.edu) if you have any questions about accessing recorded conference materials available in the 2024 IDI Canvas course.

About the Conference Theme: Thriving in Higher Education

Higher education has witnessed substantial challenges in recent years. Instructors and students faced COVID-19, the ensuing dramatic shift to pandemic pedagogy, and all that came with it. Institutions confronted budget, enrollment, and political pressures, and they are now grappling with emerging generative AI technologies and their impact on education. Amid such disruptions, it can be easy to approach our work with a mentality of survival. This year’s Instructional Development Institute instead challenges you to consider what it would mean not simply to survive, but to thrive in higher education. While there are no easy answers, we can work together as educators to set goals, support one another, surmount obstacles, and achieve at a high level, similar to the expectations we have for our students.

Keynote Speaker

[Photo of Dr. Kevin Gannon]
Dr. Kevin Gannon is Director of the Center for the Advancement of Faculty Excellence (CAFÉ) and Professor of History at Queens University of Charlotte, in North Carolina. He is the author of Radical Hope: A Teaching Manifesto (West Virginia University Press, 2020), and his writing has also appeared in The Chronicle of Higher Education, Vox, CNN, and The Washington Post. In 2016, he appeared in the Oscar-nominated documentary 13th, directed by Ava DuVernay. He is currently working on a project centered around reimagining introductory and survey courses in higher education.

Keynote Address: Hopeful Teaching in Less-Than-Hopeful Times

Let’s not mince words: these are almost overwhelmingly difficult times to be in higher education. After (barely?) surviving multiple years of “pandemic pedagogy,” we find ourselves on a landscape marked by faculty burnout, student disconnection, fiscal shenanigans, and an external climate that seems to get more foreboding by the day. How, then, is it possible to bring any meaningful sense of hope to our work in teaching and learning? And how might we imagine a context where we’re not simply surviving, but where we and our students are actually thriving? This session will not claim to provide all the answers, nor will it simply throw out empty inspirational quotes like one of those motivational page-a-day calendars. Rather, we’ll focus on agency as a foundation for hopeful teaching, and consider the ways in which we might help our students discover, develop, and value their own agency as learners. In doing so, we’ll look at some promising strategies which evidence suggests will be helpful in this work. Participants will leave this session with specific ideas which they can incorporate into their own teaching.

Keynote Workshop: Sustaining Our Students and Ourselves in Online Teaching and Learning

This session will explore strategies by which we can make the workload involved in online teaching both manageable and sustainable. We’ll use the idea of “presence” from the Community of Inquiry framework as a way to interrogate our own practices and consider what alternatives might exist. We’ll then look at examples of tools and practices which can both enhance presence in our courses and make our workflow more manageable.

Keynote Workshop: (Re) Connecting with Students after “Pandemic Pedagogy”

One of the most prevalent observations from faculty in recent months has been how difficult it is to connect (or reconnect) with students since the disruptions of the pandemic. What are the reasons for this attenuated sense of connection? Why does engagement seem so difficult now? How do we deepen student engagement in our courses without adding unsustainable amounts to our workload? This session will explore the sources of this disconnect, and consider some specific ways in which we can foster meaningful engagement from students—with both course material and one another.

Schedule

LIVE SESSIONS

8:45 – 9:00 a.m. | Welcome & Land Acknowledgement

  • Kate Burns (Provost and Vice Chancellor of Academic Affairs) and Kris Vespia (Center for the Advancement of Teaching and Learning Director)

9:00 – 10:00 a.m. | Keynote Address

  • Hopeful Teaching in Less-Than-Hopeful Times
    Dr. Kevin Gannon (Center for the Advancement of Faculty Excellence Director, Queens University)

10:15 – 11:00 a.m. | Session #1

Concurrent Session Options:

  • Community-Based Learning: A Pillar of Thriving in College and Beyond
    Katia Levintova (Professor), Isabel Gosse (UW-Green Bay Student), Ashley Heath (Academic Program Manager), Heather Kaminski (Assistant Professor), Grace Knudsen (Campus Compact AmeriCorps VISTA), Beth Kowalski (Director, Neville Public Museum), & Brady Reinhard (UW-Green Bay Student)
  • The Role of Resilience in Non-Clinical Case Management at UW-Green Bay
    Erin A. Van Daalwyk (Dean of Students) & Katie Morois (Assistant Dean of Students)
  • Holistically Envisioning “Real-World” Applicability: A Conversation
    David Voelker (Professor)

11:05 – 11:50 a.m. | Session #2

Concurrent Session Options:

  • Trust No One: Implementing Information Literacy in a First-Year Seminar
    Clifton Ganyard (Associate Professor) & Renee Ettinger (Assistant Director, Library Research Services)
  • Thriving OER Projects at UWGB: A Roundtable Discussion
    Carli Reinecke (OER Librarian), Joan Groessl (Associate Professor), Amy Kabrhel (Associate Professor), Kevin Kain (Teaching Professor), & Sawa Senzaki (Professor)
  • The Myth of Standard Language Ideology: Language Inclusivity in the Higher Education Classroom
    Cory Mathieu (Assistant Professor) & Shara Cherniak (Assistant Teaching Professor)

11:55 a.m. – 12:25 p.m. | Lunch

  • Psychology and Stuff Podcast: Evidence-Based Strategies for Thriving in Academia
    Alison Jane Martingano (Assistant Professor), Jason Cowell (Professor), Tom Gretton (Assistant Professor), Ryan Martin (Dean, College of Arts, Humanities, and Social Sciences), Abigail Nehrkorn-Bailey (Assistant Professor), Georjeanna Wilson-Doenges (Professor), & Chelsea Wooding (Assistant Professor)

12:30 – 1:30 p.m. | Keynote Workshop #1

  • Sustaining Our Students and Ourselves in Online Teaching and Learning
    Dr. Kevin Gannon (Center for the Advancement of Faculty Excellence Director, Queens University)

1:45 – 2:30 p.m. | Session #3

Concurrent Session Options:

  • Foundations for the Thriving Student in the Age of ChatGPT
    Jodi Pierre (Research Librarian) & Kristopher Purzycki (Assistant Professor)
  • What the Health? Strategies to Thrive in the Stressful World of Higher Education
    Jared Dalberg (Associate Professor)
  • Slaying the “Techno-issue” Dragon
    Vallari Chandna (Professor), Anup Nair (Assistant Teaching Professor), & Praneet Tiwari (Assistant Teaching Professor)

2:45 – 3:45 p.m. | Keynote Workshop #2

  • (Re) Connecting with Students After “Pandemic Pedagogy”
    Dr. Kevin Gannon (Center for the Advancement of Faculty Excellence Director, Queens University)

3:45 – 4:00 p.m. | Wrap-Up

  • Kris Vespia (Center for the Advancement of Teaching and Learning Director)

ON-DEMAND SESSIONS

  • Combining Engineering Ethics and Information Literacy in a STEM First-Year Seminar
    Nabila Rubaiya (Assistant Teaching Professor) & Jodi Pierre (Research Librarian)
  • Escape from the Chemistry Lab!
    Breeyawn Lybbert (Associate Professor)

Institute FAQs

Q: What is the format of the 2024 IDI?
A: The 2024 IDI is completely virtual and will be held through a Canvas course. The conference will feature a keynote address and two workshops led by Dr. Kevin Gannon. In addition to Dr. Gannon’s sessions, attendees will also be able to engage with a variety of live and on-demand presentations hosted by UWGB faculty, staff, and community partners.

Q: How do I access the conference?
A: Everyone who registers for the conference will be sent an email on January 2, 2024, with a link to self-enroll in the IDI Canvas course. Follow the steps in the email to set up a Canvas account (if applicable) and complete the self-enrollment process. By joining the IDI Canvas course, you will have full access to all the live and on-demand sessions, materials, and discussions. If you have any issues joining the course, please contact us at CATL@uwgb.edu.

Q: How will the live sessions be hosted?
A: All live conference sessions will be hosted through Zoom, and links to each individual Zoom session will be made available within the IDI Canvas course at 8 a.m. on Tuesday, January 9, 2024.

Q: Will there be sessions other than the live presentations?
A: In addition to the live presentations, you can also explore a mix of on-demand sessions from pre-recorded presentations, podcasts, and online resources that explore concepts in teaching and learning. These sessions will be available in the IDI Canvas course and can be accessed after the conference as well.

Q: Will the live sessions be recorded?
A: Yes, all live sessions will be recorded and posted in the IDI Canvas course after the conference. We will post an announcement in the course once all session recordings have been made available. You will be able to watch the recordings at any time up to a year after the conference date.

Q: Is the conference open to educators outside of UW-Green Bay?
A: Yes! The 2024 IDI is free and open to all educators in the UW system and beyond.

Q: Can I register even if I can’t attend live on Jan. 9?
A: Yes, we welcome those who are unable to attend live on Jan. 9 to still register for access to the session recordings after the conference.