Creating Intentional and Impactful Multiple-Choice Assessments

Multiple-choice quizzes are one of the most common forms of assessment in higher education, as they can be used in courses of nearly every discipline and level. Multiple-choice questions are also one of the quickest and easiest forms of assessment to grade, especially when administered through Canvas or another platform that supports auto-grading. Still, like any assessment method, there are some contexts that are well-suited for multiple-choice questions and others that are not. In this toolbox article, we will provide some evidence-based guidance on when to leverage multiple-choice assessments and how to do so effectively.

Strengths and Weaknesses of Multiple-Choice Assessments

Multiple-choice assessments are a useful tool, but every tool has its limitations. As you weigh the strengths and weaknesses of this format, start from your course’s learning outcomes: consider how your assessments align with those outcomes, then determine whether those outcomes are well-suited to a multiple-choice assessment.

Objectivity

Multiple-choice assessments are a form of objective assessment. For a typical multiple-choice item, there is no partial credit — each answer option is either fully correct or fully incorrect, which is what makes auto-grading possible. This objectivity is useful for assessing outcomes in which students need to complete a task with a concrete solution, such as defining discipline-specific terminology, solving a mathematical equation, or recalling the details of a historical event.

The tradeoff of this objectivity is that “good” multiple-choice questions are often difficult to write. Since multiple-choice questions presume that there is only one correct answer, instructors must be careful to craft distractors (incorrect answer options) that cannot be argued as “correct.” Likewise, the question stem should be phrased so that there is a definitively correct solution. For example, if a question is based on an opinion, theory, or framework, then the stem should explicitly reference this idea to reduce subjectivity.

Example of Subjective vs. Objective Question Stem

Original (subjective) stem:

____ needs are the most fundamental for an individual's overall wellbeing.

  • A) Cognitive
  • B) Self-Esteem
  • C) Self-Actualization
  • D) Physiological

(Answer: D)

Revised (objective) stem:

According to Maslow's hierarchy of needs, ____ needs are the most fundamental for an individual's overall wellbeing.

  • A) Cognitive
  • B) Self-Esteem
  • C) Self-Actualization
  • D) Physiological

(Answer: D)

This version of the question stem clarifies that this question is based on a framework, Maslow's hierarchy of needs, which increases the question's objectivity, and therefore its reliability and validity for assessment.

Another caution regarding the objectivity of multiple-choice questions is that answers to these test items can often be found through outside resources — students’ notes, the textbook, a friend, Google, generative AI, etc. — which has important implications for online testing. Experts in online education advise against trying to police or surveil students, and instead encourage instructors to design their online assessments to be open-book (Norton Guide to Equity-Minded Teaching, p. 106).

Open-book multiple-choice questions can still be useful learning tools, especially in frequent, low-stakes assessments or when paired with a few short answer questions. Fully auto-graded multiple-choice quizzes can function as “mastery” quizzes, in which a student has unlimited attempts but must score above a certain threshold (e.g., 90%, 100%) to move on. Low-stakes, open-note practice tests can be an effective form of studying, and in many cases may support retrieval and long-term retention better than unstructured self-study.

You can also customize your Canvas quiz settings to control other conditions, such as time. Classic Quizzes and New Quizzes include options that add a layer of difficulty to repeatable multiple-choice assessments, such as time limits, shuffled questions or answer choices, and the use of question banks. These settings, when used with low-stakes assessments with multiple attempts, can help students practice meeting the course’s learning outcomes before larger summative assessments.
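
If you set up many quizzes like this, the same settings can also be applied programmatically. The sketch below is a minimal, illustrative example in Python (using the requests library against the Canvas REST API’s Classic Quizzes endpoint) of creating a practice quiz with a time limit, shuffled answers, and unlimited attempts. The domain, token, course ID, and quiz title are placeholders you would replace with your own, and the field names should be verified against current Canvas API documentation.

    import requests

    # Placeholders -- substitute your institution's Canvas URL, an API token
    # generated from your Canvas account settings, and a real course ID.
    CANVAS_DOMAIN = "https://uwgby.instructure.com"
    API_TOKEN = "YOUR_API_TOKEN"
    COURSE_ID = 12345

    def create_practice_quiz():
        """Create a low-stakes practice quiz with the settings discussed above."""
        url = f"{CANVAS_DOMAIN}/api/v1/courses/{COURSE_ID}/quizzes"
        payload = {
            "quiz": {
                "title": "Module 3 Practice Quiz",  # hypothetical title
                "quiz_type": "practice_quiz",       # ungraded practice
                "time_limit": 20,                   # minutes
                "shuffle_answers": True,            # shuffle answer choices
                "allowed_attempts": -1,             # -1 = unlimited attempts
                "scoring_policy": "keep_highest",   # keep the best attempt
            }
        }
        headers = {"Authorization": f"Bearer {API_TOKEN}"}
        response = requests.post(url, json=payload, headers=headers)
        response.raise_for_status()
        return response.json()

    if __name__ == "__main__":
        quiz = create_practice_quiz()
        print(f"Created quiz {quiz['id']}: {quiz['title']}")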

Versatility

Multiple-choice assessments sometimes get a bad reputation for being associated with rote memorization and lower order thinking skills, but in reality, they can be used to assess skills at every level of Bloom’s taxonomy. This includes higher order thinking skills, such as students’ ability to analyze a source, evaluate data, or make decisions in complex situations.

For example, you could present students with a poem or graph and then use a multiple-choice question to assess their ability to analyze and interpret the example. Alternatively, you could create a question stem that includes a short scenario and then ask students to pick the best response or conclusion from the answer choices.

Examples of Multiple-Choice Items That Assess Higher Order Thinking Skills

[The poem is included here.]

The chief purpose of stanza 9 is to:

  • A)  Delay the ending to make the poem symmetrical.
  • B)  Give the reader a realistic picture of the return of the cavalry.
  • C)  Provide material for extending the simile of the bridge to a final point.
  • D)  Return the reader to the scene established in stanza 1.

(Answer: D)

This item tests higher order thinking skills because it requires test-takers to apply what they know about literary devices and analyze a poem in order to identify the best answer.

Source: Burton, S. J., et al. (2001). How to Prepare Better Multiple Choice Test Items: Guidelines for University Faculty.

[Figure: A line graph of heart rate over time for two groups administered a drug in a clinical trial; the y-axis runs from 70 to 90 beats/min and the x-axis runs from the baseline heart rate to 5 minutes after the drug was administered.]

The graph above illustrates the change in heart rate over time for two different groups that were administered a drug for a clinical study. After studying the graph, a student concluded that there was a large increase in heart rate around the one-minute mark, even though the results of the study determined that patients' heart rates remained relatively stable over the duration of five minutes. Which aspect of the graph most likely misled the student when they drew their conclusion?

  • A)  The baseline for the y-axis starts at 70 beats/min, rather than 0 beats/min.
  • B)  The y-axis is in beats/min, rather than beats/hour.
  • C)  The graph lacks a proper title.
  • D)  The graph includes datasets from two groups, instead of just one.

(Answer: A)

This item tests higher order thinking skills because it requires test-takers to analyze a graph and evaluate which answer choice might lead someone to draw a misleading conclusion from the graph.

Source: In, J. & Lee, S. (2017) Statistical data presentation. Korean J Anesthesiol, 70 (3): 267–276.

 

A nurse is making a home visit to a 75-year-old male client who has had Parkinson's disease for the past five years. Which finding has the greatest implication for the patient's care?

  • A)  The client's wife tells the nurse that the grandchildren have not been able to visit for over a month.
  • B)  The nurse notes that there are numerous throw rugs throughout the client's home.
  • C)  The client has a towel wrapped around his neck that the wife uses to wipe her husband's face.
  • D)  The client is sitting in an armchair, and the nurse notes that he is gripping the arms of the chair.

(Answer: B)

This item tests higher order thinking skills because it requires test-takers to apply what they know about Parkinson's disease and then evaluate the answer choices to determine which observation is the most relevant to the patient's care in the scenario.

Source: Morrison, S. and Free, K. W. (2001). Writing multiple-choice test items that promote and measure critical thinking. Journal of Nursing Education, 40 (1), 17-24.

Multiple-choice questions can also be adjusted for difficulty by tweaking the homogeneity of the answer choices. In other words, the more similar the distractors are to the correct answer, the more difficult the multiple-choice question will be. When selecting distractors, pick answer choices that seem appropriately plausible for the skill level of students in your course, such as common student misconceptions. Using appropriately difficult distractors will help increase your assessments’ reliability.

Despite this versatility, there are still some skills — such as students’ ability to explain a concept, display their thought process, or perform a task — that are difficult to assess with multiple-choice questions alone. In these cases, there are other forms of assessment that are better suited for these outcomes, whether it be through a written assignment, a presentation, or a project-based activity. Regardless of your discipline, there are likely some areas of your course that suit multiple-choice assessments better than others. The key is to implement multiple-choice assessments thoughtfully and intentionally with an emphasis on how this format can help students meet the course’s learning outcomes.

Making Multiple-Choice Assessments More Impactful

Once you have weighed the pros and cons of multiple-choice assessments and decided that this format fits your learning outcomes and assessment goals, there are some additional measures you can take to make your assessments more effective learning opportunities. By setting expectations and allowing space for practice, feedback, and reflection, you can help students get the most out of multiple-choice assessments.

Set Expectations for the Assessment

In line with the Transparency in Learning and Teaching (TILT) framework, disclosing your expectations is important for student success. Either in the Canvas quiz description or verbally in class (or both), explain to students the multiple-choice assessment’s purpose, task, and criteria. For example, is the assessment a low-stakes practice activity, a high-stakes exam, or something in between? What topics and learning outcomes will the assessment cover? What should students expect in terms of the number/type of questions and a time limit, if there is one? Will students be allowed to retake any part of the assessment for partial or full credit? Clarifying these types of questions beforehand helps students understand the stakes and goal of the assessment so they can prepare accordingly.

Provide Opportunities for Practice and Feedback

To help reduce test-taking anxiety and aid long-term retrieval, make sure to provide students with ample practice before high-stakes assessments. Try to use practice assessments to model the format and topics that will be addressed on major assessments. If you conduct your assessments on a particular platform, like Canvas quizzes or a textbook publisher's site, consider having students use that same platform for practice assessments so they can get comfortable with the technology in advance of major assessments as well.

Research also indicates that providing feedback after an assessment is key for long-term retention. Interestingly, this is not only true for answers that students got wrong, but also in cases when a student arrives at the correct answer but with a low degree of confidence. Without assessment feedback, students may just check their quiz grade and move on, rather than taking the time to process their results and understand how they can improve.

You can include immediate, automatic qualitative feedback for quiz questions through Canvas Classic Quizzes and New Quizzes. Feedback (or “answer comments”) can be added to individual answer options or to an entire multiple-choice item. For example, you can add a pre-formulated explanation underneath an answer choice describing why that distractor is a common misconception. A student who incorrectly selects that answer can then read the feedback after submitting their quiz attempt and learn why their choice was incorrect.
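
For those comfortable with scripting, answer comments can also be attached when quiz questions are created through the Canvas REST API. The sketch below (Python with the requests library, against the Classic Quizzes questions endpoint) is illustrative only: the domain, token, IDs, question, and feedback text are placeholders, and the answer fields shown (such as answer_comments) should be checked against current Canvas API documentation.

    import requests

    # Placeholders -- substitute your own values.
    CANVAS_DOMAIN = "https://uwgby.instructure.com"
    API_TOKEN = "YOUR_API_TOKEN"
    COURSE_ID = 12345
    QUIZ_ID = 678  # a quiz created earlier

    url = f"{CANVAS_DOMAIN}/api/v1/courses/{COURSE_ID}/quizzes/{QUIZ_ID}/questions"
    payload = {
        "question": {
            "question_name": "Maslow's hierarchy",
            "question_text": (
                "According to Maslow's hierarchy of needs, ____ needs are the "
                "most fundamental for an individual's overall wellbeing."
            ),
            "question_type": "multiple_choice_question",
            "points_possible": 1,
            "answers": [
                # answer_weight 100 marks the correct choice; answer_comments is
                # the feedback a student sees after submitting an attempt.
                {
                    "answer_text": "Physiological",
                    "answer_weight": 100,
                    "answer_comments": "Correct! Physiological needs form the "
                                       "base of Maslow's pyramid.",
                },
                {
                    "answer_text": "Self-Actualization",
                    "answer_weight": 0,
                    "answer_comments": "Self-actualization sits at the top of "
                                       "the hierarchy, not the base.",
                },
            ],
        }
    }
    headers = {"Authorization": f"Bearer {API_TOKEN}"}
    response = requests.post(url, json=payload, headers=headers)
    response.raise_for_status()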

Create Space for Reflection

[Figure: A bar graph showing higher final test scores for learning conditions that included practice tests with feedback.]

Source: Roediger, H. L., III, & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences, 15(1), 20–27.

As indicated in the chart above, delayed feedback is potentially even more effective for long-term retention than immediate feedback. Consider reserving some time in class to debrief after important assessments and address students’ remaining questions. For asynchronous online courses, you could record a short post-test video in which you comment on trends you saw in students’ scores and clear up common misconceptions.

If you want to go a step further, you can also have students complete a self-reflective activity, also known as an exam wrapper, like a post-test survey or written reflection. Self-reflective activities like these have been shown to increase students’ overall performance in class by helping them learn how to reflect on their own study and performance habits, in addition to the positive effects on information retention mentioned earlier.

Questions?

Need some help designing your next multiple-choice assessment? Want to learn more about mastery quizzes, Canvas quiz settings, or exam wrappers? CATL is here to help! Reach out to us at CATL@uwgb.edu or schedule a consultation and we can help you brainstorm assessment solutions that fit your course’s needs. Or, if you’re ready to start building your assessment, check out this related guide for tips on writing more effective multiple-choice questions.

Sandbox Courses: A Time-Saving Tool for Course Design and Collaboration


The University of Wisconsin – Green Bay uses Canvas as its Learning Management System (LMS). When instructors participate in professional development opportunities offered by the Center for the Advancement of Teaching and Learning (CATL), they often encounter information about creating a Canvas Sandbox course. But what exactly is a sandbox course? This blog post will define what a sandbox course is, what the differences are between a sandbox course and an instructional course, and some different use cases for sandbox courses that will help save you time in the long run.

What is a sandbox course?

A sandbox course is an empty Canvas course shell that can be used for a wide variety of purposes. Because these courses are not linked to the UWGB course registrar the way instructional courses are, sandbox courses can serve as a testing ground or playground within the Canvas environment. Instructors can use them to experiment with Canvas content and to share teaching materials with other faculty or staff.

How is a sandbox course different from an instructional course?

  • Sandbox course: A sandbox course can be manually created at any time. These courses are not linked to a specific term within Canvas and do not have term start or end dates. Sandbox courses are not linked to the Registrar or SIS, so they do not have automatic enrollments and do not have any students.
  • Instructional course: An instructional course is created 75 days before the start date of the course as it is listed in the Schedule of Classes. These courses are linked to the UWGB student information system (SIS), which automatically enrolls students and updates those enrollments as students add and drop courses at the beginning of a term. Because both the instructor and students are added through the SIS sync, the only Teachers in an instructional course are those listed as instructors of record by the Registrar’s Office, and the only students are those officially enrolled in the course.

What are the limitations and benefits of a sandbox course?

Sandbox courses do not have the option to add someone to the course as a “Student.” This is a setting enforced by the University of Wisconsin System. Instructors can, however, use the “Student View” option in Canvas to view content in their sandbox courses as a student would see it. To do so, any modules and content of interest must first be published.

Canvas sandbox courses also allow multiple individuals to hold the role of “Teacher” at the same time. Because sandbox courses are not linked to the SIS, anyone with the “Teacher” role in the course can grant that role to others. This allows multiple instructors to collaborate on learning materials and activities in a course, or to share content with each other without worry that students will have access to those resources.

How can you utilize a sandbox course (instructors and staff)?

  • Sharing course content with other instructors or staff members while being mindful of FERPA. This is the safest way to share course content between instructors.
  • Preemptively building out your course content prior to the creation of your instructional Canvas courses (these appear 75 days before the listed course start date). Content built in a sandbox course can later be moved into the live instructional course using the Canvas Course Import tool (see the sketch after this list).
  • Making “live” revisions to course content during an active teaching term without impacting the instructional course your students are engaging in. The best way to do this is to create a sandbox course, copy the course content from your instructional course into it, and then edit that content in the sandbox without affecting the activities students have already engaged with.
  • Collaborating on course design and course building with a co-instructor or designers.
  • Creating departmental or program trainings for instructors, staff, graduate students, and/or student employees. If you would like to create a course shell for training and development purposes and need to add users with the “Student” role, please reach out to dle@uwgb.edu and a Canvas admin can copy your sandbox into a course shell that supports the Student role.
  • Testing and experimenting to build new activities or assessments using different integrations (LTIs), such as PlayPosit and Hypothesis, that are available within the UWGB Canvas instance.
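
The course-copy workflow that the Course Import tool performs in the Canvas interface is also exposed through the Canvas REST API’s content migrations endpoint. The sketch below is a minimal illustration in Python with the requests library; the domain, token, and course IDs are placeholders you would replace with your own.

    import requests

    # Placeholders -- substitute your own values.
    CANVAS_DOMAIN = "https://uwgby.instructure.com"
    API_TOKEN = "YOUR_API_TOKEN"
    SANDBOX_COURSE_ID = 11111        # hypothetical sandbox with finished content
    INSTRUCTIONAL_COURSE_ID = 22222  # hypothetical live instructional course

    # Start a course copy. Content flows FROM the source course INTO the
    # course named in the URL (here, the instructional course).
    url = (f"{CANVAS_DOMAIN}/api/v1/courses/"
           f"{INSTRUCTIONAL_COURSE_ID}/content_migrations")
    payload = {
        "migration_type": "course_copy_importer",
        "settings": {"source_course_id": SANDBOX_COURSE_ID},
    }
    headers = {"Authorization": f"Bearer {API_TOKEN}"}
    response = requests.post(url, json=payload, headers=headers)
    response.raise_for_status()
    migration = response.json()
    print(f"Migration {migration['id']} status: {migration['workflow_state']}")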

How do I create a sandbox course?

To create a Canvas sandbox course, you can follow the directions listed in this UWGB Knowledge Base article. There are, however, a few caveats for the creation of a sandbox course in the UWGB instance of Canvas. These conditions are listed below:

[Screenshot: Creating a sandbox course from the Help menu in the Canvas Global Navigation Menu.]

  • You must be enrolled in at least one existing Canvas course as a Teacher. If you are not enrolled in any Canvas courses as a Teacher yet, you can email DLE@uwgb.edu to have a Canvas admin create a sandbox for you.
  • You must access the Sandbox course creation tool, located under the Help menu within the Canvas Global Navigation Menu, from the University of Wisconsin – Green Bay instance website (https://uwgby.instructure.com).

 

Call for 2024-25 Wisconsin Teaching Fellows and Scholars Program (Applications Due Sunday, Nov. 26, 2023)

The UWGB Provost Office and the Center for the Advancement of Teaching and Learning, on behalf of the UW System’s Office of Professional and Instructional Development (OPID), invite faculty and instructional academic staff to apply for the 2024-25 cohort of the Wisconsin Teaching Fellows and Scholars (WTFS) Program.

This program is designed to provide time (one year) to systematically reflect with peers in a supportive and open-minded community and, ultimately, to move from “scholarly teaching” to the “scholarship of teaching.” Administered by OPID and directed by UW faculty, the WTFS Program is grounded in the Scholarship of Teaching and Learning (SoTL).

Full program description and call

The deadline for applications has been extended through Sunday, Nov. 26, 2023. Interested applicants should submit items 1-5 below as separate attachments to one email message. That email should be sent to CATL (CATL@uwgb.edu) with the subject line “WTFS Application.” The reference letter can be submitted directly to the CATL email by your Department Chair or Dean, but it is also due by Nov. 26. The full list of required materials is below:

  1. Application checklist;
  2. A letter stating your interest in and qualifications for the WTFS Program (two-page maximum);
  3. A teaching & learning philosophy (three-page maximum);
  4. An abbreviated curriculum vitae (two-page maximum);
  5. This budget sheet estimating costs using UW System travel reimbursement rates;
  6. A reference letter from your Department Chair or Dean (can be directly emailed to CATL@uwgb.edu).

As always, let us know if you have any questions via email: CATL@uwgb.edu.

Call for Teaching Enhancement Grant Proposals (Due Nov. 28, 2023)

The Instructional Development Council (IDC) is accepting applications for Teaching Enhancement Grants (TEGs) through support from the Center for the Advancement of Teaching and Learning (CATL) and the Office of the Provost. TEGs provide funding for professional development activities related to teaching or for projects that lead to the improvement of teaching skills or the development of innovative teaching strategies.

Faculty and instructional academic staff whose primary responsibility is teaching during the academic year in which the proposed project takes place are strongly encouraged to apply! Click the button below for full details.

Fall 2023 Application Info

Applications are due Tuesday, Nov. 28, 2023. If you have any questions about the application or TEGs, please email the Instructional Development Council at idc@uwgb.edu.

What is ChatGPT? Exploring AI Tools and Their Relationship with Education

Generative Artificial Intelligence (AI) and machine-generated content have become prominent topics in educational discussions. Amidst technical jargon and concerns about the impact on traditional learning, writing, and other facets of education, understanding what these tools are and what they can do can be overwhelming. This toolbox guide provides insights into some commonly used generative AI tools and explains how they are changing the landscape of higher education.

What is Generative AI?

CATL created a short video presentation in Fall 2023 that provides instructors with an introduction to generative AI. The video and the linked PowerPoint slides below can help you understand how generative AI tools work, their capabilities, and their limitations. Please note that a few details about specific tools mentioned in the video have since been corrected in the “Common Generative AI Tools” section below.

Introduction to Generative AI – CATL Presentation Slides (PDF)

Common Generative AI Tools

One of the most popular text-based generative AI tools is ChatGPT by OpenAI. Since its November 2022 release, various companies have developed their own generative AI applications based on or in direct competition with OpenAI’s framework. Learn more about a few common, browser-based generative AI tools below.

  • ChatGPT is an AI-powered chatbot created by OpenAI. The "GPT" in "ChatGPT" stands for Generative Pre-trained Transformer.
  • ChatGPT previously required users to sign up for an account and verify it with a phone number, but it can now be used without an account. Users can access the chatbot with or without an account (currently based on GPT-3.5) or unlock more advanced models and features with a paid account (currently GPT-4). For more information or to try it yourself, visit chatgpt.com.
  • Microsoft has created its own AI called Copilot using a customized version of OpenAI’s large language model and many of the features of ChatGPT. Users can interact with the AI through a chatbot, compose feature, or the Bing search engine. Microsoft is also rolling out Copilot-powered features in many of its Office 365 products, but these features are currently only available for an additional subscription fee.
  • Faculty, staff, and students can access Copilot (which is built on GPT-4 and the technology formerly known as Bing Chat) with their UWGB account. Visit bing.com to try out Copilot or watch our short video on how to log in using a different browser. When you log in with UWGB credentials, a green shield and the word “protected” should appear on the screen. The specifics of what is and is not protected can be complicated, but this Microsoft document is intended to provide guidance. Regardless of potential protections, FERPA- and HIPAA-protected information (student or employee) should not be entered.
  • Google has created its own AI tool called Gemini (formerly Google Bard). Similar to ChatGPT and Copilot, Gemini can generate content based on users’ inputs. Outputs may also include sources fetched from Google.
  • Using Gemini requires a free Google account. If you have a personal Google account, you can try out Gemini at gemini.google.com.

 

[Screenshot: The Microsoft Copilot home page as of May 2024.]

Note: For UWGB faculty, staff, and students, we recommend using Microsoft Copilot and other tools that do not require users to provide personal information in the sign-up process. We are also learning more about potential access to Adobe Express and Firefly (including their image generation features) with UWGB login credentials, at least for employees. Watch this space for additional details as they become available.

What Can Generative AI Tools Do?

The generative AI tools we’ve discussed so far are all trained on large datasets, and they produce outputs based on patterns in that data. User prompts and feedback can be used to improve their outputs and models, so these tools are constantly evolving. Explore below to learn about some use cases and limitations of text-based generative AI tools.

Generative AI tools can be used in a multitude of ways. Some common use cases for text-based generative AI tools include the following (a brief code sketch follows the list):

  • Language generation: Users can ask the AI to write essays, poems, emails, or code snippets on a given topic.  
  • Information retrieval: Users can ask the AI simple questions like “explain the rules of football to me” or “what is the correct way to use a semicolon?”
  • Language translation: Users can use the AI to translate words or phrases into different languages.  
  • Text summarization: Users can ask the AI to condense long texts, including lecture notes or entire books, into shorter summaries.
  • Idea generation: Users can use the AI to brainstorm and generate ideas for a story, research outline, email, or cover letter. 
  • Editorial assistance: Users can input their own writing and then ask the AI to provide feedback or rewrite it to make it more concise or formal.
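
As a concrete (and purely illustrative) example of the language generation and information retrieval use cases above, the same chat-style models behind these browser tools can also be reached programmatically. The Python sketch below calls OpenAI’s chat completions endpoint with the requests library; the model name reflects what was available at the time of writing and will change, and the API key is assumed to be stored in an environment variable tied to a paid OpenAI API account.

    import os
    import requests

    # Assumes an OpenAI API key is stored in the OPENAI_API_KEY
    # environment variable (requires an OpenAI API account).
    API_KEY = os.environ["OPENAI_API_KEY"]

    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-3.5-turbo",  # model name at the time of writing
            "messages": [
                {"role": "system",
                 "content": "You are a helpful writing assistant."},
                {"role": "user",
                 "content": "Explain the rules of football to me "
                            "in three sentences."},
            ],
        },
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])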

These tools are constantly evolving and improving, but in their current state, many have the following limitations:

  • False or hallucinated responses: Most AI-powered text generators produce the response they deem most probable based on complex algorithms, which is not always the correct one. As a result, AI may produce outputs that are misleading or incorrect. When asked complex questions, it may also generate an output that is grammatically correct but logically nonsensical or contradictory. These incorrect responses are sometimes called AI “hallucinations.”
  • Limited frame of reference: Outputs are generated based on the user's input and the data the AI has been trained on. When a user asks about current events or information not widely circulated on the internet, the AI may produce outputs that are not accurate, relevant, or current because its frame of reference is limited to its training data.
  • Citation: Although generative AI is designed to generate unique responses, there have been documented cases in which an AI produced outputs containing unchanged, copyrighted content from its dataset. Even when an AI produces a unique response, many tools are unable to verify the accuracy of their outputs or provide sources supporting their claims.
  • Machine learning bias: AI tools may produce outputs that are discriminatory or harmful due to pre-existing bias in the data they have been trained on.

The potential for tools like ChatGPT seems almost endless — writing complete essays, creating poetry, summarizing books and large texts, creating games, translating languages, analyzing data, and more. ChatGPT and its contemporaries can interpret and analyze language much as human beings can, and they have become more conversational and adaptive with each update, making it increasingly difficult to discern what is generated by an AI from what is produced by a human. Because the machine-learning models they are built on imitate the way humans learn, their accuracy and utility will only continue to improve over time.

What Does This Mean for Educators?

The existence of this technology raises questions about which tasks will be completed, in whole or in part, by machines in the future and what that means for our learning outcomes, assessments, and even disciplines. Some experts are discussing to what extent it should become part of the educational enterprise to teach students how to write effective AI prompts and use tools like ChatGPT to produce work that balances quality with efficiency. Other instructors are considering integrating lessons on AI ethics or information literacy into their teaching. Meanwhile, organizations like Inside Higher Ed have rushed to conduct research and surveys on current and prospective AI usage in higher ed, offering leaders a view of the benefits and challenges of generative AI as they make informed decisions about AI guidance and policy.

Next Steps for UWGB Instructors

The Universities of Wisconsin have issued official guidance on the use of generative AI, but the extent to which courses will engage with this technology is largely left up to the individual instructor. Instructors may wish to mitigate, support, or even elevate students’ use of generative AI depending on their discipline and courses.

Those interested in using these tools in the classroom should familiarize themselves with these considerations for using generative AI, especially regarding a tool’s accuracy, privacy, and security. As with any tool we incorporate into our teaching, we must be thoughtful about how and when to use AI and then provide students with proper scaffolding, framing, and guardrails to encourage responsible and effective usage.

Still, even instructors who don’t want to incorporate this technology into their courses right now can’t ignore its existence. All instructors, regardless of their philosophy on AI, are highly encouraged to consider how generative AI will impact their assessments, incorporate explicit guidance on AI tool usage in their syllabi, and continue to engage in conversations around these topics with their colleagues, chairs, and deans.

Learn More

Explore even more CATL resources related to AI in education:

If you have questions, concerns, or ideas specific to generative AI tools in education and the classroom, please email us at catl@uwgb.edu or set up a consultation!