Creating Intentional and Impactful Multiple-Choice Assessments

Multiple-choice quizzes are one of the most common forms of assessment in higher education, as they can be used in courses of nearly every discipline and level. Multiple-choice questions are also one of the quickest and easiest forms of assessment to grade, especially when administered through Canvas or another platform that supports auto-grading. Still, like any assessment method, there are some contexts that are well-suited for multiple-choice questions and others that are not. In this toolbox article, we will provide some evidence-based guidance on when to leverage multiple-choice assessments and how to do so effectively.

Strengths and Weaknesses of Multiple-Choice Assessments

Multiple-choice assessments are a useful tool, but every tool has its limitations. As you weigh the strengths and weaknesses of this format, remember to consider your course’s learning outcomes in relation to your assessments. Then, once you’ve considered how your assessments align with your outcomes, determine if those outcomes are well-suited to a multiple-choice assessment.

Objectivity

Multiple-choice assessments are a form of objective assessment. For a typical multiple-choice item, there is no partial credit — each answer option is either fully correct or fully incorrect, which is what makes auto-grading possible. This objectivity is useful for assessing outcomes in which students need to complete a task with a concrete solution, such as defining discipline-specific terminology, solving a mathematical equation, or recalling the details of a historical event.

The tradeoff of this objectivity is that “good” multiple-choice questions are often difficult to write. Since multiple-choice questions presume that there is only one correct answer, instructors must be careful to craft distractors (incorrect answer options) that cannot be argued as “correct.” Likewise, the question stem should be phrased so that there is a definitively correct solution. For example, if a question is based on an opinion, theory, or framework, then the stem should explicitly reference this idea to reduce subjectivity.

Example of Subjective vs. Objective Question Stem

Subjective version:

____ needs are the most fundamental for an individual's overall wellbeing.

  • A) Cognitive
  • B) Self Esteem
  • C) Self-Actualization
  • D) Physiological

(Answer: D)

Objective version:

According to Maslow's hierarchy of needs, ____ needs are the most fundamental for an individual's overall wellbeing.

  • A) Cognitive
  • B) Self Esteem
  • C) Self-Actualization
  • D) Physiological

(Answer: D)

This version of the question stem clarifies that this question is based on a framework, Maslow's hierarchy of needs, which increases the question's objectivity, and therefore its reliability and validity for assessment.

Another caution regarding the objectivity of multiple-choice questions is that answers to these test items can often be found through outside resources — students’ notes, the textbook, a friend, Google, generative AI, etc. — which has important implications for online testing. Experts in online education advise against trying to police or surveil students, and instead encourage instructors to design their online assessments to be open-book (Norton Guide to Equity-Minded Teaching, p. 106).

Open-book multiple-choice questions can still be useful learning tools, especially in frequent, low-stakes assessments or when paired with a few short answer questions. Fully auto-graded multiple-choice quizzes can function as “mastery” quizzes, in which a student has unlimited attempts but must get above a certain threshold (e.g., 90%, 100%) to move on. Using low-stakes, open-note practice tests can be an effective form of studying, and in many cases may be better for retrieval than students studying on their own.
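The mastery rule described above reduces to a simple threshold check. As a minimal sketch (the function name and data shape are illustrative, not part of any Canvas API):

```python
def has_mastered(attempt_scores, threshold=0.90):
    """Return True once any attempt meets the mastery threshold.

    attempt_scores: fractional scores (0.0-1.0), one per attempt.
    With unlimited attempts, a student moves on as soon as one
    attempt reaches the threshold (e.g., 90%).
    """
    return any(score >= threshold for score in attempt_scores)

# A student who scored 70%, 85%, then 92% may move on:
# has_mastered([0.70, 0.85, 0.92]) -> True
```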

You can also customize your Canvas quiz settings to control other conditions, such as time. Classic Quizzes and New Quizzes include options that add a layer of difficulty to repeatable multiple-choice assessments, such as time limits, shuffled questions or answer choices, and the use of question banks. These settings, when used with low-stakes assessments with multiple attempts, can help students practice meeting the course’s learning outcomes before larger summative assessments.

Versatility

Multiple-choice assessments sometimes get a bad reputation for being associated with rote memorization and lower order thinking skills, but in reality, they can be used to assess skills at every level of Bloom’s taxonomy. This includes higher order thinking skills, such as students’ ability to analyze a source, evaluate data, or make decisions in complex situations.

For example, you could present students with a poem or graph and then use a multiple-choice question to assess a student’s ability to analyze and interpret the example. Or, alternatively, you could create a question stem that includes a short scenario and then ask students to pick the best response or conclusion from the answer choices.

Examples of Multiple-Choice Items That Assess Higher Order Thinking Skills

[The poem is included here.]

The chief purpose of stanza 9 is to:

  • A)  Delay the ending to make the poem symmetrical.
  • B)  Give the reader a realistic picture of the return of the cavalry.
  • C)  Provide material for extending the simile of the bridge to a final point.
  • D)  Return the reader to the scene established in stanza 1.

(Answer: D)

This item tests higher order thinking skills because it requires test-takers to apply what they know about literary devices and analyze a poem in order to discriminate the best answer.

Source: Burton, S. J., et al. (2001). How to Prepare Better Multiple Choice Test Items: Guidelines for University Faculty.

[A line graph showing heart rate over time for two groups administered a drug in a clinical trial; the y-axis runs from 70 to 90 beats/min and the x-axis from baseline to 5 minutes after the drug was administered.]

The graph above illustrates the change in heart rate over time for two different groups that were administered a drug for a clinical study. After studying the graph, a student concluded that there was a large increase in heart rate around the one-minute mark, even though the results of the study determined that patients' heart rates remained relatively stable over the duration of five minutes. Which aspect of the graph most likely misled the student when they drew their conclusion?

  • A)  The baseline for y-axis starts at 70 beats/min, rather than 0 beats/min.
  • B)  The y-axis is in beats/min, rather than beats/hour.
  • C)  The graph lacks a proper title.
  • D)  The graph includes datasets from two groups, instead of just one.

(Answer: A)

This item tests higher order thinking skills because it requires test-takers to analyze a graph and evaluate which answer choice might lead someone to draw a misleading conclusion from the graph.

Source: In, J. & Lee, S. (2017) Statistical data presentation. Korean J Anesthesiol, 70 (3): 267–276.


A nurse is making a home visit to a 75-year-old male client who has had Parkinson's disease for the past five years. Which finding has the greatest implication for the patient's care?

  • A)  The client's wife tells the nurse that the grandchildren have not been able to visit for over a month.
  • B)  The nurse notes that there are numerous throw rugs throughout the client's home.
  • C)  The client has a towel wrapped around his neck that the wife uses to wipe her husband's face.
  • D)  The client is sitting in an armchair, and the nurse notes that he is gripping the arms of the chair.

(Answer: B)

This item tests higher order thinking skills because it requires test-takers to apply what they know about Parkinson's disease and then evaluate the answer choices to determine which observation is the most relevant to the patient's care in the scenario.

Source: Morrison, S. and Free, K. W. (2001). Writing multiple-choice test items that promote and measure critical thinking. Journal of Nursing Education, 40 (1), 17-24.

Multiple-choice questions can also be adjusted for difficulty by tweaking the homogeneity of the answer choices. In other words, the more similar the distractors are to the correct answer, the more difficult the multiple-choice question will be. When selecting distractors, pick answer choices that seem appropriately plausible for the skill level of students in your course, such as common student misconceptions. Using appropriately difficult distractors will help increase your assessments’ reliability.

Despite this versatility, there are still some skills — such as students’ ability to explain a concept, display their thought process, or perform a task — that are difficult to assess with multiple-choice questions alone. In these cases, there are other forms of assessment that are better suited for these outcomes, whether it be through a written assignment, a presentation, or a project-based activity. Regardless of your discipline, there are likely some areas of your course that suit multiple-choice assessments better than others. The key is to implement multiple-choice assessments thoughtfully and intentionally with an emphasis on how this format can help students meet the course’s learning outcomes.

Making Multiple-Choice Assessments More Impactful

Once you have weighed the pros and cons of multiple-choice assessments and decided that this format fits your learning outcomes and assessment goals, there are some additional measures you can take to make your assessments more effective learning opportunities. By setting expectations and allowing space for practice, feedback, and reflection, you can help students get the most out of multiple-choice assessments.

Set Expectations for the Assessment

In line with the Transparency in Learning and Teaching (TILT) framework, disclosing your expectations is important for student success. Either in the Canvas quiz description or verbally in class (or both), explain to students the multiple-choice assessment’s purpose, task, and criteria. For example, is the assessment a low-stakes practice activity, a high-stakes exam, or something in between? What topics and learning outcomes will the assessment cover? What should students expect in terms of the number/type of questions and a time limit, if there is one? Will students be allowed to retake any part of the assessment for partial or full credit? Clarifying these types of questions beforehand helps students understand the stakes and goal of the assessment so they can prepare accordingly.

Provide Opportunities for Practice and Feedback

To help reduce test-taking anxiety and aid long-term retrieval, provide students with ample practice before high-stakes assessments. Use practice assessments to model the format and topics that will be addressed on major assessments. If you conduct assessments through a particular platform, like Canvas quizzes or a textbook publisher's site, consider having students use that same platform for practice so they are comfortable with the technology before major assessments as well.

Research also indicates that providing feedback after an assessment is key for long-term retention. Interestingly, this is not only true for answers that students got wrong, but also in cases when a student arrives at the correct answer but with a low degree of confidence. Without assessment feedback, students may just check their quiz grade and move on, rather than taking the time to process their results and understand how they can improve.

You can include immediate and automatic qualitative feedback for quiz questions through Canvas Classic Quizzes and New Quizzes. Feedback (or “answer comments”) can be added to individual answer options or to an entire multiple-choice item. For example, you can add a pre-formulated explanation underneath an answer choice on why that distractor is a common misconception. If a student has incorrectly selected that answer choice, they can read that feedback after submitting their quiz attempt to learn why their choice was incorrect.

Create Space for Reflection

[A bar graph showing higher final test scores under learning conditions that include practice tests with feedback.]

Source: Roediger III, H. L., & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention.

As indicated in the chart above, delayed feedback is potentially even more effective for long-term retention than immediate feedback. Consider reserving some time in class to debrief after important assessments and address students’ remaining questions. For asynchronous online courses, you could record a short post-test video in which you comment on trends you saw in students’ scores and clear up common misconceptions.

If you want to go a step further, you can also have students complete a self-reflective activity, also known as an exam wrapper, like a post-test survey or written reflection. Self-reflective activities like these have been shown to increase students’ overall performance in class by helping them learn how to reflect on their own study and performance habits, in addition to the positive effects on information retention mentioned earlier.

Questions?

Need some help designing your next multiple-choice assessment? Want to learn more about mastery quizzes, Canvas quiz settings, or exam wrappers? CATL is here to help! Reach out to us at CATL@uwgb.edu or schedule a consultation and we can help you brainstorm assessment solutions that fit your course’s needs. Or, if you’re ready to start building your assessment, check out this related guide for tips on writing more effective multiple-choice questions.

Steps Towards Assuring Academic Integrity

Article by Nathan Kraftcheck.

A common initial concern I hear when meeting with new distance education instructors is how to prevent cheating and plagiarism. How can they ensure the rigor of their assessments? No strategy is 100% successful at preventing cheating online, but neither is any strategy 100% successful at eliminating cheating during in-person assessments (Watson & Sottile, 2010). Still, we strive to limit academic dishonesty to the best of our abilities. The practices below may be useful for reducing both students' opportunity to commit academic misconduct and their motivation to do so in your class.

Quiz and exam building strategies to save time and reduce cheating

One way in which online instructors reduce time spent grading is through online quizzes and exams. Systems like Canvas have allowed for automatic grading of certain question types for many years (open-ended manually graded questions are also available). Depending on an instructor’s goals, course objectives, and discipline, automatically scored online quizzes and exams can be a useful tool.

  • As a formative learning activity in itself—a way for students to check their own learning in low-stakes assessment.
  • As a replacement for other low-stakes work. This can help alleviate the discussion board fatigue that many students cited during the Fall 2020 semester.
  • As a method to assess foundational knowledge that is necessary for future work in the major or program.
  • As a manageable way to assess a large number of students.

When including online quizzes and exams in a course, the possibility that students will look up answers is always a concern. Instructors may be tempted to simply direct students not to use external materials while taking the assessment. Unfortunately, just asking students not to use such material is unlikely to find much success. A more practical approach is to design the quiz or exam around the assumption that many students will use external resources when possible.

  • Allow multiple attempts.
  • Draw questions from a pool of possible questions. Each student gets a randomized experience, and with a large enough pool, some students may never see the same version of a question (assuming different wording across questions that measure the same concept). Framing the same concept in different ways also helps students work on the underlying idea instead of memorizing a specific question's language.
  • Shuffle the order of possible answers.
  • Let students see the correct answers only after the quiz or exam is no longer available. This helps if you'd like students to use their quiz or exam as a study guide.
  • Limit how long the quiz or exam is open if you want to mimic a closed-book assessment; don't allow students time to look up all the answers.
  • Show only one question at a time. This limits a student's ability to look up multiple questions at once or share them with a friend.
  • Set availability and due dates for your quizzes and exams.
  • Modify your questions slightly from semester to semester. Do this for 100% of your publisher-provided questions; assume copies of publisher questions and their answers are available online for students to look up.
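Several of these settings (question pools, shuffled answers) are just randomization. As a rough sketch of the idea in Python (the `build_quiz` function and question format are hypothetical illustrations, not how Canvas works internally):

```python
import random

def build_quiz(question_bank, num_questions, shuffle_answers=True, seed=None):
    """Assemble one student's quiz from a larger question bank.

    Drawing a random subset from a pool and shuffling each question's
    answer choices means no two students are likely to see an
    identical quiz.
    """
    rng = random.Random(seed)
    questions = rng.sample(question_bank, num_questions)
    if shuffle_answers:
        questions = [
            {**q, "choices": rng.sample(q["choices"], len(q["choices"]))}
            for q in questions
        ]
    return questions

# Hypothetical 20-question bank; each student gets 5 questions.
bank = [{"stem": f"Question {i}", "choices": ["A", "B", "C", "D"]}
        for i in range(20)]
quiz = build_quiz(bank, num_questions=5, seed=42)
```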

For more detailed information on these items, please look at this page for guidance.

Less high-stakes assessment and more low-stakes assessment

It’s fairly common for teaching and learning centers to promote increasing the use of lower-stakes assessments and decreasing the use of higher-stakes ones. This might seem counterintuitive, because more assessments could mean more opportunities to cheat, right? It may also seem like additional work, since students would take assessments more frequently. However, there is good reason to advocate for more frequent, smaller assessments.

  • Students have a better understanding of how well they grasp discrete topics.
  • Students will know earlier if they’re not doing well, instead of at the first midterm.
  • Students learn from recalling information—a quiz can be more effective than just studying.
  • Students learn more through repeated assessment in comparison to one assessment (Brame & Biel, 2015; Roediger & Butler, 2011).

“Done” / “not done” grading

For low-stakes formative student assignments, consider adopting a done/not done approach. This could take the form of any assignment you could quickly assess for completion, for instance, a brief reflective or open-ended written assignment submission or discussion post. By keeping the activity brief and focused, you’ll be able to quickly assess whether it was done correctly while allowing students to re-engage with class topics by making meaning from what they’ve learned.

For example, the University of Waterloo suggests reflection prompts such as:

  • What new insights did I develop as a result of doing this work?
  • How has my perspective changed after doing this assignment?
  • What challenges to my current thinking did this work present?
  • How does work in this course connect with work in another course?
  • What concepts do I still need to study more? Where are the disconnects in my learning?

Working up to larger projects and papers

For larger, summative projects that by their nature carry a large share of the final course grade, consider breaking the project into smaller steps (Ahmad & Sheikh, 2016).

For a written paper assignment, an instructor could:

  • Start by asking students to select a topic based on a parameter you provide and find source material to support their topic.
  • Students submit their topic description and source material as an assignment. Grade as complete/incomplete and provide guidance if necessary on topic and/or sources.
  • Ask students to create an annotated bibliography.
  • Students submit an annotated bibliography to an assignment. Grade as complete/incomplete.
  • Create a discussion in Canvas where students can talk about their topics (assuming they're all somewhat related). Grade as complete/incomplete.
  • Ask students to create a rough draft. Utilize peer grading in Canvas to off-load grading and include student-to-student communication and collaboration.
  • Students submit a second draft or final draft. Grade with a rubric.

By asking students to select their own topic and work through the writing process step by step, the instructor can watch the paper develop over time. Students cannot procrastinate until the deadline, which reduces the pressure to plagiarize someone else's work (Elias, 2020), and by the final draft they have already done most of the work themselves. Distributing the possible points across multiple activities also discourages plagiarism, because there is less of an emphasis on the finished product alone (Carnegie Mellon University).

Built-in flexibility

Dropping lowest scores

Another way to reduce the appeal of cheating in courses is to offer some flexibility in grading (Ostafichuk, Frank & Jaeger). This can take many forms, of course, but one common to classes using a learning management system like Canvas is to drop the lowest score in a grouping of similar activities. As an example, an online instructor might have a Canvas Assignment Group containing all of their graded quizzes. The instructor can then create a rule for that Assignment Group which will tell Canvas how to calculate scores of the activities inside. For example, the instructor might set the Assignment Group to exclude each student’s lowest score in that group when calculating the final grade. The quiz score that is dropped varies from student to student, but each would have their lowest score dropped.
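The drop-lowest rule is easy to see in miniature. This sketch mimics the effect of a Canvas assignment-group rule (the function is illustrative; Canvas applies the rule internally):

```python
def group_score(scores, drop_lowest=1):
    """Average an assignment group's scores after dropping the lowest.

    Mirrors the effect of a Canvas assignment-group rule such as
    'ignore the 1 lowest score' when computing the final grade.
    """
    kept = sorted(scores)[drop_lowest:]
    return sum(kept) / len(kept)

# Quiz scores of 60, 80, 90, 100 -> the 60 is dropped:
# group_score([60, 80, 90, 100]) -> 90.0
```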

Student options

Building off the concept of dropping the lowest score from a group of assignments, you can also use this functionality to build in student choice. For example, if you have two or more discussion activities in your class that meet the same learning objective, consider letting students select the one they want to participate in and then “drop” the rest.

Late work leeway

Another option for flexibility is to set the “due” dates for your graded activities but leave the “available to” date empty or make it the absolute last date and time you would accept submissions. This will allow your students to submit their work beyond the due date and have it flagged as “late”, but also reduce the likelihood of a student cheating on an assignment if they’ve procrastinated or otherwise fallen behind in their coursework. You can also create late work grading policies within Canvas that automatically deduct a percentage of possible points on a daily or weekly basis.
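An automatic late policy like this is just a per-day percentage deduction. A minimal sketch, assuming a hypothetical 10%-per-day penalty (illustrative only, not Canvas's actual implementation):

```python
def late_adjusted_score(raw_score, days_late, penalty_per_day=0.10, floor=0.0):
    """Apply an automatic late deduction as a percentage per day.

    The penalty is capped at 100%, and the adjusted score never
    drops below the floor.
    """
    penalty = min(days_late * penalty_per_day, 1.0)
    return max(raw_score * (1 - penalty), floor)

# A 90-point submission turned in 2 days late under a 10%/day policy:
# late_adjusted_score(90, 2) -> 72.0
```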

Make use of rubrics

Research has shown that rubrics are effective tools for highlighting the most important elements of an assignment, setting student expectations for the quality and depth of submitted work, and simplifying the grading process for instructors (Kearns). By making a rubric available ahead of time, you give students another opportunity to see how their work will be graded and which crucial elements they should include. Rubrics can also address equity issues, leveling the field between students whose prior education has prepared them to succeed in college and those whose has not (Stevens & Levi, 2006). Some instructors have found that using rubrics reduces their time spent grading, possibly because of the focused nature of what is being assessed (Cornell University; Duquesne University).

Canvas has built-in rubrics that can be attached to any graded activity. Rubrics are used mostly in Canvas Assignments and Discussions. They can be added to quizzes but aren’t used in the actual grading process for quizzes and would be used as guidance for students only. To learn more about rubrics, take a look at this rubrics guide by Boston College. There’s also a more task-oriented guide from Canvas, available here.

What do you think?

What techniques have you found useful in limiting academic misconduct in your classes? Let us know by dropping a comment below!


Up and Running with Remote Group Work

A Case for Group Work

Group work can elicit negative reactions from instructors and students alike. Often enough, students groan about doing it and instructors dread grading it. The process is ripe for communication breakdowns resulting in stress from both perspectives. On top of this, the digital learning environment tends to compound these issues. Why then is group work so prevalent?

The answer is that, when done well, group activities help foster engagement and build relationships. Collaborative work helps students develop important skills like effectively articulating ideas, listening actively, and cooperating with peers. Collaborative assignments also correlate strongly with student success, positioning them among the high-impact practices identified by the Association of American Colleges and Universities. Making group work a worthwhile experience for students requires extra consideration and planning, but the positive gains are worth the effort.

Designing Group Work for Student Success

How can we design collaborative activities that are a quality learning experience for students? Scaffolding ensures that students are confident in their understanding of, and ability to execute, the activity. UW-Extension has created a helpful guide on facilitating group work that outlines three key suggestions to get you started. First, be sure students understand the purpose of the activity: what they are supposed to learn from it and why it is a group activity. Second, provide support so students have the necessary tools and training to collaborate: be clear about how and when students are to collaborate (or offer suggestions), and ensure they understand how to use the needed technologies. Finally, provide opportunities for peer- and self-evaluation; having students evaluate their own and their peers’ contributions can alleviate frustrations over unequal workloads. As challenges arise, guide groups toward solutions that are flexible but fair to all members. When embarking on group projects, be prepared to give students guidance about what to do when a teammate is not meeting the group’s expectations.

For example, as you design your group projects, ask yourself whether it’s important that students meet synchronously. If so, how might you design the project for students with caregiving responsibilities or with full-time or “off hours” work schedules? These students may not be able to meet as regularly, or at the same time, as other students. See below for how this might play into assessing the group project. You might also consider whether all students need to hold the same role within the group, or whether the collective project could be split up based on group roles.

Consider how group dynamics can impact student experiences. Helping students come up with a plan for group work and methods of holding one another accountable promotes an inclusive and equitable learning environment. The tools described under “Tools for Working Collaboratively” below can help your students coordinate these efforts.

Assessing Group Work

Equitable, specific, and transparent grading are crucial to group-work success. The Eberly Center for Teaching Excellence of Carnegie Mellon University has a great resource on how to assess group work, including samples. This resource breaks grading group work down into three areas. First, assess group work based on both individual and group learning and performance. Include an individual assessment component to motivate all students to contribute and help them to feel their individual efforts are recognized. Also assess the process along with the product. What skills are you hoping students develop by working in groups? Your choice of assessment should point to these skills. One way to meet this need is to have students complete reflective team, peer, or individual evaluations as described above. Finally, outline your assessment criteria and grading scheme upfront. Students should have clear expectations of how you will assess them. Include percentages for team vs. individual components and product vs. process components as they relate to the total project grade.
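Publishing the weighting up front also makes the grade calculation easy to show students. A sketch, assuming a hypothetical 40% individual / 60% team split:

```python
def project_grade(individual, team, individual_weight=0.4):
    """Combine individual and team components into one project grade.

    The 40/60 split here is only an example; share your actual
    weights with students before the project begins.
    """
    return individual_weight * individual + (1 - individual_weight) * team

# Individual score of 80 and team score of 90 with a 40% individual weight:
# project_grade(80, 90) -> 86.0
```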

Tools for Working Collaboratively

Picking the right tool among a plethora of what is available is an important step. First, consider how you would like students to collaborate for the activity. Is it important that students talk or chat synchronously, asynchronously, or both? Will students share files?

The following suggestions cover the main collaboration tools supported at UWGB.

If you are interested in learning more about any of these tools, consider scheduling a consultation with a CATL member.

Canvas discussions are one option for student collaboration. Operating much like an online forum, discussions are best suited for asynchronous communication, meaning students can post and reply to messages at any time, in any order. If you have groups set up in Canvas, you can create group discussions in which group members can only see one another’s posts. You can also adjust your course settings so that students can create their own discussion threads as well.

Hypothesis is a Canvas integration that lets instructors and students collaboratively annotate a digital document or website. Hypothesis annotation activities can be completed synchronously, such as over a Zoom call, or asynchronously on students' own time. Activities can be created for either the whole class or for small groups and are a great way for students to bounce around ideas about a text or reading. 

Office 365 refers to the online Microsoft Office suite, including Word, PowerPoint, and Excel. Students can work collaboratively and asynchronously on projects using the online versions of these applications, which sync changes in near real time. Microsoft Office 365 has partial integration with Canvas, allowing students to set up and share Office documents from within Canvas using the Collaborations feature. Students will have to log in to Office 365 through their Canvas course before they can use most features of the Canvas and Office 365 integration.

Zoom is one of two web conferencing tools supported by the university, the other being Teams. The Zoom Canvas integration allows instructors to set up meetings within a Canvas course, and students can then access meeting and recording links from within the course. As such, it is generally easy for students to access and use. One downside to Zoom is that it is a purely synchronous meeting tool, so students will have to coordinate their schedules or find other ways of including members who may not be able to attend a live meeting. Students who wish to set up meetings amongst themselves cannot do so through the Canvas integration, though they can use the Zoom desktop app or web portal with their UWGB account.

Microsoft Teams is a collaboration tool that combines web conferencing, synchronous and asynchronous text communications (in the form of chat and posts), and shared, collaborative file space. Students can create a new team in MS Teams for their group project or operate in a channel of an existing class team. Microsoft Teams also has partial integration with Canvas, meaning students and instructors can create and share Teams meeting links within the Rich Content Editor of Canvas (in pages, announcements, discussions, etc.).

Putting It into Practice

When we ask students to work collaboratively, it’s important we reveal the “hidden curriculum” by building in the steps they should take to be a successful team. As a starting point, asking students to answer these questions helps clarify the work of the group:

  • “Who’s on the team?”
  • “What are your tasks as a group?”
  • “How will you communicate?” (Asynchronously? Synchronously?)
  • “How will you ensure everyone can meet the deadlines you set?”
  • “If someone misses a meeting, how will you ensure that everyone has access to the information they’ll need to help you all complete the project on time?”
  • “When will you give each other feedback before you turn in the final assignment?”

For a ‘bare bones’ group assignment, take the above considerations on designing and assessing group work into account and create a worksheet for the student groups to fill out together. Create a Canvas group assignment to collect those agreements, assign it some points that count toward the whole project grade, and set an early deadline so that students establish their plan soon enough for it to benefit their group. Balancing structure and agency in scaffolded activities is delicate, but guided worksheets and steps like these can help students focus their energy on the project, assignment, or task once everyone is on the same page.

Let’s keep the conversation going!

Do you have some tried and tested strategies for helping students coordinate and complete group work online? Send them our way by emailing: CATL@uwgb.edu or comment below!

Event Follow-Up: Transparent Assignment Design (Apr. 15, 2019)

Faculty and staff from Green Bay, Manitowoc, Marinette, and Sheboygan joined other institutions participating in the Taking Student Success to Scale high-impact practice (HIP) project in an interactive webinar about designing transparent assignments. The session was hosted by Mary-Ann Winkelmes on 4/15/19. More information on Dr. Winkelmes’s work can be found beneath the embedded video.

Session Recording (4/15/19)

Session Resources

More Information

The National Association of System Heads (NASH) sponsored a webinar with Mary-Ann Winkelmes on Transparent Assignment Design. All members of the campus community were invited.  Mary-Ann is the founder and director of the Transparency in Learning and Teaching in Higher Education Project (TILT Higher Ed).

Transparent instruction is an inclusive, equitable teaching practice that can enhance High Impact Practices by making learning processes explicit and promoting student success equitably. A 2016 AAC&U study (Winkelmes et al.) identifies transparent assignment design as a small, easily replicable teaching intervention that significantly enhances students’ success, with greater gains by historically underserved students. A 2018 study suggests those benefits can boost students’ retention rates for up to two years. In this session we reviewed the findings and examined some sample assignments. Then we applied the research to revising some class activities and assignments. Participants left with a draft assignment or activity for one of their courses, and a concise set of strategies for designing transparent assignments that promote students’ learning equitably.