Creating Intentional and Impactful Multiple-Choice Assessments

Multiple-choice quizzes are one of the most common forms of assessment in higher education, as they can be used in courses of nearly every discipline and level. Multiple-choice questions are also one of the quickest and easiest forms of assessment to grade, especially when administered through Canvas or another platform that supports auto-grading. Still, as with any assessment method, some contexts are well-suited to multiple-choice questions and others are not. In this toolbox article, we will provide some evidence-based guidance on when to leverage multiple-choice assessments and how to do so effectively.

Strengths and Weaknesses of Multiple-Choice Assessments

Multiple-choice assessments are a useful tool, but every tool has its limitations. As you weigh the strengths and weaknesses of this format, consider how your assessments align with your course's learning outcomes, and then determine whether those outcomes are well-suited to a multiple-choice assessment.

Objectivity

Multiple-choice assessments are a form of objective assessment. For a typical multiple-choice item, there is no partial credit — each answer option is either fully correct or fully incorrect, which is what makes auto-grading possible. This objectivity is useful for assessing outcomes in which students need to complete a task with a concrete solution, such as defining discipline-specific terminology, solving a mathematical equation, or recalling the details of a historical event.

The tradeoff of this objectivity is that “good” multiple-choice questions are often difficult to write. Since multiple-choice questions presume that there is only one correct answer, instructors must be careful to craft distractors (incorrect answer options) that cannot be argued as “correct.” Likewise, the question stem should be phrased so that there is a definitively correct solution. For example, if a question is based on an opinion, theory, or framework, then the stem should explicitly reference this idea to reduce subjectivity.

Example of Subjective vs. Objective Question Stem

Subjective version:

____ needs are the most fundamental for an individual's overall wellbeing.

  • A) Cognitive
  • B) Self Esteem
  • C) Self-Actualization
  • D) Physiological

(Answer: D)

Objective version:

According to Maslow's hierarchy of needs, ____ needs are the most fundamental for an individual's overall wellbeing.

  • A) Cognitive
  • B) Self Esteem
  • C) Self-Actualization
  • D) Physiological

(Answer: D)

This version of the question stem clarifies that this question is based on a framework, Maslow's hierarchy of needs, which increases the question's objectivity, and therefore its reliability and validity for assessment.

Another caution regarding the objectivity of multiple-choice questions is that answers to these test items can often be found through outside resources — students’ notes, the textbook, a friend, Google, generative AI, etc. — which has important implications for online testing. Experts in online education advise against trying to police or surveil students, and instead encourage instructors to design their online assessments to be open-book (Norton Guide to Equity-Minded Teaching, p. 106).

Open-book multiple-choice questions can still be useful learning tools, especially in frequent, low-stakes assessments or when paired with a few short answer questions. Fully auto-graded multiple-choice quizzes can function as “mastery” quizzes, in which a student has unlimited attempts but must get above a certain threshold (e.g., 90%, 100%) to move on. Using low-stakes, open-note practice tests can be an effective form of studying, and in many cases may be better for retrieval than students studying on their own.

You can also customize your Canvas quiz settings to control other conditions, such as time. Classic Quizzes and New Quizzes include options that add a layer of difficulty to repeatable multiple-choice assessments, such as time limits, shuffled questions or answer choices, and the use of question banks. These settings, when used with low-stakes assessments with multiple attempts, can help students practice meeting the course’s learning outcomes before larger summative assessments.
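
For instructors comfortable with a bit of scripting, these same settings can also be applied through the Canvas REST API rather than the quiz settings page. The sketch below is a minimal, hypothetical Python example, not official CATL or Canvas tooling; the domain, access token, and course ID are placeholders you would replace with your own values (tokens can be generated under Account > Settings in Canvas).

import requests

# Minimal sketch: create a low-stakes "mastery" quiz through the Canvas
# REST API (Classic Quizzes). CANVAS_URL, API_TOKEN, and COURSE_ID are
# placeholders, not real values.
CANVAS_URL = "https://YOUR-INSTITUTION.instructure.com"
API_TOKEN = "YOUR_ACCESS_TOKEN"
COURSE_ID = 12345

quiz_settings = {
    "quiz": {
        "title": "Module 3 Mastery Check",
        "quiz_type": "assignment",         # graded; use "practice_quiz" for ungraded
        "time_limit": 20,                  # minutes; omit for no time limit
        "shuffle_answers": True,           # shuffle answer choices on each attempt
        "allowed_attempts": -1,            # -1 allows unlimited attempts
        "scoring_policy": "keep_highest",  # keep the best attempt's score
        "published": False,                # review in Canvas before publishing
    }
}

response = requests.post(
    f"{CANVAS_URL}/api/v1/courses/{COURSE_ID}/quizzes",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=quiz_settings,
    timeout=30,
)
response.raise_for_status()
print("Created quiz with ID:", response.json()["id"])

Note that the "must score above a threshold to move on" behavior of a mastery quiz is configured separately in Canvas, as a module completion requirement (e.g., "score at least 90%"), rather than as a quiz setting.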

Versatility

Multiple-choice assessments sometimes get a bad reputation for being associated with rote memorization and lower order thinking skills, but in reality, they can be used to assess skills at every level of Bloom’s taxonomy. This includes higher order thinking skills, such as students’ ability to analyze a source, evaluate data, or make decisions in complex situations.

For example, you could present students with a poem or graph and then use a multiple-choice question to assess their ability to analyze and interpret the example. Alternatively, you could create a question stem that includes a short scenario and then ask students to pick the best response or conclusion from the answer choices.

Examples of Multiple-Choice Items That Assess Higher Order Thinking Skills

[The poem is included here.]

The chief purpose of stanza 9 is to:

  • A)  Delay the ending to make the poem symmetrical.
  • B)  Give the reader a realistic picture of the return of the cavalry.
  • C)  Provide material for extending the simile of the bridge to a final point.
  • D)  Return the reader to the scene established in stanza 1.

(Answer: D)

This item tests higher order thinking skills because it requires test-takers to apply what they know about literary devices and analyze a poem in order to identify the best answer.

Source: Burton, S. J., et al. (2001). How to Prepare Better Multiple Choice Test Items: Guidelines for University Faculty.

[A line graph is included here, showing heart rate over time for two groups administered a drug in a clinical trial; the y-axis runs from 70 to 90 beats/min and the x-axis runs from the baseline heart rate to 5 minutes after the drug was administered.]

The graph above illustrates the change in heart rate over time for two different groups that were administered a drug for a clinical study. After studying the graph, a student concluded that there was a large increase in heart rate around the one-minute mark, even though the results of the study determined that patients' heart rates remained relatively stable over the duration of five minutes. Which aspect of the graph most likely misled the student when they drew their conclusion?

  • A)  The baseline for y-axis starts at 70 beats/min, rather than 0 beats/min.
  • B)  The y-axis is in beats/min, rather than beats/hour.
  • C)  The graph lacks a proper title.
  • D)  The graph includes datasets from two groups, instead of just one.

(Answer: A)

This item tests higher order thinking skills because it requires test-takers to analyze a graph and evaluate which answer choice might lead someone to draw a misleading conclusion from the graph.

Source: In, J. & Lee, S. (2017) Statistical data presentation. Korean J Anesthesiol, 70 (3): 267–276.


A nurse is making a home visit to a 75-year-old male client who has had Parkinson's disease for the past five years. Which finding has the greatest implication for the patient's care?

  • A)  The client's wife tells the nurse that the grandchildren have not been able to visit for over a month.
  • B)  The nurse notes that there are numerous throw rugs throughout the client's home.
  • C)  The client has a towel wrapped around his neck that the wife uses to wipe her husband's face.
  • D)  The client is sitting in an armchair, and the nurse notes that he is gripping the arms of the chair.

(Answer: B)

This item tests higher order thinking skills because it requires test-takers to apply what they know about Parkinson's disease and then evaluate the answer choices to determine which observation is the most relevant to the patient's care in the scenario.

Source: Morrison, S. and Free, K. W. (2001). Writing multiple-choice test items that promote and measure critical thinking. Journal of Nursing Education, 40 (1), 17-24.

Multiple-choice questions can also be adjusted for difficulty by tweaking the homogeneity of the answer choices. In other words, the more similar the distractors are to the correct answer, the more difficult the multiple-choice question will be. When selecting distractors, pick answer choices that seem appropriately plausible for the skill level of students in your course, such as common student misconceptions. Using appropriately difficult distractors will help increase your assessments’ reliability.

Despite this versatility, there are still some skills — such as students’ ability to explain a concept, display their thought process, or perform a task — that are difficult to assess with multiple-choice questions alone. In these cases, there are other forms of assessment that are better suited for these outcomes, whether it be through a written assignment, a presentation, or a project-based activity. Regardless of your discipline, there are likely some areas of your course that suit multiple-choice assessments better than others. The key is to implement multiple-choice assessments thoughtfully and intentionally with an emphasis on how this format can help students meet the course’s learning outcomes.

Making Multiple-Choice Assessments More Impactful

Once you have weighed the pros and cons of multiple-choice assessments and decided that this format fits your learning outcomes and assessment goals, there are some additional measures you can take to make your assessments more effective learning opportunities. By setting expectations and allowing space for practice, feedback, and reflection, you can help students get the most out of multiple-choice assessments.

Set Expectations for the Assessment

In line with the Transparency in Learning and Teaching (TILT) framework, disclosing your expectations is important for student success. Either in the Canvas quiz description or verbally in class (or both), explain to students the multiple-choice assessment’s purpose, task, and criteria. For example, is the assessment a low-stakes practice activity, a high-stakes exam, or something in between? What topics and learning outcomes will the assessment cover? What should students expect in terms of the number/type of questions and a time limit, if there is one? Will students be allowed to retake any part of the assessment for partial or full credit? Clarifying these types of questions beforehand helps students understand the stakes and goal of the assessment so they can prepare accordingly.

Provide Opportunities for Practice and Feedback

To help reduce test-taking anxiety and support long-term retention, provide students with ample practice before high-stakes assessments. Use practice assessments to model the format and topics that will be addressed on major assessments. If you administer assessments through a particular platform, like Canvas quizzes or a textbook publisher's site, consider having students use that same platform for practice so they are comfortable with the technology before major assessments as well.

Research also indicates that providing feedback after an assessment is key for long-term retention. Interestingly, this is not only true for answers that students got wrong, but also in cases when a student arrives at the correct answer but with a low degree of confidence. Without assessment feedback, students may just check their quiz grade and move on, rather than taking the time to process their results and understand how they can improve.

You can include immediate and automatic qualitative feedback for quiz questions through Canvas Classic Quizzes and New Quizzes. Feedback (or “answer comments”) can be added to individual answer options or to an entire multiple-choice item. For example, you can add a pre-formulated explanation underneath an answer choice on why that distractor is a common misconception. If a student has incorrectly selected that answer choice, they can read that feedback after submitting their quiz attempt to learn why their choice was incorrect.
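
If you build quizzes programmatically, per-answer feedback can be attached when a question is created. Continuing the hypothetical sketch above (same placeholder credentials, plus an assumed QUIZ_ID), the Classic Quizzes question endpoint accepts an answers array in which an answer_comments field holds the feedback shown for that choice; double-check these field names against the current Canvas API documentation before relying on them.

import requests

# Sketch: add a multiple-choice item with per-answer feedback ("answer
# comments") through the Canvas REST API. QUIZ_ID is the hypothetical ID
# returned when the quiz was created; all credentials are placeholders.
CANVAS_URL = "https://YOUR-INSTITUTION.instructure.com"
API_TOKEN = "YOUR_ACCESS_TOKEN"
COURSE_ID = 12345
QUIZ_ID = 67890

question = {
    "question": {
        "question_name": "Maslow's hierarchy",
        "question_text": (
            "According to Maslow's hierarchy of needs, ____ needs are the "
            "most fundamental for an individual's overall wellbeing."
        ),
        "question_type": "multiple_choice_question",
        "points_possible": 1,
        "answers": [
            # answer_weight of 100 marks the correct choice; 0 marks a distractor
            {"answer_text": "Cognitive", "answer_weight": 0,
             "answer_comments": "Cognitive needs sit higher in the hierarchy."},
            {"answer_text": "Self-Actualization", "answer_weight": 0,
             "answer_comments": "Self-actualization is the top of the pyramid, not its base."},
            {"answer_text": "Physiological", "answer_weight": 100,
             "answer_comments": "Correct: physiological needs form the base of the hierarchy."},
        ],
    }
}

response = requests.post(
    f"{CANVAS_URL}/api/v1/courses/{COURSE_ID}/quizzes/{QUIZ_ID}/questions",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=question,
    timeout=30,
)
response.raise_for_status()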

Create Space for Reflection

[A bar graph is included here, showing that learning conditions that include practice tests with feedback produce higher final test scores.]

Source: Roediger, H. L., III, & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences, 15(1), 20–27.

As indicated in the chart above, delayed feedback is potentially even more effective for long-term retention than immediate feedback. Consider reserving some time in class to debrief after important assessments and address students’ remaining questions. For asynchronous online courses, you could record a short post-test video in which you comment on trends you saw in students’ scores and clear up common misconceptions.

If you want to go a step further, you can also have students complete a self-reflective activity, also known as an exam wrapper, like a post-test survey or written reflection. Self-reflective activities like these have been shown to increase students’ overall performance in class by helping them learn how to reflect on their own study and performance habits, in addition to the positive effects on information retention mentioned earlier.

Questions?

Need some help designing your next multiple-choice assessment? Want to learn more about mastery quizzes, Canvas quiz settings, or exam wrappers? CATL is here to help! Reach out to us at CATL@uwgb.edu or schedule a consultation and we can help you brainstorm assessment solutions that fit your course’s needs. Or, if you’re ready to start building your assessment, check out this related guide for tips on writing more effective multiple-choice questions.


Why Didn’t Anyone Do Today’s Reading? – Engaging Students by Building Relationships 

Article by Pamela Rivers

The semester is well under way. Your students have taken their first exam. Some are active and excelling. Others have stopped coming to class or are not completing the assigned readings. Welcome to the end of September.

Maybe you thought this time it wouldn't happen. Everyone was eager and excited and answering your questions for the first few class sessions. Now, however, you are right back to encountering some disengaged students doing what feels like the bare minimum, and it's eating away at your passion for teaching. Is this the fate of our classes, or is there more we can do to reach students?

First, to be clear, engaging students is not magic, and although it should be informed by science, in many ways it’s also an art form. Like all art, some of it appeals to us and some of it doesn’t. No one can promise you a room full of fully engaged students who always turn in their homework, laugh at all your jokes, and come prepared every session. No trick or strategy works for every person, every time. There are, however, certain strategies you can employ to make it more likely your students will listen, attend, and want to do well, for you and for themselves.

Relationships Matter

In "Culturally Responsive Teachers Create Counter Narratives for Students," Zaretta Hammond argues that relationships can be the "on ramp to learning." She says that relationships can be as important as the curriculum. One research study cited in Relationship-Rich Education showed that alumni who had a faculty member who cared about them as students felt more connected to their current jobs. Unfortunately, only 27% of graduates surveyed had someone in that role. This powerful research shows that developing relationships with our students not only engages them, but can also lead to their success down the road.

That is compelling research, and it can take a lot less than you might imagine to make a real difference in the lives of your students. Students want to know that you care, and they want to feel welcome in your classroom. Research suggests that colleges and universities need to invest in a "relentless welcome of their students" (Felten & Lambert, 2020), but faculty can lead the way in their individual classrooms by integrating activities that build relationships and encourage engagement.

Getting to Know You Surveys

Before class starts, whether online or face-to-face, send out a "getting to know you" survey through Canvas. This survey can ask questions specific to your discipline, but it is also a place to show interest in your students and learn what might hold them back from being successful. You could ask about your students' pronouns, how they prefer to be contacted, any worries they are having about your class, and any specific needs they have. You can find out a lot about a student by simply asking. Need a ready-made survey? Reach out to CATL to get a copy of our Canvas Template, which includes a sample survey.

Ice Breakers

When you hear the word "icebreakers," you may groan. The truth is, a silly, active icebreaker is a wonderful way to get face-to-face students moving and to start building classroom community (Sciutto, 1995). A people bingo game, for example, can get students talking and help them get to know each other. If you are teaching online, there are plenty of icebreakers you can do asynchronously, including video introductions or a game like two truths and a lie.

Class Norms

Developing a set of agreed-upon class norms (expectations or guidelines) for both you and your students, with everyone involved in creating them, goes a long way toward building trust and community. Next semester, take part of your first class session to have your students help you develop norms. If you need some ideas for what these class expectations might look like, check out the "Trust" section of this CATL toolbox article.

Make It Matter

Find ways to tie your assignments to students’ goals, lives, and futures. If you ask me to spend 2 hours every week looking up dictionary definitions for words I’ve never heard of for a random quiz that doesn’t seem to have any bearing on what I’m supposed to be learning in your course, I am unlikely to be motivated to keep spending my time looking in the dictionary. If, on the other hand, you explain to me the importance of the words I’m learning, how they will be useful in my next class, and even how they may show up on a licensing exam for my future career, my motivation changes.

Unplanned Conversations

In face-to-face or synchronous online courses, you can use the time before class or while students are working to chat with those students who are unoccupied. Mention something you liked about their work, ask how their weekend was, and show a genuine interest in them. You never know what you might learn in these conversations. It may not lead to anything, or it may lead to a student feeling seen. Establishing a friendly and open line of communication with students in this way also makes it more likely that they will feel comfortable coming to you if they have a question or issue in the class.

Give Your Students a Chance to be Successful

As you build up to the major coursework in your class, include small, low-stakes assignments that give all students an opportunity to succeed and to receive formative feedback. As students get a small taste of success, they will want to experience more of it.

Use Your Students’ Names and Pronouns

Another way to make a student feel seen is by how you address them. Ask your students what they would like to be called and what pronouns they use in a "getting to know you" survey or some other activity at the start of the semester. If you are teaching a face-to-face class and are good with names, try to memorize students' names and pronouns during the first few weeks and use them frequently. If you are teaching online, or your face-to-face roster is larger than you can memorize, ask students to complete the name pronunciation activity created by CATL to help instructors with names. In face-to-face classes, also consider having students create name tents that they can pull out for class use. These small steps show that you care about making students feel comfortable in class, and they help students learn the names of their peers as well.

Engagement is Key for Student Success

There are no silver bullets for engagement, but hopefully there are a few things on this list that you can consider adding to your teaching practices. And the truth is, engagement matters. According to Miller in “The Value of Being Seen: Faculty-Student Relationships as the Cornerstone of Postsecondary Learning,” engaged students experience more academic success and have higher persistence rates. Keeping our students engaged gives them the best chance at success.

References

Cohen, E., & Viola, J. (2022). The role of pedagogy and the curriculum in university students’ sense of belonging. Journal of University Teaching & Learning Practice, 19(4), 1–17.

Felten, P., & Lambert, L. M. (2020). Relationship-Rich Education: How Human Connections Drive Success in College. Johns Hopkins University Press.

Hammond, Z. (2018, June 18). Culturally Responsive Teachers Create Counter Narratives for Students. Valinda Kimmel. Retrieved September 12, 2023, from valinda.kimmel.com

Lu, A. (2023, February 17). Everyone Is Talking About "Belonging," but What Does It Really Mean? Chronicle of Higher Education, 69(12), 1–6.

Miller, K. E. (2020). The Value of Being Seen: Faculty-Student Relationships as the Cornerstone of Postsecondary Learning. Transformative Dialogues: Teaching & Learning Journal, 13(1), 100–104.

Sciutto, M. J. (1995). Student-centered methods for decreasing anxiety and increasing interest level in undergraduate statistics courses. Journal of Instructional Psychology, 22(3), 277.

Importable Canvas Resources on Canvas Commons

CATL has created several Canvas resources that UWGB instructors can import directly into their Canvas courses through the Canvas Commons. To import any of the following resources into your course, access Commons from the global navigation menu while signed into Canvas, and search for the resource by its title below. You can import the resource directly into your course(s) right from Commons. For full instructions, please see the KnowledgeBase guide Canvas (Instructors) – Importing a Resource from Canvas Commons.

  • UWGB Student Resource Module – This importable module provides information to students on how to use Canvas and how to get help from student support services at UW-Green Bay.
  • UWGB Name Pronunciation Recording Assignment – This importable assignment guides students through the process of making a very brief audio or video recording of themselves pronouncing their own name, saving that recording to their Kaltura My Media library, and then adding a share link to that recording to their Canvas user profile’s “Links” section. Students and instructors can then access each other’s profiles through the People page or discussions to listen to each other’s name recordings and learn how to pronounce each other’s names.

Dispelling Common Instructor Misconceptions about AI

Staying updated on the rapidly evolving world of generative artificial intelligence (GAI) can be challenging, especially with new information and advancements seemingly happening in rapid succession. As tools like ChatGPT have taken the world by storm, many educators have developed divergent (and strong!) views about these technologies. It can be easy to get swept up in the hype or the doom and gloom of the media storm – overselling or underselling these technologies drives clicks, after all – but it also leads to the spread of misinformation as we try to cope with all the change.

In a previous blog post, we introduced generative AI technologies, their capabilities, and potential implications for higher education. Now, in this post, we will dig deeper into some important considerations regarding AI by exploring common misconceptions that some instructors may hold. While some educators are enthusiastic about incorporating AI into their teaching methodologies, others may harbor doubts, apprehensions, or simply lack interest in exploring these tools. Regardless of one’s stance, it is crucial that we all develop an understanding of how these technologies work so we can have healthy and productive conversations about GAI’s place in higher education.

Misconception #1: GAI is not relevant either to my discipline or to my work.

Reality: GAI is already integrated into many of the tools we use daily and will continue to become more prevalent in our work as technology evolves. 

Whether we teach nursing, accounting, chemistry, or writing, we use tools like personal computers, email, and the internet nearly every day. Generative AI is proving to be much the same, and companies like Google, Microsoft, and Meta are already integrating it into many of the tools we already use. Google now provides AI-generated summaries at the top of search results. Microsoft Teams offers a feature for recapping meetings using GAI and is experimenting with GAI-powered analytics tools in Excel and Word. Meta has integrated AI into the search bar of Instagram and Facebook. Canvas may have some upcoming AI integrations as well. Some of us may wish to put the genie back in the bottle, but this technology is not going away.

Misconception #2: The content that GAI produces is not very good, so I don’t have to worry about it.

Reality: GAI outputs will continue to evolve, improve, and become harder to discern from human-created content.

A lot of time, energy, and money is being invested in generative AI, which means we can expect AI-generated content to continue to advance rapidly. In fact, many GAI tools are designed to continually progress and improve upon previous models. Although identifying some AI-generated content may be easy now, we should assume that this will only become more difficult as the technology evolves and becomes better at mimicking human-created content. Generative AI tools have been described as a "C average" student, but with additional development and thoughtful prompting, they may be capable of A-level work.

Misconception #3: I don’t plan on using AI in my courses, so I don’t need to learn about it or talk about it with my students or colleagues.

Reality: All instructors should engage in dialogue on the impact of AI in education and/or in their field.

Even if you don’t plan on using AI in your courses, it is still important to learn about these technologies and consider their impact on your discipline and higher education. Consider discussing AI technology and its implications with your department, colleagues, and students. In what ways will generative AI tools change the nature of learning outcomes and even careers in your discipline? How are other instructors responding? In what ways can instructors support each other as they each grapple with these questions?

Not sure where to start? Use CATL's checklist for assessing the impact of generative AI on your course to understand how this technology might affect your students and learning outcomes, whether or not you plan to use AI in your courses.

Misconception #4: I’m permitting/prohibiting all AI use in my course, so I don’t need to provide further instructions for my students.

Reality: All instructors should clearly outline expectations for students’ use/non-use of AI in the course syllabus and assignment directions.

Whether you have a "red-light," "yellow-light," or "green-light" approach to AI use in your class, it is important to provide students with clear expectations and guidelines. Be specific in your syllabi and assignment descriptions about where and when you will allow or prohibit the use of these tools or features. Make sure your guidelines are consistent with official guidance from the Universities of Wisconsin and UW-Green Bay, communications from our Provost's Office, and any additional recommendations from your chair or dean. CATL has developed syllabus snippets on generative AI usage that you are welcome to use, adapt, or borrow from for inspiration. Be as transparent as possible, and encourage students to check with you if they cannot find affirmative permission to use GAI in a specific way.

Misconception #5: All my students are already using AI and know how it works.

Reality: Many students do not have much experience with this technology yet and will need guidance on how to use it effectively and ethically. Students also have inequitable access.

While there is certainly a growing number of students who have started experimenting with GAI, instructors may be surprised at how many students have little or no experience with these tools. Even when students do have experience using GAI, we cannot assume that they understand how to use it effectively or know when its use is ethically problematic. Furthermore, access is uneven: some students have high-speed internet, a personal computer, and paid access to their favorite GAI tool, while others may have spotty or no web access and may rely on a cell phone as their only means of working on a course.

If you are permitting students to use GAI tools in your class, provide them with guidance on how they can partner with these tools to meet course outcomes, rather than using them as a shortcut for critical thinking. Encourage students to analyze the outputs produced by GAI and make assessments about where these tools are useful and where they fall short (e.g., Are the outputs accurate? Are they specific and relevant? What may be missing?). Classes should also engage in discussions about the importance of citing or disclosing the use of AI. UWGB’s librarians are a great resource if you would like help developing a lesson plan around information literacy, GAI “hallucinations,” or GAI citations in specific styles, such as APA. In terms of equitable access to GAI, while it may not be possible to control for all variables, one way you can help level the playing field is by having your students use Microsoft Copilot through their UWGB accounts. You could also have them document how they have used the tool (e.g., what prompts they used).

Misconception #6: If I use AI-generated content in my courses, I am not responsible for inaccuracies in the output.

Reality: If you use AI-generated content to develop your courses, you are ultimately responsible for verifying the accuracy of the information and providing credible sources.

GAI is prone to mistakes; therefore, it is up to human authors and editors to take responsibility for content generated in part or in whole by AI. Exercise caution when using GAI tools, because the information they provide may not always be accurate. GAI developers like OpenAI are upfront about GAI's potential to hallucinate, so it's best to vet outputs against trusted sources. Also watch out for potential bias in outputs, as these tools are trained on human-generated data that can contain biases. If you use GAI to develop course materials, you should disclose or cite that usage in the same format your students would use. It is also best practice to discuss these issues with students: they are ultimately responsible for the content they submit, and they should know, for example, that GAI grading that appears "unbiased" actually carries the biases of those who trained it.

Misconception #7: I can rely on AI detection tools to catch students who are using GAI inappropriately.

Reality: AI detection tools are unreliable, subject to bias, and provide no meaningful evidence for cases of academic dishonesty.

As research continues to come out about AI detectors, one thing is certain: they are unreliable at best. AI writing can easily fly under the radar with careful prompting (e.g., “write like a college sophomore and vary the sentence length” or “write like these examples”). Even more concerning is the bias present in AI detection, such as the disproportionally high rate of false positives for human writing by non-native English writers. And unlike plagiarism detection, which is easy to verify and understand, the process of AI detection is a black box – instructors receive a score, but not a rationale for how the tool made its assessment. These different concerns have led many universities to ban their use entirely.

Instructors are encouraged to consider ways of fostering academic integrity and critical thinking rather than trying to police student behavior with AI detectors. If you'd still like to try an AI detection tool, know that its reports are not enough to constitute evidence of academic misconduct and should be treated only as a signal that additional review may be necessary. In most cases, the logical next step is an open, non-confrontational conversation with the student to learn more about their thought process and any tools they may have used. Think, too, about the potential consequences of falsely accusing a student of academic misconduct: the threat of failing an assignment, or even a course, could damage a student's trust in you or their department, jeopardize a scholarship that keeps them in school, and so on. The unreliability and opacity of AI detection can raise anxiety even among students who are not engaging in academic misconduct.

Misconception #8: I can input any information into an AI tool as long as it is relevant to my job duties.

Reality: Instructors need to exercise caution when handling student data to avoid violating UWGB policy and federal law (e.g., privacy laws such as FERPA).

Many GAI tools are trained on user inputs, so we must exercise caution when considering what information is appropriate to use in a prompt. Even when a product claims that it doesn't retain prompt information, there is still potential for data breaches or bugs that inadvertently put users' data at risk. It is crucial that you never put students' personally identifiable information (PII) into an AI-powered tool, as this may violate the Family Educational Rights and Privacy Act (FERPA). The same goes for work emails and documents that may contain sensitive information.

Misconception #9: AI advancement means the end of professors/teaching/higher education.

Reality: AI has many potential applications related to education, but CATL does not see them replacing human-led instruction.

Don't get caught up in the doom and gloom. Although the capabilities of generative AI can seem scary or worrying at first, that is a natural reaction to any major technological breakthrough. Education has weathered many technological shifts in the past, from the calculator to the internet, and has adapted and evolved alongside these technologies. It will take some time for higher education to embrace AI, but we can do our part by continuing to learn more about these technologies and asking important questions about their long-term impacts. Do you have questions or concerns about how AI will impact your course materials and assessments? Schedule a consultation with us – CATL is here to help!

Scaffolding for Online Learning

As the end of the semester approaches and you begin to review the curricular structure of your courses, you may recognize the need for more robust scaffolding in your online course design. Before reviewing and modifying your course, it is important to know what scaffolding is and why it matters for student learning. Scaffolding, as EdGlossary defines it, refers to "a variety of instructional techniques used to move students progressively toward stronger understanding and, ultimately, greater independence in the learning process." In short, the goal of scaffolding is to give students building blocks of learning that lead to better retention and acquisition of knowledge.

The most common place to start with scaffolding, and one where it can have a significant impact, is larger assignments or assessments. A good rule of thumb is to begin with the tasks that take a significant portion of students' time and energy. Breaking an assessment into smaller subtasks creates natural checkpoints for students to gauge their understanding. It also gives you, the instructor, insight into how their knowledge acquisition is going, and allows you to alter course slightly if the learning is not going as first imagined – check out CATL's blog post on "small teaching" for more information on that topic.

For example, if you are asking students to produce a final essay project, you could create a scaffolded or sequenced set of checkpoints that builds toward the final product. The University of Michigan's Center for Writing offers a comprehensive breakdown of this sequencing:

  1. Pre-Writing: including proposals, work-in-progress presentations, and research summaries
  2. Writing: including counterarguments, notes, and drafts
  3. Revision: including peer reviews, conferences, and revision plans

Introducing any of these steps in an online environment requires intentionality and planning to keep students highly engaged throughout the process. As students revise their papers, scheduling individual conferences, peer reviews (via online conferences, social annotation with Hypothesis, or Canvas), and revision plans can all provide beneficial steps in a scaffolded approach to a final essay project. To ensure that students understand what is required of them, be certain that you answer critical questions such as:

  • How will students know that they have completed the required steps, and how will they know they have done so satisfactorily?
  • How will you make the connections between the scaffolded activities and the end product clear as students progress systematically through the course?
  • Have you clearly identified opportunities for students, particularly in the online modality, to get together remotely for feedback, thought-partnering, and/or review?

Another version of scaffolding in the online modality has to do with structuring how students gain an understanding of the content. The University at Buffalo's Office of Curriculum, Assessment, and Teaching Transformation takes the Gradual Release of Responsibility (GRR) model and applies it both in a standard classroom and in a "flipped classroom" environment. The GRR model follows an "I Do," "We Do," "You Do" framework that is very popular in educational scaffolding. This framework could be centered on a larger assignment or exam, but it does not need to be; it can also be used to break down a larger concept for students. See how this model could be applied in a chemistry lesson on intramolecular forces:

  1. "I Do" – The instructor creates an introductory lesson on intramolecular forces and discusses the types of bonds that atoms can form (ionic, covalent, etc.). The instructor then shows examples of these types of bonds using different atom types via the medium of their choice.
  2. "We Do" – This portion of the scaffolding could take place between students, working in pairs or small groups to identify the different types of bonds and provide examples of each. It could also include meeting with the instructor via Teams or Zoom, or a discussion that provides a more "guided" approach to the concepts.
  3. "You Do" – Students work on their own to display the learning they have gathered on the topic. This could be done with a written assignment, discussion board post, low-stakes quiz, or any other way the instructor chooses to assess students' acquisition of knowledge.

These are just a couple of examples of how you can integrate scaffolding into your course content for online learning. The critical aspect of scaffolding is the purposeful chunking and segmenting of complex concepts and activities to support comprehensive knowledge acquisition. Keep in mind that any scaffolding should remain aligned with course expectations and learning outcomes; students will be more successful when scaffolding is applied consistently across the course as a whole.

If you would like to learn more about how to use scaffolding for online learning in your own course or have examples of how you are already using it, we’d love to hear from you! Feel free to contact the CATL office by email (CATL@uwgb.edu) to let us know where you’ve found success with these strategies, or to schedule a consultation with us.