Getting More out of Exam Debriefs

Brief—that pretty much describes exam debriefs in many courses. The teacher goes over the most commonly missed questions, and the students can ask about answers but generally don’t. These kinds of debriefs don’t take up a lot of class time, but that’s about all that can be said for them. For some time now, I’ve been suggesting that students, not the teacher, should be correcting the wrong answers. The students are the ones who missed the questions.

As I continue to assemble a collection of resources on the learning potential inherent in testing events, my thinking about how we debrief exams is changing. A debrief can accomplish two important goals. First, it’s another opportunity for students to encounter content they haven’t yet learned. I know that raises the question of second chances. I don’t believe we should excuse students from the consequences of their study decisions, but test scores often reveal the absence of important knowledge or serious misunderstandings. I’ve decided that I’m willing to concede on the ethics of second chances if there’s an opportunity for students to redress content deficiencies.

Second, debriefs are an ideal time for students to confront the efficacy of the approaches they use to prepare for exams. Exams get students’ attention. With their exam scores in front of them, students are open to considering how those scores came to be.

Here are two strategies that illustrate how these goals can be accomplished without requiring extra class time. The March 2016 issue of the Teaching Professor newsletter describes a “two-stage testing” process whereby students take the exam and then correct their answers. Students can make corrections independently or in collaboration with others, either within class or outside of it. Since reading the article, I’ve learned about an earlier version of this strategy called “self-correcting” exams. Here’s how it worked in Montepare’s (2005) psychology class. Students took a typical multiple-choice exam. They put their answers on the exam and on a teacher-provided answer sheet. They turned in the answer sheet and took the exam home. They had until the next class period to change their answers. Both the in-class and at-home exams were scored. If the answer was right on both, it counted for two points. If it was right on one but not the other, the student earned one point. And if it was wrong on both, no points were awarded. In her article, Montepare answers a number of questions about the strategy: Can students cheat? Do they? Does this contribute to grade inflation? Do students come to the exam less prepared, knowing they’ll have that second chance?
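The point scheme is simple enough to sketch in code. The following Python snippet is illustrative only (the function and variable names are my own, not from the article): each question earns one point per attempt that matches the key, so an answer right on both attempts is worth two points, right on one is worth one, and wrong on both is worth zero.

```python
def score_self_correcting(key, in_class, take_home):
    """Score a self-correcting exam: 1 point per attempt matching the key.

    key       -- string of correct answers, e.g. "ABCD"
    in_class  -- student's original in-class answers
    take_home -- student's corrected take-home answers
    """
    total = 0
    for correct, first, second in zip(key, in_class, take_home):
        # True counts as 1, False as 0, so each question yields 0, 1, or 2.
        total += (first == correct) + (second == correct)
    return total


# Three questions: Q1 right both times (2), Q2 fixed at home (1),
# Q3 right in class but changed to a wrong answer at home (1).
print(score_self_correcting("ABC", "AXC", "ABX"))  # prints 6 - 2 = 4
```

Weighting variants (such as the 2:1 in-class/take-home ratio discussed in the comments below) would only require multiplying the two comparison terms by different weights.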

The most important question about this approach is whether it promotes learning. In two subsequent studies, students took two self-correcting exams and a cumulative final that was not self-corrected. Francis and Barnett (2012) report a “marginally significant interaction,” which they describe as a “relatively poor investment” given the time required to implement the strategy. Gruhn and Cheng (2014) report more positive results: students who had done the self-correcting activity performed better on the final than those who took the same final without it. They also confirmed Montepare’s observation that the approach benefits low-performing students.

The second strategy encourages students to conduct an exam analysis: they review the questions they missed to see whether there’s a pattern (are they missing questions for the same reason?). Then they write brief descriptions of how they studied and, based on that information, consider whether changes might better prepare them for the next exam. The instructor lists a number of study options and demonstrates a variety of them while teaching. After their analysis, students schedule a short meeting with the professor to discuss what they’ve learned. Approximately 50 percent of the students in a human anatomy course participated in this exam analysis, and those who did significantly improved their scores on the second exam. The exam debrief protocol students used is included in the article, and a more detailed discussion of the strategy appears in an upcoming issue of the newsletter.

Both of these approaches significantly change exam debrief experiences. Both challenge us to consider how we debrief exams and what we hope to accomplish with our post-exam analysis.

References:

Montepare, J. M. (2005). A self-correcting approach to multiple-choice tests. APS Observer, 18(10), 35–36.

Francis, A. L., & Barnett, J. (2012). The effect and implications of a “self-correcting” assessment procedure. Teaching of Psychology, 39(1), 38–41.

Gruhn, D., & Cheng, Y. (2014). A self-correcting approach to multiple-choice exams improves students’ learning. Teaching of Psychology, 41(4), 335–339.

Favero, T. G., & Hendricks, N. (2016). Student exam analysis (debriefing) promotes positive change in exam preparation and learning. Advances in Physiology Education, 40(3), 323–328.

© Magna Publications. All rights reserved.

Comments

  1. Jason

    Hello, I’d just like to say: fantastic thought! I typically practice the “brief debrief,” focusing on poorly answered questions and reviewing the specific content to ensure understanding before moving forward, and I often re-test the same material on subsequent tests. The goal is not to continuously penalize but to allow students another opportunity to get it right; too often, though, the question is again answered poorly.
    The obvious workload of creating new tests each year aside, has anyone entertained thoughts of using this two-stage approach? I think I may weight the in-class test versus the take-home attempt at a 2:1 ratio rather than the 1:1 described.
    And while I’m contemplating this approach, how would one find a solution for students with exam-writing accommodations who require assistive technologies? As a relatively new teacher in the college system, I appreciate any thoughts on improving traditional assessment and overall student learning. Thanks!