For many faculty, adding a new teaching strategy to our repertoire goes something like this. We hear about an approach or technique that sounds like a good idea. It addresses a specific instructional challenge or issue we’re having. It’s a unique fix, something new, a bit different, and best of all, it sounds workable. We can imagine ourselves doing it.
Let’s consider how this works with an example. Have you heard of exam wrappers? When exams are returned they come with a “wrapper” on which students are asked to reflect, usually on three areas related to exam performance:
- what study skills they used to prepare;
- what types of mistakes they made on the exam; and
- what modifications might improve their performance on the next test.
At a strategic moment, this technique confronts students with how they prepared and performed with an eye on the exams still to come. It’s an approach with the potential to develop metacognition—that reflective analysis and resultant insight needed to understand, in this case, what actions it takes to learn content and perform well on exams.
But is there any evidence that exam wrappers improve performance and promote metacognitive insight? For a lot of instructional strategies, we still rely on the instructor’s opinion. In the case of exam wrappers, however, we do have evidence, just not a lot of it, and the results are mixed. In the most recent study, one with a robust design, the wrappers didn’t work. Researchers Soicher and Gurung found that they didn’t change exam scores, final course grades, or scores on the Metacognitive Awareness Inventory, an empirically developed instrument that measures metacognition. Examples where exam wrappers did have positive outcomes are referenced in the study.
What instructors most want to know about any strategy is whether it works. Does it do what it’s supposed to do? We’d like the answer to be clear-cut. But in the case of exam wrappers, the evidence doesn’t tell us whether they’re a good idea or not. That’s frustrating, but it’s also a great example of how conflicting results lead to better questions, the ones likely to move us from a superficial to a deeper understanding of how different instructional strategies “work.”
What could explain the mixed results for exam wrappers? Does the desired outcome depend on whether students understand what they’re being asked to do? Students are used to doing what teachers tell them, pretty much without asking themselves or the teacher questions. As these researchers note, maybe students don’t “recognize the value or benefit of metacognitive skills” as they are intended to be developed by exam wrappers (p. 69).
Is effectiveness a function of how many exam wrappers a student completes? Would they be more effective if they were used in more than one course? Maybe the issue is what students write on the wrapper. Would it help if the teacher provided some feedback on what students write? In other words, the effectiveness of exam wrappers could be related to a set of activities that accompany them. Maybe they don’t work well if they’re just a stand-alone activity.
It could also be that students are doing exactly what we ask of them. For example, they see that they’re missing questions from the reading, so they write that they need to keep up with the reading and not wait until the night before the exam to start doing it. But despite these accurate assessments, they still don’t keep up with the reading. Students have been known to cling to ineffective study routines, even after repeated failure experiences.
There’s a lot more complexity than meets the eye with almost every instructional strategy. We’d love for them to be sure-fire fixes, supported by evidence and with predictably reliable outcomes. Unfortunately, how instructional strategies affect learning is anything but simple, and our thinking about them needs to reflect this complexity.
We could conclude that with mixed results and instructional contexts so variable, there’s no reason to look at the research or consider the evidence. Wrong! The value of these systematic explorations lies not so much in the findings as in their identification of details with the potential to make a difference. So, you may start by thinking that exam wrappers are a cool idea, but that’s not all you’ve got. Certain sets of conditions and factors made a difference when someone else used them and took a systematic look at their effects on learning. That means you can use them after making some purposeful decisions about the potentially relevant details.
Reference: Soicher, R. N., & Gurung, R. A. R. (2017). Do exam wrappers increase metacognition and performance? A single course intervention. Psychology Learning and Teaching, 16(1), 64–73.
Comments
I have been a proponent of exam wrappers in my faculty development work. I haven’t read the study, but one factor that might explain mixed results is how the wrappers are framed. In addition to the basic questions cited by Dr. Weimer, I recommend that faculty ask students to assess their own performance on the exam and then compare how they think they did with how they actually did. This comparison can lead to a discussion of preparation and study skills: students who performed better than expected can be guided back to their responses about preparation and study skills and encouraged to stick with those strategies. Students who did worse than expected can be counseled to re-evaluate how they prepare for an exam and to consider adopting the strategies that worked for the more successful students.
The degree to which a faculty member incorporates post-exam feedback and discussion based on the students’ exam wrapper responses is another factor that I believe could account for the mixed results. If the faculty member does not take time in class or with individual students to discuss the value of certain study strategies over others or to refer students back to previous reflections, then the exam wrapper strategy probably won’t have the desired effect.
I agree, sgjones. Good suggestions on how best to implement this metacognitive strategy for students.