In last week’s post, we looked at a sample of the discipline-based evidence in support of quizzes, with the goal of better understanding what it means to say that an instructional practice is evidence-based. We are using quizzes as the example, but this type of exploration could and should be done with any number of instructional practices.
Not all the evidence that supports the use of quizzes comes from our discipline-based research. There’s more to the story, and much of it can be found in cognitive psychology, where researchers have studied the “testing effect,” a phenomenon “in which individuals remember tested material better than material they have merely reviewed” (Nguyen and McDaniel, p. 87; see Roediger and Karpicke for a review of this research). Based on experiences in our classrooms, that finding is not surprising.
Testing (like what happens with quizzes) works because it forces retrieval: students must search for or recall the information they have learned. Research documents that retrieval practice solidifies learning. The more times you look for something, the easier it is to find, and the more strongly it’s embedded in your knowledge structures.
Cognitive psychologists have also verified the value of “distributed practice”: those frequent and often brief study sessions. Studying regularly, as opposed to cramming, produces learning gains, most often measured as improved test scores. Survey evidence that supports quizzing includes multiple student reports that regular quizzes prompt them to study more between tests. And then there are the benefits derived from a sort of shuffled review known as “interleaving” (written about in a recent post). If quizzes include content from various parts of the course, and not always in the order the content was presented, that may also result in improved test scores.
There is an important caveat with cognitive psychology research on learning: most of it has been done in labs or simulated classrooms. That’s justified because the researchers need to control variables so that the results can be reliably tied to the treatment, something those collecting data in actual classrooms can’t always control. But as Nguyen and McDaniel point out, this means some of the testing conditions aren’t analogous to what happens in actual classrooms. For example, researchers don’t always use educationally relevant materials. They use things like word lists or paired associates, and they often employ identical or very similar quiz and test questions, something most teachers don’t do. Plus, the time between studying and testing in the lab tends to be much shorter than it is in college courses.
But cognitive psychology research has gotten into some areas unexplored (as far as I can tell) in our discipline-based research. For example, Nguyen and McDaniel have looked at the relationship between the questions on quizzes and the questions on exams. They found that when quiz and exam questions addressed the same concept, the testing effect occurred. But when there was no coordination between quiz and test questions, the use of quizzes did not improve exam scores. “The take-home message is that when quiz and test items are haphazardly sampled, teachers must be cautious in assuming that testing will confer benefits for exam performance” (p. 89).
The evidence on quizzing that we’ve looked at in these two posts illustrates the complexity hidden behind this seemingly simple, straightforward descriptor. Yes, there’s evidence that supports the use of quizzes. Is there enough to call them evidence-based and recommend that faculty use them? For this and many other instructional practices, we haven’t grappled with how much evidence is enough. If we assume there’s enough, we need to do so recognizing that we have only just begun figuring out which features of quizzing most reliably predict better exam scores. And at this point, we also don’t know whether quizzes work better with some kinds of content, or whether quizzes result in better exam scores for certain kinds of students. Finally, unless individual teachers have examined their own use of quizzes, they can’t claim with certainty that they’re accruing the evidence-based benefits reported by others.
The interest in making instruction more evidence-based is laudable and long overdue. But easy labels can hide layers of complexity, for quizzes and many other instructional practices.
References:
Nguyen, K., & McDaniel, M. A. (2014). Using quizzing to assist student learning in the classroom: The good, the bad, and the ugly. Teaching of Psychology, 42(1), 87–92.
Roediger, H. L., III, & Karpicke, J. D. (2006). The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1(3), 181–210.
Comments:
[I probably should have posted on a previous blog – but was traveling.]
A practice I have found extremely useful is repeated cumulative quizzes. The idea is that you begin by determining the 10–20 key concepts in the course as a whole, and then, as each concept is introduced, you add it to the quiz. So at the beginning of session two of the course you quiz the students on concepts A and B, which were presented in session one. At the beginning of session three you quiz on concepts C and D from session two, but ALSO concepts A and B from session one. At the beginning of session four the quiz comprises concepts A and B (session one), C and D (session two), and E and F (session three).
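To make that schedule concrete, here is a minimal sketch in Python. The concept labels (A–F) and the two-concepts-per-session pacing are placeholders for illustration, not something prescribed by the practice itself:

```python
# Sketch of a repeated cumulative quiz schedule: each session introduces
# new concepts, and every later quiz covers all concepts introduced so far.

# Concepts introduced in each session (hypothetical labels for illustration).
concepts_by_session = {
    1: ["A", "B"],
    2: ["C", "D"],
    3: ["E", "F"],
}

def quiz_for_session(session, concepts_by_session):
    """Return the concepts quizzed at the start of `session`:
    everything introduced in all earlier sessions."""
    quizzed = []
    for earlier in sorted(concepts_by_session):
        if earlier < session:
            quizzed.extend(concepts_by_session[earlier])
    return quizzed

for session in range(2, 5):
    print(f"Session {session} quiz: {quiz_for_session(session, concepts_by_session)}")
# Session 2 quiz: ['A', 'B']
# Session 3 quiz: ['A', 'B', 'C', 'D']
# Session 4 quiz: ['A', 'B', 'C', 'D', 'E', 'F']
```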
When I first began this I thought the long quizzes at the end would take too much class time, but then discovered that the students become so familiar with the material and the process that they can race through the quizzes.
What has encouraged me further is that I meet students 3-5 years later and they smile and start quoting off some of the concepts … and telling me how they have applied the concepts.
Here is a new meta-analysis on retrieval practice drawing on many classroom-based studies: http://journals.sagepub.com/doi/pdf/10.3102/0034654316689306