During a conversation about evidence-based teaching, a faculty member piped up with some enthusiasm and just a bit of pride, "I'm using an evidence-based strategy." He described an unusual testing structure and concluded, "There's a study that found it significantly raised exam scores." He shared the reference with me afterward, and it's a solid study—not exceptional, but good.
Here’s the rub: one study isn’t enough. If a piece of research contains an interesting approach (a lot of them do) and you think it’s something that might advance the learning of your students, by all means try it. Just don’t imagine that the results reported in the study are anything close to a guaranteed outcome.
We've got pedagogical research occurring in virtually every discipline. That's wonderful, but it's not without some serious issues. Studies done in classrooms by faculty are usually one-of-a-kind analyses. It's one version of an instructional strategy that likely has many permutations; it's being used with a particular kind of content at a particular type of institution, and its effects are being studied with a unique cohort of students. Unless it's a big, cross-disciplinary, cross-institution analysis, sweeping generalizations are not in order.
At this point, there's not much that's new under the pedagogical sun, which means there's very little being used that hasn't already been studied by somebody else. And this is social science research, so it doesn't usually advance in a linear fashion, with one study building on another and leading to a definitive conclusion. Results vary regularly, sometimes to the point of contradicting one another. Add to that the disciplinary location of research on teaching and learning. I recently wrote about how we do pedagogical work in our disciplinary silos, often unaware of, and sometimes uninterested in, related inquiries being done in other silos.
The disciplinary location of the work makes it very difficult, really pretty much impossible, to track down all the research on a given instructional strategy, technique, approach, or method. But there's a long road between everything that's been done on a topic and one study. We need to find ourselves somewhere in between. Meta-analyses help. They aggregate data across studies and use it to determine the relative magnitude of an effect. The challenge is that they aren't all that easy to read, and they focus on where the research needs to go next, not where the findings lead the practitioner. Qualitative reviews also help, but there aren't many of them. However, most journals won't publish a study if it doesn't include at least some review of the literature, and that provides a place to start. So there's always more than one study to put on the scale.
But there's also a need to weigh your own evidence—to systematically and critically look at the impact of new (and old, for that matter) instructional policies, practices, and behaviors. You can start with your own assessment—what effects are you seeing? You know your students, what they've done in the past, and what they're doing now. That positions you to make some solid judgments. But you can't stop there. You may have a vested interest in the success of what you've implemented. You may have an overly critical perspective. There's data that needs to be collected from students—not whether they liked something, but what impact it had on their efforts to learn. Did it help, hinder, motivate, or support? Students experience instruction firsthand. You need their feedback, but you can't stop there. Students also have biased perspectives. They have their own notions about what they think teachers ought to be doing, and they have been known to report that they learned when they didn't. So you also need some objective measures—tangible, observable outcomes best identified before implementation, not afterward. Are you seeing evidence in student work, a change in exam scores, increased interaction, better attendance? And this data collection needs to be ongoing. Once is not enough; it's as suspect as the one-study rationale.
Evidence-based teaching is a great thing—it’s long overdue. But we’ve got to be savvy about pedagogical evidence, holding it to high standards and weighing where the research findings and our own results register on the teaching-learning scale.
This Post Has One Comment
Thank you, Maryellen. This article "nails down" exactly the thoughts I've expressed over and over again. It seems that every idealistic idea that emerges about "how better to do it" is precisely as you described: social science research. (And there's a how-to book recommended for us to purchase.)
Perhaps our focus should be on brain research. Surely what cognitive scientists believe applies to the teaching and learning process should not be overlooked. In all my classes and subjects, we look at brain research, memory, memory transfer, and related topics as an integral part of the course.
If our students don't get information into their long-term memories, we can be assured they won't be retrieving our thoughtful lessons.