Testing What You’re Teaching Without Teaching to the Test
Have your students ever told you that your tests are too hard? Tricky? Unfair? Many of us have heard these or similar comments. The conundrum
I’ve been rethinking my views on quizzing. I’m still not in favor of quizzes that rely on low-level questions where the right answer is a memorized detail, or of quizzing strategies whose primary motivation is punitive, such as forcing students to keep up with the reading. That kind of quizzing doesn’t motivate reading for the right reasons, and it doesn’t promote deep, lasting learning. But I keep discovering innovative ways faculty are using quizzes, and these practices rest on different premises. I thought I’d use this post to briefly share some of them.
Here are two frequently asked questions about exam review sessions: (1) Is it worth devoting class time to review, and (2) how do you get the students, rather than the teacher, to do the reviewing? Instead of answering those questions directly, I decided a more helpful response might be a set of activities that can make exam review sessions more effective.
It was just a passing comment in a student’s email reply to me concerning some questions I had raised on her most recent paper. She answered my inquiries and then basically thanked me for “grading with such grace.” This is not a word that I have ever associated with my grading. Tough? Yes. Thorough? You bet. But grace? Doesn’t that imply my being too easy? Had I given more credit than the student deserved?
The current state of student assessment in the classroom is mediocre, vague, and reprehensibly flawed. In much of higher education, we educators stake a moral high ground on positivistic academics. Case in point: assessment. We claim that our assessments within the classroom are objective, not subjective. After all, you wouldn’t stand in front of a class and say that your grading is subjective and that students should just deal with it, right? Can we honestly examine a written paper, or virtually any other assessment in our courses, and claim that we grade completely free of bias? Let’s put this idea to the test. Take one of your assessments previously completed by a student. Grade the assignment using your rubric. Afterwards, have another educator in the same discipline grade the assignment using your exact rubric. Do your colleague’s grade and yours match? How far apart are the two grades? If your assessment is truly objective, the grades should be identical. Not close, but exact. Anything else reduces the reliability of your assessment.
It was always the same scenario. I’d be feeling a great sense of accomplishment because I had spent hours grading a set of English papers—painstakingly labeling errors and writing helpful comments. Everything was crystal clear, and the class could now move on to the next assignment. Except it wasn’t, and we couldn’t. A few students would inevitably find their way to my office, plunk their papers down on my desk, and ask me to explain the grade. Something had to change. I knew exactly why I was assigning the grades, but I obviously needed to find a more effective way of communicating these reasons to my students.
With the academic year nearly over and final exams upon us, it’s a good time to consider how we assess student knowledge in our courses. Cumulative finals are still used in many courses, but a significant number of faculty have backed away from them because they are so unpopular with students, who strongly prefer exams that include only questions on content covered in the most recent unit or module.
In some types of assignments, it’s the process that’s more important than the product. Journals and online discussion exchanges, even homework problems, are good examples. Students are thinking and learning as they work to sort through ideas, apply content, or figure out how to solve problems. So what the student needs to get credit for is not the product, but the process. And the way most faculty make that determination is by deciding whether the student has made a good faith effort.
In “Calculating Final Course Grades: What About Dropping Scores or Offering a Replacement?” (The Teaching Professor March 2014), the editor notes that “some students … assume that course content is a breeze, [so] the first exam serve[s] as a wake-up call.” (p. 6) In two Introductory Psychology classes (150 students), I recently implemented an effective three-step strategy for getting the best out of such students (and, indeed, all students).
Instructors commonly cope with a missed test or a failed exam (the same approaches can apply to quizzes) by letting students drop their lowest score. Sometimes the lowest score is replaced by an extra exam or quiz. Sometimes the tests are worth different amounts, with the first test worth less, the second worth a bit more, and the third worth more than the first two, but not as much as the final.
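To make the arithmetic behind these two policies concrete, here is a minimal Python sketch. The scores and the 10/20/30/40 weight split below are hypothetical numbers invented for illustration; they are not taken from the article.

```python
# Illustrative comparison of two common grading policies.
# All scores and weights are hypothetical examples.

def drop_lowest(scores):
    """Average the scores after dropping the single lowest one."""
    kept = sorted(scores)[1:]  # discard the lowest score
    return sum(kept) / len(kept)

def graduated_weights(scores, weights):
    """Weighted average where later tests count more than earlier ones."""
    assert len(scores) == len(weights) and abs(sum(weights) - 1.0) < 1e-9
    return sum(s * w for s, w in zip(scores, weights))

scores = [62, 78, 85, 90]  # test 1, test 2, test 3, final (hypothetical)

# Policy 1: drop the lowest of the four scores.
print(drop_lowest(scores))  # (78 + 85 + 90) / 3 = 84.3

# Policy 2: first test worth least, final worth most (hypothetical split).
print(graduated_weights(scores, [0.10, 0.20, 0.30, 0.40]))  # 83.3
```

Under these made-up numbers, the two policies land within about a point of each other, but they send different messages: dropping a score forgives one bad day outright, while graduated weights let early stumbles count, just less.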