I’m “reflecting” a lot these days. My tenure review is a few months away, and it’s time for me to prove (in one fell swoop) that my students are learning. The complexity of this testimonial overwhelms me because in the context of the classroom experience, there are multiple sources of data and no clear-cut formula for truth.
One of the courses I teach is EDUC 340, Methods of Inclusion. The cornerstones of this 10-week class are differentiated instruction, universal design for learning, and the special education procedures/services needed to meet a variety of student needs. Embedded in this instructional blueprint are a multitude of professional teaching standards that encompass a variety of knowledge, skills, and dispositions. My students begin the course with very little knowledge about disabilities. Therefore, in some respects, it is not too difficult to provide evidence that they have learned. So when the “rubber met the road” during my first pre-tenure review, I was able to provide more than ample evidence of student learning. Pre- and posttests were supported with portfolio assessments, reflection papers, group projects, and the usual course evaluations. Together, these instruments assured the faculty review committee that there was a defensible connection between my teaching and students’ learning.
As I prepare for my second pre-tenure review, I want to be even more deliberate and thoughtful in my assessment of teaching and learning. The retrospective pretest-posttest, developed by Campbell and Stanley in 1963, fulfilled this objective. Unlike the traditional pre- and posttest design (where students answer questions about the content on the first day of class and then answer those same questions at the end of the course), this method asks students on the last day of class to look back and numerically rate their understanding of a particular course objective as it was when the course began, and then to rate their understanding of that same objective now, at the end of the course. The difference between these two self-reported scores can be taken as an index of their learning.
Identifying the most salient objectives of EDUC 340 was a daunting task, but I was able to settle on a set of 27 learning targets that address the core principles of the course. (An example of one of these indicators is, “Students will understand the areas of exceptionality in learning as defined in the Individuals with Disabilities Education Act.”) I then formatted these targets across two columns: a “Before I took EDUC 340” column and an “After I took EDUC 340” column. For each of the 27 learning targets, I asked students to rate their understanding on a 7-point Likert scale in both columns, making any perceived change in understanding visible. The seven response options ranged from No Knowledge (1) to Confident Understanding (7). This scale encouraged students to think about their progress (or lack of it), and it allowed me to gauge the extent to which learning had occurred.
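For anyone who wants to move the completed forms into a spreadsheet or a short script, the scoring idea is simple: subtract each “before” rating from its “after” rating. Below is a minimal sketch in Python; the student labels, target names, and ratings are invented for illustration and are not drawn from my class.

```python
# A minimal sketch of retrospective pretest-posttest scoring.
# Each student rates every learning target twice on the 1-7 scale:
# once for "Before I took EDUC 340" and once for "After I took EDUC 340".
# The gain for a target is simply the "after" rating minus the "before" rating.

# Hypothetical responses: {student: {learning target: (before, after)}}
responses = {
    "student_01": {"IDEA areas of exceptionality": (1, 6), "Universal design for learning": (2, 7)},
    "student_02": {"IDEA areas of exceptionality": (2, 5), "Universal design for learning": (1, 6)},
}

def gain_scores(ratings):
    """Return the after-minus-before gain for each learning target."""
    return {target: after - before for target, (before, after) in ratings.items()}

for student, ratings in responses.items():
    print(student, gain_scores(ratings))
```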
The data this assessment generates enables me to plan better for the next class. It helps me craft assignments, build and revise content, and evaluate individual and whole-group progress. If I use the assessment regularly, I can compare different classes. I can also look at individual learning targets and correlate them with the effectiveness of specific assignments and/or teaching strategies. In effect, a retrospective pretest-posttest data set lends itself to a multitude of inquiries that ultimately answer the questions “Did I help my students learn?” and, if so, “What was it that they actually learned?”
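As one illustration of that kind of inquiry, the sketch below (again in Python, with entirely hypothetical gain scores) averages the gains for each learning target within a class and then sets two offerings of the course side by side, which is roughly how a target-by-target comparison across terms might proceed.

```python
from statistics import mean

# Hypothetical after-minus-before gains per learning target for two offerings
# of the course; none of these numbers come from the article.
fall_gains = {
    "IDEA areas of exceptionality": [5, 3, 4],
    "Universal design for learning": [5, 5, 4],
}
spring_gains = {
    "IDEA areas of exceptionality": [4, 4, 5],
    "Universal design for learning": [3, 4, 4],
}

def mean_gain_by_target(class_gains):
    """Average the self-reported gains for each learning target across one class."""
    return {target: mean(scores) for target, scores in class_gains.items()}

# A target-by-target comparison of two terms can hint at whether a revised
# assignment or teaching strategy moved a particular learning target.
fall_summary = mean_gain_by_target(fall_gains)
spring_summary = mean_gain_by_target(spring_gains)
for target in fall_summary:
    print(f"{target}: fall {fall_summary[target]:.1f} vs. spring {spring_summary[target]:.1f}")
```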
Aside from its contextual relevance, this assessment is also easy and efficient to administer. It doesn’t take much class time, and the format is flexible: I can add, delete, or revise the learning targets I include on the form. Similarly, if a new topic emerges in a particular class, I can include it on the assessment form.
Student learning can and should be assessed in a variety of ways. So by no means am I marginalizing other end-of-course assessments or assuming a kind of simplicity that doesn’t exist. There are advantages and disadvantages to any method we choose to assess learning. What is especially valuable about assessment is the way it opens the door to our most important educational work—the work of student learning. I have found that the retrospective pre- and posttest assessment opens the door wide, and for this reason I am motivated to share it.
Dr. Deborah Bracke is an assistant professor at Augustana College in Illinois.
Reprinted from The Teaching Professor, 27.2 (2013): 1. © Magna Publications