What Can We Learn from End-of-Course Evaluations?


No matter how much we debate the issue, end-of-course evaluations count. How much they count is a matter of perspective. They matter if you care about teaching. They frustrate you when you try to figure out what they mean. They haven't changed, and they're still regularly administered in ways at odds with research-recommended practices. And faculty aren't happy with the feedback they provide. A survey of biology faculty from a wide range of institutions (Brickman et al., 2016) found that 41% were not satisfied with the current official end-of-course student evaluations at their institutions, and another 46% were only satisfied "in some ways."

But are these approaches to assessing teaching likely to go away any time soon? I’m not feeling the winds of change. For that reason, I’d like to use this post to suggest several ways faculty can work around and move beyond end-of-course ratings.

A good place to start is with how we orient toward the feedback provided by these summative assessments, and for this there's literature to help. Golding and Adam (2016) used focus groups to explore how award-winning teachers approached the feedback provided on student evaluations. Among a number of findings, these faculty talked about an improvement mindset: always confronting themselves with how they could improve, always being on the lookout for ways to increase student learning, and always accepting that no matter how high (or low) the scores, improvement is an option. Hodges and Stanton (2007) looked at a collection of common student complaints (e.g., "Problems on the exam weren't like the ones done in class") for what they indicated about the intellectual challenges faced by novice learners. Gallagher (2000) received a set of low ratings. After some rationalizing and blaming, he decided to see whether he could learn something from the feedback. Reading the comments through this new lens, he saw that they could be used to improve his teaching.

The global judgments frequently offered by end-of-course ratings (how does this instructor compare with all others on the planet?) should be viewed as a place to start. Rather than offering answers, they can be used to raise questions: "What am I doing that's causing students to view my teaching this way?" Such questions need to lead us to specific, concrete behaviors: things teachers are or aren't doing. The Teaching Practices Inventory developed by Wieman and Gilbert (2014) is a great place to start acquiring this detailed, nuts-and-bolts understanding of one's instructional practice. It was developed for use in science and math courses, but slight adjustments can make it relevant in many other disciplines.

The Brickman et al. (2016) study of biology faculty also asked what kinds of instructional feedback they thought they needed. The faculty reported that they value the feedback peers could provide, but they rarely receive it. Classroom observations conducted for promotion and tenure were seen more as rubber stamps than as real opportunities for critical analysis of teaching. Classroom observations can do much more, as two recently developed instruments (COPUS and PORTAAL; see references) demonstrate. COPUS records what the teacher and students are doing at regular time intervals, and PORTAAL provides observational feedback on the use of 21 active learning elements with demonstrated positive effects on learning. A colleague, even one from another discipline, can also observe a session not to judge it but to experience it as a student does: When was it easy to understand? What examples made sense? When was it confusing? What questions should have been asked?

We also can obtain more useful input from students. We need to ask for feedback in the middle of the course, when there's still time to make changes and students feel they have a stake in the action. We need to provide ground rules that give students the opportunity to practice the principles of constructive feedback. And we need to ask more specific questions formatted in different ways. Hoon et al. (2015) showed that even the simple start-stop-continue format (asking students what you should start doing, what you should stop doing, and what you should continue doing) improved the quality of student feedback, as did Veeck et al. (2016) with collaborative online evaluations. Finally, we need to close the loop by talking about what we've learned from the feedback, what we've decided to change, and what will remain the same.

Brickman et al. wrote, “Our findings reveal a large, unmet desire for greater guidance and assessment data to inform pedagogical decision making” (p. 1). This post illustrates some things faculty can do about that.

References

Brickman, P., Gormally, C., and Martella, A. M. (2016). Making the grade: Using instructional feedback and evaluation to inspire evidence-based teaching. Cell Biology Education, 15 (1), 1-14.

Eddy, S. L., Converse, M., and Wenderoth, M. P. (2015). PORTAAL: A classroom observation tool assessing evidence-based teaching practices for active learning in large science, technology, engineering, and mathematics classes. Cell Biology Education, 14 (Summer), 1-16.

Gallagher, T. J. (2000). Embracing student evaluations of teaching: A case study. Teaching Sociology, 28, 140-146.

Golding, C., and Adam, L. (2016). Evaluate to improve: Useful approaches to student evaluation. Assessment & Evaluation in Higher Education, 41 (1), 1-14.

Gormally, C., Evans, M., and Brickman, P. (2014). Feedback about teaching in higher ed: Neglected opportunities to promote change. Cell Biology Education, 13 (Summer), 187-199.

Hodges, L. C., and Stanton, K. (2007). Translating comments on student evaluations into the language of learning. Innovative Higher Education, 31, 279-286.

Hoon, A., Oliver, E., Szpakowska, K., and Newton, P. (2015). Use of the Stop, Start, Continue method is associated with the production of constructive qualitative feedback by students in higher education. Assessment & Evaluation in Higher Education, 40 (5), 755-767.

Smith, M. K., Jones, F. H. M., Gilbert, S. L., and Wieman, C. E. (2013). The classroom observation protocol for undergraduate STEM (COPUS): A new instrument to characterize university STEM classroom practices. Cell Biology Education, 12 (Winter), 618-625.

Veeck, A., O'Reilly, K., MacMillan, A., and Yu, H. (2016). The use of collaborative midterm student evaluations to provide actionable results. Journal of Marketing Education, 38 (3), 157-169.

Wieman, C., and Gilbert, S. (2014). The teaching practices inventory: A new tool for characterizing college and university teaching in mathematics and science. Cell Biology Education, 13 (Fall), 552-569.

© Magna Publications. All rights reserved.

This Post Has 3 Comments

  1. Stephen Davis

    I grew up in the Anthracite Coal district of Pennsylvania, where they did strip mining. Almost 95% of what they mined was not useful, but the 5% that was proved valuable enough to be worth all the effort. Thus, teaching evaluations by students! You actually have to mine them for the golden nuggets, and it's a big job, but it pays big dividends. See a study I did at: https://ohio.app.box.com/s/1cswsp2utfys3gses5a82b8cnw56ki29

  2. Al Beitz

    Teaching evaluations are in many cases a necessary evil and are used by department chairs and deans to help determine merit as well as promotion and tenure for individual faculty. And student-based evaluations typically fail to measure teaching effectiveness or student learning. But beyond that, they can in fact assist in improving teaching and student learning, as the article implies. I find that doing a CIQ (Brookfield's critical incident questionnaire) four to six weeks into the class is very helpful in determining what students like and dislike about my course. I have also used student focus groups near the end of the semester to obtain honest feedback on what worked well and what didn't in my course. Peer review is another often overlooked practice that assists faculty in improving their teaching and pedagogy. Effective peer review will only happen if a department or college culture develops that strongly encourages peer review of teaching. When I was a department chair, we developed a system in which all faculty were required to undergo peer review every 2-3 years. Faculty had to undergo training to perform effective peer reviews. This system is still in place and is incorporated into our annual reports and our merit review process. I would encourage junior faculty not to be disheartened by average student evaluations but to use them as guides toward course improvement.

  3. Jason

    My college administers student reviews in the two weeks following mid-terms, so students should have a good idea of their current course standing yet still feel as though their feedback can influence the remainder of the course. I have found that at other schools, where evaluations are left until nearer the end of the semester, few students feel the social responsibility to care about future classes or their teacher's development; they just want to move on. Timing and relevance can make a huge difference in the quality of these surveys.
