Course Evaluations: How Can We Improve Response Rates?

Shortly after 2000, higher education institutions started transitioning from paper and pencil student-rating forms to online systems. The online option has administrative efficiency and economics going for it. At this point, most course evaluations are being conducted online. Online rating systems have not only institutional advantages but also advantages for students: students can take as much (or little) time as they wish to complete the form, their anonymity is better preserved, and several studies have reported an increase in the number of qualitative comments when evaluations are offered online. Other studies document that overall course ratings remain the same or are slightly improved in the online format.

But not all the news is good. Response rates drop significantly when students do the ratings online, from 70–80% for paper-and-pencil forms to 50–60% online. A 2008 review of nine comparison studies reported that online response rates averaged 23% lower than those for traditional formats. These low response rates raise the issue of generalizability. What percentage of students in a course need to respond for the results to be representative? The answer depends on a number of variables, most notably class size. For a class of 20 students, one expert puts the minimum at 58%. As class size increases, the percentage drops. Despite some disagreement over the exact percentages, there is consensus that online response rates should be higher than they are right now.

Goodman, Anson, and Belcheir surveyed 678 faculty across a range of disciplines asking them to report how they were trying to boost online response rates. Among those surveyed, 13% reported that they did nothing to improve the rates and that, on average, 50% of their students completed the forms. Those who did something to encourage students to complete the evaluations generated response rates of 63%. The most common approaches faculty reported were the ones we’d expect. They reminded students to complete the forms, which upped the response rate to 61%, and they explained how the results helped them improve instruction, which bumped the rate up to 57%. But what improved response rates the most (roughly 22%) was to provide students with incentives.

The incentives were grouped as point-based or non-point-based and as individual or class-wide. Points were by far the most common incentive, used by 75% of the faculty who reported offering incentives. Points given for completing the evaluation ranged from 0.25% to 1% of the total grade. The most common class-based incentive involved setting a target response rate—say, 80%—and then rewarding the class if the target was reached. For example, students could use a crib card during the final (non-point-based) or received a designated number of bonus points (point-based). In an earlier blog post, I described an institutional incentive in which those who completed course evaluations got early access to their grades.

This study of incentives explored other interesting issues (which makes it worth reading), but I want to use the rest of today’s post to focus on using incentives to increase response rates. I can understand why faculty do it. Ratings are a part of promotion and tenure processes, and they affect adjunct employment and sometimes merit raises too, so I’m not interested in moral judgments about individuals who have decided to offer incentives.

But regardless of what we do in our courses, all of us need to think about the implications of the practice. What messages do we convey to students when we offer incentives to complete course evaluations? Does it change the value of the feedback? We also should consider why response rates are so low. Is it because once students reach the end of the course, they just want to be done and aren’t really interested in helping to improve the course for students who will take it after them? Or have they grown tired of all these course evaluations and don’t think their feedback makes any difference anyway?

Perhaps we all can agree that offering incentives to complete the evaluations doesn’t get students doing ratings for the right reason. Students should offer assessments because their instructors benefit from student feedback the same way students learn from teacher feedback. They should be doing ratings because reflecting about courses and teachers enables students to better understand themselves as learners. They should be doing these end-of-course evaluations because they believe the quality of their experiences in courses matters to the institution.

The bottom line question: Is there any way to get students doing ratings for the right reasons? Please, let’s have a conversation about this.

Reference: Goodman, J., Anson, R., & Belcheir, M. (2015). The effect of incentives and other instructor-driven strategies to increase online student evaluation response rates. Assessment & Evaluation in Higher Education, 40(7), 958–970.

© Magna Publications. All rights reserved.

This Post Has 18 Comments

  1. Akilah

    Our administration encourages us to set aside class time and ask students to do the evals on their phones. I usually do it when we’re in the lab the last week of class. No points or incentives needed then.

  2. Angela Daniels

    Some schools (not ours) do not allow students to receive a grade for the course until they complete the evaluation. That’s an incentive, but it seems pushy. I would guess that many students just click the same button over and over to get it done, and then the data are meaningless. Also, I notice that the only students who do the evals are those who either really LOVED the class or HATED it. So, again, the data are questionable in my mind. Allowing time in class in the last week does seem like a good idea, especially if you leave the room for the 5–10 minutes it takes and make sure each student has access to the evaluation. I think the old way (with paper forms) worked better because we did it while we had them in class.

  3. Aubree Evans

    The only thing I’ve found that actually works and is not coercion (like grades, extra credit, or peer pressure) is to ask students to bring their devices on a specific day, and set aside class time to complete the online evaluations on their devices in class. With the intention of getting more authentic results, I ask the students to reflect on specific activities and feedback during the semester in a reflective writing piece or small group discussion before doing the online evaluation. (See Deborah J. Merritt’s article, “Bias, the Brain, and Student Evaluations of Teaching.”)

  4. Walt Hamilton

    Has anyone tried mid-course evaluations? It gives the students some ownership in that they can hope for change while there is still time for them to benefit.

    1. Kerry Cantwell

      I have done some mid-course evals in my online classes to great success, but it’s an online class, so…
      I do want to try a mid-course eval in my seated classes, though, because it’s before you’ve lost the students who are not likely to stay until the end, and those are the students from whom I want the most feedback.

  5. JohnT

    I have mid-course discussions in my online courses that ask for feedback on the course so far. They’re not anonymous, but I get some good feedback, and it’s another way to give students some voice and engagement in the course. Most students remark that they aren’t usually asked for their feedback until the formal end-of-course evaluation.

  6. Abigail Smith

    When I was a student, I recognized the value of evaluations, but they were annoying. I usually did them, but I tend to be a more conscientious person than most, I think. When I skipped evals it was for two reasons: 1) I didn’t have the cognitive bandwidth at the moment to take that much time to answer a bunch of detailed questions, and/or 2) I was clued in to the politics of the department, and I knew that this teacher who’s been teaching this class for 10 years and has tenure isn’t going to give a damn what I think.

    There’s not much we can do about the second barrier, but the first barrier is easy to solve — make surveys short. I know researchers want granular data, but looking at a long page of 50+ questions is overwhelming! Your brain gets tired 1/4th of the way through, and you can’t really provide quality responses after that. Busy, exhausted students would be more likely to complete a shorter survey than a longer one.

    You could create a pool of questions that you want feedback on, and then each survey is randomly assigned just 5 questions, plus a space to write any comments. Students could be told up front that this survey will only take 1 minute, plus any time they want to spend writing comments.

    Seriously, limit it to 5 questions, and I’d be surprised if the response rates didn’t jump.

    1. Gardner Lepp PhD

      I agree. It’s too easy to keep adding questions about specific aspects. But no student wants to take a 50-question survey at the end of a course. Keep it short and more students will engage in the activity, and take it seriously.

  7. Anthony dayton

    It isn’t just the percentage of student responses, but the quality and validity of those responses, that should be of concern. As someone wrote earlier, most students who reply either loved or hated the course (or, more likely, the instructor). Some students use the opportunity to carry out a personal vendetta because they are angry. I’ve been downgraded by students to whom I gave a zero for plagiarism and by others because they were second-language students who wanted me to use different grading criteria. (Yes, the responses are anonymous, but sometimes it’s not difficult to determine the author of a specific response.) A question might ask if the instructor returned assignments in a timely fashion, or responded to your emails quickly, and the responses are generally excellent to good, with one or two stating “never” or “infrequently.” Given that I always return work in a timely fashion and always respond to email questions quickly, all of those responses really should be excellent. I’ll live with the “good,” but a student who responded on the negative side of the scale is not being honest. Anonymity seemed like a good idea to encourage students to respond, but it also empowers those who abuse the idea of evaluations. A recommendation is to ask students to provide a specific example, not just to fill in a bubble. Honest and fair evaluations become critical when they are tied to the way the institution deals with its instructors and to the effect these evaluations might have on their careers.

    1. Guest2

      Agreed, Anthony. I, too, believe we should not be held strictly to obviously biased evaluations, especially when our jobs depend on them. Plagiarism is epidemic and can subject a student to suspension. Comments by students who plagiarize should be excluded, in my humble opinion.

  8. Tom Moffitt

    Asking students to complete the evaluations, keeping them short and focused on the most relevant information, reminding students to do them, and explaining why in terms they can understand seems to work best. It is a process. Not all will participate, but those who do seem to provide the best feedback, both positive and negative.

    For future reference, we could ask students why they do not participate and what would encourage them to participate.

  9. Jonathan Coleman

    One of the things I’ve found to be most effective at improving participation rates is getting the faculty to support the philosophy behind course evaluations. Recently, I completed a project working with the faculty to redesign our questions and shift the focus from perceived teacher performance (a popularity contest) to best practices for student learning. We ask the faculty to take some time in class for course evaluations and to share why the evaluations are important to them and some changes they have made as a result. If students know the teachers are using the results to improve the learning experience, they are much more likely to participate, and the results are much more meaningful (not just rants).

  10. Dave Porter

    I think this is an important topic and appreciate Faculty Focus raising it. I agree with the conclusion of the essay, but am concerned that some of the early passages may reflect inaccurate assumptions regarding quantitative reasoning.

    Basically, any measure of a criterion (such as teaching effectiveness) can itself be evaluated on two distinctly different dimensions: validity and reliability. Both are important, but validity (i.e., accuracy) is perhaps the more critical. It is possible to have a measure that is very reliable but completely invalid. Conversely, any measure that is valid must have some reasonable level of reliability. I think much of the confusion in this essay stems from the lack of recognition and acknowledgement of this distinction.

    The claim that low response rates “should” be increased presumes that reliability is more important than validity. More responses mean that the sample estimates (i.e., the average scores from those who did respond to the survey) are more likely to approximate the population scores reflecting the attitudes of all members of the class (both those who completed the survey and those who did not respond). There is, however, one huge caveat to the claim that larger samples mean better estimates: the samples must be representative. When samples are small, the danger of a non-representative sample (with those who enthusiastically support or reject the course being over-represented) increases. A sample of 50% may be just fine, especially if the mean scores from these estimates approximate the values from larger samples and alternative situations, as the article suggests. This is “convergent validity.” Thus, increasing the size of the sample for its own sake (or simply to increase reliability that already appears to be adequate) is far from compelling.
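
    To make the representativeness caveat concrete, here is a minimal Python sketch (purely illustrative; the class size, rating scale, and nonresponse pattern are invented assumptions, not data from any study) comparing a random 50% sample with a larger but non-representative 80% sample:

        # Purely illustrative: invented ratings for a hypothetical class of 40,
        # used only to show why representativeness can matter more than the
        # raw response rate.
        import random

        random.seed(1)

        # Hypothetical "true" ratings for every student, on a 1-5 scale.
        true_ratings = [random.choice([2, 3, 3, 4, 4, 4, 5, 5]) for _ in range(40)]
        class_mean = sum(true_ratings) / len(true_ratings)

        # Case 1: a 50% response rate drawn at random (a representative sample).
        random_half = random.sample(true_ratings, 20)

        # Case 2: an 80% response rate, but the least satisfied students opt out,
        # so the sample over-represents happy students (non-representative).
        biased_majority = sorted(true_ratings, reverse=True)[:32]

        print(f"whole-class mean       : {class_mean:.2f}")
        print(f"random 50% sample mean : {sum(random_half) / len(random_half):.2f}")
        print(f"biased 80% sample mean : {sum(biased_majority) / len(biased_majority):.2f}")

    Under these made-up numbers, dropping the least satisfied students necessarily pulls the sample mean above the class mean, while the random half remains an unbiased (if noisier) estimate; that is the representativeness point, not a claim about any particular course.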

    The part of the essay providing “evidence” of the effects of various interventions is misleading: it presents the classic confusion of correlation with causation. Teachers who voluntarily choose to use the tactics described are likely to differ from those who do not choose to employ these tactics. I think teachers in the first group are likely to value student input and, furthermore, are likely to convey this message throughout their courses. Thus, teachers’ attitudes may be the real cause of increased participation despite the correlation between tactics and participation rates. I suspect a teacher who has expressed disdain for students throughout a semester will not increase his or her response rate by sending a message telling students of the great value of their feedback at the end of the course.

    The most serious problem, and one to which Maryellen alludes, involves the use of incentives. While these tactics may have a small effect on reliability by slightly increasing participation, they fundamentally alter the situation. Incentives introduce the influence of reciprocity, and the average scores derived in these situations are likely to be more positive than the scores for those faculty members who do not use incentives. In fact, an informal experiment conducted 30 years ago at the Air Force Academy clearly showed this result. Basically, 60 sections of cadets were randomly assigned to one of two distinctly different evaluation conditions: donuts or no donuts. Thirty sections of students received a donut when they filled out their in-class end-of-course critique; the other 30 sections did not. After controlling for a host of other variables (such as instructor, time of day, and subject; all were core courses), the statistical analyses showed that the donuts increased scores by about one half of a standard deviation. Interestingly, this was about the same effect that was found for an instructor teaching her/his second section of the course as opposed to the first.

    Ironically, small incentives are likely to have an even larger effect on student attitudes (and ratings) than large incentives. Tests of the late Leon Festinger’s somewhat counter-intuitive cognitive dissonance theory have repeatedly provided evidence of the power of small (seemingly inadequate) rewards to change opinions substantially.

    Instructor ratings and student grades (which are correlated) are perhaps the two most easily and most commonly assessed measures within educational systems. It is unfortunate that so little has been done to focus attention on the relationship between these variables and student learning and retention (our ultimate criteria). I appreciate Faculty Focus for raising the issue and beginning to stir the pot.

  11. Bernie Dana

    I strongly disagree with the suggestion that we can all “agree that offering incentives to complete the evaluations doesn’t get students doing ratings for the right reason.” This statement requires us to conclude that offering a few points has to be viewed as an “incentive.” Our students are very busy people at the end of the semester. While it may be true that some students view a few extra points as an incentive, I believe that most students simply see it as a reward, a simple “thank you” for taking time to complete the survey. Our business courses have consistently achieved an 80% response rate for online course evaluations. We do that by encouraging our business faculty to:

    (1) Provide 3 points as a reward for completing the survey. For most courses this is about 0.3% to 0.5% of total points. That level of points is not much of an “incentive.” It will seldom change a grade.

    (2) Communicate to students that we value and use their feedback to improve courses. I agree that students will not complete a survey if they don’t think the faculty member or someone else in authority pays any attention to it. As Business Department Chair, I ask the full-time and adjunct faculty to create a course improvement plan each semester if they fall short of the department’s “stretch” goals in 7 key areas of the survey. I send an email to all business students at the end of the semester to make them aware of this and to encourage them to help us all get better at meeting their needs and expectations.

    (3) Send follow-up reminders to students who have not completed the survey a few days ahead of the deadline. Our system separates the student’s name from their responses. Prior to the deadline, our faculty can see the names of students who have and have not responded to the survey, but cannot access any results. This makes it easy to send email reminders.

    I had 28 years of corporate experience before becoming a college professor. It is good business to pay attention to what your customers think and want. Educators often fall short of understanding this concept. Some are not even willing to accept that a student is their customer. As a result, too many refuse to accept that a student’s rating of their performance has any real value.

    While there are exceptions, I have strong evidence that most of my students really want to learn in the same way that a customer wants to acquire an excellent product or service. Over the past 15 years I have brought my corporate experience in quality management into my approach to teaching. I use the course evaluation results as an indicator of opportunities to improve. I meet with 5-member feedback teams for each course at the end of each semester to review everything we did in the class and to find out how it could have been done better or how the approach should be changed. This allows me to merge the desired student learning outcomes with methods and approaches that help motivate students to achieve them.

  12. Guest2

    I would like to see higher response rates than I see in my evaluations. I agree that the response rate should be considered relative to class size to be meaningful. Then there is the bigger picture, which perhaps should be taken into account. Despite my disappointment that more students do not respond, over a period of three years my evaluations have consistently placed me above average, because most of the responses are from satisfied or happy students who leave generous comments. In three years, there was one adverse response. In an effort to assess myself, I wonder how that track record should figure into evaluations. Francine – Online STEM Instructor

  13. Ken Ryalls, IDEA

    At IDEA, we believe that creating value for student feedback is the most essential factor in eliciting good response rates. Research and best practice consistently show that the single greatest influence on increasing participation in student ratings surveys is for faculty to express and demonstrate how the results are important and are used to make meaningful change. The next most influential factor is to set aside time in class to complete the surveys, regardless of delivery modality.

    So while we agree that improving response rates should continue to be a goal for campuses whether they use online or paper delivery, we would like to suggest that instead of continuously discussing response rates, we should put greater focus on getting a clear picture over time, regardless of response rate.

    Here is why.

    We know that ratings based on lower response rates cannot be assumed to represent the overall class perceptions as well as those based on higher response rates. But representativeness is a different issue than reliability. The former is tied to the percentage of students in the class who respond—the greater the response rate, the more representative are the scores derived from the course rating. Reliability, on the other hand, is related to sample size, or the number of student raters. If 50 students out of a class of 100 responded to a survey, their ratings would be more statistically reliable than if 19 students out of a class of 20 responded, even though the 19 responders would be more representative. So even though the reliability of any measure does increase as the number of observations increases, it does not follow that a low number of observations means those observations are not reliable; even classes with low response rates can provide useful information for a teacher, provided that the data are used as part of a holistic analysis of feedback over time.
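
    To put rough numbers on that distinction, here is a minimal Python sketch (illustrative only: it uses the plain standard error of the mean with an assumed spread of ratings and ignores finite-population corrections) for the 50-of-100 and 19-of-20 cases described above:

        # Illustrative only: compares reliability (standard error, driven by the
        # number of raters) with representativeness (the response rate) for the
        # two hypothetical classes mentioned above.
        import math

        def standard_error(sd: float, n: int) -> float:
            """Standard error of a sample mean: sd / sqrt(n)."""
            return sd / math.sqrt(n)

        sd = 1.0  # assumed spread of ratings on the survey scale

        for responders, class_size in [(50, 100), (19, 20)]:
            rate = responders / class_size
            se = standard_error(sd, responders)
            print(f"{responders} of {class_size} responding: "
                  f"response rate {rate:.0%}, standard error of the mean {se:.2f}")

    Under these assumptions, the 50 raters give the smaller standard error (higher reliability) even though the 19-of-20 class has the far better response rate (greater representativeness), which is why the two concepts need to be kept separate.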

    The ultimate goal should be to gather multiple sources of ongoing teaching and learning feedback so that faculty have information helpful in ensuring continued growth of program quality over time. As Dr. Weimer states, “They (students) should be doing these end-of-course evaluations because they believe the quality of their experiences in courses matters to the institution.”

  14. Gonzalo Munevar

    I simply ask them to do the evaluations so the university will get off my back about the low response rate. It works. But I am afraid that no one has raised the most important question: Do evaluations really help improve higher education? My answer is that they don’t. In fact, student evaluations are the single most important factor in the dumbing down of American higher education in the last 40 years. In the first place, they are supposed to measure the quality of teaching, but they do nothing of the sort. A teacher is good when he gets his students to learn. Consider how odd it would be to say, “Charles is a great teacher, but his students learn little in his class,” or “Mary is the worst teacher in the department, but her students learn more from her than from any of her colleagues.” Yet the student evaluation forms do not even touch on how much the students have learned in a class. And if one or two forms actually asked for that, the students would not be particularly reliable judges. How often does one hear students in a math class say, “I really understand the subject, but I flunked”? So evaluation forms are simply irrelevant to determining the value of teaching.

    This is the conclusion I came to while chairing a very large department for many years. During that time I saw thousands of class evaluations (I am counting classes here, not students), and I could see little positive correlation between good teaching and good evaluations. If anything, there was a small negative correlation. What was my evidence for that conclusion? I proposed, and the department accepted, a system for assessing the quality of student work. At the end of the term all the instructors who had taught sections of a particular course would sit together around a big table and share papers written by their students (two A, two B, and two C papers, plus the distribution of grades for that section). The papers had to keep the students’ names and the comments made by the instructor. It was clear to all that students in some sections produced better work than in others. I remember, for example, one new professor who was receiving the highest evaluations in the intro to philosophy course. Students just loved him. His direct supervisor, on the other hand, generally received only so-so evaluations. Well, about half an hour into our meeting, this new professor turned to me and said, “What is wrong with my teaching?” “Why are you asking that?” I asked in turn. “Because it is clear,” he said, “that my A students would barely get Cs in your class or in P’s class [his supervisor’s].” And he was quite correct. We could all see that. Given his comments on the papers and further class observations, he eventually turned into a truly good teacher (i.e., one who got his students to learn), not merely a popular one. There were lots of similar examples. Sometimes we could spot big deficiencies in teaching, incidentally. In composition courses, for example, it was clear from some of the professors’ comments that they did not really know what a good paper was. My department covered humanities and social sciences, presumably the hardest subjects in which to evaluate the quality of student work. So judging teaching on the basis of student work is possible in all fields.

    Now, to return to that new professor: after careful discussion it became obvious that he did not challenge his students enough. He changed and became a terrific professor. This brings me to an important point. Good teaching requires challenging the students. But when the institution takes student evaluations seriously, many professors are afraid of demanding much from the students, for if the course is hard the students may retaliate at the end of the semester. Adjuncts are particularly fearful. So we get shallow education as a result in many places. The resulting student work makes it all too painfully clear.

    The “data” that come from those forms are simply preposterous, anyway. In ours, one question has to do with whether the professor keeps his office hours. In years in which I kept every minute of my office hours, and held additional ones for students who could not make the official ones, classes that liked me gave me the highest marks, and classes that didn’t gave me lower marks. Same office hours for all. Same perfect record. I saw hundreds of similar examples in the cases of other professors as well. So what we should do, if we had the guts, would be to discourage students from filling out the stupid forms, and then put pressure on our institutions to come up with a serious and useful method of determining the quality of teaching, which must be based on the quality of work we get students to do. It can be done. We did it for several years. Disclosure: I was one of those who got students to do good work while also getting good to excellent student evaluations.

  15. Mary Pamela

    Some colleges have withheld grades until students complete the evaluations. Our university doesn’t! They do not believe in penalizing students. As an administrator, I used to push for online course evaluations, back when our response rates were at 100%. One student told me that if the evaluations ever went online she would not fill them out. When I asked why, she said she hated filling them out, but she filled them out with me standing in the room. Some people wrote pages and pages of comments, while others wrote nothing at all. While I was in the room, some students would put aside their course evals; I would walk up to them and give them a fresh paper eval. After a while they got the message: fill out the course evals. There is no one pushing the students to complete their course evals online. Incentives work on some, but not all. I don’t know what would work, besides someone standing or walking up and down in the classroom to ensure the students fill out the course evals on their personal electronic devices. And how will you ensure that they are filling out the course evals and not surfing the web or on social media?
