With an increasing number of rating systems now online, the question of who completes those surveys (since not all students do) is one with important implications. Are students who are dissatisfied with the course and the instruction they received more likely to fill out the online surveys? If so, that could bias the results downward. But if students who are satisfied with the course are more likely to evaluate it, that could introduce bias in the opposite direction.
This question was explored in a study that involved a 4,000-student population and 848 undergraduate courses. The students had a two-week window during which they could electronically submit their anonymous course evaluations, one for each course in which they were registered. During that two-week period, they received three email reminders.
The collected data enabled the faculty researcher to identify several characteristics that differentiated students who completed the course evaluations from those who did not. First-term students respond at higher rates, as do students evaluating a course that is a requirement for their major. The author suggests that new students may be more enthusiastic about participating in university life. More seasoned students may think that the evaluations are not taken seriously by the instructor or institution and are therefore less motivated to complete them. It makes sense that students would consider courses in their major more important than other courses. Interestingly, course size was not a variable that reliably predicted who would complete the surveys.
The data also revealed that men, students with light course loads, and students with low cumulative GPAs and low course grades were less likely to evaluate the course. Why are women more likely to evaluate their courses? The researcher refers to this result as "puzzling" (p. 22). The course load variable "appears to be a measure of student attachment to the university" (p. 23); those taking fewer courses tend to be less committed to the institution.
Certainly the most interesting finding is that students doing poorly in the course are less likely to complete the course evaluations: "A matched pairs test that completely controls for class- and instructor-invariant student characteristics confirms the finding that students who do better in the course are more likely to participate in SET (student evaluation of teaching)" (p. 28). Add to this another finding: students more likely to have strong opinions about the course (indicated by how quickly within the two-week window they submitted their evaluations) had, on average, positive views about the course and instructor. The author concludes that, based on these data, online course evaluations do not attract disproportionately more dissatisfied students. In fact, they do the opposite, giving credence to the contention that the results may be biased in favor of the instructor and course rather than against them.
Reference: Kherfi, S. (2011). Whose opinion is it anyway? Determinants of participation in student evaluation of teaching. Journal of Economic Education, 42(1), 19-30.
Reprinted from "Who Participates in End-of-Course Ratings?" The Teaching Professor, 25.9 (2011): 4-5.
Comments
This really surprised me. When my institution went to online evaluations, I was concerned that it would skew the results downward, since it would open up evaluations to students who were not attending class and who would probably have more negative things to say. (Not that I'm opposed to negative evaluations; I just have a harder time respecting the opinion of someone who rarely comes to class when it comes to my instruction.) So far, I haven't noticed much of a difference in the overall tone of the evaluations from when I did them on paper.
I was also surprised by the findings. I thought that the disgruntled would be just as likely to fill out the surveys. This study says a lot to me about engagement – it implies that those who check out at some point in the course seem to (a) check out for good and (b) be less engaged to begin with.
We saw similar statistics for our IT course. First-term students responded at a higher rate than senior students.
I wonder whether the study investigated whether students completed the evaluations independently or did so in a mutual or collective way. In the latter case, the results of the study may be questionable, since the statistical tests used assume independence.