When graded papers get a quick glance before being shoved into a backpack or deposited into the trash can on the way out of class, it’s often hard for teachers to summon the motivation to write lots of comments on papers. That’s why I was pleased to find evidence in two studies that students do value written comments on their work.
The Weaver study (reference below) surveyed business and design students, seeking answers to four questions: Do students understand the feedback? What are their perceptions of the feedback? What kinds of comments do they find helpful versus not helpful? And how can the value of the feedback provided be increased? The Smith study (reference below) surveyed business students in their junior year and was motivated by similar questions, such as what types of comments students find most useful in improving their writing and what form those comments should take (p. 325).
Students in both studies reported that they did read written comments. In the Smith study, the statement “I read the comments that my professor makes in the body of my paper” drew a mean response of 4.73 (out of 5.0), while the statement “I just look at the paper’s grade, not the comments” drew a mean of only 1.54 (with 1 being “strongly disagree”).
Both studies contain useful details. For example, in the Weaver study, students were asked to respond to a series of words and phrases commonly written on papers and then indicate how confident they were that they understood what the instructor meant. Here are a few examples: “Logical and coherent structure”: 42 percent were very confident they understood what that meant and 58 percent were fairly confident; “Lacks application of theory”: 50 percent were very confident and 29 percent fairly confident; and “Superficial analysis”: only 5 percent were very confident and 54 percent fairly confident. In this last case, don’t forget about the group of more than 40 percent who said they did not understand what the comment meant.
Smith gave students examples of different grading methods, asked them to identify which they preferred, and then, in response to an open query, had them explain why. Method 1 used a matrix or rubric that identified several different areas, assigned points to each, and included some brief comments. Method 2 offered a paragraph that identified the problems with the paper. Method 3 also offered feedback in a paragraph, but in this case three positive features of the essay were identified before the same problems were discussed. Sixty percent of the respondents preferred the Method 1 rubric, with the Method 3 paragraph a distant second, preferred by 36.4 percent of the students.
Both studies also contained clear indications of what compromises the effectiveness of comments on papers. Students in the two studies wanted both positive and negative feedback. For example, on the Smith survey, “I want to know what I did correctly on my papers, not just what I did wrong” generated a mean response of 4.49 (out of 5.0).
Weaver’s study also included a qualitative component in which students were asked to bring samples of papers with commentary to a group discussion to talk about the comments. Out of the discussions emerged four characteristics of comments that students did not find helpful when they tried to improve their writing on subsequent papers. First were comments identified as being too general or too vague, such as “A sound answer, generally” or “You’ve got the important stuff right.” One is tempted to point out that if students wrote statements like those, most teachers would ask them to be more explicit. Second, students found it difficult to improve when the commentary provided no guidance. They wanted to know specifically and concretely what they needed to do better on the next paper. Comments students lauded included, on one paper, a list of the four things the student most needed to work on in the next paper.
Objections to commentary that focused entirely on the negative reaffirmed what was found in the other data. An analysis of the comments revealed that negative feedback tended to be more specific than positive feedback. When offered, positive comments tended to be vague, such as the word “good” scrawled down the side of a paper.
Some students do ignore instructor feedback, whether it’s written comments on papers or face-to-face feedback, but maybe not as many as instructors are inclined to think. Perhaps that number could be reduced further still if instructors attended to this feedback on their own feedback.
References:
Smith, L. J. (2008). Grading written projects: What approaches do students find most helpful? Journal of Education for Business, July/August 2008, 325-330.
Weaver, M. R. (2006). Do students value feedback? Student perceptions of tutors’ written responses. Assessment & Evaluation in Higher Education, 31(3), 379-394.
Excerpted from Written Feedback: What’s Most and Least Helpful, The Teaching Professor, February 2009.
"Perhaps that number could be reduced further still if instructors attended to this feedback on their own feedback."
Why not include the average time it takes to provide the desired feedback? Then compare this to the average hourly pay an adjunct would be compensated to produce such feedback, and what percentage of campus classes are taught by adjuncts.
Way to raise the entitlements.