Four Reasons Assessment Doesn’t Work and What We Can Do About It


I admit that I’m an assessment geek, nerd, or whatever name you’d like to use. I pore over evaluations, rubrics, and test scores to see what kinds of actionable insights I can glean from them. I’ve just always assumed that it’s part of my job as a teacher to do my very best to make sure students are learning what we need them to learn.

That said, since serving on my university’s assessment committee, and for the past two years as the university’s director of assessment, I have heard a litany of excuses for not using assessment. Some are the types of excuses that would test the patience of any professor hearing them from a student. Here are a few of my favorites:

  1. It’s the students! Assessment doesn’t work when you’re looking at the results only in terms of what the student did wrong or right. Yes, student populations change, and student characteristics differ depending on whether you are teaching first-year college students or returning adult learners. But placing all the blame on the students—saying they don’t study or are unprepared—only adds to our frustration and gives the false impression that students are the only factor in the teaching and learning equation.
  2. It’s just busywork! Yes, for most of us assessment is an essential part of accreditation and of ensuring that we maintain standards in our work. However, if you look at it as only busywork, you will just fill out the paperwork, check the boxes that need to be checked, and not take a hard look at what the results are trying to tell you. An effective assessment should force you to examine what it means to be a successful teacher, where your students are now, and how you can help them get where they need to go.
  3. I was hired to teach! We were all hired to teach, and that’s probably because we are good at our chosen profession. But we choose to teach, and part of being not merely a good but an excellent teacher is continually evaluating how well you’re doing. Just as owning a car means more than filling it with gas, teaching means more than delivering content: we should examine assessment results to see whether our content needs “new tires.”
  4. I am too busy to deal with it! OK, we are all busy—we have classes to teach, students to advise, and research to conduct, and we’re probably sitting on a few different committees. It’s not an easy job, especially for beginning faculty. Whether we’re assessing the effectiveness of a single course, a program, or an entire institution, assessment can be messy, frustrating, and, at times, difficult to hear. But there’s strength in numbers, and I have yet to meet a single faculty member who is not willing to share experiences, rubrics, and advice to help a colleague get better. There’s no need to go it alone and toil in isolation. Why try to reinvent the wheel when there’s an abundance of work that has proven effective?

Assessment does work
Now that we’ve outlined the excuses for why assessment doesn’t work, let’s discuss when it does. Assessment works when we learn to look at it as a process for improving the quality of our teaching. It works when we engage in dialogue with colleagues, both within our discipline and across campus, and create new ideas to help students learn. Assessment works when we try something new and don’t get disheartened when it doesn’t work; instead, we reevaluate and try something else. Assessment works when something new proves effective and we gain information that moves our curriculum forward. Assessment works when we quit making excuses about why it’s so difficult and messy and instead use the information to reinforce what works and discard what doesn’t. Assessment works when we embrace the challenge of always getting better.

Vickie Kelly is the director of the Master of Health Science program at Washburn University.

This Post Has 6 Comments

  1. goodsensecynic

    If, by assessment, you mean assigning and grading student research projects, essays, seminar presentations, examinations, tests and quizzes, then the top four objections cited are not familiar to me.

    No one I’ve ever known (and, in a little less than six weeks, I’ll be completing my 50th year as a post-secondary classroom teacher) has ever expressed such sentiments.

    What I DO hear (and have been known to express myself) are complaints about administrative interference in academic matters, the teaching and learning process, and assessment practices. From demands for standardization, measurable learning outcomes, and specified employability skills to all sorts of bureaucratic templates, rubrics, and other tools, I have found the managerial mentality and corporate culture eating away at anything approaching authentic intellectual development.

    Let the bureaucrats take care of paying the electricity bills, ensuring that there’s chalk in every (traditional) classroom, ploughing snow from parking lots (in cooler climates), and schmoozing with government officials and industry leaders when necessary. And let them keep their tiny authoritarian hands and tinier pedagogical prejudices out of the classroom, the library, and the lab.

    Higher education is not a business to be run on the model of discount department stores. Associate professors are not to be transformed into the academic equivalent of Walmart Associates. Left alone to do their work, faculty can generally be trusted to do what is necessary and what is right. When they are micromanaged according to crackpot business models, the only possible results are sycophancy or insurrection, neither of which makes us “better.”

    1. Marae

      Well spoken. I understand why teachers are leaving the profession in droves. Everybody wants to “standardize” education and make it a cookie-cutter approach. The problem is that people are not cookies. Everybody learns differently, and the learning process is messy, not a one-size-fits-all proposition. Taking the individuality out of the students and the instructors may be neater, but it just doesn’t work. H. L. Mencken said, “For every complex problem, there is an answer that is clear, simple, and wrong.” Standardization is clear, simple, and wrong.

  2. Carol Welsh

    Excellent and to the point! Getting beyond test scores and finding ways to improve the learning experience is what it is all about. Am somewhat of an assessment geek myself. Actually prefer looking at assessment as assuring learning, learning from the outcomes, and innovating improvements. (Been in the classroom for 30+ years and still enjoy the adventure)

  3. Gonzalo Munevar

    Assessment is a sham. Of course, the idea of determining the quality of the education you are giving students in your program, for the purpose of improving that education, is commendable. But the way assessment is actually practiced amounts to little more than a pseudo-scientific managerial pretense likely to interfere with that purpose, while hampering faculty’s ability to attend to their true dual mission: teaching and research.

    Let me give you two examples. In 1999 I was hired as head of humanities, social sciences, and communication at a private technological university, with the mission of improving our “core courses,” which were required of all students. In the humanities we adopted the Great Books approach. As we saw it, an educated person should be able to read difficult but worthwhile texts, make plausible interpretations of them, and then defend those interpretations well, both orally and in writing. That person should also be able to demonstrate critical reasoning in discussing other interpretations of, or questions about, the texts. The goal of our program was to make educated people of the university’s students by having them develop such skills in the minimum of seven courses they had to take from our department. Every professor’s job was to make sure that students had read the material carefully before the class meeting and to challenge them to argue intelligently.

    To assess our degree of success, we did several things. One was to have class visitations in which we observed whether the students were indeed developing the desired skills. Had they read the assigned material? Did they make intelligent remarks about it? And so on. Then at the end of the semester, all the instructors who had taught a section of a particular class would meet around a table. Each would bring copies of two “A” papers, two “B” papers, and two “C” papers, as well as the distribution of grades in the section. We then read the students’ work, including the instructors’ comments. It was not difficult to determine which instructors were doing better work. At one meeting, for example, a young adjunct who was very popular and received extremely high evaluations suddenly asked, “What is wrong with me?” He looked at me: “My ‘A’ students would barely get a ‘C’ from you or from Phil.” The rest of us around the table agreed: the work produced by my students and by Phil’s was far superior. After that we discussed how he could be more challenging. In subsequent semesters he improved (i.e., his students did much better work). Class visitations and these meetings, together with personal advice from me or other senior faculty, allowed instructors to get better work out of their students. Since all the students had to take at least one upper-division course in the humanities or social sciences, we were able to determine the level of quality after our string of required courses. We also assessed our own majors along similar lines.

    Unfortunately, the assessment movement caught on, and by 2003 or so accreditation agencies had pushed it down the throats of most universities. Now, you would think that since our “assessment” program was successful and had helped us improve education, the accreditation agency would love it. Not at all. We actually looked at graded student work: a no-no. And at the comments written by the instructors: not objective. And we used our expertise to determine quality: clearly subjective. No, no. What was required was a sheet full of numbers, meaningless statistics. Worthless pseudoscience that in all the years since has never led to any improvement, but that clearly pleased the accreditation agency. And in the meantime we got rid of the one system that had been so successful in improving teaching and learning.

    But we got rubrics as a gift. For example, before the assessment movement, we had a guideline for writing papers: you need to do at least this much to get a “C” and these other things to get a “B.” For an “A” you will need to…. Our department’s representative to the University’s Assessment Committee came up with a rubric to grade papers, a rubric loved by the Committee and by the accrediting agency. There were ten or twelve categories (it was a while ago), such as “grammar” or “style,” that instructors rated from 1 to 10. The instructor would then add the scores and divide by the number of categories. Wonderful. Except that it was possible to be completely ignorant of the subject matter, in fact to have never even read the material, and still get an “A” on the paper. How? Because “knowledge of class material” was only one category. You could get a 1 in it and still manage to get 90% or more on the paper as a whole (the arithmetic is sketched below). Absurd. So we kept our old guideline for writing papers to guide our real grading, but we also had a working meeting at which all members of the department used the rubric to grade a random sample of student papers from that year. These phony but scientific-looking statistics were then fed to the collaborators upstairs, and our department received its due accolades. No improvement of education, in our department or others, ever came from such charades. But the burden of time was truly extraordinary. It still is. Faculty have to attend lots of meetings to be brainwashed about the importance of assessment. Accreditation agencies want evidence of “commitment”! And there are all those hours wasted on carrying out all the pointless busywork. Time that could have been used for doing research. Or for doing a better job of teaching. As we used to. Fortunately, I am retired.
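
    A minimal sketch of that arithmetic, assuming ten equally weighted categories each scored 1 to 10 and averaged; the category names here are made up for illustration, not taken from the actual rubric:

        # Ten-category rubric, each scored 1-10, combined by simple average.
        # "knowledge of class material" gets the lowest possible score; the
        # other nine categories get perfect marks.
        scores = {
            "knowledge of class material": 1,   # never even read the material
            "grammar": 10, "style": 10, "organization": 10,
            "clarity": 10, "citations": 10, "mechanics": 10,
            "formatting": 10, "voice": 10, "transitions": 10,
        }

        average = sum(scores.values()) / len(scores)      # (1 + 9 * 10) / 10 = 9.1
        print(f"{average:.1f}/10 = {average * 10:.0f}%")  # 9.1/10 = 91%, an "A"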

  4. Grumplicious

    “Assessment works when we take it seriously…” Etc. etc. etc. Really? What evidence do you have? What studies can you show me that indicate that the kind of assessment that is currently required by accreditation agencies improves the quality of student learning? Please present your evidence in the form of numerical data.

    This is nonsense. Yes, “assessment” works, if by assessment we mean the process that most of us have always followed: using a variety of /qualitative/ measures to assess how students are learning, when they’re not, and why not. Often these measures are non-quantifiable; I think the most effective are conversations /with students/, similar in form to a Platonic dialogue.

    As for the “assessment” that is required of me by my institution, it is a useless piece of busywork that consists of assigning arbitrary numeric values to discrete skills that cannot be numerically assessed with any consistency, and pretending that this means something. Its sole purpose is to satisfy the assessment committee, and there is no good evidence that it does anyone any good. Worse, it implicitly defines “education” as the acquisition of discrete, quantifiably assessable techniques, thus undermining the very form and purpose of a general undergraduate education.

    “Assessment works”? Hardly. Assessment works only in that it provides self-validating data to prove that assessment has been done; as long as you’ve drunk the kool-aid, you can convince yourself that this matters. I view it as a waste of time, and deeply pernicious to the educational mission.

    1. disqus1994

      As we’ve all become aware, most (yes, more than 50%) of social science research can’t be replicated (see the replication crisis). Thus, if we question assessment based on numerical data and academic studies (a majority of which are just plain wrong anyway, even when quantified), then assessment is no different from most of the research found in academic journals: sketchy, unfounded, invalid, and, indisputably, unreliable. If the criteria used to judge assessment (numerical, empirical research) are unreliable, and we apply the same criteria to academic research, that makes over half of the research in *all* the social sciences just plain wrong. That’s millions of hours, dollars, and journal pages wasted (no one outside a narrow sub-discipline, let alone the general public, reads that stuff anyway, so most of it’s wasted to begin with). So, by all means, let’s start judging all of our work, including teaching, research, and service, based on numerical, quantifiable studies. I’ll start with one: there’s absolutely no evidence (zip, zero, nada, none) that faculty research has any impact on making instructors better teachers. Let’s just face the fact: there’s no numerical, quantifiable evidence that college teaching in general works, and professors are only in it to self-validate their narrow research agendas. College teaching is a waste of time, and counter to the college mission of learning. Students can learn more about history from the History Channel, and more about physics from watching Neil deGrasse Tyson, than from any rude, misogynist, tenured science professor with bad social skills and questionable hygiene.
