The current state of student assessment in the classroom is mediocre, vague, and reprehensibly flawed. In much of higher education, we educators stake a moral high ground on positivistic academics. Case in point: assessment. We claim that our assessments within the classroom are objective, not subjective. After all, you wouldn't stand in front of a class and say that your grading is subjective and that students should just deal with it, right? Can we honestly examine a written paper, or virtually any other assessment in our courses, and claim that we grade completely devoid of bias? Let's put this idea to the test. Take one of your assessments previously completed by a student. Grade the assignment using your rubric. Afterwards, have another educator in the same discipline grade the assignment using your exact rubric. Do your colleague's grade and yours match? How far off are the two grades? If your assessment is truly objective, the grades should be exact. Not close but exact. Anything else reduces the reliability of your assessment.
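To make the test concrete, here is a minimal sketch of that two-grader comparison. All scores and assignment names are hypothetical; the point is only that anything short of 100 percent exact agreement signals subjectivity in the rubric.

```python
# A minimal sketch of the reliability test described above.
# The scores below are hypothetical, not real grading data.

my_scores = {"essay_01": 12, "essay_02": 9, "essay_03": 15}
colleague_scores = {"essay_01": 10, "essay_02": 9, "essay_03": 14}

# An objective rubric should produce exact agreement on every assignment.
matches = sum(my_scores[k] == colleague_scores[k] for k in my_scores)
agreement = matches / len(my_scores)

print(f"Exact agreement: {agreement:.0%}")
for k in my_scores:
    gap = abs(my_scores[k] - colleague_scores[k])
    if gap:
        print(f"{k}: graders differ by {gap} points")
```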
Types of assessment
The problem with my argument so far is the intention of the assessment. What if you are interested in formative assessment? You may use observations or questions to start a formative assessment among students, which is absolutely appropriate. The problem arises when we as educators attach a grade to the assessment or use a summative assessment that has a documented impact (e.g., a grade) on the student. For entirely too long, we have paid lip service to the notion of assessment around the watercooler, documenting and discussing the various techniques used to evaluate our students. At best, many of us have crunched numbers and presented results stating how our students improved in this category or with that skill. Never mind the lack of attention to the statistical significance on which we base the success of our evaluations. Good evaluations would employ analyses such as pre- and posttests, with a keen eye to effect size, but that is a conversation for another day. Consider an example of a poor assessment I have created.
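For readers unfamiliar with effect size, a brief sketch of one common measure, Cohen's d, is below. The pre- and posttest scores are hypothetical, and dividing by the pooled standard deviation is one convention among several; paired designs are sometimes normalized by the standard deviation of the gain scores instead.

```python
from statistics import mean, stdev

# Hypothetical pre- and posttest scores for the same eight students.
pre  = [62, 70, 58, 75, 66, 71, 60, 68]
post = [74, 78, 65, 82, 70, 80, 69, 77]

# Cohen's d: the mean gain scaled by the pooled standard deviation
# of the two score sets (one common convention, not the only one).
pooled_sd = ((stdev(pre) ** 2 + stdev(post) ** 2) / 2) ** 0.5
d = (mean(post) - mean(pre)) / pooled_sd

print(f"Mean gain: {mean(post) - mean(pre):.1f} points, Cohen's d = {d:.2f}")
```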
Example of a poor assessment
My Introduction to Sociology students are asked to craft and deliver a presentation in front of their classmates. One of the requirements is to show their classmates and me the rich content they have learned. Consider one exact requirement they are to complete.
It is quite apparent that the learner has examined his or her topic and has a deep knowledge of the material. Terminology is used in the presentation of material and answering of questions. The knowledge presented goes beyond the textbook and .com sites, and represents many weeks of preparation.
This requirement is assessed and granted a specific number of points ranging from 0 to 15. As you can imagine, a score of 0 indicates that the requirement was not fulfilled, while a score of 15 is a perfect fulfillment of the requirement. Even with a point breakdown in which the rubric indicates what constitutes below average, average, above average, and excellent work, the process is still not objectively clear. Rather, the assessment of this requirement is embarrassingly biased. How do I evaluate the degree of content the student has learned and conveyed? Am I counting the number of words spoken? Did the student state a specific theory? Did the student refer to peer-reviewed references? While I may run these questions through the back of my mind during the student's presentation, my grade may differ wildly from that of another evaluator. So what are we to do? Should we make a rubric that is more detailed? Is your rubric one page long? Should it be five pages long? The assessment does not need to be longer; it needs to be more specific. Please welcome axial assessment.
Axial assessment
In order to properly assess students, we need absolutely clear assessment techniques. Axial assessment takes its name from the notion of an "axis," a fixed reference. It is a method of evaluation that combines the traditional format of grading with basic statistics. Axial assessment grades on whether an item was completed, not the degree to which it was completed. For example, how could the above presentation requirement be reconfigured using axial assessment? See axial assessment applied to the previous requirement below.
The learner completed the following:
1 – A 30-60-second introduction began the presentation in which the student stated his or her name, the title of the presentation, and the significance of the topic.
3 – Peer-reviewed references were verbally or visually displayed.
3 – Exact terms within the discipline were orally expressed.
5 – Slides were presented to the class.
1 – A 30-60-second conclusion ended the presentation in which the student stated the importance of the topic.
Using axial assessment
So how does this so-called "axial assessment" differ from how thousands of educators are currently assessing their students? Numbers. Notice that each sub-requirement (e.g., peer-reviewed references) requires a certain number of completed items. In the case of peer-reviewed references, three are needed. So how is this assessment more effective? The evaluator simply checks whether the item was completed. Was a 30-60-second introduction given at the start of the presentation? Check. Were three peer-reviewed references displayed? Check. The evaluation continues in this fashion through the remainder of the rubric. Regardless of who evaluates the assessment, the final grade should be identical. If it is not, the axial assessment needs revision.
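A minimal sketch of this scoring logic appears below, based on the rubric above. The point weights are my own illustrative assumption (the article specifies required counts, not point values); what matters is that every check is pass/fail, so any two evaluators working from the same observations must arrive at the same total.

```python
# A sketch of axial scoring for the presentation rubric above.
# Point weights are illustrative assumptions; required counts come
# from the rubric. Each requirement is either met or not met.

RUBRIC = [
    # (description, required count, points if fully met)
    ("30-60-second introduction given", 1, 1),
    ("peer-reviewed references shown",  3, 3),
    ("discipline terms used orally",    3, 3),
    ("slides presented to the class",   5, 5),
    ("30-60-second conclusion given",   1, 1),
]

def axial_score(observed_counts):
    """Award points only for requirements whose observed count meets the target."""
    total = 0
    for (item, required, points), observed in zip(RUBRIC, observed_counts):
        met = observed >= required
        print(f"[{'x' if met else ' '}] {item}: {observed}/{required}")
        if met:
            total += points
    return total

# Hypothetical observation: the student showed only two peer-reviewed references.
print("Score:", axial_score([1, 2, 3, 5, 1]))
```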
Losing autonomy?
At least two questions may be running through your mind. First, doesn't this take away from the expert opinion of the evaluator who, after all, is trained in the very material he or she is assessing? Yes, it does take away from the expert opinion, but experts in the field make mistakes all the time. Did the students complete your specific instructions? Yes or no? That is what matters here. There should be no room for half-measures in evaluation. If we are using our opinions to assess something, then the assessment is still subjective. Our expertise should lie in determining which specific knowledge within our discipline is to be correctly and precisely measured, not in judging the degree to which we feel it has been obtained.
Second, does this take away from teacher autonomy? Yes, it does remove some teaching autonomy. Like any valid and reliable instrument used in research, an assessment should be well-normed, and merely having used our assessments on previous students does not make them well-normed measures. We retain plenty of autonomy in selecting what should be assessed; we should have less autonomy in how it is assessed.
Finally, note that this enhanced and more specific way of grading may seem like common sense. Yet many instructors continue to use vague descriptions for requirements. There must be an exact measure to reduce bias. Think back to when you were a college student. Did you ever think an assessment was unclear or even biased? If you did, chances are that axial assessment was not utilized. Whatever flaws the discussion of this type of assessment may reveal, we as educators must confront the subjectivity that creeps into our assessments. The only way to fix a problem is to first realize that there is a problem. Let's take another look at our assessments, shall we?
Sam Buemi is an instructor at Northcentral Technical College, Wausau, Wis.