Have your students ever told you that your tests are too hard? Tricky? Unfair? Many of us have heard these or similar comments. The conundrum is that, in some circumstances, those students may be right. Assessing student learning is a big responsibility. The reason we report scores and assign grades is to communicate information about the extent of student learning. We use these indicators to judge whether students are prepared for more difficult work or ready to matriculate into majors or sit for certification exams.

Ideally, scores and grades reflect a student’s learning of a particular body of content, content we intended them to learn. Assessments (e.g., tests, quizzes, projects, and presentations) that are haphazardly constructed, even if unintentionally, can result in scores and grades that misrepresent the true extent of students’ knowledge and leave students confused about what they should have been learning. Fortunately, in three easy steps, test blueprinting can better ensure that we are testing what we’re teaching.
Step 1: Align objectives, assessments, and learning opportunities.
Learning results from students’ engagement with course content, not from the content itself (Light, Cox, & Calkins, 2009). However, this is often not how we approach the planning of our courses. In our courses and lessons, we need to make sure that clear learning objectives drive the planning, that assessments are constructed to measure and provide evidence of the true extent to which students are meeting the objectives, and that, through the learning opportunities we provide students, they can engage with the content in ways that allow them to meet the objectives and demonstrate their learning. This is not a linear process—it is iterative, often messy, and shaped by contextual factors. Nonetheless, when alignment is a criterion for successful planning, we are more likely to be measuring what we’re teaching. We do have to start somewhere, and a good place to start is with learning objectives.
Step 2: Write meaningful and assessable objectives.
If objectives drive the assessments and learning opportunities that we create for students, then the objectives must be meaningful (Biggs, 2003) as well as specific and measurable. The objectives are where we establish expectations for student learning. If, for example, we want students to think critically, our objectives must reflect what we mean by critical thinking. What we sometimes lack is specific language. Taxonomies (e.g., Bloom’s Taxonomy, Anderson & Krathwohl, 2001; Fink’s Taxonomy of Significant Learning, 2013; Wiggins and McTighe’s Facets of Understanding, 2005; Biggs’ Structure of Observed Learning Outcomes, 2003) can be consulted to help craft the specific objectives to which we will teach.
We recommend crafting no more than 5–8 course learning objectives. The format of the objectives should follow this example: Upon successful completion of this course, students will be able to evaluate theories through empirical evidence.
Step 3: Create test blueprints.
Designers of any major high-stakes exam (e.g., SAT, GRE, NCLEX) have to be able to claim that it tests what it purports to test. One way they do this is by building a test blueprint, or table of specifications. A test blueprint is a document, matrix, or other kind of chart that maps each question on an assessment to its corresponding objective, theme, or topic. If it doesn’t map, it’s not included in the assessment. A completed map represents how the items of an entire assessment are weighted and distributed across objectives, themes, or topics as well as how they are distributed across other important dimensions (e.g., item difficulty or type of question). A test blueprint is, essentially, a tool to help align assessments with objectives.
Course instructors also need to be able to assert that assessments provide evidence of the extent to which students are meeting the established objectives. If the blueprint doesn’t represent the content that is being tested, adjustments should be made before administering the test. A test blueprint is easy to develop and flexible enough to adjust to just about any instructor’s needs.
Consider a simple template. The left-hand column lists—for the relevant chunk of content—objectives, themes, and topics. Column heads can represent whatever “other” dimensions are important to you. For example, in a political science course you could map higher-level versus lower-level items. Or, in a statistics course, you could map question categories such as recall, skills, and conceptual understanding. Once the structure of your blueprint is established, (a) plot each item, with the numbers in the cells representing the numbers of items in each of the intersecting categories; (b) total the rows and columns; and (c) analyze the table and make sure the test will adequately represent student learning, given the objectives and students’ learning opportunities for that content.
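Steps (a) through (c) amount to tallying a small matrix. Here is a minimal sketch in Python of what that bookkeeping looks like; the objective names, the “lower level”/“higher level” categories, and the item counts are hypothetical examples, not from the article.

```python
# A test blueprint (table of specifications) as a nested dict.
# Rows: objectives/themes/topics. Columns: another dimension of interest
# (here, a hypothetical lower-level vs. higher-level split).
blueprint = {
    "Evaluate theories with empirical evidence": {"lower level": 2, "higher level": 4},
    "Interpret descriptive statistics":          {"lower level": 5, "higher level": 3},
    "Design a simple study":                     {"lower level": 1, "higher level": 5},
}

# (b) Total the rows and columns.
row_totals = {obj: sum(cells.values()) for obj, cells in blueprint.items()}
col_totals = {}
for cells in blueprint.values():
    for category, n in cells.items():
        col_totals[category] = col_totals.get(category, 0) + n
total_items = sum(row_totals.values())

# (c) Analyze: what share of the test does each objective carry?
for obj, n in row_totals.items():
    print(f"{obj}: {n} items ({n / total_items:.0%})")
print("Column totals:", col_totals, "| Total items:", total_items)
```

If one objective carries 40 percent of the items but received only a brief mention in class, the totals make that imbalance visible before the test is ever administered.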
As you develop the “map” of your assessment, consider these questions: What does each column heading mean? For example, what does “higher level” versus “lower level” really mean? Do you know? Would students know? Would student learning be improved if you shared the blueprint in advance? And, ultimately, will this planned assessment represent what you taught, what you intend to test, and how you intend to test it?
References:
Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. Boston, MA: Allyn & Bacon.
Biggs, J. B. (2003). Teaching for quality learning at university. London: Open University Press.
Fink, L. D. (2013). Creating significant learning experiences for college classrooms: An integrated approach to designing college courses. San Francisco: Jossey-Bass.
Light, G., Cox, R., & Calkins, S. (2009). Learning and teaching in higher education: The reflective professional (2nd ed.). London: SAGE.
Wiggins, G., & McTighe, J. (2005). Understanding by design. Upper Saddle River, NJ: Merrill Prentice Hall.
Cindy Decker Raynak is a senior instructional designer at the Schreyer Institute for Teaching Excellence at Penn State University. Crystal Ramsay is a research project manager of faculty programs at Penn State University. Their session titled “Are You Testing What You’re Teaching?” was one of the top-rated workshops at the 2016 Teaching Professor Conference.
© Magna Publications. All rights reserved.
This Post Has 5 Comments
As it happens, I don’t give “tests” (i.e., true-false, multiple-choice, short-answer, or 250-word “essay” questions). So my reply may, at best, be of only tangential interest. Still, it may do no harm to say that I assign only “take-home” essays (from 1,250 to 3,500 words). They complain that they’ve never been asked to do anything that onerous in high school or in any other college courses (they are right). They insist that they can’t find any information about the topics (despite having access to ample libraries and the ubiquitous internet), and that they can’t think of anything to say when they do get their “research” completed (also right, because they’ve never been asked to think).
I listen attentively and sympathetically to their unimaginative tales of woe, provide them with a few rudimentary research skills and hints about how to construct a satisfactory piece of work and send them away – disgruntled because they’d hoped that I would somehow lessen their burden.
Eventually, most of them succeed in producing serviceable, if neither insightful nor particularly well-constructed, bits of writing. Some need to be sent to the student aid facilities in the college library (now idiotically “rebranded” as a “learning commons”) where they are cajoled into producing a minimal effort. It’s not entirely their fault, of course, for what passes for education is – despite all the false rhetoric about “student-centered” education and so on – little more than sets of Skinnerian practices in which young people are made into the academic equivalents of rodents learning to press the correct behavioral levers to win their “food pellets” in the form of passing grades.
I remain resolute in my approach, however, if for no reason other than that I find it inexcusable that graduates leave an institution of allegedly higher learning with an accredited degree without ever having to write a paper similar to the ones that were expected many times in many courses every semester when I was an undergraduate a half-century ago.
A few, I am delighted to say, take the assignments as opportunities to free themselves from the shackles of standardized testing and to write with enthusiasm about something they are actually interested in – leaving questions such as “What do you want in this paper?” in the dust. To these people I tip my hat, and from these people I receive great gratitude for allowing/encouraging/demanding work of something approaching scholarship.
My greatest reward, however, came from a young man who’d been in my class about 15 years before and whom I met quite by accident in a local shopping mall. I remembered him (vaguely) as a recalcitrant student who’d sat sullenly at the back of the class and had turned in work that belied his apparent resentment at even being there. He told me frankly that, at the time, he thought the course (modern political thought) was “total BS” and that it had taken a decade for him to appreciate fully not merely the content of the course, but also the attitude toward learning and thinking that he’d seemingly acquired as if by osmosis.
In time, I shall leave the classroom forever and carry with me the reassurance that he might not have been alone. As for the rest, I am confident that whatever effect, if any, I might have had will have done them no permanent injury and that (for whatever it may be worth) at least they will have been exposed to the possibility of genuine “critical thinking” – not just the Rubik’s Cube pedagogy that claims ownership of that much overused and usually meaningless phrase.
Apologies! My comment (immediately above) was in reply to a question that appeared in the original post (on “The Teaching Professor”) to which I thought I was responding; namely, “Have your students ever told you that your tests are too hard?”
Only with that in mind does my opening sentence beginning “All the time …” make sense.
Liked it anyway, Howard!
I’ve long felt that ultimately, students learn according to how they are tested.
The constant lament that I hear from professors is something like “oh I can’t get the students to think critically, or outside the box, or do this or that”. Then you look at their exams. Of course the students aren’t going to do critical thinking and this and that – because that stuff’s not on the test.
What’s on the test are canned questions that test memorization and ask the student to regurgitate back what they did in class. The grapevine is very efficient. Students know when that’s what you test.
I teach accounting. Most typically in accounting, tests are pulled from test banks, the solutions to which are available on the open market and can be bought. They frequently are the same ones as the homework but with different numbers. Even instructors who don’t use the actual test questions provided by the textbook company will merely take the same problems from the homework, or the same ones from what was shown in class, change a number or two (or sometimes don’t) and that’s what’s on the test.
Those aren’t my learning goals. I tell students right up front, “I want you to be able to take situations that you’ve never seen before, and figure it out.” Those are my favorite three words: “figure it out.”
When I talk to people, I’ll frequently hear something like “oh, I tried that. They all failed because they couldn’t think that way. Then they complained on the evaluations that my exams weren’t fair, so I went back to doing it this way!”
Well of course they are going to complain. Do you do any of that in class or the homework? If the homework and class are just regurgitation, and then you suddenly ask for more advanced thinking on the test, they are going to feel like you sneak-attacked them. And ya know something – they’re right. You did.
But wait a minute – they haven’t been taught how to think like that! Yes, I know. I cannot fix the whole education system. But we can build slowly and learn how to think this way by doing it every day. Are they all going to become great “outside the box” thinkers? No. I’m not foolish enough to expect that. Some will, but MOST can become at least PRETTY good at it after they spend 28 weeks of a 2-course sequence in the classroom with me.
But they are NEVER going to get there if I don’t
1) ask for that type of thinking, incrementally, in the preparatory readings and homework,
2) expect that type of thinking, incrementally, in class (I use Socratic Teaching and cold call with questions) and
3) PUT THAT, IN APPROPRIATE DOSES, ON THE EXAMS ALONG THE WAY!
I’m glad you brought up the importance of establishing clear, measurable learning objectives and aligning them to course assessments (i.e., tests, activities, assignments, projects, etc.). I know this firsthand with over 16 years of teaching experience in higher education and in my current role as an instructional designer at a research university. When I work with faculty and mention “learning objectives,” I frequently get the “I’m not messing with those” look. I typically see this “look” from faculty who don’t have clear learning outcomes to begin with. Objectives/goals that simply state “students will understand…” are not measurable. Then when faculty discuss how their students are frequently confused about what they are doing in assigned activities and why, or why they’re being tested on material not covered in class, my response is simple: are your learning outcomes (objectives) measurable and aligned to specific course activities? The biggest criticism of establishing learning objectives is that they take away creativity in the classroom. From experience, I beg to differ and find this type of thinking dated and invalid.
Like Howard Doughty above, I don’t implement tests. Although they have their place in the classroom, I’ve learned they don’t benefit students much other than recalling what they’ve read from their text and/or heard in lecture. Instead, I use a project-based approach to assessment. Essentially, I don’t want my students lingering on the lower levels of Bloom’s but engaging more at the higher levels. Learning needs to be authentic, in my view, if students are going to engage with the content and make meaning of their own learning. Even with this method of assessment, it is essential to have clear, measurable learning objectives in place. Thanks for this great thought-provoking read.