Creating Better Multiple-Choice Tests for Online Courses

Multiple-choice tests are commonly used to assess achievement of learning objectives because they can be efficient. Despite their widespread use, they are often poorly designed. Badly written multiple-choice tests do damage in classroom-based and online courses alike, but online learners typically contend with more challenges to begin with, so a poor assessment adds insult to injury.

Some pluses and minuses of multiple-choice tests
Multiple-choice tests can be developed for many different types of content. If the test items are well written, they can measure achievement at multiple levels of learning objectives, from simple recall and comprehension to more complex skills such as analyzing a situation, applying principles, discriminating, interpreting, judging relevance, and selecting the best solution.

Multiple-choice tests are easy to administer and can be improved using item analysis to eliminate or correct poorly written items (sketched below). They are easy to score and less susceptible to scoring subjectivity than short-answer or essay items. They don't measure writing ability (which can be a plus or a minus) and often inadvertently assess reading ability (another potential plus or minus, though in practice usually a minus). They are also more susceptible to guessing than most other types of learning assessment: with four alternatives, for example, blind guessing alone yields an expected score of 25 percent.
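
Item analysis is normally done in a spreadsheet or statistics package, but the core calculations are simple. Below is a minimal sketch in Python, assuming a hypothetical `responses` matrix of 0/1 item scores (one row per learner); the function names are illustrative, and the 27 percent upper/lower group split is a common convention for the discrimination index, not something prescribed in this article. Items with very high or very low difficulty, or with discrimination near zero or negative, are candidates to revise or discard.

```python
# Classical item analysis on a hypothetical 0/1 response matrix:
# one row per learner, one column per test item (1 = answered correctly).

def item_difficulty(responses):
    """Proportion of learners answering each item correctly (higher = easier)."""
    n_learners = len(responses)
    n_items = len(responses[0])
    return [sum(row[i] for row in responses) / n_learners for i in range(n_items)]

def item_discrimination(responses, group_fraction=0.27):
    """Upper-lower discrimination index: proportion correct among high scorers
    minus proportion correct among low scorers, computed per item."""
    ranked = sorted(responses, key=sum, reverse=True)  # best total scores first
    k = max(1, round(len(ranked) * group_fraction))    # size of each comparison group
    top, bottom = ranked[:k], ranked[-k:]
    n_items = len(responses[0])
    return [
        sum(row[i] for row in top) / k - sum(row[i] for row in bottom) / k
        for i in range(n_items)
    ]

# Toy example: six learners, three items.
responses = [
    [1, 1, 0],
    [1, 1, 1],
    [1, 0, 0],
    [0, 1, 0],
    [1, 0, 1],
    [0, 0, 0],
]
print([round(p, 2) for p in item_difficulty(responses)])      # [0.67, 0.5, 0.33]
print([round(d, 2) for d in item_discrimination(responses)])  # [1.0, 0.5, 0.5]
```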

Multiple-choice tests are often promoted as “objective.” Although scoring them doesn’t involve subjectivity, humans do judge what questions to ask and how to ask them. These are very subjective decisions!

When multiple-choice is appropriate
Multiple-choice test items call for learners to select an answer or answers from a list of alternatives. Because they do not ask learners to construct an answer or actually perform, they tend to measure knowing about rather than knowing how.

Multiple-choice items cannot assess learners’ ability to construct, build, or perform. They are best used for objectives that can be assessed by selecting the correct answer from a list of choices rather than supplying the answer or performing a task. Think for a moment about how different selecting is from constructing and performing and you’ll recognize the limitations of multiple-choice testing.
Writing better multiple-choice items
Confusing or ambiguous language and poorly written or implausible distractors are among the most common errors in multiple-choice test items. Here's a to-do list to help you avoid these mistakes and write better items.

  • Provide clear directions. Group questions with the same directions together.
  • Include as much of the question as possible in the stem, and reduce wordiness of alternatives.
  • Include words in the stem that would otherwise be repeated in each of the alternatives.
  • Make sure language is precise, clear, and unambiguous. Include qualifiers as needed, but don’t add unnecessary information or irrelevant sources of difficulty.
  • Avoid highly technical language or jargon unless technical knowledge and jargon are part of the assessment.
  • Avoid negatives and these words: always, often, frequently, never, none, rarely, and infrequently. When a negative is used, it should be CAPITALIZED, underlined, or bolded to call attention to it.
  • Don’t use double negatives or double-barreled questions (asking two things in one question).

Although it takes time and practice to write good items, the time and effort are well spent.

Patti Shank, PhD, CPT, is a widely recognized instructional designer and technologist, writer, and author who builds and helps others build good online courses and facilitate learning. She can be reached through her website: http://www.learningpeaks.com/.
