Mastering Your Multiple-Choice Questions

Posted By Doug Peterson

If you’re going to the Questionmark Users Conference in New Orleans this week (March 21 – 23), be sure to grab me and introduce yourself! I want to speak to as many customers as possible so that I can better understand how they want to use Questionmark technologies for all types of assessments.

This year Sharon Shrock and Bill Coscarelli are giving a pre-conference workshop on Tuesday called “Criterion-Referenced Test Development.” I remember three years ago at the Users Conference in Memphis when these two gave the keynote. They handed out a multiple-choice test that everyone in the room passed on the first try. The interesting thing was that the test was written in complete gibberish. There was not one intelligible word!

The point they were making is that multiple-choice questions need to be constructed correctly or you can give the answer away without even realizing it. Last week I ran across an article in Learning Solutions magazine called “The Thing About Multiple-Choice Tests…” by Mike Dickinson that explores the same topic. If you author tests and quizzes, I encourage you to read it.

A few of the points made by Mr. Dickinson:

  • Answer choices should be roughly the same length and kept as short as possible.
  • Provide a minimum of three answer choices and a maximum of five. Four is considered optimal.
  • Keep your writing clear and concise – you’re testing knowledge, not reading comprehension.
  • Make sure you place the correct answer in the first two positions as often as in the last two positions (a quick way to check this is sketched after the list).
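
Questionmark can shuffle answer choices for you, but if you maintain an item bank by hand, a small script makes positional bias easy to spot. Here is a minimal Python sketch; the item bank and its format are made up for illustration.

```python
from collections import Counter

# Hypothetical item bank: each question ID maps to the position (0-3)
# of its correct answer.
answer_key_positions = {
    "Q1": 0, "Q2": 0, "Q3": 1, "Q4": 0, "Q5": 3,
    "Q6": 0, "Q7": 2, "Q8": 0, "Q9": 1, "Q10": 0,
}

def position_bias_report(key_positions, n_choices=4):
    """Show how often the correct answer lands in each position."""
    counts = Counter(key_positions.values())
    total = len(key_positions)
    for pos in range(n_choices):
        label = chr(ord("A") + pos)
        print(f"Position {label}: {counts.get(pos, 0)} ({counts.get(pos, 0) / total:.0%})")
    first_half = sum(counts.get(p, 0) for p in range(n_choices // 2))
    print(f"Key appears in the first two positions {first_half / total:.0%} of the time.")

position_bias_report(answer_key_positions)
```

Run against this deliberately skewed bank, the report shows the key sitting in position A 60% of the time – exactly the kind of giveaway pattern the rule above is meant to prevent.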

One of the most interesting points in the article is about the correctness of the individual answer choices. Mr. Dickinson proposes the following:

  • One answer is completely correct.
  • Another answer is mostly correct, but not completely. This distinguishes those who really know the subject matter from those with a shallower understanding.
  • Another answer is mostly incorrect, but plausible. This helps reveal whether test takers are just guessing or racing through the test without giving it much thought.
  • The fourth answer is totally incorrect, but still fits within the context of the question.

Once you have an assessment made up of well-crafted multiple-choice questions, you can be confident that your analysis of the results will give you an honest picture of the learners’ knowledge as well as the learning situation itself – and this is where Questionmark’s reports and analytics really shine!

The Coaching Report shows you exactly how a participant answered each individual question. If your questions have been constructed as described above, you can quickly assess each participant’s understanding of the subject matter. For example, two participants might fail the assessment with the same score, but one consistently selects the “almost correct” answer while the other routinely selects the plausible or totally incorrect answer. In that case you know that the first participant has a better understanding of the material and may just need a little help, while the second participant is completely lost.
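
To make that concrete, here is a minimal sketch with invented data of how you might tally two failing participants’ answer patterns once each choice has been labeled along the lines above. This is just an illustration of the idea, not Questionmark’s Coaching Report.

```python
from collections import Counter

# Hypothetical coding of each participant's chosen option on ten questions:
# "correct", "almost" (mostly correct), "plausible" (mostly wrong but
# believable), "wrong" (totally incorrect).
responses = {
    "participant_1": ["almost", "correct", "almost", "almost", "correct",
                      "almost", "correct", "almost", "almost", "correct"],
    "participant_2": ["wrong", "correct", "plausible", "wrong", "correct",
                      "plausible", "correct", "wrong", "plausible", "correct"],
}

for name, picks in responses.items():
    tally = Counter(picks)
    score = tally["correct"] / len(picks)
    print(f"{name}: score {score:.0%}, picks {dict(tally)}")
    if tally["almost"] > tally["plausible"] + tally["wrong"]:
        print("  -> near misses dominate: solid grasp, just needs some help")
    else:
        print("  -> implausible picks dominate: guessing or genuinely lost")
```

Both participants score 40%, yet the tallies tell two very different stories – which is exactly the diagnostic value of well-constructed distractors.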

The Item Analysis report shows you how a question itself (or even the class or course materials) is performing.

  • If pretty much everyone is answering correctly, the question may be too easy.
  • If almost no one is getting the question correct, it may be too hard or poorly written.
  • If the majority of participants are selecting the same wrong answer:
    • The question may be poorly written.
    • The wrong answer may be flagged as correct.
    • There may be a problem with what is being taught in the class or course materials.
  • If the answer selection is evenly distributed among the choices:
    • The question may be poorly written and confusing.
    • The topic may not be covered well in the class or course materials.
  • If the participants who answer the question correctly also tend to pass the assessment, the question is a “good” question – it discriminates well and is testing what it’s supposed to test (see the sketch after this list).
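
Questionmark’s Item Analysis report computes these statistics for you, but if you are curious about the mechanics, here is a minimal Python sketch (with invented response data) of the two classic numbers behind the bullets above: item difficulty, the share of participants answering correctly, and item discrimination, the correlation between getting an item right and the total score. It uses statistics.correlation, which requires Python 3.10 or later.

```python
import statistics

# Hypothetical results matrix: one row per participant,
# one 0/1 entry per question (1 = answered correctly).
results = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
]

totals = [sum(row) for row in results]  # each participant's total score

for q in range(len(results[0])):
    item = [row[q] for row in results]
    difficulty = sum(item) / len(item)  # near 1.0 = too easy, near 0.0 = too hard
    # Item-total correlation: do participants who get this item right
    # also tend to score well overall?
    discrimination = statistics.correlation(item, totals)
    print(f"Q{q + 1}: difficulty {difficulty:.2f}, discrimination {discrimination:+.2f}")
```

In this made-up data, question 3 comes out with negative discrimination – the stronger participants get it wrong – flagging it as a candidate for the “poorly written” cases above.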

What question writing rules do you use when constructing a multiple-choice question? What other tips and tricks have you come up with for writing good multiple-choice questions? Feel free to leave your comments, questions and suggestions in the comments area!

Improving Multiple-Choice Questions

Posted By Doug Peterson

I’ve heard a lot of criticisms of multiple-choice questions (MCQs) over the years.

For instance:

  • They really only test recall, not true understanding
  • They test reading comprehension as much as, if not more than, actual knowledge
  • They’re unfair to people with dyslexia
  • They imprint incorrect answers in the learner’s brain, which may be recalled in error at some point in the future
  • It’s easy to infer the correct answer from the recognition of keywords or the length of answers

But many of these problems may have less to do with MCQs themselves than with poorly written MCQs!

Let’s face it – we all know that MCQs are pretty much the most-used question type on the planet. Why?

  • They’re easy to use – they work online, on paper, etc.
  • They’re easy to score
    • Completely objective – anyone can do it
    • Easy to automate with bubble sheets and scanners (see the scoring sketch after this list)
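
That objectivity is easy to demonstrate: scoring reduces to comparing each response with the key. A minimal sketch, with a made-up key and response set:

```python
# Hypothetical answer key and one participant's bubbled responses.
answer_key = {"Q1": "B", "Q2": "D", "Q3": "A", "Q4": "C", "Q5": "B"}
responses  = {"Q1": "B", "Q2": "A", "Q3": "A", "Q4": "C", "Q5": "D"}

# Each comparison is True/False; summing booleans counts the matches.
score = sum(responses.get(q) == key for q, key in answer_key.items())
print(f"Score: {score}/{len(answer_key)} ({score / len(answer_key):.0%})")
```

No judgment calls, no rubric – anyone (or any machine) running the comparison gets the same result.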

Given the popularity of multiple-choice questions, we can always do with good advice about how to improve them!

The latest issue of Learning Solutions Magazine has a very well-written article on writing better MCQs: “Writing Multiple-Choice Questions for Higher-level Thinking” by Mike Dickinson. He presents several effective techniques that I believe you’ll find very useful if you use MCQs in your assessments.

Measuring Learning Results: Eight Recommendations for Assessment Designers

Posted By Joan Phaup

Is it possible to build the perfect assessment design? Not likely, given the intricacies of the learning process! But a white paper available on the Questionmark Web site helps test authors respond effectively to the inevitable tradeoffs in order to create better assessments.

Measuring Learning Results, by Dr. Will Thalheimer of Work-Learning Research, considers findings from fundamental learning research and how they relate to assessment. The paper explores how to create assessments that measure how well learning interventions are preparing learners to retrieve information in future situations, which, as Will states, is the ultimate goal of training and education.

The eight bits of wisdom that conclude the paper give plenty of food for thought for test designers. You can download the paper to find out how Will arrived at them.

1. Figure out what learning outcomes you really care about. Measure them. Prioritize the importance of the learning outcomes you are targeting. Use more of your assessment time on high-priority information.

2. Figure out what retrieval situations you are preparing your learners for. Create assessment items that mirror or simulate those retrieval situations.

3. Consider using delayed assessments a week or month (or more) after the original learning ends—in addition to end-of-learning assessments.

4. Consider using delayed assessments instead of end-of-learning assessments, but be aware that there are significant tradeoffs in using this approach.

5. Utilize authentic questions, decisions, or demonstrations of skill that require learners to retrieve information from memory in a way that is similar to how they’ll have to retrieve it in the retrieval situations for which you are preparing them. Simulation-like questions that provide realistic decisions set in real-world contexts are ideal.

6. Cover a significant portion of the most important learning points you want your learners to understand or be able to utilize. This will require you to create a list of the objectives that will be targeted by the instruction.

7. Avoid factors that will bias your assessments. Or, if you can’t avoid them, make sure you understand them, mitigate them as much as possible, and report their influence. Beware of the biasing effects of end-of-learning assessments, pretests, assessments given in the learning context, and assessment items that are focused on low-level information.

8. Follow all the general rules about how to create assessment items. For example, write clearly, use only plausible alternatives (for multiple-choice questions), pilot-test your assessment items to improve them, and utilize psychometric techniques where applicable.