Test above knowledge: Use scenario questions

Posted by John Kleeman

Here’s the one piece of advice I’d give above all others to anyone creating quizzes, tests or exams: Test above knowledge.

You may be familiar with Bloom’s taxonomy of learning objectives, which is shown in the diagram below. At the lowest level is Knowledge; questions that test Knowledge ask for simple remembering of facts. At the highest level is Evaluation; questions that test Evaluation require participants to use judgement.

Bloom pyramid

It’s great if you can write questions that assess at the highest levels of Bloom’s Taxonomy, but many organizations have a lot to gain by asking questions at any of the levels above Knowledge in the pyramid. Although there are times when testing facts can be useful, it’s usually better to test the comprehension or application of those facts.

In today’s world, where facts are easily googleable, it is the use and understanding of those facts in the real world that is genuinely useful. By testing above knowledge, you are checking not just that the participant knows something but that they can apply it in some scenario. This is more valid and more realistic — and for most applications it is also more useful.

Here is a simple example to illustrate the point:

What does a yellow traffic light mean?

  • Stop
  • Go
  • Caution

This is purely a factual, knowledge question.

But here, the question requires the respondent to apply the meaning of a yellow traffic light to an actual situation:

If you are driving toward an intersection and the light turns from green to yellow, what should you do?

  • Speed up and cross the intersection
  • Stop suddenly
  • Stop gradually

This is a very simple example, but I hope it makes you realize that converting factual questions to scenarios is not very hard.

I’d encourage you to use scenarios in your questions: Ask test-takers to apply what they know to actual situations, not just prove that they remember some facts.

How many items are needed for each topic in an assessment? How PwC decides

Posted by John Kleeman

I really enjoyed last week’s Questionmark Users Conference in Los Angeles, where I learned a great deal from Questionmark users. One strong session was on best practice in diagnostic assessments, by Sean Farrell and Lenka Hennessy from PwC (PricewaterhouseCoopers).

PwC prioritize the development of their people — they’ve been ranked #1 in Training Magazine’s Top 125 for the past three years — and part of this is their use of diagnostic assessments. They use diagnostic assessments for many purposes, but one is to allow a test-out. Such diagnostic assessments cover the critical knowledge and skills taught in training courses; people who pass the assessment can skip the course and avoid unnecessary training. They justify the assessments by the billable time saved when unneeded training is avoided — smart accounting!

PwC use a five-stage model for diagnostic assessments: Assess, Design, Develop, Implement and Evaluate, as shown in the diagram on the right.

The Design phase includes blueprinting, starting from learning objectives. Other customers I speak to often ask how many questions or items they should include on each topic in an assessment, and I thought PwC have a great approach for this. They rate all their learning objectives by Criticality and Domain size, as follows:

Criticality
1 = Slightly important but needed only once in a while
2 = Important but not used on every job
3 = Very important, but not used on every job
4 = Critical and used on every job

Domain size
1 = Small (less than 30 minutes to train)
2 = Medium (30-59 minutes to train)
3 = Large (60-90 minutes to train)

The number of items they include for each learning objective is the Criticality multiplied by the Domain size. So, for instance, if a learning objective is Criticality 3 (very important but not used on every job) and Domain size 2 (medium), they will include 3 × 2 = 6 items on this objective in the assessment. If a learning objective is Criticality 1 and Domain size 1, they’d include only a single item.
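The rule above can be sketched in a few lines of Python. This is a minimal illustration of the multiplication rule as described in the session; the function and table names are my own, not PwC’s:

```python
# Sketch of the blueprinting rule: items per learning objective =
# Criticality rating x Domain size rating. Names here are illustrative.

CRITICALITY = {
    1: "Slightly important but needed only once in a while",
    2: "Important but not used on every job",
    3: "Very important, but not used on every job",
    4: "Critical and used on every job",
}

DOMAIN_SIZE = {
    1: "Small (less than 30 minutes to train)",
    2: "Medium (30-59 minutes to train)",
    3: "Large (60-90 minutes to train)",
}

def items_for_objective(criticality: int, domain_size: int) -> int:
    """Number of assessment items for one learning objective."""
    if criticality not in CRITICALITY:
        raise ValueError("criticality must be 1-4")
    if domain_size not in DOMAIN_SIZE:
        raise ValueError("domain size must be 1-3")
    return criticality * domain_size

# The two examples from the text:
print(items_for_objective(3, 2))  # 6 items
print(items_for_objective(1, 1))  # 1 item
```

With this scheme the blueprint ranges from 1 item (Criticality 1, Domain size 1) up to 12 items (Criticality 4, Domain size 3) per learning objective.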

I was very impressed by the professionalism of PwC and our other users at the conference. This seems a very useful way of deciding how many items to include in an assessment, and I hope passing on their insight is useful for you.