Saving Time and Money with Diagnostic Testing: A SlideShare Presentation

Posted by Julie Delazyn

Having employees “test out” of corporate training using diagnostic assessments can save valuable resources and improve motivation, but there are many factors to be considered.

How do you ensure that time spent developing a robust diagnostic assessment provides value to the business?

A team from PwC explained their approach to this at the Questionmark 2013 Users Conference, and we’re happy to share the handouts from their presentation with you.

The Half-Time Strategy: Saving Your Organization Time and Money with Diagnostic Testing includes examples of diagnostic test-out assessments for business-critical self-study programs. It explains how diagnostic assessments can help organizations save training time while still maintaining quality. It also includes tips for building defensible assessments that people can take to test out of training – and for minimizing the time people spend taking them.

Questionmark Users Conferences offer many opportunities to learn from the experience of fellow learning and assessment professionals. Registration is already open for the 2014 Users Conference March 4 – 7 in San Antonio, Texas. Plan to be there!

PwC’s half-time strategy for ensuring effective diagnostic tests

Posted by John Kleeman

I shared in a previous blog article how PwC explained at the recent Questionmark Users Conference some good practices for diagnostic tests that allow people to test out of training. In this article, I’ll explain what they call their “half-time” strategy, which enables even greater time savings.

A downside of a test-out strategy is that everyone, including the people who fail, has to spend time taking the test. PwC have found that in some cases, particularly if the test is difficult to pass, having a test-out can actually increase the total time spent on pre-testing and training combined. Although the test may add learning value in itself, the aim of a test-out is to reduce time spent, not to increase it!

PwC’s “half-time strategy” deals with this issue by dividing the test into two halves: if a test taker does poorly enough in the first half that they no longer have any chance of passing, they don’t attempt the second half. This reduces the time many people spend taking the test, and so improves the overall efficiency of the process.

See the diagram below for a visual representation.

PwC chart 2

Essentially, if someone is doing badly enough on the first half that they can no longer pass, it’s best to tell them so right away and stop wasting their time rather than have them sit through the second half.

Suppose, for example, the pass score is 80% and you need to get 14 out of 17 questions right to be able to pass the test and test out of the training. Then you might divide the test into a first half with 9 questions and a second half with 8 questions. If you don’t score at least 6 out of 9 in the first half, then even if you get all the questions in the second half right, you couldn’t pass it. So people who score less than 6 in the first half of the test needn’t take the second half.
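The cutoff arithmetic above can be sketched in a few lines of Python (the function name is mine, not from PwC’s materials):

```python
def first_half_cutoff(total_items: int, pass_mark: int, first_half_items: int) -> int:
    """Minimum first-half score needed to still be able to pass overall.

    Even a perfect second half adds only (total_items - first_half_items)
    points, so anyone scoring below this cutoff cannot reach the pass mark.
    """
    second_half_items = total_items - first_half_items
    return pass_mark - second_half_items

# The worked example from the text: 17 items, pass mark of 14, first half of 9.
print(first_half_cutoff(17, 14, 9))  # 6 -- below 6, passing is impossible
```

Anyone at or above the cutoff continues to the second half; everyone else is stopped at half time.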

In my previous post, I gave an example of 1,000 people who need to take compliance training and could test out by taking a 20-minute test. It takes 333 hours for people to complete the test, but in the example I gave, time was saved overall by reducing time spent training.

Using the half-time strategy, suppose that 45% of those people drop out after the first half of the test — so they only need to spend 10 minutes on the test rather than 20 minutes. Then you save an extra 75 hours of test-taker time in aggregate (450 × 10 minutes). This approach also prevents learner frustration by not requiring someone to continue with a test they are going to fail.
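A quick check of that aggregate saving, using the figures from the text:

```python
# Figures from the text: 1,000 test takers, a 20-minute test, and 45% of
# takers stopped at half time (10 minutes in) under the half-time strategy.
takers = 1000
test_minutes = 20
half_time_dropout = 0.45

dropped = int(takers * half_time_dropout)      # 450 people stop at half time
minutes_saved = dropped * (test_minutes / 2)   # each saves 10 minutes
print(minutes_saved / 60)                      # 75.0 hours saved in aggregate
```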

The half-time strategy is very easy to implement in Questionmark and seems a useful technique for minimizing the total amount of time spent on diagnostic testing and subsequent training.

I hope this idea is useful for you.

You can see some other articles inspired by PwC’s practices on the Questionmark blog at Applying the principles of item and test analysis to yield better results and How many items are needed for each topic in an assessment? How PwC decides.

Good practice from PwC in testing out of training

Posted by John Kleeman

I attended an excellent session at the Questionmark Users Conference in Baltimore by Steve Torkel and John LoBianco of PwC and would like to share some of their ideas on building diagnostic assessments.

PwC, like many organizations, creates tests that allow participants to “test out” of training if they pass. Essentially, if you already know the material being taught, you don’t need to spend time in the training. So, as shown in the diagram below, if you pass the test you skip the training, and if you don’t, you attend it.

PwC chart 1

The key advantage of this approach is that you save time when people don’t have to attend the training that they don’t need. Time is money for most organizations, and saving time is an important benefit.

Suppose, for example, you have 1,000 people who need to take some training that lasts 2 hours. That is 2,000 hours of people’s time. Now suppose you can give a 20-minute test that 25% of people pass, allowing them to skip the training. The total time taken is 333 hours for the test and 1,500 hours for the training, which adds up to 1,833 hours. So having one-fourth of the test takers skip the training saves about 8% of the time that would have been required for everyone to attend the training.
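The arithmetic works out like this (a quick sketch; the variable names are mine):

```python
people = 1000
training_hours = 2.0
test_hours = 20 / 60   # a 20-minute test
pass_rate = 0.25       # 25% test out and skip the training

baseline = people * training_hours                       # 2000 hours if everyone attends
with_test_out = (people * test_hours                     # everyone takes the test
                 + people * (1 - pass_rate) * training_hours)  # 750 still attend

print(round(with_test_out))                              # ~1833 hours
print(f"{(baseline - with_test_out) / baseline:.0%}")    # ~8% of time saved
```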

In addition to saving time, using diagnostic tests in this way helps people who attend training courses focus their attention on areas they don’t know well and be more receptive to the training.

Some good practices that PwC shared for building such tests are:

  • Blueprint the test, ensuring that all important topics are covered in the usual way
  • Use item analysis to identify and remove poorly performing items and calibrate question difficulty
  • Make the test-out at least as difficult as the assessment at the end of the training course – in fact, PwC makes it more difficult
  • Make the test-out optional. If someone wants to skip it and just do the training, let them.
  • Tell people that if they don’t know the answers to questions, they can just skip them or finish the test early – there are no consequences for doing badly on the test
  • Only allow a single attempt. If someone fails the test, they must do the training
  • Pilot the test items well – PwC finds it useful to pilot questions using the comments facility in Questionmark

PwC has also introduced an innovative strategy for such tests, which they call a “half-time strategy”. This makes the process more efficient by allowing weaker test takers to finish the test sooner. I’ll explain the half-time strategy in a follow-up article soon.

Saving time and money with diagnostic assessments

Posted by Joan Phaup

It’s always a pleasure to talk with customers about the ways in which they use assessment to meet their business needs. My recent conversation with Dr. Steve Torkel, Director of Evaluation and Assessment at PwC, was no exception.

I asked Steve about the case study he and his colleagues Sean Farrell and John LoBianco will present at the Questionmark 2013 Users Conference in Baltimore March 3 – 6. It’s called The Half-Time Strategy: Saving Your Organization Time and Money with Diagnostic Testing – and I wanted to get the story behind that intriguing title.

Could you tell me about your job role at PwC and your involvement with assessments?

Dr. Steve Torkel

I am the U.S. evaluation and assessment leader. I lead a team that is responsible for creating assessments for a variety of programs across the U.S. firm and evaluating the impact of training courses throughout the firm.

Your session will focus on how diagnostic assessments play an important role at PwC. Why is that topic important to you?

Diagnostic assessments help us run training like a business. They help us put the right people in the right program at the right time, which is very different from saying, “We have these training programs; go to them.” It’s being respectful of our staff’s valuable time. When I say diagnostic assessments, I mean that before someone participates in a training program — a face-to-face or e-learning program — we give them the opportunity to in essence “test out” of it. If they already know the information, why should they take the time to participate in the program?

Your session title talks about a “half-time strategy.” Can you elaborate on that?

Imagine a football game, just before the end of the first half. The score is 42 nothing. Knowing that the team with no touchdowns is not going to come back from 42 nothing and win the game, wouldn’t you just as soon end the game right then?

We call our approach to diagnostic assessments a half-time strategy because we split the assessment into two halves. If someone performs poorly on the first half, we end it and don’t give them the opportunity to take the second half. They would be wasting their time and perhaps getting more frustrated. It benefits them to stop: It saves them time and frustration. It also has benefits for the firm: We are looking at the time people spend. If we can drive down the time people spend on an initiative and still maintain quality, it’s good for everyone.

What do you expect your listeners to take away from your session?

I expect them to take away a couple of things. One is to learn how to use this approach to help run training like a business. Another is for them to realize that they can still deliver a quality assessment if they go through the right procedures. Half the time does not equal half the quality! We are saying that if you go through a very structured procedure, you can deliver a high level of quality and save time for your learners.

We’ve created a spreadsheet that helps us estimate various factors: for example, how many questions we are going to use on an assessment, how long it will take people to complete it, and how much time it will save. We use this information to determine whether a diagnostic approach is going to make sense. If it’s only going to save a little time, it doesn’t make a lot of sense. So we’ll be sharing this interactive way to help people figure out if a diagnostic test makes sense in a particular situation.

What are you looking forward to at the conference?

Networking with other assessment leaders at organizations, to see how they are running their assessments – and to find out if anybody has any other business-oriented approaches to assessments. For me, that’s really the key! I’m looking for how people are using assessments to run their businesses more efficiently.

Register for the conference by January 18 to take advantage of early-bird discounts. Check out the conference agenda.

PricewaterhouseCoopers wins assessment excellence award

Posted by Joan Phaup

Greg Line and Sean Farrell accept the Questionmark Getting Results Award on behalf of PwC

I’m blogging this morning from New Orleans, where we have just completed the 10th annual Questionmark Users Conference.

It’s been a terrific time for all of us, and we are already looking forward to next year’s gathering in 2013.

One highlight this week was yesterday’s presentation of a Questionmark Getting Results Award to PricewaterhouseCoopers.

Greg Line, a PwC Director in Global Human Capital Transformation, and Sean Farrell, Senior Manager of Evaluation & Assessment at PwC, accepted the award.

The award acknowledges PwC’s successful global deployment of diagnostic and post-training assessments to more than 100,000 employees worldwide, including 35,000 employees in the United States.

In delivering more than 230,000 tests each year — in seven different languages — PwC defines and adheres to sound practices in the use of diagnostic and post-training assessments as part of its highly respected learning and compliance initiatives. These practices include developing test blueprints, aligning test content with organizational goals, utilizing sound item writing techniques, carefully reviewing question quality and using Angoff ratings to set passing scores.

Adhering to these practices has helped PwC deploy valid, reliable tests for a vast audience – an impressive accomplishment that we were very pleased to celebrate at the conference.

So that’s it for 2012! But mark your calendar for March 3 – 6, 2013, when we will meet at the Hyatt Regency in Baltimore!

How many items are needed for each topic in an assessment? How PwC decides

Posted by John Kleeman

I really enjoyed last week’s Questionmark Users Conference in Los Angeles, where I learned a great deal from Questionmark users. One strong session was on best practice in diagnostic assessments, by Sean Farrell and Lenka Hennessy from PwC (PricewaterhouseCoopers).

PwC prioritize the development of their people — they’ve been ranked #1 in Training Magazine’s Top 125 for the past three years — and part of this is their use of diagnostic assessments. They use diagnostic assessments for many purposes, one of which is to allow a test-out. Such diagnostic assessments cover the critical knowledge and skills taught in training courses. If people pass the assessment, they can avoid unnecessary training and not attend the course. They justify the assessments by the time saved from training not needed – as smart accountants, they count the billable time saved!

PwC use a five-stage model for diagnostic assessments: Assess, Design, Develop, Implement and Evaluate, as shown in the diagram on the right.

The Design phase includes blueprinting, starting from learning objectives. Other customers I speak to often ask how many questions or items they should include on each topic in an assessment, and I think PwC have a great approach to this. They rate all their learning objectives by Criticality and Domain size, as follows:

Criticality
1 = Slightly important but needed only once in a while
2 = Important but not used on every job
3 = Very important, but not used on every job
4 = Critical and used on every job

Domain size
1 = Small (less than 30 minutes to train)
2 = Medium (30-59 minutes to train)
3 = Large (60-90 minutes to train)

The number of items they use for each learning objective is the Criticality multiplied by the Domain size. So for instance if a learning objective is Criticality 3 (very important but not used on every job) and Domain size 2 (medium), they will include 6 items on this objective in the assessment. Or if the learning objective is Criticality 1 and Domain size 1, they’d only have a single item.
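The rule is simple enough to express directly (the function name is mine, not PwC’s):

```python
def items_for_objective(criticality: int, domain_size: int) -> int:
    """Items per learning objective = Criticality (1-4) x Domain size (1-3),
    per the rating scales described in the text."""
    assert 1 <= criticality <= 4 and 1 <= domain_size <= 3
    return criticality * domain_size

print(items_for_objective(3, 2))  # 6 items: very important, medium domain
print(items_for_objective(1, 1))  # 1 item: low criticality, small domain
print(items_for_objective(4, 3))  # 12 items: the maximum for one objective
```

One nice property of the multiplication is that the item count grows with both how critical the objective is and how much material it covers, so the blueprint weights the test toward what matters most.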

I was very impressed by the professionalism of PwC and our other users at the conference. This seems a very useful way of deciding how many items to include in an assessment, and I hope passing on their insight is useful for you.