Applying the principles of item and test analysis to yield better results

Posted by Julie Delazyn

Using item and test analysis reports gives you valuable data that can help you improve your assessments – but how do you interpret that data and use it effectively?
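Two statistics you will see in almost any item analysis report are item difficulty (the proportion of test-takers answering the item correctly) and item discrimination (how well the item separates high scorers from low scorers). Here is a minimal sketch of how they can be computed; the response data is made up for illustration and is not Questionmark report output:

```python
# A minimal sketch of two classic item-analysis statistics:
# difficulty = proportion of test-takers answering the item correctly,
# discrimination = point-biserial correlation of the item with total score.
# The response matrix below is hypothetical.

from statistics import mean, pstdev

# Each row is one test-taker; each column is one item (1 = correct, 0 = wrong).
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
]

def item_difficulty(item):
    """Proportion of test-takers who got the item right."""
    return mean(row[item] for row in responses)

def item_discrimination(item):
    """Point-biserial correlation between the item and the total score."""
    totals = [sum(row) for row in responses]
    scores = [row[item] for row in responses]
    mt, ms = mean(totals), mean(scores)
    cov = mean((t - mt) * (s - ms) for t, s in zip(totals, scores))
    return cov / (pstdev(totals) * pstdev(scores))

for i in range(len(responses[0])):
    print(f"Item {i + 1}: difficulty={item_difficulty(i):.2f}, "
          f"discrimination={item_discrimination(i):.2f}")
```

Items with difficulty near 0 or 1, or with low or negative discrimination, are the ones an item analysis report typically flags for review.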

This SlideShare presentation, put together by Sean Farrell, Senior Manager of Evaluation & Assessment at PricewaterhouseCoopers, explains the principles of item and test analysis and shows you how to make them work for the benefit of your organization.

(You can learn about Questionmark item analysis and test analysis reports here.)

Check out this presentation to see how the principles of good item and test writing play out in real life – and how heeding them results in better items and assessments.

PricewaterhouseCoopers wins assessment excellence award

Posted by Joan Phaup

Greg Line and Sean Farrell accept the Questionmark Getting Results Award on behalf of PwC

I’m blogging this morning from New Orleans, where we have just completed the 10th annual Questionmark Users Conference.

It’s been a terrific time for all of us, and we are already looking forward to next year’s gathering in 2013.

One highlight this week was yesterday’s presentation of a Questionmark Getting Results Award to PricewaterhouseCoopers.

Greg Line, a PwC Director in Global Human Capital Transformation, and Sean Farrell, Senior Manager of Evaluation & Assessment at PwC, accepted the award.

Getting Results Award

The award acknowledges PwC’s successful global deployment of diagnostic and post-training assessments to more than 100,000 employees worldwide, including 35,000 in the United States.

In delivering more than 230,000 tests each year – in seven different languages – PwC defines and adheres to sound practices in the use of diagnostic and post-training assessments as part of its highly respected learning and compliance initiatives. These practices include developing test blueprints, aligning test content with organizational goals, applying sound item-writing techniques, carefully reviewing question quality and using Angoff ratings to set passing scores.
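For readers unfamiliar with the last of these practices: in the Angoff method, subject-matter experts rate, for each item, the probability that a minimally competent candidate would answer it correctly, and the cut score is derived from those ratings. Here is a minimal sketch of the arithmetic, with hypothetical judges and ratings rather than PwC’s actual data:

```python
# A minimal sketch of the classic Angoff standard-setting calculation.
# Each judge rates every item with the probability (0-1) that a minimally
# competent candidate answers it correctly; the passing score is the sum
# of the per-item average ratings. All numbers below are hypothetical.

from statistics import mean

# ratings[judge][item] = estimated probability of a correct answer
ratings = [
    [0.90, 0.60, 0.75, 0.50],  # judge 1
    [0.85, 0.55, 0.80, 0.45],  # judge 2
    [0.95, 0.65, 0.70, 0.55],  # judge 3
]

num_items = len(ratings[0])
item_means = [mean(judge[i] for judge in ratings) for i in range(num_items)]
cut_score = sum(item_means)  # expected raw score of a borderline candidate

print(f"Cut score: {cut_score:.2f} out of {num_items} "
      f"({cut_score / num_items:.0%})")
```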

Adhering to these practices has helped PwC deploy valid, reliable tests for a vast audience – an impressive accomplishment that we were very pleased to celebrate at the conference.

So that’s it for 2012! But mark your calendar for March 3 – 6, 2013, when we will meet at the Hyatt Regency in Baltimore!

How many items are needed for each topic in an assessment? How PwC decides

Posted by John Kleeman

I really enjoyed last week’s Questionmark Users Conference in Los Angeles, where I learned a great deal from Questionmark users. One strong session was on best practice in diagnostic assessments, by Sean Farrell and Lenka Hennessy from PwC (PricewaterhouseCoopers).

PwC prioritize the development of their people – they have been ranked #1 in Training Magazine’s Top 125 for the past three years – and diagnostic assessments are part of this. They use diagnostic assessments for many purposes, one of which is allowing people to test out of training. Such assessments cover the critical knowledge and skills taught in training courses: people who pass can skip a course they don’t need. Being smart accountants, PwC justify the assessments by the billable time saved when training isn’t needed!

PwC use a five-stage model for diagnostic assessments: Assess, Design, Develop, Implement and Evaluate.

The Design phase includes blueprinting, starting from learning objectives. Customers I speak to often ask how many questions or items they should include for each topic in an assessment, and PwC have a great approach to this. They rate all their learning objectives by Criticality and Domain size, as follows:

Criticality
1 = Slightly important but needed only once in a while
2 = Important but not used on every job
3 = Very important but not used on every job
4 = Critical and used on every job

Domain size
1 = Small (less than 30 minutes to train)
2 = Medium (30-59 minutes to train)
3 = Large (60-90 minutes to train)

The number of items they use for each learning objective is the Criticality multiplied by the Domain size. So for instance if a learning objective is Criticality 3 (very important but not used on every job) and Domain size 2 (medium), they will include 6 items on this objective in the assessment. Or if the learning objective is Criticality 1 and Domain size 1, they’d only have a single item.
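To make the arithmetic concrete, here is a minimal sketch of that blueprinting rule in code; the learning objectives are hypothetical examples, but the rule itself is the Criticality × Domain size multiplication described above:

```python
# A minimal sketch of the blueprinting rule described above:
# items per learning objective = Criticality (1-4) x Domain size (1-3).
# The learning objectives below are hypothetical examples.

# (learning objective, criticality, domain size)
blueprint = [
    ("Apply the revenue recognition standard", 4, 3),  # critical, large domain
    ("Apply the expense policy", 3, 2),                # very important, medium
    ("Navigate the time-entry tool", 1, 1),            # slightly important, small
]

total_items = 0
for objective, criticality, domain_size in blueprint:
    items = criticality * domain_size  # the PwC rule
    total_items += items
    print(f"{objective}: {items} items")

print(f"Total items on the assessment: {total_items}")
```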

I was very impressed by the professionalism of PwC and our other users at the conference. This seems a very useful way of deciding how many items to include in an assessment, and I hope passing on their insight is helpful to you.