Applying the principles of item and test analysis to yield better results

Posted by Julie Delazyn

Using item and test analysis reports gives you valuable data that can help you improve your assessments – but how do you interpret that data and use it effectively?

This SlideShare presentation, put together by Sean Farrell, Senior Manager of Evaluation & Assessment at PricewaterhouseCoopers, explains the principles of item and test analysis and shows you how to make them work for the benefit of your organization.

(You can learn about Questionmark item analysis and test analysis reports here.)

Check out this presentation to see how the principles of good item and test writing play out in real life – and how heeding them results in better items and assessments.

PricewaterhouseCoopers wins assessment excellence award

Posted by Joan Phaup

Greg Line and Sean Farrell accept the Questionmark Getting Results Award on behalf of PwC

I’m blogging this morning from New Orleans, where we have just completed the 10th annual Questionmark Users Conference.

It’s been a terrific time for all of us, and we are already looking forward to next year’s gathering in 2013.

One highlight this week was yesterday’s presentation of a Questionmark Getting Results Award to PricewaterhouseCoopers.

Greg Line, a PwC Director in Global Human Capital Transformation, and Sean Farrell, Senior Manager of Evaluation & Assessment at PwC, accepted the award.

Getting Results Award

The award acknowledges PwC’s successful global deployment of diagnostic and post-training assessments to more than 100,000 employees worldwide, as well as 35,000 employees in the United States.

In delivering more than 230,000 tests each year, in seven different languages, PwC defines and adheres to sound practices in the use of diagnostic and post-training assessments as part of its highly respected learning and compliance initiatives. These practices include developing test blueprints, aligning test content with organizational goals, utilizing sound item writing techniques, carefully reviewing question quality and using Angoff ratings to set passing scores.
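For readers unfamiliar with the last of these practices: in the classic Angoff method, each judge estimates the probability that a minimally competent candidate would answer each item correctly, and the passing score is the sum, across items, of the average estimates. Here is a minimal Python sketch of that calculation in its general form – the standard technique, not PwC’s specific process, and the ratings shown are hypothetical:

```python
# A minimal sketch of a classic (unmodified) Angoff calculation: each judge
# estimates the probability that a minimally competent candidate answers
# each item correctly; the cut score is the sum, across items, of the
# mean estimate. This is the general method, not PwC's specific process.

def angoff_cut_score(ratings):
    """ratings[j][i] = judge j's probability estimate for item i."""
    n_judges = len(ratings)
    n_items = len(ratings[0])
    item_means = [
        sum(judge[i] for judge in ratings) / n_judges
        for i in range(n_items)
    ]
    return sum(item_means)

# Three judges rating a four-item test (hypothetical numbers):
ratings = [
    [0.90, 0.60, 0.75, 0.50],
    [0.85, 0.70, 0.70, 0.55],
    [0.95, 0.65, 0.80, 0.45],
]
cut = angoff_cut_score(ratings)
print(f"Cut score: {cut:.1f} of 4 items ({cut / 4:.0%})")  # 2.8 of 4 (70%)
```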

Adhering to these practices has helped PwC deploy valid, reliable tests for a vast audience – an impressive accomplishment that we were very pleased to celebrate at the conference.

So that’s it for 2012! But mark your calendar for March 3 – 6, 2013, when we will meet at the Hyatt Regency in Baltimore!

How many items are needed for each topic in an assessment? How PwC decides

Posted by John Kleeman

I really enjoyed last week’s Questionmark Users Conference in Los Angeles, where I learned a great deal from Questionmark users. One strong session, presented by Sean Farrell and Lenka Hennessy of PwC (PricewaterhouseCoopers), covered best practice in diagnostic assessments.

PwC prioritize the development of their people – they’ve been ranked #1 in Training Magazine’s Top 125 for the past three years – and part of this is their use of diagnostic assessments. They use diagnostic assessments for many purposes, but one is to allow people to test out of training: such assessments cover the critical knowledge and skills taught in a training course, and people who pass can skip the course and avoid unnecessary training. PwC justify the assessments by the time saved on training that isn’t needed – smart accountants making good use of billable time!

PwC use a five-stage model for diagnostic assessments: Assess, Design, Develop, Implement and Evaluate.

The Design phase includes blueprinting, starting from learning objectives. Other customers I speak to often ask how many questions or items they should include on each topic in an assessment, and I think PwC have a great approach to this. They rate all their learning objectives by Criticality and Domain size, as follows:

Criticality
1 = Slightly important but needed only once in a while
2 = Important but not used on every job
3 = Very important, but not used on every job
4 = Critical and used on every job

Domain size
1 = Small (less than 30 minutes to train)
2 = Medium (30-59 minutes to train)
3 = Large (60-90 minutes to train)

The number of items they use for each learning objective is the Criticality multiplied by the Domain size. So, for instance, if a learning objective is Criticality 3 (very important but not used on every job) and Domain size 2 (medium), they will include 6 items on this objective in the assessment. And if a learning objective is Criticality 1 and Domain size 1, they’d include only a single item.
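As a rough illustration, here is a minimal Python sketch of that rule; the validation limits come from the scales above, while the example blueprint and function name are my own, not PwC’s:

```python
# A minimal sketch of the item-count rule described above:
# items per learning objective = Criticality rating * Domain-size rating.

def items_for_objective(criticality, domain_size):
    """Number of assessment items to write for one learning objective."""
    if criticality not in (1, 2, 3, 4):
        raise ValueError("criticality is rated 1-4")
    if domain_size not in (1, 2, 3):
        raise ValueError("domain size is rated 1-3")
    return criticality * domain_size

# The two worked examples from the text:
assert items_for_objective(3, 2) == 6  # very important, medium domain
assert items_for_objective(1, 1) == 1  # slightly important, small domain

# Sizing a whole assessment: sum the counts across a hypothetical blueprint
# of (criticality, domain_size) ratings, one pair per learning objective.
blueprint = [(4, 3), (3, 2), (1, 1)]
total = sum(items_for_objective(c, d) for c, d in blueprint)
print(f"Assessment length: {total} items")  # 12 + 6 + 1 = 19 items
```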

I was very impressed by the professionalism of PwC and our other users at the conference. This seems a very useful way of deciding how many items to include in an assessment, and I hope passing on their insight is useful for you.

Twitter: A Job Analysis Tool?

Posted by Greg Pope

I was talking recently with Sean Farrell, a manager of Evaluation and Assessment at a global professional services firm. Sean mentioned an interesting idea that I’d like to share with you, and I’d like to know what you think of it.

Sean recently signed up for a Twitter account. Observing how easy it is for people to post updates and comments there, he began to wonder how an industrial psychologist could use Twitter. He found a Twitter application for his BlackBerry, searched through the options, and came across a function that would remind him to update his tweets on a timed schedule, say every 30 or 60 minutes. Then it hit him: perhaps Twitter could be a very useful tool for collecting job task information. This idea made sense to me, and I wanted to hear more about what Sean was thinking.

Job analysis is an important part of building valid assessments, but in practice it is very difficult to capture good job analysis information. One technique cited in textbooks is to have job incumbents complete a work journal that captures what they are doing at various times of the work day, but this technique is often viewed as too time-consuming and cumbersome for employees to complete. Sean thought: what if we asked employees to tweet every 15 or 30 minutes, explaining what they are doing at that moment? The person conducting the study could ‘follow’ all the employees and have an instant combined view of tasks completed throughout the day.
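As a purely hypothetical sketch of the analysis side, suppose the collected tweets were exported to a CSV file; the file name and column names below are my invention, and no real Twitter API calls are shown:

```python
# A hypothetical sketch of aggregating timed status updates into a task
# summary. Assumes the study's tweets have been exported to a CSV file
# with "employee", "timestamp" and "text" columns (illustrative names).

import csv
from collections import Counter

def task_frequencies(path):
    """Count how often each reported task appears across all employees."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Treat each tweet as one free-text task report; a real study
            # would code these entries into defined task categories.
            counts[row["text"].strip().lower()] += 1
    return counts

if __name__ == "__main__":
    for task, count in task_frequencies("job_tweets.csv").most_common(10):
        print(f"{count:4d}  {task}")
```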

If today’s emerging workforce is already familiar with Twitter and finds it a fun activity, then perhaps employees would not mind participating in a Twitter-based job analysis. I think this potential application of Twitter that Sean came up with is really interesting, and it could be a great way to augment the traditional collection of job task analysis information via surveys and other means. I want to throw it out there for discussion: does anyone else think this approach could have merit and want to try it?