How to use item analysis to get positive information from compliance assessments to feed into better training

Posted by John Kleeman

I was speaking to one of our customers recently about how they use Questionmark Perception for compliance, and I was struck by one of his comments: item analysis is useful not just for proving to the regulator that things are going well, but also for identifying weaknesses that you can react to.

Many companies in financial services, pharmaceuticals, utilities and other regulated industries need to assess their employees regularly to prove their competence to the regulator.

Employees who pass the tests can continue to do their jobs, and employees who fail need re-training. Compliance assessment managers naturally focus on making the assessment programme fair and defensible, so that they can prove to the regulator that their assessments are valid and reliable and that someone who passes is genuinely competent. Results can also be used to demonstrate to a failing candidate that they failed for fair reasons. As part of this process, it's usual to run an item analysis report that gives statistics on the questions in your tests, allowing you to weed out poorly performing questions and improve the validity of your tests.

With all the focus on the important mission of proving to the regulator that your employees are competent, and on dealing with failing employees, it's easy to miss some of the positive benefits compliance assessments can offer.

For instance, look at this item analysis report fragment. It shows a question asking the participant which product to recommend to a customer. Of the four choices provided, Product C is the correct one for the particular customer's needs. The question has a p value of 0.72, which means that 72% of participants answer it correctly. It also correlates very well with the total test score, as indicated by the 'Item-total correlation'. It appears to be a reasonable question to include in the assessment.

[Screenshot: item analysis report fragment]
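To make those two statistics concrete, here is a minimal sketch in Python of how they are typically computed from a matrix of scored responses. The 0/1 response matrix, the function name, and the use of a corrected (item-excluded) total are illustrative assumptions about standard item analysis, not how Questionmark Perception implements its report.

```python
import numpy as np

def item_statistics(scores: np.ndarray, item: int) -> tuple[float, float]:
    """scores: participants x items matrix of 0/1 marks per question (assumed format)."""
    item_scores = scores[:, item]
    # p value: the proportion of participants who answered this item correctly
    p_value = item_scores.mean()
    # Item-total correlation: Pearson correlation between the item score and
    # the total on the rest of the test (the item itself is excluded so it
    # doesn't inflate the correlation with its own contribution)
    rest_total = scores.sum(axis=1) - item_scores
    item_total_corr = np.corrcoef(item_scores, rest_total)[0, 1]
    return p_value, item_total_corr

# Illustrative data: 5 participants, 4 questions
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [1, 0, 0, 0],
])
p, r = item_statistics(responses, 0)
print(f"p value: {p:.2f}, item-total correlation: {r:.2f}")
```

A p value near 0.72, as in the report above, says the item is moderately easy; a healthy positive item-total correlation says that the participants who get it right also tend to do well on the test overall.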

Something to note, however, is that many participants are choosing Product A, including some high achievers in the upper group (6%). This could indicate confusion, either in the instruction or in the question wording, that caused high achievers to choose this particular incorrect answer. It is a flag to have the question reviewed to ensure that the wording and content are accurate, and also a flag to look into how this topic was taught in the course material, to check there were no breakdowns in the instruction behind this particular question.
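The upper/lower-group breakdown behind that 6% figure can be sketched in the same spirit. This is an illustrative sketch only: the 27% cut-off is a common convention in classical item analysis, and the data format and names here are assumptions, not Questionmark's.

```python
import numpy as np

def choice_breakdown(choices: np.ndarray, totals: np.ndarray,
                     top_frac: float = 0.27) -> None:
    """choices: the option each participant selected (e.g. 'A'..'D');
    totals: each participant's total test score (assumed inputs)."""
    n = len(totals)
    cut = max(1, int(round(n * top_frac)))
    order = np.argsort(totals)            # participants sorted by total score, ascending
    lower, upper = order[:cut], order[-cut:]
    for opt in sorted(set(choices)):
        up = (choices[upper] == opt).mean() * 100
        lo = (choices[lower] == opt).mean() * 100
        print(f"Option {opt}: upper group {up:.0f}%, lower group {lo:.0f}%")

# Illustrative data: 10 participants' selected options and total scores.
# Note that one of the top scorers picks option A, a distractor - exactly
# the kind of pattern that flags an item for review.
choices = np.array(list("CCACCBDCAC"))
totals = np.array([9, 8, 8, 7, 6, 5, 4, 4, 3, 2])
choice_breakdown(choices, totals)
```

A distractor that attracts mainly the lower group is doing its job; one that attracts the upper group, as Product A does here, points at the question or the training rather than the participants.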

Organizations want to be very careful about giving good advice to customers, and if high achievers are getting things wrong, this is an issue worth looking into. Taking this information to your training team as a potential issue, and working with them to correct it, will help ensure that consistent, accurate messaging goes out to customers. And when the regulator next comes round to visit, you can show them not only that your testing programme demonstrates your employees are competent, but also that you are using the results to improve your training.
