Sharing slides: 7 ways that online assessments help ensure compliance

Posted by John Kleeman

I recently gave a webinar with my colleague Brian McNamara for the SCCE (Society of Corporate Compliance and Ethics) on 7 ways that online assessments can help ensure compliance.

Here are the slides:

As you can see, we started the webinar by running through some general concepts on assessments including why it’s important that assessments are reliable and valid. Then we described seven key ways in which online assessments can help ensure compliance.

Here are the six pieces of good practice we advocated in the webinar:

  1. Use scenario questions – test above knowledge
  2. Use topic feedback
  3. Consider observational assessments
  4. Use item analysis
  5. Set a pass or cut score
  6. Use a code of conduct

To view a recorded version of this webinar, go to SCCE’s website to purchase the CD from SCCE (Questionmark does not receive any remuneration for this). Or, view a slightly shorter, complimentary version through Questionmark, which is scheduled for September. Go to our UK website or our US website for webinar details and registration.

To Your Health! Good practice for competency testing in laboratories

Posted by John Kleeman

In the world of health care, from pathology labs to medical practitioners to pharmaceutical manufacturers, a mistake can mean much more than a regulatory fine or losing money – people’s lives and health are at stake. Hospitals, laboratories and other medical organizations have large numbers of people and need effective systems to make them work well together.

I’ve been learning about how assessments are used in the health care sector. Here is the first of a series of blog articles on the theme of “learning from health care”.

In this article, I’d like to share some of what I’ve learned about how pathology and other health care laboratories approach competency assessment. Laboratory personnel have to work tirelessly and in an error-free way to give good quality, reliable pathology results. And mistakes cost – as the US College of American Pathologists (CAP) states in its trademarked motto, “Every number is a life”. I think there is a lot we can all learn from how they do competency testing.

[Diagram: Job Description → Task-specific Training → Competency Assessment → Competency Recognition]

A good place to start is with the World Health Organization (WHO). Their training on personnel management reminds us that “personnel are the most important laboratory resource”, and they promote competency assessment based on a job description and task-specific training, as shown in the diagram above.

WHO advise that competency assessments should be conducted regularly (usually once or twice a year) and they recommend observational assessments for many areas of competence:  “Observation is the most time-consuming way to assess employee competence, but this method is advised when assessing the areas that may have a higher impact on patient care.” Their key steps for conducting observational assessments are:

  • The assessor arranges a time for the assessment with the employee in advance
  • The assessment is done on routine work tasks
  • The assessment is recorded on a fixed checklist, with everyone assessed the same way, to avoid subjectivity and bias
  • The results of the assessment are recorded and kept confidential, but shared with the employee
  • If remediation is needed, an action plan involving retraining is defined and agreed with the employee

WHO’s guidance is international. Here is some additional guidance from the US: a 2012 presentation by CAP’s inspections team lead on competency assessment for pathology labs. This advice seems to make sense in a wider context:

  • If it’s not documented, it didn’t happen!
  • You need to do competency assessment on every person on every important system they work with
  • If employees who are not in your department or organization contribute significantly to the work product, you need to assess their competence too; otherwise the quality of your work product is affected
  • Competency assessment often contains quizzes/tests, observational assessments, review of records, demonstration of taking corrective action and troubleshooting
  • If people fail competency assessment, you need to re-train, re-assess and document that

If your organization relies on employees working accurately, I hope this provides value and interest to you. I will share more of what I’m learning in future articles.

Good practice from PwC in testing out of training

Posted by John Kleeman

I attended an excellent session at the Questionmark Users Conference in Baltimore by Steve Torkel and John LoBianco of PwC and would like to share some of their ideas on building diagnostic assessments.

PwC, like many organizations, creates tests that allow participants to “test out” of training if they pass. Essentially, if you already know the material being taught, then you don’t need to spend time in the training. So as shown in the diagram below – if you pass the test, you skip training and if you don’t, you attend it.

[PwC diagram: pass the test-out → skip training; fail → attend training]

The key advantage of this approach is that you save time when people don’t have to attend the training that they don’t need. Time is money for most organizations, and saving time is an important benefit.

Suppose, for example, you have 1,000 people who need to take some training that lasts 2 hours. This is 2,000 hours of people’s time. Now, suppose you can give a 20-minute test that 25% of people pass and therefore skip the training. The total time taken is about 333 hours for the test and 1,500 hours for the training, which adds up to roughly 1,833 hours. So having one-fourth of the test takers skip the training saves about 8% of the time that would have been required for everyone to attend the training.
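The arithmetic above can be sketched in a few lines (the figures are just the worked example, not real PwC data):

```python
# Worked example: time saved by letting people "test out" of training.
people = 1000
training_hours = 2.0
test_hours = 20 / 60          # a 20-minute test
pass_rate = 0.25              # 25% pass and skip the training

baseline = people * training_hours                          # everyone trains: 2,000 h
test_time = people * test_hours                             # everyone tests: ~333 h
training_time = people * (1 - pass_rate) * training_hours   # 750 still attend: 1,500 h
total = test_time + training_time                           # ~1,833 h
saving = (baseline - total) / baseline                      # ~8% of the baseline time

print(f"total = {total:.0f} h, saving = {saving:.1%}")
```

Changing `pass_rate` or `test_hours` shows how sensitive the saving is: a longer test or a lower pass rate can quickly eat into the benefit.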

In addition to saving time, using diagnostic tests in this way helps people who attend training courses focus their attention on areas they don’t know well and be more receptive to the training.

Some good practices that PwC shared for building such tests are:

  • Blueprint the test, ensuring that all important topics are covered in the usual way
  • Use item analysis to identify and remove poorly performing items and calibrate question difficulty
  • Make the test-out at least as difficult as the assessment at the end of the training course; in fact, PwC makes it more difficult
  • Make the test-out optional. If someone wants to skip it and just do the training, let them.
  • Tell people that if they don’t know the answers to questions, they can just skip them or finish the test early – there are no consequences for doing badly on the test
  • Only allow a single attempt. If someone fails the test, they must do the training
  • Pilot the test items well – PwC finds it useful to pilot questions using the comments facility in Questionmark
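As a rough illustration of the item-analysis practice above, here is a minimal sketch (with invented data and simplified statistics; this is not PwC’s or Questionmark’s actual method). It computes two classic item-analysis figures for each question: difficulty (the proportion of test takers who answered correctly) and point-biserial discrimination (how strongly the item correlates with the total score):

```python
def item_analysis(responses):
    """responses: one row per test taker; each entry 1 = correct, 0 = wrong."""
    n = len(responses)
    n_items = len(responses[0])
    totals = [sum(row) for row in responses]
    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / n
    results = []
    for i in range(n_items):
        scores = [row[i] for row in responses]
        p = sum(scores) / n  # difficulty: proportion correct
        # point-biserial: correlation of this item's score with the total score
        cov = sum((s - p) * (t - mean_total) for s, t in zip(scores, totals)) / n
        var_item = p * (1 - p)
        disc = cov / (var_item * var_total) ** 0.5 if var_item and var_total else 0.0
        results.append({"difficulty": p, "discrimination": disc})
    return results

# Example: 4 test takers, 3 questions (invented data)
data = [
    [1, 1, 0],
    [1, 0, 0],
    [0, 1, 1],
    [1, 1, 1],
]
for i, r in enumerate(item_analysis(data), 1):
    print(f"Q{i}: difficulty={r['difficulty']:.2f}, discrimination={r['discrimination']:.2f}")
```

Items with very high or very low difficulty, or with discrimination near zero (or negative), are the ones to flag for review or removal before the test is used to let people skip training.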

PwC has also introduced an innovative strategy for such tests, which they call a “half-time strategy”. This makes the process more efficient by allowing weaker test takers to finish the test sooner. I’ll explain the half-time strategy in a follow-up article soon.