Get tips for combating test fraud

Posted by Chloe Mendonca

A large body of research supports the view that investing in learning, training and certification is critical to professional success. A projection from the Institute for Public Policy Research states that ‘between 2012 and 2022, over one-third of all jobs will be created in high-skilled occupations’. This growing demand for high-skilled roles is driving a rapid increase in professional qualifications and certifications.

Businesses are recognising the need to invest in skills, spending some £49 billion on training in 2011 alone [figures taken from CBI on skills], and assessments are a big part of this. They have become widely adopted as a way to evaluate the competence, performance and potential of employees and job candidates. In many industries, such as healthcare, life sciences and manufacturing, the stakes are high: life, limb and livelihood are on the line, so delivering such assessments safely and securely is vital.

Sadly, many studies show that the higher the stakes of an assessment, the greater the motivation to commit test fraud. We see many examples of content theft, impersonation and cheating in the news, so what steps can be taken to mitigate security risks? What impact do emerging trends such as online remote proctoring have on certification programs? How can you use item banking, secure delivery apps and reporting tools to enhance the defensibility of your assessments?

This October, Questionmark will deliver breakfast briefings in two UK cities, providing the answers to these questions. The briefings will include presentations and discussions on the tools and practices that can be used to create and deliver secure high-stakes tests and exams.

These briefings, due to take place in London and Edinburgh, will be ideal for learning, training and compliance professionals who are using or thinking about using assessments. We invite you to find out more and register for one of these events.

 

To Your Health! To err is human, but assessments can help

Posted by John Kleeman

To err is human, but how do errors happen? And can assessments help reduce them?

As part of my learning on assessments in health care, I’ve come across some interesting statistics on errors in UK hospitals from the Medicines and Healthcare products Regulatory Agency (MHRA). They have a system called SABRE which collects and analyzes reports of errors and serious adverse reactions relating to blood transfusion and blood handling in hospitals. Hospitals are encouraged to report errors so that better practice can be identified, and the MHRA gathered 788 reports of human error in 2011.

The MHRA did a root cause analysis of why the human errors happened. You can see their report here; I have slightly simplified their information to identify the six areas below:

  • Process or procedure incorrect (22%)
  • Procedural steps omitted (23%)
  • Concentration error (29%)
  • Training misunderstood – it covered the area of the error but was misunderstood (15%)
  • Training missing – out of date or did not cover the area (6%)
  • Poor communication/rushing (5%)

[Figure: Root causes of human error, charting the percentages listed above]

The root causes of errors may vary in other contexts, but it’s very interesting to see this data, and I suspect that these six areas are common causes of error in many organizations, even if the percentages vary. Taking things beyond the MHRA data (and without any MHRA endorsement), I am wondering where assessments can help reduce errors.

Let’s focus on the errors related to training issues and the incorrect or omitted procedures:

Some errors happen because training is misunderstood. Perhaps the training covers the area, but the employee didn’t understand it properly, can’t remember it, or cannot apply it on the job. Assessments check that people do indeed understand. They also reduce the forgetting curve and can present scenarios or problems that check whether people can apply the training to everyday situations.

Other errors happen because training differs from what the real job involves: competencies or tasks needed in the real world aren’t covered by the training or the post-training assessment. Job task analysis, which uses surveys to ask practitioners what really happens in a role, is a great way to correct this. See Doug Peterson’s article in this blog, or mine on Job Task Analysis in Questionmark, for more on this.

For errors where procedural steps are omitted, an observational assessment is an effective, proactive solution. See Observational Assessments—why and how for more on this, or see my earlier post in this series on competency testing in health care. Such assessments will also pick up some cases where the procedure or process itself is wrong.

What about the other 34% of errors, those arising from concentration failures (29%) and rushing or poor communication (5%)? I am sure there are ways in which assessments can help, and I would welcome your comments and ideas.