Get tips for combatting test fraud

Posted by Chloe Mendonca

There is a lot of research to support the idea that stepping up investment in learning, training and certification is critical to professional success. A projection from the Institute for Public Policy Research states that ‘between 2012 and 2022, over one-third of all jobs will be created in high-skilled occupations’. This growing demand for high-skilled workers is driving a rapid increase in professional qualifications and certifications.

Businesses are recognising the need to invest in skills, spending some £49 billion on training in 2011 alone [figures taken from CBI on skills], and assessments are a big part of this investment. They have become widely adopted as a way to evaluate the competence, performance and potential of employees and job candidates. In many industries, such as healthcare, life sciences and manufacturing, the stakes are high: life, limb and livelihood are on the line, so delivering such assessments safely and securely is vital.

Sadly, many studies show that the higher the stakes of an assessment, the greater the potential and motivation to commit test fraud. We see many examples of content theft, impersonation and cheating in the news, so what steps can be taken to mitigate security risks? What impact do emerging trends such as online remote proctoring have on certification programs? How can you use item banking, secure delivery apps and reporting tools to enhance the defensibility of your assessments?

This October, Questionmark will deliver breakfast briefings in two UK cities, providing the answers to these questions. The briefings will include presentations and discussions on the tools and practices that can be used to create and deliver secure high-stakes tests and exams.

These briefings, due to take place in London and Edinburgh, will be ideal for learning, training and compliance professionals who are using or thinking about using assessments. We invite you to find out more and register for one of these events:

 

A whistle-stop tour round the A-model

Posted by John Kleeman

In an earlier blog, I described how the A-model starts with Problem, Performance and Program. But what is the A-model? Here’s an overview.

It’s easy to explain why it’s called the A-model. The model (shown on the left) traces the progress from Analysis & Design to Assessment & Evaluation. When following the A-model, you move from the lower left corner of the “A” up to the delivery of the Program (at the top or apex of the “A”) and then down the right side of the model to evaluate how the Problem has been resolved.

The key logic in the model is that you work out the requirements for success in Analysis & Design, and then you assess against them in the Assessment & Evaluation phase.

A-model overview

Analysis and Design

During Analysis & Design, you define the measures of success, including:

  • How can you know or measure if the problem is solved?
  • What performance are you looking for and how do you measure if it is achieved?
  • What are the requirements for the Program and how do you measure if they are met?

Defining these measures is crucial, because Assessment & Evaluation is carried out against what you’ve worked out in Analysis & Design. Assessments are useful in Analysis & Design too – for example, needs assessments, performance analysis surveys, job task analysis surveys and employee/customer opinion surveys.

Assessment & Evaluation of Program

A common way to evaluate the Program is to administer surveys covering perceptions of satisfaction, relevance and intent to apply the Program in the workplace. This is like a “course evaluation survey”, but it focuses on all the requirements for the Program. Evaluation of Program delivery also includes other factors identified in Analysis & Design that indicate whether the solution is delivered as planned (for example, whether the Program is delivered to the intended target audience at the right time).

Assessment & Evaluation of Performance

The A-model suggests that in order to improve Performance, you identify Performance Enablement measures – enablers that are necessary to support the performance, typically learning, a skill, a new attitude, performance support or an incentive.

You may be able to measure the performance itself using business metrics like the number of transactions processed or other productivity measures. Assessments can be useful to measure performance and performance enablers, for instance:

  • Tests to assess knowledge and skill
  • Observational assessments (e.g. a workplace supervisor assessing performance against a checklist)
  • 360-degree surveys of performance from peers, colleagues and managers

Measuring whether the problem is solved

How you measure whether the problem is solved will arise from the analysis and design done originally. A useful mechanism can be an impact survey or follow-up survey, but there should also be concrete business data to provide evidence that the problem has been solved or business performance improved.

Putting it all together: the A-model

The key to the A-model is putting it all together, as shown in the diagram below. You define the requirements for the Problem, the Performance, Performance Enablement and the Program. Then you assess the outcomes – for the delivery of the Program, for Performance Enablement and the Performance itself, and then for the Impact on the business.

A-model
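If it helps to see this logic in one place, here is a minimal, hypothetical sketch in Python of how the measures defined in Analysis & Design might be recorded and then checked during Assessment & Evaluation. The names (Measure, AModelPlan) and the example figures are illustrative assumptions for this post, not part of Dr. Aaron’s model or of any Questionmark product.

```python
# Illustrative sketch only: success measures are defined up front
# (Analysis & Design) and the same measures are scored later
# (Assessment & Evaluation). All names and numbers are hypothetical.

from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class Measure:
    """A measure of success agreed in Analysis & Design and scored later."""
    description: str                # e.g. "Average knowledge-test score"
    target: float                   # threshold set during Analysis & Design
    result: Optional[float] = None  # filled in during Assessment & Evaluation

    def is_met(self) -> bool:
        return self.result is not None and self.result >= self.target


@dataclass
class AModelPlan:
    """Measures grouped by the four things the model defines and then assesses."""
    program: List[Measure] = field(default_factory=list)
    performance_enablement: List[Measure] = field(default_factory=list)
    performance: List[Measure] = field(default_factory=list)
    problem_impact: List[Measure] = field(default_factory=list)

    def evaluate(self) -> Dict[str, bool]:
        """Check each area against the measures set during Analysis & Design."""
        areas = {
            "Program delivery": self.program,
            "Performance Enablement": self.performance_enablement,
            "Performance": self.performance,
            "Problem / business impact": self.problem_impact,
        }
        return {name: all(m.is_met() for m in measures)
                for name, measures in areas.items()}


# Example: one measure per area, scored after the Program has run.
plan = AModelPlan(
    program=[Measure("Delivered to the intended audience on time", 1.0, 1.0)],
    performance_enablement=[Measure("Average knowledge-test score", 0.8, 0.86)],
    performance=[Measure("Transactions per hour vs. target", 1.0, 1.1)],
    problem_impact=[Measure("Error rate reduced to target level", 1.0, 0.9)],
)
print(plan.evaluate())
# {'Program delivery': True, 'Performance Enablement': True,
#  'Performance': True, 'Problem / business impact': False}
```

The point of the sketch is simply that the same measures appear twice: once as targets agreed during Analysis & Design, and once as results scored during Assessment & Evaluation.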

I hope you enjoyed this whistle-stop tour. For a more thorough explanation of the A-model, read Dr. Bruce C. Aaron’s excellent white paper, available here (free with registration).