What is the Single Best Way to Improve Assessment Security?

Posted by John Kleeman

[Image: three intersecting circles, one showing Confidentiality, one showing Availability and one showing Integrity]

Assessment results matter. Society relies on certifications and qualifications granted to those who pass exams. Organizations take important decisions about people based on test scores. And individuals work hard to learn skills and knowledge they can demonstrate in tests and exams. But to be able to trust assessment results, the assessment process needs to be secure.

Security is usefully broken down into three aspects: confidentiality, integrity and availability.

  • Confidentiality for assessments includes ensuring that questions are kept secure and that results are available only to those who should see them.
  • Integrity for assessments includes ensuring that the process is fair and robust, that the identity of the test-taker is confirmed and that cheating does not take place.
  • Availability includes ensuring that assessments can be taken when needed and that results are stored safely for the long term.

A failure of security, particularly one of confidentiality or integrity, reduces the usefulness and trustworthiness of test results. A confidentiality failure might mean that results are meaningless because some test-takers knew the questions in advance. An integrity failure means that some results might not be genuine.

So how do you approach making an assessment program secure? The best way to think about this is in terms of risk. Risk assessment is at the heart of all successful security systems and is central to the widely respected ISO 27001 and NIST 800-53 security standards. To focus resources on making an assessment program secure and reducing cheating, you need to enumerate the risks and assess each one’s probability (how likely it is to happen) and impact (how serious it is if it does). You then direct mitigation effort at the risks with higher probability and impact. This is shown illustratively in the diagram – the most important risks to deal with are those with high probability and high impact.

[Image: a risk matrix with four quadrants: high probability/high impact in red, low probability/low impact in green, and the two mixed quadrants (high probability/low impact and low probability/high impact) in yellow]
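To make this concrete, here is a minimal sketch of the kind of risk register this approach produces. The risk names, the 1–5 scales and the probability-times-impact score are illustrative assumptions for the example, not requirements of ISO 27001, NIST 800-53 or any particular program; the point is simply that a sortable register directs mitigation effort to the risks that matter most.

```typescript
// Illustrative risk register; the names, scales and scoring are example assumptions.
interface Risk {
  name: string;
  probability: number; // 1 (rare) to 5 (almost certain) - assumed scale
  impact: number;      // 1 (negligible) to 5 (severe)   - assumed scale
}

const risks: Risk[] = [
  { name: "Weak or stolen administrator passwords", probability: 4, impact: 5 },
  { name: "Questions leaked before the exam",       probability: 3, impact: 5 },
  { name: "Identity fraud (proxy test-taker)",      probability: 2, impact: 4 },
  { name: "Unauthorized reference materials",       probability: 3, impact: 2 },
];

// Score each risk as probability x impact and sort descending,
// so mitigation effort goes to the top of the list first.
const prioritized = risks
  .map(r => ({ ...r, score: r.probability * r.impact }))
  .sort((a, b) => b.score - a.score);

for (const r of prioritized) {
  console.log(`${String(r.score).padStart(2)}  ${r.name}`);
}
```

A simple spreadsheet does the same job; what matters is that every risk gets a probability and an impact, and that the list is revisited as the program changes.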

One reason why risk assessment makes sense is that it focuses effort on issues that matter. For example, the respected Verizon Data Breach Investigations Report for 2017 found that 81% of hacking-related breaches involved weak or stolen passwords. For most assessment programs, it will make sense to put in place measures like strong passwords and training on good password practice for assessment administrators and authors to help mitigate this risk.
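As a small, hedged illustration of what a “strong password” rule might look like in practice, the sketch below rejects passwords that fail a few basic checks. The specific rules – minimum length, character variety and a small blocklist – are assumptions chosen for the example, not Questionmark features or a complete password policy.

```typescript
// Example password checks; the rules below are illustrative assumptions,
// not a complete policy and not taken from any particular product.
const BLOCKLIST = new Set(["password", "123456", "qwerty", "letmein"]);

function passwordProblems(password: string): string[] {
  const problems: string[] = [];
  if (password.length < 12) problems.push("use at least 12 characters");
  if (!/[a-z]/.test(password) || !/[A-Z]/.test(password)) {
    problems.push("mix upper- and lower-case letters");
  }
  if (!/[0-9]/.test(password)) problems.push("include a digit");
  if (BLOCKLIST.has(password.toLowerCase())) problems.push("avoid common passwords");
  return problems; // an empty array means the password passes these basic checks
}

console.log(passwordProblems("letmein"));          // several problems reported
console.log(passwordProblems("Correct-Horse-42")); // []
```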

There is no “one size fits all” approach. Some risks will differ between assessment programs. To give a simple example, some organizations are concerned about people having reference materials or “cheat sheets” to look up answers in, and this can be an important risk to mitigate; whereas in other programs, exams are open book and this is not a concern. In some programs, identity fraud (where someone pretends to be someone else to take the exam for them) is a big concern; in others, the nature of the proctoring or the community makes this much less likely.

If you’re interested in learning more about the risk approach to assessment security, I’m presenting a webinar, “9 Risks to Test Security (and what to do about them)”, on 28th November, which:

  • Explains the risk approach to assessment security.
  • Details nine key risks to assessment security from authoring through delivery and into reporting.
  • Gives some real examples of the threats for each risk.
  • Suggests some mitigations and measures to consider to improve security.

You can see more details on the webinar and register here.

Assessment security matters because it impacts the quality and trustworthiness of assessment results. If you are not already doing it, adopting a risk-based approach, in which you enumerate, quantify and mitigate the risks to your program, is the single best way to improve assessment security.

A Test-Taker’s Guide to Technology-Based Testing

Posted by John Kleeman

The International Test Commission (ITC) is a federation of national associations of psychologists and other experts in the field of psychological testing. Several eminent Professors of Psychology sit on the commission’s council. They’ve recently produced a four-page guide, “A Test-Taker’s Guide to Technology-Based Testing”, which is freely available on their website.

If you are looking for a simple but authoritative guide to give to people new to computerized assessment, this could be a useful resource. (For a fuller guide to other standards for defensible assessments, see Greg Pope’s blog article on this subject.)

The ITC provides 10 guidelines that test-takers should expect of people preparing tests, and also 10 things expected of test-takers. I’m pleased to see that many of the improved capabilities in version 5 of Questionmark Perception directly help meet their guidelines.

One guideline requires that the hardware and software used to take a test be suitable. Questionmark Perception’s Browser Check, which checks the configuration of a participant’s browser to ensure compatibility (much improved in version 5), will be a key part of meeting this for many Questionmark users.
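As a rough sketch of the kind of pre-test capability check such a tool might run (this is not Questionmark’s actual Browser Check logic; the list of required features is an assumption for illustration):

```typescript
// A sketch of a pre-test browser capability check. The required features below
// are assumptions for illustration, not Questionmark's actual Browser Check rules.
function browserCheckReport(): { feature: string; ok: boolean }[] {
  return [
    { feature: "JavaScript enabled", ok: true }, // this code only runs if it is
    { feature: "Cookies enabled", ok: navigator.cookieEnabled },
    { feature: "Web storage available", ok: typeof window.localStorage !== "undefined" },
    { feature: "Screen at least 1024px wide", ok: window.screen.width >= 1024 },
  ];
}

for (const check of browserCheckReport()) {
  console.log(`${check.ok ? "PASS" : "FAIL"}  ${check.feature}`);
}
```

Running a report like this before the test starts lets problems be fixed while there is still time, rather than during the exam itself.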

Another guideline covers procedures for dealing with problems such as technical failures and other distractions. Our improved Save As You Go capability in version 5 helps minimize such problems: it uses HTML technology built into every browser to save the answer after every question, which makes it easy to resume an assessment if a technical problem interrupts it.
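The article does not spell out which HTML feature Save As You Go relies on, so the sketch below should be read as an assumption: it shows the general save-and-resume pattern using the browser’s localStorage, with a made-up storage key, rather than Questionmark’s implementation.

```typescript
// Illustrative save-and-resume pattern using localStorage. This is an assumed
// mechanism and a hypothetical storage key, not Questionmark's implementation.
const STORAGE_KEY = "assessment-answers";

type Answers = Record<string, string>; // questionId -> chosen answer

function saveAnswer(questionId: string, answer: string): void {
  const saved: Answers = JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "{}");
  saved[questionId] = answer;
  localStorage.setItem(STORAGE_KEY, JSON.stringify(saved)); // save after every question
}

function resumeAnswers(): Answers {
  // If the browser crashed or the connection dropped, earlier answers are still here.
  return JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "{}");
}

saveAnswer("q1", "B");
saveAnswer("q2", "D");
console.log(resumeAnswers()); // { q1: "B", q2: "D" }
```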

The guidelines also cover ensuring that adjustments are available for test-takers with disabilities. Perception version 5 provides many such adjustments, and our best practice guide for creating accessible assessments (available to our Software Support Plan customers) helps people use these tools effectively.

Other guidelines include providing practice test questions, ensuring security, ensuring confidentiality, and providing password-protected results and timely feedback, all of which can be done well in Perception.

Any Questionmark Perception user should be able to structure their assessments to meet the guidelines, and if you’re looking for a quick check that you’re following good practice, these guidelines are short, readable and freely available online.