Do privacy laws mean you have to delete a test result if a test-taker asks you to?

Posted by John Kleeman

We have all heard about the “right to be forgotten”, which allows individuals to ask search engines or other organizations to delete their personal data. This right was made stronger in Europe in 2018, when the General Data Protection Regulation (“GDPR”) entered into force, and is gradually becoming recognized in some form in other jurisdictions, for example in the new California privacy law, the California Consumer Privacy Act (“CCPA”).

Customers often ask me what happens if test-takers ask them to delete their results for tests and exams. Let’s take an example:

  • Your organization runs a global certification program for third party candidates;
  • One of your European candidates takes an exam in your program;
  • The candidate then reaches out to you and asks for all their personal data to be deleted.

What do you need to do? Do you verify the requester’s identity and delete the data? Or can you hold onto it and deny the request if you have reasons to – for example, if you want to enforce retake policies or are concerned about possible cheating? Here is an answer based on typical circumstances in Europe (but please get advice from your lawyer and/or privacy adviser regarding your specific circumstances).

Under the GDPR, although as a general principle you do need to delete personal data if retaining it for a longer period cannot be justified for the purposes for which it was initially collected or another permitted lawful purpose, there are exemptions which may allow you to decline an erasure request.

For example, you may refuse to delete personal data in response to a request from an individual if retaining the data is necessary to establish, exercise or defend against legal claims. If you rely on this exemption, you must be satisfied that retaining the data really is necessary, and you must use the data only for this purpose, but you do not need to delete it.

Another broader reason for refusing to delete data may arise if you articulate in advance of the candidate taking the exam that processing is performed based on the data controller’s (usually the test sponsor’s) legitimate interests. The GDPR permits processing based on legitimate interests if you balance such interests against the interests, rights and freedoms of an individual. The GDPR also specifically says that such legitimate interests may be used to prevent fraud (and this obviously includes test fraud).

If you want to be able to refuse to delete information on this basis:

  • You should first conduct and document a legitimate interests assessment which justifies the purpose of the processing, considers whether the processing is really necessary, and balances this against the individual’s interests. (See this guidance from the UK Information Commissioner for more information);
  • You should communicate to candidates in advance, for example in your privacy policy or candidate agreement, that you are processing their data based on explained legitimate interests;
  • If you then later receive a deletion request, you should carefully analyse whether notwithstanding the request you have overriding legitimate interests to retain the data;
  • If you conclude that you do have such an interest, you should only retain the data for as long as that continues to be the case and only keep the data to which the overriding legitimate interest applies. This might mean that you have to delete some data, but can keep the rest.
  • You also need to let the individual know about your decision promptly, providing them with information including their right to complain to the appropriate supervisory authority if they are unhappy with your decision.

The CCPA also has some exceptions where you do not need to delete data, including where you need to retain the data to prevent fraudulent activity.

In general, you may well want to follow delete requests, but if you have good reason not to, you may not need to.

For further information, there is some useful background in the Association of Test Publishers (ATP) GDPR Compliance Guide, in other ATP publications and in Questionmark’s white paper “Responsibilities of a Data Controller When Assessing Knowledge, Skills and Abilities” obtainable at https://www.questionmark.com/wc/WP-ENUS-Data-Controller.

I hope this article helps you if this issue arises for you.

NEW: Listen Now to “Unlocking the Potential of Assessments” Podcast

Posted by Kristin Bernor

Welcome to Questionmark’s new podcast series, “Unlocking the Potential of Assessments.” This monthly series delves into creating, delivering and reporting on valid and reliable assessments.

“Unlocking the Potential of Assessments” will offer advice and thought leadership to those just starting out with assessments, those who have long been in the industry and anyone with a keen interest in the future of assessments, led by host John Kleeman – Questionmark’s Founder and Executive Director.

In our first episode, John spoke with assessment luminary Jim Parry, Owner and Chief Executive Manager of Compass Consultants. Jim has over 40 years’ experience as a course designer, developer and instructor. He served over 22 years with the United States Coast Guard and then spent nearly 12 years there as a civilian employee, working as the Test Development and e-testing Manager at a major training command.

During his tenure, Jim guided the move from paper to online testing for the entire Coast Guard and developed the first ever Standard Operating Procedure, a document of over 300 pages, which established policy and guidelines for all testing within the Coast Guard. He is a consulting partner with Questionmark and has presented numerous best practice webinars.

Subscribe to the podcast series today and join Questionmark on our quest to discover and examine the latest in best practice guidance with a wide array of guests – including assessment luminaries, industry influencers, SMEs and customers – and discuss “all things assessment.”

Don’t miss out. For our next episode, John will be speaking with our very own Steve Lay, Questionmark’s product manager and an expert on scalable, computerized assessment and integration between systems. Subscribe today so you don’t miss it.

You can subscribe to the series by visiting our podcast page and selecting your preferred player.

Please reach out to me with any suggestions of further topics you’d like explored or assessment luminaries you want to hear from.

10 Reasons Why Frequent Testing Makes Sense

Posted by John Kleeman

It matters to society, organizations and individuals that test results are trustworthy. Tests and exams are used to make important decisions about people, and each failure of test security reduces that trustworthiness.

There are several risks to test security, but two important ones are identity fraud and getting help from others. With identity fraud, someone asks a friend to take the test for them or pays a professional cheater to take the test and pretend to be them. With getting help from others, a test-taker subverts the process and gets a friend or expert to help them with the test, feeding them the right answers. In both cases, this makes the individual test result meaningless and detracts from the value and trustworthiness of the whole assessment process.

There are lots of mitigations to these risks – checking identity carefully, having well trained proctors, using forensics or other reports and using technical solutions like secure browsers – and these are very helpful. But testing more frequently can also reduce the risk: let me explain.

Suppose you need to pass just a single exam to reach an important career step – a certification, qualification or other job requirement. The incentive to cheat on that one test is large. But if you have a series of smaller tests over a period, it’s more hassle for a test-taker to commit identity fraud or get help from others each time. He or she would have to pay the proxy test-taker several times, and make sure the same person is available in case photos are captured. For expert help, the test-taker must also reach out more often and evade whatever security there is each time.

There are other benefits too; here is a list of ten reasons why more frequent testing makes sense:

  1. More reliable. More frequent testing contributes to more reliable testing. A single large test is vulnerable to measurement error if a test taker is sick or has an off day, whereas that is less likely to impact frequent tests.
  2. More up to date. With technology and society changing rapidly, more frequent tests can make tests more current. For instance, some IT certification providers create “delta” tests measuring understanding of their latest releases and encourage people to take quarterly tests to ensure they remain up to date.
  3. Less test anxiety. Test anxiety can be a big challenge to some test takers (see Ten tips on reducing test anxiety for online test-takers), and more frequent tests means less is at stake for each one, and so may help test takers be less anxious.
  4. More feedback. More frequent tests give feedback to test takers on how well they are performing and allow them to identify training or continuing education to improve.
  5. More data for the testing organization. In today’s world of business intelligence and analytics, there is potential for correlations and other valuable insight from the data of people’s performance in a series of tests over time.
  6. Encourages test takers to target retention of learning. We all know of people who cram for an exam and then forget it afterwards. More frequent tests encourage people to plan to learn for the longer term.
  7. Encourages spaced out learning. There is strong evidence that learning at spaced out intervals makes it more likely knowledge and skills will be retained. Periodic tests encourage revision at regular intervals and so make it more likely that learning will be remembered.
  8. Testing effect. There is also evidence that tests themselves give retrieval practice and aid retention and more frequent tests will give more such practice.
  9. More practical. With online assessment software and online proctoring, it’s very practical to test frequently, and no longer necessary to bring test takers to a central testing center for one-off large tests.
  10. Harder to cheat. Finally, as described above, more frequent testing makes it harder to use identity fraud or to get help from others, which reduces cheating.
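
The reliability argument in reason 1 can be made concrete with a little statistics: if each sitting carries independent measurement noise, averaging over more sittings shrinks the error by the square root of their number. Here is a toy sketch in Python, with made-up numbers:

```python
import math

# Toy illustration with made-up numbers: the standard error of a candidate's
# average score shrinks as the number of independent test sittings grows,
# so a series of smaller tests is less vulnerable to a single "off day".
SIGMA = 10.0  # assumed per-sitting measurement noise, in score points

def standard_error(n_tests: int, sigma: float = SIGMA) -> float:
    """Standard error of the mean score across n independent sittings."""
    return sigma / math.sqrt(n_tests)

for n in (1, 4, 9):
    print(f"{n} sitting(s): standard error {standard_error(n):.2f}")
```

With these illustrative numbers, four sittings halve the measurement noise of a single sitting, and nine sittings cut it to a third.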

I think we’re seeing a slow paradigm shift from larger testing events that happen at a single point in time to smaller, online testing events happening periodically. What do you think?

5 Things I Learned at the European Association of Test Publishers Conference Last Week

Posted by John Kleeman

I just attended the Association of Test Publishers’ European conference (EATP), held last week in Madrid, and wanted to share some of what I learned.

The Association of Test Publishers (ATP) is the trade association for the assessment industry and promotes good practice in assessment. Questionmark have been members for a long time and I am currently on their board of directors. The theme of the conference was “Transforming Assessments: Challenge. Collaborate. Inspire.”

Panel at European Association of Test Publishers

As well as seeing a bit of Madrid (I particularly enjoyed the beautiful Retiro Park), here are some things I learned at the conference. (These are all my personal opinions, not endorsed by Questionmark or the ATP).

1. Skills change. One area of discussion was skills change. Assessments are often used to measure skills, so as skills change, assessments change too. There were at least three strands of opinion. One is that workplace skills are changing rapidly – half of what you learn today will be out of date in five years, less if you work in technology. Another is that many important skills do not change at all – we need to collaborate with others, analyze information and show emotional resilience; these and other important skills were needed 50 years ago and will still be needed in 50 years’ time. And a third suggested by keynote speaker Lewis Garrad is that change is not new. Ever since the industrial revolution, there has been rapid change, and it’s still the case now. All of these are probably a little true!

2. Artificial Intelligence (AI). Many sessions at the conference covered AI. Of course, a lot of what gets called AI is in fact just clever marketing of smart computer algorithms. But nevertheless, machine learning and other things which might genuinely be AI are definitely on the rise and will be a useful tool to make assessments better. The industry needs to be open and transparent in the use of AI. And in particular, any use of AI to score people or identify anomalies that could indicate test cheating needs to be very well built to defend against the potential of bias.

3. Debate is a good way to learn. There were several debates at the conference, where experts debated issues such as performance testing, how to detect fraud and test privacy vs security, with the audience voting before and after. As the Ancient Greeks knew, this is a good format for learning, as you get to see the arguments on both sides presented with passion. I’d encourage others to use debates for learning.

4. Privacy and test security genuinely need balance. I participated in the privacy vs test security debate, and it’s clear that there is a genuine challenge balancing the privacy rights of individual test-takers and the needs of testing organizations to ensure results are valid and have integrity. There is no single right answer. Test-taker rights are not unlimited. And testing organizations cannot do absolutely anything they want to ensure security. The growing rise of privacy laws including the GDPR has brought discussion about this to the forefront as everyone seeks to give test-takers their mandated privacy rights whilst still being able to process data as needed to ensure test results have integrity. A way forward seems to be emerging where test-takers have privacy and yet testing organizations can assert legitimate interests to resist cheating.

5. Tests have to be useful as well as valid, reliable and fair. One of the highlights of the conference was a CEO panel, where Marten Roorda, CEO of ACT, Norihisa Wada, a senior executive at EduLab in Japan, Sangeet Chowfla, CEO of the Graduate Management Admission Council and Saul Nassé, CEO of Cambridge Assessment gave their views on how assessment was changing. I moderated this panel (see picture above) and it was great to hear these very smart thought leaders talk of the future. There is widespread agreement that validity, reliability and fairness are key tenets for assessments, but also a reminder that we need “efficacy” – i.e. that tests need to be useful for their purpose and valuable to those who use them.

There were many other conference conversations, including sessions on online proctoring, test translation, the update to the ISO 10667 standard, new guidelines on technology-based assessment and much, much more.

I found it challenging, collaborative and inspiring and I hope this blog gives you a small flavor of the conference.

xAPI: A Way to Enable Learning Analytics

Posted by John Kleeman

Many organizations train and test individuals to ensure they have the right skills and competencies. In doing so, they amass vast amounts of data, which can be used to identify further training opportunities and improve performance. One way of managing this data is to use the Experience API (or xAPI) to pass data from disparate systems into a central Learning Record Store.

xAPI is maintained by the United States Advanced Distributed Learning Initiative (see www.adlnet.gov) and many Questionmark users have requested that we support xAPI so that they can export test data for analysis. For this reason, we’re pleased to let you know that earlier this year, we released our xAPI Connector for OnPremise and OnDemand customers. The integration lets the Questionmark platform connect and ‘talk’ to Learning Record Stores, creating an agile and effective learning and development ecosystem.

The challenges organizations face

For any organization, measuring the competence of employees or consultants through assessment is an essential element of ensuring the team is capable and fit-for-purpose. During this process, organizations collect large amounts of data that needs to be stored under strict data privacy regulations.

Once employers have control of learning and assessment data, it can then be interrogated to analyze employee performance and the effectiveness of training programs. With Questionmark’s xAPI integration, customers will now be able to transfer data from the assessment platform to their Learning Record Store.

What xAPI does

xAPI provides a standard means for collecting data from training and assessment experiences. The specification allows different systems to communicate and share data, which can then be stored and analyzed. This helps organizations to make better decisions by collecting, tracking, and quantifying learning activities to see what works and what doesn’t.
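
To give a flavour of that standard format, an xAPI statement is a JSON “actor – verb – object” triple, optionally with a result. Here is a minimal sketch in Python (the candidate name, e-mail address and assessment URI are invented for illustration):

```python
import json

# A minimal xAPI statement: actor / verb / object, plus an optional result.
# The candidate, e-mail address and assessment URI below are hypothetical.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Jane Doe",
        "mbox": "mailto:jane.doe@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/passed",
        "display": {"en-US": "passed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.com/assessments/safety-101",
        "definition": {"name": {"en-US": "Safety 101 Final Test"}},
    },
    "result": {"score": {"scaled": 0.85}, "success": True},
}

print(json.dumps(statement, indent=2))
```

Because every system emits statements in this same shape, a Learning Record Store can accept records from an assessment platform, an LMS and a simulation alike.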

Organizations are increasingly investing in Learning Record Stores to host and analyze learning and assessment data. With xAPI, Questionmark customers will now be able to send assessment data directly to their Learning Record Stores, so that they can measure the impact of learning and development activities and maximize the impact of their investment.
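
Sending a statement to a Learning Record Store is, in essence, an authenticated HTTP POST to the LRS’s statements resource, carrying the version header the xAPI specification requires. Here is a sketch using only the Python standard library (the endpoint and credentials are hypothetical placeholders):

```python
import base64
import json
import urllib.request

# Hypothetical LRS endpoint and credentials -- substitute your own.
LRS_ENDPOINT = "https://lrs.example.com/xapi"
USERNAME, PASSWORD = "lrs-client", "secret"

def build_statement_post(statement: dict) -> urllib.request.Request:
    """Prepare (but do not send) a POST of one statement to the LRS."""
    token = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()
    return urllib.request.Request(
        url=f"{LRS_ENDPOINT}/statements",
        data=json.dumps(statement).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Version header required by the xAPI specification
            "X-Experience-API-Version": "1.0.3",
            "Authorization": f"Basic {token}",
        },
        method="POST",
    )

request = build_statement_post({
    "actor": {"mbox": "mailto:jane.doe@example.com"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/passed"},
    "object": {"id": "http://example.com/assessments/safety-101"},
})
# Actually sending it would be: urllib.request.urlopen(request)
```

In practice a connector batches statements and handles authentication schemes such as OAuth, but the wire format is just this JSON-over-HTTP exchange.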

xAPI offers universal integration, meaning users can store data anywhere. Reporting across multiple geographies is easy, so users can analyze, compare and contrast data. The data is also presented in a universal format, making it easy to understand and interpret. This provides a solid starting point for big data learning analytics. And, as an assessment technology provider, Questionmark has widened its footprint in the total learning ecology by releasing the xAPI functionality.

If you’d like to find out more about the full range of assessment features that Questionmark offers, contact us or request a demo.



Workplace Exams 101: How to Prevent Cheating

Posted by John Kleeman

A hot topic in the assessment world today is cheating and what to do to prevent it. Many organizations test their employees, contractors and other personnel to check their competence and skills. These include compliance tests, on-boarding tests, internal certification tests, end-of-course tests and product knowledge quizzes.

There are two reasons why cheating matters in workplace exams:

Issue #1: Validity

Firstly, the validity of the test or exam is compromised: any decision made as a result of the test is unsound. For example, you may use a test to check whether someone is competent to sell your products; if they cheat, that check tells you nothing. Or you may be checking whether someone can perform a task safely, and if cheating happens, safety is compromised. Tests and exams are used to make important decisions about people with business, financial and regulatory consequences. If someone cheats at a test or exam, you are making those decisions based on bad data.

Issue #2: Integrity

Secondly, people who cheat at tests or exams have demonstrated a lack of integrity. If they will cheat on a test or exam, what else might they lie, cheat or defraud your organization about? Will falsifying a record or report be next? Regulators often have rules requiring integrity and have sanctions if someone demonstrates a lack of it.

For example, in the financial sector, FINRA’s Rule 2010 requires individuals to “observe high standards of commercial honor” and is used to ban people found cheating at exams or continuing education tests. In the accountancy sector, both AICPA and CIMA require accountants to have integrity and those found cheating at tests have been banned or otherwise sanctioned. And in the medical and pharmaceutical field, regulators have codes of conduct which include honesty. For example, the UK General Medical Council requires doctors to “always be honest about your experience, qualifications and current role” and interprets cheating at exams as a violation of this.

The well-respected International Test Commission Guidelines on the Security of Tests, Exams and Other Assessments suggest six categories of cheating threats, shown below alongside my own examples of how they can take place in the work environment.


Using test content pre-knowledge:
  – An employee takes the test and passes questions to a colleague still to take it
  – Someone authoring questions leaks them to test-takers
  – A security vulnerability allows questions to be seen in advance

Receiving expert help while taking the test:
  – One employee sits and coaches another during the test
  – IM or phone help while taking a test
  – A manager or proctor supervising the test helps a struggling employee

Using unauthorized test aids:
  – Access to the Internet allows googling the answers
  – Unauthorized study guides brought to the test

Using a proxy test taker:
  – A manager sends an assistant or secretary to take the test in place of him/her
  – Other situations where a colleague stands in for another

Tampering with answer sheets or stored test results:
  – Technically minded employees subvert communication with the LMS or other corporate systems and change their results

Copying answers from another user:
  – Two people sitting near each other share or copy answers
  – Organized answer sharing within a cohort or group of trainees


If you are interested in learning more about any of the threats above, I’ve shared approaches to mitigate them in the workplace in our webinar, Workplace Exams 101: How to Prevent Cheating. You can download the webinar recording slides HERE.