Can you be GDPR compliant without testing your employees?

Posted by John Kleeman

The GDPR is a new extra-territorial data protection law which imposes obligations on anyone who processes personal data of European residents. It impacts companies with employees in Europe, awarding bodies and test publishers who test candidates in Europe, universities and colleges with students in Europe, and many others. Many North American and other non-European organizations will need to comply.

See my earlier post How to use assessments for GDPR compliance for an introduction to the GDPR. The question this blog post addresses is whether it’s practical for a large organization to comply with the GDPR without giving tests and assessments to its employees.

I’d argue that most organizations with hundreds or thousands of employees will need to test their employees on their policies and procedures for data protection and the GDPR. Putting it simply, if you don’t and your people make mistakes, fines are likely to be higher.

Here are four things the GDPR law says (I’ve paraphrased the language and linked to the full text for those interested):


1. Organizations must take steps to ensure that everyone who works for them only processes personal data based on proper instructions. (Article 32.4)

2. Organizations must conduct awareness-raising and training of staff who process personal data (Article 39.1). This is extended to include “monitoring training” for some organizations in Article 47.2.

3. Organizations must put in place risk-based security measures to ensure confidentiality and integrity and must regularly test, assess and evaluate the effectiveness of these measures. (Article 32.1)

4. If you don’t follow the rules, you could be fined up to 20 million Euros or 4% of turnover. How well you’ve implemented the measures in Article 32 (including those above) will affect how big these fines might be. (Article 83.2d)


So let’s join up the dots.

Firstly, a large company has to ensure that everyone who works for it only processes data based on proper instructions. Since “personal data”, “processing” and “instructions” each have particular meanings under the law, people need training to understand them. You could just train and not test, but given that the concepts are not simple, it would seem sensible to test or otherwise check understanding.

A company is required to train its employees under Article 39, but for most companies the requirement in Article 32 is stronger. For most large organizations, the risk of employees making mistakes and the risk of insider threats to confidentiality and integrity are considerable, so you have to put in place training and other security measures to reduce them. Given that you have to regularly assess and evaluate the effectiveness of these measures, it is hard to envisage an efficient way of doing this without testing your personnel. Delivering regular online tests or quizzes to your employees is the obvious way to check that training has been effective and that your people know, understand and can apply your processes and procedures.

Lastly, imagine that your company makes a mistake and one of your employees causes a breach of personal data or commits another infraction under the GDPR. How are you going to show that you took all the steps you could to minimize the risk? An obvious question is whether you did your best to train that employee in good practice and in your processes and procedures. If you didn’t train, it’s hard to argue that you took the proper steps to be compliant. But even if you trained, a regulator will ask how you are evaluating the effectiveness of your training. As a regulator in another context has stated:

“where staff understanding has not been tested, it is hard for firms to judge how well the relevant training has been absorbed”

So yes, you can imagine a way in which a large company might manage to be compliant with the GDPR without testing employees. There are other ways of checking understanding, for example 1:1 interviews, but they are very time consuming and hard to roll out in time for May 2018. Or you may be lucky and have personnel who don’t make mistakes! But for most of us, testing our employees on knowledge of our processes and procedures under the GDPR will be wise.

Questionmark OnDemand is a trustable, easy-to-use and easy-to-deploy system for creating and delivering compliance tests and assessments to your personnel. For more information on using assessments to help ensure GDPR compliance, visit this page of our website or register for our upcoming webinar on 29 June.

7 Strategies to Shrink Satisficing & Improve Survey Results


Posted by John Kleeman

My previous post Satisficing: Why it might as well be a four-letter word explained that satisficing on a survey is when someone answers survey questions adequately but not as well as they can. Typically they just fill in questions without thinking too hard. As a commenter on the blog said: “Interesting! I have been guilty of this, didn’t even know it had a name!”

Examples of satisficing behavior are skipping questions or picking the first answer that makes some kind of sense. Satisficing is very common. As explained in the previous blog, some reasons for it are participants not being motivated to answer well, not having the ability to answer well, finding the survey too hard, or simply becoming fatigued by a survey that is too long.

Satisficing is a significant cause of survey error, so here are 7 strategies for a survey author to reduce satisficing:

1. Keep surveys short. Even the keenest survey respondent will get tired in a long survey and most of your respondents will probably not be keen. To get better results, make the survey as short as you possibly can.

2. Keep questions short and simple. A long and complex question is much more likely to get a poor-quality answer. Break complex questions into shorter ones. Also, don’t ask about events that are difficult to remember. People’s memory of the past and of when things happened is surprisingly fragile, and if you ask someone about events weeks or months ago, many will not recall them well.

3. Avoid agree/disagree questions. Satisficing participants will most likely just agree with whatever statement you present. For more on the weaknesses of these kinds of questions, see my blog on the SAP community network: Strongly Disagree? Should you use Agree/Disagree in survey questions?

4. Similarly, remove “don’t know” options. If someone is trying to answer as quickly as possible, answering that they don’t know is easy to do and avoids thinking about the question.

5. Communicate the benefit of the survey to make participants want to answer well. You are doing the survey for a good reason. Make participants believe the survey will have positive benefits for them or their organization. Also make sure each question’s results are actionable. If the participant doesn’t feel that spending the time to give you a good answer is going to help you take some useful action, why should they bother?

6. Find ways to encourage participants to think as they answer. For example, include a request that participants deliberate carefully; it can remind them to pay attention. It can also be helpful to occasionally ask participants to justify their answers, perhaps by adding a text comment box after the question asking why they answered that way. Adding comment boxes is very easy to do in Questionmark software.

7. Put the most important questions early on. Some people will satisfice and they are more likely to do it later on in the survey. If you put the questions that matter most early on, you are more likely to get good results from them.

There is a lot you can do to reduce satisficing and encourage people to give their best answers. I hope these strategies help you shrink the amount of satisficing your survey participants do, and in turn give you more accurate results.
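If you export your survey results, you can also get a rough, automated read on whether these strategies are working. The sketch below is a minimal illustration only, not Questionmark functionality; the data layout, the “dont_know” answer code and the 25% thresholds are assumptions. It flags questions with suspiciously high skip or “don’t know” rates, two of the satisficing signals discussed above.

    # Minimal sketch (not Questionmark functionality): scan exported survey
    # responses for two satisficing warning signs - high skip rates and heavy
    # use of "don't know". The data layout, the "dont_know" code and the 25%
    # thresholds are illustrative assumptions.

    responses = [
        {"q1": "agree", "q2": "dont_know", "q3": ""},        # "" means skipped
        {"q1": "agree", "q2": "option_a",  "q3": "option_b"},
        {"q1": "agree", "q2": "dont_know", "q3": ""},
    ]

    def flag_questions(responses, skip_limit=0.25, dont_know_limit=0.25):
        """Return questions whose skip or "don't know" rate exceeds a threshold."""
        flagged = {}
        total = len(responses)
        for question in responses[0]:
            answers = [r.get(question, "") for r in responses]
            skip_rate = sum(a == "" for a in answers) / total
            dont_know_rate = sum(a == "dont_know" for a in answers) / total
            if skip_rate > skip_limit or dont_know_rate > dont_know_limit:
                flagged[question] = {"skip_rate": skip_rate,
                                     "dont_know_rate": dont_know_rate}
        return flagged

    print(flag_questions(responses))   # flags q2 and q3 in this toy data

Questions that keep getting flagged after you apply the strategies above are good candidates for shortening, rewording or removal.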

Online Proctoring: FAQs

Posted by John Kleeman

Online proctoring was a hot-button topic at Questionmark’s annual Users Conference. And though we’ve discussed the pros and cons in this blog and even offered an infographic highlighting online versus test-center proctoring, many interesting questions arose during the Ensuring Exam Integrity with Online Proctoring session I presented with Steve Lay at Questionmark Conference 2016.

I’ve compiled a few of those questions and offered answers to them. For context and additional information, make sure to check out a shortened version of our presentation. If you have any questions you’d like to add to the list, comment below!

What control does the online proctor have on the exam?

With Questionmark solutions, the online proctor can:

  • Converse with the participant
  • Pause and resume the exam
  • Give extra time if needed
  • Terminate the exam

What does an online proctor do if he/she suspects cheating?

Usually the proctor will terminate the exam and file a report with the exam sponsor.

What happens if the exam is interrupted, e.g. by someone coming into the room?

This depends on your security protocols. Some organizations may decide to terminate the exam and require another attempt. In some cases, if it seems to have been an honest mistake, the organization may decide that the proctor can use discretion to permit the exam to continue.

Which is more secure, online or face-to-face proctoring?

On balance, they are about equally secure.

Unfortunately, there has been a lot of corruption with face-to-face proctoring. Online proctoring makes it much harder for participant and proctor to collude, as there is no direct contact and all communication can be logged.

But if the proctors are honest, it is easier to detect cheating aids in a face-to-face environment than via a video link.

What kinds of exams is online proctoring good for?

Online proctoring works well for exams where:

  • The stakes are high and so you need the security of a proctor
  • Participants are in many different places, making travel to test centers costly
  • Participants are computer literate – have and know how to use their own PCs
  • Exams take 2-3 hours or less

If your technology or subject area changes frequently, then online proctoring is particularly good because you can easily give more frequent exams, without requiring candidates to travel.

What kinds of exams is online proctoring less good for?

Online proctoring is less appropriate for exams where:

  • Exams are long and participants need breaks
  • Participants are local and it’s easy to get them into one place to take the exam
  • Participants do not have access to their own PC and/or are not computer literate

How do you prepare for online proctoring?

Here are some preparation tasks:

  • Brief and communicate with your participants about online proctoring
  • Define clearly the computer requirements for participants
  • Agree what happens in the event of incidents – e.g. suspected cheating, exam interruptions
  • Agree what ID is acceptable for participants and whether ID information is going to be stored
  • Create a candidate agreement or honor code that sets out what you expect from people, to encourage them to take the exam fairly

I hope these Q&As and the linked presentation are interesting. You can find out more about Questionmark’s online proctoring solution here.

Satisficing: Why it might as well be a four-letter word


Posted by John Kleeman

Have you ever answered a survey without thinking too hard about it, just filling in questions in ways that seem half sensible? This behavior is called satisficing – when you give responses which are adequate but not optimal. Satisficing is a big cause of error in surveys and this post explains what it is and why it happens.

These are typical satisficing behaviors:

  • selecting the first response alternative that seems reasonable
  • agreeing with any statement that asks for agree/disagree answers
  • endorsing the status quo and not thinking through questions inviting change
  • in a matrix question, picking the same response for all parts of the matrix
  • responding “don’t know”
  • mentally flipping a coin to answer a question
  • leaving questions unanswered

How prevalent is it?

Very few of us satisfice when taking a test. We usually try hard to give the best answers we can. But unfortunately for survey authors, it’s very common for people to answer surveys half-heartedly, and satisficing is one of the main causes of survey error.

For instance, a Harvard University study looked at a university survey with 250 items. Students were given a $15 cash incentive to complete it:

  • Eighty-one percent of participants satisficed at least in part.
  • Thirty-six percent rushed through parts of the survey too fast to be giving optimal answers.
  • The amount of satisficing increased later in the survey.
  • Satisficing impacted the validity and reliability of the survey and of any correlations made.

It is likely that for many surveys, satisficing plays an important part in the quality of the data.

What does it look like?

There are a few tricks to help identify satisficing behavior, but the first thing to look for when examining the data is straight-lining on grid questions. According to How to Spot a Fake, an article based on the Practices that minimize online panelist satisficing behavior by Shawna Fisher, “an instance or two may be valid, but often, straight-lining is a red flag that indicates a respondent is satisficing.” See the illustration for a visual.
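As a rough illustration of what that check can look like when you examine exported response data, here is a minimal sketch. It is not a Questionmark feature, and the data layout and threshold are assumptions; it simply flags respondents who gave an identical answer to every row of a grid question.

    # Minimal sketch (not a Questionmark feature): flag possible straight-lining
    # on a grid/matrix question. The data layout and the minimum number of rows
    # are illustrative assumptions.

    grid_responses = {
        "resp_001": [4, 4, 4, 4, 4, 4],   # identical rating on every row
        "resp_002": [2, 4, 3, 5, 1, 4],
        "resp_003": [3, 3, 3, 2, 3, 3],
    }

    def straight_liners(grid_responses, min_rows=4):
        """Return respondents who chose one identical answer across the grid."""
        flagged = []
        for respondent, ratings in grid_responses.items():
            if len(ratings) >= min_rows and len(set(ratings)) == 1:
                flagged.append(respondent)
        return flagged

    print(straight_liners(grid_responses))   # -> ['resp_001']

As the article notes, an occasional straight-liner may be answering honestly, so treat a flag like this as a prompt to look closer rather than proof of satisficing.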

Why does it happen?

Research suggests that there are four reasons participants typically satisfice:

1. Participant motivation. Survey participants are often asked to spend time and effort on a survey without much apparent reward or benefit. One of the biggest contributors to satisficing is lack of motivation to answer well.

2. Survey difficulty. The harder a survey is to answer and the more mental energy that needs to go into thinking about the best answers, the more likely participants are to give up and choose an easy way through.

3. Participant ability. Those who find the questions difficult, either because they are less able or because they have not had a chance to consider the issues being asked about in other contexts, are more likely to satisfice.

4. Participant fatigue. The longer a survey is, the more likely the participant is to give up and start satisficing.

So how can we reduce satisficing? The answer is to address these reasons in our survey design. I’ll suggest some ways of doing this in a follow-up post.

I hope thinking about satisficing might give you better survey results with your Questionmark surveys!

Job Task Analysis Surveys Legally Required?


Posted by John Kleeman

I had a lot of positive feedback on my blog post Making your Assessment Valid: 5 Tips from Miami. There is a lot of interest in how to ensure your assessment is valid, that is, that it measures what it is supposed to measure.

If you are assessing competence in a job role or readiness for promotion into one, a critical step in making your assessment valid is to have a good, current analysis of what knowledge, skills and abilities are needed to do the job. This is called a job task analysis (JTA), and the most common way of doing this analysis is to conduct a JTA survey.

In a JTA survey, you ask people currently in the job role, or other experts, what tasks they do. A common practice is to survey them on how important each task is, how difficult it is and how often it is done. The resulting reports then guide the construction of the test blueprint: which topics to include and how many questions to ask on each.
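To make that link from survey ratings to blueprint concrete, here is a minimal sketch of one way it could be done. The weighting formula, the 1-5 rating scales and the task names are assumptions for illustration, not a prescribed method: it combines mean importance, difficulty and frequency ratings into a criticality score and allocates questions to tasks in proportion.

    # Minimal sketch: turn mean JTA survey ratings (1-5 scales assumed) into a
    # rough test blueprint. The weights and task names are illustrative only.

    tasks = {
        "Handle customer complaints": {"importance": 4.6, "difficulty": 3.2, "frequency": 4.8},
        "Prepare monthly reports":    {"importance": 3.1, "difficulty": 2.5, "frequency": 2.0},
        "Configure system access":    {"importance": 4.2, "difficulty": 4.0, "frequency": 1.5},
    }

    def blueprint(tasks, total_items=40):
        """Allocate exam questions to tasks in proportion to a criticality score."""
        scores = {
            name: r["importance"] * 0.5 + r["difficulty"] * 0.2 + r["frequency"] * 0.3
            for name, r in tasks.items()
        }
        total = sum(scores.values())
        return {name: round(total_items * score / total) for name, score in scores.items()}

    print(blueprint(tasks))
    # -> {'Handle customer complaints': 17, 'Prepare monthly reports': 10,
    #     'Configure system access': 13}

In practice you would sanity-check any such allocation with subject matter experts, but the principle is the same: the blueprint follows the survey data rather than guesswork.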

If you cannot show that your assessment matches the requirements of a job, then your assessment is not only invalid but also likely unfair if you use it to select people for the job or to measure competence in it. And if you use an invalid assessment to select people for promotion or recruitment into the job, you may face legal action from people you reject.

Not only is this common sense, but it was also confirmed by a recent US district court ruling against the Boston Police Department. In this case, sergeants who had been rejected for promotion to lieutenant following an exam sued on the grounds that the assessment was unfair, and won.

The judge ruled that the exam was not sufficiently valid, because it omitted many job skills crucial for a police lieutenant role, and so it was not fair to be used to select for the role (see news report).

The judge’s 82-page ruling sets out in detail why the exam was unfair. He references the Uniform Guidelines on Employee Selection Procedures, which state:

“There should be a job analysis which includes an analysis of the important work behavior(s) required for successful performance and their relative importance”

But the judge ruled that although a job analysis had been done, it had not been used properly in the test construction process. He said:

“When using a multiple choice exam, the developer must convert the job analysis result into a test plan to ensure a direct and strong relationship between the job analysis and the exam.”

However, in this case, the job analysis was not used sufficiently well to construct the exam. The judge went on to say:

“The Court cannot find, however, that the test plan ensured a strong relationship between the job analysis and the exam. … too many skills and abilities were missing from the … test outline.”

Crucially, he concluded:

“And a high score on the … exam simply was not a good indicator that a candidate would be a good lieutenant.”

Due to the pace of business change and technological advance, job roles are changing fast. Make sure that you conduct regular JTAs of the roles in your organization and that your assessments match the most important job tasks. Find out more about Job Task Analysis here.

Making your Assessment Valid: 5 Tips from Miami


Posted by John Kleeman

A key reason people use Questionmark’s assessment management system is that it helps you make more valid assessments. To remind you, a valid assessment is one that genuinely measures what it is supposed to measure. Having an effective process to ensure your assessments are valid, reliable and trustable was an important topic at Questionmark Conference 2016 in Miami last week. Here is some advice I heard:

Reporting back from 3 days of learning and networking at Questionmark Conference 2016 in Miami

Tip 1: Everything starts from the purpose of your assessment. Define this clearly and document it well. A purpose that is not well defined or that does not align with the needs of your organization will result in a poor test. It is useful to have a formal process to kick off a new assessment to ensure the purpose is defined clearly and is aligned with business needs.

Tip 2: A Job Task Analysis survey is a great way of defining the topics/objectives for new-hire training assessments. One presenter at the conference sent a survey to the top-performing 50 percent of employees in a job role and asked questions about a series of potential job tasks. For each job task, he asked how difficult it is (complexity), how important it is (priority) and how often it is done (frequency). He then used the survey results to define the structure of knowledge assessments for new hires, to ensure they aligned with needed job skills.

Tip 3: The best way to ensure that a workplace assessment starts and remains valid is continual involvement with Subject Matter Experts (SMEs). They help you ensure that the content of the assessment matches the content needed for the job and ensure this stays the case as the job changes. It’s worth investing in training your SMEs in item writing and item review. Foster a collaborative environment and build their confidence.

Tip 4: Allow your participants (test-takers) to feed back into the process. This will give you useful input to improve the questions and the validity of the assessment. It’s also an important part of being transparent and open in your assessment programme, which is useful because people are less likely to cheat if they feel that the process is well-intentioned. They are also less likely to complain that the results are unfair. For example, it’s useful to write an internal blog explaining why and how you create the assessments and to encourage feedback.

Lunch with a view at Questionmark Conference 2016 in Miami

Tip 5: As the item bank grows and your assessment programme becomes more successful, make sure to manage the item bank and review items. Retire items that are no longer relevant or that have been overexposed. This keeps the item bank useful, accurate and valid.

There was lots more at the conference: excitement that Questionmark NextGen authoring is finally here, a live demo of our new easy-to-use Printing and Scanning solution … and lunch on the hotel terrace in the beautiful Miami spring sunshine, with Questionmark-branded sunglasses to keep cool.

There was a lot of buzz at the conference about documenting your assessment decisions and making sure your assessments validly measure job competence. There is increasing understanding that assessment is a process not a project, and also that to be used to measure competence or to select for a job role, an assessment must cover all important job tasks.

I hope these tips on making assessments valid are helpful. Click here for more information on Questionmark’s assessment management system.