7 Strategies to Shrink Satisficing & Improve Survey Results

Posted by John Kleeman

My previous post Satisficing: Why it might as well be a four-letter word explained that satisficing on a survey is when someone answers survey questions adequately but not as well as they can. Typically they just fill in questions without thinking too hard. As a commenter on the blog said: “Interesting! I have been guilty of this, didn’t even know it had a name!”

Examples of satisficing behavior include skipping questions or picking the first answer that makes some kind of sense. Satisficing is very common. As explained in the previous post, common causes are participants not being motivated to answer well, lacking the ability to answer well, finding the survey too hard, or simply becoming fatigued by a survey that is too long.

Satisficing is a significant cause of survey error, so here are 7 strategies for a survey author to reduce satisficing:

1. Keep surveys short. Even the keenest survey respondent will get tired in a long survey, and most of your respondents will probably not be keen. To get better results, make the survey as short as you possibly can.

2. Keep questions short and simple. A long and complex question is much more likely to get a poor-quality answer. Break complex questions down into shorter ones. Also, don’t ask about events that are difficult to remember. People’s memory of the past, and of when things happened, is surprisingly fragile; if you ask someone about events weeks or months ago, many will not recall them well.

3. Avoid agree/disagree questions. Satisficing participants will most likely just agree with whatever statement you present. For more on the weaknesses of these kinds of questions, see my blog on the SAP community network: Strongly Disagree? Should you use Agree/Disagree in survey questions?

4. Similarly, remove “don’t know” options. If someone is trying to answer as quickly as possible, answering that they don’t know is easy to do and avoids thinking about the question.

5. Communicate the benefit of the survey to make participants want to answer well. You are doing the survey for a good reason.  Make participants believe the survey will have positive benefits for them or their organization. Also make sure each question’s results are actionable. If the participant doesn’t feel that spending the time to give you a good answer is going to help you take some useful action, why should they bother?

6. Find ways to encourage participants to think as they answer. For example, include a request asking participants to deliberate carefully – it can remind them to pay attention. It can also be helpful to occasionally ask participants to justify their answers – perhaps by adding a text comment box after the question where they explain why they answered that way. Adding comment boxes is very easy to do in Questionmark software.

7. Put the most important questions early on. Some people will satisfice and they are more likely to do it later on in the survey. If you put the questions that matter most early on, you are more likely to get good results from them.

There is a lot you can do to reduce satisficing and encourage people to give their best answers. I hope these strategies help you shrink the amount of satisficing your survey participants do, and in turn give you more accurate results.

Reliability and validity are the keys to trust

[Figure: Not reliable]

Posted by John Kleeman

How can you trust assessment results? The two keys are reliability and validity.

Reliability explained

An assessment is reliable if it measures the same thing consistently and reproducibly. If you were to deliver an assessment with high reliability to the same participant on two occasions, you would be very likely to reach the same conclusions about the participant’s knowledge or skills. A test with poor reliability might produce very different scores across the two sittings.

An unreliable assessment does not measure anything consistently and cannot be used as a trustworthy measure of competency. It is useful to visualize this with a dartboard: in the figure to the right, darts have landed all over the board – they are not reliably in any one place.

For an assessment to be reliable, there needs to be a predictable authoring process, effective beta testing of items, trustworthy delivery to all the devices used to give the assessment, good-quality post-assessment reporting and effective analytics.

Validity explained

[Figure: Reliable but not valid]

Being reliable is not good enough on its own. The darts in the dartboard in the figure to the right are in the same place, but not in the right place. A test can be reliable but not measure what it is meant to measure. For example, you could have a reliable assessment that tested for skill in word processing, but this would not be valid if used to test machine operators, as writing is not one of the key tasks in their jobs.

An assessment is valid if it measures what it is supposed to measure. So if you are measuring competence in a job role, a valid assessment must align with the knowledge, skills and abilities required to perform the tasks expected of a job role. In order to show that an assessment is valid, there must be some formal analysis of the tasks in a job role and the assessment must be structured to match those tasks. A common method of performing such analysis is a job task analysis, which surveys subject matter experts or people in the job role to identify the importance of different tasks.

Assessments must be reliable AND valid

Trustable assessments must be reliable AND valid.

[Figure: Reliable and valid]

The darts in the figure to the right are in the same place, and it is the right place.

When you are constructing an assessment for competence, you are looking for it to consistently measure the competence required for the job.

Comparison with blood tests

It is helpful to consider what happens if you go to the doctor with an illness. The doctor goes through a process of discovery, analysis, diagnosis and prescription. As part of the discovery process, sometimes the doctor will order a blood test to identify if a particular condition is present, which can diagnose the illness or rule out a diagnosis.

It takes time and resources to do a blood test, but it can be an invaluable piece of information. A great deal of effort goes into making sure that blood tests are both reliable (consistent) and valid (measure what they are supposed to measure). For example, just like exam results, blood samples are labelled carefully, as shown in the picture, to ensure that patient identification is retained.

A blood test that was not reliable would be dangerous—a doctor might think that a disease is not present when it is. Furthermore, a reliable blood test used for the wrong purpose is not useful—for example, there is no point in having a test for blood glucose level if the doctor is trying to see if a heart attack is imminent.

The blood test results are a single piece of information that helps the doctor make the diagnosis in conjunction with other data from the doctor’s discovery process.

In exactly the same way, a test of competence is an important piece of information to determine if someone is competent in their job role.

Using the blood test metaphor, it is easy to understand the personnel and organizational risks of making decisions based on untrustworthy results. If your organization assesses someone’s knowledge, skill or competence for health and safety or regulatory compliance purposes, you need to ensure the assessments are designed correctly and run consistently – that is, they must be reliable and valid.

For assessments to be reliable and valid, it is necessary that you follow structured processes at each step from planning through authoring to delivery and reporting. These processes are explained in our new white paper “Assessment Results You can Trust” and I’ll be sharing some of the content in future articles in this blog.

For fuller information, you can download the white paper.

Get trustable results: How many test or exam retakes should you allow?

Posted by John Kleeman

How many times is it fair and proper for a participant to retake an assessment if they fail?

One of our customers asked me about this recently in regard to a certification exam. I did some research and thought I’d share it here.

For a few kinds of assessments, you would normally only allow a single attempt, typically if you are measuring something at a specific point in time. A pre-course or post-course test might only be useful if it is taken right before or right after a training course.

For assessments that simply provide retrieval practice or reinforce learning, you needn’t be concerned. It may be fine to allow as many retakes as people want: the more times they practice answering the questions, the more they will retain the learning.

But how can you decide how many attempts to allow at a certification assessment measuring competence and mastery?

Consider test security

Retakes can jeopardize test security. Someone might take and retake a test to harvest the items to share with others. The more retakes allowed, the more this risk increases.

International Test Commission draft security guidelines say:

“Retake policies should be developed to reduce the opportunities for item harvesting and other forms of test fraud. For example, a test taker should not be allowed to retake a test that he or she “passed” or retake a test until a set amount of time has passed.”

Consider measurement error

All assessment scores have measurement error. A certification exam classifies people as having mastery (pass) or not (fail), but it doesn’t do so perfectly.

If you allow repeat retakes, you increase the risk of classifying someone as a master who is not competent, but you also decrease the risk of classifying a competent person as having failed. This is because someone can suffer test anxiety, be ill or make a careless mistake and fail the test despite being competent.
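To see how this trade-off plays out numerically, here is a rough Monte Carlo sketch. All the numbers are invented for illustration – the pass mark, the size of the measurement error and the participant are hypothetical, and this is not a feature of any assessment product. It estimates the chance that a borderline participant, whose true ability is just below the pass mark, passes at least once as more attempts are allowed:

```python
import random

random.seed(42)

# Hypothetical borderline participant: true ability just under the cut score,
# with normally distributed measurement error on each attempt.
TRUE_ABILITY = 68.0   # illustrative true score
PASS_MARK = 70.0      # illustrative cut score
ERROR_SD = 5.0        # illustrative standard error of measurement
TRIALS = 100_000      # Monte Carlo repetitions

def passes_within(attempts: int) -> float:
    """Estimate the probability of passing at least once in `attempts` tries."""
    successes = 0
    for _ in range(TRIALS):
        # Each attempt's observed score is true ability plus random error.
        if any(random.gauss(TRUE_ABILITY, ERROR_SD) >= PASS_MARK
               for _ in range(attempts)):
            successes += 1
    return successes / TRIALS

for k in (1, 2, 3, 5):
    print(f"{k} attempt(s): chance of passing ≈ {passes_within(k):.0%}")
```

Under these made-up assumptions, a participant slightly below mastery passes only about a third of the time on a single attempt, but the chance of passing at least once climbs steeply as retakes accumulate – which is exactly the false-pass risk described above.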

Require participants to wait for retakes

It’s usual to require a time period to elapse before a retake. This stops people from using quick, repeated retakes to take unfair advantage of measurement error. It also encourages reflection and re-learning before the next attempt. Standard 13.6 in the Standards for Educational and Psychological Testing says:

“students. . . should have a reasonable number of opportunities to succeed. . . the time intervals between the opportunities should allow for students to have the opportunity to obtain the relevant instructional experiences.”

If we had a perfectly reliable assessment, there would be no concern about multiple attempts. Picking the number of attempts is a compromise between what is fair to the participants and the limitations of our resources as assessment developers.

Think about test preparation

Could your retake policy affect how people prepare for the exam?

If retakes are easily available, some participants might prepare less effectively, hoping they can “wing it” since they can retake at will. On the other hand, if retakes are limited, this could increase test anxiety and stress. It could also increase the motivation to cheat.

What about fairness?

Some people suffer test anxiety, some make silly mistakes or manage their time poorly during the test, and some may not be at their full capacity on the day of the exam. It’s usually fair to offer a retake in such situations. If you do not offer sufficient opportunities to retake, this will hurt the face validity of the assessment: people might not consider it fair.

If your exam is open to the public, you may not be able to limit retakes. Imagine a country where you were not allowed to retake your driving test once you’d failed it 3 times! It might make the roads safer, but most people wouldn’t see it as equitable.

In my next post on this subject, I will share what some organizations do in practice and offer some steps for arriving at an answer that will be suitable for your organization.