7 Strategies to Shrink Satisficing & Improve Survey Results


Posted by John Kleeman

My previous post Satisficing: Why it might as well be a four-letter word explained that satisficing on a survey is when someone answers survey questions adequately but not as well as they can. Typically they just fill in questions without thinking too hard. As a commenter on the blog said: “Interesting! I have been guilty of this, didn’t even know it had a name!”

Examples of satisficing behavior are skipping questions or picking the first answer that makes some kind of sense. Satisficing is very common. As explained in the previous post, some reasons for it include participants not being motivated to answer well, not having the ability to answer well, finding the survey too hard, or simply becoming fatigued by too long a survey.

Satisficing is a significant cause of survey error, so here are 7 strategies for a survey author to reduce satisficing:

1. Keep surveys short. Even the keenest survey respondent will get tired in a long survey, and most of your respondents will probably not be keen. To get better results, make the survey as short as you possibly can.

2. Keep questions short and simple. A long and complex question is much more likely to get a poor quality answer. You should break complex questions down into shorter ones. Also, don’t ask about events that are difficult to remember. People’s memory of the past, and of when things happened, is surprisingly fragile, and if you ask someone about events weeks or months ago, many will not recall them well.

3. Avoid agree/disagree questions. Satisficing participants will most likely just agree with whatever statement you present. For more on the weaknesses of these kinds of questions, see my blog on the SAP community network: Strongly Disagree? Should you use Agree/Disagree in survey questions?

4. Similarly, remove “don’t know” options. If someone is trying to answer as quickly as possible, answering that they don’t know is easy to do and avoids thinking about the question.

5. Communicate the benefit of the survey to make participants want to answer well. You are doing the survey for a good reason. Make participants believe the survey will have positive benefits for them or their organization. Also make sure each question’s results are actionable. If the participant doesn’t feel that spending the time to give you a good answer will help you take some useful action, why should they bother?

6. Find ways to encourage participants to think as they answer. For example, ask participants up front to deliberate carefully; a simple request can remind them to pay attention. It can also be helpful to occasionally ask participants to justify their answers, perhaps by adding a text comment box after a question where they explain why they answered that way. Adding comment boxes is very easy to do in Questionmark software.

7. Put the most important questions early on. Some people will satisfice and they are more likely to do it later on in the survey. If you put the questions that matter most early on, you are more likely to get good results from them.

There is a lot you can do to reduce satisficing and encourage people to give their best answers. I hope these strategies help you shrink the amount of satisficing your survey participants do, and in turn give you more accurate results.

Satisficing: Why it might as well be a four-letter word


Posted by John Kleeman

Have you ever answered a survey without thinking too hard about it, just filling in questions in ways that seem half sensible? This behavior is called satisficing – when you give responses which are adequate but not optimal. Satisficing is a big cause of error in surveys and this post explains what it is and why it happens.

These are typical satisficing behaviors:

  • selecting the first response alternative that seems reasonable
  • agreeing with any statement that asks for agree/disagree answers
  • endorsing the status quo and not thinking through questions inviting change
  • in a matrix question, picking the same response for all parts of the matrix
  • responding “don’t know”
  • mentally coin flipping to answer a question
  • leaving questions unanswered

How prevalent is it?

Very few of us satisfice when taking a test. We usually try hard to give the best answers we can. But unfortunately for survey authors, it’s very common for survey participants to answer half-heartedly, and satisficing is one of the common causes of survey error.

For instance, a Harvard University study looked at a university survey with 250 items. Students were given a $15 cash incentive to complete it:

  • Eighty-one percent of participants satisficed at least in part.
  • Thirty-six percent rushed through parts of the survey too fast to be giving optimal answers.
  • The amount of satisficing increased later in the survey.
  • Satisficing impacted the validity and reliability of the survey and of any correlations made.

It is likely that for many surveys, satisficing plays an important part in the quality of the data.

What does it look like?

There are a few tricks to help identify satisficing behavior, but the first thing to look for when examining the data is straight-lining on grid questions. According to How to Spot a Fake, an article based on Practices that minimize online panelist satisficing behavior by Shawna Fisher, “an instance or two may be valid, but often, straight-lining is a red flag that indicates a respondent is satisficing.” See the illustration for a visual.
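Checking for straight-lining can be automated once you export your response data. Here is a minimal sketch of the idea; the column names and data are made up for illustration, and a real check would run against your actual exported results:

```python
# Flag potential satisficers by checking for straight-lining:
# a respondent who gives the identical rating to every item
# in a grid/matrix question.

GRID_ITEMS = ["q5_a", "q5_b", "q5_c", "q5_d"]  # items of one matrix question (hypothetical names)

def is_straight_liner(response: dict) -> bool:
    """Return True if the respondent gave the same answer to every grid item."""
    answers = [response[item] for item in GRID_ITEMS]
    return len(set(answers)) == 1

# Example exported responses (made-up data)
responses = [
    {"id": 1, "q5_a": 3, "q5_b": 3, "q5_c": 3, "q5_d": 3},  # straight-liner
    {"id": 2, "q5_a": 4, "q5_b": 2, "q5_c": 5, "q5_d": 3},
]

flagged = [r["id"] for r in responses if is_straight_liner(r)]
print(flagged)  # → [1]
```

As the article notes, an instance or two may be valid, so treat flagged respondents as candidates for a closer look rather than automatically discarding their data.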

Why does it happen?

Research suggests that there are four reasons participants typically satisfice:

1. Participant motivation. Survey participants are often asked to spend time and effort on a survey without much apparent reward or benefit. One of the biggest contributors to satisficing is lack of motivation to answer well.

2. Survey difficulty. The harder a survey is to answer and the more mental energy that needs to go into thinking about the best answers, the more likely participants are to give up and choose an easy way through.

3. Participant ability. Those who find the questions difficult, either because they are less able or because they have not had a chance to consider the issues being asked about in other contexts, are more likely to satisfice.

4. Participant fatigue. The longer a survey is, the more likely the participant is to give up and start satisficing.

So how can we reduce satisficing? The answer is to address these reasons in our survey design. I’ll suggest some ways of doing this in a follow-up post.

I hope thinking about satisficing might give you better survey results with your Questionmark surveys!

Assessment types and their uses: reaction assessments

Posted by Julie Delazyn

To use assessments effectively, it’s important to understand their context and uses within the learning process.

Last week I wrote about needs assessments, and today I’ll explore reaction assessments.

Typical uses:

  • Determining the satisfaction level with a learning or certification experience
  • Gathering opinions from learners about course materials, instructors, learning environments, and so forth
  • Identifying shortcomings of a learning experience in order to help improve it for others
  • Aiding the planning process for revising a course and/or the way in which it is delivered

Types:

  • Level 1 evaluations (as per Donald Kirkpatrick)
  • Course evaluations
  • Smile sheets / happy sheets
  • Opinion surveys

Stakes: low

Example:
Answers to Question 8, analyzed below, reveal that respondents feel they have sufficient time for the training they need to do their jobs well. But their answers to Question 9 — indicating that many people had problems with the timing of training courses — prompted their company to revise its training schedule.

For more details about assessments and their uses, check out the white paper, Assessments Through the Learning Process. You can download it free here, after login. Another good source for testing and assessment terms is our glossary.

In my last post in this series, I will take a look at summative assessments.

Assessment types and their uses: Needs Assessments

Posted by Julie Delazyn

Assessments have many different purposes, and to use them effectively it’s important to understand their context and uses within the learning process.

Last week I wrote about formative assessments, and today I’ll explore needs assessments.

Typical uses:

  • Determining the knowledge, skills, abilities and attitudes of a group to assist with gap analysis and courseware development
  • Determining the difference between what a learner knows and what they are required to know
  • Measuring against requirements to determine a gap that needs to be filled
  • Helping training managers, instructional designers, and instructors work out what courses to develop or administer to satisfy their constituents’ needs
  • Determining if participants were routed to the right kind of learning experiences

Types:

  • Job task analysis (JTA) survey
  • Needs analysis survey
  • Skills gap survey

Stakes: low


Example:
A food service company can run a needs analysis survey to identify differences between the knowledge of subject matter experts and people on the job. Evaluating the different groups’ scores, as shown in the gap analysis chart below, reveals the overall differences between the experts’ and workers’ knowledge. But more significantly, it diagnoses a strong need for workers to improve their understanding of food safety. Information like this can inform the organization’s decision about further staff development plans and learning programs.
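The gap analysis described above boils down to a simple per-topic comparison of average scores. The following sketch shows the arithmetic; all topic names and scores are invented for illustration:

```python
# A minimal sketch of a gap analysis: compare the average scores of
# subject matter experts against workers, topic by topic, to find
# where the knowledge gap (and so the training need) is largest.
# All numbers and topic names are made up for illustration.

expert_scores = {"food safety": 92, "customer service": 88, "equipment": 85}
worker_scores = {"food safety": 61, "customer service": 80, "equipment": 78}

gaps = {topic: expert_scores[topic] - worker_scores[topic]
        for topic in expert_scores}

# Sort topics by gap size so the biggest training need comes first
for topic, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{topic}: gap of {gap} points")
```

In this made-up data the food safety gap dominates, which is the kind of signal that would inform decisions about staff development plans and learning programs.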

For more details about assessments and their uses, check out the white paper, Assessments Through the Learning Process. You can download it free here, after login. Another good source for testing and assessment terms is our glossary.

In the coming weeks I’ll take a look at two other assessment types:

  • Reaction
  • Summative

Easy Browser-based Authoring of Course Evaluations in Questionmark Live


Posted by Jim Farrell

The Questionmark Live team has added a dynamic new assessment type to its growing array of features: the ability to author course evaluation surveys. We have included library questions on four topics that you can add to your survey to make authoring fast and easy. Watch the following video to see this solution in action.

Questionmark Live makes it easy for subject matter experts (SMEs) to write questions and then export them for use in Questionmark Perception. Questionmark Software Support Plan customers use it free of charge, but anyone is welcome to try it out by clicking here.

Embedding Questionmark Assessments in Google Wave

Embed a Questionmark Perception assessment, survey or quiz inside your Google Wave profile.

  • To see how this would look, see a snapshot of an assessment embedded into Google Wave.
  • Check out this How-to on our developer Web site.
  • Google Wave is an online tool for real-time communication and collaboration. Embedding an assessment into Google Wave may be useful if you want to ask the members of your Wave to complete a quiz or simply fill in a survey. The results can then be analyzed and reported on from Perception.