Agree or disagree? 10 tips for better surveys — Part 2

Posted by John Kleeman

In my first post in this series, I explained that survey respondents go through a four-step process when they answer each question: comprehend the question, retrieve/recall the information that it requires, make a judgement on the answer and then select the response. There is a risk of error at each step. I also explained the concept of “satisficing”, where participants often give a satisfactory answer rather than an optimal one – another potential source of error.

Today, I’m offering some tips for effective online attitude survey design, based on research evidence. Following these tips should help you reduce error in your attitude surveys.

Tip #1 – Avoid Agree/Disagree questions

Although agree/disagree questions are among the most common types used in surveys, you should try to avoid questions which ask participants whether they agree with a statement.

There is an effect called acquiescence bias, where some participants are more likely to agree than disagree. The research suggests that some participants are easily influenced and so tend to agree with statements readily. This seems to apply particularly to participants who are more junior or less well educated, who may tend to assume that what is put to them is probably true. For example, Krosnick and Presser report that across 10 studies, an average of 52 percent of people agreed with an assertion, while only 42 percent disagreed with its opposite. If you are interested in finding out more about this effect, see this 2010 paper by Saris, Revilla, Krosnick and Schaeffer.

Satisficing – where participants just try to give a good enough answer rather than their best answer – also increases the number of “agree” answers.

For example, do not ask a question like this:

My overall health is excellent. Do you:

  • Strongly Agree
  • Agree
  • Neither Agree nor Disagree
  • Disagree
  • Strongly Disagree

Instead, re-word it to be construct-specific:

How would you rate your health overall?

  • Excellent
  • Very good
  • Good
  • Fair
  • Bad
  • Very bad


Tip #2 – Avoid Yes/No and True/False questions

For the same reason, you should avoid Yes/No questions and True/False questions in surveys. People are more likely to answer Yes than No due to acquiescence bias.

Tip #3 – Each question should address one attitude only

Avoid double-barrelled questions that ask about more than one thing. It’s very easy to ask a question like this:

  • How satisfied are you with your pay and work conditions?

However, someone might be satisfied with their pay but dissatisfied with their work conditions, or vice versa. So make it two separate questions.
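If you build or review survey questions programmatically, you can apply a rough screen for double-barrelled wording by flagging questions that join ideas with a conjunction. Here is a minimal Python sketch; the function name and word list are my own illustrative choices, not part of any survey library, and the check will over-flag legitimate uses of “and” and “or”:

```python
import re

# Conjunctions that often signal a double-barrelled question.
# This is a heuristic word list, not an exhaustive rule; it will
# also flag harmless questions, so treat hits as review candidates.
DOUBLE_BARREL_HINTS = re.compile(r"\b(and|or|as well as)\b", re.IGNORECASE)

def flag_double_barrelled(questions):
    """Return questions that may ask about more than one thing."""
    return [q for q in questions if DOUBLE_BARREL_HINTS.search(q)]

flagged = flag_double_barrelled([
    "How satisfied are you with your pay and work conditions?",
    "How satisfied are you with your pay?",
])
print(flagged)  # ['How satisfied are you with your pay and work conditions?']
```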

Tip #4 – Minimize the difficulty of answering each question

If a question is harder to answer, it is more likely that participants will satisfice – give a good enough answer rather than the best answer. To quote Stanford professor Jon Krosnick, “Questionnaire designers should work hard to minimize task difficulty”. For example:

  • Use as few words as possible in questions and responses.
  • Use words that all your audience will know.
  • Where possible, ask questions about the recent past rather than the distant past, as the recent past is easier to recall.
  • Decompose complex judgement tasks into simpler ones, with a single dimension to each one.
  • Where possible, make judgements absolute rather than relative.
  • Avoid negatives. Just like in tests and exams, using negatives in your questions adds cognitive load and makes the question less likely to get an effective answer.

The less cognitive load involved in questions, the more likely you are to get accurate answers.
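Some of these wording checks can be partly automated if your question text lives in code or a file. Below is a minimal sketch; the word-count threshold and negative-word list are illustrative assumptions of mine, not research-derived constants:

```python
# Heuristic limits for question wording; tune these to your audience.
MAX_WORDS = 20
NEGATIVES = {"not", "never", "no", "none", "neither", "nor"}

def review_question(text: str) -> list[str]:
    """Return warnings for wording that may raise cognitive load."""
    words = text.lower().split()
    warnings = []
    if len(words) > MAX_WORDS:
        warnings.append(f"long question ({len(words)} words)")
    found = NEGATIVES.intersection(words)
    if found:
        warnings.append(f"contains negatives: {sorted(found)}")
    return warnings

print(review_question("Do you not agree that the cafeteria is never too crowded?"))
# ["contains negatives: ['never', 'not']"]
```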

Tip #5 – Randomize the responses if order is not important

The order of responses can significantly influence which ones get chosen.

There is a primacy effect in surveys, whereby participants choose the first response more often than later ones. And if they are satisficing, they may choose the first response that seems good enough rather than the best one.

There can also be a recency effect whereby participants read through a list of choices and choose the last one they have read.

To avoid these effects, if your choices do not have a clear progression or some other reason for being in a particular order, randomize them. This is easy to do in Questionmark software and will remove the effect of response order on your results.
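If your survey tool does not shuffle choices for you, the idea is simple to implement yourself. Here is a minimal Python sketch (a generic illustration, not Questionmark’s implementation) that randomizes choices per respondent while keeping an anchored option such as “Other” at the end:

```python
import random

def shuffled_choices(choices, anchored=("Other",)):
    """Return a per-respondent ordering of the given choices.

    Choices listed in `anchored` keep their place at the end,
    since options like "Other" or "None of the above" are
    usually expected last regardless of shuffling.
    """
    movable = [c for c in choices if c not in anchored]
    fixed = [c for c in choices if c in anchored]
    random.shuffle(movable)
    return movable + fixed

# Each respondent sees the first three options in a random order,
# with "Other" always last.
print(shuffled_choices(["Email", "Phone", "Chat", "Other"]))
```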

Here is a link to the next segment of this series: Agree or disagree? 10 tips for better surveys — part 3

Writing Good Surveys, Part 6: Tips for the form of the survey

Posted by Doug Peterson

In this final installment of the series, we’ll take a look at some tips for the form of the survey itself.

The first suggestion is to avoid labeling sections of questions. Studies have shown that when it is obvious that a series of questions belongs to a group, respondents tend to answer all of the questions in the group the same way they answer the first one. The same is true of visual formatting, such as putting a box around a group of questions or extra space between groups. It’s best simply to present all of the questions in one sequentially numbered list.

As much as possible, keep questions at about the same length, and present the same number of questions (roughly, it doesn’t have to be exact) for each topic. Longer questions or more questions on a topic tend to require more reflection by the respondent, and tend to receive higher ratings. I suspect this might have something to do with the respondent feeling like the question or group of questions is more important (or at least more work) because it is longer, possibly making them hesitant to give something “important” a negative rating.

It is important to collect demographic information as part of a survey. However, a respondent who suspects that he or she can be identified may skew his or her answers. Put the demographic information at the end of the survey to encourage honest responses to the preceding questions. Make as much of the demographic information optional as possible, and if the answers are collected and stored anonymously, assure the respondent of this. If you don’t absolutely need a piece of demographic information, don’t ask for it. The more anonymous the respondent feels, the more honest he or she will be.
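If you assemble surveys from data, this ordering is easy to enforce at build time. Below is a minimal Python sketch; the `Question` structure and field names are hypothetical, invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    demographic: bool = False
    required: bool = True

def order_survey(questions):
    """Place demographic questions last and make them optional."""
    content = [q for q in questions if not q.demographic]
    demographics = [q for q in questions if q.demographic]
    for q in demographics:
        q.required = False  # optional answers encourage honesty
    return content + demographics

survey = [
    Question("How satisfied are you with your pay?"),
    Question("What is your age band?", demographic=True),
    Question("How satisfied are you with your work conditions?"),
]
for q in order_survey(survey):
    print(q.text, "" if q.required else "(optional)")
```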

Group questions with the same response scale together and present them in a matrix format. This reduces the cognitive load on the respondent; the response possibilities do not have to be figured out on each individual question, and the easier it is for respondents to fill out the survey, the more honest and accurate they will be. If you do not use the matrix format, consider listing the response scale choices vertically instead of horizontally. A vertical orientation clearly separates the choices and reduces the chance of accidentally selecting the wrong choice. And regardless of orientation, be sure to place more space between questions than between a question and its response scale.
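One way to apply this grouping when generating a survey from data is to bucket questions by their response scale, so that each bucket becomes one matrix block. Here is a short sketch under that assumption; the question texts and scale tuples are examples of mine, not from any particular product:

```python
from collections import defaultdict

# Each question pairs its text with its response scale; the scale is
# a tuple so it can serve as a hashable grouping key.
questions = [
    ("The course was well organized.",
     ("Strongly Disagree", "Disagree", "Neutral", "Agree", "Strongly Agree")),
    ("The instructor was prepared.",
     ("Strongly Disagree", "Disagree", "Neutral", "Agree", "Strongly Agree")),
    ("How would you rate the venue?",
     ("Poor", "Fair", "Good", "Excellent")),
]

def group_by_scale(items):
    """Group questions sharing a response scale into one matrix block."""
    blocks = defaultdict(list)
    for text, scale in items:
        blocks[scale].append(text)
    return blocks

for scale, texts in group_by_scale(questions).items():
    print(scale, "->", texts)
```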

I hope you’ve enjoyed this series on writing good surveys. I also hope you’ll join us in San Antonio in March 2014 for our annual Users Conference – I’ll be presenting a session on writing assessment and survey items, and I’m looking forward to hearing ideas and feedback from those in attendance!

Writing Good Surveys, Part 4: Response Scales

Posted by Doug Peterson

In part 2 and part 3 of this series, we looked at writing good survey questions. Now it’s time to turn our attention to the response scale.

Response scales come in several flavors. The binary or dichotomous scale is your basic yes/no option. A multiple choice scale offers three or more discrete selections from which to choose. For example, you might ask “In which type of public sector organization do you work?” and provide the following choices:

  • Federal
  • State
  • County
  • Municipal
  • City/Local
  • Other

Dichotomous and multiple choice scales are typically used for factual answers, not for opinions or ratings. The key to these scales is making sure you offer the respondent an answer they can use. For example, asking a hotel guest “Did you enjoy your stay?” and then giving them the options “yes” and “no” is not a good idea. They may have greatly enjoyed their room but been very dissatisfied with the fitness center, and this question/response-scale pairing does not allow them to differentiate between aspects of their visit. A better approach might be to ask “Did you use the fitness center?” with a yes/no response, and if they did, have them answer more detailed questions about their fitness center experience.
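The follow-up pattern described above is usually implemented as skip logic: a gating yes/no question decides whether the detailed block is shown at all. A minimal Python sketch of the idea (the structure and question texts are illustrative, not any product’s API):

```python
def fitness_center_block(used_fitness_center: bool) -> list[str]:
    """Return follow-up questions only if the gate question was 'yes'."""
    if not used_fitness_center:
        return []  # respondent skips the whole block
    return [
        "How would you rate the cleanliness of the fitness center?",
        "How would you rate the range of equipment?",
    ]

# Gate answered "yes": the detailed questions are shown.
print(fitness_center_block(True))
# Gate answered "no": the block is skipped entirely.
print(fitness_center_block(False))
```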

The response scale we typically think about when it comes to surveys is the descriptive scale, where the respondent describes their opinion or experience as a point on a continuum between two extremes, such as “strongly disagree” to “strongly agree”. These are the scales that elicit the most debate among the experts, and I strongly encourage you to Google “survey response scales” and do a little reading. The main points of discussion are the number of responses, direction, and labeling.

Number of Responses

We’ve all seen the minimalist approach to rating scales:

[1] Disagree     [2] Neutral     [3] Agree

There are certainly situations where this scale is valid, but most of the time you will want to provide more options to allow the respondent to provide a more detailed answer:

[1] Strongly Disagree     [2] Disagree     [3] Neutral     [4] Agree     [5] Strongly Agree

Five choices is very typical, but I would agree with the point made by Ken Phillips during his session on surveys at ASTD 2013: five may not be enough. Think of it this way: I’m pretty much going to either disagree or agree with the statement, so two of the choices are immediately eliminated. Therefore, what I *really* have is a three-point scale – Strongly Disagree, Disagree, and Neutral, or Neutral, Agree, and Strongly Agree (and an argument can be made for taking “Neutral” out of the mix when you do agree or disagree at some level). Ken, drawing on the Harvard Business Review article “Getting the Truth into Workplace Surveys” by Palmer Morrel-Samuels, recommends using a minimum of seven choices and indicates that nine- and eleven-choice scales are even better. However, a number of sources feel that anything over seven puts a cognitive load on the respondent: they are presented with too many options and have trouble choosing between them. Personally, I recommend either five or seven choices.

Direction

By “direction” I mean whether the scale runs from “Strongly Disagree” to “Strongly Agree” or from “Strongly Agree” to “Strongly Disagree”. Many experts will tell you that it doesn’t matter, but others point out that studies have shown respondents tend to want to agree with the statement. If you start with “Strongly Agree”, the respondent may select that choice “automatically”: it is a positive response, and they tend to want to agree anyway. This could skew your results. If the first choice is “Strongly Disagree”, however, the respondent has more of a tendency to read through the choices, because “Strongly Disagree” has a negative feel to it (it’s not an attractive answer) and respondents will shy away from it unless they truly feel that way. They are then more likely to genuinely differentiate between “Agree” and “Strongly Agree”, instead of seeing “Strongly Agree” first, thinking, “Yeah, what the heck, that’s good enough,” and selecting it without much thought.

In Part 5, we’ll finish up this discussion by taking a look at labeling each choice, along with a few other best practices related to response scales.

If you are interested in authoring best practices, be sure to register for the Questionmark 2014 Users Conference in San Antonio, Texas March 4 – 7. See you there!