Writing Good Surveys, Part 5: Finishing Up on Response Scales

Posted By Doug Peterson

If you have not already seen part 4 of this series, I’d recommend reading what it has to say about number of responses and direction of response scales as an introduction to today’s discussion.

To label or not to label, that is the question (apologies to Mr. Shakespeare). In his Harvard Business Review article, "Getting the Truth into Workplace Surveys," Palmer Morrel-Samuels presents the following example:

[Image: surveys 5 1]

Mr. Morrel-Samuels’ position is that using words or phrases to label the choices is to be avoided because the labels may mean different things to different people. What I consider to be exceeding expectations may only just meet expectations according to someone else. And how far is “far” when someone far exceeds expectations? Is it a great deal more than “meets expectations” and a little bit more than “exceeds expectations,” or is it a great deal more than “exceeds expectations”? Because of this ambiguity, Mr. Morrel-Samuels recommends labeling only the first and last options with words and using numbers to label every option, as shown here:

[Image: surveys 5 2]

The idea behind this approach is that “never” and “always” should mean the same thing to every respondent, and that the use of numbers indicates an equal difference between each choice.

However, a quick Googling of “survey response scales” reveals that many survey designers recommend just the opposite – that scale choices should all be labeled! Their position is that numbers have no meaning on their own and that you’re putting more of a cognitive load on the respondent by forcing them to determine the meaning of “5” versus “6” instead of providing the meaning with a label.

I believe that both sides of the argument have valid points. My personal recommendation is to label each choice, but to take great care to construct labels that are clear and concise. I believe this is also a situation where you must take into account the average respondent – a group of scientists may be quite comfortable with numeric labels, while the average person on the street would probably respond better to textual labels.

Another possibility is to avoid the problem altogether by staying away from opinion-based answers whenever possible. Instead, look for opportunities to measure frequency. For example:

I ride my bicycle to work:

[Image: surveys 5 3]

In this example, the extremes are well-defined, but everything in the middle is up to the individual’s definition of frequency. This item might work better like this:

On average, I ride my bicycle to work:

[Image: surveys 5 4]

Now there is no ambiguity among the choices.

A few more things to think about when constructing your response scales:

  • Space the choices evenly. Doing so provides visual reinforcement that there is an equal amount of difference between the choices.
  • If there is any possibility that the respondent may not know the answer or have an opinion, provide a “not applicable” choice. Remember, this is different from a “neutral” choice in the middle of the scale. The “not applicable” choice should be different in appearance, for example, a box instead of a circle and greater space between it and the previous choice.
  • If you do use numbers in your choice labels, number them from low to high going left to right. That’s how we’re used to seeing them, and we tend to associate low numbers with “bad” and high numbers with “good” when asked to rate something. (See part 4 in this series for a discussion of going from negative to positive responses.) Obviously, if you’re dealing with a right-to-left language (e.g., Arabic or Hebrew), just the opposite is true.
  • When possible, use the same term in your range of choices. For example, go from “not at all shy” to “very shy” instead of “brave” to “shy”. Using two different terms hearkens back to the problem of different people having different definitions for those terms.
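To tie these guidelines together, here is a minimal sketch in Python of how a response scale following them might be represented. The names, question text, and structure are all hypothetical illustrations, not tied to any particular survey tool: numbers run low to high, every choice uses the same term, and the "not applicable" option is kept separate from the main scale so it can be rendered differently.

```python
# Hypothetical representation of a response scale that follows the
# guidelines above: numbers run low to high, every choice uses the
# same term ("shy"), and "not applicable" sits outside the main scale.

SHYNESS_SCALE = {
    "question": "How shy do you consider yourself to be?",
    "choices": [  # numbered low to high, left to right
        (1, "Not at all shy"),
        (2, "Slightly shy"),
        (3, "Moderately shy"),
        (4, "Quite shy"),
        (5, "Very shy"),
    ],
    # Kept separate so it can be rendered differently (e.g., a box
    # instead of a circle, with extra space before it) and is not
    # mistaken for a neutral midpoint.
    "not_applicable": "N/A",
}

def is_valid_response(scale, answer):
    """A response is valid if it is N/A or one of the numeric choices."""
    if answer == scale["not_applicable"]:
        return True
    return any(value == answer for value, _ in scale["choices"])

print(is_valid_response(SHYNESS_SCALE, 3))      # a numbered choice -> True
print(is_valid_response(SHYNESS_SCALE, "N/A"))  # the separate N/A option -> True
print(is_valid_response(SHYNESS_SCALE, 7))      # outside the scale -> False
```

Keeping the N/A option out of the numbered list also means it can never be averaged into the numeric results by mistake.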

Be sure to stay tuned for the next installment in this series. In part 6, we’ll take a look at putting the entire survey together – some “form and flow” best practices. And if you enjoy learning about things like putting together good surveys and writing good assessment items, you should really think about attending our European Users Conference or our North American Users Conference. Both conferences are great opportunities to learn from Questionmark employees as well as fellow Questionmark customers!

Writing Good Surveys, Part 4: Response Scales

Posted By Doug Peterson

In part 2 and part 3 of this series, we looked at writing good survey questions. Now it’s time to turn our attention to the response scale.

Response scales come in several flavors. The binary or dichotomous scale is your basic yes/no option. A multiple choice scale offers three or more discrete selections from which to choose. For example, you might ask “What is the public sector organization in which you work?” and provide the following choices:

  • Federal
  • State
  • County
  • Municipal
  • City/Local
  • Other

Dichotomous and multiple choice scales are typically used for factual answers, not for opinions or ratings. The key to these types of scales is that you must make sure that you offer the respondent an answer they can use. For example, asking a hotel guest “Did you enjoy your stay?” and then giving them the options “yes” and “no” is not a good idea. They may have greatly enjoyed their room, but were very dissatisfied with the fitness center, and this question/response scale pairing does not allow them to differentiate between different aspects of their visit. A better approach might be to ask “Did you use the fitness center?” with a yes/no response, and if they did, have them answer more detailed questions about their fitness center experience.
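The gating approach described above can be sketched as simple branching logic. This is a minimal, hypothetical example in Python (question texts and structure are illustrative, not from any real survey platform): a yes/no question routes the respondent to follow-up items only when they are relevant.

```python
# Minimal sketch of the gating pattern described above: a yes/no
# question routes the respondent to follow-up items only when relevant.
# Question texts and structure are illustrative assumptions.

def next_questions(used_fitness_center):
    """Return follow-up items based on the yes/no gate question."""
    if used_fitness_center:
        return [
            "How satisfied were you with the fitness center equipment?",
            "How satisfied were you with the fitness center hours?",
        ]
    # Respondents who answered "no" skip the follow-ups entirely.
    return []

print(len(next_questions(True)))   # prints 2
print(len(next_questions(False)))  # prints 0
```

The benefit of this pattern is that every respondent only sees questions they can actually answer, rather than being forced into a single yes/no that blends unrelated aspects of their visit.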

The response scale we typically think about when it comes to surveys is the descriptive scale, where the respondent describes their opinion or experience as a point on a continuum between two extremes, such as “strongly disagree” to “strongly agree”. These are the scales that elicit the most debate among the experts, and I strongly encourage you to Google “survey response scales” and do a little reading. The main points of discussion are number of responses, direction, and labeling.

Number of Responses

We’ve all seen the minimalist approach to rating scales:

Disagree [1]     Neutral [2]     Agree [3]

There are certainly situations where this scale is valid, but most of the time you will want to provide more options to allow the respondent to provide a more detailed answer:

Strongly Disagree [1]     Disagree [2]     Neutral [3]     Agree [4]     Strongly Agree [5]

Five choices is very typical, but I agree with the point made by Ken Phillips during his session on surveys at ASTD 2013: five may not be enough. Think of it this way: I’m pretty much going to either disagree or agree with the statement, so two of the choices are immediately eliminated. Therefore, what I *really* have is a three-point scale – Strongly Disagree, Disagree, and Neutral, or Neutral, Agree, and Strongly Agree (and an argument can be made for taking “neutral” out of the mix when you do agree or disagree at some level). Ken, based on the Harvard Business Review article “Getting the Truth into Workplace Surveys” by Palmer Morrel-Samuels, recommends using a minimum of seven choices, and indicates that nine- and eleven-choice scales are even better. However, a number of sources feel that anything over seven puts a cognitive load on the respondent: they are presented with too many options and have trouble choosing among them. Personally, I recommend either five or seven choices.


Direction

By “direction” I’m referring to going from “Strongly Disagree” to “Strongly Agree”, versus going from “Strongly Agree” to “Strongly Disagree”. Many experts will tell you that it doesn’t matter, but others point out that studies have shown respondents tend to want to agree with the statement. If you start with “Strongly Agree”, the respondent may tend to select that choice “automatically”, as it is a positive response and they tend to want to agree anyway. This could skew your results. If the first choice is “Strongly Disagree”, however, the respondent will have more of a tendency to read through the choices, as “Strongly Disagree” has a negative feel to it (it’s not an attractive answer) and respondents will shy away from it unless they truly feel that way. The respondent is then more likely to truly differentiate between “Agree” and “Strongly Agree”, instead of seeing “Strongly Agree” first, thinking, “Yeah, what the heck, that’s good enough,” and selecting it without much thought.

In Part 5, we’ll finish up this discussion by taking a look at labeling each choice, along with a few other best practices related to response scales.

If you are interested in authoring best practices, be sure to register for the Questionmark 2014 Users Conference in San Antonio, Texas March 4 – 7. See you there!