Writing Good Surveys, Part 5: Finishing Up on Response Scales
Posted By Doug Peterson
If you have not already seen part 4 of this series, I’d recommend reading what it has to say about number of responses and direction of response scales as an introduction to today’s discussion.
To label or not to label, that is the question (apologies to Mr. Shakespeare). In his Harvard Business Review article, Getting the Truth into Workplace Surveys, Palmer Morrel-Samuels presents the following example:
Mr. Morrel-Samuels’ position is that using words or phrases to label every choice should be avoided, because the labels may mean different things to different people. What I consider to be exceeding expectations may only just meet expectations according to someone else. And how far is “far” when someone far exceeds expectations? Is it a great deal more than “meets expectations” and a little bit more than “exceeds expectations,” or is it a great deal more than “exceeds expectations?” Because of this ambiguity, Mr. Morrel-Samuels recommends labeling only the first and last options with words, and using numbers to label every option, as shown here:
The idea behind this approach is that “never” and “always” should mean the same thing to every respondent, and that the use of numbers indicates an equal difference between each choice.
However, a quick Googling of “survey response scales” reveals that many survey designers recommend just the opposite – that scale choices should all be labeled! Their position is that numbers have no meaning on their own and that you’re putting more of a cognitive load on the respondent by forcing them to determine the meaning of “5” versus “6” instead of providing the meaning with a label.
I believe that both sides of the argument have valid points. My personal recommendation is to label each choice, but to take great care to construct labels that are clear and concise. I believe this is also a situation where you must take into account the average respondent – a group of scientists may be quite comfortable with numeric labels, while the average person on the street would probably respond better to textual labels.
Another possibility is to avoid the problem altogether by staying away from opinion-based answers whenever possible. Instead, look for opportunities to measure frequency. For example:
In this example, the extremes are well-defined, but everything in the middle is up to the individual’s definition of frequency. This item might work better:
On average, I ride my bicycle to work:
Now there is no ambiguity among the choices.
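To make the contrast concrete, here is a hypothetical pair of scales for the bicycle item, sketched as simple Python lists. The specific wording is illustrative only; the original article’s answer choices are not reproduced here.

```python
# Hypothetical response scales for "On average, I ride my bicycle to work:"
# (illustrative wording, not the original article's choices).

# Frequency words: the extremes are clear, but "sometimes" versus "often"
# is left to each respondent's own definition.
ambiguous_scale = ["never", "rarely", "sometimes", "often", "always"]

# Concrete frequencies: every choice means the same thing to everyone.
unambiguous_scale = [
    "never",
    "less than 1 day per week",
    "1-2 days per week",
    "3-4 days per week",
    "5 days per week",
]
```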
A few more things to think about when constructing your response scales:
- Space the choices evenly. Doing so provides visual reinforcement that there is an equal amount of difference between the choices.
- If there is any possibility that the respondent may not know the answer or have an opinion, provide a “not applicable” choice. Remember, this is different from a “neutral” choice in the middle of the scale. The “not applicable” choice should be different in appearance, for example, a box instead of a circle and greater space between it and the previous choice.
- If you do use numbers in your choice labels, number them from low to high going left to right. That’s how we’re used to seeing them, and we tend to associate low numbers with “bad” and high numbers with “good” when asked to rate something. (See part 4 in this series for a discussion on going from negative to positive responses.) Obviously, if you’re dealing with a right-to-left language (e.g., Arabic or Hebrew), just the opposite is true.
- When possible, use the same term in your range of choices. For example, go from “not at all shy” to “very shy” instead of “brave” to “shy”. Using two different terms hearkens back to the problem of different people having different definitions for those terms.
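The guidelines above can be sketched in code. This is a minimal illustration, not any particular survey tool’s API: the `build_scale` helper and its field names are assumptions for the example. It numbers choices low to high, left to right, and keeps a “not applicable” option separate from the scale itself so it isn’t mistaken for a neutral midpoint.

```python
def build_scale(labels, include_na=True):
    """Build a response scale following the guidelines above (a sketch,
    not a real survey library's API): choices are numbered 1..N from
    left to right, and 'not applicable' is stored apart from the scale."""
    return {
        "choices": [
            {"value": i, "label": label}
            for i, label in enumerate(labels, start=1)
        ],
        # Rendered differently from the scale (e.g., a box with extra
        # spacing) so it is not confused with a neutral middle choice.
        "not_applicable": include_na,
    }

# Same term across the range ("shy"), rather than "brave" to "shy".
scale = build_scale([
    "not at all shy", "slightly shy", "moderately shy",
    "quite shy", "very shy",
])
values = [c["value"] for c in scale["choices"]]
assert values == sorted(values)  # numbered low to high, left to right
```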
Be sure to stay tuned for the next installment in this series. In part 6, we’ll take a look at putting the entire survey together – some “form and flow” best practices. And if you enjoy learning about things like putting together good surveys and writing good assessment items, you should really think about attending our European Users Conference or our North American Users Conference. Both conferences are great opportunities to learn from Questionmark employees as well as fellow Questionmark customers!