How many errors can you spot in this survey question?

Posted by John Kleeman

Tests and surveys are very different. In a test, you look to measure participant knowledge or skill; you know what answer you are looking for, and generally participants are motivated to answer well. In a survey, you look to measure participant attitude or recollection; you don’t know what answer you are looking for, and participants may be uninterested.

Writing good surveys is an important skill. If you want to learn how to write good, research-based surveys of opinion and attitude in training, learning, compliance and certification, you might be interested in a webinar I’m giving on May 15th on Designing Effective Surveys. Click HERE for more information or to register.

In the meantime, here’s a sample survey question. How many errors can you spot in the question?

The material and presentation qualty at Questionmark webinars is always excellent.

Strongly Agree    Agree    Slightly agree    Neither agree nor disagree    Disagree    Strongly disagree

There are quite a few errors. Try to count them before you look at my explanation below!

I count seven errors:

  1. I am sure you got the mis-spelling of “quality”. If you mis-spell something in a survey question, it indicates to the participant that you haven’t taken time and trouble writing your survey, so there is little incentive for them to spend time and trouble answering.
  2. It’s not usually sensible to use the word “always” in a survey question. Some participants may take the statement literally, and it’s much more likely that webinars are usually excellent than that every single one is excellent.
  3. The question is double-barreled. It’s asking about material AND presentation quality. They might be different. This really should be two questions to get a consistent answer.
  4. “Agree” is capitalized in “Strongly Agree” but not elsewhere, e.g. in “Slightly agree”. Capitalization should be consistent in every part of the scale.

You can see these four errors highlighted below.

[Image: the sample question with red marking corresponding to the four errors above]

Are those all the errors? I count three more, making a total of seven:

  5. The scale should be balanced. Why is there a “Slightly agree” but not a “Slightly disagree”?
  6. This is a leading or “loaded” question, not a neutral one: it encourages you to give a positive answer. If you genuinely want to get people’s opinion in a survey question, you need to ask it without encouraging the participant to answer a particular way.
  7. Lastly, any agree/disagree question has acquiescence bias. Research evidence suggests that some participants are more likely to agree when answering survey questions, particularly those who are more junior or less educated and may tend to assume that what is asked of them is true. It would be better to word this question to ask people to rate the webinars rather than agree with a statement about them (see the sketch after this list for one rewritten version).
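To pull the fixes together, here is a minimal sketch, in Python and purely for illustration (the data shape is something I have invented, not a Questionmark format), of the question rewritten to avoid the problems above: one topic per question, neutral wording, no “always”, and a balanced rating scale instead of an agree/disagree statement.

    # Illustrative sketch only: the dictionary shape is invented for this
    # post, not a real Questionmark question format.

    # A balanced five-point rating scale: symmetric labels around a
    # midpoint, with consistent capitalization throughout.
    RATING_SCALE = ["Very poor", "Poor", "Average", "Good", "Excellent"]

    # Two single-barreled, neutrally worded rating questions in place of
    # the one double-barreled, leading agree/disagree statement.
    questions = [
        {"text": "How would you rate the material at Questionmark webinars?",
         "scale": RATING_SCALE},
        {"text": "How would you rate the presentation quality at Questionmark webinars?",
         "scale": RATING_SCALE},
    ]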

Did you get all of these? I hope you enjoyed this little exercise. If you did, I’ll explain more about this and about good survey practice in our Designing Effective Surveys webinar; click HERE to register.

This webinar is based on some sessions I’ve given at past Questionmark user conferences which got high ratings. I’ll do my best to give you interesting material and engaging presentation quality in the webinar!

Writing Good Surveys, Part 6: Tips for the form of the survey

Posted by Doug Peterson

In this final installment of the series, we’ll take a look at some tips for the form of the survey itself.

The first suggestion is to avoid labeling sections of questions. Studies have shown that when it is obvious that a series of questions belongs to a group, respondents tend to answer all the questions in the group the same way they answer the first question in the group. The same is true with visual formatting, like putting a box around a group of questions or extra space between groups. It’s best to just present all of the questions in a simple, sequentially numbered list.
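As a rough illustration of that last point, here is a sketch (Python, with a made-up grouped structure; no survey tool API is implied) of flattening internally grouped questions into the single numbered list the respondent actually sees:

    # Questions may be authored in topic groups internally...
    grouped = {
        "Course content": ["The objectives were clear.",
                           "The examples were relevant."],
        "Instructor":     ["The instructor was well prepared.",
                           "Questions were answered clearly."],
    }

    # ...but are presented as one simple, sequentially numbered list, with
    # no group labels, boxes, or extra spacing to hint at the grouping.
    flat = [question for group in grouped.values() for question in group]
    for number, text in enumerate(flat, start=1):
        print(f"{number}. {text}")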

As much as possible, keep questions at about the same length, and present the same number of questions (roughly, it doesn’t have to be exact) for each topic. Longer questions or more questions on a topic tend to require more reflection by the respondent, and tend to receive higher ratings. I suspect this might have something to do with the respondent feeling like the question or group of questions is more important (or at least more work) because it is longer, possibly making them hesitant to give something “important” a negative rating.

It is important to collect demographic information as part of a survey. However, a suspicion that he or she can be identified can definitely skew a respondent’s answers. Put the demographic information at the end of the survey to encourage honest responses to the preceding questions. Make as much of the demographic information optional as possible, and if the answers are collected and stored anonymously, assure the respondent of this. If you don’t absolutely need a piece of demographic information, don’t ask for it. The more anonymous the respondent feels, the more honest he or she will be.
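One hedged way to enforce that ordering mechanically, reusing the same kind of invented item shape as above (again, not any real survey tool’s API):

    # A "demographic" flag marks the items that should come last and be
    # optional; everything here is illustrative.
    items = [
        {"text": "How would you rate the course material?", "demographic": False},
        {"text": "What is your department?",                "demographic": True},
        {"text": "How would you rate the instructor?",      "demographic": False},
    ]

    # Content questions first, demographics last; only content questions
    # are required. sorted() is stable, so each group keeps its order.
    ordered = sorted(items, key=lambda item: item["demographic"])
    for item in ordered:
        item["required"] = not item["demographic"]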

Group questions with the same response scale together and present them in a matrix format. This reduces the cognitive load on the respondent; the response possibilities do not have to be figured out on each individual question, and the easier it is for respondents to fill out the survey, the more honest and accurate they will be. If you do not use the matrix format, consider listing the response scale choices vertically instead of horizontally. A vertical orientation clearly separates the choices and reduces the chance of accidentally selecting the wrong choice. And regardless of orientation, be sure to place more space between questions than between a question and its response scale.
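For instance, you might group questions by their shared scale before laying each group out as a matrix. A sketch (invented shapes again; the console output merely stands in for real matrix rendering):

    from collections import defaultdict

    AGREEMENT = ("Strongly Disagree", "Disagree", "Neutral", "Agree", "Strongly Agree")
    FREQUENCY = ("Never", "Rarely", "Sometimes", "Often", "Always")

    questions = [
        ("The material was clear.",           AGREEMENT),
        ("How often did you use the portal?", FREQUENCY),
        ("The pace was appropriate.",         AGREEMENT),
    ]

    # Group questions that share a response scale so each group can be
    # rendered as one matrix with a single header row of choices.
    matrices = defaultdict(list)
    for text, scale in questions:
        matrices[scale].append(text)

    for scale, texts in matrices.items():
        print(" | ".join(scale))   # one shared header row per matrix
        for text in texts:
            print("   " + text)    # one matrix row per question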

I hope you’ve enjoyed this series on writing good surveys. I also hope you’ll join us in San Antonio in March 2014 for our annual Users Conference – I’ll be presenting a session on writing assessment and survey items, and I’m looking forward to hearing ideas and feedback from those in attendance!

Writing Good Surveys, Part 4: Response Scales

Posted by Doug Peterson

In part 2 and part 3 of this series, we looked at writing good survey questions. Now it’s time to turn our attention to the response scale.

Response scales come in several flavors. The binary or dichotomous scale is your basic yes/no option. A multiple choice scale offers three or more discrete selections from which to choose. For example, you might ask “What is the public sector organization in which you work?” and provide the following choices:

  • Federal
  • State
  • County
  • Municipal
  • City/Local
  • Other

Dichotomous and multiple choice scales are typically used for factual answers, not for opinions or ratings. The key to these types of scales is that you must make sure that you offer the respondent an answer they can use. For example, asking a hotel guest “Did you enjoy your stay?” and then giving them the options “yes” and “no” is not a good idea. They may have greatly enjoyed their room, but were very dissatisfied with the fitness center, and this question/response scale pairing does not allow them to differentiate between different aspects of their visit. A better approach might be to ask “Did you use the fitness center?” with a yes/no response, and if they did, have them answer more detailed questions about their fitness center experience.
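Here is a sketch of that screening pattern in Python (the ask helper is a console stand-in I have invented for whatever your survey tool actually does):

    def ask(question, choices):
        # Minimal console stand-in for a survey tool collecting one answer.
        print(question)
        for number, choice in enumerate(choices, start=1):
            print(f"  {number}. {choice}")
        return choices[int(input("Choice number: ")) - 1]

    # Screen with a dichotomous question first, then branch into detail
    # questions only for respondents who can actually answer them.
    if ask("Did you use the fitness center?", ["Yes", "No"]) == "Yes":
        ratings = ["Very poor", "Poor", "Average", "Good", "Excellent"]
        ask("How would you rate the fitness center's equipment?", ratings)
        ask("How would you rate the fitness center's cleanliness?", ratings)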

The response scale we typically think about when it comes to surveys is the descriptive scale, where the respondent describes their opinion or experience as a point on a continuum between two extremes, such as from “strongly disagree” to “strongly agree”. These are the scales that elicit the most debate among the experts, and I strongly encourage you to Google “survey response scales” and do a little reading. The main points of discussion are number of responses, direction, and labeling.

Number of Responses

We’ve all seen the minimalist approach to rating scales:

[1] Disagree        [2] Neutral        [3] Agree

There are certainly situations where this scale is valid, but most of the time you will want to provide more options to allow the respondent to provide a more detailed answer:

[1] Strongly Disagree        [2] Disagree        [3] Neutral        [4] Agree        [5] Strongly Agree

Five choices is very typical, but I would agree with the point made by Ken Phillips during his session on surveys at ASTD 2013: five may not be enough. Think of it this way: I’m pretty much going to either disagree or agree with the statement, so two of the choices are immediately eliminated. Therefore, what I *really* have is a three-point scale: Strongly Disagree, Disagree, and Neutral, or Neutral, Agree, and Strongly Agree (and an argument can be made for taking “Neutral” out of the mix when you do agree or disagree at some level). Ken, drawing on the Harvard Business Review article “Getting the Truth into Workplace Surveys” by Palmer Morrel-Samuels, recommends using a minimum of seven choices, and indicates that nine- and eleven-choice scales are even better. However, a number of sources feel that anything over seven puts a cognitive load on the respondent: they are presented with too many options and have trouble choosing between them. Personally, I recommend either five or seven choices.
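If you build scales programmatically, the five- and seven-point versions differ only in the intensity modifiers. A hedged sketch (the label wording below is one common convention, not the only one):

    def likert_scale(points):
        # Balanced agreement scale; only 5- and 7-point versions are
        # sketched here, matching the recommendation above.
        intensities = {5: ["Strongly", ""], 7: ["Strongly", "", "Slightly"]}[points]
        negative = [f"{i} Disagree".strip() for i in intensities]
        positive = [f"{i} Agree".strip() for i in reversed(intensities)]
        return negative + ["Neutral"] + positive

    print(likert_scale(5))
    # ['Strongly Disagree', 'Disagree', 'Neutral', 'Agree', 'Strongly Agree']
    print(likert_scale(7))
    # ['Strongly Disagree', 'Disagree', 'Slightly Disagree', 'Neutral',
    #  'Slightly Agree', 'Agree', 'Strongly Agree']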

Direction

By “direction” I mean going from “Strongly Disagree” to “Strongly Agree”, or from “Strongly Agree” to “Strongly Disagree”. Many experts will tell you that it doesn’t matter, but others point out that studies have shown that respondents tend to want to agree with the statement. If you start with “Strongly Agree”, the respondent may select that choice almost automatically, as it is a positive response and they tend to want to agree anyway. This could skew your results. However, if the first choice is “Strongly Disagree”, the respondent will have more of a tendency to read through the choices, as “Strongly Disagree” has a negative feel to it (it’s not an attractive answer) and respondents will shy away from it unless they truly feel that way. The respondent is then more likely to truly differentiate between “Agree” and “Strongly Agree”, instead of seeing “Strongly Agree” first, thinking, “Yeah, what the heck, that’s good enough,” and selecting it without much thought.
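In code terms, direction is just the order of the choice list: the likert_scale sketch above already returns its choices negative-first, and reversing an agree-first scale is a one-liner.

    # Present the scale negative-first so respondents read past
    # "Strongly Disagree" before they reach the agreeable options.
    agree_first = ["Strongly Agree", "Agree", "Neutral", "Disagree", "Strongly Disagree"]
    disagree_first = list(reversed(agree_first))
    # ['Strongly Disagree', 'Disagree', 'Neutral', 'Agree', 'Strongly Agree']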

In Part 5, we’ll finish up this discussion by taking a look at labeling each choice, along with a few other best practices related to response scales.

If you are interested in authoring best practices, be sure to register for the Questionmark 2014 Users Conference in San Antonio, Texas March 4 – 7. See you there!