Writing Good Surveys, Part 5: Finishing Up on Response Scales

Posted By Doug Peterson

If you have not already seen part 4 of this series, I’d recommend reading what it has to say about the number of response choices and the direction of response scales as an introduction to today’s discussion.

To label or not to label, that is the question (apologies to Mr. Shakespeare). In his Harvard Business Review article, Getting the Truth into Workplace Surveys, Palmer Morrel-Samuels presents the following example:

[Image: a response scale in which every choice is labeled with a phrase such as “meets expectations,” “exceeds expectations,” or “far exceeds expectations.”]

Mr. Morrel-Samuels’ position is that the use of words or phrases to label the choices is to be avoided because the labels may mean different things to different people. What I consider to be exceeding expectations may only just meet expectations according to someone else. And how far is “far” when someone far exceeds expectations? Is it a great deal more than “meets expectations” and a little bit more than “exceeds expectations,” or is it a great deal more than “exceeds expectations?” Because of this ambiguity, Mr. Morrel-Samuels recommends labeling only the first and last options with words and using numbers for every choice, as shown here:

[Image: a response scale with only the first choice labeled “never” and the last labeled “always,” and a number under each choice.]

The idea behind this approach is that “never” and “always” should mean the same thing to every respondent, and that the use of numbers indicates an equal difference between each choice.

However, a quick Googling of “survey response scales” reveals that many survey designers recommend just the opposite – that scale choices should all be labeled! Their position is that numbers have no meaning on their own and that you’re putting more of a cognitive load on the respondent by forcing them to determine the meaning of “5” versus “6” instead of providing the meaning with a label.

I believe that both sides of the argument have valid points. My personal recommendation is to label each choice, but to take great care to construct labels that are clear and concise. I believe this is also a situation where you must take into account the average respondent – a group of scientists may be quite comfortable with numeric labels, while the average person on the street would probably respond better to textual labels.

Another possibility is to avoid the problem altogether by staying away from opinion-based answers whenever possible. Instead, look for opportunities to measure frequency. For example:

I ride my bicycle to work:

[Image: a frequency scale whose endpoint choices are well defined but whose middle choices depend on each respondent’s own sense of frequency.]

In this example, the extremes are well-defined, but everything in the middle is up to the individual’s definition of frequency. This item might work better like this:

On average, I ride my bicycle to work:

[Image: the revised item, with every choice expressed as a concrete, unambiguous frequency.]

Now there is no ambiguity among the choices.
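
To make the contrast concrete, here is a minimal sketch of the two approaches in Python. The option wordings below are hypothetical illustrations of the idea, not the original survey graphics.

    # Hypothetical choice sets illustrating ambiguous vs. unambiguous frequency scales.
    # The wordings are my own examples, not the original survey images.

    # Ambiguous: the endpoints are clear, but "rarely," "sometimes," and "often"
    # depend on each respondent's personal definition of frequency.
    ambiguous_choices = ["Never", "Rarely", "Sometimes", "Often", "Always"]

    # Unambiguous: every choice is a concrete count, so all respondents
    # interpret the options the same way.
    unambiguous_choices = [
        "0 days per week",
        "1 day per week",
        "2 days per week",
        "3 days per week",
        "4 days per week",
        "5 days per week",
    ]

    for choice in unambiguous_choices:
        print(choice)

Either list could serve as the choice set for the item; the point is simply that the second one leaves nothing open to interpretation.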

A few more things to think about when constructing your response scales:

  • Space the choices evenly. Doing so provides visual reinforcement that there is an equal amount of difference between the choices.
  • If there is any possibility that the respondent may not know the answer or have an opinion, provide a “not applicable” choice. Remember, this is different from a “neutral” choice in the middle of the scale. The “not applicable” choice should be different in appearance, for example, a box instead of a circle and greater space between it and the previous choice.
  • If you do use numbers in your choice labels, number them from low to high going left to right. That’s how we’re used to seeing them, and we tend to associate low numbers with “bad” and high numbers with “good” when asked to rate something. (See part 4 in this series for a discussion of going from negative to positive responses.) Obviously, if you’re dealing with a right-to-left language (e.g., Arabic or Hebrew), just the opposite is true.
  • When possible, use the same term in your range of choices. For example, go from “not at all shy” to “very shy” instead of “brave” to “shy”. Using two different terms hearkens back to the problem of different people having different definitions for those terms.

Be sure to stay tuned for the next installment in this series. In part 6, we’ll take a look at putting the entire survey together – some “form and flow” best practices. And if you enjoy learning about things like putting together good surveys and writing good assessment items, you should really think about attending our European Users Conference or our North American Users Conference. Both conferences are great opportunities to learn from Questionmark employees as well as fellow Questionmark customers!

Writing Good Surveys, Part 2: Question Basics

Posted By Doug Peterson

In the first installment in this series, I mentioned the ASTD book, Survey Basics, by Phillips, Phillips and Aaron. The fourth chapter, “Survey Questions,” is especially good, and it’s the basis for this installment.

The first thing to consider when writing questions for your survey is whether or not the questions return the data for which you’re looking. For example, let’s say one of the objectives for your survey is to “determine the amount of time per week spent reading email.”

Which of these questions would best satisfy that objective?

  1. How many emails do you receive per week, on average?
  2. On average, how many hours do you spend responding to emails every week?
  3. How long does it take to read the average email?
  4. On average, how many hours do you spend reading emails every week?

All four questions are related to dealing with email, but only one pertains directly to the objective. Numbers 1 and 3 could be combined to satisfy the objective if you’re willing to assume that every email received is read – a bit of a risky assumption, in my opinion (and experience). Number 2 is close, but there is a difference between reading an email and responding to it, and again, you may not respond to every email you read. Only number 4 asks directly for the data the objective calls for.
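
If you did decide to combine questions 1 and 3, the estimate itself is simple arithmetic. Here is a minimal sketch in Python with made-up numbers, purely for illustration; it only holds if you accept the assumption that every email received is actually read.

    # Hypothetical back-of-the-envelope estimate from combining questions 1 and 3.
    # The figures below are invented for illustration only.
    emails_per_week = 150      # answer to question 1 (emails received per week)
    minutes_per_email = 1.5    # answer to question 3 (minutes to read one email)

    # Only valid if every email received is actually read.
    hours_reading_per_week = emails_per_week * minutes_per_email / 60
    print(f"Estimated reading time: {hours_reading_per_week:.1f} hours per week")

Question 4 asks for that final number directly, which is why it maps onto the objective without any extra assumptions.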

The next thing to consider is whether or not the question can be answered, and if so, ensuring that the question does not lead to a desired answer.

The authors give two examples in the book. The first describes a situation where the author was asked to respond to the question, “Were you satisfied with our service?” with a yes or no. He was not dissatisfied with the service he received, but he wasn’t satisfied with it, either. However, there was no middle ground, and he was unable to answer the question.

The second example involves one of the authors checking out of a hotel. When she tells the clerk that she enjoyed her stay, the clerk tells her that they rate customer satisfaction on a scale of one to ten, and asks if she would give them a ten. She felt pressured into giving the suggested response instead of feeling free to give a nine or an eight.

Another basic rule for writing survey questions is to make sure the respondent can understand the question. If they can’t understand it at all, they won’t answer or they will answer randomly (which is worse than not answering, as it is garbage data that skews your results). If they misunderstand the question, they’ll be answering a question that you didn’t ask. Remember, the question author is a subject matter expert (SME); he or she understands the big words and fancy jargon. Of course the question makes sense to the SME! But the person taking the survey is probably not an SME, which means the question needs to be written in plain language. You’re writing for the respondent, not the SME.

Even more basic than providing enough options for the respondent to use (see the “yes or no” example above) is making sure the respondent even has the knowledge to answer. This is typically a problem with “standard” surveys. For example, a standard end-of-course survey might ask if the room temperature was comfortable. While this question is appropriate for an instructor-led training class where the training department has some control over the environment, it really doesn’t apply to a self-paced, computer-based e-learning course.

Another example of a question for which the respondent would have no way of knowing the answer would be something like, “Does your manager provide monthly feedback to his/her direct reports?” How would you know? Unless you have access to your manager’s schedule and can verify that he or she met with each direct report and discussed their performance, the only question you could answer is, “Does your manager provide you with monthly feedback?” The same thing is true of questions that start off with, “Do your coworkers consider…” – the respondent has no idea what his/her coworkers’ thoughts and feelings are, so only ask questions about observable behaviors.

Finally, make sure to write questions in a way that respondents are willing to answer. Asking a question such as “I routinely refuse to cooperate with my coworkers” is probably not going to get a positive response from someone who is, in fact, uncooperative. Something like “Members of my workgroup routinely cooperate with each other” is not threatening and does not make the respondent look bad, yet they can still answer with “disagree” and provide you with insights as to the work atmosphere within the group.

Here’s an example of a course evaluation survey that gives the respondent plenty of choices.

Writing Good Surveys, Part 1: Getting Started

Posted By Doug Peterson

In May of 2013 I attended the American Society for Training and Development (ASTD) conference in Dallas, TX. While there I took in Ken Phillips’ session called “Capturing Elusive Level 3 Data: The Secrets of Survey Design.” I also picked up the book “Survey Basics” by Patricia Pulliam Phillips, Jack J. Phillips, and Bruce Aaron. (Apparently there is some sort of cosmic connection between surveys and people named “Phillips”. Who knew?) Over the course of my next few blog posts, I’d like to discuss some of the things I’ve learned about surveys.

Surveys can be conducted in several ways:

  1. Self-administered
  2. Interviews
  3. Focus groups
  4. Observation

In this series, I’m going to be looking at #1 and #4. The self-administered survey is what we typically think about when we hear the word “survey” – taking an evaluation survey at the end of a training experience. Was the room temperature comfortable? Did you enjoy the training experience? Many times you hear them referred to as “smile sheets” and they relate to level 1 of the Kirkpatrick model (reaction). Questionmark excels at creating these types of surveys, and our Questionmark Live browser-based authoring tool even has a dedicated “Course Evaluation” assessment template that comes with a library of standard questions from which to select, in addition to writing questions of your own.

Surveys can also be used for Kirkpatrick level 3 evaluation – behavior. In other words, was the training applied back on the job? Many times level 3 data is derived from job statistics such as an increase in widgets produced per day or a decrease in the number of accidents reported per month. However, surveys can also be used to determine the impact of the training on job performance. Not only can the survey be taken by the learner, the survey can also take the form of an observational assessment filled out by someone else. Questionmark makes it easy to set up observational assessments – identify the observer and who they can observe, the observer logs in and specifies who he/she is observing, and the results are tied to the person being observed.

To write a good survey, it is important to understand the objectives of the survey. Define your objectives up front and then use them to drive which questions are included. If a question doesn’t pertain to one of the objectives, throw it out. The best results come from a survey that is only as long as it needs to be.

The next step is to define your target audience. The target audience of a level 1 survey is pretty obvious – it’s the people who took the training! However, level 3 surveys can be a bit trickier. Typically you would include those who participated in the training, but you may want to include others, as well. For example, if the training was around customer relations, you may want to survey some customers (internal and/or external). The learner’s peers and colleagues might be able to provide some valuable information as to how the learner is applying what was learned. The same is true about the learner’s management. In certain situations, it might also be appropriate to survey the learner’s direct reports. For example, if a manager takes leadership training, who better to survey than the people he or she is leading? The key thing is that the group being surveyed must have first-hand knowledge of the learner’s behavior.

A few more things to take into account when deciding on a target audience:

  • How disruptive or costly is the data collection process? Are you asking a lot of highly paid staff to take an hour of their time to fill out a survey? Will you have to shut down the production line or take customer representatives away from their phones to fill out the survey?
  • How credible do the results need to be? Learners tend to overinflate how much they use what they’ve learned, so if important decisions will be made based on the survey data, you may need to include respondents beyond the learners themselves.
  • What are the stakeholders expecting?

Whereas well-defined objectives define which questions are asked, the target audience defines how they are asked. Surveying the learner will typically involve more responses about feelings and impressions, especially in level 1 surveys. Surveying the learner’s colleagues, management, direct reports, and/or customers will involve questions more related to the learner’s observable behaviors.  As this series progresses, we’ll look at writing survey questions in more depth.
