Writing Good Surveys, Part 3: More Question Basics

Posted By Doug Peterson

In part 2 of this series, we looked at several tips for writing good survey questions. To recap:

  • Make sure to ask the right question so that the question returns the data you actually want.
  • Make sure the question is one the respondent can actually answer: typically something they can observe or their own personal feelings, not the thoughts, feelings, or intentions of others.
  • Make sure the question doesn’t lead or pressure the respondent towards a certain response.
  • Stay away from jargon.
  • Provide an adequate rating scale. Yes/No or Dislike/Neutral/Like may not provide enough options for the respondent to reply honestly.

In this installment, I’d like to look at two more tips. The first is called “barreling”, and it basically refers to asking two or more questions at once. An example might be “The room was clean and well-lit.” Clearly the survey is trying to uncover the respondent’s opinion about the atmosphere of the training room, but it’s conceivable that the room could have been messy yet well-lit, or clean but dimly lit. This is really two questions:

  • The room was clean.
  • The room was well-lit.

I always look for the words “and” and “or” when I’m writing or reviewing questions. If I see an “and” or an “or”, I immediately check to see if I need to split the question out into multiple questions.
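
If you author or review questions in bulk, you can even automate a first pass of this check. Here is a minimal Python sketch (my own illustration, with made-up question text, not something from the original post) that flags items containing a standalone “and” or “or” so a human can decide whether they need to be split:

    import re

    # Hypothetical question text; in practice this would come from your item bank.
    questions = [
        "The room was clean and well-lit.",
        "The instructor was knowledgeable.",
        "The exercises were relevant or engaging.",
    ]

    # Flag any question containing a standalone "and" or "or" for manual review.
    for q in questions:
        if re.search(r"\b(and|or)\b", q, flags=re.IGNORECASE):
            print(f"Possible barreling - review: {q}")

A match does not automatically mean the question is barreled; it just tells you where to look.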

The second tip is to keep your questions as short, as clear, and as concise as possible. Long and complex questions tend to confuse the respondent; they get lost along the way. If a sentence contains several commas, phrases or clauses inserted with dashes – you know, like this – or relative or dependent clauses, which are typically set off by commas and words like “which”, it may need to be broken out into several sentences, or may contain unneeded information that can be deleted. (Did you see what I did there?)

In the next few entries in this series, we’re going to take a look at some other topics involved in putting together good surveys. These will include how to construct a rating scale as well as some thoughts about the flow of the survey itself. In the meantime, here are some resources you might want to review:

“Problems with Survey Questions” by Patti J. Phillips. This covers much of what we looked at in this and the previous post, with several good examples.
“Performance-Focused Smile Sheets” by Will Thalheimer. This is an excellent commentary on writing level 2 and level 3 surveys.
“Correcting Four Types of Error in Survey Design” by Patti P. Phillips. In this blog article, Patti gives a quick run-down of coverage error, sampling error, response rate error, and measurement error.
“Getting the Truth into Workplace Surveys” by Palmer Morrel-Samuels in the February 2002 Harvard Business Review. You have to register to read the entire article, or you can purchase it for $6.95 (registration is free).

If you are interested in authoring best practices, be sure to register for the 2014 Questionmark Users Conference in San Antonio, Texas, March 4 – 7. See you there!

Writing Good Surveys, Part 2: Question Basics

Posted By Doug Peterson

In the first installment in this series, I mentioned the ASTD book, Survey Basics, by Phillips, Phillips and Aaron. The fourth chapter, “Survey Questions,” is especially good, and it’s the basis for this installment.

The first thing to consider when writing questions for your survey is whether or not the questions return the data for which you’re looking. For example, let’s say one of the objectives for your survey is to “determine the amount of time per week spent reading email.”

Which of these questions would best meet that objective?

  1. How many emails do you receive per week, on average?
  2. On average, how many hours do you spend responding to emails every week?
  3. How long does it take to read the average email?
  4. On average, how many hours do you spend reading emails every week?

All four questions are related to dealing with email, but only one pertains directly to the objective. Numbers 1 and 3 could be combined to satisfy the objective if you’re willing to assume that every email received is read – a bit of a risky assumption, in my opinion (and experience). Number 2 is close, but there is a difference between reading an email and responding to it: you may not respond to every email you read.

The next thing to consider is whether or not the question can be answered, and if so, ensuring that the question does not lead to a desired answer.

The authors give two examples in the book. The first describes a situation where the author was asked to respond to the question, “Were you satisfied with our service?” with a yes or no. He was not dissatisfied with the service he received, but he wasn’t satisfied with it, either. However, there was no middle ground, and he was unable to answer the question.

The second example involves one of the authors checking out of a hotel. When she tells the clerk that she enjoyed her stay, the clerk tells her that they rate customer satisfaction on a scale of one to ten, and asks if she would give them a ten. She felt pressured into giving the suggested response instead of feeling free to give a nine or an eight.

Another basic rule for writing survey questions is to make sure the respondent can understand the question. If they can’t understand it at all, they won’t answer or they will answer randomly (which is worse than not answering, as it is garbage data that skews your results). If they misunderstand the question, they’ll be answering a question that you didn’t ask. Remember, the question author is a subject matter expert (SME); he or she understands the big words and fancy jargon. Of course the question makes sense to the SME! But the person taking the survey is probably not an SME, which means the question needs to be written in plain language. You’re writing for the respondent, not the SME.

Even more basic than providing enough options for the respondent to use (see the “yes or no” example above) is making sure the respondent even has the knowledge to answer. This is typically a problem with “standard” surveys. For example, a standard end-of-course survey might ask if the room temperature was comfortable. While this question is appropriate for an instructor-led training class where the training department has some control over the environment, it really doesn’t apply to a self-paced, computer-based e-learning course.

Another example of a question for which the respondent would have no way of knowing the answer would be something like, “Does your manager provide monthly feedback to his/her direct reports?” How would you know? Unless you have access to your manager’s schedule and can verify that he or she met with each direct report and discussed their performance, the only question you could answer is, “Does your manager provide you with monthly feedback?” The same thing is true of questions that start off with “Do your coworkers consider…” – the respondent has no idea what his/her coworkers’ thoughts and feelings are, so only ask questions about observable behaviors.

Finally, make sure to write questions in a way that respondents are willing to answer. Asking a question such as “I routinely refuse to cooperate with my coworkers” is probably not going to get a positive response from someone who is, in fact, uncooperative. Something like “Members of my workgroup routinely cooperate with each other” is not threatening and does not make the respondent look bad, yet they can still answer with “disagree” and provide you with insights as to the work atmosphere within the group.

Here’s an example of a course evaluation survey that gives the respondent plenty of choices.

Writing Good Surveys, Part 1: Getting Started

Posted By Doug Peterson

In May of 2013 I attended the American Society of Training and Development (ASTD) conference in Dallas, TX. While there I took in Ken Phillips’ session called “Capturing Elusive Level 3 Data: The Secrets of Survey Design.” I also picked up the book “Survey Basics” by Patricia Pulliam Phillips, Jack J. Phillips, and Bruce Aaron. (Apparently there is some sort of cosmic connection between surveys and people named “Phillips”. Who knew?) Over the course of my next few blog posts, I’d like to discuss some of the things I’ve learned about surveys.

Surveys can be conducted in several ways:

  1. Self-administered
  2. Interviews
  3. Focus groups
  4. Observation

In this series, I’m going to be looking at #1 and #4. The self-administered survey is what we typically think about when we hear the word “survey” – taking an evaluation survey at the end of a training experience. Was the room temperature comfortable? Did you enjoy the training experience? Many times you hear them referred to as “smile sheets” and they relate to level 1 of the Kirkpatrick model (reaction). Questionmark excels at creating these types of surveys, and our Questionmark Live browser-based authoring tool even has a dedicated “Course Evaluation” assessment template that comes with a library of standard questions from which to select, in addition to writing questions of your own.

Surveys can also be used for Kirkpatrick level 3 evaluation – behavior. In other words, was the training applied back on the job? Many times level 3 data is derived from job statistics such as an increase in widgets produced per day or a decrease in the number of accidents reported per month. However, surveys can also be used to determine the impact of the training on job performance. Not only can the survey be taken by the learner, the survey can also take the form of an observational assessment filled out by someone else. Questionmark makes it easy to set up observational assessments – identify the observer and who they can observe, the observer logs in and specifies who he/she is observing, and the results are tied to the person being observed.

To write a good survey, it is important to understand the objectives of the survey. Define your objectives up front and then use them to drive which questions are included. If a question doesn’t pertain to one of the objectives, throw it out. The best results come from a survey that is only as long as it needs to be.

The next step is to define your target audience. The target audience of a level 1 survey is pretty obvious – it’s the people who took the training! However, level 3 surveys can be a bit trickier. Typically you would include those who participated in the training, but you may want to include others, as well. For example, if the training was around customer relations, you may want to survey some customers (internal and/or external). The learner’s peers and colleagues might be able to provide some valuable information as to how the learner is applying what was learned. The same is true about the learner’s management. In certain situations, it might also be appropriate to survey the learner’s direct reports. For example, if a manager takes leadership training, who better to survey than the people he or she is leading? The key thing is that the group being surveyed must have first-hand knowledge of the learner’s behavior.

A few more things to take into account when deciding on a target audience:

  • How disruptive or costly is the data collection process? Are you asking a lot of highly paid staff to take an hour of their time to fill out a survey? Will you have to shut down the production line or take customer representatives away from their phones to fill out the survey?
  • How credible do the results need to be? Learners tend to overinflate how much they use what they’ve learned, so if important decisions are being made based on the survey data, you may want to survey more objective sources in addition to the learners themselves.
  • What are the stakeholders expecting?

Whereas well-defined objectives define which questions are asked, the target audience defines how they are asked. Surveying the learner will typically involve more responses about feelings and impressions, especially in level 1 surveys. Surveying the learner’s colleagues, management, direct reports, and/or customers will involve questions more related to the learner’s observable behaviors.  As this series progresses, we’ll look at writing survey questions in more depth.


QR codes for surveys: A perfect fit?

Posted by Jim Farrell

Let’s face it…QR codes are everywhere. They are in magazines, in stores, and even on food boxes.

Last week when I was in a local store, I was able to do product feature and price comparisons right on my smartphone using the QR codes on the price tags. My nine-year-old daughter scans food bar codes to see their nutritional grades. (For those of you who do not know, QR codes — or Quick Response codes — are a type of barcode that can be used to encode a URL, text or other data.)
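
Generating a QR code for a survey link is straightforward. Here is a rough sketch, not tied to any particular survey tool, using Python and the third-party qrcode package; the URL below is a placeholder, not a real survey link:

    import qrcode  # pip install qrcode[pil]

    # Placeholder survey URL - substitute the link to your own survey.
    survey_url = "https://example.com/course-evaluation"

    # Encode the URL as a QR code image and save it for printing or embedding.
    img = qrcode.make(survey_url)
    img.save("survey_qr.png")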

So now you might be asking how this fits into surveys. Let me back up a bit and explain.

The hardest part of using surveys or course evaluations is getting people to complete them. For years, people have been trying to figure out how to make others want to fill out surveys. I personally avoid surveys at all cost (ironic, I know). Some universities have gone as far as withholding course credit until the student fills out a course evaluation survey, but I am not sure that is the best way to collect valid information. At the other extreme, some have just stopped trying to collect the information. I am not sure that is right either. It’s important to give people a vehicle for sharing their thoughts or feelings – and also to heed what they are telling you.

So how do we make people want to fill out surveys? I think QR codes could prove to be an effective technique for drawing people in and encouraging them to participate. We know that many people don’t fill out paper forms (which need to be rescanned anyway), and a lot of them also avoid links. But people are drawn to QR codes. The mystery of what is behind the code is enough for most people to pull out their phone and give it a scan.

So how might this apply to learning? QR codes could be put on a class syllabus, a poster board at a conference, or a webpage (yes, you can scan a QR code right off your computer screen). By asking the right questions, you can later filter results by the demographic data you collect on your surveys.
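
To make that last point concrete, here is a hypothetical example: suppose you export the responses to a CSV file that includes a demographic column and a column recording where the code was scanned. A few lines of Python with pandas would let you slice the results (the file and column names are invented for illustration):

    import pandas as pd

    # Hypothetical export of survey responses.
    responses = pd.read_csv("survey_responses.csv")

    # Compare the average overall rating by where the QR code was scanned.
    print(responses.groupby("scan_location")["overall_rating"].mean())

    # Drill into a single demographic group.
    sales = responses[responses["department"] == "Sales"]
    print(sales["overall_rating"].describe())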

So how about giving this idea a whirl? Try out my QR code survey below, and then see whether QR codes increase traffic on your own surveys.

Job Task Analysis in Questionmark

Posted by John Kleeman

Job Task Analysis (JTA) surveys are used to analyze which tasks within a job role are most important. They are often used to construct and validate certification programs, to ensure that the questions being asked are relevant to the job. Typically you survey “masters” – people already doing the job or already certified – about which tasks in their job are most important and performed most frequently, and use the results to determine which areas to ask questions on. JTAs are an important way to make certifications fair and legally defensible by ensuring that the coverage of questions matches the coverage of what is needed to do the job.

You can easily construct JTAs in Questionmark Perception; here is an example survey that asks some questions about enterprise sales people, to illustrate how to do it.

If you are making your own JTAs, I’d advise making each part of the survey a separate question, usually a Likert Scale question. In the example (see screenshot below), the two questions about whether the task is important and how often it is done are separate Likert Scale questions, formatted to appear close together. Keeping them as independent questions makes reporting much easier than combining them into a single question would.

Screenshot of Job Task Analysis Survey
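
To make the reporting point concrete, here is a hypothetical sketch of what that separation buys you. If importance and frequency are exported as separate numeric columns (say, 1–5 Likert ratings), each can be summarized on its own and then combined per task; the file and column names below are invented for illustration.

    import pandas as pd

    # Hypothetical JTA export: one row per response, with 1-5 Likert ratings.
    results = pd.read_csv("jta_results.csv")

    # Summarize importance and frequency independently for each task...
    summary = results.groupby("task")[["importance", "frequency"]].mean()

    # ...then combine them into a simple priority index to guide exam coverage.
    summary["priority"] = summary["importance"] * summary["frequency"]
    print(summary.sort_values("priority", ascending=False))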

Research Survey for Test Takers: You Can Help

Posted by Greg Pope

I am working with Dr. Bruno Zumbo, a professor at the University of British Columbia, on a research study about the beliefs of people who are waiting to take, or have taken, a certification or licensure examination.

In this initial study we want to document people’s attitudes and beliefs regarding taking these exams as well as issues in the area of certification and licensure testing. This research is designed to help certification and licensing organizations improve high-stakes exams by shedding light on test takers’ perspectives.

To complete our research, we need input from anyone who is planning to take or has already taken a certification or licensing exam. If you are a test taker, we thank you in advance for answering a 35-question survey that will take 5 to 10 minutes to complete. This is an opportunity to weigh in on important issues in the testing industry, so please take the survey! If you know other certification or licensing exam participants, we’d appreciate it if you could encourage them to take it too.

We will report on the results of our research this fall and appreciate your help!