How many errors can you spot in this survey question?

Posted by John Kleeman

Tests and surveys are very different. In a test, you look to measure participant knowledge or skill; you know what answer you are looking for, and generally participants are motivated to answer well. In a survey, you look to measure participant attitude or recollection; you don’t know what answer you are looking for, and participants may be uninterested.

Writing good surveys is an important skill. If you’d like to learn how to write good surveys of opinion and attitude in training, learning, compliance and certification, based on research evidence, you might be interested in a webinar I gave titled, “Designing Effective Surveys.” Click HERE for the webinar recording and slides.

In the meantime, here’s a sample survey question. How many errors can you spot in the question?

The material and presentation qualty at Questionmark webinars is always excellent.

Strongly Agree / Agree / Slightly agree / Neither agree nor disagree / Disagree / Strongly disagree

There are quite a few errors. Try to count them before you look at my explanation below!

I count seven errors:

  1. I am sure you got the misspelling of “quality”. If you misspell something in a survey question, it indicates to the participant that you haven’t taken time and trouble over writing your survey, so there is little incentive for them to spend time and trouble answering.
  2. It’s not usually sensible to use the word “always” in a survey question. Some participants may take the statement literally, and it’s much more likely that webinars are usually excellent than that every single one is excellent.
  3. The question is double-barreled. It’s asking about material AND presentation quality. They might be different. This really should be two questions to get a consistent answer.
  4. The “Agree” in “Strongly Agree” is capitalized but not in other places, e.g. “Slightly agree”. Capitalization should be consistent across every part of the scale.

You can see these four errors highlighted below.

[Image: red marking corresponding to the four errors above]

Is that all the errors? I count three more, making a total of seven:

  5. The scale should be balanced. Why is there a “Slightly agree” but not a “Slightly disagree”?
  6. This is a leading or “loaded” question, not a neutral one: it encourages a positive answer. If you genuinely want to get people’s opinion in a survey question, you need to ask it without encouraging the participant to answer a particular way.
  7. Lastly, any agree/disagree question has acquiescence bias. Research evidence suggests that some participants are more likely to agree when answering survey questions, particularly those who are more junior or less educated and who may tend to assume that what is put to them must be true. It would be better to word this question to ask people to rate the webinars rather than agree with a statement about them.

Did you get all of these? I hope you enjoyed this little exercise. If you did, I explain more about this and other good survey practice in our Designing Effective Surveys webinar; click HERE for the webinar recording and slides.

How many questions should you have in a web survey?

Posted by John Kleeman

Web surveys offer a quick, effective means of gathering data and measuring attitudes that can help you make decisions and improvements. But how many questions should you ask? What is the best length for a web survey? Here are some tips:

Want to learn more about survey techniques? I will be presenting a session on harnessing the power of your surveys at the 2016 Questionmark Conference in Miami April 12-15.

Research evidence

The best survey length depends on the survey purpose and audience, but here are some useful research findings:

  • The market research industry has studied ideal survey length in detail. In such surveys, participants are often panel members or people with time who can be motivated or incentivized to answer longish surveys. A debated but often-quoted rule of thumb in market research is that 20 minutes is about as long as a typical person can concentrate on a survey, so surveys should be kept within that limit.
  • In typical web surveys, dropout rates increase with the number of questions. For example, one controlled study found a dropout rate of 29 percent on a 42-question web survey, compared with 23 percent on a 20-question one.
  • In long web surveys, participants often spend less time answering later questions, which can mean less accurate answers. This is an example of satisficing – participants not thinking hard about how to answer but just giving an answer. SurveyMonkey analyzed 100,000 real-world web surveys and found that for surveys of 3 – 10 questions, participants spent an average of 30 seconds answering each question, whereas for surveys of 26 – 30 questions, they spent an average of 19 seconds (see the rough calculation after this list). So a longer survey may get lower-quality answers.
  • Task difficulty also matters; shorter isn’t always better. Research (for example here) identifies that difficulty matters as well as length. Participants may abandon a survey when faced with questions that are too hard, even though they would be willing to fill in a longer, less challenging survey.
  • Mobile users often have a reduced attention span, and it can take longer to answer questions on a smartphone than on a PC. One experienced commentator suggests that surveys take 20 – 30 percent longer on a mobile device.
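
As a rough back-of-the-envelope illustration of the figures above (a sketch only – per-question times for other survey lengths are unknown, so they are not modelled), even when participants answer each question faster on a long survey, the total time still grows considerably:

```python
# Rough estimate of total completion time, using the per-question averages
# quoted above: about 30 seconds per question on a short (3-10 question)
# survey and about 19 seconds per question on a long (26-30 question) one.

def estimated_minutes(num_questions: int, seconds_per_question: float) -> float:
    """Estimated total completion time in minutes."""
    return num_questions * seconds_per_question / 60

print(estimated_minutes(10, 30))  # 5.0  -> about 5 minutes for a 10-question survey
print(estimated_minutes(30, 19))  # 9.5  -> nearly 10 minutes for a 30-question survey
```

Even at the faster 19-second pace, the 30-question survey takes nearly twice as long overall, which is where dropout and satisficing tend to creep in.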

So how long should your survey be?

There is no single right answer to this question, but here are some tips:

[Image: editing a jump block – choosing to skip to the end of the assessment if the previous question was not applicable]

1. A key factor is the engagement of your participants. You can risk a longer survey if your participants are motivated. For example, participants who have just undergone a three-day course will be more motivated to fill in a longer survey about it than someone who has just done a short e-learning session.

2. Consider using branching to skip any unneeded questions (see the sketch after these tips for the general idea).

3. Ask concise questions without lengthy explanations; this will reduce the apparent length of the survey.

4. Pretest your survey to try to remove difficult or confusing questions – a longer, clearer survey is better than a shorter, confusing one.

5. If your survey covers very different topics, consider breaking it down into two or more shorter surveys.

6. Make sure results for each question are actionable. There is no point asking questions if you aren’t going to take action on what you discover. Participants may disengage if their answers don’t seem likely to be useful.

7. Look at each question and check that you really need it. As your survey length increases, your response rate will drop and the quality of the answers may fall. Work out for each question whether you need the data badly enough to live with the drop in quality. Ask as few questions as you need – some successful surveys (e.g. Net Promoter Score) ask just one question. Very often an effective and actionable survey can be ten questions or fewer.
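
To make tip 2 concrete, here is a minimal, hypothetical sketch of skip logic in general. The question IDs, answers and routing rules are invented for illustration; this shows the general idea behind features such as jump blocks, not Questionmark’s actual implementation.

```python
# Hypothetical skip logic: route a participant past questions that don't apply.

def next_question(current_id: str, answer: str) -> str:
    # Branching rules (invented): if the webinar wasn't attended,
    # jump past the webinar-feedback questions.
    rules = {
        ("q1_attended_webinar", "No"): "q5_general_comments",
    }
    default_order = ["q1_attended_webinar", "q2_content_rating",
                     "q3_presenter_rating", "q4_technical_issues",
                     "q5_general_comments"]
    if (current_id, answer) in rules:
        return rules[(current_id, answer)]
    idx = default_order.index(current_id)
    return default_order[idx + 1] if idx + 1 < len(default_order) else "end"

print(next_question("q1_attended_webinar", "No"))   # q5_general_comments
print(next_question("q1_attended_webinar", "Yes"))  # q2_content_rating
```

A participant who answers “No” to the first question skips straight to the final comments question instead of wading through items that don’t apply to them.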

Want to learn more about survey techniques? I will be presenting a session on harnessing the power of your surveys at the 2016 Questionmark Conference in Miami April 12-15. There’s only 1 week left to take advantage of our early-bird discount. Sign up before January 21 and save $200! I look forward to seeing you there!


Q&A: Pre-hire, new-hire and ongoing assessments at Canon

Holly Groder

Posted by Julie Delazyn

Holly Groder and Mark Antonucci are training developers for Canon Information Technology Services, Inc. (Canon ITS). During their case study presentation at the Questionmark 2015 Users Conference in Napa Valley March 10-13, they will talk about Leveraging Questionmark’s Reports and Analytics Tools for Deeper Insight.

Their session will explore Canon’s use of assessments in hiring, training, continuing job skills assessment and company-wide information gathering via surveys.

I asked them recently about their case study:

Why did you start using Questionmark? 

The primary reason for seeking a new assessment tool was our desire to collect more information from our assessments, more quickly. Questionmark offered the flexibility of web-based question creation and built-in reports. Questionmark also offered the ability to add jump blocks and a variety of templates. The survey capabilities were just a bonus for us. We were able to streamline our survey process to one point of contact and eliminate an additional software program.

What kinds of assessments do you use?

Mark Antonucci

The principal function is split among four business needs: pre-hire employment assessments, new-hire or cross-training assessments, continuing job knowledge assessments, and business information gathering (surveys).

How are you using those tools?

First, potential employees are required to participate in a technical knowledge assessment prior to an offer of employment. Once employment has been offered and accepted, the new employees are assessed throughout the new-hire training period. Annually, all call center agents participate in a job skills assessment unique to their department. And finally, all employees participate in various surveys ranging from interest in community events to feedback on peer performance.

What are you looking forward to at the conference?

We are interested in best practices, insight into psychometrics, and, most important, networking with other users.

Thank you Holly and Mark for taking time out of your busy schedules to discuss your session with us!

***

If you have not already done so, you still have a chance to attend this important learning event. Click here to register.

An easier approach to job task analysis: Q&A

Posted by Julie Delazyn

Part of the assessment development process is understanding what needs to be tested. When you are testing what someone needs to know in order for them to do their job well, subject matter experts can help you harvest evidence for your test items by observing people at work. That traditionally manual process can take a lot of time and money.

Questionmark’s new job task analysis (JTA) capabilities enable SMEs to harvest information straight from the person doing the job. These tools also offer an easier way to see the frequency, importance, difficulty and applicability of a task in order to know if it’s something that needs to be included in an assessment.

Now that JTA question authoring, assessment creation and reporting are available to users of Questionmark OnDemand and Questionmark Perception 5.7, I wanted to understand what makes this special and important. Questionmark Product Manager Jim Farrell, who has been working on the JTA question since its conception, was kind enough to speak to me about its value, why it was created, and how it can now benefit our customers.

Here is a snippet of our conversation:

So … first things first … what exactly IS job task analysis and how would our customers benefit from using it?

Job task analysis (JTA) is a survey that you send out with a list of tasks, which are broken down into dimensions. Those dimensions are typically difficulty, importance, frequency, and applicability. You want to find out things like this from the people who fill out the survey: Do they find the task difficult? Do they deem it important? How frequently do they do it? When you correlate all this data, you’ll quickly see the tasks that are most important to test on and collect information about.

We have a JTA question type in Questionmark Live where you can either build your task list and your dimensions by hand or import your tasks through a simple import process – so if you have a spreadsheet with all of your tasks, you can easily import it. You would then add those to a survey and send it out to collect information. We also have two JTA reports that allow you to break down results by an individual dimension (for example, just the difficulty of all the tasks) or look at a summary view of all of your tasks and all the dimensions at one time, as a snapshot.
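
As a rough illustration of the kind of correlation Jim describes, here is a minimal sketch with invented task names, ratings and scoring formula (it is not Questionmark’s actual JTA reporting): average each dimension per task, then combine importance and frequency into a simple score to see which tasks most need test coverage.

```python
from statistics import mean

# Hypothetical SME ratings (1-5 scale) for each task, per dimension.
responses = {
    "Restart the call-routing service": {
        "difficulty": [4, 5, 4], "importance": [5, 5, 4], "frequency": [2, 3, 2],
    },
    "Log a customer contact": {
        "difficulty": [1, 2, 1], "importance": [4, 4, 5], "frequency": [5, 5, 5],
    },
}

for task, dims in responses.items():
    averages = {dim: round(mean(scores), 2) for dim, scores in dims.items()}
    # One simple (assumed) combination: importance x frequency drives coverage.
    criticality = round(averages["importance"] * averages["frequency"], 1)
    print(task, averages, criticality)
```

In this toy example, the frequently performed, important task scores highest, so it would deserve more attention when deciding what to assess.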

That sounds very interesting and easy to use! I’m interested in how this question type actually came to be.

We initially developed the job task analysis survey for the US Navy. Prior to this, trainers would have to travel with paper and clipboards to submarines, battleships and aircraft carriers and watch sailors and others in the Navy do their jobs. We developed the JTA survey to help them be more efficient and to collect this data more easily and far more quickly than they could before.

What do you think is most valuable and exciting about JTA?

To me, the value comes in the ease of creating the questions and sending them out. And I am probably most excited for our customers. Most customers probably harvest information with paper and clipboard, walking around and watching people do their jobs. That’s a very expensive and time-consuming task, so by being able to send this survey out directly to subject matter experts, you’re getting more authentic data because you are getting it right from the SMEs rather than from someone observing the behavior.

 

It was fascinating for me to understand how JTA was created and how it works … Do you find this kind of question type interesting? How do you see yourself using it? Please share your thoughts below!

Writing Good Surveys, Part 5: Finishing Up on Response Scales

Posted by Doug Peterson

If you have not already seen part 4 of this series, I’d recommend reading what it has to say about the number of responses and the direction of response scales as an introduction to today’s discussion.

To label or not to label, that is the question (apologies to Mr. Shakespeare). In his Harvard Business Review article, “Getting the Truth into Workplace Surveys,” Palmer Morrel-Samuels presents the following example:

[Image: an example rating scale in which every choice is labeled with a phrase such as “meets expectations” or “far exceeds expectations”]

Mr. Morrel-Samuels’ position is that the use of words or phrases to label the choices is to be avoided because the labels may mean different things to different people. What I consider to be exceeding expectations may only just meet expectations according to someone else. And how far is “far” when someone far exceeds expectations? Is it a great deal more than “meets expectations” and a little bit more than “exceeds expectations,” or is it a great deal more than “exceeds expectations?” Because of this ambiguity, Mr. Morrel-Samuels recommends labeling only the first and last options with words, and using numbers to label every option, as shown here:

[Image: a numeric response scale with only the end points labeled, “never” and “always”]

The idea behind this approach is that “never” and “always” should mean the same thing to every respondent, and that the use of numbers indicates an equal difference between each choice.

However, a quick Googling of “survey response scales” reveals that many survey designers recommend just the opposite – that scale choices should all be labeled! Their position is that numbers have no meaning on their own and that you’re putting more of a cognitive load on the respondent by forcing them to determine the meaning of “5” versus “6” instead of providing the meaning with a label.

I believe that both sides of the argument have valid points. My personal recommendation is to label each choice, but to take great care to construct labels that are clear and concise. I believe this is also a situation where you must take into account the average respondent – a group of scientists may be quite comfortable with numeric labels, while the average person on the street would probably respond better to textual labels.

Another possibility is to avoid the problem altogether by staying away from opinion-based answers whenever possible. Instead, look for opportunities to measure frequency. For example:

I ride my bicycle to work:

[Image: a frequency scale whose extremes are well-defined but whose middle choices are vague]

In this example, the extremes are well-defined, but everything in the middle is up to the individual’s definition of frequency. This item might work better like this:

On average, I ride my bicycle to work:

[Image: a frequency scale in which each choice specifies a concrete frequency]

 Now there is no ambiguity among the choices.

A few more things to think about when constructing your response scales:

  • Space the choices evenly. Doing so provides visual reinforcement that there is an equal amount of difference between the choices.
  • If there is any possibility that the respondent may not know the answer or may not have an opinion, provide a “not applicable” choice. Remember, this is different from a “neutral” choice in the middle of the scale. The “not applicable” choice should be different in appearance – for example, a box instead of a circle, with greater space between it and the previous choice.
  • If you do use numbers in your choice labels, number them from low to high going left to right. That’s how we’re used to seeing them, and we tend to associate low numbers with “bad” and high numbers with “good” when asked to rate something. (See part 4 in this series for a discussion of going from negative to positive responses.) Obviously, if you’re dealing with a right-to-left language (e.g., Arabic or Hebrew), just the opposite is true.
  • When possible, use the same term in your range of choices. For example, go from “not at all shy” to “very shy” instead of “brave” to “shy”. Using two different terms hearkens back to the problem of different people having different definitions for those terms.

Be sure to stay tuned for the next installment in this series. In part 6, we’ll take a look at putting the entire survey together – some “form and flow” best practices. And if you enjoy learning about things like putting together good surveys and writing good assessment items, you should really think about attending our European Users Conference or our North American Users Conference. Both conferences are great opportunities to learn from Questionmark employees as well as fellow Questionmark customers!

Top 10 uses of assessments for compliance

Posted by Julie Delazyn

We recently announced a webinar on September 18th about why it’s good to use assessments for compliance.

Today I’d like to focus on how to use them, particularly within financial services organizations – for whom mitigating the risk of non-compliance is essential.

You can find out more about these in a complimentary white paper that highlights good practices in using assessments for regulatory compliance: The Role of Assessments in Mitigating Risk for Financial Services Organizations. But here, for quick reference, are ten of the most useful applications of assessment in a compliance program:

1) Internal exams — Internal competency exams are the most commonly used assessments in financial services.

2) Knowledge checks — It’s common to give knowledge checks or post-course tests (also called Level 2s) immediately after training to ensure that the training has been understood and to help reduce forgetting. These assessments confirm learning and document understanding.

3) Needs analysis / diagnostic tests — These tests measure employees’ current skills in topics and help drive decisions on development topics. They can be used to allow employees to test out when it’s clear they already understand a particular subject.

4) Observational assessments — When checking practical skills, it’s common to have an observer monitor employees to see if they are following correct procedures. A key advantage of an observational assessment is that it measures behavior, not just knowledge. Using mobile devices for these assessments streamlines the process.

5) Course evaluation surveys — “Level 1” or “smile sheet” surveys let you check employee reaction following training. They are a key step in evaluating training effectiveness. In the compliance field, you can use them to gather qualitative information on topics such as how well policies are applied in the field.

6) Employee attitude surveys — Commonly used by HR for measuring employee satisfaction, these surveys can also be used to determine attitudes about ethical and cultural issues.

7) Job task analysis surveys — How do you know that your competency assessments are valid and that they are addressing what is really needed for competence in a job role? A job task analysis (JTA) survey asks people who are experts in a job how important each task is for the job role and how often it is done. Analysis of JTA data lets you weight the number of questions associated with topics and tasks so that a competency test fairly measures the importance of different elements of a job role (a worked sketch follows this list).

8) Practice tests — Practice tests often use questions that are retired from the exam question pool but remain valid. Practice tests are usually accompanied by question and topic feedback. As well as allowing candidates to assess their further study needs, practice tests give candidates experience with the technology and user interface before they take a real exam.

9) Formative quizzes — These are the quizzes we are all familiar with: given during learning to inform instructors and learners about whether the material has been understood or needs deeper instruction, they diagnose misconceptions and also help reduce forgetting. They provide key evidence that helps instructors vary the pace of learning. Computerized formative quizzes are especially useful in remote or e-learning settings where an instructor cannot interact face-to-face with learners.

10) 360-degree assessments — This kind of assessment solicits opinions about an employee’s competencies from his or her superiors, direct reports and peers. It will usually cover job-specific competencies as well as general competencies such as integrity and communication skills. In compliance, such surveys can help you identify issues in people’s behavior and competencies that need review.
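
To illustrate the weighting idea mentioned under job task analysis surveys (item 7), here is a minimal sketch under assumed ratings, not a prescribed method: allocate a test blueprint’s question count to topics in proportion to a combined importance-times-frequency rating from the JTA survey.

```python
# A minimal sketch with assumed numbers: allocate a 40-question competency
# test across topics in proportion to importance x frequency ratings
# gathered from a JTA survey. Topic names and ratings are illustrative.

topic_weights = {
    "Account opening":    5 * 4,   # importance 5, frequency 4
    "KYC checks":         5 * 5,   # importance 5, frequency 5
    "Complaint handling": 4 * 2,   # importance 4, frequency 2
}

total_questions = 40
total_weight = sum(topic_weights.values())

blueprint = {topic: round(total_questions * weight / total_weight)
             for topic, weight in topic_weights.items()}
print(blueprint)
# {'Account opening': 15, 'KYC checks': 19, 'Complaint handling': 6}
```

Topics that SMEs rate as more important and more frequently performed end up with proportionally more questions in the blueprint.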

Click here for details and registration for the webinar, 7 Reasons to Use Online Assessments for Compliance.

You can download the white paper, The Role of Assessments in Mitigating Risk for Financial Services Organizations, here.