Agree or disagree? 10 tips for better surveys — part 3

Posted by John Kleeman

This is the third and last post in my “Agree or disagree” series on writing effective attitude surveys. In the first post I explained the process survey participants go through when answering questions and the concept of satisficing – where some participants give what they think is a satisfactory answer rather than stretching themselves to give the best answer.

In the second post I shared these five tips based on research evidence on question and survey design.

Tip #1 – Avoid Agree/Disagree questions

Tip #2 – Avoid Yes/No and True/False questions

Tip #3 – Each question should address one attitude only

Tip #4 – Minimize the difficulty of answering each question

Tip #5 – Randomize the responses if order is not important

Here are five more:

Tip #6 – Pretest your survey

Just as with tests and exams, you need to pretest or pilot your survey before it goes live. Participants may interpret questions differently than you intended, and getting the language right is essential to triggering the right judgement in the participant. Here are some good pre-testing methods:

  • Get a peer or expert to review the survey.
  • Pre-test with participants and measure the response time for each question (shown in some Questionmark reports). A longer response time can point to a more confusing question; see the sketch after this list.
  • Allow participants to provide comments on questions they think are confusing.
  • Follow up with your pretesting group by asking them why they gave particular answers or what they thought you meant by your questions.
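
If you export those per-question response times from your pretest, a quick script can flag the questions that took noticeably longer than the rest. Here is a minimal Python sketch; the question IDs, timings and threshold are invented for illustration and are not a Questionmark export format.

```python
# Minimal sketch: flag potentially confusing questions from pretest response times.
# Assumes per-question timings exported as (question_id, seconds) pairs;
# the IDs and values below are invented for illustration.
from collections import defaultdict
from statistics import mean, stdev

pretest_times = [
    ("Q1", 8.2), ("Q1", 9.1), ("Q1", 7.5),
    ("Q2", 21.4), ("Q2", 25.0), ("Q2", 19.8),
    ("Q3", 10.3), ("Q3", 11.0), ("Q3", 9.7),
]

# Group timings by question and compute the mean per question.
by_question = defaultdict(list)
for question_id, seconds in pretest_times:
    by_question[question_id].append(seconds)
means = {q: mean(times) for q, times in by_question.items()}

# Flag questions whose mean time is more than one standard deviation
# above the average question time.
overall = mean(means.values())
spread = stdev(means.values())
for q, m in sorted(means.items()):
    note = "review wording" if m > overall + spread else "looks fine"
    print(f"{q}: mean {m:.1f}s -> {note}")
```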

Tip #7 – Make survey participants realize how useful the survey is

The more motivated a participant is, the more likely he or she is to answer optimally rather than just satisficing and choosing a good enough answer. To quote Professor Krosnick in his paper The Impact of Satisficing on Survey Data Quality:

“Motivation to optimize is likely to be greater among respondents who think that the survey in which they are participating is important and/or useful”

Ensure that you communicate the goal of the survey and make participants feel that completing it carefully will benefit something they believe in or value.

Tip #8 – Don’t include a “don’t know” option

Including a “don’t know” option usually does not improve the accuracy of your survey. In most cases it reduces it. To those of us used to the precision of testing and assessment, this is surprising.

Part of the reason is that providing a “don’t know” or “no opinion” option allows participants to disengage from your survey, which reduces the number of useful responses. Also, people are better at guessing or estimating than they think they are, so they will tend to choose an appropriate answer if they are not offered a “don’t know” option. See this paper by Mondak and Davis, which illustrates this in the political field.

Tip #9 – Ask questions about the recent past only

The further back in time they are asked to remember, the less accurately participants will answer your questions. We all have a tendency to “telescope” the timing of events and imagine that things happened earlier or later than they did. If you can, ask about the last week or the last month, not about the last year or further back.

Tip #10 – Trends are good

Error can creep into survey results in many ways. Participants can misunderstand the question. They can fail to recall the right information. Their judgement can be influenced by social pressures. And they are limited by the choices available. But if you use the same questions over time with a similar population, you can be pretty sure that changes over time are meaningful.

For example, if you deliver an employee attitude survey with the same questions for two years running, then changes in the results to a question (if statistically significant) probably mean a change in employee attitudes. If you can use the same or similar questions over time and can identify trends or changes in results, such data can be very trustworthy.
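
If you want a quick way to check whether such a year-over-year change is statistically significant, a two-proportion z-test is one simple option. Here is a minimal Python sketch; the counts of favourable responses are invented for illustration.

```python
# Minimal sketch: test whether a year-over-year change in the proportion of
# favourable responses to one question is statistically significant, using a
# two-proportion z-test. The counts below are invented for illustration.
from math import sqrt, erf

def two_proportion_z_test(favourable_1, n_1, favourable_2, n_2):
    p1, p2 = favourable_1 / n_1, favourable_2 / n_2
    pooled = (favourable_1 + favourable_2) / (n_1 + n_2)
    se = sqrt(pooled * (1 - pooled) * (1 / n_1 + 1 / n_2))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Year 1: 312 of 520 favourable; Year 2: 355 of 530 favourable.
z, p = two_proportion_z_test(312, 520, 355, 530)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 suggests a real change in attitude
```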

I hope you’ve found this series of articles useful.  For more information on how Questionmark can help you create, deliver and report on surveys, see www.questionmark.com. I’ll also be presenting at Questionmark’s 2016 Conference: Shaping the Future of Assessment in Miami April 12-15. Check out the conference page for more information.

Agree or disagree? 10 tips for better surveys — Part 2

Posted by John Kleeman

In my first post in this series, I explained that survey respondents go through a four-step process when they answer each question: comprehend the question, retrieve/recall the information that it requires, make a judgement on the answer and then select the response. There is a risk of error at each step. I also explained the concept of “satisficing”, where participants often give a satisfactory answer rather than an optimal one – another potential source of error.

Today, I’m offering some tips for effective online attitude survey design, based on research evidence. Following these tips should help you reduce error in your attitude surveys.

Tip #1 – Avoid Agree/Disagree questions

Although this is one of the most common question types used in surveys, you should try to avoid questions which ask participants whether they agree with a statement.

There is an effect called acquiescence bias, where some participants are more likely to agree than disagree. It seems from the research that some participants are easily influenced and so tend to agree with things easily. This seems to apply particularly to participants who are more junior or less well educated, who may tend to assume that what is asked of them might be true. For example, Krosnick and Presser state that across 10 studies, an average of 52 percent of people agreed with an assertion, while only 42 percent disagreed with its opposite. If you are interested in finding out more about this effect, see this 2010 paper by Saris, Revilla, Krosnick and Schaeffer.

Satisficing – where participants just try to give a good enough answer rather than their best answer – also increases the number of “agree” answers.

For example, do not ask a question like this:

My overall health is excellent. Do you:

  • Strongly Agree
  • Agree
  • Neither Agree nor Disagree
  • Disagree
  • Strongly Disagree

Instead, re-word it to be construct-specific:

How would you rate your health overall?

  • Excellent
  • Very good
  • Good
  • Fair
  • Bad
  • Very bad


Tip #2 – Avoid Yes/No and True/False questions

For the same reason, you should avoid Yes/No questions and True/False questions in surveys. People are more likely to answer Yes than No due to acquiescence bias.

Tip #3 – Each question should address one attitude only

Avoid double-barrelled questions that ask about more than one thing. It’s very easy to ask a question like this:

  • How satisfied are you with your pay and work conditions?

However, someone might be satisfied with their pay but dissatisfied with their work conditions, or vice versa. So make it two separate questions.

Tip #4 – Minimize the difficulty of answering each question

If a question is harder to answer, it is more likely that participants will satisfice – give a good enough answer rather than the best answer. To quote Stanford Professor Jon Krosnick, “Questionnaire designers should work hard to minimize task difficulty”. For example:

  • Use as few words as possible in questions and responses.
  • Use words that all your audience will know.
  • Where possible, ask questions about the recent past rather than the distant past, as the recent past is easier to recall.
  • Decompose complex judgement tasks into simpler ones, with a single dimension to each one.
  • Where possible, make judgements absolute rather than relative.
  • Avoid negatives. Just as in tests and exams, using negatives in your questions adds cognitive load and makes the question less likely to get an effective answer.

The less cognitive load involved in questions, the more likely you are to get accurate answers.

Tip #5 – Randomize the responses if order is not important

The order of responses can significantly influence which ones get chosen.

There is a primacy effect in surveys where participants more often choose the first response than a later one. Or if they are satisficing, they can choose the first response that seems good enough rather than the best one.

There can also be a recency effect whereby participants read through a list of choices and choose the last one they have read.

In order to avoid these effects, if your choices do not have a clear progression or some other reason for being in a particular order, randomize them. This is easy to do in Questionmark software and will remove the effect of response order on your results.
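
Questionmark can shuffle choices for you, but if you ever need to do it yourself (for example, in a home-grown survey form), the idea is simply to show each participant a freshly shuffled copy of the options. A minimal Python sketch, with made-up options that have no natural order:

```python
# Minimal sketch: give each participant a freshly shuffled copy of the options.
# Only shuffle choices with no natural order or progression.
import random

options = ["Salary", "Flexible hours", "Training opportunities", "Location"]

def options_for_participant(choices, seed=None):
    # The seed is only there to make the demo repeatable.
    rng = random.Random(seed)
    shuffled = list(choices)
    rng.shuffle(shuffled)
    return shuffled

print(options_for_participant(options, seed=1))
print(options_for_participant(options, seed=2))
```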

Here is a link to the next segment of this series: Agree or disagree? 10 tips for better surveys — part 3

High-stakes assessment: It’s not just about test takers

Posted by Lance

In my last post I spent some time defining how I think about the idea of high-stakes assessment. I also talked about how these assessments affect the people who take them, including how important they can be to a person’s ability to get or do a job.

Now I want to talk a little bit about how these assessments affect the rest of us.

The rest of us

Guess what? The rest of us are affected by the outcomes of these assessments. Did you see that coming?

But seriously, the credentials or scores that result from these assessments affect large swathes of the public. Ultimately, that’s the point of high-stakes assessment. The resulting certifications and licenses exist to protect the public. These assessments act as barriers, preventing incompetent people from practicing professions where competency really matters.

It really matters

What are some examples of “really matters”? Well, when hiring, it really matters to employers that the network techs they hire know how to configure a network securely, not that the techs just say they do. It matters to the people crossing a bridge that the engineers who designed it knew their physics. It really matters to every one of us that our doctor, dentist, nurse, or surgeon knows what they are doing when they treat us. And it really matters to society at large that we measure well the children and adults who take large-scale assessments like college entrance exams.

At the end of the day, high-stakes exams are high-stakes because in a very real way, almost all of us have a stake in their outcome.

Separating the wheat from the chaff

There are a couple of ways that high-stakes assessments do what they do. Some assessments are simply designed to measure “minimal competence,” with test takers either ending above the line (often known as “passing”) or below the line. The dreaded “fail.”

Other assessments are designed to place test takers on a continuum of ability. This type of assessment assigns scores to test takers, and the score ranges often appear odd to laypeople. For example, the SAT uses a 200–800 scale.
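
Those odd-looking ranges usually come from mapping raw scores onto a fixed reporting scale. As a rough illustration only (real testing programs use equating and more sophisticated scaling than this), here is a minimal Python sketch of a simple linear mapping onto a 200–800 scale:

```python
# Minimal sketch: a simple linear mapping from a raw score onto a 200-800
# reporting scale, rounded to the nearest 10. Purely illustrative; real
# programs use equating rather than a straight linear transformation.
def scale_score(raw, raw_max=52, scale_min=200, scale_max=800):
    fraction = raw / raw_max
    scaled = scale_min + fraction * (scale_max - scale_min)
    return int(round(scaled / 10) * 10)

for raw in (0, 26, 40, 52):
    print(f"raw {raw} -> scaled {scale_score(raw)}")
```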

Want to learn more? Hang on till next time!

Observational assessments: measuring performance in a 70+20+10 world

Posted by Jim Farrell

Informal Learning. Those two words are everywhere. You might see them trending on Twitter during a #lrnchat, dominating the agenda at a learning conference or gracing the pages of a training digest. We all know that informal learning is important, but measuring it can often be difficult. However, difficult does not mean impossible.

Remember that in the 70+20+10 model, 70 percent of learning results from on-the-job experiences and 20 percent comes from feedback and the examples set by people around us. The final 10 percent is formal training. No matter how much money an organization spends on its corporate university, 90 percent of the learning is happening outside a classroom or formal training program.

So how do we measure the 90 percent of learning that is occurring to make sure we positively affect the bottom line?

First is performance support. Eons ago, when I was an instructional designer, it was the courseware and formal learning that received most of the attention. Looking back, we missed the mark: although the projects were deemed successful, we likely did not have the impact we could have had. Performance support is the informal learning tool that saves workers time and leads to better productivity. Simple web analytics can tell you which performance support materials are searched for and used most on a daily basis.
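
As an example of how little it takes, here is a minimal Python sketch that counts the most common queries in a hypothetical search-log export; the fields and queries are made up for illustration, and any analytics export with a query column would work the same way.

```python
# Minimal sketch: count which performance support topics are searched for most,
# from a hypothetical (date, query) export. Fields and queries are illustrative.
from collections import Counter

search_log = [
    ("2016-03-01", "expense report"),
    ("2016-03-01", "vpn setup"),
    ("2016-03-02", "expense report"),
    ("2016-03-02", "expense report"),
    ("2016-03-03", "timesheet codes"),
]

for query, count in Counter(q for _, q in search_log).most_common(3):
    print(f"{query}: {count} searches")
```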

But on to what I think Questionmark does best – the 20 percent that occurs through feedback and the examples around us. Many organizations have turned to coaching and mentoring to give employees good examples and to define the competencies necessary to be a great employee.

I think most organizations are missing the boat when it comes to collecting data on this 20 percent. While coaching and mentoring are a step in the right direction, they probably aren’t yielding good analytics. Yes, organizations may use surveys and/or interviews to measure how mentoring closes performance gaps, but how do we get employees to the next level? I propose the use of observational assessments. By definition, observational assessments enable measurement of participants’ behavior, skills and abilities in ways not possible via traditional assessment.

By having a mentor observe someone perform while applying a rubric to that performance, you gain not only performance analytics but also the ability to compare an individual to others or to agreed benchmarks for a task. Also, feedback collected during the assessment can be displayed in a coaching report for later debriefing and learning. And to me, that is just the beginning.
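
To make that concrete, here is a minimal Python sketch of recording rubric ratings from one observation and comparing them to agreed benchmarks; the competencies, the 1–5 scale and the benchmark values are invented for illustration and are not a Questionmark data model.

```python
# Minimal sketch: compare a mentor's rubric ratings from one observation
# against agreed benchmarks. Competencies, the 1-5 scale and the benchmark
# values are invented for illustration.
benchmarks = {
    "Prepares equipment": 4,
    "Follows safety procedure": 5,
    "Communicates with customer": 4,
}

observation = {
    "Prepares equipment": 4,
    "Follows safety procedure": 3,
    "Communicates with customer": 5,
}

for competency, benchmark in benchmarks.items():
    rating = observation[competency]
    gap = benchmark - rating
    status = "meets benchmark" if gap <= 0 else f"gap of {gap} point(s)"
    print(f"{competency}: rated {rating} vs benchmark {benchmark} -> {status}")
```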

Developing an observational assessment should go beyond the tasks someone has to do to perform their day-to-day work. It should embody the competencies necessary to solve business problems. Observational assessments allow organizations to capture performance data and measure the competencies necessary to push the organization to be successful.

If you would like more information about observational assessments, click here.