Agree or disagree? 10 tips for better surveys – Part 1

Posted by John Kleeman

Writing good survey questions is different from writing good test questions. This short series of blog posts shares some pointers for writing questions for attitude surveys, based on research evidence and my own experience. It should help anyone who creates course evaluation surveys or other surveys that measure opinions, beliefs or attitudes.

I’d like to lay the groundwork by posing some essential questions:

How consistent is an attitude?  

We would like to think that an attitude is an enduring positive or negative feeling about a person, object or issue. We wish that attitudes were stable and retrievable so that a questionnaire could easily measure them. But on many topics, your survey participants may have fluid attitudes, which makes their answers easy to influence by how you ask the questions.

In a well-reported 1980s experiment, Schuman and Presser asked different questions to two randomly selected groups of participants.

One group was asked this:

“Do you think the United States should forbid public speeches in favor of communism?”

The other group was asked a slightly different question:

“Do you think the United States should allow public speeches in favor of communism?”

The researchers found that 39% thought that speeches should be forbidden but 56% thought that such speeches should not be allowed. The difference in wording between “forbid” and “not allow” made a large difference in the attitude measured.

This demonstrates that how you phrase a question about attitudes can influence people’s answers. The risk is that survey results may give imperfect measures of the underlying attitude. The purpose of good survey design is to get as accurate a measure as you can.
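If you run a split-ballot experiment like this on your own surveys, a chi-square test will tell you whether a wording gap is bigger than chance. Here is a minimal sketch in Python: the 39% and 56% figures come from the experiment above, but the group size of 500 per wording is a hypothetical number chosen purely for illustration.

```python
# Rough significance check on a wording gap between two randomly split groups.
# The 39%/56% figures are from the experiment above; n=500 per group is
# a hypothetical sample size for illustration only.
from scipy.stats import chi2_contingency

n = 500  # hypothetical respondents per wording group

# Rows: wording group; columns: [would restrict speeches, would not restrict]
forbid_restrict = round(0.39 * n)      # 39% said "forbid"
not_allow_restrict = round(0.56 * n)   # 56% said "should not allow"
table = [
    [forbid_restrict, n - forbid_restrict],
    [not_allow_restrict, n - not_allow_restrict],
]

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, p = {p:.2g}")  # a small p means the gap is unlikely to be chance
```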

How do participants answer a question?

In order to understand this, it’s helpful to consider the process participants go through when answering a question.

[Diagram: Comprehend the question → Recall/retrieve the information → Make a judgement → Select a response]

As shown in the diagram above, the first thing a participant has to do is to comprehend a question and understand its meaning. If he or she understands the question differently than you intended, this will lead to error.

Next, the participant must recall or retrieve the information the question asks about. If the event in question is recent, this may be simple, but if there is any time delay or complexity, the respondent may fail to recall something, misremember it, or remember it only partially.

Then the participant must make a judgement, which can be influenced by context. For example, earlier questions can set a context that influences judgement in later questions. Judgement can also be influenced by social desirability: “This is what I’m expected to answer, so I’ll give that answer, even though it’s not fully the case.”

And last, the participant must select a response. Except for open questions, which are difficult to analyze quantitatively, the response will be constrained and perhaps adjusted to the options that you provide.
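Context effects at the judgement step are one reason many survey tools let you randomize question order across participants, so no single question consistently sets the context for those that follow. Here is a minimal sketch of the idea; the question texts and the per-participant seeding scheme are hypothetical, just to show the technique.

```python
import random

# Hypothetical course evaluation items.
questions = [
    "How satisfied were you with the course content?",
    "How satisfied were you with the instructor?",
    "How satisfied were you with the course overall?",
]

def order_for_participant(questions, participant_id):
    """Return the questions in a random order that is stable per participant."""
    rng = random.Random(participant_id)  # seed so each participant sees a fixed order
    shuffled = questions[:]
    rng.shuffle(shuffled)
    return shuffled

print(order_for_participant(questions, participant_id=42))
```

Randomizing order does not remove context effects for any one respondent, but it stops them from biasing the aggregate results in one systematic direction.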

Do participants give the best answer they can?

In an ideal world, all your participants will go carefully through each step and give you the best answer they can.

But unlike in tests and exams, where participants have a strong motivation to answer optimally, in a survey, participants often take shortcuts or give an answer they think is satisfactory rather than taking the time and effort to give the best answer.

This effect is called satisficing. It can involve skipping steps 2 and 3 and simply selecting a response that seems to make sense, or rushing through or shortcutting any of the steps.

Satisficing increases when questions are difficult to answer and when participants lack the motivation to answer well. Obviously, satisficing can have a big impact on the quality of the survey results.
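One visible symptom of satisficing is straight-lining: giving the same rating to every item in a grid. As a minimal sketch of a data-quality check, assuming responses are stored as lists of ratings per participant (the data below is hypothetical):

```python
# Flag participants who gave an identical rating to every item in a grid,
# a common symptom of satisficing. Hypothetical 1-5 ratings per respondent.
responses = {
    "p001": [4, 4, 4, 4, 4],
    "p002": [5, 3, 4, 2, 4],
    "p003": [3, 3, 3, 3, 3],
}

straight_liners = [pid for pid, ratings in responses.items()
                   if len(set(ratings)) == 1]
print(straight_liners)  # ['p001', 'p003']
```

A flagged respondent is not proof of satisficing, but a high rate of straight-lining is a useful warning sign about your questions or your participants’ motivation.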

How can you prevent this? In my next post in this series, I’ll share some tips for good practice in attitude questionnaire design, based on research evidence. I will discuss whether asking Agree/Disagree-style questions is good practice.

In the meantime, if you are interested in some other survey advice, a good academic reference is the chapter on Question and Questionnaire Design by Krosnick and Presser. Click here for a previous set of blog articles about writing surveys.
