Getting the results you need from surveys

Posted by Brian McNamara

A survey is only as good as the results you get from it. That’s why it’s important to plan survey forms carefully so that they yield accurate, valid data that can be analyzed to answer the questions you and your stakeholders are asking.

This article looks at a few general tips on identifying the information you want to capture, writing survey questions, structuring surveys and planning ahead for how you or your stakeholders will want to analyze data.

1. Provide a brief introduction to the survey that lets the respondents know the:

  • Purpose of the survey – why do you want the respondents’ opinions?
  • Length of the survey (Number of questions? How long will it take to complete?)
  • Closing date for survey responses

Tip: It also makes sense to include this information in the initial invitation to help set expectations and boost response rates.

2. Keep the survey short and sweet (ask only the minimum number of questions required); the longer the survey, the more likely respondents are to abandon it or refuse to participate.

3. Avoid ambiguity in how your questions are worded; be as direct as possible.

4. Within the survey form, let respondents know how much of the assessment remains – built-in progress bars (available in most of Questionmark’s standard question-by-question assessment templates) can help here.

5. Consider the flow of the assessment. Ideally your survey should group similar types of questions together. For example, in a course evaluation survey, you might ask two or three questions about the course content, then questions about the venue, and then questions about an instructor.

6. Avoid the potential for confusing respondents by keeping your Likert scale questions consistent where possible. For example, don’t follow a question that uses a positive-to-negative scale (e.g. “Strongly Agree” to “Strongly Disagree”) with a question that uses a negative-to-positive scale (e.g. “Very Dissatisfied” to “Very Satisfied”).

7. Make it easy for respondents to answer surveys via a wide variety of devices and browsers. Check out previous blog articles on this topic: Tips for making your assessments BYOD-friendly.

8. Consider what respondent demographics and other information you may wish to use for filtering and/or comparing your survey results. For example, in a typical course evaluation, you might be looking to capture information such as:

  • Course name
  • Instructor name
  • Location/Venue
  • Date (or range of dates)

Questionmark provides different options for capturing demographic data into “special fields” that can be used in Questionmark’s built-in survey and course evaluation reports for filtering and comparison. Likewise, this demographic data can be exported along with the survey results to ASCII or Microsoft Excel format if you prefer to use third-party tools for additional analysis.
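If you go the third-party route, the analysis itself can be quite simple. Below is a rough sketch (in Python, using pandas) that loads an exported CSV of course evaluation results and compares average ratings by venue. The file name and column names are invented for illustration and won’t match your actual export layout.

```python
import pandas as pd

# Load a hypothetical CSV export of survey results.
# The file name and column names below are illustrative only.
results = pd.read_csv("course_evaluation_export.csv")

# Filter to one course and compare average ratings by venue.
photoshop = results[results["Course"] == "Photoshop Basics"]
by_location = photoshop.groupby("Location")["Rating"].agg(["count", "mean"])

print(by_location.sort_values("mean", ascending=False))
```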

9. Consider how you wish to capture demographic information.

  • Easiest way: you can ask a question! In Questionmark assessments, you can designate certain questions as “demographic questions” so their results are saved to “special fields” used in the reporting process. Typically you would use a multiple choice and/or drop-down question type to ask for such information. For example, if you were surveying a group of respondents who attended a “Photoshop Basics” course in three different cities, you might ask a drop-down or multiple choice question listing those three cities to capture this data.
  • Embedding demographic data within assessment URLs: In some cases, you might already have certain types of demographic information on hand. For example, if you are emailing an invitation only to London respondents of the “Photoshop Basics” course, then you can embed this information as a parameter of a Questionmark assessment URL – it will be one less question you’ll need to ask your respondents, and a sure-fire way to capture accurate location demographics with the survey results! A minimal sketch of building such an invitation URL follows below.
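Here is a minimal sketch of how you might build such an invitation URL. The base URL and parameter names are placeholders rather than Questionmark’s actual syntax, so check the product documentation for the exact way to pass demographic values into special fields.

```python
from urllib.parse import urlencode

# Hypothetical values -- the base URL and parameter names are
# placeholders, not Questionmark's actual URL syntax.
base_url = "https://ondemand.example.com/assessment"
params = {
    "session": "photoshop-basics",  # invented identifier
    "location": "London",           # demographic value known in advance
}

invitation_url = f"{base_url}?{urlencode(params)}"
print(invitation_url)
# https://ondemand.example.com/assessment?session=photoshop-basics&location=London
```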

If you are looking for an easy way to rapidly create surveys and course evaluations, check out Questionmark Live – click here. And for more information about Questionmark’s survey and course evaluation reporting tools, click here.

Good practice from PwC in testing out of training

Posted by John Kleeman

I attended an excellent session at the Questionmark Users Conference in Baltimore by Steve Torkel and John LoBianco of PwC and would like to share some of their ideas on building diagnostic assessments.

PwC, like many organizations, creates tests that allow participants to “test out” of training if they pass. Essentially, if you already know the material being taught, then you don’t need to spend time in the training. So as shown in the diagram below – if you pass the test, you skip training and if you don’t, you attend it.

[Diagram: participants who pass the test-out skip the training; those who fail attend it]

The key advantage of this approach is that you save time when people don’t have to attend the training that they don’t need. Time is money for most organizations, and saving time is an important benefit.

Suppose, for example, you have 1,000 people who need to take some training that lasts 2 hours. This is 2,000 hours of people’s time. Now, suppose you can give a 20-minute test that 25% of people pass and therefore skip the training. The total time taken is 333 hours for the test and 1,500 hours for the training, which adds up to 1,833 hours. So having one-fourth of the test takers skip the training saves about 8% of the time that would have been required for everyone to attend it.
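To make that arithmetic explicit, here is the same calculation written out as a small snippet; all of the figures come straight from the example above.

```python
# Worked example: time saved when 25% of people test out of training.
people = 1000
training_hours = 2.0      # hours of training per person
test_hours = 20 / 60      # 20-minute test-out
pass_rate = 0.25          # share of people who pass and skip training

baseline = people * training_hours                          # 2,000 hours
test_time = people * test_hours                             # ~333 hours
training_time = people * (1 - pass_rate) * training_hours   # 1,500 hours
total = test_time + training_time                           # ~1,833 hours

savings = (baseline - total) / baseline
print(f"Total time: {total:.0f} hours, saving {savings:.0%}")  # saving ~8%
```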

In addition to saving time, using diagnostic tests in this way helps people who attend training courses focus their attention on areas they don’t know well and be more receptive to the training.

Some good practices that PwC shared for building such tests are:

  • Blueprint the test, ensuring that all important topics are covered in the usual way
  • Use item analysis to identify and remove poorly performing items and to calibrate question difficulty (a minimal item-analysis sketch follows this list)
  • Make the test-out at least as difficult as the assessment at the end of the training course. In fact, PwC makes it more difficult
  • Make the test-out optional. If someone wants to skip it and just do the training, let them.
  • Tell people that if they don’t know the answers to questions, they can just skip them or finish the test early – there are no consequences for doing badly on the test
  • Only allow a single attempt. If someone fails the test, they must do the training
  • Pilot the test items well – PwC finds it useful to pilot questions using the comments facility in Questionmark
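As a quick illustration of the item-analysis step above, here is a small sketch that computes item difficulty (proportion correct) and a rough discrimination index (item-total correlation) from a toy response matrix. The data is invented and the calculation is generic, not specific to Questionmark’s reports.

```python
import numpy as np

# Toy response matrix: rows are test takers, columns are items,
# 1 = correct, 0 = incorrect. Invented data for illustration.
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
])

difficulty = responses.mean(axis=0)   # proportion correct per item
totals = responses.sum(axis=1)        # each taker's total score

# Item-total correlation as a rough discrimination index.
discrimination = [
    np.corrcoef(responses[:, i], totals)[0, 1]
    for i in range(responses.shape[1])
]

for i, (p, d) in enumerate(zip(difficulty, discrimination), start=1):
    print(f"Item {i}: difficulty={p:.2f}, discrimination={d:.2f}")
```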

PwC has also introduced an innovative strategy for such tests, which they call a “half-time strategy”. This makes the process more efficient by allowing weaker test takers to finish the test sooner. I’ll explain the half-time strategy in a follow-up article soon.

Standard Setting – How Much Does the Ox Weigh?

Posted by Austin Fossey

At the Questionmark 2013 Users Conference, I had an enjoyable debate with one of our clients about the merits and pitfalls of the assumptions underlying standard setting.

We tend to use methods like Angoff or the Bookmark Method to set standards for high-stakes assessments, and we treat the resulting cut scores as fact, but how can we be sure that the results of the standard setting reflect reality?

In his book, The Wisdom of Crowds, James Surowiecki recounts a story about Sir Francis Galton visiting a fair in 1906. Galton observed a game where people could guess the weight of an ox, and whoever was closest would win a prize.

Because guessing the weight of an ox was considered to be a lot of fun in 1906, hundreds of people lined up and wrote down their best guess. Galton got his hands on their written responses and took them home. He found that while no one guess was exactly right, the crowd’s mean guess was pretty darn good: only one pound off from the true weight of the ox.

We cannot expect any individual’s recommended cut score in a standard setting session to be spot on, but if we select a representative sample of experts and provide them with relevant information about the construct and impact data, we have a good basis for suggesting that their aggregated ratings are a faithful representation of the true cut score.
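As a toy illustration of that aggregation, here is a short sketch that averages a panel’s recommended cut scores and reports the spread. The ratings are invented numbers, not data from any real standard setting session.

```python
import statistics

# Hypothetical cut-score recommendations (percent correct) from a
# panel of subject matter experts -- invented for illustration.
panel_ratings = [68, 72, 70, 75, 66, 71, 69, 73]

cut_score = statistics.mean(panel_ratings)
spread = statistics.stdev(panel_ratings)
sem = spread / len(panel_ratings) ** 0.5   # standard error of the mean

print(f"Recommended cut score: {cut_score:.1f}%")
print(f"Rater spread: {spread:.1f} points, standard error: {sem:.1f}")
```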

This is the nature of educational measurement: our certainty about our inferences depends on the amount of data we have and the quality of that data. Just as we infer something about a student’s true abilities based on their responses to carefully selected items on a test, we have to infer something about the true cut score based on our subject matter experts’ responses to carefully constructed dialogues in the standard setting process.

We can also verify cut scores through validity studies, thus strengthening the case for our stakeholders. So take heart—your standard setters as a group have a pretty good estimate on the weight of that ox.

Learning Café: Video to help you align learning solutions with strategic goals

Posted by Julie Delazyn

We’re thrilled to be continuously adding rich new material to our Learning Café.

“How-to” videos and brief presentations about best practices will give you valuable pointers about authoring, delivery and integration.

For instance, here’s a video that can help make your learning initiatives more relevant and effective. It describes the A-model, which provides a framework for progressing from analysis and design right through to measurement and evaluation, using solid, results-based guidance for developing effective learning programs that serve organizational goals.

Watch the video on Learning Café and download a white paper about the A-model.


What is OData, and why is it important?

Posted by Steve Lay

At the recent Questionmark Users Conference I teamed up with Howard Eisenberg, our Director of Solution Services, to talk about OData. Our session included some exciting demonstrations of our new OData API for Analytics. But what is OData and why is it important?

OData is a standard for providing access to data over the internet. It has been developed by Microsoft as an open specification. To help demonstrate its open approach, Microsoft is now working with OASIS to create a more formal standard. OASIS stands for the Organization for the Advancement of Structured Information Standards; it provides the software industry with a way to create standards using open and transparent procedures. OASIS has published a wide range of standards, particularly in the areas of document formats and web service protocols — for example, the OpenDocument formats used by the OpenOffice application suite.

Why OData?

Questionmark’s Open Assessment Platform already includes a set of web-service APIs (application programming interfaces). We call them QMWISe, and they are ideal for programmers who are integrating server-based applications. With a single QMWISe request you can trigger a series of actions covering a number of common use cases. There are, inevitably, times when you need more control over your integration, though, and that is where OData comes in.

Unlike QMWISe, OData provides access to just the data you want, with scalability built right into the protocol. Using the conventions of OData, you can make highly specific requests to get a single data item, or you can use linked data to quickly uncover relationships.

OData works just like the web: each record returned by an OData request contains links to other related records, in exactly the same way as web pages contain hyperlinks to other web pages. Want to know about all the results for a specific assessment? It is easy with OData: just follow the results link in the assessment’s record.
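To make that concrete, here is a rough sketch of what such requests might look like from Python. The service root, credentials and entity names are placeholders, not the actual URLs or resource names of Questionmark’s OData API for Analytics.

```python
import requests

# Placeholder service root and credentials -- not Questionmark's
# actual OData endpoint or resource names.
service_root = "https://example.com/analytics/odata"
auth = ("analytics_user", "secret")

# Fetch a single (hypothetical) assessment record as JSON.
assessment = requests.get(
    f"{service_root}/Assessments(42)?$format=json", auth=auth
).json()
print(assessment)

# Each record links to related records: follow the (hypothetical)
# Results navigation link to list that assessment's results.
results = requests.get(
    f"{service_root}/Assessments(42)/Results?$format=json&$top=10", auth=auth
).json()
print(results)
```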

OData is also based on pre-existing internet protocols, which means that web developers can use it in their applications with a much easier learning curve. In fact, if a tool already supports RSS/Atom you can probably start accessing OData feeds right away!

OData Ecosystem

As we build our support for the OData protocol, we join a growing community. OData makes sense as the starting point for any data-rich standard. Last week I was at CETIS 2013, where there was already talk of other standards organizations in the e-Learning community adopting OData as a way of standardizing the way they share information.

SlideShare Presentation on Assessment Feedback

Posted by Julie Delazyn

The impact of assessments on learning is something Questionmark Chairman John Kleeman has written about extensively in this blog. He has explained psychology research that demonstrates the importance of retrieval practice – including taking formative quizzes with feedback – as an efficient way of retaining learning for the long term.

John has been focusing lately on what the effective use of feedback can bring to assessments, and he shared what he’s been learning during a presentation at the Questionmark Users Conference on Assessment Feedback – What Can We Learn from Psychology Research?

In this SlideShare presentation, John Kleeman explains how assessments and feedback can influence learning and offers some good practice recommendations.

For more on this theme, check out John’s conversation with Dr. Douglas Larsen, an expert in medical education at Washington University in St. Louis, about Dr. Larsen’s research on how tests and quizzes taken during a course aid learning and retention in medical education. You can also click here to read John’s post about ten benefits of quizzes and tests in educational practice.
