5 Steps to Better Tests

Posted by Julie Delazyn

Creating fair, valid and reliable tests requires starting off right: with careful planning. With that foundation in place, you will save time and effort while producing tests that yield trustworthy results.

Five essential steps for producing high-quality tests:

1. Plan: What elements must you consider before crafting the first question? How do you identify key content areas?

2. Create: How do you write items that assess the intended cognitive level while avoiding bias and stereotyping?

3. Build: How should you build the test form and set accurate pass/fail scores?

4. Deliver: What methods can be implemented to protect test content and discourage cheating?

5. Evaluate: How do you use item-, topic-, and test-level data to assess reliability and improve quality?
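To make step 5 concrete, here is a minimal sketch (an illustration of common practice, not an excerpt from the white paper) of two widely used statistics: item difficulty, the proportion of test takers answering each item correctly, and Cronbach's alpha, an estimate of test reliability. The data and variable names are hypothetical.

```python
# Item- and test-level statistics from a matrix of scored answers
# (rows = test takers, columns = items; 1 = correct, 0 = incorrect).
import numpy as np

scores = np.array([      # illustrative data only
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
])

# Item-level: difficulty is the proportion answering each item correctly.
difficulty = scores.mean(axis=0)

# Test-level: Cronbach's alpha
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
k = scores.shape[1]
item_vars = scores.var(axis=0, ddof=1)
total_var = scores.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print("Item difficulties:", difficulty)   # extreme values flag very easy or very hard items
print("Cronbach's alpha: %.2f" % alpha)   # values near 1 indicate consistent measurement
```

Items with extreme difficulty, or a low alpha overall, are natural starting points for the kind of review the white paper walks through.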

Download this complimentary white paper full of best practices for test design, delivery and evaluation.


Agree or disagree? 10 tips for better surveys — Part 2

Posted by John Kleeman

In my first post in this series, I explained that survey respondents go through a four-step process when they answer each question: comprehend the question, retrieve/recall the information that it requires, make a judgement on the answer and then select the response. There is a risk of error at each step. I also explained the concept of “satisficing”, where participants often give a satisfactory answer rather than an optimal one – another potential source of error.

Today, I’m offering some tips for effective online attitude survey design, based on research evidence. Following these tips should help you reduce error in your attitude surveys.

Tip #1 – Avoid Agree/Disagree questions

Although these are one of the most common types of questions used in surveys, you should try to avoid questions which ask participants whether they agree with a statement.

There is an effect called acquiescence bias, whereby some participants are more likely to agree than disagree. The research suggests that some participants are easily influenced and so tend to agree readily. This applies particularly to participants who are more junior or less well educated, who may assume that whatever is put to them is probably true. For example, Krosnick and Presser report that across 10 studies, an average of 52 percent of people agreed with an assertion, while only 42 percent disagreed with its opposite. If you are interested in finding out more about this effect, see this 2010 paper by Saris, Revilla, Krosnick and Schaeffer.

Satisficing – where participants just try to give a good enough answer rather than their best answer – also increases the number of “agree” answers.

For example, do not ask a question like this:

My overall health is excellent. Do you:

  • Strongly Agree
  • Agree
  • Neither Agree nor Disagree
  • Disagree
  • Strongly Disagree

Instead, re-word it to be construct-specific:

How would you rate your health overall?

  • Excellent
  • Very good
  • Good
  • Fair
  • Bad
  • Very bad


Tip #2 – Avoid Yes/No and True/False questions

For the same reason, you should avoid Yes/No questions and True/False questions in surveys. People are more likely to answer Yes than No due to acquiescence bias.

Tip #3 – Each question should address one attitude only

Avoid double-barrelled questions that ask about more than one thing. It’s very easy to ask a question like this:

  • How satisfied are you with your pay and work conditions?

However, someone might be satisfied with their pay but dissatisfied with their work conditions, or vice versa. So make it two separate questions.

Tip #4 – Minimize the difficulty of answering each question

If a question is harder to answer, it is more likely that participants will satisfice – give a good enough answer rather than the best answer. To quote Stanford Professor Jon Krosnick, “Questionnaire designers should work hard to minimize task difficulty”. For example:

  • Use as few words as possible in questions and responses.
  • Use words that all your audience will know.
  • Where possible, ask about the recent past rather than the distant past, as it is easier to recall.
  • Decompose complex judgement tasks into simpler ones, with a single dimension to each one.
  • Where possible make judgements absolute rather than relative.
  • Avoid negatives. Just like in tests and exams, using negatives in your questions adds cognitive load and makes the question less likely to get an effective answer.

The less cognitive load involved in questions, the more likely you are to get accurate answers.

Tip #5 – Randomize the responses if order is not important

The order of responses can significantly influence which ones get chosen.

There is a primacy effect in surveys, whereby participants choose the first response more often than later ones. And if they are satisficing, they may pick the first response that seems good enough rather than the best one.

There can also be a recency effect whereby participants read through a list of choices and choose the last one they have read.

To avoid these effects, randomize your choices whenever they do not have a clear progression or some other reason for being in a particular order. This is easy to do in Questionmark software, and it will remove the effect of response order on your results.
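If you are scripting a survey yourself rather than using Questionmark, the same idea takes only a few lines. Here is a minimal sketch with hypothetical option text; an anchor option such as “Other” that belongs at the end can be pinned in place:

```python
import random

def shuffled_options(options, keep_last=None):
    """Return a per-respondent random ordering of options.

    keep_last: an option (e.g. "Other") to pin at the end of the list.
    """
    pool = [o for o in options if o != keep_last]
    random.shuffle(pool)  # unbiased Fisher-Yates shuffle
    if keep_last is not None:
        pool.append(keep_last)
    return pool

options = ["Price", "Quality", "Support", "Brand", "Other"]
for respondent_id in range(3):
    print(respondent_id, shuffled_options(options, keep_last="Other"))
```

Because each respondent sees an independent ordering, primacy and recency effects average out across the sample instead of favoring any one option.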

Here is a link to the next segment of this series: Agree or disagree? 10 tips for better surveys — part 3

Announcing two pre-conference workshops in New Orleans, March 20, 2012

Posted by Joan Phaup

As we continue to build the program for the Questionmark 2012 Users Conference, we are delighted to announce two full-day pre-conference workshops at the Ritz-Carlton New Orleans on Tuesday, March 20, 2012:

Criterion-Referenced Test Development, with Sharon Shrock and Bill Coscarelli. Sharon and Bill — who have presented outstanding keynote addresses at two previous users conferences — will help participants in this workshop understand testing best practices, meet rigorous competency testing standards and interpret test results correctly.

The workshop, based on Sharon and Bill’s book, Criterion-Referenced Test Development: Technical and Legal Guidelines for Corporate Training, is ideal for trainers, instructional designers, course developers and training managers. Understanding the principles of skillful test authoring will help you create ethical and dependable testing programs that yield meaningful, measurable results.

Here’s a great way to learn the basics of using Questionmark technologies before the conference starts: Bring your laptop to a basic training course! Questionmark Trainer Rick Ault will give you hands-on practice in creating questions, putting together an assessment, then scheduling it, taking it and seeing the results. If you are just starting out — or if you have little experience with Questionmark — joining this workshop will give you a head start and help you get the most from the conference program.

Whether you add a workshop to your plans or not, now is a great time to register for the conference: Early-bird registration savings of $200 are available through December 9th.

Ever attended the Performance Testing Council Summit?

Posted by Jim Farrell

One of the things I enjoy most about working for Questionmark is attending conferences run by elearning and testing associations. I just returned from the Performance Testing Council Summit in Chevy Chase, Maryland, and I must say it was one of the most interesting meetings I have attended.

As an instructional designer with previous companies, I was not aware of the Performance Testing Council. We often struggled to create rigorous yet fair performance tests and felt we were on an island. Little did I know there is a group whose sole purpose is to remove barriers in developing performance tests.

The summit is exactly what you would want from a two-day event – lots of presentations by people and organizations using performance testing, and time to socialize with thought leaders in the area. The discussions were fascinating and inspiring, bringing together test developers, program managers, psychometricians and testing providers to talk about the finer points of creating effective performance tests.

While I was scribbling notes, one member said, “We must keep in mind that we are trying to stay faithful to what is being done on the job.” That immediately made me think about our Observational Assessment solution in the latest version of Questionmark Perception OnDemand. Observational Assessments (sometimes called “Workplace Assessments”) offer a way to assess participants in their everyday tasks and rate knowledge or abilities that would not normally be reflected in answers to a standard assessment. Testing someone actually doing an everyday task in the workplace is certainly one way to stay faithful to what is being done on the job!

Something else that struck me during the summit was that there really is no single industry that has mastered performance testing. There were people who represented K-12, military, certification bodies, and large corporations with lots of experience to share.

If you’re interested in performance testing I’d highly recommend you visit the council’s Web site and check out the resources there.

Podcast: Assessments for workers all across the globe

Posted by Joan Phaup

I spent some time talking the other day with Bon Crowder, a global instructional consultant for a large oilfield services company. She explained how her organization has expanded its use of online assessments to include not only high-stakes exams and certifications but also formative assessments such as quizzes. With participants all over the world — and having recently launched an assessment to 40,000 people — Bon’s organization values the ability to gather and analyze data that will help improve instructional programs.

We touched on many subjects, including the involvement of subject matter experts (SMEs) in using Questionmark Live to create assessment content, options for monitoring some higher-stakes assessments taken outside of testing centers, and a technique Bon has devised for easily creating multiple math-related questions from a single question stem.
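The podcast is the place to hear Bon’s own technique; purely as an illustration of the general pattern of generating many items from one stem, here is a minimal sketch in which the stem is a template and the answer key is computed from the same randomly drawn numbers. All names and values are hypothetical.

```python
import random

# Hypothetical template: placeholders are filled with random values,
# and the correct answer is computed from those same values.
STEM = "A pump moves {rate} barrels per hour. How many barrels does it move in {hours} hours?"

def generate_item():
    rate = random.randint(10, 90)
    hours = random.randint(2, 12)
    question = STEM.format(rate=rate, hours=hours)
    answer = rate * hours   # key derived from the drawn parameters
    return question, answer

for _ in range(3):
    question, answer = generate_item()
    print(question, "->", answer)
```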

Questionmark Conference Close-up: Making Assessments Accessible

Posted by Joan Phaup

Making Your Assessments Accessible and Available to All will be presented by Performance Solutions Specialist Cheryl Johnson and Questionmark Chairman John Kleeman during the Questionmark Users Conference March 14 – 17 in Miami. Cheryl brings a strong commitment to accessibility and usability to her work as an instructional designer as well as a trainer. She has been inspired by seeing how technology can dramatically expand opportunities for individuals with disabilities. I am looking forward to seeing Cheryl at the conference and would like to share my recent conversation with her:

Q: Could you tell me about your work as an instructional designer?

A: I’ve been involved with many corporate and government training programs over the years. Although I have developed some instructor-led training, I’ve worked primarily on elearning. Lately I’ve been moving into social learning, mobile learning, high-level simulations, and gaming.

Q: Let’s talk specifically about the subject of accessibility: could you discuss your experience with that?

A: In the mid-nineties I started training people to use assistive technologies. That was before assistive technology was really considered a productivity tool. Back then it was only about making information and technology accessible to people. There really were no tutorials or any good training out there, so I started writing my own training with the motivation of reducing the amount of tech support I had to provide to the people I’d trained! A few years later I was living in Utah and often had to train people remotely. My learners were primarily vocational rehabilitation clients, and sometimes I would be training just one person at a time. It was not cost-effective for me to drive long distances to train one person. I worked with a colleague to develop what would have been called in those days a “distance learning” program to help people use voice recognition technology effectively. He went on to patent the technology, called VoiceWindows (http://www.voiceteach.com/), as an online tutorial. It is a tutorial on voice recognition technology and its use with various software applications. In addition, there are many macros built into it to increase productivity when using voice recognition technology. I have also trained many quadriplegics and have seen the huge impact technology could have on the quality of their lives, in some cases opening up work opportunities. Technology can open up a lot of possibilities and help people be more productive both inside and outside the workplace.

Q: What are the key things people involved with assessment need to know about accessibility?

A: It calls for being creative and thinking differently about things. When I teach classes on accessibility I explain some rules: things like Section 508 compliance and the various standards people need to meet. But people get frustrated because they don’t know how to meet those requirements using what they have. Section 508 compliance really is just the basic standard: it is more about making sure a person can access the information than about whether it is really usable. I try to focus on helping people make information and technology usable as well as accessible. There are no written rules out there for doing that, so creativity is key. I recommend that when organizations are designing assessments or learning materials, they have people who use the technology on the design team from the very beginning. They know their technology. They know how things work. And they can suggest alternative ways of achieving something. That might mean taking a hot spot question, which would require the use of a mouse, and making text links for the hot spot areas so that someone who’s unable to use a mouse can answer the same question using a keyboard.

Q: What do you hope people will take away from your presentation?

A: An understanding of how important it is to make sure that people who use assistive technology are part of their design team. A realization that people who use assistive technology are often novice users who need clear direction and instructions in using it for a particular application—for example, a particular assessment that you have adapted for their use. People with disabilities like to use interactive tools and want as rich a learning and assessment experience as everyone else. And of course, enthusiasm for finding creative ways to make assessments more enjoyable, usable and effective for everyone. John will be showing how Questionmark Perception addresses accessibility and usability issues. He’ll be covering what you have to do to make your assessments in Perception accessible, so that will be a major takeaway, too!

Q: What are you looking forward to at the Questionmark Users Conference?

A: I’m excited to meet people who want to make high-quality learning and assessment available to everyone; I’ve spent many years fighting that battle! I also have to admit I’m looking forward to the cruise on Tuesday night!

Join Cheryl, John and our many other presenters for three days of learning and networking. Check out the conference program and register today!