Saving Time and Money with Diagnostic Testing: A SlideShare Presentation

Posted by Julie Delazyn

Having employees “test out” of corporate training using diagnostic assessments can save valuable resources and improve motivation, but there are many factors to be considered.

How do you ensure that time spent developing a robust diagnostic assessment provides value to the business?

A team from PwC explained their approach to this at the Questionmark 2013 Users Conference, and we’re happy to share the handouts from their presentation with you.

The Half-Time Strategy: Saving Your Organization Time and Money with Diagnostic Testing includes examples of diagnostic test-out assessments for business-critical self-study programs. It explains how diagnostic assessments can help organizations save training time while still maintaining quality. It also includes tips for building defensible assessments that people can take to test out of training – and for minimizing the time people spend taking them.

Questionmark Users Conferences offer many opportunities to learn from the experience of fellow learning and assessment professionals. Registration is already open for the 2014 Users Conference March 4 – 7 in San Antonio, Texas. Plan to be there!

Announcing secure delivery of higher-stakes tests on the Apple iPad

Posted by Julie Delazyn

Questionmark Secure has been safeguarding the delivery of medium- and high-stakes tests for over a decade, and last year it became available to Mac users. Today, I’m pleased to announce that Questionmark Secure is now available for iPad users, too – and that you can download Questionmark Secure for iPad from the iTunes App Store.

Like its predecessors, this free app locks down the browser, disabling functions that participants could use to print or copy exam material, “accidentally” exit a test, or gain access to materials on their devices or the Internet that could give an unfair advantage. The app provides a secure environment for delivering higher-stakes assessments such as tests and exams. Used along with other measures for combating impersonation and content theft, it can help reduce the risk of cheating.

Organizations can use the app to deliver medium- and high-stakes tests via low-cost, highly portable tablets — perfect for a BYOD situation or for setting up mobile test centers.

We are very pleased to offer this option to our customers, who are embracing the use of mobile devices for many different purposes.

Shenandoah University School of Pharmacy helped beta test the new app as part of an integrated mobile learning program. So far, students involved in the program have been using the Apple MacBook Pro for accessing course material and online assessments. They’ve also had their choice of using an iPod Touch, iPad 3G or iPhone for quick mobile delivery of content. But this fall, incoming students will work with the iPad as well as the MacBook Pro. If you’d like more details about Shenandoah’s iMLearning initiative, see Shenandoah’s case study slides from this year’s Questionmark Users Conference. (Here’s a link to the 2014 conference!)

To learn more about the app or to download it from iTunes, click here.

Writing Good Surveys, Part 1: Getting Started

Posted by Doug Peterson

In May of 2013 I attended the American Society for Training and Development (ASTD) conference in Dallas, TX. While there I took in Ken Phillips’ session called “Capturing Elusive Level 3 Data: The Secrets of Survey Design.” I also picked up the book “Survey Basics” by Patricia Pulliam Phillips, Jack J. Phillips, and Bruce Aaron. (Apparently there is some sort of cosmic connection between surveys and people named “Phillips”. Who knew?) Over the course of my next few blog posts, I’d like to discuss some of the things I’ve learned about surveys.

Surveys can be conducted in several ways:

  1. Self-administered
  2. Interviews
  3. Focus groups
  4. Observation

In this series, I’m going to be looking at #1 and #4. The self-administered survey is what we typically think about when we hear the word “survey” – taking an evaluation survey at the end of a training experience. Was the room temperature comfortable? Did you enjoy the training experience? Many times you hear them referred to as “smile sheets” and they relate to level 1 of the Kirkpatrick model (reaction). Questionmark excels at creating these types of surveys, and our Questionmark Live browser-based authoring tool even has a dedicated “Course Evaluation” assessment template that comes with a library of standard questions from which to select, in addition to writing questions of your own.

Surveys can also be used for Kirkpatrick level 3 evaluation – behavior. In other words, was the training applied back on the job? Many times level 3 data is derived from job statistics such as an increase in widgets produced per day or a decrease in the number of accidents reported per month. However, surveys can also be used to determine the impact of the training on job performance. Not only can the survey be taken by the learner, the survey can also take the form of an observational assessment filled out by someone else. Questionmark makes it easy to set up observational assessments: you identify the observer and whom they can observe; the observer logs in and specifies whom he or she is observing; and the results are tied to the person being observed.

To write a good survey, it is important to understand the objectives of the survey. Define your objectives up front and then use them to drive which questions are included. If a question doesn’t pertain to one of the objectives, throw it out. The best results come from a survey that is only as long as it needs to be.

The next step is to define your target audience. The target audience of a level 1 survey is pretty obvious – it’s the people who took the training! However, level 3 surveys can be a bit trickier. Typically you would include those who participated in the training, but you may want to include others, as well. For example, if the training was around customer relations, you may want to survey some customers (internal and/or external). The learner’s peers and colleagues might be able to provide some valuable information as to how the learner is applying what was learned. The same is true about the learner’s management. In certain situations, it might also be appropriate to survey the learner’s direct reports. For example, if a manager takes leadership training, who better to survey than the people he or she is leading? The key thing is that the group being surveyed must have first-hand knowledge of the learner’s behavior.

A few more things to take into account when deciding on a target audience:

  • How disruptive or costly is the data collection process? Are you asking a lot of highly paid staff to take an hour of their time to fill out a survey? Will you have to shut down the production line or take customer representatives away from their phones to fill out the survey?
  • How credible do the results need to be? Learners tend to overinflate how much they use what they’ve learned, so if important decisions will be made based on the survey data, consider gathering input from other sources, such as the learner’s colleagues or management.
  • What are the stakeholders expecting?

Whereas well-defined objectives define which questions are asked, the target audience defines how they are asked. Surveying the learner will typically involve more responses about feelings and impressions, especially in level 1 surveys. Surveying the learner’s colleagues, management, direct reports, and/or customers will involve questions more related to the learner’s observable behaviors.  As this series progresses, we’ll look at writing survey questions in more depth.

The Dog Days of Summer: Take Our Quiz!

Posted by Joan Phaup

It’s been hot-hot-hot in many parts of the Northern Hemisphere this summer – and now we’re in what’s known as the Dog Days.

Some people may think it’s too hot outside for anyone to think straight, but we’re not going to ask too much of you.

Our short quiz is just a little something to get your mind in gear for cooler days ahead.

See how you do on these five questions:

Sharing slides: 7 ways that online assessments help ensure compliance

Posted by John Kleeman

I recently gave a webinar with my colleague Brian McNamara for the SCCE (Society of Corporate Compliance and Ethics) on 7 ways that online assessments can help ensure compliance.

Here are the slides:

As you can see, we started the webinar by running through some general concepts on assessments including why it’s important that assessments are reliable and valid. Then we described seven key ways in which online assessments can help ensure compliance.

Here are the six pieces of good practice we advocated in the webinar:

  1. Use scenario questions – test above knowledge
  2. Use topic feedback
  3. Consider observational assessments
  4. Use item analysis
  5. Set a pass or cut score
  6. Use a code of conduct

To view a recorded version of this webinar, go to SCCE’s website to purchase the CD (Questionmark does not receive any remuneration for this). Or, view a slightly shorter, complimentary version through Questionmark, which is scheduled for September. Go to our UK website or our US website for webinar details and registration.

Using Questionmark’s OData API to Analyze Item Key Distribution

Posted by Austin Fossey

The Questionmark OData API, which offers flexible access to data for the creation of custom reports, can help you ensure the quality of your tests.

For instance, you can use OData to create a frequency table of item keys in a multiple choice assessment. This report shows the number of items that have the first choice as the correct answer, the number of items that have the second choice as the correct answer, et cetera.
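As a rough sketch of how such a table could be assembled once the data has been fetched, the snippet below counts keys from a parsed OData-style JSON response. The entity and property names (`value`, `CorrectChoice`) are illustrative assumptions for this example, not the actual Questionmark OData schema — check the API documentation for the real entity and property names.

```python
from collections import Counter

# Hypothetical shape of a parsed OData JSON response; the property
# names here are illustrative, not the real Questionmark schema.
odata_response = {
    "value": [
        {"QuestionId": 1, "CorrectChoice": 2},
        {"QuestionId": 2, "CorrectChoice": 2},
        {"QuestionId": 3, "CorrectChoice": 1},
        {"QuestionId": 4, "CorrectChoice": 4},
        {"QuestionId": 5, "CorrectChoice": 2},
    ]
}

def key_frequency_table(response):
    """Count how many items use each choice position as the key."""
    keys = [row["CorrectChoice"] for row in response["value"]]
    return Counter(keys)

freq = key_frequency_table(odata_response)
print(freq)  # e.g. Counter({2: 3, 1: 1, 4: 1})
```

From here, the counts can be dropped into a spreadsheet or charted however your custom report requires.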

analyze 1

Why do we care about how often each choice number is the item key? If there is a pattern in how correct choices are assigned, it may affect how participants perform on the test, and this can lead to construct-irrelevant variance; i.e., the scores are being affected by factors other than the participant’s knowledge and abilities.

Let’s say that our assessment has 50 items, and on 30 (60%) of those items the second choice is the item key. Now let’s put ourselves in the shoes of a qualified participant. Halfway through the assessment, we might start thinking, “Gosh, I just picked the second choice four times in a row. Maybe I should go back and check some of those answers.” Because of poor test design, we are second-guessing our answers. Even if we do not change our responses, time is being wasted and test anxiety is rising, which might negatively affect our responses later in the assessment.

The opposite problem may arise too. If an unqualified participant figures out that the second choice is most often the key, he or she may pick the second choice even when he or she does not know the answer, resulting in an inflated score.

When looking at the distribution of keys across a selected-response assessment, we expect to see an even distribution of the keys across the choices. For example, if we have a multiple choice assessment with four choices in each item labeled A, B, C, and D, we would like to see 25% of the keys assigned to each of these choices.
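One way to put a number on “even enough” is a chi-square goodness-of-fit test against the uniform expectation. This is a minimal sketch with made-up counts for illustration; 7.815 is the standard critical value for 3 degrees of freedom at the 0.05 significance level.

```python
def chi_square_uniform(observed_counts):
    """Chi-square goodness-of-fit statistic against a uniform distribution."""
    total = sum(observed_counts)
    expected = total / len(observed_counts)
    return sum((obs - expected) ** 2 / expected for obs in observed_counts)

# Illustrative key counts at choice positions A, B, C, D on a 50-item test,
# heavily skewed toward the second choice
counts = [8, 30, 6, 6]
stat = chi_square_uniform(counts)
print(round(stat, 2))  # 32.88 -- far above the 7.815 critical value
                       # for df=3 at the 0.05 level, so not uniform
```

A perfectly even distribution would give a statistic of zero; the larger the statistic, the stronger the evidence that the keys are not evenly assigned.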

You do not have to limit your assessment research to this example. The beauty of OData is that you can access your data whenever you have a new question you would like to investigate. For example, instead of reviewing the frequencies of your keys, you may want to determine the ratio of the length of the key to the length of the other options (a common item writing mistake is to write keys that are noticeably longer than the distractors). You may also want to look for patterns in the keys’ text (e.g., 10 items all have “OData” as the correct choice).
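As a hypothetical illustration of the length check, the snippet below compares a key’s character count to the mean length of its distractors. The item content and structure are invented for the example; in practice the text would come from your OData query.

```python
def key_length_ratio(key_text, distractor_texts):
    """Ratio of the key's length to the mean distractor length.
    Values well above 1.0 flag keys that are noticeably longer."""
    mean_distractor = sum(len(d) for d in distractor_texts) / len(distractor_texts)
    return len(key_text) / mean_distractor

# Invented item for illustration
item = {
    "key": "Because the contract requires written notice within 30 days",
    "distractors": ["Because it is illegal", "To save money", "It does not"],
}
ratio = key_length_ratio(item["key"], item["distractors"])
print(round(ratio, 2))  # well above 1.0 -- a candidate for review
```

Running this across an item bank and sorting by ratio gives a quick review list of items whose keys may be giving themselves away.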

Click here for more information about the OData API. (If you are a Questionmark software support plan customer, you can get step-by-step instructions for using OData to create the item key frequency table in the Premium Videos section of the Questionmark Learning Café.)