Podcast: Alignment, Impact and Measurement With the A-model

Posted by Julie Delazyn

The growing emphasis on performance improvement — of which training is just one part — calls for new strategies for assessment and evaluation.

Bruce C. Aaron


Measurement and evaluation specialist Dr. Bruce C. Aaron has devoted a lot of thought to this. His white paper, Alignment, Impact and Measurement with the A-model, describes a framework for aligning assessment and evaluation with an organization’s goals, objectives and human performance issues.

For more information on the A-model, check out the video and free white paper: Alignment, Impact and Measurement with the A-Model.

Our podcast interview with Bruce about the A-model has been of great interest to learning and HR professionals. The interview explores how this framework addresses the changes that have taken place in recent years and the resulting complexities of today’s workplace.

Here are a few excerpts from the conversation. If you’d like to learn more, listen to the 10-minute podcast below.

“The things that I’ve observed have to do with our moving away from a training focus into a performance focus. So we don’t speak so much about training or even training and development anymore. We speak a lot more about performance improvement, or human performance, or learning and performance in the workplace. And those sorts of changes have had a great impact in how we do our business, how we design our solutions and how we go about assessing and evaluating them.

…the A-model evolved out of dealing with the need to evaluate all of this and still focus on what are we trying to accomplish: how do we go about parsing up the components of our evaluation and keeping those things logically organized in their relationship to each other?

…If we have a complex, blended solution, if we haven’t done a good job of really tying that to our objectives and to the original business issue that we’re trying to address…it becomes apparent through a focus on evaluation and assessment.”

Getting more value from assessment results

Posted by Joan Phaup

How do you maximize the value of assessment results? How do you tailor those results to meet the specific needs of your organization? We’ll address these questions and many others at the Questionmark Users Conference in San Antonio March 4 – 7.

The conference program will cover a wide range of topics, offering learning opportunities for beginning, intermediate and advanced users of Questionmark technologies. The power and potential of open data will be a major theme, highlighted in a keynote by Bryan Chapman on Transforming Open Data into Meaning and Action.

Here’s the full program:

Optional Pre-conference Workshops

  • Test Development Fundamentals, with Dr. Melissa Fein (half day)
  • Questionmark Boot Camp: Basic Training for Beginners, with Questionmark Trainer Rick Ault (full day)

General Sessions

  • Conference Kickoff and Opening General Session
  • Conference Keynote by Bryan Chapman – Transforming Open Data into Meaning and Action
  • Closing General Session — Leaping Ahead: A View Beyond the Horizon on the Questionmark Roadmap

Case Studies

  • Using Questionmark to Conduct Performance Based Certifications — SpaceTEC®
  • Better Outcomes Make the Outcome Better! — US Marine Corps University
  • Generating and Sending Custom Completion Certificates — The Aurelius Group
  • Leveraging Questionmark’s Survey Capabilities with a Multi-system Model — Verizon
  • Importing Questions into Questionmark Live on a Tri-Military Service Training Campus — Medical Education & Training Campus
  • How Can a Randomly Designed Test be Fair to All? — U.S. Coast Guard

Best Practices

  • Principles of Psychometrics and Measurement Design
  • 7 Reasons to Use Online Assessments for Compliance
  • Reporting and Analytics: Understanding Assessment Results
  • Making it Real: Building Simulations Into Your Quizzes and Tests
  • Practical Lessons from Psychology Research to Improve Your Assessments
  • Item Writing Techniques for Surveys, Quizzes and Tests

Questionmark Features & Functions

  • Introduction to Questionmark for Beginners
  • BYOL: Item and Topic Authoring
  • BYOL: Collaborative Assessment Authoring
  • Integrating with Questionmark’s Open Assessment Platform
  • Using Questionmark’s OData API for Analytics
  • Successfully Deploying Questionmark Perception
  • Customizing the Participant Interface

Discussions

  • Testing is Changing: Practical and Secure Assessment in the 21st Century
  • Testing what we teach: How can we elevate our effectiveness without additional time or resources?
  • 1 + 1 = 3…Questionmark, GP Strategies and You!

Drop-in Demos

  • Making the Most of Questionmark’s Newest Technologies

Future Solutions: Influence Questionmark’s Road Map

  • Focus Group on Authoring and Delivery
  • Focus Group on the Open Assessment Platform and Analytics

Tech Central

  • One-on-one meetings with Questionmark Technicians

Special Interest Group Meetings

  • Military/Defense US DOD and Homeland Security
  • Utilities/Energy Generation and Distribution
  • Higher Education
  • Corporate Universities

Social Events

Click here to see details about all these sessions, and register today!


How can a randomized test be fair to all?

Posted by Joan Phaup

James Parry, who is test development manager at the U.S. Coast Guard Training Center in Yorktown, Virginia, will answer this question during a case study presentation at the Questionmark Users Conference in San Antonio March 4 – 7. He’ll be co-presenting with LT Carlos Schwarzbauer, IT Lead at the USCG Force Readiness Command’s Advanced Distributed Learning Branch.

James and I spoke the other day about why tests created from randomly drawn items can be useful in some cases—but also about their potential pitfalls and some techniques for avoiding them.

When are randomly designed tests an appropriate choice?

James Parry


There are several reasons to use randomized tests. Randomization is appropriate when you think there’s a possibility of participants sharing the contents of their test with others who have not taken it. Another reason would be a computer-lab-style testing environment, where you are testing many people on the same subject at the same time with no blinders between the computers. Even if participants look at the screens next to them, chances are they won’t see the same items.

How are you using randomly designed tests?

We use randomly generated tests at all three levels of testing: low-, medium- and high-stakes. The low- and medium-stakes tests are used primarily at the schoolhouse level for knowledge- and performance-based quizzes and tests. We are also generating randomized tests for on-site testing using tablet computers or locally installed workstations.

Our most critical use is for our high-stakes enlisted advancement tests, which are administered both on paper and by computer. Participants are permitted to retake this test every 21 days if they do not achieve a passing score. Before we were able to randomize the test, there were only three parallel paper versions. Candidates knew this, so some would “test sample” without studying to get an idea of every possible question: they would take the first version, then the second, and so forth until they passed. With randomization, the word has gotten out that this is not possible anymore.

What are the pitfalls of drawing items randomly from an item bank?

The biggest pitfall is the potential for producing tests that have different levels of difficulty or that don’t present a balance of questions on all the subjects you want to cover. A completely random test can be unfair. Suppose you produce a 50-item randomized test from an entire test item bank of 500 items. Participant “A” might get an easy test, “B” might get a difficult test and “C” might get a test with 40 items on one topic and 10 on the rest, and so on.

How do you equalize the difficulty levels of your questions?

This is a multi-step process. The item author has to develop enough items in each topic to provide at least three to five items for each enabling objective. They have to think outside the box to produce items at several cognitive levels to ensure there will be a variety of possible levels of difficulty. This is the hardest part for them, because most are not trained test writers.

Once the items are developed, edited, and approved in workflow, we set up an Angoff rating session to assign a cut score for the entire bank of test items. Based upon the Angoff score, each item is assigned a difficulty level of easy, moderate or hard and given a matching metatag within Questionmark. We use a spreadsheet to calculate the number and percentage of available items at each level of difficulty in each topic. Based upon the results, the spreadsheet tells us how many items to select from the database at each difficulty level and from each topic. The test is then designed to match these numbers so that each time it is administered it will be parallel, with the same level of difficulty and the same cut score.
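The selection logic described above — draw a fixed count from each topic/difficulty cell so every generated form is parallel — can be sketched in a few lines of Python. This is a minimal illustration, not the Coast Guard’s actual spreadsheet or Questionmark’s selection engine; the topics, counts and field names are invented for the example.

```python
import random
from collections import defaultdict

# Hypothetical item bank: each item carries a topic and an Angoff-derived
# difficulty band. Topics and counts here are illustrative only.
item_bank = [
    {"id": i, "topic": topic, "difficulty": diff}
    for i, (topic, diff) in enumerate(
        (t, d)
        for t in ("navigation", "seamanship", "regulations")
        for d in ("easy", "moderate", "hard")
        for _ in range(20)  # 20 items per topic/difficulty cell
    )
]

# Test blueprint: how many items to draw from each cell, derived (as in
# the interview) from the share of available items at each difficulty.
blueprint = {
    ("navigation", "easy"): 3, ("navigation", "moderate"): 4, ("navigation", "hard"): 2,
    ("seamanship", "easy"): 3, ("seamanship", "moderate"): 4, ("seamanship", "hard"): 2,
    ("regulations", "easy"): 3, ("regulations", "moderate"): 3, ("regulations", "hard"): 1,
}

def assemble_test(bank, blueprint, rng=random):
    """Draw a random but parallel form: same topic/difficulty mix every time."""
    cells = defaultdict(list)
    for item in bank:
        cells[(item["topic"], item["difficulty"])].append(item)
    form = []
    for cell, count in blueprint.items():
        form.extend(rng.sample(cells[cell], count))  # random draw within the cell
    rng.shuffle(form)
    return form

form_a = assemble_test(item_bank, blueprint)
form_b = assemble_test(item_bank, blueprint)
# Both forms contain 25 items with an identical difficulty/topic balance,
# but (almost certainly) different questions.
```

The key design point is that randomness operates only *within* each stratum, so participant “A” and participant “B” get different questions but statistically parallel tests.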

Is there anything audience members should do to prepare for this session?

Come with an open mind and a willingness to think outside of the box.

How will your session help audience members ensure their randomized tests are fair?

I will give them tools to use, starting with a quick review of using the Angoff method to set a cut score, and then discuss the inner workings of the spreadsheet I developed to ensure each test is fair and equal.

***

See more details about the conference program here and register soon.

Early-birds: Check out the conference program and register by tomorrow

Posted by Joan Phaup

You can still get a $100 early-bird registration discount for the Questionmark 2014 Users Conference if you register by tomorrow (Thursday, January 30th).

The program for March 4 – 7 includes a keynote by Learning Strategist Bryan Chapman on the power of open data, which will be a hot topic throughout this gathering at the Grand Hyatt San Antonio.

You can click here for program details, but here’s the line-up:

Optional Pre-Conference Workshops

  • Questionmark Boot Camp: Basic Training for Beginners (full day)
  • Test Development Fundamentals (half day)
  • The Art & Craft of Item Writing (half day)

Case Studies

  • Using Questionmark to Conduct Performance Based Certifications — SpaceTEC®
  • Better Outcomes Make the Outcome Better! — US Marine Corps University
  • Generating and Sending Custom Completion Certificates — The Aurelius Group
  • Leveraging Questionmark’s Survey Capabilities with a Multi-system Model — Verizon
  • Importing Questions into Questionmark Live on a Tri-Military Service Training Campus — Medical Education & Training Campus
  • How Can a Randomly Designed Test be Fair to All? — US Coast Guard Training Center

Best Practices

  • Principles of Psychometrics and Measurement Design
  • 7 Reasons to Use Online Assessments for Compliance
  • Reporting and Analytics: Understanding Assessment Results
  • Making it Real: Building Simulations Into Your Quizzes and Tests
  • Practical Lessons from Psychology Research to Improve Your Assessments
  • Item Writing Techniques for Surveys, Quizzes and Tests

Questionmark Features & Functions

  • Introduction to Questionmark for Beginners
  • BYOL: Item and Topic Authoring
  • BYOL: Collaborative Assessment Authoring
  • Integrating with Questionmark’s Open Assessment Platform
  • Using Questionmark’s OData API for Analytics (BYOL)
  • Successfully Deploying Questionmark Perception
  • Customizing the Participant Interface

Discussions

  • Testing What We Teach: How can we elevate our effectiveness without additional time or resources?
  • Testing is Changing: Practical and Secure Assessment in the 21st Century

Future Solutions Focus Groups

  • Open Assessment Platform and Analytics
  • Authoring and Delivery

Special Interest Group Meetings

  • Military/Defense US DOD and Homeland Security
  • Utilities/Energy Generation and Distribution
  • Higher Education
  • Corporate Universities

Drop-in Demos of new Questionmark features and capabilities

Tech Central: Drop-in meetings with Questionmark technicians

***

Register for the conference by tomorrow, Thursday, January 30th, to get the early-bird discount.


A streamlined system for survey administration and reporting

Posted by Joan Phaup

It’s great to talk to customers who will be presenting case studies at the Questionmark 2014 Users Conference. They all bring to their presentations the lessons they’ve learned from experience.

Conference participants have always taken a keen interest in using surveys effectively, so I was glad to speak with Scott Bybee, a training manager from Verizon who will be talking at the conference about Leveraging Questionmark’s Survey Capabilities Within a Multi-system Model.

What will you be sharing during your conference presentation?

A lot of it will be about our surveys, which are mostly Level 1 evaluations for training events and Level 3 self-assessments. I will explain how we can use one generic survey template for all the courses being evaluated. We do this by passing parameters from our LMS into the special fields in Questionmark. I’ll also talk about how we integrate data from our LMS with the survey data to create detailed reports in a custom reporting system we built: we have everything we need to get very specific demographic reporting out of the system.

Scott Bybee


How is this approach helping you?

This system integrates reporting for all Level 1 and Level 3 surveys, giving us a single solution for all of our training-related reporting needs. Prior to this, we had to collect data from multiple systems and manually tie it all together, and because the business used many different surveys, it was hard to match up results due to variances in questions. With this approach, everyone sees the same set of questions, and the quality of the reporting is much higher.

The alternative would have been to collect demographic information using drop-down lists, which we’d have to constantly update and maintain. There’s also the issue of the participant possibly choosing the wrong options from the drop-downs. This way, we are passing everything along for them. They can’t make a mistake. Another advantage is that automatically including that information means it takes less time for them to complete the survey.
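The parameter-passing approach Scott describes — the LMS supplies each participant’s demographics so they never re-type or mis-pick them — boils down to appending known values to the survey launch URL. Here is a minimal sketch; the base URL, session identifier and field names are hypothetical placeholders, not Questionmark’s actual special-field names or Verizon’s implementation.

```python
from urllib.parse import urlencode

# Hypothetical demographic record pulled from the LMS for one participant.
lms_record = {
    "course_id": "SALES-101",
    "region": "Northeast",
    "job_role": "Account Manager",
    "manager_id": "M-4821",
}

def build_survey_url(base_url, session_id, demographics):
    """Append LMS-supplied demographics to a generic survey launch URL,
    so one survey template serves every course and participants cannot
    enter the wrong values."""
    params = {"session": session_id, **demographics}
    return f"{base_url}?{urlencode(params)}"

url = build_survey_url(
    "https://example.com/perception/survey", "level1-eval", lms_record
)
# The demographics travel in the URL and land in the response data,
# ready to be joined with LMS records for downstream reporting.
```

The design choice this illustrates: moving data entry from the participant (error-prone drop-downs) to the integration layer (authoritative LMS values) improves both data quality and completion time.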

Do you have a key piece of advice about how to get truly useful data from surveys?

Make sure you are asking the right kinds of questions and are not trying to put too much into one question. Also, consider passing information directly from your LMS into Questionmark, so participants can’t make a mistake filling out a drop-down.

What do you hope people will take away from your session?

I hope they find out there are some really creative ways to use Questionmark to get what you want. For instance, we realized that by using Perception Integration Protocol (PIP), we could pass in all the variables needed for user-interface as well as alignment with back-end reporting.  I also want them to appreciate how much can be done by tying different systems together. The investment to make Questionmark work for surveys as well as assessments dramatically increased our return on investment (ROI).

What do you hope to take away from the conference?

This will be my fourth one to go to. Every time I go I learn something from the people who are there – things I’d never even thought about. I want to learn from people who are using the tool in innovative ways, and I also want to hear about where things are going in the future.

The conference agenda is taking shape here. You can save $200 if you register for the conference by December 12.

 

Boot Camp for Beginners set for March 4, 2014

Posted by Joan Phaup

New customers who attended the Questionmark Users Conference in the past used to tell us that some hands-on instruction before the start of the conference would help them get a lot more out of the proceedings.

Enter Questionmark Boot Camp: Basic Training for Beginners – where people learn the basics before they join the throng at the conference. This full-day workshop has become a popular pre-conference option, and we’re bringing it back on Tuesday, March 4, 2014 – right before the Questionmark 2014 Users Conference in San Antonio.

I don’t get to attend Boot Camp, but I did spend a few minutes talking about it with our trainer, Rick Ault, who has as much fun there as his pupils:

Rick Ault


What happens at Boot Camp?
We talk about all the different Questionmark tools — how they are used together — and give people a solid understanding of the process and what they can do to build meaningful assessments. It’s hands-on. We ask people to bring their own laptops so that we can give them actual practice using the software. They have the chance to use Questionmark Live to build questions and assessments, then put that content onto a server to see it work.

Who should attend?
Any new users of Questionmark would benefit from it, because it’s designed to give people an understanding of what the product does and how it works.

What should they bring?
They should bring their laptops, plus some ideas for how they might like to use Questionmark — and ideas for fun content they might want to create.

How does Boot Camp prepare new customers for the Users Conference?
It gives them exposure to all of the tools, and it helps them understand the process. By getting some hands-on experience with our technologies, they will be able to make better choices about what conference tracks and sessions to attend. They’ll also be able to think of meaningful questions to ask at the conference.

What do YOU like best about Boot Camp?
I like meeting new customers, and I like seeing their happiness when they create something. It’s great to see the birth of their new content as they join the Questionmark family!

Newcomers to Questionmark can join the family in style by attending Boot Camp. You can sign up when you register for the conference.