Q&A: Microlearning and the Role of Measurement in Learning and Development at Progressive

 

Posted by Kristin Bernor

Chris Gilbert is a Senior Instructional Designer for Progressive Insurance, one of the largest providers of insurance in the United States. During his case study presentation at the 2019 Questionmark Conference in San Diego, taking place February 26 – March 1, he will talk about Using Questionmark to Build Microlearning for Photo Estimators. Progressive photo estimators use videos and photographs to identify damage and write estimates for necessary repairs.

This session will explore Progressive’s use of microlearning modules and the process they use to develop them.

I asked him recently about their case study:

Tell us about Progressive and your use of assessments:

At Progressive, we seek to make informed, data-driven decisions. We also strive to prepare and develop our people through effective, targeted learning solutions. In our learning orgs, assessments are one mechanism for gathering data we use to make a variety of decisions, including:

  • Identifying aspects and features of learning experiences that resonate with our target audiences so we can implement them in more of our deliverables
  • Pinpointing opportunities to improve learning experiences for our target audiences

Over the past few years, we’ve had a renewed focus on the importance of learning measurement and have established and implemented standards and tools for performing Level 1 and Level 2 measurement across and within all of our learning organizations. We’re currently working on Level 3 measurement to be able to measure and communicate the on-the-job impact of our learning experiences more consistently.

What do you mean by microlearning and why is it important to Progressive?

Microlearning is skill-based learning delivered in small “bite-sized” pieces. It can be developed in a variety of formats, including videos, games, and scenarios, among others. Depending on the situation and the needs of the organization and learners, microlearning can be delivered standalone or as a supplement to other learning experiences, such as in-person or virtual classroom courses.

Progressive is interested in adding microlearning into our learning deliverable portfolio for a variety of reasons, including:

  • Faster development times can improve our ability to deliver just-in-time learning solutions at the speed of modern business and change
  • Tightly-focused, skill-based topics and practice directly support on-the-job application
  • Today’s corporate learners seek quick-hit learning that gives them practical tools they need to succeed
  • Delivered in conjunction with other learning experiences, microlearning can help learners overcome the forgetting curve

What role does Questionmark play in ensuring that microlearning is successful?

One of the primary reasons we decided to use Questionmark for our microlearning pilot project is that the data the system captures and its reporting capabilities allow us to provide the business with insights into several aspects of learners’ performance in the modules. In turn, these insights will help the business make informed decisions.

What else about your session would you like to share?

Besides sharing the story of our first foray into microlearning, I’m planning to discuss some of the learnings we had related to question-type capabilities that we hadn’t previously explored.

Who would benefit most from attending this session and why?

a. Anyone interested in using Questionmark beyond its traditional use because the way we’re using it is a bit unconventional

b. Anyone interested in adding microlearning to their learning deliverable portfolio because Questionmark may provide a way for them to develop, deliver, and report the results

c. Anyone interested in extending the functionality of Questionmark question types to meet a business need because I’ll dig into some of the challenges, realizations, and learnings I experienced from having to extend a few of the question types in the pilot project

What are you especially looking forward to at this year’s Questionmark conference?

Meeting and networking with other Questionmark users, especially those who are passionate about the role of measurement in learning and development, and gaining more insight into how others are using system features and functionality in their organizations

Thank you, Chris, for taking time out of your busy schedule to discuss your session with us!

***

If you have not already done so, you still have a chance to attend this important learning event. Click here to register.

The Importance of Safety in the Utilities Industry: A Q&A with PG&E


Posted by Julie Delazyn

Wendy Lau is a Psychometrician at Pacific Gas and Electric Company (PG&E). She will be leading a discussion at Questionmark Conference 2016 in Miami, about Safety and the Utilities Industry: Why Assessments Matter.


Wendy Lau, Psychometrician, PG&E

Wendy’s session will describe a day in the life of a psychometrician in the utilities industry. It will explore the role assessments play at PG&E, and how Questionmark has helped the company focus on safety and train its employees.

I recently asked her about her session:

Tell me about PG&E and its use of assessments:

PG&E is a utilities company that provides natural gas and electricity to most of the northern two-thirds of California. Over the years, we have evolved into a more data-driven company, and Questionmark has been a part of that for the past 7 years. Having assessments readily available and secured within a platform that we can trust is very important to PG&E. We are also glad to have found a testing tool that offers such a wide variety of question types.

Why is safety important in the utilities industry?

Most of the work our employees perform has serious safety implications, whether it is a lineman climbing up a pole to perform live-line work or a utility worker digging near a major gas pipeline. Our technical training must have safety in mind and, more importantly, it must ensure that after going through training, employees are competent to perform their tasks safely and proficiently. To ensure workforce capability, we rely heavily on testing to prove that our workforce is in fact safe and proficient, and that the community we serve and our employees are safe and receiving reliable services.

What role does Questionmark play in ensuring that safety?

Questionmark helps us focus on safety-related questions by allowing special assessment strategies such as identifying critical versus coachable assessment items and identifying cutscores for each accordingly. Questionmark also allows a secured platform so that we can ensure our test items are never compromised and that our employees are truly being assessed under fair circumstances.

To find out more about the role Questionmark plays in ensuring safety, you’ll just have to attend my session at Questionmark Conference 2016 in Miami!

What are you looking forward to at the conference?

I am very much looking forward to ‘talking shop’ with other Psychometricians and sharing best practices with others in the utilities industry and other companies alike!

Thank you, Wendy, for taking time out of your busy schedule to discuss your session with us!

If you have not already done so, you still have a chance to attend this important learning event. Click here to register.

 

Caveon Q&A: Enhanced security of high-stakes tests

Posted by Julie Delazyn

Questionmark and Caveon Test Security, an industry leader in protecting high-stakes test programs, have recently joined forces to provide clients of both organizations with additional resources for their test administration toolboxes.

Questionmark’s comprehensive platform offers many features that help ensure security and validity throughout the assessment process. This emphasis on security, along with Caveon’s services, which include analyzing data to identify validity risks as well as monitoring the internet for any leak that could affect intellectual property, adds a strong layer of protection for customers using Questionmark for high-stakes assessment management and delivery.

I sat down with Steve Addicott, Vice President of Caveon, to ask him a few questions about the new partnership, what Caveon does and what security means to him. Here is an excerpt from our conversation:

Who is Caveon? Tell me about your company.

At Caveon Test Security, we fundamentally believe in quality testing and trustworthy test results. That’s why Caveon offers test security and test item development services dedicated to helping prevent test fraud and better protecting our clients’ items, tests, and reputations.

What does security mean to you, and why is it important?

High-stakes test programs make important education and career decisions about test takers based on test results. We also spend a tremendous amount of time creating, administering, scoring, and reporting results. With increased security pressures from pirates and cheats, we are here to make sure that those results are trustworthy, reflecting the true knowledge and skills of test takers.

Why a partnership with Questionmark and why now?

With a growing number of Questionmark clients engaging in high-stakes testing, Caveon’s experience in protecting the validity of test results is a natural extension of Questionmark’s security features. For Caveon, we welcome the chance to engage with a vendor like Questionmark to help protect exam results.

And how does this synergy help Questionmark customers who deliver high-stakes tests and exams?

As the stakes in testing continue to rise, so do the challenges involved in protecting your program. Both organizations are dedicated to providing clients with the most secure methods for protecting exam administrations, test development investments, exam result validity and, ultimately, their programs’ reputations.

For more information on Questionmark’s dedication to security, check out this video and download the white paper: Delivering Assessments Safely and Securely.

Q&A: High-stakes online tests for nurses

Posted by Julie Delazyn

I spoke recently with Leanne Furby, Director of Testing Services at the National League for Nursing (NLN), about her case study presentation at the Questionmark 2015 Users Conference in Napa Valley March 10-13.

Leanne’s presentation, Transitioning 70 Years of High-Stakes Testing to Questionmark, explains NLN’s switch from a proprietary computer- and paper-based test delivery engine to Questionmark OnDemand for securely delivering standardized exams worldwide. I’m happy to share a snippet of our conversation:

Tell me about the NLN

The NLN is a national organization for faculty nurses and leaders in nurse education. We offer faculty development, networking opportunities, testing services, nursing research grants and public policy initiatives to more than 26,000 members.

Why did you switch to Questionmark?

Our main concern was delivering our tests and exams to a variety of different devices. We wanted our students to be able to take a test on a tablet or take a quiz on their own mobile devices, and this wasn’t something we could do with our proprietary test delivery engine.

Our second major reason to go with Questionmark was the Customized Assessment Reports and the analytics tools. Before making the switch, we were having to create reports and analyze results manually. It took time and resources. Now this is all integrated in Questionmark.

How do you use Questionmark assessments?

We have 90 different exam lines and deliver approximately 75,000 to 100,000 secure exams a year, both nationally and internationally, in multiple languages. The NLN partnered with Questionmark in 2014 to transition the delivery of these exams through a custom-built portal. Questionmark is now NLN’s turnkey solution—from item banking and test development with SMEs all over the world to inventory control, test delivery and analytics.

This transition has had positive outcomes for both our organization and our customers. We have developed a new project management policy, procedures for system transition and documentation for training at all levels. This has transformed the way we develop, deliver and analyze exams and the way we collect data for business and education purposes.

What are you looking forward to at the conference?

I am most looking forward to the opportunity to speak to other users and product developers to learn tips, tricks and little secrets surrounding the product. It’s so important to speak to people who have experience and can share ways of utilizing the software in ways you hadn’t thought of.

Thank you, Leanne, for taking time out of your busy schedule to discuss your session with us!

***

You have the opportunity to save $100 on your own conference registration: Just sign up by January 29 to receive this special early-bird discount.

How can a randomized test be fair to all?

Posted by Joan Phaup

James Parry, test development manager at the U.S. Coast Guard Training Center in Yorktown, Virginia, will answer this question during a case study presentation at the Questionmark Users Conference in San Antonio March 4 – 7. He’ll be co-presenting with LT Carlos Schwarzbauer, IT Lead at the USCG Force Readiness Command’s Advanced Distributed Learning Branch.

James and I spoke the other day about why tests created from randomly drawn items can be useful in some cases—but also about their potential pitfalls and some techniques for avoiding them.

When are randomly designed tests an appropriate choice?

James Parry

There are several reasons to use randomized tests. Randomization is appropriate when you think there’s a possibility of participants sharing the contents of their test with others who have not taken it. Another reason would be a computer-lab-style testing environment, where you are testing many people on the same subject at the same time with no blinders between the computers: even if participants look at the screens next to them, chances are they won’t see the same items.

How are you using randomly designed tests?

We use randomly generated tests at all three levels of testing: low-, medium-, and high-stakes. The low- and medium-stakes tests are used primarily at the schoolhouse level for knowledge- and performance-based quizzes and tests. We are also generating randomized tests for on-site testing using tablet computers or locally installed workstations.

Our most critical use is for our high-stakes enlisted advancement tests, which are administered both on paper and by computer. Participants are permitted to retake this test every 21 days if they do not achieve a passing score. Before we were able to randomize the test, there were only three parallel paper versions. Candidates knew this, so some would “test sample” without studying to get an idea of every possible question: they would retake the first version, then the second, and so forth until they passed. With randomization, word has gotten out that this is no longer possible.

What are the pitfalls of drawing items randomly from an item bank?

The biggest pitfall is the potential for producing tests that have different levels of difficulty or that don’t present a balance of questions on all the subjects you want to cover. A completely random test can be unfair.  Suppose you produce a 50-item randomized test from an entire test item bank of 500 items.   Participant “A” might get an easy test, “B” might get a difficult test and “C” might get a test with 40 items on one topic and 10 on the rest and so on.
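The unevenness James describes is easy to demonstrate. Here is a minimal Python sketch, using an invented bank of 500 items spread evenly across five hypothetical topics: drawing 50 items uniformly at random produces noticeably different topic coverage from one form to the next.

```python
import random
from collections import Counter

random.seed(42)

# Hypothetical bank: 500 items spread evenly across 5 topics (100 each)
bank = [{"id": i, "topic": f"topic_{i % 5}"} for i in range(500)]

def draw_form(bank, n=50):
    """Draw a test form uniformly at random, ignoring topic balance."""
    return random.sample(bank, n)

# Compare topic coverage across three randomly generated forms:
# each participant's form covers the topics in different proportions.
for participant in ("A", "B", "C"):
    form = draw_form(bank)
    counts = Counter(item["topic"] for item in form)
    print(participant, dict(sorted(counts.items())))
```

Running this a few times shows forms where one topic gets far more items than another, which is exactly the fairness problem a stratified design has to prevent.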

How do you equalize the difficulty levels of your questions?

This is a multi-step process. The item author has to make sure they develop sufficient numbers of items in each topic that will provide at least 3 to 5 items for each enabling objective.  They have to think outside the box to produce items at several cognitive levels to ensure there will be a variety of possible levels of difficulty. This is the hardest part for them because most are not trained test writers.

Once the items are developed, edited, and approved in workflow, we set up an Angoff rating session to assign a cut score for the entire bank of test items.  Based upon the Angoff score, each item is assigned a difficulty level of easy, moderate or hard and assigned a metatag to match within Questionmark.  We use a spreadsheet to calculate the number and percentage of available items at each level of difficulty in each topic. Based upon the results, the spreadsheet tells how many items to select from the database at each difficulty level and from each topic. The test is then designed to match these numbers so that each time it is administered it will be parallel, with the same level of difficulty and the same cut score.
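The spreadsheet logic James describes can be sketched in a few lines of Python. The topics and item counts below are invented for illustration, and the difficulty labels stand in for the Angoff-derived metatags: given how many approved items exist at each difficulty level in each topic, the sketch computes how many items a form should draw from each cell so every administration is parallel.

```python
from collections import Counter

# Hypothetical item bank metadata: one (topic, difficulty) pair per approved
# item, where difficulty was assigned from the Angoff ratings.
bank = (
    [("navigation", "easy")] * 40 + [("navigation", "moderate")] * 30 +
    [("navigation", "hard")] * 10 +
    [("seamanship", "easy")] * 50 + [("seamanship", "moderate")] * 50 +
    [("seamanship", "hard")] * 20
)

def blueprint(bank, test_length):
    """Allocate test slots to each (topic, difficulty) cell in proportion
    to its share of the bank, so every generated form is parallel."""
    counts = Counter(bank)
    total = len(bank)
    # Proportional allocation, rounded down; leftover slots go to the
    # largest cells so the form reaches full length.
    alloc = {cell: (n * test_length) // total for cell, n in counts.items()}
    short = test_length - sum(alloc.values())
    for cell, _ in counts.most_common(short):
        alloc[cell] += 1
    return alloc

print(blueprint(bank, 50))
```

A test built to these per-cell counts would then draw randomly *within* each cell, so every form has the same topic balance and overall difficulty even though the individual items differ.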

Is there anything audience members should do to prepare for this session?

Come with an open mind and a willingness to think outside of the box.

How will your session help audience members ensure their randomized tests are fair?

I will give them the tools to use, starting with a quick review of using the Angoff method to set a cut score and then discussing the inner workings of the spreadsheet I developed to ensure each test is fair and equal.

***

See more details about the conference program here and register soon.

A streamlined system for survey administration and reporting

Posted by Joan Phaup

It’s great to talk to customers who will be presenting case studies at the Questionmark 2014 Users Conference. They all bring to their presentations the lessons they’ve learned from experience.

Conference participants have always taken a keen interest in how to use surveys effectively, so I was quite interested to speak with Scott Bybee, a training manager from Verizon, who will be talking at the conference about Leveraging Questionmark’s Survey Capabilities Within a Multi-system Model.

What will you be sharing during your conference presentation?

A lot of it will be about our surveys, which are mostly Level 1 evaluations for training events and Level 3 self-assessments. I will tell how we use one generic survey template for all the courses being evaluated. We do this by passing parameters from our LMS into the special fields in Questionmark. I’ll also talk about how we integrate data from our LMS with the survey data to create detailed reports in a custom reporting system we built: we have everything we need to get very specific demographic reporting out of the system.
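The general pattern Scott describes can be sketched in Python. Note that the endpoint, parameter names, and demographic fields below are hypothetical, not Questionmark’s actual interface: the idea is simply that the LMS appends values it already knows to the survey launch URL, so one generic template serves every course and participants never re-enter demographic data.

```python
from urllib.parse import urlencode

def build_survey_url(base_url, course_id, participant, demographics):
    """Append LMS-known demographics to a generic survey launch URL so one
    template serves every course and participants cannot mistype this data."""
    params = {"course": course_id, "participant": participant, **demographics}
    return f"{base_url}?{urlencode(params)}"

# Example launch for one training event (all values invented)
url = build_survey_url(
    "https://example.com/survey/level1",  # hypothetical survey endpoint
    course_id="SALES-101",
    participant="jdoe",
    demographics={"region": "Northeast", "role": "Consultant"},
)
print(url)
```

Because the demographics arrive with the launch request, the reporting side can slice results by region or role without ever relying on a participant picking the right option from a drop-down.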

Scott Bybee

How is this approach helping you?

This system integrates reporting for all level 1 and 3 surveys. This provides us a single solution for all of our training-related reporting needs. Prior to this, we had to collect data from multiple systems and manually tie it all together. Before, we had a lot of different surveys being used by the business. It became hard to match up results due to variances in questions. With this approach, everyone sees the same set of questions and the quality of the reporting is much higher.

The alternative would have been to collect demographic information using drop-down lists, which we’d have to constantly update and maintain. There’s also the issue of the participant possibly choosing the wrong options from the drop-downs. This way, we are passing everything along for them. They can’t make a mistake. Another advantage is that automatically including that information means it takes less time for them to complete the survey.

Do you have a key piece of advice about how to get truly useful data from surveys?

Make sure you are asking the right kinds of questions and are not trying to put too much into one question. Also, consider passing information directly from your LMS into Questionmark, so participants can’t make a mistake filling out a drop-down.

What do you hope people will take away from your session?

I hope they find out there are some really creative ways to use Questionmark to get what you want. For instance, we realized that by using Perception Integration Protocol (PIP), we could pass in all the variables needed for the user interface as well as for alignment with back-end reporting. I also want them to appreciate how much can be done by tying different systems together. The investment to make Questionmark work for surveys as well as assessments dramatically increased our return on investment (ROI).

What do you hope to take away from the conference?

This will be my fourth one to go to. Every time I go I learn something from the people who are there – things I’d never even thought about. I want to learn from people who are using the tool in innovative ways, and I also want to hear about where things are going in the future.

The conference agenda is taking shape here. You can save $200 if you register for the conference by December 12.