The Importance of Safety in the Utilities Industry: A Q&A with PG&E


Posted by Julie Delazyn

Wendy Lau is a Psychometrician at Pacific Gas and Electric Company (PG&E). She will be leading a discussion at Questionmark Conference 2016 in Miami about Safety and the Utilities Industry: Why Assessments Matter.


Wendy Lau, Psychometrician, PG&E

Wendy’s session will describe a day in the life of a psychometrician in the utilities industry. It will explore the role assessments play at PG&E, and how Questionmark has helped the company focus on safety and train its employees.

I recently asked her about her session:

Tell me about PG&E and its use of assessments:

PG&E is a utilities company that provides natural gas and electricity to most of the northern two-thirds of California. Over the years, we have evolved into a more data-driven company, and Questionmark has been a part of that for the past 7 years. Having assessments readily available and secured within a platform that we can trust is very important to PG&E. We are also glad to have found a testing tool that offers such a wide variety of question types.

Why is safety important in the utilities industry?

Most of the work our employees perform has serious safety implications, whether it is a lineman climbing a pole to perform live-line work or a utility worker digging near a major gas pipeline. Our technical training must keep safety in mind and, more importantly, must ensure that after going through training, employees are competent to perform their tasks safely and proficiently. To ensure workforce capability, we rely heavily on testing to prove that our workforce is in fact safe and proficient, and that our employees and the community we serve are safe and receiving reliable service.

What role does Questionmark play in ensuring that safety?

Questionmark helps us focus on safety-related questions by allowing special assessment strategies, such as distinguishing critical from coachable assessment items and setting cut scores for each accordingly. Questionmark also provides a secure platform, so we can ensure our test items are never compromised and that our employees are truly being assessed under fair circumstances.

To find out more about the role Questionmark plays in ensuring safety, you’ll just have to attend my session at Questionmark Conference 2016 in Miami!

What are you looking forward to at the conference?

I am very much looking forward to ‘talking shop’ with other Psychometricians and sharing best practices with others in the utilities industry and other companies alike!

Thank you, Wendy, for taking time out of your busy schedule to discuss your session with us!

If you have not already done so, you still have a chance to attend this important learning event. Click here to register.

 

Caveon Q&A: Enhanced security of high-stakes tests

Posted by Julie Delazyn

Questionmark and Caveon Test Security, an industry leader in protecting high-stakes test programs, have recently joined forces to provide clients of both organizations with additional resources for their test administration toolboxes.

Questionmark’s comprehensive platform offers many features that help ensure security and validity throughout the assessment process. This emphasis on security, along with Caveon’s services, which include analyzing data to identify validity risks as well as monitoring the internet for any leak that could affect intellectual property, adds a strong layer of protection for customers using Questionmark for high-stakes assessment management and delivery.

I sat down with Steve Addicott, Vice President of Caveon, to ask him a few questions about the new partnership, what Caveon does and what security means to him. Here is an excerpt from our conversation:

Who is Caveon? Tell me about your company.

At Caveon Test Security, we fundamentally believe in quality testing and trustworthy test results. That’s why Caveon offers test security and test item development services dedicated to helping prevent test fraud and better protect our clients’ items, tests, and reputations.

What does security mean to you, and why is it important?

High-stakes test programs make important education and career decisions about test takers based on test results. We also spend a tremendous amount of time creating, administering, scoring, and reporting results. With increased security pressures from pirates and cheats, we are here to make sure that those results are trustworthy, reflecting the true knowledge and skills of test takers.

Why a partnership with Questionmark and why now?

With a growing number of Questionmark clients engaging in high-stakes testing, Caveon’s experience in protecting the validity of test results is a natural extension of Questionmark’s security features. For Caveon, we welcome the chance to engage with a vendor like Questionmark to help protect exam results.

And how does this synergy help Questionmark customers who deliver high-stakes tests and exams?

As the stakes in testing continue to rise, so do the challenges involved in protecting your program. Both organizations are dedicated to providing clients with the most secure methods for protecting exam administrations, test development investments, exam result validity and, ultimately, their programs’ reputations.

For more information on Questionmark’s dedication to security, check out this video and download the white paper: Delivering Assessments Safely and Securely.

Q&A: High-stakes online tests for nurses

Posted by Julie Delazyn

I spoke recently with Leanne Furby, Director of Testing Services at the National League for Nursing (NLN), about her case study presentation at the Questionmark 2015 Users Conference in Napa Valley March 10-13.

Leanne’s presentation, Transitioning 70 Years of High-Stakes Testing to Questionmark, explains NLN’s switch from a proprietary computer- and paper-based test delivery engine to Questionmark OnDemand for securely delivering standardized exams worldwide. I’m happy to share a snippet from our conversation:

Tell me about the NLN

The NLN is a national organization for faculty nurses and leaders in nurse education. We offer faculty development, networking opportunities, testing services, nursing research grants and public policy initiatives to more than 26,000 members.

Why did you switch to Questionmark?

Our main concern was delivering our tests and exams to a variety of different devices. We wanted our students to be able to take a test on a tablet or take a quiz on their own mobile devices, and this wasn’t something we could do with our proprietary test delivery engine.

Our second major reason to go with Questionmark was the Customized Assessment Reports and the analytics tools. Before making the switch, we were having to create reports and analyze results manually. It took time and resources. Now this is all integrated in Questionmark.

How do you use Questionmark assessments?

We have 90 different exam lines and deliver approximately 75,000 to 100,000 secure exams a year, both nationally and internationally, in multiple languages. The NLN partnered with Questionmark in 2014 to transition the delivery of these exams through a custom-built portal. Questionmark is now NLN’s turnkey solution—from item banking and test development with SMEs all over the world to inventory control, test delivery and analytics.

This transition has had positive outcomes for both our organization and our customers. We have developed a new project management policy, procedures for system transition and documentation for training at all levels. This has transformed the way we develop, deliver and analyze exams and the way we collect data for business and education purposes.

What are you looking forward to at the conference?

I am most looking forward to the opportunity to speak to other users and product developers to learn tips, tricks and little secrets surrounding the product. It’s so important to speak to people who have experience and can share ways of utilizing the software in ways you hadn’t thought of.

Thank you, Leanne, for taking time out of your busy schedule to discuss your session with us!

***

You have the opportunity to save $100 on your own conference registration: Just sign up by January 29 to receive this special early-bird discount.

How can a randomized test be fair to all?

Posted by Joan Phaup

James Parry, test development manager at the U.S. Coast Guard Training Center in Yorktown, Virginia, will answer this question during a case study presentation at the Questionmark Users Conference in San Antonio March 4 – 7. He’ll be co-presenting with LT Carlos Schwarzbauer, IT Lead at the USCG Force Readiness Command’s Advanced Distributed Learning Branch.

James and I spoke the other day about why tests created from randomly drawn items can be useful in some cases—but also about their potential pitfalls and some techniques for avoiding them.

When are randomly designed tests an appropriate choice?

James Parry

There are several reasons to use randomized tests. Randomization is appropriate when you think there’s a possibility of participants sharing the contents of their test with others who have not taken it. Another reason would be a computer-lab-style testing environment where you are testing many people on the same subject at the same time, with no blinders between the computers. Even if participants look at the screens next to them, chances are they won’t see the same items.

How are you using randomly designed tests?

We use randomly generated tests at all three levels of testing: low-, medium- and high-stakes. The low- and medium-stakes tests are used primarily at the schoolhouse level for knowledge- and performance-based quizzes and tests. We are also generating randomized tests for on-site testing using tablet computers or locally installed workstations.

Our most critical use is for our high-stakes enlisted advancement tests, which are administered both on paper and by computer. Participants are permitted to retake this test every 21 days if they do not achieve a passing score. Before we were able to randomize the test, there were only three parallel paper versions. Candidates knew this, so some would “test sample” without studying to get an idea of every possible question: they would retake the first version, then the second, and so forth until they passed. With randomization, the word has gotten out that this is not possible anymore.

What are the pitfalls of drawing items randomly from an item bank?

The biggest pitfall is the potential for producing tests that have different levels of difficulty or that don’t present a balance of questions across all the subjects you want to cover. A completely random test can be unfair. Suppose you produce a 50-item randomized test from an entire bank of 500 items. Participant “A” might get an easy test, “B” might get a difficult test and “C” might get a test with 40 items on one topic and 10 on everything else.

How do you equalize the difficulty levels of your questions?

This is a multi-step process. The item author has to develop enough items in each topic to provide at least 3 to 5 items for each enabling objective. They have to think outside the box to produce items at several cognitive levels, so there will be a variety of possible difficulty levels. This is the hardest part for them, because most are not trained test writers.

Once the items are developed, edited, and approved in workflow, we set up an Angoff rating session to assign a cut score for the entire bank of test items. Based on the Angoff score, each item is assigned a difficulty level of easy, moderate or hard and given a matching metatag within Questionmark. We use a spreadsheet to calculate the number and percentage of available items at each level of difficulty in each topic. Based on the results, the spreadsheet tells us how many items to select from the database at each difficulty level and from each topic. The test is then designed to match these numbers so that each time it is administered it is parallel, with the same level of difficulty and the same cut score.
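The allocation logic James describes can be sketched in a few lines of Python. This is only an illustration of the idea, not his actual spreadsheet: the field names (`topic`, `difficulty`) and the proportional-allocation rule are my assumptions.

```python
import random
from collections import Counter

def plan_parallel_form(bank, test_length):
    """Allocate items per (topic, difficulty) cell in proportion to the bank,
    so every generated form follows the same blueprint."""
    cells = Counter((item["topic"], item["difficulty"]) for item in bank)
    total = sum(cells.values())
    # Proportional allocation, rounded down; hand the remainder to the
    # largest cells so the counts always sum to the test length.
    plan = {cell: (n * test_length) // total for cell, n in cells.items()}
    remainder = test_length - sum(plan.values())
    for cell, _ in cells.most_common(remainder):
        plan[cell] += 1
    return plan

def draw_form(bank, plan, rng=random):
    """Randomly draw items to fill the blueprint: random within each cell,
    but parallel in difficulty mix and topic coverage across forms."""
    form = []
    for (topic, difficulty), count in plan.items():
        pool = [i for i in bank if i["topic"] == topic
                and i["difficulty"] == difficulty]
        form.extend(rng.sample(pool, count))
    return form
```

Because every form is drawn against the same blueprint, two candidates get different items but comparable tests, which is the fairness property the session is about.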

Is there anything audience members should do to prepare for this session?

Come with an open mind and a willingness to think outside of the box.

How will your session help audience members ensure their randomized tests are fair?

I will give them the tools they need, starting with a quick review of using the Angoff method to set a cut score, and then discuss the inner workings of the spreadsheet I developed to ensure each test is fair and equal.

***

See more details about the conference program here and register soon.

A streamlined system for survey administration and reporting

Posted by Joan Phaup

It’s great to talk to customers who will be presenting case studies at the Questionmark 2014 Users Conference. They all bring to their presentations the lessons they’ve learned from experience.

Conference participants have always taken a keen interest in how to use surveys effectively, so I was pleased to catch up with Scott Bybee, a training manager at Verizon who will be talking at the conference about Leveraging Questionmark’s Survey Capabilities Within a Multi-system Model.

What will you be sharing during your conference presentation?

A lot of it will be about our surveys, which are mostly Level 1 evaluations for training events and Level 3 self-assessments. I will explain how we use one generic survey template for all the courses being evaluated. We do this by passing parameters from our LMS into the special fields in Questionmark. I’ll also talk about how we integrate data from our LMS with the survey data to create detailed reports in a custom reporting system we built: we have everything we need to get very specific demographic reporting out of the system.

Scott Bybee

How is this approach helping you?

This system integrates reporting for all Level 1 and 3 surveys, giving us a single solution for all of our training-related reporting needs. Previously, we had to collect data from multiple systems and tie it all together manually, and the business used many different surveys, which made it hard to match up results due to variances in the questions. With this approach, everyone sees the same set of questions and the quality of the reporting is much higher.

The alternative would have been to collect demographic information using drop-down lists, which we’d have to constantly update and maintain. There’s also the issue of the participant possibly choosing the wrong options from the drop-downs. This way, we are passing everything along for them. They can’t make a mistake. Another advantage is that automatically including that information means it takes less time for them to complete the survey.

Do you have a key piece of advice about how to get truly useful data from surveys?

Make sure you are asking the right kinds of questions and are not trying to put too much into one question. Also, consider passing information directly from your LMS into Questionmark, so participants can’t make a mistake filling out a drop-down.

What do you hope people will take away from your session?

I hope they find out there are some really creative ways to use Questionmark to get what you want. For instance, we realized that by using Perception Integration Protocol (PIP), we could pass in all the variables needed for the user interface as well as for alignment with back-end reporting. I also want them to appreciate how much can be done by tying different systems together. The investment to make Questionmark work for surveys as well as assessments dramatically increased our return on investment (ROI).

What do you hope to take away from the conference?

This will be my fourth one to go to. Every time I go I learn something from the people who are there – things I’d never even thought about. I want to learn from people who are using the tool in innovative ways, and I also want to hear about where things are going in the future.

The conference agenda is taking shape here. You can save $200 if you register for the conference by December 12.

 

Better outcomes make the outcome better!

Posted by Joan Phaup

As we build the program for the Questionmark 2014 Users Conference, I’m having a great time chatting with presenters about their plans.

I spoke recently with Gail Watson from the US Marine Corps University’s College of Distance Education and Training. As an institution that educates large numbers of people about large and complex subjects, the college has grappled with how to make sure the tests it administers yield meaningful outcomes.

Gail says that careful attention to the relationship between business practices and the organization of the folder/topic system is critical to the effective delivery of a Questionmark assessment when the subject matter is broad or complex. When subject matter experts (SMEs) are made aware of Questionmark’s capabilities before creating questions, they can organize topics and questions in ways that result in better feedback, and the success of the assessments and topics can be better measured.

Her case study presentation in San Antonio will also focus on principles for using multiple response, pull-down, matching and ranking questions.

Gail Watson

Who would benefit most from your presentation?

I think it will help users who have broad, complex material to be assessed, as well as new users who may be confused about where to start. Even if you are knowledgeable about Questionmark, sometimes a subject is so big you don’t know how to set up your topics or use question blocks, things like that.

How will you help your audience?

I’d like to show how to assess the capabilities of Questionmark in order to make assessments fit in with (business) processes and decision points. It’s a matter of matching up the SME vocabulary with the Questionmark vocabulary. We have done this to such an extent that we now have a roadmap that we go through long before any items are put into Questionmark. I can look at a question number and tell from it exactly what topic it relates to and what folders it goes in.

I’d like to show the importance of understanding Questionmark’s capabilities and how they relate to the world of SMEs – so that people understand how to match their assessment up with decision points up front. You can’t do that after the fact. By getting this right, you end up providing good feedback to students, good feedback to SMEs and good feedback to managers.

Do your SMEs use Questionmark Live to author items?

Yes, and we use it heavily in our question review and approval process. This actually relates to the importance of the question number used in the Description field. I work with large batches of approved questions, and without that question number I don’t know what folder “path” to put them in. An example question number is 5212.4.1. I can tell just from looking at the number that this question resides in a topic folder four levels down. So as I start receiving questions, I can start producing assessments.
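The idea of routing an item to its folder from its number alone can be sketched in Python. This is purely illustrative: the college’s actual numbering scheme is not spelled out here, so the convention below (each dot-separated segment naming one more folder level under a root) is my assumption, and Gail’s real scheme evidently encodes depth somewhat differently.

```python
def folder_path(question_number, root="ItemBank"):
    """Derive a nested folder path from a dotted question number.
    Illustrative convention: each dot-separated prefix of the number
    names one folder level beneath the root folder."""
    parts = question_number.split(".")
    path = [root]
    for i in range(len(parts)):
        # Each successive prefix ("5212", "5212.4", ...) is one level deeper.
        path.append(".".join(parts[: i + 1]))
    return "/".join(path)
```

With this convention, `folder_path("5212.4.1")` yields `ItemBank/5212/5212.4/5212.4.1`, so a batch of approved questions can be filed automatically as it arrives, which is the workflow Gail describes.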

How do question and topic outcomes influence assessment results?

Creating good question outcomes for multiple select, matching, pull-down and ranking questions produces supporting data for item analysis of those question types. These questions can tell you which correct answers people are missing, which wrong answers are being selected, and so forth. Creating good topic outcomes gives subject matter experts (SMEs) insight into weak areas of knowledge through topic outcome reports, and gives failing students the proper assessment feedback to focus their study for re-testing.

Can you share some quick tips for organizing topics and folders effectively?

List your SME “vocabulary” and processes, then list Questionmark’s “vocabulary” and capabilities.  After that, match them up to create a topic folder and assessment map; a question management process; an assessment management process; and a review and maintenance process.

What do you hope people will take away from your session? 

Everyone who touches the assessment process needs to know how Questionmark works so that they can best leverage Questionmark’s capabilities. I’d like to help people match up what Questionmark can do with what they need to accomplish in their learning organizations. I also will be talking about Questionmark Live, because we use it quite heavily in the process of creating questions. I hope they’ll learn some tips for using it effectively.

***

Early-bird registration for the conference, to be held in San Antonio March 4 – 7, is open through December 12. Click here to register.