Getting Results — The Questionmark Blog: Discussing assessment trends and best practices, tech tips, case studies and more.

Special Briefing: Cloud-based Assessment Management for Government and Defense Agencies

Posted by Kristin Bernor

In just two weeks, the special briefing about Questionmark OnDemand for Government, a new cloud-based service dedicated to the needs of U.S. governmental and defense agencies, takes place. You don’t want to miss it!

Join us on Thursday, May 17th in Washington, DC, to learn how this new service enables agencies to securely author, deliver and analyze assessments, and to hear from dynamic speakers including:

  • Jono Poltrack, contributor to the Sharable Content Object Reference Model (SCORM) while at Advanced Distributed Learning (ADL)
  • Ted Stille, Department of State, will discuss the agency’s motivations and experience as project sponsor for Questionmark OnDemand for Government
  • Christina McGhee, Schellman audit team, will discuss the 3PAO role in the FedRAMP authorization process
  • Zaree Singer, FedRAMP PMO Support, will explain the FedRAMP ATO approval process
  • Ganesh Shenbagaraman, Microsoft, will discuss Microsoft Azure’s government cloud service

Space is limited, so register today!

Questionmark has finalized its FedRAMP System Security Plan, which documents our security systems and processes. The plan is now being reviewed by an accredited FedRAMP Third Party Assessment Organization (3PAO), which means we are officially in audit. Once audited, the plan becomes part of the FedRAMP library for Security Officers to review when granting individual agencies an “Authorization to Operate” (ATO). Note: Briefing attendees will be eligible to receive a pre-release copy of the FedRAMP System Security Plan.

Questionmark is widely deployed by U.S. governmental and defense agencies to author, deliver and report on high-stakes advancement exams, post-course tests for distance learning, job task analysis, medical training and education, competency testing, course evaluations and more. For government agencies currently running the on-premises Questionmark Perception installation, OnDemand for Government provides a cost-effective way to upgrade to a secure, best-in-class cloud-based assessment management system.

We look forward to seeing you in Washington for a morning of learning and networking!

Item Analysis for Beginners – Getting Started

Posted by John Kleeman
Do you use assessments to make decisions about people? If so, you should regularly run Item Analysis on your results. Item Analysis can help find questions that are ambiguous or mis-keyed, or that have choices that are rarely chosen. Improving or removing such questions will improve the validity and reliability of your assessment, helping you use assessment results to make better decisions. If you don’t use Item Analysis, you risk keeping poor questions that make your assessments less accurate.

Sometimes people can be fearful of Item Analysis because they are worried it involves too much statistics. This blog post introduces Item Analysis for people who are unfamiliar with it, and I promise no maths or stats! I’m also giving a free webinar on Item Analysis with the same promise.

An assessment contains many items (another name for questions), as illustrated below. You can use Item Analysis to look at how each item performs within the assessment and flag potentially weak items for review. Keeping only the stronger questions makes the assessment more effective.

[Figure: a series of items, with one flagged as weak]

Item Analysis looks at the performance of all your participants on the items. It calculates how easy or hard people find each item (“item difficulty” or “p-value”) and how well scores on an item correlate with scores on the assessment as a whole (“item discrimination”). Some of the problematic questions that Item Analysis can identify are listed below, with a short code sketch of how the two statistics are computed after the list:

  • Questions almost all participants get right, and which are therefore very easy. You might want to review these to see if they are appropriate for the assessment. See my earlier post Item Analysis for Beginners – When are very Easy or very Difficult Questions Useful? for more information.
  • Questions which are difficult, where a lot of participants get the question wrong. You should check such questions in case they are mis-keyed or ambiguous.
  • Multiple choice questions where some choices are rarely picked. You might want to improve such questions to make the wrong choices more plausible.
  • Questions where there is a poor correlation between getting the question right and doing well on the assessment as a whole – for example, questions that high-performing participants tend to get wrong. You should look at such questions in case they are ambiguous, mis-keyed or off-topic.
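As a rough illustration of what an Item Analysis report computes behind the scenes, here is a minimal sketch of the two statistics in Python. The response data are made up, and the item-rest correlation used for discrimination is an illustrative choice, not Questionmark’s exact algorithm:

    # A minimal sketch of the two core Item Analysis statistics, assuming
    # responses scored 0/1 in a participants-by-items matrix (toy data).
    import numpy as np

    # Each row is one participant, each column one item (1 = correct).
    responses = np.array([
        [1, 1, 0, 1],
        [1, 0, 0, 1],
        [0, 1, 1, 1],
        [1, 0, 0, 0],
        [0, 1, 0, 1],
    ])

    # Item difficulty (p-value): proportion of participants answering correctly.
    difficulty = responses.mean(axis=0)

    # Item discrimination: correlation between scores on an item and scores
    # on the rest of the assessment (the item itself is excluded).
    totals = responses.sum(axis=1)
    for i in range(responses.shape[1]):
        rest_score = totals - responses[:, i]
        r = np.corrcoef(responses[:, i], rest_score)[0, 1]
        print(f"Item {i + 1}: difficulty={difficulty[i]:.2f}, discrimination={r:.2f}")

Running this prints each item’s difficulty and discrimination; a negative discrimination, as the first item shows in this toy data, is exactly the kind of result that warrants a check for mis-keying.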

There is a wealth of information in an Item Analysis report, and assessment experts will delve into it in detail. But much of the key information is useful to anyone creating and delivering quizzes, tests and exams.

The Questionmark Item Analysis report includes a graph which plots the difficulty of items against their discrimination, as in the example below. It flags questions by marking them amber or red if they fall into categories which may need review. In the illustration below, for example, four questions are marked in amber as having low discrimination and so as potentially worth looking at; a simple sketch of this kind of flagging logic follows the illustration.

[Figure: Questionmark Item Analysis report showing some questions flagged green and some amber]
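For readers who prefer code to pictures, here is a minimal sketch of threshold-based review flags of the kind the report illustrates. The thresholds are illustrative assumptions, not Questionmark’s published criteria:

    # A minimal sketch of review flags based on item statistics.
    # The thresholds below are illustrative assumptions only.
    def review_flag(difficulty: float, discrimination: float) -> str:
        """Suggest a review status for an item from its two statistics."""
        if discrimination < 0.0:
            return "red: negative discrimination - check for mis-keying"
        if discrimination < 0.2:
            return "amber: low discrimination - worth reviewing"
        if difficulty > 0.95 or difficulty < 0.25:
            return "amber: very easy or very difficult - check fit with purpose"
        return "green: no review suggested"

    print(review_flag(0.62, 0.05))  # amber: low discrimination
    print(review_flag(0.50, 0.40))  # green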

If you are running an assessment program and not using Item Analysis regularly, the trustworthiness of your results is open to question. By using it to identify and improve weak questions, you should be able to improve your validity and reliability.

Item Analysis is surprisingly effective in practice. I’m one of the team at Questionmark responsible for managing our data security test, which all employees take annually to check their understanding of information security and data protection. When we recently reviewed the test, Item Analysis very quickly found one question with poor statistics where the technology had changed but we’d not updated the wording, and another where two of the choices could be considered right, which made it hard to answer. It made our review faster and more effective and helped us improve the quality of the test.

If you want to learn a little more about Item Analysis, I’m running a free webinar on the subject “Item Analysis for Beginners” on May 2nd. You can see details and register for the webinar at https://www.questionmark.com/questionmark_webinars. I look forward to seeing some of you there!


Seven Ways Assessments Fortify Compliance

Posted by John Kleeman
[Figure: a tablet being used to take an assessment, with currency symbols adjacent]

Why do most of the world’s banks, pharmaceutical companies, utilities and other large companies use online assessments to test the competence of their employees?

It’s primarily because compliance fines around the world are high, and assessments reduce the risk of regulatory compliance failures. Assessments also protect the organization in the event of an individual mis-step, by proving that the organization had checked the individual’s knowledge of the rules prior to the mistake.

Here, from my experience, are seven reasons companies use assessments:

1. Regulators encourage assessments 

Some regulators require companies to test their workforce regularly. For example the US FDIC says in its compliance manual:

“Once personnel have been trained on a particular subject, a compliance officer should periodically assess employees on their knowledge and comprehension of the subject matter”

And the European Securities and Markets Authority says in its guidelines for assessment of knowledge and competence:

“ongoing assessment will contain updated material and will test staff on their knowledge of, for example, regulatory changes, new products and services available on the market”

Other regulators focus more on companies ensuring that their workforce is competent, rather than specifying how companies ensure it, but most welcome clear evidence that personnel have been trained and have shown understanding of the training.

[Figure: people sitting at desks with computers, taking tests]

2. Assessments demonstrate commitment to your workforce and to regulators

Many compliance errors happen because managers pay lip service to following the rules but signal by their behavior that they don’t mean it. If you assess all employees and managers regularly, and require additional training or apply sanctions for failing tests, it sends a clear message to your workforce that knowledge and observance of the rules is genuinely required.

Some regulators also take an organization’s commitment to compliance into account when setting the level of fines, and may reduce fines where there is solid evidence of compliance activities, of which assessments can be a useful part. For example, the German Federal Court recently ruled that fines should be lower where there is evidence of effective compliance management.

3. Assessments find problems early

Online assessments are one of the few ways in which a compliance team can touch all employees in an organization. You can see results by team, department, location or individual, identify who understands what, and focus on the weak areas that need improving. There is no better way to reach all employees.

4. Assessments document understanding after training

Many regulators require training to be documented. Giving someone an assessment after training doesn’t just confirm that they attended the course; it confirms that they understood the training.

5. Assessments increase retention of knowledge and reduce forgetting

Can you remember everything you learned? Of course, none of us can!

There is good evidence that quizzes and tests increase retention and reduce forgetting. This is partly because people study for tests, reminding themselves of the knowledge they learned, which helps them retain it. And it is partly because retrieving information in a quiz or test makes it easier to retrieve the same information in future, and so more likely to be applied in practice when needed.

6. By allowing testing out, assessments reduce the time and cost of compliance training

[Figure: flow chart – take test; if pass, skip training; otherwise do training]

Many organizations permit employees to “test out” of compliance training: people take a test, and if they demonstrate good enough knowledge, they don’t need to attend the training. This concentrates training resources and employee time on the areas where they are needed, and avoids demoralizing employees with boring compliance training that repeats what they already know.

7. Assessments reduce human error which reduces the likelihood of a compliance mis-step

Many compliance failures arise from human error. Root cause analysis suggests that a good proportion of errors are caused by people not understanding training, by training being missing, or by people not following procedures. Assessments can pick up and prevent mistakes caused by people not understanding what they should do or how to follow procedures, and so reduce the risk of error.


If you are interested in learning more about how online assessments mitigate compliance risk, Questionmark are giving a webinar, “Seven Ways Assessments Fortify Compliance”, on April 11th. To register for this or our other free webinars, go to www.questionmark.com/questionmark_webinars.

2018 Questionmark Conference Recap – 2019 Announced

Posted by Kristin Bernor

Wow, just wow! The 2018 Questionmark User’s Conference inspired newcomers to Questionmark and long-time customers alike. Being able to share experiences and use cases was exhilarating. The three-day event was a deep dive into learning, fun and networking – the enthusiasm was truly contagious. From the opening general session, where new features and functions were unveiled, to the trolley ride over to the railroad museum, where guests were greeted with amazing local food and music, this year’s conference was without a doubt the preeminent user’s group experience!

Amazing customer presenters were on hand to share with their peers the best practices and lessons learned that truly maximize the power of their assessments. We greatly appreciated the case studies presented by our valued customers from Progressive Insurance, the American Institute of Certified Public Accountants, Southwest Gas, Intuitive Surgical, Rio Salado College and Caterpillar.

A lively panel discussion focused on transformation and growth, and on how assessments are integral to any organization’s success in those areas. Panel participants included Andrew Dominguez of Southwest Gas, Tricia Allen of Polycom, Bernt Nilsen of Norsk Test and Dave “Lefty” Lefkowith of the Louisiana Department of Education. Their invaluable insights into creating high-performing teams through assessments were an “aha moment” for many, and we are truly grateful for their participation.

Engaging sessions delved into best practices for using assessments to identify knowledge gaps, improve performance and make informed, defensible decisions. Questionmark Founder and Executive Director John Kleeman presented a session based on a recent white paper, “Assessing for Situational Judgment”. Another well-attended session detailed how to extend your assessment management capabilities with Questionmark apps, leaving attendees excited to get back to the office and put them to use.

Evenings provided many opportunities for networking, fine dining and fun. Small group “dine-arounds” — a long-time, popular tradition at our annual conference — gave us all a great chance to take in the sights as we strolled through beautiful, historic Savannah. Delegates and staff attending our Thursday evening reception, hosted at the Georgia State Railroad Museum, enjoyed delicious food, great music and networking.

The conference closed with the unveiling of next year’s Questionmark Conference dates and location. Drum roll please…

Save the date for Questionmark Conference 2019: join us in San Diego at the Hard Rock Hotel, February 26 – March 1, 2019, for even more learning, networking and fun!

Watch the conference recap below!

Questionmark Recognized with Brandon Hall Award for Assessment Technology

Posted by Kristin Bernor

What an amazing way to start the year and what an honor! We were recognized by Brandon Hall Group with a “Best Advance in Assessment and Survey Technology” award as part of their Excellence in Technology Program. Questionmark was presented with this award at the Brandon Hall Group HCM Excellence Conference Awards Gala in Palm Beach Gardens, Florida on February 1, 2018.

[Photo: the Brandon Hall Group award]

“Questionmark goes beyond just capturing results and provides tools for analyzing, interpreting, and understanding the information that assessments are gathering. For organizations with regulatory, compliance, or certification needs, Questionmark provides unrivaled tools for testing and measuring results,” said David Wentworth, Principal Analyst at Brandon Hall Group.

The award recognizes those organizations that have successfully deployed programs, strategies, modalities, processes, systems, and tools that have achieved measurable results. Criteria used to judge applicants included value proposition, product innovation, unique differentiators, product demonstration and measurable results.

See the press release here.

Item Analysis for Beginners – When are very Easy or very Difficult Questions Useful?

Posted by John Kleeman

I’m running a session at the Questionmark user conference next month on Item Analysis for Beginners, and thought I’d share the answer to an interesting question in this blog post.

[Figure: Item Analysis report fragment showing a question with difficulty 0.998 and discrimination 0.034]

When you run an Item Analysis report, one of the useful statistics you get on a question is its “p-value” or “item difficulty”. This is a number from 0 to 1; the higher the value, the easier the question. An easy question might have a p-value of 0.9 to 1.0, meaning 90% to 100% of participants answer it correctly. A difficult question might have a p-value of 0.0 to 0.25, meaning 25% or fewer of participants answer it correctly. For example, the report fragment shown above includes a question with a p-value of 0.998, which means it is very easy and almost everyone gets it right.
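As a quick worked example of the arithmetic (the counts here are illustrative, not taken from the real report), a p-value of 0.998 is what you would see if 499 of 500 participants answered the item correctly:

    # Worked p-value example with illustrative counts.
    correct_answers = 499
    total_participants = 500
    p_value = correct_answers / total_participants
    print(p_value)  # 0.998 - a very easy item almost everyone gets right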

Whether such questions are appropriate depends on the purpose of the assessment. Most participants will get difficult questions wrong and easy questions right. In general, therefore, very easy and very difficult questions are less helpful than other questions at discriminating between participants, and so less useful when you use the assessment for measurement purposes.

Here are three reasons why you might decide to include very difficult questions in an assessment:

  1. Sometimes your test blueprint requires questions on a topic and the only ones you have available are difficult ones – if so, you need to use them until you can write more.
  2. If a job has high performance needs and you need to filter out a few participants from many, then very difficult questions can be useful. This might apply for example if you are selecting potential astronauts or special forces team members.
  3. If you need to assess a wide range of ability within a single assessment, then you may need some very difficult questions to be able to assess abilities within the top performing participants.

And here are five reasons why you might decide to include very easy questions in an assessment:

  1. Answering questions gives retrieval practice and helps participants remember things in future – so including easy questions still helps reduce people’s forgetting.
  2. In compliance or health and safety, you may choose to include basic questions that almost everyone gets right. This is because if someone gets it wrong, you want to know and be able to intervene.
  3. More broadly, a test blueprint sometimes requires you to cover topics that almost everyone knows and about which it’s not practical to write difficult questions.
  4. Easy questions at the start of an assessment can build confidence and reduce test anxiety. See my blog post Ten tips on reducing test anxiety for online test-takers for other ways to deal with test anxiety.
  5. If the purpose of your assessment is to measure someone’s ability to process information quickly and accurately at speed, then including many low difficulty questions that need to be answered in a short time might be appropriate.

If you want to learn more about Item Analysis, search this blog for other articles. You might also find the Questionmark user conference useful: as well as my session on Item Analysis, there are many other useful sessions, including ones on setting cut-scores in a fair, defensible way and on identifying knowledge gaps. The conference also gives you the opportunity to learn from and network with other assessment practitioners – I look forward to seeing some of you there.