Learning, Training and Assessments in Regulatory Compliance – Implementation Best Practices

Posted by John Kleeman

I’m pleased to let you know of a new joint SAP and Questionmark white paper on implementation best practices for learning, training and assessments in regulatory compliance. You can download the white paper here.

There has been a huge change in the regulatory environment for companies in the last few years. This is illustrated nicely by the graph below showing the number of formal warning letters the U.S. Food and Drug Administration (FDA) issued in the period from 2010 to 2016 for various compliance infractions.

Figure: FDA warning letters rose from a few hundred a year in 2010 to over 10,000 a year by 2016.

Of course it’s not just warning letters that regulators issue; there have also been huge increases in the fines imposed on companies for rule breaches in areas including banking, data protection, price-fixing and manufacturing.

Failure to effectively train or assess employees is a significant cause of compliance errors, and this white paper authored by SAP experts Thomas Jenewein, Simone Buchwald and Mark Tarallo and me (Questionmark Founder and Executive Director, John Kleeman) explains how technology can help address the issue.

The white paper starts by looking at key factors increasing the need for training, learning, and assessments to ensure that businesses stay compliant and then goes on to consider three drivers for compliance learning – Organization Imposed, Operations Critical and Regulatory. The white paper then looks at how

  • A Learning Management System (LMS) can manage compliance learning
  • Learning Content and Documentation Authoring Tools can author compliance learning
  • An Assessment Management System can be used to diagnose the training needed, to help direct learning and to check competence, knowledge and skills.

A typical LMS includes basic quiz and survey capabilities, but when making decisions about people, such as whether to promote, hire, fire, or confirm competence for compliance or certification purposes, companies need more. The robust functionality of an effective assessment management system allows organizations to create reliable, valid, and more trustworthy assessments. Assessment management systems and LMSs often work together, with test-takers directed to assessments via single sign-on from the LMS.

The white paper describes how SAP SuccessFactors Learning, SAP Enable Now and Questionmark software work together to help companies manage and deliver effective compliance learning, training and assessments – and so mitigate regulatory risk. It goes on to describe some key trends in compliance training and assessments that the authors see going forward, including how cybersecurity and data protection are impacting compliance.

The white paper is a quick, easy and useful read – you can download it here.

Agree or disagree? 10 tips for better surveys — part 3

Posted by John Kleeman

This is the third and last post in my “Agree or disagree” series on writing effective attitude surveys. In the first post I explained the process survey participants go through when answering questions and the concept of satisficing – where some participants give what they think is a satisfactory answer rather than stretching themselves to give the best answer.

In the second post I shared these five tips based on research evidence on question and survey design.

Tip #1 – Avoid Agree/Disagree questions

Tip #2 – Avoid Yes/No and True/False questions

Tip #3 – Each question should address one attitude only

Tip #4 – Minimize the difficulty of answering each question

Tip #5 – Randomize the responses if order is not important

Here are five more:

Tip #6 – Pretest your survey

Just as with tests and exams, you need to pretest or pilot your survey before it goes live. Participants may interpret questions differently than you intended, so it’s important to get the language right and trigger the right judgement in the participant. Here are some good pre-testing methods:

  • Get a peer or expert to review the survey.
  • Pre-test with participants and measure the response time for each question (shown in some Questionmark reports). A longer response time could indicate a more confusing question; a simple way to scan such timing data is sketched after this list.
  • Allow participants to provide comments on questions they think are confusing.
  • Follow up with your pretesting group by asking them why they gave particular answers or what they thought you meant by your questions.
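As a rough illustration of the response-time idea, here is a minimal Python sketch. The timing data is made up rather than pulled from an actual Questionmark report, and the 1.5x threshold is an arbitrary assumption you would tune for your own survey:

```python
from statistics import mean

# Hypothetical pretest data: response times in seconds for each question.
response_times = {
    "Q1": [12, 15, 11, 14, 13],
    "Q2": [41, 38, 55, 47, 44],   # noticeably slower, worth a closer look
    "Q3": [16, 18, 15, 17, 19],
}

overall = mean(t for times in response_times.values() for t in times)

# Flag questions whose mean response time is well above the overall mean.
for question, times in response_times.items():
    if mean(times) > 1.5 * overall:
        print(f"{question}: mean {mean(times):.1f}s vs overall {overall:.1f}s; review the wording")
```

Questions flagged this way are good candidates for rewording or for follow-up interviews with your pretest group.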

Tip #7 – Make survey participants realize how useful the survey is

The more motivated a participant is, the more likely he or she is to answer optimally rather than just satisficing and choosing a good enough answer. To quote Professor Krosnick in his paper The Impact of Satisficing on Survey Data Quality:

“Motivation to optimize is likely to be greater among respondents who think that the survey in which they are participating is important and/or useful”

Ensure that you communicate the goal of the survey and make participants feel that completing it thoughtfully will benefit something they believe in or value.

Tip #8 – Don’t include a “don’t know” option

Including a “don’t know” option usually does not improve the accuracy of your survey. In most cases it reduces it. To those of us used to the precision of testing and assessment, this is surprising.

Part of the reason is that providing a “don’t know” or “no opinion” option allows participants to disengage from your survey and so diminishes useful responses. Also, people are better at guessing or estimating than they think they are, so they will tend to choose an appropriate answer if they do not have an option of “don’t know”. See this paper by Mondak and Davis, which illustrates this in the political field.

Tip #9 – Ask questions about the recent past only

The further back in time they are asked to remember, the less accurately participants will answer your questions. We all have a tendency to “telescope” the timing of events and imagine that things happened earlier or later than they did. If you can, ask about the last week or the last month, not about the last year or further back.

Tip #10 – Trends are good

Error can creep into survey results in many ways. Participants can misunderstand the question. They can fail to recall the right information. Their judgement can be influenced by social pressures. And they are limited by the choices available. But if you use the same questions over time with a similar population, you can be pretty sure that changes over time are meaningful.

For example, if you deliver an employee attitude survey with the same questions for two years running, then changes in the results to a question (if statistically significant) probably mean a change in employee attitudes. If you can use the same or similar questions over time and can identify trends or changes in results, such data can be very trustworthy.
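If you want a quick way to judge whether a year-on-year change is statistically significant, a two-proportion z-test is one common approach. The sketch below is a minimal Python illustration with hypothetical numbers; it is not part of Questionmark’s reporting, and for small samples or more complex survey designs you would want a proper statistical review:

```python
import math

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided z-test for a difference between two proportions.

    x1, x2: respondents choosing the answer in year 1 and year 2.
    n1, n2: number of respondents in each year.
    Returns the z statistic and the two-sided p-value.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                      # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical example: 248 of 400 respondents (62%) agreed last year,
# 266 of 380 respondents (70%) agreed this year.
z, p = two_proportion_z_test(248, 400, 266, 380)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In this made-up example the p-value comes out around 0.02, so the shift from roughly 62 percent to 70 percent agreeing is unlikely to be chance alone, assuming independent samples each year.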

I hope you’ve found this series of articles useful.  For more information on how Questionmark can help you create, deliver and report on surveys, see www.questionmark.com. I’ll also be presenting at Questionmark’s 2016 Conference: Shaping the Future of Assessment in Miami April 12-15. Check out the conference page for more information.

Agree or disagree? 10 tips for better surveys — Part 2

Posted by John Kleeman

In my first post in this series, I explained that survey respondents go through a four-step process when they answer each question: comprehend the question, retrieve/recall the information that it requires, make a judgement on the answer and then select the response. There is a risk of error at each step. I also explained the concept of “satisficing”, where participants often give a satisfactory answer rather than an optimal one – another potential source of error.

Today, I’m offering some tips for effective online attitude survey design, based on research evidence. Following these tips should help you reduce error in your attitude surveys.

Tip #1 – Avoid Agree/Disagree questions

Although these are one of the most common types of questions used in surveys, you should try to avoid questions which ask participants whether they agree with a statement.

There is an effect called acquiescence bias, where some participants are more likely to agree than disagree. The research suggests that some participants are easily influenced and so tend to agree with statements put to them. This seems to apply particularly to participants who are more junior or less well educated, who may assume that what is asked of them is likely to be true. For example, Krosnick and Presser report that, across 10 studies, an average of 52 percent of people agreed with an assertion, while only 42 percent disagreed with its opposite. If you are interested in finding out more about this effect, see this 2010 paper by Saris, Revilla, Krosnick and Schaeffer.

Satisficing – where participants just try to give a good enough answer rather than their best answer – also increases the number of “agree” answers.

For example, do not ask a question like this:

My overall health is excellent. Do you:

  • Strongly Agree
  • Agree
  • Neither Agree nor Disagree
  • Disagree
  • Strongly Disagree

Instead re-word it to be construct specific:

How would you rate your health overall?

  • Excellent
  • Very good
  • Good
  • Fair
  • Bad
  • Very bad

 

Tip #2 – Avoid Yes/No and True/False questions

For the same reason, you should avoid Yes/No questions and True/False questions in surveys. People are more likely to answer Yes than No due to acquiescence bias.

Tip #3 – Each question should address one attitude only

Avoid double-barrelled questions that ask about more than one thing. It’s very easy to ask a question like this:

  • How satisfied are you with your pay and work conditions?

However, someone might be satisfied with their pay but dissatisfied with their work conditions, or vice versa. So make it two separate questions.

Tip #4 – Minimize the difficulty of answering each question

If a question is harder to answer, it is more likely that participants will satisfice – give a good enough answer rather than the best answer. To quote Stanford Professor Jon Krosnick, “Questionnaire designers should work hard to minimize task difficulty”. For example:

  • Use as few words as possible in questions and responses.
  • Use words that all your audience will know.
  • Where possible, ask questions about the recent past not the distant past as the recent past is easier to recall.
  • Decompose complex judgement tasks into simpler ones, with a single dimension to each one.
  • Where possible make judgements absolute rather than relative.
  • Avoid negatives. Just like in tests and exams, using negatives in your questions adds cognitive load and makes the question less likely to get an effective answer.

The less cognitive load involved in questions, the more likely you are to get accurate answers.

Tip #5 – Randomize the responses if order is not important

The order of responses can significantly influence which ones get chosen.

There is a primacy effect in surveys where participants more often choose the first response than a later one. Or if they are satisficing, they can choose the first response that seems good enough rather than the best one.

There can also be a recency effect whereby participants read through a list of choices and choose the last one they have read.

In order to avoid these effects, if your choices do not have a clear progression or some other reason for being in a particular order, randomize them.  This is easy to do in Questionmark software and will remove the effect of response order on your results.
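To make the principle concrete, here is a tiny generic Python sketch. It is not Questionmark’s implementation, just an illustration of shuffling unordered choices while leaving ordered scales alone:

```python
import random

# An ordered rating scale: the order carries meaning, so do NOT shuffle it.
rating_scale = ["Excellent", "Very good", "Good", "Fair", "Bad", "Very bad"]

# Unordered choices (e.g. a list of product features): present them in a fresh
# random order to each participant to cancel out primacy and recency effects.
features = ["Price", "Reliability", "Support", "Ease of use"]
presented = random.sample(features, k=len(features))  # shuffled copy per participant
print(presented)
```

random.sample returns a new shuffled list each time it is called, so every participant can see a different order while the underlying choice data stays unchanged.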

Here is a link to the next segment of this series: Agree or disagree? 10 tips for better surveys — part 3

Question Type Report: Use Cases

Posted by Austin Fossey

A client recently asked me if there is a way to count the number of each type of item in their item bank, so I pointed them toward the Question Type Report in Questionmark Analytics. While this type of frequency data can also be easily pulled using our Results API, it can be useful to have a quick overview of the number of items (split out by item type) in the item bank.

The Question Type Report does not need to be run frequently (and Analytics usage stats reflect that observation), but the data can help indicate the robustness of an item bank.

This report is most valuable in situations involving topics for a specific assessment or set of related assessments. While it might be nice to know that we have a total of 15,000 multiple choice (MC) items in the item bank, these counts are trivial unless we have a system-wide practical application—for example planning a full program translation or selling content to a partner.

This report can provide a quick profile of the population of the item bank or a topic when needed, though more detailed item tracking by status, topic, metatags, item type, and exposure is advisable for anyone managing a large-scale item development project. Below are some potential use cases for this simple report.

Test Development and Maintenance:
The Question Type Report’s value is primarily its ability to count the number of each type of item within a topic. If we know we have 80 MC items in a topic for a new assessment, and they all need to be reviewed by a bias committee, then we can plan accordingly.

Form Building:
If we are equating multiple forms using a common-item design, the report can help us determine how many items go on each form and the degree to which the forms can overlap. Even if we only have one form, knowing the number of items can help a test developer check that enough items are available to match the blueprint.
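As a rough sketch of that kind of check, one could tally items by topic and type and compare the counts with blueprint requirements. The data structure below is hypothetical, not an actual Question Type Report or item-bank export:

```python
from collections import Counter

# Hypothetical item-bank extract: (topic, item type) for each item on the assessment's topics.
items = [
    ("Network Security", "MC"), ("Network Security", "MC"), ("Network Security", "Essay"),
    ("Data Protection", "MC"), ("Data Protection", "TrueFalse"), ("Data Protection", "MC"),
]
counts = Counter(items)

# Blueprint requirement: minimum number of items of each type needed per topic.
blueprint = {("Network Security", "MC"): 2, ("Data Protection", "MC"): 3}

for (topic, item_type), required in blueprint.items():
    available = counts.get((topic, item_type), 0)
    status = "OK" if available >= required else "SHORT"
    print(f"{topic} / {item_type}: have {available}, need {required} -> {status}")
```

In practice you would pull the topic and item-type data from your item banking tool rather than hard-coding it.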

Item Development:
If the report indicates that there are plenty of MC items ready for future publications, but we only have a handful of essay items to cover our existing assessment form, then we might instruct item writers to focus on developing new essay questions for the next publication of the assessment.


Example of a Question Type Report showing the frequency distribution by item type.

 

High-stakes assessment: It’s not just about test takers

Posted by Lance

In my last post I spent some time defining how I think about the idea of high-stakes assessment. I also talked about how these assessments affect the people who take them, including how important they can be to a person’s ability to get or do a job.

Now I want to talk a little bit about how these assessments affect the rest of us.

The rest of us

Guess what? The rest of us are affected by the outcomes of these assessments. Did you see that coming?

But seriously, the credentials or scores that result from these assessments affect large swathes of the public. Ultimately that’s the point of high-stakes assessment. The resulting certifications and licenses exist to protect the public. These assessments act as barriers preventing incompetent people from practicing professions where competency really matters.

It really matters

What are some examples of “really matters”? Well, when hiring, it really matters to employers that the network techs they hire know how to configure a network securely, not that the techs just say they do. It matters to the people crossing a bridge that the engineers who designed it knew their physics. It really matters to every one of us that our doctor, dentist, nurse, or surgeon knows what they are doing when they treat us. It really matters to society at large when we measure (well) the children and adults who take large-scale assessments like college entrance exams.

At the end of the day, high-stakes exams are high-stakes because in a very real way, almost all of us have a stake in their outcome.

Separating the wheat from the chaff

There are a couple of ways that high-stakes assessments do what they do. Some assessments are simply designed to measure “minimal competence,” with test takers either ending above the line—often known as “passing”—or below the line. The dreaded “fail.”

Other assessments are designed to place test takers on a continuum of ability. This type of assessment assigns scores to test takers, and the score ranges often appear odd to laypeople. For example, the SAT uses a 200 – 800 scale.
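To give a feel for where such numbers come from, here is a purely illustrative Python sketch of a linear raw-to-scaled transformation. Real programs such as the SAT use statistical equating rather than a fixed formula like this, so treat it only as an illustration of why reported ranges look unfamiliar:

```python
def scale_score(raw: int, raw_max: int, scaled_min: int = 200, scaled_max: int = 800) -> int:
    """Map a raw score (0..raw_max) onto a reporting scale with a linear transformation.

    Illustration only; operational testing programs use equating procedures
    that adjust for differences in form difficulty.
    """
    fraction = raw / raw_max
    return round(scaled_min + fraction * (scaled_max - scaled_min))

print(scale_score(38, 52))   # 38 of 52 raw points comes out around 638 on a 200-800 scale
```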

Want to learn more? Hang on till next time!

Does online learning and assessment help sustainability?

Posted by John Kleeman

Encouraged by public interest and increasing statutory controls, most large organizations care about and report on environmental sustainability and greenhouse gas emissions. I’ve been wondering how much online assessments and the wider use of e-learning help sustainability. Does taking assessments and learning online contribute to the planet’s well-being?

Does using computers instead of paper save trees?

It’s easy to see that by taking exams on computer, we save a lot of paper. Trees vary in size, but an average tree might yield about 50,000 pages of paper. If a typical paper test uses 10 pages, then an organization that delivers 100,000 tests per year is using 20 trees a year. Or suppose a 100-page piece of learning material is distributed to 10,000 learners: the 20 trees cut down for that learning would be saved if it were delivered online.
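For anyone who wants to plug in their own numbers, here is the same arithmetic as a tiny Python sketch. The pages-per-tree figure is the rough estimate quoted above, not a precise constant:

```python
PAGES_PER_TREE = 50_000  # rough estimate; real yields vary a lot by tree and paper type

def trees_used(pages_per_copy: int, copies: int) -> float:
    """Approximate number of trees consumed by printing `copies` of a document."""
    return pages_per_copy * copies / PAGES_PER_TREE

print(trees_used(10, 100_000))   # 100,000 ten-page tests    -> 20.0 trees a year
print(trees_used(100, 10_000))   # 10,000 hundred-page packs -> 20.0 trees
```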

These are useful benefits, but they need to be set against the environmental costs of the computers and electricity used. The environmental benefit is probably modest.

What about the benefits of reduced business travel?

A much stronger environmental case might be made around reduced travel. Taking a test on paper and/or in a test center likely means travelling. So we’re not surprised to be seeing increased use of online proctoring. For example, SAP are starting to use it for their certification exams. Online proctoring means that a candidate doesn’t have to travel to a test center but can take an exam from their home or office. This saves time and money. It also eliminates the environmental costs of travel. Learning online rather than going to a classroom does the same.

Training and assessment are only a small reason for business travel, but the overall environmental impact of business travel is huge. One large services company has reported that 67 percent of its carbon footprint in 2014 was related to it; another puts the figure at over 30 percent. Many large companies have internal targets to reduce greenhouse gas emissions from business travel.

In the academic world, the Open University in the UK performed a study a few years back on the carbon benefits of their model of distance learning compared with more conventional university education. The study suggested that carbon emissions were 85 percent lower with distance education. However, the benefit of electronic rather than paper delivery within distance learning was more modest, at 12 percent, partly because students often print the e-learning materials. This suggests that there is a very substantial benefit in distance learning and a smaller benefit in it being electronic rather than paper-based.

The strongest benefit of online assessment is that it gives accurate information about people’s knowledge, skills and abilities to help organizations make good decisions. But it does seem that there may well be a useful environmental benefit too.