9 trends in compliance learning, training and assessment

Posted by John Kleeman

Where is the world of compliance training, learning and assessment going?

I’ve collaborated recently with two SAP experts, Thomas Jenewein of SAP and Simone Buchwald of EPI-USE, to write a white paper, “How to do it right – Learning, Training and Assessments in Regulatory Compliance” [free with registration]. In it, we suggest 9 key trends in the area. Here is a summary of the trends we see:

1. Increasing interest in predictive or forward-looking measures

Many compliance measures (for example, internal audit results or training completion rates) are backward-looking. They tell you what happened in the past but not about the problems to come. Companies can see clearly what is in their rear-view mirror, but the road ahead is rainy and unclear. There are many ways to use learning and assessment data to predict and look forward, and this is a key way to add business value.
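By way of illustration only (this sketch is not from the white paper; the features, threshold and data are entirely hypothetical and synthetic), a forward-looking measure could be as simple as a model that flags employees at elevated risk of a compliance incident based on assessment scores and training recency:

```python
# Hypothetical sketch: predicting compliance risk from assessment data.
# All data here is synthetic; feature names and the threshold are
# illustrative assumptions, not recommendations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500

# Hypothetical features per employee:
#   scores - most recent compliance assessment score (0-100)
#   months - months since the last refresher training
scores = rng.uniform(40, 100, n)
months = rng.uniform(0, 24, n)

# Synthetic ground truth: incidents are more likely with low scores
# and stale training (purely to make the example runnable).
risk = 1 / (1 + np.exp(0.08 * (scores - 70) - 0.1 * (months - 12)))
incidents = rng.random(n) < risk

X = np.column_stack([scores, months])
model = LogisticRegression().fit(X, incidents)

# Flag employees whose predicted incident risk exceeds a threshold,
# so refresher training can be targeted before problems occur.
at_risk = model.predict_proba(X)[:, 1] > 0.5
print(f"{at_risk.sum()} of {n} employees flagged for proactive follow-up")
```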

2. Monitoring employee compliance with policies

A recent survey of chief compliance officers suggested that their biggest operational issue is monitoring employee compliance with policies, with over half of organizations raising this as a concern. An increasing focus for many companies is going to be how they can use training and assessments to check understanding of policies and to monitor compliance.

3. Increasing use of observational assessments

We expect growing use of observational assessments to help confirm that employees are following policies and procedures and to help assess practical skills. Readers of this blog will no doubt be familiar with the concept. If not, see Observational Assessments—why and how.

4. Compliance training conducted on mobile devices

The world is moving to mobile devices and this of course includes compliance training and assessment.

5. Informal learning

You would be surprised not to see informal learning in our list of trends. Increasingly, we all understand that formal learning is the tip of the iceberg and that most learning is informal and often happens on the job.

6. Learning in the extended enterprise

Organizations are becoming more interlinked, and another important trend is the expansion of learning to the extended enterprise, such as contractors or partners. Whether for data security, product knowledge, anti-bribery or a host of other regulatory compliance reasons, it’s becoming crucial to be able to deliver learning and to assess not only your employees but those of other organizations who work closely with you.

7. Cloud

There is a steady movement towards the cloud and SaaS for compliance learning, training and assessment, with the huge advantage of delegating the IT to an outside party being the most compelling factor. Especially for compliance functions, the cloud offers a very flexible way to manage learning and assessment without requiring complex integrations or alignment with a company’s training departments or related functions.

8. Changing workforce needs

The workforce is constantly changing, and many “digital natives” are now joining organizations. To meet the needs of such workers, we’re increasingly seeing “gamification” in compliance training to help motivate and connect with employees. And the entire workforce is now accustomed to high-quality user interfaces on consumer websites and expects the same in corporate systems.

9. Big Data

E-learning and assessments are a rare way of reaching all your employees, and there is huge potential in analytics based on learning and assessment data. We can combine Big Data from valid and reliable learning assessments with data from finance, sales and HR sources. See, for example, the illustration below from SAP BusinessObjects, which graphs assessment data against performance data.
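As a rough sketch of the kind of data join involved (the OData URL, entity, field names and HR file below are placeholders for illustration, not Questionmark’s actual feed; consult your own feed’s metadata), one could pull assessment results and merge them with an HR extract:

```python
# Minimal sketch of combining assessment results with HR data.
# The OData URL, field names and CSV file are placeholders,
# not Questionmark's actual API.
import requests
import pandas as pd

ODATA_URL = "https://example.com/odata/AssessmentResults"  # placeholder

resp = requests.get(
    ODATA_URL,
    params={"$format": "json", "$select": "ParticipantId,Score"},
    auth=("user", "password"),  # placeholder credentials
    timeout=30,
)
resp.raise_for_status()
results = pd.DataFrame(resp.json()["value"])  # "value" per OData JSON

# Hypothetical HR extract with a performance rating per participant.
hr = pd.read_csv("hr_performance.csv")  # columns: ParticipantId, Rating

# Join the two sources so assessment score can be graphed against
# performance, as in the SAP BusinessObjects illustration below.
combined = results.merge(hr, on="ParticipantId")
print(combined[["Score", "Rating"]].corr())
```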

[Figure: assessment data exported using OData from Questionmark into SAP BusinessObjects]

For information on these trends, see the white paper written with SAP and EPI-USE: “How to do it right – Learning, Training and Assessments in Regulatory Compliance”, available free to download with registration. Thomas, Simone and I are also doing a free-to-attend webinar on this subject on October 1st (also a German language one on September 22nd). You can see details and links to the webinars here.

If you have other suggestions for trends, feel free to contribute them below.

Conceptual Assessment Framework: Building the Task Model

Posted by Austin Fossey

In my previous post, I introduced the student model—one of the three sections of the Conceptual Assessment Framework (CAF) in Evidence-Centered Design (ECD). At the other end of the CAF is the task model.

The task model defines the assumptions and specifications for what a participant can do within your assessment (e.g., Design and Discovery in Educational Assessment: Evidence-Centered Design, Psychometrics, and Educational Data Mining; Mislevy, Behrens, Dicerbo, & Levy, 2012). This may include the format of the items, the format of the assessment itself, and the work products that the participant may be expected to create during the assessment.

Most importantly, the task model should be built so that the assessment tasks are appropriate for eliciting the behavior that you will use as evidence about the assessed domain. For example, if you are assessing a participant’s writing abilities, an essay item would probably be specified in your task model instead of a slew of multiple choice items.

You may have already defined pieces of your task model without even realizing it. For example, if you are using Questionmark to conduct observational assessments in the workplace, you have probably decided that the best way to gather evidence about a participant’s proficiency is to have them perform a task in a work environment. That task model may elicit behavior (and therefore evidence) that could not be captured well in other environments, such as a traditional computer-based assessment.

In the observational assessment example below, the task model specifies that the participant has a ladder and an environment in which they can set up and climb the ladder. The task model might also specify information about the size of the ladder and the state of the ladder when the assessment begins.

[Figure: sample of an observational assessment]

The task model can also help you avoid making inappropriate assessment design decisions that might threaten the validity of your inferences about the results.

I often see test developers use more complicated item types (like drag and drop) when a multiple choice item would have been more appropriate. For example, picture an assessment about human anatomy. If you want to know if the participant can find a kidney amongst its surrounding anatomy during an operation, then you would likely build a drag and drop item. If you just want to know if a participant knows what a kidney looks like, you may want to use a multiple choice item with three to five pictures of organs from which the participant must choose.

The task model will also encompass other design and delivery decisions you will make about your assessment. For example, a time limit, the participant’s ability to review answers, access to resources (e.g., references, calculators), and item translations might all be specified in your task model.
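For instance (a minimal sketch; these field names are assumptions for illustration, not part of any ECD standard or Questionmark feature), a task model’s delivery decisions could be recorded as a structured specification:

```python
# Illustrative sketch: one way to record a task model's delivery
# decisions as a structured specification. Field names are
# assumptions for this example.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskModel:
    item_formats: List[str]          # e.g. ["essay"] for writing tasks
    time_limit_minutes: int          # overall assessment time limit
    allow_answer_review: bool        # may participants revisit answers?
    permitted_resources: List[str]   # e.g. ["calculator", "style guide"]
    translations: List[str] = field(default_factory=list)

# A task model for the writing-skills example discussed above:
writing_task_model = TaskModel(
    item_formats=["essay"],
    time_limit_minutes=60,
    allow_answer_review=True,
    permitted_resources=["style guide"],
    translations=["en", "de"],
)
print(writing_task_model)
```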

By specifying your task model in advance and tying your design decisions to the inferences you want to make about the participant’s results, you can ensure that your assessment instrument is built to gather the right evidence about your participants.

To Your Health! To err is human but assessments can help

Posted by John Kleeman

To err is human, but how do errors happen? And can assessments help reduce them?

As part of my learning about assessments in health care, I’ve come across some interesting statistics on errors in UK hospitals from the Medicines and Healthcare products Regulatory Agency (MHRA). It runs a system called SABRE, which collects and analyzes hospital errors relating to serious adverse reactions in blood transfusion and handling. Hospitals are encouraged to report errors so that better practice can be identified, and the MHRA gathered 788 human errors in 2011.

MHRA did a root cause analysis of why the human errors happened. You can see their report here; I have slightly simplified their information to identify the six areas below:

  • Process or procedure incorrect (22%)
  • Procedural steps omitted (23%)
  • Concentration error (29%)
  • Training misunderstood – it covered the area of the error but was misunderstood (15%)
  • Training missing – out of date or did not cover the area (6%)
  • Poor communication/rushing (5%)


The root causes of error may vary in other contexts, but it’s very interesting to see this data, and I suspect these six areas are common causes of error in many organizations, even if the percentages vary. Taking things beyond the MHRA data (and without any MHRA endorsement), I wonder where assessments can help reduce errors.

Let’s focus on the errors related to training issues and the incorrect or omitted procedures:

Some errors happen because training is misunderstood: the training covers the area, but the employee didn’t understand it properly, can’t remember it, or can’t apply it on the job. Assessments check that people do indeed understand. They also reduce the forgetting curve and can present scenarios or problems that check whether people can apply the training to everyday situations.

Other errors happen because training differs from what the real job involves: competencies or tasks needed in the real world aren’t part of the training or the post-training assessment. Job task analysis (using surveys to ask practitioners what really happens in a role) is a great way to correct this. See Doug Peterson’s article in this blog or mine on Job Task Analysis in Questionmark for more on this.

For errors that happen because procedural steps are omitted, an observational assessment is an effective, proactive solution. See Observational Assessments—why and how for more on this, or see my earlier post in this series on competency testing in health care. Such assessments will also pick up some cases where the procedure or process itself is wrong.

What about the other 34% of errors, which arise from concentration failures, rushing and poor communication? I am sure there are ways in which assessment can help, and I would welcome your comments and ideas.

Observational assessments: measuring performance in a 70+20+10 world

Posted by Jim Farrell

Informal Learning. Those two words are everywhere. You might see them trending on Twitter during a #lrnchat, dominating the agenda at a learning conference or gracing the pages of a training digest. We all know that informal learning is important, but measuring it can often be difficult. However, difficult does not mean impossible.

Remember that in the 70+20+10 model, 70 percent of learning results from on the job experiences and 20 percent of learning comes from feedback and the examples set by people around us. The final 10 percent is formal training. No matter how much money an organization spends on its corporate university, 90 percent of the learning is happening outside a classroom or formal training program.

So how do we measure the 90 percent of learning that is occurring to make sure we positively affect the bottom line?

First is performance support. Eons ago, when I was an instructional designer, it was the courseware and formal learning that received most of the attention. Looking back, we missed the mark: although the projects were deemed successful, we likely did not have the impact we could have had. Performance support is the informal learning tool that saves workers time and leads to better productivity. Simple web analytics can tell you which performance support content is searched for and used most on a daily basis.
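For example (a minimal sketch; the log file and its columns are hypothetical), a search log export could be mined for the most-used content:

```python
# Minimal sketch: mining a search log to see which performance
# support content is used most. The log format is hypothetical.
import pandas as pd

# Hypothetical export: one row per search, with the term entered
# and the support article the user opened.
log = pd.read_csv("search_log.csv")  # columns: timestamp, term, article

top_articles = log["article"].value_counts().head(10)
print("Most-used performance support content:")
print(top_articles)
```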

But on to what I think Questionmark does best: the 20 percent that occurs through feedback and the examples around us. Many organizations have turned to coaching and mentoring to give employees good examples and to define the competencies necessary to be a great employee.

I think most organizations are missing the boat when it comes to collecting data on this 20 percent. While coaching and mentoring are a step in the right direction, they probably aren’t yielding good analytics. Yes, organizations may use surveys and/or interviews to measure how mentoring closes performance gaps, but how do we get employees to the next level? I propose the use of observational assessments. By definition, observational assessments enable measurement of participants’ behavior, skills and abilities in ways not possible via traditional assessment.

Having a mentor observe someone perform while applying a rubric to their performance yields not only performance analytics but also the ability to compare against other individuals or against agreed benchmarks for a task. Feedback collected during the assessment can also be displayed in a coaching report for later debriefing and learning. And to me, that is just the beginning.
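To make the rubric idea concrete (a minimal sketch; the criteria, point caps and benchmark are invented for illustration), scoring an observation might look like this:

```python
# Illustrative sketch of rubric scoring for an observational
# assessment: an observer rates each criterion, and the total is
# compared with an agreed benchmark. Criteria are hypothetical.
RUBRIC = {
    "sets_up_equipment_safely": 3,   # maximum points per criterion
    "follows_procedure_steps": 3,
    "communicates_with_team": 2,
}
BENCHMARK = 6  # agreed minimum total for a competent performance

def score_observation(ratings: dict) -> tuple[int, bool]:
    """Sum the observer's ratings, capped at each criterion's maximum,
    and report whether the benchmark was met."""
    total = sum(min(ratings.get(c, 0), cap) for c, cap in RUBRIC.items())
    return total, total >= BENCHMARK

total, competent = score_observation(
    {"sets_up_equipment_safely": 3,
     "follows_procedure_steps": 2,
     "communicates_with_team": 2}
)
print(f"Score {total}/8 - benchmark met: {competent}")
```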

Developing an observational assessment should go beyond the tasks someone performs in their day-to-day work; it should embody the competencies necessary to solve business problems. Observational assessments allow organizations to capture performance data and measure the competencies necessary for the organization to succeed.

If you would like more information about observational assessments, click here.

Tips for effective mobile assessments

Posted by Julie Delazyn

Mobile phone usage is growing, and it’s rapidly changing the way we tackle daily tasks. In 2009, 90 percent of the world’s population was covered by a mobile signal, as opposed to 61 percent in 2003. These numbers continue to grow.

What does this mean for the learning industry? It means the ability to reach millions of people anywhere, anytime. Mobile delivery enables new possibilities for observational assessment and exam rooms that can be set up when and where they are needed. It can change the way we test, by bringing assessments into the field and evaluating the performance of specific tasks as they are being done. Delivering assessments on location and offering low-stakes quizzes and surveys on smartphones makes it possible to gather information and get results on the spot. These capabilities, along with QR code technology, which allows anyone to scan a code with a smartphone and be automatically redirected to a webpage or assessment, bring a whole new meaning to “thinking small.”
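As a small illustration of the QR code idea (a sketch using the third-party qrcode package, installed with `pip install qrcode[pil]`; the URL is a placeholder, not a real assessment link):

```python
# Sketch: generating a QR code that points a smartphone at an
# assessment URL. The URL below is a placeholder.
import qrcode

assessment_url = "https://example.com/assessments/12345"  # placeholder
img = qrcode.make(assessment_url)
img.save("assessment_qr.png")
print("QR code saved - scanning it opens the assessment URL.")
```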

The ability to fit information on small screens is the key to reaching people on the go. What are some of the tools you can use to reach and assess your own audience? How can you ensure that the assessments you create fit this new medium? We’ve put together tips for creating assessments and delivering them to mobile devices in this SlideShare presentation. Check out these ideas and feel free to share your own by leaving us a comment!