Measuring the Effectiveness of Social and Informal Learning

Posted by Julie Delazyn

How can you use assessments to measure the effectiveness of informal learning? If people are learning at different times, in different ways and without structure, how do you know it’s happening? And how can you justify investment in social and informal learning initiatives?

The 70:20:10 model of learning – which holds that roughly 70% of learning happens on the job, 20% comes from others and 10% from formal study – underscores the importance of informal learning initiatives. But the effectiveness of such initiatives needs to be measured, and there needs to be proof that people perform better as a result of their participation in social and informal learning.

This SlideShare presentation, Measuring the Impact of Social and Informal Learning, explains various approaches to testing and measuring learning for a new generation of students and workers. We hope it gives you some new ideas about how to answer these important questions about learning: Did they like it? Did they learn it? Are they doing it?



Is a compliance test better with a higher pass score?

Posted by John Kleeman

Is a test better if it has a higher pass (or cut) score?

For example, if you develop a test to check that people know material for regulatory compliance purposes, is it better if the pass score is 60%, 70%, 80% or 90%? And is your organization safer if your test has a high pass score?

To answer this question, you first need to know the purpose of the test – how the results will be used and what inferences you want to make from them. Most compliance tests are criterion-referenced – that is to say, they measure specific skills, knowledge or competencies. Someone who passes the test is considered competent for the job role; someone who fails has not demonstrated competence and might need remedial training.

Before considering a pass score, you need to consider whether questions are substitutable – that is, whether a participant can get certain questions wrong and others right and still be competent. It could be that getting particular questions wrong implies a lack of competence, even if everything else is answered correctly. (For another way of looking at this, see Comprehensive Assessment Framework: Building the student model.) If a participant performs well on many items but gets a crucial safety question wrong, they should still fail the test. See Golden Topics – Making success on key topics essential for passing a test for one way of creating tests that work like this in Questionmark.
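The idea of non-substitutable questions can be sketched in code. This is a hypothetical scoring routine for illustration only, not Questionmark’s actual implementation: passing requires both an overall score at or above the cut score and a correct answer on every flagged critical item.

```python
def grade(responses, cut_score, critical_items):
    """Grade a test in which some questions are non-substitutable.

    responses      -- dict mapping item id to True (correct) / False (wrong)
    cut_score      -- minimum fraction of items that must be correct, e.g. 0.8
    critical_items -- item ids that must be answered correctly to pass at all
    """
    # A wrong answer on any critical item fails the test outright,
    # regardless of the overall score.
    if not all(responses[item] for item in critical_items):
        return False
    overall = sum(responses.values()) / len(responses)
    return overall >= cut_score

# A participant who scores 90% overall but misses the critical safety
# question still fails.
answers = {"q1": True, "q2": True, "q3": True, "q4": True, "q5": True,
           "q6": True, "q7": True, "q8": True, "q9": True, "safety1": False}
print(grade(answers, 0.8, ["safety1"]))  # False
```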

But assuming questions are substitutable and that a single pass score for a test is viable, how do you work out what that pass score should be? The table below shows the four possible outcomes:

                             Pass test             Fail test
Participant competent        Correct decision      Error of rejection
Participant not competent    Error of acceptance   Correct decision

Providing that the test is valid and reliable, a competent participant should pass the test and a not-competent one should fail it.

Clearly, picking a pass score as a number “out of a hat” is not the right way to approach this. For a criterion-referenced test, you need to match the pass score to the way your questions measure competence. If the pass score is too high, you increase the number of errors of rejection: competent people are rejected, and you waste time re-training them and having them re-take the test. If the pass score is too low, you get too many errors of acceptance: not-competent people are accepted, with potential consequences for how they do the job.
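The trade-off between the two error types can be seen in a small sketch. The scores and competence labels below are invented purely for illustration: as the cut score rises, errors of rejection go up while errors of acceptance go down.

```python
def classify(score, truly_competent, cut):
    """Return the outcome cell from the 2x2 table for one participant."""
    passed = score >= cut
    if passed and truly_competent:
        return "correct"
    if passed and not truly_competent:
        return "error of acceptance"
    if not passed and truly_competent:
        return "error of rejection"
    return "correct"

# Illustrative data: (test score, is the participant truly competent?)
participants = [(0.85, True), (0.72, True), (0.68, True),
                (0.74, False), (0.55, False), (0.91, True)]

for cut in (0.6, 0.7, 0.8):
    outcomes = [classify(s, c, cut) for s, c in participants]
    rejections = outcomes.count("error of rejection")
    acceptances = outcomes.count("error of acceptance")
    print(f"cut={cut}: {rejections} rejection error(s), "
          f"{acceptances} acceptance error(s)")
```

Running it shows the same test data producing more rejection errors and fewer acceptance errors as the cut score climbs.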

You need to use informed judgement or statistical techniques to choose a pass score that supports valid inferences about participants’ skills, knowledge or competence in the vast majority of cases. This means the number of errors or misclassifications is tolerable for the intended use case. One technique for doing this is the Angoff method, as described in this SlideShare. Using Angoff, you rate each question by how likely it is that a minimally-competent participant would get it right, and then roll these ratings up to work out the pass score.
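A rough sketch of the Angoff roll-up, with invented judge ratings: each judge estimates, for each question, the probability that a minimally-competent participant would answer it correctly; averaging across judges and then across questions yields the recommended cut score.

```python
# Hypothetical Angoff ratings: for each question, each judge's estimate of
# the probability that a minimally-competent participant answers correctly.
ratings = {
    "q1": [0.90, 0.85, 0.95],   # easy question
    "q2": [0.60, 0.55, 0.65],   # harder question
    "q3": [0.80, 0.75, 0.85],
    "q4": [0.70, 0.70, 0.75],
}

# Average across judges to get each question's expected score ...
per_question = {q: sum(r) / len(r) for q, r in ratings.items()}

# ... then average across questions to get the cut score as a percentage.
cut_score = sum(per_question.values()) / len(per_question)
print(f"Recommended pass score: {cut_score:.0%}")
```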

Going back to the original question of whether a better test has a higher pass score, what matters is that your test is valid and reliable and that your pass score is set to the appropriate level to measure competency. You want the right pass score, not necessarily the highest pass score.

So what happens if you set your pass score without going through this process – for instance, by declaring that your test will have an 80% pass score before you design it? If you do this, you are assuming that, on average, a minimally-competent participant has an 80% chance of answering each question in the test correctly. Unless you have ways of measuring and checking that, you are abandoning logic and trusting to luck.

In general, a lower pass score does not necessarily imply an easier assessment. If the items are very difficult, a low pass score may still yield low pass rates. Pass scores are often set with consideration for the difficulty of the items, either implicitly or explicitly.
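A toy simulation makes the point. The model below is a deliberate simplification – every item is answered correctly with the same fixed probability, standing in for item difficulty – but it shows that a 60% cut score on hard items can still produce a lower pass rate than an 80% cut score on easy items.

```python
import random

random.seed(0)

def simulate_pass_rate(p_correct, n_items, cut, n_participants=10_000):
    """Estimate the pass rate when each item is answered correctly with
    probability p_correct (a crude stand-in for item difficulty)."""
    passes = 0
    for _ in range(n_participants):
        score = sum(random.random() < p_correct for _ in range(n_items))
        if score / n_items >= cut:
            passes += 1
    return passes / n_participants

# Easy items with an 80% cut vs. hard items with a 60% cut:
print(simulate_pass_rate(p_correct=0.9, n_items=40, cut=0.8))  # high pass rate
print(simulate_pass_rate(p_correct=0.5, n_items=40, cut=0.6))  # low pass rate
```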

So, is a test better if it has a higher pass score?

The answer is no. A test is best if it has the right pass score. And if one organization has a compliance test where the pass score is 70% and another has a compliance test where the pass score is 80%, this tells you nothing about how good each test is. You need to ask whether the tests are valid and reliable and how the pass scores were determined. There is an issue of “face validity” here: people might find it hard to believe that a test with a very low pass score is fair and reasonable, but in general a higher pass score does not make a better test.

If you want to learn more about setting a pass score, search this blog for articles on “standard setting” or “cut score” or read the excellent book Criterion-Referenced Test Development, by Sharon Shrock and Bill Coscarelli. We’ll also be talking about this and other best practices at our upcoming Users Conferences in Barcelona November 10-12 and San Antonio, Texas, March 4 – 7.

Saving Time and Money with Diagnostic Testing: A SlideShare Presentation

Posted by Julie Delazyn

Having employees “test out” of corporate training using diagnostic assessments can save valuable resources and improve motivation, but there are many factors to be considered.

How do you ensure that time spent developing a robust diagnostic assessment provides value to the business?

A team from PwC explained their approach to this at the Questionmark 2013 Users Conference, and we’re happy to share the handouts from their presentation with you.

The Half-Time Strategy: Saving Your Organization Time and Money with Diagnostic Testing includes examples of diagnostic test-out assessments for business-critical self-study programs. It explains how diagnostic assessments can help organizations save training time while still maintaining quality. It also includes tips for building defensible assessments that people can take to test out of training – and for minimizing the time people spend taking them.

Questionmark Users Conferences offer many opportunities to learn from the experience of fellow learning and assessment professionals. Registration is already open for the 2014 Users Conference March 4 – 7 in San Antonio, Texas. Plan to be there!

Sharing slides: 7 ways that online assessments help ensure compliance

Posted by John Kleeman

I recently gave a webinar with my colleague Brian McNamara for the SCCE (Society of Corporate Compliance and Ethics) on 7 ways that online assessments can help ensure compliance.

Here are the slides:

As you can see, we started the webinar by running through some general concepts on assessments including why it’s important that assessments are reliable and valid. Then we described seven key ways in which online assessments can help ensure compliance.

Here are six of the good practices we advocated in the webinar:

  1. Use scenario questions – test above knowledge
  2. Use topic feedback
  3. Consider observational assessments
  4. Use item analysis
  5. Set a pass or cut score
  6. Use a code of conduct

To view a recorded version of this webinar, go to SCCE’s website to purchase the CD from SCCE (Questionmark does not receive any remuneration for this). Or view a slightly shorter, complimentary version through Questionmark, which is scheduled for September. Go to our UK website or our US website for webinar details and registration.

Delivering a million+ assessments takes a village: A SlideShare Presentation

Posted by Julie Delazyn

What does it take to deliver thousands of different assessments to thousands of students each year?

Rio Salado College, one of the largest online colleges in the United States – with 67,000 students – knows the answer: collaboration.

The people who run the college’s Questionmark assessments wear many hats. They are instructional designers, authors and programmers, as well as networking and IT services staff. Teamwork between people in these varying roles is essential. And since the college delivers more than one million assessments each year, external collaboration – with Questionmark staff – is essential, too.

A team from Rio Salado explained their cooperative approach during this year’s Questionmark Users Conference, and we’re happy to share the handouts from their presentation with you: It Takes a Village – Collaborating for Success with High-Volume Assessments.

This presentation includes an overview of how the college uses surveys, quizzes and tests within its extensive online learning programs. It also focuses on some of the lessons gleaned from Rio Salado’s many years of involvement with Questionmark.

This is just one example of what people learn about at our Users Conferences. Registration is already open for the 2014 Users Conference March 4 – 7 in San Antonio, Texas. Plan to be there!

Test Security at Shenandoah University: A SlideShare Presentation

Posted by Julie Delazyn

Students involved in Shenandoah University’s mobile learning initiative, iMLearning, use the Apple MacBook Pro for accessing course material and online assessments. They can also choose an iPod Touch, iPad 3G or iPhone for a quick mobile delivery option. Using the iPad for lab exercises, for instance, students can move from station to station without having to carry around a laptop.

In this presentation from the 2013 Questionmark Users Conference, Terra Walker and Cheri Lambert of Shenandoah University explore various security measures they use to ensure test integrity both in the iMLearning program and Windows-based testing. The presentation focuses on Using Questionmark Secure with Windows PCs and Macs but also covers measures such as question and answer randomization, seating arrangements, proctors and honor code.

This is just one example of what people learn about at our Users Conferences. Registration is already open for the 2014 Users Conference March 4 – 7 in San Antonio, Texas. Plan to be there!