Measuring the Effectiveness of Social and Informal Learning

Posted by Julie Delazyn

How can you use assessments to measure the effectiveness of informal learning? If people are learning at different times, in different ways and without structure, how do you know it’s happening? And how can you justify investment in social and informal learning initiatives?

The 70+20+10 model of learning – which holds that we learn roughly 70% on the job, 20% from others and 10% through formal study – underlines the importance of informal learning initiatives. But the effectiveness of such initiatives needs to be measured, and there needs to be proof that people perform better as a result of their participation in social and informal learning.

This SlideShare presentation, Measuring the Impact of Social and Informal Learning, explains various approaches to testing and measuring learning for a new generation of students and workers. We hope you will use it to gather some new ideas about how to answer these important questions about learning: Did they like it? Did they learn it? Are they doing it?

Acronyms, Abbreviations and APIs

Posted by Steve Lay

As Questionmark’s integrations product owner, I find it all too easy to speak in acronyms and abbreviations. Of course, with the advent of modern-day ‘text-speak’, acronyms are part of everyday speech. But that doesn’t mean everyone knows what they mean. David Cameron, the British prime minister, was caught out by the everyday ‘LOL’ when it was revealed during a recent public inquiry that he’d used it thinking it meant ‘lots of love’.

In the technical arena things are not so simple. Even spelling out an acronym like SOAP (which stands for Simple Object Access Protocol) doesn’t necessarily make the meaning any clearer. In this post, I’m going to do my best to explain the meanings of some of the key acronyms and abbreviations you are likely to hear talked about in relation to Questionmark’s Open Assessment Platform.

API

At a recent presentation (on Extending the Platform), while I was talking about ways of integrating with Questionmark technologies, I asked the audience how many people knew what ‘API’ stood for. The response prompted me to write this blog article!

The term, API, is used so often that it is easy to forget that it is not widely known outside of the computing world.

API stands for Application Programming Interface. In this case the ‘application’ refers to external software that provides functionality beyond what is available in the core platform. For example, it could be a custom registration application that collects information in a special way and then automatically creates a user and schedules them for a specified assessment.

The API is the information that the programmer needs to write this registration application. ‘Interface’ refers to the join between the external software and the platform it is extending. (Our own APIs are documented on the Questionmark website and can be reached directly from developer.questionmark.com.)
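
To make that idea concrete, here is a minimal conceptual sketch in Python. The client class and its method names are hypothetical stand-ins, not the documented Questionmark calls; the point is that the API documentation is what tells the programmer which calls exist and what parameters they take.

```python
# A conceptual sketch of the registration application described above.
# 'PlatformClient' and its methods are hypothetical stand-ins for whatever
# the documented API actually provides.
class PlatformClient:
    """Placeholder wrapper around the platform's API."""

    def create_participant(self, username: str, email: str) -> int:
        ...  # would call the real API and return the new participant's id

    def schedule_assessment(self, participant_id: int, assessment_id: int) -> None:
        ...  # would call the real API to create the schedule


def register(client: PlatformClient, username: str, email: str, assessment_id: int) -> None:
    """Collect registration details, create the user, then schedule them."""
    participant_id = client.create_participant(username, email)
    client.schedule_assessment(participant_id, assessment_id)
```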

APIs and Standards

APIs often refer to technical standards. Using standards helps the designer of an API focus on the things that are unique to the platform concerned without having to go into too much incidental detail. Using a common standard also helps programmers develop applications more quickly. Pre-written code that implements the underlying standard will often be available for programmers to use.

To use a physical analogy, some companies will ask you to send them a self-addressed stamped envelope when requesting information from them. The company doesn’t need to explain what an envelope is, what a stamp is and what they mean by an address! These terms act a bit like technical standards for the physical world. The company can simply ask for one because they know you understand this request. They can focus their attention on describing their services, the types of requests they can respond to and the information they will send you in return.

QMWISe

QMWISe stands for Questionmark Web Integration Services Environment. This API allows programmers to exchange information with Questionmark OnDemand software-as-a-service or Questionmark Perception on-premise software. QMWISe is based on an existing standard called SOAP (see above).

SOAP defines a common structure used for sending and receiving messages; it even defines the concept of a virtual ‘envelope’. Referring to the SOAP standard allows us to focus on the contents of the messages being exchanged such as creating participants, creating schedules, fetching results and so on.
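
To illustrate the ‘envelope’ idea, here is a rough sketch of posting a SOAP message from Python. The endpoint URL, operation and element names are simplified placeholders for illustration, not the documented QMWISe schema.

```python
# A sketch of the SOAP envelope: the message travels inside a standard XML
# wrapper posted over HTTP. Endpoint, operation and element names below are
# placeholders, not the documented QMWISe schema.
import requests

envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <CreateParticipant>
      <Name>jsmith</Name>
    </CreateParticipant>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    "https://example.com/qmwise",                      # placeholder endpoint
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8"},
)
print(response.status_code)
```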

REST

REST stands for REpresentational State Transfer and must qualify as one of the more obscure acronyms! In practice, REST represents something of a back-to-basics approach to APIs when contrasted with those based on SOAP. It is not, in itself, a standard but merely a set of stylistic guidelines for API designers, defined in a doctoral dissertation by Roy Fielding, a co-author of the HTTP standard (see below).

As a result, APIs are sometimes described as ‘RESTful’, meaning they adhere to the basic principles defined by REST. These days, publicly exposed APIs are more likely to be RESTful than SOAP-based. Central to the idea of a RESTful API is that the things your API deals with are identified by a URL (Uniform Resource Locator), the web’s equivalent of an address. In our case, that would mean that each participant, schedule, result, etc. would be identified by its own URL.
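
As a rough sketch of the RESTful style in Python (the base URL and field names here are placeholders, not the actual Questionmark API), each resource gets its own URL and is read or created with ordinary HTTP verbs:

```python
# A sketch of the RESTful style: every resource has its own URL and is
# manipulated with standard HTTP verbs. The base URL and field names are
# placeholders for illustration only.
import requests

BASE = "https://example.com/api"

# Read one participant by fetching its identifying URL
participant = requests.get(f"{BASE}/participants/12345").json()

# Create a new schedule by POSTing a representation of it
response = requests.post(
    f"{BASE}/schedules",
    json={"participant_id": 12345, "assessment_id": 678},
)
print(response.status_code)   # typically 201 Created if the server accepts it
```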

HTTP

RESTful APIs draw heavily on HTTP. HTTP stands for HyperText Transfer Protocol. It was invented by Tim Berners-Lee and is one of the key technologies that underpin the web as we know it. Although conceived as a way of publishing HyperText documents (i.e., web pages), the underlying protocol is really just a way of sending messages. It defines the virtual envelope into which these messages are placed. HTTP is familiar as the prefix to most URLs.
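
As a small sketch using only the Python standard library, the code below sends a single GET request and prints the status line and a few headers from the reply; the host name is just an example.

```python
# A minimal sketch of an HTTP exchange: one GET request goes out, and the
# reply comes back in its own 'envelope' of status line, headers and body.
import http.client

conn = http.client.HTTPSConnection("www.example.com")
conn.request("GET", "/")                   # request line: method and path
response = conn.getresponse()

print(response.status, response.reason)    # e.g. 200 OK
for name, value in response.getheaders()[:3]:
    print(f"{name}: {value}")              # a few of the response headers
conn.close()
```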

OData

Finally, this brings me to OData, which stands for the Open Data Protocol. This standard makes it much easier to publish RESTful APIs. I recently wrote about OData in the post, What is OData, and why is it important?

Although arguably simpler than SOAP, OData provides an even more powerful platform for defining APIs. For some applications, OData itself is enough, and tools can be integrated with no additional programming at all. The PowerPivot plugin for Microsoft Excel is a good example. Using Excel you can extract and analyse data using the Questionmark Results API (itself built on OData) without any Questionmark-specific programming at all.
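
Here is a hedged sketch of what querying an OData feed from Python might look like. The service root and entity set name are placeholders rather than the actual Results API paths, but the query options ($filter, $select, $top) are defined by the OData standard itself, which is why generic tools such as PowerPivot can build these queries for you.

```python
# A sketch of querying an OData feed. The service root and entity set name
# are placeholders; the query options come from the OData standard itself.
import requests

BASE = "https://example.com/odata"   # placeholder service root

params = {
    "$filter": "Score ge 80",        # only results with a score of 80 or more
    "$select": "ParticipantName,Score",
    "$top": "10",
}
response = requests.get(
    f"{BASE}/Results",
    params=params,
    headers={"Accept": "application/json"},
)
# Assumes a JSON payload with a top-level 'value' array, as in recent OData versions
for row in response.json().get("value", []):
    print(row["ParticipantName"], row["Score"])
```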

For more about OData, check out this presentation on Slideshare.

Is a compliance test better with a higher pass score?

Posted by John Kleeman

Is a test better if it has a higher pass (or cut) score?

For example, if you develop a test to check that people know material for regulatory compliance purposes, is it better if the pass score is 60%, 70%, 80% or 90%? And is your organization safer if your test has a high pass score?

To answer this question, you first need to know the purpose of the test – how the results will be used and what inferences you want to make from it. Most compliance tests are criterion-referenced – that is to say they measure specific skills, knowledge or competency. Someone who passes the test is competent for the job role; and someone who fails has not demonstrated competence and might need remedial training.

Before considering a pass score, you need to consider whether questions are substitutable, i.e. whether you can balance getting certain questions wrong and others right and still be competent. It could be that getting particular questions wrong implies lack of competence, even if everything else is answered correctly. (For another way of looking at this, see Comprehensive Assessment Framework: Building the student model.) If a participant performs well on many items but gets a crucial safety question wrong, they still fail the test. See Golden Topics – Making success on key topics essential for passing a test for one way of creating tests that work like this in Questionmark.
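
As a rough sketch of that rule (the names, data and threshold below are invented for illustration), a participant must both reach the overall pass score and answer every critical question correctly:

```python
# A sketch of the non-substitutable-question rule: the participant must reach
# the overall pass score AND get every critical question right. The threshold
# and example data are invented for illustration.
def passes(total_score_percent: float,
           critical_answers_correct: list[bool],
           pass_score_percent: float = 80.0) -> bool:
    return (total_score_percent >= pass_score_percent
            and all(critical_answers_correct))

print(passes(92.0, [True, False]))   # False: high score, but a critical item missed
print(passes(85.0, [True, True]))    # True
```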

But assuming questions are substitutable and that a single pass score for a test is viable, how do you work out what that pass score should be? The table below shows 4 possible outcomes:

                              Pass test              Fail test
Participant competent         Correct decision       Error of rejection
Participant not competent     Error of acceptance    Correct decision

Providing that the test is valid and reliable, a competent participant should pass the test and a not-competent one should fail it.

Clearly, picking a pass score as a number “out of a hat” is not the right way to approach this. For a criterion-referenced test, you need to match the pass score to the way your questions measure competence. If you have too high a pass score, you increase the number of errors of rejection: competent people are rejected, and you will waste time re-training them and having them re-take the test. If you have too low a pass score, you will have too many errors of acceptance: people who are not competent are accepted, with potential consequences for how they do the job.

You need to use informed judgement or statistical techniques to choose a pass score that supports valid inferences about participants’ skills, knowledge or competence in the vast majority of cases. This means the number of errors or misclassifications is tolerable for the intended use case. One technique for doing this is the Angoff method, as described in this SlideShare. Using Angoff, you classify each question by how likely it is that a minimally competent participant would get it right, and then roll this up to work out the pass score.
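
As a rough numerical sketch (the ratings below are invented for illustration), the Angoff recommendation is simply the average of the judges’ estimates of how likely a minimally competent participant is to answer each question correctly, expressed as a percentage of the test:

```python
# A minimal Angoff sketch with invented ratings. Each value is the judges'
# estimate of the probability that a minimally competent participant answers
# that question correctly.
item_ratings = [0.90, 0.75, 0.60, 0.85, 0.70, 0.80, 0.65, 0.95, 0.55, 0.75]

# The suggested cut score is that borderline participant's expected score.
cut_score_percent = 100 * sum(item_ratings) / len(item_ratings)
print(f"Suggested pass score: {cut_score_percent:.0f}%")   # 75% for these ratings
```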

Going back to the original question of whether a better test has a higher pass score, what matters is that your test is valid and reliable and that your pass score is set to the appropriate level to measure competency. You want the right pass score, not necessarily the highest pass score.

So what happens if you set your pass score without going through this process – for instance, by deciding that your test will have an 80% pass score before you design it? If you do this, you are assuming that, on average, the questions in the test have an 80% chance of being answered correctly by a minimally competent participant. But unless you have ways of measuring and checking that, you are abandoning logic and trusting to luck.

In general, a lower pass score does not necessarily imply an easier assessment. If the items are very difficult, a low pass score may still yield low pass rates. Pass scores are often set with a consideration for the difficulty of the items, either implicitly or explicitly.

So, is a test better if it has a higher pass score?

The answer is no. A test is best if it has the right pass score. And if one organization has a compliance test where the pass score is 70% and another has a compliance test where the pass score is 80%, this tells you nothing about how good each test is. You need to ask whether the tests are valid and reliable and how the pass scores were determined. There is an issue of “face validity” here: people might find it hard to believe that a test with a very low pass score is fair and reasonable, but in general a higher pass score does not make a better test.

If you want to learn more about setting a pass score, search this blog for articles on “standard setting” or “cut score” or read the excellent book Criterion-Referenced Test Development, by Sharon Shrock and Bill Coscarelli. We’ll also be talking about this and other best practices at our upcoming Users Conferences in Barcelona November 10-12 and San Antonio, Texas, March 4 – 7.

Saving Time and Money with Diagnostic Testing: A SlideShare Presentation

Posted by Julie Delazyn

Having employees “test out” of corporate training using diagnostic assessments can save valuable resources and improve motivation, but there are many factors to be considered.

How do you ensure that time spent developing a robust diagnostic assessment provides value to the business?

A team from PwC explained their approach to this at the Questionmark 2013 Users Conference, and we’re happy to share the handouts from their presentation with you.

The Half-Time Strategy: Saving Your Organization Time and Money with Diagnostic Testing includes examples of diagnostic test-out assessments for business-critical self-study programs. It explains how diagnostic assessments can help organizations save training time while still maintaining quality. It also includes tips for building defensible assessments that people can take to test out of training – and for minimizing the time people spend taking them.

Questionmark Users Conferences offer many opportunities to learn from the experience of fellow learning and assessment professionals. Registration is already open for the 2014 Users Conference March 4 – 7 in San Antonio, Texas. Plan to be there!

Sharing slides: 7 ways that online assessments help ensure compliance

Posted by John Kleeman

I recently gave a webinar with my colleague Brian McNamara for the SCCE (Society of Corporate Compliance and Ethics) on 7 ways that online assessments can help ensure compliance.

Here are the slides:

As you can see, we started the webinar by running through some general concepts on assessments including why it’s important that assessments are reliable and valid. Then we described seven key ways in which online assessments can help ensure compliance.

Here are the six pieces of good practice we advocated in the webinar:

  1. Use scenario questions – test above knowledge
  2. Use topic feedback
  3. Consider observational assessments
  4. Use item analysis
  5. Set a pass or cut score
  6. Use a code of conduct

To view a recorded version of this webinar, go to SCCE’s website to purchase the CD from SCCE (Questionmark does not receive any remuneration for this). Or view a slightly shorter, complimentary version through Questionmark, which is scheduled for September. Go to our UK website or our US website for webinar details and registration.

Delivering a million+ assessments takes a village: A SlideShare Presentation

Posted by Julie Delazyn

What does it take to deliver thousands of different assessments to thousands of students each year?

Rio Salado College, one of the largest online colleges in the United States – with 67,000 students – knows the answer: collaboration.

The people who run the college’s Questionmark assessments wear many hats. They are instructional designers, authors and programmers, as well as networking and IT services staff. Teamwork between people in these varying roles is essential. And since the college delivers more than one million assessments each year, external collaboration – with Questionmark staff – is essential, too.

A team from Rio Salado explained their cooperative approach during this year’s Questionmark Users Conference, and we’re happy to share the handouts from their presentation with you: It Takes a Village – Collaborating for Success with High-Volume Assessments.

This presentation includes an overview of how the college uses surveys, quizzes and tests within its extensive online learning programs. It also focuses on some of the lessons gleaned from Rio Salado’s many years of involvement with Questionmark.

This is just one example of what people learn about at our Users Conferences. Registration is already open for the 2014 Users Conference March 4 – 7 in San Antonio, Texas. Plan to be there!
