How can you assess the effectiveness of informal learning?

Posted by John Kleeman

Lots of people ask me how you can use assessments to measure the effectiveness of informal learning.  If people are learning at different times, in different ways and without structure, how do you know it’s happening? And how can you justify investment in social and informal learning initiatives?

The 70+20+10 model of learning is increasingly understood – that we learn 70% on-the-job, 20% from others and 10% from formal study. But as people invest in informal learning initiatives, a key question arises. How do you measure the impact? Are people learning? And more importantly, are they performing better?

In a presentation at the Learning Technologies conference in London in January, I suggested there are three areas in which to use assessments:

Did they like it?

You can use surveys to evaluate attitudes and reactions – either to specific initiatives or to the whole 70+20+10 initiative. Measuring reaction does not prove impact, but yields useful data. For example, surveys yielding consistently negative results could indicate initiatives are missing the mark.

You could also look at the Success Case Method, which lets you home in on individual examples of success to get early evidence of a learning programme’s impact. See here and here for my earlier blog posts on how to do this.

Of course, if you are using Questionmark technology, you can deliver such surveys embedded in blogs, wikis or other informal learning tools and also on mobile devices.

Did they learn it?

There is strong evidence for the use of formative quizzes to help direct learning, strengthen memory and engage learners. You can easily embed quizzes inside informal learning, e.g. side by side with videos or within blogs, wikis and SharePoint, to track use and understanding of content.

With informal learning, you also have the option of encouraging user-generated quizzes. These let the author structure, improve and explain his or her own knowledge while engaging and helping the learner.

You can also use more formal quizzes and tests to measure knowledge and skills. And you can compare someone’s skills before and after learning, compare to a benchmark or compare against others.
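To make that concrete, here is a minimal sketch of what such a comparison might look like in code (the scores, benchmark value and function names are invented for illustration; this is not a Questionmark API):

```python
# Minimal sketch: comparing quiz scores before and after learning,
# and against a benchmark. Scores, benchmark and names are invented.

def score_change(pre: float, post: float) -> float:
    """Absolute gain between pre-learning and post-learning scores."""
    return post - pre

def meets_benchmark(score: float, benchmark: float = 75.0) -> bool:
    """True if a percentage score reaches the agreed benchmark."""
    return score >= benchmark

# One learner's percentage scores before and after an initiative.
pre_score, post_score = 58.0, 81.0
print(f"Gain: {score_change(pre_score, post_score):+.1f} points")
print(f"Meets benchmark: {meets_benchmark(post_score)}")
```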

Are they doing it?

Of course, in 70+20+10, people are learning in multiple places, at different times and in different ways. So measuring informal learning can be more difficult than measuring formal, planned learning.

But if you can measure a performance improvement, that is more directly useful than simply measuring learning. A great way of measuring performance is with observational assessments. This is described well in Jim Farrell’s recent post Observational assessments: measuring performance in a 70+20+10 world.

To see the Learning Technologies presentation on SlideShare, click here. For more information on Questionmark technologies that can help you assess informal learning, see www.questionmark.com.

Conference Close-up: Alignment, Impact & Measurement with the A-model

Posted by Joan Phaup

Key themes of the Questionmark Users Conference March 20 – 23 include the growing importance of informal and social learning — as reflected by the 70+20+10 model — and the role of assessment in performance improvement and talent management. It’s clear that new strategies for assessment and evaluation are needed within today’s complex workplaces.

Dr. Bruce C. Aaron

We’re delighted that measurement and evaluation specialist Dr. Bruce C. Aaron will be joining us at the conference to talk about the A-model framework he has developed for aligning assessment and evaluation with organizational goals, objectives and human performance issues.

A conversation Bruce and I had about the A-model explores the changes that have taken place in recent years and today’s strong focus on performance improvement.

“We don’t speak so much about training or even training and development anymore,” Bruce explained. “We speak a lot more about performance improvement, or human performance, or learning and performance in the workplace. And those sorts of changes have had a great impact in how we do our business, how we design our solutions and how we go about assessing and evaluating them…We’re talking about formal learning, informal learning, social learning, classroom, blended delivery, everything from online learning to how people collect information from their networks and the knowledge management functions that we’re putting in place.”

In a complex world that requires complex performance solutions, Bruce observed that “the thing that doesn’t change is our focus on outcomes.”

The A-model evolved out of the need to stay focused on goals while logically organizing the components of learning, evaluation and performance improvement. It’s a framework or map for holding the many elements of human performance in place — right from the original business problem or issue up through program design and evaluation.

You can learn more about this from Bruce’s white paper, Alignment, Impact and Measurement with the A-model, from this recording of our conversation — and, of course, by attending the Users Conference! Register soon!

Observational assessments: measuring performance in a 70+20+10 world

Posted by Jim Farrell

Informal Learning. Those two words are everywhere. You might see them trending on Twitter during a #lrnchat, dominating the agenda at a learning conference or gracing the pages of a training digest. We all know that informal learning is important, but measuring it can often be difficult. However, difficult does not mean impossible.

Remember that in the 70+20+10 model, 70 percent of learning results from on-the-job experiences and 20 percent comes from feedback and the examples set by people around us. The final 10 percent is formal training. No matter how much money an organization spends on its corporate university, 90 percent of the learning is happening outside a classroom or formal training program.

So how do we measure that 90 percent of learning and make sure it positively affects the bottom line?

First is performance support. Eons ago, when I was an instructional designer, it was the courseware and formal learning that received most of the attention. Looking back, we missed the mark: although the projects were deemed successful, we likely did not have the impact we could have. Performance support is the informal learning tool that saves workers time and leads to better productivity. Simple web analytics can tell you which performance support materials are searched for and used most on a daily basis.
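As a simple illustration, a few lines of scripting over exported page-view data can surface the most-used resources (the log entries below are made up; in practice your analytics package reports this directly):

```python
from collections import Counter

# Hypothetical page-view log: one entry per visit to a performance
# support resource. In practice this would come from your web analytics.
page_views = [
    "/support/expense-policy", "/support/crm-how-to",
    "/support/crm-how-to", "/support/vpn-setup",
    "/support/crm-how-to", "/support/expense-policy",
]

# Count visits per resource and list the most-used first.
for page, visits in Counter(page_views).most_common(3):
    print(f"{visits:3d}  {page}")
```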

But on to what I think Questionmark does best – that 20 percent that occurs through feedback and the examples around us. Many organizations have turned to coaching and mentoring to give employees good examples and to define the competencies necessary to be a great employee.

I think most organizations are missing the boat when it comes to collecting data on this 20 percent. While coaching and mentoring are a step in the right direction, they probably aren’t yielding good analytics. Yes, organizations may use surveys and/or interviews to measure how mentoring closes performance gaps, but how do we get employees to the next level? I propose the use of observational assessments. By definition, observational assessments enable measurement of participants’ behavior, skills and abilities in ways not possible via traditional assessment.

By having a mentor observe someone perform while applying a rubric to that performance, you gain not only performance analytics but also the ability to compare against other individuals or against agreed benchmarks for performing a task. Feedback collected during the assessment can also be displayed in a coaching report for later debriefing and learning. And to me, that is just the beginning.
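To make the idea concrete, here is one way the data behind such a rubric might be structured and scored against a benchmark (a hypothetical sketch; the criteria, scale and benchmark are invented, and this is not how Questionmark stores observational assessments):

```python
# Hypothetical rubric for one observed task: each criterion is scored
# 0-4 by the mentor; the benchmark is an agreed minimum total.
rubric_scores = {
    "Greets the customer": 4,
    "Diagnoses the problem": 3,
    "Explains the resolution": 2,
    "Follows safety procedure": 4,
}
BENCHMARK_TOTAL = 12  # agreed minimum across all criteria

total = sum(rubric_scores.values())
print(f"Observed total: {total} (benchmark: {BENCHMARK_TOTAL})")
print("Meets benchmark" if total >= BENCHMARK_TOTAL else "Needs coaching")

# Criteria scored below 3 feed naturally into a coaching report.
for criterion, score in rubric_scores.items():
    if score < 3:
        print(f"Coach on: {criterion} (scored {score})")
```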

Developing an observational assessment should go beyond the tasks someone has to do in their day-to-day work. It should embody the competencies necessary to solve business problems. Observational assessments allow organizations to capture performance data and measure the competencies needed to push the organization toward success.

If you would like more information about observational assessments, click here.

Measuring learning in SharePoint: where to find info

Posted by Julie Delazyn

The way we learn is changing. By allowing us to more easily share information and acquire knowledge, the Internet has made it easier to learn informally. Moving away from the traditional academic model, we are increasingly learning from each other and on the job.

Microsoft SharePoint’s popularity as a collaboration environment for everyday work tasks makes it a readily available environment for learning functions — an idea that fits in well with the 70+20+10 learning model. Assessments also fit in well with that model, and with SharePoint, too.

Many types of assessments can work well with SharePoint – everything from quizzes, diagnostic tests, knowledge checks and competency tests to surveys and course evaluations. No matter what the setting – a formal learning program, regulatory compliance, performance support or an employee/partner portal – assessments have key roles to play.

How can you include assessments in SharePoint?
•    Inbuilt SharePoint – functional for basic surveys
•    Custom web parts – write your own!
•    Embed Flash apps – possible for simple quizzes
•    Embed web apps – easy to do. (See how a Questionmark user has embedded a quiz to engage learners; a minimal embed sketch follows this list.)
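As an illustration of that last option, here is a minimal sketch of building an iframe snippet that embeds a web-delivered quiz in a SharePoint page, blog or wiki (the URL and function name are placeholders, not a real Questionmark endpoint):

```python
# Minimal sketch: building an HTML iframe snippet that embeds a
# web-delivered quiz in a SharePoint page, blog or wiki. The URL is
# a placeholder, not a real Questionmark endpoint.

def embed_snippet(quiz_url: str, width: int = 600, height: int = 400) -> str:
    """Return an iframe tag to paste into a page editor or a
    SharePoint Page Viewer web part."""
    return (f'<iframe src="{quiz_url}" width="{width}" '
            f'height="{height}" frameborder="0"></iframe>')

print(embed_snippet("https://example.com/quiz/onboarding-check"))
```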

If you would like to learn more about using assessments within SharePoint, you can check out this Questionmark presentation on SlideShare.

For more details, download the white paper Learning and Assessment on SharePoint or visit John Kleeman’s SharePoint and Assessment blog.

Moments of Contingency: How Black and Wiliam conceptualize formative assessment

Paul Black (left) and Dylan Wiliam

Posted by John Kleeman

I’ve always believed instinctively that assessment is the cornerstone of learning. I’ve recently read an interesting paper by the eminent Professors Paul Black and Dylan Wiliam that conceptualizes this powerfully.

In Developing the theory of formative assessment, published in 2009 in the journal Educational Assessment, Evaluation and Accountability, they describe how formative assessment gives “Moments of Contingency” in instruction – critical points where learning changes direction depending on an assessment.

In their model, assessment gives you information for making decisions that direct learning, and so makes instruction and learning more effective than they would otherwise have been. There are many paths that instruction can go down, and formative assessment helps people choose the right one.
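One way to picture such a branching point in code: use a formative quiz score to choose the learner’s next step (a toy sketch; the thresholds and path names are invented for illustration):

```python
# Toy sketch of a "moment of contingency": a formative quiz score
# decides which instructional path the learner takes next.
# Thresholds and path names are invented for illustration.

def next_step(score: float) -> str:
    if score < 50:
        return "revisit-fundamentals"  # re-teach the core material
    if score < 80:
        return "targeted-practice"     # close specific gaps
    return "advanced-topics"           # ready to move on

for quiz_score in (35, 65, 92):
    print(f"Score {quiz_score}% -> {next_step(quiz_score)}")
```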

Black and Wiliam’s formal definition of formative assessment describes how “evidence about student achievement is elicited, interpreted, and used by teachers, learners, or their peers, to make decisions about the next steps in instruction that are likely to be better, or better founded, than the decisions they would have taken in the absence of the evidence that was elicited”.

Like Professor David Nicol, about whom I blogged earlier, they make the important point that formative assessment is not only instructor-led but also involves interaction with peers and self-assessment. Black and Wiliam have done most of their work in education, but their message resonates with the 70+20+10 model currently sweeping corporate learning. Increasingly we are realizing that interaction with learning peers is a critical part of learning: peers can give you feedback, questions or insight that help you learn. As a learner, you can regulate your own learning and are responsible for it – and assessments help you make the decisions about how to adjust it.

See You in LA!

Posted by Eric Shepherd

I am looking forward to meeting old friends and new at this year’s Questionmark Users Conference in Los Angeles March 15 – 18!

LA is a place that revels in finding new ways to do things, and the conference will reflect that spirit by exploring a sea change that’s transforming the world of learning and assessment: the increasing adoption of social and informal learning initiatives by organizations of all stripes.

One of the things we’ll be talking about at the conference is the 70+20+10 model for learning and development, which I recently wrote about in my own blog. This model suggests that about 70% of what we learn comes from real-life and on-the-job experiences — with about 20% coming from feedback and from observing and working with other people. That leaves about 10% of learning taking place through study or formal instruction. So how do we measure the other 90%? Where does assessment fit into 70+20+10? These questions will make for some lively conversation!

We’ll be providing some answers to them by showing how Questionmark’s Open Assessment Platform works together with many commonly used informal/social learning technologies such as wikis, blogs and portals – and we’ll be showing how we will build on that going forward. We’ll demonstrate features and applications ranging from embedded, observational and mobile assessments to content evaluation tools, open user interfaces, new authoring capabilities in Questionmark Live, and next-generation reporting and analytics tools.

Of course we’ll share plenty of information and inspiration about assessments in the here and now as well as in the future! In addition to tech training, case studies, best practice sessions and peer discussions, you’ll be able to meet one-on-one with our technicians and product managers and network with other Perception users who share your interests.

I can’t wait to welcome you to the conference and I am looking forward to learning together with you. The conference program offers something for every experience level, so I hope you will take a look at it, sign up soon and join us in Los Angeles.