Observational assessments: measuring performance in a 70+20+10 world

Posted by Jim Farrell

Informal Learning. Those two words are everywhere. You might see them trending on Twitter during a #lrnchat, dominating the agenda at a learning conference or gracing the pages of a training digest. We all know that informal learning is important, but measuring it can often be difficult. However, difficult does not mean impossible.

Remember that in the 70+20+10 model, 70 percent of learning results from on-the-job experiences and 20 percent of learning comes from feedback and the examples set by people around us. The final 10 percent is formal training. No matter how much money an organization spends on its corporate university, 90 percent of the learning is happening outside a classroom or formal training program.

So how do we measure the 90 percent of learning that is occurring to make sure we positively affect the bottom line?

First is performance support. Eons ago, when I was an instructional designer, the courseware and formal learning received most of the attention. Looking back, we missed the mark: although the projects were deemed successful, we likely did not have the impact we could have had. Performance support is the informal learning tool that saves workers time and leads to better productivity. Simple web analytics can tell you which performance support content is searched for and used most on a daily basis.
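As a minimal sketch of that idea, the snippet below ranks performance support pages by usage. The log data and page names are purely illustrative; in practice the counts would come from whatever web analytics tool your organization already uses.

```python
from collections import Counter

# Hypothetical daily page-view log for performance support content;
# a real version would be exported from your analytics tool.
page_views = [
    "expense-report-job-aid",
    "crm-quick-reference",
    "expense-report-job-aid",
    "safety-checklist",
    "expense-report-job-aid",
    "crm-quick-reference",
]

def top_support_content(views, n=3):
    """Rank performance support pages by how often they are used."""
    return Counter(views).most_common(n)

print(top_support_content(page_views))
```

Even a tally this simple tells you which job aids are actually carrying the load day to day, and which ones nobody touches.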

But on to what I think Questionmark does best – that 20 percent occurring through feedback and the examples around us. Many organizations have turned to coaching and mentoring to give employees good examples and to define the competencies necessary to be a great employee.

I think most organizations are missing the boat when it comes to collecting data on this 20 percent. While coaching and mentoring are a step in the right direction, they probably aren’t yielding good analytics. Yes, organizations may use surveys and/or interviews to measure how mentoring closes performance gaps, but how do we get employees to the next level? I propose the use of observational assessments. By definition, observational assessments enable measurement of participants’ behavior, skills and abilities in ways not possible via traditional assessment.

By having a mentor observe someone perform a task while applying a rubric to that performance, you gain not only performance analytics but also the ability to compare results to other individuals or to agreed benchmarks for the task. Feedback collected during the assessment can also be displayed in a coaching report for later debriefing and learning. And to me, that is just the beginning.
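To make the rubric-plus-benchmark idea concrete, here is a small sketch. The criteria, score scale and benchmark values are all assumptions invented for illustration, not anything prescribed by the observational assessment approach itself.

```python
# Hypothetical rubric: each criterion scored 0-4 by the observing mentor.
rubric_scores = {
    "greets the customer": 4,
    "identifies the problem": 3,
    "applies the fix safely": 2,
    "confirms resolution": 3,
}

# Agreed benchmark for a competent performance of the task (illustrative).
benchmark = {
    "greets the customer": 3,
    "identifies the problem": 3,
    "applies the fix safely": 3,
    "confirms resolution": 3,
}

def gaps_vs_benchmark(scores, benchmark):
    """Return each criterion where the observed score falls short,
    mapped to the size of the gap."""
    return {c: benchmark[c] - s for c, s in scores.items() if s < benchmark[c]}

print(gaps_vs_benchmark(rubric_scores, benchmark))
```

Because every observation produces the same structured scores, gaps like this can be rolled up across individuals, compared over time, and fed straight into a coaching report.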

Developing an observational assessment should go beyond the tasks someone has to do to perform their day-to-day work. It should embody the competencies necessary to solve business problems. Observational assessments allow organizations to capture performance data and measure the competencies necessary to push the organization to be successful.

If you would like more information about observational assessments, click here.

See You in LA!

Posted by Eric Shepherd

I am looking forward to meeting old friends and new at this year’s Questionmark Users Conference in Los Angeles March 15 – 18!

LA is a place that revels in finding new ways to do things, and the conference will reflect that spirit by exploring a sea change that’s transforming the world of learning and assessment: the increasing adoption of social and informal learning initiatives by organizations of all stripes.

One of the things we’ll be talking about at the conference is the 70+20+10 model for learning and development, which I recently wrote about in my own blog. This model suggests that about 70% of what we learn comes from real-life and on-the-job experiences, with about 20% coming from feedback and from observing and working with other people. That leaves about 10% of learning taking place through study or formal instruction. So how do we measure the other 90%? Where does assessment fit into 70+20+10? These questions will make for some lively conversation!

We’ll be providing some answers to them by showing how Questionmark’s Open Assessment Platform works together with many commonly used informal/social learning technologies such as wikis, blogs and portals – and we’ll be showing how we will build on that going forward. We’ll demonstrate features and applications ranging from embedded, observational and mobile assessments to content evaluation tools, open user interfaces, new authoring capabilities in Questionmark Live, and next-generation reporting and analytics tools.

Of course we’ll share plenty of information and inspiration about assessments in the here and now as well as in the future! In addition to tech training, case studies, best practice sessions and peer discussions, you’ll be able to meet one-on-one with our technicians and product managers and network with other Perception users who share your interests.

I can’t wait to welcome you to the conference and I am looking forward to learning together with you. The conference program offers something for every experience level, so I hope you will take a look at it, sign up soon and join us in Los Angeles.