Video: Applying the A-model to a business problem

Posted by Julie Delazyn

In a recent post on his own blog, Questionmark CEO Eric Shepherd offered some insights about the 70+20+10 learning model, in which social and informal learning play key roles in knowledge transfer and performance improvement.

The A-model, developed by Dr. Bruce C. Aaron, helps organizations make the most of all types of learning initiatives, both formal and informal, by providing an effective framework for defining a problem and its solution, implementing the solution, and tracking the results.

Eric’s post, A-model and Assessment: The 7-Minute Tutorial, notes the great feedback we have received in promoting awareness of the A-model and includes a brief video that walks viewers through a business problem and explains how to approach it using the A-model.

You can check out this video, by Questionmark’s Doug Peterson, right here. If you’d like more details, you can download the white paper we produced in collaboration with Dr. Aaron: Alignment, Impact and Measurement with the A-model.

Conference Close-up: Alignment, Impact & Measurement with the A-model

Posted by Joan Phaup

Key themes of the Questionmark Users Conference March 20–23 include the growing importance of informal and social learning, as reflected in the 70+20+10 model, and the role of assessment in performance improvement and talent management. It’s clear that new strategies for assessment and evaluation are needed within today’s complex workplaces.

We’re delighted that measurement and evaluation specialist Dr. Bruce C. Aaron will be joining us at the conference to talk about the A-model framework he has developed for aligning assessment and evaluation with organizational goals, objectives and human performance issues.

A conversation Bruce and I had about the A-model explores the changes that have taken place in recent years and today’s strong focus on performance improvement.

“We don’t speak so much about training or even training and development anymore,” Bruce explained. “We speak a lot more about performance improvement, or human performance, or learning and performance in the workplace. And those sorts of changes have had a great impact in how we do our business, how we design our solutions and how we go about assessing and evaluating them…We’re talking about formal learning, informal learning, social learning, classroom, blended delivery, everything from online learning to how people collect information from their networks and the knowledge management functions that we’re putting in place.”

In a complex world that requires complex performance solutions, Bruce observed, “the thing that doesn’t change is our focus on outcomes.”

The A-model evolved out of the need to stay focused on goals and to logically organize the components of learning, evaluation and performance improvement. It’s a framework or map for holding the many elements of human performance in place, right from the original business problem or issue up through program design and evaluation.

You can learn more about this from Bruce’s white paper, Alignment, Impact and Measurement with the A-model, from this recording of our conversation — and, of course, by attending the Users Conference! Register soon!

A whistle-stop tour round the A-model

Posted by John Kleeman

In an earlier blog, I described how the A-model starts with Problem, Performance and Program. But what is the A-model? Here’s an overview.

It’s easy to explain why it’s called the A-model. The model (shown on the left) traces the progress from Analysis & Design to Assessment & Evaluation. When following the A-model, you move from the lower left corner of the “A” up to the delivery of the Program (at the top or apex of the “A”) and then down the right side of the model to evaluate how the Problem has been resolved.

The key logic in the model is that you work out the requirements for success in Analysis & Design, and then you assess against them in the Assessment & Evaluation phase.

A-model overview

Analysis and Design

During Analysis & Design, you define the measures of success, including:

  • How can you know or measure if the problem is solved?
  • What performance are you looking for and how do you measure if it is achieved?
  • What are the requirements for the Program and how do you measure if they are met?

Defining these measures is crucial, because Assessment & Evaluation is carried out against what you’ve worked out in Analysis & Design. Assessments are also useful within Analysis & Design itself – for example, needs assessments, performance analysis surveys, job task analysis surveys and employee/customer opinion surveys.

Assessment & Evaluation of Program

A common way to evaluate the Program is to administer surveys covering perceptions of satisfaction, relevance and intent to apply the Program in the workplace. This is like a “course evaluation survey”, but it focuses on all the requirements for the Program. Evaluation of program delivery also includes other factors identified in Analysis & Design that indicate whether the solution is delivered as planned (for example, whether the program is delivered to the intended target audience at the right time).

Assessment & Evaluation of Performance

The A-model suggests that in order to improve Performance, you identify Performance Enablement measures – enablers that are necessary to support the performance, typically learning, a skill, a new attitude, performance support or an incentive.

You may be able to measure the performance itself using business metrics such as the number of transactions processed or other productivity measures. Assessments can also be useful for measuring performance and performance enablers, for instance:

  • Tests to assess knowledge and skill
  • Observational assessments (e.g. a workplace supervisor assessing performance against a checklist)
  • 360-degree surveys of performance from peers, colleagues and managers

Measuring whether the problem is solved

How you measure whether the problem is solved follows from the measures defined during Analysis & Design. A useful mechanism can be an impact survey or follow-up survey, but there should also be concrete business data to provide evidence that the problem has been solved or business performance improved.

Putting it all together: the A-model

The key to the A-model is putting it all together, as shown in the diagram below. You define the requirements for the Problem, the Performance, Performance Enablement and the Program. Then you assess the outcomes: the delivery of the Program, Performance Enablement, the Performance itself, and finally the Impact on the business.


I hope you enjoyed this whistle-stop tour. For a more thorough explanation of the A-model, read Dr. Bruce C. Aaron’s excellent white paper, available here (free with registration).