Effective assessment feedback needs motive, opportunity and means

Posted by John Kleeman

Assessment feedback, whether it relates to a question, a topic or an entire assessment, has tremendous value – but only if learners make practical use of it! I’d like to share some solid approaches to feedback and talk about how a couple of Questionmark reports can help you put them into practice.

From Andrew Morgan (quoted in Valerie Shute’s excellent ETS research report on feedback), we get the concept that to be effective and useful, feedback needs the following:

  • Motive – the learner wants to follow the feedback
  • Opportunity – the learner has it in time to use it: it’s not given too late for action
  • Means – the learner is able to use it: the feedback is actionable

Another good way to think about feedback comes from Dr Steve Draper of the University of Glasgow School of Psychology in his presentation at eAssessment Scotland in 2012:

“There is no point in giving feedback to a learner unless the learner acts on it: does something concrete and differently because of it”.

Feedback needs to be processed to be useful

Feedback that the learner doesn’t read isn’t valuable.

Feedback that the learner reads but doesn’t process isn’t valuable.

You must get the learner to evaluate the feedback and adjust their thinking as a result – i.e. process the feedback and do something with it.

I’ve been wondering about how you can apply these concepts using the Questionmark coaching report when presenting an assessment score as feedback.

Most learners are motivated to use the score they achieve in a test as feedback; they want to get a higher score next time. And if they can take a test again, they have the opportunity to use the feedback. But a score on its own is just a number. How can you help learners use their scores as catalysts for action?

Clearly, a score is more valuable if it can be compared to something, and there are three obvious comparisons:

  • Ipsative, comparing score to previous attempts: have you done better than last time?
  • Criterion referenced, comparing score to a benchmark: have you reached the desired standard?
  • Normative, comparing score to how others do: how do you compare to your peers?
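To make the distinction concrete, here is a minimal Python sketch of the three comparison frames. The function name, scores and benchmark values are illustrative only, not part of Questionmark’s reports:

```python
def compare_score(score, previous_scores, benchmark, peer_scores):
    """Evaluate one test score under the three comparison frames."""
    return {
        # Ipsative: did I beat my own best previous attempt?
        "ipsative": score > max(previous_scores) if previous_scores else None,
        # Criterion-referenced: did I reach the desired standard?
        "criterion": score >= benchmark,
        # Normative: how far am I above or below the peer-group average?
        "normative": score - sum(peer_scores) / len(peer_scores),
    }

# Example: a score of 72, previous attempts of 60 and 68,
# a benchmark of 70, and peers who scored 65, 75 and 70.
result = compare_score(72, [60, 68], benchmark=70, peer_scores=[65, 75, 70])
```

The same number tells three different stories: here the learner has beaten their personal best, cleared the benchmark, and sits 2 points above the peer average.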

Questionmark’s Transcript report lets learners view all their results and see how they improve or change over time. And Questionmark’s Coaching report includes the concept of benchmarks – you can set a benchmark for the assessment and for each topic. What you may not know is that the Coaching report also allows you to compare a score against the average of a group of other test-takers. You can define the group by a variety of demographics and then display how the participant’s score compares with the group average. This screenshot shows how to set this up:

Setting a comparison in the coaching report

Giving learners information about how they compare to others can be a powerful motivator; I encourage you to explore this capability of the Questionmark Coaching report.

For more on general concepts of processing feedback, see Steve Draper’s interesting page here. Questionmark users can see more information on the Coaching report comparison options on the Questionmark support site. And if you want to hear more from me about assessment feedback, I’ll be speaking about it at the Questionmark user conference in March.

Putting coaching reports front and center

Posted by Joan Phaup

As we approach our first early-bird registration deadline for the Questionmark 2013 Users Conference, I’m checking in with people who will be presenting various sessions during our three days together in Baltimore March 3 – 6.

Abdulquader Kinariwala

It was great to talk the other day with Accenture Certification Manager Abdulquader Kinariwala, who will deliver a case study presentation with his colleague John Kessler about the use of coaching reports not only to help people succeed in certification exams but also to improve their overall work performance.

Could you briefly describe the role of Questionmark assessments in Accenture’s certification program?

Assessments are a very important part of our evaluation mechanism. When people claim that they are certified – whether it’s a professional certification or a technology certification – they must have a credential to prove their qualifications. And to be credentialed, a person must take an assessment. Assessments are an integral part of our certification process.

I understand you will be zeroing in on the use of coaching reports in your conference presentation. Why are coaching reports important to you?

When we are talking about certifications or learning in general, our assessment framework emphasizes the value of how we coach people to be successful. If we have an employee who is not successful in a certain assessment in a certification program, we share with them the coaching report of their previous performance in the assessment, and we have a discussion about it. With the employee’s permission, we will share the report with his or her career counselor, who will work with the person before they take the assessment again.

Coaching reports are often regarded as a peripheral mechanism for improving performance in an assessment, but we see coaching as a core element of learning. We use coaching reports not only for failed outcomes but also for partial successes. Someone may have passed an assessment, but there might be a section or two where they could have done better. Our coaching reports provide details about individual topics, so we can pinpoint specific areas where there is room for improvement.

What makes Accenture’s use of coaching reports unique?

We have a customized version of the Questionmark Coaching Report – developed with Questionmark consulting – that allows us to download an Excel extract from the assessment repository. We feed that file into one of our custom communication tools, which we created in-house. This tool creates an email that highlights the person’s scores in particular sections and calls out areas where they can improve.

What do you hope people will gain from attending your case study presentation?

We’d like them to get an understanding of how we are using coaching reports – even for successful candidates – and gain some insights into the custom solution we worked on with Questionmark. We will also have a discussion about how other people are currently using coaching reports, so everyone should have an opportunity to learn from one another.

You’ve attended several Questionmark Users Conferences. What do you find most valuable about them?

The ability to network and interact with the Questionmark teams, especially product development, consulting and customer services. I also get a lot out of the keynotes as well as the customer presentations. I like to hear from people who have been using Questionmark and consider how to adapt their ideas to our own situation.


Note: The early-bird Questionmark Users Conference registration discount of $200 ends today! Check out other conference sessions and click here to sign up.

What is ipsative assessment and why would I use it?

Posted by John Kleeman

As I’m writing this, I’ve just got back from the gym, where I beat my personal best distance on an exercise bike. What’s this got to do with computerized assessment, you might ask? Hear me out.

You’re probably familiar with norm-referenced testing and criterion-referenced testing:

  • A norm-referenced test compares a test-taker against his or her peers. For example, you might compare my results with those of my Questionmark colleagues. (If you did, then seeing how energetic many are in the gym, I suspect my performance would not compare well!)
  • A criterion-referenced test measures a test-taker against external criteria. For example, it might be that people of a certain age should be expected to reach a certain distance in a certain time on an exercise bike.

A third type is sometimes called ipsative assessment.

  • An ipsative assessment in an education/learning context compares a test-taker’s results against his or her previous results. This is how I measure myself at the gym – I am pleased that I am doing better than I have before. I’m not worried if this meets some external criteria or if I’m better or worse than other people.

It’s very common to use criterion-referenced tests as computerized assessments because they help us measure competence. If you want to be sure that your employees know the rules, if you want to validate a pilot to fly a plane, or if you want to check that someone has understood training, a criterion-referenced test is usually the way to go.

But an advantage of ipsative assessment is that it measures progress and development – a test-taker can see if he or she is improving and whether or not he/she is taking advantage of feedback from previous assessments. Using ipsative assessment can help all test-takers improve: A weaker performer will be encouraged by seeing performance improvements over earlier attempts, and a stronger performer can be challenged to do better. This can deal with the risks of the weaker performer becoming demotivated from a poor test result and the stronger performer becoming complacent from a good one. Ipsative assessment can be used for objective measures (e.g. did I get a better score?) and also for more subjective measures (e.g. am I more confident about something?).
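As an illustration of the ipsative idea – measuring each attempt against the learner’s own history rather than a benchmark or peer group – here is a small hypothetical Python tracker. The class name and messages are my own invention, not Questionmark functionality:

```python
class IpsativeTracker:
    """Track a learner's attempts and report progress against their own history."""

    def __init__(self):
        self.attempts = []

    def record(self, score):
        # Compare against the personal best before adding the new attempt.
        best = max(self.attempts) if self.attempts else None
        self.attempts.append(score)
        if best is None:
            return "first attempt recorded"
        if score > best:
            return f"personal best: up {score - best} on your previous best"
        return f"best so far is {best}; keep practising"

tracker = IpsativeTracker()
tracker.record(55)   # first attempt
tracker.record(62)   # improvement over 55
tracker.record(60)   # below the personal best of 62
```

Note that the feedback never mentions a pass mark or other test-takers – the only reference point is the learner’s own earlier performance, which is what keeps both weaker and stronger performers motivated.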

Questionmark software makes it easy to produce coaching reports on each attempt at an assessment, and these can easily be used to allow test-takers to compare results from previous attempts and see how they’ve improved. This is particularly useful for observational assessments, which measure skill and performance – areas where everyone wants to improve and there can never be a perfect score.

To learn more about ipsative assessment in education and learning, one resource is this study by Dr Gwyneth Hughes of the Institute of Education. (As a heads-up, the term ipsative measure is also used in a different, technical way in psychological testing as a within-person measure.)

Expertise is built up by deliberate practice, and being tested can help identify where that practice is needed. I think it’s helpful for all of us to remember that progress and improvement are useful things to measure, as well as achievement and competency.

Gathering comments from survey and course evaluation participants

Posted by Kate Soper

Are you delivering surveys or course evaluations? Why not enhance your questions with a participant comments box? The comments box sits neatly below the question and gives your participants space to expand on their answers or provide you with additional comments and suggestions.

You can turn the participant comments box on and off within Questionmark Perception’s Question Editor. It’s easy to add or omit a comments box when you are authoring in Questionmark Live, too. You can add participant comment boxes to most question types and view the responses in the Survey Report, Coaching Report and Question Statistics Report. Participant comments boxes are also useful for questions you are beta testing.

For more information on using Questionmark Live to create your Course Evaluations, check out Jim Farrell’s blog post on that subject. Questionmark Support Plan customers can get more information about setting up comments boxes in this Knowledge Base article.