Evidence that topic feedback correlates with improved learning

Posted by John Kleeman

It seems obvious that topic feedback helps learners, but it’s great to see some evidence!

Here is a summary of a paper, “Student Engagement with Topic-based Facilitative Feedback on e-Assessments” (see here for full paper) by John Dermo and Liz Carpenter of the University of Bradford, presented at the 2013 International Computer Assisted Assessment conference.

Dermo and Carpenter delivered a formative assessment in Questionmark Perception over a period of three years to 300 students on an undergraduate biology module. All learners were required to take the assessment at least once and could re-take it as many times as they wanted; most took the test several times. The assessment gave no question-level feedback, but it did give topic feedback on the 11 main topic areas covered by the module.

The intention was for students to use the topic feedback in their revision and study to diagnose weaknesses in their learning, with the comments provided directing their further work. Students were encouraged to incorporate this feedback into their study planners and to take the test repeatedly, the expectation being that students who engage with their feedback and are “mindful” of their learning will benefit most.

Here is an example end-of-test feedback screen.

[Figure: assessment feedback screen showing topic feedback]

As you can see, learners received a “Distinction”, “Merit”, “Pass” or “Fail” for each topic, along with a topic score and some guidance on how to improve. The authors then correlated time spent on the tests, the number of questions answered, and the distribution of test attempts over time with each student’s score on the end-of-module summative exam. They found a correlation between taking the test and doing well on the exam: for example, the correlation coefficient between the number of attempts at the formative assessment and the score on the summative exam was 0.29 (Spearman rank-order correlation, p < 0.01).
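If you want to run a similar check on your own assessment data, a Spearman rank-order correlation is straightforward to compute. Here is a minimal sketch in Python; the data below are invented purely for illustration, and this is not the authors’ actual code or dataset.

```python
# Minimal sketch of a Spearman rank-order correlation, the statistic the
# authors report (rho = 0.29, p < 0.01). All numbers here are made up.
from scipy.stats import spearmanr

# Hypothetical per-student records: attempts at the formative assessment
# and score on the end-of-module summative exam.
attempts = [1, 3, 2, 7, 5, 4, 9, 2, 6, 8]
exam_scores = [52, 61, 48, 74, 66, 58, 81, 55, 70, 77]

rho, p_value = spearmanr(attempts, exam_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```

Spearman’s correlation works on ranks rather than raw values, so it suits data like these: attempt counts and exam scores need not be normally distributed or linearly related for the statistic to be meaningful.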

You can see some of their results below, with learners divided into top-, middle- and bottom-scoring groups on the summative exam. The chart shows that the top-scoring group answered more questions, spent more time on the test, and spread its effort over a longer period of time.

[Figure: clustered bar charts showing differences between the top, middle and bottom scoring groups on the dependent variables time, attempts, and distribution]
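To make this kind of grouping concrete, here is a small illustrative sketch (again with invented data and column names, not the authors’ analysis) that splits students into three equal-sized groups by summative exam score and compares average engagement per group, assuming the pandas library is available:

```python
# Illustrative tertile split like the one in the chart above.
# Data and column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "exam_score":      [52, 61, 48, 74, 66, 58, 81, 55, 70, 77, 63, 45],
    "minutes_on_test": [30, 55, 25, 90, 70, 50, 120, 35, 80, 100, 60, 20],
    "attempts":        [1, 3, 2, 7, 5, 4, 9, 2, 6, 8, 4, 1],
})

# Divide students into bottom/middle/top scoring groups of roughly equal size.
df["group"] = pd.qcut(df["exam_score"], 3, labels=["bottom", "middle", "top"])

# Mean engagement per group; in the study, the top group showed the most effort.
print(df.groupby("group", observed=True)[["minutes_on_test", "attempts"]].mean())
```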

The researchers also surveyed the learners, 82% of whom agreed or strongly agreed that “I found the comments and feedback useful”. Many students also noted that the assessment and feedback let them focus their revision time on the topics that needed the most attention; for example, one student said:

“It showed clearly areas for further improvement and where more work was needed.”

There could be other reasons why learners who spent time on the formative assessments did well on the summative exam: they might, for instance, have been more diligent in general. So this research demonstrates correlation, not cause and effect. It does, however, provide evidence that topic feedback is useful and valuable in improving learning, by telling learners which areas they are weak in and need to work on more. This seems likely to apply to the world of work as well as to higher education.

A Postcard from Preston

Posted by Steve Lay

Recently I attended the UK Academic briefing at the University of Central Lancashire in Preston, UK. It kicked off with Joseph Kelly, one of our Account Managers, giving an overview of some key technologies that make up the Questionmark assessment platform.

I also got a chance to update everyone on some of our current projects.

To finish off the event, John Dermo talked about the University of Bradford’s approach to formative assessment using Questionmark Perception. I always find John’s talks interesting and, as usual, he provided plenty of pictures to help us visualise the data captured during his research.

[Photo: John Dermo]

His analysis looked in depth at student use of formative feedback, one of the conclusions being that “students who view the formative [e-assessments] and feedback as part of their learning show the greatest amount of progress”.

The slides from John’s presentation at the briefing are now available here.

I’d like to thank our hosts for providing an excellent venue and everyone who attended for making this such an interesting event.

Conference Close-up: Sustaining large-scale e-assessment

Posted by Joan Phaup

The University of Bradford in the UK is delivering four times as many e-assessments as it did four years ago – about 60,000 annually these days.

John Dermo from the University’s Centre for Educational Development will explain how this came about – and how the university sustains this high level of assessment – during a case study presentation at the Questionmark European Users Conference in Brussels this October.

[Photo: John Dermo]

John’s session will build on some tips he shared in a previous blog article, but I asked him for a few details about what’s happening at the university and what he’ll be sharing at the conference.

Tell me about your work.

I’m responsible for technology-enhanced formative and summative assessment at the university. I work with a range of people involved in assessment in different ways: academic staff, administrators, IT support, the exams office and the invigilators. As well as initiating changes, I’m the go-between for the different groups.

A couple of projects that took place between 2007 and 2009 have paved the way for expansion and innovation in the area of assessment. Before that, we had limited, ad hoc use of e-assessment, but demand was growing, so we built support systems to meet it. We created a workflow model, figured out exactly who did what, and aimed to make the whole thing scalable. We also built a new e-assessment room to help grow the summative, high-stakes side of our assessment programme.

How has e-assessment at the university changed in the past few years?

With summative assessments, we’ve seen an increase in the speed with which we can get grades to students. Also, it’s now possible to use more multimedia, particularly high-res photographs. The sort of thing that’s too costly on paper is more practical on screen. We can also run more authentic types of assessments: we might combine a standard multiple-choice assessment with some other online or computer-based tool.

For low-stakes and formative assessments the impact has been slightly different. There has been an increase in the amount of feedback that can be given, and where that’s used it has been very popular with students. There has also been an increase in regular low-stakes assessments, which has certainly affected the way people use blended learning. There’s more interaction now than there was before.

What are the key issues and challenges in achieving sustainable development of e-assessment?

The key things are communication and knowing who does what at each point in the process. It’s easy to think that someone else is going to do a particular task, but that may not be so. Forward planning, fallback plans and communication between the roles are absolutely vital. It’s also important to give staff a certain level of autonomy. Yes, we need processes, but we need to allow for flexibility.

Another thing is keeping training and support as flexible as possible. Some people want to use e-assessment on a regular basis, others just once or twice a year, so we often need to deliver support and training on a just-in-time basis as well as through more structured programmes. But it’s important to be realistic about what you can do for people. You can’t do everything yourself, so you have to set realistic goals and negotiate the most practical way of delivering assessments, managing the workload between different groups as needed.

A big challenge is how to deal with the pioneers who drive innovation. Whilst of course you encourage the pioneers and the innovators, it’s more effective if you weave their enthusiasm into their teams. Relying on just one person won’t make innovation sustainable across the institution. A pioneer might move on, retire, something like that, and where does that leave you?  Having innovators share with those around them helps build a sustainable future.

Also, make sure you have some sort of institutionally recognized policy about assessment and keep revisiting it and documenting any changes that you make.

How will your session help people from other institutions expand their use of e-assessment?

Mainly by sharing my experience over the years in an institution where we have seen this growth. I’ll try to draw out practical tips that people can take away. I also want to give people the opportunity to share their own experiences.

I find the Questionmark users community really very, very supportive. It’s one of the reasons I’m attending the conference, in addition to being in constant contact with some members of the community.  I think the more we can share our experiences the better.

What are you looking forward to at the conference?

A lot! I’m looking forward to meeting up with some people I haven’t seen in a couple of years. I’m also very interested in the new functionality in Questionmark Perception version 5 because we are in the process of upgrading. And I want to learn about integrating v5 with other tools, in particular virtual learning environments.

There’s still time to register for the conference. Click here to learn more.

Podcast: An Innovative Approach to Delivering Questionmark Assessments


Posted by Sarah Elkins

The University of Bradford has recently developed an innovative e-assessment facility, using cutting-edge thin client technology to provide a 100-seat room dedicated primarily to summative assessment. The room provides enhanced security features for online assessment and was used for the first time in 2009 with considerable success. Its flexible design maximises usage by allowing for formative testing, diagnostic testing and general teaching.

John Dermo is the e-Assessment Advisor at the University of Bradford. In this podcast he explains the technology behind this unique setup and talks about the benefits and challenges of using the room. He will also be presenting a session at the 2009 European Users Conference, where he will go into more detail about the project.