Dr. Larsen’s five principles for test-enhanced learning in medical education

Dr Douglas Larsen

Posted by John Kleeman

In the first part of this interview, Dr. Douglas Larsen, an expert in medical education at Washington University in St. Louis, explained his research on how tests and quizzes taken during learning act as retrieval practice and aid learning and retention in medical education. Answering questions in written or computer tests gives practice in recollecting relevant facts and aids future retrieval of those facts when they are needed. In this final part of the interview, he explains his five principles for implementing test-enhanced learning successfully.

Your research shows that tests in medical education can significantly help long-term retention. What advice would you give medical educators?

A big misunderstanding is that the research suggests promoting more summative tests. What we’re actually talking about here is changing how we teach people, and simply having more tests at the end of a course probably won’t change a lot. This is an opportunity for educators to think about what it is they want students to learn to do. And then to make sure that the practice of doing it is incorporated in the entire educational process and not just simply at the end.

One of the things that this research has shown is that cramming (i.e. intensive study shortly before an exam) leads to very short-term effects. The benefits disappear quite quickly.

We have come up with five principles we think are important for long-term retention.

What are the five principles?

1. Closely align the testing with educational objectives.

2. Make sure the test involves generating or recalling, not just recognition.

Generation questions include free-recall, fill-in-the-blank, short-answer or essay-type questions. We’re still researching this area, but it seems the more that you force the learner to generate and organize their own structure of knowledge, the better. So the less the question enforces a structure, the better.

Some studies have shown benefits of multiple-choice tests, others have shown them to be no better than studying. I think the key to the success of a question is the amount of processing required. In some multi-step multiple-choice questions, you have to process and generate an answer, not just recognize the right answer, and this is better. Retrieval and having an opportunity to organize information yourself is important rather than just picking something out of a list.

3. Adequate repetition

There need to be enough practice opportunities for the knowledge or skill to “sink in”.

Just like when learning the piano, you have to practice many times. When learning information, there need to be multiple practice opportunities. It seems that procedural knowledge may not need to be repeated as often as declarative facts. We’ve seen in some studies that a single testing event can have effects years later when you are dealing with procedural information.

But declarative facts go away very quickly. In one of my studies, learning of facts was measured at 60-80% initially but had dropped to 40-50% in 2 weeks. You need a lot of repetition to interrupt the forgetting curve and maintain the information. The more times you retrieve something, the more likely you are to retain it.
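The numbers Dr. Larsen cites fit the classic exponential model of the forgetting curve. As a rough illustration only (the ~70% initial and ~45% two-week figures are taken from the ranges above, and the exponential form is an assumption of the standard model, not of his study), the implied rate of decay can be computed like this:

```python
import math

def retention(initial, decay_days, t_days):
    """Exponential forgetting-curve model: fraction of material
    still recallable t_days after learning, given an initial level
    and a characteristic decay constant in days."""
    return initial * math.exp(-t_days / decay_days)

# Illustrative numbers from the interview: roughly 70% initial recall
# falling to roughly 45% after two weeks implies a decay constant of:
decay = -14 / math.log(0.45 / 0.70)   # about 32 days

print(round(retention(0.70, decay, 0), 2))    # 0.7
print(round(retention(0.70, decay, 14), 2))   # 0.45
print(round(retention(0.70, decay, 60), 2))   # two months out, no further retrieval
```

Each successful retrieval effectively restarts (and flattens) this curve, which is why repeated testing interrupts forgetting in a way a single study session cannot.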

4. Adequate spacing

There has been research to show that if you want learning to last months and years, you need to space out your testing on the order of weeks and months, not days or hours.
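A spacing schedule on the scale Dr. Larsen describes can be sketched as a simple list of expanding gaps. The specific intervals below (one week, three weeks, two months, four months) are invented for illustration; the research only says the gaps should be weeks and months rather than days or hours:

```python
from datetime import date, timedelta

def spaced_schedule(start, gaps_days):
    """Return the review dates produced by applying each gap
    (in days) in sequence, starting from the initial learning date."""
    dates, current = [], start
    for gap in gaps_days:
        current = current + timedelta(days=gap)
        dates.append(current)
    return dates

# Assumed expanding intervals: 1 week, 3 weeks, 2 months, 4 months
for d in spaced_schedule(date(2024, 1, 1), [7, 21, 60, 120]):
    print(d.isoformat())
```

In a course, each of those dates would be a retrieval event (a quiz or test), not a re-reading session.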

5. Adequate feedback

There is definitely a testing effect without feedback, but the research has shown that with feedback, the effect is greatly amplified. People just learn more.

What is good practice in feedback?

There are a couple of principles with feedback.

One is that when people get immediate feedback – where someone answers a question and is immediately told whether it’s correct or incorrect – they probably don’t retain it as well. There needs to be a degree of delay in the feedback. The reason is that we need to let the information wash out of short-term/working memory and give learners a chance to re-process it.

Feedback that leads to re-processing – where you are forced to go back, work out why you answered incorrectly, and compare that to the correct answer – is likely to be the most beneficial. It’s important that learners actively process the feedback, not just passively read it. For instance, one technique we use is to have students go back and grade their own test; this makes them re-process.

What is your perspective on case study questions, where you navigate through a medical scenario and answer questions on the way?

I think those are excellent in the sense that you can better approximate your desired outcomes, because the questions are aligned with the learning objectives. If you want people to recognize elements of a case, deduce what they need to do and so on, you obviously need a context for that. So case studies can be very important. As before, it is best to structure the questions so that they involve recall rather than recognition.

What impact is the research on test-enhanced learning having on medical education?

Many people are very positive and very excited. The challenge is for people to understand all the implications, and to understand that we’re not talking about more standardized tests or more summative tests, but we’re really saying that people have to go back and look at how they teach, and how they incorporate retrieval practice into the longitudinal teaching experience.

That is the biggest challenge, but my hope is that, as we keep talking about it, people will catch the vision and it will have an even greater impact on how people both teach and learn.

Creating an Extended Matching Question Type

Extended Matching Questions are similar to multiple choice questions but test knowledge in a far more applied, in-depth way. This question type is now available in Questionmark Live browser-based authoring.

What it does: An Extended Matching question provides an “extended” list of answer options for use in questions relating to at least two related scenarios or vignettes. (The number of answer options depends on the number of realistic options for the test taker.) The same answer choice can be correct for more than one question in the set, and some answer choices may not be correct for any of the questions – so it is difficult to answer this type of question correctly by chance. A well-written lead-in question is specific enough that students understand what kind of response is expected without needing to look at the answer options.
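The structure described above – one shared option list, several vignettes, options reusable across questions or unused entirely – can be sketched as a small data structure. The diagnoses and vignettes here are entirely hypothetical, invented only to show the shape of the format:

```python
# One shared "extended" option list for the whole question set.
OPTIONS = ["Asthma", "Pneumonia", "Pulmonary embolism",
           "Heart failure", "COPD", "Anxiety"]

# Each question pairs a vignette with one option from the shared list.
# The same option may answer several questions; some options answer none.
questions = [
    {"vignette": "A 24-year-old with wheeze after exercise ...",
     "answer": "Asthma"},
    {"vignette": "A 70-year-old with fever, cough and consolidation ...",
     "answer": "Pneumonia"},
]

def score(responses):
    """Count correct responses; each response must come from the
    shared option list to be scorable."""
    return sum(1 for q, r in zip(questions, responses)
               if r in OPTIONS and r == q["answer"])

print(score(["Asthma", "Pneumonia"]))  # 2
```

Because distractors outnumber questions and are plausible for every vignette, guessing yields far fewer correct answers than it would in a standard four-option multiple choice item.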

Who should use it: It is often used in medical education and other healthcare subject areas to test diagnostic reasoning.

What’s the process for creating it? This diagram shows how to create this question type in Questionmark Live:

How it looks: Here is an example of an Extended Matching Question:

Virtual microscopy enhances medical school exams

Posted by Julie Delazyn

It’s fascinating to see the many different ways in which our customers use multimedia and other technology to create assessments that mimic the real world.

At the University of Connecticut Health Center (UCHC), this includes replicating the functions of a microscope within histology tests.

Faculty wanted students to be able to view a slide, just as they would when using a microscope, and answer questions about it. Now, they can click on a microscope icon within a test to view a slide, then zoom in and out and move the slide from side to side. It’s done by incorporating files created with a third-party Flash-based web application into a Questionmark Perception test – and we’re told the virtual “slides” look as clear as they would under an actual microscope.

Click here to read our case study about this and various other ways in which UCHC is using assessments.