Can online quizzes before lectures increase reading by literature students?

Posted by John Kleeman

It’s often suggested in higher education circles that computer-assisted assessment is more useful in scientific subjects than in the humanities.

I’d like to share a counter view from some research by Dr Judith Seaboyer at the University of Queensland. She presented a paper at the 2013 International Computer Assisted Assessment conference about how computerized quizzes can help greatly in teaching English Literature.

One challenge of Literature courses is ensuring students read required texts in advance of lectures: students often struggle to make time for the necessary reading, but if they skip it, they are likely to struggle later in essays and exams.

Dr. Seaboyer’s solution? Require students to take an online quiz before each lecture. Students must complete the quiz before midnight on the night before the first lecture on a text. The quiz, which presents 6 questions chosen at random from a pool of about 15, gives a small amount of course credit. The questions are made as Google- and eBook-search-proof as possible by using different words from those in the text, so answering them requires real reading and understanding.

Here, for example, is a question about Ian McEwan’s Atonement:

Where does Robbie notice a human limb, the memory of which will return to haunt him?

(a) in the fork of a tree

(b) in a Joe Lyons tea house

(c) on the beach at Dunkirk

The right answer is (a), but this would not be easy to identify by searching, as the book mentions a human “leg” not a “limb” and the other answers are plausible. Unless you’ve read the book recently, you will struggle to answer.
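
To make the mechanics concrete, here is a minimal sketch in Python (purely illustrative, not Dr Seaboyer’s actual setup) of drawing six questions at random from a pool of about fifteen and only awarding the small credit if the quiz is submitted before the midnight deadline. The pool contents and the deadline date are invented for the example.

```python
import random
from datetime import datetime

# Hypothetical question pool: in a real course this would come from the
# assessment system, with roughly 15 questions per text.
QUESTION_POOL = [
    {
        "prompt": "Where does Robbie notice a human limb, the memory of "
                  "which will return to haunt him?",
        "options": ["in the fork of a tree",
                    "in a Joe Lyons tea house",
                    "on the beach at Dunkirk"],
        "answer": 0,  # option (a)
    },
    # ... more questions, about 15 in total
]

QUESTIONS_PER_QUIZ = 6
# Midnight the night before the first lecture on the text (illustrative date).
DEADLINE = datetime(2013, 3, 4, 0, 0)

def build_quiz(pool, k=QUESTIONS_PER_QUIZ):
    """Draw k questions at random so each student sees a different subset."""
    return random.sample(pool, k=min(k, len(pool)))

def earns_credit(submitted_at):
    """Award the small amount of course credit only for on-time submissions."""
    return submitted_at <= DEADLINE
```

The random draw means neighbouring students see different subsets of the pool, which makes simply sharing answers much less useful than actually doing the reading.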

Students reported that the online quizzes motivated them to complete assigned reading before the lecture, as can be seen in the survey result below:

Survey result: “The online quiz motivates me to complete assigned reading before the lecture” (mean 4.31 on a Strongly Agree to Strongly Disagree Likert scale).

Dr. Seaboyer’s preliminary research suggests that 83% of first-year English students read at least 5 out of 6 books in a course where the quizzes were used, compared with around 45% in a control group.

I’ve seen other examples of quizzes encouraging learners to access learning material that they might otherwise put off until later, and I’d encourage others to consider this approach.

To quote Dr. Seaboyer:

“Computer-assisted assessment can result in more reading and persistent, careful, observant, resilient reading that leads to critical engagement.”

She also believes that this approach could apply across a range of other disciplines beyond Literature.

Assessment for virtual training

Posted by John Kleeman

At the suggestion of the Masie Center, I’ve been reading an interesting book, Virtual Training Basics by Cindy Huggett, and so I’ve been thinking about how you can use assessment effectively within virtual training.

Virtual training is an online event, where a trainer meets up with participants and instructs them in an online classroom or similar environment (for instance Microsoft Office Live Meeting, Webex or Adobe Connect). A recent survey by ASTD suggests that 6.4% of formal US training hours are virtual, which is a lot of training hours.

Assessment is a cornerstone of all learning, but when you are remote from your participants and so cannot see their facial reactions or body language, assessment is even more important than in face-to-face training. If you are delivering virtual training, here are some ways you can consider using assessments.

  • Pre-test: A pre-test before the virtual training session is valuable for understanding participants’ knowledge in advance and for creating intrigue. In virtual training it’s harder to engage participants or check verbally what they already know, so pre-tests are particularly important.
  • Poll slides: Many systems allow poll slides, which present simple questions — usually multiple choice — that allow you to check participants’ views or reactions. These are basic, but easy and useful.
  • Real-time knowledge checks: While poll slides are helpful, they don’t usually store the results or identify people. In longer sessions, short quizzes that check knowledge of topics within the course are sometimes preferable. People can take the assessments in real time and you can see the results collectively and by individual. This is very easy to set up in Questionmark Perception. In some tools, like Live Meeting, you can simply include a web page (see here for instructions) and each participant will get their own version of the quiz to fill in.
  • Course evaluations: These are important for all training, but in virtual training, where you cannot see reactions face-to-face, they are vital. Every virtual training session should have a course evaluation, and it should include questions on the virtual experience as well as the usual ones.
  • Post-course tests: As with any other session, virtual or real, people will forget over time, and questions sent after the event can counter forgetting and reinforce learning (for one simple way of comparing pre- and post-test results, see the sketch after this list).
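
As a small illustration of the pre-test and post-course test ideas above, here is a Python sketch (illustrative only, not tied to any particular tool) that compares each participant’s pre-test and post-test scores to give a rough measure of learning gain. The names and scores are invented.

```python
# Hypothetical pre- and post-test percentages for three participants.
pre_scores = {"alice": 40, "bob": 55, "carla": 70}
post_scores = {"alice": 75, "bob": 80, "carla": 85}

def learning_gain(pre, post):
    """Return the change in score for each participant who took both tests."""
    return {name: post[name] - pre[name] for name in pre if name in post}

gains = learning_gain(pre_scores, post_scores)
average_gain = sum(gains.values()) / len(gains)
print(gains)          # {'alice': 35, 'bob': 25, 'carla': 15}
print(average_gain)   # 25.0
```

Even a rough comparison like this gives you a way of measuring the success of the session, per participant and on average.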

As Internet speeds get faster, software improves and travel challenges and costs grow, more and more of us are going to be delivering virtual training. I think assessment within virtual training will be essential to making the training successful and also measuring its success.

Topic-based feedback goes to the ball

Posted by John Kleeman

In talking with some of our customers last week, I was reminded how valuable it can be to offer participants topic-based feedback.

Obviously, everyone wants to know whether they’ve passed or failed a test. And most people look at the feedback on questions they got wrong, to understand how to improve. But you can get a single question wrong for many different reasons: misunderstanding the question, making a mistake or having a tiny gap in knowledge. If you score poorly across a whole topic, however, it very likely means you have a weakness in that area that needs addressing.

In many ways, topic feedback is the Cinderella of the feedback world. Everyone expects assessment-level feedback and item-level feedback (perhaps it’s unfair to call them the ugly sisters, because they are useful and valuable), but there is huge and often untapped learning value in topic feedback. For pre-tests, post-course tests, quizzes during learning and practice tests in particular, topic feedback is vital.

Suppose someone takes a health and safety assessment and scores 66%, which is a passing score. Sounds good! But what happens if, as in the screenshot below, they’ve scored very well in some topics and poorly in others?

Screenshot: assessment feedback showing scores of 88% and 100% in two topics and 63% and 13% in the other two; the weaker topics have links to learning resources.

In this example, the fact that someone is very weak in electrical safety could well be concerning. (Don’t let them set up the lighting for the Holidays Ball!)
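
To show what working at the topic level means in practice, here is a minimal sketch, in Python rather than any Questionmark feature, that rolls individual item results up into per-topic percentages and flags the weak topics. The topic names, results and 50% threshold are invented for illustration.

```python
# Illustrative item results: (topic, answered correctly?) for one participant.
item_results = [
    ("Fire safety", True), ("Fire safety", True),
    ("Manual handling", True), ("Manual handling", False),
    ("Electrical safety", False), ("Electrical safety", False),
]

WEAK_TOPIC_THRESHOLD = 50  # percent; below this, point the learner to resources

def topic_scores(results):
    """Return the percentage of correct answers per topic."""
    totals, correct = {}, {}
    for topic, is_correct in results:
        totals[topic] = totals.get(topic, 0) + 1
        correct[topic] = correct.get(topic, 0) + int(is_correct)
    return {t: 100 * correct[t] / totals[t] for t in totals}

scores = topic_scores(item_results)
weak_topics = [t for t, s in scores.items() if s < WEAK_TOPIC_THRESHOLD]
print(scores)       # {'Fire safety': 100.0, 'Manual handling': 50.0, 'Electrical safety': 0.0}
print(weak_topics)  # ['Electrical safety']
```

The overall score hides exactly this kind of pattern, which is why the topic-level view is so useful for directing learners to the right resources.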

It’s obvious that you want to give people feedback at the topic level, but in many tools this isn’t as easy to set up as it should be. Questionmark Perception can be a fairy godmother for topic-based feedback, with lots of easy-to-use capabilities for presenting it. Here are some links to support resources to help you create topic feedback in Perception.

  • You can easily create topic outcomes with feedback for different topic scores in Authoring Manager
  • You set these as standard at the topic level, but can override them on a per-assessment basis
  • You can also make a topic a prerequisite for passing an assessment (for instance, to prevent someone passing unless they achieve a specified score, such as 60%, in key topics; a generic sketch of this rule follows the list).
  • If you want to display only some topics in the list and not all, for instance if some of your topics aren’t meaningful to the participant, you can define which topics are reported on.
  • And the feedback is then easily displayed to participants at the end of the assessment, as in the screenshot above.
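
For the prerequisite idea in the third bullet above, here is a generic sketch (not Questionmark Perception’s actual API) of the rule: the participant only passes if the overall score clears the pass mark and every key topic clears its own minimum. The pass mark, topic names and minimums are illustrative.

```python
def passes(overall_score, topic_scores, pass_mark=50, key_topic_minimums=None):
    """Return True only if the overall score and all key-topic minimums are met."""
    key_topic_minimums = key_topic_minimums or {}
    if overall_score < pass_mark:
        return False
    return all(topic_scores.get(topic, 0) >= minimum
               for topic, minimum in key_topic_minimums.items())

# Example: 66% overall, but electrical safety is a key topic requiring 60%.
print(passes(66, {"Electrical safety": 13, "Fire safety": 100},
             key_topic_minimums={"Electrical safety": 60}))   # False
```

With a rule like this, the health and safety candidate from the example above would not pass on 66% overall, because the 13% in electrical safety falls below the required minimum.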

I hope you find this useful in getting your topic feedback working and helping your learners achieve their full potential.