Curiosity helps learning stick: how can assessments help?

Posted by John Kleeman

Recent learning research has provided further evidence that being curious aids retention of learning. You are more likely to remember something if you were in a curious state of mind when you learned it. This blog article explores how assessments can help.

A recent study at the University of California, Davis found that people retained more information during learning when they were more curious. You can see the full paper here (paywall) and a Scientific American summary here. Participants learned the answers to a series of trivia questions while also being shown some unrelated faces. Curiosity was measured both by asking participants to report their curiosity level and via brain scanning. When tested a day later, participants scored higher on the trivia questions they had been more curious about. They also (to a smaller extent) better recognized the faces they’d seen incidentally at the times when they had been more curious.

You can see the average recall figures from the day after learning in the chart below: participants remembered 45.9% of the trivia answers they were highly curious about, as against 28.1% for low-curiosity questions, and they also recognized the incidental faces slightly better (35.2% vs. 31.2%).

Essentially, the study provides evidence that people learn and retain information better when they are more curious. It also suggests, to quote the authors, that “stimulating curiosity ahead of knowledge acquisition could enhance learning success”.

So if curiosity stimulates learning, how can assessments help?

The most obvious way is to use pre-course tests and other questions before learning to create intrigue and stimulate curiosity. Pre-tests have many other benefits: combined with a post-test, they let you measure the change resulting from learning, and they give instructors an understanding of which topics participants already know. But a key benefit of such a pre-test is that it stimulates curiosity and so puts participants in the state of mind that triggers the retention benefits shown above.
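As a simple illustration of the pre/post measurement idea, here is a short Python sketch comparing average pre-test and post-test scores. The scores are hypothetical, not data from the article; the normalized gain formula (improvement as a share of what was left to learn) is one common way to express the change:

```python
# Hypothetical scores: comparing pre-test and post-test results
# to measure the change resulting from learning.
pre_scores = [40, 55, 30, 65]    # percent correct before the course
post_scores = [70, 80, 60, 85]   # percent correct after the course

avg_pre = sum(pre_scores) / len(pre_scores)     # 47.5
avg_post = sum(post_scores) / len(post_scores)  # 73.75
raw_gain = avg_post - avg_pre                   # 26.25 points

# Normalized gain: raw improvement divided by the room for improvement.
normalized_gain = raw_gain / (100 - avg_pre)    # 0.5
print(raw_gain, round(normalized_gain, 2))      # 26.25 0.5
```

The normalized figure is useful because a 10-point gain means more for a group that started at 80% than for one that started at 40%.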

You can deliver curiosity-stimulating questions in many ways, but one option worth considering is delivery to mobile devices, which is easy to do with Questionmark software. If you have participants coming to a course, think about how you could use Questionmark assessments that work on smartphones to stimulate their curiosity.

The other side of the coin is that people who are not curious about or interested in a subject will be less apt to retain what they learn about it. This is another argument for allowing people to test out of compliance training. If people are forced to take training on things they already know, not only will they not be curious, they will likely be actively demotivated. That demotivation could easily spread to other learners and other courses, devaluing learning and training more broadly. There are several articles on testing out of compliance training on this blog; see, for example, Good practice from PwC in testing out of training or Testing out of training: It can save time and money.

What other ways are there to use assessments to stimulate curiosity? I’d love to hear your ideas. For as Arnold Edinborough said: “Curiosity is the very basis of education and if you tell me that curiosity killed the cat, I say only the cat died nobly.”

And I can’t help asking myself: “Did curiosity kill the cat, or should she have asked more questions?”

John Kleeman will discuss the different ways to use assessments at the 2015 Users Conference in Napa Valley, March 10-13. Register before Dec. 17 and save $200.

Item Development – Organizing a content review committee (Part 1)

Posted by Austin Fossey

Once your items have passed through an initial round of edits, it is time for a content review committee to examine them. Remember to document the qualifications of your committee members and, if possible, recruit different people from those who wrote the items or conducted other reviews.

In their chapter in Educational Measurement (4th ed.), Cynthia Schmeiser and Catherine Welch explain that the primary function of the content review committee is to verify the accuracy of the items with regard to the defined domain, including the content and cognitive classification of each item. The committee might answer questions like:

  • Given the information in the stem, is the item key the correct answer in all situations?
  • Is enough information provided in the item for candidates to choose an answer?
  • Given the information in the stem, are the distractors incorrect in all situations?
  • Would a participant with specialized knowledge interpret the item and the options differently from the general population of participants?
  • Is the item tagged to the correct area of the specifications (e.g., topic, subdomain)?
  • Does the item function at the intended cognitive level?

Other content review goals may be added depending on your specific testing purpose. For example, in their chapter in Educational Measurement (4th ed.), Brian Clauser, Melissa Margolis, and Susan Case observe that for certification and licensure exams, a content review committee might determine whether items are relevant to new practitioners—the intended audience for such assessments.

Schmeiser and Welch also recommend that the review process be systematic, implying that the committee should apply a consistent level of scrutiny and consistent decision criteria to each item they review. But how can you, as the test developer, keep things systematic?

One way is to use a checklist of the acceptance criteria for each item. By using a checklist, you can ensure that the committee reviews and signs off on each aspect of the item’s content. The checklist can also provide a standardized format for documenting problems that need to be addressed by the item writers. These checklists can be used to report the results of the content review, and they can be kept as supporting documentation for the Test Development and Revision requirements specified by the Standards for Educational and Psychological Testing.
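To make the checklist idea concrete, here is a minimal Python sketch of how acceptance criteria, sign-off, and documented problems might be recorded per item. The criteria strings paraphrase the committee questions above; the field names and structure are illustrative assumptions, not a Questionmark feature:

```python
from dataclasses import dataclass, field

# Hypothetical acceptance criteria, paraphrasing the committee questions above.
CRITERIA = [
    "Key is correct in all situations given the stem",
    "Enough information is provided to choose an answer",
    "All distractors are incorrect in all situations",
    "No alternate interpretation by participants with specialized knowledge",
    "Tagged to the correct area of the specifications",
    "Functions at the intended cognitive level",
]

@dataclass
class ItemReview:
    """One committee checklist for one item (illustrative structure only)."""
    item_id: str
    reviewer: str
    results: dict = field(default_factory=dict)   # criterion -> True/False
    comments: list = field(default_factory=list)  # problems for the item writers

    def record(self, criterion: str, passed: bool, note: str = "") -> None:
        """Sign off on (or flag) a single criterion."""
        self.results[criterion] = passed
        if not passed and note:
            self.comments.append(f"{criterion}: {note}")

    @property
    def accepted(self) -> bool:
        # An item is accepted only when every criterion was reviewed and passed.
        return all(self.results.get(c, False) for c in CRITERIA)

review = ItemReview(item_id="ITEM-042", reviewer="Committee A")
for criterion in CRITERIA:
    review.record(criterion, passed=True)
print(review.accepted)  # True
```

A structure like this enforces the “systematic” property Schmeiser and Welch describe: no item can be accepted with a criterion left unreviewed, and every rejection carries a documented reason back to the item writers.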

In my next post, I’ll suggest some ways for you, as a test developer, to leverage your content review committee to gather content validity evidence for your assessment.

For best practice guidance and practical advice for the five key stages of test and exam development, check out our white paper: 5 Steps to Better Tests.

Share Your Story in Napa Valley

Posted by Julie Delazyn

We have officially announced the 2015 Users Conference March 10-13, and we look forward to seeing you at the Napa Valley Marriott Hotel & Spa for this important learning event.

In order to create a rich and varied conference program, we have opened a call for proposals and invite you to submit your idea for a case study presentation or a peer discussion soon.

How do you know if you should submit a proposal? If any of the statements below applies to you, we look forward to hearing from you!

  • Your experience with Questionmark technologies will help others
  • You have found innovative ways to use online assessments
  • You can explain how you organized your assessment program and what you learned
  • You have gained a lot from previous conferences and want to contribute in 2015
  • You are using assessments to support organizational goals
  • You have a unique application of online or mobile assessments
  • You have integrated Questionmark with another system

We are seeking case study and discussion proposals from now until November 20, so consider what you’d like to contribute.

Aside from helping your fellow Questionmark users by sharing your story, please note that presenters will also get some perks.

Presenters and discussion leaders will receive some red carpet treatment — including a special dinner in their honor on Wednesday, March 10. And we award one 50% registration discount for each case study presentation.

Click here for more details and proposal forms.

Even if you are not sure you’ll attend the conference, we would like to hear from you! And whether you plan to present or not, plan now to include the conference in your budget for 2015. You will find information about conference return on investment and an ROI toolkit here.

See you in Napa Valley for the 2015 Users Conference

Posted by Julie Delazyn

We’re very excited to announce plans for the 2015 Questionmark Users Conference.

Questionmark users will get together to learn best practices and discover new uses for online assessments from (drumroll, please) March 10 to 13 at the Napa Valley Marriott Hotel & Spa in the heart of California Wine Country.

Mark your calendar now for this important learning event, and register as soon as you can.

Here’s what you can expect during this gathering:

  • Real-world case studies by Questionmark users
  • Introductions to new solutions and features
  • Sessions explaining Questionmark features & functions
  • Presentations about testing and assessment best practices
  • Opportunities to influence future solutions
  • One-on-one meetings with Questionmark technicians
  • Plenty of time to network with your peers

Here’s some of the feedback we’ve received from people who joined us at this year’s conference:

“Excellent conference every year.”

“Really good sessions. I learned a lot of things that will help me improve our operation.”

“I got all the answers to all my questions and met the right people within my first day here. This conference is an experience I would do again in a heartbeat.”

“I learned so much! … The conference is three days of fun, learning, and meeting people who are going through the same things that you are.”

Early-bird registration discounts are available until January 29, 2015, so sign up soon and start making your plans for Napa Valley.

Item Development – Benefits of editing items before the review process

Posted by Austin Fossey

Some test developers recommend a single round of item editing (or editorial review), usually right before items are field tested. When schedules and resources allow for it, I recommend that test developers conduct two rounds of editing—one right after the items are written and one after content and bias reviews are completed. This post addresses the first round of editing, to take place after items are drafted.

Why have two rounds of editing? In both rounds, we will be looking for grammar or spelling errors, but the first round serves as a filter to keep items with serious flaws from making it to content review or bias review.

In their chapter in Educational Measurement (4th ed.), Cynthia Schmeiser and Catherine Welch explain that an early round of item editing “serves to detect and correct deficiencies in the technical qualities of the items and item pools early in the development process.” They recommend that test developers use this round of item editing for a cursory review of whether the items meet the Standards for Educational and Psychological Testing.

Items with obvious item-writing flaws should be culled in this first round of editing and either sent back to the item writers or removed. Such flaws include errors like cluing or options that do not match the stem grammatically. Ideally, these errors are caught and corrected during drafting, but a few items may have slipped through the cracks.

In the initial round of editing, we will also be looking for proper formatting of the items. Did the item writers use the correct item types for the specified content? Did they follow the formatting rules in our style guide? Is all supporting content (e.g., pictures, references) present in the item? Did the item writers record all of the metadata for the item, like its content area, cognitive level, or reference? Again, if an item does not match the required format, it should be sent back to the item writers or removed.

It is helpful to look for these issues before going to content review or bias review because such errors may distract your review committees from their tasks; the committees may waste time reviewing items that should not be delivered anyway due to formatting flaws. You do not want to get all the way through content and bias reviews only to find that a large number of your items have to be returned to the drafting process. We will discuss review committee processes in the following posts.

For best practice guidance and practical advice for the five key stages of test and exam development, check out our white paper: 5 Steps to Better Tests.

Podcast: Alignment, Impact and Measurement With the A-model

Posted by Julie Delazyn

The growing emphasis on performance improvement — of which training is just a part — calls for new strategies for assessment and evaluation.

Measurement and evaluation specialist Dr. Bruce C. Aaron has devoted a lot of thought to this. His white paper, Alignment, Impact and Measurement with the A-model, describes a framework for aligning assessment and evaluation with an organization’s goals, objectives and human performance issues.

For more information on the A-model, check out the video and free white paper: Alignment, Impact and Measurement with the A-Model.

Our podcast interview with Bruce about the A-model has been of great interest to learning and HR professionals. The interview explores how this framework addresses the changes that have taken place in recent years and the resulting complexities of today’s workplace.

Here are a few excerpts from the conversation. If you’d like to learn more, listen to the 10-minute podcast below.

“The things that I’ve observed have to do with our moving away from a training focus into a performance focus. So we don’t speak so much about training or even training and development anymore. We speak a lot more about performance improvement, or human performance, or learning and performance in the workplace. And those sorts of changes have had a great impact in how we do our business, how we design our solutions and how we go about assessing and evaluating them.

“…the A-model evolved out of dealing with the need to evaluate all of this and still focus on what are we trying to accomplish: how do we go about parsing up the components of our evaluation and keeping those things logically organized in their relationship to each other?

“…If we have a complex, blended solution, if we haven’t done a good job of really tying that to our objectives and to the original business issue that we’re trying to address…it becomes apparent through a focus on evaluation and assessment.”