See You in LA!

Posted by Eric Shepherd

I am looking forward to meeting old friends and new at this year’s Questionmark Users Conference in Los Angeles March 15 – 18!

LA is a place that revels in finding new ways to do things, and the conference will reflect that spirit by exploring a sea change that’s transforming the world of learning and assessment: the increasing adoption of social and informal learning initiatives by organizations of all stripes.

One of the things we’ll be talking about at the conference is the 70+20+10 model for learning and development, which I recently wrote about in my own blog. This model suggests that about 70% of what we learn comes from real-life and on-the-job experiences, with about 20% coming from feedback and from observing and working with other people. That leaves about 10% of learning taking place through study or formal instruction. So how do we measure the other 90%? Where does assessment fit into 70+20+10? These questions will make for some lively conversation!

We’ll be providing some answers to them by showing how Questionmark’s Open Assessment Platform works together with many commonly used informal/social learning technologies such as wikis, blogs and portals – and we’ll be showing how we will build on that going forward. We’ll demonstrate features and applications ranging from embedded, observational and mobile assessments to content evaluation tools, open user interfaces, new authoring capabilities in Questionmark Live, and next-generation reporting and analytics tools.

Of course we’ll share plenty of information and inspiration about assessments in the here and now as well as in the future! In addition to tech training, case studies, best practice sessions and peer discussions, you’ll be able to meet one-on-one with our technicians and product managers and network with other Perception users who share your interests.

I can’t wait to welcome you to the conference and I am looking forward to learning together with you. The conference program offers something for every experience level, so I hope you will take a look at it, sign up soon and join us in Los Angeles.

New Options for Delivering High-Stakes Exams Securely

Posted by Joan Phaup

We always look forward to the Association of Test Publishers’ Innovations in Testing Conference – a great place to stay up to date with all the changes happening in the testing and assessment industry and to share some of our own recent progress.

At this year’s conference in Phoenix next week we’ll be involved in a number of presentations, one of which will focus on new options for delivering high-stakes exams.

Our CEO, Eric Shepherd, will be sharing the podium with Allison Horn from Accenture and Ruben Garcia from Innovative Exams to talk about the use of remotely monitored testing stations as a secure but flexible alternative when it’s too impractical or costly to schedule locally proctored exams. They’ll explore how remote proctoring compares to in-person proctoring, talk about how participants in remotely proctored tests can be authenticated and discuss ways to protect exam content and prevent cheating. They’ll also consider what circumstances lend themselves most readily to the use of self-service testing stations.

Eric has given a lot of thought to the issues surrounding secure test delivery and mentions the test kiosk option in a wide-ranging article on Oversight, Monitoring and Delivery of Higher Stakes Assessments Safely and Securely – a recommended read!

Advice from Cognitive Psychologist Roddy Roediger on using retrieval practice to aid learning

Posted by John Kleeman

I am a keen admirer of the work of Professor Roddy Roediger, a cognitive psychologist who investigates how quizzes and tests directly aid learning by giving retrieval practice. I recently interviewed him, and here is how he explains this effect and how we can apply it in practice.

Roddy Roediger

Could you explain a little about your background and how you moved into the memory field?

I have a Ph.D. in cognitive psychology from Yale University. I’ve always been interested in memory, and I was surprised to find there was an academic discipline devoted to studying remembering, so I naturally gravitated to that. I worked with Robert Crowder and Endel Tulving at Yale, two leading people in the field. Since then I have taught at Purdue University, the University of Toronto and Rice University. I am now James S. McDonnell Distinguished University Professor at Washington University in St. Louis.

Most of my career has been doing laboratory research trying to show factors that help or harm memory. In the 1990s I published a series of studies on illusions of memory – that is, on false memories – how we can have very strong memories of something that either never happened or that happened quite differently from how we remember it. About 8 years ago I became interested in applying what we were learning about memory to education, and I started looking at factors that are important for learning and remembering but that are not well appreciated in education. One of these is retrieval practice, which is what happens when we test ourselves, or when we are given a test or quiz. When we actually retrieve information from memory, it’s a very potent enhancement to remembering it. We are much more likely to remember something again if we actively retrieve it than if we are passively exposed to it in restudying.

Is this the testing or quizzing effect – that if you learn something and answer questions on it, you are much more likely to retain it for the long term than if you don’t answer the questions?

Yes, absolutely. Making people actually think about material, to reconstruct it, to say it in their own words is much more effective than simply restudying it, yet many students don’t seem to appreciate this. If you ask students how they study to remember, their study strategy is typically re-reading and reviewing. That’s good up to a point, but it would be much better if they actively practiced retrieval, which is what a test requires them to do. If you haven’t constructed or answered practice questions, you won’t do as well on a test as students who have practiced.

A lot of our readers are in corporate training; does this apply in this field too? How should this affect people’s design of learning programs?

I think retrieval practice can have direct implications in the corporate world.

Let me give you an anecdote. One of the people I was talking with about this was skeptical. She was going to work on the train, reading the newspaper like she does every morning. She decided she’d put the paper down after each story and summarize it to herself mentally in her own words. When she got home that night, she asked her husband to test her on the stories she’d read.  And she did really well, surprising them both. Because after she’d read the stories, she’d retrieved them and put them in her own words in her mind.

So if you’re a salesperson and you need to remember a lot of qualities of your product to go out and sell it, the best way to do it is to practice retrieving the information and consult your notes only when you fail to retrieve a critical piece of information. Then when you are with a customer you will know all the information. I talk to textbook salespeople a lot. Some can walk in and tell me all about the books while others just get out their notes in their folders and show them to me. It’s so much more impressive when the salespeople can look you in the eye and tell you about the book without having to refer to their notes.

How does this actually work inside the brain?

We don’t know the neural mechanisms yet, but I can tell you some factors that seem to be important.

There seems to be something about effortful retrieval that matters. If you have to put a bit of effort into the retrieval — if it’s harder to bring the fact out of memory — that helps. So imagine you are trying to remember a face or a name; say you meet someone and you want to remember her name. You might think it would be good to repeatedly retrieve the name immediately after you met her, but it is not. Repeated immediate retrieval is like rote rehearsal – and that doesn’t do very well. But if you space out your retrievals – so you do it right away after you meet the person (to make sure you have the name) and then you wait a while to try again and you keep trying at spaced intervals, you will remember the name much better. The delayed retrieval makes you expend a bit more effort. You want to make retrieval a little difficult for yourself, so something about retrieval effort does seem to matter.

Another way you can see that is if you have people read a passage and take a multiple-choice test. In a multiple-choice test you see all the alternatives and you see which one is familiar and correct. You will get a slight benefit in retention from that. But if you are given a short answer question and can actively retrieve the answer, you will get even more benefit, because you have to reproduce the information instead of just recognizing it.  Although both tests provide a benefit, research shows that more benefit accrues from a short answer test or quiz where you have to retrieve information than from a multiple choice or true/false one where you just have to recognize it.

Would that apply to other kinds of recall questions like putting a number as your answer or filling in a blank in a question?

Yes, it does. Fill-in-the-blank questions do provide the benefit. I assume the same would be true for remembering numbers, but I do not know of any research on that topic yet.

What about with multiple response questions or matching questions?

We haven’t done the research in these areas, but we believe that questions that stimulate recall are superior to those that rely on recognition; all retrieval practice is useful, though.

Does it just apply to learning knowledge and facts or does it apply to learning concepts and higher levels of learning?

It definitely applies to concepts. Let me give one example.

Larry Jacoby and his colleagues at Washington University study how people learn bird concepts like warbler or thrasher and so on. He had some people study examples of birds and which category they were in, while another group were given tests on birds and tried to guess which category they belonged in (and then they got feedback). So one group just studied birds with their category names, whereas the other group learned them while being tested on the names. When he tested both groups a couple of days later, the people who had been tested while learning did better than those who’d just studied the examples and the categories. In the test, he showed novel examples that people hadn’t seen before, for instance a bird that belonged in the thrasher family but that had not been used in the practice phase, and the people who’d taken the tests did better. Answering the questions about the birds allowed them to grasp the concept better and generalize it to new examples.

By testing yourself, making mistakes and being corrected, you sharpen what you know about a concept.

So this sounds like a significant way that people can learn better that isn’t widely known. Why is that?

I don’t know! In his essay on memory, Aristotle said, “Exercise in repeatedly recalling a thing strengthens the memory.” Sir Francis Bacon and William James also knew the benefits of retrieval practice (or recitation, as it used to be called) and wrote about them. They didn’t have evidence, of course, except from their own experience. But the technique has mostly been lost from education and training.  In fact the idea of retrieval practice is pretty much derided in education because people in the U.S., at least, are so opposed to anything that smacks of testing.

Certainly testing can be misused; in the old school days there was an emphasis on rote memorization – students had to remember poetry, say, by heart. Educators later decried what they called this “kill and drill” approach to education and they got away from these techniques. That is good in part, because the philosophy behind rote memorization was misguided. Some educators a hundred years ago considered “Memory” to be a faculty of mind and to operate like a muscle. The idea was if a student practiced memorizing poetry, “the Memory” would become stronger and would be better at learning and remembering other things, like algebra. However, the mind simply does not work that way. Practicing one topic helps that topic but does not usually spill over to learning unrelated topics.

But on the other hand, with the de-emphasis on memorization, the benefits of active retrieval should not be lost, because active retrieval is a potent memory enhancer. When you see how children learn multiplication tables, they use flash cards with 6×4 on one side and 24 on the other, and teachers say, “Practice until you think you really know it. Practice until it’s completely automatic.” So teachers use retrieval practice for multiplication tables, but the idea that you can use it for much more complex ideas is not widely appreciated.

Testing has gotten a bad name in the educational community. Instead of thinking of testing as standardized testing to place people into groups, we need to see use of low-stakes quizzes in the classroom and self-testing outside the classroom as a study and learning strategy.

Next week we’ll publish the second part of the interview, in which Professor Roediger gives practical advice for people seeking to use the retrieval practice effect to help people learn.

How to use item analysis to get positive information from compliance assessments to feed into better training

Posted by John Kleeman

I was speaking to one of our customers recently about how they use Questionmark Perception for compliance, and I was struck by one of his comments: item analysis is useful not just to prove to the regulator that things are going well, but also to identify weaknesses that you can react to.

Many companies in financial services, pharmaceuticals, utilities and other regulated industries need to assess their employees regularly to prove their competence to the regulator.

Employees who pass the tests can continue to do their jobs, and employees who fail need re-training. Compliance assessment managers naturally focus on making the assessment programme fair and defensible so that they can prove to the regulator that their assessments are valid and reliable and that someone who passes is genuinely competent. Results can also be used to demonstrate to a failing candidate that they have failed for fair reasons. As part of this process, it’s usual to run an item analysis report that gives statistics on the questions in your tests. This allows you to weed out poorly performing questions to improve the validity of tests.

With all the focus on the important mission of proving to the regulator that your employees are competent and dealing with failing employees, it’s easy to miss some of the positive benefits of compliance assessments.

For instance, look at this item analysis report fragment. It shows a question asking the participant which product to recommend to a customer. Of the four choices provided, Product C is the correct one for the particular customer’s needs. The question has a p value of 0.72, which means that 72% of your participants get the question right. It also correlates very well with the total test score, as indicated by the ‘Item-total correlation’. It appears a reasonable question to include in the assessment.

Item analysis screenshot

Something to note, however, is that many participants are choosing Product A, including some high achievers in the upper group (6%). This could indicate that there was some confusion, either in the instruction or in the question wording, that caused high achievers to choose this particular incorrect answer. This is a flag to have the question reviewed to ensure that the wording and content are accurate, as well as a flag to look into the instruction of the course material to ensure that there were no breakdowns in how the material for this particular question was taught.

Organizations want to be very careful about giving good advice to customers, and if high achievers are getting things wrong, this is an issue to look into. Taking this information to your training team as a potential issue, and working with them to correct it, will help ensure that consistent, accurate messaging is going out to customers. And when the regulator next comes round to visit, you can potentially show them not only that your testing programme demonstrates your employees’ competence, but also that you are using the results to help improve your training.
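If you’d like to see what lies behind numbers like these, here is a minimal sketch, in Python with made-up data, of how the classical item statistics discussed above (the p value, the item-total correlation and the upper/lower group comparison) can be computed from a matrix of scored responses. This is an illustration under simple assumptions, not Questionmark’s actual reporting code, and every name and value in it is hypothetical.

```python
import numpy as np

# Hypothetical scored responses: one row per participant, one column per
# question; 1 = correct, 0 = incorrect.
scores = np.array([
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
])

item = 0                     # index of the question under review
totals = scores.sum(axis=1)  # each participant's total test score

# p value: the proportion of participants who answered the question correctly.
p_value = scores[:, item].mean()

# Item-total correlation: correlation between the item score and the rest of
# the test score (the item itself is removed so it doesn't inflate the value).
rest = totals - scores[:, item]
item_total_r = np.corrcoef(scores[:, item], rest)[0, 1]

# Upper/lower group comparison: split participants on the median total score
# and compare how often each group gets the question right.
median = np.median(totals)
upper_pct = scores[totals >= median, item].mean()
lower_pct = scores[totals < median, item].mean()

print(f"p value: {p_value:.2f}")
print(f"item-total correlation: {item_total_r:.2f}")
print(f"upper group correct: {upper_pct:.0%}, lower group correct: {lower_pct:.0%}")
```

In a real report you would also break down, for each answer option, how often each group chose it; a distractor that attracts the upper group, like Product A above, is exactly the kind of pattern worth taking back to your training team.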

Configuring Mobile Apps with Customer Number

Posted by Jim Farrell

I recently shared a video that showed how easy it is to configure our mobile apps to access your installation of Questionmark Perception. But I actually left out one of the best features within our apps: configuring with a customer number.

If you contact Customer Care, they can give you a number that you can use within your mobile apps to access your installation, whether On Premise or OnDemand. This allows you to share the number easily with your user base without having to worry about a long URL. Watch the following video to see how easy it is to use a customer number to configure your Questionmark Mobile App.

Coming soon to Los Angeles: A full conference program by and for Questionmark users

Posted by Joan Phaup

Questionmark users who join us in Los Angeles for the Questionmark Users Conference March 15 – 18 will be spoiled for choice when it comes to breakout sessions. Many delegates will be presenting case studies and leading peer discussions. And we’ve put together product training, best practice and drop-in demo sessions to suit everyone from beginners to power users.

Here’s the complete conference line-up. Click here for details about all these sessions:

General Sessions

  • Opening General Session: Would You “Like” To Know? The Roles of Assessment in Social Learning
  • Keynote: Bryan Chapman on Assessment’s Strategic Role in Enterprise Learning: Innovation and Trends
  • Closing General Session: The Road Ahead

Case Studies

  • Tiered Assessment Model for Internal Certification (Intermediate/Advanced) – Accenture
  • Applying Diagnostic Assessment in a Virtual Corporate University (All Experience Levels) – PricewaterhouseCoopers
  • Dynamically Generating Certificates from an Assessment (Intermediate/Advanced) – Beckman Coulter, Inc.
  • Questionmark in Medical Education: Planning and Implementing Blended Course Exams (Beginning/Intermediate) – Loma Linda University School of Medicine
  • Enabling Self-Service Reporting (Intermediate) – Accenture
  • Implementing E-Testing in the US Coast Guard: Challenges and Solutions (All Experience Levels)
  • Ensuring the Security and Integrity of Certification Tests (All Experience Levels) – Philadelphia Parking Authority
  • Using Captivate and Flash Simulations in eLearning and Assessments (Beginning/Intermediate) – Verizon Communications

Tech Training

  • Introduction to Questionmark Perception for Beginners
  • Advanced Authoring Techniques Using Authoring Manager (Intermediate/Advanced)
  • Planning Your Migration from Perception v4 to v5 (Intermediate/Advanced)
  • Configuring the User Experience and Understanding Templates in Perception v5 (Intermediate/Advanced)
  • Authoring with Questionmark Live – A hands-on introduction (Bring your own laptop!) (Beginning/Intermediate)
  • Analyzing and Sharing Assessment Results (Beginning/Intermediate)
  • Integrating Perception with Other Systems (Advanced)

Best Practices

  • Best Practices for Surveys and Course Evaluations (All Experience Levels) – Greg Pope, Analytics and Psychometrics Manager, Questionmark
  • Using Questionmark Perception to Make SharePoint an Effective Learning Platform (All Experience Levels) – John Kleeman, Chairman, Questionmark
  • Getting the Most from a Small Screen: Design Strategies for Mobile Devices (All Experience Levels) – Silke Fleischer, Co-founder and CEO, ATIV Software
  • Principles of Item and Test Analysis (All Experience Levels) – Greg Pope, Analytics and Psychometrics Manager, Questionmark
  • Using the Latest Learning Research to Improve Your Questionmark Assessments (All Experience Levels) – John Kleeman, Chairman, Questionmark
  • Writing High-quality Assessments that Yield Meaningful Results (All Experience Levels) – Howard Eisenberg, Training and Consultancy Manager, Questionmark

Peer Discussions

  • Evaluating the Effectiveness of Your Training Programs: Metrics and ROI – Farmers Insurance
  • Using Printing and Scanning to Capture Survey and Test Results – PG&E Academy
  • Delivering Assessments at Academic Institutions: An Open Conversation – Rio Salado College
  • Improving Your Workflow: Ensuring smooth sailing from question creation to release – U.S. Coast Guard

Drop-in Demos

  • Using Assessments with Social/Informal Learning Tools
  • Perception v5 and Questionmark Live
  • Tools and Features for Enhancing Your Assessments and Reports

We’ll also be hosting drop-in sessions with Questionmark technicians and group discussions with our product owners about potential new offerings. You can register for the conference online or email conference@questionmark.com for more information.
