Paper-Free Smile Sheets, Real-time Insight

Posted by Brian McNamara

Last week was Questionmark’s 10th annual Users Conference – and the eighth one that I’ve personally been able to attend. At each of the previous conferences I’ve attended, I’ve been asked the same question: Why are you doing paper evaluations when you’re an online assessment company?

We actually had a very good reason: our main interest in doing session evaluations was to maximize the quality and quantity of the feedback we received. And if you want to maximize response rates, you have to make it as EASY as possible to answer the survey. For smile sheets, paper trumped online… until now.

Enter the second decade of the 21st century:

  • Smartphones equipped with cameras are commonplace – many of us couldn’t imagine leaving home without ours.
  • QR Codes embedding website URLs are easy to come by, too: at grocery/department stores and malls, on billboards, in magazines, brochures, you name it.
  • QR Code-reading software, which essentially turns your Smartphone into a hand-held scanner, is freely available.
  • Questionmark’s auto-sensing, auto-sizing assessment delivery engine enables a single assessment to be delivered reliably and effectively on many different browsers and platforms – PC or mobile – without any extra steps for the author or administrator. Plus, you can pass key demographic information into the assessment results via the assessment URL.
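(For the technically curious: demographic capture of this kind generally works by appending query parameters to the assessment URL. Here's a minimal sketch in Python of building such a URL; the endpoint and parameter names are illustrative assumptions, not Questionmark's actual scheme.)

```python
from urllib.parse import urlencode

# Illustrative delivery endpoint -- not a real Questionmark URL.
BASE_URL = "https://assessments.example.com/deliver"

def build_eval_url(session_name: str, group: str) -> str:
    """Build an assessment URL that carries demographic context.

    The extra query parameters travel with the participant and can be
    stored alongside the results for later reporting.
    """
    params = {"session": session_name, "group": group}
    return f"{BASE_URL}?{urlencode(params)}"

print(build_eval_url("Mobile Delivery Roundtable", "users-conference-2012"))
# https://assessments.example.com/deliver?session=Mobile+Delivery+Roundtable&group=users-conference-2012
```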

Delegates at the 2012 Users Conference had two choices for providing session evaluation feedback: scan a QR code with their mobile and answer the survey online, or fill out a paper form (which would be scanned and uploaded via Questionmark Printing and Scanning following the event).

For those using mobiles, scanning the QR code would launch the assessment. The intro screen would display the name of the session they had just attended (just to be sure they scanned the right code).
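Generating the codes themselves is simple. As a rough sketch (using the open-source qrcode Python package rather than any Questionmark tooling, and made-up URLs), one QR image per session could be produced like this:

```python
import qrcode  # pip install qrcode[pil]

# One evaluation URL per session (illustrative URLs, not real endpoints).
sessions = {
    "mobile-delivery": "https://assessments.example.com/deliver?session=mobile-delivery",
    "item-analysis": "https://assessments.example.com/deliver?session=item-analysis",
}

for name, url in sessions.items():
    img = qrcode.make(url)        # encode the URL as a QR code image
    img.save(f"eval-{name}.png")  # print this code on the session handout
```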

While I was happy that mobile smile-sheet delivery would ease the burden of post-conference scanning and uploading of results, it was particularly useful to get a real-time view of how delegates were reacting to sessions: during the conference we ran our Course Summary report a few times a day to get a sense of which sessions were bringing the most benefit to our customers.

This was the first year we offered this option, and I was pleased to see that more than 20% of all the session evals submitted were done via mobile device! Next year we’ll shoot for 80% and for 100% in 2014! Paper-free and loving it!

International certifications: To translate or not to translate?

Sue Orchard

Posted by Joan Phaup

Scoring techniques, test delivery options, item generation and the intricacies of translating tests into different languages were among the many subjects covered during the Association of Test Publishers’ Innovations in Testing Conference last month.

Curious to know in particular about issues relating to test translation and localization, I spoke briefly with Sue Orchard of Comms Multilingual about her perspective on the conference:

Were there any particular themes that emerged about translation and localization?

More and more organizations in North America are looking at taking their certifications international. One of the main themes is whether these certifications and any related training and marketing materials need to be translated or not. Some organizations have decided to leave their materials in English. My response to that would be: Are you testing people’s knowledge, skills and abilities or are you testing their knowledge of the English language?

What are the key elements that make for a high-quality translation?

Preparation is absolutely key to ensuring a successful outcome. When creating exams, tests and assessments in the first place, it is important to write them with translation in mind. You should avoid jargon, complicated sentences, overcrowding of text on the page, and other such pitfalls. If the exams, tests and assessments have not been created with translation in mind, this can cause problems during a translation project.

What do you look for in validating a translation?

It is very important to follow specific process steps to ensure the validation of a translation. The actual steps to be taken will vary from client to client, depending on their own capabilities, such as the availability of native-speaker Subject Matter Experts. When translating, localizing and adapting exams, tests and assessments, the steps to be taken will require much more work than for the translation of training or marketing materials, which just require translation into the language and proof-reading.

Going forward, what do you see as the key issues organizations will face as they continue to expand their international and intercultural testing programs?

There are many issues that need to be considered by organizations that are looking to expand internationally. Should the exams, tests and assessments be left in English or translated? What about related materials such as training or marketing materials?

Should the certification remain exactly as it is in the original country, or should organizations attempt to get the certification licensed in the target market? Is the exam, test or other assessment culturally valid in the target country? Can it be localized and adapted or is it not suitable at all for people in other countries?

For more on this subject see the Q&A at the end of my previous post about Sue’s February 16 Questionmark web seminar on assessment translation and localization.

What I Learned Back in the Big Easy

Posted by John Kleeman

Questionmark customers and staff pose for a picture at the dessert reception

I’m just back from the 2012 Questionmark user conference in New Orleans, “the Big Easy”. This was our second user conference in New Orleans – we were there in 2005, and it was great to revisit a lovely city. Here is some of what I learned from Questionmark customers and speakers.

Most striking session: Fantastic to hear from a major US government agency about how they deployed Questionmark for electronic testing after 92 years of paper testing. The agency have a major programme, with 800,000 items and 300,000 test takers. I hope to cover what they are doing in a future blog post.

Most memorable saying: Bill Coscarelli (co-author of Criterion-Referenced Test Development) noting that, in his extensive experience, the two things in testing that really make a difference are to “test above knowledge” and to “use the Angoff method to set a cut/pass score for your tests”.
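For readers unfamiliar with it, the Angoff method asks each subject matter expert to estimate, for every item, the probability that a minimally competent candidate would answer correctly; averaging those estimates per item and summing across items yields the cut score. A small worked example (the ratings are invented for illustration):

```python
# Each row: one judge's probability estimates for five items, i.e. how
# likely a minimally competent candidate is to get each item right.
judge_ratings = [
    [0.70, 0.55, 0.90, 0.60, 0.80],  # judge 1
    [0.65, 0.60, 0.85, 0.55, 0.75],  # judge 2
    [0.75, 0.50, 0.95, 0.65, 0.85],  # judge 3
]

n_items = len(judge_ratings[0])

# Average the judges' estimates item by item...
item_means = [sum(j[i] for j in judge_ratings) / len(judge_ratings)
              for i in range(n_items)]

# ...and sum them: the expected raw score of a borderline candidate.
cut_score = sum(item_means)
print(f"Angoff cut score: {cut_score:.2f} of {n_items} "
      f"({100 * cut_score / n_items:.0f}%)")  # 3.55 of 5 (71%)
```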

Meeting a hero: You’ll have seen me writing on this blog about Bruce C. Aaron’s A-model, and it was great to meet Bruce in person for the first time and hear him talk so intelligently about the A-model as a way of measuring the effectiveness of training – starting from a business problem.

Most fun time: I have to mention the amazing food in New Orleans – especially the catfish and the oysters. But the highlight was being part of a parade down Bourbon Street with a marching band, all the Questionmark conference attendees and a full New Orleans ensemble.

Best conversation: Discussion with a user who’d come to the conference uncertain that Questionmark was the right system for them, but realized on talking to other customers and the Questionmark team that they were only using 10% of what Questionmark could offer them, and left the conference enthused about all they would do going forward.

Session I learned most from: a security panel chaired by Questionmark CEO Eric Shepherd with executives from Pearson VUE, ProctorU, Innovative Exams and Shenandoah University. There was a great discussion about the advantages of test center delivery vs. remote proctoring. Test center delivery is highly secure, but testing at home with a remote proctor observing by video link is very convenient, and it was great to hear real experts debating the pros and cons.

Most actionable advice: Keynote speaker Jane Bozarth: “what you measure is what you get”, a reminder that what we do with measurement will define behavior in our organization.

It was great to spend time with and learn from customers, partners and our own team. I look forward to seeing many of you again at the 2013 Questionmark user conference at Baltimore’s Inner Harbor, March 3-6, 2013.

PricewaterhouseCoopers wins assessment excellence award

Posted by Joan Phaup

Greg Line and Sean Farrell accept the Questionmark Getting Results Award on behalf of PwC

I’m blogging this morning from New Orleans, where we have just completed the 10th annual Questionmark Users Conference.

It’s been a terrific time for all of us, and we are already looking forward to next year’s gathering in 2013.

One highlight this week was yesterday’s presentation of a Questionmark Getting Results Award to PricewaterhouseCoopers.

Greg Line, a PwC Director in Global Human Capital Transformation, and Sean Farrell, Senior Manager of Evaluation & Assessment at PwC, accepted the award.

Getting Results Award

The award acknowledges PwC’s successful global deployment of diagnostic and post-training assessments to more than 100,000 employees worldwide, as well as 35,000 employees in the United States.

In delivering more than 230,000 tests each year – in seven different languages – PwC defines and adheres to sound practices in the use of diagnostic and post-training assessments as part of its highly respected learning and compliance initiatives. These practices include developing test blueprints, aligning test content with organizational goals, utilizing sound item-writing techniques, carefully reviewing question quality and using Angoff ratings to set passing scores.

Adhering to these practices has helped PwC deploy valid, reliable tests for a vast audience – an impressive accomplishment that we were very pleased to celebrate at the conference.

So that’s it for 2012! But mark your calendar for March 3 – 6, 2013, when we will meet at the Hyatt Regency in Baltimore!

Conference kicks off in New Orleans

Dessert reception

Posted by Joan Phaup

Last night’s opening reception at the 10th annual Questionmark Users Conference in the U.S. brought many old friends back together and introduced newcomers to staff members and fellow customers.

I’ve just returned to the blog from today’s opening general session, which prompted enthusiastic responses about a number of announcements and demos. Among them:

Lynn Cram receiving her perfect attendance award

  • Details about Questionmark Perception version 5.4, which includes Questionmark Analytics and the ability to deploy observational assessments
  • The security of Questionmark’s OnDemand platform, starting with the many security measures taken at our SSAE 16-audited data center and covering data, network and application security – all reinforced through redundancy, monitoring and testing
  • Questionmark Live demonstrations of the new Math Formula Editor and the ability to embed YouTube videos
  • Using QR codes to launch assessments on mobile devices
  • New reports in Questionmark Analytics: Assessment Completion Time and Item Analysis
  • An update on Questionmark’s Open Platform
  • Our research about the role of assessments in mitigating risk

We had some fun honoring Lynn Cram from Progressive Insurance for having attended every single U.S. Users Conference since the first one in 2003. Congratulations to Lynn, who says she keeps returning to the conference because there’s always something new to learn!

You can follow the conference and learn some new things on Twitter at #QMCON, so check in whenever you like.

Mastering Your Multiple-Choice Questions

Posted By Doug Peterson

If you’re going to the Questionmark Users Conference in New Orleans this week (March 21 – 23), be sure to grab me and introduce yourself! I want to speak to as many customers as possible so that I can better understand how they want to use Questionmark technologies for all types of assessments.

This year Sharon Shrock and Bill Coscarelli are giving a pre-conference workshop on Tuesday called “Criterion-Referenced Test Development.” I remember three years ago at the Users Conference in Memphis when these two gave the keynote. They handed out a multiple-choice test that everyone in the room passed on the first try. The interesting thing was that the test was written in complete gibberish. There was not one intelligible word!

The point they were making is that multiple-choice questions need to be constructed correctly or you can give the answer away without even realizing it. Last week I ran across an article in Learning Solutions magazine called “The Thing About Multiple-Choice Tests…” by Mike Dickinson that explores the same topic. If you author tests and quizzes, I encourage you to read it.

A few of the points made by Mr. Dickinson:

  • Answer choices should be roughly the same length and kept as short as possible.
  • Provide a minimum of three answer choices and a maximum of five. Four is considered optimal.
  • Keep your writing clear and concise – you’re testing knowledge, not reading comprehension.
  • Make sure the correct answer appears in the first two positions about as often as in the last two (a quick way to audit this is sketched below).
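On that last point, key-position balance is easy to check once you have the answer key in hand. A minimal sketch (the list-of-positions format is an assumption for illustration):

```python
from collections import Counter

# Answer key for a four-option test: 0 = choice A, 1 = B, 2 = C, 3 = D
# (made-up data for illustration).
answer_key = [0, 2, 1, 3, 0, 1, 3, 2, 0, 3, 1, 2]

positions = Counter(answer_key)
top_half = positions[0] + positions[1]     # correct answer is A or B
bottom_half = positions[2] + positions[3]  # correct answer is C or D

print(f"A/B: {top_half}, C/D: {bottom_half}")
if abs(top_half - bottom_half) > 0.2 * len(answer_key):
    print("Warning: correct answers cluster toward one end of the choices.")
```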

One of the most interesting points in the article is about the correctness of the individual answer choices. Mr. Dickinson proposes the following:

  • One answer is completely correct.
  • Another answer is mostly correct, but not completely. This will distinguish between those who really know the subject matter and those who have a shallower understanding.
  • Another answer is mostly incorrect, but plausible. This will help reveal if test takers are just guessing or perhaps racing through the test without giving it much thought.
  • The fourth answer would be totally incorrect, but still fit within the context of the question.

Once you have an assessment made up of well-crafted multiple-choice questions, you can be confident that your analysis of the results will give you an honest picture of the learners’ knowledge as well as the learning situation itself – and this is where Questionmark’s reports and analytics really shine!

The Coaching Report shows you exactly how a participant answered each individual question. If your questions have been constructed as described above, you can quickly assess each participant’s understanding of the subject matter. For example, two participants might fail the assessment with the same score, but if one is consistently selecting the “almost correct” answer while the other is routinely selecting the plausible answer or the totally incorrect answer, you know that the first participant has a better understanding of the material and may just need a little help, while the second participant is completely lost.
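To make that concrete, here is a small sketch that tallies which category of answer each participant favored, using the four-level scheme above (the response data and its format are invented, not the Coaching Report's actual output):

```python
from collections import Counter

# Each participant's answers, classified per question as "correct",
# "almost" (mostly correct), "plausible" or "incorrect" (invented data).
responses = {
    "participant_1": ["almost", "correct", "almost", "almost", "correct"],
    "participant_2": ["incorrect", "correct", "plausible", "incorrect", "correct"],
}

for participant, picks in responses.items():
    print(participant, dict(Counter(picks)))

# participant_1 leans on "almost" answers: near mastery, needs a nudge.
# participant_2 picks plausible/incorrect answers: guessing or lost.
```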

The Item Analysis report shows you how a question itself (or even the class or course materials) is performing.

  • If pretty much everyone is answering correctly, the question may be too easy.
  • If almost no one is getting the question correct, it may be too hard or poorly written.
  • If the majority of participants are selecting the same wrong answer:
    • The question may be poorly written.
    • The wrong answer may be flagged as correct.
    • There may be a problem with what is being taught in the class or course materials.
  • If the answer selection is evenly distributed among the choices:
    • The question may be poorly written and confusing.
    • The topic may not be covered well in the class or course materials.
  • If the participants who answer the question correctly also tend to pass the assessment, it shows that the question is a “good” question – it’s testing what it’s supposed to test.
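That last property is what psychometricians call item discrimination, often estimated as the correlation between each item score and the total score (a point-biserial correlation). A minimal sketch with invented response data (requires Python 3.10+ for statistics.correlation):

```python
import statistics

# Rows: participants; columns: items; 1 = correct, 0 = wrong (invented).
scores = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
]

totals = [sum(row) for row in scores]

for item in range(len(scores[0])):
    item_scores = [row[item] for row in scores]
    difficulty = statistics.mean(item_scores)  # proportion correct
    # Pearson correlation of item score vs. total score; a high value
    # means participants who get this item right also do well overall.
    discrimination = statistics.correlation(item_scores, totals)
    print(f"item {item + 1}: difficulty={difficulty:.2f}, "
          f"discrimination={discrimination:.2f}")
```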

What question writing rules do you use when constructing a multiple-choice question? What other tips and tricks have you come up with for writing good multiple-choice questions? Feel free to leave your comments, questions and suggestions in the comments area!
