Integration Highlights in Barcelona

Posted by Steve Lay

The programme for Questionmark’s European Users Conference in Barcelona, November 10 – 12, is just being finalised. As usual, there is plenty to interest customers who are integrating with our Open Assessment Platform.

This year’s conference includes a case study from Wageningen University on using QMWISe, our SOAP-based API, to create a dashboard designed to help you manage your assessment process. Also, our Director of Solution Services, Howard Eisenberg, will be leading a session on customising the participant interface, so you can learn how to integrate your own CSS into your Questionmark assessments.

I’ll be running a session introducing you to the main integration points and connectors with the assistance of two colleagues this year: Doug Peterson will be there to help translate some of the technical jargon into plain English and Bart Hendrickx will bring some valuable experience from real-world applications to the session. As always, we’ll be available throughout the conference to answer questions if you can’t make the session itself.

Finally, participants will also get the chance to meet Austin Fossey, our Analytics Product Owner, who will be talking, amongst other things, about our OData API for Analytics. This API allows you to create bespoke reports from data ‘feeds’ published from the results warehouse.
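
If you’re curious what consuming one of these feeds might look like, here is a minimal sketch in Python using the requests library. The base URL, feed name and credentials are placeholders invented for illustration rather than actual Results Warehouse endpoints, so check the OData API documentation for the real entity names.

```python
# Minimal sketch of pulling an OData feed into Python for ad-hoc reporting.
# The base URL, feed name ("Assessments") and credentials are placeholders
# for illustration only -- consult the OData API documentation for the real
# endpoint and entity names.
import requests

BASE_URL = "https://example.questionmark.com/odata"   # hypothetical endpoint
FEED = "Assessments"                                   # hypothetical feed name


def fetch_feed(feed, params=None):
    """Return the entries of one OData feed as a list of dictionaries."""
    response = requests.get(
        f"{BASE_URL}/{feed}",
        params={"$format": "json", **(params or {})},  # standard OData query options
        auth=("analytics_user", "secret"),             # replace with real credentials
        timeout=30,
    )
    response.raise_for_status()
    payload = response.json()
    # Newer OData services wrap results in "value"; older ones use "d"/"results".
    return payload.get("value") or payload.get("d", {}).get("results", [])


if __name__ == "__main__":
    # Pull the first ten entries and print them -- a starting point for a
    # bespoke report in a spreadsheet, BI tool or script of your own.
    for entry in fetch_feed(FEED, params={"$top": "10"}):
        print(entry)
```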

See the complete conference schedule here, and sign up soon if you have not done so already.

See you in Barcelona!

Writing Good Surveys, Part 6: Tips for the form of the survey

Posted by Doug Peterson

In this final installment of the series, we’ll take a look at some tips for the form of the survey itself.

The first suggestion is to avoid labeling sections of questions. Studies have shown that when it is obvious that a series of questions belongs to a group, respondents tend to answer all the questions in the group the same way they answer the first question in the group. The same is true of visual formatting, like putting a box around a group of questions or adding extra space between groups. It’s best to present all of the questions in a simple, sequentially numbered list.

As much as possible, keep questions at about the same length, and present the same number of questions (roughly, it doesn’t have to be exact) for each topic. Longer questions or more questions on a topic tend to require more reflection by the respondent, and tend to receive higher ratings. I suspect this might have something to do with the respondent feeling like the question or group of questions is more important (or at least more work) because it is longer, possibly making them hesitant to give something “important” a negative rating.

It is important to collect demographic information as part of a survey. However, a suspicion that he or she can be identified can definitely skew a respondent’s answers. Put the demographic information at the end of the survey to encourage honest responses to the preceding questions. Make as much of the demographic information optional as possible, and if the answers are collected and stored anonymously, assure the respondent of this. If you don’t absolutely need a piece of demographic information, don’t ask for it. The more anonymous the respondent feels, the more honest he or she will be.

Group questions with the same response scale together and present them in a matrix format. This reduces the cognitive load on the respondent; the response possibilities do not have to be figured out on each individual question, and the easier it is for respondents to fill out the survey, the more honest and accurate they will be. If you do not use the matrix format, consider listing the response scale choices vertically instead of horizontally. A vertical orientation clearly separates the choices and reduces the chance of accidentally selecting the wrong choice. And regardless of orientation, be sure to place more space between questions than between a question and its response scale.

I hope you’ve enjoyed this series on writing good surveys. I also hope you’ll join us in San Antonio in March 2014 for our annual Users Conference – I’ll be presenting a session on writing assessment and survey items, and I’m looking forward to hearing ideas and feedback from those in attendance!

Better outcomes make the outcome better!

Posted by Joan Phaup

As we build the program for the Questionmark 2014 Users Conference, I’m having a great time chatting with presenters about their plans.

I spoke recently with Gail Watson from the US Marine Corps University’s College of Distance Education and Training. As an institution that educates large numbers of people about large and complex subjects, the college has grappled with how to make sure the tests it administers yield meaningful outcomes.

Gail says that careful attention to the relationship between business practices and the organization of the folder/topic system is critical to delivering a Questionmark assessment effectively when the subject matter is broad or complex. When subject matter experts (SMEs) are made aware of Questionmark’s capabilities before creating questions, they can organize topics and questions in ways that result in better feedback, and the success of assessments and topics can be measured more accurately.

Her case study presentation in San Antonio will also focus on principles for using multiple response, pull-down, matching and ranking questions.


Who would benefit most from your presentation?

I think it will help users who have broad, complex material to be assessed, as well as new users who may be confused about where to start. Even if you are knowledgeable about Questionmark, sometimes a subject is so big you don’t know how to set up your topics or use Question blocks — things like that.

How will you help your audience?

I’d like to show how to assess the capabilities of Questionmark in order to make assessments fit in with (business) processes and decision points. It’s a matter of matching up the SME vocabulary with the Questionmark vocabulary. We have done this to such an extent that we now have a roadmap that we go through long before any items are put into Questionmark. I can look at a question number and tell from it exactly what topic it relates to and what folders it goes in.

I’d like to show the importance of understanding Questionmark’s capabilities and how they relate to the world of SMEs – so that people understand how to match their assessment up with decision points up front. You can’t do that after the fact. By getting this right, you end up providing good feedback to students, good feedback to SMEs and good feedback to managers.

Do your SMEs use Questionmark Live to author items?

Yes, and we use it heavily in our question review and approval process. This actually relates to the importance of the question number used in the Description field. I work with large batches of approved questions, and without that question number, I don’t know what folder “path” to put them in. An example question number is 5212.4.1. I can tell just from looking at the number that this question resides in a topic folder four levels down. So as I start receiving questions, I can start producing assessments.
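
(An aside for the technically minded: the idea of deriving a folder “path” from a dotted question number can be sketched in a few lines of code. The convention below, where each dot-separated segment descends one folder level, is an illustrative assumption rather than the exact numbering scheme Gail describes.)

```python
# Hypothetical sketch: derive a topic-folder "path" from a dotted question
# number such as "5212.4.1". The convention here -- each dot-separated
# segment descends one folder level -- is an illustrative assumption, not
# the exact numbering scheme described in the interview.
def folder_path(question_number: str, root: str = "Questions") -> str:
    """Turn '5212.4.1' into 'Questions/5212/5212.4/5212.4.1'."""
    segments = question_number.split(".")
    parts = [root]
    for i in range(len(segments)):
        # Name each level by the cumulative identifier up to that segment.
        parts.append(".".join(segments[: i + 1]))
    return "/".join(parts)


if __name__ == "__main__":
    print(folder_path("5212.4.1"))  # -> Questions/5212/5212.4/5212.4.1
```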

How do question and topic outcomes influence assessment results?

Creating good question outcomes for multiple select, matching, pull-down and ranking questions produces supporting data for item analysis of those question types. These questions can tell you which correct answers people are missing, which wrong answers are being selected, and so forth. Creating good topic outcomes gives subject matter experts (SMEs) insight into weak areas of knowledge through topic outcome reports and gives failing students the assessment feedback they need to focus their study for a re-test.

Can you share some quick tips for organizing topics and folders effectively?

List your SME “vocabulary” and processes, then list Questionmark’s “vocabulary” and capabilities. After that, match them up to create a topic folder and assessment map; a question management process; an assessment management process; and a review and maintenance process.

What do you hope people will take away from your session? 

Everyone who touches the assessment process needs to know how Questionmark works so that they can best leverage Questionmark’s capabilities. I’d like to help people match up what Questionmark can do with what they need to accomplish in their learning organizations. I also will be talking about Questionmark Live, because we use it quite heavily in the process of creating questions. I hope they’ll learn some tips for using it effectively.

***

Early-bird registration for the conference, to be held in San Antonio March 4 – 7, is open through December 12. Click here to register.


Teaching to the test and testing to what we teach

Posted by Austin Fossey

We have all heard assertions that widespread assessment creates a propensity for instructors to “teach to the test.” This often conjures images of students memorizing facts without context in order to eke out passing scores on a multiple choice assessment.

But as Jay Phelan and Julia Phelan argue in their essay, Teaching to the (Right) Test, teaching to the test is usually problematic when we have a faulty test. When our curriculum, instruction, and assessment are aligned, teaching to the test can be beneficial because we are testing what we taught. We can flip this around and assert that we should be testing to what we teach.

There is little doubt that poorly-designed assessments have made their way into some slices of our educational and professional spheres. Bad assessment designs can stem from shoddy domain modeling, improper item types, or poor reporting.

Nevertheless, valid, reliable, and actionable assessments can improve learning and performance. When we teach to a well-designed assessment, we should be teaching what we would have taught anyway, but now we have a meaningful measurement instrument that can help students and instructors improve.

I admit that there are constructs like creativity and teamwork that are more difficult to define, and appropriate assessment for these learning goals can be difficult. We may instinctively cringe at the thought of assessing an area like creativity—I would hate to see a percentage score assigned to my creativity.

But if creativity is a learning goal, we should be collecting evidence that helps us support the argument that our students are learning to be creative. A multiple choice test may be the wrong tool for that job, but we can use frameworks like evidence-centered design (ECD) to decide what information we want to collect (and the best methods for collecting it) to demonstrate our students’ creativity.

Assessments have evolved a lot over the past 25 years, and with better technology and design, test developers can improve the validity of the assessments and their utility in instruction. This includes new item types, simulation environments, improved data collection, a variety of measurement models, and better reporting of results. In some programs, the assessment is actually embedded in the everyday work or games that the participant would be interacting with anyway—a strategy that Valerie Shute calls stealth assessment.

With a growing number of tools available to us, test developers should always be striving to improve how we test what we teach so that we can proudly teach to the test.

European Conference Close Up: What sessions interest you most?

Posted by Chloe Mendonca

With so many great sessions planned for the Questionmark 2013 European Users Conference in Barcelona November 10 – 12, now is the perfect time to check out the conference agenda and sign up.

The program is set to cover a wide array of topics and I thought I’d summarise one from each of the conference tracks.

Features & Functions: Customising the Participant Interface

Adding your logo to participant login screens and assessments, and customising the styles and behaviours used in assessment templates, are just some of the exciting things you’ll learn in this session. For many organisations, ensuring that the participant interface conforms to organisational style guidelines is very important. So if you want to learn about techniques and best practices for customising the interface templates, I’d definitely recommend this session!

Case Study: Using a Blended Test Delivery Model to Drive Strategic Success for SAP Certification

“Author once, deliver anywhere.” This is a major strategic initiative for SAP. And in a genuinely global company, certification needs to reach people in a variety of geographies and languages. In this session you’ll hear about experiences of delivering the same exams through various delivery channels globally.

Best Practice: Assessment Feedback – What Can We Learn From Psychology Research?

Retaining knowledge and applying what you learn matters to every individual, no matter the industry. We all forget a surprising amount of what we learn, but quizzes and tests encourage you to practise retrieval, making it more likely that things will stick in your mind. John Kleeman, Questionmark’s founder and chairman, will lead this fascinating best practice session, providing actionable ideas you can apply to your Questionmark assessments to improve retention.

Bonus Sessions & Demos: Tips for Delivering Your Assessments to Mobile Devices

With an ever-growing mobile world, we’re seeing more and more people using mobile devices for assessment and exploring the potential of tablets as “mobile test centres”. This means the ability to reach millions of people anytime, anywhere, and it is changing the way we test by making it possible to gather information and get results on the spot. This presentation will showcase Inlea’s recent use of mobile assessments and the benefits it has brought to their organisation, as well as provide useful pointers on designing tests for small screens.

I look forward to meeting you next month in Barcelona.

Click here to register

Writing Good Surveys, Part 5: Finishing Up on Response Scales

Posted by Doug Peterson

If you have not already seen part 4 of this series, I’d recommend reading what it has to say about number of responses and direction of response scales as an introduction to today’s discussion.

To label or not to label, that is the question (apologies to Mr. Shakespeare). In his Harvard Business Review article, Getting the Truth into Workplace Surveys, Palmer Morrel-Samuels presents the following example:

[Image: example response scale with a verbal label for every option]

Mr. Morrel-Samuels’ position is that the use of words or phrases to label the choices is to be avoided because the labels may mean different things to different people. What I consider to be exceeding expectations may only just meet expectations according to someone else. And how far is “far” when someone far exceeds expectations? Is it a great deal more than “meets expectations” and a little bit more than “exceeds expectations,” or is it a great deal more than “exceeds expectations”? Because of this ambiguity, Mr. Morrel-Samuels recommends labeling only the first and last options and using numbers to label every option, as shown here:

[Image: response scale labelled only at the endpoints, “never” and “always,” with a number for each option]

The idea behind this approach is that “never” and “always” should mean the same thing to every respondent, and that the use of numbers indicates an equal difference between each choice.

However, a quick Googling of “survey response scales” reveals that many survey designers recommend just the opposite – that scale choices should all be labeled! Their position is that numbers have no meaning on their own and that you’re putting more of a cognitive load on the respondent by forcing them to determine the meaning of “5” versus “6” instead of providing the meaning with a label.

I believe that both sides of the argument have valid points. My personal recommendation is to label each choice, but to take great care to construct labels that are clear and concise. I believe this is also a situation where you must take into account the average respondent – a group of scientists may be quite comfortable with numeric labels, while the average person on the street would probably respond better to textual labels.

Another possibility is to avoid the problem altogether by staying away from opinion-based answers whenever possible. Instead, look for opportunities to measure frequency. For example:

I ride my bicycle to work:
[Image: response scale with vaguely defined frequency options]

In this example, the extremes are well-defined, but everything in the middle is up to the individual’s definition of frequency. This item might work better like this:

On average, I ride my bicycle to work:

[Image: the same item with precisely defined frequency options]

Now there is no ambiguity among the choices.

A few more things to think about when constructing your response scales:

  • Space the choices evenly. Doing so provides visual reinforcement that there is an equal amount of difference between the choices.
  • If there is any possibility that the respondent may not know the answer or have an opinion, provide a “not applicable” choice. Remember, this is different from a “neutral” choice in the middle of the scale. The “not applicable” choice should be different in appearance, for example, a box instead of a circle and greater space between it and the previous choice.
  • If you do use numbers in your choice labels, number them from low to high going left to right. That’s how we’re used to seeing them, and we tend to associate low numbers with “bad” and high numbers with “good” when asked to rate something. (See part 4 in this series for a discussion on going from negative to positive responses.) Obviously, if you’re dealing with a right-to-left language (e.g., Arabic or Hebrew), just the opposite is true.
  • When possible, use the same term in your range of choices. For example, go from “not at all shy” to “very shy” instead of “brave” to “shy”. Using two different terms hearkens back to the problem of different people having different definitions for those terms.

Be sure to stay tuned for the next installment in this series. In part 6, we’ll take a look at putting the entire survey together – some “form and flow” best practices. And if you enjoy learning about things like putting together good surveys and writing good assessment items, you should really think about attending our European Users Conference or our North American Users Conference. Both conferences are great opportunities to learn from Questionmark employees as well as fellow Questionmark customers!
