Video tutorials, best practices & more in the Learning Café

Posted by Joan Phaup

We frequently add new videos to the Questionmark Learning Café, where you will find everything from quick tutorials to complete webinars about best practices in the use of online surveys, quizzes, tests and exams.

Here are just a few recent additions:

There’s lots more to discover in the Learning Café, no matter how much or how little experience you have had with online assessments.

Find more videos here!


Keep those conference comments coming through December 1

Posted by Joan Phaup

With Thanksgiving approaching, something we are always grateful for here at Questionmark is, of course, our customers!

I’d like to thank people who have posted comments on our Facebook page as part of our 2014 Users Conference sweepstakes.

Those who “like” our page and post comments on the conference banner there are being entered into a random drawing for a free conference registration plus a food service gift certificate from the Grand Hyatt San Antonio.

If you have not done this yet, there’s still time! The sweepstakes ends Sunday, December 1, so take a moment to tell us why you’d like to attend the conference. We’ll put your name in the hat along with all the others. And if you’ve already registered, we’ll refund your fee.

Here are just a few of the answers we’ve received to the question, “Why would you like to attend the Questionmark 2014 Users Conference?”

  • “Excellent learning opportunities…and tacos.”
  • “I met so many great people at last year’s conference in Baltimore that were using Questionmark in so many different ways to support training. Good opportunity to pick everyone’s brains!”
  • “I have never attended before and Questionmark has become essential in my day-to-day. So I’d like face-to-face interaction with other users and the Questionmark staff.”
  • “Would enjoy seeing what’s new with Questionmark and what is planned for the future.”
  • “We are getting ready to implement testing through Questionmark, so conference attendance would be a great opportunity to learn more about the software and network!”
  • “I would like to hear stories from other users as well as meet more of the Questionmark staff, who have always been generous in their assistance.”

Then there was the entrant who raved about everything from Tech Central and breakout presentations to the fun of meeting people with common interests, then simply put into words the wish of everyone who has written in so far: “PICK ME!!!”

The winner’s name will be drawn at random on December 2nd. If you’re not on Facebook or have questions about the sweepstakes rules, click here.

So, what’s your answer to the question? Like us on Facebook if you haven’t already — and click on the conference sweepstakes banner there to tell us why you’d like to attend the conference. The deadline for entries is Sunday, December 1, so go to our Facebook page soon to “Like,” “Click,” and “Comment.”


Item Analysis Report – Item Difficulty Index

Posted by Austin Fossey

In classical test theory, a common item statistic is the item’s difficulty index, or “p value.” Given many psychometricians’ notoriously poor spelling, might this be due to thinking that “difficulty” starts with p?

Actually, the p stands for the proportion of participants who got the item correct. For example, if 100 participants answered the item, and 72 of them answered the item correctly, then the p value is 0.72. The p value can take on any value between 0.00 and 1.00. Higher values denote easier items (more people answered the item correctly), and lower values denote harder items (fewer people answered the item correctly).
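In code, the p value is nothing more than a proportion. Here is a minimal sketch (the function name and interface are my own, not part of any Questionmark tool):

```python
def p_value(num_correct: int, num_respondents: int) -> float:
    """Classical item difficulty index: the proportion of correct responses."""
    if num_respondents <= 0:
        raise ValueError("need at least one respondent")
    return num_correct / num_respondents

# 72 of 100 participants answered the item correctly
print(p_value(72, 100))  # 0.72
```

The result always falls between 0.00 (nobody correct) and 1.00 (everybody correct).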

Typically, test developers use this statistic as one indicator for detecting items that could be removed from delivery. They set thresholds for items that are too easy and too difficult, review them, and often remove them from the assessment.

Why throw out the easy and difficult items? Because they are not doing as much work for you. When calculating the item-total correlation (or “discrimination”) for unweighted items, Crocker and Algina (Introduction to Classical and Modern Test Theory) note that discrimination is maximized when p is near 0.50 (about half of the participants get it right).

Why is discrimination so low for easy and hard items? An easy item means that just about everyone gets it right, no matter how proficient they are in the domain; a hard item means that almost everyone gets it wrong. Either way, the item does not discriminate well between high and low performers. (We will talk more about discrimination in subsequent posts.)
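A quick simulation makes this concrete. The sketch below uses synthetic data (it is not Questionmark's actual report computation) and correlates simulated 0/1 item scores with a latent ability variable: the moderate-difficulty item tracks ability far better than the very easy one.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
ability = rng.normal(size=n)  # latent proficiency for each participant

def simulate_item(difficulty: float) -> np.ndarray:
    """0/1 item scores: probability of a correct answer rises with ability."""
    prob_correct = 1.0 / (1.0 + np.exp(-(ability - difficulty)))
    return (rng.random(n) < prob_correct).astype(int)

moderate_item = simulate_item(0.0)   # p value near 0.50
easy_item = simulate_item(-3.0)      # p value near 0.95

def discrimination(item: np.ndarray) -> float:
    """Point-biserial correlation between item score and ability."""
    return float(np.corrcoef(item, ability)[0, 1])

print(moderate_item.mean(), discrimination(moderate_item))
print(easy_item.mean(), discrimination(easy_item))
```

Running this, the easy item's p value lands around 0.95 and its correlation with ability is noticeably lower than the moderate item's, which illustrates why mid-range p values do more discriminating work.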

Sometimes you may still need to use a very easy or very difficult item on your test form. You may have a blueprint that requires a certain number of items from a given topic, and all of the available items might happen to be very easy or very hard. I also see this scenario in cases with non-compensatory scoring of a topic. For example, a simple driving test might ask, “Is it safe to drink and drive?” The question is very easy and will likely have a high p value, but the test developer may include it so that if a participant gets the item wrong, they automatically fail the entire assessment.

You may also want very easy or very hard items if you are using item response theory (IRT) to score an aptitude test, though it should be noted that item difficulty is modeled differently in an IRT framework. IRT yields standard errors of measurement that are conditional on the participant’s ability, so having hard and easy items can help produce better estimates of high- and low-performing participants’ abilities, respectively. This differs from classical test theory, where the standard error of measurement is the same for all observed scores on an assessment.

While simple to calculate, the p value requires cautious interpretation. As Crocker and Algina note, the p value is a function of the number of participants who know the answer to the item plus the number of participants who were able to guess it correctly. In an open response item, that latter group is likely very small (absent any cluing in the assessment form), but in a typical multiple choice item, a number of participants may answer correctly based on an educated guess.

Recall also that p values are statistics—measures from a sample. Your interpretation of a p value should be informed by your knowledge of the sample. For example, if you have delivered an assessment, but only advanced students have been scheduled to take it, then the p value will be higher than it might be when delivered to a more representative sample.

Since the p value is a statistic, we can calculate the standard error of that statistic to get a sense of how stable the statistic is. The standard error will decrease with larger sample sizes. In the example below, 500 participants responded to this item, and 284 participants answered the item correctly, so the p value is 284/500 = 0.568. The standard error of the statistic is ± 0.022. If these 500 participants were to answer this item over and over again (and no additional learning took place), we would expect the p value for this item to fall in the range of 0.568 ± 0.022 about 68% of the time.
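That standard error follows the usual binomial formula, sqrt(p(1 - p)/n). A quick check (my own sketch, not the report's code) reproduces the numbers above:

```python
import math

def p_value_with_se(num_correct: int, n: int) -> tuple[float, float]:
    """Return the item p value and its standard error, sqrt(p*(1-p)/n)."""
    p = num_correct / n
    se = math.sqrt(p * (1 - p) / n)
    return p, se

p, se = p_value_with_se(284, 500)
print(round(p, 3), round(se, 3))  # 0.568 0.022
```

Note how the formula divides by n: quadrupling the sample size halves the standard error, which is why larger samples give more stable p values.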


Item p value and standard error of the statistic from Questionmark’s Item Analysis Report

Winter Webinars in the UK: Questionmark Live updates and more

Posted by Chloe Mendonca

The holidays are just around the corner, but the excitement of learning continues year-round here at Questionmark!

Our free, one-hour web seminars give you the opportunity to find out what’s happening in the world of online assessment and consider which tools and technologies would be most useful to you. Here’s the current line-up:

What’s New in Questionmark Live Browser-Based Authoring?

With the addition of two new item types to Questionmark Live, our browser-based authoring tool, we have two web seminars set up for you to learn about these and other changes you may have missed. Get an overview of Questionmark Live’s capabilities and all the latest new features.

  • Friday, 13th December 2013 – 3:00 PM (GMT – London)
  • Friday, 24th January 2014 – 3:00 PM (GMT – London)

Creating Assessments for Mobile Delivery

If you want some top tips on how to create assessments for mobile devices using Questionmark’s responsive design features, then don’t miss this webinar.

A report earlier this year stated that the number of smartphones, tablets, laptops and internet-capable devices would exceed the number of humans by the end of 2013 — and that has come to pass. Mobile delivery is the way to reach people on the go. This session will help you get started.

  • Monday, 13th January 2014 – 2:00 PM (GMT – London)
  • Tuesday, 29th January 2014 – 10:30 AM (GMT – London)

Introduction to Questionmark’s Assessment Management System

For a more general overview of Questionmark’s assessment technologies, join this seminar, which explains and demonstrates key features and functions available in Questionmark OnDemand and Questionmark Perception. Spend an hour with a Questionmark expert learning the basics of authoring, delivering and reporting on surveys, quizzes, tests and exams.

  • Monday, 25th November 2013 – 2:30 PM (GMT – London)
  • Tuesday, 3rd December 2013 – 10:30 AM (GMT – London)
  • Thursday, 16th January 2014 – 11:00 AM (GMT – London)

Click here to choose your webinar and register online. I hope you enjoy the session you choose, and if you have any questions, reach out to us!

Nine tips on recommended assessment practice — from Barcelona

Posted by John Kleeman

Something I enjoy most about our users conferences is the chance to learn from experts about good practice in assessments. Most of our customers have deep knowledge and insightful practical experience, so there is always much to learn.

Here are some tips I picked up last week at our recent European Users Conference in Barcelona.

1. Make sure to blueprint. It’s critical to have a detailed design (often called a blueprint) for an assessment – or as one user shared, “Without a blueprint, you don’t have an assessment”.

2. Network to get SMEs. If your assessments cover IT or other fast-moving technology, the content changes very quickly, and the quality of the subject matter experts (SMEs) who create and review items is critical. As an assessment owner, use networking skills to get the right SMEs on board; getting them engaged and building trust are essential.

3. Test above knowledge. Develop questions that test application or comprehension, for instance using scenarios. They are more likely to make your test valid than questions that simply test facts.

4. Give employees ownership of their own compliance testing. If employees have to take annual refresher tests, give them the responsibility to do so and encourage self-learning and pre-reading. Give them plenty of time (e.g. 6 weeks’ warning), but make it their responsibility to take and pass the test in the window, not yours to keep on reminding them.

5. Gather feedback from participants. Make sure you solicit feedback from your participants on tests and the testing experience. That way you will learn about weak questions and how to improve your testing process. And you also make participants feel that the process is fairer.

6. Use job/task analysis. Asking questions about jobs and tasks is the best way to specify the criteria used to judge competency or proficiency. These questions can be automated in Questionmark right now. Watch this space for improvements coming to make this easier.

7. Look at Questionmark Live for item review workshops. If you have any formal or informal process for having groups of people working on or reviewing items, look at Questionmark Live. It’s free to use, has great group working capability and improves productivity. A lot of organizations are having success with it.

8. Keep feedback short and to the point… especially on mobile devices, where people won’t read long messages.

9. Look for live data, not just your rear view mirror. Data is important – without measurement we cannot improve. But make sure the data you are looking at is not dead data. Looking into the rear view mirror of what happened in the past doesn’t help as much as using reports and analytics from Questionmark to discover what is happening now, and use that data to improve things.

I hope some of these tips can help you in your work with assessments.

I will write in the spring about the tips I gather at the 2014 U.S. Users Conference in San Antonio!

Reflections on Barcelona: Great learning, great connections

Posted by Doug Peterson

I had the opportunity to attend and present at the Questionmark European Users Conference in Barcelona, Spain, November 10 – 12. If you’ve never had the chance to go to Barcelona, I highly recommend it! This is a *beautiful* city full of charming people, wonderful architecture, and GREAT food.

Traveling and seeing new sights and engaging in new adventures is always fun, but the true “stars of the show” at a users conference are, of course, the users.

It was wonderful to catch up with customers I first met two years ago in Brussels. I also had the opportunity to meet in person several people I had met over the last couple of years only through emails and conference calls, and it was great to put a face with a name. And of course, it was wonderful to make brand new friends whom I hope to see again next year!

One of the things I enjoy the most about Questionmark Users Conferences is how customers learn from each other. This happens in a more structured way during the many sessions presented by members of our user community, but I enjoy it even more when it happens more informally.

During the opening reception Sunday night I had the opportunity to talk with several customers, to hear why they were attending – what they wanted to get out of the conference – and then introduce them to another customer or a Questionmark employee who could help them meet their goals for the conference. Breakfast and lunch conversations were always interesting, and even during Monday night’s fantastic dinner at El Torre Dels Lleones, the conversations between different users facing various challenges continued (except, of course, when we were all watching the flamenco dancers perform!). These conversations are simply invaluable, not only because customers help each other find answers to their challenges, but because I as an employee gain insights and a depth of understanding as to what our customers are doing and the problems they are facing in ways that an email or a phone call can’t communicate.

Tuesday morning we tried something new during the General Session. Howard Eisenberg gave an informative presentation on item writing best practices, and at certain points he would pause so that each table could discuss the current topic amongst themselves. Then he would take comments from some of the tables before moving forward with the next topic. The conversations at the table where I was sitting were GREAT! The sharing of different perspectives and experiences resulted in a lot of “Oh, I never thought of that!” expressions all around the room. THAT’S why I love going to our users conferences: there’s just nothing like the information exchange and growth that takes place when a bunch of Questionmark users gather together in one place.

If you weren’t able to make it to Barcelona, I hope you can come to San Antonio, Texas, March 4 – 7. We’ll be right on the city’s River Walk. I’ve been there several times visiting family in the area, and I can tell you it’s beautiful and FUN! Please join us. I’m confident that by the end of the conference you will agree that it was time well spent.
