Reflections on the San Antonio Users Conference

Posted By Doug Peterson

I had the good fortune of attending the Questionmark Users Conference in San Antonio, Texas a couple of weeks ago.

As required by (personal) law, I visited the Hard Rock Café for dinner on my first night in town! And let me tell you, if you missed the fresh sushi at the Grand Hyatt’s Bar Rojo, you missed something pretty doggone special.

But more special than Hard Rock visits and heavenly sushi was the chance to interact with and learn from Questionmark users. Honestly, users conferences are a favorite part of my job. The energy, the camaraderie, the ideas – it all energizes me and keeps me fired up!

We had a great session on Item Writing Techniques for Surveys, Quizzes and Tests. We had some wonderful conversations – I like for my sessions to be more of a conversation than a lecture – and I picked up some helpful tips and examples to work into my next presentation. For those of you who couldn’t make this session, it’s based on a couple of blog series. Check out the Writing Good Surveys series as well as the Item Writing Guide series. You’ll also want to check out Improving Multiple Choice Questions and Mastering Your Multiple Choice Questions for more thoughts on that question type.

The other session I led was on using Captivate and Flash application simulations in training and assessments. As with my previous presentations on this topic, the room was packed and people were excited! During my years as a Questionmark customer, I was always impressed with the Adobe Captivate Simulation and Adobe Flash question types. I feel even more strongly about this since attending a webinar that a fairly popular LMS put on the other day. The process you have to go through to build a software simulation in one of their assessments is far too involved and complicated – it really drove home the simplicity of using the Captivate question type in Questionmark.

It really was great to see old friends and make new ones at the conference. I look forward to working with customers throughout the rest of 2014 and to seeing them again soon.

Assembling the Test Form — Test Design and Delivery Part 7

Posted By Doug Peterson

In the previous post in this series, we looked at putting together assessment instructions for both the participant and the instructor/administrator. Now it’s time to start selecting the actual questions.

Back in Part 2 we discussed determining how many items needed to be written for each content area covered by the assessment. We looked at writing 3 times as many items as were actually needed, knowing that some would not make it through the review process. Doing this also enables you to create multiple forms of the test, where each form covers the same concepts with equivalent – but different – questions. We also discussed the amount of time a participant needs to answer each question type, as shown in this table:

[Table: estimated time needed per question type]

As you’re putting your assessment together, you need to account for the time required to take it: multiply the number of items of each question type by the per-item times in the table above. (A quick sketch of this arithmetic follows the list below.)

You also need to allow time for:

  • Reading the instructions
  • Reviewing sample items
  • Completing practice items
  • Completing demographic info
  • Taking breaks
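
To make the arithmetic concrete, here is a minimal Python sketch of the time-budget calculation. The per-item times, item counts and overhead allowance are all assumptions for illustration – substitute the values from your own planning table.

```python
# Rough sketch of the time-budget arithmetic described above.
# Per-item seconds, item counts and overhead are placeholder assumptions.
seconds_per_item = {
    "multiple_choice": 60,   # assumed
    "fill_in_blank": 60,     # assumed
    "matching": 120,         # assumed
    "short_answer": 180,     # assumed
}

# Hypothetical item counts for a draft form.
item_counts = {
    "multiple_choice": 30,
    "fill_in_blank": 10,
    "matching": 2,
    "short_answer": 3,
}

# Fixed overhead: instructions, sample items, practice items, demographics, breaks.
overhead_minutes = 15  # assumed

question_minutes = sum(
    count * seconds_per_item[qtype] for qtype, count in item_counts.items()
) / 60

total_minutes = question_minutes + overhead_minutes
print(f"Estimated total testing time: {total_minutes:.0f} minutes")
```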

If you already know the time allowed for your assessment, you may have to work backwards or make some compromises. For example, if you only have one hour for the assessment and a large amount of content to cover, consider focusing on multiple choice and fill-in-the-blank questions and staying away from matching and short-answer items, so you can maximize the number of questions in the time allowed.
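
Working backwards might look something like this sketch – again, the time slot, overhead and per-item time are assumed numbers, not recommendations.

```python
# Working backwards from a fixed one-hour slot: how many multiple choice
# questions fit? All of the numbers here are assumptions for illustration.
total_minutes = 60        # the time slot you have been given
overhead_minutes = 15     # instructions, practice items, demographics, breaks
seconds_per_mc_item = 60  # assumed answering time per multiple choice item

available_seconds = (total_minutes - overhead_minutes) * 60
max_mc_items = available_seconds // seconds_per_mc_item
print(f"At most {max_mc_items} multiple choice questions fit in the hour.")
```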

To select the actual items for the assessment, you may want to consider using a Test Assembly Form, which might look something like this:

The content area is in the first column. The second column shows how many questions are needed for that content area (as calculated back in Part 2). Each item should have a short identifier associated with it, and this is provided in the “Item #” column. The “Keyword” column is just that – one or two words to remind you what the question addresses. The last column lists the item number of an alternate item in case a problem is found with the first selection during assessment review.
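
If it helps to keep the form alongside your item bank exports, the same structure can be captured as plain data. This is just one possible representation; the field names and sample rows below are hypothetical.

```python
# One possible way to capture the Test Assembly Form as plain data; the
# field names mirror the columns described above, and the rows are invented.
from dataclasses import dataclass

@dataclass
class AssemblyRow:
    content_area: str
    questions_needed: int   # per content area, as calculated in Part 2
    item_id: str            # the "Item #" column
    keyword: str            # one or two words describing the question
    alternate_id: str       # backup item if the first choice fails review

form = [
    AssemblyRow("Safety procedures", 5, "SAF-012", "lockout/tagout", "SAF-019"),
    AssemblyRow("Safety procedures", 5, "SAF-004", "PPE selection", "SAF-021"),
]

for row in form:
    print(row)
```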

As you select items, watch out for two things:

1. Enemy items. This is when one item gives away the answer to another item. Make sure that the stimulus or answer options of one item do not answer, or provide a clue to, another item.

2. Overlap. This is when two questions basically test the same thing. You want to cover all of the content in a given content area, so each question for that content area should cover something unique. If you find that you have several questions assessing the same thing, you may need to write some new questions or you may need to re-calculate how many questions you actually need.
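
Overlap is ultimately a judgment call, but the Keyword column of the assembly form gives you a crude way to flag candidates for a closer look, as in the Python sketch below. The rows are hypothetical, and enemy items still need a human read-through of stems and answers.

```python
# A crude first pass at spotting possible overlap: items in the same
# content area that share a keyword probably deserve a closer look.
from collections import defaultdict

# Hypothetical (content_area, item_id, keyword) rows from the assembly form.
items = [
    ("Safety procedures", "SAF-012", "lockout/tagout"),
    ("Safety procedures", "SAF-004", "PPE selection"),
    ("Safety procedures", "SAF-019", "lockout/tagout"),
]

by_keyword = defaultdict(list)
for area, item_id, keyword in items:
    by_keyword[(area, keyword.lower())].append(item_id)

for (area, keyword), ids in by_keyword.items():
    if len(ids) > 1:
        print(f"Possible overlap in '{area}' on '{keyword}': {', '.join(ids)}")
```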

Once you have your assessment put together, you need to calculate the cutscore. This topic could easily be another (very lengthy) blog series, and there are many books available on calculating cutscores. I recently read Cutscores: A Manual for Setting Standards of Performance on Educational and Occupational Tests by Zieky, Perie and Livingston. I found it to be a very good book, considering that the subject matter isn’t exactly a “thrill a minute”. The authors discuss 18 different methods for setting cutscores, including which methods to use in various situations and how to carry out a cutscore study. They look at setting cutscores for criterion-referenced assessments (where performance is judged against a set standard) as well as norm-referenced assessments (where the performance of one participant is judged against the performance of the other participants). They also look at pass/fail situations as well as more complex judgments, such as dividing participants into basic, proficient and advanced categories.
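
As a taste of what one of the better-known standard-setting approaches involves, here is a minimal Python sketch of the arithmetic behind an Angoff-style study, in which judges estimate how likely a minimally competent participant is to answer each item correctly. The judge names and ratings are invented, and a real cutscore study involves far more than this averaging step.

```python
# Minimal sketch of the arithmetic behind an Angoff-style cutscore study.
# Each judge rates, per item, the probability that a minimally competent
# participant would answer correctly. The ratings below are invented.
judge_ratings = {
    "Judge A": [0.7, 0.8, 0.6, 0.9, 0.5],
    "Judge B": [0.6, 0.9, 0.5, 0.8, 0.6],
    "Judge C": [0.8, 0.7, 0.6, 0.9, 0.4],
}

# Each judge's expected total score is the sum of their ratings; the
# recommended raw cutscore is the average of those totals across judges.
per_judge_totals = [sum(ratings) for ratings in judge_ratings.values()]
cutscore = sum(per_judge_totals) / len(per_judge_totals)
num_items = len(next(iter(judge_ratings.values())))
print(f"Recommended raw cutscore: {cutscore:.1f} out of {num_items}")
```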