Posted by Joan Phaup
I’ve been enjoying a series of posts in this blog by my colleague Doug Peterson about Test Design and Delivery – so much so that I suggested he elaborate on this theme during a presentation at the Questionmark 2013 Users Conference in Baltimore March 3 – 6 – and he said, “Yes!”
Doug will be presenting on some other topics, too, but during a recent conversation with him I asked if he could tell me a little more about this particular session, which will focus on five processes:
1. Plan: Establish your test’s reliability and validity, and identify content areas to be covered
2. Create: Write items that increase the cognitive load, avoid bias and measure what’s important
3. Build: Pull items together into a test form, develop clear instructions and set passing scores
4. Deliver: Protect test content, control item exposure and discourage cheating
5. Evaluate: Use item-, topic-, and test-level data to assess reliability and improve quality
Could you talk about your own background as a test author?
It mainly stems from what I was doing in the 3 or 4 years before I joined Questionmark. My group was responsible for training call center employees, and that included writing and administering lots of tests. Before my group took that over, all the tests were paper-and-pencil and had to be graded by the instructors. And of course you know that instructors over a period of time tend to bond with their students and tend to lose their objectivity.
It was clear that subjective testing was not good! We were introduced to Questionmark and we said, “Let’s automate these tests and make sure they are objective and fair.” That’s when I really got heavy-duty into testing. Over the course of those few years I attended several Questionmark conferences and went to a number of sessions on item analysis, test analysis, setting cut scores, and so forth. I tried to understand all those kinds of things so that we could run statistical reports on our own content and make sure our tests were valid and reliable.
Those years were filled with testing, and I learned a great deal about item and test writing, secure delivery and analyzing test results. I applied everything I learned to our call center training tests, and the customer satisfaction numbers began to rise. Why? Because our tests were working! The tests were valid and reliable, and because of that, they were weeding out the people who truly were not qualified for the job. Our stakeholders were very pleased because they had confidence that our tests were only passing people who were qualified to work in the call centers.
What do you think are the most challenging aspects of test design and delivery?
That’s hard to answer, because there are so many important things to think about! At the end of the day, the main thing is that the assessment is fair to both the test taker and the stakeholder. That idea encompasses many, many things. For the stakeholder, it requires having a valid, reliable assessment that uses solid methodology. For the participant, it boils down to well-written items. That sounds pretty simple, but it actually requires careful attention to detail.
How will you be addressing these challenges during your presentation at the Users Conference?
We’re going to take a look at everything I’ve been working on in the blog series. What does reliable mean? What does valid mean? How can we appropriately plan an assessment and tie it back to the job or subject matter we’re testing for? We will also incorporate a lot of ideas about item writing. Throughout the session, we will be looking at fairness to the stakeholder and fairness to the participant and breaking those principles down into several components.
Who would benefit from attending this session?
Anyone who has anything to do with creating and delivering assessments: item writers, assessment assemblers, administrators. It’s good for people in these different roles to understand the entire test development and delivery process, so they appreciate their co-workers’ concerns. I see this session as suitable for people who are just beginning their work with Questionmark as well as those at the intermediate level. I’m looking forward to sharing so much of what I learned when I was so closely involved in a testing program myself.