Conceptual Assessment Framework: Building the Task Model

Posted by Austin Fossey

In my previous post, I introduced the student model—one of the three sections of the Conceptual Assessment Framework (CAF) in Evidence-Centered Design (ECD). At the other end of the CAF is the task model.

The task model defines the assumptions and specifications for what a participant can do within your assessment (e.g., Design and Discovery in Educational Assessment: Evidence-Centered Design, Psychometrics, and Educational Data Mining; Mislevy, Behrens, Dicerbo, & Levy, 2012). This may include the format of the items, the format of the assessment itself, and the work products that the participant may be expected to create during the assessment.

Most importantly, the task model should be built so that the assessment tasks are appropriate for eliciting the behavior that you will use as evidence about the assessed domain. For example, if you are assessing a participant’s writing abilities, an essay item would probably be specified in your task model instead of a slew of multiple choice items.

You may have already defined pieces of your task model without even realizing it. For example, if you are using Questionmark to conduct observational assessments in the workplace, you have probably decided that the best way to gather evidence about a participant’s proficiency is to have them perform a task in a work environment. That task model may elicit behavior (and therefore evidence) that perhaps could not be captured well in other environments, for instance a traditional computer-based assessment.

In the observational assessment example below, the task model specifies that the participant has a ladder and an environment in which they can set up and climb the ladder. The task model might also specify information about the size of the ladder and the state of the ladder when the assessment begins.


Sample of an observational assessment

The task model can also help you avoid making inappropriate assessment design decisions that might threaten the validity of your inferences about the results.

I often see test developers use more complicated item types (like drag and drop) when a multiple choice item would have been more appropriate. For example, picture an assessment about human anatomy. If you want to know if the participant can find a kidney amongst its surrounding anatomy during an operation, then you would likely build a drag and drop item. If you just want to know if a participant knows what a kidney looks like, you may want to use a multiple choice item with three to five pictures of organs from which the participant must choose.

The task model also encompasses other design and delivery decisions you will make about your assessment. For example, a time limit, the participant’s ability to review answers, access to resources (e.g., references, calculators), and item translations might all be specified in your task model.

By specifying your task model in advance and tying your design decisions to the inferences you want to make about the participant’s results, you can ensure that your assessment instrument is built to gather the right evidence about your participants.

Join the march — to March 2014!

Posted by Joan Phaup

In early fall, we step back and marvel at how quickly the year has gone by and acknowledge that it will soon be over.

This means, of course, that although the Questionmark 2014 Users Conference is just over five months away, time will fly before we meet March 4 – 7 in San Antonio, Texas. And that, in turn, means that now is the time to get ready for this essential learning event.

Here are a few ways to get started:

Get with the program: We have begun building the conference agenda and will be adding to it in the coming weeks. Bookmark the schedule-at-a-glance and keep an eye out for new developments. Please note: If you are new to Questionmark, plan to attend the full-day Boot Camp for Beginners workshop. Bring your own laptop and get some basic training before the conference starts.

Get on the program: It’s often said that the more you give, the more you get back, and that’s true of customers who participate in the conference as case study presenters and peer discussion leaders. They make loads of connections with their peers and Questionmark staff. They also receive some red-carpet treatment, including a special dinner in their honor, and we award a 50% registration discount per case study. So think about what you’d like to contribute in 2014. We’re eager to hear from you. See the call for proposals for details.

Plan ahead: Plan your budget now and consider your conference ROI. The time and effort you save by learning effective ways to run your assessment program will more than pay for your conference participation. Check out the reasons to attend and the conference ROI toolkit here.

Sign up soon for early-bird savings: You will save $200 by registering before December 12, and your organization will save even more by taking advantage of group registration discounts. Get all the details and register soon.

See you in San Antonio!





Writing Good Surveys, Part 4: Response Scales

Posted by Doug Peterson

In part 2 and part 3 of this series, we looked at writing good survey questions. Now it’s time to turn our attention to the response scale.

Response scales come in several flavors. The binary or dichotomous scale is your basic yes/no option. A multiple choice scale offers three or more discrete selections from which to choose. For example, you might ask “In which type of public sector organization do you work?” and provide the following choices:

  • Federal
  • State
  • County
  • Municipal
  • City/Local
  • Other

Dichotomous and multiple choice scales are typically used for factual answers, not for opinions or ratings. The key to these types of scales is making sure you offer the respondent an answer they can use. For example, asking a hotel guest “Did you enjoy your stay?” and then giving them the options “yes” and “no” is not a good idea. They may have greatly enjoyed their room but been very dissatisfied with the fitness center, and this question/response scale pairing does not allow them to differentiate between the different aspects of their visit. A better approach might be to ask “Did you use the fitness center?” with a yes/no response, and if they did, have them answer more detailed questions about their fitness center experience.
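That gated follow-up pattern can be sketched as a simple conditional flow. This is a hypothetical illustration in plain Python, not tied to any particular survey tool; the question wording and function name are made up for the example:

```python
# Sketch of conditional survey branching: a yes/no gate question
# decides whether detailed follow-up questions are shown at all.

def build_question_list(used_fitness_center: bool) -> list[str]:
    """Return the questions a respondent should see, given their gate answer."""
    questions = ["Did you use the fitness center? (yes/no)"]
    if used_fitness_center:
        # Only respondents who answered "yes" rate the specifics.
        questions += [
            "How would you rate the fitness center equipment?",
            "How would you rate the fitness center cleanliness?",
        ]
    return questions

print(len(build_question_list(True)))   # gate question plus two follow-ups: 3
print(len(build_question_list(False)))  # gate question only: 1
```

The point of the gate is that respondents who never used the facility are never forced into a rating they cannot meaningfully give.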

The response scale we typically think about when it comes to surveys is the descriptive scale, where the respondent describes their opinion or experience as a point on a continuum between two extremes such as “strongly disagree” to “strongly agree”. These are the scales that elicit the most debate among the experts, and I strongly encourage you to Google “survey response scales” and do a little reading. The main points of discussion are number of responses, direction, and labeling.

Number of Responses

We’ve all seen the minimalist approach to rating scales:

[1] Disagree     [2] Neutral     [3] Agree

There are certainly situations where this scale is valid, but most of the time you will want to provide more options to allow the respondent to provide a more detailed answer:

[1] Strongly Disagree     [2] Disagree     [3] Neutral     [4] Agree     [5] Strongly Agree

Five choices is very typical, but I would agree with the point Ken Phillips made during his session on surveys at ASTD 2013: five may not be enough. Think of it this way: I’m pretty much going to either disagree or agree with the statement, so two of the choices are immediately eliminated. Therefore, what I *really* have is a three-point scale: Strongly Disagree, Disagree, and Neutral, or Neutral, Agree, and Strongly Agree (and an argument can be made for taking “Neutral” out of the mix when you do agree or disagree at some level). Ken, drawing on the Harvard Business Review article “Getting the Truth into Workplace Surveys” by Palmer Morrel-Samuels, recommends using a minimum of seven choices and indicates that nine- and eleven-choice scales are even better. However, a number of sources feel that anything over seven choices puts a cognitive load on the respondent: presented with too many options, they have trouble choosing between them. Personally, I recommend either five or seven choices.
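Whichever width you settle on, it helps to define the scale once as an ordered mapping from label to numeric code, so that responses are coded consistently when you analyze them. A minimal sketch, with illustrative label wording for the seven-point variant:

```python
# Coding five-point and seven-point descriptive (Likert-style) scales
# as ordered label-to-number mappings for consistent analysis.

FIVE_POINT = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

# Intermediate labels here are one common convention, not the only one.
SEVEN_POINT = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Somewhat Disagree": 3,
    "Neutral": 4,
    "Somewhat Agree": 5,
    "Agree": 6,
    "Strongly Agree": 7,
}

def mean_score(responses, scale):
    """Average the numeric codes for a list of label responses."""
    return sum(scale[r] for r in responses) / len(responses)

print(mean_score(["Agree", "Strongly Agree", "Neutral"], FIVE_POINT))  # 4.0
```

Keeping the label-to-number mapping in one place also makes it trivial to reverse-code negatively worded items later without touching the raw responses.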


Direction

By “direction” I’m referring to whether the scale runs from “Strongly Disagree” to “Strongly Agree” or from “Strongly Agree” to “Strongly Disagree”. Many experts will tell you that it doesn’t matter, but others point to studies showing that respondents tend to want to agree with the statement. If you start with “Strongly Agree”, the respondent may select that choice almost automatically, as it is a positive response and they tend to want to agree anyway. This could skew your results. If the first choice is “Strongly Disagree”, however, the respondent will have more of a tendency to read through the choices, as “Strongly Disagree” has a negative feel to it (it’s not an attractive answer) and respondents will shy away from it unless they truly feel that way. They are then more likely to truly differentiate between “Agree” and “Strongly Agree”, instead of seeing “Strongly Agree” first, thinking, “Yeah, what the heck, that’s good enough,” and selecting it without much thought.

In Part 5, we’ll finish up this discussion by taking a look at labeling each choice, along with a few other best practices related to response scales.

If you are interested in authoring best practices, be sure to register for the Questionmark 2014 Users Conference in San Antonio, Texas March 4 – 7. See you there!

Barcelona or San Antonio or both?

Posted by John Kleeman

Questionmark user conferences are unforgettable. I’ve been to all 14 of them so far, and each is engraved in my memory as an empowering, enriching and mesmerising event. We are running two user conferences in the next few months, and if you have a chance to attend one (or both!) I promise you won’t regret it.

Our first upcoming conference is the Questionmark European Conference in Barcelona, Spain on 10-12 November. Barcelona is one of the most exciting cities in Europe and will be a great place to learn from other assessment professionals.

Our second upcoming conference is the Questionmark US Users Conference in San Antonio, Texas on 4-7 March, 2014. San Antonio is the home of the Alamo, and the conference venue is part of the River Walk, a uniquely peaceful and positive environment for a conference.

Here are five reasons I think Questionmark conferences are worth coming to:

1. Learn about assessments. I’ve been working with assessments for over 25 years … I know a lot, but  I’m still learning. Quizzes, surveys, tests and exams are hugely powerful ways of measuring human behavior and helping organizations improve. There is so much to learn.

2. Learn from Questionmark.  Our best presenters and technical experts are at the conference, and they have a lot to share.

3. Learn from peers. Most attendees say that the best thing about a Questionmark user conference is meeting and learning from peers who face similar issues. A problem shared is often a problem solved, and you can find out what other people have done in their organizations to solve the problems you are facing in yours.

4. Influence the future of the product. What we learn at these conferences contributes hugely to how we improve our products and services. Our product owners (people like Jim Farrell, Austin Fossey, Doug Peterson and Steve Lay) attend the conferences and listen carefully to what our customers say.

5. Great cities. We know that people who come to our conferences go back to their organization passionate about online assessments and enthusiastic about wider use of Questionmark. We choose great venues for our conferences, and provide memorable experiences in a great environment so the conferences are fantastic personal experiences as well as being fulfilling learning opportunities.

I look forward to meeting readers of this blog at the conferences. And if any conference attendee can name the cities where the 14 conferences prior to these ones were held, I will buy you a drink of your choice!

Get details here for the European Conference and here for the US Conference.

Managing a complex testing environment with a QMWISe-based dashboard

Posted by Chloe Mendonca

Earlier this week I spoke with Gerard Folkerts of Wageningen University, who will be delivering a presentation at the upcoming Questionmark 2013 European Users Conference in Barcelona, November 10 – 12.

In their case study presentation, Gerard and his co-presenter, Gerrit Heida, will share their experience using the QMWISe API to create a dashboard for their Questionmark Perception installation.

Could you tell me about Wageningen University and how you use Questionmark assessments?

Wageningen University is the only university in the Netherlands to focus specifically on the themes of ‘healthy food and the living environment’ — and for the eighth year in a row we’ve been awarded the title of “Best University in the Netherlands”.

We use Questionmark for summative and formative assessments. We run about 10 to 20 summative assessments each period, with up to 500 participants simultaneously. There are also formative assessments available to students. In addition to the web-based assessments, we have a setup for running tests in a secure environment that requires the use of Microsoft Office, SAS, SPSS, etc.

Could you tell me a little more about this secure setup?

The secure test environment turns a regular PC room into a test center. Each computer inside the room is cut off from communication with the outside world. With our in-house developed tools, we are able to deliver the necessary documents for the assessment to each participant. Such an assessment could be writing a report based on data analysis within SPSS. At the end of the assessment, all the work of the participants is stored in a central location.

How do you effectively manage numerous participants taking simultaneous tests?

First of all, we have set up a number of procedures to ensure a stable testing environment. We have separated the environments needed for question development, formative testing and summative testing. Multiple administrators work in the same environment and can schedule assessments to start simultaneously. With the Questionmark Web Integration Services environment (QMWISe), we created a dashboard that gives a real-time view of the number of assessments scheduled and the number of participants enrolled in them.

How has this helped to control your environment?

Using the standard QMWISe API makes it possible to build a dashboard that shows this data, and with the information already there, it is easy to do a basic health check on the assessments. For technical support it is essential to predict how much load can be expected on your QMP farm, so the dashboard has helped us get our QMP environment under control.
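At its core, a QMWISe-style integration boils down to posting SOAP envelopes to the web service and parsing the responses. The sketch below only builds the request envelope; the method name (GetScheduleList), parameter name and namespace are illustrative placeholders, so consult your QMWISe version’s documentation for the actual operations and security headers:

```python
# Sketch of building a SOAP 1.1 request envelope for a QMWISe-style
# web service call. Method name, parameter name and namespace are
# placeholders, not confirmed QMWISe identifiers.

from xml.sax.saxutils import escape

QMWISE_NS = "http://questionmark.com/QMWISe/"  # placeholder namespace

def soap_envelope(method: str, params: dict) -> str:
    """Wrap a method call and its parameters in a SOAP 1.1 envelope."""
    body = "".join(
        f"<{k}>{escape(str(v))}</{k}>" for k, v in params.items()
    )
    return (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        "<soap:Body>"
        f'<{method} xmlns="{QMWISE_NS}">{body}</{method}>'
        "</soap:Body></soap:Envelope>"
    )

# A dashboard might poll an operation like this on a timer:
request = soap_envelope("GetScheduleList", {"Group_ID": "42"})
print("GetScheduleList" in request)  # True
```

A real dashboard would POST this envelope to the QMWISe endpoint with the appropriate SOAPAction header, then aggregate the returned schedule and participant counts for display.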

How do you hope that your session will benefit others?

I hope that my session will give some insight into the possibilities of QMWISe. Once you understand how it works, it is not that hard to add custom modules to the Questionmark Perception environment. I think the API is undervalued.

What are you looking forward to most about the Users Conference?

Meeting with other Questionmark users and learning about the roadmap of Questionmark Perception.  And of course a visit to Barcelona…

Check out the conference program and click here to register. We hope to see you there!


Getting Results with Questionmark

Are you new to Questionmark? Do you want to know more about who we are and what we do?  This short video will take you through the process of creating, delivering and reporting on assessments with Questionmark. Enjoy, and feel free to share and repost!