The Open Assessment Platform in Action


Posted by Howard Eisenberg

I was impressed, during the recent Questionmark European Users Conference, to meet so many people who have been using Questionmark’s Open Assessment Platform to create solutions that address their organizations’ particular needs. These customers have used various elements of this platform, which is built on standard technologies, to address their specific challenges. Their solutions make use of the readily available APIs (Application Programming Interfaces), Questionmark Perception version 5 templates and other resources available through the Open Assessment Platform.

Some examples:

By incorporating the functionality of jQuery (a cross-browser, open source JavaScript library) into Questionmark Perception version 5, the University of Leuven in Belgium has been able to set up client-side form validation. Their case study presenter demonstrated how to differentiate between required and optional questions in a survey. Participants could be required, say, to answer the first and third questions but not the second, and they wouldn’t be able to submit the survey until they had answered the required questions. The presenters also showed how a participant could be required to provide a date in a specific, pre-determined format, and they demonstrated an essay question that includes a paragraph containing misspelled words, which students identify by clicking on them. Customizations like these make creative use of the templates in Perception version 5 and demonstrate that it’s an extensible platform with which users can create their own tailor-made solutions.
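To make the technique concrete, here is a minimal sketch (in TypeScript, and not the university’s actual template code) of jQuery-based validation that blocks submission until the required questions are answered and a date matches a fixed pattern. The selectors and question names are placeholders, and jQuery is assumed to be loaded by the Perception template.

// Sketch only: placeholder question names ("q1", "q3", "q_date") and a
// DD/MM/YYYY date check stand in for the real template fields.
declare const $: any; // jQuery, assumed to be included by the template

$(function () {
  $('form').on('submit', function (event: any) {
    const required = ['q1', 'q3']; // first and third questions required, second optional
    const missing = required.filter(
      name => !String($(`[name="${name}"]`).val() ?? '').trim()
    );

    // Require the date answer in a fixed DD/MM/YYYY format
    const dateOk = /^\d{2}\/\d{2}\/\d{4}$/.test(
      String($('[name="q_date"]').val() ?? '')
    );

    if (missing.length > 0 || !dateOk) {
      event.preventDefault(); // keep the survey from being submitted
      alert('Please answer all required questions in the expected format.');
    }
  });
});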

A staff member from Rotterdam University demonstrated a technique for creating random numeric questions using Microsoft Excel and QML (Question Markup Language). This solution makes it possible to base questions on randomly generated values and other well-chosen variables, with limits on lower and upper boundaries. Formulas in Excel generate the numbers that appear in word problems written in QML, which in turn can be used to create many iterations and clones of typical math question types. QML, because it is complete, well structured and well documented, is proving its worth as a tool for generating large numbers of questions and even for providing “smart” feedback: common mistakes can be diagnosed by establishing certain conditions within a question. For example, if the input is supposed to be a number rounded to the nearest tenth and the correct answer is 55.5, a person who enters 55.4 has probably made a rounding error.
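The QML itself isn’t reproduced here, but a short TypeScript sketch (with illustrative names and tolerances) conveys the two ideas: generating values within chosen lower and upper bounds, as the Excel formulas do, and diagnosing a likely rounding error from the response.

// Sketch of the two ideas, not the actual Excel/QML implementation.
// Function names and the 0.1 tolerance are illustrative assumptions.
function randomBetween(lower: number, upper: number, decimals = 1): number {
  const value = lower + Math.random() * (upper - lower);
  return Number(value.toFixed(decimals));
}

function classifyResponse(response: number, correct: number): string {
  if (response === correct) {
    return 'Correct';
  }
  // Off by one step in the last decimal place, e.g. 55.4 when the answer
  // rounded to the nearest tenth is 55.5: probably a rounding mistake.
  if (Math.abs(response - correct) <= 0.1 + 1e-9) {
    return 'Check your rounding: the answer should be rounded to the nearest tenth.';
  }
  return 'Incorrect';
}

console.log(randomBetween(50, 60));        // a fresh value for each question clone
console.log(classifyResponse(55.4, 55.5)); // rounding feedback rather than just "wrong"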

Informal conversations revealed other innovations, such as automating the creation of participants, their enrollment in appropriate groups and the scheduling of their assessments, all made possible through QMWISe (Questionmark Web Integration Services environment).
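QMWISe is a SOAP web service, so automation of this kind comes down to posting the right messages to the service endpoint. The TypeScript fragment below is only a sketch of the shape of such a call: the endpoint URL, operation name and element names are assumptions, and the real message schema and authentication details are defined in the QMWISe documentation.

// Sketch only: the URL, operation name and element names are illustrative
// assumptions; consult the QMWISe documentation for the actual schema.
const endpoint = 'https://perception.example.com/QMWISe/QMWISe.asmx'; // hypothetical

const envelope = `<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <CreateParticipant>
      <Participant>
        <Participant_Name>jsmith</Participant_Name>
        <First_Name>Jane</First_Name>
        <Last_Name>Smith</Last_Name>
      </Participant>
    </CreateParticipant>
  </soap:Body>
</soap:Envelope>`;

async function createParticipant(): Promise<void> {
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'text/xml; charset=utf-8' },
    body: envelope,
  });
  console.log('QMWISe responded with status', response.status);
}

createParticipant().catch(console.error);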

It feels to me as if we have reached a threshold where the Open Assessment Platform is really being embraced and put to imaginative use. The stories I heard at the conference were certainly eye-openers for me; I think that innovations like these will inspire other Questionmark users to come up with equally inventive solutions. So I am looking forward to hearing more great case studies at the 2011 Users Conference in Los Angeles! (The call for proposals is now open, so if you are a Perception user, now is the time to think about how you would like to participate in the 2011 conference program.)

Assessment Standards 101: IMS QTI XML

Posted by John Kleeman

This is the second of a series of blog posts on assessment standards. Today I’d like to focus on the IMS QTI (Question and Test Interoperability) Specification.

It’s worth mentioning the difference between Specifications and Standards: Specifications are documents that industry bodies have agreed on (like IMS QTI XML), while Standards have been published and ratified by a formal standards body (like AICC or HTML). A Specification is less formal than a Standard but can still be very useful for interoperability.

Questionmark was one of the originators of QTI. When we migrated our assessment platform from Windows to the Web in the 1990s, our customers had to migrate their questions from one platform to the other. As you will know, it takes a lot of time to write high quality questions, and so it’s important to be able to carry them forward independently of technology. We knew that we’d be improving our software over the years and we wanted to ensure the easy transfer of questions from one version to the next. So we came up with QML (Question Markup Language), an open and platform-independent method of maintaining questions that makes it easy for customers to move forward in the future.

Although QML did solve the problem of moving questions between Questionmark versions, we met many customers who had difficulty bringing content created in another vendor’s proprietary format into Questionmark. We wanted to help them, and we also wanted to embrace openness and allow Questionmark customers to export their questions in a standard format if they ever wanted to leave us. So we worked with other vendors under the umbrella of the IMS Global Learning Consortium to come up with QTI XML, a language that describes questions in a technology-neutral way. I was involved in the work defining IMS QTI, as were several of my colleagues: Paul Roberts did a lot of technical design, Eric Shepherd led the IMS working group that created QTI version 1, and Steve Lay (before joining Questionmark) led the version 2 project.

Here is a fragment of QTI XML; you can see that it is a just-about-human-readable way of describing a question.

<?xml version="1.0" standalone="no"?>
<!DOCTYPE questestinterop SYSTEM "ims_qtiasiv1p2.dtd">
<questestinterop>
  <item title="USA" ident="3230731328031646">
    <presentation>
      <material>
        <mattext texttype="text/html"><![CDATA[<P>Washington DC is the capital of the USA</P>]]></mattext>
      </material>
      <response_lid ident="1">
        <render_choice shuffle="No">
          <response_label ident="A">
            <material> <mattext texttype="text/html"><![CDATA[True]]></mattext> </material>
          </response_label>
          <response_label ident="B">
            <material> <mattext texttype="text/html"><![CDATA[False]]></mattext> </material>
          </response_label>
        </render_choice>
      </response_lid>
    </presentation>
    <resprocessing>
      <outcomes> <decvar/> </outcomes>
      <respcondition title="0 True">
        <conditionvar> <varequal respident="1">A</varequal> </conditionvar>
        <setvar action="Set">1</setvar> <displayfeedback linkrefid="0 True"/>
      </respcondition>
      <respcondition title="1 False">
        <conditionvar> <varequal respident="1">B</varequal> </conditionvar>
        <setvar action="Set">0</setvar> <displayfeedback linkrefid="1 False"/>
      </respcondition>
    </resprocessing>
    <itemfeedback ident="0 True" view="Candidate">
    </itemfeedback>
    <itemfeedback ident="1 False" view="Candidate">
    </itemfeedback>
  </item>
</questestinterop>
QTI XML has successfully established itself as a way of exchanging questions. For a long time, it was the most downloaded of all the IMS specifications, and many vendors support it. One problem with the language is that it allows description of a very wide variety of possible questions, not just those that are commonly used, and so it’s quite complex. Another problem is that (partly as it is a Specification, not a Standard) there’s ambiguity and disagreement on some of the finer points. In practice, you can exchange questions using QTI XML, especially multiple choice questions, but you often have to clean them up a bit to deal with different assumptions in different tools. At present, QTI version 1.2 is the reigning version, but IMS are working on an improved QTI version 2, and one day this will probably take over from version 1.