The Open Assessment Platform in Action


Posted by Howard Eisenberg

I was impressed, during the recent Questionmark European Users Conference, to meet so many people who have been using Questionmark's Open Assessment Platform to create solutions that address their organizations' particular needs. Built on standard technologies, the platform gives customers readily available APIs (Application Programming Interfaces), Questionmark Perception version 5 templates and other resources, which they have combined to meet their specific challenges.

Some examples:

By incorporating jQuery (a cross-browser, open source JavaScript library) into Questionmark Perception version 5, the University of Leuven in Belgium has been able to set up client-side form validation. Their case study presenters demonstrated how to differentiate between required and optional questions in a survey: participants could be required, say, to answer the first and third questions but not the second, and they would not be able to submit the survey until they had answered the required ones. They also showed how a participant could be required to provide a date in a specific, pre-determined format. And they demonstrated an essay question that includes a paragraph containing misspelled words, which students identify by clicking on them. Customizations like these make creative use of the templates in Perception version 5 and demonstrate that it is an extensible platform with which users can create their own tailor-made solutions.
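To make the idea concrete, here is a minimal sketch of that kind of client-side validation, assuming the Perception version 5 template marks mandatory question blocks with a CSS class. The selectors and class names here are hypothetical, not part of the shipped templates:

    // Block submission until every required question is answered.
    // The ".question.required" and ".date-answer" markers are hypothetical;
    // a real Perception v5 template would need its own IDs or classes.
    $(function () {
      $('form').submit(function (event) {
        var problems = 0;

        // Any question block carrying the "required" class is mandatory.
        $('.question.required').each(function () {
          var answered = $(this)
            .find('input:checked, input:text, textarea')
            .filter(function () { return $.trim($(this).val()) !== ''; })
            .length > 0;
          if (!answered) { problems++; }
        });

        // Dates must match a fixed DD/MM/YYYY pattern (illustrative format).
        $('input.date-answer').each(function () {
          if (!/^\d{2}\/\d{2}\/\d{4}$/.test($(this).val())) { problems++; }
        });

        if (problems > 0) {
          alert('Please complete all required questions before submitting.');
          event.preventDefault(); // keep the survey from being posted
        }
      });
    });

Because the check runs in the browser before the form is posted, the participant gets immediate feedback without a round trip to the server.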

A staff member from Rotterdam University demonstrated a technique for creating random numeric questions using Microsoft Excel and QML (Question Markup Language). This approach bases questions on randomly generated values and other well-chosen variables, with limits on their lower and upper boundaries. Formulas in Excel generate the numbers that appear in word problems written in QML, which in turn can be used to create many iterations and clones of typical math question types. Because it is complete, well structured and well documented, QML is proving its worth as a tool for generating large numbers of questions and even for providing "smart" feedback: common mistakes can be diagnosed by establishing certain conditions within a question. For example, if the input is supposed to be a number rounded to the nearest tenth and the correct answer is 55.5, it can be assumed that a participant who entered 55.4 has probably made a rounding error.
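The spreadsheet itself was not shown in detail, but the arithmetic behind it is easy to sketch. The JavaScript below mimics what such Excel formulas do: draw operands within author-chosen bounds, compute the correct answer, and pre-compute the value a typical rounding mistake would produce so a QML condition can target it with diagnostic feedback. All names here are illustrative, not part of QML or Questionmark's tooling:

    // Sketch of the value generation performed by the Excel formulas.
    function randomBetween(lower, upper) {
      // Random integer within the author-chosen lower and upper bounds.
      return lower + Math.floor(Math.random() * (upper - lower + 1));
    }

    function makeDivisionItem() {
      var dividend = randomBetween(100, 999);
      var divisor  = randomBetween(3, 12);
      var exact    = dividend / divisor;

      return {
        stem: 'Divide ' + dividend + ' by ' + divisor +
              ' and round your answer to the nearest tenth.',
        correct: Math.round(exact * 10) / 10,    // e.g. 55.5
        truncated: Math.floor(exact * 10) / 10   // rounding mistake, e.g. 55.4
      };
    }

    // Each call yields one clone of the question; a QML condition matching
    // item.truncated could return "check your rounding" as smart feedback.
    var item = makeDivisionItem();
    console.log(item.stem, '->', item.correct);

Generating the likely wrong answer alongside the correct one is what lets the question diagnose the mistake rather than simply mark it incorrect.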

Informal conversations revealed other innovations, such as automating the creation of participants, their enrollment in appropriate groups and the scheduling of their assessments, all made possible through the use of QMWISe (Questionmark Web Integration Services environment).
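As a rough illustration of how such automation might look, here is a sketch that drives QMWISe's SOAP interface from Node.js using the open source soap package. The WSDL URL, authentication details and the exact method and field names are assumptions; consult the QMWISe documentation for your Perception installation before relying on them:

    // Hypothetical QMWISe client; method and field names are assumptions.
    var soap = require('soap');

    var wsdlUrl = 'https://perception.example.com/QMWISe/QMWISe.asmx?WSDL';

    soap.createClient(wsdlUrl, function (err, client) {
      if (err) { throw err; }

      // 1. Create the participant (field names are illustrative).
      client.CreateParticipant({
        Participant: { Participant_Name: 'jdoe', Email: 'jdoe@example.com' }
      }, function (err, result) {
        if (err) { throw err; }
        console.log('Participant created:', result);

        // 2. Enrollment in a group and scheduling of an assessment would
        //    follow the same request/callback pattern, for example:
        // client.AddGroupParticipantList({...}, callback);
        // client.CreateScheduleGroup({...}, callback);
      });
    });

Scripting these steps together turns a tedious manual workflow into a single batch job.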

It feels to me as if we have reached a threshold where the Open Assessment Platform is really being embraced and put to imaginative use. The stories I heard at the conference were certainly eye-openers for me, and I think they will inspire other Questionmark users to come up with equally inventive solutions. So I am looking forward to hearing more great case studies at the 2011 Users Conference in Los Angeles! (The call for proposals is now open, so if you are a Perception user, now is the time to think about how you would like to participate in the 2011 conference program.)
