Internet assessment software pioneer Paul Roberts to retire

Posted by John Kleeman

We think of the Internet as being very young, but one of the pioneers in using the Internet for assessments is about to retire. Paul Roberts, the developer of the world’s first commercial Internet assessment software, is retiring in March. I thought readers might like to hear some of his story.

Paul was employee number three at Questionmark, joining us as a software developer in 1989 when the company was still working out of my home in London.

During the 1990s, our main products ran on DOS and Windows. When we started hearing about the new ideas of HTML and the web, we realized that the Internet could make computerized assessment so much easier. Prior to the Internet, testing people at a distance required a specialized network or sending floppy disks in the mail (yes, people really did this!). The idea that participants could connect to the questions and return their results over the Internet was compelling. With me as product manager, tester and documenter for our new product — and Paul as lead (and only!) developer — he wrote the first version of our Internet testing product, QM Web, which we released in 1995.

QM Web manual cover

QM Web became widely used by universities and corporations who wanted to deliver quizzes and tests over the Internet. Later in the nineties, learning from the lessons of QM Web, we developed Questionmark Perception, our enterprise-level Internet assessment management system still widely used today. Paul architected Questionmark Perception and for many years was our lead developer on its assessment delivery engine.

One of Paul’s key innovations in developing Questionmark Perception was the use of XML to store questions. XML (eXtensible Markup Language) is a way of encoding data that is both human-readable and machine-readable. In 1997, Paul implemented QML (Question Markup Language) as an early application of this concept. QML allowed questions to be described independently of computer platforms. To quote Paul at the time:

“When we were developing our latest application, we really felt that we didn’t want to go down the route of designing yet another proprietary format that would restrict future developments for both us and the rest of the industry. We’re very familiar with the problems of transporting questions from platform to platform because we’ve been doing it for years with DOS, Windows, Macintosh and now the Web. With this in mind, we created a language that can describe questions and answers in tests, independently of the way they are presented. This makes it extremely powerful because QML now enables the same question database to be presented no matter what computer platform or operating system is chosen.”

Questionmark Perception and Questionmark OnDemand still use QML as their native format, so that every single question delivered by Questionmark technology has QML as its core. QML was very influential in the design of the version 1 IMS Question & Test Interoperability specification (IMS QTI), which was led by Questionmark CEO Eric Shepherd and to which Paul was a major contributor. Paul also worked on other industry standards efforts including AICC, xAPI and ADL SCORM.

Over the years, many other technology innovators and leaders have joined Questionmark, and we have a thriving product development team. Most members of our team have had the opportunity to learn from Paul over the years, and Paul’s legacy is in safe hands: Questionmark will continue to break new ground in computerizing assessments. I am sure you will join me in wishing Paul well in his personal journey post-retirement.

Assessment Standards 101: IMS QTI XML

Posted by John Kleeman

This is the second of a series of blog posts on assessment standards. Today I’d like to focus on the IMS QTI (Question and Test Interoperability) Specification.

It’s worth mentioning the difference between Specifications and Standards: Specifications are documents that industry bodies have agreed on (like IMS QTI XML), while Standards have been published and committed to by a formal standards body (like AICC or HTML). A Specification is less formal than a Standard but can still be very useful for interoperability.

Questionmark was one of the originators of QTI. When we migrated our assessment platform from Windows to the Web in the 1990s, our customers had to migrate their questions from one platform to the other. As you will know, it takes a lot of time to write high quality questions, and so it’s important to be able to carry them forward independently of technology. We knew that we’d be improving our software over the years and we wanted to ensure the easy transfer of questions from one version to the next. So we came up with QML (Question Markup Language), an open and platform-independent method of maintaining questions that makes it easy for customers to move forward in the future.

Although QML did solve the problem of moving questions between Questionmark versions, we met many customers who had difficulty bringing content created in another vendor’s proprietary format into Questionmark. We wanted to help them, and we also wanted to embrace openness and allow Questionmark customers to export their questions in a standard format if they ever wanted to leave us. So we worked with other vendors under the umbrella of the IMS Global Learning Consortium to come up with QTI XML, a language that describes questions in a technology-neutral way. I was involved in the work defining IMS QTI, as were several of my colleagues: Paul Roberts did a lot of technical design, Eric Shepherd led the IMS working group that made QTI version 1, and Steve Lay (before joining Questionmark) led the version 2 project.

Here is a fragment of QTI XML; as you can see, it is a just-about-human-readable way of describing a question.

<?xml version="1.0" standalone="no"?>
<!DOCTYPE questestinterop SYSTEM "ims_qtiasiv1p2.dtd">
<questestinterop>
  <item title="USA" ident="3230731328031646">
    <presentation>
      <material>
        <mattext texttype="text/html"><![CDATA[<P>Washington DC is the capital of the USA</P>]]></mattext>
      </material>
      <response_lid ident="1">
        <render_choice shuffle="No">
          <response_label ident="A">
            <material><mattext texttype="text/html"><![CDATA[True]]></mattext></material>
          </response_label>
          <response_label ident="B">
            <material><mattext texttype="text/html"><![CDATA[False]]></mattext></material>
          </response_label>
        </render_choice>
      </response_lid>
    </presentation>
    <resprocessing>
      <outcomes><decvar/></outcomes>
      <respcondition title="0 True">
        <conditionvar><varequal respident="1">A</varequal></conditionvar>
        <setvar action="Set">1</setvar>
        <displayfeedback linkrefid="0 True"/>
      </respcondition>
      <respcondition title="1 False">
        <conditionvar><varequal respident="1">B</varequal></conditionvar>
        <setvar action="Set">0</setvar>
        <displayfeedback linkrefid="1 False"/>
      </respcondition>
    </resprocessing>
    <itemfeedback ident="0 True" view="Candidate">
    </itemfeedback>
    <itemfeedback ident="1 False" view="Candidate">
    </itemfeedback>
  </item>
</questestinterop>
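To show how machine-readable the format is, here is a small sketch that pulls the question text and choices out of the fragment above using Python’s standard-library XML parser (the DOCTYPE line is omitted since the external DTD isn’t needed just to parse the markup):

```python
import xml.etree.ElementTree as ET

# The item fragment from this post, minus the DOCTYPE line.
QTI = """<questestinterop>
<item title="USA" ident="3230731328031646">
<presentation>
<material>
<mattext texttype="text/html"><![CDATA[<P>Washington DC is the capital of the USA</P>]]></mattext>
</material>
<response_lid ident="1">
<render_choice shuffle="No">
<response_label ident="A">
<material><mattext texttype="text/html"><![CDATA[True]]></mattext></material>
</response_label>
<response_label ident="B">
<material><mattext texttype="text/html"><![CDATA[False]]></mattext></material>
</response_label>
</render_choice>
</response_lid>
</presentation>
</item>
</questestinterop>"""

item = ET.fromstring(QTI).find("item")
# The question stem, stored as HTML inside the CDATA section.
stem = item.find("presentation/material/mattext").text
# The answer choices, keyed by their ident attributes.
choices = {label.get("ident"): label.find("material/mattext").text
           for label in item.iter("response_label")}
print(item.get("title"), stem, choices)
```

Any QTI-aware tool does essentially this, which is why the same item can move between systems that agree on the specification.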
QTI XML has successfully established itself as a way of exchanging questions. For a long time, it was the most downloaded of all the IMS specifications, and many vendors support it. One problem with the language is that it allows description of a very wide variety of possible questions, not just those that are commonly used, and so it’s quite complex. Another problem is that (partly as it is a Specification, not a Standard) there’s ambiguity and disagreement on some of the finer points. In practice, you can exchange questions using QTI XML, especially multiple choice questions, but you often have to clean them up a bit to deal with different assumptions in different tools. At present, QTI version 1.2 is the reigning version, but IMS are working on an improved QTI version 2, and one day this will probably take over from version 1.

Defense in Depth: Security for SCORM and Beyond

Posted by Tom King

My earlier post, The Importance of Security and Integrity of Performance Data addressed a specific emerging SCORM security issue. It also raised the issue of “Defense in Depth” as an approach for improving security. Here are some defense in depth approaches you can use right now to increase security and decrease vulnerability.

Key ways to reduce vulnerability and improve security:

  • Audit trails and accountability. Have a second source of data to cross-check. Ideally this data should be automatically collected. Data sent to a SCORM or AICC LMS is also sent to a Questionmark Perception server via a different data conduit.
  • Secured Communication. Transfer responsibility for the result data to a server. Questionmark’s secure server-to-server implementation of AICC does this.
  • Increased Client/Browser Security. Reduce the attack surface of the runtime. Use a Secured Browser that disables or limits functionality not directly needed for the primary activity. Questionmark Secure is a browser that does this for AICC or SCORM.
  • Direct Proprietary Communication. This approach works by centralizing the chain-of-custody for the data to one trusted provider. Questionmark Perception can manage the process completely from authoring to scheduling to delivery to reporting.

Audit trails. Keeping parallel records such as with a double-entry accounting system is one way to achieve an audit trail. Having such an audit trail is key to identifying and recovering from errors or misdeeds. Questionmark provides capabilities for such an audit trail through both its SCORM and its AICC implementations. Perception achieves increased security and this audit trail by sending data to the LMS using the SCORM or AICC standard and, in parallel, sending data directly to the secure Perception server database. In the case of an error or misdeed, the LMS system results and the results in the secured Perception database can be compared to recover from either a security breach or an error.
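The cross-check itself is simple. A minimal sketch, using hypothetical participant IDs and scores: results as reported through the LMS are compared against the parallel copy held in the secured Perception database, and any disagreement is flagged for investigation.

```python
# Hypothetical scores as recorded by the LMS (reported via SCORM/AICC).
lms_scores = {"p001": 85, "p002": 92, "p003": 70}
# The parallel audit copy held directly in the Perception database.
perception_scores = {"p001": 85, "p002": 64, "p003": 70}

# Any participant whose two records disagree needs investigation:
# either an error occurred or one conduit was tampered with.
discrepancies = {pid: (lms, perception_scores.get(pid))
                 for pid, lms in lms_scores.items()
                 if perception_scores.get(pid) != lms}
print(discrepancies)
```

Here the mismatch for one participant would prompt a comparison of the detailed records in both systems, just as a double-entry ledger localizes a bookkeeping error.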

Secured Server-to-Server Communication. In the cheatlet exploit, the openness of the published SCORM API and the browser JavaScript layer are used to inject false data from the client side. One way to increase security is to remove this client-side vulnerability and use AICC instead of SCORM. The innovative Perception server-to-server implementation of the AICC HACP specification demonstrates this, by having the browser relay minimal data to the Perception server. The client is not capable of directly injecting falsified overall score data. The Perception server, not the browser client, is ultimately responsible for judging responses and for data communication with the LMS.
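A hedged sketch of the idea: the assessment server, not the browser, judges the responses and builds the result message for the LMS in an AICC HACP-style HTTP POST body. The answer key, scoring logic, and session ID here are illustrative assumptions; only the HACP field names (command, version, session_id, aicc_data) follow the specification’s convention.

```python
from urllib.parse import urlencode

# Hypothetical answer key, held only on the server.
ANSWER_KEY = {"q1": "A"}

def judge_and_report(session_id: str, responses: dict) -> str:
    """Score responses server-side and build an HACP PutParam POST body."""
    score = sum(100 for q, a in responses.items() if ANSWER_KEY.get(q) == a)
    aicc_data = "[Core]\r\nLesson_Status=completed\r\nScore={}\r\n".format(score)
    # The browser relays only its raw responses; it never sees or
    # supplies the overall score, so it cannot falsify it.
    return urlencode({
        "command": "PutParam",
        "version": "2.2",
        "session_id": session_id,
        "aicc_data": aicc_data,
    })

body = judge_and_report("sess42", {"q1": "A"})
print(body)
```

Because the score originates and travels server-to-server, a client-side “cheatlet” has nothing to intercept or overwrite.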

In 2002, Paul Roberts of Questionmark identified and described the risks of the client-side API (see Security Issues with the JavaScript API, Paul Roberts, 2002 on the AICC web site). He urged the AICC to continue to support the HACP protocol because of the value of the increased security enabled with a server-to-server AICC implementation. The diagram below helps explain this communication.

[Diagram: server-to-server AICC communication between the browser, the Perception server and the LMS]

Increased Browser Security. As currently implemented, this exploit relies on user access to the UI to open a bookmark. Changes to the launch environment (browser) can reduce this vulnerability. Questionmark Secure is a commercialized browser solution built for the rigorous requirements of high-stakes testing environments. When a participant takes an online assessment using Questionmark Secure, the secure browser displays the HTML content of the assessment but disables key functions such as task-switching, right-click options, screen captures, menus and printing. There simply isn’t a means to access a menu or bookmark to trigger the exploit.

Direct Proprietary Communication. In this scenario, one trusted party is responsible for the full span of access, delivery, and results. This does run somewhat contrary to the cybersecurity practice of using published protocols and specifications that can bear wide scrutiny. It can also undermine interoperability, something near and dear to my heart. In the long run, I believe you’ll find Questionmark moving in directions that address these types of concerns.

However, there are many valid circumstances where the value of a single-party chain of custody and a trusted relationship trumps other concerns. High-stakes tests are often the prime case for this, and it is critical to expand cyber defense in depth with adjunct security measures (such as tight control of source materials, exam monitors, and proctors/invigilators).

Work-around versus defend-against. Finally, as an exercise for the reader, you may consider reading the two ADL workarounds published April 2, 2009. You’ll find that the excerpt on Securing Your Assessments provides a means of masking the location of answer-judging source code sent to the client by some systems. While useful, it doesn’t provide the same security and depth of defense as other approaches. Consider, for instance, using Questionmark Secure (prevents ‘view source’) with the Perception SCORM implementation (adds an audit trail) and Perception server-side evaluation logic (secures the evaluation logic on the server side). That is defense in depth. One might even replace SCORM with AICC in this case for additional security, in addition to or in lieu of Questionmark Secure.

Whenever faced with security concerns regarding the possibility of cheating, abuse or compromised data integrity, I encourage you to think about defense in depth and the role all the components play in security.