LTI certification and news from the IMS quarterly meeting

Posted by Steve Lay

Earlier this month I travelled to Michigan for the IMS Global Learning Consortium’s quarterly meeting. The meeting was hosted at the University of Michigan, Ann Arbor, the home of “Dr Chuck”, the father of the IMS Learning Tools Interoperability (LTI) protocol.

I’m pleased to say that, while there, I put our own LTI Connector through the new conformance test suite, and we have now been certified against the LTI 1.0 and 1.1 protocol versions.

The new conformance tests reinforce a subtle change in direction at IMS. For many years the specifications have focused on packaged content that can be moved from system to system. The certification process involved testing this content in its transportable form, matching the data against the format defined by the IMS data specifications. This model works well for checking that content *publishers* are playing by the rules, but it isn’t possible to check whether a content player is working properly.

In contrast, the LTI protocol does not move content around at all: it integrates and aggregates tools and content that run over the web. This shifts conformance from checking the format of transport packages to checking that online tools, content and the containers used to aggregate them (typically an LMS) are all adhering to the protocol. With a protocol it is much easier to check that both sides are playing by the rules, so overall interoperability should improve.
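To make the protocol idea concrete, here is a minimal sketch of what an LTI 1.x launch looks like from the tool consumer's side: the LMS POSTs a form of launch parameters to the tool, signed with OAuth 1.0 HMAC-SHA1 using a key and shared secret the two sides agreed in advance. The URL, key, secret and parameter values below are illustrative only (a real launch would also use a fresh timestamp and nonce, as noted in the comments):

```python
import base64
import hashlib
import hmac
import urllib.parse


def sign_lti_launch(url, params, consumer_key, shared_secret):
    """Sign an LTI 1.x basic launch (an HTML form POST) with OAuth 1.0 HMAC-SHA1."""
    oauth = {
        "oauth_consumer_key": consumer_key,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": "1234567890",  # normally int(time.time())
        "oauth_nonce": "fixed-demo-nonce",  # normally a random, single-use value
        "oauth_version": "1.0",
    }
    all_params = {**params, **oauth}

    def enc(s):
        # RFC 5849 percent-encoding: only unreserved characters left bare
        return urllib.parse.quote(str(s), safe="~")

    # Sort the encoded key/value pairs and build the signature base string
    pairs = sorted((enc(k), enc(v)) for k, v in all_params.items())
    param_str = "&".join(f"{k}={v}" for k, v in pairs)
    base_string = "&".join(["POST", enc(url), enc(param_str)])

    # No token secret is involved in a basic launch, hence the trailing "&"
    key = enc(shared_secret) + "&"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    all_params["oauth_signature"] = base64.b64encode(digest).decode()
    return all_params


launch = sign_lti_launch(
    "https://tool.example.com/launch",
    {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "course-42-item-7",
        "user_id": "student-001",
        "roles": "Learner",
    },
    consumer_key="demo-key",
    shared_secret="demo-secret",
)
print(launch["oauth_signature"])
```

Because both sides can compute and verify the same signature, a conformance suite can exercise either end of the exchange, which is exactly what makes protocol-level certification practical.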

In Michigan, the LTI team discussed the next steps with the protocol. Version 2 promises to be backwards-compatible but will also make it much easier to set up the trusted link between the tool consumer (e.g., your LMS) and the tool provider (e.g., Questionmark OnDemand).  IMS are also looking to expand the protocol to enable a deeper integration between the consumer and the provider. For example, the next revision of the protocol will make it easier for an LMS to make a copy of a course while retaining the details of any LTI-based integrations. They are also looking at improving the reporting of outcomes using a little-known part of the Question and Test Interoperability (QTI) specification called QTI Results Reporting.

After many years of being ‘on the shelf’ there is a renewed interest in the QTI specification in general. QTI has been incorporated into the Accessible Portable Item Protocol (APIP) specification that has been used by content publishers involved in the recent US Race to the Top Assessment Program. What does the future of QTI look like?  It is hard to tell at this early stage, but the buzzword in Michigan was definitely EPUB3.

Thoughts on Emerging Standards for the Educational Cloud

I recently attended an IMS Quarterly meeting in Utrecht, Holland, where delegates engaged in some lively discussion on the development and adoption of technical standards for learning in the “Educational Cloud”.  Questionmark takes a keen interest in conversations like this because we believe that community-driven standards are the key to the success of our Open Assessment Platform.

IMS is short for IMS Global Learning Consortium, one of the major organizations involved in technical standardization in the learning, education and training (LET) sector. Technical standardization relates to the technology we use for LET, including Questionmark technologies. Questionmark worked closely with IMS on the creation of the Question and Test Interoperability specification, known as QTI.

The quarterly meeting combines the technical work of the consortium with a number of open meetings where community members share their adoption experiences and identify areas where future work might be needed. The meeting in Utrecht was hosted by SURF, a Dutch organization that promotes the adoption of technology standards.

The immediate benefits of technical standards are typically those of interoperability: a content format like QTI makes it easier to move content from a specialist authoring system into a delivery system. In turn this gives users more choice of tools and promises faster integration projects. The goal of the IMS is increasingly focused on highlighting the connection between these immediate benefits and the overall organizational goals of improving learning.

This approach contrasts with the way some standards bodies do their work. The Internet Engineering Task Force (IETF), for instance, has been very successful in getting standards for the internet adopted despite a much more hands-off approach. The IETF publishes a large number of technical proposals in its RFC series and concentrates on ensuring the basic quality of each document. Adoption is left to the community, and many ideas get no further than this stage.  Arguably, IETF is successful because it exposes ideas openly at an early stage and allows the best solutions to emerge on their own.

IMS works more closely within its membership, drafting in private before publishing and promoting what it now refers to as ‘standards’. It is changing from a technical-workshop body (more like the IETF) into an organization centered on standards, conformance and promoting adoption. As a result, we increasingly need to look elsewhere for early-stage technical work. For example, Learning Tools Interoperability (LTI) started outside IMS and has only now moved inside to aid standardization. LTI has the potential to remove the need for LMS-specific plugins and to provide a long-awaited replacement for the launch-and-track model of SCORM and AICC.

The meeting closed with a participatory session loosely modeled on the debating style of the British Parliament (complete with cries of “Hear! Hear!” and “Shame!”). The subjects of the debate were whether we should design standards and whether or not openness would lead to chaos.  There was general agreement from the largely Dutch audience that designing standards was wrong and that openness would not lead to chaos.   Despite the light-hearted nature of the discussion, its theme was serious, and it will be interesting to see how the IMS community’s technical development model evolves.

Assessment Standards 101: IMS QTI XML

Posted by John Kleeman

This is the second of a series of blog posts on assessment standards. Today I’d like to focus on the IMS QTI (Question and Test Interoperability) Specification.

It’s worth mentioning the difference between Specifications and Standards: Specifications are documents that industry bodies have agreed upon (like IMS QTI XML), while Standards have been published and committed to by a formal standards body (like AICC or HTML). A Specification is less formal than a Standard but can still be very useful for interoperability.

Questionmark was one of the originators of QTI. When we migrated our assessment platform from Windows to the Web in the 1990s, our customers had to migrate their questions from one platform to the other. As you will know, it takes a lot of time to write high quality questions, and so it’s important to be able to carry them forward independently of technology. We knew that we’d be improving our software over the years and we wanted to ensure the easy transfer of questions from one version to the next. So we came up with QML (Question Markup Language), an open and platform-independent method of maintaining questions that makes it easy for customers to move forward in the future.

Although QML did solve the problem of moving questions between Questionmark versions, we met many customers who had difficulty bringing content created in another vendor’s proprietary format into Questionmark. We wanted to help them, and we also wanted to embrace openness and allow Questionmark customers to export their questions in a standard format if they ever wanted to leave us. So we worked with other vendors under the umbrella of the IMS Global Learning Consortium to come up with QTI XML, a language that describes questions in a technology-neutral way. I was involved in the work defining IMS QTI, as were several of my colleagues: Paul Roberts did a lot of technical design, Eric Shepherd led the IMS working group that made QTI version 1, and Steve Lay (before joining Questionmark) led the version 2 project.

Here is a fragment of QTI XML; as you can see, it is a just-about-human-readable way of describing a true/false question.

<?xml version="1.0" standalone="no"?>
<!DOCTYPE questestinterop SYSTEM "ims_qtiasiv1p2.dtd">
<questestinterop>
<item title="USA" ident="3230731328031646">
<presentation>
<material> <mattext texttype="text/html"><![CDATA[<P>Washington DC is the capital of the USA</P>]]></mattext> </material>
<response_lid ident="1">
<render_choice shuffle="No">
<response_label ident="A">
<material> <mattext texttype="text/html"><![CDATA[True]]></mattext> </material>
</response_label>
<response_label ident="B">
<material> <mattext texttype="text/html"><![CDATA[False]]></mattext> </material>
</response_label>
</render_choice>
</response_lid>
</presentation>
<resprocessing>
<outcomes> <decvar/> </outcomes>
<respcondition title="0 True">
<conditionvar> <varequal respident="1">A</varequal> </conditionvar>
<setvar action="Set">1</setvar> <displayfeedback linkrefid="0 True"/>
</respcondition>
<respcondition title="1 False">
<conditionvar> <varequal respident="1">B</varequal> </conditionvar>
<setvar action="Set">0</setvar> <displayfeedback linkrefid="1 False"/>
</respcondition>
</resprocessing>
<itemfeedback ident="0 True" view="Candidate">
<material> <mattext texttype="text/html"><![CDATA[Correct]]></mattext> </material>
</itemfeedback>
<itemfeedback ident="1 False" view="Candidate">
<material> <mattext texttype="text/html"><![CDATA[Incorrect]]></mattext> </material>
</itemfeedback>
</item>
</questestinterop>

QTI XML has successfully established itself as a way of exchanging questions. For a long time it was the most downloaded of all the IMS specifications, and many vendors support it.

One problem with the language is that it allows a very wide variety of possible questions to be described, not just those that are commonly used, so it is quite complex. Another problem is that (partly because it is a Specification, not a Standard) there is ambiguity and disagreement on some of the finer points. In practice you can exchange questions using QTI XML, especially multiple choice questions, but you often have to clean them up a bit to deal with different assumptions in different tools. At present QTI version 1.2 remains the dominant version, but IMS are working on an improved QTI version 2, and one day this will probably take over from version 1.
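Once an item like the fragment above is well formed, pulling the useful parts out of it is straightforward with any XML library. The sketch below uses Python's standard xml.etree to extract the stem, the choices and the scored ("key") response from a cut-down version of the same true/false item; the summarize_item helper is purely illustrative and not part of any Questionmark or IMS toolkit:

```python
import xml.etree.ElementTree as ET

# A simplified, well-formed QTI 1.2 item (DOCTYPE and CDATA omitted for brevity)
QTI_ITEM = """\
<questestinterop>
  <item title="USA" ident="3230731328031646">
    <presentation>
      <material><mattext>Washington DC is the capital of the USA</mattext></material>
      <response_lid ident="1">
        <render_choice shuffle="No">
          <response_label ident="A"><material><mattext>True</mattext></material></response_label>
          <response_label ident="B"><material><mattext>False</mattext></material></response_label>
        </render_choice>
      </response_lid>
    </presentation>
    <resprocessing>
      <outcomes><decvar/></outcomes>
      <respcondition title="0 True">
        <conditionvar><varequal respident="1">A</varequal></conditionvar>
        <setvar action="Set">1</setvar>
      </respcondition>
      <respcondition title="1 False">
        <conditionvar><varequal respident="1">B</varequal></conditionvar>
        <setvar action="Set">0</setvar>
      </respcondition>
    </resprocessing>
  </item>
</questestinterop>
"""


def summarize_item(xml_text):
    """Pull the stem, the choices and the scored response out of a QTI 1.2 item."""
    item = ET.fromstring(xml_text).find("item")
    stem = item.findtext("presentation/material/mattext")
    choices = {
        label.get("ident"): label.findtext("material/mattext")
        for label in item.iter("response_label")
    }
    # The choice whose respcondition sets a non-zero score is the key
    key = None
    for cond in item.iter("respcondition"):
        if cond.findtext("setvar") == "1":
            key = cond.findtext("conditionvar/varequal")
    return {"stem": stem, "choices": choices, "key": key}


print(summarize_item(QTI_ITEM))
```

The separation of `presentation` (what the candidate sees) from `resprocessing` (how the response is scored) is one of the things that makes QTI technology-neutral, but also one of the places where different tools make the different assumptions mentioned above.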

Welcoming Steve Lay to Lead Our Open Platform Initiative

Posted by Joan Phaup

During our recent users conference I had the pleasure of meeting Steve Lay, who is now leading Questionmark’s Open Platform initiative.

Steve has extensive experience of open source systems and an in-depth understanding of how awarding bodies, academic institutions and other organizations use and deploy assessments. At the University of Cambridge’s Centre for Applied Research into Educational Technologies (CARET) in England, Steve managed the transition from a number of trial systems to CamTools, the University’s chosen online system for supporting research and learning, based on the Sakai open source platform. He was also part of a new technologies team at Cambridge Assessment, known in the United Kingdom as the OCR exam board and outside the UK as the University of Cambridge International Examinations board, or Cambridge ESOL, a leading provider of tests of English for speakers of other languages. And he was chair of the Question and Test Interoperability (QTI) specification project team from 2003 to 2008.

Our Open Platform Initiative builds on our longtime commitment to open standards that support interoperability between different systems. We already have a number of “Connectors” to portals and enterprise business systems including Oracle and SAP; Steve will help us take this even further by expanding access to toolsets and example code. Our aim is to provide useful environments that foster a spirit of cooperation and collaboration for a wide range of communities.

We are pleased to welcome Steve on board to help us achieve this.