OASIS: Putting the “Open” into the OData Protocol

Posted by Steve Lay

Last year I wrote a quick primer on the OData protocol and how it relates to Questionmark's Open Assessment Platform: see What is OData and Why is it important?

A lot has happened in the last year in the OData community. One of the most interesting aspects of the emerging standard is the way the OData 'ecosystem' is developing: this is the term used to describe the tools that help developers support the standard, as well as the data services published by information providers.

The OData specification started life at Microsoft, but the list of contributors to the OASIS technical committee now includes other familiar names such as IBM and SAP. SAP are co-chairing the technical committee and have made a significant contribution to an open source library that allows Java developers to take advantage of the standard. This library has recently been moved into an Apache Foundation 'incubator', which is a great way to get the Java developer community's attention. You can find it at Apache Olingo.

Moving the specification into an industry standards body like OASIS means that Microsoft relinquish some control in exchange for a more open approach. OASIS allows any interested party to join and become part of the standards development process. Documentation is now available publicly for review before it is finalized, and I’ve personally found the committee responsive.

Microsoft continue to develop tools that support OData, both through Windows Communication Foundation (WCF) and through the newer Web API, making OData a confirmed part of their platform. There are options for users of other programming languages too. Full details are available from http://www.odata.org/ecosystem/.

With OASIS now in the process of approving version 4 of the specification, I thought it would be worth giving a quick overview of how Questionmark is using the standard and how it is developing.

Questionmark’s OData API for Analytics uses version 2 of OData. This is the most widely supported version of the specification; it is also the version supported by Apache Olingo.

Some of the more recent libraries, including Microsoft’s WCF and Web API, have support for version 3 of the protocol. We’re currently investigating the potential of version 3 for future OData projects at Questionmark.

Version 4 is the first version published by OASIS and marks an important step for the OData community. It also brings in a number of breaking changes, in the words of the technical committee:

“In evolving a specification over time, sometimes you find things that worked out better than you had expected and other times you find there are things you wish you had done differently.”

Ref: What’s new in OData 4

The reality is that there are a lot of new features in version 4 of the protocol and, combined with a comprehensive clean-up process, it will be some time before version 4 is widely adopted across the community. However, the increased transparency that comes with an OASIS publication, and the weight of industry leaders like SAP helping to drive adoption, mean it is definitely one I'll be keeping an eye on.

Information Security: from Human Factors to FIPS

Posted by Steve Lay

Last year I wrote about Keeping up with Software Security and touched on the issue of cryptography and the importance of keeping up with modern standards to ensure data is stored and communicated securely. In this post, I return to this subject and dive a little deeper into maintaining information security and the role that software plays in it.

The first thing to say is that software can only ever be part of an information security strategy. It doesn't matter how complex your password policy is or how strong your data encryption standards are if authorised individuals share information inappropriately. Amongst all the outcry following the information disclosed by Edward Snowden, it is surprising to me that the focus hasn't been on why an organization that plays such a crucial role in security (in its widest sense) was so easily undone by a third-party contractor. According to this article from the Reuters news agency, one of the techniques Snowden used to gain access to information was simply asking co-workers to give him their passwords.

The computing press will sometimes try and persuade you that there is a purely technical solution to this type of problem. It reminds me of the time my son’s school introduced fingerprint recognition for canteen payments, in part, to allow parents to check up on what their children had been spending — and eating!  It wasn’t long before the phrase, “Can I borrow your finger?” was commonplace amongst the pupils.

I hope my preamble has brought home to you the basic idea that these ‘human factors’, as the engineers like to call them, are just as important as getting the technology right. Earlier this week I had to go through my own routine security testing as part of our security process here at Questionmark, so this stuff is fresh in my mind!

Security Standards

Information security, in this wider context, is covered by a whole series of international standards commonly known as the ISO 27000 series. This series of standards and codes of practice covers a wide range of security processes and computer systems. To an engineer looking for a simple answer it can be frustrating, though. ISO 27002 contains advice on the use of cryptography, but it reads more like a policy checklist: it won't tell you which algorithms are safe to use. In part, this is a recognition of how dynamic the field is. The recommendations might change too fast for something like an ISO standard, which takes a long time to develop and is designed to have a fairly long shelf-life.

For more practical advice for engineers developing software, help is at hand from the Federal Information Processing Standards (known as FIPS). FIPS were developed by the U.S. government to fill the gaps where externally defined standards weren't available. They range across many areas of information processing, not just security, but one of the gaps FIPS fills is specifying which cryptographic algorithms are fit for modern software and, by implication, which ones need to be retired (FIPS 140). This standard has become so important that the word FIPS is often used to refer only to FIPS 140! Nor is it restricted to the U.S.; it is being freely adopted by other governments, including my own here in the UK.

FIPS 140 also has a certification programme, whose purpose is to certify that an implementation of cryptographic code does indeed correctly implement the security standard. Microsoft have a technical article explaining which parts of their platform, and which versions, have been certified. There is even a "FIPS mode" in which the system can be instructed to use only cryptographic algorithms that have been certified.
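
To make this concrete, here is a minimal Python sketch of what FIPS mode can mean for application code, assuming an OpenSSL-backed build of the standard hashlib module: on a host enforcing FIPS mode, retired algorithms such as MD5 are typically refused outright, so applications should prefer approved ones.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Prefer a FIPS-approved algorithm (SHA-256) over a retired one (MD5).
    return hashlib.sha256(data).hexdigest()

try:
    # On an OpenSSL-backed hashlib in FIPS mode this may raise ValueError.
    hashlib.md5(b"example")
    print("MD5 is available: this host is not enforcing FIPS-approved algorithms")
except ValueError:
    print("MD5 is unavailable: this host enforces FIPS-approved algorithms")

print(fingerprint(b"example"))
```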

Concentrating an application's cryptographic features into a small number of modules distributed with the underlying operating system, rather than having each application developer implement or incorporate the code individually, will over time make it easier to use appropriate cryptography and to implement policies such as those described by ISO 27002.

What’s new in the Blackboard Connector?

Posted by Steve Lay

We recently published an updated version of our Blackboard Connector for Blackboard Learn 9.1. This version contains some compatibility improvements, but its main purpose is to introduce support for more roles in the synchronization process.

Blackboard Learn supports the IMS LTI protocol so, in theory, you could integrate directly with Questionmark OnDemand without the custom building block.

Our development focus is definitely towards replacing the block, eventually, with an LTI-based solution. However, at the moment the building block provides more functionality than the LTI protocol allows, so we're recommending that Blackboard customers stick with the Connector. That doesn't mean development of the Connector has stopped, though, as this latest version demonstrates.

Synchronizing Users

The Blackboard Connector contains synchronization logic which ensures that users of courses are synchronized with users and groups in the Questionmark repository. This synchronization is automatic and controllable using options in the Connector. It is possible to achieve some interesting use cases such as restricting access to the block on a course-by-course basis by controlling groups in your Questionmark repository. This level of control is not available in the LMS itself.

Until recently, we only supported three roles: Students, Instructors and Teaching Assistants. Students automatically become Questionmark participants and the other two roles are mapped to administrator profiles created in the repository. If either of the administrator profiles were missing, all users would be denied access to the Connector.

In the latest version, the situation is more controllable. Not only can you synchronize Course Builders and Graders too, but you can turn synchronization on or off for any of the default LMS roles simply by creating or deleting the corresponding profile in Questionmark Enterprise Manager.

Integration Highlights in Barcelona

Posted by Steve Lay

The programme for Questionmark’s European Users Conference in Barcelona November 10 – 12 is just being finalized. As usual, there is plenty to interest customers who are integrating with our Open Assessment Platform.

This year's conference includes a case study from Wageningen University on using QMWISe, our SOAP-based API, to create a dashboard designed to help you manage your assessment process. Also, our Director of Solution Services, Howard Eisenberg, will be leading a session on customising the participant interface, so you can learn how to integrate your own CSS into your Questionmark assessments.

I’ll be running a session introducing you to the main integration points and connectors with the assistance of two colleagues this year: Doug Peterson will be there to help translate some of the technical jargon into plain English and Bart Hendrickx will bring some valuable experience from real-world applications to the session. As always, we’ll be available throughout the conference to answer questions if you can’t make the session itself.

Finally, participants will also get the chance to meet Austin Fossey, our Analytics Product Owner, who will be talking, amongst other things, about our OData API for Analytics. This API allows you to create bespoke reports from data ‘feeds’ published from the results warehouse.

See the complete conference schedule here, and sign up soon if you have not done so already.

See you in Barcelona!

Using OData queries to calculate simple statistics

Posted by Steve Lay

In previous blog posts we’ve looked at how to use OData clients, like the PowerPivot plugin for Excel, to create sophisticated reports based on the data exported from your Questionmark Analytics results warehouse. In this post, I’ll show you that your web developers don’t need a complex tool like Excel to harness the power of OData.

There are several third-party libraries available that make it easy for your developers to incorporate support for OData in their applications, but OData itself contains a powerful query language, and developers need nothing more than the ability to fetch a URL to take advantage of it. For example, suppose I want to know what percentage of my participants have passed one of my exams.

Step 1: find out the <id> of your assessment

To start with, I’ll build a URL that returns the complete list of assessments in my repository. For example, if your customer number is 123456 then a URL like the following will do the trick:

https://ondemand.questionmark.com/123456/odata/Assessments (This and the following URLs are examples, not live links.)

The resulting output is an XML file containing one record for each assessment. Open up the result in a text editor or 'view source' in your browser, scan down for the assessment you are interested in, and make a note of the entry's <id>; that's the URL of the assessment you are interested in. If you've got lots of assessments you might like to filter the results using an OData query. In my case, I know the assessment name starts with the word "Chemistry", so the following URL makes it easier to find the right one:

https://ondemand.questionmark.com/123456/odata/Assessments?$filter=startswith(Name,'Chem')

All I've done is add a $filter parameter to the URL! The resulting document contains a single assessment, and I can see that its <id> is the following URL:

https://ondemand.questionmark.com/123456/odata/Assessments(77014)
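
If you'd rather script this step than eyeball the XML, here's a minimal Python sketch using the third-party requests library. The customer number, credentials and assessment name are placeholders, and I've reduced authentication to a basic-auth stand-in; real calls to the OData API need your own credentials. The feed itself is standard Atom XML.

```python
import xml.etree.ElementTree as ET

import requests

BASE = "https://ondemand.questionmark.com/123456/odata"  # example customer number
ATOM = "{http://www.w3.org/2005/Atom}"
AUTH = ("user", "password")  # placeholder credentials

# Filter the Assessments feed and print each matching entry's <id>.
response = requests.get(
    BASE + "/Assessments",
    params={"$filter": "startswith(Name,'Chem')"},
    auth=AUTH,
)
response.raise_for_status()
feed = ET.fromstring(response.content)
for entry in feed.findall(ATOM + "entry"):
    print(entry.findtext(ATOM + "id"))  # e.g. .../Assessments(77014)
```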

Step 2: count the results

I’m not interested in the information about the assessment but I am interested in the results. OData makes it easy to navigate from one data item to another. I just add “/Results” to the URL:

https://ondemand.questionmark.com/123456/odata/Assessments(77014)/Results

This URL gives me a similar list to the assessment list I had earlier but this time I have one entry for each result of this assessment. Of course, there could be thousands, but for my application I only want to know how many. Again, OData has a way of finding this information out just by manipulating the URL:

https://ondemand.questionmark.com/123456/odata/Assessments(77014)/Results/$count

By adding /$count to the URL I’m asking OData not to send me all the data, but just to send me a count of the number of items that it would have sent back. The result is a tiny plain text document containing just the number. If you view this URL in your web browser you’ll see the number appear as the only thing on the page.
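
If you're scripting this, the count really is a one-liner; a minimal sketch with the same placeholder URL and credentials as before:

```python
import requests

RESULTS = "https://ondemand.questionmark.com/123456/odata/Assessments(77014)/Results"
AUTH = ("user", "password")  # placeholder credentials

# /$count returns a tiny plain-text body containing just the number of items.
total = int(requests.get(RESULTS + "/$count", auth=AUTH).text)
print(total)
```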

I've now calculated the total number of results for my assessment without having to do anything more sophisticated than fetch a URL. But what I really want is the percentage of these results that represent a pass. It turns out I can use the same technique as before to filter the results and include only those that have passed. My assessment has Pass/Fail information represented using the ScorebandName property.

https://ondemand.questionmark.com/123456/odata/Assessments(77014)/Results/$count?$filter=ScorebandName eq 'Pass'

Notice that by combining $count and $filter I can count how many passing results there are without having to view the results themselves. It is now trivial to combine the two values that have been returned to your application by these URLs to display a percentage passed or to display some other graphic representation such as a pie chart or part filled bar.
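 
Here's a minimal sketch of the whole calculation, again with placeholder URL and credentials, and assuming the score band is named 'Pass' as it is in my repository:

```python
import requests

RESULTS = "https://ondemand.questionmark.com/123456/odata/Assessments(77014)/Results"
AUTH = ("user", "password")  # placeholder credentials

# Total number of results, then just the results in the 'Pass' score band.
total = int(requests.get(RESULTS + "/$count", auth=AUTH).text)
passed = int(
    requests.get(
        RESULTS + "/$count",
        params={"$filter": "ScorebandName eq 'Pass'"},
        auth=AUTH,
    ).text
)

if total:
    print("%d of %d passed (%.1f%%)" % (passed, total, 100.0 * passed / total))
```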

As you can see, your developers don’t need a sophisticated library to write powerful client applications with OData. And these HTTP documents are just a few bytes in size so they won’t use much bandwidth (or memory) in your application either. For additional resources and definitions of all the OData filters, you can visit the OData URI conventions page at odata.org.

Questionmark Users Conferences offer many opportunities to learn more about Questionmark Analytics and OnDemand. Registration is already open for the 2014 Users Conference March 4 – 7 in San Antonio, Texas. Plan to be there!

LTI certification and news from the IMS quarterly meeting

Posted by Steve Lay

Earlier this month I travelled to Michigan for the IMS Global Learning Consortium’s quarterly meeting. The meeting was hosted at the University of Michigan, Ann Arbor, the home of “Dr Chuck”, the father of the IMS Learning Tools Interoperability (LTI) protocol.

I'm pleased to say that, while there, I put our own LTI Connector through the new conformance test suite and we have now been certified against the LTI 1.0 and 1.1 protocol versions.

The new conformance tests reinforce a subtle change in direction at IMS. For many years the specifications have focused on packaged content that can be moved from system to system. The certification process involved testing this content in its transportable form, matching the data against the format defined by the IMS data specifications. This model works well for checking that content *publishers* are playing by the rules, but it isn't possible to check whether a content player is working properly.

In contrast, the LTI protocol is not about moving content around but about integrating and aggregating tools and content that run over the web. This shifts conformance from checking the format of transport packages to checking that online tools, content and the containers used to aggregate them (typically an LMS) are all adhering to the protocol. With a protocol it is much easier to check that both sides are playing by the rules, so overall interoperability should improve.
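
To give a flavour of what 'playing by the rules' means on the wire: an LTI 1.0/1.1 launch is a form POST from the consumer (the LMS) to the tool provider, signed with OAuth 1.0 so the provider can verify who sent it. Below is a minimal sketch using the third-party oauthlib package; the tool URL, keys and parameter values are placeholders, not real endpoints.

```python
from urllib.parse import urlencode

from oauthlib.oauth1 import SIGNATURE_TYPE_BODY, Client

# Placeholder launch parameters: the consumer describes the user and the
# resource being launched; the provider verifies the signature on arrival.
params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "course-101-quiz-1",
    "user_id": "12345",
    "roles": "Learner",
}

client = Client("consumer-key", client_secret="shared-secret",
                signature_type=SIGNATURE_TYPE_BODY)
uri, headers, body = client.sign(
    "https://tool.example.com/lti/launch",  # placeholder tool provider URL
    http_method="POST",
    body=urlencode(params),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(body)  # the form body now carries oauth_signature and friends
```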

In Michigan, the LTI team discussed the next steps with the protocol. Version 2 promises to be backwards-compatible but will also make it much easier to set up the trusted link between the tool consumer (e.g., your LMS) and the tool provider (e.g., Questionmark OnDemand).  IMS are also looking to expand the protocol to enable a deeper integration between the consumer and the provider. For example, the next revision of the protocol will make it easier for an LMS to make a copy of a course while retaining the details of any LTI-based integrations. They are also looking at improving the reporting of outcomes using a little-known part of the Question and Test Interoperability (QTI) specification called QTI Results Reporting.

After many years of being 'on the shelf', there is renewed interest in the QTI specification in general. QTI has been incorporated into the Accessible Portable Item Protocol (APIP) specification, which has been used by content publishers involved in the recent US Race to the Top Assessment Program. What does the future of QTI look like? It is hard to tell at this early stage, but the buzzword in Michigan was definitely EPUB3.