Open Standards: Spotlight on CSS

Posted by Steve Lay

In my role as Integrations Product Owner and champion of Questionmark’s Open Assessment Platform strategy, I often write on the topic of open standards.

When we browse the internet on our mobiles, tablets or even on the humble PC, our experience is based on a vast stack of open standards covering everything from the way the information is wrapped up in ‘packets’ for sending over the network to the way text and graphics appear on our screens.

You’ve probably all heard of HTML, the main markup language used for creating web pages. HTML, or HyperText Markup Language to give it its full name, allows web servers to specify how text is broken up into paragraphs, lists or tables, when it should be emphasised and how it relates to media files like images and videos that are also rendered on the page. But HTML has a lesser-known yet powerful helper: Cascading Style Sheets (CSS).

CSS is a standard which allows a designer to apply ‘style’ to a web page. By style, we are talking about formatting information: things that affect the appearance of the page without affecting the meaning. Essentially, information on the web is split into these two halves: content (in HTML) and style (in CSS). Initial versions of the CSS standard were rudimentary, and support across different browsers was often inconsistent. But the standard is now on version 3, often abbreviated to CSS3, and renderings are much more predictable. Also, adoption of more advanced features is rapidly becoming the norm rather than the exception.

By adopting HTML and CSS at Questionmark, the content/style division translates into different responsibilities for the question author (responsible for content) and the graphic designer (responsible for style). By being mindful of this division — and the fact that the same question may have different styles applied on different devices or in different contexts — authors can avoid question wording that is dependent on the style or type of rendering.

For example, a phrase such as “which category applies to the text in red?” makes specific reference to an element of style appearing elsewhere in the content. If colour is not essential to the meaning, it would be better to use a more neutral phrase such as “the emphasised text”. Being aware of different styles has the knock-on benefit of making assessment content more accessible while ensuring it looks good!

Questionmark has embraced CSS as the best technology for customising the appearance of tests. It is easy to copy the default CSS files and change the colours and fonts, say, to match your company portal.

In this screenshot, I’ve created a yellow background simply by changing one line in the default style sheet:
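The change is as simple as it sounds. The rule below is a hypothetical excerpt (the exact selector and file layout in Questionmark’s default style sheets may differ), but it shows the kind of one-line edit involved:

```css
/* Hypothetical excerpt from a copied default style sheet.
   Changing one property value restyles every page that links to it. */
body {
    background-color: #ffff99; /* pale yellow instead of the default white */
    font-family: Arial, sans-serif;
}
```

Because the HTML content never changes, the same assessment can be restyled for a different portal, or a different device, just by swapping the style sheet.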


With CSS, web designers can help you make your assessments look even more professional!

Integrating your LMS with Questionmark OnDemand just got easier!

Posted by Steve Lay

Last year I wrote about the impact that the IMS LTI standard could have on the way people integrate their LMS with external tools.

I’m pleased to say that we have just released our own LTI Connector for Questionmark OnDemand. The connector makes it easy to integrate your LMS with your Questionmark repository. Just enter some security credentials to set up the trusted relationships and your instructors are ready to start embedding assessments directly into the learning experience.

By using a standard, the LTI connector enables a wide range of LMSs to be integrated in the same way. Many of them have LTI support built in directly too, so you won’t have to install additional software or request optional plugins from your LMS hosting provider.

You can read more about how to use the LTI connector with Questionmark OnDemand on our website: Questionmark Connectors.

You can also find out which tools are currently supporting the LTI standard from the IMS Conformance Certification page (which we hope to be joining shortly).

From Content to Tool Provider

The LTI standard, in many ways, does a similar job to the older SCORM and AICC standards. It provides a mechanism for an LMS to launch a student into an activity and for that activity to pass performance information (outcomes) back to the LMS to be recorded in their learning record.

Both the SCORM and AICC standards were designed with content portability in mind, in an era before the always-connected Web became established. As a result, they defined the concept of a package of content that has to be published and ‘physically’ moved to the LMS to be run. The LMS became a player of the content.

Contrast this approach with that of IMS LTI. In LTI, the activity is provided by an external Tool Provider. The Tool Provider is hosted on the web and is identified by a simple URL; there is no publishing required! When the Tool’s URL is placed into the LMS, along with appropriate security credentials, the link is made. Now the student just follows an embedded link to the Tool Provider’s website where they interact with the activity directly. The two websites communicate via web services (much like AICC) to pass back information about outcomes.
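Under the hood, an LTI 1.x launch is a signed form POST from the LMS to the tool’s URL, authenticated with the shared credentials using OAuth 1.0 HMAC-SHA1. The sketch below illustrates the signing step; the parameter values and URLs are invented for illustration, not taken from any real Questionmark or LMS configuration:

```python
# Sketch of signing an LTI 1.x "basic launch" request (hypothetical
# values). The LMS POSTs form parameters to the Tool Provider's URL
# and signs them with OAuth 1.0 HMAC-SHA1 using the consumer
# key/secret entered during setup.
import base64
import hashlib
import hmac
import urllib.parse


def sign_launch(url, params, consumer_secret):
    """Compute the OAuth 1.0 HMAC-SHA1 signature for a form POST."""
    # 1. Percent-encode every key and value, then sort the pairs to
    #    build the canonical parameter string.
    encoded = sorted(
        (urllib.parse.quote(k, safe=""), urllib.parse.quote(str(v), safe=""))
        for k, v in params.items()
    )
    param_str = "&".join(f"{k}={v}" for k, v in encoded)
    # 2. Signature base string: METHOD & encoded-URL & encoded-params.
    base_string = "&".join([
        "POST",
        urllib.parse.quote(url, safe=""),
        urllib.parse.quote(param_str, safe=""),
    ])
    # 3. Signing key is the encoded consumer secret plus an empty
    #    token secret (LTI launches carry no OAuth token).
    key = urllib.parse.quote(consumer_secret, safe="") + "&"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()


launch = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "assessment-42",   # hypothetical
    "user_id": "student-123",              # hypothetical
    "oauth_consumer_key": "example-key",   # hypothetical
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_timestamp": "1357000000",
    "oauth_nonce": "abc123",
    "oauth_version": "1.0",
}
launch["oauth_signature"] = sign_launch(
    "https://tool.example.com/launch", launch, "example-secret")
```

The Tool Provider recomputes the same signature from the posted parameters and its copy of the secret; if they match, the launch is trusted and the student goes straight into the activity.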

The result is simpler and more secure! It is no wonder that the LTI specification has been adopted so quickly by the community.

What is OData, and why is it important?

Posted by Steve Lay

At the recent Questionmark Users Conference I teamed up with Howard Eisenberg, our Director of Solution Services, to talk about OData. Our session included some exciting demonstrations of our new OData API for Analytics. But what is OData and why is it important?

OData is a standard for providing access to data over the internet. It has been developed by Microsoft as an open specification. To help demonstrate its open approach, Microsoft is now working with OASIS to create a more formal standard. OASIS stands for the Organization for the Advancement of Structured Information Standards; it provides the software industry with a way to create standards using open and transparent procedures. OASIS has published a wide range of standards, particularly in the areas of document formats and web service protocols — for example, the OpenDocument formats used by the OpenOffice application suite.

Why OData?

Questionmark’s Open Assessment Platform already includes a set of web-service APIs (application programming interfaces). We call them QMWISe, and they are ideal for programmers who are integrating server-based applications. A single QMWISe request can trigger the series of actions typical of many common use cases. There are, inevitably, times when you need more control over your integration, though, and that is where OData comes in.

Unlike QMWISe, OData provides access to just the data you want, with scalability built right into the protocol. Using the conventions of OData, you can make highly specific requests to get a single data item, or you can use linked data to quickly uncover relationships.

OData works just like the web: each record returned by an OData request contains links to other related records in exactly the same way as web pages contain hyperlinks to other web pages. Want to know about all the results for a specific assessment? It is easy with OData: just follow the results link in the assessment’s record.
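Those conventions are visible in the URLs themselves. The sketch below composes an OData query URL using the standard system query options; the service root and entity set names are hypothetical, not Questionmark’s actual Analytics endpoints:

```python
# Building an OData query URL from standard system query options
# ($filter, $orderby, $top). Endpoint and entity names are invented.
import urllib.parse


def odata_url(service_root, entity_set, **options):
    """Compose an OData request URL; options map to $-prefixed query options."""
    query = urllib.parse.urlencode(
        {f"${k}": v for k, v in options.items()}, safe="$")
    return f"{service_root}/{entity_set}?{query}"


# "All results for assessment 42, newest first, first 10 records":
url = odata_url(
    "https://example.com/odata.svc", "Results",
    filter="AssessmentID eq 42",
    orderby="WhenFinished desc",
    top=10,
)
```

Because the result is just a URL, any HTTP client (or even a browser) can issue the request, which is precisely why OData feels so web-native.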

OData is also based on pre-existing internet protocols, which means web developers face a much gentler learning curve when using it in their applications. In fact, if a tool already supports RSS/Atom, you can probably start accessing OData feeds right away!

OData Ecosystem

As we build our support for the OData protocol, we join a growing community. OData makes sense as the starting point for any data-rich standard. Last week I was at CETIS 2013, where there was already talk of other standards organizations in the e-Learning community adopting OData as a way of standardizing the way they share information.

Scalability testing for online assessments

Posted by Steve Lay

Last year I wrote a series of blog posts, with accompanying videos, on the basics of setting up virtual machines in the cloud and getting them ready to install Questionmark Perception.

This type of virtual machine environment is very useful for development and testing; we use a similar capability ourselves when testing the Perception software as well as new releases of our US and EU OnDemand services. One thing these environments are particularly useful for is scalability testing.

Scalability can be summarised as the ability to handle increased load when resources are added. We actually publish details of the scalability testing we do for our OnDemand service in our white paper on the “Security of Questionmark’s US OnDemand Service”.

The connection between scalability and security is not always obvious, but application availability is an important part of any organisation’s security strategy. For example, a denial-of-service or DoS attack is one in which an attacker deliberately exploits a weakness of a system in order to make it unavailable. Most DoS attacks do not involve any breach of confidentiality or data integrity, but they are still managed under the umbrella of security. Scalability testing focuses on the ‘friendly’ threat from increased demand but, as with a DoS attack, the impact of a failure on the end user is the same: loss of availability.

As the popularity of our OnDemand service continues to increase, we’ve been ramping up our scalability testing, too. Using an external virtual machine service, we are able to simulate, temporarily and cost-effectively, loads that exceed the highest peaks of expected demand. As more and more customers join our OnDemand service, the peaks of demand tend to smooth out when compared to a single customer’s usage, allowing us to scale our hardware requirements more efficiently. Our test results also help users of Questionmark Perception, our system for on-premise installation, to provision suitable resources for their peak loads.

I thought I’d share a graph from a recent test run to help illustrate how we test the software behind our services. These results were obtained with a set of virtual resources designed to support a peak rate equivalent of 1 million assessments per day. The graph shows results from 13 different types of test, such as logging in, starting a test, submitting results, etc. The vertical axis represents the response times (in ms) for the minimum, median and 90th percentile cases at peak load. As you can see, all results are well within the target time of 5000ms.
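To make the statistics on that graph concrete, here is how minimum, median and 90th-percentile figures are derived from raw response times for one test type. The sample data is invented for illustration and is not taken from our actual test runs:

```python
# Summarising raw response times (ms) into the statistics plotted in
# a scalability test graph: minimum, median and 90th percentile.
import math
import statistics


def percentile(samples, p):
    """Nearest-rank percentile: the value at rank ceil(p/100 * n)."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[rank - 1]


# Invented response times for a single test type (e.g. "log in"):
login_times_ms = [210, 180, 950, 300, 420, 260, 1900, 330, 240, 510]

summary = {
    "min": min(login_times_ms),
    "median": statistics.median(login_times_ms),
    "p90": percentile(login_times_ms, 90),
}
# All three figures should sit well under the 5000 ms target.
assert all(v < 5000 for v in summary.values())
```

Reporting the 90th percentile alongside the median matters: a handful of slow outliers can hide behind a healthy median, but they show up clearly at the tail.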

I hope I’ve given you a flavour of the type of testing we do to ensure that Questionmark OnDemand lives up to being a scalable platform for your high-volume delivery needs.


Meeting Dutch Questionmark users in Utrecht

Posted by Steve Lay

When I told a friend that I was visiting Utrecht recently, they said I should make time to see the fantastic railway museum there. So I was pleasantly surprised when I discovered that the 2012 Dutch Questionmark Users Conference was being held in the Spoorwegmuseum.

The presentations (in Dutch) from some of the sessions are available online from the conference home page. An Excel add-in developed by Michel Duijvestijn at the Hogeschool Rotterdam is also available for download. The add-in allows results downloaded from your Questionmark repository to be adjusted (e.g., to remove any unwanted questions) and then recalculated! Our Dutch customers are always very innovative in the way they use Questionmark technologies.

I would like to thank our hosts, Stoas, for putting on such a fantastic event. Sadly, my Dutch was not up to following all the proceedings, but everyone was very kind when I had to stand in on the panel session at the last minute. There were lots of interesting topics discussed, and there was plenty of interest in Questionmark Secure and our mobile delivery solutions.

All in all it was a great day out, as I hope these pictures will convey.

A fresher’s view of university admissions testing

Posted by Steve Lay

As someone who lives in a university town, I’m very aware that the beginning of October is the start of the academic year. The impact of 12,000+ students descending on a small English market town over the course of one weekend is total gridlock! Fortunately, first year undergraduates, or freshers as they are known, are given a bit of a head start which helps them find their way before the deluge. The whole process has an added poignancy for me this year as my son starts his college course this week.

A lot has changed since I started my degree, but I’m sure freshers still compare A-level results as one of their first topics of conversation. A-level examinations were originally designed as qualifications in themselves. For example, a person with a French language A-level can be expected to demonstrate a certain level of competence when speaking and writing in French. This contrasts with aptitude tests where a person demonstrates their potential rather than a specific competency.

These days, an increase in the number of students going on to higher education in the UK has shifted the main emphasis towards the use of A-levels as a tool for university admissions. As a result, the different providers are now required to publish a uniform mark scale (UMS). The UMS is an ambitious concept: not only does it attempt to standardise marks amongst the different providers, but it also attempts to tackle year-on-year comparability. As governments look for year-on-year improvements, there is a serious danger of circularity in the standard-setting process. We even have a phrase to describe the end result: grade inflation. The whole system has become a political football.

With so much invested (by teachers and students) in one system of assessment, it is not surprising that the subject comes up in freshers’ week, but it won’t take long before talk of A-level results is replaced by more pertinent measures. Universities have embraced assessment technologies, making it easier to provide students with frequent progress checks and tailored feedback. Students are no longer passive recipients either: they are demanding improvements in the assessment processes! This has even been the subject of a recent student union campaign.

It is worth remembering that all assessments have a shelf-life. The value of an A-level taken 25 years ago is considerably less than one taken this year — and that’s not a statement about year-on-year comparability!