Interact with your data: Looking forward to Napa

Posted by Steve Lay

It’s almost time for the Questionmark Users Conference, which this year is being held in Napa, California. As usual there’s plenty on the program for delegates interested in integration matters!

At last year’s conference we talked a lot about OData for Analytics (which I have also written about here: What is OData, and why is it important?). OData is a data standard originally created by Microsoft but now firmly embedded in the open standards community through a technical committee at OASIS. OASIS have taken on further development, resulting in the publication of the most recent version, OData 4.

This year we’ve built on our earlier work with the Results OData API, extending our adoption of OData to our delivery database, but there’s a difference. Whereas the Results OData API provides read-only access to your data, the data exposed from our delivery system supports both read and write actions, allowing third-party integrations to interact with your data during the delivery process.

Why would you want to do that?

Some assessment delivery processes involve actions that take place outside the Questionmark system. The most obvious example is essay grading. Although the rubrics (the rules for scoring) are encoded in the Questionmark database, it takes a human being outside the system to follow those rules and assign marks to the participant. We already have a simple scoring tool built directly into Enterprise Manager, but for more complex scoring scenarios you’ll want to integrate with external marking tools.

The new Delivery OData API provides access to the data you need, allowing you to read a participant’s answers and write back the scores using a simple Unscored -> Saved -> Scored workflow. When a result reaches the final Scored status, the participant’s result is updated and the new scores will appear in future reports.
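
To make that workflow concrete, here is a minimal sketch of how an external marking tool might talk to the Delivery OData API. The service root, entity names and property names below are illustrative assumptions rather than documented endpoints; the actual service description is available from developer.questionmark.com.

    import requests

    BASE = "https://example.questionmark.com/deliveryodata"  # hypothetical service root
    AUTH = ("consumer", "api-key")                            # placeholder credentials

    def grade_essay(answer_text):
        """Placeholder for the external marking logic."""
        return 1 if answer_text else 0

    # 1. Read the answers that still need a human score (entity and property names assumed)
    response = requests.get(BASE + "/Answers",
                            params={"$filter": "ScoringStatus eq 'Unscored'"},
                            auth=AUTH)
    answers = response.json().get("value", [])  # payload shape depends on the OData version

    # 2. Score each answer and write it back, moving it through Unscored -> Saved -> Scored
    for answer in answers:
        score = grade_essay(answer.get("AnswerText", ""))
        requests.patch(BASE + "/Answers({0})".format(answer["Id"]),
                       json={"Score": score, "ScoringStatus": "Scored"},
                       auth=AUTH)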

I’ll be teaming up with Austin Fossey, our product owner for reporting, and Howard Eisenberg, our head of Solution Services, to talk at the conference about Extending Your Platform, during which we’ll be covering these topics. I’m also delighted that colleagues from Rio Salado College will be talking about their own scoring tool, built right on top of the Delivery OData API.

I look forward to meeting you in Napa, but if you can’t make it this year, don’t worry: some of the sessions will be live-streamed. Click here to register so that we can send you your login info and directions. You can also follow along on social media by following and tweeting with @Questionmark.

Acronyms, Abbreviations and APIs

Posted by Steve Lay

As Questionmark’s integrations product owner, I find it all too easy to speak in acronyms and abbreviations. Of course, with the advent of modern-day ‘text-speak’, acronyms are part of everyday speech. But that doesn’t mean everyone knows what they mean. David Cameron, the British prime minister, was caught out by the everyday ‘LOL’ when it was revealed during a recent public inquiry that he’d used it thinking it meant ‘lots of love’.

In the technical arena things are not so simple. Even spelling out an acronym like SOAP (which stands for Simple Object Access Protocol) doesn’t necessarily make the meaning any clearer. In this post, I’m going to do my best to explain the meanings of some of the key acronyms and abbreviations you are likely to hear talked about in relation to Questionmark’s Open Assessment Platform.

API

At a recent presentation (on Extending the Platform), while I was talking about ways of integrating with Questionmark technologies, I asked the audience how many people knew what ‘API’ stood for. The response prompted me to write this blog article!

The term ‘API’ is used so often that it is easy to forget that it is not widely known outside the computing world.

API stands for Application Programming Interface. In this case the ‘application’ refers to some external software that provides functionality beyond that which is available in the core platform. For example, it could be a custom registration application that collects information in a special way that makes it possible to automatically create a user and schedule them to a specified assessment.

The API is the information that the programmer needs to write this registration application. ‘Interface’ refers to the join between the external software and the platform it is extending. (Our own APIs are documented on the Questionmark website and can be reached directly from developer.questionmark.com.)

APIs and Standards

APIs often make use of technical standards. Using standards helps the designer of an API focus on the things that are unique to the platform concerned without having to go into too much incidental detail. Using a common standard also helps programmers develop applications more quickly, because pre-written code that implements the underlying standard will often be available for them to use.

To use a physical analogy, some companies will ask you to send them a self-addressed stamped envelope when requesting information from them. The company doesn’t need to explain what an envelope is, what a stamp is and what they mean by an address! These terms act a bit like technical standards for the physical world. The company can simply ask for one because they know you understand this request. They can focus their attention on describing their services, the types of requests they can respond to and the information they will send you in return.

QMWISe

QMWISe stands for Questionmark Web Integration Services Environment. This API allows programmers to exchange information with Questionmark OnDemand software-as-a-service or Questionmark Perception on-premise software. QMWISe is based on an existing standard called SOAP (see above).

SOAP defines a common structure used for sending and receiving messages; it even defines the concept of a virtual ‘envelope’. Referring to the SOAP standard allows us to focus on the contents of the messages being exchanged, such as creating participants, creating schedules, fetching results and so on.
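
To give a feel for that envelope, here is a rough sketch of a SOAP message as an integration might send it to QMWISe. The method name, namespace and endpoint URL are assumptions made up for illustration; the real message formats are described in the QMWISe documentation on developer.questionmark.com.

    import requests

    # A hypothetical SOAP envelope: the standard defines the Envelope/Body wrapper,
    # while the contents (here, a participant-creation request) are service-specific.
    SOAP_ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <CreateParticipant xmlns="http://example.com/qmwise">
          <Participant_Name>jsmith</Participant_Name>
        </CreateParticipant>
      </soap:Body>
    </soap:Envelope>"""

    # Post the envelope to a placeholder QMWISe endpoint
    response = requests.post("https://example.questionmark.com/qmwise/qmwise.asmx",
                             data=SOAP_ENVELOPE,
                             headers={"Content-Type": "text/xml; charset=utf-8"})
    print(response.status_code)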

REST

REST stands for REpresentational State Transfer and must qualify as one of the more obscure acronyms! In practice, REST represents something of a back-to-basics approach to APIs when contrasted with those based on SOAP. It is not, in itself, a standard but merely a set of stylistic guidelines for API designers, defined in the doctoral dissertation of Roy Fielding, a co-author of the HTTP standard (see below).

As a result, APIs are sometimes described as ‘RESTful’, meaning they adhere to the basic principles defined by REST. These days, publicly exposed APIs are more likely to be RESTful than SOAP-based. Central to the idea of a RESTful API is that the things your API deals with are identified by a URL (Uniform Resource Locator), the web’s equivalent of an address. In our case, that would mean that each participant, schedule, result, etc. would be identified by its own URL.
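
A tiny sketch (with purely illustrative URLs, not actual Questionmark endpoints) shows the idea: each resource has its own address, and the HTTP verb expresses what you want to do with it.

    import requests

    BASE = "https://example.questionmark.com/api"  # illustrative only

    # Each participant, schedule or result is addressed by its own URL...
    participant = requests.get(BASE + "/participants/12345").json()

    # ...and the HTTP verb expresses the intent:
    requests.post(BASE + "/participants", json={"name": "jsmith"})  # create a new one
    requests.delete(BASE + "/schedules/67890")                      # remove an existing one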

HTTP

RESTful APIs draw heavily on HTTP. HTTP stands for HyperText Transfer Protocol. It was invented by Tim Berners-Lee and is one of the key technologies that underpin the web as we know it. Although conceived as a way of publishing HyperText documents (i.e., web pages), the underlying protocol is really just a way of sending messages. It defines the virtual envelope into which these messages are placed. HTTP is familiar as the prefix to most URLs.

OData

Finally, this brings me to OData, which simply stands for Open Data. This standard makes it much easier to publish RESTful APIs. I recently wrote about OData in the post What is OData, and why is it important?

Although arguably simpler than SOAP, OData provides an even more powerful platform for defining APIs. For some applications, OData itself is enough, and tools can be integrated with no additional programming at all. The PowerPivot plugin for Microsoft Excel is a good example: using Excel, you can extract and analyse data through the Questionmark Results API (itself built on OData) without writing any Questionmark-specific code.
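
To give a flavour of what a tool like PowerPivot does behind the scenes, here is a sketch of querying an OData feed directly. The feed URL and entity names are placeholders rather than the actual Results API schema; the query options ($filter, $orderby, $top) are standard OData.

    import requests

    FEED = "https://example.questionmark.com/resultsodata"  # placeholder feed URL

    # Standard OData query options push the filtering, sorting and paging to the server
    resp = requests.get(FEED + "/Results",
                        params={"$filter": "Score ge 80",    # scores of 80 or more
                                "$orderby": "EndTime desc",  # most recent first
                                "$top": "10",                # first ten matches
                                "$format": "json"},
                        auth=("consumer", "api-key"))

    for result in resp.json().get("value", []):  # payload shape varies by OData version
        print(result.get("Participant"), result.get("Score"))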

For more about OData, check out this presentation on Slideshare.

Discussing data mining at NCME

Posted by Austin Fossey

We will wrap up our discussion of themes at the National Council on Measurement in Education (NCME) annual meeting with an overview of the inescapable topic of working with complex, and often messy, data sets.

It was clear from many of the presentations and poster sessions that technology is driving the direction of assessment, for better or for worse (or as Damian Betebenner put it, “technology eats statistics”). Advances in technology have allowed researchers to examine new statistical models for scoring participants, identify aberrant responses, score performance tasks, identify sources of construct-irrelevant variance, diversify item formats, and improve reporting methods.

As the symbiotic knot between technology and assessment grows tighter, many researchers and test developers are in the unexpected position of having too much data. This is especially true in complex assessment environments that yield log files with staggering amounts of information about a participant’s actions within an assessment.

Log files can track many types of data in an assessment, such as responses, click streams, and system states. All of these data are time stamped, and if they capture the right data, they can illuminate some of the cognitive processes that manifest themselves through the participant’s interaction with the assessment. Raw assessment data like Questionmark’s Results API OData Feeds can also be coupled with institutional data, greatly expanding the types of research questions we can pursue within a single organization.
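
As a simple illustration of that coupling, here is a sketch of joining results pulled from an OData feed with an institutional data set. The feed URL, file name and column names are assumptions made up for the example.

    import pandas as pd
    import requests

    # Pull raw results from a placeholder OData feed
    FEED = "https://example.questionmark.com/resultsodata/Results?$format=json"
    results = pd.DataFrame(
        requests.get(FEED, auth=("consumer", "api-key")).json().get("value", []))

    # Institutional data exported from a student information system (hypothetical columns)
    institution = pd.read_csv("enrollment.csv")  # e.g. StudentId, Program, CreditsEarned

    # Join on a shared identifier to open up new research questions,
    # e.g. how assessment scores vary by program of study
    combined = results.merge(institution, left_on="ParticipantId", right_on="StudentId")
    print(combined.groupby("Program")["Score"].mean())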

NCME attendees learned about hardware and software that captures both response variables and behavioral variables from participants as they complete an online learning task.

Several presenters discussed issues and strategies for addressing less-structured data, with many papers tackling log file data gathered as participants interact with an online assessment or other online task. Ryan Baker (International Educational Data Mining Society) gave a talk about combining the data mining of log files with field observations to identify hard-to-capture domains, like student engagement.

Baker focused on the positive aspects of having oceans of data, choosing to remain optimistic about what we can do rather than dwell on the difficulties of iterative model building in these types of research projects. He shared examples of intelligent tutoring systems designed to teach students while also gathering data about the student’s level of engagement with the lesson. These examples were peppered with entertaining videos of the researchers in classrooms playing with their phones so that individual students would not realize that they were being subtly observed by the researcher via sidelong glances.

Evidence-centered design (ECD) emerged as a consistent theme: there was a lot of conversation about how researchers are designing assessments so that they yield fruitful data for the intended inferences. Nearly every presentation about assessment development referenced ECD. Valerie Shute (Florida State University) observed that five years ago, only a fraction of attendees would have known about ECD, but today it is widely used by practitioners.

Heading home from San Antonio

Posted by Joan Phaup

Bryan Chapman

As we head back home from this week’s Questionmark Users Conference in San Antonio, it’s good to reflect on the connections people made with one another during discussions, focus groups, social events and a wide variety of presentations covering best practices, case studies and the features and functions of Questionmark technologies. Many thanks to all our presenters!

Bryan Chapman’s keynote on Transforming Open Data into Meaning and Action offered an expansive approach to a key theme of this year’s conference. Bryan described the tremendous power of OData while dispelling much of the mystery around it. He explained that OData can be exchanged in simple ways, such as through a URL or a simple command line, to create, read, update and/or delete data items.

It was interesting to see how focusing on the key indicators that have the biggest impact can produce easy-to-understand visual representations of what is happening within an organization. Among the many dashboards Bryan shared was one that showed the amount of safety training in relation to the incidence of on-the-job injuries.

No conference is complete without social events that nurture new friendships and cement long-established bonds. Yesterday ended with a visit to the Rio Cibolo Ranch outside the city, where we enjoyed a Texas-style meal, western music and all manner of ranch activities. Many of us got acquainted with some Texas Longhorn cattle, and the bravest folks of all took some lassoing lessons (snagging a mechanical calf, not a longhorn!).

Today’s breakouts and general session complete three intensive days of learning. Here’s wishing everyone a good journey home and continued connections in the year ahead.

Setting Your Data Free – 2014 Users Conference

Posted by Austin Fossey

The Questionmark Product Team is off to the 2014 Users Conference! We had a great time last night at the opening reception and are ready now to launch into the conference program.

A major theme this year is “setting your data free!”, so I wanted to give you a little taste of how this theme relates to my presentations on reporting and analytics.

As you know from my previous posts, we have implemented the OData API, which connects your raw assessment data (the same data driving Questionmark Analytics) to a whole ecosystem of business intelligence tools, custom dashboards, statistical packages, and even common desktop applications like Excel and web browsers. At this year’s conference, we will talk about how OData can be a tool for freeing those data for users of all types. Be sure to check out my OData session, where we will be running through examples using Excel with the PowerPivot add-in.

But, when we free our data, we want to make sure we are putting good-quality, meaningful data out there for our stakeholders, so that they can make valid inferences about the participants and the assessments. I will be doing two presentations related to this topic. In one, we will talk about understanding assessment results, with a focus on the classical test theory model and its applications for evaluating assessment quality with item statistics. In the second presentation, we will talk about principles of psychometrics and measurement design, where we will discuss validity studies and how principled test development frameworks like evidence-centered design can help us build better assessments that produce actionable data.
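
For anyone who wants to experiment before the session, here is a minimal sketch of two of the item statistics classical test theory gives us: item difficulty (the proportion of participants answering an item correctly) and item discrimination (the item-total correlation). The response matrix is made up purely for illustration.

    import numpy as np

    # Rows are participants, columns are items; 1 = correct, 0 = incorrect (made-up data)
    responses = np.array([
        [1, 1, 0, 1],
        [1, 0, 0, 1],
        [0, 1, 1, 1],
        [1, 1, 1, 0],
        [0, 0, 0, 1],
    ])

    total_scores = responses.sum(axis=1)

    # Item difficulty: proportion of participants answering each item correctly
    difficulty = responses.mean(axis=0)

    # Item discrimination: correlation between each item and the total score
    discrimination = [np.corrcoef(responses[:, i], total_scores)[0, 1]
                      for i in range(responses.shape[1])]

    for i, (p, r) in enumerate(zip(difficulty, discrimination), start=1):
        print("Item {0}: difficulty = {1:.2f}, discrimination = {2:.2f}".format(i, p, r))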

I’m pleased to see everyone in San Antonio and expect to be talking a lot about how we can set data free to make a powerful impact for stakeholders!

OASIS: Putting the “Open” into the OData Protocol

Posted by Steve Lay

Last year I wrote a quick primer on the OData protocol and how it relates to Questionmark’s Open Assessment Platform: see What is OData and Why is it important?

A lot has happened in the OData community over the last year. One of the most interesting aspects of the developing standard is the way the OData ‘ecosystem’ is growing. This is the term used to describe the tools that developers can use to help them support the standard, as well as the data services published by information providers.

The OData specification started life at Microsoft, but the list of contributors to the OASIS technical committee now includes some other familiar names such as IBM and SAP. SAP are co-chairing the technical committee and have made a significant contribution to an open source library that allows Java developers to take advantage of the standard. This library has recently been moved into an Apache Foundation ‘incubator’, which is a great way to get the Java developer community’s attention. You can find it at Apache Olingo.

Moving the specification into an industry standards body like OASIS means that Microsoft relinquish some control in exchange for a more open approach. OASIS allows any interested party to join and become part of the standards development process. Documentation is now available publicly for review before it is finalized, and I’ve personally found the committee responsive.

Microsoft continue to develop tools that support OData both through Windows Communication Foundation (WCF) and through the newer WebAPI, making OData a confirmed part of their platform. There are options for users of other programming languages too. Full details are available from http://www.odata.org/ecosystem/.

With OASIS now in the process of approving version 4 of the specification, I thought it would be worth giving a quick overview of how Questionmark is using the standard and how it is developing.

Questionmark’s OData API for Analytics uses version 2 of OData. This is the most widely supported version of the specification; it is also the version supported by Apache Olingo.

Some of the more recent libraries, including Microsoft’s WCF and Web API, have support for version 3 of the protocol. We’re currently investigating the potential of version 3 for future OData projects at Questionmark.

Version 4 is the first version published by OASIS and marks an important step for the OData community. It also brings in a number of breaking changes. In the words of the technical committee:

“In evolving a specification over time, sometimes you find things that worked out better than you had expected and other times you find there are things you wish you had done differently.”

Ref: What’s new in OData 4

The reality is that there are a lot of new features in version 4 of the protocol and, combined with a comprehensive clean-up process, it will be some time before version 4 is widely adopted across the community. However, the increased transparency that comes with an OASIS publication and the weight of industry leaders like SAP helping to drive adoption mean it is definitely one I’ll be keeping an eye on.