Five tips for enhancing test security using technology

Posted by Julie Delazyn

Test security is a topic that comes up time and time again on education and corporate forums. You can improve test security by changing the physical test-taking environment, but you can also use technology to tackle certain security issues.

Here are five tips that can help you use technology to address security challenges:

  1. Randomize: Shuffling the order of the choices can help protect the security of the assessment. The questions can also be delivered in a random order themselves — to help prevent cribbing when users are sitting in non-screened assessment centers.
  2. Encrypt: With so many tests and exams being delivered via the Internet or an intranet, encryption can protect against interception. Secure Sockets Layer (SSL) is a protocol that allows the browser and web server to encrypt their communication, so anyone intercepting it can’t read it.
  3. Schedule: You can discourage cheating by specifying user names and passwords, setting assessment start times, limiting the length of time for an assessment and the number of times it may be taken.
  4. Monitor: A participant can’t start a monitored assessment until a proctor or invigilator has logged on to verify the participant’s identity. The monitor can be limited to a range of IP addresses to ensure that a certain physical location is used to administer the assessment.
  5. Secure browsers: It is possible to ‘lock down’ computers to keep participants from accessing other applications and websites while taking a medium- or high-stakes assessment. A secure browser prevents candidates from printing, capturing screens, accidentally exiting the assessment, viewing source, task switching, etc.
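To illustrate tip 1, here is a minimal Python sketch of randomizing both question order and choice order. The question structure is hypothetical, for illustration only, and is not Questionmark's actual data model:

```python
import random

def shuffled_assessment(questions, rng=None):
    """Return a copy of the question list with the question order and
    each question's choice order independently shuffled."""
    rng = rng or random.Random()
    out = []
    for q in questions:
        choices = list(q["choices"])   # copy so the original is untouched
        rng.shuffle(choices)           # randomize choice order per question
        out.append({"stem": q["stem"], "choices": choices})
    rng.shuffle(out)                   # randomize question order
    return out

# Example with a made-up two-question exam:
exam = [
    {"stem": "2 + 2 = ?", "choices": ["3", "4", "5"]},
    {"stem": "Capital of France?", "choices": ["Paris", "Rome"]},
]
randomized = shuffled_assessment(exam, rng=random.Random(7))
```

Seeding the generator (as above) makes a given participant's shuffle reproducible, which can be useful when reviewing a delivered form afterwards.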

Want more info? Download the White Paper: Delivering Assessments Safely and Securely  [registration required]

OASIS: Putting the “Open” into the OData Protocol

Posted by Steve Lay

Last year I wrote a quick primer on the OData protocol and how it relates to Questionmark’s Open Assessment Platform; see What is OData and Why is it important?

A lot has happened in the last year in the OData community. One of the most interesting aspects of the developing standard is the way the OData ‘ecosystem’ is developing. This is the term used to describe the tools that developers can use to help them support the standard as well as the data services published by information providers.

The OData specification started life at Microsoft, but the list of contributors to the OASIS technical committee now includes some other familiar names such as IBM and SAP. SAP are co-chairing the technical committee and have made a significant contribution to an open source library that allows Java developers to take advantage of the standard. This library has recently been moved into an Apache Foundation ‘incubator’ which is a great way to get the Java developer community’s attention. You can find it at Apache Olingo.

Moving the specification into an industry standards body like OASIS means that Microsoft relinquish some control in exchange for a more open approach. OASIS allows any interested party to join and become part of the standards development process. Documentation is now available publicly for review before it is finalized, and I’ve personally found the committee responsive.

Microsoft continue to develop tools that support OData, both through Windows Communication Foundation (WCF) and through the newer Web API, making OData a confirmed part of their platform. There are options for users of other programming languages too.

With OASIS now in the process of approving version 4 of the specification, I thought it would be worth giving a quick overview of how Questionmark is using the standard and how it is developing.

Questionmark’s OData API for Analytics uses version 2 of OData. This is the most widely supported version of the specification; it is also the version supported by Apache Olingo.
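To give a flavor of what an OData v2 request looks like, here is a minimal Python sketch that builds a query URL from the protocol's standard system query options ($top, $filter, $format). The service root and entity set below are invented for illustration; they are not Questionmark's actual endpoints:

```python
from urllib.parse import urlencode

def odata_query(service_root, entity_set, top=None, filter_expr=None,
                json_format=True):
    """Build an OData v2 query URL using standard system query options."""
    params = {}
    if top is not None:
        params["$top"] = str(top)          # limit the number of entries
    if filter_expr:
        params["$filter"] = filter_expr    # server-side filter expression
    if json_format:
        params["$format"] = "json"         # ask for JSON instead of Atom
    url = f"{service_root.rstrip('/')}/{entity_set}"
    return f"{url}?{urlencode(params)}" if params else url

# Hypothetical example: first ten results with a score of 80 or more
url = odata_query("https://example.com/odata", "Results",
                  top=10, filter_expr="Score ge 80")
```

Because the query options are ordinary URL parameters, any HTTP client can consume the feed; that is a large part of why the ecosystem of OData-aware tools has grown so quickly.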

Some of the more recent libraries, including Microsoft’s WCF and Web API, have support for version 3 of the protocol. We’re currently investigating the potential of version 3 for future OData projects at Questionmark.

Version 4 is the first version published by OASIS and marks an important step for the OData community. It also brings in a number of breaking changes, in the words of the technical committee:

“In evolving a specification over time, sometimes you find things that worked out better than you had expected and other times you find there are things you wish you had done differently.”

Ref: What’s new in OData 4

The reality is that there are a lot of new features in version 4 of the protocol and, combined with a comprehensive clean-up process, it will be some time before version 4 is widely adopted across the community. However, the increased transparency that comes with an OASIS publication, and the weight of industry leaders like SAP helping to drive adoption, mean it is definitely one I’ll be keeping an eye on.

OData – Direct Data Access for Flexible Reporting

Posted by Austin Fossey

We have already spoken a lot about OData, but I’d like to circle back to underscore the main usefulness of Questionmark’s OData API for Analytics.

OData is a handy way to get access to the same raw data that drives the reports in Questionmark Analytics. It’s a very useful tool to have in your belt if you want to use another reporting engine or business intelligence tool to analyze, explore, and report on your data.

We have mentioned tools like Microsoft Excel PowerPivot and Tableau Public as easy-to-use platforms for exploring OData feeds, but every month I learn about a new application or reporting engine that consumes OData feeds.


Tableau Public dashboard showing assessment performance by country and job title using the Questionmark OData Results feed.

For example, I recently learned that the R Project for Statistical Computing (“R”) has an XML package that can be used to read in specific data from an OData feed.
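The same kind of extraction can be done in other languages too. As a rough sketch, here is how you might pull property values out of an OData v2 Atom response using only Python's standard library; the feed fragment below is made up for illustration and is not real Questionmark data:

```python
import xml.etree.ElementTree as ET

# XML namespaces used by OData v2 Atom feeds
NS = {
    "atom": "http://www.w3.org/2005/Atom",
    "m": "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata",
    "d": "http://schemas.microsoft.com/ado/2007/08/dataservices",
}

def entry_properties(feed_xml):
    """Return one {property: value} dict per Atom entry in the feed."""
    root = ET.fromstring(feed_xml)
    rows = []
    for entry in root.findall("atom:entry", NS):
        props = entry.find("atom:content/m:properties", NS)
        # Strip the namespace from each property tag to get a clean name
        rows.append({p.tag.split("}")[1]: p.text for p in props})
    return rows

SAMPLE = """<feed xmlns="http://www.w3.org/2005/Atom"
  xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
  xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices">
  <entry><content type="application/xml"><m:properties>
    <d:ResultId>1</d:ResultId><d:Score>85</d:Score>
  </m:properties></content></entry>
</feed>"""

rows = entry_properties(SAMPLE)
```

In practice you would fetch the feed over HTTPS with your API credentials and pass the response body to a function like this; the parsed rows can then go straight into whatever analysis tool you prefer.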

You can then write a program to read in data from the Questionmark OData feeds for statistical analysis. I used this R package to read in response data from the Questionmark OData Answers feed, which I then used to calibrate a set of polytomous items with the generalized partial credit model. R can be used for just about any analysis, but this example would be useful if you wanted to score your participants with an item response theory (IRT) model instead of the classical test theory model used in Questionmark.
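For readers curious about the model mentioned above: the generalized partial credit model assigns each score category of an item a probability that depends on the participant's ability. Here is a small, self-contained Python sketch of those category probabilities; it is my own illustration of the model, not the R code used for the analysis:

```python
import math

def gpcm_probs(theta, a, b):
    """Category probabilities under the generalized partial credit model.

    theta -- participant ability
    a     -- item discrimination
    b     -- list of step difficulties (m steps -> m + 1 score categories)
    """
    # Cumulative sums of a * (theta - b_j); category 0 has an empty sum (= 0)
    z = [0.0]
    for bj in b:
        z.append(z[-1] + a * (theta - bj))
    denom = sum(math.exp(zk) for zk in z)
    return [math.exp(zk) / denom for zk in z]

# Hypothetical three-category item (two step difficulties)
probs = gpcm_probs(theta=0.5, a=1.2, b=[-0.4, 0.8])
```

Calibration means estimating `a` and `b` from response data; once estimated, curves like the item characteristic curve shown below are just this function plotted across a range of ability values.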


Item characteristic curve generated in R using polytomous response data from the Questionmark OData Answers feed.

For research and analysis, I like OData as a way to automatically access my data instead of having to manually export and merge data files. Of course, raw data can be messy! And OData URLs can take some time to learn, which is why we are continuously working to improve our client tutorials and developer resources so that people can use this technology effectively.

Customers who attend the Questionmark Users Conference will have opportunities to learn more about OData, which we will focus on during a number of different sessions.

Getting more value from assessment results

Posted by Joan Phaup

How do you maximize the value of assessment results? How do you tailor those results to meet the specific needs of your organization? We’ll address these questions and many others at the Questionmark Users Conference in San Antonio, March 4 – 7.

The conference program will cover a wide range of topics, offering learning opportunities for beginning, intermediate and advanced users of Questionmark technologies. The power and potential of open data will be a major theme, highlighted in a keynote by Bryan Chapman on Transforming OData into Meaning and Action.

Here’s the full program:

Optional Pre-conference Workshops

  • Test Development Fundamentals, with Dr. Melissa Fein (half day)
  • Questionmark Boot Camp: Basic Training for Beginners, with Questionmark Trainer Rick Ault (full day)

General Sessions

  • Conference Kickoff and Opening General Session
  • Conference Keynote by Bryan Chapman – Transforming Open Data into Meaning and Action
  • Closing General Session — Leaping Ahead: A View Beyond the Horizon on the Questionmark Roadmap

Case Studies

  • Using Questionmark to Conduct Performance Based Certifications —  SpaceTEC®
  • Better Outcomes Make the Outcome Better! —  USMC Marine Corps University
  • Generating and Sending Custom Completion Certificates — The Aurelius Group
  • Leveraging Questionmark’s Survey Capabilities with a Multi-system Model —  Verizon
  • Importing Questions into Questionmark Live on a Tri-Military Service Training Campus — Medical Education & Training Campus
  • How Can a Randomly Designed Test be Fair to All? —  U.S. Coast Guard

Best Practices

  • Principles of Psychometrics and Measurement Design
  • 7 Reasons to Use Online Assessments for Compliance
  • Reporting and Analytics: Understanding Assessment Results
  • Making it Real: Building Simulations Into Your Quizzes and Tests
  • Practical Lessons from Psychology Research to Improve Your Assessments
  • Item Writing Techniques for Surveys, Quizzes and Tests

Questionmark Features & Functions

  • Introduction to Questionmark for Beginners
  • BYOL: Item and Topic Authoring
  • BYOL: Collaborative Assessment Authoring
  • Integrating with Questionmark’s Open Assessment Platform
  • Using Questionmark’s OData API for Analytics
  • Successfully Deploying Questionmark Perception
  • Customizing the Participant Interface


  • Testing is Changing: Practical and Secure Assessment in the 21st Century
  • Testing what we teach: How can we elevate our effectiveness without additional time or resources?
  • 1 + 1 = 3…Questionmark, GP Strategies and You!

Drop-in Demos

  • Making the Most of Questionmark’s Newest Technologies

Future Solutions: Influence Questionmark’s Road Map

  • Focus Group on Authoring and Delivery
  • Focus Group on the Open Assessment Platform and Analytics

Tech Central

  • One-on-one meetings with Questionmark Technicians

Special Interest Group Meetings

  • Military/Defense US DOD and Homeland Security
  • Utilities/Energy Generation and Distribution
  • Higher Education
  • Corporate Universities

Social Events

Click here to see details about all these sessions, and register today!



Learning Styles: Fiction?

Posted by Doug Peterson

Last week, I wrote about learning styles and the importance many educators place on them. Today, let’s look at the downside of this approach.

Do a Google search on “debunking 4 learning styles” and you’ll find a lot of information. For example, a few years ago the Association for Psychological Science published an article stating that there is no scientific support for learning styles. But there are a couple of points in this article that I would like to bring out.

The first is that the article isn’t really saying that the learning styles theory has been disproved: it’s saying the theory hasn’t been properly proven. In other words, learning styles may still exist; the proponents of the theory simply haven’t proven it yet. That’s different from being proven not to exist.

Second, note the little bit that says “the participants would need to take the same test at the end of the experiment.” We know that for an assessment to be fair, valid and reliable, one of the things it must do is allow the participant to display his/her level of knowledge, skill or ability without interference and without testing multiple skills simultaneously (like reading comprehension along with the actual knowledge objective).

So how should we be looking at the relationship between learning styles and assessments? Should proponents of learning styles use assessments that take them into consideration? If a person is a visual learner, would they be better able to communicate their understanding with a visual question—say a Hot Spot—than with a multiple choice question? And maybe an auditory learner would better communicate his/her understanding with a spoken answer. Would forcing a visual learner to prove their understanding in a non-visual way be fair? Would it truly be testing only their knowledge, or would it also be testing their ability to overcome the learning style barrier presented by the question itself?

Those who don’t support the learning style theory feel that anyone can learn from any presentation style—people just have preferred styles. In other words, they feel the evidence shows that if two groups of self-identified visual learners learned the same subject matter, one group visually and the other through a different presentation style, both groups would still end up learning the same amount. Their learning style is not a limitation that prevents them from learning as much or as well when material is presented in other styles; it’s just a preference.

I can’t say that I accept learning styles as fact, but I also can’t say that I believe they are fiction. What I can say is that I believe that learning has to do with two things:

  1. Engagement
  2. Learner motivation

I don’t believe that “learning styles” and “engagement” are the same thing. I can see where, assuming that learning styles exist, it would be easier to engage a visual learner with visual content, but if you have boring visual content, even a visual learner will not learn. I also believe that a podcast done really well can engage a (supposedly) visual or tactile learner. True, according to the theory, the visual or tactile learner may not learn as much as when the material is presented in their style, but I think you get my point that learning must be engaging, and that engagement is independent of learning style.

My experience has also shown me that when a learner is motivated, nothing will stand in his or her way. If passing that eLearning course means a promotion and a raise, that auditory learner will do what it takes to learn the material and pass, even if the material is nothing but charts and graphs. Conversely, if the visual learner couldn’t care less about the material, the greatest graphs in the world won’t make one whit of difference.

I would love to hear your thoughts and opinions on learning styles. Do you think they’re real, and that a learner simply cannot learn as well from material not presented in their style as they can from material that is? Or do you think that learning style is more of a preference, and that learning will take place regardless of the way in which the material is presented, as long as it is engaging and the learner is motivated?

Integrating and Connectors – Blackboard

Posted by Doug Peterson

So far in this series we have discussed integrating using common standards – launch-and-track with AICC or SCORM, or a tighter integration with the Questionmark LTI Connector. In this installment we take a look at a deeper, custom integration – the Questionmark Blackboard Connector.

The latest version of Blackboard does have LTI capabilities, but we recommend using our Blackboard Connector instead of the LTI Connector as the Blackboard Connector has more functionality. As you’ll see in the following video, the Blackboard Connector handles a number of things behind the scenes – automatically creating groups that represent courses, adding participants and instructors to the appropriate groups, scheduling, etc. You also have a great amount of control – the Blackboard Connector has settings that allow you to control which courses and/or participants can interact with Questionmark from the Questionmark side, instead of automatically synchronizing everything.

Enjoy this video about integrating Questionmark with Blackboard using the Questionmark Blackboard Connector, and let me know if you have any questions!

Integrating Blackboard