GDPR: 6 months to go

Posted by Jamie Armstrong

Anyone working with personal data, particularly in the European Union, will know that we are now just six months from “GDPR day” (as I have taken to calling it). On 25-May-2018, the EU General Data Protection Regulation (“GDPR”) will become applicable, ushering in a new privacy/data protection era with greater emphasis than ever on the rights of individuals when their personal data is used or stored by businesses and other organizations. In this blog post, I provide some general reminders about what the GDPR is and give some insight into Questionmark’s compliance preparations.

The GDPR replaces the current EU Data Protection Directive, which has been around for more than 20 years. To keep pace with advances in technology and achieve greater uniformity in data protection, the EU began work on the GDPR over 5 years ago and finalized the text in April 2016. There then followed a period for regulators and other industry bodies to provide guidance on what the GDPR actually requires, to help organizations in their compliance efforts. Like all businesses that process EU personal data, whether based in the U.S., the EU or elsewhere, Questionmark has been working since the text was finalized to ensure that our practices and policies align with GDPR expectations.

For example, we have recently made available revised versions of our EU OnDemand service and US OnDemand service terms and conditions with new GDPR clauses, so that our customers can be assured that their agreements with us meet data controller-data processor contract requirements. We have updated our privacy policy to make clearer what personal data we gather and how this is used when people visit and interact with our website. There is also a helpful Knowledge Base article on our website that describes the personal data Questionmark stores.


One of the most talked-about provisions of the GDPR is Article 35, which deals with data protection impact assessments, or “DPIAs.” Basically, organizations acting as data controllers of personal data (meaning that they determine the purpose and means of the processing of that data) must complete a prior assessment of the impacts of processing that data if the processing is likely to result in a high risk to the rights and freedoms of data subjects. Organizations will need to make a judgment call as to whether the risk is high enough to require a DPIA. In some scenarios a DPIA will definitely be required, such as when data controllers process special categories of personal data like racial origin and health information; in other cases, some organizations may decide it is safer to complete a DPIA even if the GDPR does not strictly require one.

The GDPR expects that data processors will help data controllers with DPIAs. Questionmark has therefore prepared an example draft DPIA template that may be used for completing an assessment of data processing within Questionmark OnDemand. The draft DPIA template is available for download now.

In the months before GDPR day we will see more guidance from the Article 29 Working Party and national data protection authorities to assist organizations with compliance. Questionmark is committed to helping our customers comply with the GDPR, and we’ll post more on this subject next year. We hope this update is useful in the meantime.

Important disclaimer: This blog is provided for general information and interest purposes only, is non-exhaustive and does not constitute legal advice. As such, the contents of this blog should not be relied on for any particular purpose, and you should seek the advice of your own legal counsel in considering GDPR requirements.

Learning, Training and Assessments in Regulatory Compliance – Implementation Best Practices

Posted by John Kleeman

I’m pleased to let you know of a new joint SAP and Questionmark white paper on implementation best practices for learning, training and assessments in regulatory compliance. You can download the white paper here.

There has been a huge change in the regulatory environment for companies in the last few years. This is illustrated nicely by the graph below showing the number of formal warning letters the U.S. Food and Drug Administration (FDA) issued in the period from 2010 to 2016 for various compliance infractions.

[Figure: Rise in FDA warning letters issued per year from 2010 to 2016, increasing from a few hundred a year to over 10,000 a year]

Of course it’s not just letters that regulators issue; there have also been huge increases in the fines imposed on companies for rule breaches in areas including banking, data protection, price-fixing and manufacturing.

Failure to effectively train or assess employees is a significant cause of compliance errors, and this white paper authored by SAP experts Thomas Jenewein, Simone Buchwald and Mark Tarallo and me (Questionmark Founder and Executive Director, John Kleeman) explains how technology can help address the issue.

The white paper starts by looking at key factors increasing the need for training, learning, and assessments to ensure that businesses stay compliant and then goes on to consider three drivers for compliance learning – Organization Imposed, Operations Critical and Regulatory. The white paper then looks at how

  • A Learning Management System (LMS) can manage compliance learning
  • Learning Content and Documentation Authoring Tools can author compliance learning
  • An Assessment Management System can be used to diagnose the training needed, to help direct learning and to check competence, knowledge and skills.

A typical LMS includes basic quiz and survey capabilities, but when making decisions about people, such as whether to promote, hire, fire, or confirm competence for compliance or certification purposes, companies need more. The robust functionality of an effective assessment management system allows organizations to create reliable, valid, and more trustworthy assessments. Often, assessment management systems and LMSs work together, and test-takers will often be directed to take assessments by using a single sign-on from the LMS.

The white paper describes how SAP SuccessFactors Learning, SAP Enable Now and Questionmark software work together to help companies manage and deliver effective compliance learning, training and assessments, and so mitigate regulatory risk. It goes on to describe some key trends in compliance training and assessments that the authors see going forward, including how cybersecurity and data protection are impacting compliance.

The white paper is a quick, easy and useful read – you can download it here.

High-stakes assessment: It’s not just about test takers

Posted by Lance

In my last post I spent some time defining how I think about the idea of high-stakes assessment. I also talked about how these assessments affect the people who take them, including how much their results can matter for getting or doing a job.

Now I want to talk a little bit about how these assessments affect the rest of us.

The rest of us

Guess what? The rest of us are affected by the outcomes of these assessments. Did you see that coming?

But seriously, the credentials or scores that result from these assessments affect large swathes of the public. Ultimately that’s the point of high-stakes assessment. The resulting certifications and licenses exist to protect the public. These assessments act as barriers, preventing incompetent people from practicing professions where competency really matters.

It really matters

What are some examples of “really matters”? Well, when hiring, it really matters to employers that the network techs they hire know how to configure a network securely, not just that they say they do. It matters to the people crossing a bridge that the engineers who designed it knew their physics. It really matters to every one of us that our doctor, dentist, nurse, or surgeon knows what they are doing when they treat us. And it really matters to society at large that we measure (well) the children and adults who take large-scale assessments like college entrance exams.

At the end of the day, high-stakes exams are high-stakes because in a very real way, almost all of us have a stake in their outcome.

Separating the wheat from the chaff

There are a couple of ways that high-stakes assessments do what they do. Some assessments are simply designed to measure “minimal competence,” with test takers either ending above the line—often known as “passing”—or below the line. The dreaded “fail.”

Other assessments are designed to place test takers on a continuum of ability. This type of assessment assigns scores to test takers, and the score ranges often appear odd to laypeople. For example, the SAT uses a 200 to 800 scale.

Want to learn more? Hang on till next time!

South African Users Conference Programme Takes Shape

Posted by Chloe Mendonca

In just five weeks, Questionmark users and other learning professionals will gather in Midrand for the first South African Questionmark Users Conference.

Delegates will enjoy a full programme, from case studies to features and functions sessions on the effective use of Questionmark technologies.

There will also be time for networking during lunches and our Thursday evening event.

Here are some of the sessions you can look forward to:

  • Case Study: Coming alive with Questionmark Live: A mind shift for lecturers – University of Pretoria
  • Case Study: Lessons and Discoveries over a decade with Questionmark – Nedbank
  • Case Study: Stretching the Boundaries: Using Questionmark in a High-Volume Assessment Environment – University of Pretoria
  • Features and Functions: New browser based tools for collaborative authoring – Overview and Demonstrations
  • Features and Functions: Analysing and Sharing Results with Stakeholders: Overview of Key New Questionmark Reporting and Analytics features
  • Features and Functions: Extending the Questionmark Platform: Updates and overviews of APIs, Standards Support, and integrations with third party applications
  • Customer Panel Discussion: New Horizons for eAssessment

You can register for the conference online or visit our website for more information.

We look forward to seeing you in August!


UK Briefings Update: Join us for discussions on assessment security

Posted by Chloe Mendonca

Last week in London, we held the first of our three UK breakfast briefings taking place this summer.

In case you haven’t attended a breakfast briefing before, these events involve a morning of networking, best practice tips and live demonstrations of the newest assessment technologies.

Last week at our London briefing we received some great feedback about some of our latest features, including new capabilities within Questionmark Live and customised reporting using the Results API.

Our next two briefings will take place on Tuesday 17th June in London and Wednesday 18th June in Edinburgh. They will focus on some of the latest assessment security technologies that make it possible to administer high-stakes tests anywhere in the world.

ProctorU President Don Kassner will begin by explaining the basics of online invigilation and discussing proven strategies for alleviating the testing centre burden. Then Che Osborne, Questionmark’s VP of sales, will discuss methods you can use to protect your valuable assessment content and test results.

Each briefing will include a complimentary breakfast at 8:45 a.m., followed by presentations and discussions until about 12:30 p.m.

We hope you will be able to attend one of the sessions.

Item Analysis – Two Methods for Detecting DIF

Posted by Austin Fossey

My last post introduced the concept of differential item functioning. Today, I would like to introduce two common methods for detecting DIF in a classical test theory framework: the Mantel-Haenszel method and the logistic regression method.

I will not go into the details of these two methods, but if you would like to know more, there are many great online resources. I also recommend de Ayala’s book, The Theory and Practice of Item Response Theory, for a great, easy-to-read chapter discussing these two methods.


Mantel-Haenszel Method

The Mantel-Haenszel method determines whether or not there is a relationship between group membership and item performance, after accounting for participants’ abilities (as represented by total scores). The magnitude of the DIF is represented with a log odds estimate, known as αMH. In addition to the log odds ratio, we can calculate the Cochran-Mantel-Haenszel (CMH) statistic, which follows a chi-squared distribution. CMH shows whether or not the observed DIF is significant, though there is no sense of magnitude as there is with αMH.
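To make the arithmetic concrete, here is a minimal Python sketch (my own illustration, not from the original post) that computes αMH and the continuity-corrected CMH statistic from a set of 2x2 tables, one per total-score stratum; the table layout, function name and toy data are assumptions chosen for the example.

```python
import numpy as np
from scipy.stats import chi2

def mantel_haenszel_dif(tables):
    """Mantel-Haenszel DIF statistics from stratified 2x2 tables.

    Each table corresponds to one total-score stratum and is laid out as
        [[A, B],   # reference group: correct, incorrect
         [C, D]]   # focal group:     correct, incorrect
    """
    tables = [np.asarray(t, dtype=float) for t in tables]

    # Mantel-Haenszel common odds ratio: sum(A*D/N) / sum(B*C/N)
    alpha_mh = (sum(t[0, 0] * t[1, 1] / t.sum() for t in tables) /
                sum(t[0, 1] * t[1, 0] / t.sum() for t in tables))

    # Cochran-Mantel-Haenszel chi-squared statistic (1 df, continuity corrected)
    observed = sum(t[0, 0] for t in tables)
    expected = sum(t[0].sum() * t[:, 0].sum() / t.sum() for t in tables)
    variance = sum(t[0].sum() * t[1].sum() * t[:, 0].sum() * t[:, 1].sum()
                   / (t.sum() ** 2 * (t.sum() - 1)) for t in tables)
    cmh = (abs(observed - expected) - 0.5) ** 2 / variance

    return alpha_mh, cmh, chi2.sf(cmh, df=1)

# Toy example: three score strata for a single item
strata = [
    [[30, 10], [20, 15]],
    [[40, 10], [35, 20]],
    [[45,  5], [40, 12]],
]
alpha, stat, p = mantel_haenszel_dif(strata)
print(f"alpha_MH = {alpha:.2f}, CMH = {stat:.2f}, p = {p:.3f}")
```

In practice you would build one such table for each total-score band that contains enough participants from both the reference and focal groups.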

Logistic Regression

Unfortunately, the Mantel-Haenszel method is only consistent when investigating uniform DIF. If non-uniform DIF may be present, we can use logistic regression to investigate the presence of DIF. To do this, we run two logistic regression models where item performance is regressed on total scores (to account for the participants’ abilities) and group membership. One of the models will also include an interaction term between test score and group membership. We then can compare the fit of the two models. If the model with the interaction term fits better, then there is non-uniform DIF. If the model with no interaction term shows that group membership is a significant predictor of item performance, then there is uniform DIF. Otherwise, we can conclude that there is no DIF present.
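As a rough sketch of this model-comparison approach (again my own illustration, assuming a data set with columns named correct, total and group), the following code fits the two nested logistic regression models with statsmodels and applies a likelihood-ratio test to the interaction term:

```python
import statsmodels.formula.api as smf
from scipy.stats import chi2

def logistic_regression_dif(df):
    """Test one item for uniform and non-uniform DIF.

    Assumes a DataFrame with columns:
        correct - 0/1 item score
        total   - total test score (ability proxy)
        group   - 0 = reference group, 1 = focal group
    """
    # Model 1: ability + group membership
    m1 = smf.logit("correct ~ total + group", data=df).fit(disp=False)

    # Model 2: adds the ability-by-group interaction term
    m2 = smf.logit("correct ~ total + group + total:group", data=df).fit(disp=False)

    # Likelihood-ratio test: does the interaction term improve fit?
    lr_stat = 2 * (m2.llf - m1.llf)
    if chi2.sf(lr_stat, df=1) < 0.05:
        return "non-uniform DIF"
    # No interaction effect: is group still a significant predictor?
    if m1.pvalues["group"] < 0.05:
        return "uniform DIF"
    return "no DIF detected"

# Example usage with simulated data (illustrative only; a uniform DIF
# effect on the group coefficient is built into the response probabilities)
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500
group = rng.integers(0, 2, n)
total = rng.integers(10, 40, n)
p_correct = 1 / (1 + np.exp(-(0.15 * (total - 25) - 0.6 * group)))
df = pd.DataFrame({"correct": rng.binomial(1, p_correct),
                   "total": total, "group": group})
print(logistic_regression_dif(df))
```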

Just because we find a statistical presence of DIF does not necessarily mean that we need to panic. In Odds Ratio, Delta, ETS Classification, and Standardization Measures of DIF Magnitude for Binary Logistic Regression, Monahan, McHorney, Stump, & Perkins note that it is useful to flag items based on the effect size of the DIF.

Both the Mantel-Haenszel method and the logistic regression method can be used to generate standardized effect sizes. Monahan et al. provide three categories of effect sizes: A, B, and C. These category labels are often generated in DIF or item calibration software, and we interpret them as follows: Level A is negligible levels of DIF, level B is slight to moderate levels of DIF, and level C is moderate to large levels of DIF. Flagging rules vary by organization, but it is common for test developers to only review items that fall into levels B and C.
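To illustrate how such a flagging rule might be coded, the sketch below converts αMH to the ETS delta scale (ΔMH = -2.35 ln αMH) and applies a simplified version of the commonly cited A/B/C thresholds. These cut-offs are assumptions for illustration rather than the exact rules from Monahan et al., so substitute your own organization’s flagging policy.

```python
import math

def ets_dif_category(alpha_mh, significant):
    """Classify an item's DIF magnitude on the A/B/C scale.

    alpha_mh    - Mantel-Haenszel common odds ratio for the item
    significant - whether the item's DIF is statistically significant
                  (e.g. from the CMH test)

    Thresholds follow commonly cited ETS delta-scale rules (simplified);
    check your organization's own flagging policy before relying on them.
    """
    delta = -2.35 * math.log(alpha_mh)  # MH D-DIF on the ETS delta scale

    if not significant or abs(delta) < 1.0:
        return "A"  # negligible DIF
    if abs(delta) >= 1.5:
        return "C"  # moderate to large DIF: review the item
    return "B"      # slight to moderate DIF

print(ets_dif_category(alpha_mh=1.05, significant=False))  # -> A
print(ets_dif_category(alpha_mh=2.20, significant=True))   # -> C
```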