GDPR: 6 months to go

Posted by Jamie Armstrong

Anyone working with personal data, particularly in the European Union, will know that we are now just six months from “GDPR day” (as I have taken to calling it). On 25-May-2018, the EU General Data Protection Regulation (“GDPR”) will become applicable, ushering in a new privacy/data protection era with greater emphasis than ever on the rights of individuals when their personal data is used or stored by businesses and other organizations. In this blog post, I provide some general reminders about what the GDPR is and give some insight into Questionmark’s compliance preparations.

The GDPR replaces the current EU Data Protection Directive, which has been in place for more than 20 years. To keep pace with technological advances and achieve greater uniformity on data protection, the EU began work on the GDPR over 5 years ago and finalized the text in April 2016. There then followed a period for regulators and other industry bodies to provide guidance on what the GDPR actually requires, to help organizations in their compliance efforts. Like all businesses that process EU personal data, whether based in the U.S., the EU or elsewhere, Questionmark has spent the months since the GDPR was finalized ensuring that our practices and policies align with GDPR expectations.

For example, we have recently made available revised versions of our EU OnDemand service and US OnDemand service terms and conditions with new GDPR clauses, so that our customers can be assured that their agreements with us meet data controller-data processor contract requirements. We have updated our privacy policy to make clearer what personal data we gather and how this is used when people visit and interact with our website. There is also a helpful Knowledge Base article on our website that describes the personal data Questionmark stores.


One of the most talked-about provisions of the GDPR is Article 35, which deals with data protection impact assessments, or “DPIAs.” Basically, organizations acting as data controllers of personal data (meaning that they determine the purpose and means of the processing of that data) must complete a prior assessment of the impacts of processing that data if the processing is likely to result in a high risk to the rights and freedoms of data subjects. Organizations will need to make a judgment call as to whether the risk is high enough to require a DPIA. There are scenarios in which a DPIA will definitely be required, such as when data controllers process special categories of personal data like racial origin and health information, and in other cases some organizations may decide it’s safer to complete a DPIA even if one is not strictly necessary to comply with the GDPR.

The GDPR expects that data processors will help data controllers with DPIAs. Questionmark has therefore prepared an example draft DPIA template that may be used for completing an assessment of data processing within Questionmark OnDemand. The draft DPIA template is available for download now.

In the months before GDPR day we will see more guidance from the Article 29 Working Party and national data protection authorities to assist organizations with compliance. Questionmark is committed to helping our customers comply with the GDPR, and we’ll post more on this subject next year. We hope this update is useful in the meantime.

Important disclaimer: This blog is provided for general information and interest purposes only, is non-exhaustive and does not constitute legal advice. As such, the contents of this blog should not be relied on for any particular purpose, and you should seek the advice of your own legal counsel in considering GDPR requirements.

How online assessments (quizzes, tests and exams) can help information security awareness and compliance

Posted by John Kleeman

With the rise of data security breaches, many organizations are seeking to significantly strengthen their cybersecurity and better protect themselves from information security risks. I see increasing use of online assessments to support information security, and thought I’d provide some pointers on this.

There are three main ways in which online quizzes, tests, exams and surveys can aid information security:

  • Testing personnel to check understanding of security awareness and security policies
  • Ensuring and documenting that personnel in security roles are competent
  • Helping measure success against security objectives

Testing on security awareness and knowledge of policies

A cornerstone of good practice in security is training in security awareness. For example, the widely respected NIST 800-53 publication recommends that organizations provide general-purpose and role-based training to personnel as part of initial training and periodically thereafter. If you follow NIST standards, NIST control AT-4 also requires that all security training be documented and records retained.

There is widespread evidence that delivering an assessment is the best way of documenting that training took place, because it documents not just attendance but also understanding of the training. For more explanation, see the Questionmark blog post Proving compliance – not just attendance. Security awareness training only achieves its purpose if it is understood, so testing to confirm understanding is both widespread and sensible.

At Questionmark, we practice what we preach! All our employees have to take a test on data security when they join to check they understand our policies; all employees must also take and pass an updated test each year to ensure they continue to understand.

Ensuring that people in security roles are competent

The international security standard ISO 27001:2013 requires that an organization determine the necessary competence of personnel affecting information security performance. The organization must also ensure that personnel have such competence, and must retain evidence of this.

In a large organization with many different security roles, developing and using competence tests for each information security-related role is a good way of measuring and showing competence.  Knowing who is competent in which aspect of security and data protection matters: it ensures that  you are covering appropriate risks with appropriate people. Online testing is an effective way of measuring competence and makes it easy to update competence records by giving periodic tests every six months or annually.
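As a rough illustration (and not a feature of any particular product), here is a minimal Python sketch of the kind of record-keeping this enables. It assumes a simple list of competence records and a hypothetical retest interval, and flags anyone whose last passing result is older than that interval.

from dataclasses import dataclass
from datetime import date, timedelta
from typing import List, Optional

# Hypothetical retest interval; set this to whatever your own policy requires
# (for example, six months or a year).
RETEST_INTERVAL = timedelta(days=365)

@dataclass
class CompetenceRecord:
    employee: str
    security_role: str   # e.g. "incident responder" or "data protection lead"
    last_passed: date    # date of the most recent passing test result

def due_for_retest(records: List[CompetenceRecord],
                   today: Optional[date] = None) -> List[CompetenceRecord]:
    """Return the records whose last passing result is older than the retest interval."""
    today = today or date.today()
    return [r for r in records if today - r.last_passed > RETEST_INTERVAL]

if __name__ == "__main__":
    records = [
        CompetenceRecord("Alice", "incident responder", date(2016, 10, 1)),
        CompetenceRecord("Bob", "data protection lead", date(2017, 9, 15)),
    ]
    for r in due_for_retest(records, today=date(2017, 11, 25)):
        print(f"{r.employee} ({r.security_role}) is due for a competence retest")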

Helping measure information security objectives

ISO 27001 also requires setting up metrics to measure information security objectives. Results from assessments can be a good metric to use. Other standards say similar things. For example, the PCI standard widely used for credit card security says in its best practice guide:

“Metrics can be an effective tool to measure the success of a security awareness program, and can also provide valuable information to keep the security awareness program up-to-date and effective”

The PCI guide recognizes that good metrics include “feedback from personnel; quizzes and training assessments”. In my experience, as well as using quizzes and tests to measure knowledge, it also makes sense to use online surveys to assess actual practice by employees and to allow reporting of security concerns.

Testing on information security and data protection is an increasing use case for Questionmark’s trustable SaaS assessment management system, Questionmark OnDemand. Whichever security standard you are following (ISO 27001, NIST, PCI or one of several others), creating online assessments tailored to measure knowledge of your organization’s policies and procedures, using an assessment management system like Questionmark’s, can make a useful difference.

Don’t Let Compliance Blind You to Security

Posted by David Hunt

The field of security is constantly growing, shifting and adapting to meet an ever-changing threat landscape. To provide a degree of order in this chaotic landscape, we look to compliance standards such as NIST 800-53, PCI, HIPAA and various ISO standards. These standards provide frameworks which allow us to measure or determine the maturity of an organization’s security program. However, these frameworks need to be tempered by the current security environment, or we risk sacrificing our security for compliance.

This idea was illustrated nicely at this year’s BSides Las Vegas security conference. Lorrie Cranor, Chief Technologist for the Federal Trade Commission, gave the keynote speech on why we need to start training our clients and end users to reevaluate their thinking on mandatory password changes. In brief, Lorrie questioned the practice of frequent mandatory password changes, which are meant to prevent brute forcing (trying all possible combinations) or to lock out those who may have a shared or stolen password.

Here is what she found: based on two separate studies, frequent password change requirements can actually make us less secure. Lorrie’s message was to empower users to create good passwords, to agree on what “good” means, and to address common misconceptions. Questionmark OnDemand’s new portal empowers customers to determine what good passwords are and to create customized roles based on their requirements. When configuring password requirements for your OnDemand users, consider Lorrie’s advice for passwords (illustrated in the sketch after this list):

  • Avoid common words, names
  • Avoid patterns
  • Digits and symbols add strength
  • Understand different types of attacks
  • Make them Better (Not Perfect)
  • Change them only when Required
  • Start with your Core accounts
  • Use Tools where appropriate
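To make a few of these points concrete, here is a minimal Python sketch of the kind of feedback a password check could give. It is an illustration only, not how Questionmark OnDemand validates passwords; the tiny word list and patterns are placeholders, and a real deployment would use a much larger dictionary of common and breached passwords.

import re

# Illustrative blocklist only; a real check would use a much larger dictionary
# of common passwords and known-breached passwords.
COMMON_WORDS = {"password", "letmein", "qwerty", "welcome", "admin"}

def password_feedback(candidate: str) -> list:
    """Return warnings based on a few of the points in the list above."""
    warnings = []
    lowered = candidate.lower()
    if any(word in lowered for word in COMMON_WORDS):
        warnings.append("Avoid common words and names.")
    if re.search(r"(.)\1{2,}", candidate) or re.search(r"0123|1234|2345|abcd", lowered):
        warnings.append("Avoid repeated characters and simple sequences (patterns).")
    if not re.search(r"\d", candidate) or not re.search(r"[^\w\s]", candidate):
        warnings.append("Digits and symbols add strength.")
    return warnings

if __name__ == "__main__":
    for pw in ["Password1234", "correct-horse-battery-7!"]:
        print(pw, "->", password_feedback(pw) or "no obvious weaknesses")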

In this case, Lorrie did not blindly follow the path of compliance, which leads to ever-shorter password refresh intervals. She got it right by looking at the issue from a security perspective: what are the threats to the use of passwords, and are our mitigations of those threats actually reducing our risk? When it comes to mandatory password changes, the answer is NO! In some cases we are actually increasing our risk. So when setting password policies in Questionmark OnDemand, it is always good practice to review your settings regularly to ensure you are getting it right.

As with any good keynote speech, this one was a catalyst for many subsequent conversations both at and after the conference.

These conversations touched on the dangers of security programs that blindly check compliance requirements off a list, of news-driven security programs, and of the proverbial inability to see the security forest for the trees. The takeaway was that we have an obligation to “disobey.” Not that we should break the rules, but rather that we should question them, lest we face a fate such as that which befell those in “The Charge of the Light Brigade,” a poem describing the tragedy resulting from the miscommunication of orders at the Battle of Balaclava. In our case, we may not face a physical death by blindly following orders, but a virtual death is plausible.

Just as Lorrie asked “Why?” we should be asking it as well. As anyone who has spent time with a 3-year-old child knows, this one simple question is the key to building knowledge. We all should ask “Why?” and grow our technical and security knowledge to ensure we are not just compliant, but secure!

 


Checklists for Test Development

Posted by Austin Fossey

There are many fantastic books about test development, and there are many standards systems for test development, such as The Standards for Educational and Psychological Testing. There are also principled frameworks for test development and design, such as evidence-centered design (ECD). But it seems that the supply of qualified test developers cannot keep up with the increased demand for high-quality assessment data, leaving many organizations to piece together assessment programs, learning as they go.

As one might expect, this scenario leads to new tools targeted at these rookie test developers—simplified guidance documents, trainings, and resources attempting to idiot-proof test development. As a case in point, Questionmark seeks to distill information from a variety of sources into helpful, easy-to-follow white papers and blog posts. At an even simpler level, there appears to be increased demand for checklists that new test developers can use to guide test development or evaluate assessments.

For example, my colleague, Bart Hendrickx, shared a Dutch article from the Research Center for Examination and Certification (RCEC) at the University of Twente describing their Beoordelingssysteem (evaluation system). He explained that this system provides a rubric for evaluating educational assessments in areas like representativeness, reliability, and standard setting. The Buros Center for Testing addresses similar needs for users of mental assessments. In the Assessment Literacy section of their website, Buros has documents with titles like “Questions to Ask When Evaluating a Test”—essentially an evaluation checklist (though Buros also provides their own professional ratings of published assessments). There are even assessment software packages that seek to operationalize a test development checklist by creating a rigid workflow that guides the test developer through different steps of the design process.

The benefit of these resources is that they can help guide new test developers through basic steps and considerations as they build their instruments. It is certainly a step up from a company compiling a bunch of multiple choice questions on the fly and setting a cut score of 70% without any backing theory or test purpose. On the other hand, test development is supposed to be an iterative process, and without the flexibility to explore the nuances and complexities of the instrument, the results and the inferences may fall short of their targets. An overly simple, standardized checklist for developing or evaluating assessments may not consider an organization’s specific measurement needs, and the program may be left with considerable blind spots in its validity evidence.

Overall, I am glad to see that more organizations want to improve the quality of their measurements, and it is encouraging to see more training resources to help new test developers tackle the learning curve. Checklists may be a very helpful tool for many applications, and test developers frequently create their own checklists to standardize practices within their organization, such as item reviews.

What do our readers think? Are checklists the way to go? Do you use a checklist from another organization in your test development?

9 trends in compliance learning, training and assessment

This version is a re-post of a popular blog by John Kleeman

Where is the world of compliance training, learning and assessment going?

I’ve collaborated recently with two SAP experts, Thomas Jenewein of SAP and Simone Buchwald of EPI-USE, to write a white paper on “How to do it right – Learning, Training and Assessments in Regulatory Compliance” [free with registration]. In it, we suggested 9 key trends in the area. Here is a summary of the trends we see:

1. Increasing interest in predictive or forward-looking measures

Many compliance measures (for example, results of internal audits or training completion rates) are backwards looking. They tell you what happened in the past but don’t tell you about the problems to come. Companies can see clearly what is in their rear-view mirror, but the picture ahead of them is rainy and unclear. There are a lot of ways to use learning and assessment data to predict and look forward, and this is a key way to add business value.

2. Monitoring employee compliance with policies

A recent survey of chief compliance officers suggested that their biggest operational issue is monitoring employee compliance with policies, with over half of organizations raising this as a concern. An increasing focus for many companies is going to be how they can use training and assessments to check understanding of policies and to monitor compliance.

3. Increasing use of observational assessments

We expect growing use of observational assessments to help confirm that employees are following policies and procedures and to help assess practical skills. Readers of this blog will no doubt be familiar with the concept. If not, see Observational Assessments—why and how.

4. Compliance training conducted on mobile devices

The world is moving to mobile devices and this of course includes compliance training and assessment.

5. Informal learning

You would be surprised not to see informal learning in our list of trends. Increasingly, we all understand that formal learning is the tip of the iceberg and that most learning is informal and often happens on the job.

6. Learning in the extended enterprise

Organizations are becoming more interlinked, and another important trend is the expansion of learning to the extended enterprise, such as contractors or partners. Whether for data security, product knowledge, anti-bribery or a host of other regulatory compliance reasons, it’s becoming crucial to be able to deliver learning and to assess not only your employees but those of other organizations who work closely with you.

7. Cloud

There is a steady movement towards the cloud and SaaS for compliance learning, training and assessment; the ability to delegate all of the IT to an outside party is the most compelling advantage. Especially for compliance functions, the cloud offers a very flexible way to manage learning and assessment without requiring complex integrations or alignment with a company’s training departments or related functions.

8. Changing workforce needs

The workforce is constantly changing, and many “digital natives” are now joining organizations. To meet the needs of such workers, we’re increasingly seeing “gamification” in compliance training to help motivate and connect with employees. And the entire workforce is now accustomed to seeing high-quality user interfaces in consumer Web sites and expects the same in their corporate systems.

9. Big Data

E-learning and assessments are a unique way of reaching all of your employees, and there is huge potential in using analytics based on learning and assessment data. We have the potential to combine Big Data available from valid and reliable learning assessments with data from finance, sales and HR sources. See, for example, the illustration below from SAP BusinessObjects, which graphs assessment data against performance data to show what can be done.

[Illustration: data exported using OData from Questionmark into SAP BusinessObjects]
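As a rough sketch of what pulling that data programmatically might look like, the Python below queries an OData feed of assessment results and returns the rows as JSON, ready to join with finance, sales or HR data. The URL, credentials and field names are placeholders, not the actual Questionmark OnDemand endpoint; check the OData documentation for your own environment for the real entity and field names.

import requests

# Placeholder endpoint, credentials and field names; substitute the OData URL
# and entity/field names documented for your own environment.
ODATA_URL = "https://example.questionmarkondemand.com/odata/Results"  # hypothetical
AUTH = ("analytics_user", "********")

def fetch_assessment_results(assessment_name: str) -> list:
    """Return result rows for one assessment as a list of dicts."""
    params = {
        "$filter": f"AssessmentName eq '{assessment_name}'",  # illustrative field name
        "$format": "json",
    }
    response = requests.get(ODATA_URL, params=params, auth=AUTH, timeout=30)
    response.raise_for_status()
    # OData JSON responses conventionally wrap the rows in a "value" array.
    return response.json().get("value", [])

if __name__ == "__main__":
    rows = fetch_assessment_results("Data Security Awareness 2017")
    print(f"Fetched {len(rows)} result rows")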

For more information on these trends, see the white paper written with SAP and EPI-USE: “How to do it right – Learning, Training and Assessments in Regulatory Compliance”, available to download free with registration.

If you have other suggestions for trends, feel free to contribute them below.

Know what your questions are about before you deliver the test

Posted by Austin Fossey

A few months ago, I had an interesting conversation with an assessment manager at an educational institution—not a Questionmark customer, mind you. Finding nothing else in common, we eventually began discussing assessment design.

At this institution (which will remain anonymous), he admitted that they are often pressed for time in their assessment development cycle. There is not enough time to do all of the item development work they need to do before their students take the assessment. To get around this, their item writers draft all of the items, conduct an editorial review, and then deliver the items. The items are assigned topics after administration, and students’ total scores and topic scores are calculated from there. He asked me if Questionmark software allows test developers to assign topics and calculate topic scores after assessing the students, and I answered truthfully that it does not.

But why not? Is there a reason test developers should not do what is being practiced at this institution? Yes, there are in fact two reasons. Get ready for some psychometric finger-wagging.

Consider what this institution is doing. The items are drafted and subjected to an editorial review, but no one ever classifies the items within a topic until after the test has been administered. Recall what people typically do during a content review prior to administration:

  • Remove items that are not relevant to the domain.
  • Ensure that the blueprint is covered.
  • Check that items are assigned to the correct topic.

If topics are not assigned until after the participants have already tested, we risk the validity of the results and the legal defensibility of the test. If we have delivered items that are not relevant to the domain, we have wasted participants’ time and will need to adjust their total score. Okay, we can manage that by telling the participants ahead of time that some of the test items might not count. But if we have not asked the correct number of questions for a given area of the blueprint, the entire assessment score will be worthless—a threat to validity known as construct underrepresentation or construct deficiency in The Standards for Educational and Psychological Testing.

For example, if we were supposed to deliver 20 items from Topic A, but find out after the fact that only 12 items have been classified as belonging to Topic A, then there is little we can do about it besides rebuilding the test form and making everyone take the test again.
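A simple automated check can catch this before delivery rather than after. The Python sketch below is illustrative only (the blueprint numbers and field names are made up): it counts the items assigned to each topic on a draft form and reports any topic that falls short of, or exceeds, the blueprint, as well as items whose topic is not in the blueprint at all.

from collections import Counter

# Illustrative blueprint: required number of items per topic on the final form.
BLUEPRINT = {"Topic A": 20, "Topic B": 15, "Topic C": 10}

def check_blueprint_coverage(form_items: list) -> list:
    """Compare the topics assigned to a draft form against the blueprint before delivery."""
    counts = Counter(item["topic"] for item in form_items)
    problems = []
    for topic, required in BLUEPRINT.items():
        actual = counts.get(topic, 0)
        if actual != required:
            problems.append(f"{topic}: blueprint requires {required} items, form has {actual}")
    for topic in set(counts) - set(BLUEPRINT):
        problems.append(f"{topic}: {counts[topic]} items do not map to any blueprint topic")
    return problems

if __name__ == "__main__":
    draft_form = [{"id": i, "topic": "Topic A"} for i in range(12)]  # only 12 of the 20 required
    for problem in check_blueprint_coverage(draft_form):
        print(problem)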

The Standards provide helpful guidance in these matters. For this particular case, the Standards point out that:

“The test developer is responsible for documenting that the items selected for the test meet the requirements of the test specifications. In particular, the set of items selected for a new test form . . . must meet both content and psychometric specifications.” (p. 82)

Publications describing best practices for test development also specify that the content must be determined before delivering an operational form. For example, in their chapter in Educational Measurement (4th Edition), Cynthia Schmeiser and Catherine Welch note the importance of conducting a content review of items before field testing, as well as a final content review of a draft test form before it becomes operational.

In Introduction to Classical and Modern Test Theory, Linda Crocker and James Algina also made an interesting observation about classroom assessments, noting that students expect to be graded on all of the items they have been asked to answer. Even if notified in advance that some items might not be counted (as one might do in field testing), students might not consider it fair that their score is based on a yet-to-be-determined subset of items that may not fully represent the content that is supposed to be covered.

This is why Questionmark’s software is designed the way it is. When creating an item, item writers must assign an item to a topic, and items can be classified or labeled along other dimensions (e.g., cognitive process) using metatags. Even if an assessment program cannot muster any further content review, at least the item writer has classified items by content area. The person building the test form then has the information they need to make sure that the right questions get asked.

We have a responsibility as test developers to treat our participants fairly and ethically. If we are asking them to spend their time taking a test, then we owe them the most useful measurement that we can provide. Participants trust that we know what we are doing. If we postpone critical, basic development tasks like content identification until after participants have already given us their time, we are taking advantage of that trust.