Questionmark not impacted by Bash/ShellShock Internet vulnerability

Posted by John Kleeman

You may have heard of the recent “ShellShock” vulnerability, in which a bug in a program called “GNU Bash” puts Internet systems containing the program at risk. The bug was revealed to the public on September 24th; here is Questionmark’s response.

Our internal Computer Emergency Response Team (CERT) immediately reviewed our servers and systems to identify any potential vulnerabilities. Fortunately, most Questionmark systems use Microsoft technology and do not contain the “GNU Bash” program, and Questionmark software is not impacted by this vulnerability.

Here is some additional information for our customers:

Questionmark’s cloud-based products and services:

Questionmark Live

  • Our collaborative authoring system, Questionmark Live, was not vulnerable to the bug.

Questionmark’s US OnDemand Service

Questionmark’s European OnDemand Service

  • Questionmark’s European OnDemand Service was not vulnerable to the bug. One related system uses Linux; this was reviewed and there was no way to exploit the vulnerability, but it has been patched in any case.

If you use Linux or OS X on client computers accessing Questionmark’s cloud-based services, there is no vulnerability directly due to use of Questionmark OnDemand, but you should check with your IT department on whether it would be wise to patch or update these client computers.

Questionmark products for on-premise deployment

Questionmark Perception

  • Our behind-the-firewall product, Questionmark Perception, does not require or use GNU Bash, and it runs on Microsoft Windows, which does not usually ship with GNU Bash. This vulnerability will not impact most customer servers running Questionmark Perception. The small number of customers who use Linux or OS X within their Questionmark Perception environment (for example, to run the Perception database or for participants to take assessments) should work with their IT departments to patch those systems; a simple check is sketched below. All customers should also check the other, non-Questionmark systems in their environments.
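For customers running Linux or OS X, a widely published one-line test indicates whether a given bash build is affected by the original ShellShock bug (CVE-2014-6271). The sketch below simply wraps that well-known test in Python; it checks only the local bash binary, covers only the original CVE (not the follow-on variants), and is no substitute for applying your vendor’s patches.

    import os
    import subprocess

    # Classic ShellShock (CVE-2014-6271) probe: export a crafted function
    # definition through the environment and see whether bash executes the
    # command appended after the function body.
    env = dict(os.environ, testvar="() { :;}; echo VULNERABLE")
    result = subprocess.run(
        ["bash", "-c", "echo shellshock probe"],
        env=env,
        capture_output=True,
        text=True,
    )
    if "VULNERABLE" in result.stdout:
        print("This bash build executes trailing commands in function definitions - patch it.")
    else:
        print("The basic ShellShock probe did not trigger on this bash build.")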

If any Questionmark user or customer has questions, please raise them with your Questionmark account manager or with technical support. I hope that this rapid response and transparency highlight our commitment to security. This also illustrates the value of an OnDemand service: rather than having to rely on internal IT to catch up and patch vulnerable systems, you can delegate this to Questionmark as your service provider.

Watch this video for more about Questionmark’s commitment to security.


Acronyms, Abbreviations and APIs

Posted by Steve Lay

As Questionmark’s integrations product owner, it is all too easy to speak in acronyms and abbreviations. Of course, with the advent of modern day ‘text-speak,’ acronyms are part of everyday speech. But that doesn’t mean everyone knows what they mean. David Cameron, the British prime minister, was caught out by the everyday ‘LOL’ when it was revealed during a recent public inquiry that he’d used it thinking it meant ‘lots of love’.

In the technical arena things are not so simple. Even spelling out an acronym like SOAP (which stands for Simple Object Access Protocol) doesn’t necessarily make the meaning any clearer. In this post, I’m going to do my best to explain the meanings of some of the key acronyms and abbreviations you are likely to hear talked about in relation to Questionmark’s Open Assessment Platform.

API

At a recent presentation (on Extending the Platform), while I was talking about ways of integrating with Questionmark technologies, I asked the audience how many people knew what ‘API’ stood for. The response prompted me to write this blog article!

The term, API, is used so often that it is easy to forget that it is not widely known outside of the computing world.

API stands for Application Programming Interface. In this case the ‘application’ refers to some external software that provides functionality beyond that which is available in the core platform. For example, it could be a custom registration application that collects information in a special way that makes it possible to automatically create a user and schedule them to a specified assessment.

The API is the information that the programmer needs to write this registration application. ‘Interface’ refers to the join between the external software and the platform it is extending. (Our own APIs are documented on the Questionmark website and can be reached directly from developer.questionmark.com.)
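To make that concrete, here is a rough sketch of what such a registration application might look like. Everything in it (the base URL, the endpoints and the field names) is invented for illustration; it is not Questionmark’s actual API, just the general shape of a program written against one.

    import requests  # a commonly used third-party HTTP library

    BASE_URL = "https://assessment.example.com/api"  # hypothetical platform URL

    def register_and_schedule(first_name, last_name, email, assessment_id):
        """Create a participant, then schedule them for a specified assessment."""
        # Step 1: create the participant (endpoint and fields are illustrative only).
        participant = requests.post(
            BASE_URL + "/participants",
            json={"first_name": first_name, "last_name": last_name, "email": email},
        ).json()

        # Step 2: schedule the newly created participant to the assessment.
        schedule = requests.post(
            BASE_URL + "/schedules",
            json={"participant_id": participant["id"], "assessment_id": assessment_id},
        ).json()
        return schedule

    if __name__ == "__main__":
        print(register_and_schedule("Jane", "Doe", "jane.doe@example.com", "QM-101"))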

APIs and Standards

APIs often refer to technical standards. Using standards helps the designer of an API focus on the things that are unique to the platform concerned without having to go into too much incidental detail. Using a common standard also helps programmers develop applications more quickly. Pre-written code that implements the underlying standard will often be available for programmers to use.

To use a physical analogy, some companies will ask you to send them a self-addressed stamped envelope when requesting information from them. The company doesn’t need to explain what an envelope is, what a stamp is and what they mean by an address! These terms act a bit like technical standards for the physical world. The company can simply ask for one because they know you understand this request. They can focus their attention on describing their services, the types of requests they can respond to and the information they will send you in return.

QMWISe

QMWISe stands for Questionmark Web Integration Services Environment. This API allows programmers to exchange information with the Questionmark OnDemand software-as-a-service or with the on-premise Questionmark Perception software. QMWISe is based on an existing standard called SOAP (see above).

SOAP defines a common structure used for sending and receiving messages; it even defines the concept of a virtual ‘envelope’. Referring to the SOAP standard allows us to focus on the contents of the messages being exchanged such as creating participants, creating schedules, fetching results and so on.
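To give a feel for what this looks like on the wire, here is a sketch of posting a SOAP envelope from Python. The service URL, method and element names are placeholders rather than the exact QMWISe schema, and in practice a programmer would usually let a SOAP toolkit build the envelope from the published service description.

    import requests

    # A SOAP request is an XML 'envelope' posted over HTTP. The element names
    # below are illustrative placeholders, not the real QMWISe message schema.
    soap_envelope = """<?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetParticipant xmlns="http://example.com/placeholder-namespace">
          <ParticipantName>jane.doe</ParticipantName>
        </GetParticipant>
      </soap:Body>
    </soap:Envelope>"""

    response = requests.post(
        "https://ondemand.example.com/qmwise.asmx",   # placeholder service URL
        data=soap_envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )
    print(response.status_code)
    print(response.text)  # the reply is another SOAP envelope containing the result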

REST

REST stands for REpresentational State Transfer and must qualify as one of the more obscure acronyms! In practice, REST represents something of a back-to-basics approach to APIs when contrasted with those based on SOAP. It is not, in itself, a standard but merely a set of stylistic guidelines for API designers, defined in a doctoral dissertation by Roy Fielding, a co-author of the HTTP standard (see below).

As a result, APIs are sometimes described as ‘RESTful’, meaning they adhere to the basic principles defined by REST. These days, publicly exposed APIs are more likely to be RESTful than SOAP-based. Central to the idea of a RESTful API is that the things your API deals with are identified by a URL (Uniform Resource Locator), the web’s equivalent of an address. In our case, that would mean that each participant, schedule, result, etc. would be identified by its own URL.
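Here is a minimal sketch of that idea, again with invented URLs rather than Questionmark’s actual endpoints: each participant, schedule or result is a resource with its own URL, and the ordinary HTTP verbs act on those URLs.

    import requests

    BASE = "https://assessment.example.com/api"   # hypothetical RESTful service

    # In a RESTful API each 'thing' is identified by its own URL...
    participant_url = BASE + "/participants/12345"
    schedule_url = BASE + "/schedules/555"

    # ...and the standard HTTP verbs operate on those resources.
    participant = requests.get(participant_url).json()                     # read
    requests.put(participant_url, json={"email": "new.name@example.com"})  # update
    requests.delete(schedule_url)                                          # remove
    print(participant)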

HTTP

RESTful APIs draw heavily on HTTP. HTTP stands for HyperText Transfer Protocol. It was invented by Tim Berners-Lee and forms one of the key inventions that underpin the web as we know it. Although conceived as a way of publishing HyperText documents (i.e., web pages), the underlying protocol is really just a way of sending messages. It defines the virtual envelope into which these messages are placed. HTTP is familiar as the prefix to most URLs.
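To see HTTP as ‘just a way of sending messages’ in action, here is a small sketch using only Python’s standard library: it sends one request and reads back the response envelope (a status line, headers, then a body). The host example.com is simply a neutral test address.

    from http.client import HTTPSConnection

    # Open a connection and send a single HTTP request 'envelope'.
    conn = HTTPSConnection("example.com")
    conn.request("GET", "/", headers={"Accept": "text/html"})

    # The response is itself a message: a status line, headers, then a body.
    response = conn.getresponse()
    print(response.status, response.reason)        # e.g. 200 OK
    print(response.getheader("Content-Type"))      # one of the response headers
    body = response.read()
    print(len(body), "bytes of body received")
    conn.close()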

OData

Finally, this brings me to OData, which stands for the Open Data Protocol. This standard makes it much easier to publish RESTful APIs. I recently wrote about OData in the post What is OData, and why is it important?

Although arguably simpler than SOAP, OData provides an even more powerful platform for defining APIs. For some applications, OData itself is enough, and tools can be integrated with no additional programming at all. The PowerPivot plugin for Microsoft Excel is a good example: using Excel, you can extract and analyse data through the Questionmark Results API (itself built on OData) without writing any Questionmark-specific code.
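As a rough sketch of what an OData query looks like (the root URL, entity name and field names below are placeholders, not the actual Results API): OData layers standard query options such as $filter, $orderby and $top on top of plain RESTful URLs, which is exactly the kind of request that tools like PowerPivot construct for you behind the scenes.

    import requests

    ODATA_ROOT = "https://reporting.example.com/odata"   # placeholder endpoint

    # OData's standard query options let the caller filter, sort and page the
    # data; the entity and field names here are illustrative only.
    response = requests.get(
        ODATA_ROOT + "/Results",
        params={
            "$filter": "Score ge 80",          # only results scoring 80 or more
            "$orderby": "WhenFinished desc",
            "$top": "25",
            "$format": "json",
        },
        auth=("api_user", "api_password"),     # placeholder credentials
    )
    for row in response.json()["value"]:       # 'value' wraps rows in OData JSON
        print(row)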

For more about OData, check out this presentation on Slideshare.

Item Development – Five Tips for Organizing Your Drafting Process

Posted by Austin Fossey

Once you’ve trained your item writers, they are ready to begin drafting items. But how should you manage this step of the item development process?

There is an enormous amount of literature about item design and item writing techniques (which we will not cover in this series), but as Cynthia Schmeiser and Catherine Welch observe in their chapter in Educational Measurement (4th ed.), there is very little guidance about the item writing process. This is surprising, given that item writing is critical to effective test development.

It may be tempting to let your item writers loose in your authoring software with a copy of the test specifications and see what comes back, but if you invest time and effort in organizing your item drafting sessions, you are likely to retain more items and better support the validity of the results.

Here are five considerations for organizing item writing sessions:

  • Assignments – Schmeiser and Welch recommend giving each item writer a specific assignment to set expectations and to ensure that you build an item bank large enough to meet your test specifications. If possible, distribute assignments evenly so that no single author has undue influence over an entire area of your test specifications. Set realistic goals for your authors, keeping in mind that some of their items will likely be dropped later in item reviews.
  • Instructions – In the previous post, we mentioned the benefit of a style guide for keeping item formats consistent. You may also want to give item writers instructions or templates for specific item types, especially if you are working with complex item types. (You should have defined in advance the types of items that can be used to measure each area of your test specifications.)
  • Monitoring – Monitor item writers’ progress and spot-check their work. This is not a time to engage in full-blown item reviews, but periodic checks can help you to provide feedback and correct misconceptions. You can also check in to make sure that the item writers are abiding by security policies and formatting guidelines. In some item writing workshops, I have also asked item writers to work in pairs to help check each other’s work.
  • Communication – With some item designs, several people may be involved in building the item. One team may be in charge of developing a scoring model, another team may draft content, and a third team may add resources or additional stimuli, like images or animations. These teams need to be organized so that materials are handed off on time, but they also need to be able to provide iterative feedback to each other. For example, if the content team finds a loophole in the scoring model, they need to be able to alert the other teams so that it can be resolved.
  • Be Prepared – Be sure to have a backup plan in case your item writing sessions hit a snag. Know what you are going to do if an item writer does not complete an assignment or if content is compromised.

Many of the details of the item drafting process will depend on your item types, resources, schedule, authoring software, and availability of item writers. Determine what you need to accomplish, and then organize your item writing sessions as much as possible so that you meet your goals.

9 trends in compliance learning, training and assessment

Posted by John Kleeman

Where is the world of compliance training, learning and assessment going?

I’ve collaborated recently with two SAP experts, Thomas Jenewein of SAP and Simone Buchwald of EPI-USE, to write a white paper, “How to do it right – Learning, Training and Assessments in Regulatory Compliance” (free with registration). In it, we suggested 9 key trends in the area. Here is a summary of the trends we see:

1. Increasing interest in predictive or forward-looking measures

Many compliance measures (for example, results of internal audits or training completion rates) are backwards looking. They tell you what happened in the past but don’t tell you about the problems to come. Companies can see clearly what is in their rear-view mirror, but the picture ahead of them is rainy and unclear. There are a lot of ways to use learning and assessment data to predict and look forward, and this is a key way to add business value.

2. Monitoring employee compliance with policies

A recent survey of chief compliance officers suggested that their biggest operational issue is monitoring employee compliance with policies, with over half of organizations raising this as a concern. An increasing focus for many companies is going to be how they can use training and assessments to check understanding of policies and to monitor compliance.

3. Increasing use of observational assessments

We expect growing use of observational assessments to help confirm that employees are following policies and procedures and to help assess practical skills. Readers of this blog will no doubt be familiar with the concept. If not, see Observational Assessments – why and how.

4. Compliance training conducted on mobile devices

The world is moving to mobile devices, and this of course includes compliance training and assessment.

5. Informal learning

You would be surprised not to see informal learning in our list of trends. Increasingly we are all understanding that formal learning is the tip of the iceberg and that most learning is informal and often on the job.

6. Learning in the extended enterprise

Organizations are becoming more interlinked, and another important trend is the expansion of learning to the extended enterprise, such as contractors or partners. Whether for data security, product knowledge, anti-bribery or a host of other regulatory compliance reasons, it’s becoming crucial to be able to deliver learning and to assess not only your employees but those of other organizations who work closely with you.

7. Cloud

There is a steady movement towards the cloud and SaaS for compliance learning, training, and assessment; the huge advantage of delegating all of the IT to an outside party is the most compelling factor. Especially for compliance functions, the cloud offers a very flexible way to manage learning and assessment without requiring complex integrations or alignments with a company’s training departments or related functions.

8. Changing workforce needs

The workforce is constantly changing, and many “digital natives” are now joining organizations. To meet the needs of such workers, we’re increasingly seeing “gamification” in compliance training to help motivate and connect with employees. And the entire workforce is now accustomed to seeing high-quality user interfaces in consumer Web sites and expects the same in their corporate systems.

9. Big Data

E-learning and assessments are a unique way of reaching all your employees, and there is huge potential in applying analytics to learning and assessment data. We have the potential to combine Big Data from valid and reliable learning assessments with data from finance, sales, and HR sources. See, for example, the illustration below from SAP BusinessObjects, which graphs assessment data against performance data to show what can be done.

[Illustration: data exported using OData from Questionmark into SAP BusinessObjects]


For information on these trends, see the white paper written with SAP and EPI-USE: “How to do it right – Learning, Training and Assessments in Regulatory Compliance”, available free to download with registration. Thomas, Simone and I are also doing a free-to-attend webinar on this subject on October 1st (also a German language one on September 22nd). You can see details and links to the webinars here.

If you have other suggestions for trends, feel free to contribute them below.

New tools for building questions and assessments

Posted by Jim Farrell

If you are a Questionmark customer and aren’t using Questionmark Live, what are you waiting for?

More than 2000 of our customers have started using Questionmark Live this year, so I think now is a good time to call out some of the features that are making it a vital part of their assessment development processes.

Let’s start with building questions. One new tool our customers are using is the ability to add notes to a question. This allows reviewers to open questions and leave comments for content developers without changing the version of the question.


Now over to the assessment-building side of things. Our new assessment interface allows users to add questions in many different ways, including single questions, entire topics, and random pulls from a topic. You can even prevent participants from seeing repeated questions during retakes when questions are pulled at random. Jump blocks allow you to shorten test time or direct participants who obtain a certain score to extra questions. You can also easily tag questions as demographic questions so they can be used as filters in our reporting and analytics tools.


We have also added more robust outcome capabilities to give your test administrators new tools for controlling how assessments are completed and reported. You can have multiple outcomes for different score bands, but you can also make it so participants have to get certain scores on particular topics before they can pass a test. For example, suppose you are giving a test on Microsoft Office and you set a pass score at 80%. You probably want to make sure that your participants understand all the products and don’t bomb one of them. You can set a prerequisite for each topic at 80% to make sure participants have knowledge of all areas before passing. If someone gets 100% on Word questions and 60% on Excel questions, they would not pass. Powerful outcome controls help ensure you are truly measuring the goals of your learning organization.
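To make that rule concrete, here is a small illustrative sketch (not Questionmark’s internal scoring code, and it assumes equally weighted topics) of how an overall pass score combines with per-topic prerequisites:

    def passes(topic_scores, overall_pass=80, topic_prerequisite=80):
        """Pass only if the overall percentage AND every topic meet their thresholds."""
        overall = sum(topic_scores.values()) / len(topic_scores)
        return overall >= overall_pass and all(
            score >= topic_prerequisite for score in topic_scores.values()
        )

    # Word 100%, Excel 60%: the overall score reaches 80%, but the Excel
    # prerequisite is missed, so the participant does not pass.
    print(passes({"Word": 100, "Excel": 60}))   # False
    print(passes({"Word": 90, "Excel": 85}))    # True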


If you aren’t using Questionmark Live, you are missing out: we are releasing new functionality every month. Get access and start getting your subject matter experts to contribute to your item banks.

Item Development – Training Item Writers

Posted by Austin Fossey

Once we have defined the purpose of the assessment, completed our domain analysis, and finalized a test blueprint, we might be eager to jump right into item writing, but there is one important step to take before we begin: training!

Unless you are writing the entire assessment yourself, you will need a group of item writers to develop the content. These item writers are likely experts in their fields, but they may have very little understanding of how to create assessment content. Even if these experts have experience writing items, it may be beneficial to provide refresher trainings, especially if anything has changed in your assessment design.

In their chapter in Educational Measurement (4th ed.), Cynthia Schmeiser and Catherine Welch note that it is important to consider the qualifications and representativeness of your item writers. It is common to ask item writers to fill out a brief survey to collect demographic information. You should keep these responses on file and possibly add a brief document explaining why you consider these item writers to be a qualified and representative sample.

Schmeiser and Welch also underscore the need for security. Item writers should be trained on your content security guidelines, and your organization may even ask them to sign an agreement stating that they will abide by those guidelines. Make sure everyone understands the security guidelines, and have a plan in place in case there are any violations.

Next, begin training your item writers on how to author items, which should include basic concepts about cognitive levels, drafting stems, picking distractors, and using specific item types appropriately. Schmeiser and Welch suggest that the test blueprint be used as the foundation of the training. Item writers should understand the content included in the specifications and the types of items they are expected to create for that content. Be sure to share examples of good and bad items.

If possible, ask your writers to create some practice items, then review their work and provide feedback. If they are using the item authoring software for the first time, be sure to acquaint them with the tools before they are given their item writing assignments.

Your item writers may also need training on your item data, delivery method, or scoring rules. For example, you may ask item writers to cite a reference for each item, or you might ask them to weight certain items differently. Your instructions need to be clear and precise, and you should spot check your item writers’ work. If possible, write a style guide that includes clear guidelines about item construction, such as fonts to use, acceptable abbreviations, scoring rules, acceptable item types, et cetera.

I know from my own experience (and Schmeiser and Welch agree) that investing more time in training will have a big payoff down the line. Better training leads to substantially better item retention rates when items are reviewed. If your item writers are not trained well, you may end up throwing out many of their items, which may not leave you enough for your assessment design. Considering the cost of item development and the time spent writing and reviewing items, putting in a few more hours of training can equal big savings for your program in the long run.
