xAPI: A Way to Enable Learning Analytics

Posted by John Kleeman

Many organizations train and test individuals to ensure they have the right skills and competencies. In doing so, they amass vast amounts of data, which can be used to identify further training opportunities and improve performance. One way of managing this data is to use the Experience API (or xAPI) to pass data from disparate systems into a central Learning Record Store.

xAPI is maintained by the United States Advanced Distributed Learning (ADL) Initiative (see www.adlnet.gov), and many Questionmark users have requested xAPI support so that they can export test data for analysis. For this reason, we’re pleased to let you know that earlier this year, we released our xAPI Connector for OnPremise and OnDemand customers. The integration lets the Questionmark platform connect and ‘talk’ to Learning Record Stores, creating an agile and effective learning and development ecosystem.

The challenges organizations face

For any organization, measuring the competence of employees or consultants through assessment is an essential element of ensuring the team is capable and fit-for-purpose. During this process, organizations collect large amounts of data that needs to be stored under strict data privacy regulations.

Once employers have control of learning and assessment data, they can interrogate it to analyze employee performance and the effectiveness of training programs. With Questionmark’s xAPI integration, customers can now transfer data from the assessment platform to their Learning Record Store.

What xAPI does

xAPI provides a standard means for collecting data from training and assessment experiences. The specification allows different systems to communicate and share data, which can then be stored and analyzed. This helps organizations to make better decisions by collecting, tracking, and quantifying learning activities to see what works and what doesn’t.
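To make this concrete, here is a minimal sketch of an xAPI statement recording an assessment result, following the actor/verb/object structure the xAPI specification defines. The learner, activity ID and scores below are hypothetical placeholders; only the verb IRI comes from the ADL verb registry.

```python
import json

# A minimal xAPI statement: "Pat Jones passed the Product Knowledge Quiz".
# Actor, object ID and score values are made up for illustration.
statement = {
    "actor": {
        "name": "Pat Jones",
        "mbox": "mailto:pat.jones@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/passed",
        "display": {"en-US": "passed"},
    },
    "object": {
        "id": "http://example.com/assessments/product-knowledge-quiz",
        "definition": {"name": {"en-US": "Product Knowledge Quiz"}},
    },
    "result": {
        "score": {"scaled": 0.85, "raw": 17, "max": 20, "min": 0},
        "success": True,
        "completion": True,
    },
}

# A Learning Record Store receives statements like this as JSON, via an
# HTTP POST to its /statements endpoint with an X-Experience-API-Version
# header; here we just serialize the statement.
payload = json.dumps(statement)
print(payload)
```

Because every conformant system emits and accepts this same statement shape, an LRS can store results from an assessment platform next to data from any other xAPI-enabled tool.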

Organizations are increasingly investing in Learning Record Stores to host and analyze learning and assessment data. With xAPI, Questionmark customers will now be able to send assessment data directly to their Learning Record Stores, so that they can measure the impact of learning and development activities and maximize the return on their investment.

xAPI offers universal integration: because statements use a standard format and transport, users can store data in any conformant Learning Record Store. Reporting across multiple geographies is straightforward, so users can analyze, compare and contrast data, and because the data arrives in a common format it is easy to understand and interpret. This provides a solid starting point for big data learning analytics. And, as an assessment technology provider, Questionmark has extended its footprint in the broader learning ecosystem by releasing the xAPI functionality.

If you’d like to find out more about the full range of assessment features that Questionmark offers, contact us or request a demo.



Workplace Exams 101: How to Prevent Cheating

Posted by John Kleeman

A hot topic in the assessment world today is cheating and what to do to prevent it. Many organizations test their employees, contractors and other personnel to check their competence and skills. These include compliance tests, on-boarding tests, internal certification tests, end-of-course tests and product knowledge quizzes.

There are two reasons why cheating matters in workplace exams:

Issue #1: Validity

Firstly, the validity of the test or exam is compromised: any decision made as a result of the test is unsound. For example, you may use a test to check whether someone is competent to sell your products, but if cheating happens, a pass no longer demonstrates that competence. Or you may be checking whether someone can perform a task safely, and if cheating happens, safety is compromised. Tests and exams are used to make important decisions about people with business, financial and regulatory consequences. If someone cheats at a test or exam, you are making those decisions based on bad data.

Issue #2: Integrity

Secondly, people who cheat at tests or exams have demonstrated a lack of integrity. If they will cheat on a test or exam, what else might they lie about, or how else might they defraud your organization? Will falsifying a record or report be next? Regulators often have rules requiring integrity and impose sanctions on those who demonstrate a lack of it.

For example, in the financial sector, FINRA’s Rule 2010 requires individuals to “observe high standards of commercial honor” and is used to ban people found cheating at exams or continuing education tests. In the accountancy sector, both AICPA and CIMA require accountants to have integrity and those found cheating at tests have been banned or otherwise sanctioned. And in the medical and pharmaceutical field, regulators have codes of conduct which include honesty. For example, the UK General Medical Council requires doctors to “always be honest about your experience, qualifications and current role” and interprets cheating at exams as a violation of this.

The well-respected International Test Commission Guidelines on the Security of Tests, Exams and Other Assessments suggest six categories of cheating threat, shown below alongside my own examples of how each can occur in the work environment.


ITC category – Typical examples in the workplace

Using test content pre-knowledge
– An employee takes the test and passes questions to a colleague still to take it
– Someone authoring questions leaks them to test-takers
– A security vulnerability allows questions to be seen in advance

Receiving expert help while taking the test
– One employee sits and coaches another during the test
– IM or phone help while taking a test
– A manager or proctor supervising the test helps a struggling employee

Using unauthorized test aids
– Access to the Internet allows googling the answers
– Unauthorized study guides brought to the test

Using a proxy test taker
– A manager sends an assistant or secretary to take the test in his/her place
– Other situations where a colleague stands in for another

Tampering with answer sheets or stored test results
– Technically minded employees subvert communication with the LMS or other corporate systems and change their results

Copying answers from another user
– Two people sitting near each other share or copy answers
– Organized answer sharing within a cohort or group of trainees


If you are interested in learning more about any of the threats above, I’ve shared approaches to mitigating them in the workplace in our webinar, Workplace Exams 101: How to Prevent Cheating. You can download the webinar recording and slides HERE.

Ten Key Considerations for Defensibility and Legal Certainty for Tests and Exams

Posted by John Kleeman

In my previous post, Defensibility and Legal Certainty for Tests and Exams, I described the concepts of Defensibility and Legal Certainty for tests and exams. Making a test or exam defensible means ensuring that it can withstand legal challenge. Legal certainty relates to whether laws and regulations are clear and precise and people can understand how to conduct themselves in accordance with them. Lack of legal certainty can provide grounds to challenge test and exam results.

Questionmark has just published a new best practice guide on Defensibility and Legal Certainty for Tests and Exams. This blog post describes ten key considerations when creating tests and exams that are defensible and encourage legal certainty.

1. Documentation

Without documentation, it will be very hard to defend your assessment in court, as you will have to rely on people’s recollections. It is important to keep records of the development of your tests and ensure that these records are updated so that they accurately reflect what you are doing within your testing programme. Such records will be powerful evidence in the event of any dispute.

2. Consistent procedures

Testing is more a process than a project. Tests are typically created and then updated over time, and it’s important that procedures remain consistent throughout. For example, a question added to the test after its initial development should go through the same procedures as the questions written when the test was first developed. If you adopt an ad hoc approach to test design and delivery, you are exposing yourself to an increased risk of successful legal challenge.

3. Validity

Validity, reliability and fairness are the three generally accepted principles of good test design. Broadly speaking, validity is how well the assessment matches its purpose. If your tests and exams lack validity, they will be open to legal challenge.

4. Reliability

Reliability is a measure of precision and consistency in an assessment and is also critical. There are many posts explaining reliability and validity on this blog; one useful one is Understanding Assessment Validity and Reliability.
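For illustration, reliability is often estimated with an internal-consistency coefficient such as Cronbach’s alpha, computed from per-item and total-score variances. The sketch below uses made-up correct/incorrect item scores; it shows the standard formula, not any particular vendor’s implementation.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances)/total variance).

    item_scores: one inner list per item, aligned across the same test-takers.
    """
    k = len(item_scores)
    item_vars = [pvariance(item) for item in item_scores]
    total_scores = [sum(items) for items in zip(*item_scores)]  # per test-taker
    total_var = pvariance(total_scores)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 1/0 (correct/incorrect) scores: 3 items, 5 test-takers.
scores = [
    [1, 1, 0, 1, 0],  # item 1
    [1, 1, 0, 1, 1],  # item 2
    [1, 0, 0, 1, 0],  # item 3
]
print(round(cronbach_alpha(scores), 3))  # → 0.794
```

Values closer to 1 indicate that the items measure consistently; a low alpha on a real test would be a signal to review the items before relying on the scores.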

5. Fairness (or equity)

Probably the biggest cause of legal disputes over assessments is whether they are fair or not. The international standard ISO 10667-1:2011 defines equity as the “principle that every assessment participant should be assessed using procedures that are fair and, as far as possible, free from subjectivity that would make assessment results less accurate”. A significant part of fairness/equity is that a test should not advantage or disadvantage individuals because of characteristics irrelevant to the competence or skill being measured.

6. Job and task analysis

The skills and competences needed for a job change over time. Job and task analysis are techniques used to analyse a job, identify the key tasks performed and determine the skills and competences needed. If you use a test for a job without some kind of job-skills analysis, it will be hard to prove and defend that the test is actually appropriate for measuring someone’s competence and skills for that job.

7. Set the cut or pass score fairly

It is important that you have evidence to reasonably justify that the cut score used to divide pass from fail does genuinely distinguish the minimally competent from those who are not competent. You should not just choose a score of 60%, 70% or 80% arbitrarily, but instead you should work out the cut score based on the difficulty of questions and what you are measuring.

8. Test more than just knowledge recall

Most real-world jobs and skills need more than just knowing facts. Questions which test remember/recall skills are easy to write but they only measure knowledge. For most tests, it is important that a wider range of skills are included in the test. This can be done with conventional questions that test above knowledge or with other kinds of tests such as observational assessments.

9. Consider more than just multiple choice questions

Multiple choice tests can assess well; however in some regions, multiple choice questions sometimes get a “bad press”. As you design your test, you may want to consider including enhanced stimulus and a variety of question types (e.g. matching, fill-in-blanks, etc.) to reduce the possibility of error in measurement and enhance stakeholder satisfaction.

10. Robust and secure test delivery process

A critical part of the chain of evidence is to be able to show that the test delivery process is robust, that the scores are based on answers genuinely given by the test-taker and that there has been no tampering or mistakes. This requires that the software used to deliver the test is reliable and dependably records evidence including the answers entered by the test-taker and how the score is calculated. It also means that there is good security so that you have evidence that the right person took the test and that risks to the integrity of the test have been mitigated.
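As one illustration of tamper evidence (a generic sketch, not a description of any particular product’s mechanism), a stored result record can be signed with a keyed hash (HMAC) so that any later change to the answers or score invalidates the signature. The key and record fields below are hypothetical:

```python
import hashlib
import hmac
import json

# Placeholder key; in practice this stays server-side, out of source control.
SECRET_KEY = b"server-side-secret"

def sign_record(record: dict) -> str:
    """Return an HMAC-SHA256 signature over a canonical form of the record."""
    canonical = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, canonical, hashlib.sha256).hexdigest()

def verify_record(record: dict, signature: str) -> bool:
    """Check a record against its stored signature in constant time."""
    return hmac.compare_digest(sign_record(record), signature)

record = {"taker": "pat.jones", "answers": ["B", "C", "A"], "score": 2}
sig = sign_record(record)
assert verify_record(record, sig)

record["score"] = 3                    # tampering changes the record...
assert not verify_record(record, sig)  # ...so the signature no longer matches
```

A scheme like this lets you demonstrate, as part of the chain of evidence, that the answers and score you present in a dispute are the ones originally recorded.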

For more on these considerations, please check out our best practice guide on Defensibility and Legal Certainty for Tests and Exams, which also contains some legal cases to illustrate the points. You can download the guide HERE – it is free with registration.

What is the best way to reduce cheating?

Posted by John Kleeman

There is a famous saying, often attributed to Antoine de Saint-Exupéry: “If you want to build a ship, don’t drum up the people to gather wood, divide the work, and give orders. Instead, teach them to yearn for the vast and endless sea.” This has a useful analogy in preventing cheating.

There are many useful technical and procedural ways of preventing cheating in tests and exams, and these are important to follow, but an additional, cost-effective way of reducing cheating is to encourage participants to choose not to cheat. If you can make your participants want to take the test fairly and honestly — by reducing their rationalization to cheat — this will reduce cheating.

As shared by my colleague Eric Shepherd in his excellent blog article Assessment Security and How To Reduce Fraud, cheating at a test is a variant of fraud. Donald Cressey, a famed criminologist, came up with the fraud triangle (Motivation, Opportunity and Rationalization) shown in the diagram to the right to explain why people commit fraud.

In order for someone to commit fraud (e.g. cheat at a test), he or she must have Motivation, Opportunity and Rationalization.  Motivation comes from the stakes of the test; for an important test, this is difficult to reduce. Opportunity arises out of technical and procedural weaknesses in the test-taking process, and you can obviously strengthen processes to reduce opportunity in many ways.

Rationalization is when someone reconciles their bad deeds as acceptable behavior. We all have values and like to think that what we are doing is right. When someone conducts fraud, they typically rationalize to themselves that what they are doing is right or at least acceptable. For example, they convince themselves that the organization they are robbing deserves it or can afford the loss. When cheating at a test, they say to themselves that the test is not fair or that they are just copying everyone else or they find some other excuse to rationalize and feel good about the cheating.

Here are some ways to make it less likely that people will rationalize about cheating on your test:

1. Formalize a code of conduct (e.g. an honesty code) which sets out what you expect from test takers. Communicate it effectively, well in advance, and get people to sign up to it right before taking the test. For example, you can put it on the first screen after they log in. This will reduce rationalization from people who might claim to themselves that they didn’t know it was wrong to cheat or that everyone cheats.

2. Provide fair and accessible learning environments where people can learn to pass the assessment honestly, and provide practice exams so people can check their learning. Rationalization is increased if people think there is no other way to pass the test than cheating.

3. Make sure that the test is trustable (reliable and valid) and fair. If the test is seen as fair, people will be less likely to rationalize that it’s permissible to cheat.

4. Communicate why the tests are there, how the questions are constructed and what measures you take to make the test fair, valid and reliable. Again, if people know the test exists for good reason and is fair, they will find it harder to rationalize cheating.

5. Maintain a positive public image. This will reduce rationalization by people claiming that the assessment provider is incompetent or has other faults.

6. Communicate your security measures and how people who cheat are caught. This makes people less likely to think they will be able to get away with it.

For many organizations, in addition to other anti-cheating measures, it can be very productive to spend time reducing participants’ rationalization to cheat, thereby helping them choose to be honest. The picture on the right shows a “cheat sheet” or “crib sheet” hidden in a juice carton. Encourage participants to channel that inventiveness into learning to pass the exam, not into believing it’s okay to defraud you and the testing system.

I hope you find this good practice tip helpful. I’ll be presenting at the Questionmark Users Conference March 10 – 13 on Twenty Testing Tips: Good practice in using assessments. Taking measures to reduce rationalization for cheating will be one of my tips. Register for the conference if you’re interested in hearing more.

South African Users Conference Programme Takes Shape

Posted by Chloe Mendonca

In just five weeks, Questionmark users and other learning professionals will gather in Midrand for the first South African Questionmark Users Conference.

Delegates will enjoy a full programme, from case studies to features and functions sessions on the effective use of Questionmark technologies.

There will also be time for networking during lunches and our Thursday evening event.

Here are some of the sessions you can look forward to:

  • Case Study: Coming alive with Questionmark Live: A mind shift for lecturers – University of Pretoria
  • Case Study: Lessons and Discoveries over a decade with Questionmark – Nedbank
  • Case Study: Stretching the Boundaries: Using Questionmark in a High-Volume Assessment Environment – University of Pretoria
  • Features and Functions: New browser based tools for collaborative authoring – Overview and Demonstrations
  • Features and Functions: Analysing and Sharing Results with Stakeholders: Overview of Key New Questionmark Reporting and Analytics features
  • Features and Functions: Extending the Questionmark Platform: Updates and overviews of APIs, Standards Support, and integrations with third party applications
  • Customer Panel Discussion: New Horizons for eAssessment

You can register for the conference online or visit our website for more information.

We look forward to seeing you in August!


4 Tips for making your assessments BYOD-Friendly

Posted by Brian McNamara

Many organizations have begun to embrace the concept of “BYOD” (Bring Your Own Device), so we thought it would be useful to share a few tips on how to optimize your online assessments for the broadest range of devices and browsers possible.

Mobile devices are increasingly being used for delivering online surveys and quizzes. We’re also seeing more customers using mobile devices for observational assessment, and exploring the potential of tablets for “mobile test centers.”

Here are a few tips to keep in mind if you’re planning to deliver to smartphones or tablets:

1. Think small. Fortunately, Questionmark’s auto-sensing, auto-sizing interface makes it easy to accommodate a broad range of devices. However, you should still consider the word count of your items and the types of content you wish to deliver. For example, large images that are crucial to a question’s stimulus and/or choices could put users of small-screen devices at a disadvantage. They may also take longer to load if the mobile device has a poor data connection.

2. Provide QR Codes to make it easy to access quizzes and surveys via mobile devices. A QR Code can contain a URL that makes it quick and easy to launch an assessment, improve survey response rates, and enable the capture of demographic data. See the blog article “Using QR Codes – Start to Finish” for more info.

3. Be cautious using Flash, or avoid it altogether, as many devices (particularly iOS devices such as the iPhone) do not provide native support for it.

4. Test it out! Try your assessments on as many different devices as practical. There are many “emulators” that you can use on PCs to help understand how content will appear on the ‘small screen’ — but be cautious as they don’t always give a true “user experience.”
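As a sketch of how tip 2 can work in practice, the launch URL that a QR code encodes can carry demographic fields as query parameters, so the assessment captures them without the respondent typing anything. The base URL and parameter names below are hypothetical placeholders:

```python
from urllib.parse import urlencode

# Hypothetical launch URL and demographic parameters; a QR code generator
# would encode the resulting string into the printed/displayed code.
base = "https://example.com/assessments/launch"
params = {
    "assessment": "customer-survey-2014",
    "location": "booth-12",      # where this particular QR code is posted
    "source": "qr",              # distinguishes QR scans from other channels
}
launch_url = f"{base}?{urlencode(params)}"
print(launch_url)
```

Printing a different `location` value into each poster’s QR code is a simple way to compare response rates across sites without asking respondents where they are.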

If you’d like to learn more, sign up for one of our web seminars on mobile assessment delivery!