Get tips for combatting test fraud

Posted by Chloe Mendonca

There is a lot of research to support the fact that stepping up investment in learning, training and certification is critical to professional success. A projection from the Institute for Public Policy Research states that ‘between 2012 and 2022, over one-third of all jobs will be created in high-skilled occupations’. This growing need for high-skilled jobs is resulting in a rapid increase in professional qualifications and certifications.

Businesses are recognising the need to invest in skills, spending some £49 billion in 2011 alone on training [figures taken from CBI on skills] — and assessments are a big part of this. They have become widely adopted in helping to evaluate the competence, performance and potential of employees and job candidates. In many industries such as healthcare, life sciences and manufacturing, the stakes are high. Life, limb and livelihood are on the line, so delivering such assessments safely and securely is vital.

Sadly, many studies show that the higher the stakes of an assessment, the greater the potential and motivation to commit test fraud. We see many examples of content theft, impersonation and cheating in the news, so what steps can be taken to mitigate security risks? What impact do emerging trends such as online remote proctoring have on certification programs? How can you use item banking, secure delivery apps and reporting tools to enhance the defensibility of your assessments?

This October, Questionmark will deliver breakfast briefings in two UK cities, providing the answers to these questions. The briefings will include presentations and discussions on the tools and practices that can be used to create and deliver secure high-stakes tests and exams.

These briefings, due to take place in London and Edinburgh, will be ideal for learning, training and compliance professionals who are using or thinking about using assessments. We invite you to find out more and register for one of these events.

 

Agree or disagree? 10 tips for better surveys — Part 2

Posted by John Kleeman

In my first post in this series, I explained that survey respondents go through a four-step process when they answer each question: comprehend the question, retrieve/recall the information that it requires, make a judgement on the answer and then select the response. There is a risk of error at each step. I also explained the concept of “satisficing”, where participants often give a satisfactory answer rather than an optimal one – another potential source of error.

Today, I’m offering some tips for effective online attitude survey design, based on research evidence. Following these tips should help you reduce error in your attitude surveys.

Tip #1 – Avoid Agree/Disagree questions

Although these are one of the most common types of questions used in surveys, you should try to avoid questions which ask participants whether they agree with a statement.

There is an effect called acquiescence bias, whereby some participants are more likely to agree than to disagree. Research suggests that some participants are easily influenced and so tend to agree with whatever is put to them. This applies particularly to participants who are more junior or less well educated, who may assume that what is asked of them is likely to be true. For example, Krosnick and Presser report that across 10 studies, an average of 52 percent of people agreed with an assertion, while only 42 percent disagreed with its opposite. If you are interested in finding out more about this effect, see this 2010 paper by Saris, Revilla, Krosnick and Schaeffer.

Satisficing – where participants just try to give a good enough answer rather than their best answer – also increases the number of “agree” answers.

For example, do not ask a question like this:

My overall health is excellent. Do you:

  • Strongly Agree
  • Agree
  • Neither Agree nor Disagree
  • Disagree
  • Strongly Disagree

Instead re-word it to be construct specific:

How would you rate your health overall?

  • Excellent
  • Very good
  • Good
  • Fair
  • Bad
  • Very bad

 

Tip #2 – Avoid Yes/No and True/False questions

For the same reason, you should avoid Yes/No questions and True/False questions in surveys. People are more likely to answer Yes than No due to acquiescence bias.

Tip #3 – Each question should address one attitude only

Avoid double-barrelled questions that ask about more than one thing. It’s very easy to ask a question like this:

  • How satisfied are you with your pay and work conditions?

However, someone might be satisfied with their pay but dissatisfied with their work conditions, or vice versa. So make it two separate questions.

Tip #4 – Minimize the difficulty of answering each question

If a question is harder to answer, it is more likely that participants will satisfice – give a good enough answer rather than the best answer. To quote Stanford Professor Jon Krosnick, “Questionnaire designers should work hard to minimize task difficulty”. For example:

  • Use as few words as possible in question and responses.
  • Use words that all your audience will know.
  • Where possible, ask questions about the recent past not the distant past as the recent past is easier to recall.
  • Decompose complex judgement tasks into simpler ones, with a single dimension to each one.
  • Where possible make judgements absolute rather than relative.
  • Avoid negatives. Just like in tests and exams, using negatives in your questions adds cognitive load and makes the question less likely to get an effective answer.

The less cognitive load involved in questions, the more likely you are to get accurate answers.

Tip #5 – Randomize the responses if order is not important

The order of responses can significantly influence which ones get chosen.

There is a primacy effect in surveys where participants more often choose the first response than a later one. Or if they are satisficing, they can choose the first response that seems good enough rather than the best one.

There can also be a recency effect whereby participants read through a list of choices and choose the last one they have read.

In order to avoid these effects, if your choices do not have a clear progression or some other reason for being in a particular order, randomize them. This is easy to do in Questionmark software by setting choices to be shuffled, and it will remove the effect of response order on your results.
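Outside of Questionmark's built-in shuffle setting, the same idea is easy to sketch in code. The following Python snippet is purely illustrative (the `present_choices` function and the choice lists are invented for this example, not part of any product): it shuffles unordered response lists per participant while leaving ordered rating scales alone.

```python
import random

def present_choices(choices, ordered=False, rng=random):
    """Return choices in delivery order.

    Ordered scales (Excellent ... Very bad) keep their progression;
    unordered lists are shuffled to cancel primacy and recency effects.
    """
    if ordered:
        return list(choices)
    shuffled = list(choices)
    rng.shuffle(shuffled)
    return shuffled

# An unordered list: randomize the order for each participant.
factors = ["Pay", "Work conditions", "Training", "Management"]
print(present_choices(factors))

# An ordered rating scale: leave the progression alone.
scale = ["Excellent", "Very good", "Good", "Fair", "Bad", "Very bad"]
print(present_choices(scale, ordered=True))
```

Shuffling per participant means any residual primacy or recency effect is spread evenly across all the options rather than always favouring the same ones.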

I’ll follow up with more tips shortly.

Multilingual Approach Includes Videos

Posted by Julie Delazyn

Questionmark customers are spread across the globe, so it’s important to us that our product is multilingual; it already includes a range of language features.

Now, we’ve added an extra resource: two new playlists on Questionmark’s YouTube channel, which feature testimonials, tutorials, overview videos and how-to’s in both Portuguese and Spanish:

  • Questionmark en Espanol
  • Questionmark em Português

Browse, watch and enjoy!

 

Agree or disagree? 10 tips for better surveys – Part 1

Posted by John Kleeman

Writing good survey questions is different to writing good test questions. This short series of blog posts shares some pointers for writing questions for attitude surveys, based on research evidence and my own learning. It should help anyone who creates course evaluation surveys or other surveys that measure opinions, beliefs or attitudes.

I’d like to lay the groundwork by posing some essential questions:

How consistent is an attitude?  

We would like to think that an attitude is an enduring positive or negative feeling about a person, object or issue. We wish that attitudes were stable and retrievable so that a questionnaire could easily measure them. But with many topics, your survey participants may have fluid attitudes, making it likely that they will be easily influenced by how you ask the questions.

In a well-reported 1980s experiment, Schuman and Presser asked different questions to two randomly selected groups of participants.

One group was asked this:

“Do you think the United States should forbid public speeches in favor of communism?”

The other group was asked a slightly different question:

“Do you think the United States should allow public speeches in favor of communism?”

The researchers found that 39% thought that speeches should be forbidden but 56% thought that such speeches should not be allowed. The difference in wording between “forbid” and “not allow” made a large difference in the attitude measured.

This demonstrates that how you phrase a question about attitudes can influence people’s answers. The risk is that survey results may give imperfect measures of the underlying attitude. The purpose of good survey design is to get as accurate a measure as you can.

How do participants answer a question?

In order to understand this, it’s helpful to consider the four-step process participants go through when answering a question: comprehend the question, recall/retrieve the information, make a judgement, and select a response.

The first thing a participant has to do is comprehend the question and understand its meaning. If he or she understands the question differently than you intended, this will lead to error.

Next, the participant must recall or retrieve the information that the question is asking about. If the event being asked about is relatively recent, this may be simple, but if there is any time delay or complexity in the question, it’s possible that the respondent will fail to recall something or misremember or partially remember something.

Then the participant must make a judgement, which can be influenced by context. For example, earlier questions can set a context that influences judgement in later questions. Sometimes judgement will also be influenced by social desirability: “This is what I’m expected to answer, so I’ll give that answer, even though it’s not fully the case.”

And last, the participant must select a response. Except for open questions, which are difficult to analyze quantitatively, the response will be constrained and perhaps adjusted to the options that you provide.

Do participants give the best answer they can?

In an ideal world, all your participants will go carefully through each step and give you the best answer they can.

But unlike in tests and exams, where participants have a strong motivation to answer optimally, in a survey, participants often take shortcuts or give an answer they think is satisfactory rather than taking the time and effort to give the best answer.

This effect is called satisficing. It can involve skipping steps 2 and 3 and just selecting a response that seems to make sense without thinking too much about it — or else rushing through or short-cutting any of the steps.

Satisficing is increased if the questions are difficult to answer and if the participants do not have motivation to answer well. Obviously, satisficing can have a big impact on the quality of the survey results.

How can you prevent this? In my next post in this series, I’ll share some tips for good practice in attitude questionnaire design, based on research evidence. I will discuss whether asking Agree/Disagree-style questions is good practice.

In the meantime, if you are interested in some other survey advice, a good academic article is the chapter on Question and Questionnaire Design by Krosnick and Presser. You can also see a previous set of blog articles about writing surveys.

Test Security: Not Necessarily a Question of Proctoring Mode

Posted by Austin Fossey

I recently spent time looking for research studies that analyzed the security levels of online and in-person proctoring. Unfortunately, no one seems to have compared these two approaches with a well-designed study. (If someone has done a rigorous study contrasting these two modes of delivery, please let me know! I certainly may have overlooked it in my research.)

I did learn a lot from the sparse literature that was available, and my main takeaway is this: security is related less to proctoring mode than it is to how much effort the test developer puts into administration planning and test design. Investing in solid administration policies, high-quality monitoring technology, and well-trained proctors is what really matters most for both in-person and online proctoring.

With some effort, testing programs with online proctors can likely achieve levels of security and service comparable to the services offered by many test centers. This came into focus for me after attending several recent seminars about online and in-person proctoring through the Association of Test Publishers (ATP) and Performance Testing Council (PTC).

The Standards for Educational and Psychological Testing provide a full list of considerations for organizations running any type of exam, but here are a few key points gleaned from the Standards and from PTC’s webinar (.wmv) to help you plan for online proctoring:

Control of the Environment

Unless a collaborator is onsite to set up and maintain the test environment, all security controls will need to be managed remotely. Here are suggestions for what you would need to do if you were a test program administrator under those circumstances:

  • Work with your online proctors to define the rules for acceptable test environments.
  • Ensure that test environment requirements are realistic for participants while still meeting your standards for security and comparability between administrations.
  • If security needs demand it, have monitoring equipment sent in advance (e.g., multiple cameras for improved monitoring, scanners to authenticate identification).
  • Clearly communicate policies to participants and get confirmation that they understand and can abide by your policies.
  • Plan policies for scenarios that might arise in an environment that is not managed by the test program administrator or proctor. For example, are you legally allowed to video someone who passes by in the background if they have not given their permission to be recorded? If not, have a policy in place stating that the participant is responsible for finding an isolated place to test. Do you or the proctoring company manage the location where the test is being delivered? If not, have a policy for who takes responsibility and absorbs the cost of an unexpected interruption like a fire alarm or power outage.

You should be prepared to document the comparability of administrations. This might include describing potential variations in the remote environment and how they may or may not impact the assessment results and security.

It is also advisable to audit some administrations to make sure that the testing environments comply with your testing program’s security policy. The online proctors’ incident reports should also be recorded in an administration report, just as they would with an in-person proctor.

Test Materials

You also need to make sure that everything needed to administer the test is provided, either physically or virtually.

  • Each participant must have the equipment and resources needed to take the test. If it is not reasonable to expect the participant to handle these tasks, you need to plan for someone else to do so, just as you would at a test center. For example, it might not be reasonable to expect some participant populations to know how to check whether the computer used for testing meets minimum software requirements.
  • If certain hardware (e.g., secured computers, cameras, scanners, microphones) or test materials (e.g., authorized references, scratch paper) are needed for the assessment design, you need to make sure these are available onsite for the participant and make sure they are collected afterwards.
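As a rough illustration of the kind of pre-flight check this implies, here is a hypothetical Python sketch. The requirements dictionary and the `preflight_check` function are invented for this example; a real testing program would check its own published requirements (typically the browser, operating system, webcam and proctoring software rather than a Python runtime, which here simply stands in for "a required runtime version").

```python
import platform
import sys

# Hypothetical minimum requirements for a remotely proctored delivery.
# A real program would publish its own list (browser, OS, webcam, etc.).
MINIMUM_REQUIREMENTS = {
    "runtime": (3, 8),
    "os": {"Windows", "Darwin", "Linux"},
}

def preflight_check(requirements):
    """Return a list of human-readable problems; empty means ready to test."""
    problems = []
    if sys.version_info[:2] < requirements["runtime"]:
        problems.append("runtime too old: " + platform.python_version())
    if platform.system() not in requirements["os"]:
        problems.append("unsupported operating system: " + platform.system())
    return problems

issues = preflight_check(MINIMUM_REQUIREMENTS)
print("ready to test" if not issues else issues)
```

Running a check like this before the proctoring session starts, and logging its result, gives the program documentary evidence that each remote environment met the published requirements.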

Accommodations

Accommodations may take the form of physical or virtual test materials, but accommodations can also include additional services or some changes in the format of the assessment.

  • Some accommodations (e.g., extra time, large print) can be controlled by the assessment instrument or an online proctor, just as they would in a test center.
  • Other accommodations require special equipment or personnel onsite. Some personnel (e.g., scribes) may be able to provide their services remotely, but accommodations like tactile printouts of figures for the blind must be present onsite.

Extra effort is clearly needed when setting up an online-proctored test. Activities that might have been handled by a testing center (control of the environment, management of test materials, providing accommodations) now need to be remotely coordinated by the test program staff and proctors; however, the payoffs may be worth the extra effort. If comparable administration practices can be achieved, online-proctored assessments may be cheaper than test centers, offer increased access to participants, and lower the risks of collaborative cheating.

For more on online proctoring, check out this informational page and video.

Unlocking website security

Posted by Steve Lay

As a product manager at Questionmark, one of the questions that I’m increasingly being asked is about support for specific versions of SSL and TLS. These abbreviations refer to different flavours of the ‘https’ protocol that keeps your web browsing secure. Questionmark’s OnDemand service no longer supports the older SSL protocol. To understand why, read on…

In this post I’ll focus only on the privacy aspect of secure websites — the extent to which communication is protected from eavesdroppers. Issues of trust are just as important, but I’ll have to discuss those in a future post.

Most browsers display a padlock icon by the web address or the site name to indicate that communication between your browser and the server is encrypted for privacy. Just as with real padlocks, though, there are stronger and weaker forms of encryption. The difference is too subtle for most browsers to show. In practice, browsers adopt a strategy of attempting to use the strongest type of encryption protocol they can, falling back to weaker methods if required. In Internet Explorer you can even configure these settings under the Advanced tab of your internet options:

As you can see, there are five different encryption protocols listed, in increasing order of strength. Generally speaking, TLS is better than SSL and more recent versions of TLS are better still. Published attacks on these protocols typically enable someone who can view network traffic to decrypt some or even all of the information passing over the ‘secure connection’. This type of scenario is called a ‘man in the middle’ attack because the eavesdropper stands between your browser and the website it is communicating with.

If your browser always chooses the best encryption available, why would you want to configure the specific protocols it supports? Unfortunately, the very first part of the communication between your browser and the website is more vulnerable. The two systems have to agree on an encryption protocol to use before they can be truly private. In some special cases it is possible for a man in the middle to intervene and force a weaker protocol to be negotiated. By configuring your browser to support only stronger protocols, you can ensure that your browser is never tricked this way.

Here at Questionmark, we care about your security too! If a protocol like SSLv3 is considered vulnerable to interception, shouldn’t the server refuse to use it as well? Yes, it should. In fact, we don’t support SSL versions 2 and 3 for this very reason.
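To see what refusing an old protocol looks like in code, here is a minimal sketch using Python's standard `ssl` module. This is illustrative only, not Questionmark's actual server configuration: a context pinned to TLS 1.2 or later will fail the handshake outright rather than fall back to a weaker protocol.

```python
import ssl

# Build a context that will only negotiate TLS 1.2 or later.
# Modern Python already refuses SSLv2/SSLv3 in a default context;
# pinning minimum_version makes the policy explicit, mirroring a
# server that simply refuses the older protocols.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# A downgrade attack cannot push this context below TLS 1.2:
# the negotiation fails instead of falling back.
print(context.minimum_version)
```

Setting the floor on both ends, browser and server, is what closes the downgrade window described above: neither side will agree to the weaker protocol, so there is nothing for the man in the middle to force.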

For this blog post I’ve focused on the most visible aspect of the security protocol. In practice, there are lots of subtle differences in the way each protocol can be configured. If you use Google’s Chrome browser you can click on the padlock to reveal information about connection security.

Notice that this connection uses TLS 1.2, but there is even more detail reported concerning the specific cryptographic algorithms used. Sites like www.ssllabs.com have almost 50 separate check points that they can report on for a public-facing secure website! Staying on top of all this configuration complexity is critical to keeping websites secure.
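You can inspect some of this detail yourself. As an illustrative sketch (again using Python's standard `ssl` module rather than any Questionmark tool), `get_ciphers()` lists every cipher suite a default context is willing to offer in a handshake:

```python
import ssl

context = ssl.create_default_context()

# get_ciphers() returns one dictionary per cipher suite the context
# will offer, including the suite name and the protocol version it
# belongs to. Print the first few to get a feel for the detail.
for suite in context.get_ciphers()[:5]:
    print(suite["name"], suite["protocol"])
```

The sheer length of this list, and the mix of key-exchange and encryption algorithms it contains, is exactly the configuration surface that services like ssllabs.com audit.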

Unfortunately, sometimes we have to strengthen security in such a way that compatibility with older browsers is sacrificed. For example, according to the latest simulation results, Internet Explorer version 6 (running on Windows XP) is no longer able to successfully negotiate a secure connection with our OnDemand service.

In practice, an overwhelming majority of users use more modern browsers (or have access to one), so the web remains both secure and usable. Perhaps a greater cause of concern is older applications that are integrated with our APIs. It is just as important to keep these applications up to date. For example, applications that use older versions of Java, such as Java 6, or that have their Java runtime configuration options set inappropriately, might have problems communicating to the same high standards. If you are running a custom integration and are concerned about future compatibility, please get in touch.

This is a developing field. New ways of exploiting older protocols and cryptographic algorithms are being found by security researchers all the time, and the bad guys aren’t far behind. Our security specialists at Questionmark constantly monitor best practice and update the configuration of our OnDemand service to keep your communications safe.
