7 Strategies to Shrink Satisficing & Improve Survey Results

Posted by John Kleeman

My previous post Satisficing: Why it might as well be a four-letter word explained that satisficing on a survey is when someone answers survey questions adequately but not as well as they can. Typically they just fill in questions without thinking too hard. As a commenter on the blog said: “Interesting! I have been guilty of this, didn’t even know it had a name!”

Examples of satisficing behavior are skipping questions or picking the first answer that makes some kind of sense. Satisficing is very common. As explained in the previous blog, some reasons for it are participants not being motivated to answer well, lacking the ability to answer well, finding the survey too hard, or simply becoming fatigued by too long a survey.

Satisficing is a significant cause of survey error, so here are 7 strategies for a survey author to reduce satisficing:

1. Keep surveys short. Even the keenest survey respondent will get tired in a long survey, and most of your respondents will probably not be keen. To get better results, make the survey as short as you possibly can.

2. Keep questions short and simple. A long and complex question is much more likely to get a poor-quality answer. Break complex questions down into shorter ones. Also, don’t ask about events that are difficult to remember. People’s memory of the past, and of when things happened, is surprisingly fragile; if you ask someone about events weeks or months ago, many will not recall them well.

3. Avoid agree/disagree questions. Satisficing participants will most likely just agree with whatever statement you present. For more on the weaknesses of these kinds of questions, see my blog on the SAP community network: Strongly Disagree? Should you use Agree/Disagree in survey questions?

4. Similarly, remove “don’t know” options. If someone is trying to answer as quickly as possible, answering that they don’t know is easy to do and avoids thinking about the question.

5. Communicate the benefit of the survey to make participants want to answer well. You are doing the survey for a good reason.  Make participants believe the survey will have positive benefits for them or their organization. Also make sure each question’s results are actionable. If the participant doesn’t feel that spending the time to give you a good answer is going to help you take some useful action, why should they bother?

6. Find ways to encourage participants to think as they answer. For example, include a request asking participants to deliberate carefully – it can remind them to pay attention. It can also be helpful to occasionally ask participants to justify their answers – perhaps by adding a text comment box after a question where they explain why they answered that way. Adding comment boxes is very easy to do in Questionmark software.

7. Put the most important questions early on. Some people will satisfice and they are more likely to do it later on in the survey. If you put the questions that matter most early on, you are more likely to get good results from them.

There is a lot you can do to reduce satisficing and encourage people to give their best answers. I hope these strategies help you shrink the amount of satisficing your survey participants do, and in turn give you more accurate results.

5 Ways to Limit the Use of Breached Assessment Content

Posted by Austin Fossey

In an earlier post, Questionmark’s Julie Delazyn listed 11 tips to help prevent cheating. The third item on that list related to minimizing item exposure; i.e., limiting how and when people can see an item so that content will not be leaked and used for dishonest purposes.

During a co-presentation with Manny Straehle of Assessment, Education, and Research Experts at a Certification Network Group quarterly meeting, I presented a set of considerations that can affect the severity of item exposure. My message was that although item exposure may not be a problem for some assessment programs, assessment managers should consider the design, purpose, candidate population, and level of investment for their assessment when evaluating their content security requirements.

If item exposure is a concern for your assessment program, there are two ways to mitigate the effects of leaked content: limiting opportunities to use the content, and identifying the breach so that it can be corrected. In this post, I will focus on ways to limit opportunities to use breached content:

Multiple Forms

Using different assessment forms lowers the number of participants who will see an item in delivery. Having multiple forms also lowers the probability that someone with access to a breached item will actually get to put that information to use. Many organizations achieve this by using multiple, equated forms which are systematically assigned to participants to limit joint cheating or to limit item exposure across multiple retakes. Some organizations also achieve this through the use of randomly generated forms like those in Linear-on-the-Fly Testing (LOFT) or empirically generated forms like those in Computer Adaptive Testing (CAT).
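Systematic form assignment can be quite simple in practice. As a minimal sketch (the form names and the hashing scheme are hypothetical illustrations, not a description of any particular product), a deterministic hash of the participant ID can spread participants evenly across equated forms while guaranteeing that a retake gets a different form:

```python
import hashlib

FORMS = ["form_a", "form_b", "form_c"]  # hypothetical equated forms

def assign_form(participant_id: str, attempt: int = 0) -> str:
    """Deterministically rotate a participant through equated forms.

    Hashing the participant ID spreads exposure across forms, and
    offsetting by the attempt number gives retakers a different form.
    """
    digest = hashlib.sha256(participant_id.encode()).hexdigest()
    index = (int(digest, 16) + attempt) % len(FORMS)
    return FORMS[index]

# A retake lands on a different form than the first attempt:
first = assign_form("participant-123", attempt=0)
retake = assign_form("participant-123", attempt=1)
assert first != retake
```

Because the assignment is a pure function of participant and attempt, no per-participant state needs to be stored to reproduce it later, which also helps when auditing who saw which form.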

Frequent Republishing

Assessment forms are often cycled in and out of production on a set schedule. Decreasing the amount of time a form is in production will limit the impact of item exposure, but it also requires more content and staff resources to keep rotating forms.

Large Item Banks

A large item bank helps you build many assessment forms, and it is also important for limiting item exposure in LOFT or CAT. Item banks can also be rotated. For example, some assessment programs will use one item bank for particular testing windows or geographic regions and then switch to another at the next administration.

Exposure Limits

If your item bank can support it, you may also want to put an exposure limit on items or assessment forms. For example, you might set up a rule where an assessment form remains in production until it has been delivered 5,000 times. After that, you may permanently retire that form or shelve it for a predetermined period and use it again later. An extreme example would be an assessment program that only delivers an item during a single testing window before retiring it. The limit will depend on your risk tolerance, the number of items you have available, and the number of participants taking the assessment. Exposure limits are especially important in CAT where some items will get delivered much more frequently than others due to the item selection algorithm.
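The bookkeeping for an exposure limit is straightforward. Here is a toy sketch of the 5,000-delivery rule described above (the class, names, and limit are my own illustration, not a feature of any specific system):

```python
from dataclasses import dataclass, field

@dataclass
class FormPool:
    """Track deliveries per form and retire forms at an exposure limit."""
    limit: int = 5000
    counts: dict = field(default_factory=dict)
    retired: set = field(default_factory=set)

    def deliver(self, form_id: str) -> bool:
        """Record a delivery; return False if the form is already retired."""
        if form_id in self.retired:
            return False  # form is shelved; the caller should pick another
        self.counts[form_id] = self.counts.get(form_id, 0) + 1
        if self.counts[form_id] >= self.limit:
            self.retired.add(form_id)  # retire, or shelve for later reuse
        return True

pool = FormPool(limit=3)           # small limit for demonstration
assert pool.deliver("form_a")
assert pool.deliver("form_a")
assert pool.deliver("form_a")      # third delivery reaches the limit
assert not pool.deliver("form_a")  # form is now retired
```

In a real program the counts would live in the assessment platform's database, and "retire" might mean shelving the form for a predetermined period rather than discarding it.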

Short Testing Windows

When participants are only allowed to take a test during a short time period, there are fewer opportunities for people to talk about or share content before the testing window closes. Short testing windows may be less convenient for your participant population, but you can use the downtime between windows to detect item breaches, develop new content, and perform assessment maintenance.

In my next post, I will provide an overview of methods for identifying instances of an item breach.

Assessment Security: 5 Tips from Napa

Posted by John Kleeman

Assessment security has been an important topic at Questionmark, and that was echoed at the Questionmark Users Conference in Napa last week. Here is some advice I heard from attendees:

  • Tip 1: It’s common to include an agreement page at the start of the assessment, where the participant agrees to follow the assessment rules, to keep exam content confidential and not to cheat. This discourages cheating by reducing people’s ability to rationalize that it’s okay to do so and also removes the potential for someone to claim they didn’t know the rules.
  • Tip 2: It’s a good idea to have a formal agreement with SMEs in your organization who author or review questions to remind them not to pass the questions to others. If they are employees, you should involve your HR and legal departments in drafting the agreements or notices. That way, if someone leaks content, you have HR and legal on board to deal with the disciplinary consequences.
  • Tip 3: It’s prudent to use the capabilities of Questionmark software to restrict access to the item bank by topic. Only give authors access to the parts they are working on, to avoid inadvertent or deliberate content leakage.
  • Tip 4: There is increasing interest in, and practical application of, hybrid testing strategies that mix proctored and unproctored tests, allowing you to focus anti-cheating resources on risk. For example, you might screen participants with quizzes, then give unproctored tests, and give those who pass a proctored test. Or you might deliver a series of short exams at periodic intervals to make it harder for people to get proxy test takers to impersonate them. There is also a lot of interest in online proctoring, where people can take exams at home or in the office and be proctored remotely using video monitoring. This reduces travel time and is often more secure than face-to-face proctoring.
  • Tip 5: If your assessment system is on premise (behind the firewall), check regularly with your IT department that they are providing the security you need and that they are backing up your data. Most internal IT departments are hugely competent, but there is a risk as people change jobs over time that your IT department might lose touch with what the assessment application is used for. One user shared how their IT system failed to make backups of the Questionmark database, so when the server failed, they lost their data and had to restart from scratch. I’m sure this particular issue won’t happen for others, but IT teams have a wide set of priorities, so it’s good to check in with them.

There was lots more at the conference – iPads becoming mainstream for authoring and administration as well as delivery, people using OData to get access to Questionmark data, Questionmark being used to test the knowledge of soccer referees and some good thinking on balancing questions at higher cognitive levels.

One thing that particularly interested me was anecdotal evidence that having an internal employee certification program reduces employee attrition. Employees are less likely to leave your organization if you have an assessment and certification program. Certification makes employees feel more valued and more satisfied and so less likely to leave for a new job elsewhere. A couple of attendees shared that their internal statistics showed this.

This mirrors external research I’ve seen – for example, the Aberdeen Group has published research suggesting that best-in-class organizations use assessments around twice as often as laggard organizations, and that first-year retention is around 89% for best-in-class organizations vs. 76% for laggards.

For more information on security, download the white paper: Delivering Assessments Safely and Securely.

What is the best way to reduce cheating?

Posted by John Kleeman

There is a famous saying: “If you want to build a ship, don’t drum up the people to gather wood, divide the work, and give orders. Instead, teach them to yearn for the vast and endless sea.” This has a useful analogy in preventing cheating.

There are many useful technical and procedural ways of preventing cheating in tests and exams, and these are important to follow, but an additional, cost-effective way of reducing cheating is to encourage participants to choose not to cheat. If you can make your participants want to take the test fairly and honestly — by reducing their rationalization to cheat — this will reduce cheating.

As shared by my colleague Eric Shepherd in his excellent blog article Assessment Security and How To Reduce Fraud, cheating at a test is a variant of fraud. Donald Cressey, a famed criminologist, came up with the fraud triangle – Motivation, Opportunity and Rationalization – to explain why people commit fraud.

In order for someone to commit fraud (e.g. cheat at a test), he or she must have Motivation, Opportunity and Rationalization.  Motivation comes from the stakes of the test; for an important test, this is difficult to reduce. Opportunity arises out of technical and procedural weaknesses in the test-taking process, and you can obviously strengthen processes to reduce opportunity in many ways.

Rationalization is when someone reconciles their bad deeds as acceptable behavior. We all have values and like to think that what we are doing is right. When someone conducts fraud, they typically rationalize to themselves that what they are doing is right or at least acceptable. For example, they convince themselves that the organization they are robbing deserves it or can afford the loss. When cheating at a test, they say to themselves that the test is not fair or that they are just copying everyone else or they find some other excuse to rationalize and feel good about the cheating.

Here are some ways to make it less likely that people will rationalize about cheating on your test:

1. Formalize a code of conduct (e.g. honesty code) which sets out what you expect from test takers. Communicate this effectively well in advance and get people to sign up to it right before taking the test. For example, you can put it on the first screen after they log in. This will reduce rationalization from people who might claim to themselves they didn’t know it was wrong to cheat or that everyone cheats.

2. Provide fair and accessible learning environments where people can learn to pass the assessment honestly, and provide practice exams so people can check their learning. Rationalization is increased if people think there is no other way to pass the test than cheating.

3. Make sure that the test is trustable (reliable and valid) and fair. If the test is not seen as fair, people will be less likely to rationalize that it’s permissible to cheat.

4. Communicate details of why the tests are there, how the questions are constructed and what measures you take to make the test fair, valid and reliable. Again, if people know the test is there for a good reason and is fair, they will be less motivated to cheat.

5. Maintain a positive public image. This will reduce rationalization by people claiming that the assessment provider is incompetent or has other faults.

6. Communicate your security measures and how people who cheat are caught. This makes people less likely to think they will be able to get away with it.

For many organizations — in addition to other anti-cheating measures — it can be very productive to spend time reducing participants’ rationalization to cheat, thereby helping them choose to be honest. A classic example of cheating ingenuity is a “cheat sheet” or “crib sheet” hidden in a juice carton. Think of ways you can encourage participants to use that inventiveness to learn to pass the exam, not to believe it’s okay to defraud you and the testing system.

I hope you find this good practice tip helpful. I’ll be presenting at the Questionmark Users Conference March 10 – 13 on Twenty Testing Tips: Good practice in using assessments. Taking measures to reduce rationalization for cheating will be one of my tips. Register for the conference if you’re interested in hearing more.

Ten tips on recommended assessment practice – from San Antonio, Texas

Posted by John Kleeman

One of the best parts of Questionmark user conferences is hearing about good practice from users and speakers. I shared nine tips after our conference in Barcelona, but Texas has to be bigger and better (!), so here are ten things I learned last week at our conference in San Antonio.

1. Document your decisions and processes. I met people in San Antonio who’d taken over programmes from colleagues. They valued all the documentation on decisions made before their time and sometimes wished for more. I encourage you to document the piloting you do, the rationale behind your question selection, item changes and cut scores. This will help future colleagues and also give you evidence if you need to justify or defend your programmes.

2. Pilot with non-masters as well as masters. Thanks to Melissa Fein for this tip. Some organizations pilot new questions and assessments just with “masters”, for example the subject matter experts who helped compile them. It’s much better if you can pilot with a wider sample and include participants who are not experts/masters. That way you get better item analysis data to review, and you will also get more useful comments about the items.

3. Think about the potential business value of OData. It’s easy to focus on the technology of OData, but it’s better to think about the business value of the dynamic data it can provide you. Our keynote speaker, Bryan Chapman, made a powerful case at the conference about getting past the technology. The real power is in working out what you can do with your assessment data once it’s free to connect with other business data. OData lets you link assessment and business data to help you solve business problems.

4. Use item analysis to identify low-performing questions. The most frequent and easiest use of item analysis is to identify low-performing questions. Many Questionmark customers use it regularly to identify questions that are too easy, too hard or not sufficiently discriminating. Once you identify these questions, you modify them or remove them depending on what your review finds. This is an easy win and makes your assessments more trustworthy.

5. Retention of learning is a challenge and assessments help. Many people shared that retention was a key challenge. How do you ensure your employees retain compliance training to use when they need it? How do you ensure your learners retain their learning beyond the final exam? There is a growing realization that using Questionmark assessments can significantly reduce the forgetting curve.

6. Use performance data to validate and improve your assessments. I spoke to a few people who were looking at improving their assessments and their selection procedure by tracking back and connecting admissions or onboarding assessments with later performance. This is a rich vein to mine.

7. Use topic scores and feedback. Topic scores and feedback are actionable. If someone gets one item wrong, it might just be a mistake or a misunderstanding; but if someone is weak in a whole topic area, you can direct them to remediation. Many organizations find it hugely successful to divide assessments into topics and to give feedback and analysis by topic.

8. Questionmark Community Spaces is a great place to get advice. Several users shared that they’d posed a question or problem in the forums there and got useful answers. Customers can access Community Spaces here.

9. The Open Assessment Platform is real. We promote Questionmark as the “Open Assessment Platform,” allowing you to easily link Questionmark to other systems, and it’s not just marketing! As one presenter said at the conference, “The beauty of using Questionmark is you can do it all yourself.” If you have a need to build a system including assessments, check out the myriad ways in which Questionmark is open.

10. Think of your Questionmark assessments like a doctor thinks of a blood test. A doctor relies on a blood test to diagnose a patient. By using Questionmark’s trustable processes and technology, you can start to think of your assessments in a similar light, and rely on your assessments for business value.

I hope some of these tips might help you get more business value out of your assessments.

Five tips for enhancing test security using technology

Posted by Julie Delazyn

Test security is a topic that comes up time and time again on education and company forums.  You can improve test security by changing the physical test-taking environment, but you can also use technology to tackle certain security issues.

Here are five tips that can help you use technology to address security challenges:

  1. Randomize: Shuffling the order of the choices can help protect the security of the assessment. The questions themselves can also be delivered in a random order — to help prevent cribbing when users are sitting in non-screened assessment centers.
  2. Encrypt: With so many tests and exams being delivered via the Internet or an intranet, encryption can protect against interception. Secure Sockets Layer (SSL) is a protocol that allows the browser and web server to encrypt their communication, so anyone intercepting the communication can’t read it.
  3. Schedule: You can discourage cheating by specifying user names and passwords, setting assessment start times, limiting the length of time for an assessment and the number of times it may be taken.
  4. Monitor: A participant can’t start a monitored assessment until a proctor or invigilator has logged on to verify the participant’s identity. The monitor can be limited to a range of IP addresses to ensure that a certain physical location is used to administer the assessment.
  5. Secure browsers: It is possible to ‘lock down’ computers to keep participants from accessing other applications and websites while taking a medium- or high-stakes assessment. A secure browser prevents candidates from printing, capturing screens, accidentally exiting the assessment, viewing source, task switching, etc.
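The randomization in tip 1 can be sketched in a few lines. This is a generic illustration (the seeding scheme and data shapes are my own assumptions, not Questionmark’s implementation): seeding a per-participant random generator shuffles both the question order and each question’s choices, while keeping one participant’s ordering stable if the assessment page is re-rendered:

```python
import random

def randomized_delivery(questions, participant_id: str):
    """Return questions and their choices shuffled per participant.

    Seeding with the participant ID keeps each participant's ordering
    stable across re-renders while varying it between participants.
    """
    rng = random.Random(participant_id)  # hypothetical seeding scheme
    # Shuffle the choices within each question...
    shuffled = [dict(q, choices=rng.sample(q["choices"], len(q["choices"])))
                for q in questions]
    # ...then shuffle the question order itself.
    rng.shuffle(shuffled)
    return shuffled

questions = [
    {"id": 1, "choices": ["A", "B", "C", "D"]},
    {"id": 2, "choices": ["A", "B", "C", "D"]},
]
# Each participant gets an ordering derived from their own seed:
left = randomized_delivery(questions, "participant-1")
right = randomized_delivery(questions, "participant-2")
```

Note that if choice order is shuffled, the system must score answers by choice identity rather than by position, since “the third option” differs from one participant to the next.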

Want more info? Download the White Paper: Delivering Assessments Safely and Securely [registration required]