Posted by April Barnum
I recently met with customers, and the topic of authoring trustworthy assessments and getting back trustworthy results was a top concern. No matter what they were assessing, everyone wants results that are trustworthy, meaning that they are both valid and reliable. The reasons were similar, with the top three being safety concerns, the need to assert job competency, and regulatory compliance. I often share the white paper 5 steps to better tests as a strong resource to help you plan a strong assessment, and I encourage you to check it out. But here are six authoring steps that can help you achieve trustworthy assessment results:
- Planning the assessment or blueprinting it: working out what it is that the test covers.
- Authoring or creating the items.
- Assembling the assessment or harvesting the items and assembling them for use in a test.
- Piloting and reviewing the assessment before putting it into production use.
- Delivering the assessment or making the assessment available to participants; following security, proctoring and other requirements set out in the planning stage.
- Analyzing the results of the assessment or looking at the results and sharing them with stakeholders. This step also involves using the data to weed out any problem items or other issues that might be uncovered.
Each step contributes to the next, and useful analysis of the results is only possible if every previous stage has been done effectively. In future posts, I will go into each step in detail and highlight aspects you should be considering at each stage of the process.
Posted by John Kleeman
My previous post Satisficing: Why it might as well be a four-letter word explained that satisficing on a survey is when someone answers survey questions adequately but not as well as they can. Typically they just fill in questions without thinking too hard. As a commenter on the blog said: “Interesting! I have been guilty of this, didn’t even know it had a name!”
Examples of satisficing behavior are skipping questions or picking the first answer that makes some kind of sense. Satisficing is very common. As explained in the previous blog, some reasons for it are participants not being motivated to answer well, not having the ability to answer well, finding the survey too hard, or simply becoming fatigued by a survey that is too long.
Satisficing is a significant cause of survey error, so here are 7 strategies for a survey author to reduce satisficing:
1. Keep surveys short. Even the keenest survey respondent will get tired in a long survey and most of your respondents will probably not be keen. To get better results, make the survey as short as you possibly can.
2. Keep questions short and simple. A long and complex question is much more likely to get a poor quality answer. You should deconstruct complex questions into shorter ones. Also don’t ask about events that are difficult to remember. People’s memory of the past and of the time things happened is surprisingly fragile, and if you ask someone about events weeks or months ago, many will not recall well.
3. Avoid agree/disagree questions. Satisficing participants will most likely just agree with whatever statement you present. For more on the weaknesses of these kinds of questions, see my blog on the SAP community network: Strongly Disagree? Should you use Agree/Disagree in survey questions?
4. Similarly, remove “don’t know” options. If someone is trying to answer as quickly as possible, answering that they don’t know is easy for them to do, and avoids thinking about the question.
5. Communicate the benefit of the survey to make participants want to answer well. You are doing the survey for a good reason. Make participants believe the survey will have positive benefits for them or their organization. Also make sure each question’s results are actionable. If the participant doesn’t feel that spending the time to give you a good answer is going to help you take some useful action, why should they bother?
6. Find ways to encourage participants to think as they answer. For example, include a request asking participants to deliberate carefully – it can remind them to pay attention. It can also be helpful to occasionally ask participants to justify their answers – perhaps by adding a text comment box after the question where they explain why they answered that way. Adding comment boxes is very easy to do in Questionmark software.
7. Put the most important questions early on. Some people will satisfice and they are more likely to do it later on in the survey. If you put the questions that matter most early on, you are more likely to get good results from them.
There is a lot you can do to reduce satisficing and encourage people to give their best answers. I hope these strategies help you shrink the amount of satisficing your survey participants do, and in turn give you more accurate results.
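One way satisficing shows up in collected data is “straight-lining” – giving the same rating to nearly every question. As a rough illustration (the participant IDs, data, and the 0.9 threshold below are all hypothetical, not an established standard), here is a minimal sketch of flagging suspiciously uniform response patterns:

```python
# Flag survey participants whose answers are almost all identical
# ("straight-lining"), a common sign of satisficing.

def straight_lining_ratio(responses):
    """Fraction of answers equal to the participant's most common answer."""
    if not responses:
        return 0.0
    most_common = max(set(responses), key=responses.count)
    return responses.count(most_common) / len(responses)

def flag_satisficers(all_responses, threshold=0.9):
    """Return IDs of participants whose answers are suspiciously uniform.

    The threshold is illustrative only; choose one based on your own data.
    """
    return [pid for pid, resp in all_responses.items()
            if straight_lining_ratio(resp) >= threshold]

# Hypothetical data: "p2" picks the same rating for every question.
data = {
    "p1": [4, 2, 5, 3, 1, 4, 2, 5, 3, 4],
    "p2": [3, 3, 3, 3, 3, 3, 3, 3, 3, 3],
    "p3": [5, 5, 4, 5, 5, 4, 5, 5, 5, 4],
}
print(flag_satisficers(data))  # ['p2']
```

A flag like this is only a hint, of course – some people genuinely do feel the same way about every question – but it can tell you which responses to look at more closely.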
Posted by Julie Delazyn
Creating fair, valid and reliable tests requires starting off right: with careful planning. Starting with that foundation, you will save time and effort while producing tests that yield trustworthy results.
Five essential steps for producing high-quality tests:
1. Plan: What elements must you consider before crafting the first question? How do you identify key content areas?
2. Create: How do you write items that demand the right level of cognitive effort while avoiding bias and stereotyping?
3. Build: How should you build the test form and set accurate pass/fail scores?
4. Deliver: What methods can be implemented to protect test content and discourage cheating?
5. Evaluate: How do you use item-, topic-, and test-level data to assess reliability and improve quality?
Using item analysis can greatly improve the validity of your assessments by helping you quickly and easily spot any red flags and weed out unreliable questions that are not performing well.
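Two of the most common item-analysis statistics are difficulty (the proportion of participants who answer an item correctly) and discrimination (how well the item separates high scorers from low scorers, often measured as the point-biserial correlation between item score and total score). As a minimal sketch – assuming simple 1/0 scoring and a tiny made-up response matrix – these can be computed like so:

```python
# Classical item analysis on dichotomously scored items (1 = correct, 0 = incorrect).
from statistics import mean, pstdev

def item_difficulty(item_scores):
    """Proportion of participants answering the item correctly."""
    return mean(item_scores)

def point_biserial(item_scores, total_scores):
    """Point-biserial correlation between an item and the total test score."""
    mi, mt = mean(item_scores), mean(total_scores)
    si, st = pstdev(item_scores), pstdev(total_scores)
    if si == 0 or st == 0:  # no variance -> correlation undefined; report 0
        return 0.0
    cov = mean((i - mi) * (t - mt) for i, t in zip(item_scores, total_scores))
    return cov / (si * st)

# Hypothetical data: rows = participants, columns = items.
responses = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
]
totals = [sum(row) for row in responses]
for j in range(3):
    item = [row[j] for row in responses]
    print(f"item {j + 1}: difficulty={item_difficulty(item):.2f}, "
          f"discrimination={point_biserial(item, totals):.2f}")
```

Items with very low difficulty (almost no one gets them right) or low or negative discrimination are the red flags worth reviewing: the question may be miskeyed, ambiguous, or simply not measuring what the rest of the test measures.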
Our infographic highlights 9 additional steps you can take to produce valid assessments. View and download the infographic here.
Are you interested in learning more about creating valid assessments that produce results you can trust? April Barnum, Product Manager of Authoring at Questionmark, will be presenting Authoring Assessments You Can Trust: What’s the Process? at Questionmark Conference 2016. You still have a chance to register for this event, taking place in Miami April 12-15.
Many organisations looking to expand their online offerings now use a new method to securely deliver high-stakes exams online: online proctoring. A live proctor uses your computer’s webcam to observe you taking the test, to ensure its integrity. To make sure you work alone, the proctor asks you to scan your webcam around the room you are in. The proctor also asks you to show photo ID to verify your identity and will use screen-sharing technology to view your computer screen. In addition, secure browser software can sometimes be used to block other computer applications (such as a web browser) and prevent a test-taker from accessing digital resources.
Being watched in this way during an online exam often poses questions about privacy…
Is online proctoring an invasion of privacy? Do proctors still have access to your computer after the exam is complete? What sort of things can they access while you’re taking the exam? Can they access your files and identifiable information?
A video link with an online proctor invades no more privacy than taking an exam at a traditional face-to-face test centre. In many cases, allowing a proctor to see everything on your computer screen is just like a proctor at a test centre who can look over your shoulder, see your computer screen and prevent any restricted behavior. But some online proctoring systems go even further, providing proctors with full control over a candidate’s computer.
Having a proctoring service take control of a candidate’s computer can often be quite helpful. For instance, a proctor who is trained in diagnosing and correcting setup issues can speed up the process and quickly resolve problems with the video or audio on the computer. A proctor can also guide the candidate through the exam software, in some cases entering special-purpose access credentials that have not previously been made available to the candidate.
Although screen sharing and remote control solutions can be used with Questionmark Online Proctoring, there are alternatives for situations in which such far-reaching access to the candidate’s laptop is inappropriate. Using Questionmark Secure in conjunction with Questionmark OnDemand supports a special mode for online proctoring that gives the proctor limited proxy controls instead of complete control over the machine. For example the proctor can manage the running of the assessment without having control over the participant’s machine. The sense of ‘control’ that many proctoring solutions require here is similar to popular screen sharing systems that allow you to “Give Control” or “Request Control”. Questionmark Online Proctoring does not require this, because the proctor is connected directly to Questionmark’s service and can manage the exam without going ‘through’ the participant’s computer.
In addition to the privacy advantages of these proxy controls for the candidate, this arrangement also enables the test content to be kept hidden from the proctor. This could provide advantages to the test provider over and above what can be achieved even in a test centre. The proxy controls allow the proctor to pause the test, add extra time and even terminate the test completely. Meanwhile, Questionmark Secure takes care of monitoring the local computer for signs of misuse and flagging or preventing attempts to cheat. Questionmark Secure can be audited and installed by a trusted system administrator for a company-owned laptop without having to provide the same permissions to the end user. Questionmark Secure does not install keylogging software, or any other persistently active service. It is only active during the exam process itself.
Interested in learning more about Online Proctoring? I will be presenting a session on ensuring exam integrity with online proctoring at Questionmark Conference 2016: Shaping the Future of Assessment in Miami, April 12-15.
There’s only one day left to take advantage of our early-bird savings. Click here to register and learn more about this important learning event. See you in Miami!