I recently spent time looking for research studies that analyzed the security levels of online and in-person proctoring. Unfortunately, no one seems to have compared these two approaches with a well-designed study. (If someone has done a rigorous study contrasting these two modes of delivery, please let me know! I certainly may have overlooked it in my research.)
I did learn a lot from the sparse literature that was available, and my main takeaway is this: security depends less on proctoring mode than on how much effort the test developer puts into administration planning and test design. Investing in solid administration policies, high-quality monitoring technology, and well-trained proctors is what matters most for both in-person and online proctoring.
With some effort, testing programs with online proctors can likely achieve levels of security and service comparable to those offered by many test centers. This came into focus for me after attending several recent seminars about online and in-person proctoring through the Association of Test Publishers (ATP) and Performance Testing Council (PTC).
Unless a collaborator is onsite to set up and maintain the test environment, all security controls will need to be managed remotely. Here are suggestions for what you would need to do if you were a test program administrator under those circumstances:
Work with your online proctors to define the rules for acceptable test environments.
Ensure that test environment requirements are realistic for participants while still meeting your standards for security and comparability between administrations.
If security needs demand it, have monitoring equipment sent in advance (e.g., multiple cameras for improved monitoring, scanners to authenticate identification).
Clearly communicate policies to participants and get confirmation that they understand and can abide by your policies.
Plan policies for scenarios that might arise in an environment that is not managed by the test program administrator or proctor. For example, are you legally allowed to record video of someone who passes by in the background without having given permission to be recorded? If not, have a policy in place stating that the participant is responsible for finding an isolated place to test. Do you or the proctoring company control the location where the test is being delivered? If not, have a policy for who takes responsibility and absorbs the cost of an unexpected interruption such as a fire alarm or power outage.
You should be prepared to document the comparability of administrations. This might include describing potential variations in the remote environment and how they may or may not impact the assessment results and security.
It is also advisable to audit some administrations to make sure that the testing environments comply with your testing program’s security policy. The online proctors’ incident reports should also be recorded in an administration report, just as they would with an in-person proctor.
You also need to make sure that everything needed to administer the test is provided, either physically or virtually.
Each participant must have the equipment and resources needed to take the test. If it is not reasonable to expect the participant to handle these tasks, you need to plan for someone else to do so, just as you would at a test center. For example, it might not be reasonable to expect some participant populations to know how to check whether the computer used for testing meets minimum software requirements.
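As a rough illustration, a pre-test environment check of this kind might look like the following stdlib-only Python sketch. The specific thresholds and checks here are hypothetical examples, not part of any Questionmark product; a real program would substitute its own requirements.

```python
# Hypothetical pre-test system check a program might ask participants
# (or a support person) to run. All requirement values are examples.
import platform
import shutil
import sys

MIN_PYTHON = (3, 8)       # example minimum runtime version
MIN_FREE_DISK_GB = 1.0    # example free-disk-space requirement


def check_environment():
    """Run each check and return a list of (check_name, passed) pairs."""
    results = []
    # Is the installed interpreter new enough?
    results.append(("python_version", sys.version_info[:2] >= MIN_PYTHON))
    # Is there enough free disk space for the testing software?
    free_gb = shutil.disk_usage("/").free / 1e9
    results.append(("free_disk", free_gb >= MIN_FREE_DISK_GB))
    # Can we at least identify the operating system?
    results.append(("os_reported", bool(platform.system())))
    return results


if __name__ == "__main__":
    for name, ok in check_environment():
        print(f"{name}: {'OK' if ok else 'FAIL'}")
```

A check like this could be distributed to participants in advance, with support staff on call for anyone whose machine fails a requirement.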
If certain hardware (e.g., secured computers, cameras, scanners, microphones) or test materials (e.g., authorized references, scratch paper) are needed for the assessment design, you need to ensure these are available onsite for the participant and are collected afterwards.
Accommodations may take the form of physical or virtual test materials, but accommodations can also include additional services or some changes in the format of the assessment.
Some accommodations (e.g., extra time, large print) can be controlled by the assessment instrument or an online proctor, just as they would in a test center.
Other accommodations require special equipment or personnel onsite. Some personnel (e.g., scribes) may be able to provide their services remotely, but accommodations like tactile printouts of figures for the blind must be present onsite.
Extra effort is clearly needed when setting up an online-proctored test. Activities that a testing center might have handled (controlling the environment, managing test materials, providing accommodations) must now be coordinated remotely by the test program staff and proctors. The payoff, however, may be worth it: if comparable administration practices can be achieved, online-proctored assessments may cost less than test centers, increase access for participants, and lower the risk of collaborative cheating.
In a recent post on his own blog, Questionmark CEO Eric Shepherd offered some insights about the 70+20+10 learning model, in which social and informal learning play key roles in knowledge transfer and performance improvement.
The A-model, developed by Dr. Bruce C. Aaron, helps organizations make the most of all types of learning initiatives, both formal and informal, by providing an effective framework for defining a problem and its solution, implementing the solution, and tracking the results.
Eric’s post, A-model and Assessment: The 7-Minute Tutorial, notes the great feedback we have received in promoting awareness of the A-model and includes a brief video that walks viewers through a business problem and explains how to approach it using the A-model.
In this video, I will take you on a tour of the latest addition to Questionmark Live, the Numeric question type. Not only can you create a simple question with a single numeric answer, you can create questions that require multiple numeric answers — something that’s a bit complicated to do in Authoring Manager but very easy to do in Questionmark Live!
Quiz and test authors need an arsenal of different question types to suit various purposes. Matching questions, which present two series of words or ideas, ask participants to match items from one list to items in the other. Learners must correctly identify which items go together: for instance, a state or country and its capital.
Matching questions make it possible to measure a relatively large amount of knowledge in a small amount of space, but it’s important to bear in mind that they emphasize information recognition rather than information recall.
A matching item in Questionmark Perception might look like the question below. In this example, which uses a graphical presentation format, someone has already started figuring things out!
Here’s a quick tutorial on how to create a matching question in Perception, with or without a graphical interface. The tutorial will also show you how to set up scoring and feedback.