Online or test center proctoring: Which is best?

Posted by John Kleeman

A new way of proctoring certification exams is rapidly gaining traction. This article compares and contrasts the old with the new.

Many high-tech companies offer certification exams for consultants, users and implementers. Such exams often require candidates to travel to a bricks-and-mortar test center where proctors (or invigilators) supervise the process.

Now, however, online proctoring is becoming prevalent: each candidate takes the exams at his or her home or office, with a proctor observing via video camera over the Internet. Two of the world’s largest software companies, SAP and Microsoft, offer online proctoring for their certification programs, and many other companies are looking to follow suit. This article explains some of the pros and cons of the two approaches.

Factors for choosing online proctoring

  • Reduced travel time.  Candidates can take an exam without wasting time traveling to a test center. This is an important saving for their employers – often the test sponsor’s customers.
  • Convenient scheduling. A candidate can choose a convenient time, for example after the kids have gone to bed or when work pressures are lowest. Usually one needs to book in advance to attend a test center, but it’s often possible to schedule an online proctor at short notice.
  • Fairness. With an exam at a test center, some people will have had a short journey and others a longer one. Some might have experienced a traffic jam or other hassle getting there. This gives an advantage to those who happen to live closer, as they will have less anxiety. An online experience reduces the variability of the exam experience.
  • Accessibility. Candidates take online proctored exams on their own computers, using their normal accessibility aids such as screen readers or special input devices, which would otherwise need to be set up at a test center. Some test centers only provide their own (often limited) accommodation tools, so candidates have to work with unfamiliar tool sets, which places them at a disadvantage. Also, for people with certain disabilities, travel is a major inconvenience.
  • Keeping certifications up to date. If candidates have to travel to a test center, a test sponsor can’t realistically require an exam to be taken more than once every few years. But in today’s world, products and job skills change very quickly, so certification risks being out of date. The availability of online proctoring allows update exams (assessing candidates on what has changed since their last exam) to be taken as products change, which makes the programme more valid.
  • Greater authenticity. The more authentic assessments are, the more they measure actual performance. See Will Thalheimer’s excellent paper on measuring learning results for more on this. Assessing someone in their work environment with online proctoring is more authentic and so will likely measure performance better than putting them in a test center.

Factors for choosing test center proctoring

  • Standardized computers. While online proctoring requires the candidate to have an appropriate computer, internet connection and webcam that they know how to use, test centers provide a computer that is already set up. For most certification programmes, it’s fair and reasonable that candidates use their own computers (often called BYOD – Bring Your Own Device). But for some programmes, this might be less fair. For example, in professions where IT literacy is not required, it might not be fair to expect people to have access to a PC with webcam that they know how to use.
  • Very long exams.  In online-proctored exams, the candidate is usually forbidden from taking a break for security reasons.  Most exams can be taken in one sitting, but if exams are longer than three hours, a test center makes sense.
  • Regulation. Some regulators or government authorities may require delivery of an exam with a physically present proctor at a test center.
  • Geographical convenience.  In some cases, test centers may be close at hand. For example, a university might have all its candidates already present, or, for some test sponsors, candidates may all live in metropolitan areas close to test centers.

Other factors to consider:

  • Language. In theory, a candidate could schedule an online proctor in his or her own language, though in practice many programs only offer English-speaking proctors. A test center may not have proctors who speak other languages, but its proctors will typically speak the local language.
  • Security. You might think that the security is stronger in a test center than with online proctoring. However, over the years there have been many incidents where face-to-face proctors have coached candidates. Online proctoring also makes it feasible to administer exams more frequently, which helps security by making impersonation harder. This is a big subject, and I’ll follow up with a blog post about security.

I’d welcome your thoughts on any other factors for and against online proctoring.

Effectively Communicating the Measurement of Constructs to Stakeholders


Posted by Greg Pope

I co-wrote this article with Kerry Eades, Assessment Specialist at the Oklahoma Department of Career and Technology Education, a Questionmark user who shares my interest in test security and many other topics related to online assessment.


There are many mentions on websites, blogs, YouTube, etc. about people (employees, students, educators, school administrators, etc.) cheating on tests. Cheating has always been an issue, but the last decade of increased certifications and high-stakes testing seems to have brought about a significant increase in cheating. As a result, some pundits now believe we should redefine cheating and that texting for help, accessing the Web, or using any Web 2.0 resources should be allowed during testing. The basic idea is that a student should no longer be required to learn “facts” that can be easily located on the internet and that instruction should shift to only teaching and testing conceptual content.

There are many reasons for testing (educational, professional certification and licensure, legislative, psychological, etc.), and the pressure that stakeholders feel to succeed at all costs, whether by “teaching to the test” or by condoning any form of cheating, is obviously immense. Those of us in the testing industry should, to the best of our ability, educate stakeholders on the purpose of tests and on the development and measurement of constructs. Better informed stakeholders would see less “need” and fewer “excuses” for cheating, improving the testing environment for all concerned. A key element of this is promoting an understanding of how to match the testing environment to the nature of an assessment: it is appropriate to allow “open book” assessments in some cases but certainly not all. We must keep in mind that education, in general, builds upon itself over time, and for that reason, constructs must be assessed in a valid, reliable and appropriate manner.

Tests are usually developed to make a point-in-time decision about the knowledge, ability, or skills of an individual based upon a set of predetermined standards/objectives/measures. The “value” of any test is not only this “point-in-time” reference, but what it entails for the future. Although examinees may have passed an assessment, they may still have areas of relative weakness that should be remediated in order for them to reach their full potential as students or employees. Instructors should also observe how all their students are performing on tests in order to identify their own instructional weaknesses. For example, does the curriculum match up with the specified standards and the level of thinking those standards require? This information can also be aggregated and analyzed at the local, district, or state level to determine program strengths or weaknesses. In order to use scores in a valid way to make decisions about students or programs, we must begin by clearly defining and measuring the psychological/educational constructs or traits that a test purports to measure.

Measuring a construct is certainly complex, but what it boils down to is ensuring that the construct is being measured in a valid way and then reporting/communicating that process to stakeholders. For example, if the construct we are trying to measure in an assessment is “Surgery Procedure,” we expect a candidate who passes the test to be able to recall this information from memory where and when it is needed. It wouldn’t be valid to let the participant look up where the liver is located on the Internet during the assessment, because they would not be able to use the Internet halfway through a surgical procedure.

Another example would be “Crane Operation” knowledge and skills.  If this is the construct being measured and it is expected that candidates who pass the test can operate a crane properly, when and where they need to, then allowing them to tweet or text during their crane certification exam would not be a valid thing to do (it would invalidate the test scores) because they would not be able to do this in real life.

However, if the assessment is a low stakes quiz that is measuring the construct, “Tourist Hot Spots of Arkansas,” and the purpose of the quiz is to help people remember some good tourist places in Arkansas, then an “open book” or an “open source” format where the examinee can search the internet or use Web 2.0 resources is fine.

Effectively communicating the purpose of an assessment and the constructs it measures is essential for reducing instances of cheating. This important communication can also help prevent cheating from being “redefined” to the detriment of test security.

For more information on assessment security issues and best practices, check out the Questionmark White Paper: “Delivering Assessments Safely and Securely.”

Results Management System Quiz: Test your knowledge!


Posted by Greg Pope

Organizations involved in medium and high-stakes testing must employ sound test development, administration and scoring processes to help ensure fair, reliable and valid assessments.


But despite everyone’s best efforts, there are times when it’s necessary to review and potentially modify test results to provide information and certificates that fairly reflect what was being measured. That’s where the Questionmark RMS, or Results Management System, comes in: it enables organizations to analyze, edit and publish assessment results in an informed and defensible way.

I have created a quiz on RMS to test your knowledge. Take assessment one and see how well you do. All the answers for the questions are available on the Questionmark web site, so if you study hard you can get a perfect score and impress your friends and colleagues.  Good luck!
