Workplace Exams 101: How to Prevent Cheating

Posted by John Kleeman

A hot topic in the assessment world today is cheating and what to do to prevent it. Many organizations test their employees, contractors and other personnel to check their competence and skills. These include compliance tests, on-boarding tests, internal certification tests, end-of-course tests and product knowledge quizzes.

There are two reasons why cheating matters in workplace exams:

Issue #1: Validity

Firstly, the validity of the test or exam is compromised: any decision made as a result of the test becomes unreliable. For example, you may use a test to check whether someone is fit to sell your products; if they cheat, you have no evidence that they are. Or you may be checking whether someone can perform a task safely; if they cheat, safety is compromised. Tests and exams are used to make important decisions about people, with business, financial and regulatory consequences. If someone cheats on a test or exam, you are making those decisions based on bad data.

Issue #2: Integrity

Secondly, people who cheat on tests or exams have demonstrated a lack of integrity. If they are willing to cheat on a test or exam, what else might they lie about, cheat on or defraud your organization over? Will falsifying a record or report be next? Regulators often have rules requiring integrity and impose sanctions on those who demonstrate a lack of it.

For example, in the financial sector, FINRA’s Rule 2010 requires individuals to “observe high standards of commercial honor” and is used to ban people found cheating at exams or continuing education tests. In the accountancy sector, both AICPA and CIMA require accountants to have integrity and those found cheating at tests have been banned or otherwise sanctioned. And in the medical and pharmaceutical field, regulators have codes of conduct which include honesty. For example, the UK General Medical Council requires doctors to “always be honest about your experience, qualifications and current role” and interprets cheating at exams as a violation of this.

The well-respected International Test Commission Guidelines on the Security of Tests, Exams and Other Assessments suggest six categories of cheating threat, shown below alongside my own examples of how each can take place in the workplace.


ITC categories, with typical examples in the workplace:

Using test content pre-knowledge
– An employee takes the test and passes questions to a colleague still to take it
– Someone authoring questions leaks them to test-takers
– A security vulnerability allows questions to be seen in advance

Receiving expert help while taking the test
– One employee sits and coaches another during the test
– IM or phone help while taking a test
– A manager or proctor supervising the test helps a struggling employee

Using unauthorized test aids
– Access to the Internet allows googling the answers
– Unauthorized study guides brought to the test

Using a proxy test taker
– A manager sends an assistant or secretary to take the test in place of him/her
– Other situations where a colleague stands in for another

Tampering with answer sheets or stored test results
– Technically minded employees subvert communication with the LMS or other corporate systems and change their results

Copying answers from another user
– Two people sitting near each other share or copy answers
– Organized answer sharing within a cohort or group of trainees


If you are interested in learning more about any of the threats above, I’ve shared approaches to mitigating them in the workplace in our webinar, Workplace Exams 101: How to Prevent Cheating. You can download the webinar recording and slides HERE.

Writing JTA Task Statements

Posted by Austin Fossey

One of the first steps in an evidence-centered design (ECD) approach to assessment development is a domain analysis. If you work in credentialing, licensure, or workplace assessment, you might accomplish this step with a job task analysis (JTA) study.

A JTA study gathers examples of tasks that potentially relate to a specific job. These tasks are typically harvested from existing literature or observations, reviewed by subject matter experts (SMEs), and rated by practitioners or other stakeholder groups across relevant dimensions (e.g., applicability to the job, frequency of the task). The JTA results are often used later to determine the content areas, cognitive processes, and weights that will be on the test blueprint.
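To make that last step concrete, here is a minimal Python sketch of how survey ratings might be rolled up into draft blueprint weights. The rating scale, the importance-times-frequency index, and all of the task data are my own illustrative assumptions; this is not a prescribed method or a Questionmark feature.

```python
# Hypothetical sketch: turning JTA survey ratings into draft blueprint weights.
# Task IDs, content areas, and the weighting formula are illustrative assumptions.
from statistics import mean
from collections import defaultdict

# Each response: (task id, content area, importance 1-5, frequency 1-5)
responses = [
    ("T320", "Traffic investigation", 4, 3),
    ("T320", "Traffic investigation", 5, 2),
    ("T101", "Patrol procedures", 3, 5),
    ("T101", "Patrol procedures", 4, 4),
]

# Group the ratings by task, then combine mean importance and mean frequency
# into a single criticality index (a simple product, purely for illustration).
by_task = defaultdict(list)
for task, area, importance, frequency in responses:
    by_task[(task, area)].append((importance, frequency))

criticality = {}
for (task, area), ratings in by_task.items():
    imp = mean(r[0] for r in ratings)
    freq = mean(r[1] for r in ratings)
    criticality[(task, area)] = imp * freq

# Roll task criticality up to content areas and normalize to percentage weights.
area_totals = defaultdict(float)
for (task, area), value in criticality.items():
    area_totals[area] += value
grand_total = sum(area_totals.values())

for area, total in area_totals.items():
    print(f"{area}: {100 * total / grand_total:.1f}% of the blueprint")
```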

 Questionmark has tools for authoring and delivering JTA items, as well as some limited analysis tools for basic response frequency distributions. But if we are conducting a JTA study, we need to start at the beginning: how do we write task statements?

One of my favorite sources on the subject is Mark Raymond and Sandra Neustel’s chapter, “Determining the Content of Credentialing Examinations,” in The Handbook of Test Development. The chapter provides information on how to organize a JTA study, how to write tasks, how to analyze the results, and how to use the results to build a test blueprint. The chapter is well written and easy to understand, providing enough detail to be useful without being too dense. If you are conducting a JTA study, I highly recommend checking out this chapter.

Raymond and Neustel explain that a task statement can refer to a physical or cognitive activity related to the job or practice. A task statement should always follow a subject/verb/object format, though it might be expanded to include qualifiers for how the task should be executed, the resources needed to do the task, or the context of its application. They also underscore that most task statements should have only one action and one object. There are some exceptions to this rule, but statements with multiple actions and objects should typically be split into separate tasks. As a hint, they suggest critiquing any task statement that contains the word “and” or “or.”
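To illustrate that hint, here is a small Python sketch that flags statements which may bundle more than one action or object. The example statements and the pattern are illustrative assumptions only, not an exhaustive or authoritative check.

```python
# A minimal sketch of the "and/or" critique: flag task statements that may
# pack more than one action or object into a single task. Illustrative only.
import re

SPLIT_HINTS = re.compile(r"\b(and|or)\b|/", flags=re.IGNORECASE)

def needs_review(task_statement: str) -> bool:
    """Return True if the statement contains wording that often signals
    multiple actions or objects bundled into one task."""
    return bool(SPLIT_HINTS.search(task_statement))

tasks = [
    "Measure skid marks for calculation of approximate vehicle speed",
    "Prepare and file incident reports or citations",
]

for task in tasks:
    flag = "review" if needs_review(task) else "ok"
    print(f"[{flag}] {task}")
```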

Here is an example of a task statement from the Michigan Commission on Law Enforcement Standards’ Statewide Job Analysis of the Patrol Officer Position: Task 320: “[The patrol officer can] measure skid marks for calculation of approximate vehicle speed.”

I like this example because it is quite specific, certainly better than just saying “determine vehicle’s speed.” It also provides a qualifier for how precise the measurement needs to be (“approximate”). The statement might be improved by adding more context (e.g., “using a tape measure”), but that may already be understood by their participant population.

Raymond and Neustel also caution researchers to avoid words that might have multiple meanings or vague meanings. For example, the verb “instruct” could mean many different things—the practitioner might be giving some on-the-fly guidance to an individual or teaching a multi-week lecture. Raymond and Neustel underscore the difficult balance of writing task statements at a level of granularity and specificity that is appropriate for accomplishing defined goals in the workplace, but at a high enough level that we do not overwhelm the JTA participants with minutiae. The authors also advise that we avoid writing task statements that describe best practice or that might otherwise yield a biased positive response.
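In the same spirit, a simple screen for vague or knowledge-oriented verbs might look like the sketch below. The verb list is purely my own assumption for illustration, not a standard from Raymond and Neustel.

```python
# Illustrative check for verbs that are often too vague or describe knowledge
# rather than an observable task. The word list is an assumption.
VAGUE_OR_KNOWLEDGE_VERBS = {"instruct", "understand", "know", "support", "handle"}

def first_word(statement: str) -> str:
    # Task statements typically begin with the verb, the subject being implied.
    return statement.split()[0].lower().rstrip(",.")

tasks = [
    "Instruct new officers",  # vague: on-the-fly guidance or a lecture series?
    "Measure skid marks for calculation of approximate vehicle speed",
]

for task in tasks:
    if first_word(task) in VAGUE_OR_KNOWLEDGE_VERBS:
        print(f"Consider a more specific verb: {task!r}")
    else:
        print(f"Looks specific enough: {task!r}")
```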

Early in my career, I observed a JTA SME meeting for an entry-level credential in the construction industry. In an attempt to condense the task list, the psychometrician on the project combined a bunch of seemingly related tasks into a single statement—something along the lines of “practitioners have an understanding of the causes of global warming.” This is not a task statement; it is a knowledge statement, and it would be better suited for a blueprint. It is also not very specific. Most importantly, it yielded a biased response from the JTA survey sample.

This vague statement had the words “global warming” in it, which many would agree is a pretty serious issue, so respondents rated it as highly important. The impact was that this task statement heavily influenced the topic weighting of the blueprint, but when it came time to develop the content, there was not much that could be written. Item writers were stuck having to write dozens of items for a vague yet somehow very important topic. They ended up churning out loads of questions about one of the few topics that were relevant to the practice: refrigerants. The end result was a general knowledge assessment with tons of questions about refrigerants. This experience taught me how a lack of specificity and the phrasing of task statements can undermine the entire content validity argument for an assessment’s results.

If you are new to JTA studies, it is worth mentioning that a JTA can sometimes turn into a significant undertaking. I attended one of Mark Raymond’s seminars earlier this year, and he observed anecdotally that he has had JTA studies take anywhere from three months to over a year. There are many psychometricians who specialize in JTA studies, and it may be helpful to work with them for some aspects of the project, especially when conducting a JTA for the first time. However, even if we use a psychometric consultant to conduct or analyze the JTA, learning about the process can make us better-informed consumers and allow us to handle some of the work internally, potentially saving time and money.

Example of task input screen for a JTA item in Questionmark Authoring.

For more information about JTA and the other reporting tools available with Questionmark, check out this Reporting & Analytics page.