Job Task Analysis Summary Report

Posted by Austin Fossey

There are several ways to determine content for an assessment blueprint and, ultimately, for the assessment instrument itself. A job task analysis (JTA) study, as explained by Jim Farrell in a previous post, is a commonly used method for identifying the topics or tasks that need to be assessed to determine whether a participant meets minimum qualifications within an area of practice.

In their chapter in Educational Measurement (4th ed.), Clauser, Margolis, and Chase describe the work that must go into culling the initial list of topics down to a manageable, relevant list that will serve as the foundation for the test blueprint.

Consider a JTA survey that asks participants to rate the difficulty, importance, and frequency of a list of tasks related to a specific job. Subject matter experts (SMEs) must then interpret the survey results to decide which topics stay and which ones go.

For example, a JTA might survey employees about potential topics and tasks for an assessment on the safe operation of machinery at the job site. One task relates to being able to hit the emergency shutoff in case something goes wrong. The JTA results may show that respondents think this is very important to know, but it is not something they do very frequently, because there are rarely emergency situations that would warrant this action. Similarly, there may be a task related to turning the machine on. Respondents may indicate that this is important and something that is done on a daily basis, but it is also very easy to do.
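To make this concrete, here is a minimal sketch in Python of how ratings like these might be recorded. The task names, rating scales, and data shape are illustrative assumptions for this example, not Questionmark's actual data model.

```python
# Minimal sketch of hypothetical JTA survey ratings (all names and
# scales here are illustrative assumptions, not Questionmark's schema).
from dataclasses import dataclass

@dataclass
class TaskRating:
    task: str        # the job task being rated
    difficulty: str  # e.g., "Easy", "Moderate", "Hard"
    importance: str  # e.g., "Low", "Medium", "High"
    frequency: str   # e.g., "Rarely", "Monthly", "Weekly", "Daily"

# One respondent's ratings for the two machinery tasks described above:
# the emergency shutoff is important but rarely used, while turning the
# machine on is important and done daily, but easy.
responses = [
    TaskRating("Hit emergency shutoff", "Moderate", "High", "Rarely"),
    TaskRating("Turn machine on", "Easy", "High", "Daily"),
]
```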

There is no all-encompassing rule for how SMEs should determine which tasks and topics to include in the assessment. It often comes down to having the SMEs discuss the merits of each task, with each SME making a recommendation informed by their own experience and expertise. Reporting the results of the JTA survey will give the SMEs context for their decision-making, much like providing impact data in a standard-setting study.

Questionmark Analytics currently provides two JTA reports: the JTA Summary Report, and the JTA Demographic Report. Today, we will focus on the JTA Summary Report.

This report uses the same assessment selection and result filtering tools that are used throughout Analytics. Users can report on different revisions of the JTA survey and filter by groups, dates, and special field values.

The current JTA survey item supports only categorical and ordinal response data, so the JTA Summary Report provides a table showing the frequency distribution of responses for each task on each of the dimensions (e.g., difficulty, importance, frequency) defined by the test developer in the JTA item.
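As a rough illustration of what that table summarizes, the sketch below tallies the frequency distribution of ratings for each task and dimension. The response records and category labels are hypothetical, and this is not the report's actual implementation.

```python
# Minimal sketch: tally the frequency distribution of responses for each
# task by each dimension, roughly what the JTA Summary Report tabulates.
# The response records and category labels below are hypothetical.
from collections import Counter, defaultdict

# Each record: (task, dimension, rating) from one survey respondent.
records = [
    ("Hit emergency shutoff", "importance", "High"),
    ("Hit emergency shutoff", "importance", "High"),
    ("Hit emergency shutoff", "frequency", "Rarely"),
    ("Hit emergency shutoff", "frequency", "Monthly"),
    ("Turn machine on", "difficulty", "Easy"),
    ("Turn machine on", "difficulty", "Easy"),
]

# distributions[(task, dimension)] maps each rating category to its count.
distributions = defaultdict(Counter)
for task, dimension, rating in records:
    distributions[(task, dimension)][rating] += 1

# Print each task/dimension distribution as counts out of the total.
for (task, dimension), dist in sorted(distributions.items()):
    total = sum(dist.values())
    shares = ", ".join(f"{cat}: {n}/{total}" for cat, n in dist.most_common())
    print(f"{task} | {dimension}: {shares}")
```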

These response patterns can help SMEs decide which tasks will be assessed and which ones are not required for a valid evaluation of participants.

Response distribution table for a JTA for medical staff using the Questionmark JTA Summary Report.