Are Job Task Analysis Surveys Legally Required?


Posted by John Kleeman

I had a lot of positive feedback on my blog post Making your Assessment Valid: 5 Tips from Miami. There is clearly a lot of interest in how to ensure your assessment is valid, that is, that it measures what it is supposed to measure.

If you are assessing for competence in a job role or for promotion into a job role, one critical step in making your assessment valid is to have a good, current analysis of what knowledge, skills and abilities are needed to do the job role. This is called a job task analysis (JTA), and the most common way of doing this analysis is to conduct a JTA Survey.

In a JTA Survey, you ask people currently in the job role, or other experts, what tasks they do. A common practice is to survey them on how important each task is, how difficult it is and how often it is done. The resulting reports then guide the construction of the test blueprint: which topics the test covers and how many questions it includes on each.
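To make this concrete, here is a minimal sketch in Python of how JTA survey ratings might feed a test blueprint. The tasks, the 1-5 rating scale and the weighting scheme are illustrative assumptions, not a prescribed methodology:

```python
# A minimal sketch: turn mean JTA survey ratings into question counts.
# Task names, scales and weights below are invented for illustration.

# Mean survey ratings per task: (importance, difficulty, frequency), each 1-5
tasks = {
    "Write incident reports": (4.6, 2.1, 4.8),
    "Supervise patrol officers": (4.9, 3.7, 4.5),
    "Plan duty rosters": (3.2, 2.9, 3.0),
    "Handle community complaints": (4.1, 3.3, 2.6),
}

TOTAL_QUESTIONS = 60

def criticality(importance, difficulty, frequency):
    # Assumed weighting: importance and frequency dominate
    return importance * 0.5 + frequency * 0.3 + difficulty * 0.2

scores = {name: criticality(*r) for name, r in tasks.items()}
total = sum(scores.values())

# Allocate questions per task in proportion to its criticality score
# (rounding may leave the total a question or two off; adjust by hand)
blueprint = {name: round(TOTAL_QUESTIONS * s / total) for name, s in scores.items()}

for name, n in blueprint.items():
    print(f"{name}: {n} questions")
```

The weights here reflect a common-sense view that importance and frequency should dominate; your own scheme should come from your subject matter experts and psychometric advisors.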

If you cannot show that your assessment matches the requirements of a job, then your assessment is not only invalid but likely unfair if you use it to select people for the job or to measure competence in it. And if you use an invalid assessment to select people for promotion or recruitment, you may face legal action from the people you reject.

Not only is this common sense, but it was also confirmed by a recent US district court ruling against the Boston Police Department. In this case, sergeants who had been rejected for promotion to lieutenant following an exam sued on the grounds that the exam was unfair, and won.

The judge ruled that the exam was not sufficiently valid because it omitted many job skills crucial to the police lieutenant role, and so it was not fair to use it to select for the role (see news report).

The judge's 82-page ruling sets out in detail why the exam was unfair. He references the Uniform Guidelines on Employee Selection Procedures, which state:

“There should be a job analysis which includes an analysis of the important work behavior(s) required for successful performance and their relative importance”

But the judge ruled that although a job analysis had been done, it had not been used properly in the test construction process. He said:

“When using a multiple choice exam, the developer must convert the job analysis result into a test plan to ensure a direct and strong relationship between the job analysis and the exam.”

However, in this case, the job analysis was not used sufficiently well to construct the exam. The judge went on to say:

“The Court cannot find, however, that the test plan ensured a strong relationship between the job analysis and the exam. … too many skills and abilities were missing from the … test outline.”

Crucially, he concluded:

“And a high score on the … exam simply was not a good indicator that a candidate would be a good lieutenant.”

Due to the pace of business change and technological advance, job roles are changing fast. Make sure you conduct regular JTAs of roles in your organization and that your assessments match the most important job tasks. Find out more about Job Task Analysis here.

Learning & Bowling — Looking back on Miami

Posted by Julie Delazyn

We loved seeing so many customers at our conference in Miami earlier this month! We learned so much and greatly appreciate the feedback people shared with us there.

From our exciting panel discussion — which brought together experts from Pacific Gas and Electric, Amazon, The Hartford and HRSG to discuss their valid and reliable processes for assessment — to our networking dinners around Miami Beach, it was a 3-day event packed with excitement and learning opportunities.

You can re-live those moments by browsing through the photos taken during the conference and our Next-Gen release party.

Reporting back from sessions he attended and conversations that took place at Questionmark Conference 2016, Questionmark Chairman John Kleeman has come up with 5 tips for making your assessments valid.

  • What did you enjoy most about Questionmark Conference 2016?
  • If you didn’t have the chance to join us this year, what would you like to see at next year’s conference?
  • Leave us a comment below and stay in touch!

Making your Assessment Valid: 5 Tips from Miami


Posted by John Kleeman

A key reason people use Questionmark’s assessment management system is that it helps you make more valid assessments. To remind you, a valid assessment is one that genuinely measures what it is supposed to measure. Having an effective process to ensure your assessments are valid, reliable and trustable was an important topic at Questionmark Conference 2016 in Miami last week. Here is some advice I heard:

Reporting back from 3 days of learning and networking at Questionmark Conference 2016 in Miami

Tip 1: Everything starts from the purpose of your assessment. Define this clearly and document it well. A purpose that is not well defined or that does not align with the needs of your organization will result in a poor test. It is useful to have a formal process to kick off a new assessment, ensuring the purpose is defined clearly and aligned with business needs.

Tip 2: A Job Task Analysis survey is a great way of defining the topics/objectives for new-hire training assessments. One presenter at the conference sent out a survey to the top-performing 50 percent of employees in a job role and asked questions on a series of potential job tasks. For each job task, he asked how difficult it is (complexity), how important it is (priority) and how often it is done (frequency). He then used the survey results to define the structure of knowledge assessments for new hires to ensure they aligned with needed job skills.
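As an illustration of that survey step, the sketch below averages per-respondent ratings for each candidate task and applies an inclusion threshold. The task names, scale and cut-offs are invented for the example; the presenter's actual criteria were not published:

```python
# A minimal sketch: aggregate per-respondent JTA ratings and decide which
# tasks make the assessment. All values and thresholds are illustrative.
from statistics import mean

# One row per (respondent, task) response, each dimension rated 1-5
responses = [
    {"task": "Resolve seller disputes", "complexity": 4, "priority": 5, "frequency": 4},
    {"task": "Resolve seller disputes", "complexity": 3, "priority": 5, "frequency": 5},
    {"task": "Archive closed tickets",  "complexity": 1, "priority": 2, "frequency": 3},
    {"task": "Archive closed tickets",  "complexity": 2, "priority": 1, "frequency": 2},
]

# Group responses by task
tasks = {}
for r in responses:
    tasks.setdefault(r["task"], []).append(r)

for task, rows in tasks.items():
    # Average each dimension across respondents
    avg = {dim: mean(row[dim] for row in rows)
           for dim in ("complexity", "priority", "frequency")}
    # Assumed rule: only tasks rated important and frequent enough are included
    include = avg["priority"] >= 3.5 and avg["frequency"] >= 3.0
    print(task, avg, "-> include" if include else "-> exclude")
```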

Tip 3: The best way to ensure that a workplace assessment starts and remains valid is continual involvement with Subject Matter Experts (SMEs). They help you ensure that the content of the assessment matches the content needed for the job and ensure this stays the case as the job changes. It’s worth investing in training your SMEs in item writing and item review. Foster a collaborative environment and build their confidence.

Tip 4: Allow your participants (test-takers) to feed back into the process. This will give you useful feedback to improve the questions and the validity of the assessment. It’s also an important part of being transparent and open in your assessment programme, which is useful because people are less likely to cheat if they feel that the process is well-intentioned. They are also less likely to complain about the results being unfair. For example, it’s useful to write an internal blog explaining why and how you create the assessments and encourage feedback.

Lunch with a view at Questionmark Conference 2016 in Miami

Tip 5: As the item bank grows and as your assessment programme becomes more successful, make sure to manage the item bank and review items. Retire items that are no longer relevant or that have been overexposed. This keeps the item bank useful, accurate and valid.
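One simple, illustrative way to spot overexposed items is to compare each item's delivery count against the total number of sessions. The counts and the 25 percent threshold below are assumptions for the sketch, not a Questionmark setting:

```python
# A minimal sketch: flag items for review when their exposure rate is high.
# Delivery counts and the threshold are invented for illustration; in
# practice they would come from your delivery reports.

item_deliveries = {"ITEM-001": 950, "ITEM-002": 120, "ITEM-003": 480}
total_sessions = 1000  # assessments delivered in the review period

EXPOSURE_LIMIT = 0.25  # flag items seen by more than 25% of participants

for item_id, count in item_deliveries.items():
    exposure = count / total_sessions
    if exposure > EXPOSURE_LIMIT:
        print(f"{item_id}: exposure {exposure:.0%} - consider retiring or rotating")
```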

There was lots more at the conference – excitement that Questionmark NextGen authoring is finally here, a live demo of our new easy-to-use Printing and Scanning solution … and having lunch on the hotel terrace in the beautiful Miami spring sunshine – with Questionmark-branded sunglasses to keep cool.

There was a lot of buzz at the conference about documenting your assessment decisions and making sure your assessments validly measure job competence. There is increasing understanding that assessment is a process, not a project, and also that to be used to measure competence or to select for a job role, an assessment must cover all important job tasks.

I hope these tips on making assessments valid are helpful. Click here for more information on Questionmark’s assessment management system.

Getting Assessment Results You Can Trust: White Paper

Posted by Julie Delazyn

Modern organizations need their people to be competent.

Would you be comfortable in a high-rise building designed by an unqualified architect? Would you fly in a plane whose pilot hadn’t passed a flying test? Would you send a salesperson out on a call if they didn’t know what your products do? Can you demonstrate to a regulatory authority that your staff are competent and fit for their jobs if you do not have trustworthy assessments?

In all these cases and many more, it’s essential to have a reliable and valid test of competence. If you do not ensure that your workforce is qualified and competent, then you should not be surprised if your employees have accidents, cause your organization to be fined for regulatory infractions, give poor customer service or can’t repair systems effectively.

The white paper, Assessment Results You Can Trust, explains that trustworthy assessment results must be both valid (measuring what you intend them to measure) and reliable (measuring it consistently).
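As a small illustration of the “reliable” half of that definition, here is a sketch of one familiar internal-consistency statistic, Cronbach’s alpha, computed from a toy score matrix. The white paper does not prescribe this particular statistic; it is just one common reliability check:

```python
# A minimal sketch: Cronbach's alpha as one measure of reliability.
# The score matrix is invented for illustration.
import numpy as np

# Rows = participants, columns = items (1 = correct, 0 = incorrect)
scores = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
])

k = scores.shape[1]                          # number of items
item_var = scores.var(axis=0, ddof=1)        # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)   # variance of participants' totals

alpha = (k / (k - 1)) * (1 - item_var.sum() / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")  # closer to 1 = more consistent
```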

For assessments to be valid and reliable, it’s necessary to follow structured processes at each step from planning through authoring to delivery and reporting.

The white paper covers these six stages of the assessment process:

  • Planning assessment
  • Authoring items
  • Assembling assessment
  • Pilot and review
  • Delivery
  • Analyze results

Following the advice in the white paper and using the capabilities it describes will help you produce assessments that are more valid and reliable — and hence more trustable.

To download the complimentary white paper, click here.

Interested in finding out more about authoring assessments you can trust? Make sure to join April Barnum’s session: Authoring Assessments You Can Trust: What’s the Process? We look forward to seeing you in Miami next week at Questionmark Conference 2016!

How to improve hiring outcomes with pre-screening


Posted by Julie Delazyn

Hiring is a costly and time-consuming process, and one that many organizations struggle with. But new technologies can help to streamline the screening process, improve outcomes, and ensure that the experience leaves every candidate with a more positive view of the organization they applied to work for.

In the blog post, Improving Hiring Outcomes with Pre-Screening, Human Resource Systems Group explains that organizations can improve their hiring process by focusing on these three aspects of pre-screening:

  • Legal defensibility
    • By removing the effects of human bias and focusing on the cognitive skills required for the job, online pre-screening can help your organization filter candidates more fairly and defensibly.
  • Hiring outcomes
    • Resumes, according to the blog post, have one of the weakest correlations with performance, and interviews fare poorly as well. However, cognitive abilities are unambiguously correlated with job performance.
  • Candidate experience
    • A well-conceived, cognitive-based pre-screening test can improve the application experience for the candidate and protect the organization’s brand.

Check out the blog post for the full explanation of how to improve hiring outcomes as well as some pre-screening best practices.

Would you like to enhance your screening and assessment expertise? Make sure to attend HRSG’s session: Employment, Certification, and Compliance Testing: Creating defensible assessments for a diverse clientele at Questionmark Conference 2016.

Register soon, if you haven’t already done so. We look forward to seeing you in Miami!

 

Establishing a data-driven assessment strategy – A Q&A with Amazon


Posted by Julie Delazyn

Jason Sunseri is a senior Program Manager – Learning Technology at Amazon. He will be leading a discussion at Questionmark Conference 2016 in Miami about Creating a Global Knowledge and Skills Assessment Program for Amazon Sellers.

Jason Sunseri, Program Manager – Learning Technology, Amazon

Jason’s session will look at how Amazon Seller Support and Questionmark OnDemand have partnered to deliver a world-class solution. Jason will illustrate how Amazon has used the OnDemand platform to deliver a robust, data-driven assessment strategy.

I recently asked him about his session:

Tell me about Amazon and its use of assessments:

Amazon Seller Support engages with the 2.5 million+ global sellers represented on the Amazon platform. Due to rapid global expansion across the platform, Amazon Seller Support needed to find a technology and assessment partner that could support both its knowledge and skill acquisition assessment strategies.

How does Amazon use data to drive strategy?

Assessments play a huge role at Amazon. We have really evolved into a data-driven culture and we use assessments in surveys and inside curriculum to assess training and performance, and to identify early issues and trends in order to tweak training content and fix errors.

What role does Questionmark play in that strategy?

We rely heavily on reports — Survey Matrix, Job Task Analysis and other report functions — to assess performance. We’re able to leverage the tool by having individual training centers analyze learning and training gaps and pass on those results. It allows us to see how and why a site is succeeding; where that behavior stems from — it’s really cool to see.

What are you looking forward to at the conference?

It’s Miami, so…the weather, for sure! In all seriousness, I look forward to learning about how other Questionmark users utilize the same tools and how their approach varies from ours.

Thank you, Jason, for taking time out of your busy schedule to discuss your session with us!
