Seven tips to recruit and manage SMEs for technology certification exams

Posted by John Kleeman

[repost from February 8, 2017]

How do you keep a certification exam up to date when the technology it is assessing is changing rapidly?

Certifications in new technologies like software-as-a-service and cloud solutions face some specific challenges. The nature of the technology means that questions require very specialist knowledge to author. And because knowledge of the new technology is in short supply, the subject matter experts (SMEs) able to author and review new items will be in high demand elsewhere in the organization.

Cloud offerings also change rapidly. New technology releases used to come out every year or two, so if you were writing certification exams or other assessments to test knowledge and skill in them, you had plenty of notice and could plan an update cycle. But nowadays most technology organizations take an agile approach to development, with the motto “release early, release often”, and cloud delivery makes frequent, evolutionary releases – often monthly or quarterly – the norm.

So how can you keep an exam valid and reliable if the content you are assessing is changing rapidly?

Here are seven tips that could help – a few inspired by an excellent presentation by Cisco and Microsoft at the European Association of Test Publishers conference.

  1. Try to obtain item writing SMEs from product development. They will know what is coming and what is changing and will be in a good position to write accurate questions. 
  2. Also network for SMEs outside the organization – at technology conferences, via partners and resellers, on social media and/or via an online form on your certification website. A good source of SMEs will be existing certified people.
  3. Incentivize SMEs – what works best will depend on your organization, but consider free re-certifications, vouchers, conference discounts, books and other incentives. Remember also that for many people working in technology, recognition and appreciation are as important as financial incentives, so appreciate and recognize your SMEs. For internal SMEs, send thank-you letters to their managers acknowledging the effort.
  4. Focus your exam on underlying key knowledge and skills that will not become obsolete quickly. Work with your experts to avoid items that are likely to date, and seek to test fundamental concepts, not version-specific features.
  5. When working with item writers, don’t be frightened to develop questions based on beta or planned functionality, but always run a check before questions go live in case the planned functionality hasn’t shipped yet (a simple automated check along these lines is sketched after this list).
  6. Since your item writers will likely be geographically spread, busy and tech-literate, use a good collaborative tool for item writing and item banking that allows easy online review and tracking of changes. (See https://www.questionmark.com/content/distributed-authoring-and-item-management for information on Questionmark’s authoring solution.)
  7. In technology as in other areas, confidentiality and exam security are crucial to ensure the integrity of the exam. You should have a formal agreement with internal and external SMEs who author or review questions to remind them not to pass the questions to others. Ensure that your HR or legal department are involved in the drafting of these so that they are enforceable.
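
Expanding on tip 5: if your item bank records which product feature each question depends on, the pre-go-live check can be partly automated. Here is a minimal sketch in Python – the item IDs, feature names and released-feature list are all invented for illustration, not taken from any real item bank:

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    feature: str   # product feature the question depends on
    status: str    # e.g. "draft", "ready", "live"

# Hypothetical data: in practice the released-feature list would come from
# release notes or the product roadmap, not be hard-coded.
released_features = {"bulk-export", "role-based-access"}

item_bank = [
    Item("Q101", "bulk-export", "ready"),
    Item("Q102", "usage-forecasting", "ready"),  # written against beta functionality
    Item("Q103", "role-based-access", "live"),
]

def unreleased_items(items, released):
    """Return items ready to go live that test functionality not yet shipped."""
    return [i for i in items if i.status == "ready" and i.feature not in released]

for item in unreleased_items(item_bank, released_features):
    print(f"Hold {item.item_id}: feature '{item.feature}' has not been released yet")
```

A check like this won’t replace human review, but it catches the common case where an item was written against planned functionality whose release then slipped.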

Certification of new technologies helps adoption and deployment and contributes to all stakeholders’ success. I hope these tips help you improve your assessment program.

Reliability and validity are the keys to trust

[Image: dartboard with darts scattered across the board – not reliable]

Posted by John Kleeman

How can you trust assessment results? The two keys are reliability and validity.

Reliability explained

An assessment is reliable if it measures the same thing consistently and reproducibly. If you were to deliver an assessment with high reliability to the same participant on two occasions, you would be very likely to reach the same conclusions about the participant’s knowledge or skills. A test with poor reliability might result in very different scores across the two instances.

An unreliable assessment does not measure anything consistently and cannot be used for any trustable measure of competency. It is useful to visualize a dartboard: in the diagram to the right, darts have landed all over the board – they are not reliably in any one place.

In order for an assessment to be reliable, there needs to be a predictable authoring process, effective beta testing of items, trustworthy delivery to all the devices used to give the assessment, good-quality post-assessment reporting and effective analytics.
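
Reliability is typically quantified statistically. One common internal-consistency measure is Cronbach’s alpha, which compares the variance of individual items with the variance of total scores. Here is a minimal sketch in Python – the response data are invented for illustration, and numpy is assumed to be available:

```python
import numpy as np

# Hypothetical item-response matrix: rows = participants, columns = items,
# entries = 1 (correct) or 0 (incorrect). The scores are illustrative only.
responses = np.array([
    [1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 1, 0],
])

def cronbach_alpha(matrix: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))"""
    k = matrix.shape[1]
    item_variances = matrix.var(axis=0, ddof=1)
    total_variance = matrix.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```

Values closer to 1 indicate more consistent measurement; high-stakes certification programs often aim for values around 0.9, though conventions vary by context.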

Validity explained

[Image: dartboard with darts clustered together but away from the bullseye – reliable but not valid]

Being reliable is not good enough on its own. The darts in the dartboard in the figure to the right are in the same place, but not in the right place. A test can be reliable but not measure what it is meant to measure. For example, you could have a reliable assessment that tested for skill in word processing, but this would not be valid if used to test machine operators, as writing is not one of the key tasks in their jobs.

An assessment is valid if it measures what it is supposed to measure. So if you are measuring competence in a job role, a valid assessment must align with the knowledge, skills and abilities required to perform the tasks expected of a job role. In order to show that an assessment is valid, there must be some formal analysis of the tasks in a job role and the assessment must be structured to match those tasks. A common method of performing such analysis is a job task analysis, which surveys subject matter experts or people in the job role to identify the importance of different tasks.
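
To make the job task analysis step concrete, here is a minimal sketch of how SME survey ratings might be turned into an exam blueprint. The tasks, ratings and weighting scheme (importance times frequency) are illustrative assumptions, not a prescribed method:

```python
# Hypothetical JTA ratings: each SME rates each task's importance and
# frequency on a 1-5 scale. Task names and numbers are invented.
jta_ratings = {
    "Configure cloud storage":  {"importance": [5, 4, 5], "frequency": [4, 4, 5]},
    "Monitor service health":   {"importance": [4, 4, 3], "frequency": [5, 5, 4]},
    "Write deployment scripts": {"importance": [3, 2, 3], "frequency": [2, 3, 2]},
}

TOTAL_ITEMS = 60  # planned exam length

def mean(values):
    return sum(values) / len(values)

# Criticality = mean importance x mean frequency; each task's blueprint
# weight is its share of total criticality, translated into an item count.
criticality = {
    task: mean(r["importance"]) * mean(r["frequency"])
    for task, r in jta_ratings.items()
}
total = sum(criticality.values())

for task, c in criticality.items():
    weight = c / total
    print(f"{task}: weight {weight:.0%}, ~{round(weight * TOTAL_ITEMS)} items")
```

The resulting item counts become the blueprint that item writers author against, which is what ties the assessment back to the tasks of the job role.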

Assessments must be reliable AND valid

Trustable assessments must be reliable AND valid.

[Image: dartboard with darts clustered on the bullseye – reliable and valid]

The darts in the figure to the right are in the same place and in the right place.

When you are constructing an assessment for competence, you are looking for it to consistently measure the competence required for the job.

Comparison with blood tests

It is helpful to consider what happens if you go to the doctor with an illness. The doctor goes through a process of discovery, analysis, diagnosis and prescription. As part of the discovery process, sometimes the doctor will order a blood test to identify if a particular condition is present, which can diagnose the illness or rule out a diagnosis.

It takes time and resources to do a blood test, but it can provide an invaluable piece of information. A great deal of effort goes into making sure that blood tests are both reliable (consistent) and valid (measuring what they are supposed to measure). For example, just like exam results, blood samples are labelled carefully to ensure that patient identification is retained.

A blood test that was not reliable would be dangerous – a doctor might think that a disease is not present when it is. Furthermore, a reliable blood test used for the wrong purpose is not useful – for example, there is no point in testing blood glucose level if the doctor is trying to see whether a heart attack is imminent.

The blood test results are a single piece of information that helps the doctor make the diagnosis in conjunction with other data from the doctor’s discovery process.

In exactly the same way, a test of competence is an important piece of information to determine if someone is competent in their job role.

Using the blood test metaphor, it is easy to understand the personnel and organizational risks of making decisions based on untrustworthy results. If your organization assesses someone’s knowledge, skill or competence for health and safety or regulatory compliance purposes, you need to ensure the assessments are designed correctly and run consistently – in other words, that they are reliable and valid.

For assessments to be reliable and valid, you need to follow structured processes at each step, from planning through authoring to delivery and reporting. These processes are explained in our new white paper “Assessment Results You can Trust”, and I’ll be sharing some of the content in future articles on this blog.

For fuller information, you can download the white paper.