Seven tips to recruit and manage SMEs for technology certification exams

Posted by John Kleeman

[repost from February 8, 2017]

How do you keep a certification exam up to date when the technology it is assessing is changing rapidly?

Certifications in new technologies like software-as-a-service and cloud solutions have some specific challenges. The nature of the technology usually means that questions often require very specialist knowledge to author. And because knowledge of the new technology is in short supply, subject matter experts (SMEs) who are able to author and review new items will be in high demand within the organization for other purposes.

Cloud technological offerings also change rapidly. It used to be that new technology releases came out every year or two, and if you were writing certification exams or other assessments to test knowledge and skill in them, you had plenty of notice and could plan an update cycle. But nowadays most technology organizations adopt an agile approach to development with the motto “release early, release often”. The use of cloud technology makes frequent, evolutionary releases – often monthly or quarterly – normal.

So how can you keep an exam valid and reliable if the content you are assessing is changing rapidly?

Here are seven tips that could help – a few inspired by an excellent presentation by Cisco and Microsoft at the European Association of Test Publishers conference.

  1. Try to obtain item writing SMEs from product development. They will know what is coming and what is changing and will be in a good position to write accurate questions. 
  2. Also network for SMEs outside the organization – at technology conferences, via partners and resellers, on social media and/or via an online form on your certification website. A good source of SMEs will be existing certified people.
  3. Incentivize SMEs – what will work best for you will depend on your organization, but you can consider free re-certifications, vouchers, discounts on conferences and books, and other incentives. Remember also that for many people working in technology, recognition and appreciation are as important as financial incentives, so appreciate and recognize your SMEs. For internal SMEs, send thank-you letters to their managers acknowledging their effort.
  4. Focus your exam on underlying key knowledge and skills that are not going to become obsolete quickly. Work with your experts to avoid items that are likely to become obsolete and seek to test on fundamental concepts, not version specific features.
  5. When working with item writers, don’t be frightened to develop questions based on beta or planned functionality, but always do a check before questions go live in case the planned functionality hasn’t been released yet.
  6. Since your item writers will likely be geographically spread, busy and tech-literate, use a good collaborative tool for item writing and item banking that allows easy online review and tracking of changes. (See https://www.questionmark.com/content/distributed-authoring-and-item-management for information on Questionmark’s authoring solution.)
  7. In technology as in other areas, confidentiality and exam security are crucial to ensure the integrity of the exam. You should have a formal agreement with internal and external SMEs who author or review questions to remind them not to pass the questions to others. Ensure that your HR or legal department are involved in the drafting of these so that they are enforceable.

Certification of new technologies helps adoption and deployment and contributes to all stakeholders’ success. I hope these tips help you improve your assessment program.

Transform Your 2017 Assessment Strategy

Posted by Julie Delazyn

As the year draws to a close and you wrap up the final exams for 2016, you may be thinking about your 2017 assessment strategy. Transform the way you manage learning and training in 2017 by taking advantage of these learning opportunities:

Transforming Your Test Program with Online Proctoring

If you’ve been considering implementing online proctoring, 2017 is the year to make it happen. With the increase in test security threats and the growing demand for flexibility in learning and training, there’s no better time to turn to a secure and cost-effective alternative to traditional brick-and-mortar test centers.

This 45-minute webinar will cover the basics of online proctoring and describe how it manages the variety of test security threats.

WHEN: This Wednesday (Dec. 14) 3:30 p.m. UK GMT / 10:30 a.m. US EST — Register now

Intro to Questionmark’s Assessment Management System

If you’d like to end this year by getting a better understanding of how Questionmark’s assessment solutions can help you get the impact you need from your test programs, attend this 60-minute introductory webinar. We’ll give you a live demo of Questionmark OnDemand, showing you key features and functions.

WHEN: This Wednesday (Dec. 14) 6:00 p.m. UK GMT / 1:00 p.m. US EST — Register now

If you can’t make it to tomorrow’s webinars, check back here for new dates and themes. We look forward to helping you transform your assessment strategy in 2017!

New best practice webinars: Taking your assessments from good to great

Posted by Chloe Mendonca

“Good, better, best. Never let it rest. ‘Til your good is better and your better is best.” This little old rhyme teaches us a valuable lesson: There is always room for improvement! No matter what role or business you’re in, if you’re interested in long-term success, you should strive to continuously improve your knowledge, systems and processes.

But how does this relate to assessments? Well, in many ways, there are always things we can do to develop better assessments: more secure, more trustworthy assessment programs. Maybe your current assessment program is “good”, but is “good” good enough?

We’re offering two new webinars that will help you assess how you’re currently performing in two key areas — and take your assessments from good to great:

  1. Item Writing

How to write high quality test items [35-Minute Session]

  • 3rd August, 2016, 3:00 p.m. UK BST / 10:00 a.m. US EDT

Are your items poorly written? Perhaps they’re good but you want them to be “better”. Skilfully crafted items promote learning and memory recall. They help retain knowledge, skills and/or abilities over time, but writing high-quality items isn’t as easy as it looks. This session will give you tips for taking your items to the next level.

  2. Exam Integrity

Enhancing exam integrity with online proctoring [45-Minute Session]

  • 9th August, 2016, 3:00 p.m. UK BST / 10:00 a.m. US EDT

With online proctoring rapidly gaining the attention of organisations and test sponsors around the world, many are wondering how it compares with traditional test centre proctoring. This 45-minute webinar will discuss what online proctoring is, how it works and whether it can in fact boost test security. Don’t miss this session if you’re keen to extend geographic reach and lower test administration costs.


If you’re looking to learn more about what you can achieve with Questionmark’s Assessment Management System, join our 60-minute introductory session. We’ll demo the platform live and cover a number of key features and functions. Save your seat at one of these sessions:

Intro to Questionmark’s Assessment Management System [60-Minute Session]

  • 4th August, 2016, 10:30 a.m. (BST) UK
  • 10th August, 2016, 12:00 p.m. (EDT) US

We also deliver this webinar in Spanish and Portuguese. Check out the upcoming dates and times here.

7 Strategies to Shrink Satisficing & Improve Survey Results

Posted by John Kleeman

My previous post Satisficing: Why it might as well be a four-letter word explained that satisficing on a survey is when someone answers survey questions adequately but not as well as they can. Typically they just fill in questions without thinking too hard. As a commenter on the blog said: “Interesting! I have been guilty of this, didn’t even know it had a name!”

Examples of satisficing behavior are skipping questions or picking the first answer that makes some kind of sense. Satisficing is very common. As explained in the previous blog, some reasons for it are participants not being motivated to answer well, lacking the ability to answer well, finding the survey too hard, or simply becoming fatigued by an overly long survey.

Satisficing is a significant cause of survey error, so here are 7 strategies for a survey author to reduce satisficing:

1. Keep surveys short. Even the keenest survey respondent will get tired in a long survey, and most of your respondents will probably not be keen. To get better results, make the survey as short as you possibly can.

2. Keep questions short and simple. A long and complex question is much more likely to get a poor quality answer.  You should deconstruct complex questions into shorter ones. Also don’t ask about events that are difficult to remember. People’s memory of the past and of the time things happened is surprisingly fragile, and if you ask someone about events weeks or months ago, many will not recall well.

3. Avoid agree/disagree questions. Satisficing participants will most likely just agree with whatever statement you present. For more on the weaknesses of these kinds of questions, see my blog on the SAP community network: Strongly Disagree? Should you use Agree/Disagree in survey questions?

4. Similarly remove don’t know options. If someone is trying to answer as quickly as possible, answering that they don’t know is easy for them to do, and avoids thinking about the questions.

5. Communicate the benefit of the survey to make participants want to answer well. You are doing the survey for a good reason.  Make participants believe the survey will have positive benefits for them or their organization. Also make sure each question’s results are actionable. If the participant doesn’t feel that spending the time to give you a good answer is going to help you take some useful action, why should they bother?

6. Find ways to encourage participants to think as they answer. For example, include a request that participants deliberate carefully – it could remind them to pay attention. It can also be helpful to occasionally ask participants to justify their answers – perhaps adding a text comment box after the question where they explain why they answered that way. Adding comment boxes is very easy to do in Questionmark software.

7. Put the most important questions early on. Some people will satisfice and they are more likely to do it later on in the survey. If you put the questions that matter most early on, you are more likely to get good results from them.

There is a lot you can do to reduce satisficing and encourage people to give their best answers. I hope these strategies help you shrink the amount of satisficing your survey participants do, and in turn give you more accurate results.

Satisficing: Why it might as well be a four-letter word

Posted by John Kleeman

Have you ever answered a survey without thinking too hard about it, just filling in questions in ways that seem half sensible? This behavior is called satisficing – when you give responses which are adequate but not optimal. Satisficing is a big cause of error in surveys and this post explains what it is and why it happens.

These are typical satisficing behaviors:

  • selecting the first response alternative that seems reasonable
  • agreeing with any statement that asks for agree/disagree answers
  • endorsing the status quo and not thinking through questions inviting change
  • in a matrix question, picking the same response for all parts of the matrix
  • responding “don’t know”
  • mentally coin flipping to answer a question
  • leaving questions unanswered

How prevalent is it?

Very few of us satisfice when taking a test. We usually try hard to give the best answers we can. But unfortunately for survey authors, it’s very common in surveys to answer half-heartedly, and satisficing is one of the common causes of survey errors.

For instance, a Harvard University study looked at a university survey with 250 items. Students were given a $15 cash incentive to complete it:

  • Eighty-one percent of participants satisficed at least in part.
  • Thirty-six percent rushed through parts of the survey too fast to be giving optimal answers.
  • The amount of satisficing increased later in the survey.
  • Satisficing impacted the validity and reliability of the survey and of any correlations made.

It is likely that for many surveys, satisficing plays an important part in the quality of the data.

What does it look like?

There are a few tricks to help identify satisficing behavior, but the first thing to look for when examining the data is straight-lining on grid questions. According to How to Spot a Fake, an article based on Shawna Fisher’s Practices that Minimize Online Panelist Satisficing Behavior, “an instance or two may be valid, but often, straight-lining is a red flag that indicates a respondent is satisficing.” See the illustration for a visual.
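As a rough sketch (not a description of any Questionmark feature), straight-lining can be flagged programmatically by checking whether a respondent chose the same option for every row of a grid question. The data layout and the idea of reporting a per-respondent rate are illustrative assumptions:

```python
def straight_line_rate(responses):
    """Return the fraction of grid questions in which a respondent
    gave the identical answer to every row - a possible satisficing
    signal worth a closer look, not proof by itself."""
    if not responses:
        return 0.0
    flat = sum(1 for grid in responses if len(set(grid)) == 1)
    return flat / len(responses)

# Hypothetical data: each inner list holds one respondent's answers
# to the rows of a single matrix/grid question (e.g. 1-5 ratings).
respondent_grids = [
    [4, 4, 4, 4, 4],  # same option on every row: straight-lined
    [2, 5, 3, 4, 2],  # varied answers
    [3, 3, 3, 3, 3],  # straight-lined again
]
print(straight_line_rate(respondent_grids))  # 2 of 3 grids are flat
```

In practice you would want to combine a flag like this with other signals, such as unusually fast completion times, before treating a respondent’s data as suspect, since an occasional uniform row of answers can be perfectly genuine.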

Why does it happen?

Research suggests that there are four reasons participants typically satisfice:

1. Participant motivation. Survey participants are often asked to spend time and effort on a survey without much apparent reward or benefit. One of the biggest contributors to satisficing is lack of motivation to answer well.

2. Survey difficulty. The harder a survey is to answer and the more mental energy that needs to go into thinking about the best answers, the more likely participants are to give up and choose an easy way through.

3. Participant ability. Those who find the questions difficult, either because they are less able or because they have not had a chance to consider the issues being asked in other contexts, are more likely to satisfice.

4. Participant fatigue. The longer a survey is, the more likely the participant is to give up and start satisficing.

So how can we reduce satisficing? The answer is to address these reasons in our survey design. I’ll suggest some ways of doing this in a follow-up post.

I hope thinking about satisficing might give you better survey results with your Questionmark surveys!

5 Steps to Better Tests

Posted by Julie Delazyn

Creating fair, valid and reliable tests starts with careful planning. With that foundation, you will save time and effort while producing tests that yield trustworthy results.

Five essential steps for producing high-quality tests:

1. Plan: What elements must you consider before crafting the first question? How do you identify key content areas?

2. Create: How do you write items that increase the cognitive load while avoiding bias and stereotyping?

3. Build: How should you build the test form and set accurate pass/fail scores?

4. Deliver: What methods can be implemented to protect test content and discourage cheating?

5. Evaluate: How do you use item-, topic-, and test-level data to assess reliability and improve quality?

Download this complimentary white paper full of best practices for test design, delivery and evaluation.