How many errors can you spot in this survey question?

Posted by John Kleeman

Tests and surveys are very different. In a test, you look to measure participant knowledge or skill; you know what answer you are looking for, and generally participants are motivated to answer well. In a survey, you look to measure participant attitude or recollection; you don’t know what answer you are looking for, and participants may be uninterested.

Writing good surveys is an important skill. If you’re interested in how to write good opinion and attitude surveys for training, learning, compliance and certification, based on research evidence, you might be interested in a webinar I gave titled, “Designing Effective Surveys.” Click HERE for the webinar recording and slides.

In the meantime, here’s a sample survey question. How many errors can you spot in the question?

The material and presentation qualty at Questionmark webinars is always excellent. Strongly Agree Agree Slightly agree Neither agree nor disagree Disagree Strongly disagree

There are quite a few errors. Try to count them before you look at my explanation below!

I count seven errors:

  1. I am sure you spotted the mis-spelling of “quality”. If you mis-spell something in a survey question, it signals to the participant that you haven’t taken time and trouble writing your survey, so there is little incentive for them to spend time and trouble answering.
  2. It’s not usually sensible to use the word “always” in a survey question. Some participants may take the statement literally, and it’s much more likely that webinars are usually excellent than that every single one is excellent.
  3. The question is double-barreled. It’s asking about material AND presentation quality. They might be different. This really should be two questions to get a consistent answer.
  4. The “Agree” in “Strongly Agree” is capitalized, but “agree” is not capitalized elsewhere in the scale, e.g. in “Slightly agree”. Capitalization should be consistent across every part of the scale.

You can see these four errors highlighted below.

Red marking corresponding to four errors above

Are those all the errors? I count three more, making a total of seven:

  1. The scale should be balanced. Why is there a “Slightly agree” and not a “Slightly disagree”?
  2. This is a leading or “loaded” question, not a neutral one: it encourages a positive answer. If you genuinely want to get people’s opinion in a survey question, you need to ask it without encouraging the participant to answer a particular way.
  3. Lastly, any agree/disagree question suffers from acquiescence bias. Research evidence suggests that some participants are more likely to agree when answering survey questions, particularly those who are more junior or less educated, who may tend to assume that what is asked of them is true. It would be better to word this question to ask people to rate the webinars rather than agree with a statement about them.

Did you get all of these? I hope you enjoyed this little exercise. If you did, I explain more about this and good survey practice in our Designing Effective Surveys webinar; click HERE for the webinar recording and slides.

Beyond Recall: Taking Competency Assessments to the Next Level

A pyramid showing Bloom’s levels: create, evaluate, analyze, apply, understand, remember/recall

Posted by John Kleeman

A lot of assessments focus on testing knowledge or facts. Questions that ask for recall of facts do have some value. They check someone’s knowledge and they help reduce the forgetting curve for new knowledge learned.

But for most jobs, knowledge is only a small part of the job requirements. As well as remembering or recalling information, people need to understand, apply, analyze, evaluate and create, as shown in Bloom’s revised taxonomy (pictured right). Most real-world jobs require many levels of the taxonomy, and if your assessments focus only on recalling knowledge, they may well not test job competence validly.

Evaluating includes exercising judgement, and using judgement is a critical part of the competence required in a lot of job roles. But a lot of assessments don’t assess judgement, and this webinar will explain how you can do so.

There are many approaches to creating assessments that do more than test recall, including:

  • You can write objective questions which test understanding and application of knowledge, or analysis of situations. For example, you can present questions within real-life scenarios which require understanding the situation and working out how to apply knowledge and skills to answer it. It’s sometimes useful to use media such as video to bring the question closer to the performance environment.
  • You can use observational assessments, which allow an observer to watch someone perform a task and grade their performance. This allows assessment of practical skills as well as higher level cognitive ones.
  • You can use simulations, which assess performance within a controlled environment closer to the real performance environment.
  • You can set up role-playing assessments, which are useful for customer service and other roles which need interpersonal skills.
  • You can assess people’s actual job performance, using 360 degree assessments or performance appraisal.

In our webinar, we will give an overview of these methods but will focus on a method which has always been used in pre-employment but which is increasingly being used in post-hire training, certification and compliance testing. This method is Situational Judgement Assessments: questions carefully written to assess someone’s ability to exercise judgement within the domain of their job role.

It’s not just CEOs who need to exercise judgement and make decisions; almost every job requires an element of judgement. Many costly errors in organizations are caused by a failure of judgement. Even if people have appropriate skill, experience and knowledge, they need to use judgement to apply it successfully, otherwise failures occur or successful outcomes are missed.

Situational Judgement Assessments (SJAs) present a dilemma to the participant (using text or video) and ask them to choose options in response. The dilemma needs to be one that is relevant to the job, i.e. one where using judgement is clearly linked to a needed domain of knowledge, skill or competency in the job role. And the scoring needs to be based on subject matter experts’ agreement about which judgement is the correct one to make.

Context is defined (text or video); Dilemma that needs judgment; The participant chooses from options; A score or evaluation is made
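As an illustrative sketch only (the field names and scoring scheme here are hypothetical, not Questionmark’s actual item format), an SJA item of the shape described above could be modeled as a small data structure, with each option’s score agreed in advance by subject matter experts:

```python
from dataclasses import dataclass

@dataclass
class SJAItem:
    """Illustrative structure for a Situational Judgement Assessment item."""
    context: str              # scene-setting, delivered as text or video
    dilemma: str              # the job-relevant situation requiring judgement
    options: list[str]        # possible responses the participant can choose
    option_scores: list[int]  # SME-agreed score for each option

    def score(self, chosen_index: int) -> int:
        """Score the participant's chosen option."""
        return self.option_scores[chosen_index]

# Hypothetical example item.
item = SJAItem(
    context="You manage a retail branch; a regular customer is at the desk.",
    dilemma="The customer demands a refund outside policy and is getting angry.",
    options=[
        "Refuse and ask the customer to leave",
        "Listen, explain the policy, and offer an alternative remedy",
        "Grant the refund immediately to avoid conflict",
    ],
    option_scores=[0, 2, 1],  # per subject matter expert agreement
)
print(item.score(1))  # → 2, the SME-preferred response scores highest
```

The key design point the sketch captures is that scoring is keyed to expert consensus rather than a single objectively “correct” fact.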

Situational Judgement Assessments can be a valid and reliable way of measuring judgement and can be presented in a standalone assessment or combined with other kinds of questions. If you’re interested in learning more, check out our webinar titled “Beyond Recall: Taking Competency Assessments to the Next Level.” You can download the webinar recording and slides HERE.

How is the SAP Global Certification program going? A re-interview with SAP’s manager of global certification, part 1.

Posted by Zainab Fayaz

Back in 2016, John Kleeman, Founder and Executive Director of Questionmark, interviewed Ralf Kirchgaessner, Manager of the SAP Global Certification program, about their use of Questionmark software in their Certification in the Cloud program and about their move to online proctoring. You can see the interview on the Questionmark blog here. We thought readers might be interested in an update, so here is a short interview between the two on how SAP are getting on three years later:

John: Could you give us an update on where you are with the Certification in the Cloud program?

Ralf: The uptake and adoption of Certification in the Cloud has been tremendous! Over the years we have seen a significant increase in the volume of candidates taking exams in the cloud; the numbers doubled from 2016 to 2017 and increased by almost 60% in 2018. This means more than 50% of SAP Global Certification exams are now done remotely!

John: Are all your SAP Global Certification exams now available online in the cloud?

Ralf: Nearly so. By mid-2019 we plan to have the complete portfolio of SAP exams available in the cloud. This is great news for our learners who have invested in a Certification in the Cloud subscription. So we will then have Certification in the Cloud not only for SAP SuccessFactors and SAP Ariba, but for all products, including SAP C/4HANA.

John: How many different languages are your exams translated into?

Ralf: This depends on the portfolio. Some of our certifications are available in English, and others, such as those for SAP Business One, are translated into up to 20 languages.

John: How are you dealing with the fast pace of change within SAP software in a certification context? How do you ensure certifications stay up to date when the software changes?

Ralf: This is of course a challenge. In previous years, it was a case of getting certified once every few years. Now, however, you must keep your skills up to date and stay current with the quarterly release cycles of our SAP Cloud solutions. Also, for people who are first-timers or newly entering the SAP eco-system, it is important that they are certified on the latest quarterly release.

To help overcome this challenge, we have developed an agile approach to updating our exams; we use the Questionmark platform to help those who are new to the eco-system get certified initially. We also have a very good process in place and often use the same subject matter experts when it comes to keeping up with the pace of software changes.

For already certified professionals, another way to remain up to date is through our ‘Stay Current’ program. For some of our solutions, partners have to come back every 3 months to show that they are staying current. They do this in the form of taking a short “delta” knowledge assessment. For instance, for certified professionals of SAP SuccessFactors it is mandatory to stay current in order to get provisioning access to the software systems.

In 2018, SAP’s certification approach was recognized with the ITCC Innovation Award; industry peers from Microsoft, IBM and other companies acknowledged this achievement with the award.


Q&A: Sue Martin and John Kleeman discuss steps to building a certification program

Posted by Zainab Fayaz

Certification programs are a vital way of recognizing knowledge, skills and professional expertise, but, during a time of digital transformation, how do you build a program that is sustainable and adaptable to the evolving needs of your organization, stakeholders and the market?

Questionmark Founder and Executive Director John Kleeman and Sue Martin, certification expert and Business Transformation Consultant, presented a webinar on how to build a certification program (you can view the webinar HERE). Before the webinar, we sat down with our experts to gain some insight into what they’ll be covering during the session.

Tell us a bit about what you’ll be covering during the webinar:

Sue: During the webinar, we’ll be covering a range of things: from the conceptual steps of building a certification program to the many projects that evolve from them, and the importance of outlining key steps from the very beginning of the process to create a comprehensive and cohesive certification program.

We will also talk about the value a certification program can add to an organization, not only in the short term but also for many years to come. It is important to remember “why” and “what” you are trying to achieve, and this webinar will detail how the alignment of strategic goals and communication with stakeholders contributes to the success of an adaptable certification program.

John: We’ll be discussing a range of things during the webinar, but here are the ten easy steps that we’ll be describing:

  1. Business goals
  2. Scope
  3. Security
  4. Vendor evaluation
  5. Blueprint and test design
  6. Test development
  7. Pilot
  8. Communications
  9. Delivery
  10. Reporting and monitoring

What influenced the selection of the 10 steps you have identified for building a certification program?

John: Sue and I sat down to plan the webinar when we were together at the OEB conference in Berlin in December. Although we wanted to cover some of the obvious things like test design and development, we wanted to make sure people think first about the preparation and planning, for example getting organizational buy-in and working out how to market and communicate the program to stakeholders. So we’ll be focusing on what you need to do to make a successful program, as that will drive everything you do.

Although you’ll be covering all the key steps for building a certification program during the webinar, can you advise on the three key steps you find to be the most important during the process?

Sue:
1. Planning:
The emphasis of the program’s work should be at the start, in the planning phase – especially in order to build a flexible program which will adapt as the needs of your audience and stakeholders change over time. In all of the individual project components, whether it be test creation, vendor evaluation or communications rollout, design and plan for the end goal. For example, when it comes to creating an exam, you plan for it right at the start of the project – you hit the ground running! It is not all about item writing but also the development of the project from the beginning; if you don’t plan, this can lead to a lack of validity in the exam program and inconsistency over time.

  2. Practical tips and tricks for approaching various elements of your program development: It is important to set out the target audience and identify their learning journey and how they learn – knowing this, you can go forward and build a certification program that becomes integrated and aligned with the learning process.

  3. Scope: This is very important; setting the scope is a priority. Of course, in the greater scheme of things you’ll have a mission statement, which provides you with a strategic vision, but when it comes to the finer detail – which countries to enter, the pricing structure, or whether to offer remote proctoring – always keep three things in mind: the value contribution, the stakeholders, and asking yourselves “yes, but why?”, as this will help align the program with organizational objectives.

What can attendees take away from the webinar you’ll present?  

Sue: Those attending will learn the value and importance of planning and questioning everything from the start of the process. We’ll share advice on the importance of having a value statement for every part of the process and making sure you know that a certification program is what you are looking for. By attending, you’ll walk away knowing the operational and strategic steps you must go through in order to build a sustainable program; think of it as a checklist!

John: If you’re starting a new certification program, I think this webinar will help guide you and help you create it more easily and more effectively. And if you already have a certification program and want to improve it, you’ll probably be doing a lot of what we suggest already, but I hope there’ll be something for everyone to take away and learn.

Want to know more?

If you’re interested in learning more about the steps to building a certification program that meets the needs of your organization and stakeholders, check out John and Sue’s webinar session, Building a Certification Program in 10 easy steps.

A little bit more about our two experts:

John Kleeman is Executive Director and Founder of Questionmark. He has a first-class degree from Trinity College, Cambridge, and is a Chartered Engineer and a Certified Information Privacy Professional/Europe (CIPP/E). John wrote the first version of the Questionmark assessment software system and then founded Questionmark in 1988 to market, develop and support it. John has been heavily involved in assessment software for 30 years and has also participated in several standards initiatives including IMS QTI, ISO 23988 and ISO 10667. John was recently elected to the Association of Test Publishers (ATP) Board of Directors.

Sue Martin is a trusted advisor to companies and institutions across Europe in the area of workforce credentialing, learning strategies and certification. Her career prior to consulting included a role as Senior Global Certification Director for SAP and several regional and global management roles in the testing industry. She has also held several positions within industry institutions, such as the Chair of the European Association of Test Publishers and is currently a member of the Learning & Development Committee at BCS (British Computer Society).

What time limit is fair to set for an online test or exam?

Posted by John Kleeman

Picture of a sand timer

How do you know what time limit to set for a test or exam? I’m presenting a webinar on December 18th on some tips on how you can improve your tests and exams (it’s free of charge, register here) and this is one of the subjects I’ll be covering. In the meantime, this blog gives some good practice on setting a time limit.

Power tests

The first thing to identify is what the test is seeking to measure, and whether this has a speed element. Most tests are “power” tests in that they seek to measure someone’s knowledge or skill, not how fast it can be demonstrated. In a power test, you could set no time limit, but for practical purposes, it’s usual to set a time limit. This should allow most people to have enough time to complete answering the questions.

The best way to set a time limit is to pilot the test and measure how long pilot participants take to answer questions and use this to set an appropriate time period. If you have an established testing program, you may have organizational guidelines on time limits, for example you might allow a certain number of seconds or minutes per question; but even if you have such guidelines, you must still check that they are reasonable for each test.
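As a minimal sketch of this pilot-based approach (the 95% coverage level and the rounding to a tidy number of minutes are illustrative choices, not a standard), you could derive a limit that accommodates most pilot participants like this:

```python
def suggest_time_limit(pilot_times_minutes, coverage=0.95, round_to=5):
    """Suggest a time limit so that roughly `coverage` of pilot
    participants would have finished within it.

    pilot_times_minutes: completion times observed in the pilot.
    coverage: fraction of pilot participants the limit should accommodate.
    round_to: round the limit up to a friendly multiple of minutes.
    """
    times = sorted(pilot_times_minutes)
    # Index of the participant at the desired coverage level.
    idx = min(len(times) - 1, int(coverage * len(times)))
    raw_limit = times[idx]
    # Round up so the published limit is a tidy number.
    return ((int(raw_limit) // round_to) + 1) * round_to

# Example: completion times (minutes) from a small pilot group.
pilot = [28, 31, 33, 35, 36, 38, 40, 41, 43, 45, 47, 52, 55, 58, 62]
print(suggest_time_limit(pilot, coverage=0.95))  # → 65
```

With a larger pilot you could use a proper percentile function instead of the simple index above, but the principle is the same: let observed behaviour, not guesswork, set the limit.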

Speed tests

Sometimes, speed is an important part of what you are trying to measure, and you need to measure that someone not only can demonstrate knowledge or skill but can also do so quickly. In a speed test, failure to be able to answer quickly may mean that the participant does not meet the requirements for what is being measured.

For example, in a compliance test for bank personnel to check their knowledge of anti-bribery and corruption laws, speed is probably not part of what is being measured. It will be rare in practice for people to encounter real-life issues involving bribery and very reasonable for them to think and consider before answering. But if you are testing a medical professional’s ability to react to a critical symptom in a trauma patient and make a decision on a possible intervention, rapid response is likely part of the requirement.

When speed is part of the requirements of what is being measured, the time limit for the test should be influenced by the performance requirements of the job or skill being measured.

Monitoring time limits

For all tests, it is important to review the actual time taken by participants to ensure that the time limit remains appropriate. You should regularly check the proportion of participants who answer all the questions in the test and those who skip or miss out some questions. In a speed test, it is likely that many participants will not finish the test. But if many participants are failing to complete a power test, then this should be investigated and may mean that the time limit is too short and needs extending.
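A simple way to run this check, sketched in Python (the 95% completion threshold below is an illustrative choice for a power test, not a fixed rule):

```python
def completion_rate(responses, question_count):
    """Fraction of participants who answered every question.

    responses: list of per-participant answered-question counts.
    """
    finished = sum(1 for answered in responses if answered >= question_count)
    return finished / len(responses)

def check_power_test(responses, question_count, threshold=0.95):
    """Flag a power test whose completion rate suggests the limit is too short."""
    rate = completion_rate(responses, question_count)
    if rate < threshold:
        return f"Only {rate:.0%} finished: investigate; the time limit may be too short."
    return f"{rate:.0%} finished: time limit looks appropriate."

# 20-question test; counts of questions each participant answered.
answered = [20, 20, 20, 18, 20, 20, 15, 20, 20, 20]
print(check_power_test(answered, question_count=20))
```

For a speed test you would expect the completion rate to be well below 100% by design, so the same statistic is read differently depending on what the test is meant to measure.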

If the time limit for a power test is too short, then essentially it becomes a speed test and is measuring how fast participants can demonstrate their skills. As such, if this is not part of the purpose of the test, it will impact the validity of the test results and it’s likely that the test will mis-classify people and so be unfair.

A particular point of concern is when you are using computerized tests to test people who are not proficient computer users. They will inevitably be slower than proficient computer users, and unless your test seeks to measure computer proficiency, you need to allow such people enough time.

What about people who need extra time?

It’s common to give extra time as an accommodation for certain kinds of special needs. Extra time is also sometimes given for linguistic reasons, e.g. taking an assessment in a second language. Make sure that your assessment system lets you override the time limit in such cases. Ideally, base the extra time on piloting, not just a fixed extra percentage.

Screenshot showing a setting where it is possible to exclude material from the assessment time limit

When should a time limit start?

My last tip is that the time limit should only start when the questions begin. If you are presenting any of these:

  • Introductory material or explanation
  • Practice questions
  • An honor code to commit to staying honest and not cheating
  • Demographic questions

The time limit should start after these are done. If you are using Questionmark software, you can make this happen by excluding the question block from the assessment time limit.


If you are interested in more tips on improving your tests and exams, register to attend our free webinar on December 18th:  10 Quick Tips to Improve your Tests and Exams.

What is the Single Best Way to Improve Assessment Security?

John KleemanPosted by John Kleeman

Three intersecting circles, one showing Confidentiality, one showing Availability and one showing Integrity

Assessment results matter. Society relies on certifications and qualifications granted to those who pass exams. Organizations take important decisions about people based on test scores. And individuals work hard to learn skills and knowledge they can demonstrate in tests and exams. But in order to be able to trust assessment results, the assessment process needs to be secure.

Security is usefully broken down into three aspects: confidentiality, integrity and availability.

  • Confidentiality for assessments includes that questions are kept secure and that results are available only to those who should see them.
  • Integrity for assessments includes that the process is fair and robust, that the identity of the test-taker is confirmed and that cheating does not take place.
  • Availability includes that assessments can be taken when needed and that results are stored safely for the long term.

A failure of security, particularly one of confidentiality or integrity, reduces the usefulness and trustworthiness of test results. A confidentiality failure might mean that results are meaningless because some test-takers knew the questions in advance. An integrity failure means that some results might not be genuine.

So how do you approach making an assessment program secure? The best way to think about this is in terms of risk. Risk assessment is at the heart of all successful security systems and central to the widely respected ISO 27001 and NIST 800-53 security standards. In order to focus resources to make an assessment program secure and to reduce cheating, you need to enumerate the risks and identify each one’s probability (how likely it is to happen) and impact (how serious it is if it does). You then allocate mitigation effort to the ones with higher probability and impact. This is shown illustratively in the diagram – the most important risks to deal with are those that have high probability and high impact.

Four quadrants showing high probability, high impact in red and Low probability, low impact in green. With yellow squares for high probability, low impact and low probability, high impact
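The prioritization the diagram describes can be sketched in a few lines of Python; the risks, probabilities and impact scores below are purely illustrative, and a real program would enumerate its own:

```python
# Hypothetical risk register: (risk, probability 0-1, impact 1-5).
risks = [
    ("Weak or stolen passwords",        0.6, 4),
    ("Questions leaked before exam",    0.2, 5),
    ("Identity fraud at delivery",      0.1, 5),
    ("Results report emailed in error", 0.3, 2),
]

# Score each risk as probability x impact and mitigate the highest first.
prioritized = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, probability, impact in prioritized:
    print(f"{probability * impact:4.1f}  {name}")
```

Even this crude scoring makes the point: effort goes first to the risks in the high-probability, high-impact quadrant, not to whichever threat happens to be most talked about.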

One reason why risk assessment is sensible is that it focuses effort on issues that matter. For example, the respected Verizon data breach investigations report for 2017 reported that 81% of hacking-related breaches involved weak or stolen passwords. For most assessment programs, it will make sense to put in place measures like strong passwords and training on good password practice for assessment administrators and authors to help mitigate this risk.

There is no “one size fits all” approach, and some risks will differ between assessment programs. To give a simple example, some organizations are concerned about people bringing reference materials or “cheat sheets” to look up answers in, and this can be an important risk to mitigate; in other programs, exams are open book and this is not a concern. In some programs, identity fraud (where someone pretends to be someone else to take the exam for them) is a big concern; in others, the nature of the proctoring or the community makes this much less likely.

If you’re interested in learning more about the risk approach to assessment security, I’m presenting a webinar “9 Risks to Test Security (and what to do about them)” on 28th November which:

  • Explains the risk approach to assessment security.
  • Details nine key risks to assessment security from authoring through delivery and into reporting.
  • Gives some real examples of the threats for each risk.
  • Suggests some mitigations and measures to consider to improve security.

You can see more details on the webinar and register here.

Assessment security matters because it impacts the quality and trustworthiness of assessment results. If you are not already doing it, adopting a risk-based approach to analyzing threats to your security is the single best way to improve assessment security.