Single Sign-On Pros and Cons


Posted by Bart Hendrickx

In my previous blog post on Single Sign-On (SSO), I touched on advantages and disadvantages of using SSO. In this blog post, I will revisit and expand on those. I hope they will help you decide whether SSO is something you would like to use within Questionmark OnDemand, your organization or project.

My colleague Christian Röwenstrunk used the following image in his presentation on SSO at Questionmark Conference 2016, which sums it up well:


SSO Advantages

  1. SSO reduces password fatigue. When you need to remember one password for your identity provider, instead of multiple passwords for each of the different systems (service providers) you want to connect to, you get less tired of having to fill out passwords each time you log on to a system. It is a better user experience.
    Let’s admit it: if you need to maintain ten passwords for ten systems, you are inclined to choose passwords that are variations of one another, which can be insecure. Or, worse, you use the same password on all systems (never do that!). If you need to remember only one password, you are also more willing to invest the energy to come up with a secure one.
  2. SSO reduces password exposure. When you need to enter your password once (cf. “Single” in Single Sign-On), there is less risk that someone will shoulder surf and see your password.
  3. SSO simplifies user and password management. This is especially interesting for the IT department. If you can access multiple systems as part of your employment, and you leave the organization, the IT department only needs to decommission your account on the identity provider to revoke your access to all the service providers.
  4. SSO opens up new possibilities. Identity providers often have capabilities that make authentication more secure. For example, if your identity provider supports multi-factor authentication, then you can leverage that capability for all the service providers that are linked to your identity provider.
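The "one identity provider, many service providers" model behind these advantages can be sketched in a few lines. This is a minimal, hypothetical illustration (not how Questionmark OnDemand or any real SSO protocol such as SAML or OpenID Connect is implemented): the identity provider signs an assertion once, and every linked service provider verifies that signature instead of asking for a password. The key and function names are invented for the sketch.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical shared trust key; real SSO uses the IdP's certificate/keys.
IDP_KEY = b"idp-signing-key"

def idp_sign_in(user: str) -> str:
    """The single sign-on step: authenticate once, receive a signed assertion."""
    payload = base64.urlsafe_b64encode(json.dumps({"sub": user}).encode()).decode()
    signature = hmac.new(IDP_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + signature

def service_provider_accepts(assertion: str) -> bool:
    """Any linked service provider trusts the IdP's signature - no extra password."""
    payload, signature = assertion.split(".")
    expected = hmac.new(IDP_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

# One sign-on; every linked service provider accepts the same assertion.
token = idp_sign_in("alice")
print(service_provider_accepts(token))
```

Note how this also illustrates advantage 3: deactivating the user at the identity provider (refusing to issue new assertions) revokes access to every service provider at once.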

SSO Disadvantages

  1. SSO gives you the keys to the castle. If you log on to multiple systems from one identity provider, and a hacker compromises your user account on the identity provider, the hacker gets unauthorized access to all the linked systems. This is similar to using the same password for multiple systems.
  2. SSO does not work when your identity provider is down. If your identity provider does not respond, for example due to an outage, you cannot log on to any of the systems that are linked to it.
  3. SSO takes a little bit of investment to set up. Linking your identity provider to a service provider is an extra step. Depending on the technologies used and the use cases, that extra step can mean that you will spend some time setting things up.

Satisficing: Why it might as well be a four-letter word


Posted by John Kleeman

Have you ever answered a survey without thinking too hard about it, just filling in questions in ways that seem half sensible? This behavior is called satisficing – when you give responses which are adequate but not optimal. Satisficing is a big cause of error in surveys and this post explains what it is and why it happens.

These are typical satisficing behaviors:

  • selecting the first response alternative that seems reasonable
  • agreeing with any statement that asks for agree/disagree answers
  • endorsing the status quo and not thinking through questions inviting change
  • in a matrix question, picking the same response for all parts of the matrix
  • responding “don’t know”
  • mentally coin flipping to answer a question
  • leaving questions unanswered

How prevalent is it?

Very few of us satisfice when taking a test. We usually try hard to give the best answers we can. But unfortunately for survey authors, it’s very common in surveys to answer half-heartedly, and satisficing is one of the common causes of survey errors.

For instance, a Harvard University study looked at a university survey with 250 items. Students were given a $15 cash incentive to complete it:

  • Eighty-one percent of participants satisficed at least in part.
  • Thirty-six percent rushed through parts of the survey too fast to be giving optimal answers.
  • The amount of satisficing increased later in the survey.
  • Satisficing impacted the validity and reliability of the survey and of any correlations made.

It is likely that for many surveys, satisficing plays an important part in the quality of the data.

What does it look like?

There are a few tricks to help identify satisficing behavior, but the first thing to look for when examining the data is straight-lining on grid questions. According to How to Spot a Fake, an article based on “Practices that minimize online panelist satisficing behavior” by Shawna Fisher, “an instance or two may be valid, but often, straight-lining is a red flag that indicates a respondent is satisficing.”
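A straight-lining check is easy to automate. The sketch below is a minimal, illustrative example (the function name and data are invented, and real analyses would also weigh response times and item counts): it flags respondents who gave the identical answer to every part of a matrix question.

```python
def straight_line_rate(responses):
    """Fraction of respondents who picked the same answer for every part
    of a grid/matrix question. Each row is one respondent's answers."""
    flagged = sum(1 for row in responses if len(set(row)) == 1)
    return flagged / len(responses)

# Hypothetical matrix-question data on a 1-5 agreement scale:
data = [
    [3, 3, 3, 3, 3],  # straight-liner: identical response to every part
    [4, 2, 5, 1, 3],
    [2, 2, 3, 4, 2],
]
print(straight_line_rate(data))  # one respondent in three is flagged
```

As the quoted article notes, an instance or two may be genuine, so a flag like this is a prompt for closer inspection rather than automatic exclusion.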

Why does it happen?

Research suggests that there are four reasons participants typically satisfice:

1. Participant motivation. Survey participants are often asked to spend time and effort on a survey without much apparent reward or benefit. One of the biggest contributors to satisficing is lack of motivation to answer well.

2. Survey difficulty. The harder a survey is to answer and the more mental energy that needs to go into thinking about the best answers, the more likely participants are to give up and choose an easy way through.

3. Participant ability. Those who find the questions difficult, either because they are less able, or because they have not had a chance to consider the issues being asked in other contexts are more likely to satisfice.

4. Participant fatigue. The longer a survey is, the more likely the participant is to give up and start satisficing.

So how can we reduce satisficing? The answer is to address these reasons in our survey design. I’ll suggest some ways of doing this in a follow-up post.

I hope thinking about satisficing might give you better survey results with your Questionmark surveys!

How to transform recruitment and hiring with online testing


Posted by Julie Delazyn

Did you know that 80 percent of employee turnover is a result of poor hiring practices? With turnover costs as high as $20,000 per exiting employee, improving the screening and testing process is a priority for every HR professional.

Online testing is making the hiring process easier and more effective. In this webinar, “How to transform recruitment and hiring with online testing,” we’ll share best practices and a case study from Canada Post Corporation.


Join us on June 14 at 12:00 p.m. EDT. Click here to reserve your spot!

The webinar is designed to give you an up-close look at the latest assessment technologies. To learn more, I spoke to Glen Budgell, PhD, senior strategic HR advisor at Human Resource Systems Group (HRSG), who will be leading the session. Glen has over thirty years of experience managing the promotion, development and delivery of consulting services in the area of assessment and testing.

Here is a snippet from my conversation with Glen:

How can online testing transform recruitment and hiring?

Online testing has transformed the recruitment and hiring landscape in many ways. The days in which employers were limited to local talent are over: online tests can be administered at the click of a button to job candidates in remote regions, thereby increasing both the diversity of the talent pool and the inclusiveness of the workplace. Standardized testing (whether paper-and-pencil or computer-based) provides job candidates with a sense of fairness in the hiring process. When you combine this sense of fairness with an innovative online test that is both job relevant and engaging, you really “wow!” the job candidate, and that is one of our goals at HRSG: to wow our clients and their job candidates with innovative employment tests.

Online testing platforms such as Questionmark provide employers and firms like HRSG the tools to efficiently develop and deliver assessments. At HRSG, we use Questionmark to develop employment tests for our clients so that they can identify the right talent faster, and at a lower cost.

What are some best practices that are important to developing a strong assessment process?

Traditional testing best practices are always relevant, and we strongly urge our clients to conduct the required job/task analysis before choosing or building an employment test. In addition, and especially if the test is being built in house, you should ensure that the test is both job relevant and linked to the abilities and skills identified in the job analysis.

Of course, online and computer-based testing has introduced a myriad of other challenges. Companies must ensure that the testing software they use is secure and that candidate data is protected. We rely on the expertise of Questionmark to provide our clients with the required security and data protection features mandated by the ISO and ICT standards. Furthermore, we share with our clients methods and best practices for cheating prevention and analysis, monitoring the web for leaked test content and, most importantly, providing candidates with technical support and accessibility features when required.

What tools will attendees walk away from the webinar with?

We’ll discuss:

  • When to buy an off-the-shelf assessment and when to build in house
  • How to enhance the candidate experience with online testing
  • How to develop and use innovative item types
  • The importance of candidate perceptions of online testing
  • Best practices for unproctored internet testing
  • A case study describing best practices for high-volume testing

Thank you, Glen, for taking the time to speak to us!

We look forward to seeing you at this complimentary webinar on Tuesday, June 14 at 12:00 p.m. EDT.

Click here to register and save your spot.

5 Steps to Better Tests

Posted by Julie Delazyn

Creating fair, valid and reliable tests requires starting off right: with careful planning. With that foundation, you will save time and effort while producing tests that yield trustworthy results.

Five essential steps for producing high-quality tests:

1. Plan: What elements must you consider before crafting the first question? How do you identify key content areas?

2. Create: How do you write items that target the right cognitive level and avoid bias and stereotyping?

3. Build: How should you build the test form and set accurate pass/fail scores?

4. Deliver: What methods can be implemented to protect test content and discourage cheating?

5. Evaluate: How do you use item-, topic-, and test-level data to assess reliability and improve quality?

Download this complimentary white paper full of best practices for test design, delivery and evaluation.

 

Job Task Analysis Surveys Legally Required?


Posted by John Kleeman

I had a lot of positive feedback on my blog post Making your Assessment Valid: 5 Tips from Miami. There is a lot of interest in how to ensure your assessment is valid, ensuring that it measures what it is supposed to measure.

If you are assessing for competence in a job role or for promotion into a job role, one critical step in making your assessment valid is to have a good, current analysis of what knowledge, skills and abilities are needed to do the job role. This is called a job task analysis (JTA), and the most common way of doing this analysis is to conduct a JTA Survey.

In a JTA Survey, you ask existing people in the job role, or other experts, what tasks they do. A common practice is to survey them on how important each task is, how difficult it is and how often it is done. The resulting reports then guide the construction of the test blueprint: which topics to cover and how many questions to include on each.
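The step from survey ratings to a blueprint can be sketched numerically. The example below is an illustrative assumption, not a prescribed standard: the task names, the 1–5 rating scales, and the weighting formula (importance × (difficulty + frequency)) are all invented for the sketch, and real test developers choose and defend their own weighting scheme.

```python
# Hypothetical JTA survey averages on 1-5 scales for three job tasks.
tasks = {
    "Write incident reports": {"importance": 5, "difficulty": 3, "frequency": 4},
    "Schedule shifts":        {"importance": 3, "difficulty": 2, "frequency": 5},
    "Brief senior officers":  {"importance": 4, "difficulty": 4, "frequency": 2},
}

def blueprint(tasks, total_questions=40):
    """Weight each task from its ratings, then allocate the question
    budget to topics in proportion to those weights."""
    weights = {name: r["importance"] * (r["difficulty"] + r["frequency"])
               for name, r in tasks.items()}
    scale = total_questions / sum(weights.values())
    return {name: round(w * scale) for name, w in weights.items()}

print(blueprint(tasks))
```

Whatever formula is used, the point the court ruling below makes is that this mapping from job analysis to question counts must actually exist and be documented.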

If you cannot show that your assessment matches the requirements of a job, then your assessment is not only invalid but it is likely unfair — if you use it to select people for the job or measure competence in the job. And if you use an invalid assessment to select people for promotion or recruitment into the job, you may face legal action from people you reject.

Not only is this common sense, but it was also confirmed by a recent US district court ruling against the Boston Police Department. In this case, sergeants who had been rejected for promotion to lieutenant following an exam sued on the grounds that the assessment was unfair, and won.

The judge ruled that the exam was not sufficiently valid, because it omitted many job skills crucial for a police lieutenant role, and so it was not fair to be used to select for the role (see news report).

The 82-page judge’s ruling sets out in detail why the exam was unfair. He references the Uniform Guidelines on Employee Selection Procedures which state:

“There should be a job analysis which includes an analysis of the important work behavior(s) required for successful performance and their relative importance”

But the judge ruled that although a job analysis had been done, it had not been used properly in the test construction process. He said:

“When using a multiple choice exam, the developer must convert the job analysis result into a test plan to ensure a direct and strong relationship between the job analysis and the exam.”

However, in this case, the job analysis was not used sufficiently well to construct the exam. The judge went on to say:

“The Court cannot find, however, that the test plan ensured a strong relationship between the job analysis and the exam. … too many skills and abilities were missing from the … test outline.”

Crucially, he concluded:

“And a high score on the … exam simply was not a good indicator that a candidate would be a good lieutenant”.

Due to the pace of business change and technological advance, job roles are changing fast. Make sure you conduct regular JTAs of the roles in your organization and that your assessments match the most important job tasks. Find out more about Job Task Analysis here.

Learning & Bowling — Looking back on Miami

Posted by Julie Delazyn

We loved seeing so many customers at our conference in Miami earlier this month! We learned so much and greatly appreciate the feedback people shared with us there.

From our exciting panel discussion — which brought together experts from Pacific Gas and Electric, Amazon, The Hartford and HRSG to discuss their valid and reliable processes for assessment — to our networking dinners around Miami Beach, it was a 3-day event packed with excitement and learning opportunities.

You can re-live those moments by browsing through the photos taken during the conference and our Next-Gen release party.

Reporting back from sessions he attended and conversations that took place at Questionmark Conference 2016, Questionmark Chairman John Kleeman has come up with 5 tips for making your assessments valid.

  • What did you enjoy most about Questionmark Conference 2016?
  • If you didn’t have the chance to join us this year, what would you like to see at next year’s conference?
  • Leave us a comment below and stay in touch!
