Item Development Tips For Defensible Assessments

Posted by Julie Delazyn

Whether you work with low-stakes assessments, small-scale classroom assessments or large-scale, high-stakes assessments, understanding and applying some basic principles of item development will greatly enhance the quality of your results.

What began as a popular 11-part blog series has morphed into a white paper: Managing Item Development for Large-Scale Assessment, which offers sound advice on how to organize and execute the item development steps that will help you create defensible assessments. You can download your complimentary copy of the white paper here: Managing Item Development for Large-Scale Assessment

Online Proctoring: FAQs

Posted by John Kleeman

Online proctoring was a hot-button topic at Questionmark’s annual Users Conference. And though we’ve discussed the pros and cons in this blog and even offered an infographic comparing online and test-center proctoring, many interesting questions arose during the Ensuring Exam Integrity with Online Proctoring session I presented with Steve Lay at Questionmark Conference 2016.

I’ve compiled a few of those questions and offered answers to them. For context and additional information, make sure to check out a shortened version of our presentation. If you have any questions you’d like to add to the list, comment below!

What control does the online proctor have over the exam?

With Questionmark solutions, the online proctor can:

  • Converse with the participant
  • Pause and resume the exam
  • Give extra time if needed
  • Terminate the exam

What does an online proctor do if they suspect cheating?

Usually the proctor will terminate the exam and file a report with the exam sponsor.

What happens if the exam is interrupted, e.g. by someone coming into the room?

This depends on your security protocols. Some organizations may decide to terminate the exam and require another attempt. In other cases, if the interruption seems to be an honest mistake, the organization may allow the proctor to use discretion and permit the exam to continue.

Which is more secure, online or face-to-face proctoring?

On balance, they are about equally secure.

Unfortunately, there has been a lot of corruption with face-to-face proctoring. Online proctoring makes it much harder for participant and proctor to collude, as there is no direct contact and all communication can be logged.

But if the proctors are honest, it is easier to detect cheating aids in a face-to-face environment than via a video link.

What kind of exams is online proctoring good for?

Online proctoring works well for exams where:

  • The stakes are high and so you need the security of a proctor
  • Participants are in many different places, making travel to test centers costly
  • Participants are computer literate and have their own PCs, which they know how to use
  • Exams take 2-3 hours or less

If your technology or subject area changes frequently, then online proctoring is particularly good because you can easily give more frequent exams, without requiring candidates to travel.

What kind of exams is online proctoring less good for?

Online proctoring is less appropriate for exams where:

  • The exam is long and participants need breaks
  • Participants are local and it’s easy to get them into one place to take the exam
  • Participants do not have access to their own PC or are not computer literate

How do you prepare for online proctoring?

Here are some preparation tasks:

  • Brief and communicate with your participants about online proctoring
  • Define clearly the computer requirements for participants
  • Agree what happens in the event of incidents – e.g. suspected cheating, exam interruptions
  • Agree what ID is acceptable for participants and whether ID information is going to be stored
  • Make a candidate agreement or honor code which sets out what you expect from people to encourage them to take the exam fairly

I hope this Q&A and the linked presentation are helpful. You can find out more about Questionmark’s online proctoring solution here.

Satisficing: Why it might as well be a four-letter word

Posted by John Kleeman

Have you ever answered a survey without thinking too hard about it, just filling in questions in ways that seem half sensible? This behavior is called satisficing – when you give responses which are adequate but not optimal. Satisficing is a big cause of error in surveys and this post explains what it is and why it happens.

These are typical satisficing behaviors:

  • selecting the first response alternative that seems reasonable
  • agreeing with any statement that asks for agree/disagree answers
  • endorsing the status quo and not thinking through questions inviting change
  • in a matrix question, picking the same response for all parts of the matrix
  • responding “don’t know”
  • mentally coin flipping to answer a question
  • leaving questions unanswered

How prevalent is it?

Very few of us satisfice when taking a test; we usually try hard to give the best answers we can. But unfortunately for survey authors, half-hearted answering is very common in surveys, and satisficing is one of the most common causes of survey error.

For instance, a Harvard University study looked at a university survey with 250 items. Students were given a $15 cash incentive to complete it:

  • Eighty-one percent of participants satisficed at least in part.
  • Thirty-six percent rushed through parts of the survey too fast to be giving optimal answers.
  • The amount of satisficing increased later in the survey.
  • Satisficing impacted the validity and reliability of the survey and of any correlations made.

It is likely that for many surveys, satisficing plays an important part in the quality of the data.

What does it look like?

There are a few tricks for identifying satisficing behavior, but the first thing to look for when examining the data is straight-lining on grid questions. According to How to Spot a Fake, an article based on Shawna Fisher’s Practices that minimize online panelist satisficing behavior, “an instance or two may be valid, but often, straight-lining is a red flag that indicates a respondent is satisficing.” See the illustration for a visual.
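As an illustration (not part of the original article), straight-lining on a grid question is simple to flag programmatically. This sketch assumes a respondent's grid answers are stored as a list of numeric ratings; the function name and threshold are my own:

```python
def is_straight_lining(grid_responses, min_items=4):
    """Flag a respondent who gave the identical rating to every
    row of a grid (matrix) question - a common satisficing signal."""
    if len(grid_responses) < min_items:
        return False  # too few rows to judge reliably
    # A single distinct value across all rows means straight-lining
    return len(set(grid_responses)) == 1

# One respondent's answers to a 6-row agree/disagree grid (1-5 scale)
print(is_straight_lining([3, 3, 3, 3, 3, 3]))  # True: same answer every row
print(is_straight_lining([4, 2, 5, 3, 4, 1]))  # False: varied answers
```

As the article notes, one or two straight-lined grids may be legitimate, so a check like this is a screening aid, not proof of satisficing on its own.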

Why does it happen?

Research suggests that there are four reasons participants typically satisfice:

1. Participant motivation. Survey participants are often asked to spend time and effort on a survey without much apparent reward or benefit. One of the biggest contributors to satisficing is lack of motivation to answer well.

2. Survey difficulty. The harder a survey is to answer and the more mental energy that needs to go into thinking about the best answers, the more likely participants are to give up and choose an easy way through.

3. Participant ability. Those who find the questions difficult, either because they are less able, or because they have not had a chance to consider the issues being asked in other contexts are more likely to satisfice.

4. Participant fatigue. The longer a survey is, the more likely the participant is to give up and start satisficing.

So how can we reduce satisficing? The answer is to address these reasons in our survey design. I’ll suggest some ways of doing this in a follow-up post.

I hope thinking about satisficing might give you better survey results with your Questionmark surveys!

5 Steps to Better Tests

Posted by Julie Delazyn

Creating fair, valid and reliable tests requires starting off right: with careful planning. Starting with that foundation, you will save time and effort while producing tests that yield trustworthy results.

Five essential steps for producing high-quality tests:

1. Plan: What elements must you consider before crafting the first question? How do you identify key content areas?

2. Create: How do you write items that increase the cognitive load and avoid bias and stereotyping?

3. Build: How should you build the test form and set accurate pass/fail scores?

4. Deliver: What methods can be implemented to protect test content and discourage cheating?

5. Evaluate: How do you use item-, topic-, and test-level data to assess reliability and improve quality?

Download this complimentary white paper full of best practices for test design, delivery and evaluation.

 

Are Job Task Analysis Surveys Legally Required?

Posted by John Kleeman

I had a lot of positive feedback on my blog post Making your Assessment Valid: 5 Tips from Miami. There is a lot of interest in how to ensure your assessment is valid, meaning that it measures what it is supposed to measure.

If you are assessing for competence in a job role or for promotion into a job role, one critical step in making your assessment valid is to have a good, current analysis of what knowledge, skills and abilities are needed to do the job role. This is called a job task analysis (JTA), and the most common way of doing this analysis is to conduct a JTA Survey.

In a JTA survey, you ask people currently in the job role, or other experts, what tasks they do. A common practice is to survey them on how important each task is, how difficult it is and how often it is done. The resulting reports then guide the construction of the test blueprint: which topics to include and how many questions to write for each.
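To make this concrete, here is a hypothetical sketch of turning JTA survey ratings into rough blueprint weights. The task names, rating scales and the simple importance × difficulty × frequency score are illustrative assumptions of mine, not a Questionmark method:

```python
# Hypothetical JTA survey averages per task: importance, difficulty
# and frequency, each rated 1-5 by incumbents in the job role.
tasks = {
    "Write incident reports": {"importance": 4.6, "difficulty": 3.1, "frequency": 4.8},
    "Supervise patrol staff": {"importance": 4.9, "difficulty": 4.2, "frequency": 4.5},
    "Maintain equipment log": {"importance": 2.8, "difficulty": 1.9, "frequency": 3.0},
}

def blueprint_weights(tasks, total_items=60):
    """Allocate exam items to tasks in proportion to a simple
    importance x difficulty x frequency criticality score."""
    scores = {name: r["importance"] * r["difficulty"] * r["frequency"]
              for name, r in tasks.items()}
    total = sum(scores.values())
    # Each task gets a share of the item budget proportional to its score
    return {name: round(total_items * s / total) for name, s in scores.items()}

print(blueprint_weights(tasks))
```

In practice, real blueprints also apply expert judgment on top of any formula, for example dropping tasks that cannot be tested with multiple-choice items.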

If you cannot show that your assessment matches the requirements of a job, then your assessment is not only invalid but likely unfair if you use it to select people for the job or to measure competence in it. And if you use an invalid assessment to select people for promotion or recruitment, you may face legal action from people you reject.

Not only is this common sense, but it was also confirmed by a recent US district court ruling against the Boston Police Department. In this case, sergeants who had been rejected for promotion to lieutenant following an exam sued on the grounds that the assessment was unfair, and won.

The judge ruled that the exam was not sufficiently valid, because it omitted many job skills crucial for a police lieutenant role, and so it was not fair to be used to select for the role (see news report).

The 82-page judge’s ruling sets out in detail why the exam was unfair. He references the Uniform Guidelines on Employee Selection Procedures which state:

“There should be a job analysis which includes an analysis of the important work behavior(s) required for successful performance and their relative importance”

But the judge ruled that although a job analysis had been done, it had not been used properly in the test construction process. He said:

“When using a multiple choice exam, the developer must convert the job analysis result into a test plan to ensure a direct and strong relationship between the job analysis and the exam.”

However, in this case, the job analysis was not used sufficiently well to construct the exam. The judge went on to say:

“The Court cannot find, however, that the test plan ensured a strong relationship between the job analysis and the exam. … too many skills and abilities were missing from the … test outline.”

Crucially, he concluded:

“And a high score on the … exam simply was not a good indicator that a candidate would be a good lieutenant.”

Due to the pace of business change and technological advance, job roles are changing fast. Make sure that you conduct regular JTAs of roles in your organization and that your assessments match the most important job tasks. Find out more about Job Task Analysis here.

Making your Assessment Valid: 5 Tips from Miami

Posted by John Kleeman

A key reason people use Questionmark’s assessment management system is that it helps you create more valid assessments. To remind you, a valid assessment is one that genuinely measures what it is supposed to measure. Having an effective process to ensure your assessments are valid, reliable and trustworthy was an important topic at Questionmark Conference 2016 in Miami last week. Here is some advice I heard:

Tip 1: Everything starts from the purpose of your assessment. Define this clearly and document it well. A purpose that is not well defined or that does not align with the needs of your organization will result in a poor test. It is useful to have a formal process to kick off a new assessment to ensure the purpose is defined clearly and is aligned with business needs.

Tip 2: A Job Task Analysis survey is a great way of defining the topics/objectives for new-hire training assessments. One presenter at the conference sent out a survey to the top-performing 50 percent of employees in a job role and asked questions on a series of potential job tasks. For each job task, he asked how difficult it is (complexity), how important it is (priority) and how often it is done (frequency). He then used the survey results to define the structure of knowledge assessments for new hires, ensuring they aligned with needed job skills.

Tip 3: The best way to ensure that a workplace assessment starts and remains valid is continual involvement with Subject Matter Experts (SMEs). They help you ensure that the content of the assessment matches the content needed for the job and ensure this stays the case as the job changes. It’s worth investing in training your SMEs in item writing and item review. Foster a collaborative environment and build their confidence.

Tip 4: Allow your participants (test-takers) to feed back into the process. This will give you useful feedback to improve the questions and the validity of the assessment. It’s also an important part of being transparent and open in your assessment programme, which is useful because people are less likely to cheat if they feel that the process is well-intentioned. They are also less likely to complain about the results being unfair. For example, it’s useful to write an internal blog explaining why and how you create the assessments and to encourage feedback.

Tip 5: As the item bank grows and as your assessment programme becomes more successful, make sure to manage the item bank and review items. Retire items that are no longer relevant or when they have been overexposed. This keeps the item bank useful, accurate and valid.

There was lots more at the conference: excitement that Questionmark NextGen authoring is finally here, a live demo of our new easy-to-use Printing and Scanning solution, and lunch on the hotel terrace in the beautiful Miami spring sunshine, with Questionmark-branded sunglasses to keep cool.

There was a lot of buzz at the conference about documenting your assessment decisions and making sure your assessments validly measure job competence. There is increasing understanding that assessment is a process not a project, and also that to be used to measure competence or to select for a job role, an assessment must cover all important job tasks.

I hope these tips on making assessments valid are helpful. Click here for more information on Questionmark’s assessment management system.
