Item Development – Organizing a bias review committee (Part 1)

Posted by Austin Fossey

Once the content review is complete, it is time to turn the items over to a bias review committee. In previous posts, we have talked about methods for detecting bias in item performance using differential item functioning (DIF) analysis, but DIF analysis can only be done after the item has been delivered and item responses are available.
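For readers who want a concrete picture of what a post-hoc DIF screen involves, here is a minimal sketch using the Mantel-Haenszel procedure, one common DIF method (the post above does not specify a particular technique). It assumes dichotomously scored (0/1) item responses and uses total score as the matching variable; the function name and the sample data are illustrative, not from Questionmark's tools.

import numpy as np

def mantel_haenszel_delta(total_score, item_correct, group):
    # total_score: matching variable; item_correct: 0/1 responses to the
    # studied item; group: 0 = reference group, 1 = focal group.
    total_score = np.asarray(total_score)
    item_correct = np.asarray(item_correct)
    group = np.asarray(group)
    num = den = 0.0
    for s in np.unique(total_score):         # one 2x2 table per score stratum
        m = total_score == s
        a = np.sum(m & (group == 0) & (item_correct == 1))  # reference, correct
        b = np.sum(m & (group == 0) & (item_correct == 0))  # reference, incorrect
        c = np.sum(m & (group == 1) & (item_correct == 1))  # focal, correct
        d = np.sum(m & (group == 1) & (item_correct == 0))  # focal, incorrect
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    if num == 0 or den == 0:
        return None                          # not enough data to estimate
    return -2.35 * np.log(num / den)         # MH odds ratio on the ETS delta scale

# Illustrative data: deltas near 0 suggest little DIF; under the common ETS
# rules of thumb, |delta| >= 1.5 is typically flagged for review.
rng = np.random.default_rng(0)
total_score = rng.integers(0, 11, size=500)
group = rng.integers(0, 2, size=500)
item_correct = (rng.random(500) < total_score / 10).astype(int)
print(mantel_haenszel_delta(total_score, item_correct, group))

Because this kind of analysis requires real response data, it can only confirm or refute bias after the fact, which is exactly why the pre-delivery review described below matters.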

Your bias review committee is tasked with identifying sources of bias before the assessment is ever delivered, so that items can be edited or removed before they are presented to a participant sample (though you can conduct bias reviews at any stage of item development).

The Standards for Educational and Psychological Testing explain that bias occurs when the design of the assessment results in different interpretations of scores for subgroups of participants. This implies that some aspect of the assessment is affecting scores based on factors that are unrelated to the measured construct, a problem known as construct-irrelevant variance.

The Standards emphasize that a lack of bias is critical for supporting the overall fairness of the assessment, so your bias review committee will provide evidence to help demonstrate your compliance with the Standards. Before you convene your bias review committee, you should finalize a set of sensitivity guidelines that define the criteria for identifying sources of bias in your assessment.

As with your other committees, the members of this committee should be carefully selected based on their qualifications and representativeness, and they should not have been involved with any other test development processes like domain analysis, item writing, or content review. In his chapter in Educational Measurement (4th ed.), Gregory Camilli suggests building a committee of at least five to ten members who will be operating under the principle that “all students should be treated equitably.”

Camilli recommends carefully documenting all aspects of the bias review, including the qualifications and selection process for the committee members. The committee should be trained on the test specifications and the sensitivity guidelines that will inform their decisions. Just like item writing or content review trainings, it is helpful to have the committee practice with some examples before they begin their review.

Camilli suggests letting committee members review items on their own after they complete their training. This gives them each a chance to critique items based on their unique perspectives and understanding of your sensitivity guidelines. Once they have had time to review the items on their own, have your committee reconvene to discuss the items as a group. The committee should strive to reach a consensus on whether items should be retained, edited, or removed completely. If an item needs to be edited, they should document their recommendations for changes. If an item is edited or removed, be sure they document the rationale by relating their decision back to your sensitivity guidelines.

In the next post, I will talk about two facets of assessments that can result in bias (content and response process), and I will share some examples of publications that have recommendations for bias criteria you can use for your own sensitivity guidelines.

Check out our white paper: 5 Steps to Better Tests for best practice guidance and practical advice for the five key stages of test and exam development.

Austin Fossey will discuss test development at the 2015 Users Conference in Napa Valley, March 10-13. Register before Dec. 17 and save $200.

Full house at learning event in New Zealand

Posted by Rafael Lami Dozo

I posted recently about an upcoming three-day learning event in Auckland, New Zealand, focusing on assessment best practices. Now I'd like to update you on the great turnout, the excited customers, and the full house that participated in the workshop!

The Online Assessments Symposium organized by Business Toolbox was packed with learning opportunities: the first two days were devoted to instruction on best practices in creating assessments, and the third brought together industry experts to share advice about moving assessments online.

It was motivating to see academic and corporate Questionmark users sharing their experiences in successfully implementing assessments and enjoying some impressive case study presentations. I took these photos with my BlackBerry to give you a sense of the group that gathered.

We will continue to hold workshops like this one around the world, so stay tuned for our next location.
