Item Development – Organizing a bias review committee (Part 1)

Posted by Austin Fossey

Once the content review is complete, it is time to turn the items over to a bias review committee. In previous posts, we discussed methods for detecting bias in item performance using differential item functioning (DIF) analysis, but DIF analysis can only be conducted after items have been delivered and item response data are available.
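
As a reminder of what that post-delivery analysis involves, here is a minimal sketch of a Mantel-Haenszel DIF statistic in Python. The function name, toy data, and flagging note are my own illustrative assumptions, not Questionmark's implementation; a real analysis would also include significance testing and standard DIF classification rules.

```python
# Minimal Mantel-Haenszel DIF sketch (illustrative only).
# Assumes scored response data: one tuple per participant with a group
# label ("reference" or "focal"), a 0/1 score on the studied item, and
# a total test score used as the matching criterion.

from collections import defaultdict
from math import log

def mantel_haenszel_dif(records):
    """records: iterable of (group, item_score, total_score) tuples.
    Returns the common odds ratio and the ETS delta-scale statistic."""
    # Stratify participants by total score (the matching criterion).
    strata = defaultdict(lambda: {"A": 0, "B": 0, "C": 0, "D": 0})
    for group, item_score, total_score in records:
        cell = strata[total_score]
        if group == "reference":
            cell["A" if item_score == 1 else "B"] += 1
        else:
            cell["C" if item_score == 1 else "D"] += 1

    num, den = 0.0, 0.0
    for cell in strata.values():
        t = cell["A"] + cell["B"] + cell["C"] + cell["D"]
        if t == 0:
            continue
        num += cell["A"] * cell["D"] / t   # reference correct * focal incorrect
        den += cell["B"] * cell["C"] / t   # reference incorrect * focal correct

    odds_ratio = num / den if den else float("inf")
    # ETS delta scale: larger absolute values indicate more DIF
    # (|MH D-DIF| >= 1.5 is often treated as large).
    mh_d_dif = -2.35 * log(odds_ratio)
    return odds_ratio, mh_d_dif

# Example with made-up data (prints an odds ratio of 1.0, i.e., no DIF):
data = [("reference", 1, 10), ("focal", 0, 10),
        ("reference", 0, 10), ("focal", 1, 10),
        ("reference", 1, 12), ("focal", 1, 12),
        ("reference", 0, 8),  ("focal", 0, 8)]
print(mantel_haenszel_dif(data))
```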

Your bias review committee, by contrast, is tasked with identifying potential sources of bias before the assessment is ever delivered, so that items can be edited or removed before they reach a participant sample (though you can conduct bias reviews at any stage of item development).

The Standards for Educational and Psychological Testing explain that bias occurs when the design of the assessment results in different interpretations of scores for subgroups of participants. This implies that some aspect of the assessment affects scores based on factors unrelated to the measured construct, which is known as construct-irrelevant variance. For example, a math item wrapped in a baseball scenario may be harder for participants unfamiliar with the sport, even when they have mastered the math being measured.

The Standards emphasize that a lack of bias is critical for supporting the overall fairness of the assessment, so your bias review committee will provide evidence to help demonstrate your compliance with the Standards. Before you convene your bias review committee, you should finalize a set of sensitivity guidelines that define the criteria for identifying sources of bias in your assessment.

As with your other committees, the members of this committee should be carefully selected based on their qualifications and representativeness, and they should not have been involved in other test development processes such as domain analysis, item writing, or content review. In his chapter in Educational Measurement (4th ed.), Gregory Camilli suggests building a committee of five to ten members who operate under the principle that “all students should be treated equitably.”

Camilli recommends carefully documenting all aspects of the bias review, including the qualifications and selection process for the committee members. The committee should be trained on the test specifications and the sensitivity guidelines that will inform their decisions. As with item writing or content review training, it is helpful to have the committee practice on a few example items before they begin their review.

Camilli suggests letting committee members review items on their own after they complete their training. This gives them each a chance to critique items based on their unique perspectives and understanding of your sensitivity guidelines. Once they have had time to review the items on their own, have your committee reconvene to discuss the items as a group. The committee should strive to reach a consensus on whether items should be retained, edited, or removed completely. If an item needs to be edited, they should document their recommendations for changes. If an item is edited or removed, be sure they document the rationale by relating their decision back to your sensitivity guidelines.
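A simple structure can help keep that documentation consistent. The sketch below is my own illustration of one way to record each decision against a specific sensitivity guideline; the field names and guideline codes are hypothetical, not a prescribed format.

```python
# Illustrative record for documenting bias review decisions so each
# outcome can be traced back to a specific sensitivity guideline.

from dataclasses import dataclass
from typing import List

@dataclass
class BiasReviewDecision:
    item_id: str
    decision: str                  # "retain", "edit", or "remove"
    guideline_refs: List[str]      # sensitivity guideline(s) cited
    recommended_changes: str = ""  # required when decision == "edit"
    rationale: str = ""

# Example entry (hypothetical item and guideline code):
log = [
    BiasReviewDecision(
        item_id="ITEM-0042",
        decision="edit",
        guideline_refs=["SG-3.1 (culture-specific context)"],
        recommended_changes="Replace the baseball scenario with a neutral context.",
        rationale="Scenario assumes familiarity with baseball, which is construct-irrelevant.",
    ),
]
```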

In the next post, I will talk about two facets of assessments that can result in bias (content and response process), and I will share some examples of publications that have recommendations for bias criteria you can use for your own sensitivity guidelines.

Check out our white paper: 5 Steps to Better Tests for best practice guidance and practical advice for the five key stages of test and exam development.

Austin Fossey will discuss test development at the 2015 Users Conference in Napa Valley, March 10-13. Register before Dec. 17 and save $200.
