Item Development – Organizing a content review committee (Part 2)

Posted by Austin Fossey

In my last post, I explained the function of a content review committee and the importance of having a systematic review process. Today I’ll provide some suggestions for how you can use the content review process to simultaneously collect content validity evidence without having to do a lot of extra work.

If you want to get some extra mileage out of your content review committee, why not tack on a content validity study? Instead of asking them if an item has been assigned to the correct area of the specifications, ask them to each write down how they would have classified the item’s content. You can then see if topics picked by your content review committee correspond with the topics that your item writers assigned to the items.

There are several ways to conduct content validity studies, and a content validity study alone may not be sufficient evidence to support the overall validity of the assessment results. A full review of validity concepts is outside the scope of this article, but one way to check whether items match their intended topics is to have your committee members rate how well they think an item matches each topic on the specifications: a rating of 1 means they think the item matches the topic, a rating of -1 means they think it does not match, and a rating of 0 means they are not sure.

If each committee member provides their own ratings, you can calculate the index of congruence proposed by Richard Rovinelli and Ron Hambleton. You can then create a table of these indices to see whether the committee’s classifications correspond to the content classifications given by your item writers.
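To make this concrete, here is a minimal Python sketch of the calculation (the function name and data layout are my own; it assumes the standard Rovinelli–Hambleton formula I = N/(2N − 2) × (μ_k − μ̄), where N is the number of topics, μ_k is the committee’s mean rating for the topic in question, and μ̄ is the mean rating across all topics):

```python
def index_of_congruence(ratings, topic):
    """Rovinelli-Hambleton index of congruence for one item and one topic.

    ratings: one list per committee member, each holding a rating
             (1 = matches, 0 = unsure, -1 = does not match)
             for every topic on the specifications.
    topic:   zero-based index of the topic being checked.
    Returns a value between -1 (clear mismatch) and 1 (perfect match).
    """
    n_topics = len(ratings[0])
    n_judges = len(ratings)
    mean_topic = sum(r[topic] for r in ratings) / n_judges            # mu_k
    mean_all = sum(sum(r) for r in ratings) / (n_judges * n_topics)   # mu-bar
    return (n_topics / (2 * n_topics - 2)) * (mean_topic - mean_all)

# Three reviewers, ten topics; all agree the item measures Topic 5 only.
ratings = [[1 if t == 4 else -1 for t in range(10)] for _ in range(3)]
print(round(index_of_congruence(ratings, 4), 6))  # prints 1.0
```

An index near 1 for the topic the item writer assigned, and low or negative values everywhere else, supports the writer’s classification; middling values flag items the committee should discuss.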

The chart below compares item writers’ topic assignments for two items with the index of congruence derived from a content committee’s ratings of those items on an assessment with ten topics. Both groups agreed that Item 1 belongs to Topic 5 and Item 2 belongs to Topic 1. We also see that the content review committee was uncertain whether Item 1 measures Topic 2, and that some committee members felt Item 2 measures Topic 7.

[Figure: Comparison of content review committee’s index of congruence and item writers’ classifications of two items on an assessment with ten topics.]

Item Development – Organizing a content review committee (Part 1)

Posted by Austin Fossey

Once your items have passed through an initial round of edits, it is time for a content review committee to examine them. Remember that you should document the qualifications of your committee members, and if possible, recruit different people than those used to write the items or conduct other reviews.

In their chapter in Educational Measurement (4th ed.), Cynthia Schmeiser and Catherine Welch explain that the primary function of the content review committee is to verify the accuracy of the items with regard to the defined domain, including the content and cognitive classification of each item. The committee might answer questions like:

  • Given the information in the stem, is the item key the correct answer in all situations?
  • Is enough information provided in the item for candidates to choose an answer?
  • Given the information in the stem, are the distractors incorrect in all situations?
  • Would a participant with specialized knowledge interpret the item and the options differently from the general population of participants?
  • Is the item tagged to the correct area of the specifications (e.g., topic, subdomain)?
  • Does the item function at the intended cognitive level?

Other content review goals may be added depending on your specific testing purpose. For example, in their chapter in Educational Measurement (4th ed.), Brian Clauser, Melissa Margolis, and Susan Case observe that for certification and licensure exams, a content review committee might determine whether items are relevant to new practitioners—the intended audience for such assessments.

Schmeiser and Welch also recommend that the review process be systematic, meaning the committee should apply a consistent level of scrutiny and consistent decision criteria to every item they review. But how can you, as the test developer, keep things systematic?

One way is to use a checklist of the acceptance criteria for each item. By using a checklist, you can ensure that the committee reviews and signs off on each aspect of the item’s content. The checklist can also provide a standardized format for documenting problems that need to be addressed by the item writers. These checklists can be used to report the results of the content review, and they can be kept as supporting documentation for the Test Development and Revision requirements specified by the Standards for Educational and Psychological Testing.

In my next post, I’ll suggest some ways for you, as a test developer, to leverage your content review committee to gather content validity evidence for your assessment.

For best practice guidance and practical advice for the five key stages of test and exam development, check out our white paper: 5 Steps to Better Tests.