# Standard Setting: Methods for establishing cut scores

Posted by Greg Pope

My last post offered an introduction to standard setting; today I’d like to go into more detail about establishing cut scores. There are many standard setting methods used to set cut scores. These methods are generally split into two types: a) question-centered approaches and b) participant-centered approaches. A few of the most popular methods, with very brief descriptions of each, are provided below. For more detailed information on standard setting procedures and methods see the book, Setting Performance Standards: Concepts, Methods, and Perspectives, edited by Gregory Cizek and Robert Sternberg.

• Modified Angoff method (question-centered): Subject matter experts (SMEs) are generally briefed on the Angoff method and allowed to take the test with the performance levels in mind. SMEs are then asked to estimate, for each question, the proportion of borderline or “minimally acceptable” participants they would expect to answer the question correctly. The estimates are generally in p-value type form (e.g., 0.6 for item 1: 60% of borderline passing participants would answer this question correctly). Several rounds are generally conducted, with SMEs allowed to modify their estimates given different types of information (e.g., actual participant performance on each question, other SMEs’ estimates, etc.). The final cut score is then determined (e.g., by averaging the estimates or taking the median). This method is generally used with multiple-choice questions.
• I like a dichotomous modified Angoff approach where, instead of using p-value type statistics, SMEs are asked to simply provide a 0/1 for each question (“0” if a borderline acceptable participant would get the question wrong and “1” if a borderline acceptable participant would get it right).
• Nedelsky method (question-centered): SMEs make decisions on a question-by-question basis regarding which of the question distracters they feel borderline participants would be able to eliminate as incorrect. This method is generally used with multiple-choice questions only.
• Bookmark method (question-centered): Questions are ordered by difficulty (e.g., Item Response Theory b-parameters or Classical Test Theory p-values) from easiest to hardest. SMEs place a “bookmark” where each performance level (i.e., cut score) should fall (“As the questions get harder, at what point would a participant on the boundary of the performance level no longer be expected to answer correctly?”). This method can be used with virtually any question type (e.g., multiple-choice, multiple-response, matching, etc.).
• Borderline groups method (participant-centered): A description is prepared for each performance category. SMEs are asked to submit a list of participants whose performance on the test should be close to the performance standard (borderline). The test is administered to these borderline groups and the median test score is used as the cut score. This method can be used with virtually any question type (e.g., multiple-choice, multiple response, essay, etc.).
• Contrasting groups method (participant-centered): SMEs are asked to categorize the participants in their classes according to the performance category descriptions. The test is administered to all of the categorized participants and the test score distributions for each of the categorized groups are compared. Where the distributions of the contrasting groups intersect is where the cut score would be located. This method can be used with virtually any question type (e.g., multiple-choice, multiple response, essay, etc.).
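To make the aggregation step concrete, here is a minimal sketch in Python using made-up SME ratings and participant scores (not data from the post). It shows the common way a modified Angoff cut score is computed (average each SME's summed item expectations) and the borderline groups cut score (median test score of the borderline group); real standard-setting studies would of course involve multiple rounds and impact data.

```python
from statistics import mean, median

# Hypothetical modified Angoff ratings: each row is one SME's estimates of
# the proportion of borderline participants expected to answer each of four
# questions correctly.
angoff_ratings = [
    [0.6, 0.7, 0.5, 0.8],  # SME 1
    [0.5, 0.8, 0.6, 0.7],  # SME 2
    [0.7, 0.6, 0.5, 0.9],  # SME 3
]

def angoff_cut_score(ratings):
    """Sum each SME's item expectations to get that SME's recommended raw
    cut score, then average across SMEs."""
    return mean(sum(sme_row) for sme_row in ratings)

# Hypothetical borderline groups data: raw test scores of participants the
# SMEs nominated as borderline.
borderline_scores = [12, 14, 13, 15, 11, 13]

def borderline_cut_score(scores):
    """The borderline groups method takes the median score of the group."""
    return median(scores)

print(round(angoff_cut_score(angoff_ratings), 2))   # raw cut out of 4 points
print(borderline_cut_score(borderline_scores))      # raw cut on the test scale
```

The same pattern extends to the dichotomous 0/1 variant described above: each SME's row simply becomes zeros and ones, and the arithmetic is unchanged.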

I hope this was helpful and I am looking forward to talking more about an exciting psychometric topic soon!

### 3 Responses to “Standard Setting: Methods for establishing cut scores”

1. […] margin of error when setting the cut score…but this is another fun topic that I touched on in another post. I believe a best practice is to incorporate a confidence interval into the reporting of scores for […]

2. Dr. Rajasingam Shanmugam says:

Dear Greg Pope, we have a very dedicated standard setting committee at our medical college in Malaysia for the 4th and final year examinations, and I am one of the moderators. With respect to the modified Angoff method for the multiple essay questions, our panels were assessing them like the MCQs, by thinking in percentages. I advise them instead to decide how many items/responses, with the allocated marks, a borderline student can achieve. Am I doing the right thing? Please advise. Regards,

3. Austin Fossey says:

Dear Dr. Rajasingam,
If your essay items are polytomous, then it is correct to set the standard by asking panelists to determine the expected number of points a minimally qualified candidate would achieve on the item. From there, the iterative and aggregation processes are the same. For more information, I would refer you to Hambleton and Pitoniak’s chapter entitled “Setting Performance Standards” in Educational Measurement, 4th Edition (Ed. Brennan).
Sincerely,
Austin Fossey
Reporting and Analytics Manager
Questionmark