Item Analysis Analytics Part 2: Conducting an Item Analysis
Posted by Greg Pope
In my last post I talked a bit about Classical Test Theory (CTT) to lay the foundation for a discussion of item analysis analytics using CTT. In this post I will talk about the high-level purpose and process of conducting an item analysis. The general purpose of an item analysis is to find out whether the questions composing an assessment are performing in a psychometrically appropriate and defensible manner. Item analyses help us determine whether questions need to be improved (sent back to development), sent to the scrap heap, or left as they are because they meet all the criteria for inclusion in an assessment.
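To make this concrete, here is a minimal sketch of two classical (CTT) statistics that typically drive those keep/revise/discard decisions: item difficulty (the p-value, i.e., the proportion of participants answering correctly) and item discrimination (a corrected item-total point-biserial correlation). The response matrix is hypothetical, and this is an illustrative calculation, not any particular product's implementation.

```python
# Illustrative CTT item statistics on a hypothetical response matrix:
# rows are participants, columns are items, 1 = correct, 0 = incorrect.
from statistics import mean, pstdev

responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
]

def item_stats(responses):
    totals = [sum(row) for row in responses]
    stats = []
    for j in range(len(responses[0])):
        item = [row[j] for row in responses]
        p = mean(item)  # difficulty: proportion of correct answers
        # Corrected item-total correlation: correlate the item with the
        # total score *excluding* this item, to avoid inflating the estimate.
        rest = [t - i for t, i in zip(totals, item)]
        sx, sy = pstdev(item), pstdev(rest)
        if sx == 0 or sy == 0:
            r = 0.0  # no variance, discrimination is undefined
        else:
            cov = mean(x * y for x, y in zip(item, rest)) - mean(item) * mean(rest)
            r = cov / (sx * sy)
        stats.append((p, r))
    return stats

for j, (p, r) in enumerate(item_stats(responses), start=1):
    print(f"Item {j}: difficulty={p:.2f}, discrimination={r:.2f}")
```

In a review meeting, items with very extreme difficulty or low (especially negative) discrimination would be the ones flagged for revision or removal.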
I’d like to share a tip about how some of my colleagues decide whether to revise a problematic-looking question or throw it away as “unfixable”: they set a review time limit for each question under review. In an item analysis review meeting, which may involve psychometricians, subject matter experts, exam developers, and other stakeholders, each question is reviewed for no more than a pre-determined period of time, say 10 minutes. If an effective revision does not become apparent within that time, the question goes to the scrap bin and SMEs develop a new question to take its place.
Many organizations beta test questions in order to choose those that should be included in an actual assessment. Questionmark Perception offers the delivery status field “Experimental,” which allows beta questions to be interspersed within an actual assessment form but not scored, and therefore not counted toward participants’ assessment scores. More on the topic of beta testing another time though…
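The mechanics of an unscored beta item can be sketched in a few lines. This is a hypothetical model, not Questionmark Perception’s actual schema: the `experimental` flag and field names here are illustrative.

```python
# Hypothetical sketch: an "experimental" flag keeps a beta item out of the
# score calculation even though it is delivered on the same assessment form.
items = [
    {"id": "Q1", "experimental": False, "correct": True},
    {"id": "Q2", "experimental": True,  "correct": False},  # beta item, unscored
    {"id": "Q3", "experimental": False, "correct": True},
]

scored = [i for i in items if not i["experimental"]]
score = sum(i["correct"] for i in scored) / len(scored)
print(f"Score: {score:.0%}")  # Q2 is delivered but never counted
```

Because the beta item never enters the denominator or numerator, its (possibly poor) psychometric behavior cannot affect participant scores while response data is being gathered on it.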
In my next post I will discuss some essential things to look for in an Item Analysis Report.