Face Validity: Participants Connecting Assessments to Constructs
Posted by Austin Fossey
I’d like to follow up my April 10 post about argument-based validity with details about face validity and a note about how these two concepts relate to each other.
The concept of face validity has been around for a while, but in his 1947 article, "A Critical Examination of the Concepts of Face Validity," Charles Mosier gave a precise definition to what had previously been a nebulous buzzword.
Nowadays, we generally think of face validity as the degree to which an instrument measures a construct in a way that is meaningful to the layperson. To put it another way, is it clear to your participants how the test relates to the construct? Do they understand how the assessment design relates to what it claims to measure?
For an example of assessments that may have face validity issues, let's consider college entrance exams. Many students find fault with these assessments, correctly noting that vocabulary and math multiple-choice items are not the only indicators of intelligence. But here is the catch: these are not tests of intelligence!
Many such assessments are designed to correlate with academic performance during the first year of college. So while the assessment is very useful for college entrance committees, the connection between the instrument and its consequences is not immediately apparent to many of the participants. In this case, we have high criterion validity and lower face validity.
There are cases where we may not want face validity. For example, a researcher may be delivering a survey in which he or she does not want participants to know specifically what is being measured. The researcher may be concerned that knowledge of the construct would lead participants to engage in hypothesis guessing, which is a threat to the external validity of the study. To guard against this, the researcher may design the survey instrument to deliberately obscure the construct, or may use items that correlate with the construct without referencing it directly.
Face validity is an issue that many of us put on the back burner because we need to focus on criterion, construct, and content validity. Face validity is difficult to measure, and it should have little bearing on the inferences or consequences of the assessment. However, for those of us who are accountable to our participants (e.g., organizations selling certification assessments), face validity can play a big part in customer satisfaction and public perception.
Here is where I believe argument-based validity can be very helpful. Many people can understand the structure of argument-based validity, even if they are not familiar with the formal machinery of warrants and rebuttals. By using argument-based validity to frame our validity documentation, we map out how performance on the assessment relates to the construct inferences and to the consequences that matter to the participant.