Face Validity: Participants Connecting Assessments to Constructs

Posted by Austin Fossey

I’d like to follow up my April 10 post about argument-based validity with details about face validity and a note about how these two concepts relate to each other.

The concept of face validity has been around for a while, but in his 1947 article, “A Critical Examination of the Concepts of Face Validity,” Charles Mosier defined what had previously been a nebulous buzzword.

Nowadays, we generally think of face validity as the degree to which an instrument measures a construct in a way that is meaningful to the layperson. To put it another way, is it clear to your participants how the test relates to the construct? Do they understand how the assessment design relates to what it claims to measure?

For an example of assessments that may have face validity issues, let’s consider college entrance exams. Many students find fault with these assessments, correctly noting that vocabulary and math multiple choice items are not the only indicators of intelligence. But here is the catch: these are not tests of intelligence!

Many such assessments are designed to correlate with academic performance during the first year of college. So while the assessment is very useful for college entrance committees, the connection between the instrument and its consequences is not immediately apparent to many of the participants. In this case, we have high criterion validity and lower face validity.
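
To make the contrast concrete, criterion-related validity is usually reported as a correlation between test scores and the criterion measure (here, first-year academic performance). Below is a minimal sketch of that calculation in Python; the exam scores and GPAs are invented purely for illustration.

```python
# Minimal sketch: criterion-related validity evidence as a correlation between
# entrance-exam scores and first-year GPA. All numbers are invented for illustration.
from math import sqrt

exam_scores = [1180, 1320, 1010, 1450, 1250, 1100, 1390, 1230]
first_year_gpa = [3.1, 3.6, 2.7, 3.9, 3.3, 2.9, 3.7, 3.2]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

print(f"Validity coefficient: {pearson_r(exam_scores, first_year_gpa):.2f}")
```

A strong positive coefficient supports the entrance committee’s use of the scores, even though nothing in the calculation requires the test to look meaningful to the people taking it.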

There are cases when we may not want face validity. For example, a researcher may be delivering a survey where he or she does not want participants to know specifically what is being measured. In such a scenario, the researcher may be concerned that knowledge of the construct might lead participants to engage in hypothesis guessing, which is a threat to the external validity of the study. In such cases, the researcher may design the survey instrument to deliberately obfuscate the construct, or the researcher may use items that correlate with the construct but don’t reference the construct directly.

Face validity is an issue that many of us put on the back burner because we need to focus on criterion, construct, and content validity. Face validity is difficult to measure, and it should have little bearing on the inferences or consequences of the assessment. However, for those of us who are accountable to our participants (e.g., organizations selling certification assessments), face validity can play a big part in customer satisfaction and public perception.

Here is where I believe argument-based validity can be very helpful. Many people can understand the structure of argument-based validity, even if they may not understand the warrants and rebuttals. By using argument-based validity to frame our validity documentation, we map out how performance on the assessment relates to the construct inferences and to the consequences that matter to the participant.

See you in San Antonio March 4 – 7, 2014

Posted by Joan Phaup

Building on the success of this year’s Questionmark Users Conference, we are already planning for the next one!

We look forward to seeing customers at the Grand Hyatt San Antonio, on the city’s famous Riverwalk, from March 4 to 7 next year.

Here’s some of the feedback we’ve received from people who joined us in Baltimore several weeks ago:

  • “The conference was absolutely awesome. I can’t tell you how much I’ve already implemented with our assessment process.”
  • “It’s a fantastic place to learn about what other customers are using Questionmark for and bouncing ideas off them.”
  • “The networking was great! I left the conference feeling like a member of the Questionmark family.”
  • “Not going to the conference would be a huge missed opportunity.”
  • “Outstanding!”
  • “I learned so much!”

Mark your calendar for next March! Let us know what you’d like to learn at the next conference — and dream a little as you watch this brief video about our 2014 destination.

Video: http://www.youtube.com/watch?v=gbSjXNeA1CA

New Questionmark OnDemand release enhances analytics and mobile delivery

Posted by Jim Farrell

Questionmark has just released a major upgrade of our OnDemand platform, and I want to highlight some of the great new features and functionality now available to our customers.

Let’s start with my favorite: a new OData API that allows Questionmark customers to access the data in their results warehouse database and create reports using third-party tools such as PowerPivot for Excel and Tableau. Through a client, a user makes a request to the data service, which processes the request and returns an appropriate response.

You can use just about any client to access the OData API as long as it can make HTTP requests and parse XML responses. Wow… that’s technical! But the power of the new OData API is that it liberates your data from the results warehouse and lets you build custom reports, create dashboards, or feed results data into other business intelligence tools.
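
As a rough illustration of what “just about any client” means in practice, here is a minimal Python sketch that requests an OData collection over HTTP and parses the Atom XML response. The service URL, credentials and collection name are placeholders, not Questionmark’s actual values; consult the OnDemand documentation for the real endpoint and authentication details.

```python
# Minimal OData client sketch: issue an HTTP GET and parse the Atom XML feed.
# The URL, credentials and collection name below are placeholders for illustration.
import requests
import xml.etree.ElementTree as ET

SERVICE_ROOT = "https://ondemand.example.com/odata"  # placeholder service root
ATOM = "{http://www.w3.org/2005/Atom}"               # Atom namespace used in OData XML feeds

def fetch_entry_titles(collection, user, password):
    """Request one OData collection and return the title of each Atom entry."""
    response = requests.get(f"{SERVICE_ROOT}/{collection}",
                            auth=(user, password), timeout=30)
    response.raise_for_status()
    feed = ET.fromstring(response.content)
    return [entry.findtext(f"{ATOM}title") for entry in feed.findall(f"{ATOM}entry")]

if __name__ == "__main__":
    for title in fetch_entry_titles("Results", "username", "password"):  # placeholder names
        print(title)
```

The same request could just as easily come from PowerPivot, Tableau or any other tool that speaks HTTP and XML, which is what makes the API useful for dashboards and custom reports.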


The OData API is not the only update we have made to Analytics. The new Assessment Content Report allows users to review participant comments for an entire assessment, a topic, or a specific question. Enhancements to the Item Analysis report include the ability to ignore question and assessment revisions, and the report now also supports our dichotomously scored Multiple Response, Matching, and Ranking question types.

Another improvement I want to highlight is the way Questionmark now works with mobile assessments. An updated template for assessments taken on mobile devices embraces responsive design, enhancing our ability to author once and deploy anywhere. The new mobile offering supports Drag and Drop and Hotspot question types, and Flash questions can now run on all Flash-enabled mobile devices.

Click here for more details about this new release of Questionmark OnDemand.

To Your Health! Good practice for competency testing in laboratories

Posted by John Kleeman

In the world of health care, from pathology labs to medical practitioners to pharmaceutical manufacturers, a mistake can mean much more than a regulatory fine or losing money – people’s lives and health are at stake. Hospitals, laboratories and other medical organizations have large numbers of people and need effective systems to make them work well together.

I’ve been learning about how assessments are used in the health care sector. Here is the first in a series of blog articles on the theme of “learning from health care”.

In this article, I’d like to share some of what I’ve learned about how pathology and other health care laboratories approach competency assessment. Laboratory personnel have to work tirelessly and without error to deliver good quality, reliable pathology results. And mistakes are costly – as the US College of American Pathologists (CAP) states in its trademarked motto, “Every number is a life”. I think there is a lot we can all learn from how they do competency testing.

A good place to start is with the World Health Organization (WHO). Their training on personnel management reminds us that “personnel are the most important laboratory resource”, and they promote competency assessment based on a job description and task-specific training, following the sequence Job Description → Task-specific Training → Competency Assessment → Competency Recognition.

WHO advise that competency assessments should be conducted regularly (usually once or twice a year) and they recommend observational assessments for many areas of competence:  “Observation is the most time-consuming way to assess employee competence, but this method is advised when assessing the areas that may have a higher impact on patient care.” Their key steps for conducting observational assessments are:

  • The assessor and the employee agree on a time for the assessment in advance
  • The assessment is done on routine work tasks
  • To avoid subjectivity and bias, the assessment is recorded on a fixed checklist, with everyone assessed the same way
  • The results of the assessment are recorded and kept confidential, but shared with the employee
  • If remediation is needed, an action plan involving retraining is defined and agreed with the employee

WHO’s guidance is international. Here is some additional guidance from the US, drawn from a 2012 presentation by CAP’s inspections team lead on competency assessment for pathology labs. This advice seems to make sense in a wider context:

  • If it’s not documented, it didn’t happen!
  • You need to do competency assessment on every person on every important system they work with
  • If employees who are not in your department or organization contribute significantly to the work product, you need to assess their competence too; otherwise the quality of your work product suffers
  • Competency assessment often contains quizzes/tests, observational assessments, review of records, demonstration of taking corrective action and troubleshooting
  • If people fail competency assessment, you need to re-train, re-assess and document that

If your organization relies on employees working accurately, I hope you find this valuable. I will share more of what I’m learning in future articles.

SlideShare presentation on writing high-complexity test items

Posted by Julie Delazyn

Writing high-quality test items is difficult, but writing questions that go beyond checking knowledge is even more complex.

James Parry, E-Testing Manager at the U.S. Coast Guard Training Center in Yorktown, Virginia, offered some valuable tips on advanced test item construction during a peer discussion at this year’s Questionmark Users Conference.

The PowerPoint slides from this session will help you distinguish among three levels of test items:

  • Low-complexity – requiring knowledge of single facts
  • Medium-complexity – requiring test takers to know or derive multiple facts
  • High-complexity – requiring test takers to analyze and evaluate multiple facts to solve problems (often presented as scenarios)

The slides relate these levels to Bloom’s Taxonomy and Gagne’s Nine Events of Instruction and offer pointers for writing performance-based test items based on clear objectives.

Enjoy the presentation below, and save March 4 – 7 next year for the Questionmark 2014 Users Conference in San Antonio, Texas.

How tests help online learners stay on task

Posted by Joan Phaup

Online courses offer a flexible and increasingly popular way for people to learn. But what about the many distractions that can cause a student’s mind to wander off the subject at hand?

According to a team of Harvard University researchers, administering short tests to students watching video lectures can decrease mind-wandering, increase note-taking and improve retention.

The paper, “Interpolated memory tests reduce mind wandering and improve learning of online lectures,” by Harvard Postdoctoral Fellow Karl K. Szpunar, Research Assistant Novall Y. Khan and Psychology Professor Daniel L. Schacter, was published this month in Proceedings of the National Academy of Sciences (PNAS) in the U.S.

The team conducted two experiments in which they interspersed online lectures with memory tests and found that such tests can:

  • help students pay more sustained attention to lecture content
  • encourage task-relevant note-taking
  • improve learning
  • reduce students’ anxiety about final tests

“Here we provide evidence that points toward a solution for the difficulty that students frequently report in sustaining attention to online lectures over extended periods,” the researchers say.


In one of the experiments, a group of students watched a 21-minute lecture presented in four segments of about five minutes each. After each segment, students were asked to do some math problems. Some students were then tested on the material from the lecture, while others (the “not tested” group) did more math problems.

This research seems to indicate that including tests or quizzes could make online courses more successful. So yes! Use assessments to reinforce what people are learning in your own courses. Whatever type of information you are presenting online – a lecture, an illustration or text – you can help students stay focused by embedding assessments right on the same page as your learning materials.

A previous post on this blog offers an example of how embedded quizzes are being used to engage learners. You can read more about the recently published research, including an interview with Szpunar and Schacter, in the Harvard Gazette. You can read the paper here.
