SlideShare presentation on writing high-complexity test items

Posted by Julie Delazyn

Writing high-quality test items is difficult, but writing questions that go beyond checking knowledge is even more complex.

James Parry, E-Testing Manager at the U.S. Coast Guard Training Center in Yorktown, Virginia, offered some valuable tips on advanced test item construction during a peer discussion at this year’s Questionmark Users Conference.

The PowerPoints from this session will help you distinguish among three levels of test items:

  • Low-complexity – requiring knowledge of single facts
  • Medium-complexity – requiring test takers to know or derive multiple facts
  • High-complexity – requiring test takers to analyze and evaluate multiple facts to solve problems (often presented as scenarios)

The slides relate these levels to Bloom’s Taxonomy and Gagne’s Nine Events of Instruction and offer pointers for writing performance-based test items based on clear objectives.

Enjoy the presentation below, and save March 4 – 7 next year for the Questionmark 2014 Users Conference in San Antonio, Texas.

Performance testing versus knowledge testing

Posted by Joan Phaup

Art Stark is an instructor at the United States Coast Guard National Search and Rescue School – and a longtime Questionmark user.

He will team up with James Parry, Test Development/E-Testing Manager at the Coast Guard’s Performance Systems Branch, to share a case study at the Questionmark Users Conference in Baltimore March 3 – 6.

I’m looking forward to hearing about the Coast Guard’s progress in moving from knowledge-based tests to performance-based tests. Here’s how Art explains the basic ideas behind this.

Tell me about your experience with performance-based training at the Coast Guard.

All Coast Guard training is performance-based. At the National Search and Rescue School we’ve recently completed a course re-write and shifted more from knowledge-based assessments to performance-based assessments. Before coming to the National SAR School, I was an instructor and boat operator trainer on Coast Guard small boats. Everything we did was 100% performance-based. The boat was the classroom and we had standards and objectives we had to meet.

How does performance testing differ from knowledge testing?

To me, knowledge-based testing is testing to the lowest common denominator. All through elementary and high school we have been tested at the knowledge level and very infrequently at a performance level. Think of a test you may have crammed for: as soon as the test was over, you promptly forgot the information. Most of the time, that was just testing knowledge.

Performance testing is actually being able to observe and evaluate the performance while it is occurring. Knowledge testing is relatively easy to develop; performance testing is much harder and much more expensive to create. With budget reductions, it is becoming harder and harder to develop the facilities we need for performance testing, so we need to find new, less expensive ways to test performance.

It takes a much more concerted effort to develop knowledge-application test items than to develop simple knowledge test items. When a test is geared to knowledge only, it does not give the evaluator a good assessment of the student’s real ability. An example would be applying for a job as a customer service representative: the interview often includes questions that actually test the application of knowledge, such as “You are approached by an irate customer. What actions do you take…?”

How will you address this during your session?

We’ll look at using written assessments to test performance objectives, which requires creating test items that apply knowledge instead of just recalling it. Drawing on Bloom’s Taxonomy, I focus on the third level, application. I’ll be showing how to bridge the gap from knowledge-based testing to performance-based testing.

What would you and Jim like your audience to take away from your presentation?

A heightened awareness of using written tests to evaluate performance.

You’ve attended many of these conferences. What makes you return each year?

The ability to connect with other professionals and increase my knowledge and awareness of advances in training. Meeting and being with good friends in the industry.

Check out the conference program and register soon.

What is the Angoff Method?

Posted by Julie Delazyn

When creating tests that define levels of competency as they relate to performance, it’s essential to use a reliable method for establishing defensible pass/fail scores.

One of these is the Angoff Method, which uses a focus-group approach for this process. This method has a strong track record and is widely accepted by testing professionals and courts.

Subject-matter experts (SMEs) review each test question and predict what proportion of minimally qualified candidates would answer the item correctly. The average of the judges’ predictions across all test questions is used to calculate the passing percentage (cut score) for the test.
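To make the arithmetic concrete, here is a minimal Python sketch of that calculation. The judges, items, and ratings are entirely hypothetical, and Questionmark does not require any scripting; the point is simply that each judge estimates, for every item, the probability that a minimally qualified candidate would answer it correctly, and the averaged estimates become the recommended cut score.

    # Minimal illustration of the Angoff cut-score arithmetic.
    # Ratings are hypothetical: each judge estimates, for every item,
    # the probability (0.0 - 1.0) that a minimally qualified candidate
    # would answer that item correctly.
    ratings = {
        "judge_1": [0.70, 0.55, 0.80, 0.65, 0.90],
        "judge_2": [0.75, 0.60, 0.85, 0.60, 0.85],
        "judge_3": [0.65, 0.50, 0.80, 0.70, 0.95],
    }

    num_items = len(next(iter(ratings.values())))

    # Average each judge's ratings across items, then average across judges.
    judge_means = [sum(r) / num_items for r in ratings.values()]
    cut_score = sum(judge_means) / len(judge_means)

    print(f"Recommended cut score: {cut_score:.1%}")  # about 72.3% with these ratings

In many Angoff workshops the judges also discuss items where their ratings diverge and revise them in a second round, but the core calculation is just this average.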

Basing cut scores on empirical data instead of choosing arbitrary passing scores helps test developers produce legally defensible tests that meet the Standards for Educational and Psychological Testing. The Angoff Method offers a practical way to achieve this.

View this SlideShare presentation to learn more: