Posted By Doug Peterson
While many people tend to think of quizzes, tests and exams as being made up of questions, professionals in the assessment industry typically use the term “item” because the question is only a part of the overall interaction with the learner, along with choices, scoring and feedback.
Well-written items can be used to assess what a learner needs to learn prior to a learning event as well as what they *have* learned after the learning event. Well-written items promote learning and memory recall, and help retain knowledge, skills and/or abilities over time. But writing good items isn’t as easy as it looks.
I’ll be devoting a few blog posts to some pointers about item writing and hope you find them helpful.
Today, let’s consider these three important qualities of well-written items – they need to be fair, valid and reliable.
First, an item needs to be fair. “Trick” items, or confusing or misleading items, do not allow test-takers to show their true understanding of the subject matter, and stakeholders in the testing process would not be able to trust the results. The goal of an item is not to prove what test-takers don’t know; it’s to let them show what they *do* know. An item should also test only one thing: you want to test the participant’s knowledge or their puzzle-solving ability, but not both at the same time.
An item needs to be valid in the context of the assessment. For example, an assessment that tests a learner’s ability to diagnose Alzheimer’s disease should not contain an item about dental hygiene. The dental hygiene item, as well-written as it may be, is simply not valid in that context.
Finally, an item needs to be reliable. An item must accurately measure the test-taker’s true understanding of the subject matter, repeatedly over time, and minimize the possibility of guessing the correct answer. For example, if you were asking a five-year-old a question that requires the language comprehension skills of a ten-year-old, it’s unlikely that your results would be very reliable.
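The post doesn’t prescribe a statistic for “repeatedly over time,” but one common way to gauge it is test-retest reliability: correlate scores from two administrations of the same item set. A minimal sketch in Python (the sample scores below are invented for illustration):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for the same five test-takers, two weeks apart.
first_run = [72, 85, 90, 64, 78]
second_run = [70, 88, 91, 60, 80]

r = pearson(first_run, second_run)
print(f"test-retest reliability: r = {r:.2f}")  # closer to 1.0 = more reliable
```

A correlation near 1.0 suggests the items measure consistently over time; a low correlation is a signal to revisit wording, difficulty, or guessability.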
Please feel free to leave a comment in the area below! In Part 2, we’ll take a look at the different parts of an item as well as some basic guidelines for writing a good stimulus and good choices.
Posted by Brian McNamara
A survey is only as good as the results you get from it. That’s why it’s important to carefully consider and plan survey forms that will capture accurate, valid data and yield the answers that you and your stakeholders are seeking.
This article looks at a few general tips on identifying the information you want to capture, writing survey questions, structuring surveys and planning ahead for how you or your stakeholders will want to analyze data.
1. Provide a brief introduction to the survey that lets the respondents know the:
- Purpose of the survey – why do you want the respondents’ opinions?
- Length of the survey (Number of questions? How long will it take to complete?)
- Closing date for survey responses
Tip: It also makes sense to include this information in the initial invitation to help set expectations and boost response rates.
2. Keep the survey short and sweet (only ask the minimum number of questions required); the longer the survey, the more likely respondents are to abandon it or refuse to participate.
3. Avoid ambiguity in how your questions are worded; be as direct as possible.
4. Within the survey form, let respondents know how much longer they have to finish the assessment – built-in progress bars (available in most of Questionmark’s standard question-by-question assessment templates) can help here.
5. Consider the flow of the assessment. Ideally your survey should group similar types of questions together. For example, in a course evaluation survey, you might ask two or three questions about the course content, then questions about the venue, and then questions about an instructor.
6. Avoid the potential for confusing respondents by keeping your Likert scale questions consistent where possible. For example, don’t follow a question that uses a positive-to-negative scale (e.g. “Strongly Agree” to “Strongly Disagree”) with a question that uses a negative-to-positive scale (e.g. “Very Dissatisfied” to “Very Satisfied”).
7. Make it easy for respondents to answer surveys via a wide variety of devices and browsers. Check out previous blog articles on this topic: Tips for making your assessments BYOD-friendly.
8. Consider what respondent demographics and other information you may wish to use for filtering and/or comparing your survey results. For example, in a typical course evaluation, you might be looking to capture information such as:
- Course name
- Instructor name
- Date (or range of dates)
Questionmark provides different options for capturing demographic data into “special fields” that can be used in Questionmark’s built-in survey and course evaluation reports for filtering and comparison. Likewise, this demographic data can be exported along with the survey results to ASCII or Microsoft Excel format if you prefer to use third-party tools for additional analysis.
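As one example of third-party analysis, an exported results file can be loaded and filtered by its special fields. This sketch uses pandas with an invented sample file; the actual column names in your export will depend on how your special fields are configured:

```python
import pandas as pd
from io import StringIO

# Invented sample of an exported survey-results file; real exports will
# have whatever columns your special fields and questions define.
csv_data = StringIO("""course,instructor,rating
Photoshop Basics,Smith,4
Photoshop Basics,Jones,5
Excel Intro,Smith,3
Photoshop Basics,Jones,4
""")

df = pd.read_csv(csv_data)

# Filter to one course, then compare average rating per instructor.
photoshop = df[df["course"] == "Photoshop Basics"]
print(photoshop.groupby("instructor")["rating"].mean())
```

The same pattern extends to filtering by date range or any other demographic column you captured.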
9. Consider how you wish to capture demographic information.
- Easiest way: you can ask a question! In Questionmark assessments, you can designate certain questions as “demographic questions” so their results are saved to “special fields” used in the reporting process. Typically you would use a multiple choice and/or drop-down question type to ask for such information. For example, if you were surveying a group of respondents who attended a “Photoshop Basics” course in three different cities, you might ask a drop-down question listing those three cities to capture this data.
- Embedding demographic data within assessment URLs: In some cases, you might already have certain types of demographic information on hand. For example, if you are emailing an invitation only to London respondents of the “Photoshop Basics” course, then you can embed this information as a parameter of a Questionmark assessment URL – it will be one less question you’ll need to ask your respondents, and a sure-fire way you’ll capture accurate location demographics with the survey results!
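The URL-embedding approach above amounts to appending a query-string parameter to the assessment link you email out. A minimal sketch in Python; note that `assess.example.com` and the `location` parameter name are placeholders, since the actual parameter name Questionmark expects depends on your configuration:

```python
from urllib.parse import urlencode, urlparse, urlunparse

def add_params(base_url, params):
    """Append query-string parameters to an assessment URL."""
    parts = urlparse(base_url)
    query = parts.query + ("&" if parts.query else "") + urlencode(params)
    return urlunparse(parts._replace(query=query))

# Placeholder host and parameter name for illustration only.
url = add_params("https://assess.example.com/survey", {"location": "London"})
print(url)  # https://assess.example.com/survey?location=London
```

Each audience segment (London, Paris, Berlin, …) gets its own pre-tagged link, so the demographic arrives with the results without the respondent ever being asked.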
If you are looking for an easy way to rapidly create surveys and course evaluations, check out Questionmark Live. And for more information, see Questionmark’s survey and course evaluation reporting tools.
Posted By Doug Peterson
At this point in the design process, you’ve written all the items for your assessment. Before you assemble them into a test, they need to be reviewed. Be sure to link each item to the Test Content Outline (TCO), then ask a group of subject matter experts (SMEs) to review the questions. This very well could be the same group of SMEs that wrote the questions in the first place, in which case they can simply review each other’s work. There are three main things to look at when reviewing each item:
- Spelling and grammar
- Clarity – is it clear what the item is asking? Is the item asking only one question and does it have only one correct answer? Is the item free of any extraneous information, bias, and stereotyping?
- Connection to TCO – is it legitimate to include this item on this assessment? Does it clearly and directly pertain to the goals of the training?
Once you are confident that you have a complete set of well-written items that tie directly to your TCO, it’s time to start putting the assessment together. In addition to determining which questions from your item bank you want to include (which will be discussed in the next entry in this series), you must also develop test directions for the participant. These directions should include:
- Purpose of the assessment
- Amount of time allowed
- Procedures for asking questions
- Procedures for completing the assessment
- Procedures for returning test materials
As part of your participant directions, you may want to consider including sample items, especially if the format is unusual or unfamiliar to the participants. Sample items also help reduce test anxiety. Remember, you want to assess the participant’s true knowledge, which means you don’t want a “stress barrier” getting in the way.
In addition to the participant instructions, you also want to put together instructions for the assessment administrator – the instructor or proctor who will hand out the test and watch over the room while participants take the assessment. Having a set of written instructions will help ensure consistency when the assessment is given by different administrators in different locations. The instructions should include:
- The participant’s instructions, which should be read aloud
- How to handle and document irregularities
- The administrator’s monitoring responsibilities and methods (e.g., no phone conversations, walk around the room every 10 minutes, etc.)
- Hardware and software requirements and instructions, if applicable
- Contact information for technical help
As you develop your assessment, make sure that you are taking into account any local or national laws. For example, American test centers must comply with the Americans with Disabilities Act (ADA). The ADA requires that the test site be accessible to participants in wheelchairs and that compensation be made for certain impaired abilities (e.g., larger print or a screen reader for visually impaired participants). The administrator’s instructions should cover what to do in each case.
Revision history is a collaborative authoring capability of Questionmark Live. Here are the basics:
What it does: This feature enables users to track and manage edits made to questions:
- View a question’s full revision history
- Compare different versions side-by-side with marked-up changes
- Roll back to previous versions of questions to undo edits made by others
Who should use it: Subject matter experts can use this feature to easily review changes, keep track of revisions and roll back to a previous version of a question. The revision history lists the following details for each revision:
- Revision – identifies how many revisions have been created
- Version ID – identifies the version of the question (used to show which version a rollback has reverted to)
- Modified On – states the date that the revision was made
- Modified By – highlights who made the actual revision
- Change Type – describes what the revision entailed
- Comments – lists the notes made by the author who revised the question
Selecting a revision lets you view that version of the question; selecting more than one lets you compare the differences. To roll back to a previous revision, select it and click Rollback.
Extended Matching Questions are similar to multiple choice questions but test knowledge in a far more applied, in-depth way. This question type is now available in Questionmark Live browser-based authoring.
What it does: An Extended Matching question provides an “extended” list of answer options for use in questions relating to at least two related scenarios or vignettes. (The number of answer options depends on the logical number of realistic options for the test taker.) The same answer choice could be correct for more than one question in the set, and some answer choices may not be the correct answer for any of the questions – so it is difficult to answer this type of question correctly by chance. A well-written lead-in question is so specific that students understand what kind of response is expected, without needing to look at the answer options.
Who should use it: It is often used in medical education and other healthcare subject areas to test diagnostic reasoning.
What’s the process for creating it? This diagram shows how to create this question type in Questionmark Live:
How it looks: Here is an example of an Extended Matching question: