Get with the program: Final reminder to present in Napa

Posted by Julie Delazyn

We are busy planning the program for the Questionmark 2015 Users Conference in Napa Valley, March 10-13.

We are thrilled about the location of this conference, in the heart of California Wine Country, surrounded by spectacular scenery, world-acclaimed wineries and award-winning restaurants.

A top priority is planning the conference program, which will include sessions on best practices, the use of Questionmark features and functions, demos of the latest technologies, case studies and peer discussions.

Equally significant will be the content created by Questionmark users themselves — people who present case studies or lead discussions. We are excited by the enriching case study and discussion proposals that are coming in, and we are still accepting proposals until December 10.

Space is limited, so click here to download and complete the call-for-proposals form for a chance to present in Napa.

Please note that presenters will receive some red-carpet treatment, including a special dinner in their honor on Tuesday, March 10. We also award one 50% registration discount for each case study presentation.

  • Do you have a success story to share about your use of Questionmark assessments?
  • Have you had experiences or learned lessons that would be helpful to others?
  • Is there a topic you’d like to talk about with fellow learning and assessment professionals?

Napa Valley Marriott Hotel & Spa

If you can answer “yes” to any of these questions, we would welcome your ideas!

Plan ahead:
Plan your budget now and consider your conference ROI. The time and effort you save by learning effective ways to run your assessment program will more than pay for your conference participation. Check out the reasons to attend and the conference ROI toolkit here.

Sign up soon for early-bird savings:
You will save $200 by registering on or before December 17 — and your organization will save by taking advantage of group registration discounts. Get all the details and register soon.

Get trustable results: Let participants comment on beta questions

Posted by John Kleeman

However good our item creation and review processes are, we need to pilot our items before we use them. As my colleague Austin Fossey says in his blog post Field Test Studies: Taking your items for a test drive:

Even though we work so hard to write high-quality items, some bad items may slip past our review committees. To be safe, most large-scale assessment programs will try out their items with a field test.

For small and medium-sized programs, which inevitably have fewer resources, field testing items is just as vital. In his post, Austin describes a great way to do this with Questionmark: set questions as “experimental” and include them within production assessments. Such questions gather statistics but don’t count towards the participant’s pass/fail result.
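To make the idea concrete, here is a minimal sketch in plain Python (an illustration of the concept, not Questionmark's implementation) of how scoring can ignore experimental items while still collecting the statistics needed to evaluate them:

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    experimental: bool = False   # experimental (beta) items are unscored
    attempts: int = 0            # statistics are gathered for every item
    correct: int = 0

def score_attempt(items, responses):
    """Return a participant's score counting only non-experimental items,
    while accumulating difficulty statistics for every item."""
    scored_total = 0
    scored_correct = 0
    for item in items:
        is_correct = bool(responses.get(item.item_id, False))
        item.attempts += 1
        item.correct += int(is_correct)
        if not item.experimental:        # experimental items don't affect the score
            scored_total += 1
            scored_correct += int(is_correct)
    return scored_correct / scored_total if scored_total else 0.0

items = [Item("Q1"), Item("Q2"), Item("Q3", experimental=True)]
print(score_attempt(items, {"Q1": True, "Q2": False, "Q3": True}))  # 0.5
```

Once enough attempts have accumulated, item.correct / item.attempts gives each experimental item's difficulty (p-value), which can be reviewed before the question is promoted to scored status.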

Another useful capability in Questionmark technology that helps with field testing or beta testing is letting participants comment on a question. You can place an optional text box underneath a question, and participants can enter a comment if they feel the question is poorly worded or ambiguous.

It’s easy to turn on a comment box in Questionmark Live — you just click on the Add comment box button highlighted in the screenshot below. You can also easily do this in our other authoring tools.

Questionmark Live screenshot showing the Add comment box

Participants can then enter comments, which appear in reports and can easily be collated and actioned. When you finalize the questions to use in production, you can simply turn the comments off.

Example of a question with a comment box
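As a rough sketch of what collating those comments might look like, the snippet below groups non-empty comments by question and lists the most-commented questions first. The CSV layout (question_id and comment columns) is hypothetical; the actual report or export format will differ.

```python
import csv
from collections import defaultdict

def collate_comments(csv_path):
    """Group non-empty participant comments by question ID."""
    comments = defaultdict(list)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            text = row["comment"].strip()
            if text:                       # skip blank comment boxes
                comments[row["question_id"]].append(text)
    return comments

# Review the questions with the most comments first
for question_id, texts in sorted(collate_comments("comments_export.csv").items(),
                                 key=lambda kv: len(kv[1]), reverse=True):
    print(f"{question_id}: {len(texts)} comment(s)")
```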

I hope that the ability to gather feedback and improve questions before they are used in production helps you create more trustable assessments. By following these steps, you also demonstrate due diligence by trying out items before using them for scoring, thereby minimizing the risk of your assessment being challenged later.

If you have any comments on this Questionmark capability or how best to field test items, feel free to enter them in the comment area below!

Check out our white paper: 5 Steps to Better Tests for best practice guidance and practical advice for the five key stages of test and exam development.

John Kleeman will discuss benefits and good practice in assessments at the 2015 Users Conference in Napa Valley, March 10-13. Register before Dec. 17 and save $200.

Item Development – Organizing a bias review committee (Part 1)

Posted by Austin Fossey

Once the content review is completed, it is time to turn the items over to a bias review committee. In previous posts, we have talked about methods for detecting bias in item performance using DIF analysis, but DIF analysis must be done after the item has already been delivered and item responses are available.
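For reference, one widely used DIF method is the Mantel-Haenszel procedure, which compares a reference group and a focal group within matched ability strata and therefore needs delivered response data. The sketch below is a generic illustration of that technique, not the specific analysis Questionmark reports.

```python
import math
from collections import defaultdict

def mantel_haenszel_dif(records):
    """records: iterable of (group, total_score, correct) tuples for one studied
    item, where group is 'reference' or 'focal', total_score stratifies ability,
    and correct is 1 or 0. Returns the MH D-DIF statistic (ETS delta scale)."""
    strata = defaultdict(lambda: {"A": 0, "B": 0, "C": 0, "D": 0})
    for group, total_score, correct in records:
        cell = strata[total_score]
        if group == "reference":
            cell["A" if correct else "B"] += 1   # A: reference right, B: reference wrong
        else:
            cell["C" if correct else "D"] += 1   # C: focal right, D: focal wrong

    num = den = 0.0
    for cell in strata.values():
        n = sum(cell.values())
        if n:
            num += cell["A"] * cell["D"] / n
            den += cell["B"] * cell["C"] / n
    if num == 0 or den == 0:
        return float("nan")                      # not enough data in the strata
    # Common odds ratio across strata, converted to the ETS delta scale;
    # absolute values beyond roughly 1.5 are typically flagged for review.
    return -2.35 * math.log(num / den)
```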

Your bias review committee is being tasked with identifying sources of bias before the assessment is ever delivered so that items can be edited or removed before presenting them to a participant sample (though you can conduct bias reviews at any stage of item development).

The Standards for Educational and Psychological Testing explain that bias occurs when the design of the assessment results in different interpretations of scores for subgroups of participants. This implies that some aspect of the assessment is impacting scores based on factors that are not related to the measured construct. This is called construct-irrelevant variance.

The Standards emphasize that a lack of bias is critical for supporting the overall fairness of the assessment, so your bias review committee will provide evidence to help demonstrate your compliance with the Standards. Before you convene your bias review committee, you should finalize a set of sensitivity guidelines that define the criteria for identifying sources of bias in your assessment.

As with your other committees, the members of this committee should be carefully selected based on their qualifications and representativeness, and they should not have been involved with any other test development processes like domain analysis, item writing, or content review. In his chapter in Educational Measurement (4th ed.), Gregory Camilli suggests building a committee of at least five to ten members who will be operating under the principle that “all students should be treated equitably.”

Camilli recommends carefully documenting all aspects of the bias review, including the qualifications and selection process for the committee members. The committee should be trained on the test specifications and the sensitivity guidelines that will inform their decisions. Just like item writing or content review trainings, it is helpful to have the committee practice with some examples before they begin their review.

Camilli suggests letting committee members review items on their own after they complete their training. This gives them each a chance to critique items based on their unique perspectives and understanding of your sensitivity guidelines. Once they have had time to review the items on their own, have your committee reconvene to discuss the items as a group. The committee should strive to reach a consensus on whether items should be retained, edited, or removed completely. If an item needs to be edited, they should document their recommendations for changes. If an item is edited or removed, be sure they document the rationale by relating their decision back to your sensitivity guidelines.

In the next post, I will talk about two facets of assessments that can result in bias (content and response process), and I will share some examples of publications that have recommendations for bias criteria you can use for your own sensitivity guidelines.

Check out our white paper: 5 Steps to Better Tests for best practice guidance and practical advice for the five key stages of test and exam development.

Austin Fossey will discuss test development at the 2015 Users Conference in Napa Valley, March 10-13. Register before Dec. 17 and save $200.

Evidence that assessments improve learning outcomes

John Kleeman HeadshotPosted by John Kleeman

I’ve written about this research before, but it’s a very compelling example and I think it’s useful as evidence that giving low stakes quizzes during a course correlates strongly with improved learning outcomes.

The study was conducted by two economics lecturers, Dr Simon Angus and Judith Watson, and is titled Does regular online testing enhance student learning in the numerical sciences? Robust evidence from a large data set. It was published in 2009 in the British Journal of Educational Technology, Vol. 40, No. 2, pp. 255-272.

Angus and Watson introduced a series of four online formative quizzes into a business mathematics course and wanted to determine whether students who took the quizzes learned more and did better on the final exam than those who didn’t. The interesting thing about the study is that they used a statistical technique that let them estimate the effects of several different factors at once, isolating the effect of taking the quizzes from students’ previous mathematical experience, their gender and their general level of effort, and determining which factors had the greatest impact on the final exam score.
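The authors’ actual model and variables are richer than this, but as a rough sketch of the general approach (a regression that estimates each factor’s effect on the final exam score while holding the others constant), one might fit something like the following; the column names here are hypothetical.

```python
import numpy as np

def estimate_effects(data):
    """Ordinary least squares on hypothetical columns: 'final_exam' (the outcome),
    'midterm', 'took_quizzes' (0/1), 'prior_maths' (0/1) and 'effort'."""
    y = np.asarray(data["final_exam"], dtype=float)
    X = np.column_stack([
        np.ones(len(y)),          # intercept
        data["midterm"],
        data["took_quizzes"],     # 1 if the student completed the online quizzes
        data["prior_maths"],      # 1 if the student had advanced prior maths
        data["effort"],
    ])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(["intercept", "midterm", "took_quizzes", "prior_maths", "effort"],
                    coefs))
```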

You can see a summary of their findings in the graph below, which shows the estimated coefficients for four of the main factors, all of which were statistically significant at p < 0.01.

Factors associated with final exam score graph

You can see from this graph that the biggest factor associated with final exam success was how well students had done in the midterm exam, i.e. how well they were doing in the course generally. But students who took the four online quizzes learned from them and did significantly better. The effect of taking the quizzes was broadly the same size as the effect of students’ prior maths education: substantial and statistically significant.

We know intuitively that formative quizzes help learning, but it’s nice to see statistical evidence that – to quote the authors – “exposure to a regular (low mark) online quiz instrument has a significant and positive effect on student learning as measured by an end of semester examination”.

Another good resource on the benefits of assessments is the white paper The Learning Benefits of Questions. In it, Dr. Will Thalheimer of Work-Learning Research shares research showing that questions can produce significant learning and performance benefits, potentially improving learning by 150% or more. The white paper is complimentary after registration.

John Kleeman will discuss benefits and good practice in assessments at the 2015 Users Conference in Napa Valley, March 10-13. Register before Dec. 17 and save $200.

Join a UK briefing on academic assessment

Posted by Chloe Mendonca

There are just a few weeks to go until the UK Academic Briefings at the University of Bradford and University of Southampton.

These complimentary morning briefings bring people and ideas together.

For those working in academic-related roles, here is an opportunity to get up to date with recent and future developments in online testing.

The agenda will cover ways to:

  • enhance student learning through e-assessment
  • overcome key obstacles surrounding e-assessment in higher education
  • mitigate the threat of cheating and fraud within online exams

If you’re new to online assessment or are thinking of implementing it within your institution, attend a briefing to see Questionmark technologies in action and speak with assessment experts about potential applications for online surveys, quizzes, tests and exams.

Register for the date and location that suits you best:

 

Item Development – Organizing a content review committee (Part 2)

Posted by Austin Fossey

In my last post, I explained the function of a content review committee and the importance of having a systematic review process. Today I’ll provide some suggestions for how you can use the content review process to simultaneously collect content validity evidence without having to do a lot of extra work.

If you want to get some extra mileage out of your content review committee, why not tack on a content validity study? Instead of asking them if an item has been assigned to the correct area of the specifications, ask them to each write down how they would have classified the item’s content. You can then see if topics picked by your content review committee correspond with the topics that your item writers assigned to the items.

There are several ways to conduct content validity studies, and a content validity study might not be sufficient evidence to support the overall validity of the assessment results. A full review of validity concepts is outside the scope of this article, but one way to check whether items match their intended topics is to have your committee members rate how well they think an item matches each topic on the specifications. A score of 1 means they think the item matches, a score of -1 means they think it does not match, and a score of 0 means that they are not sure.

If each committee member provides their own ratings, you can calculate the index of congruence, which was proposed by Richard Rovinelli and Ron Hambleton. You can then create a table of these indices to see whether the committee’s classifications correspond to the content classifications given by your item writers.
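For reference, the index of congruence for an item and topic is usually given as (N / (2N − 2)) × (μ_topic − μ_all), where N is the number of topics, μ_topic is the judges’ mean rating for that topic and μ_all is their mean rating across all topics. Here is a minimal sketch of that calculation with made-up ratings from three hypothetical judges:

```python
# Rovinelli-Hambleton index of item-objective congruence for one item,
# computed from a judges-by-topics matrix of +1/0/-1 ratings. Values near +1
# indicate strong agreement that the item measures that topic and no other.
def index_of_congruence(ratings, topic):
    """ratings: list of per-judge rating lists (one rating per topic);
    topic: zero-based column index of the topic of interest."""
    n_topics = len(ratings[0])
    mean_topic = sum(judge[topic] for judge in ratings) / len(ratings)
    mean_all = sum(sum(judge) for judge in ratings) / (len(ratings) * n_topics)
    return (n_topics / (2 * n_topics - 2)) * (mean_topic - mean_all)

# Three judges rate one item against five topics (hypothetical data):
ratings = [
    [-1,  0, -1, -1, 1],
    [-1, -1, -1, -1, 1],
    [-1,  0, -1, -1, 1],
]
print(round(index_of_congruence(ratings, topic=4), 2))  # close to +1 for Topic 5
```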

The chart below compares item writers’ topic assignments for two items with the index of congruence calculated from a content review committee’s ratings of the same items on an assessment with ten topics. We see that both groups agreed that Item 1 belonged to Topic 5 and Item 2 belonged to Topic 1. We also see that the content review committee was uncertain whether Item 1 measured Topic 2, and that some of the committee members felt that Item 2 measured Topic 7.


Comparison of content review committee’s index of congruence and item writers’ classifications of two items on an assessment with ten topics.

 
