Item Development – Conducting the final editorial review

Posted by Austin Fossey

Once you have completed your content review and bias review, it is best to conduct a final editorial review.

You may have already conducted an editorial review prior to the content and bias reviews to cull items with obvious item-writing flaws or inappropriate item types—so by the time you reach this second editorial review, your items should only need minor edits.

This is the time to put the final polish on all of your items. If your content review committee and bias review committee were authorized to make changes to the items, go back and confirm that they followed your style guide and used correct grammar and spelling. Make sure they did not make any drastic changes that violate your test specifications, such as adding a fourth option to a multiple-choice item that should only have three options.
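Some of these specification checks can be automated before items reach the editors. The sketch below is a minimal illustration in Python, not a Questionmark feature; the item structure, the three-option limit, and the stem word limit are all assumptions:

```python
MAX_OPTIONS = 3           # from the (hypothetical) test specification
MAX_STEM_WORDS = 60       # hypothetical style-guide limit

def check_item(item: dict) -> list:
    """Return a list of spec violations introduced during review."""
    problems = []
    if len(item["options"]) > MAX_OPTIONS:
        problems.append(f"{len(item['options'])} options; spec allows {MAX_OPTIONS}")
    if item["correct"] not in item["options"]:
        problems.append("keyed answer is not among the options")
    if len(item["stem"].split()) > MAX_STEM_WORDS:
        problems.append("stem exceeds the style-guide word limit")
    return problems

item = {"stem": "Which gas makes up most of Earth's atmosphere?",
        "options": ["Nitrogen", "Oxygen", "Carbon dioxide", "Argon"],
        "correct": "Nitrogen"}
print(check_item(item))   # -> ['4 options; spec allows 3']
```

A pass like this over the whole item bank catches mechanical violations, leaving your human reviewers free to focus on language and content.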

If you have the resources, have professional editors review the items’ content. Ask the editors to flag language issues, but review their suggestions rather than letting them edit the items directly. Editors may suggest changes that violate your style guide, they may be unfamiliar with language that is appropriate for your industry, or they may propose a change that would drastically alter the item content, so vet each suggestion carefully before accepting it.

As with other steps in the item development process, documentation and organization are key. Item-writing software like that provided by Questionmark can help you track revisions, document changes, and confirm that every item has been reviewed.
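If you are building your own tooling instead, even a lightweight audit trail per item helps. This is a hypothetical sketch, not Questionmark’s data model; the stage names and record shape are assumptions for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime

REVIEW_STAGES = ["content_review", "bias_review", "editorial_review"]

@dataclass
class ItemRecord:
    item_id: str
    text: str
    history: list = field(default_factory=list)  # (stage, reviewer, timestamp, note)

    def log_review(self, stage: str, reviewer: str, note: str = "") -> None:
        self.history.append((stage, reviewer, datetime.now(), note))

    def missing_stages(self) -> list:
        done = {entry[0] for entry in self.history}
        return [s for s in REVIEW_STAGES if s not in done]

rec = ItemRecord("ALG-017", "Solve for x: 2x + 3 = 11")
rec.log_review("content_review", "committee A")
rec.log_review("bias_review", "committee B", "reworded the snow scenario")
print(rec.missing_stages())  # -> ['editorial_review']
```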

Do not approve items with a rubber stamp. If an item needs major content revisions, send it back to the item writers and begin the process again. Faulty items can undermine the validity of your assessment and can result in time-consuming challenges from participants. If you have planned ahead, you should have enough extra items to allow for some attrition while retaining enough items to meet your test specifications.
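The arithmetic behind that buffer is simple. Assuming, purely for illustration, that about 20% of drafted items are culled across the content, bias, and editorial reviews:

```python
import math

items_required = 60        # items the test specification calls for
expected_attrition = 0.20  # assumed share of drafts culled in review

items_to_draft = math.ceil(items_required / (1 - expected_attrition))
print(items_to_draft)      # -> 75: draft 75 items to end up with ~60
```

In practice, base the attrition rate on historical review data from your own program.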

Finally, be sure that you have the appropriate stakeholders sign off on each item. Once the item passes this final editorial review, it should be locked down and considered ready to deliver to participants. Ideally, no changes should be made to items once they are in delivery, as this may impact how participants respond to the item and perform on the assessment. (Some organizations require senior executives to review and approve any requested changes to items that are already in delivery.)
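The lock-down policy can also be enforced in software. Here is a minimal sketch of that rule; the class, the exception, and the executive-override flag are illustrative assumptions rather than an existing API:

```python
class LockedItemError(Exception):
    pass

class DeliveredItem:
    def __init__(self, item_id: str, text: str):
        self.item_id = item_id
        self.text = text
        self.locked = False

    def sign_off(self) -> None:
        """Final editorial sign-off: the item is now ready for delivery."""
        self.locked = True

    def edit(self, new_text: str, executive_approval: bool = False) -> None:
        """Reject edits to delivered items unless explicitly overridden."""
        if self.locked and not executive_approval:
            raise LockedItemError(f"{self.item_id} is in delivery and locked")
        self.text = new_text
```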

When you are satisfied that the items are perfect, they are ready to be field tested. In the next post, I will talk about item try-outs, selecting a field test sample, assembling field test forms, and delivering the field test.

Check out our white paper: 5 Steps to Better Tests for best practice guidance and practical advice for the five key stages of test and exam development.

Austin Fossey will discuss test development at the 2015 Users Conference in Napa Valley, March 10-13. Register before Jan. 29 and save $100.

Early-bird deadline: Wednesday, December 17

Posted by Julie Delazyn

Have you been thinking about attending the 2015 Users Conference in Napa Valley, March 10-13? Register by this Wednesday, December 17th for your final chance to get your $200 early-bird discount.

We have some really exciting content on the agenda, including interesting customer stories and discussions from Canon, the National League for Nursing and the U.S. Coast Guard, to name a few.

Attend this essential learning event, March 10-13!

  • Explore what makes an assessment trustable and defensible
  • Learn how to protect your assessment data
  • Hear expert advice about best practices
  • Preview the product road map and share your views about it
  • Get instruction on the use of current Questionmark features and functions

Register now for early-bird savings.

Book your room at the Napa Valley Marriott Hotel and Spa.

Measuring the Effectiveness of Social and Informal Learning

Posted by Julie Delazyn

How can you use assessments to measure the effectiveness of informal learning? If people are learning at different times, in different ways and without structure, how do you know it’s happening? And how can you justify investment in social and informal learning initiatives?

The 70+20+10 model of learning – which holds that we learn 70% on the job, 20% from others and 10% from formal study – highlights the importance of informal learning initiatives. But the effectiveness of such initiatives needs to be measured, and there needs to be proof that people perform better as a result of their participation in social and informal learning.

This SlideShare presentation, Measuring the Impact of Social and Informal Learning, explains various approaches to testing and measuring learning for a new generation of students and workers. We hope you will use it to gather some new ideas about how to answer these important questions about learning: Did they like it? Did they learn it? Are they doing it?

Big Themes, Big Deadlines: Napa News

Posted by Julie Delazyn

We have two big deadlines coming up for the Questionmark 2015 Users Conference in Napa!

All case study and presentation proposals are due on December 10, so submit your proposals soon if you want to lock down a chance to be a speaker at the conference. The perks for you? One 50 percent registration discount per case study and a VIP dinner for all presenters.

The early-bird deadline is December 17. Register now to save $200. Want to save more? Bring your colleagues and take advantage of our group discounts! There is so much to learn at the conference that many of our customers take a “divide and conquer” approach, attending different concurrent sessions and comparing notes later.

This year, we’ll focus on:

Hackers, attackers and your assessments: Protecting your assessment data

We’ll explore some of the developments and emerging threats to data security and their implications.

Can you trust your assessment results?

The conference will explore what makes assessment results “trustable” and why trustable results matter.

Checking knowledge or checking a box: Assessments and Compliance

Regulatory compliance is a fact of life – one that drives training and the need for trustable, defensible assessment.


We look forward to seeing you in Napa—where you will also have a chance to:

  • Get vital info and training on the latest assessment technologies and best practices
  • Network with fellow assessment and learning professionals
  • Learn about the Questionmark product roadmap

Oh, and did I mention this will all take place in the heart of the beautiful California wine country? We look forward to learning with you there!

Item Development – Organizing a bias review committee (Part 2)

Posted by Austin Fossey

The Standards for Educational and Psychological Testing describe two facets of an assessment that can result in bias: the content of the assessment and the response process. These are the areas on which your bias review committee should focus. You can read Part 1 of this post here.

Content bias is often what people think of when they think about examples of assessment bias. This may pertain to item content (e.g., students in hot climates may have trouble responding to an algebra scenario about shoveling snow), but it may also include language issues, such as the tone of the content, differences in terminology, or the reading level of the content. Your review committee should also consider content that might be offensive or trigger an emotional response from participants. For example, if an item’s scenario described interactions in a workplace, your committee might check to make sure that men and women are equally represented in management roles.

Bias may also occur in the response processes. Subgroups may have differences in responses that are not relevant to the construct, or a subgroup may be unduly disadvantaged by the response format. For example, an item that asks participants to explain how they solved an algebra problem may be biased against participants for whom English is a second language, even though they might be employing the same cognitive processes as other participants to solve the algebra problem. Response process bias can also occur if some participants provide unexpected responses to an item that are correct but are not accounted for in the scoring.
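For automatically scored free-text items, one common mitigation is to normalize responses before matching them against the key, so that a correct but unexpectedly formatted answer is not marked wrong. The sketch below is illustrative only; the normalization rules and the accepted answers are assumptions:

```python
def normalize(response: str) -> str:
    """Lowercase, strip hyphens, and collapse whitespace before matching."""
    return " ".join(response.lower().replace("-", " ").split())

# Hypothetical key: every phrasing of the correct answer we choose to accept.
ACCEPTED = {normalize(ans) for ans in ["x = 4", "x=4", "four", "4"]}

def score(response: str) -> int:
    return 1 if normalize(response) in ACCEPTED else 0

print(score("X = 4"))    # -> 1
print(score("  FOUR "))  # -> 1
print(score("x = 5"))    # -> 0
```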

How do we begin to identify content or response processes that may introduce bias? Your sensitivity guidelines will depend upon your participant population, applicable social norms, and the priorities of your assessment program. When drafting your sensitivity guidelines, you should spend a good amount of time researching potential sources of bias that could manifest in your assessment, and you may need to periodically update your own guidelines based on feedback from your reviewers or participants.

In his chapter in Educational Measurement (4th ed.), Gregory Camilli recommends the chapter on fairness in the ETS Standards for Quality and Fairness and An Approach for Identifying and Minimizing Bias in Standardized Tests (Office for Minority Education) as sources of criteria that could be used to inform your own sensitivity guidelines. If you would like to see an example of one program’s sensitivity guidelines that are used to inform bias review committees for K12 assessment in the United States, check out the Fairness Guidelines Adopted by PARCC (PARCC), though be warned that the document contains examples of inflammatory content.

In the next post, I will discuss considerations for the final round of item edits that will occur before the items are field tested.

Check out our white paper: 5 Steps to Better Tests for best practice guidance and practical advice for the five key stages of test and exam development.

Austin Fossey will discuss test development at the 2015 Users Conference in Napa Valley, March 10-13. Register before Dec. 17 and save $200.

Get with the program: Final reminder to present in Napa

Posted by Julie Delazyn

We are busy planning the program for the Questionmark 2015 Users Conference in Napa Valley, March 10-13.

We are thrilled about the location of this conference, in the heart of California Wine Country, surrounded by spectacular scenery, world-acclaimed wineries and award-winning restaurants.

A top priority is planning the conference program, which will include sessions on best practices, the use of Questionmark features and functions, demos of the latest technologies, case studies and peer discussions.

Equally significant will be the content created by Questionmark users themselves — people who present case studies or lead discussions. We are excited by the enriching case study and discussion proposals that are coming in, and we are still accepting proposals until December 10.

Space is limited. Click here to download and fill out the call-for-proposals form for a chance to present in Napa.

Please note that presenters will receive some red-carpet treatment, including a special dinner in their honor on Tuesday, March 10. We also award one 50% registration discount for each case study presentation.

  • Do you have a success story to share about your use of Questionmark assessments?
  • Have you had experiences or learned lessons that would be helpful to others?
  • Is there a topic you’d like to talk about with fellow learning and assessment professionals?
(Photo: Napa Valley Marriott Hotel & Spa)

If you can answer “yes” to any of these questions, we would welcome your ideas!

Plan ahead:
Plan your budget now and consider your conference ROI. The time and effort you save by learning effective ways to run your assessment program will more than pay for your conference participation. Check out the reasons to attend and the conference ROI toolkit here.

Sign up soon for early-bird savings:
You will save $200 by registering on or before December 17 — and your organization will save by taking advantage of group registration discounts. Get all the details and register soon.
