Trusting you have a good new year

Posted by John Kleeman

As we bid 2014 goodbye and welcome 2015, we wish all readers of this blog a happy and prosperous new year.

Trust is in the news a lot these days. 2013 was memorable for its revelations of government surveillance of the Internet. Well-intentioned government organizations were intercepting Internet communications for law and order purposes, and to protect society from harm. However, the surprising scale of the interceptions divided the community: some felt it was appropriate given the threat, while others became less trustful of government.

In 2014, we have seen a series of Internet vulnerabilities. The catchy names – Heartbleed, Shellshock and Poodle – belie the potential seriousness of these threats. Questionmark was only lightly touched by these vulnerabilities and any minor issues were quickly corrected (see Questionmark and Heartbleed and Questionmark not impacted by Bash/ShellShock Internet vulnerability). However, as we’ve seen in the news, some other companies have been impacted by these or other vulnerabilities, and we are all very sensibly being more cautious about security and data protection.

Questionmark has always placed a high priority on security and data protection, and continues to do so. Watch this video for more about Questionmark’s commitment to security.

2014 seems to have been the year that security and data protection have come of age. Mature organizations recognize that there are significant security threats to their data, and mature suppliers put in place extensive measures to protect against such threats. The arguments in favour of outsourcing to the Cloud remain strong; if nothing else, Cloud providers can typically protect specialist data like assessments better than a busy in-company IT team whose focus is elsewhere. But trust must be at the forefront – you need to trust and review all your suppliers, to check that they are following good security practice. We welcome all the review we get from our customers’ IT security departments – good questions help make us stronger.

Trust and trustable assessment results are critical to Questionmark. Our vision is that in today’s world, success for organizations, individuals and society means having the right knowledge, skills and abilities at the right place and the right time. An organization needs to know what its people understand and what they need to change or learn to meet goals. An individual needs to demonstrate achievement and find out how to improve. And society needs to know who is competent and whom to trust.

Assessments are critically needed to identify if people “know it, understand it and can do it”. Questionmark aims to provide the world’s leading online assessment service, allowing organizations to securely create, deliver and report on tests, quizzes, surveys and exams. Questionmark focuses on getting trustable results that are actionable for organizations, individuals and society.

During 2015 we’ll be sharing lots about assessment and good practice on this blog, and I trust we will have much to interest you!

 

Item Development – Conducting the final editorial review

Posted by Austin Fossey

Once you have completed your content review and bias review, it is best to conduct a final editorial review.

You may have already conducted an editorial review prior to the content and bias reviews to cull items with obvious item-writing flaws or inappropriate item types—so by the time you reach this second editorial review, your items should only need minor edits.

This is the time to put the final polish on all of your items. If your content review committee and bias review committee were authorized to make changes to the items, go back and make sure they followed your style guide and that they used accurate grammar and spelling. Make sure they did not make any drastic changes that violate your test specifications, such as adding a fourth option to a multiple choice item that should only have three options.
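If your item bank can be exported in a structured form, a quick automated check can catch this kind of drift before the final review meeting. Here is a minimal sketch, assuming a hypothetical export format (the field names and spec values are illustrative, not a Questionmark format), that flags multiple-choice items whose option count no longer matches the specification:

```python
# Hypothetical check: flag items whose option count has drifted from the spec.
# The item records and spec values are illustrative, not a real export format.

spec_option_counts = {"multiple_choice": 3}  # the specification calls for 3 options

items = [
    {"id": "ITEM-001", "type": "multiple_choice", "options": ["A", "B", "C"]},
    {"id": "ITEM-002", "type": "multiple_choice", "options": ["A", "B", "C", "D"]},
]

for item in items:
    expected = spec_option_counts.get(item["type"])
    if expected is not None and len(item["options"]) != expected:
        print(f"{item['id']}: {len(item['options'])} options, spec requires {expected}")
```

A check like this does not replace human review, but it narrows down the list of items the editors need to re-examine.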

If you have resources to do so, have professional editors review the items’ content. Ask the editors to identify issues with language, but review their suggestions rather than letting them make direct edits to the items. The editors may suggest changes that violate your style guide, they may not be familiar with language that is appropriate for your industry, or they may wish to make a change that would drastically impact the item content. You should carefully review their changes to make sure they are each appropriate.

As with other steps in the item development process, documentation and organization are key. Using item writing software like that provided by Questionmark can help you track revisions to items, document changes, and make sure each item is reviewed.

Do not approve items with a rubber stamp. If an item needs major content revisions, send it back to the item writers and begin the process again. Faulty items can undermine the validity of your assessment and can result in time-consuming challenges from participants. If you have planned ahead, you should have enough extra items to allow for some attrition while retaining enough items to meet your test specifications.
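As a rough planning aid, you can work backwards from your test specification and an assumed rejection rate. The numbers below are purely illustrative; substitute your own blueprint counts and the attrition you have observed in past reviews:

```python
import math

# Illustrative planning arithmetic: how many items to draft so that, after an
# assumed rejection rate, enough survive to meet the test specification.
items_required = 60        # items the test specification calls for (example value)
expected_attrition = 0.25  # assumed share of drafted items rejected during review

items_to_draft = math.ceil(items_required / (1 - expected_attrition))
print(items_to_draft)  # 80 drafted items leave a cushion for a 25% rejection rate
```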

Finally, be sure that you have the appropriate stakeholders sign off on each item. Once the item passes this final editorial review, it should be locked down and considered ready to deliver to participants. Ideally, no changes should be made to items once they are in delivery, as this may impact how participants respond to the item and perform on the assessment. (Some organizations require senior executives to review and approve any requested changes to items that are already in delivery.)

When you are satisfied that the items are perfect, they are ready to be field tested. In the next post, I will talk about item try-outs, selecting a field test sample, assembling field test forms, and delivering the field test.

Check out our white paper: 5 Steps to Better Tests for best practice guidance and practical advice for the five key stages of test and exam development.

Austin Fossey will discuss test development at the 2015 Users Conference in Napa Valley, March 10-13. Register before Jan. 29 and save $100.

Early-bird deadline: Wednesday, December 17

Posted by Julie Delazyn

Have you been thinking about attending the 2015 Users Conference in Napa Valley, March 10-13? Register by this Wednesday, December 17th for your final chance to get your $200 early-bird discount.

We have some really exciting content on the agenda, including interesting customer stories and discussions from Canon, National League for Nursing and the U.S. Coast Guard, to name a few.

Attend this essential learning event, March 10-13!

  • Explore what makes an assessment trustable and defensible
  • Learn how to protect your assessment data
  • Hear expert advice about best practices
  • Preview the product road map and share your views about it
  • Get instruction on the use of current Questionmark features and functions

Register now for early-bird savings

Book your room at the Napa Valley Marriott Hotel and Spa

 

Measuring the Effectiveness of Social and Informal Learning

Posted by Julie Delazyn

How can you use assessments to measure the effectiveness of informal learning? If people are learning at different times, in different ways and without structure, how do you know it’s happening? And how can you justify investment in social and informal learning initiatives?

The 70:20:10 model of learning – which explains that we learn 70% on the job, 20% from others and 10% from formal study – brings out the importance of informal learning initiatives. But the effectiveness of such initiatives needs to be measured, and there needs to be proof that people are performing better as a result of their participation in social and informal learning.

This SlideShare presentation, Measuring the Impact of Social and Informal Learning, explains various approaches to testing and measuring learning for a new generation of students and workers. We hope you will use it to gather some new ideas about how to answer these important questions about learning: Did they like it? Did they learn it? Are they doing it?

 

 

Big Themes, Big Deadlines: Napa News

Posted by Julie Delazyn

We have two big deadlines coming up for the Questionmark 2015 Users Conference in Napa!

All case study and presentation proposals are due on December 10, so submit your proposals soon if you want to lock down a chance to be a speaker at the conference. The perks for you? One 50 percent registration discount per case study and a VIP dinner for all presenters.

The early-bird deadline ends December 17. Register now to save $200. Want to save more? Bring your colleagues and take advantage of our group discounts! There is so much to learn at the conference that many of our customers take a “divide and conquer” approach by attending different concurrent sessions and comparing notes later.

This year, we’ll focus on:

Hackers, attackers and your assessments: Protecting your assessment data

We’ll explore some of the developments and emerging threats to data security and their implications.

Can you trust your assessment results?

The conference will explore what makes assessment results “trustable” and why trustable results matter.

Checking knowledge or checking a box: Assessments and Compliance

Regulatory compliance is a fact of life – one that drives training and the need for trustable, defensible assessment.


We look forward to seeing you in Napa—where you will also have a chance to:

  • Get vital info and training on the latest assessment technologies and best practices
  • Network with fellow assessment and learning professionals
  • Learn about the Questionmark product roadmap

Oh, and did I mention this will all take place in the heart of the beautiful California wine country? We look forward to learning with you there!

Item Development – Organizing a bias review committee (Part 2)

Posted by Austin Fossey

The Standards for Educational and Psychological Testing describe two facets of an assessment that can result in bias: the content of the assessment and the response process. These are the areas on which your bias review committee should focus. You can read Part 1 of this post here.

Content bias is often what people think of when they think about examples of assessment bias. This may pertain to item content (e.g., students in hot climates may have trouble responding to an algebra scenario about shoveling snow), but it may also include language issues, such as the tone of the content, differences in terminology, or the reading level of the content. Your review committee should also consider content that might be offensive or trigger an emotional response from participants. For example, if an item’s scenario described interactions in a workplace, your committee might check to make sure that men and women are equally represented in management roles.

Bias may also occur in the response processes. Subgroups may have differences in responses that are not relevant to the construct, or a subgroup may be unduly disadvantaged by the response format. For example, an item that asks participants to explain how they solved an algebra problem may be biased against participants for whom English is a second language, even though they might be employing the same cognitive processes as other participants to solve the problem. Response process bias can also occur if some participants provide unexpected responses to an item that are correct but may not be accounted for in the scoring.
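To make that last point concrete, consider a numeric-entry item where the same correct value can be written several ways. The following is a minimal sketch of a hypothetical scoring helper (not Questionmark’s scoring logic) that normalizes a few equivalent forms before comparing against the key, so a correct-but-unexpected format is not automatically marked wrong:

```python
from fractions import Fraction

# Hypothetical scoring helper: accept equivalent forms of a numeric answer
# (e.g. "0.5", ".50", "1/2") rather than only the exact string in the key.
def is_correct(response: str, key: float, tolerance: float = 1e-9) -> bool:
    text = response.strip()
    try:
        value = float(Fraction(text)) if "/" in text else float(text)
    except (ValueError, ZeroDivisionError):
        return False  # cannot be parsed; in practice, flag for human review
    return abs(value - key) <= tolerance

print(is_correct("1/2", 0.5))   # True
print(is_correct(".50", 0.5))   # True
print(is_correct("half", 0.5))  # False – a reviewer would need to look at this
```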

How do we begin to identify content or response processes that may introduce bias? Your sensitivity guidelines will depend upon your participant population, applicable social norms, and the priorities of your assessment program. When drafting your sensitivity guidelines, you should spend a good amount of time researching potential sources of bias that could manifest in your assessment, and you may need to periodically update your own guidelines based on feedback from your reviewers or participants.

In his chapter in Educational Measurement (4th ed.), Gregory Camilli recommends the chapter on fairness in the ETS Standards for Quality and Fairness and An Approach for Identifying and Minimizing Bias in Standardized Tests (Office for Minority Education) as sources of criteria that could be used to inform your own sensitivity guidelines. If you would like to see an example of one program’s sensitivity guidelines that are used to inform bias review committees for K12 assessment in the United States, check out the Fairness Guidelines Adopted by PARCC (PARCC), though be warned that the document contains examples of inflammatory content.

In the next post, I will discuss considerations for the final round of item edits that will occur before the items are field tested.

Check out our white paper: 5 Steps to Better Tests for best practice guidance and practical advice for the five key stages of test and exam development.

Austin Fossey will discuss test development at the 2015 Users Conference in Napa Valley, March 10-13. Register before Dec. 17 and save $200.