When to Give Partial Credit for Multiple-Response Items

Posted by Austin Fossey

Three customers recently asked me how to decide between scoring a multiple-response (MR) item dichotomously or polytomously; i.e., when should an MR item be scored right/wrong, and when should we give partial credit? I gave some garrulous, rambling answers, so the challenge today is to explain this in a single blog post that I can share the next time it comes up.

In their chapter on multiple-choice and matching exercises in Educational Assessment of Students (5th ed.), Anthony Nitko and Susan Brookhart explain that matching items (which we may extend to include MR item formats, drag-and-drop formats, survey-matrix formats, etc.) are often a collection of single-response multiple-choice (MC) items. The advantage of the MR format is that it saves space and lets you leverage dependencies in the questions (e.g., relationships between responses) that might be redundant if broken into separate MC items.

Given that an MR item is often a set of individually scored MC items, a polytomously scored format almost always makes sense. From an interpretation standpoint, there are a couple of advantages for you as a test developer or instructor. First, you can differentiate between participants who know some of the answers and those who know none of the answers, which can improve item discrimination. Second, you have more flexibility in how you choose to score and interpret the responses. In the drag-and-drop example below (a special form of an MR item), the participant has all of the dates wrong; however, the instructor may still be interested in knowing that the participant knows the correct order of events for the Stamp Act, the Townshend Act, and the Boston Massacre.


Example of a drag-and-drop item in Questionmark where the participant’s responses are wrong, but the order of responses is partially correct.
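
To make the two scoring rules concrete, here is a minimal sketch in Python. This is illustrative only, not Questionmark's scoring engine; the choice labels, response values, and function names are all invented for the example.

```python
# Illustrative sketch only -- not Questionmark's actual scoring logic.
# An MR item is modeled as a set of choices; the key records whether
# each choice should be selected (True) or left unselected (False).
key = {"A": True, "B": False, "C": True, "D": True}

# A participant who handles three of the four choices correctly:
response = {"A": True, "B": False, "C": False, "D": True}

def score_polytomous(key, response):
    """Partial credit: one point for each choice answered correctly."""
    return sum(1 for choice in key if response.get(choice) == key[choice])

def score_dichotomous(key, response):
    """Right/wrong: full credit only when every choice matches the key."""
    return 1 if all(response.get(c) == key[c] for c in key) else 0

print(score_polytomous(key, response))   # 3 -- partial knowledge earns credit
print(score_dichotomous(key, response))  # 0 -- one wrong choice loses everything
```

Scored polytomously, this participant is distinguishable from one who knows nothing; scored dichotomously, the two look identical.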

Are there exceptions? You know there are. This is why it is important to have a test blueprint document, which can help clarify which item formats to use and how they should be evaluated. Consider the following two variations of a learning objective on a hypothetical CPR test blueprint:

  • The participant can recall the actions that must be taken for an unresponsive victim requiring CPR.
  • The participant can recall all three actions that must be taken for an unresponsive victim requiring CPR.

The second example is likely the one that the test developer would use for the test blueprint. Why? Because knowing only two of the three actions is not going to cut it. This is a rare all-or-nothing scenario where knowing some of the answers is essentially the same (from a qualifications standpoint) as knowing none of the answers. The language in this learning objective (“recall all three actions”) signals to the test developer that if they use an MR item to assess this learning objective, they should score it dichotomously (no partial credit). The example below shows how one might design an item for this hypothetical learning objective with Questionmark’s authoring tools:


Example of a Questionmark authoring screen for an MR item that is scored dichotomously (right/wrong).
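
Under a “recall all three actions” objective, only the all-or-nothing rule from the earlier sketch applies. Again, this is illustrative Python; the action labels are placeholders, not real CPR guidance.

```python
# Illustrative sketch only: all-or-nothing scoring for the hypothetical
# CPR item. Knowing two of the three required actions earns the same
# score as knowing none of them.
def score_dichotomous(key, response):
    """Right/wrong: full credit only when every choice matches the key."""
    return 1 if all(response.get(c) == key[c] for c in key) else 0

cpr_key = {"Action 1": True, "Action 2": True, "Action 3": True,
           "Distractor": False}
knows_two_of_three = {"Action 1": True, "Action 2": True, "Action 3": False,
                      "Distractor": False}

print(score_dichotomous(cpr_key, knows_two_of_three))  # 0 -- no partial credit
```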

To summarize, a test blueprint document is the best way to decide if an MR item (or variant) should be scored dichotomously or polytomously. If you do not have a test blueprint, think critically about what you are trying to measure and the interpretations you want reflected in the item score. Partial-credit scoring is desirable in most use cases, though there are occasional scenarios where an all-or-nothing scoring approach is needed—in which case the item can be scored strictly right/wrong. Finally, do not forget that you can score MR items differently within an assessment. Some MR items can be scored polytomously and others can be scored dichotomously on the same test, though it may be beneficial to notify participants when scoring rules differ for items that use the same format.

If you are interested in understanding and applying some basic principles of item development and enhancing the quality of your results, download the free white paper written by Austin: Managing Item Development for Large-Scale Assessment

Early-bird savings on conference registration end today: Sign up now!

Posted by Joan Phaup

Just a reminder that you can save $200 if you register today for the Questionmark 2014 Users Conference.

We look forward to seeing you March 4 – 7 in San Antonio, Texas, for three intensive days of learning and networking.

Check out the conference program as it continues to take shape, and sign up today!

This conference truly is the best place to learn about our technologies, improve your assessments and discuss best practices with Questionmark staff, industry experts and your colleagues. But don’t take my word for it. Let these attendees at the 2013 conference tell you what they think:

[Embedded video: attendee testimonials from the 2013 Users Conference]

How to integrate (socially) with Questionmark

Posted by Steve Lay

Earlier this week I finished booking my travel for the forthcoming European Users Conference.  This year’s conference is being held 9 – 11 October in Brussels, which gives me a great opportunity to use the Channel Tunnel.  With so many cheap flights available from my local airport, I’m ashamed to say that I have never been through the Chunnel before, despite walking past the terminal every time I travel to Questionmark’s London Office!  (And guess what: the train turns out to be just as cheap as the plane!)

But it isn’t just the journey I’m looking forward to: it’s the conference program!

To start with, our Dutch distributor, Stoas, will be running a bonus session on using QMWISe, our web-service interface.  Last year’s session was so popular there was standing room only.

As someone who has been closely involved with the development of technical standards for assessment content, I’m also intrigued by Gregory Furter’s session on using Notepad++ and regular expressions to import large numbers of questions.

For less technical delegates, John Dermo will be giving tips on large-scale deployments in the Case Studies strand. John recently won the best paper award at the CAA 2011 conference in Southampton; there are pictures and a link to John’s paper on the CAA website.

Continuing with the theme of large deployments, Michel Duijvestijn from Rotterdam will be talking about the use of JMeter to test and optimize an OnPremise deployment of Perception.

On the second day of the conference, David Lewis will be talking about his Blackboard integration at the University of Glamorgan and Onno Thompson will be updating us on the way ECABO use QMWISe to help create their integrations.

I’ve highlighted just a few papers from the programme.  The Product Management team will of course be available in Product Central.  I’m also talking in the Best Practice track on “Using Web Services to Integrate with Questionmark Perception”.  In fact, I spoke last month to Jane Townsend, our Marketing Coordinator, about that session, which you can read about elsewhere on the blog.

As always, we’ll be extending the conversation through the use of various social networking streams so don’t forget to pack your mobile.

See you at #qmcon

Keeping surveys anonymous even when controlling access to them

When you run a course evaluation or survey in Questionmark Perception, you will likely want to make responses anonymous so that people will give you the candid feedback you need. But what if you want to make sure that each person takes the survey just once? Can you control access to a survey and still keep the results anonymous? Yes!

Here’s how:

  • When creating the survey, check off the box that says Anonymous Results.
  • Double-check your Special Fields to make sure they don’t contain identifying information.
  • Take a dummy survey and confirm that you cannot see the results when reporting.
  • Schedule participants just as you would for any assessment and have them log in.
  • You can even use Email Broadcast to invite participants and send reminders.

The survey results will give you participants’ responses but not their names or other identifying information.

Learn more about how to report on survey results using Perception’s Survey Report and the Course Summary, Instructor Summary, Class Summary and Class Detail reports in Questionmark Analytics.

Including a Questionmark Knowledge Check within SharePoint is easier than you think

Posted by John Kleeman

Many Questionmark customers use SharePoint within their organization. Microsoft SharePoint is a fantastic tool that lets non-technical people create collaborative web sites, and it is a great system for deploying assessments for learning, training and compliance.

One of the easiest ways to include an assessment inside SharePoint is as a knowledge check – you can easily put a Questionmark Perception assessment beside some learning content as in the screenshot.

[Screenshot: a Questionmark assessment embedded beside learning content in a SharePoint 2010 page]

Putting a knowledge check in a SharePoint page gives three benefits:

  • The learner can check that he or she understands the material
  • The learner gets retrieval practice to reinforce the learning
  • As author, you can run reports to see which parts of the learning are understood or missed

In order to help people get the benefits of using assessments inside SharePoint, Questionmark have launched a new blog, http://blog.sharepointlearn.com, which focuses on SharePoint and assessment. This will allow us to run more detailed articles on SharePoint and assessments than the main blog can.

SharePoint is a lot easier to use than many people think. You don’t need administrative rights or programming skills to do most things. At the Questionmark Users Conference last week, I ran a session where people added an assessment in a sandbox site in just a few minutes. You can include an assessment inside SharePoint using the Page Viewer Web Part, which most people who can edit SharePoint pages have access to – if you want to give it a go, here are some instructions from the new blog.

Questionmark Conference Update: Tech Training Topics and More

Joan Phaup

Posted by Joan Phaup

Questionmark users gathering in Los Angeles March 15 – 18 for the Questionmark 2011 Users Conference will have a host of breakout sessions to choose from. The conference schedule is being updated frequently to reflect additions to the program. To date, we have seven Tech Training sessions on tap — some for beginners and others for intermediate and advanced Questionmark users:

  • Introduction to Questionmark Perception for beginners
  • Advanced Authoring Techniques Using Authoring Manager (Intermediate/Advanced)
  • Planning Your Migration from Perception v4 to v5 (Intermediate/Advanced)
  • Configuring the User Experience and Understanding Templates in Perception v5 (Intermediate/Advanced)
  • Authoring with Questionmark Live – A hands-on introduction (Bring your own laptop!) (Beginning/Intermediate)
  • Analyzing and Sharing Assessment Results (Beginning/Intermediate)
  • Integrating Perception with Other Systems (Advanced)

These are in addition to a keynote by Bryan Chapman on Assessment’s Strategic Role in Enterprise Learning and concurrent sessions including customer case studies, peer discussions, drop-in demos and best practice presentations on everything from principles of item and test analysis to design strategies for mobile devices.

Tech Central and Product Central offer participants the opportunity to meet with Questionmark technicians and product managers respectively. Product Central focus groups will explore potential new products and features, while Tech Central drop-ins provide the opportunity to touch base with tech support reps, ask questions and check on the progress of current tech support issues.

Participants in this conference are enthusiastic about the value they get from three days of learning with Questionmark staff, fellow Questionmark users and learning industry experts. Early-bird registration savings are available through January 21st, 2011 — so keep an eye on the conference program and register soon!