Grow what you know about assessment and measurement

Posted by Joan Phaup

We have not just one but two important opportunities coming up for customers to find out how to use our technologies more effectively, discuss best practices in assessment and measurement, and soar up the learning curve.

The Questionmark 2013 European Users Conference in Barcelona (November 10 – 12) and the Questionmark 2014 Users Conference in San Antonio (March 4 – 7) both offer tremendous professional development opportunities, along with the chance to influence the future of our technologies.

We’ll tell you time and again that these conferences are the best places to learn, network and discover new tips and techniques — but don’t take our word for it.

Find out from this video what participants thought about our most recent conference — and start making your plans to attend!

[youtube http://www.youtube.com/watch?v=kcSN7qUa0zc?list=UUG2RlPbdbq1AO051Ityq9Wg]

Integrating your LMS with Questionmark OnDemand just got easier!

Posted by Steve Lay

Last year I wrote about the impact that the IMS LTI standard could have on the way people integrate their LMS with external tools.

I’m pleased to say that we have just released our own LTI Connector for Questionmark OnDemand. The connector makes it easy to integrate your LMS with your Questionmark repository. Just enter some security credentials to set up the trusted relationships, and your instructors are ready to start embedding assessments directly into the learning experience.

By using a standard, the LTI connector enables a wide range of LMSs to be integrated in the same way. Many of them have LTI support built in directly, too, so you won’t have to install additional software or request optional plugins from your LMS hosting provider.

You can read more about how to use the LTI connector with Questionmark OnDemand on our website: Questionmark Connectors.

You can also find out which tools currently support the LTI standard from the IMS Conformance Certification page (which we hope to be joining shortly).

From Content to Tool Provider

The LTI standard, in many ways, does a similar job to the older SCORM and AICC standards. It provides a mechanism for an LMS to launch a student into an activity and for that activity to pass performance information (outcomes) back to the LMS to be recorded in their learning record.

Both the SCORM and AICC standards were designed with content portability in mind, before the Web became established. As a result, they defined the concept of a package of content that has to be published and ‘physically’ moved to the LMS to be run. The LMS became a player of the content.

Contrast this approach with that of IMS LTI. In LTI, the activity is provided by an external Tool Provider. The Tool Provider is hosted on the web and is identified by a simple URL; there is no publishing required! When the Tool’s URL is placed into the LMS, along with appropriate security credentials, the link is made. Now the student just follows an embedded link to the Tool Provider’s website where they interact with the activity directly. The two websites communicate via web services (much like AICC) to pass back information about outcomes.
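
To make the mechanics concrete, here is a rough sketch (in Python, assuming the oauthlib package) of what an LTI 1.x basic launch looks like from the LMS side. The launch URL, consumer key, secret and IDs below are placeholders for illustration only, not real Questionmark endpoints or credentials.

```python
# A minimal sketch of an LTI 1.x "basic launch" from the LMS (Tool Consumer)
# side: an OAuth 1.0-signed form POST to the Tool Provider's launch URL.
# Assumes the oauthlib package; the URL, key, secret and IDs are placeholders.
from urllib.parse import urlencode

from oauthlib.oauth1 import Client, SIGNATURE_TYPE_BODY

LAUNCH_URL = "https://tool.example.com/lti/launch"   # hypothetical Tool Provider URL
CONSUMER_KEY = "my-lms-key"                          # the "security credentials"
SHARED_SECRET = "my-shared-secret"

params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "assessment-42",   # identifies the embedded link in the course
    "user_id": "student-123",
    "roles": "Learner",
    # lis_outcome_service_url / lis_result_sourcedid would be included here
    # when the LMS wants outcomes passed back via web services.
}

client = Client(CONSUMER_KEY, client_secret=SHARED_SECRET,
                signature_type=SIGNATURE_TYPE_BODY)
uri, headers, body = client.sign(
    LAUNCH_URL,
    http_method="POST",
    body=urlencode(params),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)

# 'body' now holds the original parameters plus the oauth_* signature fields;
# the LMS renders them as a hidden, auto-submitting form in the learner's browser.
print(body)
```

In practice the LMS (or the LTI connector) builds this signed form for you; the point is simply that the only shared configuration is the launch URL and the key/secret pair.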

The result is simpler and more secure! It is no wonder that the LTI specification has been adopted so quickly by the community.

To Your Health: Are regulators pleased if everyone passes a test?

Posted by John Kleeman

We have all read about the huge fines imposed by regulators in the financial services industry, but regulatory activity in the health care industry is also rising rapidly. It seems to me that in this and other industries there is a sea change: regulators are holding companies more to account and checking much more carefully that they are following the rules.

A stunning example of this is the graph below from the US FDA, showing the rise in the formal warning letters it has issued across industries in recent years – from 471 in fiscal year 2007 to 4,882 in fiscal year 2012.

FDA Warning Letters rising fiscal years 2007-2012

I have not read all these thousands of letters (!), but here are a few interesting excerpts relating to failures to train and assess well enough. You can see all the warning letters on the FDA website:

From a letter to a medical devices company in 2012:

“Failure to document that all personnel are trained to adequately perform their assigned responsibilities … Specifically, no documentation was provided which demonstrated that manufacturing technicians or customer service personnel received training for their assigned duties.”

From a letter to a contract pharmaceutical laboratory in 2012:

“your laboratory should establish a procedure for requiring, documenting, and periodically assessing all training”

From a letter to a pharmaceutical manufacturer in 2012:

“Your employees were not adequately trained in CGMPs as evidenced by the deficiencies listed in this letter. In your response to this letter, provide a plan to develop an ongoing and robust CGMP training program for your personnel, including an explanation of how you will assess training effectiveness.“

And from an older letter to a US blood bank:

“There is no evidence an annual written test has been given to all laboratory personnel as required by the ‘Training Program’ procedure for the production laboratory.”

These excerpts are interesting evidence that regulators expect assessments or other proof of training. I was prompted to write this blog article by one other excerpt from FDA regulatory action. This example fascinated me not because the organization failed to assess, but because it did assess, and the assessments failed to pick up the problems.

In a letter to a blood services organization in 2012, the FDA said:

During an inspection …  FDA discovered that annual competency reviews and/or QA reviews did not detect that employees were not correctly performing all steps of testing blood samples. One test was repeatedly performed incorrectly by many employees beginning 2007, and another test was repeatedly performed incorrectly by many employees since April 2008. FDA’s review of the competency assessments for those employees performing those tests found that none failed the assessment.

So in this case, employees were being tested. But they were all passing, in spite of not doing their jobs correctly. This highlights a key requirement when using assessments: well-designed assessments are the best way of measuring competence and understanding, but you need to make sure that your assessments really are well designed, valid and reliable. A poor-quality assessment can give false comfort.

A sensible regulator will not only want to see that you assess your employees’ competence, but also that you use professional techniques to create, deliver and report on those assessments to ensure that they match the requirements for the job.  It is of course desirable that all your employees pass competency assessments, but only if those assessments truly measure the key requirements for the job.

For more background on good practice using assessments in the health care industry, read the previous To Your Health posts listed below. You can also read Questionmark’s best practice white papers for advice on providing assessments that get effective results.

Going BYOD? “Responsive Design” will help you get there

Posted by Brian McNamara

We’ve talked about “BYOD” (Bring Your Own Device) in this blog recently – and about how many organizations within corporate learning and higher education are either starting to embrace the idea or – at the very least – starting to plan for how they can be ready for it in the future.

In fact, one of my recent blog articles focused on a few practical tips for optimizing your online assessments for the broadest possible range of devices and browsers.

But today we’re going to take a look at how “responsive design” technology built into the latest release of Questionmark OnDemand will make the jump to supporting BYOD delivery of online assessments much, much easier.

Check out the video below for a look at how you can author a Questionmark assessment once, and then deliver it at a broad range of screen resolutions and to many different types of devices – from laptops to tablets to smartphones.

We have plenty of resources available to you. “How-to” videos and brief presentations about best practices in our Learning Cafe will give you valuable pointers about authoring, delivery and integration. We also share presentations and videos on our SlideShare page.

Standard Setting: Compromise and Normative Methods

Posted by Austin Fossey

We have discussed the Angoff and Bookmark methods of standard setting, which are two commonly used methods, but there are many more. I would again refer the interested reader to Hambleton and Pitoniak’s chapter in Educational Measurement (4th ed.) for descriptions of other criterion-referenced methods.

Though criterion-referenced assessment is the typical standard-setting scenario, cut scores may also be determined for normative assessments. In these cases, the cut score is often not set to make an inference about the participant, but instead set to help make an operational decision.

A common example of a normative standard is when the pass rate is set based on information that is unrelated to participants’ performance. A company may decide to hire the ten highest-scoring candidates, not because the other candidates are not qualified, but because there are only ten open positions. Of course if the candidate pool is weak overall, even the ten highest performers may still turn out to be lousy employees.

We may also set normative standards based on risk tolerance. You may recall from our post about criterion validity that test developers may use a secondary measure that they expect to correlate with performance on the assessment. An employer may wish to set a cut score to minimize Type I errors (false positives) because of the risk involved. For example, the ability to fly a plane safely may correlate strongly with aviation test scores, but because of the risk involved if we let an unqualified person fly a plane, we may want to set the cut score high even though we will exclude some qualified pilots.

Normative Standard Setting with Secondary Criterion Measure

The opposite scenario may occur as well. If Type I errors have little risk, an employer may set the cut score low to make sure that all qualified candidates are identified. Unqualified candidates who happen to pass may be identified for additional training through subsequent assessments or workplace observation.
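
To illustrate the trade-off involved, here is a small Python sketch, using made-up data and a hypothetical helper function, that picks the lowest cut score keeping the estimated Type I error rate (the share of unqualified candidates, as judged by the secondary criterion measure, who would still pass) under a chosen tolerance.

```python
import numpy as np

def cut_score_for_risk(test_scores, qualified, max_fp_rate):
    """Hypothetical helper: lowest cut score whose estimated Type I error rate
    (probability that an unqualified candidate passes) stays under max_fp_rate."""
    scores = np.asarray(test_scores, dtype=float)
    unqualified = ~np.asarray(qualified, dtype=bool)
    if not unqualified.any():
        raise ValueError("criterion data contains no unqualified candidates")

    for cut in np.sort(np.unique(scores)):
        # P(pass | unqualified) at this candidate cut score
        fp_rate = (scores[unqualified] >= cut).mean()
        if fp_rate <= max_fp_rate:
            return cut
    return None  # no observed cut score meets the tolerance

# Made-up data: assessment scores plus a qualification flag from a criterion measure
rng = np.random.default_rng(1)
scores = rng.normal(70, 10, 300)
qualified = scores + rng.normal(0, 8, 300) > 65   # criterion correlates with the test

print(cut_score_for_risk(scores, qualified, max_fp_rate=0.05))   # high-risk setting
print(cut_score_for_risk(scores, qualified, max_fp_rate=0.25))   # low-risk setting
```

Lowering the tolerated false-positive rate pushes the cut score up, which is exactly the trade-off described above: fewer unqualified candidates pass, but more qualified ones are excluded.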

If we decide to use a normative approach to standard setting, we need to be sure that there is justification for it, and the cut score should not be used to classify individuals. A normative standard by its nature implies that not everyone will pass the assessment, regardless of their individual abilities, which is why it would be inappropriate for most cases in education or certification assessment.

Hambleton and Pitoniak also describe one final class of standard-setting methods called compromise methods. Compromise methods combine the judgment of the standard setters with information about the political realities of different pass rates. One example is the Hofstee Method, where standard setters define the highest acceptable cut score (1), the lowest acceptable cut score (2), the highest acceptable fail rate (3), and the lowest acceptable fail rate (4). These are plotted against a curve of participants’ score data, and the intersection is used as the cut score.

Hofstee Method Example (adapted from Educational Measurement, ed. Brennan, 2006)
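
For readers who want to see the arithmetic, here is a rough Python sketch of the Hofstee calculation described above; the function name and sample score data are made up for illustration.

```python
import numpy as np

def hofstee_cut_score(scores, k_min, k_max, f_min, f_max):
    """Hypothetical helper implementing the Hofstee compromise described above.

    k_min, k_max -- lowest / highest acceptable cut scores (judges' input)
    f_min, f_max -- lowest / highest acceptable fail rates, as proportions
    """
    scores = np.asarray(scores, dtype=float)
    cuts = np.linspace(k_min, k_max, 201)                      # candidate cut scores
    fail_rate = np.array([(scores < c).mean() for c in cuts])  # empirical fail-rate curve
    # The Hofstee line runs from (k_min, f_max) down to (k_max, f_min)
    line = f_max + (f_min - f_max) * (cuts - k_min) / (k_max - k_min)
    # The cut score is taken where the empirical curve crosses that line
    idx = np.argmin(np.abs(fail_rate - line))
    return cuts[idx], fail_rate[idx]

# Made-up score data for illustration
rng = np.random.default_rng(0)
sample_scores = rng.normal(70, 10, 500)
cut, rate = hofstee_cut_score(sample_scores, k_min=55, k_max=75, f_min=0.05, f_max=0.30)
print(f"Hofstee cut score: {cut:.1f} (fail rate {rate:.1%})")
```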

Problem Questions and Summary – Item Writing Guide, Part 5

Posted by Doug Peterson

Let’s look at two more item writing problems. These last two are a little controversial.

The stimulus for this question tells a wonderful story. The problem is, the first three sentences contain no information that relates to the question. A long stimulus full of extraneous, unneeded information can easily distract or confuse the test-taker. This item needs a re-write of the stimulus to get directly to the question at hand – and nothing else. Let’s change it to “Which Questionmark video explains how to use assessments in solving business problems?”

But here’s the controversy: This question is just fine if you’re trying to ascertain the test-taker’s ability to recognize pertinent information and ignore extraneous information! Therefore I won’t advise that you *never* use a question like this, only that you make sure you use it in the right situation.

And now, on to our last question in this series of posts.

At first glance, there doesn’t appear to be a problem with this question – no repetition of a keyword, distracters are the same length, no grammar inconsistencies, short and to the point… But note the word “not” in the stimulus.

The other questions we’ve looked at in part 3 and part 4 of the series ask the test-taker to find the *correct* answer, but this question suddenly has them looking for an *incorrect* answer. This requires the test-taker to reverse their approach to the question, which can be very confusing.

That being said, there are some who advocate putting a certain number of negative questions on a *survey* to help ensure that the person filling it out is paying attention and not just flying through the questions. I’m not sure I agree with this approach. I feel that if they’re not interested and not paying attention to what they’re doing, negative questions aren’t going to change that, but they could lead to some very bad data being collected.

When it comes to quizzes, tests and exams, especially high-stakes exams, I strongly advise against using negative questions. If you absolutely must use a negative question, emphasize the negative word by putting it in all capital letters, bolding it, and maybe even underlining it.

So let’s pull it all together. It’s important to be fair to both the test-taker and the testing organization.

  • The test-taker should only be tested for the knowledge, skills or abilities in question, and nothing else.
  • The testing organization needs to be assured that the assessment accurately and reliably measures the test-taker’s knowledge, skills or abilities.

To do this, your assessment needs to be made up of well-written questions. To write good assessment questions:

  • Be careful with your wording so that you don’t create overly long or confusing questions.
  • Be concise. Sentences should be as short as possible while still posing the question clearly.
  • Keep it simple. Avoid compound sentences and use short, commonly used words whenever possible. Technical terminology is acceptable if it is part of what
    the test measures.
  • Make sure each question has a specific focus, and that you’re not actually testing multiple pieces of knowledge in a single question.
  • Always use positive phrasing to avoid confusion. If you have no choice but to use negative phrasing, make sure that the negative word – for example,
    “not” – is emphasized with capital letters, bold font, and/or underlining.
  • When creating distracters:
      • keep them all the same relative length,
      • keep them as short as possible,
      • avoid using keywords from the stimulus,
      • watch out for grammatical cues, and
      • make sure that all distracters are reasonable answers within the context of the question.

As always, feel free to leave your comments, or contact me directly at doug.peterson@questionmark.com.
