Podcast: High-Stakes Assessments for Doctors in Rural Australia


Posted By Sarah Elkins

I spoke recently with Lex Lucas, who is the online services manager at the Australian College of Rural and Remote Medicine (ACRRM), a professional organisation for rural and remote medicine education and training.

Rural doctors accredited with ACRRM usually live in very remote areas, and completing an assessment in a major city could require spending days away from their practice. The introduction of Questionmark Perception means that assessments can now be taken in the local community, and doctors can stay where they’re needed. Questionmark Live has also been used successfully to track question versions and revisions, strengthening the integrity of the assessment process. ACRRM’s experience is a great example of how online assessment management can help solve unique challenges!

Listen to the Podcast

Importing Qpacks into Questionmark Live: The Big Day is Here!


Posted by Jim Farrell

Our latest release of Questionmark Live brings highly requested features to the Questionmark Live Community.

The most asked-for feature is the ability to import Qpacks created in Perception into Questionmark Live, our browser-based authoring tool for subject matter experts (SMEs). Well, that day has come. You can now import Qpacks and share them with SMEs within your organization, giving you the power to conduct item review workshops on test items housed in your repository. Other new features include the ability to create essay questions and easier sharing of question sets.

Watch the video below to learn how to import your Perception Qpacks into Questionmark Live.

Pre-Hospital Emergency Care Council Increases Certification Testing


Posted By Sarah Elkins

We recently published a case study on how the Pre-Hospital Emergency Care Council (PHECC) is using Questionmark for certification testing. It’s a great example of how online assessment management can make the statistical review process more efficient. By combining the easy-to-use authoring tools in Questionmark Live with the statistical reporting features in Questionmark Perception, PHECC has created a streamlined process that ensures valid and reliable items.

PHECC is the Irish EMS regulator, responsible for certifying all pre-hospital emergency care professionals including Emergency Medical Technicians, Paramedics and Advanced Paramedics. Having valid and reliable assessments is a crucial factor in this work. Volunteer organisations such as the Red Cross also use PHECC’s exams to certify staff on a voluntary basis. Since introducing Questionmark, PHECC has been able to dramatically increase the number of exams it provides, and it is now fully booked six months in advance.

Want to know more? Read the Case Study

Understanding Assessment Validity: Content Validity


Posted by Greg Pope

In my last post I discussed criterion validity and showed how an organization can go about doing a simple criterion-related validity study with little more than Excel and a smile. In this post I will talk about content validity, what it is and how one can undertake a content-related validity study.

Content validity deals with whether the assessment content and composition are appropriate, given what is being measured. For example, does the test content reflect the knowledge/skills required to do a job or to demonstrate a sufficient grasp of the course content? In the sales course exam example from my last post, one would want to ensure that the questions on the exam cover the course content areas in appropriate ratios. If 40% of the four-day sales course deals with product demo techniques, then we would want about 40% of the questions on the exam to measure knowledge/skills in that area.

I like to think of content validity in two slices. The first slice of the content validity pie is addressed when an assessment is first being developed: content validity should be one of the primary considerations in assembling the assessment. Developing a “test blueprint” that outlines the relative weightings of content covered in a course, and how those map onto the number of questions in an assessment, is a great way to help ensure content validity from the start. Questions are, of course, classified into specific topics and subtopics as they are authored. Before an assessment is put into production and administered to actual participants, an independent group of subject matter experts should review it and compare the questions it includes against the blueprint. An example of a test blueprint for the sales course exam, which has 20 questions in total, is provided below.

[Image: example test blueprint for the 20-question sales course exam]
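To make the mapping concrete, here is a minimal sketch in Python of turning course weightings into blueprint question counts. The topic names and all weightings other than the 40% demo-techniques figure mentioned above are hypothetical:

```python
# Minimal sketch: convert course-content weightings into a test blueprint.
# Topic names and weights (apart from the 40% demo figure) are hypothetical.
weights = {
    "Product demo techniques": 0.40,
    "Prospecting": 0.25,
    "Handling objections": 0.20,
    "Closing the sale": 0.15,
}
total_questions = 20

# Round each topic's share of the exam to a whole number of questions.
blueprint = {topic: round(w * total_questions) for topic, w in weights.items()}

for topic, n in blueprint.items():
    print(f"{topic}: {n} questions")

# Sanity check: rounding can make the counts drift from the intended total.
assert sum(blueprint.values()) == total_questions
```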

The second slice of content validity is addressed after an assessment has been created. There are a number of methods available in the academic literature outlining how to conduct a content validity study. One way, developed by Lawshe in the mid-1970s, is to have a panel of subject matter experts rate each question on an assessment in terms of whether the knowledge or skills it measures are “essential,” “useful, but not essential,” or “not necessary” to the performance of what is being measured (i.e., the construct). The more SMEs who agree that items are essential, the higher the content validity. Lawshe also developed a funky formula called the “content validity ratio” (CVR) that can be calculated for each question. The average of the CVR across all questions on the assessment can be taken as a measure of the overall content validity of the assessment.

[Image: Lawshe’s content validity ratio (CVR) formula]
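For reference, Lawshe’s ratio for a single question, where N is the number of panelists and n_e is the number who rate the question “essential,” is:

```latex
\[
  \mathrm{CVR} = \frac{n_e - N/2}{N/2}
\]
```

CVR ranges from -1 to +1: it is 0 when exactly half the panel rates the question essential, and +1 when the whole panel does.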

You can use Questionmark Perception to conduct a CVR study easily: take an image of each question on an assessment (e.g., the sales course exam) and create a survey question for the SME panel to rate each one, similar to the example below.

[Image: example survey question for the SME review panel]

You can then use the Questionmark Survey Report or other Questionmark reports to review and present the content validity results.
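Once you have exported the panel’s responses, the calculation itself is straightforward. Here is a minimal sketch in Python, using made-up ratings, of computing the per-question CVR and the assessment-level average:

```python
# Minimal sketch, assuming SME panel ratings have been exported (e.g. from a
# survey report) into one list of rating labels per question. The labels and
# data below are made up for illustration.
def cvr(ratings):
    """Lawshe's content validity ratio for one question."""
    n = len(ratings)                      # total number of SME raters
    n_e = ratings.count("essential")      # raters judging the item essential
    return (n_e - n / 2) / (n / 2)

panel_ratings = [
    ["essential", "essential", "useful", "essential", "essential"],
    ["useful", "not necessary", "essential", "useful", "essential"],
]

cvrs = [cvr(r) for r in panel_ratings]
print("Per-question CVR:", cvrs)
print("Mean CVR (overall content validity):", sum(cvrs) / len(cvrs))
```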

So how does “face validity” relate to content validity? Well, face validity is more about the subjective perception of what the assessment is trying to measure than about conducting validity studies. For example, if our sales people sat down after the four-day sales course to take the sales course exam and all the questions asked about things that didn’t seem related to what they had just learned (e.g., what kind of car they would like to drive or how far they can hit a golf ball), they would not feel that the exam was very “face valid,” as it doesn’t appear to measure what it is supposed to measure. Face validity, therefore, has to do with whether an assessment looks and feels valid to the participant. Although subjective, face validity does matter: if participants or instructors don’t buy in to the assessment being administered, they may not take it seriously, may complain about or appeal their results more often, and so on.

In my next post I will turn the dial up to 11 and discuss the ins and outs of construct validity.

Assessment Standards Part Three: ISO 23988

Posted by John Kleeman

This is the third of a series of blog posts on standards that impact assessment. I’ve participated in many standards projects over the years, but there’s only one standard which I can be pretty sure would never have happened without my involvement.

Around the turn of the millennium I had a significant birthday, and rather than do the usual work tasks, I decided to use the day for something more creative. It was just around the time a brand new ISO working group on learning technology (SC36) was being formed, and I was part of a newly formed British Standards committee that shadowed the ISO committee. We were looking for new standards to develop, and it was about the time that using computers and the Internet to deliver assessments was really coming of age. Lots of people were using Questionmark software or other software to deliver assessments, and as they learned, they made mistakes which could cause unfairness and pain.

I thought it would be great to have a Code of Practice on how to use computers to deliver assessments. If this could be a standard, it would encourage everyone to follow good practice and would make things fairer and better for everyone using assessments. It would also allow everyone to benefit from the experience of the best practitioners.

So I proposed the idea to the UK committee, and after a while I led a panel of many experts in assessment to come up with what was then called BS 7988 – Code of Practice for the use of Information Technology (IT) in the Delivery of Assessments. Many wiser people than I contributed to the standard: assessment experts, technology experts and educational experts. BS 7988 was published in 2002, and in due course it was taken by the BSI to ISO to become (after some editing) the international standard ISO 23988.

The standard contains guidance and context for using IT to deliver assessments. Due to the vagaries of international standards economics, you have to pay to buy the standard, so I’m limited in how much I can quote from it. However, I hope that ISO won’t mind me quoting one illustrative clause, which applies to assessments that are invigilated or proctored:

    At least one invigilator should be present in the room throughout the assessment
    session. If there is a single invigilator, he/she should be able to summon help (including
    technical help) quickly if needed. Unless there is only one candidate, the invigilator should
    not be distracted from invigilation duties by having to provide technical help.

Not rocket science, but useful common sense. And there are 45 pages of useful material in the standard with lots of sensible guidelines.

As the saying goes, “What’s the difference between theory and practice?  In theory there is none, but in practice there is!” ISO 23988 encapsulates a lot of good practice in delivering assessments and puts it in a standard code of practice for everyone to pick from or follow.

Early-birds: Register for the Questionmark 2010 Users Conference by Friday!

 

Posted by Joan Phaup

We’re fast approaching the first early-bird deadline for the Questionmark 2010 Users Conference.

If you register by this Friday, December 4th, you will save $200 on the full registration fee. The conference program is taking shape, so check out some session titles on the schedule-at-a-glance.

Case study topics include Integrating Perception and SAP, Using Flash and Captivate with Perception, and Distributing the Workload and Increasing Accessibility with Questionmark Live. You will also see some Best Practice and Tech Training topics listed. There are more to come, so keep checking the site.

The conference offers great opportunities for technical training on Questionmark Perception as well as instruction on how to write better assessments. Delegates will see the latest Questionmark product developments and will influence what happens next. Meeting other Perception users and learning from their experiences is another major plus of attending. There’s really no better place to immerse yourself in learning how to get the most from your assessments.

If you missed our podcast with keynote speaker Dr. David Metcalf, click here and spend a few minutes listening to our conversation about the future of assessment.

…and remember to register! We look forward to seeing you in Miami, March 14-17, 2010!
