Get with the program: Final reminder to present in Napa

Posted by Julie Delazyn

We are busy planning the program for the Questionmark 2015 Users Conference in Napa Valley, March 10-13.

We are thrilled about the location of this conference, in the heart of California Wine Country, surrounded by spectacular scenery, world-acclaimed wineries and award-winning restaurants.

A top priority is planning the conference program, which will include sessions on best practices, the use of Questionmark features and functions, demos of the latest technologies, case studies and peer discussions.

Equally significant will be the content created by Questionmark users themselves — people who present case studies or lead discussions. We are excited by the enriching case study and discussion proposals that are coming in, and we are still accepting proposals until December 10.

Space is limited — click here to download and fill out the call-for-proposals form for a chance to present in Napa.

Please note that presenters will receive some red carpet treatment — including a special dinner in their honor on Tuesday, March 10. We also award one 50%-off registration for each case study presentation.

  • Do you have a success story to share about your use of Questionmark assessments?
  • Have you had experiences or learned lessons that would be helpful to others?
  • Is there a topic you’d like to talk about with fellow learning and assessment professionals?

Napa Valley Marriott Hotel & Spa

If you can answer “yes” to any of these questions, we would welcome your ideas!

Plan ahead:
Plan your budget now and consider your conference ROI. The time and effort you save by learning effective ways to run your assessment program will more than pay for your conference participation. Check out the reasons to attend and the conference ROI toolkit here.

Sign up soon for early-bird savings:
You will save $200 by registering on or before December 17 — and your organization will save by taking advantage of group registration discounts. Get all the details and register soon.

Case Study: Live monitoring offers security for online tests

Posted by Julie Delazyn

Thomas Edison State College (TESC) is one of the oldest schools in the country designed specifically for adults. The college’s 20,000+ students, many of them balancing careers and families, live all over the world and favor courses that enable online study.

In setting up online midterm and final exams, the college wanted to give distance learners the same kind of security that on-campus students experience at more traditional institutions. At the same time, it was essential to give students some control over where and when they take tests.

Online proctoring offered a way to achieve both of these goals.

Working with Questionmark and ProctorU has enabled TESC to administer proctored exams to students at their home or work computers.

Proctors connect with test takers via webcam and audio hook-ups, verify each test-taker’s identity, initiate the authentication process, ensure that students are not using any unauthorized materials or aids, and troubleshoot technical problems. The college can now run secure tests while meeting the needs of busy students for flexible access to exams.

You can read the full case study here.

Using Diagnostic Assessments to Improve a Government Agency’s Workforce

Posted by Julie Delazyn

The Aurelius Group (TAG) provides Federal acquisition, human capital, and technology consulting to private industry, federal agencies, and the U.S. Department of Defense.

One of its clients is a large Federal agency that, faced with an expanding workload, inexperienced employees and increasingly scarce resources, needed to identify and close proficiency gaps in its acquisition workforce.

In response, TAG has incorporated Questionmark assessments into a successful workforce improvement program that reveals the aggregate strengths and weaknesses of the workforce and enables the client to direct resources to high-value development opportunities.

Assessments provide annual or biannual snapshots that show how much employees know about the complex bodies of knowledge their work requires and identify competency gaps that can be addressed through further learning. Trend data gleaned from the assessments demonstrates decline and improvement over time and provides objective support for training resource requests, proficiency gap analysis and workforce training decisions.

This case study explains how, in addition to providing an enterprise view of the workforce’s strengths and weaknesses, the program has improved participants’ self-awareness, helped shape their individual development plans (IDPs) and resulted in more effective learning choices.

 

Join us July 27 at 12:00 PM (EDT) for a Questionmark “Customers Online” webinar presentation by The Aurelius Group: Generating and Sending Custom Completion Certificates

How can a randomized test be fair to all?

Posted by Joan Phaup

James Parry, test development manager at the U.S. Coast Guard Training Center in Yorktown, Virginia, will answer this question during a case study presentation at the Questionmark Users Conference in San Antonio, March 4 – 7. He’ll be co-presenting with LT Carlos Schwarzbauer, IT Lead at the USCG Force Readiness Command’s Advanced Distributed Learning Branch.

James and I spoke the other day about why tests created from randomly drawn items can be useful in some cases—but also about their potential pitfalls and some techniques for avoiding them.

When are randomly designed tests an appropriate choice?

James Parry

There are several reasons to use randomized tests. Randomization is appropriate when you think there’s a possibility of participants sharing the contents of their test with others who have not taken it. Another reason would be a computer-lab-style testing environment where you are testing many people on the same subject at the same time, with no blinders between the computers: even if participants look at the screens next to them, chances are they won’t see the same items.

How are you using randomly designed tests?

We use randomly generated tests at all three levels of testing: low-, medium- and high-stakes. The low- and medium-stakes tests are used primarily at the schoolhouse level for knowledge- and performance-based quizzes and tests. We are also generating randomized tests for on-site testing using tablet computers or locally installed workstations.

Our most critical use is for our high-stakes enlisted advancement tests, which are administered both on paper and by computer. Participants are permitted to retake this test every 21 days if they do not achieve a passing score. Before we were able to randomize the test, there were only three parallel paper versions. Candidates knew this, so some would “test sample” without studying to get an idea of every possible question: they would take the first version, then the second, and so forth until they passed. With randomization, the word has gotten out that this is no longer possible.

What are the pitfalls of drawing items randomly from an item bank?

The biggest pitfall is the potential for producing tests that have different levels of difficulty or that don’t present a balance of questions across all the subjects you want to cover. A completely random test can be unfair. Suppose you produce a 50-item randomized test from a bank of 500 items: participant “A” might get an easy test, “B” might get a difficult test, and “C” might get a test with 40 items on one topic and only 10 on everything else.
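To make the pitfall concrete, here is a minimal simulation in Python (the evenly distributed five-topic bank is invented; real banks are lumpier, which only worsens the imbalance):

```python
import random
from collections import Counter

# Invented 500-item bank spread evenly across five topics.
bank = [{"id": i, "topic": f"Topic {i % 5 + 1}"} for i in range(500)]

for draw in range(1, 4):
    form = random.sample(bank, 50)  # a purely random 50-item test
    counts = Counter(item["topic"] for item in form)
    print(f"Draw {draw}:", dict(sorted(counts.items())))

# A balanced form would carry 10 items per topic; successive draws
# routinely land several items high or low on individual topics, and
# item difficulty (not modelled here) drifts in the same way.
```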

How do you equalize the difficulty levels of your questions?

This is a multi-step process. Item authors have to develop enough items in each topic to provide at least 3 to 5 items for each enabling objective. They have to think outside the box to produce items at several cognitive levels to ensure there will be a variety of possible levels of difficulty. This is the hardest part for them, because most are not trained test writers.

Once the items are developed, edited and approved in workflow, we set up an Angoff rating session to assign a cut score for the entire bank of test items. Based upon the Angoff score, each item is assigned a difficulty level of easy, moderate or hard and given a matching metatag within Questionmark. We use a spreadsheet to calculate the number and percentage of available items at each level of difficulty in each topic. Based upon the results, the spreadsheet tells us how many items to select from the database at each difficulty level and from each topic. The test is then designed to match these numbers, so that each time it is administered it is parallel, with the same level of difficulty and the same cut score.
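The spreadsheet itself isn’t reproduced in this interview, but the selection rule it describes (fix how many items come from each topic-by-difficulty cell, then draw randomly only within that cell) can be sketched roughly as follows. The topics, counts and difficulty tags below are invented for illustration; they are not the Coast Guard’s actual blueprint.

```python
import random

# Invented item bank: each item carries a topic and an Angoff-derived
# difficulty tag ("easy", "moderate" or "hard"), as described above.
TOPICS = ["Navigation", "Seamanship", "Regulations"]
LEVELS = ["easy", "moderate", "hard"]
bank = [{"id": i,
         "topic": random.choice(TOPICS),
         "difficulty": random.choice(LEVELS)}
        for i in range(500)]

# Blueprint: how many items to draw from each (topic, difficulty) cell.
# In the real process these counts would come from the spreadsheet's
# percentages of available items; the figures here are made up.
blueprint = {}
for topic in TOPICS:
    blueprint[(topic, "easy")] = 3
    blueprint[(topic, "moderate")] = 8
    blueprint[(topic, "hard")] = 4

def build_form(bank, blueprint):
    """Draw randomly *within* each blueprint cell, so every generated
    form has the same topic balance and difficulty mix."""
    form = []
    for (topic, level), n in blueprint.items():
        cell = [item for item in bank
                if item["topic"] == topic and item["difficulty"] == level]
        if len(cell) < n:
            raise ValueError(f"only {len(cell)} {level} items for {topic}")
        form.extend(random.sample(cell, n))
    random.shuffle(form)
    return form

form = build_form(bank, blueprint)
print(len(form))  # 45 items, with an identical mix on every run
```

Because every form is filled against the same blueprint, any two administrations are parallel in the sense described above: same topic coverage, same difficulty mix and, given the Angoff ratings, the same cut score.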

Is there anything audience members should do to prepare for this session?

Come with an open mind and a willingness to think outside of the box.

How will your session help audience members ensure their randomized tests are fair?

I will give them the tools they need, starting with a quick review of using the Angoff method to set a cut score, and then discuss the inner workings of the spreadsheet I developed to ensure each test is fair and equal.

***

See more details about the conference program here and register soon.

Keep those conference comments coming through December 1

Posted by Joan Phaup

With Thanksgiving approaching, one thing we are always grateful for here at Questionmark is, of course, our customers!

I’d like to thank people who have posted comments on our Facebook page as part of our 2014 Users Conference sweepstakes.

Those who “like” our page and post comments on the conference banner there are being entered into a random drawing for a free conference registration plus a food service gift certificate from the Grand Hyatt San Antonio.

If you have not done this yet, there’s still time! The sweepstakes ends Sunday, December 1, so take a moment to tell us why you’d like to attend the conference. We’ll put your name in the hat along with all the others. And if you win but have already registered, we’ll refund your fee.

Here are just a few of the answers we’ve received to the question, “Why would you like to attend the Questionmark 2014 Users Conference?”

  • “Excellent learning opportunities…and tacos.”
  • “I met so many great people at last year’s conference in Baltimore that were using Questionmark in so many different ways to support training. Good opportunity to pick everyone’s brains!”
  • “I have never attended before and Questionmark has become essential in my day-to-day. So I’d like face-to-face interaction with other users and the Questionmark staff.”
  • “Would enjoy seeing what’s new with Questionmark and what is planned for the future.”
  • “We are getting ready to implement testing through Questionmark, so conference attendance would be a great opportunity to learn more about the software and network!”
  • “I would like to hear stories from other users as well as meet more of the Questionmark staff, who have always been generous in their assistance.”

Then there was the entrant who raved about everything from Tech Central and breakout presentations to the fun of meeting people with common interests, then simply put into words the wish of everyone who has written in so far: “PICK ME!!!”

The winner’s name will be drawn at random on December 2nd. If you’re not on Facebook or have questions about the sweepstakes rules, click here.

So, what’s your answer to the question? Like us on Facebook if you haven’t already — and click on the conference sweepstakes banner there to tell us why you’d like to attend the conference. The deadline for entries is Sunday, December 1 — so go to our Facebook page soon — to “Like,” “Click,” and “Comment.”

 

 

Integration Highlights in Barcelona

Posted by Steve Lay

The programme for Questionmark’s European Users Conference in Barcelona, November 10 – 12, is just being finalized. As usual, there is plenty to interest customers who are integrating with our Open Assessment Platform.

This year’s conference includes a case study from Wageningen University on using QMWISe, our SOAP-based API, to create a dashboard designed to help you manage your assessment process. Also, our Director of Solution Services, Howard Eisenberg, will be leading a session on customising the participant interface, so you can learn how to integrate your own CSS into your Questionmark assessments.

I’ll be running a session introducing you to the main integration points and connectors with the assistance of two colleagues this year: Doug Peterson will be there to help translate some of the technical jargon into plain English and Bart Hendrickx will bring some valuable experience from real-world applications to the session. As always, we’ll be available throughout the conference to answer questions if you can’t make the session itself.

Finally, participants will also get the chance to meet Austin Fossey, our Analytics Product Owner, who will be talking, amongst other things, about our OData API for Analytics. This API allows you to create bespoke reports from data ‘feeds’ published from the results warehouse.
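Because OData feeds are plain HTTP, they can be pulled into OData-aware tools such as Excel or consumed with a few lines of code. The sketch below is a minimal illustration in Python; the base URL, entity set name, field names and credentials are placeholders rather than the documented endpoint, so substitute the values from your own Questionmark Analytics documentation.

```python
import requests

# Placeholder connection details -- substitute the feed URL and
# credentials from your own Questionmark Analytics OData documentation.
BASE_URL = "https://analytics.example.com/odata"  # hypothetical
AUTH = ("analytics_user", "password")             # hypothetical

# Fetch a (hypothetical) results feed as JSON, filtered server-side
# using standard OData query options.
resp = requests.get(
    f"{BASE_URL}/Results",  # hypothetical entity set name
    params={
        "$filter": "AssessmentName eq 'Safety Quiz'",
        "$select": "ParticipantName,TotalScore",
        "$top": "100",
        "$format": "json",
    },
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()

# Older OData versions wrap rows in "d"; newer ones use "value".
payload = resp.json()
rows = payload.get("value") or payload.get("d", {}).get("results", [])
for row in rows:
    print(row)
```

Standard OData query options such as $filter, $select and $top push the filtering to the server, so only the rows you need cross the wire.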

See the complete conference schedule here, and sign up soon if you have not done so already.

See you in Barcelona!