Heading home from San Antonio

Posted by Joan Phaup

Bryan Chapman

As we head back home from this week’s Questionmark Users Conference in San Antonio, it’s good to reflect on the connections people made with one another during discussions, focus groups, social events and a wide variety of presentations covering best practices, case studies and the features and functions of Questionmark technologies. Many thanks to all our presenters!

Bryan Chapman’s keynote on Transforming Open Data into Meaning and Action offered an expansive approach to a key theme of this year’s conference. Bryan described the tremendous power of OData while dispelling much of the mystery around it. He explained that OData can be exchanged in simple ways, such as requesting a URL or issuing a single command to create, read, update and delete data items.
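
To make that point concrete, here is a minimal sketch of what a read request against an OData feed can look like. The service root, entity name and credentials below are placeholders rather than Questionmark’s actual endpoints, and the sketch assumes an OData v4 feed returning JSON:

```python
# Minimal sketch of reading an OData feed over HTTP with 'requests'.
# The service root, entity name and credentials are placeholders,
# not Questionmark's actual endpoints.
import requests

SERVICE_ROOT = "https://example.com/odata"  # hypothetical service root

# Reading is a plain HTTP GET against a URL; $top and $filter are
# standard OData query options.
response = requests.get(
    f"{SERVICE_ROOT}/Assessments",
    params={"$top": 10, "$filter": "Score ge 80"},
    auth=("user", "password"),  # placeholder credentials
    headers={"Accept": "application/json"},
)
response.raise_for_status()

# OData v4 JSON wraps the result rows in a "value" array.
for row in response.json()["value"]:
    print(row)
```

Creates, updates and deletes follow the same pattern with POST, PATCH and DELETE requests, which is why one simple URL scheme can cover all four operations.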

It was interesting to see how focusing on the key indicators that have the biggest impact can produce easy-to-understand visual representations of what is happening within an organization. Among the many dashboards Bryan shared was one that showed the amount of safety training in relation to the incidence of on-the-job injuries.

No conference is complete without social events that nurture new friendships and cement long-established bonds. Yesterday ended with a visit to the Rio Cibolo Ranch outside the city, where we enjoyed a Texas-style meal, western music and all manner of ranch activities. Many of us got acquainted with some Texas Longhorn cattle, and the bravest folks of all took some lassoing lessons (snagging a mechanical calf, not a longhorn!).

Today’s breakouts and general session complete three intensive days of learning. Here’s wishing everyone a good journey home and continued connections in the year ahead.

Getting more value from assessment results

Posted by Joan Phaup

How do you maximize the value of assessment results? How do you tailor those results to meet the specific needs of your organization? We’ll address these questions and many others at the Questionmark Users Conference in San Antonio March 4 – 7.

The conference program will cover a wide range of topics, offering learning opportunities for beginning, intermediate and advanced users of Questionmark technologies. The power and potential of open data will be a major theme, highlighted in a keynote by Bryan Chapman on Transforming Open Data into Meaning and Action.

Here’s the full program:

Optional Pre-conference Workshops

  • Test Development Fundamentals, with Dr. Melissa Fein (half day)
  • Questionmark Boot Camp: Basic Training for Beginners, with Questionmark Trainer Rick Ault (full day)

General Sessions

  • Conference Kickoff and Opening General Session
  • Conference Keynote by Bryan Chapman – Transforming Open Data into Meaning and Action
  • Closing General Session — Leaping Ahead: A View Beyond the Horizon on the Questionmark Roadmap

Case Studies

  • Using Questionmark to Conduct Performance-Based Certifications — SpaceTEC®
  • Better Outcomes Make the Outcome Better! — U.S. Marine Corps University
  • Generating and Sending Custom Completion Certificates — The Aurelius Group
  • Leveraging Questionmark’s Survey Capabilities with a Multi-system Model — Verizon
  • Importing Questions into Questionmark Live on a Tri-Military Service Training Campus — Medical Education & Training Campus
  • How Can a Randomly Designed Test be Fair to All? — U.S. Coast Guard

Best Practices

  • Principles of Psychometrics and Measurement Design
  • 7 Reasons to Use Online Assessments for Compliance
  • Reporting and Analytics: Understanding Assessment Results
  • Making it Real: Building Simulations Into Your Quizzes and Tests
  • Practical Lessons from Psychology Research to Improve Your Assessments
  • Item Writing Techniques for Surveys, Quizzes and Tests

Questionmark Features & Functions

  • Introduction to Questionmark for Beginners
  • BYOL: Item and Topic Authoring
  • BYOL: Collaborative Assessment Authoring
  • Integrating with Questionmark’s Open Assessment Platform
  • Using Questionmark’s OData API for Analytics
  • Successfully Deploying Questionmark Perception
  • Customizing the Participant Interface

Discussions

  • Testing is Changing: Practical and Secure Assessment in the 21st Century
  • Testing what we teach: How can we elevate our effectiveness without additional time or resources?
  • 1 + 1 = 3…Questionmark, GP Strategies and You!

Drop-in Demos

  • Making the Most of Questionmark’s Newest Technologies

Future Solutions: Influence Questionmark’s Road Map

  • Focus Group on Authoring and Delivery
  • Focus Group on the Open Assessment Platform and Analytics

Tech Central

  • One-on-one meetings with Questionmark Technicians

Special Interest Group Meetings

  • Military/Defense: US DOD and Homeland Security
  • Utilities/Energy: Generation and Distribution
  • Higher Education
  • Corporate Universities

Social Events

Click here to see details about all these sessions, and register today!

How can a randomized test be fair to all?

Posted by Joan Phaup

James Parry, test development manager at the U.S. Coast Guard Training Center in Yorktown, Virginia, will answer this question during a case study presentation at the Questionmark Users Conference in San Antonio March 4 – 7. He’ll be co-presenting with LT Carlos Schwarzbauer, IT Lead at the USCG Force Readiness Command’s Advanced Distributed Learning Branch.

James and I spoke the other day about why tests created from randomly drawn items can be useful in some cases—but also about their potential pitfalls and some techniques for avoiding them.

When are randomly designed tests an appropriate choice?

James Parry

There are several reasons to use randomized tests. Randomization is appropriate when you think there’s a possibility of participants sharing the contents of their test with others who have not taken it. Another reason would be a computer-lab-style testing environment, where many people are tested on the same subject at the same time with no blinders between the computers. Even if participants look at the screens next to them, chances are they won’t see the same items.

How are you using randomly designed tests?

We use randomly generated tests at all three levels of testing: low-, medium- and high-stakes. The low- and medium-stakes tests are used primarily at the schoolhouse level for knowledge- and performance-based quizzes and tests. We are also generating randomized tests for on-site testing using tablet computers or locally installed workstations.

Our most critical use is for our high-stakes enlisted advancement tests, which are administered both on paper and by computer. Participants are permitted to retake this test every 21 days if they do not achieve a passing score. Before we were able to randomize the test, there were only three parallel paper versions. Candidates knew this, so some would “test sample” without studying to get an idea of every possible question: they would take the first version, then the second, and so forth until they passed. With randomization, the word has gotten out that this is not possible anymore.

What are the pitfalls of drawing items randomly from an item bank?

The biggest pitfall is the potential for producing tests that have different levels of difficulty or that don’t present a balance of questions on all the subjects you want to cover. A completely random test can be unfair. Suppose you produce a 50-item randomized test from an entire bank of 500 items. Participant “A” might get an easy test, “B” might get a difficult test, and “C” might get a test with 40 items on one topic and 10 on all the rest.
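
A quick simulation makes the risk James describes easy to see. This sketch is illustrative only (the bank size, topic tags and random seed are invented, not Coast Guard data): it draws three fully random 50-item forms from a 500-item bank spread over five topics and prints how lopsided the topic counts can be.

```python
# Illustrative only: show how purely random draws produce uneven forms.
import random
from collections import Counter

random.seed(1)

# A 500-item bank tagged with five topics (invented for the example).
bank = [random.choice("ABCDE") for _ in range(500)]

for n in range(1, 4):
    form = random.sample(bank, 50)  # a fully random 50-item test
    print(f"form {n}:", dict(sorted(Counter(form).items())))
```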

How do you equalize the difficulty levels of your questions?

This is a multi-step process. Item authors have to develop enough items in each topic to provide at least 3 to 5 items for each enabling objective. They have to think outside the box to produce items at several cognitive levels, to ensure there will be a variety of possible levels of difficulty. This is the hardest part for them, because most are not trained test writers.

Once the items are developed, edited and approved in workflow, we set up an Angoff rating session to assign a cut score for the entire bank of test items. Based upon the Angoff score, each item is assigned a difficulty level of easy, moderate or hard and given a matching metatag within Questionmark. We use a spreadsheet to calculate the number and percentage of available items at each level of difficulty in each topic. Based upon the results, the spreadsheet tells us how many items to select from the database at each difficulty level and from each topic. The test is then designed to match these numbers so that each time it is administered it will be parallel, with the same level of difficulty and the same cut score.
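
As a rough sketch of that selection logic (the topics, difficulty labels and blueprint counts below are invented for illustration; in practice the numbers come from the spreadsheet), drawing a fixed number of items from each topic-by-difficulty cell keeps every generated form parallel:

```python
# Hedged sketch of stratified random selection: fixed counts per
# topic/difficulty cell, so every generated form is parallel.
import random
from collections import defaultdict

def build_form(bank, blueprint):
    """bank: items tagged with 'topic' and 'difficulty'.
    blueprint: {(topic, difficulty): count}, as computed by the
    spreadsheet from the Angoff-based difficulty percentages."""
    cells = defaultdict(list)
    for item in bank:
        cells[(item["topic"], item["difficulty"])].append(item)
    form = []
    for cell, count in blueprint.items():
        form.extend(random.sample(cells[cell], count))  # draw per cell
    return form

# Invented example bank (20 items per cell) and blueprint.
bank = [{"topic": t, "difficulty": d, "id": f"{t}-{d}-{i}"}
        for t in ("navigation", "seamanship")
        for d in ("easy", "moderate", "hard")
        for i in range(20)]
blueprint = {("navigation", "easy"): 4, ("navigation", "moderate"): 5,
             ("navigation", "hard"): 1, ("seamanship", "easy"): 4,
             ("seamanship", "moderate"): 5, ("seamanship", "hard"): 1}

print(len(build_form(bank, blueprint)), "items per form")  # 20 items
```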

Is there anything audience members should do to prepare for this session?

Come with an open mind and a willingness to think outside of the box.

How will your session help audience members ensure their randomized tests are fair?

I will give them tools to use, starting with a quick review of using the Angoff method to set a cut score, and then discuss the inner workings of the spreadsheet I developed to ensure each test is fair and equal.

***

See more details about the conference program here and register soon.

Need Customized Reports? Try Out Our OData API

Sample Attempt Distribution Reportlet

Posted by Joan Phaup

The standard reports and analytics that our customers use to evaluate assessment results meet a great many needs, but some occasions call for customized reports. The Questionmark OData API makes it possible to access data securely and create dynamic assessment reports using third-party business intelligence tools.

Once these reports are set up, they provide a continuous flow of data, updating results as new data becomes available. OData also makes it possible to cross-reference your assessment data with another data source to get a fuller picture of what’s happening in your organization.
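
As an illustration of that cross-referencing idea (the feed URL, file name, column names and credentials here are all hypothetical, not Questionmark’s actual schema), you might pull results from an OData feed and join them to an HR extract:

```python
# Hypothetical sketch: join OData assessment results to another source.
import pandas as pd
import requests

# Pull result rows from an OData feed (placeholder URL and credentials).
rows = requests.get(
    "https://example.com/odata/Results",
    auth=("user", "password"),
    headers={"Accept": "application/json"},
).json()["value"]

results = pd.DataFrame(rows)           # assessment results
roster = pd.read_csv("hr_roster.csv")  # e.g. an HR extract (hypothetical)

# Cross-reference the two sources on a shared key (hypothetical name).
combined = results.merge(roster, on="participant_id")
print(combined.groupby("department")["score"].mean())
```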

My recent interview with Austin Fossey goes into more detail about this, but you can explore this idea yourself thanks to the OData tutorials and dashboards on Questionmark’s Open Assessment Platform for developers.

The site provides example open source code to show how your organization could provide reportlets displaying key performance indicators from many types of assessment data.  The examples demonstrate these sample reportlets:

  • Attempt Distribution
  • PreTest PostTest
  • Score Correlation
  • Distribution

Questionmark OnDemand customers can plug in their own data to create their own reportlets, and developers can use tutorials to get detailed instructions about connecting to OData, retrieving data and creating charts.
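
In the same spirit as the Attempt Distribution sample (this sketch is not the site’s actual code, and the attempt counts are made up), a reportlet can be as simple as bucketing attempt counts and charting them:

```python
# Minimal reportlet sketch: chart how many attempts participants needed.
# The counts are invented; in practice they would come from the OData feed.
from collections import Counter
import matplotlib.pyplot as plt

attempts = [1, 1, 2, 1, 3, 2, 1, 4, 2, 1, 1, 2, 3, 1]

dist = Counter(attempts)
plt.bar(list(dist.keys()), list(dist.values()))
plt.xlabel("Attempts per participant")
plt.ylabel("Participants")
plt.title("Attempt Distribution")
plt.show()
```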

You can also learn a lot about the power of OData at the upcoming Questionmark Users Conference in San Antonio March 4 – 7, so we hope you’ll join us there!

Create high-quality assessments: Join a March 4 workshop

Posted by Joan Phaup

There will be a whole lot of learning going on in San Antonio March 4 during three workshops preceding the Questionmark 2014 Users Conference.

These sessions cover a broad range of experience levels — from people who are just beginning to use Questionmark technologies to those who want to understand best practices in test development and item writing.

Rick Ault

Questionmark Boot Camp: Basic Training for Beginners (9 a.m. – 4 p.m.)

Questionmark Trainer Rick Ault will lead this hands-on workshop, which begins with a broad introduction to the Questionmark platform and then moves into interactive practice. Bring your own laptop to get firsthand experience creating and scheduling assessments. Participants will also get acquainted with reports and analytics.

Dr. Melissa Fein

Test Development Fundamentals (9 a.m. – 12 p.m.)

Whether you are involved in workplace testing, training program evaluation, certification & certificate program development, or academic testing, an understanding of criterion-referenced test development will strengthen your testing program. Dr. Melissa Fein, author of Test Development Fundamentals for Certification and Evaluation, leads this morning workshop, which will help participants judge test quality, set mastery cutoff points and improve their own tests.

Mary Lorenz

The Art and Craft of Item Writing (1 p.m. – 4 p.m.)

Writing high-quality multiple-choice questions can present many challenges and pitfalls. Longtime educator and test author Mary Lorenz will coach workshop participants through the process of constructing well-written items that measure given objectives. Bring items of your own and sharpen them up during this interactive afternoon session.

___

Choose between the full-day workshop and one or both of the half-day workshops.

Conference attendees qualify for special workshop registration rates, and there’s a discount for attending both half-day sessions.

Click here for details and registration.

Early-birds: Check out the conference program and register by tomorrow

Posted by Joan Phaup

You can still get a $100 early-bird registration discount for the Questionmark 2014 Users Conference if you register by tomorrow (Thursday, January 30th).

The program for March 4 – 7 includes a keynote by Learning Strategist Bryan Chapman on the power of open data, which will be a hot topic throughout this gathering at the Grand Hyatt San Antonio.

You can click here for program details, but here’s the line-up:

Optional Pre-Conference Workshops

  • Questionmark Boot Camp: Basic Training for Beginners (full day)
  • Test Development Fundamentals (half day)
  • The Art & Craft of Item Writing (half day)

Case Studies

  • Using Questionmark to Conduct Performance-Based Certifications — SpaceTEC®
  • Better Outcomes Make the Outcome Better! — U.S. Marine Corps University
  • Generating and Sending Custom Completion Certificates — The Aurelius Group
  • Leveraging Questionmark’s Survey Capabilities with a Multi-system Model — Verizon
  • Importing Questions into Questionmark Live on a Tri-Military Service Training Campus — Medical Education & Training Campus
  • How Can a Randomly Designed Test be Fair to All? — U.S. Coast Guard Training Center

Best Practices

  • Principles of Psychometrics and Measurement Design
  • 7 Reasons to Use Online Assessments for Compliance
  • Reporting and Analytics: Understanding Assessment Results
  • Making it Real: Building Simulations Into Your Quizzes and Tests
  • Practical Lessons from Psychology Research to Improve Your Assessments
  • Item Writing Techniques for Surveys, Quizzes and Tests

Questionmark Features & Functions

  • Introduction to Questionmark for Beginners
  • BYOL: Item and Topic Authoring
  • BYOL: Collaborative Assessment Authoring
  • Integrating with Questionmark’s Open Assessment Platform
  • Using Questionmark’s OData API for Analytics (BYOL)
  • Successfully Deploying Questionmark Perception
  • Customizing the Participant Interface

Discussions

  • Testing What We Teach: How can we elevate our effectiveness without additional time or resources?
  • Testing is Changing: Practical and Secure Assessment in the 21st Century

Future Solutions Focus Groups

  • Open Assessment Platform and Analytics
  • Authoring and Delivery

Special Interest Group Meetings

  • Military/Defense: US DOD and Homeland Security
  • Utilities/Energy: Generation and Distribution
  • Higher Education
  • Corporate Universities

Drop-in Demos of new Questionmark features and capabilities

Tech Central: Drop-in meetings with Questionmark technicians

***

Register for the conference by tomorrow, Thursday, January 30th, to get the early-bird discount.