Ten tips on recommended assessment practice – from San Antonio, Texas

Posted by John Kleeman

One of the best parts of Questionmark user conferences is hearing about good practice from users and speakers. I shared nine tips after our conference in Barcelona, but Texas has to be bigger and better (!), so here are ten things I learned last week at our conference in San Antonio.

1. Document your decisions and processes. I met people in San Antonio who’d taken over programmes from colleagues. They valued all the documentation on decisions made before their time and sometimes wished for more. I encourage you to document the piloting you do, the rationale behind your question selection, item changes and cut scores. This will help future colleagues and also give you evidence if you need to justify or defend your programmes.

2. Pilot with non-masters as well as masters. Thanks to Melissa Fein for this tip. Some organizations pilot new questions and assessments just with “masters”, for example the subject matter experts who helped compile them. It’s much better if you can pilot with a wider sample and include participants who are not experts/masters. That way you get better item analysis data to review, and you will also get more useful comments about the items.

3. Think about the potential business value of OData. It’s easy to focus on the technology of OData, but it’s better to think about the business value of the dynamic data it can provide you. Our keynote speaker, Bryan Chapman, made a powerful case at the conference about getting past the technology. The real power is in working out what you can do with your assessment data once it’s free to connect with other business data. OData lets you link assessment and business data to help you solve business problems.
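
To make tip 3 concrete, here’s a minimal sketch in Python of what that linking can look like. The endpoint URL, field names and CSV file are hypothetical stand-ins rather than Questionmark’s documented feed; the point is simply that once results are exposed via OData, a few lines of code can join them to data from other business systems.

```python
# A minimal sketch (with a hypothetical endpoint, not Questionmark's
# documented feed) of joining OData assessment results to business data.
import requests
import pandas as pd

ODATA_URL = "https://example.com/odata/AssessmentResults"  # placeholder

response = requests.get(
    ODATA_URL,
    params={"$select": "ParticipantId,Score", "$format": "json"},
    auth=("username", "password"),  # replace with real credentials
)
response.raise_for_status()
results = pd.DataFrame(response.json()["value"])

# Business data from another system, e.g. revenue per employee.
sales = pd.read_csv("quarterly_sales.csv")  # columns: ParticipantId, Revenue

# Join on a shared key and look for a relationship between
# assessment scores and a business outcome.
merged = results.merge(sales, on="ParticipantId")
print(merged[["Score", "Revenue"]].corr())
```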

4. Use item analysis to identify low-performing questions. The most frequent and easiest use of item analysis is to identify low-performing questions. Many Questionmark customers use it regularly to identify questions that are too easy, too hard or not sufficiently discriminating. Once you identify these questions, you modify them or remove them depending on what your review finds. This is an easy win and makes your assessments more trustworthy.
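
For those who want to see the mechanics behind tip 4, here’s a minimal sketch of classical item analysis with invented response data. It computes each item’s difficulty (proportion correct) and discrimination (item-total correlation) and flags values outside common rule-of-thumb thresholds.

```python
# A minimal sketch of classical item analysis on invented data.
# Rows = participants, columns = items; 1 = correct, 0 = incorrect.
import numpy as np

responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
])

difficulty = responses.mean(axis=0)   # proportion correct per item
totals = responses.sum(axis=1)        # each participant's total score

# Discrimination: correlation between item score and total score.
# (Production tools usually correct the total by excluding the item itself.)
discrimination = np.array([
    np.corrcoef(responses[:, i], totals)[0, 1]
    for i in range(responses.shape[1])
])

for i, (p, r) in enumerate(zip(difficulty, discrimination), start=1):
    flags = []
    if p > 0.9 or p < 0.2:
        flags.append("too easy/hard")
    if r < 0.2:
        flags.append("low discrimination")
    note = " <-- " + ", ".join(flags) if flags else ""
    print(f"Item {i}: difficulty={p:.2f}, discrimination={r:.2f}{note}")
```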

5. Retention of learning is a challenge and assessments help. Many people shared that retention was a key challenge. How do you ensure your employees retain compliance training to use when they need it? How do you ensure your learners retain their learning beyond the final exam? There is a growing realization that using Questionmark assessments can significantly reduce the forgetting curve.

6. Use performance data to validate and improve your assessments. I spoke to a few people who were looking at improving their assessments and their selection procedure by tracking back and connecting admissions or onboarding assessments with later performance. This is a rich vein to mine.
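
As a toy illustration of tip 6, here’s a minimal sketch, with invented numbers, correlating onboarding assessment scores with later performance ratings. A real validation study needs far more care (sample size, range restriction, criterion quality), but the basic connection really is this simple to compute.

```python
# A minimal sketch with invented numbers: does an onboarding assessment
# score relate to a later performance rating?
from statistics import correlation  # Python 3.10+

onboarding_scores = [62, 71, 80, 55, 90, 68, 77]
performance_ratings = [3.1, 3.4, 4.2, 2.8, 4.6, 3.3, 4.0]

r = correlation(onboarding_scores, performance_ratings)
print(f"Onboarding score vs. later performance: r = {r:.2f}")
```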

7. Use topic feedback and scores. Topic scores and feedback are actionable. If someone gets an item wrong, it might just be a mistake or a misunderstanding. But if someone is weak in a topic area, you can direct them to remediation. A lot of organizations have great success dividing assessments into topics and then giving feedback and analyzing results by topic.
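
Here’s a minimal sketch of tip 7 with invented topic names: roll item results up by topic and flag weak topic areas for remediation.

```python
# A minimal sketch with invented topics: roll item results up by topic
# and flag weak topic areas for remediation.
from collections import defaultdict

# One participant's results: (topic, 1 if correct else 0)
item_results = [
    ("Fire Safety", 1), ("Fire Safety", 0), ("Fire Safety", 1),
    ("First Aid", 0), ("First Aid", 0), ("First Aid", 1),
]

by_topic = defaultdict(list)
for topic, score in item_results:
    by_topic[topic].append(score)

for topic, scores in sorted(by_topic.items()):
    pct = 100 * sum(scores) / len(scores)
    verdict = "direct to remediation" if pct < 60 else "on track"
    print(f"{topic}: {pct:.0f}% -- {verdict}")
```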

8. Questionmark Community Spaces is a great place to get advice. Several users shared that they’d posed a question or problem in the forums there and got useful answers. Customers can access Community Spaces here.

9. The Open Assessment Platform is real. We promote Questionmark as the “Open Assessment Platform,” allowing you to easily link Questionmark to other systems, and it’s not just marketing! As one presenter said at the conference, “The beauty of using Questionmark is you can do it all yourself.” If you need to build a system including assessments, check out the myriad ways in which Questionmark is open.

10. Think of your Questionmark assessments like a doctor thinks of a blood test. A doctor relies on a blood test to diagnose a patient. By using Questionmark’s trustable processes and technology, you can start to think of your assessments in a similar light, and rely on your assessments for business value.

I hope some of these tips might help you get more business value out of your assessments.

Getting more value from assessment results

Posted by Joan Phaup

How do you maximize the value of assessment results? How do you tailor those results to meet the specific needs of your organization? We’ll address these questions and many others at the Questionmark Users Conference in San Antonio March 4 – 7.

The conference program will cover a wide range of topics, offering learning opportunities for beginning, intermediate and advanced users of Questionmark technologies. The power and potential of open data will be a major theme, highlighted in a keynote by Bryan Chapman on Transforming OData into Meaning and Action.

Here’s the full program:

Optional Pre-conference Workshops

  • Test Development Fundamentals, with Dr. Melissa Fein (half day)
  • Questionmark Boot Camp: Basic Training for Beginners, with Questionmark Trainer Rick Ault (full day)

General Sessions

  • Conference Kickoff and Opening General Session
  • Conference Keynote by Bryan Chapman – Transforming Open Data into Meaning and Action
  • Closing General Session — Leaping Ahead: A View Beyond the Horizon on the Questionmark Roadmap

Case Studies

  • Using Questionmark to Conduct Performance Based Certifications —  SpaceTEC®
  • Better Outcomes Make the Outcome Better! —  USMC Marine Corps University
  • Generating and Sending Custom Completion Certificates — The Aurelius Group
  • Leveraging Questionmark’s Survey Capabilities with a Multi-system Model —  Verizon
  • Importing Questions into Questionmark Live on a Tri-Military Service Training Campus — Medical Education & Training Campus
  • How Can a Randomly Designed Test be Fair to All? —  U.S. Coast Guard

Best Practices

  • Principles of Psychometrics and Measurement Design
  • 7 Reasons to Use Online Assessments for Compliance
  • Reporting and Analytics: Understanding Assessment Results
  • Making it Real: Building Simulations Into Your Quizzes and Tests
  • Practical Lessons from Psychology Research to Improve Your Assessments
  • Item Writing Techniques for Surveys, Quizzes and Tests

Questionmark Features & Functions

  • Introduction to Questionmark for Beginners
  • BYOL: Item and Topic Authoring
  • BYOL: Collaborative Assessment Authoring
  • Integrating with Questionmark’s Open Assessment Platform
  • Using Questionmark’s OData API for Analytics
  • Successfully Deploying Questionmark Perception
  • Customizing the Participant Interface

Discussions

  • Testing is Changing: Practical and Secure Assessment in the 21st Century
  • Testing what we teach: How can we elevate our effectiveness without additional time or resources?
  • 1 + 1 = 3…Questionmark, GP Strategies and You!

Drop-in Demos

  • Making the Most of Questionmark’s Newest Technologies

Future Solutions: Influence Questionmark’s Road Map

  • Focus Group on Authoring and Delivery
  • Focus Group on the Open Assessment Platform and Analytics

Tech Central

  • One-on-one meetings with Questionmark Technicians

Special Interest Group Meetings

  • Military/Defense US DOD and Homeland Security
  • Utilities/Energy Generation and Distribution
  • Higher Education
  • Corporate Universities

Social Events

Click here to see details about all these sessions, and register today!

How can a randomized test be fair to all?

Posted by Joan Phaup

James Parry, who is test development manager at the U.S. Coast Guard Training Center in Yorktown, Virginia, will answer this question during a case study presentation at the Questionmark Users Conference in San Antonio March 4 – 7. He’ll be co-presenting with LT Carlos Schwarzbauer, IT Lead at the USCG Force Readiness Command’s Advanced Distributed Learning Branch.

James and I spoke the other day about why tests created from randomly drawn items can be useful in some cases—but also about their potential pitfalls and some techniques for avoiding them.

When are randomly designed tests an appropriate choice?

James Parry

There are several reasons to use randomized tests. Randomization is appropriate when you think there’s a possibility of participants sharing the contents of their test with others who have not taken it. Another reason would be a computer-lab-style testing environment where you are testing many people on the same subject at the same time, with no blinders between the computers. So even if participants look at the screens next to them, chances are they won’t see the same items.

How are you using randomly designed tests?

We use randomly generated tests at all three levels of testing: low-, medium- and high-stakes. The low- and medium-stakes tests are used primarily at the schoolhouse level for knowledge- and performance-based quizzes and tests. We are also generating randomized tests for on-site testing using tablet computers or locally installed workstations.

Our most critical use is for our high-stakes enlisted advancement tests, which are administered both on paper and by computer. Participants are permitted to retake this test every 21 days if they do not achieve a passing score. Before we were able to randomize the test, there were only three parallel paper versions. Candidates knew this, so some would “test sample” without studying to get an idea of every possible question. They would retake the first version, then the second, and so forth until they passed. With randomization, the word has gotten out that this is not possible anymore.

What are the pitfalls of drawing items randomly from an item bank?

The biggest pitfall is the potential for producing tests that have different levels of difficulty or that don’t present a balance of questions on all the subjects you want to cover. A completely random test can be unfair. Suppose you produce a 50-item randomized test from an entire test item bank of 500 items. Participant “A” might get an easy test, “B” might get a difficult test, and “C” might get a test with 40 items on one topic and 10 on all the rest, and so on.

How do you equalize the difficulty levels of your questions?

This is a multi-step process. Item authors have to develop enough items in each topic to provide at least 3 to 5 items for each enabling objective. They have to think outside the box to produce items at several cognitive levels to ensure there will be a variety of possible levels of difficulty. This is the hardest part for them because most are not trained test writers.

Once the items are developed, edited, and approved in workflow, we set up an Angoff rating session to assign a cut score for the entire bank of test items. Based upon the Angoff score, each item is assigned a difficulty level of easy, moderate or hard and given a matching metatag within Questionmark. We use a spreadsheet to calculate the number and percentage of available items at each level of difficulty in each topic. Based upon the results, the spreadsheet tells us how many items to select from the database at each difficulty level and from each topic. The test is then designed to match these numbers so that each time it is administered it will be parallel, with the same level of difficulty and the same cut score.
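
To illustrate the approach James describes, here is a minimal sketch in Python of such a stratified draw: instead of one random draw from the whole bank, a fixed number of items is drawn from each topic-and-difficulty cell, so every generated form is parallel. The bank, tags and blueprint counts are invented for illustration, not the Coast Guard’s actual spreadsheet values.

```python
# A minimal sketch of a stratified random draw. The bank, difficulty tags
# and blueprint counts are invented for illustration.
import random

item_bank = [
    # (item_id, topic, difficulty tag from an Angoff review)
    ("Q001", "Navigation", "easy"), ("Q002", "Navigation", "easy"),
    ("Q003", "Navigation", "hard"), ("Q004", "Navigation", "hard"),
    ("Q005", "Seamanship", "easy"), ("Q006", "Seamanship", "easy"),
    ("Q007", "Seamanship", "moderate"), ("Q008", "Seamanship", "moderate"),
]

# Blueprint: how many items to draw from each (topic, difficulty) cell,
# so every generated form has the same balance and overall difficulty.
blueprint = {
    ("Navigation", "easy"): 1,
    ("Navigation", "hard"): 1,
    ("Seamanship", "easy"): 1,
    ("Seamanship", "moderate"): 1,
}

def build_form(bank, blueprint):
    form = []
    for (topic, difficulty), count in blueprint.items():
        cell = [item for item in bank if item[1:] == (topic, difficulty)]
        form.extend(random.sample(cell, count))  # random within the cell
    return form

print(build_form(item_bank, blueprint))
```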

Is there anything audience members should do to prepare for this session?

Come with an open mind and a willingness to think outside of the box.

How will your session help audience members ensure their randomized tests are fair?

I will give them the tools to use, starting with a quick review of using the Angoff method to set a cut score, and then discuss the inner workings of the spreadsheet I developed to ensure each test is fair and equal.

***

See more details about the conference program here and register soon.

Create high-quality assessments: Join a March 4 workshop

Posted by Joan Phaup

There will be a whole lot of learning going on in San Antonio March 4 during three workshops preceding the Questionmark 2014 Users Conference.

These sessions cover a broad range of experience levels — from people who are just beginning to use Questionmark technologies to those who want to understand best practices in test development and item writing.

Rick Ault

Questionmark Boot Camp: Basic Training for Beginners (9 a.m. – 4 p.m.)

Questionmark Trainer Rick Ault will lead this workshop, which begins with a broad introduction to the Questionmark platform and then becomes an interactive, hands-on practice session. Bring your own laptop to get some firsthand experience creating and scheduling assessments. Participants will also get acquainted with reports and analytics.

Dr. Melissa Fein

Test Development Fundamentals (9 a.m. – 12 p.m.)

Whether you are involved in workplace testing, training program evaluation, certification & certificate program development, or academic testing, an understanding of criterion-referenced test development will strengthen your testing program. Dr. Melissa Fein, author of Test Development Fundamentals for Certification and Evaluation, leads this morning workshop, which will help participants judge test quality, set mastery cutoff points, and improve their tests.

Mary Lorenz

The Art and Craft of Item Writing (1 p.m. – 4 p.m.)

Writing high-quality multiple-choice questions can present many challenges and pitfalls. Longtime educator and test author Mary Lorenz will coach workshop participants through the process of constructing well-written items that measure given objectives. Bring items of your own and sharpen them up during this interactive afternoon session.

___

Choose between the full-day workshop and one or both of the half-day workshops.

Conference attendees qualify for special workshop registration rates, and there’s a discount for attending both half-day sessions.

Click here for details and registration.


Psychometrics and Measurement Design: A conversation

Posted by Joan Phaup

Many delegates to the Questionmark 2014 Users Conference in San Antonio March 4 – 7 want to learn about assessment-related best practices.

Austin Fossey, our Reporting and Analytics Manager, will talk about Principles of Psychometrics and Measurement Design during one of the many breakout sessions on the agenda.

Austin Fossey

Austin had just joined Questionmark when he attended the 2013 conference. This time around, he’ll be more actively involved in the program, so I wanted to learn more about him and his presentation plans.

What made you decide to study psychometrics?

I was working in customer service at a certification testing company. They always brought in psychometricians to build their assessments. I’d never heard of psychometrics before, but I had studied applied math as an undergraduate and thought the math behind psychometrics was interesting. I liked the idea of doing analytical work and heard that psychometricians are always in demand, so I got started right away studying educational measurement at the University of Maryland.

How do you make principles of psychometrics understandable to, well, mere mortals?

I don’t think psychometricians are different from anybody else. Most of it is applying a probabilistic model to a set of data to make an inference about an unobserved trait. Those models are based on concepts or theories, so you don’t have to explain the math as long as you can explain the theory. People understand that.
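
To make that concrete, here’s a minimal sketch of one of the simplest such models, the Rasch model (the example is ours, not one Austin presented): the probability of a correct answer depends only on the gap between a participant’s unobserved ability and the item’s difficulty.

```python
# A minimal sketch of the Rasch model: the probability of a correct answer
# depends only on the gap between an unobserved ability (theta) and the
# item's difficulty (b).
import math

def rasch_probability(theta, b):
    """P(correct) = 1 / (1 + exp(-(theta - b)))"""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A participant slightly above average (theta = 0.5) facing an
# average item (b = 0.0) and a hard item (b = 1.5).
print(f"Average item: P(correct) = {rasch_probability(0.5, 0.0):.2f}")  # ~0.62
print(f"Hard item:    P(correct) = {rasch_probability(0.5, 1.5):.2f}")  # ~0.27
```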

I really like evidence-centered design, because it provides principles and a vocabulary that can be used by everyone involved in assessments. Using this framework, psychometricians can communicate about measurement design with subject matter experts, item writers, curriculum specialists, programmers, policy makers — all the stakeholders, from start to finish.

Who do you think would benefit from attending your presentation about psychometrics and measurement design?

People who feel they are applying the same test development formula day in and day out and who wonder if there might be a better way to do it. Even with certifications, which usually follow excellent standards based on best practices, we should always be critical about our assessments and we should always be aggressive about ensuring validity. It would be great to see people there who want to be mindful of every decision they make in assessment design.

How could people prepare for this session?

I hope they bring examples of their own test development process and validity studies. We can discuss people’s own experiences and the hurdles they have faced with their measurement design. Other than that I would say just bring an open mind.

What would you like your audience to take away from your presentation?

People who may be new to measurement design and psychometric concepts like validity can take away some tools to use in their assessment programs. I hope that if more experienced people come, they can learn from each other’s experiences and go away with new ideas about their own approach to assessment design.

What do you hope to take away from the Users Conference?

I want to gather a lot of feedback from our clients during conversations and focus groups, so that we can recalibrate ourselves for the work we are doing and prioritize our tasks.

The session on psychometrics is just one of several for Austin in 2014. Check out the conference program and register by December 12 to save $200.

Reflections on Barcelona: Great learning, great connections

Posted by Doug Peterson

I had the opportunity to attend and present at the Questionmark European Users Conference in Barcelona, Spain, November 10 – 12. If you’ve never had the chance to go to Barcelona, I highly recommend it! This is a *beautiful* city full of charming people, wonderful architecture, and GREAT food.

Traveling and seeing new sights and engaging in new adventures is always fun, but the true “stars of the show” at a users conference are, of course, the users.

It was wonderful to catch up with customers I first met two years ago in Brussels. I also had the opportunity to meet in person several people I had met over the last couple of years only through emails and conference calls, and it was great to put a face with a name. And of course, it was wonderful to make brand new friends whom I hope to see again next year!

One of the things I enjoy the most about Questionmark Users Conferences is how customers learn from each other. This happens in a more structured way during the many sessions presented by members of our user community, but I enjoy it even more when it happens more informally.

During the opening reception Sunday night I had the opportunity to talk with several customers, to hear why they were attending – what they wanted to get out of the conference – and then introduce them to another customer or a Questionmark employee who could help them meet their goals for the conference. Breakfast and lunch conversations were always interesting, and even during Monday night’s fantastic dinner at El Torre Dels Lleones, the conversations between different users facing various challenges continued (except, of course, when we were all watching the flamenco dancers perform!). These conversations are simply invaluable, not only because customers help each other find answers to their challenges, but because I as an employee gain insights and a depth of understanding as to what our customers are doing and the problems they are facing in ways that an email or a phone call can’t communicate.

Tuesday morning we tried something new during the General Session. Howard Eisenberg gave an informative presentation on item writing best practices, and at certain points he would pause so that each table could discuss the current topic amongst themselves. Then he would take comments from some of the tables before moving forward with the next topic. The conversations at the table where I was sitting were GREAT! The sharing of different perspectives and experiences resulted in a lot of “Oh, I never thought of that!” expressions all around the room. THAT’S why I love going to our users conferences: there’s just nothing like the information exchange and growth that takes place when a bunch of Questionmark users gather together in one place.

If you weren’t able to make it to Barcelona, I hope you can come to San Antonio, Texas, March 4 – 7. We’ll be right on the city’s River Walk. I’ve been there several times visiting family in the area, and I can tell you it’s beautiful and FUN! Please join us. I’m confident that by the end of the conference you will agree that it was time well spent.