Moving ahead with browser-based question and assessment authoring

Posted by Doug Peterson

Last week I attended the Questionmark Users Conference in Baltimore, from the opening reception on Sunday evening right through to noon on Wednesday. My head is still spinning!

We all tend to think of users conferences as being an event for users to attend and learn things (and maybe have some fun along the way). Given the feedback I received, that certainly took place. But there’s another side to such gatherings that is very important to me as a Product Owner – the things I learn from the users!

I had some great conversations over the course of our three days in Baltimore. I received excellent guidance from our customers as to what they need and want as we continue to enhance our browser-based tools for authoring questions and assessments.

We talked during sessions, we talked at breakfast, lunch, and dinner, we even put in an impromptu session on Tuesday because there were more things about which we wanted to talk!

I took notes, wrote on flip charts and stuffed them in my backpack, and I brought home business cards so I can send out wireframes for customers to review – and then we can talk about them!

All of this brings me back to my main role as a Product Owner – to represent the customer to the business. I need to understand how people are using Questionmark’s products, the problems that they’re trying to solve, the things they like and don’t like, and what they’d like to see added to the product. To gain this understanding, we need to talk!

If you are a Questionmark user, here are three ways in which you can help:

  1. Share your thoughts, ideas, and feedback with me.
  2. Request online meetings where I can show you mock-ups of things we’re planning, or demo new features.
  3. Subscribe to the Authoring blog in Questionmark Community Spaces, where I share mock-ups and seek feedback.

Keep that information flowing so that I can make sure that your needs are met as we continue forward – together – with developing full-featured, easy-to-use question and assessment authoring tools.

Celebrating nationwide provision of high-quality tests

Posted by Joan Phaup

This week’s Questionmark Users Conference was a celebration of many things — Questionmark’s 25th anniversary, new and future tools for making tests and other assessments more powerful and meaningful, and tremendous progress on many fronts. It was also a celebration of our customers’ achievements, which shone brightly during case study presentations, discussions and a session on best practices for secure testing in remote environments.

This year, we had the pleasure of presenting a Questionmark Getting Results Award to the CareerTech Testing Center at the Oklahoma Department of Career and Technology Education (ODCTE). The award cited the Testing Center’s provision of tests that help institutions throughout the United States satisfy compliance criteria for state and national educational initiatives and meet certification requirements for more than 60 occupations.

Jennifer Nuttle, Jennifer Lathem and Teena Friend accept the Questionmark Getting Results Award on behalf of the CareerTech Testing Center
(Photo credit: ClamagePix)

The CareerTech Testing Center creates and administers competency assessments that enable students in CareerTech programs to demonstrate their mastery of knowledge related to their chosen occupation. The center offers more than 100 exams covering a variety of occupational areas and delivers more than 95,000 assessments each year to proctored sites across Oklahoma and 25 other states.

The Testing Center develops exams with the help of subject matter experts and uses Questionmark Analytics extensively during its annual review of all exams to maintain item quality and ensure that test items meet rigorous performance standards.

ODCTE has worked with Questionmark to find efficient ways to distribute tests nationwide and has demonstrated the value of sharing test content across organizations. Through partnerships with industry organizations, professional associations and government agencies, the CareerTech Testing Center offers many institutions convenient, efficient and economical access to high-quality tests. Congratulations to everyone who works there!

Learn more about CareerTech Testing Center assessments here.


Tiger sharks, dolphins and Texas – oh my!

Posted by Julie Delazyn

As we wrap up our annual Questionmark Users Conference, many customers have commented that with so many different types of presentations and discussions to choose from, it’s best to bring colleagues and use the “divide and conquer” method to cover all the bases. It’s been great to see organizations sending a range of employees so that everyone can go to a different kind of session and then brief each other on what they learned.

Yesterday’s keynote address by Charles Jennings gave us a lot to think about regarding the 70:20:10 framework and the challenge of measuring informal and workplace learning. Incidentally, his column in yesterday’s Training Industry Quarterly picked up the theme of “Extracting Learning from Work” – something Charles touched on during his talk.

Live blogs from both the keynote address and the opening general session will give you a taste of what we’ve been hearing about during the last couple of days. And we hope you will dive into the conversation on Twitter, using the #qmcon tag.

Last night we celebrated Questionmark’s 25th anniversary at the National Aquarium, just across Baltimore’s Inner Harbor from the conference hotel. We had the Aquarium all to ourselves and enjoyed a dolphin show before dinner and birthday cake.

If you were not able to make it to the conference this year, stay in touch with us on Twitter, Facebook and of course, this Blog. And mark your calendar for March 4 – 7, 2014, when we will meet at the Grand Hyatt in San Antonio, Texas, for the Questionmark 2014 Users Conference!

Look out for pictures on our Flickr page from this intense and always-fun learning event.

Better testing drives better instruction — An update from the Questionmark Users Conference

Posted by Joan Phaup

My headline came from yesterday’s pre-conference workshop on Criterion-Referenced Test Development led by Sharon Shrock and Bill Coscarelli.

Sharon’s brief remark is a fitting theme for what’s happening this week in Baltimore at the Questionmark Users Conference, as assessment and measurement professionals come together to make testing and assessment better than ever — and, as a result, do so much to improve learning.

Last night’s dessert reception brought Questionmark customers and staff back together again for our 11th annual conference.

The opening general session gave us a preview of what’s to come during some of the concurrent sessions, which will include:

  • bring-your-own-laptop instruction in the use of Questionmark Live browser-based authoring
  • demonstrations of Questionmark’s new OData API for Analytics (see the query sketch after this list)
  • sessions on incorporating mobile delivery into assessment programs
  • customer case studies
  • a session on best practices for leveraging SharePoint in a learning infrastructure
  • a presentation about secure testing in remote environments
  • a brainstorming session about possibilities for ADL’s new Experience API (an example statement also appears below)
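
For readers who can’t make it to the OData demonstrations, here is a rough idea of what querying an OData feed looks like in practice. This is a minimal sketch, not Questionmark’s actual API surface: the base URL, entity set name, property names and credentials below are placeholder assumptions, while the query options ($filter, $top, $format) are standard OData conventions.

```python
import requests

# Hypothetical endpoint and entity set: the real URL, entity names and
# authentication scheme come from your own Questionmark documentation.
BASE_URL = "https://example.questionmark.com/analytics/odata"

# Standard OData query options: $filter narrows the rows, $top caps how many
# come back, and $format=json requests JSON instead of the default Atom XML.
params = {
    "$filter": "AssessmentName eq 'Safety 101'",  # assumed property name
    "$top": "50",
    "$format": "json",
}

response = requests.get(
    BASE_URL + "/Results",              # "Results" entity set is assumed
    params=params,
    auth=("analytics_user", "secret"),  # placeholder credentials
    timeout=30,
)
response.raise_for_status()

# OData v2-style verbose JSON wraps the rows in a d/results envelope.
for row in response.json()["d"]["results"]:
    print(row.get("Participant"), row.get("TotalScore"))
```

And ahead of the Experience API brainstorming, here is the basic shape of an xAPI statement (who did what, to what) posted to a Learning Record Store. The verb ID comes from ADL’s published vocabulary; the LRS address, credentials and activity ID are illustrative placeholders only.

```python
import requests

# A minimal xAPI statement: actor (who), verb (did what), object (to what).
statement = {
    "actor": {"objectType": "Agent",
              "mbox": "mailto:learner@example.com",
              "name": "Example Learner"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/passed",
             "display": {"en-US": "passed"}},
    "object": {"objectType": "Activity",
               "id": "http://example.com/assessments/safety-101"},
    "result": {"score": {"scaled": 0.85}, "success": True},
}

response = requests.post(
    "https://lrs.example.com/xAPI/statements",  # placeholder LRS endpoint
    json=statement,
    headers={"X-Experience-API-Version": "1.0.0"},
    auth=("lrs_key", "lrs_secret"),
    timeout=30,
)
response.raise_for_status()
```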

We are looking forward to tomorrow’s keynote address by Charles Jennings of the 70:20:10 Forum on Meeting the Challenge of Measuring Informal and Workplace Learning.

This is a special conference for us, as we are also celebrating Questionmark’s 25th anniversary! We’re excited to be learning and celebrating with customers, and we look forward to these next few days. You can follow the conference and learn some new things on Twitter at #QMCON, so check in whenever you like.

Performance testing versus knowledge testing

Joan Phaup HeadshotPosted by Joan Phaup

Art Stark is an instructor at the United States Coast Guard National Search and Rescue School – and a longtime Questionmark user.

He will team up with James Parry, Test Development/E-Testing Manager at the Coast Guard’s Performance Systems Branch, to share a case study at the Questionmark Users Conference in Baltimore March 3 – 6.

I’m looking forward to hearing about the Coast Guard’s progress in moving from knowledge-based tests to performance-based tests. Here’s how Art explains the basic ideas behind this.

Tell me about your experience with performance-based training at the Coast Guard.

Art Stark

All Coast Guard training is performance-based. At the National Search and Rescue School we’ve recently completed a course rewrite and shifted further from knowledge-based assessments toward performance-based assessments. Before coming to the National SAR School, I was an instructor and boat operator trainer on Coast Guard small boats. Everything we did was 100% performance-based. The boat was the classroom, and we had standards and objectives we had to meet.

How does performance testing differ from knowledge testing?

To me, knowledge-based testing is testing to the lowest common denominator. All through elementary and high school we were tested at the knowledge level and only infrequently at a performance level. Think of a test you crammed for: as soon as the test was over, you promptly forgot the information. Most of the time, that was just testing knowledge.

Performance testing means being able to observe and evaluate the performance while it is occurring. Knowledge testing is relatively easy to develop; performance testing is much harder and much more expensive to create. With budget reductions, it is becoming harder and harder to develop the kinds of facilities we need for performance testing, so we need to find new, less expensive ways to test performance.

It takes a much more concerted effort to develop knowledge-application test items than simple knowledge test items. When a test is geared to knowledge only, it does not give the evaluator a good assessment of the student’s real ability. An example would be applying for a job as a customer service representative: interviews often include questions that actually test the application of knowledge, such as “You are approached by an irate customer. What actions do you take…?”

How will you address this during your session?

We’ll look at using written assessments to test performance objectives, which requires creating test items that apply knowledge instead of just recalling it. Drawing from Bloom’s Taxonomy, I focus on the third level, application. I’ll be showing how to bridge the gap from knowledge-based testing to performance-based testing.

What would you and Jim like your audience to take away from your presentation?

A heightened awareness of using written tests to evaluate performance.

You’ve attended many of these conferences. What makes you return each year?

The ability to connect with other professionals and increase my knowledge and awareness of advances in training. Meeting and being with good friends in the industry.

Check out the conference program and register soon.

Five Steps to Better Test Design and Delivery

Posted by Joan Phaup

I’ve been enjoying a series of posts in this blog by my colleague Doug Peterson about Test Design and Delivery – so much so that I suggested he elaborate on this theme during a presentation at the Questionmark 2013 Users Conference in Baltimore March 3 – 6 – and he said, “Yes!”

Doug will be presenting on some other topics, too, but during a recent conversation with him I asked if he could tell me a little more about this particular session, which will focus on five processes:

1. Plan: Establish your test’s reliability and validity, and identify content areas to be covered
2. Create: Write items that increase the cognitive load, avoid bias and measure what’s important
3. Build: Pull items together into a test form, develop clear instructions and set passing scores
4. Deliver: Protect test content, control item exposure and discourage cheating
5. Evaluate: Use item-, topic-, and test-level data to assess reliability and improve quality (see the sketch after this list)
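
To make step 5 concrete, here is a minimal sketch of the classical item statistics that kind of evaluation rests on, using Python with NumPy and a made-up scored response matrix. Questionmark’s reporting tools compute these for you; this just illustrates the underlying arithmetic: item difficulty (proportion correct), corrected item-total discrimination, and Cronbach’s alpha as a reliability estimate.

```python
import numpy as np

# Made-up scored responses: rows are test takers, columns are items,
# 1 = correct, 0 = incorrect. Real data would come from your result exports.
responses = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 0, 1],
])

n_people, n_items = responses.shape
total_scores = responses.sum(axis=1)

# Item difficulty: the proportion of test takers who answered each item correctly.
difficulty = responses.mean(axis=0)

# Item discrimination: correlation between each item and the total score with
# that item removed (the corrected item-total correlation).
discrimination = np.array([
    np.corrcoef(responses[:, i], total_scores - responses[:, i])[0, 1]
    for i in range(n_items)
])

# Cronbach's alpha: an internal-consistency estimate of test reliability.
item_variances = responses.var(axis=0, ddof=1)
alpha = (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_scores.var(ddof=1))

print("difficulty:", np.round(difficulty, 2))
print("discrimination:", np.round(discrimination, 2))
print("Cronbach's alpha:", round(alpha, 2))
```

In a real review you would flag items whose difficulty is extreme or whose discrimination is near zero or negative, which is exactly the kind of screening the Evaluate step describes.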

Could you talk about your own background as a test author?

It mainly stems from what I was doing in the 3 or 4 years before I joined Questionmark. My group was responsible for training call center employees, and that included writing and administering lots of tests. Before my group took that over, all the tests were paper-and-pencil and had to be graded by the instructors. And of course, over time, instructors tend to bond with their students and lose their objectivity.

It was clear that subjective testing was not good! We were introduced to Questionmark and we said, “Let’s automate these tests and make sure they are objective and fair.” That’s when I really got heavy-duty into testing. Over the course of those few years I attended several Questionmark conferences and went to a number of sessions on item analysis, test analysis, setting cut scores, and so forth. I tried to understand all those things so that we could run statistical reports on our own content and make sure our tests were valid and reliable.

Those years were very full of testing, and I learned a great deal about item and test writing, secure delivery and analyzing test results. I applied everything I learned to our call center training tests, and the customer satisfaction numbers began to rise. Why? Because our tests were working! The tests were valid and reliable, and because of that, they were weeding out the people who truly were not qualified for the job. Our stakeholders were very pleased because they had confidence that our tests were only passing people who were qualified to work in the call centers.

What do you think are the most challenging aspects of test design and delivery?

That’s hard to answer, because there are so many important things to think about! At the end of the day, the main thing is that the assessment is fair to both the test taker and the stakeholder. That idea encompasses many, many things. For the stakeholder, it requires having a valid, reliable assessment that uses solid methodology. For the participant, it boils down to well-written items. That sounds pretty simple, but it actually requires careful attention to detail.

How will you be addressing these challenges during your presentation at the Users Conference?

We’re going to take a look at everything I’ve been working on in the blog series. What does reliable mean? What does valid mean? How can we appropriately plan an assessment and tie it back to the job or subject matter we’re testing for? We will also incorporate a lot of ideas about item writing. Throughout the session, we will be looking at fairness to the stakeholder and fairness to the participant and breaking those principles down into several components.

Who would benefit from attending this session?

Anyone who has anything to do with creating and delivering assessments: item writers, assessment assemblers, administrators. It’s good for people in these different roles to understand the entire test development and delivery process, so they appreciate their co-workers’ concerns. I see this session as suitable for people who are just beginning their work with Questionmark as well as those at the intermediate level. I’m looking forward to sharing so much of what I learned when I was so closely involved in a testing program myself.

————-

There’s a lot to learn at this conference! Check out the agenda – and save $100 if you register by January 18th, 2013.