Assessments Through the Learning Process: Video & white paper

Posted by Julie Delazyn

Quizzes, tests, and exams do much more than determine whether a learner passed a training course. These assessments, as well as surveys, play a crucial role in learning, performance improvement and regulatory compliance. I’m pleased to share an 8-minute video that explores the varied and important roles assessments play before, during and after a learning experience.

This video and our white paper, Assessments through the Learning Process, are great places to start exploring the possibility of using online assessments in education, training, certification or compliance. They explain how you can use assessments to improve learning and measurement, and they point you to many additional information sources.

Make sure to bookmark the Questionmark Learning Cafe to stay up to date with videos, demonstrations and other resources about everything from assessment-related best practices to the use of Questionmark technologies.

Writing Good Surveys, Part 2: Question Basics

Posted by Doug Peterson

In the first installment in this series, I mentioned the ASTD book, Survey Basics, by Phillips, Phillips and Aaron. The fourth chapter, “Survey Questions,” is especially good, and it’s the basis for this installment.

The first thing to consider when writing questions for your survey is whether or not the questions return the data you are looking for. For example, let’s say one of the objectives for your survey is to “determine the amount of time per week spent reading email.”

Which of these questions would best satisfy that objective?

  1. How many emails do you receive per week, on average?
  2. On average, how many hours do you spend responding to emails every week?
  3. How long does it take to read the average email?
  4. On average, how many hours do you spend reading emails every week?

All four questions are related to dealing with email, but only number 4 pertains directly to the objective. Numbers 1 and 3 could be combined to satisfy the objective if you’re willing to assume that every email received is read – a bit of a risky assumption, in my opinion (and experience). Number 2 is close, but there is a difference between reading an email and responding to it, and again, you may not respond to every email you read.

The next thing to consider is whether or not the question can be answered, and if so, ensuring that the question does not lead to a desired answer.

The authors give two examples in the book. The first describes a situation where the author was asked to respond to the question, “Were you satisfied with our service?” with a yes or no. He was not dissatisfied with the service he received, but he wasn’t satisfied with it, either. However, there was no middle ground, and he was unable to answer the question.

The second example involves one of the authors checking out of a hotel. When she tells the clerk that she enjoyed her stay, the clerk tells her that they rate customer satisfaction on a scale of one to ten, and asks if she would give them a ten. She felt pressured into giving the suggested response instead of feeling free to give a nine or an eight.

Another basic rule for writing survey questions is to make sure the respondent can understand the question. If they can’t understand it at all, they won’t answer or they will answer randomly (which is worse than not answering, as it is garbage data that skews your results). If they misunderstand the question, they’ll be answering a question that you didn’t ask. Remember, the question author is a subject matter expert (SME); he or she understands the big words and fancy jargon. Of course the question makes sense to the SME! But the person taking the survey is probably not an SME, which means the question needs to be written in plain language. You’re writing for the respondent, not the SME.

Even more basic than providing enough options for the respondent to use (see the “yes or no” example above) is making sure the respondent even has the knowledge to answer. This is typically a problem with “standard” surveys. For example, a standard end-of-course survey might ask if the room temperature was comfortable. While this question is appropriate for an instructor-led training class where the training department has some control over the environment, it really doesn’t apply to a self-paced, computer-based e-learning course.

Another example of a question the respondent would have no way of answering is something like, “Does your manager provide monthly feedback to his/her direct reports?” How would you know? Unless you have access to your manager’s schedule and can verify that he or she met with each direct report and discussed their performance, the only question you could answer is, “Does your manager provide you with monthly feedback?” The same is true of questions that start with “Do your coworkers consider…” – the respondent has no idea what his or her coworkers’ thoughts and feelings are, so only ask questions about observable behaviors.

Finally, make sure to write questions in a way that respondents are willing to answer. Asking a question such as “I routinely refuse to cooperate with my coworkers” is probably not going to get a positive response from someone who is, in fact, uncooperative. Something like “Members of my workgroup routinely cooperate with each other” is not threatening and does not make the respondent look bad, yet they can still answer with “disagree” and provide you with insights as to the work atmosphere within the group.

Here’s an example of a course evaluation survey that gives the respondent plenty of choices.

Using Questionmark’s OData API to Create a Response Matrix

Posted by Austin Fossey

A response matrix is a table of data in which each row represents a participant’s assessment attempt and each column represents an item. The cells show the score that each participant received for each item – valuable information that can help you with psychometric analysis.

The Questionmark OData API enables you to create this and other custom data files by giving you flexible, direct access to raw item-level response data.

You can already see participants’ item-level response data in Questionmark reports, but those reports group data together for one assessment at a time.

If you have a large-scale assessment design with multiple equated forms, you may want to generate a matrix that shows response data for common items that are used across the forms.

The example below shows a response matrix created with OData in Microsoft Excel 2013 using the PowerPivot add-in. The cells in a response matrix are coded with the score that the participant received for each item (e.g., 1 = correct and 0 = incorrect). (If an item was not delivered to a participant, the cell will be returned blank, though you can impute other values as needed.)

[Example response matrix]
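If you prefer to script the same pivot outside of Excel, here is a minimal sketch in Python with pandas. It assumes you have already exported the raw item-level rows from the OData feed to a CSV file; the file name and the Participant, Item and Score column names are hypothetical and will depend on how you extract the data.

```python
import pandas as pd

# Hypothetical export of raw item-level response data from the OData API:
# one row per participant attempt per item, with the score awarded for that item.
responses = pd.read_csv("item_responses.csv")  # assumed columns: Participant, Item, Score

# Pivot into a response matrix: one row per participant attempt, one column per item.
# Items that were never delivered to a participant remain blank (NaN), matching the
# behaviour described above; impute other values afterwards if your analysis needs them.
matrix = responses.pivot_table(index="Participant", columns="Item", values="Score")

print(matrix.head())
```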

You can use OData to create a response matrix that can be used for form equating or as input files for item calibration in Item Response Theory (IRT) software. These data are also helpful if you want to check a basic item-level calculation, like the p-value for the item across all assessments. (Note that item-total correlations can only be calculated if the total score has been equated for all forms.)
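As a quick illustration of that basic check, the classical p-value for each item is simply the mean of the 0/1 scores across the attempts in which the item was delivered. Continuing the hypothetical pandas sketch above:

```python
# Classical item difficulty (p-value): mean of the 0/1 item scores,
# skipping blank cells where the item was not delivered.
p_values = matrix.mean(axis=0, skipna=True)
print(p_values.sort_values())
```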

Visit Questionmark’s website for more information about the OData API. (If you are a Questionmark software support plan customer, you can get step-by-step instructions for using OData to create a response matrix in the Premium Videos section of the Questionmark Learning Café.)

Prevent Cheating with Randomised Assessments

Posted by Chloe Mendonca

It’s always interesting to talk with customers and discover how they use assessments to meet their business needs. My recent conversation with Onno Tomson, Senior Advisor at ECABO (soon to be known as eX:plain), gave me some background into the world of metadata and its effectiveness in the prevention of cheating.


Onno Tomson, Senior Advisor at ECABO

Onno will present a session at the Questionmark 2013 Users Conference in Barcelona from 10-12 November. I was interested to know more about his presentation, “Advanced Item Banking and Psychometric Analyses with Randomly Generated Assessments.”

Could you tell me about your organization and how you currently use Questionmark assessments?

eX:plain is a knowledge centre for vocational education and professional development. We develop and hold examinations and supply the appropriate content in training for a variety of professions. Questionmark is one of our main applications. We use it for the construction and delivery of our exams, both online and on paper. We average about 200,000 exams per year.

Why do you use randomly generated questions in your assessments, and could you tell me how this works?

The randomization of questions in our exams is an essential way to prevent cheating. Each bank contains a large number of questions, which means that each individual taking the test receives a completely different set of questions, making it impossible for them to copy one another’s answers.

What problems can arise when drawing questions from large item banks to create your exams?

First, there is the challenge of keeping a good overview of the content.

Second, we need to work with large amounts of metadata to make sure the right question pops up at the right spot in the right exam, and for management and maintenance purposes.

Selecting questions on multiple metadata values carries the risk of negatively impacting performance. At the conference I will share my experiences and solutions to these important issues.

What do you hope people will take away from your breakout session?

A broader insight into how Questionmark deals with metadata, how it affects your grip on content and how it can affect performance.

What do you hope to take away from the conference?

I’m looking forward to getting updates on the development of Questionmark, expanding my network and helping to influence future product features by speaking with the Questionmark management team.

LTI certification and news from the IMS quarterly meeting

Posted by Steve Lay

Earlier this month I travelled to Michigan for the IMS Global Learning Consortium’s quarterly meeting. The meeting was hosted at the University of Michigan, Ann Arbor, the home of “Dr Chuck”, the father of the IMS Learning Tools Interoperability (LTI) protocol.

I’m pleased to say that, while there, I put our own LTI Connector through the new conformance test suite, and we have now been certified against the LTI 1.0 and 1.1 protocol versions.

The new conformance tests reinforce a subtle change in direction at IMS. For many years the specifications have focused on packaged content that can be moved from system to system. The certification process involved testing this content in its transportable form, matching the data against the format defined by the IMS data specifications. This model works well for checking that content publishers are playing by the rules, but it isn’t possible to check whether a content player is working properly.

In contrast, the LTI protocol does not move content around; instead, it integrates and aggregates tools and content that run over the web. This shifts conformance from checking the format of transport packages to checking that online tools, content and the containers used to aggregate them (typically an LMS) all adhere to the protocol. With a protocol it is much easier to check that both sides are playing by the rules, so overall interoperability should improve.

In Michigan, the LTI team discussed the next steps with the protocol. Version 2 promises to be backwards-compatible but will also make it much easier to set up the trusted link between the tool consumer (e.g., your LMS) and the tool provider (e.g., Questionmark OnDemand).  IMS are also looking to expand the protocol to enable a deeper integration between the consumer and the provider. For example, the next revision of the protocol will make it easier for an LMS to make a copy of a course while retaining the details of any LTI-based integrations. They are also looking at improving the reporting of outcomes using a little-known part of the Question and Test Interoperability (QTI) specification called QTI Results Reporting.
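To make the shape of that trusted link concrete: in LTI 1.1 the tool consumer launches the tool provider by POSTing a form whose parameters are signed with an OAuth 1.0a key and secret shared by both sides. Below is a minimal sketch in Python using the oauthlib package; the key, secret, launch URL and parameter values are placeholders for illustration, not real Questionmark settings.

```python
from urllib.parse import urlencode

from oauthlib.oauth1 import Client, SIGNATURE_TYPE_BODY

# Credentials agreed between the tool consumer (e.g. an LMS) and the tool
# provider -- placeholder values for illustration only.
CONSUMER_KEY = "example-key"
SHARED_SECRET = "example-secret"
LAUNCH_URL = "https://tool.example.com/lti/launch"  # hypothetical provider endpoint

# A minimal set of LTI 1.1 basic launch parameters.
params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "course-101-assessment-1",
    "user_id": "participant-42",
    "roles": "Learner",
}

# Sign the form body with OAuth 1.0a (HMAC-SHA1). The provider recomputes the
# signature with the same shared secret to verify that the launch really came
# from a trusted consumer.
client = Client(CONSUMER_KEY, client_secret=SHARED_SECRET,
                signature_type=SIGNATURE_TYPE_BODY)
uri, headers, body = client.sign(
    LAUNCH_URL,
    http_method="POST",
    body=urlencode(params),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)

# 'body' now contains the original parameters plus the oauth_* fields and is
# ready to be POSTed to the launch URL (typically via an auto-submitting form).
print(body)
```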

After many years of being ‘on the shelf’, there is renewed interest in the QTI specification in general. QTI has been incorporated into the Accessible Portable Item Protocol (APIP) specification, which has been used by content publishers involved in the recent US Race to the Top Assessment Program. What does the future of QTI look like? It is hard to tell at this early stage, but the buzzword in Michigan was definitely EPUB3.

Celebrating 25 years of change — from DOS to SaaS

Posted by John Kleeman

Considering all the security, availability and flexibility we can achieve today with cloud-based assessment management systems, it’s remarkable to look back at the many changes and milestones we’ve seen over the past 25 years.

I wrote the first version of Question Mark for DOS in 1987-88. When I launched the company in London in August 1988, 25 years ago, I wanted to bring the benefits of computerized assessment to the world, but it was hard to foresee the dramatic technological changes that would transform our industry and make online assessment as widespread as it is today.

Coinciding with the rise of the PC, Question Mark for DOS empowered trainers and teachers to create, deliver and report on computerized assessments without having to rely on IT specialists.

Things have been changing quickly ever since. The early 1990s brought the move from DOS (functional but boring) to Windows (visual and graphical). This was radical at the time. To quote our marketing for Question Mark Designer for Windows, launched in 1993:

“Using Question Mark Designer, you can create tests using the full graphical power of Windows. You can use fonts of any size and type, and you can include graphics up to 256 colours. One of the most exciting features is a new question type, called the “Hot spot” question. This lets the student answer by “pointing” at a place on the screen.”

The switch to a visual user interface was huge, but the biggest paradigm shift of all was the move to delivering assessments over the Internet.

Pre-Internet, communicating results from assessments at a distance meant sending floppy disks by post. The World Wide Web made it possible to put an assessment on a web server, have participants answer it online and get instantly viewable results. This changed the world of online assessments forever.

Questionmark Technical Director Paul Roberts, who still plays an important role in Questionmark product development, wrote the code for the world’s first-ever Internet assessment product, “QM Web”, in 1995. We followed up QM Web with the first version of Questionmark Perception, our database-driven assessment system, in 1998.

Eric Shepherd founded the U.S. division of Questionmark in the 1990s and in 2000 became CEO of what is now a global company. He is the heart and soul of Questionmark, an inspiring chief executive who has turned Questionmark from a small company into an industry leader.

One key paradigm shift in the 2000s was the desire to use surveys, quizzes, tests and exams in more than one department — across the entire enterprise. To make this practical, we began building scalability, reliability, translatability, accessibility, maintainability and controllability into our technologies. These attributes, along with multiple delivery options and in-depth reporting tools, are key reasons people use Questionmark today.

Cutting the ribbon at the Questionmark European Data Center

Opening our European Data Centre last year marked a major expansion of our cloud-based Questionmark OnDemand service

In recent years, we’ve seen another dramatic change – towards software-as-a-service applications in the “cloud”.  Just as Question Mark for DOS 25 years ago empowered ordinary users to create assessments without needing much IT support, so Questionmark OnDemand today allows easy creation, delivery and reporting on assessments without in-house servers.

So what’s in store for the future? Technology is making rapid advances in responsive design, security, “big data”, mobile devices and more. Questionmark continues to invest around 25% of revenues in product development. The huge demand for online assessments is making this our busiest time ever, and we expect continued, rapid improvement.

I’d like to thank our customers, suppliers, partners, users and employees – whose support, collaboration and enthusiasm have been critical to Questionmark’s growth during our first 25 years. I look forward to continuing the journey and am eager to work with all of you to shape what happens next!
