How to create reliable tests using JTA

Posted by Jim Farrell

The gold standard of testing is to have valid test results. You must always ask yourself: Does this test really test what it is supposed to test? Will the topics covered tell me whether the participant has the knowledge or skills to perform the tasks required for the job? The only way to be 100 percent sure is to truly know what the tasks are, how important they are, and how often they are performed, so that you are asking relevant questions. All of this information is covered in a Job Task Analysis (JTA). (A JTA question type is available in Questionmark Live.)

A JTA is an exercise that helps you define the tasks a person in a particular position needs to perform or supervise and then measure the:

1. difficulty of the task

2. importance of the task

3. frequency of the task

Together, these dimensions are often called the DIF. There may be other dimensions you want to measure, but the DIF alone can help you build a competency model for the job. A competency model is a visual representation of the skills and knowledge a person needs to be highly successful. It is created by interviewing subject matter experts (SMEs) who define the DIF for each task. This sounds like a piece of cake, right? Well, it can be, but many people disregard creating a JTA because of the time and expense: the thought of going out to interview SMEs and then correlating a ton of data sounds daunting. That is where Questionmark can help.
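To make the DIF concrete, here is a minimal sketch of how SME ratings might be rolled up into one score per task. The 1-to-5 scales, the TaskRating shape and the equal weighting are illustrative assumptions on my part, not Questionmark's scoring model.

```typescript
// Minimal sketch: roll SME ratings up into one DIF score per task.
// Scales and weighting are illustrative, not Questionmark's model.
interface TaskRating {
  task: string;
  difficulty: number; // 1 (easy) .. 5 (hard)
  importance: number; // 1 (trivial) .. 5 (critical)
  frequency: number;  // 1 (rarely) .. 5 (daily)
}

function difScores(ratings: TaskRating[]): Map<string, number> {
  // Group the ratings by task.
  const byTask = new Map<string, TaskRating[]>();
  for (const r of ratings) {
    byTask.set(r.task, [...(byTask.get(r.task) ?? []), r]);
  }
  // Average each dimension across SMEs, then combine with equal weights.
  const scores = new Map<string, number>();
  for (const [task, rs] of byTask) {
    const avg = (f: (r: TaskRating) => number) =>
      rs.reduce((sum, r) => sum + f(r), 0) / rs.length;
    scores.set(
      task,
      (avg(r => r.difficulty) + avg(r => r.importance) + avg(r => r.frequency)) / 3
    );
  }
  return scores;
}
```

A real model might weight importance more heavily than frequency; the point is simply that each task ends up with a comparable number you can rank by.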

With our JTA question type, you can create a list of tasks and the dimensions on which to measure them. You can then send the survey out to all of your SMEs and use the job task analysis reports to vet the results and create your competency model. Now that makes it a piece of cake!

Let’s take a quick look at the process a little more closely. In authoring, you can define your tasks and dimensions by entering them directly or importing them from an outside source.


Once you add your question to a survey, you can deliver it to your SMEs.

The final step of the process is running reports broken down by different demographic properties. This will give you the opportunity to sit down and analyze your results, vet them with your SMEs, and develop your competency model.

Let’s get to why we are here: designing a test that will yield valid, meaningful results. Now that you know what needs to be tested, you can create a test blueprint or specification. This documentation will drive your item development process and ensure you have the right questions, because each item can be mapped back to a task in your competency model.
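As a rough illustration of how those scores can drive a blueprint, the sketch below (continuing the hypothetical difScores example above) allocates items to tasks in proportion to their DIF score. Proportional allocation is one simple convention, not the only way to build a specification.

```typescript
// Minimal sketch: allocate items to tasks in proportion to DIF score.
// Rounding means the counts may not sum exactly to totalItems; a real
// blueprint would reconcile the remainder by hand.
function blueprint(scores: Map<string, number>, totalItems: number): Map<string, number> {
  const totalWeight = [...scores.values()].reduce((a, b) => a + b, 0);
  const counts = new Map<string, number>();
  for (const [task, score] of scores) {
    counts.set(task, Math.round((score / totalWeight) * totalItems));
  }
  return counts;
}

// e.g. blueprint(difScores(ratings), 60) -> item counts for a 60-item exam
```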

Questionmark Live: Watching the numbers grow

Posted by Jim Farrell

I hope everyone is having a wonderful holiday season.

Typically I come to you telling you about amazing new features, but this time I want to talk about the number of items created in Questionmark Live over the past year.

Let me set the stage. We started the year with approximately 30,000 questions being created in Questionmark Live each month.
As part of the team that gave birth to our newest authoring tool, I was over the moon about this strong start. This proved to me that the features we were releasing were helping people build up their item banks. I was excited enough as it was, but then came September, October and November.

September saw our most impressive increase in usage. More than 97,000 questions were created, almost 100,000 questions in just one month.

October followed with more than 74,000 questions, including more than 20,000 questions on October 16th alone. This is the number our development teams were most excited about: 20,000 questions in a single day! And the system ran flawlessly, speaking to the scalability of the software and the OnDemand platform.

November proved that the usage was legitimate. More than 121,000 questions were created that month.

It’s clear now that Questionmark Live is the preferred authoring tool among Questionmark users — with ease of use and scalability as its foundation.

You didn’t think I would end without talking about something new, did you?

I can’t resist telling you that you no longer need to approve authors to give them access to Questionmark Live. Anyone with a valid Questionmark Communities account can gain access. We hope this makes it easier for you to crowd-source content within your organization and write good questions to solve real business problems.

***

The Questionmark 2014 Users Conference will include bring-your-own-laptop sessions on Creating Items and Topics as well as Collaborative Assessment Authoring in Questionmark Live. The early-bird registration discount of $200 is available through tomorrow, December 12th, so sign up now!

EATP Keynotes: insights on instructional technologies, learning and assessment

Posted by Jim Farrell

The best part of being a product manager is visiting customers. It is inspiring to see your product solving real-world business problems and contributing to the success of many organizations.

I recently got the opportunity to visit some of our customers while attending the European Association of Test Publishers (E-ATP) conference. I have attended this conference for many years in the US (ATP), but this was the first time attending in Europe. Both conferences bring together thought leaders and display real-world examples of how assessment programs benefit organizations — from formative assessment and quizzing to life-and-limb certification and compliance testing.

The highlights of the conference for me this year were the two keynotes, as I felt they were perfect bookends to the conference program (which included many presentations by Questionmark customers and team members).

The first was by Steve Wheeler (@timbuckteeth), an Associate Professor of Learning Technologies at Plymouth University in South West England. He painted a picture of where we are today with the use of instructional technologies. I have always said it is not the technology but the teaching methods that need to change to improve test scores: the threat of tests does not improve learning; good pedagogy improves learning. Steve compared teaching today to sitting on an airplane: everyone sitting in rows, facing forward, waiting for something to happen. He promoted the use of ipsative assessment (which our chairman recently wrote about) and trans-literacy, which is showing knowledge across many different types of media. The theme that carried through his keynote was feedback: it is vital to learning but often not included in assessment.

The closing keynote was much more applied and less blue-sky. The session leaders were Sue Stanhope and David Rippon from Sunderland City Council in the UK. They told the story of a regional economy going through dramatic change, with job losses throughout the region. By using assessments and job matching, they were able to match people with their strengths and put them into jobs that inspired them. The message is clear: assessment is not just for seeing what you know; it can be used to guide learning and careers, too. The success stories left everyone excited to take what they learned out into the world of learning, achievement, competency and performance.

Conferences really are a great place to share ideas, knowledge and innovation. I look forward to meeting with Questionmark customers either at the European Questionmark Users Conference in Barcelona, Spain, November 10-12, or at the Questionmark 2014 Users Conference in San Antonio, Texas, March 4-7.

mLearning’s about content – not devices, platforms or tools

Posted by Jim Farrell

mLearnCon is a relatively new conference (about 3 years old) produced by the eLearning Guild. I attended it recently in San Jose, California, and had a great time there. If you are wondering if this conference would benefit your organization, consider these questions from the Guild’s website:

  1. Is mLearning right for our organization?
  2. What tools are best for developing engaging learning?
  3. Should we build a platform-specific application or a mobile-optimized website?

While the importance of these questions is case-by-case specific, I’ll share my thoughts about each of them in a moment. The one thing I took away from this conference is that the most important thing is not the device, platform or development tool; it is the content. I think our friend Jason Haag (@mobilejson) from the ADL said it best: “We need to start thinking about being ‘learning designers’ and not just instructional designers, because we now have an opportunity to design for more than just formal courses in the cognitive domain.” So yes, indeed, it’s all about content!

Here are my own responses to the Guild’s questions:

1. Is mLearning right for your organization?

I think there are two key questions behind this one: Are people in your organization using mobile to access internal content? Are people in your organization taking assessments or filling out surveys on mobile devices? (If you don’t know, adding Google Analytics can help you figure this out quickly.) Given that mobile devices are everywhere these days, your answer to both of my questions is probably yes. But you need to understand WHY people are accessing your content via mobile, then develop your content in digestible chunks or in ways that solve business problems. Learning via mobile is most often pull learning, done at a moment of need to support performance. So, yes: it is the content that matters.
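If you have no analytics in place at all, even a rough pass over your web server’s access logs can give you a first estimate. In the sketch below, the log path and the user-agent keywords are assumptions for illustration; a proper analytics tool will give you far more reliable numbers.

```typescript
// Rough sketch (Node.js): estimate the share of mobile traffic by scanning
// an access log for common mobile user-agent strings. The path and the
// keyword list are illustrative assumptions.
import { readFileSync } from "fs";

const MOBILE_UA = /Android|iPhone|iPad|Mobile|BlackBerry/i;

const lines = readFileSync("/var/log/nginx/access.log", "utf8")
  .split("\n")
  .filter(l => l.trim().length > 0);
const mobile = lines.filter(l => MOBILE_UA.test(l)).length;
console.log(`Mobile requests: ${mobile}/${lines.length}`);
```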

2. What tools are best for developing engaging learning?

I love the word “engaging” in this question, but I think it is the wrong word. I would replace it with efficient, productive or correct. As a manager, I am not worried about being engaging; I am worried that people do their jobs correctly and efficiently. There are lots and lots of rapid development tools out there, and mLearnCon was rife with them. There is no be-all and end-all tool for developing learning. Never was, never will be. You have to find the tools that best fit each situation and medium you are producing for. Ask the question, “How will people be consuming my content?” Then pick your tool accordingly.

3. Should we build a platform-specific application or a mobile-optimized website?

It seems to me this question has already been answered. Responsive design gives everyone who visits your content a view appropriate to their device. Apps are not the answer: are you really ready to support the entire, growing list of mobile operating systems?

Jason Haag shared a terrific quote during a presentation he gave at the conference: “Not every mobile device will have your app on it, but every mobile device will have a browser.” Although HTML5 will not be an official standard until the end of 2014, there are techniques available today to detect a browser’s features and display the best possible presentation to the user. At Questionmark we like to say: author once, schedule once, and deliver to any device. That is possible because we use responsive design to give participants the best experience we can.
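For the curious, the detection technique usually amounts to testing for a capability in the browser before relying on it, then falling back gracefully. Here is a minimal client-side sketch of that idea; it is the general pattern popularized by libraries like Modernizr, not Questionmark’s actual implementation.

```typescript
// Minimal sketch of HTML5 feature detection in the browser: probe for a
// capability, then choose the richest presentation the device supports.
function supportsCanvas(): boolean {
  const el = document.createElement("canvas");
  return typeof el.getContext === "function" && el.getContext("2d") !== null;
}

function supportsTouch(): boolean {
  return "ontouchstart" in window;
}

if (supportsCanvas() && supportsTouch()) {
  // render the interactive version of a question (e.g. drag and drop)
} else {
  // fall back to a simpler, universally supported presentation
}
```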

The great thing about conferences is meeting people you admire or follow virtually via Twitter. This conference was no different: I got the chance to meet the famous Sarah Gilbert (@melsgilbert). Read her cover article this month in Training and Development magazine if you are a beginner in the mobile learning world.

Delivering assessments in multiple languages: What are your options?

Posted by Jim Farrell

Test publishers, businesses and other organizations that operate internationally or have multilingual audiences need to provide a consistent experience for all participants, regardless of what language they speak.

I’d like to explain how we at Questionmark can help you achieve this and the options we offer to suit differing needs – whether subject matter experts are generating content in many different languages or you need to have existing content translated.

The interface of Questionmark Live, our browser-based authoring tool, has been translated into more than 20 languages, so SMEs from all over the world can easily create content in their preferred language.

But what about localizing content you already have, and keeping track of questions in multiple languages?

Our Translation Management System provides translation interfaces, project management and workflows that make it easier for you to manage and deliver localized content to participants all over the world. You can author and translate once, schedule once, deliver your assessment in many different languages and present all the results together, in a single data set. This is very useful when doing item analysis. For example, if some students take a test in English and others in Spanish, you can analyze each question across both groups rather than as two different versions of the question.
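Because both language groups land in one data set, an item statistic can be computed over the pooled responses in a single pass. Here is a minimal sketch, with an assumed response shape, of the classic difficulty statistic (the p-value, i.e. the proportion of correct answers):

```typescript
// Minimal sketch: item difficulty (p-value) over pooled results.
// The Response shape is an illustrative assumption; the language field
// is deliberately ignored by the statistic.
interface Response {
  questionId: string;
  language: string; // "en", "es", ...
  correct: boolean;
}

function pValue(responses: Response[], questionId: string): number {
  const rs = responses.filter(r => r.questionId === questionId);
  if (rs.length === 0) return NaN; // no data for this question
  return rs.filter(r => r.correct).length / rs.length;
}
```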

There are two different ways to translate:

The first is to use a translation/localization company. This is particularly useful when you have a large amount of translation to do or are working in many different languages. You can export the content as an XLIFF file and send it to your translation company for processing. This is a logical choice for organizations that are delivering dozens of multilingual exams and have hundreds of items in play. Once your files come back in your chosen languages, you can import the translated assessments back into the Translation Management System.
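If you have never seen one, an XLIFF file is just structured XML pairing each source string with a target translation, which is why it round-trips cleanly through translation vendors. Here is a minimal sketch; the ids and text are made up, and DOMParser assumes a browser context.

```typescript
// Minimal sketch: what an XLIFF 1.2 payload looks like, and how the
// source/target pairs can be read back out of it.
const xliff = `<?xml version="1.0"?>
<xliff version="1.2" xmlns="urn:oasis:names:tc:xliff:document:1.2">
  <file source-language="en" target-language="es" datatype="plaintext" original="exam.xml">
    <body>
      <trans-unit id="q1-stem">
        <source>What is the capital of Spain?</source>
        <target>¿Cuál es la capital de España?</target>
      </trans-unit>
    </body>
  </file>
</xliff>`;

const doc = new DOMParser().parseFromString(xliff, "application/xml");
for (const unit of Array.from(doc.getElementsByTagName("trans-unit"))) {
  const source = unit.getElementsByTagName("source")[0]?.textContent;
  const target = unit.getElementsByTagName("target")[0]?.textContent;
  console.log(unit.getAttribute("id"), source, "->", target);
}
```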

For smaller projects, you might rather have translators use the Translation Management System directly. It displays the base language of the question, options and feedback and provides an area for the translator to enter each translation. This is very efficient not only for translating content but also for updating and localizing existing translations.

Once you have multilingual content, how do you deliver it?

Here, again, you have options. Let’s start with the participant interface: you can present Questionmark assessments in 30 different languages, including those that read right-to-left, such as Arabic and Hebrew. You might be saying, “Wow, 30 languages!” But you also might be saying, “Well, mine is not in the list.” If you are in that group, you can provide your own translations.

When deciding how to present your content, you can allow participants to select the language they use, or you can force the language selection in the link to the survey, quiz, test or exam.


Assessments can be scheduled for delivery in a specific language, or administrators may allow participants to select which language they prefer to take the assessment in.

Providing different ways to manage multilingual assessments reflects our commitment to helping customers who need to reach participants in many different places, from different cultures.

 

Try out our multi-lingual assessment:

New Questionmark OnDemand release enhances analytics and mobile delivery

Posted by Jim Farrell

Questionmark has just released a major upgrade of our OnDemand platform, so I want to highlight some of the great new features and functionality now available to our customers.

Let’s start with my favorite. Questionmark has released a new OData API, which allows Questionmark customers to access data in their results warehouse database and create reports using third-party tools like PowerPivot for Excel and Tableau. Through a client, a user makes a request to the data service, and the data service processes that request and returns an appropriate response.

You can use just about any client to access the OData API as long as it can make HTTP requests and parse XML responses. Wow, that’s technical! But the power of the new OData API is that it liberates your data from the results warehouse and lets you build custom reports, create dashboards, or feed results data into other business intelligence tools.
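To make that less abstract, here is a minimal sketch of such a client: fetch an OData feed over HTTP and pull the entries out of the Atom XML response. The endpoint URL and entity set name are hypothetical placeholders (check the Questionmark documentation for the real ones), and I am assuming basic authentication and a browser context for fetch, btoa and DOMParser.

```typescript
// Minimal sketch of an OData consumer: HTTP GET, then parse the Atom feed.
// The URL below is a made-up placeholder, not a real Questionmark endpoint.
const ATOM = "http://www.w3.org/2005/Atom";
const endpoint = "https://example.questionmark.com/odata/AssessmentResults";

async function listEntries(): Promise<void> {
  const res = await fetch(endpoint, {
    headers: { Authorization: "Basic " + btoa("user:password") }, // auth scheme assumed
  });
  const doc = new DOMParser().parseFromString(await res.text(), "application/xml");
  for (const entry of Array.from(doc.getElementsByTagNameNS(ATOM, "entry"))) {
    console.log(entry.getElementsByTagNameNS(ATOM, "title")[0]?.textContent);
  }
}

listEntries();
```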


The OData API is not the only update we have made to Analytics. The new Assessment Content Report allows users to review participant comments for an entire assessment, a topic, or a specific question. Enhancements to the Item Analysis report include the ability to ignore question and assessment revisions, and the report now also supports our dichotomously-scored Multiple Response, Matching, and Ranking question types.

Another improvement I want to highlight is the way Questionmark now works with mobile assessments. An updated template design for assessments when taken from a mobile device embraces responsive design, enhancing our ability to author once and deploy anywhere. The new mobile offering supports Drag and Drop and Hotspot question types — and Flash questions can now run on all Flash-enabled mobile devices.

Click here for more details about this new release of Questionmark OnDemand.