IMS specifications and standards update

Posted by Steve Lay

Last week I attended the IMS Global Learning Consortium’s quarterly meeting in Nottingham, UK. The meeting was co-located with JISC-CETIS: JISC is a collaboration of UK academic institutions, and CETIS is its “Centre for Educational Technology Interoperability Standards”. Essentially, CETIS helps promote the development and adoption of technical standards within the JISC community, while also playing a key role in advising JISC’s e-Learning programme.

The message from IMS was very clear. Three key standards provide a framework that covers the main interoperability requirements of education: Learning Tools Interoperability (LTI), Learning Information Services (LIS) and Common Cartridge. The first two are of particular interest.

LTI

I don’t think IMS has ever seen one of its specifications developed and adopted as rapidly as LTI. LTI allows a Learning Management System (LMS) or portal to be used to launch a wide range of activities hosted externally. In the past, content was either pre-loaded onto the LMS itself or hosted on an associated content server. LTI is a simple mechanism that opens up the LMS to content hosted anywhere on the web, using a lightweight extension of HTTP, the protocol used to access web pages.

When you click on a link to a website your browser navigates you there, but the website knows little or nothing about where you came from. When you click on an LTI link your browser does the same thing, but several important pieces of information are securely passed to the new website: your user identity, the context of the link you clicked (such as the course, or even the specific course page, from which you came) and your role within that context (such as instructor or student). This enables the remote content to behave as if it were a seamless part of the user’s learning experience. In fact, it is as if the content had been packaged up and hosted on the LMS itself.
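To make the launch mechanics more concrete, here is a minimal sketch (not Questionmark’s or any particular LMS’s implementation) of how an LMS might assemble and sign a basic LTI 1.x launch request. The parameter names follow the LTI specification; the tool URL, consumer key and shared secret are placeholder assumptions, and the OAuth 1.0a signing is delegated to the third-party oauthlib package.

```python
# A minimal sketch of a basic LTI 1.x launch request, as an LMS might send it.
# The tool URL, consumer key and shared secret below are placeholder values.
from urllib.parse import urlencode

import requests                                            # pip install requests
from oauthlib.oauth1 import Client, SIGNATURE_TYPE_BODY    # pip install oauthlib

LAUNCH_URL = "https://tool.example.com/lti/launch"   # external tool (assumed URL)
CONSUMER_KEY = "my-lms-key"                          # agreed with the tool provider
SHARED_SECRET = "my-shared-secret"

# The launch parameters carry the three things described above:
# who the user is, the context the link lives in, and the user's role there.
params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "page-42",          # the specific link that was clicked
    "user_id": "student-123",               # user identity
    "context_id": "course-bio-101",         # course (context) of the link
    "context_title": "Biology 101",
    "roles": "Learner",                     # e.g. Learner or Instructor
}

# The LMS signs the form parameters with OAuth 1.0a so the tool can verify
# that the launch came from a trusted consumer and was not tampered with.
client = Client(CONSUMER_KEY, client_secret=SHARED_SECRET,
                signature_type=SIGNATURE_TYPE_BODY)
uri, headers, body = client.sign(
    LAUNCH_URL,
    http_method="POST",
    body=urlencode(params),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)

# In a real LMS this POST is usually auto-submitted by the browser via a form;
# posting it directly is enough to illustrate the protocol.
response = requests.post(uri, data=body, headers=headers)
print(response.status_code)
```

On the receiving side, the tool verifies the signature with the same shared secret before trusting the user identity, context and role it has been handed.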

LTI promises to open up the definition of ‘content’ to include a wider range of activities and tools, including assessments!

LIS

The related LIS specification enables the exchange of information about people involved in the learning experience. A tool launched from an LTI-enabled link can use LIS to find out more information about the user, or perhaps adjust the user’s learning record with updated test scores. If LTI is used to initiate the link between two systems it is LIS that is used to sustain it.

* * *

In addition to the presentations from IMS, the conference also contained some interesting sessions from the JISC-CETIS community. This community has been very active in the development of Question and Test Interoperability (QTI). It appears that real progress is now being made with the demonstrators required by IMS before the latest draft can be promoted to a final specification. Readers of this blog may feel that we’ve been here before, but there is reason to believe that the specification’s time has finally come. QTI forms an important foundation for the Accessible Portable Item Protocol (APIP) – a US-led accessibility initiative.

Which to use? Matching versus pull-down list questions

Posted By Doug Peterson

Matching questions and pull-down list questions look, well, identical, as you can see from these two screen captures:


Matching Question

Pull-down Question

So what are the differences?

When should you use a matching question, and when should you use a pull-down list question?

The answer lies in the behavior differences between the question types.

The matching question type allows an option (the values in the list) to be assigned to one – and only one – choice. I can select “Crackers” as the match for “Cheese”, but if I then try to also select “Crackers” as the match for “Peanut butter”, I will get an error message stating that “Crackers” has already been selected.

If you choose to score per match, this has the advantage of preventing the participant from using the same option for all the choices, which would guarantee they would receive at least some points even if they didn’t know a single match. At the same time, if you choose all-or-nothing scoring, this has the disadvantage of allowing a participant who doesn’t know every match to still get the question correct by process of elimination.

On the other hand, the pull-down list question type allows the same option to be assigned to multiple choices. This does away with the “process of elimination” problem, and also allows two choices to have the same answer. For example, if I wanted “Peanut butter” and “Cheese” to both match with “Crackers”, I could do that with a pull-down list question, but not with a matching question.
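To illustrate the behavioural difference, here is a rough Python sketch (purely illustrative, not Questionmark’s internal logic) of the two response rules and of per-match versus all-or-nothing scoring; the choices, options and answer key are made up for the example.

```python
# Purely illustrative - not Questionmark's internal logic.
# A response maps each choice to the option the participant picked for it.
answer_key = {"Cheese": "Crackers", "Peanut butter": "Bread", "Jam": "Toast"}

def validate(response: dict, allow_reuse: bool) -> None:
    """Matching questions (allow_reuse=False) reject an option used twice;
    pull-down list questions (allow_reuse=True) accept it."""
    seen = set()
    for option in response.values():
        if not allow_reuse and option in seen:
            raise ValueError(f'"{option}" has already been selected.')
        seen.add(option)

def score_per_match(response: dict) -> int:
    """One point for every correctly matched choice."""
    return sum(1 for choice, option in response.items()
               if answer_key.get(choice) == option)

def score_all_or_nothing(response: dict) -> int:
    """Full credit only when every choice is matched correctly."""
    return 1 if score_per_match(response) == len(answer_key) else 0

# Matching question: reusing "Crackers" raises an error.
try:
    validate({"Cheese": "Crackers", "Peanut butter": "Crackers"},
             allow_reuse=False)
except ValueError as err:
    print(err)                        # "Crackers" has already been selected.

# Pull-down list question: the same option can be reused across choices.
reused = {"Cheese": "Crackers", "Peanut butter": "Crackers", "Jam": "Toast"}
validate(reused, allow_reuse=True)    # no error
print(score_per_match(reused))        # 2 - two of the three matches are correct
print(score_all_or_nothing(reused))   # 0 - not every match is correct
```

The shape of the response is the same for both question types; only the reuse rule and the scoring outcome differ.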

Another difference is how the Authoring Manager question wizard behaves for each question type. The matching question wizard prompts you for the choice and its matching option.

This results in the question having the same number of options as choices (remember the “process of elimination” problem?). However, the pull-down list wizard allows you to specify as many choices as you want and as many options as you want, and then define the correct option for each choice.

This takes care of the “process of elimination” problem since you can specify more options than choices.

Another difference in the wizards is that the matching question wizard gives you the option of scoring per match or using all-or-nothing scoring, whereas the pull-down list wizard only allows for scoring per match.

But remember – you always have the question editor! If you want more options than choices in a matching question, start the question editor, edit each choice, and add the extra options to each choice’s list of options. If you want your pull-down list question to use all-or-nothing scoring, you can edit the outcomes in the question editor to make that happen. And if you’re really adventurous, you can use the editor to create different option lists for each choice in either question type!

Click here to learn more about these and other question types.

Conference close-up: Assessment as an integral part of instructional design

Jane Bozarth

Posted by Joan Phaup

With the Questionmark Users Conference now less than a month away, it’s a good time to check out the conference agenda and — if you haven’t already done so — to sign up for three great days of learning and networking in New Orleans March 20 – 23.

Two high points on the program will be presentations by Dr. Jane Bozarth:

  • a keynote address on the importance of starting with good objectives and clear outcomes for assessments and using them strategically to support organizational goals
  • a breakout session called Instructional Design for the Real World — about tools and tricks that support rapid instructional design, help with needs analysis and make for effective communication with subject matter experts, managers and others

As a training practitioner for more than 20 years, and as Elearning Coordinator for the North Carolina Office of State Personnel, Jane will bring a lot of firsthand experience to these presentations. During a conversation I had with her shortly after she agreed to present at the conference, Jane pointed out some common pitfalls that she will discuss during her keynote to help listeners address the right things at the right time for the right outcome:

  • getting so caught up in writing objectives and developing instruction that they lose sight of the desired end result
  • measuring the wrong things or things that have insignificant impact
  • paying too little attention to formative assessment
  • waiting until after a product is designed to go back and write the assessment for it, instead of addressing assessment first

You can listen to the podcast of our conversation right here or read the transcript.

How can you assess the effectiveness of informal learning?

Posted by John Kleeman

Lots of people ask me how you can use assessments to measure the effectiveness of informal learning.  If people are learning at different times, in different ways and without structure, how do you know it’s happening? And how can you justify investment in social and informal learning initiatives?

The 70+20+10 model of learning is increasingly well understood: we learn roughly 70% on the job, 20% from others and 10% from formal study. But as people invest in informal learning initiatives, a key question arises. How do you measure the impact? Are people learning? And more importantly, are they performing better?

Did they like it? Did they learn it? Are they doing it?

In a presentation at the Learning Technologies conference in London in January, I suggested there are three areas in which to use assessments:

Did they like it?

You can use surveys to evaluate attitudes and reactions – either to specific initiatives or to the whole 70+20+10 initiative. Measuring reaction does not prove impact, but yields useful data. For example, surveys yielding consistently negative results could indicate initiatives are missing the mark.

You could also look at the Success Case Method, which lets you home in on individual examples of success to get early evidence of a learning programme’s impact. See here and here for my earlier blog posts on how to do this.

Of course, if you are using Questionmark technology, you can deliver such surveys embedded in blogs, wikis or other informal learning tools and also on mobile devices.

Did they learn it?

There is strong evidence for the use of formative quizzes to help direct learning, strengthen memory and engage learners. You can easily embed quizzes inside informal learning, e.g. side by side with videos or within blogs, wikis and SharePoint, to track use and understanding of content.

With informal learning, you also have the option of encouraging user-generated quizzes. These allow the author to structure, improve and explain his or her own knowledge, while also engaging and helping the learner.

You can also use more formal quizzes and tests to measure knowledge and skills. And you can compare someone’s skills before and after learning, compare to a benchmark or compare against others.
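As a simple illustration of that kind of comparison, here is a short sketch with made-up percentage scores and an arbitrary benchmark; it is only meant to show the before-and-after arithmetic, not any Questionmark reporting feature.

```python
# Made-up percentage scores, purely for illustration.
pre_scores  = {"alice": 55, "bob": 70, "carol": 40}
post_scores = {"alice": 80, "bob": 75, "carol": 65}
BENCHMARK = 70   # arbitrary pass mark used as the benchmark

for person, before in pre_scores.items():
    after = post_scores[person]
    gain = after - before
    status = "meets" if after >= BENCHMARK else "is below"
    print(f"{person}: +{gain} points; {status} the benchmark of {BENCHMARK}%")
```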

Are they doing it?

Of course, in 70+20+10, people are learning in multiple places, at different times and in different ways. So measuring informal learning can be more difficult than measuring formal, planned learning.

But if you can measure a performance improvement, that is more directly useful than simply measuring learning. A great way of measuring performance is with observational assessments. This is described well in Jim Farrell’s recent post Observational assessments: measuring performance in a 70+20+10 world.

To see the Learning Technologies presentation on SlideShare, click here. For more information on Questionmark technologies that can help you assess informal learning, see www.questionmark.com.

Valuable tips on assessment translation, localization and adaptation

Sue Orchard

Posted by Julie Delazyn
“Unprecedented interconnection.” Those are the words that Sue Orchard of Comms Multilingual, a professional translation services firm, uses to describe a world of increasing global alliances and supply chains, in which assessments such as tests, exams and certifications are administered around the world.

Last week Sue presented an excellent Questionmark web seminar about assessment translation, localization and adaptation (TLA), during which she explained the importance of carefully planning and preparing for TLA projects. She cautioned her audience about some of the pitfalls of translation – for instance, the fact that a short sentence in one language can be a very long one in another – and shared some best practice tips, too.

She also pointed out the need to consider cultural differences as well as differences in language, and she lightened up the proceedings with some amusing examples of translations gone awry.

We’ve put slides from this presentation, Assessment Translation, Localization and Adaptation: Expanding the Reach of your Testing Program, on our SlideShare page and embedded them below, and you will find a brief Q&A interview with Sue here.

Conference Close-up: Alignment, Impact & Measurement with the A-model

Posted by Joan Phaup

Key themes of the Questionmark Users Conference March 20 – 23 include the growing importance of informal and social learning — as reflected by the 70+20+10 model — and the role of assessment in performance improvement and talent management. It’s clear that new strategies for assessment and evaluation are needed within today’s complex workplaces.

Dr. Bruce C. Aaron

We’re delighted that measurement and evaluation specialist Dr. Bruce C. Aaron will be joining us at the conference to talk about the A-model framework he has developed for aligning assessment and evaluation with organizational goals, objectives and human performance issues.

A conversation Bruce and I had about the A-model explores the changes that have taken place in recent years and today’s strong focus on performance improvement.

“We don’t speak so much about training or even training and development anymore,” Bruce explained. “We speak a lot more about performance improvement, or human performance, or learning and performance in the workplace. And those sorts of changes have had a great impact in how we do our business, how we design our solutions and how we go about assessing and evaluating them…We’re talking about formal learning, informal learning, social learning, classroom, blended delivery, everything from online learning to how people collect information from their networks and the knowledge management functions that we’re putting in place.”

In a complex world that requires complex performance solutions, Bruce observed that “the thing that doesn’t change is our focus on outcomes.”

The A-model evolved out of the need to stay focused on goals and to logically organize the components of learning, evaluation and performance improvement. It’s a framework or map for holding the many elements of human performance in place — right from the original business problem or issue up through program design and evaluation.

You can learn more about this from Bruce’s white paper, Alignment, Impact and Measurement with the A-model, from this recording of our conversation — and, of course, by attending the Users Conference! Register soon!
