Auto-sensing and auto-sizing wizardry at work

Posted by Noel Thethy

The best way to describe the wizardry at work in Questionmark is to see it happen. Click here and watch what happens to the assessment screen as you resize it in your browser window. Did you miss it? Try it again.

What you would have seen (if you are using a supported web browser) is the assessment screen resizing and adjusting to fit the available space. As you shrink your browser window, the buttons get smaller, the text wraps and resizes so it’s still readable, and the date and time disappear when there is no longer enough room for them. If you are not using a supported browser, or you are using a mobile device, you would instead see Questionmark’s ability to auto-detect your browser/device and display a compatible version of the assessment.
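
Under the hood, this is responsive web design at work. Questionmark’s actual templates are more elaborate, but the kind of style rule that makes the date and time disappear on a narrow screen looks something like this (the selector name here is invented for illustration, not taken from Questionmark’s markup):

    /* Hide the date/time display when the window is too narrow for it. */
    @media screen and (max-width: 480px) {
      .datetime-panel { display: none; }
    }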

Auto-sensing and auto-sizing make it possible to reuse the same assessment in a variety of different circumstances without needing to create new templates or assessments for each occasion. Without modifications you can embed assessments in learning materials, display surveys in your portal, and deliver assessments via mobile devices. (Each time we update our software, we test for compatibility with the latest browsers — so that participants can easily view questions on whatever device they are using.)

To find out more about auto-sensing/auto-sizing and the blended delivery options supported by Questionmark, click here.

Are True/False Questions Useless?

Posted By Doug Peterson

As a test designer, I need every question to tell me something about the learner that I didn’t know before the question was answered: Does the learner have the knowledge the question is testing for? Developing questions costs money, and every question takes up some of the learner’s time, so every question needs to be effective.

Is a True/False question effective? Does it tell me whether or not the learner actually learned anything? One would think that if the learner answered correctly, it would mean the learning was successful. The problem is that with only two choices, the learner has a 50% chance of simply guessing the correct answer. So does a correct answer really mean the learner possessed the knowledge, or does it simply mean the learner guessed correctly? You can’t tell, so the True/False question cannot be counted on as an indicator of successful training.
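
To put a number on this, suppose a quiz consists of ten True/False questions with a pass mark of 70% (both figures invented for illustration). A participant who guesses every answer still passes with probability

    \[
    P(\text{pass by guessing}) = \sum_{k=7}^{10} \binom{10}{k} \left(\frac{1}{2}\right)^{10}
                               = \frac{120 + 45 + 10 + 1}{1024} \approx 0.17
    \]

so roughly one guesser in six would pass without knowing anything at all.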

So is that it for our good friend, the True/False question? No more True/False questions on our quizzes, tests and exams? Is the True/False question history?

No, not at all.

While a True/False question may not truly be able to tell you what a learner does know, it is very good at telling you what a learner doesn’t know! When the learner gets a True/False question wrong, you can be sure it is because they don’t possess the desired knowledge.

This means that True/False questions work very well on pre-tests given before the training. They can help identify what the learner doesn’t know, so that the learner knows which topics to focus on during the training.

So don’t give up on the trusty True/False question! Just make sure that you understand what it really shows you about the learner, and that you use it in the right place.

Security in the Final Step of Test and Exam Delivery

Posted by Julie Delazyn

The secure delivery of high-stakes assessments should protect valuable test/exam content and the integrity of results as well as personally identifiable information about participants.

Achieving this requires organizations to thoughtfully determine what delivery methods and security measures to use – by carefully considering questions such as these:

  • How can we provide security measures that make sense for a particular type of assessment?
  • How do the potential outcomes of an assessment affect a participant’s propensity to cheat?
  • What should we look for in terms of network security, application security and data center security?
  • How can we mitigate such threats as impersonation, content theft and cheating?
  • What measures should we take to protect data?
  • What monitoring options should we consider?

Check out this SlideShare presentation as a framework for addressing these questions.

Learning Tools Interoperability fulfills its promise!

Posted by Steve Lay

In previous blog posts I’ve discussed the new specification that the IMS Global Learning Consortium (IMS) has been working on called Learning Tools Interoperability or LTI for short.  See my first post on this subject here and the later update here.

In March, IMS released the final version of the specification.  This clears up any confusion between the earlier variants (Basic and Simple have been used as prefixes in the past) and sets a single standard for embedding tools in learning management systems (LMS) and portals.  The final specification also introduced an important new feature: a method of returning grade information from the tool to the LMS gradebook.
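
The mechanics of the grade return are straightforward: the LMS includes an outcome service URL and a result “sourcedid” in the launch data, and the tool later sends a small OAuth-signed XML message back to that URL. A sketch of such a message, with invented identifier and score values, looks roughly like this:

    <imsx_POXEnvelopeRequest xmlns="http://www.imsglobal.org/services/ltiv1p1/xsd/imsoms_v1p0">
      <imsx_POXHeader>
        <imsx_POXRequestHeaderInfo>
          <imsx_version>V1.0</imsx_version>
          <imsx_messageIdentifier>999999123</imsx_messageIdentifier>
        </imsx_POXRequestHeaderInfo>
      </imsx_POXHeader>
      <imsx_POXBody>
        <replaceResultRequest>
          <resultRecord>
            <sourcedGUID>
              <!-- the sourcedid supplied by the LMS at launch; value invented -->
              <sourcedId>3124567</sourcedId>
            </sourcedGUID>
            <result>
              <resultScore>
                <language>en</language>
                <!-- the grade, as a decimal between 0.0 and 1.0 -->
                <textString>0.83</textString>
              </resultScore>
            </result>
          </resultRecord>
        </replaceResultRequest>
      </imsx_POXBody>
    </imsx_POXEnvelopeRequest>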

At Questionmark we have developed a number of connectors for integrating with popular learning management systems and portals, such as Blackboard and Moodle.  LTI provides us with an opportunity to replace those connectors with a unified approach to integration, so I can’t wait to get started with the new specification.

To this end we are now working on adding LTI support to our own software.  I recently attended an IMS workshop called “Creating Enterprise Aware, Multiplatform Apps with IMS Interoperability”.  At this workshop we heard about the latest developments in both IMS Common Cartridge and IMS LTI.  It was great to meet some of the key people in the community and take a deep dive into some of the technical details involved in implementing the specifications.

So how will Questionmark integrate using LTI?

In LTI terminology there are tool providers and tool consumers.  A tool consumer is typically an LMS or other type of portal that deals with user registration and assignment to courses where learning and assessment activities are aggregated.  A tool provider is a web-based service that provides a specialized experience to the learner such as an assessment.
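
In practice the hand-off between the two is a simple HTML form POST from the consumer to the provider, signed with OAuth so the provider can verify where the launch came from. A minimal sketch, with an invented URL and values (and several required OAuth parameters, such as the timestamp and nonce, omitted for brevity):

    <form action="https://tool.example.com/lti/launch" method="POST">
      <input type="hidden" name="lti_message_type" value="basic-lti-launch-request"/>
      <input type="hidden" name="lti_version" value="LTI-1p0"/>
      <input type="hidden" name="resource_link_id" value="assessment-42"/>
      <input type="hidden" name="user_id" value="0ae836b9"/>
      <input type="hidden" name="roles" value="Learner"/>
      <input type="hidden" name="oauth_consumer_key" value="example-key"/>
      <input type="hidden" name="oauth_signature_method" value="HMAC-SHA1"/>
      <input type="hidden" name="oauth_signature" value="(computed over all parameters)"/>
      <input type="submit" value="Launch assessment"/>
    </form>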

Our first steps with LTI are aimed particularly at users of the Moodle LMS, though anyone with access to a web server running PHP and a suitable database will be able to integrate this way.  We’ve teamed up with an LTI specialist, Dr Stephen Vickers, to create an open source Community Edition connector that makes it easy for Moodle users to talk to Questionmark software using the new LTI protocol.  The project is hosted on the OSCELOT community development system.

This is just the beginning for Questionmark and LTI – so stay tuned for more updates on the role LTI will play in future Questionmark assessment technology solutions!


Ensuring question text is accessible

Posted by Noel Thethy

This post is part of the accessibility series I am running. Here we will look at ensuring text and table elements are accessible.

We have done our best to ensure that Questionmark’s participant interface is readable via screen readers. However, to ensure screen readers work as expected, you need to make sure that:

  • The text you use does not contain any inline styles that may confuse a screen reader.
  • Any tables in your content use captions and header information to ensure the screen reader can distinguish content.

If you have copied and pasted text from another application, particularly Microsoft Word, you may find when looking at the HTML code that the question contains extraneous mark-up.

The text copied includes HTML mark-up tags which override the style determined by the templates and could affect how a screen reader interprets what is on the screen. The HTML used to provide the formatting can be viewed in the HTML tab of the Advanced HTML Editor in Authoring Manager and should be cleaned up as much as possible.
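
For example, a single paragraph pasted from Word might come through looking something like this (an invented illustration; the exact tags vary with the version of Word):

    <p class="MsoNormal" style="margin-bottom:.0001pt;line-height:normal">
      <span style="font-size:10.0pt;font-family:&quot;Arial&quot;,&quot;sans-serif&quot;;
                   mso-fareast-font-family:&quot;Times New Roman&quot;">
        Which of the following statements is true?
      </span>
    </p>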

Alternatively, Questionmark Live automatically removes any style HTML that may be included from applications such as Word or other Internet pages. To find out more about Questionmark Live, please click here.

If you are using tables, we recommend that you build them following the W3C guidelines rather than using the default tables available.
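
Ideally the table would look something like this (the caption, headings and data below are invented for illustration):

    <table>
      <caption>Pass rates by quarter</caption>
      <thead>
        <tr>
          <th scope="col">Quarter</th>
          <th scope="col">Participants</th>
          <th scope="col">Pass rate</th>
        </tr>
      </thead>
      <tfoot>
        <tr>
          <td>Overall</td>
          <td>400</td>
          <td>85%</td>
        </tr>
      </tfoot>
      <tbody>
        <tr>
          <td>Q1</td>
          <td>200</td>
          <td>80%</td>
        </tr>
        <tr>
          <td>Q2</td>
          <td>200</td>
          <td>90%</td>
        </tr>
      </tbody>
    </table>
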
By using the <caption>, <thead> and <tfoot> tags in your table you can clearly identify parts of the table to be read by the screen reader.

For more information see the W3C recommendations for non-visual user agents. These tables can be added by using the Advanced HTML Editor in Authoring Manager.

Creating an Extended Matching Question Type

Extended Matching Questions are similar to multiple choice questions but test knowledge in a far more applied, in-depth way. This question type is now available in Questionmark Live browser-based authoring.

What it does: An Extended Matching question provides an “extended” list of answer options for use in questions relating to at least two related scenarios or vignettes. (The number of answer options depends on the number of realistic options for the test taker.) The same answer choice could be correct for more than one question in the set, and some answer choices may not be the correct answer for any of the questions – so it is difficult to answer this type of question correctly by chance. A well-written lead-in question is so specific that students understand what kind of response is expected without needing to look at the answer options.

Who should use it: It is often used in medical education and other healthcare subject areas to test diagnostic reasoning.

What’s the process for creating it? This diagram shows how to create this question type in Questionmark Live.

How it looks: Here is an example of an Extended Matching Question:
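
A typical item set (invented here for illustration) might look like this:

    Theme: Breathlessness

    Options:
      A. Asthma
      B. Pneumonia
      C. Pulmonary embolism
      D. Heart failure
      E. Anaemia
      F. Chronic obstructive pulmonary disease
      G. Pneumothorax
      H. Anxiety with hyperventilation

    Lead-in: For each patient below, select the most likely diagnosis. Each option may be used once, more than once, or not at all.

    1. A 68-year-old smoker with a long history of productive cough and slowly worsening breathlessness over several years.
    2. A 24-year-old woman with sudden breathlessness and pleuritic chest pain two days after an operation on her leg.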
