Topic Hierarchies in Questionmark Live!

Posted By Doug Peterson

Questionmark Live, Questionmark’s web-based item and assessment authoring tool, includes hierarchical topics for organizing questions.

Hierarchical topics allow you to author your questions in a tree structure as shown in the screen capture, starting with a broad topic at the highest level and narrowing down to a specific piece of knowledge at the lowest level.

In the example shown, the highest level of organization is school curricula. Math is then further broken down into more specific topics such as Precalculus, which is in turn narrowed further into Algebra and Trigonometry.

The Trigonometry topic is divided into even more detailed sub-topics. I can now create questions related to calculating a cosine value in the Cosines topic, while questions relating to Euler’s Formula would be stored in the Eulers Formula topic.
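
The screen capture itself is not reproduced here, but based on the example described above, the hierarchy looks roughly like this (other subjects would sit alongside Math at the same level):

    School curricula
    └── Math
        └── Precalculus
            ├── Algebra
            └── Trigonometry
                ├── Cosines
                └── Eulers Formula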

At this point it is very easy to assemble an assessment for a specific purpose. If I want to give a quiz on cosines, I can pull questions from the Cosines topic. If I’m creating an end-of-course exam, I would pull a few questions from each sub-topic under Trigonometry.

You can share any topic at any level of the hierarchy with other Questionmark Live users so that your whole team can work collaboratively. The hierarchical structure is also preserved as you move data between Questionmark Live and Authoring Manager (Questionmark’s Windows-based authoring tool).

If you’d like to become more familiar with Questionmark Live, check out this webinar on June 27, 2012 – 12:00 PM (EDT).

Ensuring question text is accessible

Posted by Noel Thethy

This post is part of the accessibility series I am running. Here we will look at ensuring text and table elements are accessible.

We have done our best to ensure that Questionmark’s participant interface is readable via screen readers. However, to ensure screen readers work as expected, you need to make sure that:

  • The text you use does not contain any inline styles that may confuse a screen reader.
  • Any tables in your content use captions and header information to ensure the screen reader can distinguish content.

If you have copied and pasted text from another application, particularly Microsoft Word, you may find when looking at the HTML code that the question contains extraneous markup. For example, text pasted from Microsoft Word will appear something like the following in the HTML tab.
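
The original screen captures are not reproduced here, but markup pasted from Word typically looks something like this (an illustrative snippet, not the exact content of the screenshots):

    <p class="MsoNormal" style="margin: 0in 0in 10pt;">
      <span style="font-size: 11.0pt; line-height: 115%; font-family: 'Calibri','sans-serif'; mso-fareast-font-family: Calibri;">
        What is the value of cos(0)?
      </span>
    </p>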


The pasted text includes HTML markup tags that override the style determined by the templates and could affect how a screen reader interprets what is on the screen. The HTML used to provide this formatting can be viewed in the HTML tab of the Advanced HTML Editor in Authoring Manager and should be cleaned up as much as possible.
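
After cleanup, the same question text can usually be reduced to something much simpler, for example (again, purely illustrative):

    <p>What is the value of cos(0)?</p>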

Alternatively, Questionmark Live automatically removes any styling HTML that may be carried over from applications such as Word or from other web pages. To find out more, see Questionmark Live.

If you are using tables, we recommend that you build them following the W3C guidelines rather than the default tables available. They should ideally look something like this:
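
The original table example is not reproduced here, but a table built along the W3C guidelines might look something like this sketch (the caption, headers and data are illustrative only):

    <table>
      <caption>Quiz results by participant</caption>
      <thead>
        <tr>
          <th scope="col">Participant</th>
          <th scope="col">Score</th>
        </tr>
      </thead>
      <tfoot>
        <tr>
          <td>Average</td>
          <td>7.5</td>
        </tr>
      </tfoot>
      <tbody>
        <tr>
          <td>Participant A</td>
          <td>8</td>
        </tr>
        <tr>
          <td>Participant B</td>
          <td>7</td>
        </tr>
      </tbody>
    </table>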


By using the <caption>, <thead> and <tfoot> tags in your table you can clearly identify parts of the table to be read by the screen reader.

For more information see the W3C recommendations for non-visual user agents. These tables can be added by using the Advanced HTML Editor in Authoring Manager.

When and where should I use randomly delivered assessments?

Posted by Greg Pope

I am often asked my psychometric opinion regarding when and where random administration of assessments is most appropriate.

To refresh memories, this is a feature in Questionmark Perception Authoring Manager that allows you to select questions at random from one or more topics when creating an assessment. Rather than administering the same 10 questions to all participants, you can give each participant a different set of questions that are pulled at random from the bank of questions in the repository.

So when is it appropriate to use random administration? I think that depends on the answer to this question: What are the assessment’s stakes and purpose? If the stakes are low and the assessment scores are used to help reinforce information learned, or to give participants a rough sense of how they are doing in an area, I would say that using random administration is defensible. However, if the stakes are medium/high and the assessment scores are used for advancing or certifying participants, I usually caution against random administration. Here are a few reasons why:

  • Expert review of the assessment form(s) cannot be conducted in advance (each participant gets a unique form)
      ◦ Generally SMEs, psychometricians, and other experts will thoroughly review a test form before it is put into live production. This is to ensure that the form meets difficulty, content and other criteria before being administered to participants in a medium/high stakes context. In the case of randomly administered assessments, this review in advance is not possible because every participant obtains a different set of questions.
  • Issues with the calculation of question statistics using Classical Test Theory (CTT)
      ◦ Smaller numbers of participants will be answering each individual question. (Rather than all 200 participants answering all 50 questions in a fixed-form test, randomly administered tests generated from a bank of 100 questions may only have a few participants answering each question.)
      ◦ As we saw in a previous blog post, sample size has an effect on the robustness of item statistics. With fewer participants taking each question, it becomes difficult to have confidence in the stability of the statistics generated.
  • Equivalency of assessment scores is difficult to achieve and prove
      ◦ An important assumption of CTT is equivalence of forms, or parallel forms. In assessment contexts where more than one form of an exam is administered to participants, a great deal of time is spent ensuring that the forms are parallel in every way possible (e.g., difficulty of questions, blueprint coverage, question types) so that the scores participants obtain are equivalent.
      ◦ With random administration it is not possible to control and verify in advance of an assessment session that the forms are parallel, because the questions are pulled at random. This leads to the following problem in terms of the equivalence of participant scores:
      ◦ If one participant got 2/10 on a randomly administered assessment and another participant got 8/10 on the same assessment, it would be difficult to know whether the participant who got 2/10 scored low because they (by chance) got harder questions than the participant who got 8/10, or whether the low-scoring participant actually did not know the material.
      ◦ Using meta tags one can mitigate this issue to some degree (e.g., by randomly administering questions within topics by difficulty ranges and other meta tag data), but this would not completely guarantee randomly equivalent forms.
  • Issues with the calculation of test reliability statistics using CTT
      ◦ Statistics such as Cronbach’s Alpha (see the formula below) do not cope well with randomly administered assessments. Random administration produces a lot of missing data (not all participants answer all questions), which psychometric statistics rarely handle well.
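
For reference, Cronbach’s Alpha is normally calculated as:

    α = (k / (k − 1)) × (1 − Σ σ²_i / σ²_X)

where k is the number of questions on the form, σ²_i is the variance of scores on question i, and σ²_X is the variance of participants’ total scores. Every term assumes that the same group of participants answered every question, which is precisely the assumption that random administration breaks.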

There are alternatives to random administration, depending on what the needs are. For example, if random administration is being considered mainly to curb cheating, options such as shuffling answer choices and randomizing question presentation order could serve this need, making it very difficult for participants to copy answers from one another.

It is important for an organization to look at its own context to determine what is best. Questionmark provides many assessment options for our customers and invites them to work with us in adopting workable solutions.

Sending an email at the end of an assessment

Posted by John Kleeman

One of our customers — a training manager — recently requested we add to Questionmark Perception the ability to automatically send an email at the end of an assessment. He needs to know as soon as people pass certain assessments that have safety implications.

I was pleased to be able to tell him that this is already possible in the software. And in case anyone else wants this capability and doesn’t realize it already exists, I thought I’d blog about how to set it up.

When you are creating an assessment in Authoring Manager, you can associate an automatic email with an assessment outcome, for instance to send an email when someone passes or fails.

Here’s where you can specify what you want to happen:

[Screen capture: the assessment outcome email settings in Authoring Manager]

You can arrange for the email to go to the participant, to a specified email address or to an email address set in a special field. There are also easy drop-downs that allow you to include people’s name, score or details within the email. In the example above, I have set the email to be sent to me and to tell me the name of the participant and his or her pass score.

Here are some typical uses for emails at the end of assessments:

  • Send an email to a participant confirming that they have passed a test, to give them a formal record of a pass
  • Email a person’s manager to let them know whether the person has passed or failed
  • Alert an instructor that someone requires remedial instruction
  • If you are using Perception for pre-employment screening, you can provide a notification when someone is screened successfully
  • You can also set up assessment outcomes more widely than pass and fail. For example, you might want to send an email if someone answers a survey with a particular set of weights or does exceptionally well or badly on an assessment

Pushing out a notification by email at the end of an assessment will be useful to most Perception users, and I hope it’s helpful to be reminded how to do it.