Branching an Assessment to Repeat Itself: Here’s How

There may be times when you want your participants to repeat an assessment – for instance if they fail a quiz, or simply because you want to give them another chance regardless of their score. It’s easy to do either of these things in Questionmark Perception. The ability to branch assessments is particularly helpful if you intend to embed them in a web page such as a SharePoint site, blog or wiki.

Here are tips for allowing participants to repeat an assessment:

Setting up a re-take for participants who do not achieve the required score:

  • When creating or editing the assessment, check the Enable pass / fail check-box in the Assessment Feedback screen of the Assessment wizard and set the required pass % mark.
  • You will also need to select Branching from the Settings menu and indicate that you want the participant to “branch to another assessment.”
  • Just choose the assessment you are currently editing, so that participants will automatically repeat it if they do not achieve the passing score.

Branching an assessment back to itself regardless of the participant’s score:

  • When creating the assessment, ensure that there is only one assessment outcome for any score from 0 to 100%.
  • To do this, uncheck the Enable pass / fail check-box on the Assessment Feedback screen.
  • Follow the same instructions as above to indicate that you want participants to branch to the assessment you are currently editing. That way, they can be taken back to the assessment for another try.
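The two branching setups above can be sketched as simple decision logic. The following Python sketch is a hypothetical illustration only – the function and parameter names are assumptions for clarity, not Questionmark Perception’s actual implementation:

```python
def next_assessment(score, pass_mark, branch_target, pass_fail_enabled):
    """Decide which assessment (if any) the participant sees next.

    Mirrors the two setups described above:
    - pass/fail enabled: branch back only when the score is below the pass mark
    - pass/fail disabled (one outcome for 0-100%): always branch back
    """
    if pass_fail_enabled:
        if score < pass_mark:
            return branch_target  # failed: repeat the assessment
        return None               # passed: no branching
    return branch_target          # single outcome: always repeat

# Branching a "safety-quiz" assessment back to itself:
print(next_assessment(55, 70, "safety-quiz", pass_fail_enabled=True))   # safety-quiz
print(next_assessment(85, 70, "safety-quiz", pass_fail_enabled=True))   # None
print(next_assessment(85, 70, "safety-quiz", pass_fail_enabled=False))  # safety-quiz
```

In both cases the branch target is the assessment being edited, which is what takes the participant back for another try.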

I wandered lonely as a virtual machine – with apologies to William Wordsworth

Over the next few months I’m going to be taking a closer look at how to set up Questionmark Perception on cloud-based virtual machines.  I’ll be writing some tutorials aimed at system administrators and integration developers.  I’ll show you, in detail, how to install and configure Perception in Amazon’s cloud and how to use it to test your own integrations.

But first: what are virtual machines, and why do they form clouds?

Ten years ago you could walk into a typical data centre and locate the physical machine that was running your application. You could walk around the server room and do a quick scan of the flashing LEDs to get a dashboard-like view of your running applications. Sometimes a big server would be shared between multiple applications, but sharing the operating system, web server and other common components was error prone and things quickly got complicated!

Today, data centres are often just populated with anonymous machines that combine to form a cloud of computing resources. Each application is installed on its own virtual machine with its own virtual operating system. Virtual machines ‘float’ in the cloud, wandering transparently between the physical machines as required.

One of the main advantages of virtualization is that it enables resources to be used more efficiently. When an application is inactive, the physical resources it was using can be quickly recycled, saving energy and helping ensure that future generations can also enjoy the “vales and hills” that Wordsworth once wandered through.

One of the most popular computing clouds is Amazon’s Elastic Compute Cloud, known as Amazon EC2 for short.  Virtual machines in Amazon’s EC2 are rented out to Amazon account holders on an hour-by-hour basis for just a few dollars.

The physical machines that make up the cloud are distributed around the world in Amazon’s data centres. But clouds can be private too. Virtualization software is available from well-known suppliers such as VMware and Microsoft, allowing companies to create private clouds in their own data centres. There is even an open source cloud platform called Eucalyptus.

The easiest way to get going with Questionmark Perception is with our OnDemand solution, but if you need an OnPremise solution you might already be thinking of installing Perception on your own cloud-based systems. This is a subject I’ll be covering in more detail later.

Delivering certifications via the cloud increases candidates’ opportunities

Posted by Joan Phaup

The Medical Group Management Association (MGMA) and its standard-setting and certification body, the American College of Medical Practice Executives (ACMPE), have long sought effective ways to enhance the knowledge and skills of administrators, CEOs, physicians in management, board members, office managers and many other management professionals.

Their work in promoting the personal and professional growth of people involved in managing medical practices requires the ability to provide certification tests to a widely dispersed audience.

A switch nearly 10 years ago from paper-based to computerized testing made it possible to offer more certification tests to more people in more locations. This move reduced travel costs for candidates and offered them more flexibility as to when and where to take tests.

Even so, an internal exam delivery network required a lot of attention, and some candidates still had to travel long distances to take quarterly exams at just 30 testing sites. A more recent switch to secure, cloud-based delivery, using Questionmark OnDemand Services, has made ACMPE exams available at more than 230 sites, with six exam dates per quarter.

The move has also given ACMPE staff more time to focus on their own specialized tasks by delegating hardware maintenance, software upgrades, secure test delivery and many other tasks. You can get more details by reading our case study about MGMA.

Providing Calculators and Other Tools for Test Takers

There may be times when you want to give test participants access to a certain tool or resource: a calculator or a periodic table, for instance. Or maybe you are giving an open book test about policies and procedures and wish to make a PDF available for participants to refer to.

You can provide these types of resources within your assessments using Perception Authoring Manager. Use the question-by-question (QxQ) template and enable Perception’s Assessment Navigator, which allows participants to move easily from one question to another.

Here are a few rules of thumb for providing tools within assessments:

  • Any tools that you use must be web based or accessible via a network from the computer the participant is using. If you are adding more than one tool, the tools will display in the order in which they appear in the template.
  • The Questionmark Perception version 5 repository comes with a calculator tool that can be enabled or disabled in the template. Other tools can be stored as a resource and added to the repository.
  • You can add third-party tools to assessments, too. These tools are not stored directly in the repository but can be accessed via the Internet or network the participant’s computer is connected to.

How Topic Feedback can give Compliance Assessments Business Value

Posted by John Kleeman

If you need to prove compliance with regulatory requirements, should your training and assessments focus on compliance needs? Or should you train and assess to improve skills that will impact your business primarily, and meet compliance needs as well?

I recently interviewed Frederick Stroebel and Mark Julius from a large South African financial services company, Sanlam Personal Finance, for the SAP blog. Sanlam have used Questionmark Perception for more than a decade and combine it with SAP HR and Learning software. You can see the full interview on the SAP site. Their view was that compliance and business-related needs must be combined:

“We deliver assessments both for compliance and e-learning. It’s a combination of business requirements and legislation. We predominantly started it off thinking that the purpose would be for business reasons, but as soon as the business realized the value for regulatory compliance, we received more and more requests for that purpose.”

One of the key ways in which Sanlam use assessment results to improve learning is topic feedback, which identifies topics that may be weak points for the participant.

“We set up our assessments so that at the end, the computer gives the participant a summary of the topics and what the score was per topic, so the participant can immediately see where they need further facilitation as well.”

“It is also valuable in providing feedback to the learner, where a facilitator sits with the learner. The facilitator can immediately determine from the coaching report where exactly the learner needs to go for re-training. We have done extremely well in terms of increasing our overall pass mark and per topic scores by using topic feedback. For example, for brokers and advisers, there’s an initial assessment they do, and because questions are in different topics, once they’ve taken the assessment, the facilitator can immediately see which type of training that person must go on.”

To illustrate, here is part of a Coaching Report that shows a participant has scored 80% in one topic (well above what is needed for competency) and 58% in another (slightly above what is needed).


Questionmark Perception coaching report
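The per-topic percentages in a report like this come from grouping question scores by topic. Here is a minimal Python sketch of that calculation – the data structures and function name are assumptions for illustration, not how Perception itself stores results:

```python
from collections import defaultdict

def topic_scores(results):
    """Summarize percentage score per topic.

    `results` is a list of (topic, points_scored, points_possible) tuples,
    one per question; returns {topic: rounded percentage}.
    """
    scored = defaultdict(float)
    possible = defaultdict(float)
    for topic, points, max_points in results:
        scored[topic] += points
        possible[topic] += max_points
    return {t: round(100 * scored[t] / possible[t]) for t in scored}

# Hypothetical results mirroring the 80% / 58% topics in the report above
results = [
    ("Products", 4, 5), ("Products", 4, 5),        # 8/10  -> 80%
    ("Regulation", 3, 6), ("Regulation", 4, 6),    # 7/12  -> 58%
]
print(topic_scores(results))  # {'Products': 80, 'Regulation': 58}
```

Comparing each topic’s percentage against the competency threshold is what lets a facilitator see at a glance which topic needs re-training.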

Topic feedback is a great way of getting value from assessments and I hope Sanlam’s experience and insight can help you.

Randy Bennett of ETS says seize the opportunity to improve assessment

Posted by John Kleeman

Randy Bennett (who holds the Frederiksen Chair in Assessment Innovation at ETS – Educational Testing Service) is one of the world’s experts on computerizing assessments. I very much enjoyed his recent keynote at the International Computer Assisted Assessment Conference. With Dr. Bennett’s permission, here is a summary of his presentation.

Randy Bennett at International CAA Conference (CAA 2011)

His key proposal is that we should not use technology in assessment because it is cool or because it is efficient, but to make assessment better. “If we focus on efficiency, we may end up with nothing more than the ability to create existing tests faster, cheaper, and in greater numbers without necessarily making them better.”

Here are his 11 propositions for what technology in assessment should do:

1. Give students more meaningful assessment tasks than are feasible through traditional approaches.

2. Model good instructional practice, including encouraging habits of mind common to proficient performers in the domain.

3. Assess important competencies not measured well in conventional form, e.g. simulations or using a spreadsheet.

4. Measure “problem-solving with technology”, given that the workplace typically requires use of technology.

5. Collect response information that can enlighten substantive interpretation (e.g. the time taken to answer questions).

6. Make assessment fairer for all students including those with disabilities and for non-native language speakers.

7. Explore new approaches to adaptive testing to assess authentically the full range of important competencies, not just the middle ranges.

8. Measure more frequently, aggregating information over time to form a summative judgement.

9. Improve the substantive aspects of scoring, for instance by using technology to make scoring more effective.

10. Report assessment results in a timely and instructionally actionable manner, including pointing to likely next steps and instructional materials for them.

11. Help teachers and students understand the characteristics of good performance through onscreen marking – for instance, marking work and having others review your marks to help develop understanding.

You can see the full keynote presentation here with several screenshots illustrating what can be done.

I believe that good computerized assessment does much more than simply computerize paper practices, and it’s great to see this thoughtful call to action.