What do you want for the holidays?

Posted by Howard Eisenberg

All I want for the holidays is …

As the acting product owner for Questionmark’s Reporting and Analytics zone, I would love to hear how you would complete that sentence … with respect to assessment reporting and analytics, of course.

To help stimulate ideas, I’ll highlight some recent developments in our reporting and analytics suite.

The Introduction of the Results Warehouse and Questionmark Analytics

In version 5.3, we introduced the Results Warehouse. This is a database of assessment results that is separate from the database responsible for delivering assessment content and storing the participant’s responses. Results are extracted, transformed and loaded (ETL’ed) into the Results Warehouse from the delivery database on a recurring schedule. This database is the data source for the Questionmark Analytics reports.

With the advent of Analytics, we have introduced some new reports, and we plan to continue building reports in Analytics. In the case of the Item Analysis report, we have ported it to Analytics entirely, and in doing so have delivered improvements to the visualization of item quality and to the report’s output options.

Addition of New Reports in Questionmark Analytics

Here’s a brief inventory of the reports currently available in Analytics. You can read up on the purpose of each of these reports and see sample outputs by consulting the Analytics Help Center.

In the spirit of holiday gift-giving, allow me to expound on a few of these reports.

Results Over Time and Average Score by Demographic

These are two separate reports, but they are similar in that each displays an average assessment score within a 95% confidence interval, along with the number of results (the sample size).

The “Results Over Time” report plots the assessment mean over a period of time, the interval of which is selected by the user.

The “Average Score by Demographic” report likewise displays a mean score, but it groups the results by a demographic. In this way, it enables the report consumer to compare the mean across different demographic groups.
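To make the statistic behind both reports concrete, here is a minimal sketch of computing a mean score with a 95% confidence interval using the normal approximation. The function and sample scores are illustrative only, not Questionmark’s actual implementation.

```python
import math

def mean_with_ci(scores, z=1.96):
    """Return the mean of `scores`, its confidence interval and the sample size.

    Normal approximation: mean +/- z * s / sqrt(n), where s is the
    sample standard deviation and z = 1.96 gives a 95% interval.
    """
    n = len(scores)
    mean = sum(scores) / n
    variance = sum((x - mean) ** 2 for x in scores) / (n - 1)  # sample variance
    margin = z * math.sqrt(variance / n)
    return mean, (mean - margin, mean + margin), n

# Hypothetical percentage scores for one time interval or demographic group.
scores = [72.5, 80.0, 65.0, 90.0, 77.5, 85.0, 70.0, 82.5]
mean, (low, high), n = mean_with_ci(scores)
print(f"mean = {mean:.1f}, 95% CI = ({low:.1f}, {high:.1f}), n = {n}")
```

A wide interval signals a small or noisy sample, which is why both reports show the sample size alongside the mean.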

Assessment Completion Time

This report can be used to help investigate misconduct. It plots assessment results on a score axis and a completion time axis. Outliers may represent cases of misconduct. That is, if a participant scores above the mean yet takes an abnormally short time to complete the assessment, this may represent a case of cheating. If a participant takes an abnormally long time to complete the assessment yet scores very poorly, this may represent a case of content theft. The report allows the user to set the ranges for normal score and completion time.
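As a rough sketch of the logic this report supports, the function below flags the two outlier patterns described above. The “normal” ranges are hypothetical placeholders for the values a user would set in the report.

```python
def flag_result(score, minutes, score_range, time_range):
    """Flag a result for review using user-defined 'normal' ranges.

    A high score with an abnormally short completion time may indicate
    cheating; a low score with an abnormally long time may indicate
    content theft. Either pattern merely warrants a closer look.
    """
    low_score, high_score = score_range
    low_time, high_time = time_range
    if score > high_score and minutes < low_time:
        return "review: high score, unusually fast"
    if score < low_score and minutes > high_time:
        return "review: low score, unusually slow"
    return "normal"

# Hypothetical ranges: 40-90% scores and 10-45 minute times count as normal.
print(flag_result(95, 6, (40, 90), (10, 45)))   # review: high score, unusually fast
print(flag_result(20, 70, (40, 90), (10, 45)))  # review: low score, unusually slow
print(flag_result(75, 30, (40, 90), (10, 45)))  # normal
```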

Item Analysis

Finally, the item analysis report has been improved to provide users with better visualization of the quality of items on an assessment form, as well as more output options.

Suspect items are immediately visible because users can specify acceptable ranges for p-value and item-total correlation. Items that fall within acceptable ranges for each measure are green, those that fall outside of the acceptable range for one of the two measures are orange, and any that miss the mark for both p-value and item-total correlation are red.
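Here is a minimal sketch of that traffic-light classification. Note that in item analysis, “p-value” is the item’s difficulty index (the proportion of participants answering correctly), not a significance test. The threshold values below are invented for illustration, since the report lets users choose their own.

```python
def item_flag(p_value, item_total_corr, p_range=(0.3, 0.9), corr_min=0.2):
    """Classify an item as green, orange or red from two quality measures."""
    p_ok = p_range[0] <= p_value <= p_range[1]
    corr_ok = item_total_corr >= corr_min
    if p_ok and corr_ok:
        return "green"   # both measures within the acceptable range
    if p_ok or corr_ok:
        return "orange"  # exactly one measure out of range
    return "red"         # both measures out of range

print(item_flag(0.75, 0.35))  # green
print(item_flag(0.95, 0.35))  # orange: too easy, but discriminates well
print(item_flag(0.98, 0.05))  # red: too easy and poor discrimination
```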

Additionally, different sections and levels of detail from this report can be output to PDF and/or comma-separated value (CSV) files.

So … what’s on your wish list for the holidays?

Top 5 Questionmark Videos in 2012

Posted by Julie Delazyn

Last week we highlighted our five most popular presentations on our SlideShare page. This week, we’re happy to focus our attention on videos – which offer such an easy way to share information.

You can find more than three dozen videos, demos and other resources about assessment-related best practices and the use of Questionmark technologies in the Questionmark Learning Café.

So which videos have attracted the most interest this year?

Here are the top five:

5. iPhone App
4. Questionmark Live Multiple Choice Demo
3. Course Evaluation Survey in Questionmark Live
2. Auto-Sensing and Auto-Sizing Assessments
1. CSV Question Creation and Import

Thank you for watching, and look for more videos in 2013!

Recommended Reading: Learning on Demand by Reuben Tozman

Posted by Jim Farrell

I don’t know about you, but I often feel spoiled by Twitter.

Being busy forces me to mostly consume short articles and blog posts, with an attention span similar to my 6-year-old son’s. Over the course of the year, the pile of books on my nightstand grows, and I fall behind on the books I want to read. My favorite thing about this time of year (besides football and eggnog) is catching up on my reading.

One book that I’ve been really looking forward to reading, since hearing rumors of its creation by the author, is Learning on Demand by Reuben Tozman.

For those of you who are regulars at e-learning conferences, the name Reuben Tozman will not be new to you. Reuben is not one for the status quo. Like many of us, he is constantly looking for the disruptive force that will move the “learner” from the cookie-cutter, one-size-fits-all model that many of us have grown up with to a world where everything revolves around the context of performance. I put the word learner in quotes because Reuben hates the word. We are all learners all of the time in the 70:20:10 world. You are not only a learner when you are logged into your LMS.

Learning on Demand takes the reader through the topics of understanding and designing learning material with the evolving semantic web, the new technologies available today to make learning more effective and efficient, structuring content for an on-demand system, and key skills for instructional designers.

Each chapter includes real-world examples that anyone involved in education will connect with. This isn’t a book that tells you to throw the baby out with the bathwater: there are a lot of skills that Instructional Designers use today that will help them be successful in a learning-on-demand world.

Even the appendix of case studies has nuggets to take forward and expand into your everyday work. My favorite was a short piece on work Reuben did with the Forum for International Trade Training (FITT). They called it a “J3 vision,” which goes beyond training to performance support. The “Js” are: J1, just enough; J2, just in time (regardless of time and/or location); and J3, just for me (delivered in the medium I like to learn in). (Notice I did not say learning style: that is a discussion for another time.) To me, this is the perfect way to define good performance support.

I think it would be good for Instructional Designers to put their Dick and Carey books into the closet and keep Reuben’s book close at hand.

Top 5 Questionmark Presentations on SlideShare in 2012

Posted by Julie Delazyn

As we near the end of the year, we’d like to highlight some of the most popular presentations we’ve featured here on the blog in 2012.

We have been sharing many presentations with you on our Questionmark SlideShare page – a great way to pass along what we, our partners and our customers have been learning about effective assessment and measurement. I read and answer comments all the time from LinkedIn, Facebook, Twitter and, most recently, Google+ about the value of these presentations as well as the ways in which they are being shared and used.

The five most popular presentations on our SlideShare page… Drumroll, please…

5. Assessment translation, localisation and adaptation (Sue Orchard, Comms Multilingual)

4. Alignment, Impact and Measurement with the A-model (Bruce C. Aaron, Ametrico)

3. Using a Blended Delivery Model to Drive Strategic Success for SAP Certification (Questionmark/SAP co-presentation)

2. Measuring Social Learning in SharePoint with Assessments (John Kleeman, Questionmark)

1. Assess to Comply: how else can you be sure that employees understand? (John Kleeman, Questionmark)

Feel free to comment, share and let us know how these presentations have helped you!

Stay tuned next week for our 5 most-viewed videos of 2012.

Scalability testing for online assessments

Posted by Steve Lay

Last year I wrote a series of blog posts, with accompanying videos, on the basics of setting up virtual machines in the cloud and getting them ready to install Questionmark Perception.

This type of virtual machine environment is very useful for development and testing; we use a similar capability ourselves when testing the Perception software as well as new releases of our US and EU OnDemand services. One thing these environments are particularly useful for is scalability testing.

Scalability can be summarised as the ability to handle increased load when resources are added. We actually publish details of the scalability testing we do for our OnDemand service in our white paper on the “Security of Questionmark’s US OnDemand Service”.

The connection between scalability and security is not always obvious, but application availability is an important part of any organisation’s security strategy. For example, a denial-of-service (DoS) attack is one in which an attacker deliberately exploits a weakness of a system in order to make it unavailable. Most DoS attacks do not involve any breach of confidentiality or data integrity, but they are still managed under the umbrella of security. Scalability testing focuses on the ‘friendly’ threat of increased demand but, as with a DoS attack, the impact of a failure on the end user is the same: loss of availability.

As the popularity of our OnDemand service continues to increase, we’ve been ramping up our scalability testing, too. Using an external virtual machine service, we are able to temporarily, and cost-effectively, simulate loads that exceed the highest peaks of expected demand. As more and more customers join our OnDemand service, the peaks of demand tend to smooth out when compared to a single customer’s usage, allowing us to scale our hardware requirements more efficiently. Our test results are also used to help users of Questionmark Perception, our system for on-premise installation, provision suitable resources for their peak loads.

I thought I’d share a graph from a recent test run to help illustrate how we test the software behind our services. These results were obtained with a set of virtual resources designed to support a peak rate equivalent to 1 million assessments per day. The graph shows results from 13 different types of test, such as logging in, starting a test and submitting results. The vertical axis represents the response times (in ms) for the minimum, median and 90th-percentile cases at peak load. As you can see, all results are well within the target time of 5000 ms.
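For readers curious how such summary figures are derived, here is a small sketch of computing the minimum, median and 90th-percentile response times from a batch of timings, checked against the 5000 ms target. The timing data is invented; this is not our actual test harness.

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest sample with at least pct% at or below it."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical response times (ms) for one type of test at peak load.
response_times_ms = [120, 250, 180, 900, 310, 150, 4200, 600, 220, 275]

print("min    :", min(response_times_ms), "ms")
print("median :", percentile(response_times_ms, 50), "ms")
print("p90    :", percentile(response_times_ms, 90), "ms")
print("within 5000 ms target:", percentile(response_times_ms, 90) <= 5000)
```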

I hope I’ve given you a flavour of the type of testing we do to ensure that Questionmark OnDemand lives up to being a scalable platform for your high-volume delivery needs.

 

Determining the Stakes of Assessments

Posted by Julie Delazyn

Determining the stakes of an assessment helps you plan it appropriately, allocate resources wisely and determine an appropriate security level for it.

You can identify low-, medium- or high-stakes assessments by considering their consequences to the candidate. An exam, for instance, normally has significant consequences while a survey has low or no consequences.

This chart displays the consequences of different types of assessments and other factors that help indicate whether they are low-, medium- or high-stakes:

[Chart: assessment types, their consequences, and other factors indicating low, medium or high stakes]

In low-stakes assessments, such as quizzes and surveys, the consequences to the candidate are low, and so the legal liabilities are low. These assessments are often taken alone, since there isn’t any motivation to cheat or share answers with others. Little planning is required: subject matter experts (SMEs) simply write the questions and make them available to learners. The consequences of low-stakes tests are easily reversed. If someone gets a poor score on a quiz, for instance, they could improve their score on a retake.

But what about the consequences of a high-stakes test, such as a nursing certification exam? It would be very difficult, if not impossible, to reverse the consequences of failing such a test. This kind of test, therefore, requires a great deal of planning. This might include job task analysis, setting pass/fail scores, specifying the methods and consistency of delivery required, and determining how results will be stored and distributed. Psychometricians must analyze the results of such a test and ensure that it is valid and reliable. The motivation to cheat is high, so strong security measures – including the positive identification of each test taker – are in order. For example, high-stakes tests related to national security might use biometric screening such as retinal scans to ensure that test takers are who they say they are.

Understanding the stakes of an assessment is an essential step in determining how you will author it, deliver it and report on it. For more details about assessments and their uses, check out our white paper, Assessments Through the Learning Process. You can download it free here, after login. Another good source for testing and assessment terms is our glossary.
