Using the Test Center Analysis report in Questionmark Analytics

This week’s “how to” article highlights the “Test Center Analysis report,” one of the Questionmark Analytics reports now available in Questionmark OnDemand.

  • What it does: The Test Center Analysis Report allows users to evaluate the performance of participants who have been scheduled to take assessments at Test Centers. The report flags abnormal patterns that may indicate a statistically significant difference between test scores at different centers.
  • Who should use it: Assessment, learning and education professionals can use this report to review the performance at various Test Centers and evaluate whether there are significant differences in performance between them. They could use the report to spot potential test security issues. For example, do participants at a specific Test Center do much better on assessments than participants elsewhere?
  • How it looks: This report offers a lot of information graphically and compactly, making it easy to interpret data quickly. It is broken down into two components (graphs).

1. Average assessment scores achieved by participants at each Test Center

  • The triangles represent the mean values for the assessment results. Alongside each triangle are vertical bars marking the 95% confidence interval for each mean: long bars indicate that the data is widely varied, while short bars indicate high confidence in the mean value.
  • Whether the bars overlap indicates whether the mean values differ. If the vertical bars for two Test Centers overlap, there is not a statistically significant difference between their means; if they do not overlap, the difference is statistically significant. (A minimal sketch of this computation follows this list.)
  • Note that in this example, Test Center 1 had the highest mean. You will also see that its confidence interval does not overlap the others: the results for this Test Center are statistically significantly higher than those for the other Test Centers.
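To make the overlap rule concrete, here is a minimal sketch of the computation, assuming illustrative score data and a simple normal-approximation interval (mean ± 1.96 standard errors); the post does not specify the exact formula Questionmark Analytics uses.

```python
from statistics import mean, stdev
from math import sqrt

def mean_ci_95(scores):
    """Return (mean, lower, upper) for a 95% CI using the normal
    approximation: 1.96 standard errors either side of the mean."""
    m = mean(scores)
    se = stdev(scores) / sqrt(len(scores))
    return m, m - 1.96 * se, m + 1.96 * se

# Illustrative scores for two Test Centers.
centers = {
    "Test Center 1": [82, 88, 91, 85, 90, 87, 93, 86],
    "Test Center 2": [70, 64, 75, 68, 72, 66, 71, 69],
}

intervals = {name: mean_ci_95(s) for name, s in centers.items()}
for name, (m, lo, hi) in intervals.items():
    print(f"{name}: mean={m:.1f}, 95% CI=({lo:.1f}, {hi:.1f})")

# Non-overlapping intervals suggest a statistically significant difference.
(_, lo1, hi1), (_, lo2, hi2) = intervals.values()
print("Intervals overlap:", lo1 <= hi2 and lo2 <= hi1)
```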

2. Number of results that occurred in each Test Center

  • This volume information can help in planning for Test Center logistics.

A PDF and an analysis-friendly CSV version of the report can also be generated.

ETL: The Power behind Questionmark Analytics

Readers of this blog will be aware that Questionmark is transforming the way we provide reports using our new Questionmark Analytics tool.  The screenshots shared in previous posts demonstrate the new report types and the modern user interface, but what they don’t show is the technology that now makes it possible to provide rapid reporting even for the largest data sets.  In this post we’ll take a look “under the hood” at this technology.

E is for Extract

Running reports can be a time-consuming job. Thousands or even millions of database records need to be collected together, cross-matched and reformatted before being sent to the user. Unfortunately, data doesn’t stand still. There is nothing to stop you from taking an assessment and generating a new result in the database while someone else is trying to get a report on the very same assessment. Conflicts between these two demands can cause databases to slow down, especially when you start scaling up. The solution is to extract the data from the main database and generate the reports elsewhere.

T is for Transform

When you take an assessment with Questionmark Perception the results are written to the database in the simplest, fastest way. This enables us to scale up the delivery of assessments to cater for vast numbers of participants. But there is a cost to this speed and simplicity. The data is not organized in a very convenient way for reporting; as a result, complex reports require much more data processing to create. The solution is to transform the data so that reports can be created quickly when they are needed.

L is for Load

Having extracted and transformed our data the last step in the process is to load it into a special-purpose database for generating the reports. We call this database our “Results Warehouse” — because the science of ETL is part of the larger field of Data Warehousing.
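Putting the three steps together, here is a minimal sketch of one ETL pass in Python with SQLite. The table and column names are illustrative assumptions, not Questionmark's actual schema; the point is the shape of the process: extract new rows from the live database, transform them into a reporting-friendly form, and load them into the warehouse.

```python
import sqlite3

# Illustrative in-memory databases standing in for the live delivery
# database and the Results Warehouse.
live = sqlite3.connect(":memory:")
warehouse = sqlite3.connect(":memory:")

live.execute("CREATE TABLE results (participant TEXT, assessment TEXT, "
             "score REAL, max_score REAL, finished_at TEXT)")
live.executemany("INSERT INTO results VALUES (?, ?, ?, ?, ?)", [
    ("alice", "Safety 101", 45, 50, "2011-05-01"),
    ("bob", "Safety 101", 38, 50, "2011-05-02"),
])
warehouse.execute("CREATE TABLE result_facts (participant TEXT, "
                  "assessment TEXT, pct_score REAL, finished_at TEXT)")

last_run = "2011-04-30"  # when the previous ETL pass ran

# Extract: pull only new result rows, so reporting work happens away
# from assessment delivery.
rows = live.execute(
    "SELECT participant, assessment, score, max_score, finished_at "
    "FROM results WHERE finished_at > ?", (last_run,)).fetchall()

# Transform: reshape delivery-oriented rows into a reporting-friendly
# form, e.g. precomputing a percentage score.
facts = [(p, a, round(100.0 * s / mx, 1), f) for (p, a, s, mx, f) in rows]

# Load: write the transformed rows into the warehouse, where reports
# can query them without touching the live database.
warehouse.executemany("INSERT INTO result_facts VALUES (?, ?, ?, ?)", facts)
warehouse.commit()
print(warehouse.execute("SELECT * FROM result_facts").fetchall())
```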

This new architecture provides rapid, scalable reporting that doesn’t interfere with the participant experience. Learn more about Questionmark Analytics here.

New Item Analysis Report: The Detail Page

Posted by Jim Farrell

In a recent post I explained the Summary page of the new Item Analysis report in Questionmark Analytics, now available in Questionmark OnDemand. Today I want to show you the question detail page.

[Screenshot: Item Analysis report, question detail page (ItemAnalysisReport2)]

This is the top of the detail page for a question. It first repeats the high-level summary row from the previous page and then shows a number of statements that interpret the psychometric performance of the question based on the statistics. These plain-language descriptions of what is going right or wrong with the question make it easier for non-expert users to get actionable item analysis information and to identify the questions that need review.

Users can then scroll down the page to drill deeper into the statistical details of the question's performance:

[Screenshot: Item Analysis report, detail page statistics (ItemAnalysisReport3)]

The p-values and item-total correlations have 95% confidence interval information in brackets next to the observed values, giving users a sense of the error range around these statistics. (See a previous blog article on using confidence intervals.)
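For the curious, here is a minimal sketch of how such intervals can be computed. The Wilson interval is a common choice for a proportion such as a p-value, and the Fisher z-transformation for a correlation; the post does not state which formulas the report itself uses.

```python
from math import sqrt, atanh, tanh

def p_value_ci(correct, n, z=1.96):
    """Wilson 95% confidence interval for an item p-value
    (the proportion of participants answering correctly)."""
    p = correct / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

def correlation_ci(r, n, z=1.96):
    """95% confidence interval for a correlation via Fisher's z."""
    zr, se = atanh(r), 1 / sqrt(n - 3)
    return tanh(zr - z * se), tanh(zr + z * se)

print(p_value_ci(correct=72, n=100))  # interval around a p-value of 0.72
print(correlation_ci(r=0.35, n=100))  # interval around a correlation of 0.35
```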

There are additional statistics as well, such as the item reliability index that some psychometricians use. Below that is the answer option table, which shows, for multiple-choice questions, the number and percentage of participants who selected each response option. Beside it is a preview of the question so that users can see what it looked like, inline with the statistics.
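The answer option table itself is simple counting. A minimal sketch with illustrative responses:

```python
from collections import Counter

# Illustrative responses to one multiple-choice question.
responses = ["B", "A", "B", "C", "B", "D", "B", "A", "B", "C"]
counts = Counter(responses)
n = len(responses)

print("Option  Count  Percent")
for option in sorted(counts):
    print(f"{option:<8}{counts[option]:<7}{100 * counts[option] / n:.0f}%")
```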

Remember, if you are running a medium- or high-stakes assessment that has to be legally defensible, you cannot confirm that the assessment is valid unless you run item analysis. And for all quizzes, tests and exams, running an item analysis report will give you information to help you make the assessment better.

New Item Analysis Report in Questionmark Analytics: The Summary Page

 Posted by Jim Farrell

When I visit customers, I find that the Item Analysis report is one of the most useful reporting capabilities of Questionmark Perception. By using it, you can tell which questions are effective and which are not – and if you don’t use it, you are “running blind”: you hope your questions are good, but do not really know whether they are.

Our most recent update to Questionmark OnDemand provides a new classical test theory item analysis report — one of several reports now available in Questionmark Analytics. This report supports all question types commonly used on quizzes, tests and exams and is fully scalable for application to large pools of participants. Let’s take a look at the report!

[Screenshot: Item Analysis report, summary page (ItemAnalysisReport1)]

This is the summary page. The graph shows the performance of questions in relation to one another in terms of their difficulty (p-value) and discrimination (item-total correlation). The p-value is a number from 0 to 1 and represents the proportion of people who answer the question correctly. So a question with a p-value of 0.5 means that half the participants get it right and half get it wrong, and a question with a p-value of 0.9 means that 90% of respondents get it right.

A rule of thumb is that it’s often useful to use questions with p-values that are reasonably close to the pass score of the assessment. For instance, if your pass score is 60%, then questions with a p-value of around 0.6 will give you good information about your participants. However, a very high or very low p-value does not give you much information about the person who answers the question. If the purpose of the test is to measure someone’s knowledge or skills, you will get more information from a question with a medium p-value. The item analysis report is an easy way to get p-values.
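As a minimal sketch with illustrative data, p-values are just the column means of a 0/1 scored-response matrix; the review margin below is an arbitrary illustration of the rule of thumb, not a value taken from the report.

```python
# Scored responses: one row per participant, one column per question,
# 1 = correct, 0 = incorrect (illustrative data).
responses = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
]

n = len(responses)
p_values = [sum(col) / n for col in zip(*responses)]
print(p_values)  # [0.75, 0.75, 0.25]

# Flag questions whose p-value is far from a 60% pass score; the 0.25
# margin is an arbitrary illustration, not a value from the report.
flagged = [i + 1 for i, p in enumerate(p_values) if abs(p - 0.6) > 0.25]
print("Questions to review:", flagged)  # [3]
```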

The other key statistic in the item analysis report is the item-total correlation, a measure of discrimination showing the correlation between the question score and the assessment score. Higher positive values indicate that participants who obtain high question scores also obtain high assessment scores, and that participants who obtain low question scores also obtain low assessment scores. Questions with low values here could be unhelpful and are worth drilling down on.
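A minimal sketch of this statistic, assuming illustrative data: it is the Pearson correlation between each question's scores and the total assessment scores. Many implementations exclude the question from its own total (the "corrected" item-total correlation); that refinement is omitted here, and statistics.correlation requires Python 3.10+.

```python
from statistics import correlation  # Python 3.10+

# Scored responses: one row per participant (illustrative data).
responses = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
    [0, 0, 1],
]
totals = [sum(row) for row in responses]  # total assessment scores

for i, item_scores in enumerate(zip(*responses), start=1):
    r = correlation(list(item_scores), totals)
    print(f"Q{i}: item-total correlation = {r:.2f}")
```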

Below the graph is a table that shows high-level details of each question in the assessment. The table can be sorted by any of the columns, and clicking a row takes the user to that question’s detail page, which we will discuss in our next blog post on this subject.

If you are running a medium- or high-stakes assessment that has to be legally defensible, you cannot confirm that the assessment is valid unless you run item analysis. And for all quizzes, tests and exams, running an item analysis report will give you information to help you make the assessment better.

Using the Assessment results over time report in Questionmark Analytics

This week’s “how to” article highlights the “Assessment results over time report,” one of the Questionmark Analytics reports now available in Questionmark OnDemand.

  • What it does: The Assessment results over time report provides summary assessment performance information over time, including the mean, minimum and maximum values for a test or exam as well as its 95% confidence interval. It also shows the number of participants who took the assessment during that period of time. You can assign these filters to your report:
      • Assessment filter
      • Group filter
      • Date filter
  • Who should use it: Assessment, learning and education professionals can use this report to view and filter assessment results over time, making it easy for them to flag abnormal patterns that may indicate a statistically significant difference between the means.
  • How it looks: This report offers a lot of information graphically and compactly, making it easy to interpret vast amounts of information quickly. It is broken down into two components (graphs).

1. A graph displaying average assessment scores achieved by participants over a period of time. The blue triangles represent the means for the assessment results. The vertical bars next to the triangles denote 95% confidence intervals: long bars indicate that the data is varied, and short bars indicate high confidence in the mean value. It’s easy to see in the first graph that the results of tests administered just before September 6, 2010, differ dramatically from the other results during this period! (A minimal sketch of this computation follows this list.)
2. A graph displaying the number of results from the same time period. This volume information can help plan administration sessions and load.
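Here is a minimal sketch of the computation behind the first graph, assuming illustrative data grouped by ISO week and a normal-approximation confidence interval; the post does not specify the report's exact grouping or interval formula.

```python
from collections import defaultdict
from statistics import mean, stdev
from math import sqrt
from datetime import date

results = [  # (date taken, score) — illustrative data
    (date(2010, 8, 30), 72), (date(2010, 8, 31), 68), (date(2010, 9, 1), 75),
    (date(2010, 9, 6), 95), (date(2010, 9, 7), 97), (date(2010, 9, 8), 94),
]

# Group scores by ISO week number.
by_week = defaultdict(list)
for d, score in results:
    by_week[d.isocalendar()[1]].append(score)

for week, scores in sorted(by_week.items()):
    m = mean(scores)
    half = 1.96 * stdev(scores) / sqrt(len(scores)) if len(scores) > 1 else 0.0
    print(f"Week {week}: n={len(scores)}, mean={m:.1f}, "
          f"min={min(scores)}, max={max(scores)}, "
          f"95% CI=({m - half:.1f}, {m + half:.1f})")
```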

A PDF and an analysis-friendly CSV can also be generated.


Using the Question Status Report in Questionmark Analytics

This week’s “how to” article highlights the “Question Status Report,” one of the Questionmark Analytics reports now available in Questionmark OnDemand.

  • What it does: This item bank report tells how many questions you have in your repository by question status:
      • Normal – The question can be included in assessments
      • Retired – The question is retired and cannot be included in assessments
      • Incomplete – The question is still being developed and cannot be included in assessments
      • Experimental – The question can be included in assessments but is available in experimental form only
      • Beta – The question is treated in the same way as a normal question. Beta questions can be included in assessments.
  • Who should use it: This report gives testing, assessment, learning and education professionals a quick view of the current status of questions in their item banks.
  • How it looks: This report lists the question status possibilities along the left-hand side. Horizontal bars indicate the number of questions with each status, and these bars can be color coded by topic as well as question status. The report can be viewed in a web browser or downloaded and distributed as a PDF. The CSV version of this report lists question status in the first column and the number of questions per topic in the remaining columns. The question detail CSV provides information such as each question’s Perception question ID, wording, description, status, topic and question type. You can see the information for your entire repository or just for specific topics. (A minimal counting sketch based on this CSV follows.)
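As a minimal sketch, counting questions by status from the question detail CSV might look like the following; the file name and the "status" column header are assumptions for illustration, not documented field names.

```python
import csv
from collections import Counter

# Count questions per status from the question detail CSV; the file name
# and "status" column header are illustrative assumptions.
with open("question_detail.csv", newline="") as f:
    statuses = Counter(row["status"] for row in csv.DictReader(f))

for status in ("Normal", "Retired", "Incomplete", "Experimental", "Beta"):
    print(f"{status:<13}{statuses.get(status, 0)}")
```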