Question Type Report: Use Cases

Posted by Austin Fossey

A client recently asked me if there is a way to count the number of each type of item in their item bank, so I pointed them toward the Question Type Report in Questionmark Analytics. While this type of frequency data can also be easily pulled using our Results API, it can be useful to have a quick overview of the number of items (split out by item type) in the item bank.

The Question Type Report does not need to be run frequently (and Analytics usage stats reflect that observation), but the data can help indicate the robustness of an item bank.

This report is most valuable when applied to the topics behind a specific assessment or a set of related assessments. While it might be nice to know that we have a total of 15,000 multiple choice (MC) items in the item bank, such system-wide counts are of little use unless we have a system-wide practical application, such as planning a full program translation or selling content to a partner.

This report can provide a quick profile of the population of the item bank or a topic when needed, though more detailed item tracking by status, topic, metatags, item type, and exposure is advisable for anyone managing a large-scale item development project. Below are some potential use cases for this simple report.

Test Development and Maintenance:
The Question Type Report's primary value is its ability to count the number of each type of item within a topic. If we know we have 80 MC items in a topic for a new assessment, and they all need to be reviewed by a bias committee, then we can plan accordingly.

Form Building:
If we are equating multiple forms using a common-item design, the report can help us determine how many items go on each form and the degree to which the forms can overlap. Even if we only have one form, knowing the number of items can help a test developer check that enough items are available to match the blueprint.

Item Development:
If the report indicates that there are plenty of MC items ready for future publications, but we only have a handful of essay items to cover our existing assessment form, then we might instruct item writers to focus on developing new essay questions for the next publication of the assessment.
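If you would rather pull the same counts programmatically (for example, from data retrieved via the Results API mentioned above), the tallying step is tiny. Here is a minimal Python sketch; the record shape and field names are hypothetical, not the actual API schema.

    from collections import Counter

    # Hypothetical item records already fetched from the item bank; the
    # "topic" and "type" field names are illustrative only.
    items = [
        {"topic": "Algebra", "type": "MC"},
        {"topic": "Algebra", "type": "Essay"},
        {"topic": "Geometry", "type": "MC"},
    ]

    # Count the number of each type of item within a single topic.
    topic = "Algebra"
    counts = Counter(item["type"] for item in items if item["topic"] == topic)
    print(counts)  # Counter({'MC': 1, 'Essay': 1})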

Example of a Question Type Report showing the frequency distribution by item type.

 

Using OData queries to calculate simple statistics

Posted by Steve Lay

In previous blog posts we’ve looked at how to use OData clients, like the PowerPivot plugin for Excel, to create sophisticated reports based on the data exported from your Questionmark Analytics results warehouse. In this post, I’ll show you that your web developers don’t need a complex tool like Excel to harness the power of OData.

There are several third party libraries available that make it easy for your developers to incorporate support for OData in their applications, but OData itself contains a powerful query language and developers need nothing more than the ability to fetch a URL to take advantage of it. For example, suppose I want to know what percentage of my participants have passed one of my exams.

Step 1: find out the <id> of your assessment

To start with, I’ll build a URL that returns the complete list of assessments in my repository. For example, if your customer number is 123456 then a URL like the following will do the trick:

https://ondemand.questionmark.com/123456/odata/Assessments (This and the following URLs are examples, not live links.)

The resulting output is an XML file containing one record for each assessment. Open the result in a text editor or use 'view source' in your browser, scan down for the assessment you are interested in, and make a note of the entry's <id>; that's the URL of the assessment. If you've got lots of assessments, you might like to filter the results using an OData query. In my case, I know the assessment name starts with the word "Chemistry", so the following URL makes it easier to find the right one:

https://ondemand.questionmark.com/123456/odata/Assessments?$filter=startswith(Name,'Chem')

All I've done is add a $filter parameter to the URL! The resulting document contains a single assessment, and I can see that its <id> is actually the following URL:

https://ondemand.questionmark.com/123456/odata/Assessments(77014)
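If you want to script this step instead of using a browser, the same lookup takes a few lines of Python using only the standard library. This is a sketch under some assumptions: the customer number is the example value from above, any authentication your repository requires is omitted, and the feed is Atom XML as in OData versions 2 and 3.

    import urllib.request
    import xml.etree.ElementTree as ET

    # Example repository URL from above; add your own authentication as needed.
    base = "https://ondemand.questionmark.com/123456/odata"
    url = base + "/Assessments?$filter=startswith(Name,'Chem')"

    with urllib.request.urlopen(url) as response:
        feed = ET.fromstring(response.read())

    # Each assessment is an Atom <entry> whose <id> holds the assessment's URL.
    ATOM = "{http://www.w3.org/2005/Atom}"
    for entry in feed.findall(ATOM + "entry"):
        print(entry.find(ATOM + "id").text)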

Step 2: count the results

I’m not interested in the information about the assessment but I am interested in the results. OData makes it easy to navigate from one data item to another. I just add “/Results” to the URL:

https://ondemand.questionmark.com/123456/odata/Assessments(77014)/Results

This URL gives me a similar list to the assessment list I had earlier but this time I have one entry for each result of this assessment. Of course, there could be thousands, but for my application I only want to know how many. Again, OData has a way of finding this information out just by manipulating the URL:

https://ondemand.questionmark.com/123456/odata/Assessments(77014)/Results/$count

By adding /$count to the URL I’m asking OData not to send me all the data, but just to send me a count of the number of items that it would have sent back. The result is a tiny plain text document containing just the number. If you view this URL in your web browser you’ll see the number appear as the only thing on the page.

I've now calculated the total number of results for my assessment without having to do anything more sophisticated than fetch a URL. But what I really want is the percentage of these results that represent a pass. It turns out I can use the same technique as before to filter the results and include only those that have passed. My assessment has Pass/Fail information represented using the ScorebandName field.

The $count option returns only a count of the items that would otherwise have been sent back.

https://ondemand.questionmark.com/123456/odata/Assessments(77014)/Results/$count?$filter=ScorebandName eq 'Pass'

Notice that by combining $count and $filter I can count how many passing results there are without having to view the results themselves. It is now trivial to combine the two values returned by these URLs to display a percentage passed, or to display some other graphic representation such as a pie chart or a partially filled bar.
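Here is how small the whole calculation is in code. This sketch reuses the example repository and assessment id from above (authentication again omitted) and simply fetches the two /$count URLs.

    import urllib.request
    from urllib.parse import quote

    # Example assessment results URL from above.
    RESULTS = "https://ondemand.questionmark.com/123456/odata/Assessments(77014)/Results"

    def fetch_count(url):
        # /$count returns a tiny plain-text document containing just the number.
        with urllib.request.urlopen(url) as response:
            return int(response.read())

    total = fetch_count(RESULTS + "/$count")
    # Filter expressions must be URL-encoded when the URL is built by hand.
    passed = fetch_count(RESULTS + "/$count?$filter=" + quote("ScorebandName eq 'Pass'"))

    if total:
        print(f"{passed}/{total} passed ({100.0 * passed / total:.1f}%)")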

As you can see, your developers don’t need a sophisticated library to write powerful client applications with OData. And these HTTP documents are just a few bytes in size so they won’t use much bandwidth (or memory) in your application either. For additional resources and definitions of all the OData filters, you can visit the OData URI conventions page at odata.org.

Questionmark Users Conferences offer many opportunities to learn more about Questionmark Analytics and OnDemand. Registration is already open for the 2014 Users Conference March 4 – 7 in San Antonio, Texas. Plan to be there!

 

Improving course evaluations and boosting participation

Posted by Jim Farrell

Course and session evaluations are popular assessments for helping to improve course and instructor quality at conferences and in learning programs all over the world.

One of our major goals over the past few years has been to make it easier to create and deliver course evaluations – and to help organizations glean more meaningful, actionable results from them.

Back in 2010, when we added the ability to author course evaluation surveys in Questionmark Live, we included question libraries to draw from in creating surveys. These libraries cover four topics: Demographics, Instructor, Course Materials and Facilities; you can either write your own questions or choose some from the libraries.

More recently, we've been exploring the use of QR codes to increase course evaluation response rates by taking participants directly to online surveys via their mobile devices. See our earlier posts for more details about the benefits of using QR codes.

What about the results of course evaluations? We now have four course evaluation reports in Questionmark Analytics: Course Summary, Instructor Summary, Class Summary and Class Detail. These reports allow you to drill into more and more detail about your course evaluation results: you can start at the course level and work your way down to an instructor/question level of detail. Each report also has visual cues to make performance obvious at a glance.

In the example below, the course summary report compares evaluation results across courses. It is most useful for managers and supervisors comparing different courses within an organization.

If you are a customer looking to improve your course evaluations, you can read our Course Evaluation Best Practices Guide. Anyone who hasn't used Questionmark Live can sign up for a free trial via our Tryouts and Downloads page.

Using the Item Analysis Report

Here’s some basic information about the Item Analysis Report, which was recently added to Questionmark Analytics:

What it does: The item analysis report provides an in-depth Classical Test Theory psychometric analysis of item performance. It enables you to drill down into specific item statistics and performance data. The report includes key item statistics such as item difficulty p-value, high-low discrimination, item-total correlation discrimination and item reliability. It also provides assessment statistics relating to the amount of time taken and the scores achieved.

Who should use it: Assessment, learning and education professionals can use this report to determine how well questions perform psychometrically.

How it looks: The report includes an assessment-level overview graph and summary table. The overview graph plots a single point for each item in the summary table. Each question is plotted in terms of its item difficulty p-value (X-axis) and its item-total correlation discrimination (Y-axis):

  • Questions that have high (acceptable) discrimination will appear near the top of the graph
  • Questions with low (unacceptable) discrimination will appear at the bottom of the graph
  • Difficult questions will appear to the left of the graph
  • Easy questions will appear to the right of the graph

The summary table beneath the scatter plot contains a line item for every question on the assessment. The table provides information on question order, question wording and description, and summary statistics for each question's item difficulty p-value and item-total correlation discrimination. You can sort on each column to get different views of question performance; for example, sorting by difficulty brings the hardest questions to the top of the table.

Clicking any row in the summary table opens a detailed item view of question-level information.
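If you are curious what these statistics look like under the hood, the two quantities the scatter plot uses are standard Classical Test Theory calculations. The Python sketch below uses a hypothetical matrix of dichotomous item scores; it illustrates the textbook formulas, not Questionmark's implementation.

    import numpy as np

    # Hypothetical scores: rows are participants, columns are items (1 = correct).
    scores = np.array([
        [1, 1, 0, 1],
        [1, 0, 0, 1],
        [0, 1, 1, 1],
        [1, 1, 1, 1],
        [0, 0, 0, 0],
    ])

    # Item difficulty p-value: proportion of participants answering correctly.
    p_values = scores.mean(axis=0)

    # Corrected item-total correlation: each item against the total of the
    # remaining items, so an item is not correlated with itself.
    totals = scores.sum(axis=1)
    item_total = np.array([
        np.corrcoef(scores[:, i], totals - scores[:, i])[0, 1]
        for i in range(scores.shape[1])
    ])

    print("p-values:", p_values)                   # X-axis of the scatter plot
    print("item-total correlations:", item_total)  # Y-axis of the scatter plot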

Using the Demographic Report

The Demographic Report was recently added to Questionmark Analytics. Here are the basics:

What it does: The demographic report breaks down results by demographic variables such as language, course name, location, department, instructor, and job role or military rank. The report can recognize up to 10 demographic variables recorded by an assessment. Users can review assessment performance broken out by a demographic variable, using data stored in special fields within the database.

Who should use it: Assessment, learning and education professionals can use this report to zero in on test results according to specific demographic information.

How it looks: Graph 1 in this example shows the assessment mean score for the selected demographic. It displays the average percentage score achieved as well as the high and low results. Graph 2 shows the number of results for the selected demographic and includes an overall calculation of the number of results found.

You can assign several filters to limit the information included in the report:

  • Assessment (mandatory)
  • Group
  • Date
  • Attempt
  • Special field
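To make the report's two graphs concrete, here is a small sketch of the same summary computed by hand with pandas. The result records and field names are illustrative only, not the results warehouse schema.

    import pandas as pd

    # Hypothetical result records with a score and one demographic special field.
    results = pd.DataFrame({
        "department": ["Sales", "Sales", "Support", "Support", "Support"],
        "percentage_score": [82, 74, 91, 66, 70],
    })

    # Graph 1 shows the mean, high, and low scores per demographic;
    # Graph 2 shows the number of results. The same summary in code:
    summary = results.groupby("department")["percentage_score"].agg(
        mean="mean", high="max", low="min", count="count"
    )
    print(summary)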

Using the Assessment Completion Time Report

The Assessment Completion Time Report is one of three new reports in Questionmark Analytics. Here are the basics:

What it does: The assessment completion time report graphically monitors how long, on average, it took each participant to complete an assessment in relation to his or her overall score. For example, someone who took very little time to complete the assessment but achieved a very high score would be flagged. Might this person have had prior knowledge of the exam (e.g., prior exposure to the answer key)? On the flip side, someone who took a very long time to take the assessment but scored very low would be flagged, too. Might this person have been memorizing questions (i.e., taking exam content to sell to other test takers)?

Who should use it: This report provides valuable test security information for those administering medium/high stakes tests. It can also be used to help determine whether the allotted test taking time window is sufficient for most participants.

How it looks: The scatterplot shows the assessment mean for each participant (X-axis) plotted by the assessment completion time for each participant (Y-axis). Clusters of participants that stand out as having extreme combinations of assessment score and assessment completion time are evident in the scatter plot (e.g., participants with short completion times and high assessment scores would stand out in the lower right section of the graph).

Below the scatter plot is a table that lists participants who are flagged as having suspicious combinations of assessment score and assessment completion time. This list can be useful in conducting an investigation of what happened in a given context. Visit the Questionmark website to learn more about Questionmark Analytics.
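The report's exact flagging rules aren't documented here, but the idea of "extreme combinations" can be sketched with simple z-scores. The thresholds and data below are illustrative assumptions only.

    import numpy as np

    # Hypothetical data: percentage scores and completion times in minutes.
    scores = np.array([55.0, 62.0, 95.0, 48.0, 70.0])
    minutes = np.array([40.0, 45.0, 8.0, 85.0, 42.0])

    def z(x):
        # Standardize: how many standard deviations each value is from the mean.
        return (x - x.mean()) / x.std()

    zs, zt = z(scores), z(minutes)

    for i, (s, t) in enumerate(zip(zs, zt)):
        if s > 1 and t < -1:
            # Short time + high score: possible prior exposure to the answer key.
            print(f"participant {i}: high score, short time -> review")
        elif s < -1 and t > 1:
            # Long time + low score: possible content harvesting.
            print(f"participant {i}: low score, long time -> review")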