Three levels of assessment feedback

Posted by Joan Phaup

Feedback can be used not only to tell test or quiz participants their final results but also to add tremendous learning value to formative and diagnostic assessments. In a recent post in his own blog, our CEO Eric Shepherd describes three levels of feedback: Assessment Level, Topic Level, and Question/Item Level. He also explains the different types of feedback available at each of these levels.

You can get more advice about the use of feedback by reading Jim Farrell’s previous post on this subject and checking out the feedback descriptions on our Web site.

IMS Basic LTI: connector-less integration with the LMS

Posted by Steve Lay

At this year’s Questionmark Users Conference I ran a session in our Product Central strand about our Open Assessment Platform. An important part of this strategy is to adopt technical standards where they exist and encourage the development and adoption of standards where they don’t. Our Product Central sessions give customers a chance to look ahead at our product road map and to discuss and influence our direction.

One of the topics in my session was the future of LMS integration. During the session I demonstrated a prototype connection between the popular Moodle LMS and Questionmark Perception using the IMS Basic LTI standard. LTI stands for Learning Tools Interoperability and defines a standard way for a tool like Perception to be used or ‘consumed’ by a learning management system.

LTI solves a similar problem to our LMS-specific integration connectors, but in a standard way that can be supported directly by the LMS vendor. Basic LTI support is already available through a Moodle module (as demonstrated at the conference) and in Blackboard from release 9.1 SP4. There is also a plug-in available for Sakai, and a development project on the CodePlex open-source hosting site is working towards support for SharePoint. You can see the full list on the Basic LTI status page.

Basic LTI is a relatively new standard, so this adoption list is impressive; it usually takes years to reach this level of support. In my opinion, the reason for the rapid adoption is the simplicity of implementing the standard in a tool or product. The prototype Basic LTI support I demonstrated at the Users Conference was implemented in less than 300 lines of Python code.

A prototype Basic LTI connection between Moodle and Perception.
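
Under the hood, a Basic LTI launch is simply an HTTP form post from the LMS to the tool, signed with OAuth 1.0a using a shared key and secret. As a rough illustration of why so little code is needed, here is a hand-rolled Python sketch of how a tool provider might verify the HMAC-SHA1 signature on an incoming launch. It is not the prototype’s actual code, and the function names are my own; a real deployment would look up the consumer secret from the oauth_consumer_key launch parameter.

```python
# Minimal sketch of verifying a Basic LTI launch on the tool (provider) side.
# Illustrative only: not the prototype's actual code.
import base64
import hashlib
import hmac
from urllib.parse import quote


def percent_encode(value):
    # RFC 3986 percent-encoding, as required by OAuth 1.0a
    return quote(str(value), safe="~")


def lti_signature(method, url, params, consumer_secret):
    """Compute the OAuth 1.0a HMAC-SHA1 signature for a launch request."""
    # Sort and encode every parameter except the signature itself
    pairs = sorted(
        (percent_encode(k), percent_encode(v))
        for k, v in params.items()
        if k != "oauth_signature"
    )
    param_string = "&".join(f"{k}={v}" for k, v in pairs)
    base_string = "&".join(
        [method.upper(), percent_encode(url), percent_encode(param_string)]
    )
    # Basic LTI launches carry no token secret, hence the trailing '&'
    key = percent_encode(consumer_secret) + "&"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()


def is_valid_launch(url, params, consumer_secret):
    # Constant-time comparison against the signature sent by the LMS
    expected = lti_signature("POST", url, params, consumer_secret)
    return hmac.compare_digest(expected, params.get("oauth_signature", ""))
```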

The LTI adoption community is being spearheaded by Charles Severance, aka Dr. Chuck, who wrote in a recent blog post: “One key observation is that Basic LTI really reduces the barrier to entry for building a tool to plug into Sakai”. We’re excited about the potential that LTI holds for connecting Questionmark with a broad range of learning systems. Watch this space!

Using the Course Summary Report in Questionmark Analytics

As we roll out new report types in Questionmark Analytics, we are pleased to provide short how-tos explaining their uses and features.

The Course Summary Report

  • What it does: Compares course evaluation information across courses. The course summary report has a dynamically generated table representing the summary survey scores for all topics in each course.
  • Who should use it: Useful for managers and learning/education/training professionals to gauge and compare how participants rate different courses offered within an organization.
  • How it looks: The report contains three elements:
    1. A dynamically generated table displaying the average ratings for each course in each “topic” (or group) of questions. For example, a course evaluation might group questions about “course materials” in one topic and questions about “facilities” in another. The table includes data about the questions, ratings and number of responses, and is color-coded to highlight the high- and low-performing courses. (A sketch of this aggregation follows the list.)
    2. A bar-chart that graphically compares the average aggregate ratings for each course.
    3. A bar-chart that graphically compares the average ratings by topic for each course.
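
To make element 1 concrete, here is a rough Python sketch of the per-course, per-topic averaging that such a table represents. The courses, topics and ratings below are hypothetical, and Questionmark Analytics does this aggregation for you; the sketch just shows the shape of the calculation.

```python
# Illustrative sketch of the aggregation behind the report's main table:
# average rating per course per topic. The response records and the 1-5
# rating scale are hypothetical examples.
from collections import defaultdict

responses = [
    # (course, topic, rating)
    ("Auditing 101", "course materials", 4),
    ("Auditing 101", "course materials", 5),
    ("Auditing 101", "facilities", 3),
    ("Tax Update", "course materials", 4),
    ("Tax Update", "facilities", 5),
]

totals = defaultdict(lambda: [0, 0])  # (course, topic) -> [sum, count]
for course, topic, rating in responses:
    totals[(course, topic)][0] += rating
    totals[(course, topic)][1] += 1

for (course, topic), (total, count) in sorted(totals.items()):
    print(f"{course:14s} {topic:18s} avg {total / count:.2f} ({count} responses)")
```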

How many items are needed for each topic in an assessment? How PwC decides

Posted by John Kleeman

I really enjoyed last week’s Questionmark Users Conference in Los Angeles, where I learned a great deal from Questionmark users. One strong session, presented by Sean Farrell and Lenka Hennessy of PwC (PricewaterhouseCoopers), covered best practice in diagnostic assessments.

PwC prioritize the development of their people (they have been ranked #1 in Training Magazine’s Top 125 for the past three years), and their use of diagnostic assessments is part of this. They use diagnostic assessments for many purposes, one of which is to allow a test-out: such assessments cover the critical knowledge and skills taught in a training course, and people who pass can skip the course and avoid unnecessary training. Being smart accountants, they justify the assessments by the billable time saved on training that isn’t needed.

PwC use a five-stage model for diagnostic assessments, as shown in the diagram on the right: Assess, Design, Develop, Implement and Evaluate.

The Design phase includes blueprinting, starting from learning objectives. Customers often ask me how many questions or items they should include for each topic in an assessment, and I think PwC have a great approach to this. They rate all their learning objectives by Criticality and Domain size, as follows:

Criticality
1 = Slightly important but needed only once in a while
2 = Important but not used on every job
3 = Very important, but not used on every job
4 = Critical and used on every job

Domain size
1 = Small (less than 30 minutes to train)
2 = Medium (30-59 minutes to train)
3 = Large (60-90 minutes to train)

The number of items they use for each learning objective is the Criticality multiplied by the Domain size. So for instance if a learning objective is Criticality 3 (very important but not used on every job) and Domain size 2 (medium), they will include 6 items on this objective in the assessment. Or if the learning objective is Criticality 1 and Domain size 1, they’d only have a single item.
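
To make the arithmetic concrete, here is a small Python sketch of the blueprinting rule. The objectives and ratings below are hypothetical examples rather than PwC’s actual blueprint.

```python
# Sketch of the blueprinting rule described above: the number of items
# for a learning objective is Criticality (1-4) times Domain size (1-3).
# The objectives below are hypothetical examples.
def items_for_objective(criticality, domain_size):
    assert 1 <= criticality <= 4 and 1 <= domain_size <= 3
    return criticality * domain_size

blueprint = {
    # objective: (criticality, domain size)
    "Recognize revenue correctly": (4, 3),  # critical, large -> 12 items
    "Apply the expenses policy": (3, 2),    # very important, medium -> 6 items
    "File a travel claim": (1, 1),          # slightly important, small -> 1 item
}

for objective, (c, d) in blueprint.items():
    print(f"{objective}: {items_for_objective(c, d)} items")

print(f"Total assessment length: "
      f"{sum(items_for_objective(c, d) for c, d in blueprint.values())} items")
```

A nice property of the rule is that assessment length scales with both the stakes and the amount of content, so the total test length falls out automatically once the objectives have been rated.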

I was very impressed by the professionalism of PwC and our other users at the conference. This seems a very useful way of deciding how many items to include in an assessment, and I hope passing on their insight is useful for you.

Including a Questionmark Knowledge Check within SharePoint is easier than you think

Posted by John Kleeman

Many Questionmark customers use SharePoint within their organizations. Microsoft SharePoint is a fantastic tool that lets non-technical people create collaborative web sites, and it is a great system in which to deploy assessments for learning, training and compliance.

One of the easiest ways to include an assessment inside SharePoint is as a knowledge check: you can easily put a Questionmark Perception assessment beside some learning content, as in the screenshot below.

A Questionmark assessment embedded in a SharePoint 2010 page.

Putting a knowledge check in a SharePoint page gives three benefits:

  • The learner can check that he or she understands the material
  • The learner gets retrieval practice to reinforce the learning
  • As author, you can run reports to see which parts of the learning are understood or missed

To help people get the benefits of using assessments inside SharePoint, Questionmark have launched a new blog, http://blog.sharepointlearn.com, which focuses on SharePoint and assessment. This will let us run more detailed articles on SharePoint and assessments than the main blog can accommodate.

SharePoint is a lot easier to use than many people think: you don’t need administrative rights or programming skills to do most things. At the Questionmark Users Conference last week, I ran a session in which people added an assessment to a sandbox site in just a few minutes. You can include an assessment inside SharePoint using the Page Viewer Web Part, which most people who can edit SharePoint pages have access to. If you want to give it a go, here are some instructions from the new blog.

Questionmark Users Conference: It’s a wrap!

Posted by Joan Phaup

As we close this year’s Questionmark Users Conference I am — as always — amazed at how quickly the time has gone by since we arrived in Los Angeles a few days ago.

During yesterday’s keynote address, Bryan Chapman encouraged all of us to think more strategically about the use of assessments such as surveys, quizzes and tests. He introduced several scenarios – such as inefficiencies in aircraft maintenance, deficiencies in call center service quality and the need for retail associates to communicate effectively about thousands of products – and in each case asked how assessments can help. He answered that question with some terrific examples of assessment solutions that improved performance and saved organizations time and money. He also showed how some organizations are using assessments to enhance social and informal learning, which was a key theme at this year’s conference.

Another highlight from yesterday was the presentation of our first Questionmark Assessment Innovation Award to Accenture, which runs more than 30 internal certification programs.

Our conferences are never complete without some fun, and we had plenty of that during our big dinner party at Universal Studios, where everyone was a celebrity! For more photos from the conference, check out our Flickr page.
