Questionmark Perception Blackboard Connector

Posted by Julie Delazyn

Questionmark Perception easily integrates with various third-party applications using “standards-based” integrations or specifically created connectors. A great example of this is the integration between Perception and Blackboard: the Questionmark Perception Blackboard Connector enables institutions to combine the best-of-breed assessment management and delivery features of Perception with the learning management features of the Blackboard Learn™ Platform.

Blackboard instructors can seamlessly access Perception to create and modify Perception assessments. Once the assessments are created, instructors can assign them to courses on their Blackboard course web site. Students can launch their assessments from within Blackboard, and Perception marks the answers provided before passing the results back to Blackboard.

For information about how to install the Blackboard Connectors for Perception, click here.

Providing user assistance by role

Posted by John Kleeman

Questionmark provides a wealth of documentation and assistance resources for our users. For version 5 of Questionmark Perception, there are more than 30 manuals and more than 400 detailed knowledge base articles on our technical support site. We also have a developer support site, a learning cafe, best practice guides and more, including this blog and our Community Spaces social networking site, which offers interactive forums and other resources for our software support plan customers.

Customers can find documentation by using search and drill-down, but to make it easier for them to find what they’re looking for, we’re introducing a role-based look-up system for our user assistance resources. This is now available to Questionmark software support plan (SSP) customers.

Obviously, work roles vary between organizations, but we’ve identified three main sectors for our customers: Corporate/Government, Academic and Awarding bodies. We have also defined 15 or so roles in each of these sectors, and for each role we have a landing page of helpful resources. Over time we will extend this to allow searching and knowledge checks by role.

Here are the roles we’ve defined. (You’ll need SSP registration in Questionmark Community Spaces to view these pages.)

As always, we’d welcome feedback on whether these roles make sense for you. Please let us know whether the role pages are helpful and/or how we could improve them.

Import Blackboard 8 and 9 Question Pools into Questionmark Live

Posted by Jim Farrell

Since introducing Questionmark Live over a year ago we have received lots and lots of feedback on how we could improve it. One frequent suggestion has been to add more ways to import questions easily. To date, the most popular import methods have been LXR, Questionmark Live CSV, and Blackboard.

We have also had a lot of feedback recently on improving our Blackboard importer. Well, you have spoken and we have responded: the Questionmark Live team has expanded the Blackboard question import tool to include question pools from Blackboard 8 and Blackboard 9.

Make sure you take advantage of Questionmark Live and get everyone on your staff writing questions that you can use today!

Integrate Questionmark Perception with Saba Learning

Questionmark Perception easily integrates with various third-party applications using “standards-based” integrations or specifically created connectors. One example of a standards-based integration is the pairing of Questionmark Perception and Saba via the AICC standard.

Questionmark Perception version 5 assessments are fully interoperable with Saba Learning. Saba users can easily launch Perception assessments from within the Saba software. Perception provides analysis and statistics reports, while Saba controls the participant data. Results from the assessment are automatically passed back to Saba, where you can view participants’ results, or you can log in to Enterprise Manager to see detailed reports.

For details about integrating Perception Version 5 and Saba, click here.

Off to Amsterdam October 3 – 5 for the next European Users Conference

Posted by Mel Lynch

Questionmark users are once again anticipating two days of learning and networking at the Questionmark 2010 European Users Conference.

We are very excited about this year’s gathering, to be held in Amsterdam on October 3 – 5. Judging from registrations to date, it’s looking to be the biggest European Users Conference ever. The fact that we are meeting in Amsterdam may have something to do with it, as it is definitely a destination worth visiting!

The conference offers a great opportunity to meet with fellow learning and assessment professionals to discuss best practices and to get better at using online surveys, quizzes, tests and exams. The agenda is really starting to take shape, with much of the programme confirmed and some great sessions lined up.

Of course, the Questionmark tradition of taking attendees on a fun-filled evening event has not been forgotten, with the canals of Amsterdam playing host to this year’s Monday night social event.

If you are thinking about joining us in Amsterdam but are still not quite sure what it’s all about, then take a moment to view this Video Invitation from Questionmark’s CEO Eric Shepherd – in a little over a minute (1 minute, 22 seconds to be exact!), you’ll get an overview of what the conference has in store for you. But remember: don’t procrastinate too long, as spaces are limited and we don’t want you to miss out.

Register now or visit the conference website for further information.

Should I include really easy or really hard questions on my assessments?

Posted by Greg Pope

I thought it might be fun to discuss something that many people have asked me about over the years: “Should I include really easy or really hard questions on my assessments?” It is difficult to provide a simple “Yes” or “No” answer because, as with so many things in testing, it depends! However, I can provide some food for thought that may help you when building your assessments.

We can define easy questions as those with high p-values (item difficulty statistics), such as 0.9 to 1.0 (90-100% of participants answer the question correctly). We can define hard questions as those with low p-values, such as 0 to 0.15 (0-15% answer the question correctly). These ranges are fairly arbitrary: some organizations in some contexts may consider greater than 0.8 easy and less than 0.25 difficult.
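
To make these definitions concrete, here is a minimal sketch (in Python, not part of Questionmark's products) of how p-values could be computed from a scored response matrix. The response data and the classification thresholds are purely illustrative, based on the ranges mentioned above.

```python
# Hypothetical scored responses: rows are participants, columns are questions,
# 1 = correct, 0 = incorrect. (Illustrative data only.)
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [1, 0, 0, 0],
    [1, 1, 0, 1],
]

num_participants = len(responses)
num_questions = len(responses[0])

for q in range(num_questions):
    # p-value = proportion of participants answering the question correctly
    p_value = sum(row[q] for row in responses) / num_participants
    if p_value >= 0.9:
        label = "easy"
    elif p_value <= 0.15:
        label = "hard"
    else:
        label = "moderate"
    print(f"Question {q + 1}: p = {p_value:.2f} ({label})")
```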

When considering how easy or difficult questions should be, start by asking, “What is the purpose of the assessment program and the assessments being developed?” If the purpose of an assessment is to provide a knowledge check and facilitate learning during a course, then maybe a short formative quiz would be appropriate. In this case, one can be fairly flexible in selecting questions to include on the quiz. Having some easier and harder questions is probably just fine. If the purpose of an assessment is to measure a participant’s ability to process information quickly and accurately under duress, then a speed test would likely be appropriate. In that case, a large number of low-difficulty questions should be included on the assessment.

However, in many common situations having very difficult or very easy questions on an assessment may not make a great deal of sense. For a criterion-referenced example, if the purpose of an assessment is to certify participants as knowledgeable and skilful enough to do a certain job competently (e.g., crane operation), the difficulty of questions would need careful scrutiny. The exam may have a cut score that participants need to achieve in order to be considered good enough (e.g., 60+%). Here are a few reasons why having many very easy or very hard questions on this type of assessment may not make sense:

Very easy items won’t contribute a great deal to the measurement of the construct

A very easy item that almost every participant gets right doesn’t tell us a great deal about what the participant knows and can do. A question like: “Cranes are big. Yes/No” doesn’t tell us a great deal about whether someone has the knowledge or skills to operate a crane. Very easy questions, in this context, are almost like “give-away” questions that contribute virtually nothing to the measurement of the construct. One would get almost the same measurement information (or lack thereof) from asking a question like “What is your shoe size?” because everyone (or mostly everyone) would get it correct.

Tricky to balance blueprint

Assessment construction generally requires following a blueprint that needs to be balanced in terms of question content, difficulty, and other factors. It is often very difficult to balance these blueprints for all factors, and using extreme questions makes this all the more challenging because there are generally more questions available that are of average rather than extreme difficulty.

Potentially not enough questions providing information near the cut score

In a criterion-referenced exam with a cut score of 60%, one would want the most measurement information in the exam near this cut score. What do I mean by this? Well, questions with p-values around 0.60 will provide the most information about whether participants just have, or just don’t have, the knowledge and skills to pass. This topic requires a more detailed look at assessment development techniques that I will elaborate on in an upcoming blog post!

Effect of question difficulty on question discrimination

The difficulty of a question affects its discrimination (item-total correlation) statistic. Extremely easy or extremely hard questions have a harder time obtaining the high discrimination statistics that we look for. The graph below shows the relationship between question difficulty p-values and item-total correlation discrimination statistics. Notice that the questions (the little diamonds) with very low and very high p-values also have very low discrimination statistics, while those around 0.5 have the highest discrimination statistics.
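
If you would like to experiment with this relationship on your own data, here is a minimal sketch (again using hypothetical scored responses, not Questionmark output) of a corrected item-total correlation: each question's scores are correlated with the total score of the remaining questions, so an item is not correlated with itself.

```python
from math import sqrt

# Hypothetical scored responses: rows are participants, columns are questions,
# 1 = correct, 0 = incorrect. (Illustrative data only.)
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [1, 0, 0, 0],
    [0, 1, 0, 1],
]

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / sqrt(var_x * var_y) if var_x and var_y else 0.0

num_questions = len(responses[0])
for q in range(num_questions):
    item_scores = [row[q] for row in responses]
    # Total score with the question itself removed ("corrected" total)
    rest_scores = [sum(row) - row[q] for row in responses]
    r = pearson(item_scores, rest_scores)
    print(f"Question {q + 1}: item-total correlation = {r:.2f}")
```

Questions that almost everyone answers correctly (or incorrectly) have very little score variance, so their correlation with the total score, and hence their discrimination, is necessarily low.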
