Managing a complex testing environment with a QMWise-based dashboard

Posted by Chloe Mendonca

Earlier this week I spoke with Gerard Folkerts of Wageningen University, who will be delivering a presentation at the upcoming Questionmark 2013 European Users Conference in Barcelona, November 10 – 12.

In their case study presentation, Gerard and his co-presenter, Gerrit Heida, will share their experience using the QMWISe API to create a dashboard for their Questionmark Perception installation.

Could you tell me about Wageningen University and how you use Questionmark assessments?

Wageningen University is the only university in the Netherlands to focus specifically on the themes of ‘healthy food and the living environment’ — and for the eighth year in a row we’ve been awarded the title of “Best University in the Netherlands”.

We use Questionmark for summative and formative assessments. We run about 10 to 20 summative assessments each period, with up to 500 participants simultaneously. Formative assessments are also available to students. In addition to these web-based assessments, we have a setup for running tests in a secure environment that requires the use of Microsoft Office, SAS, SPSS and similar applications.

Could you tell me a little more about this secure setup?

The secure test environment turns a regular PC room into a test center. Each computer inside the room is cut off from communication with the outside world. With our in-house developed tools we are able to deliver the documents needed for the assessment to each participant. Such an assessment could be writing a report based on data analysis within SPSS. At the end of the assessment, all the participants' work is stored in a central location.

How do you effectively manage numerous participants taking simultaneous tests?

First of all, we have set up a number of procedures to ensure a stable testing environment. We have separated the environments needed for question development, formative testing and summative testing. Multiple administrators work in the same environment and can schedule assessments to start simultaneously. With Questionmark Web Integration Services environment (QMWISe), we created a dashboard to get a real-time view of how many assessments are scheduled and how many participants are enrolled in them.

How has this helped to control your environment?

Using the standard QMWISe API makes it possible to build a dashboard that shows this data, and with the information already there, it is easy to do a basic health check on the assessments. For technical support it is essential to predict how much load to expect on your QMP farm, so the dashboard has helped us get our QMP environment under control.
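As a rough illustration of the kind of processing such a dashboard performs, here is a minimal Python sketch that tallies scheduled participants from a QMWISe-style XML response. The element names, attributes and response shape below are assumptions made for illustration only, not the actual QMWISe schema; a real dashboard would retrieve the XML from the QMWISe SOAP endpoint rather than using a hard-coded sample.

```python
import xml.etree.ElementTree as ET

# Illustrative schedule listing in the spirit of a QMWISe response.
# The element names here are hypothetical, not the real QMWISe schema.
SAMPLE_RESPONSE = """
<ScheduleList>
  <Schedule>
    <AssessmentName>Statistics Exam</AssessmentName>
    <ParticipantCount>420</ParticipantCount>
  </Schedule>
  <Schedule>
    <AssessmentName>Chemistry Quiz</AssessmentName>
    <ParticipantCount>35</ParticipantCount>
  </Schedule>
</ScheduleList>
"""

def summarize_schedules(xml_text):
    """Return a mapping of assessment name -> total enrolled participants."""
    root = ET.fromstring(xml_text)
    summary = {}
    for sched in root.iter("Schedule"):
        name = sched.findtext("AssessmentName")
        count = int(sched.findtext("ParticipantCount", "0"))
        summary[name] = summary.get(name, 0) + count
    return summary

if __name__ == "__main__":
    totals = summarize_schedules(SAMPLE_RESPONSE)
    # A simple health check: warn if expected load exceeds farm capacity.
    FARM_CAPACITY = 500  # hypothetical concurrent-participant limit
    expected_load = sum(totals.values())
    for name, count in totals.items():
        print(f"{name}: {count} participants enrolled")
    if expected_load > FARM_CAPACITY:
        print(f"WARNING: expected load {expected_load} exceeds capacity")
```

A dashboard built this way simply polls the API on a schedule, aggregates the counts per assessment, and flags any period where the expected concurrent load approaches the farm's capacity.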

How do you hope that your session will benefit others?

I hope that my session will give some insight into the possibilities of QMWISe. Once you understand how it works, it is not that hard to add custom modules to the Questionmark Perception environment. I think the API is undervalued.

What are you looking forward to most about the Users Conference?

Meeting with other Questionmark users and learning about the roadmap of Questionmark Perception. And of course a visit to Barcelona…

Check out the conference program and click here to register. We hope to see you there!


Assessment types and their uses: Formative

Posted by Julie Delazyn

Assessments have many different purposes, and to use them effectively it’s important to understand their context and uses within the learning process.

Last week I wrote about diagnostic assessments, and today I’ll explore formative assessments.

Typical uses:

  • Strengthening memory recall and correcting misconceptions
  • Promoting confidence in one’s knowledge
  • Enhancing learning by directing attention to and creating intrigue about a given subject
  • Measuring learners’ knowledge or skills and telling them how they’re doing
  • Giving learners search and retrieval practice and prescriptive feedback

Types:

  • Quizzes
  • Practice tests and exams
  • Self-assessments

Stakes: low

Example: An instructor gives a quiz to help reassure students that they’re actually learning — or alert them that they are not learning and provide feedback to correct any misconceptions. Students can use this feedback as a study guide to understand where they’re going right and where they’re going wrong. Students also benefit from the search and retrieval practice they’ve had while taking the quiz – which can help them remember the material in the future. Formative assessments give instructors a way to ask students: “Did you get that?” Sometimes, a series of quizzes is used to collect data that contribute to overall grades – but generally, formative assessments serve as check-ups on learners’ understanding and guideposts for further progress.

For a fuller analysis of assessments and their uses check out the white paper, Assessments Through the Learning Process. You can download it free here, after login. Another good source for testing and assessment terms is our glossary.

In the coming weeks I’ll take a look at three remaining assessment types:

  • Needs
  • Reaction
  • Summative

What makes a good diagnostic question?

Posted by John Kleeman

What makes a good diagnostic question?

First, it should be almost impossible for someone to get the right answer for the wrong reason: A participant will only get the question right if he/she has the right idea about whatever the instructor wants them to be able to know, understand or do.

Second, wrong answers should be interpretable: If a participant chooses a particular wrong response, the instructor should be able to guess why the person has done so and what misconception he/she has.

So suggests Dr. Dylan Wiliam in his excellent new book, Embedded Formative Assessment (published by Solution Tree Press, and recommended). A common use for diagnostic questions is to find out whether participants have understood your instruction – telling you whether you can go on to another topic or need to spend more time on this one. And if participants get it wrong, you want to understand why, in order to correct the misconception. Good diagnostic questions involve deep domain knowledge, as you have to understand why learners are likely to answer in a particular way.

One tactic for creating diagnostic questions is to look at answers that students give in open-ended questions and choose common ones as distractors in multiple choice questions.

Here is an example of a multiple response diagnostic question quoted in the book:

Example question

There are 64 possible answers to the question; the right answers are B and D. It’s pretty unlikely that someone who does not understand Pythagoras’ rule will get the question right, and if they get it wrong, there will be good clues as to why.

Questions like this can be hinge-points in instruction – they give you evidence as to what you do next. Do you need to continue instruction and practice on this topic, or can you move on to the next topic?

Considering key applications for mobile assessments

Posted by John Kleeman

Questionmark have recently introduced multi-lingual apps to deliver assessments on Android, iPhone and iPad mobile devices, and I spent time at the European Users Conference last week demonstrating our Android app on my phone. In listening to our users talk about how they might deploy assessments on mobile devices, two areas attracted the most interest:

  1. Formative assessments. Assessments help learning; so giving quizzes to people on the device that they carry around with them all the time is interesting to many, and is easy to do. It’s attractive that with Questionmark, you author the assessments once, and participants can take them on their device of choice.
  2. Observational assessments. There is a big need to assess people doing things in work situations, whether it’s how a welder uses a machine, how a nurse interviews a patient or how a soldier cleans his equipment. It’s not practical to carry a laptop into many work environments, but it is practical to fill in questions on a mobile device while observing someone, so this possibility attracted a lot of interest.

Questionmark will shortly be improving our software for use for observational assessments – so watch this space.

P.S. I’m very excited to hear that Bryan Chapman is going to be the keynote speaker for our next users conference in Los Angeles, March 15 – 18, 2011. He is a real thought leader in our industry and someone I admire greatly – he introduced me and Questionmark to the AICC standard back in the 1990s and encouraged us to support it – and we’ve now been re-certified 5 times!

Podcast: Assessments for workers all across the globe

Posted by Joan Phaup

I spent some time talking the other day with Bon Crowder, a global instructional consultant for a large oilfield services company. She explained how her organization has expanded its use of online assessments to include not only high-stakes exams and certifications but also formative assessments such as quizzes. With participants all over the world — and having recently launched an assessment to 40,000 people — Bon’s organization values the ability to gather and analyze data that will help improve instructional programs.

We touched on many subjects, including the involvement of subject matter experts (SMEs) in using Questionmark Live to create assessment content, options for monitoring some higher-stakes assessments taken outside of testing centers, and a technique Bon has devised for easily creating multiple math-related questions from a single question stem.