Crowdsourcing Content

Posted by Jim Farrell

Jay Cross recently mentioned in his Learning Blog a Harvard Business blog post by John Hagel III, John Seely Brown, and Lang Davison about The Collaboration Curve.  I was struck by this statement from their article:

…the more participants–and interactions between those participants–you add to a carefully designed and nurtured environment, the more the rate of performance improvement goes up

This is why Questionmark Live is so important. Think of it as crowdsourcing content from participants who may previously have been indifferent to the assessment process within your organization. As participation and collaboration increase, your questions will improve in quality and become far more job-relevant. Look at the following quote from our friends at the Harvard Business blog and replace “collaboration curves” with “Questionmark Live”:

Collaboration curves hold the potential to mobilize larger and more diverse groups of participants to innovate and create new value.

Questionmark Live is a vital tool in the creation of a true learnscape.

Being a Good SME Wrangler

Posted by Jim Farrell

I was recently demonstrating Questionmark Live to one of our customers, and he told me about the “Data Wrangler” job in animated movies. Basically, that person’s job is to collect all the work from the animators. So all of you instructional designers now have a new role to add to your resume…SME Wrangler. It might actually be one of your more taxing and complicated duties.

The eLearning Guild recently held an online forum titled “To SME or Not to SME: Tips for Working with your Customer and your Team.” I loved the title, but I think “How to Be a Good SME Wrangler” has a little more bite. As learning professionals we have to wear many different hats and foster different relationships. The relationship you have with your SMEs can often make or break your training program. One of the presentations touched on seven tools of the trade for working successfully with SMEs. The number one item on the list was resources. The presenters mainly discussed having a well-thought-out design document and project plan, but I immediately thought about Questionmark Live. What better way to establish a relationship with the experts in your company than to give them a tool that not only empowers them to transfer their knowledge but also immediately involves them in creating deliverables that will be used by their peers?

Make sure that when you take a look at Questionmark Live you think of your role as a SME Wrangler and how this tool could help you foster a successful relationship with your SMEs.

Summative e-assessment quality

Posted by John Kleeman

I’d like to highlight an important but not yet widely disseminated report that sets out best practices and recommendations on quality for summative e-assessment. It’s a must-read for anyone implementing summative assessments in an academic environment, and worth reading for those outside colleges and universities as well.


The report, by the REAQ project team, was commissioned by JISC in the UK and produced by the Learning Societies Lab at the University of Southampton, a center of expertise in e-learning and e-assessment. An expert panel of experienced professionals, including Greg Pope and me from Questionmark, reviewed and fed into the work.

The report asks interviewees who use e-assessment in practice what they think high quality means and compares this with the theory of what high quality should be. One striking finding is that the experts considered the most important factors for quality in e-assessment to be psychometric, starting with validity and reliability, whereas the practitioners ranked practical delivery issues (security, reliability and accessibility) highest, along with how innovative they are able to be. Part of this has to do with differences in perspective, but part of it is also that psychometrics is not as well understood as it should be. One of the report’s recommendations is that JISC should set up workshops or other dissemination of psychometric principles.
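
If psychometric quality measures are unfamiliar, here is a minimal sketch of one of the basics the experts emphasize: Cronbach’s alpha, a common estimate of a test’s internal-consistency reliability. This is my own illustration; the report itself prescribes neither this particular coefficient nor any code.

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Reliability estimate: alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
        k = scores.shape[1]                          # number of items on the test
        item_vars = scores.var(axis=0, ddof=1)       # variance of each item across test-takers
        total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical responses: rows are test-takers, columns are right/wrong items.
    responses = np.array([[1, 1, 1],
                          [1, 0, 1],
                          [0, 1, 0],
                          [0, 0, 0]])
    print(f"alpha = {cronbach_alpha(responses):.2f}")  # 0.60 for this toy data

Values closer to 1 suggest the items measure the same thing consistently; understanding measures like this is the kind of psychometric grounding the report recommends JISC help disseminate.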

The report also includes much advice on process and good practice, from both the practitioner and the expert perspectives. Recommended reading.

Do You Know How to Write Good Test Questions?

Posted by Howard Eisenberg

I had a typical education.  I took lots of tests.  Knowing what I know now about good testing practice, I wonder how many of those tests really provided an accurate measure of my knowledge.

Common testing practices often contradict what is considered best practice.  This piece will focus on four of the most common “myths” or “mistakes” that teachers, subject matter experts, trainers and educators in general make when writing test questions.

1) A multiple choice question must have at least four choices.  False.
Three to five choices is considered sufficient. Of course, the fewer the choices, the greater the chance a test-taker can guess the correct answer. The point, however, is that you don’t need four choices: if you are faced with adding an implausible or nonsensical distracter just to make four, it won’t add any measurement value to the question anyway. You might as well leave it at three choices.
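
To put rough numbers on that guessing trade-off (a back-of-the-envelope illustration of my own, not a figure from any standard):

    # Expected score on a 20-item test answered entirely by blind guessing,
    # for different numbers of choices per item. Purely illustrative.
    ITEMS = 20
    for n_choices in (3, 4, 5):
        expected = ITEMS * (1 / n_choices)
        print(f"{n_choices} choices per item: expect {expected:.1f}/{ITEMS} correct by chance")

Dropping from four choices to three raises the chance-alone score from 5 to about 6.7 out of 20: a real difference, but a modest one, and arguably a better bargain than padding every item with a distracter nobody would pick.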

2)  The use of “all of the above” as a choice in a multiple choice question is good practice.  False.
It may be widely used, but it is poor practice. “All of the above” is almost always the correct answer. Why else would it be there? It is tacked onto a multiple choice question so the question can have only one best answer; after all, writing plausible distracters is difficult. A test-wise candidate knows that if at least two of the other choices answer the question, “all of the above” must be the answer. There is no need to consider the remaining choices.
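
A quick simulation makes the point. Suppose “all of the above” is keyed correct on most items where it appears; the 80% rate below is an assumption for illustration only. A test-wise candidate who always picks it beats the blind guesser handily:

    import random

    random.seed(1)
    TRIALS = 10_000
    P_AOTA_IS_KEY = 0.8   # assumed rate at which "all of the above" is the keyed answer
    N_CHOICES = 4

    blind = testwise = 0
    for _ in range(TRIALS):
        aota_is_key = random.random() < P_AOTA_IS_KEY
        blind += random.random() < 1 / N_CHOICES    # picks one of the four choices at random
        testwise += aota_is_key                     # always picks "all of the above"

    print(f"blind guesser:     {blind / TRIALS:.0%} correct")
    print(f"test-wise guesser: {testwise / TRIALS:.0%} correct")

Such an item ends up measuring test-wiseness rather than knowledge of the subject matter.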

3) Starting a question with “Which of the following is not …” is considered best practice.  False.

First, the use of negatives in test questions should be avoided (unless you are trying to measure a person’s verbal reasoning ability). Second, the “which of the following …” form usually results in a question that only tests basic knowledge or recall of information presented in the text or in the lecture. You might as well be asking: “Which of the following sentences does not appear exactly as it did in the manual?”

A) Copy > paste (from manual) choice 1
B) Copy > paste choice 2
C) Copy > paste choice 3
D) Make something up

While that may have some measurement value, my experience tells me that most test writers prefer to measure how well a person can apply knowledge to solve novel problems.  This type of question just won’t reach that level of cognition.  If you really want to get to problem-solving, consider using a real-world scenario and then posing a question.

4) To a subject matter expert, the correct answer to a good test question should be apparent.  True.

A subject matter expert knows the content.  A person who really knows the content should be able to identify the best answer almost immediately.  Test writers often hold the misconception that a good test question is one that is tricky and confusing.  No, that’s not the point of a test.  The point is to attain an accurate measure of how well a person knows the subject matter or has mastered the domain.  The question should not be written to trick the test-taker, let alone the expert. That just decreases the value of the measurement.

There are many more “do’s” and “don’ts” when it comes to writing good test questions.  But you can start to improve your test questions now by considering these common misconceptions as you write your next test.

Introducing Questionmark Live

Posted by Jim Farrell

Questionmark Live is our new web-based authoring tool that allows you to harvest content from subject matter experts (SMEs) and import it into Questionmark Perception.

So what does that mean to you and your organization? Think about how you are currently getting information from your SMEs. Is it always timely? Is it in a format that is easy for you to use? Our aim in creating this content harvesting tool was to help SMEs create items quickly and easily.

  • Create five different question types
  • Use a multi-lingual interface
  • Create an unlimited number of questions
  • Provide choice-based feedback
  • Add images and links to questions
  • Download or email questions to a Perception user
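
To make the harvest-and-import idea concrete, here is a sketch of how a harvested question might be represented and serialized for hand-off. The field names and XML layout are illustrative assumptions of mine, not Questionmark Live’s actual export format:

    from dataclasses import dataclass, field
    from xml.etree.ElementTree import Element, SubElement, tostring

    @dataclass
    class HarvestedQuestion:
        # Hypothetical structure for one SME-authored multiple choice item.
        stem: str
        choices: list
        correct_index: int
        feedback: dict = field(default_factory=dict)  # choice-based feedback by index

    def to_xml(q: HarvestedQuestion) -> str:
        """Serialize one question to a generic XML record for hand-off."""
        root = Element("question", type="multiple_choice")
        SubElement(root, "stem").text = q.stem
        for i, text in enumerate(q.choices):
            choice = SubElement(root, "choice", correct=str(i == q.correct_index).lower())
            choice.text = text
            if i in q.feedback:
                SubElement(choice, "feedback").text = q.feedback[i]
        return tostring(root, encoding="unicode")

    q = HarvestedQuestion(
        stem="Which tool lets SMEs author questions in a browser?",
        choices=["Questionmark Live", "A word processor", "A spreadsheet"],
        correct_index=0,
        feedback={1: "Documents still need manual conversion before import."},
    )
    print(to_xml(q))

However the hand-off happens in practice, the win is the same: SMEs enter structured items once, and nothing needs to be re-keyed from emails or documents.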

Our tutorial will show you how easy it is to use this new tool, which we are providing without charge to active Questionmark Software Support Plan customers. (Other Questionmark Communities members can request a free 30-day trial.)

Learn more about Questionmark Live.

Access Questionmark Live or request a 30-day trial. (You may be prompted for your Questionmark Communities password.)

Soft Scaffolding and Other Patterns for Formative Assessment

Posted by Steve Lay

As someone involved in software development, I’m used to thinking about ‘patterns’ in software design.  Design patterns started life as a way of looking at the physical design of buildings.  More recently, they’ve been used to identify solutions to common design problems in software.  One of the key aspects of pattern use is that patterns are named, and these names can be used as a vocabulary to help designers implement solutions in software.

So I was interested to see the technique discussed in the context of formative assessment design by the recent JISC project on Scoping a Vision for Formative e-Assessment. In the final report, the authors document patterns for formative assessment as a way of bridging the gap between practitioners and those building software to support them.

The patterns have wonderful names like “Classroom Display,” “Round and Deep” and “Objects To Talk With” that entice me to want to use them in my own communications.

To give an example of how one might apply the theory, let’s take a design problem identified in the report. Given that the point of formative assessment is to inform future learning activities, it is not surprising that in some environments outcomes are used too rigidly to determine the paths students take, resulting in a turgid experience. What you need, apparently, is “soft scaffolding,” which describes solutions that soften the restrictions on the types of responses or paths a student can take with a resource: for example, by providing free-text ‘other’ options in MCQs or replacing rigid navigation with recommendations and warnings.
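
As a rough sketch of what “soft scaffolding” might look like in code (my own illustration of the pattern, not something the report supplies): an MCQ that accepts a free-text “other” response, and branching that recommends a path rather than forcing one.

    from dataclasses import dataclass

    @dataclass
    class SoftMCQ:
        # An MCQ that also accepts an unlisted, free-text answer.
        stem: str
        choices: list

        def answer(self, response: str) -> str:
            if response in self.choices:
                return f"Recorded choice: {response}"
            # Soft scaffolding: keep the unlisted response for tutor review
            # instead of rejecting it outright.
            return f"Recorded free-text answer for review: {response!r}"

    def recommend_next(score: float, threshold: float = 0.5) -> str:
        """Soften rigid branching: warn and recommend, but let the student choose."""
        if score < threshold:
            return ("Recommendation: revisit the practice material first. "
                    "You may continue anyway.")
        return "Recommendation: continue to the next topic."

    q = SoftMCQ("What most limits collaboration in your team?", ["Time", "Tools", "Trust"])
    print(q.answer("Organisational culture"))  # unlisted, but still captured
    print(recommend_next(0.4))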

You can jump straight to the patterns themselves on the project wiki.
