Embedding Assessments in E-Blogger

Embed a Questionmark Perception assessment, survey or quiz within an E-Blogger blog.

  • To see how this would look, see a snapshot of an assessment embedded within an IFrame using E-Blogger.
  • Check out this how-to on our developer web site.
  • E-Blogger is a free weblog publishing tool from Google for sharing text, photos and video. You can display an assessment in this blogging platform by embedding the HTML code of your assessment in an IFrame, as sketched below.
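
For readers who want a concrete starting point, here is a minimal IFrame sketch. The URL, width and height are placeholders; substitute the link to your own published assessment and follow the how-to above for the full steps.

    <!-- Hypothetical example: replace the src value with your own assessment URL -->
    <iframe src="https://example.com/your-assessment-url"
            width="800" height="600" frameborder="0">
      Your browser does not support IFrames.
    </iframe>

Paste the snippet into the blog editor’s HTML view (rather than the visual editor) so the markup is not escaped.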

Questionmark Conference Update: Tech Training Topics and More

Posted by Joan Phaup

Questionmark users gathering in Los Angeles March 15 – 18 for the Questionmark 2011 Users Conference will have a host of breakout sessions to choose from. The conference schedule is being updated frequently to reflect additions to the program. To date, we have seven Tech Training sessions on tap — some for beginners and others for intermediate and advanced Questionmark users:

  • Introduction to Questionmark Perception for beginners
  • Advanced Authoring Techniques Using Authoring Manager (Intermediate/Advanced)
  • Planning Your Migration from Perception v4 to v5 (Intermediate/Advanced)
  • Configuring the User Experience and Understanding Templates in Perception v5 (Intermediate/Advanced)
  • Authoring with Questionmark Live – A hands-on introduction (Bring your own laptop!) (Beginning/Intermediate)
  • Analyzing and Sharing Assessment Results (Beginning/Intermediate)
  • Integrating Perception with Other Systems (Advanced)

These are in addition to a keynote by Bryan Chapman on Assessment’s Strategic Role in Enterprise Learning and concurrent sessions including customer case studies, peer discussions, drop-in demos and best practice presentations on everything from principles of item and test analysis to design strategies for mobile devices.

Tech Central and Product Central offer participants the opportunity to meet with Questionmark technicians and product managers respectively. Product Central focus groups will explore potential new products and features, while Tech Central drop-ins provide the opportunity to touch base with tech support reps, ask questions and check on the progress of current tech support issues.

Participants in this conference are enthusiastic about the value they get from three days of learning with Questionmark staff, fellow Questionmark users and learning industry experts. Early-bird registration savings are available through January 21st, 2011 — so keep an eye on the conference program and register soon!

Meaningful Feedback: Some good learning resources

Posted by Jim Farrell

December is the time to take stock of the year that’s winding down, and a highlight for me in 2010 was attending the eLearning Guild’s DevLearn conference. One of the things I enjoy most about DevLearn is attending the general sessions where industry leaders speak passionately about the state of elearning and important trends like social networking, games and simulations in learning.

One of the speakers at this year’s closing session was Dr. Jane Bozarth, the elearning coordinator for the North Carolina Office of State Personnel. Jane is a great person to follow on Twitter (and not just because she is a fellow resident of the Triangle here in NC). Jane’s tweets are full of valuable resources, and one of the many topics that interests her (and me!) is the use of feedback in learning and assessments. Jane’s recent Learning Solutions Magazine article, Nuts and Bolts: Useful Interactions and Meaningful Feedback, includes some great examples of feedback. In that article, Jane emphasizes that the point of instruction is to “support gain, not expose inadequacy” and that feedback should be provided with that goal in mind.

Jane’s article reminded me that in one of our Questionmark podcasts, Dr. Will Thalheimer of Work-Learning Research noted the importance of retrieval practice in the learning process and the role of feedback in supporting retrieval. The amount of feedback depends on where the assessment falls in the learning process. For instance, feedback with a formative assessment can pave new paths to information that can make future retrieval easier. Feedback for incorrect responses during learning repairs misconceptions and replaces them with correct information and a new mental model that will be used to retrieve information in the future. As Dr. Thalheimer mentions in the podcast, good authentic questions that support retrieval also support good feedback. You will find more details in Dr. Thalheimer’s research paper, Providing Feedback to Learners, which you can download from our Web site.

All these resources can help you use feedback to “support gain, not expose inadequacy,” making your assessments in the coming year more effective.

Some favourite resources on data visualization and report design

Posted by Greg Pope

In my last post I talked about confidence intervals and how they can be used successfully in assessment reporting contexts. Report design and development have always interested me, starting when I worked for the high-stakes provincial testing program in my home province of Alberta, Canada.

When I did my graduate degree with Dr. Bruno Zumbo, he introduced me to a new world of exciting data visualization approaches, including the pioneering functional data analysis work of Professor Jim Ramsay. Professor Ramsay developed a fantastic free program called TESTGRAF that performs non-parametric item response modeling and differential item functioning analysis. I have used TESTGRAF many times over my career to analyze assessment data.

The work of both these experts has guided me through all my work in report design. In working on exciting new reports to meet the needs of Questionmark customers, I’m mindful of what I have learned from them and from others who have influenced me over the years. In this season of giving, I’d like to share some ideas that might be helpful to you and your organization.

I greatly admire the work of Edward Tufte, whose books provide great food for thought on data analysis and visualization in numerous contexts. My favourite of these is The Visual Display of Quantitative Information, which offers creative, succinct ways to display many variables together. I have spent many a Canadian winter night curled up with that book, so I know it would make a great gift for that someone special this holiday season!

The Standards for Educational and Psychological Testing contains a section highlighting our commitments as assessment professionals to appropriate, fair, and valid reporting of information to multiple levels of stakeholders, including the most important stakeholder of all: the test taker! In the section on “Test Administration, Scoring, and Reporting” you will find a number of important standards around reporting that are worth checking out.

A colleague of mine, Stefanie Moerbeek at EXIN Exams, introduced me to a number of great papers written by Dr. Gavin Brown and Dr. John Hattie around the validity of score reports. Dr. Hattie did a session at NCME in 2009 entitled Visibly Learning from Reports: The Validity of Score Reports, in which he listed some recommended principles of reporting to maximize the valid interpretations of reports:

1. Readers of reports need a guarantee of safe passage
2. Readers of reports need a guarantee of destination recovery
3. Maximize interpretations and minimize the use of numbers
4. The answer is never more than 7 plus or minus two
5. Each report needs to have a major theme
6. Anchor the tool in the task domain
7. Reports should minimize scrolling, be uncluttered, and maximize the “seen” over the “read”
8. A report should be designed to address specific questions
9. A report should provide justification of the test for the specific applied purpose and for the utility of the test in the applied setting
10. A report should be timely to the decisions being made (formative, diagnostic, summative and ascriptive)
11. Those receiving reports need information about the meaning and constraints of any report
12. Reports need to be conceived as actions, not as screens to print.

You can read a paper Hattie wrote on this subject in the Online Educational Research Journal; Questionmark’s white paper on Assessments through the Learning Process offers helpful general information about reporting on assessment results.

Embedding Assessments in WikiSpaces

Embed a Questionmark Perception assessment, survey or quiz within a WikiSpaces page.

  • To see how this would look, see a snapshot of an assessment embedded within a widget using WikiSpaces.
  • Check out this how-to on our developer web site.
  • Wikispaces is a hosting service (sometimes called a wiki farm) based in San Francisco, California. Wikis — simple web pages that groups can edit together — are easy to set up and free to use, while private wikis with advanced features for businesses, non-profits and educators are available for an annual fee. Embedding an assessment into a WikiSpaces page is simple: WikiSpaces lets you add HTML to any page using a widget, which extends the basic functionality of a wiki and adds an IFrame where you can embed your assessment, as sketched below.
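
As in the E-Blogger example above, the widget ultimately wraps a small piece of IFrame HTML. The URL and dimensions below are placeholders for your own assessment link, and the exact name of the HTML widget may differ from what the comment assumes.

    <!-- Hypothetical example: add a Wikispaces widget that accepts raw HTML,
         then paste this in, replacing the src value with your own assessment URL -->
    <iframe src="https://example.com/your-assessment-url" width="800" height="600" frameborder="0"></iframe>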

Applications of confidence intervals in a psychometric context

Posted by Greg Pope

I have always been a fan of confidence intervals. Some people are fans of sports teams; for me, it’s confidence intervals! I find them really useful in assessment reporting contexts, all the way from item and test analysis psychometrics to participant reports.

Many of us get exposure to the practical use of confidence intervals via the media, when survey results are quoted. For example: “Of the 1,000 people surveyed, 55% said they will vote for John Doe. The margin of error for the survey was plus or minus 5%, 95 times out of 100.” This is saying that the “observed” percentage of people who say they will vote for Mr. Doe is 55%, and there is a 95% chance that the “true” percentage of people who will vote for John Doe is somewhere between 50% and 60%.

Sample size is a big factor in the margin of error: generally, the larger the sample, the smaller the margin of error, as we get closer to representing the population. (We can’t survey all of the approximately 307,006,550 people in the US, now can we?) So if the sample were 10,000 instead of 1,000, we would expect the margin of error to be smaller than plus or minus 5%.
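
For readers who like to see the arithmetic, here is a quick sketch assuming simple random sampling and the usual normal approximation (neither of which the example above spells out). The 95% margin of error for a reported proportion from a sample of size n is approximately

    \mathrm{MOE}_{95\%} \approx 1.96\,\sqrt{\hat{p}(1-\hat{p})/n}

Because n sits under a square root, going from 1,000 to 10,000 respondents shrinks the margin of error by a factor of about the square root of 10 (roughly 3.2), not by a factor of 10; you have to quadruple the sample just to halve the margin.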

These concepts are relevant in an assessment context as well. You may remember my previous post on Classical Test Theory and reliability, in which I explained that an observed test score (the score a participant achieves on an assessment) is composed of a true score and error. In other words, the observed score that a participant achieves is not 100% accurate; there is always error in the measurement. What this means practically is that if a participant achieves 50% on an exam, their true score could actually be somewhere between, say, 44% and 56%.
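
To put rough numbers on that range, Classical Test Theory uses the standard error of measurement; the score standard deviation and reliability below are illustrative values, not figures from any particular exam.

    \mathrm{SEM} = SD_x\,\sqrt{1 - r_{xx}}, \qquad \text{95% band} \approx x_{\mathrm{obs}} \pm 1.96\,\mathrm{SEM}

For example, with a score standard deviation of about 10 percentage points and a reliability of about 0.90, the SEM is roughly 3.2 points, so an observed score of 50% carries a band of roughly 44% to 56%.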

This notion that observed scores are not absolute has implications for verifying what participants know and can do. For example, a participant who achieves 50% on a crane certification exam (on which the pass score is 50%) would pass the exam and be able to hop into a crane, moving stuff up and down and around. However, a score right on the borderline means this person might not actually know enough to pass the exam if he or she were to take it again, yet would still be certified to operate a crane. His or her supervisor might not feel very confident about letting this person operate that crane!

To deal with the inherent uncertainty around observed scores, some organizations factor this margin of error in when setting the cut score, but that is another fun topic, one I touched on in an earlier post. I believe a best practice is to incorporate a confidence interval into the reporting of scores for participants, in order to recognize that the score is not an “absolute truth” but rather an estimate of what a person knows and can do. A simple participant report I created to demonstrate this shows a diamond that encapsulates the participant score; the vertical height of the diamond represents the confidence interval around the participant’s score.

In some of my previous posts I talked about how sample size affects the robustness of item-level statistics like p-values and item-total correlation coefficients, and I provided graphics showing the confidence interval ranges for those statistics at various sample sizes. I believe confidence intervals are also very useful in this psychometric context of evaluating the performance of items and tests. For example, when we see a p-value of 0.600 for a question, we often incorrectly accept it as the “truth” that 60% of participants got the question right. In actual fact, this p-value of 0.600 is an observation, and the “true” p-value could be anywhere between 0.500 and 0.700, a big difference when we are carefully choosing questions to shape our assessment!
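
The same proportion interval sketched earlier for survey percentages applies to p-values; again this assumes the normal approximation, and the sample size here is purely illustrative.

    \hat{p} \pm 1.96\,\sqrt{\hat{p}(1-\hat{p})/n}

With an observed p-value of 0.600 and only about 100 participants, this works out to roughly 0.600 plus or minus 0.096, or about 0.500 to 0.700, which is why item statistics from small samples deserve cautious interpretation.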

With the holiday season fast approaching, perhaps Santa has a confidence interval in his sack for you and your organization to apply to your assessment results reporting and analysis!
