Create high-quality assessments: Join a March 4 workshop

Posted by Joan Phaup

There will be a whole lot of learning going on in San Antonio on March 4 during three workshops preceding the Questionmark 2014 Users Conference.

These sessions cover a broad range of experience levels — from people who are just beginning to use Questionmark technologies to those who want to understand best practices in test development and item writing.

Rick Ault

Questionmark Boot Camp: Basic Training for Beginners (9 a.m. – 4 p.m.)

Questionmark Trainer Rick Ault will lead this hands-on workshop, which begins with a broad introduction to the Questionmark platform and then moves into interactive practice. Bring your own laptop to get firsthand experience creating and scheduling assessments. Participants will also get acquainted with reports and analytics.

Dr. Melissa Fein

Test Development Fundamentals (9 a.m. – 12 p.m.)

Whether you are involved in workplace testing, training program evaluation, certification & certificate program development, or academic testing, an understanding of criterion-referenced test development will strengthen your testing program. Dr. Melissa Fein, author of Test Development Fundamentals for Certification and Evaluation, leads this morning workshop, which will help participants judge test quality, set mastery cutoff points, and improve their tests.

Mary Lorenz

The Art and Craft of Item Writing (1 p.m. – 4 p.m.)

Writing high-quality multiple-choice questions can present many challenges and pitfalls. Longtime educator and test author Mary Lorenz will coach workshop participants through the process of constructing well-written items that measure given objectives. Bring items of your own and sharpen them up during this interactive afternoon session.

___

Choose between the full-day workshop and one or both of the half-day workshops.

Conference attendees qualify for special workshop registration rates, and there’s a discount for attending both half-day sessions.

Click here for details and registration.

Early-birds: Check out the conference program and register by tomorrow

Posted by Joan Phaup

You can still get a $100 early-bird registration discount for the Questionmark 2014 Users Conference if you register by tomorrow (Thursday, January 30th).

The program for March 4 – 7 includes a keynote by Learning Strategist Bryan Chapman on the power of open data, which will be a hot topic throughout this gathering at the Grand Hyatt San Antonio.

You can click here for program details, but here’s the line-up:

Optional Pre-Conference Workshops

  • Questionmark Boot Camp: Basic Training for Beginners (full day)
  • Test Development Fundamentals (half day)
  • The Art & Craft of Item Writing (half day)

Case Studies

  • Using Questionmark to Conduct Performance Based Certifications — SpaceTEC®
  • Better Outcomes Make the Outcome Better! — US Marine Corps University
  • Generating and Sending Custom Completion Certificates — The Aurelius Group
  • Leveraging Questionmark’s Survey Capabilities with a Multi-system Model — Verizon
  • Importing Questions into Questionmark Live on a Tri-Military Service Training Campus — Medical Education & Training Campus
  • How Can a Randomly Designed Test be Fair to All? — US Coast Guard Training Center

Best Practices

  • Principles of Psychometrics and Measurement Design
  • 7 Reasons to Use Online Assessments for Compliance
  • Reporting and Analytics: Understanding Assessment Results
  • Making it Real: Building Simulations Into Your Quizzes and Tests
  • Practical Lessons from Psychology Research to Improve Your Assessments
  • Item Writing Techniques for Surveys, Quizzes and Tests

Questionmark Features & Functions

  • Introduction to Questionmark for Beginners
  • BYOL: Item and Topic Authoring
  • BYOL: Collaborative Assessment Authoring
  • Integrating with Questionmark’s Open Assessment Platform
  • Using Questionmark’s OData API for Analytics (BYOL)
  • Successfully Deploying Questionmark Perception
  • Customizing the Participant Interface

Discussions

  • Testing What We Teach: How can we elevate our effectiveness without additional time or resources?
  • Testing is Changing: Practical and Secure Assessment in the 21st Century

Future Solutions Focus Groups

  • Open Assessment Platform and Analytics
  • Authoring and Delivery

Special Interest Group Meetings

  • Military/Defense: US DOD and Homeland Security
  • Utilities/Energy Generation and Distribution
  • Higher Education
  • Universities

Drop-in Demos of new Questionmark features and capabilities

Tech Central: Drop-in meetings with Questionmark technicians

___

Register for the conference by tomorrow, Thursday, January 30th, to get the early-bird discount.

Item Analysis Report – Item Reliability

Posted by Austin Fossey

In this series of posts, we have been discussing the statistics that are reported on the Item Analysis Report, including the difficulty index, correlational discrimination, and high-low discrimination.

The final statistic reported on the Item Analysis Report is the item reliability. Item reliability is simply the product of the standard deviation of item scores and a correlational discrimination index (Item-Total Correlation Discrimination in the Item Analysis Report). So item reliability reflects how much the item is contributing to total score variance. As with assessment reliability, higher values represent better reliability.
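
To make the computation concrete, here is a minimal sketch in Python. The score matrix is made-up illustrative data, and the calculation is a plain reading of the definition above, not the report’s actual implementation:

```python
import numpy as np

# Illustrative data only: rows are participants, columns are items.
scores = np.array([
    [1, 0, 1],
    [1, 1, 1],
    [0, 0, 1],
    [1, 1, 0],
    [0, 0, 0],
])
total = scores.sum(axis=1)  # each participant's total score

item_rel = []
for i in range(scores.shape[1]):
    item = scores[:, i]
    sd = item.std()                         # standard deviation of item scores
    r = np.corrcoef(item, total)[0, 1]      # item-total correlation discrimination
    item_rel.append(sd * r)                 # item reliability index
    print(f"item {i + 1}: sd = {sd:.3f}, r = {r:.3f}, reliability = {sd * r:.3f}")
```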

Like the other statistics in the Item Analysis Report, item reliability is used primarily to inform decisions about item retention. Crocker and Algina (Introduction to Classical and Modern Test Theory) describe three ways that test developers might use the item reliability index.

1) Choosing Between Two Items in Form Construction

If two items have similar discrimination values, but one item has a higher standard deviation of item scores, then that item will have higher item reliability and will contribute more to the assessment’s reliability. All else being equal, the test developer might decide to retain the item with higher reliability and keep the lower-reliability item in the bank as a backup.

2) Building a Form with a Required Assessment Reliability Threshold

As Crocker and Algina demonstrate, Cronbach’s Alpha can be calculated as a function of the standard deviations of items’ scores and items’ reliabilities. If the test developer desires a certain minimum for the assessment’s reliability (as measured by Cronbach’s Alpha), they can use these two item statistics to build a form that will yield the desired level of internal consistency.
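
Continuing the sketch above, here is the usual computation of Cronbach’s Alpha for the same illustrative data (point 3 below shows why the total score variance in this formula can itself be rebuilt from the item statistics alone):

```python
k = scores.shape[1]                       # number of items
item_var_sum = scores.var(axis=0).sum()   # sum of item score variances
alpha = (k / (k - 1)) * (1 - item_var_sum / total.var())
print(f"Cronbach's Alpha = {alpha:.3f}")
```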

3) Building a Form with a Required Total Score Variance Threshold

Crocker and Algina explain that the total score variance is equivalent to the square of the sum of item reliability indices, so test developers may continue to add items to a form based on their item reliability values until they meet their desired threshold for total score variance.
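
Again continuing the sketch, you can check this identity numerically: the square of the sum of the item reliability indices reproduces the total score variance exactly.

```python
print(sum(item_rel) ** 2)  # square of the sum of item reliability indices
print(total.var())         # total score variance -- the same value
```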

Item reliability from Questionmark’s Item Analysis Report (item detail page)

Using OData for dynamic, customized reporting: Austin Fossey Q&A

Posted by Joan Phaup

We’ll be exploring the power of the Open Data Protocol (OData) and its significance for assessment and measurement professionals during the Questionmark 2014 Users Conference in San Antonio March 4 – 7.

Austin Fossey, our reporting and analytics manager, will explain the ins and outs of using the Questionmark OData API, which makes it possible to access assessment results freely and use third-party tools to create dynamic, customized reports. Participants in a breakout session about the OData API, led by Austin along with Steve Lay, will have the opportunity to try it out for themselves.

Austin Fossey

I got some details about all this from Austin the other day:

What’s the value of learning about the OData API?

The OData API gives you access to raw data. It’s an option for accessing data from your assessment results warehouse without having to know how to program, query databases or even host the database yourself. By having access to those data, you are not limited to the reports Questionmark provides: You can do data merges and create your own custom reports.

OData is really good for targeting specific pieces of information that people want. The biggest plus is that it doesn’t just provide data access: it provides a flow of data. If you know the data you need and want to set it up as a report, a spreadsheet, or just a view in the web browser, you can get those results updated as new data become available. This flow of data is what makes OData reports truly dynamic, and it is what distinguishes them from reports built from manually generated data exports.
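
To give a flavor of what that looks like in practice, here is a minimal sketch of consuming an OData feed in Python. The service root, entity set and field names are placeholders rather than Questionmark’s actual schema, and the JSON envelope assumes an OData v2-style feed:

```python
import requests

# Placeholder service root -- substitute your own results warehouse URL.
# $select, $filter and $format are standard OData system query options.
FEED = "https://example.com/odata/Results"

def fetch_results():
    resp = requests.get(
        FEED,
        params={
            "$select": "ParticipantName,TotalScore",
            "$filter": "TotalScore lt 60",   # e.g. flag low scorers
            "$format": "json",
        },
        auth=("username", "password"),
    )
    resp.raise_for_status()
    return resp.json()["d"]["results"]       # OData v2-style JSON envelope

# Because the feed is live, re-running this returns fresh results --
# no manual export step in between.
for row in fetch_results():
    print(row["ParticipantName"], row["TotalScore"])
```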

What third-party tools can people use with the OData API?

Lots! Key applications include Microsoft Excel PowerPivot, Tableau, the Sesame Data Browser, SAP BusinessObjects and Logi Analytics, but there are plenty to choose from. People can also do their own programming if they prefer. The OData.org website includes a helpful listing of the OData ecosystem, covering applications that generate and consume OData feeds.

Can you share some examples of custom reports that people can create with OData?

We have some examples of OData reportlets on our Open Assessment Platform website for developers, which also includes some tutorials. I’ve blogged about using the OData API to create a response matrix and to create a frequency table of item keys in Microsoft PowerPivot for Excel. There are so many different ways to use this!

What about merging data from assessments with data from other sources? What are some scenarios for doing that?

It could be any research where you want to cross-reference your assessment data with another data source. If you have another data set in which you can identify participants – say an HR database showing the coursework people have done – you could compare their test results with their course activity. Reports don’t necessarily have to be about test scores. They can be about items and answer choices – anything you want.
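
As a concrete illustration of such a merge, here is a minimal pandas sketch; the frames and field names are made up purely for the example:

```python
import pandas as pd

# Made-up example frames: one from an assessment results feed,
# one from an HR course-activity export.
results = pd.DataFrame({
    "participant_id": ["p1", "p2", "p3", "p4"],
    "total_score":    [85, 52, 74, 91],
})
courses = pd.DataFrame({
    "participant_id": ["p1", "p2", "p3", "p4"],
    "course_hours":   [12, 3, 8, 15],
})

merged = results.merge(courses, on="participant_id")
# e.g. does time spent in coursework track with test scores?
print(merged["total_score"].corr(merged["course_hours"]))
```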

Tell me about the hands-on element of this breakout session.

We will be working through a fairly simple example using Microsoft PowerPivot for Excel in order to cement the concepts of using OData. We’re encouraging people to bring their laptops with Excel and the PowerPivot add-in already installed. If they don’t have that, they can either work with someone else or watch the exercise onscreen. We will provide a handout explaining everything so they can try this when they are back at work.

What do you want people to take away from this breakout?

We want to make sure people know how to construct an OData URL and that they understand the possibilities of using OData as well as its limitations. It won’t be a panacea for everything. We want to be sure they know they have another tool in their toolbox to answer the research questions or business questions they encounter day to day.
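
To give a sense of what constructing an OData URL involves: a query is just a URL built from a service root, an entity set, and standard system query options. The names below are placeholders:

```python
# Anatomy of an OData query URL. The service root and entity set are
# placeholders; $filter/$select/$orderby/$top are standard OData options.
# Spaces are left in for readability -- they get percent-encoded when sent.
url = ("https://example.com/odata/Results"
       "?$filter=AssessmentName eq 'Safety Quiz'"
       "&$select=ParticipantName,TotalScore"
       "&$orderby=TotalScore desc"
       "&$top=10")
```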

Our conference keynote speaker, Learning Strategist Bryan Chapman, will share insights about OData and examples of how organizations are using it during his presentation on Transforming Data into Meaning and Action.

Click here to see the complete conference program. And don’t forget to sign up by January 30th if you want to save $100 on your registration.

Integrating and Connectors – Moodle

Posted by Doug Peterson

This installment of the Integrating and Connectors series focuses on Moodle. Technically, it’s really about the Questionmark LTI Connector and how it can be used to integrate with Moodle. (We’ll take a look at integrating with Canvas using the LTI Connector in a future installment.)

LTI stands for Learning Tools Interoperability. LTI is a specification published by the IMS Global Learning Consortium with the goal of providing a way for different learning tools to talk to each other and work together. The integration of Moodle (a Learning Management System, or LMS) with Questionmark (an Assessment Management System, or AMS) is a perfect example of the concept.

So far in this series, we’ve looked at using SCORM or AICC to do a simple launch-and-track, and in the case of SuccessFactors, a simple Single Sign-On (SSO) from the LMS into Questionmark Enterprise Manager. This is a very high-level integration: the assessment is simply launched and reports back to the source of the launch. The SuccessFactors SSO requires some manual intervention to set up an admin ID within Questionmark – the connection doesn’t just happen “automagically”. The LTI Connector allows for a much deeper integration.

As you’ll see in this video, once the LTI Connector is configured in the Moodle environment, a Moodle instructor can log into Moodle and add a Questionmark assessment to a course – from within Moodle, without needing a separate Questionmark ID and password or logging into Questionmark at all.

Similarly, a student can log into Moodle and launch a Questionmark assessment – again, from within Moodle, without a second set of credentials. Furthermore, an instructor can use Questionmark’s authoring and reporting functionality – you guessed it – all from within Moodle.

The LTI Connector allows for a deep integration with Moodle, giving the instructor and student a seamless experience in what behaves to them like a single environment, even though they are actually moving back and forth between Moodle and Questionmark.
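
For the curious, under the hood an LTI 1.x launch is just an OAuth-signed HTTP POST from the LMS to the tool. Here is a minimal Python sketch of the shape of such a launch; the URL, key and secret are placeholders, the parameter names come from the IMS LTI 1.1 specification, and this illustrates the protocol rather than Questionmark’s connector code:

```python
from urllib.parse import urlencode
from oauthlib.oauth1 import Client, SIGNATURE_TYPE_BODY

# Hypothetical tool endpoint and credentials -- in a real deployment
# these come from the configuration shared between the LMS and the tool.
LAUNCH_URL = "https://assessments.example.com/lti/launch"

params = {
    # Required by the LTI 1.x spec:
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "course101-quiz1",   # identifies this placement
    # Context the tool trusts instead of a second login:
    "user_id": "moodle-user-42",
    "roles": "Instructor",
}

client = Client("consumer-key", client_secret="shared-secret",
                signature_type=SIGNATURE_TYPE_BODY)
uri, headers, body = client.sign(
    LAUNCH_URL,
    http_method="POST",
    body=urlencode(params),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
# POSTing `body` to `uri` performs the launch; the tool verifies the
# OAuth signature with the same shared secret and signs the user in.
```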

10 reasons why practice tests help make perfect exams

Posted by John Kleeman

Giving candidates/participants the opportunity to take a practice or mock version of an exam before they take the real thing has huge benefits for all stakeholders. Here are 10 reasons why including practice tests within your exam programme will improve it.

1. Most importantly, practice tests tell candidates which topics they have not mastered and encourage them to focus future learning on weak areas.

2. Almost as important, practice tests tell candidates which topics they have already mastered. They can then direct their learning to other areas and spend minimal further time on the topics they already know.

3. Practice tests can also feed back to the instructional team the strengths and weaknesses of each candidate and of the candidate group as a whole. They can show which topics have been successfully learned and which areas need more work.

4. It’s well understood in psychology that you are more likely to retain something if you learn it spaced (separated) over time. Since practice tests stimulate revision and studying, they encourage earlier learning and so space out learning, which is likely to improve retention. See this Slideshare for more information on how assessments can help space out learning.

5. The accuracy and fairness of exams can be impacted by some candidates’ fear or anxiety around the exam process. Practice tests can reduce test anxiety. To quote ETS on test anxiety:

“The more you are accustomed to sitting for a period of time, answering test questions, and pacing yourself, the more comfortable you will feel when you actually sit down to take the test.”

6. Accuracy and fairness can also be impacted by unfamiliarity with, or incompatibilities in, the computers and software used for the testing. If the same equipment and software can be used in practice, this greatly reduces the chance of problems.

7. Taking a test doesn’t just measure how much you know, it helps reinforce the learning and make it more likely that you can retrieve the same information later. It’s a surprising fact that taking a test can actually be more beneficial to learning than spending the same amount of time studying. See Evidence from Medical Education that Quizzes Do Slow the Forgetting Curve for one of many research studies showing this.

8. Giving formative or practice tests seems to improve learning as well as final exam results. See Evidence that topic feedback correlates with improved learning or Where’s the evidence for assessment? for a couple of articles with evidence of this.

9. Such tests are consistent with good practice and with assessment standards. For example, the international standard on delivering assessments in the workplace, ISO 10667, states:

“The service provider shall … where appropriate, provide guidance on ways in which the assessment participant might prepare for the assessment, including access to approved or recommended sample and practice materials”

10. It is crucial that exams are fair and that they are seen to be fair. By providing practice tests, you remove the mystique from your exams and allow people to see the question styles, to practice the time planning required and to have a fair view of what the exam consists of. It helps level the playing field and promotes the concept of a fair exam, open to and equal for all.

Not all these reasons apply in every organization, but most do. I hope this article helps remind you why practice tests are valuable and encourages their use.
