MediaWiki is a free software wiki package written in PHP, originally developed for use on Wikipedia. You can embed an assessment in a MediaWiki page provided the correct extension has been installed and configured; the appropriate extensions and configuration details are described in the how-to article mentioned above.
Questionmark has just re-certified for its third year under the US government Safe Harbor scheme, and I thought I’d share a little bit with you about this scheme and why it’s helpful for Questionmark customers and stakeholders, including assessment participants.
The Safe Harbor scheme, run by the US Department of Commerce, enables companies to certify that they comply with the European Union's stringent data protection requirements. This reassures customers in Europe, but it is also helpful for customers and users worldwide, because following these standards means we look after your data very carefully. We self-certify compliance with Safe Harbor, which allows us to display the Safe Harbor logo.
To ensure that we are compliant, we have a formal data security policy, which I manage as company Chairman. Among the measures is a classification scheme under which all highly confidential data is registered whenever it is moved and is subject to stringent security controls on the central IT systems of our D3 Hosting platform. In addition, every Questionmark employee, from our CEO Eric Shepherd on down, is trained in data security and must take and pass a test each year to check their knowledge. It's good for all of us to take tests sometimes, as well as to help others prepare and deliver them!
Seriously, people rely on Questionmark to ensure the integrity of their assessments, and it's important that question content is not revealed and that assessment results remain private. You can see details of our certification on the US government's Safe Harbor website.
Live Web is a Microsoft PowerPoint add-in that lets users insert web pages into a PowerPoint slide and refresh them in real time during the slide show. Perception assessments can easily be displayed using Live Web: all you need to do is insert the Perception assessment URL. No coding is required.
My last post offered an introduction to standard setting; today I'd like to go into more detail about establishing cut scores. There are many standard setting methods used to set cut scores. These methods are generally split into two types: a) question-centered approaches and b) participant-centered approaches. A few of the most popular methods, with very brief descriptions of each, are provided below. For more detailed information on standard setting procedures and methods, see the book Setting Performance Standards: Concepts, Methods, and Perspectives, edited by Gregory Cizek.
Modified Angoff method (question-centered): Subject matter experts (SMEs) are generally briefed on the Angoff method and allowed to take the test with the performance levels in mind. SMEs are then asked to estimate, for each question, the proportion of borderline or "minimally acceptable" participants that they would expect to answer the question correctly. The estimates are generally in p-value form (e.g., 0.6 for item 1, meaning 60% of borderline-passing participants would answer it correctly). Several rounds are generally conducted, with SMEs allowed to modify their estimates in light of different types of information (e.g., actual participant performance on each question, other SMEs' estimates, etc.). The final cut score is then determined (e.g., by averaging the estimates or taking the median). This method is generally used with multiple-choice questions.
I like a dichotomous modified Angoff approach in which, instead of using p-value-type statistics, SMEs simply provide a 0 or 1 for each question ("0" if a borderline acceptable participant would get the question wrong and "1" if they would get it right).
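To make the arithmetic concrete, here is a minimal Python sketch of the modified Angoff aggregation step, using entirely hypothetical SME ratings. Each SME's per-question estimates are averaged into that SME's recommended cut, and the SME-level cuts are then combined by mean or median, as described above.

```python
# Sketch of a modified Angoff cut-score calculation (hypothetical data).
# Rows = SMEs, columns = questions; each value is an SME's estimate of the
# proportion of borderline participants who would answer that question correctly.
from statistics import mean, median

ratings = [
    [0.6, 0.8, 0.5, 0.9],  # SME 1
    [0.7, 0.7, 0.4, 0.8],  # SME 2
    [0.5, 0.9, 0.6, 0.9],  # SME 3
]

# Each SME's recommended cut is the average of their per-question estimates.
sme_cuts = [mean(r) for r in ratings]

# Combine across SMEs by mean or median (both are common choices).
cut_mean = mean(sme_cuts)
cut_median = median(sme_cuts)

num_questions = len(ratings[0])
print(f"Cut score (mean):   {cut_mean * num_questions:.2f} of {num_questions}")
print(f"Cut score (median): {cut_median * num_questions:.2f} of {num_questions}")
```

The dichotomous variant works the same way, with each rating restricted to 0 or 1.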
Nedelsky method (question-centered): SMEs make decisions on a question-by-question basis regarding which of the question distracters they feel borderline participants would be able to eliminate as incorrect. This method is generally used with multiple-choice questions only.
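In the Nedelsky method, a borderline participant is assumed to guess at random among the options they cannot eliminate, so each question contributes 1 divided by the number of remaining options, and the cut score is the sum over questions. A minimal sketch with hypothetical data:

```python
# Sketch of a Nedelsky cut-score calculation (hypothetical data).
# For each multiple-choice question, an SME records how many distracters a
# borderline participant could eliminate; the chance of answering correctly
# is 1 / (options remaining), and the cut score is the sum over questions.
options_per_question = [4, 4, 5, 4]  # total answer options per question
eliminated = [2, 1, 3, 0]            # distracters the SME says can be eliminated

cut_score = sum(
    1 / (total - gone)
    for total, gone in zip(options_per_question, eliminated)
)
print(f"Nedelsky cut score: {cut_score:.2f} of {len(options_per_question)}")
```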
Bookmark method (question-centered): Questions are ordered by difficulty (e.g., Item Response Theory b-parameters or Classical Test Theory p-values) from easiest to hardest. SMEs then place a "bookmark" where each performance level (e.g., cut score) should fall ("As the test gets harder, where would a participant on the boundary of the performance level no longer be able to answer correctly?"). This method can be used with virtually any question type (e.g., multiple-choice, multiple-response, matching, etc.).
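As a rough illustration of the ordering step, here is a minimal Python sketch with hypothetical p-values, in which the bookmark position itself is read as a raw cut score (a deliberately simplified interpretation; operational Bookmark studies typically map the bookmarked item's difficulty onto an IRT ability scale):

```python
# Sketch of a Bookmark ordered-item booklet (hypothetical data).
# Items are sorted from easiest to hardest by classical p-value; an SME's
# bookmark marks the first item a borderline participant would likely miss,
# so the count of items before the bookmark serves as a simple raw cut score.
items = {"Q1": 0.85, "Q2": 0.42, "Q3": 0.91, "Q4": 0.60, "Q5": 0.33}  # p-values

booklet = sorted(items, key=items.get, reverse=True)  # easiest -> hardest
bookmark = booklet.index("Q4")  # the SME bookmarks Q4 (hypothetical judgment)

raw_cut = bookmark  # items before the bookmark are expected correct
print(f"Ordered booklet: {booklet}, raw cut score: {raw_cut} of {len(items)}")
```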
Borderline groups method (participant-centered): A description is prepared for each performance category. SMEs are asked to submit a list of participants whose performance on the test should be close to the performance standard (borderline). The test is administered to these borderline groups and the median test score is used as the cut score. This method can be used with virtually any question type (e.g., multiple-choice, multiple response, essay, etc.).
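The borderline-groups calculation itself is simple: administer the test to the SME-nominated borderline participants and take the median score. A one-step sketch with hypothetical scores:

```python
# Sketch of a borderline-groups cut score (hypothetical data).
# Scores come from participants whom SMEs nominated as "borderline";
# the cut score is simply the median of this group's test scores.
from statistics import median

borderline_scores = [61, 64, 66, 68, 70, 73, 75]

cut_score = median(borderline_scores)
print(f"Borderline-groups cut score: {cut_score}")
```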
Contrasting groups method (participant-centered): SMEs are asked to categorize the participants in their classes according to the performance category descriptions. The test is administered to all of the categorized participants and the test score distributions for each of the categorized groups are compared. Where the distributions of the contrasting groups intersect is where the cut score would be located. This method can be used with virtually any question type (e.g., multiple-choice, multiple response, essay, etc.).
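One practical way to locate the point where the two distributions intersect is to choose the score that minimizes total misclassification across the two SME-categorized groups. Here is a minimal Python sketch along those lines, with hypothetical scores and group labels:

```python
# Sketch of a contrasting-groups analysis (hypothetical data).
# SMEs have already sorted participants into "competent" and "not yet
# competent"; we pick the cut score minimizing total misclassification,
# which approximates where the two score distributions intersect.
competent = [72, 75, 78, 80, 83, 85, 88, 91]
not_competent = [55, 60, 62, 65, 68, 70, 74, 77]

def misclassified(cut):
    # Competent participants scoring below the cut,
    # plus not-yet-competent participants scoring at or above it.
    return sum(s < cut for s in competent) + sum(s >= cut for s in not_competent)

candidates = range(min(not_competent), max(competent) + 1)
cut = min(candidates, key=misclassified)
print(f"Contrasting-groups cut score: {cut}")
```

In practice the distributions are usually smoothed first, and the cut may be shifted to weight false passes and false failures differently.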
I hope this was helpful and I am looking forward to talking more about an exciting psychometric topic soon!
I am thrilled today as I am en route to Johannesburg for Questionmark's Annual South African Users Meeting. No time to watch the World Cup this time around, but I'll hopefully be back in SA in a couple of months!
We will be holding our annual South African Users Meeting close to Jo’burg (as locals refer to Johannesburg), in the city of Midrand, on April 20th. I am particularly excited about the meeting this year, as more and more users have completed the upgrade to the latest version of Questionmark Perception.
Our South African customers will gather at the meeting to share information and learn from each other's experiences. Of course, it will also be a wonderful occasion to show the latest developments from Questionmark and to discuss best practices in online assessment management. The South African users meeting is the best chance to network with and learn from other users and to speak face-to-face with the Questionmark team.
Some of the main topics to be discussed:
New features available in Questionmark Perception version 5
The latest release of Questionmark Live
The South African Users Meeting is one of many Questionmark worldwide events. As I write this, Questionmark events are being organized in the UK, The Netherlands, Germany, USA, India, New Zealand and Australia. Look out for a post-Users Meeting blog entry from me!