10 reasons why practice tests help make perfect exams

Posted by John Kleeman

Giving candidates the opportunity to take a practice or mock version of an exam before they sit the real thing has huge benefits for all stakeholders. Here are 10 reasons why including practice tests in your exam programme will improve it.

1. Most importantly, practice tests tell candidates which topics they have not mastered and encourage them to focus future learning on weak areas.

2. Almost as important, practice tests tell candidates which topics they have already mastered. They can then direct their learning to other areas and spend minimal further time on the topics they already know.

3. Practice tests can also feed back to the instructional team the strengths and weaknesses of each candidate and of the candidate group as a whole, showing which topics have been learned successfully and which areas need more work.

4. It’s well understood in psychology that you are more likely to retain something if you learn it spaced (separated) over time. Since practice tests stimulate revision and studying, they encourage earlier learning and so space out learning, which is likely to improve retention. See this Slideshare for more information on how assessments can help space out learning.

5. The accuracy and fairness of exams can be impacted by some candidates’ fear or anxiety around the exam process. Practice tests can reduce test anxiety. To quote ETS on test anxiety:

“The more you are accustomed to sitting for a period of time, answering test questions, and pacing yourself, the more comfortable you will feel when you actually sit down to take the test.”

6. Accuracy and fairness can also be affected by lack of familiarity with, or incompatibilities in, the computers and software used for the testing. If the same equipment and software can be used in practice, this greatly reduces the chance of problems on the day.

7. Taking a test doesn’t just measure how much you know; it helps reinforce the learning and makes it more likely that you can retrieve the same information later. It’s a surprising fact that taking a test can actually be more beneficial to learning than spending the same amount of time studying. See Evidence from Medical Education that Quizzes Do Slow the Forgetting Curve for one of many research studies showing this.

8. Giving formative or practice tests seems to improve learning as well as final exam results. See Evidence that topic feedback correlates with improved learning or Where’s the evidence for assessment? for two articles with evidence of this.

9. Such tests are consistent with good practice and with assessment standards. For example, the international standard on delivering assessments in the workplace, ISO 10667, states:

“The service provider shall … where appropriate, provide guidance on ways in which the assessment participant might prepare for the assessment, including access to approved or recommended sample and practice materials”

10. It is crucial that exams are fair and are seen to be fair. By providing practice tests, you remove the mystique from your exams: people can see the question styles, practise the time planning required and get a clear view of what the exam consists of. This helps level the playing field and promotes the concept of a fair exam, open to and equal for all.

Not all these reasons apply in every organization, but most do.  I hope this article helps remind you why practice tests are valuable and encourages their use.

Item Analysis Report – High-Low Discrimination

Posted by Austin Fossey

In our discussion about correlational item discrimination, I mentioned that there are several other ways to quantify discrimination. One of the simplest ways to calculate discrimination is the High-Low Discrimination index, which is included on the item detail views in Questionmark’s Item Analysis Report.

To calculate the High-Low Discrimination value, we simply subtract the percentage of low-scoring participants who got the item correct from the percentage of high-scoring participants who got the item correct. If 30% of our low-scoring participants answered correctly, and 80% of our high-scoring participants answered correctly, then the High-Low Discrimination is 0.80 – 0.30 = 0.50.

But what is the cut point between high and low scorers? In his article, “Selection of Upper and Lower Groups for the Validation of Test Items,” Kelley demonstrated that the High-Low Discrimination index may be more stable when we define the upper and lower groups as participants with the top 27% and bottom 27% of total scores, respectively. This is the same method that is used to define the upper and lower groups in Questionmark’s Item Analysis Report.
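To make the computation concrete, here is a minimal Python sketch (the function and variable names are illustrative, not taken from Questionmark’s implementation):

    def high_low_discrimination(total_scores, item_correct, group_fraction=0.27):
        """High-Low Discrimination for one item.

        total_scores:   each participant's total test score
        item_correct:   parallel list of 1/0 flags for whether that participant
                        answered this item correctly
        group_fraction: share of participants in each group (Kelley's 27%)
        """
        n = len(total_scores)
        k = max(1, int(n * group_fraction))
        # Rank participants by total score, highest first
        order = sorted(range(n), key=lambda i: total_scores[i], reverse=True)
        high_group, low_group = order[:k], order[-k:]
        p_high = sum(item_correct[i] for i in high_group) / k
        p_low = sum(item_correct[i] for i in low_group) / k
        return p_high - p_low

With the example above, where 80% of the upper group and 30% of the lower group answer correctly, the function returns 0.50.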

The interpretation of High-Low Discrimination is similar to that of correlational indices: positive values indicate good discrimination, values near zero indicate little discrimination, and negative values indicate that the item was easier for low-scoring participants than for high-scoring ones.

In Measuring Educational Achievement, Ebel recommended the following cut points for interpreting High-Low Discrimination (D):

D ≥ 0.40: Very good items
0.30 ≤ D ≤ 0.39: Reasonably good items, but possibly subject to improvement
0.20 ≤ D ≤ 0.29: Marginal items, usually needing improvement
D ≤ 0.19: Poor items, to be rejected or improved by revision
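As a quick illustration, here is a small helper that applies these cut points (illustrative only, not part of the Item Analysis Report):

    def interpret_high_low(d):
        """Classify a High-Low Discrimination value using Ebel's cut points."""
        if d >= 0.40:
            return "Very good item"
        if d >= 0.30:
            return "Reasonably good item, possibly subject to improvement"
        if d >= 0.20:
            return "Marginal item, usually needing improvement"
        return "Poor item, to be rejected or improved by revision"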

In Introduction to Classical and Modern Test Theory, Crocker and Algina note some drawbacks to the High-Low Discrimination index. First, compared with correlational indices, items with the same p value are more likely to show large discrepancies in their High-Low Discrimination values. Second, unlike correlational discrimination indices, High-Low Discrimination can only be calculated for dichotomous items. Finally, High-Low Discrimination does not have a defined sampling distribution, which means that confidence intervals cannot be calculated and practitioners cannot determine whether there are statistically significant differences in High-Low Discrimination values.

Nevertheless, High-Low Discrimination is easy to calculate and interpret, so it is still a very useful tool for item analysis, especially in small-scale assessment. The figure below shows an example of the High-Low Discrimination value on the item detail view of the Item Analysis Report.


High-Low Discrimination value on the item detail page of Questionmark’s Item Analysis Report.

Integrating and Connectors – SuccessFactors

Posted by Doug Peterson

In this installment of Integrating and Connectors, I’d like to take a look at SuccessFactors. SuccessFactors is a very popular cloud-based human capital management application suite. It includes modules for succession planning, goals, performance management, recruiting, and – you guessed it – a learning management system (LMS) called Learning (appropriate, no?).

Questionmark assessments can be integrated into SuccessFactors Learning items by publishing them as AICC or SCORM content packages and importing the content package as a content object, which is then included in the learning item. The student logs into SuccessFactors Learning, enrolls in a course, takes in the content, clicks on a link and – voila! – the assessment launches. It’s a seamless experience for the student.
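For readers curious about what happens under the hood of an AICC integration: the LMS appends AICC_SID and AICC_URL query parameters to the launch link, and the content object posts commands back to that URL over the HACP protocol. Here is a minimal, purely illustrative Python sketch of the GetParam call; in a real deployment this exchange is handled for you by Questionmark and SuccessFactors.

    import urllib.parse
    import urllib.request

    def hacp_get_param(aicc_url, aicc_sid):
        """Send an AICC HACP GetParam command to the LMS and return the raw reply.

        aicc_url and aicc_sid come from the AICC_URL and AICC_SID query
        parameters the LMS appends to the content launch link.
        """
        body = urllib.parse.urlencode({
            "command": "GetParam",   # ask the LMS for this session's launch data
            "version": "2.2",        # AICC CMI guidelines version
            "session_id": aicc_sid,
        }).encode()
        # HACP replies are plain text, e.g. "error=0\r\nerror_text=Successful..."
        with urllib.request.urlopen(aicc_url, data=body) as response:
            return response.read().decode()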

However, our integration with SuccessFactors Learning goes a step further. Learning and Questionmark Enterprise Manager can be connected by a Single Sign-On bridge that allows a Learning administrator to access Questionmark reports directly – no signing into Questionmark EM separately with some other ID and password.

This short video tells the story. Check it out and feel free to contact any of the Questionmark team if you have any questions.


Podcast: Tim Ellis on Lancaster University’s Module Evaluation System

 

Posted by Sarah Elkins

I spoke recently with Tim Ellis about Lancaster University, whose Virtual Learning Environment (LUVLE) incorporates Questionmark Perception assessments as well as a student-controlled social web space called MyPlace.

In addition to discussing the general use of assessments at the university, we focused on the Lancaster University Module Evaluation System, which has increased the quality of student responses and reduced the workload of administrators.