Summative e-assessment quality
Posted by John Kleeman
I’d like to highlight an important but not yet widely disseminated report which sets out some best practice and recommendations on quality for summative e-assessment. It’s a must-read for anyone who is implementing summative assessments in an academic environment, and worth reading for those outside colleges and universities as well.
The report, by the REAQ project team, was commissioned by JISC in the UK and produced by the Learning Societies Lab at the University of Southampton, a center of expertise in e-learning and e-assessment. An expert panel of experienced professionals reviewed and fed into the work, including Greg Pope and me from Questionmark, along with others.
The report asked interviewees who use e-assessment in practice what they think high quality means, and compared this with the theory of what high quality should be. One striking finding is that the experts considered the most important factors for quality in e-assessment to be psychometrically based, starting with validity and reliability. The practitioners, however, thought the most important factors were practical issues of delivery (security, reliability and accessibility) and how innovative they were able to be. Part of this reflects a difference in perspective, but part of it is also that psychometrics is not as well understood as it should be. One of the report's recommendations is that JISC should set up workshops or other dissemination activities on psychometric principles.
The report also includes much advice on process and good practice, from both the practitioner and expert perspectives. Recommended reading.