Conference Close-up: To proctor or not to proctor?
Posted by Joan Phaup
Now that we have the program in place for the Questionmark 2012 Users Conference, I’m eager to connect with people who will be leading the various breakout sessions.
Dr. Richard Pierce from Shenandoah University’s Bernard J. Dunn School of Pharmacy will be leading a discussion called Proctored versus non-proctored: How does assessment setting affect student achievement on web-based assessments?
I got together with Richard the other day and asked him for some background on the subject.
Why do you think this topic is important?
Increases in online assessment will accompany the growth of education offerings in the future. It's critical to ensure accountability as an educational provider. Researching the impact of various assessment models (proctored or non-proctored) provides evidence about policies that have evolved organically over the years, which may inform future decisions regarding assessment.
Tell me about your efforts at the School of Pharmacy to study the effects on student achievement of proctored and non-proctored web-based assessments.
Our programs come in two varieties and require different assessment proctoring strategies. Our traditional students attend face-to-face and synchronous teleconferenced lectures, and assessments in those settings are proctored. Our non-traditional students receive their instruction asynchronously and take non-proctored tests. We ask ourselves a lot of questions about these options.
For instance, are our efforts to deter academic misconduct working? Is there any academic credibility to people taking non-proctored tests at home? Inquiring minds, such as curriculum committee members, accrediting bodies and instructional designers, want to use evidence-based practices to examine these questions. So we examined the efficacy and impact of online testing technologies and protocols to determine whether the assessment setting (proctored in face-to-face situations or non-proctored in an online distance education setting) affected student achievement.
What would you like to happen during (and after) this breakout session?
I would like to briefly present our small study and explore where it might lead us. The School of Pharmacy hopes to integrate assessment for learning into what is now a primarily lecture-based model in order to facilitate more active classroom processes, while keeping our eye on student outcomes. I would like to see Questionmark Perception become a reporting engine that connects various levels of analysis, such as course outcomes, program outcomes, the assessment matrix, and curriculum planning.
What are you looking forward to at the conference?
My primary goal is to improve student performance through evidence-based, or data-driven, processes. I am interested in learning more about coordinating instruction, objectives and assessment, as well as ways to improve test construction and item performance.
I’d also like to learn how to increase faculty use of online assessment by demonstrating how Questionmark’s reporting features can improve student performance and reduce faculty work. I need to become better versed in the language of item analysis, so I expect to learn more about that, too.