2014 South African Users Conference – Addressing Compliance

Posted by Austin Fossey

We are back from the first South African Users Conference, which was hosted by Bytes People Solutions. As at all of our users conferences, the most valuable aspect of the gathering was hearing from our customers and potential customers, both in presentations and in informal conversations.

Many attendees manage assessment programs for large academic or commercial institutions, and I was struck by their teams' organizational skills. From my conversations, it sounds as if many of these program managers have to strike a balance between traditional practices at their organizations and the need to adopt innovative strategies to improve measurement practices. For example, one program manager spoke about helping item writers transition from writing items in MS Word to writing them in Questionmark Live. The people I spoke to appeared to be pushing the envelope of their assessment capabilities, helping their stakeholders through technological transitions while simultaneously delivering thousands of assessments. It was impressive.

Compliance was a recurring theme. In the U.S., test developers are always collecting evidence to demonstrate the legal defensibility of their assessments, and we often turn to The Standards for Educational and Psychological Testing for guidance (the latest edition was released just last week). Though the legal and cultural expectations for test development may differ slightly in other regions, no modern test developer is exempt from accountability. Demonstrating compliance with organizational or legal requirements seemed to be a big consideration for many attendees.

Regardless of what compliance means to different organizations, one thing was the same for everyone: demonstrating compliance means having accurate, easily accessed data. I noticed that many clients were able to cite data-backed evidence for the decisions they made in their testing programs to meet their stakeholders' compliance requirements. Some of these data came from Questionmark through our APIs and assessment results, but these presenters had also clearly researched other important factors that affect the validity of the results.

For example, presenters talked about the evidence they gathered to support the use of computer-based testing over paper-and-pencil tests. Another presenter shared qualitative data from interviews with subject matter experts about their impressions of Questionmark's authoring tools. These decisions affect the delivery mode and task models of the assessment, which directly bear on the validity of the results, so it is encouraging to see test developers documenting their rationales for these kinds of decisions.

All in all, it was an impressive group of professionals who gathered in Midrand, and I am sure that I learned just as much (if not more) from the participants as they did from me. Special thanks to everyone who attended and presented!