5 Things I Learned at the European Association of Test Publishers Conference Last Week

Posted by John Kleeman

I attended the Association of Test Publishers' European conference (E-ATP), held last week in Madrid, and wanted to share some of what I learned.

The Association of Test Publishers (ATP) is the trade association for the assessment industry and promotes good practice in assessment. Questionmark has been a member for a long time, and I am currently on the ATP board of directors. The theme of the conference was “Transforming Assessments: Challenge. Collaborate. Inspire.”

Panel at European Association of Test Publishers

As well as seeing a bit of Madrid (I particularly enjoyed the beautiful Retiro Park), I took away a lot from the conference itself. Here are some of the things I learned. (These are all my personal opinions, not endorsed by Questionmark or the ATP.)

1. Skills change. Assessments are often used to measure skills, so as skills change, assessments must change too. There were at least three strands of opinion on this. One is that workplace skills are changing rapidly – half of what you learn today will be out of date in five years, less if you work in technology. Another is that many important skills do not change at all – we need to collaborate with others, analyze information and show emotional resilience; these and other important skills were needed 50 years ago and will still be needed in 50 years’ time. And a third, suggested by keynote speaker Lewis Garrad, is that change is not new: ever since the industrial revolution there has been rapid change, and that remains the case now. All of these are probably a little true!

2. Artificial Intelligence (AI). Many sessions at the conference covered AI. Of course, much of what gets called AI is in fact clever marketing of smart computer algorithms. Nevertheless, machine learning and other techniques that might genuinely qualify as AI are definitely on the rise and will be a useful tool for making assessments better. The industry needs to be open and transparent in its use of AI. In particular, any use of AI to score people or to identify anomalies that could indicate test cheating needs to be very carefully built to guard against potential bias.

3. Debate is a good way to learn. There were several debates at the conference, where experts argued issues such as performance testing, fraud detection, and privacy vs. test security, with the audience voting before and after. As the Ancient Greeks knew, this is a good format for learning, because you get to see the arguments on both sides presented with passion. I’d encourage others to use debates for learning.

4. Privacy and test security genuinely need balance. I participated in the privacy vs. test security debate, and it’s clear that there is a genuine challenge in balancing the privacy rights of individual test-takers against the need of testing organizations to ensure that results are valid and have integrity. There is no single right answer. Test-taker rights are not unlimited, and testing organizations cannot do absolutely anything they want in the name of security. The rise of privacy laws, including the GDPR, has brought this discussion to the forefront, as everyone seeks to give test-takers their mandated privacy rights whilst still being able to process the data needed to ensure test results have integrity. A way forward seems to be emerging in which test-takers have privacy and testing organizations can assert legitimate interests to resist cheating.

5. Tests have to be useful as well as valid, reliable and fair. One of the highlights of the conference was a CEO panel, where Marten Roorda, CEO of ACT; Norihisa Wada, a senior executive at EduLab in Japan; Sangeet Chowfla, CEO of the Graduate Management Admission Council; and Saul Nassé, CEO of Cambridge Assessment gave their views on how assessment is changing. I moderated this panel (pictured above), and it was great to hear these very smart thought leaders talk about the future. There is widespread agreement that validity, reliability and fairness are key tenets for assessments, but the panel was also a reminder that we need “efficacy” – that is, tests need to be useful for their purpose and valuable to those who use them.

There were many other conference conversations, including sessions on online proctoring, test translation, the update to the ISO 10667 standard, the production of new guidelines on technology-based assessment and much, much more.

I found the conference challenging, collaborative and inspiring, and I hope this blog gives you a small flavor of it.
