5 Things I Learned at the European Association of Test Publishers Conference Last Week

Posted by John Kleeman

I just attended the Association of Test Publishers' European conference (EATP), held last week in Madrid, and wanted to share some of what I learned.

The Association of Test Publishers (ATP) is the trade association for the assessment industry and promotes good practice in assessment. Questionmark has been a member for a long time, and I am currently on its board of directors. The theme of the conference was “Transforming Assessments: Challenge. Collaborate. Inspire.”

Panel at European Association of Test Publishers

As well as seeing a bit of Madrid (I particularly enjoyed the beautiful Retiro Park), I learned a great deal at the conference. Here are some highlights. (These are all my personal opinions, not endorsed by Questionmark or the ATP.)

1. Skills change. Assessments are often used to measure skills, so as skills change, assessments must change too. There were at least three strands of opinion at the conference. One is that workplace skills are changing rapidly: half of what you learn today will be out of date in five years, less if you work in technology. Another is that many important skills do not change at all: we need to collaborate with others, analyze information and show emotional resilience, and these and other important skills were needed 50 years ago and will still be needed in 50 years’ time. A third, suggested by keynote speaker Lewis Garrad, is that change is not new: there has been rapid change ever since the industrial revolution, and that is still the case now. All of these are probably a little true!

2. Artificial Intelligence (AI). Many sessions at the conference covered AI. Of course, a lot of what gets called AI is in fact just clever marketing of smart computer algorithms. Nevertheless, machine learning and other techniques that might genuinely qualify as AI are on the rise and will be useful tools for making assessments better. The industry needs to be open and transparent in its use of AI. In particular, any use of AI to score people or to identify anomalies that could indicate test cheating needs to be very well built to guard against potential bias.

3. Debate is a good way to learn. There were several debates at the conference, where experts debated issues such as performance testing, how to detect fraud, and privacy vs. test security, with the audience voting before and after. As the Ancient Greeks knew, this is a good format for learning, as you get to see the arguments on both sides presented with passion. I’d encourage others to use debates for learning.

4. Privacy and test security genuinely need balance. I participated in the privacy vs test security debate, and it’s clear that there is a genuine challenge balancing the privacy rights of individual test-takers and the needs of testing organizations to ensure results are valid and have integrity. There is no single right answer. Test-taker rights are not unlimited. And testing organizations cannot do absolutely anything they want to ensure security. The growing rise of privacy laws including the GDPR has brought discussion about this to the forefront as everyone seeks to give test-takers their mandated privacy rights whilst still being able to process data as needed to ensure test results have integrity. A way forward seems to be emerging where test-takers have privacy and yet testing organizations can assert legitimate interests to resist cheating.

5. Tests have to be useful as well as valid, reliable and fair. One of the highlights of the conference was a CEO panel, where Marten Roorda, CEO of ACT; Norihisa Wada, a senior executive at EduLab in Japan; Sangeet Chowfla, CEO of the Graduate Management Admission Council; and Saul Nassé, CEO of Cambridge Assessment, gave their views on how assessment is changing. I moderated this panel (see picture below) and it was great to hear these very smart thought leaders talk about the future. There was widespread agreement that validity, reliability and fairness are key tenets for assessments, but also a reminder that we need “efficacy” too: tests need to be useful for their purpose and valuable to those who use them.

There were many other conference conversations, including sessions on online proctoring, test translation, the update to the ISO 10667 standard, producing new guidelines on technology-based assessment and much, much more.

I found it challenging, collaborative and inspiring and I hope this blog gives you a small flavor of the conference.

Influence the new ISO 10667 standard on workplace assessment

Posted by John Kleeman

An important new ISO international standard for assessments in the workplace is in its final stages and a draft is now available for consultation.

The standard is called ISO 10667 and covers assessment service delivery: procedures and methods to assess people in work and organizational settings. There are two parts, one for service providers and one for clients, the organizations that use assessments. Its scope is very broad, ranging from assessment in appraisals and coaching through 360-degree feedback and psychological assessments to compliance and training assessments, and it covers both paper and computer-delivered assessments.


The two parts are closely related, and the standard sets out good practice and guidelines for areas such as the following:

  • Agreeing procedures between different stakeholders in the assessment process
  • Planning assessment delivery formally
  • Getting informed consent from assessment participants
  • Privacy and data protection on assessment results
  • Security and confidentiality
  • Ensuring that reports arising from the assessment are based fairly on what the assessment measures
  • Providing appropriate assessment feedback
  • Setting out the rights and responsibilities of assessment participants

ISO 10667 is the result of many years’ work by an international committee with representatives from many countries including the US and several in Europe. I have been involved in a very small way with the committee and have been impressed by the professionalism and knowledge of those responsible for writing the standard.

When the standard is formally published, ISO 10667 will give organizations an opportunity to put in place a consistent quality standard for their use of assessment, which, if we get it right, could greatly help consistency and fairness in assessment. For those involved with assessment in the workplace, there is also likely to be pressure from stakeholders to follow the standard, so reviewing it in advance could be sensible.

ISO is organized via national committees, and there is no single international place to obtain the draft and provide comments; you have to do this within your own country. If you are in the UK, you can go via the British Standards Institution (BSI) to view Part 1 and Part 2 (registration needed) and comment. If you are in the US, the Association of Test Publishers is administering the review of the draft standard, and if you want to comment, you should contact wgharris@testpublishers.org in the first instance. If you are in another country, you should contact your national standards organization.

How much the final standard can improve the quality of assessment in work and organizational settings will depend on it being reasonable and practical in the demands it makes on all of us. I’d encourage Questionmark users who are interested in this area to provide input to the consultation process to help ensure ISO gets it right.