“It takes 20 years to build a reputation and five minutes to ruin it”

Posted by John Kleeman

A recent corporate survey reported in Insurance Journal suggests that reputation is the hardest risk to manage. The survey indicates that 81% of companies see reputation as their most significant asset but are challenged in knowing how to protect it.

Warren Buffett famously said, “It takes 20 years to build a reputation and five minutes to ruin it”. So how can organizations avoid those fateful five minutes?

Assessments can be a great tool to mitigate risk to reputation. I’d like to share some ideas on this from my Questionmark colleagues, Eric Shepherd and Brian McNamara.

Let’s start by considering the classic business model shown in the diagram below. A company uses its core capabilities in Production with a supplier network and product/services development to make an Offer to its customers, which it communicates via a sales and marketing Channel, with a supporting Finance structure.

Classic business model. Production, Offer, Channel, Finance

The pink shaded areas in the diagram are where there is reputation risk.

If you make mistakes within Production – in regulatory compliance, processes & procedures or health & safety – this can seriously hurt your reputation. Errors in regulatory compliance or failing to follow processes & procedures can similarly damage reputation in Finance. Assessments can help by confirming health and safety knowledge, checking the competence of employees and testing knowledge of processes & procedures.

Many companies have a bigger challenge in the sales and marketing Channel, as this is more spread out and harder to control. You have to comply with laws on how you sell, both industry-specific and general ones like anti-corruption. The people in your Channel must have product/solution knowledge. And reputation is hurt by overselling and unsatisfied customers.

The diagram below breaks down the Channel into typical parts:

breaking down Channel into Market Messaging and Relationship Management

How can assessments help with reputation challenges in the Channel?

Market Messaging

When you message your customers, there is a risk that your messaging is inappropriate or that messages do not resonate. Most organizations assess customers with surveys to determine if they are “getting it”.

Sales

You need your sales people, whether in-house personnel or partners, to comply with laws and avoid corruption. They need to ensure your customers are satisfied, by selling fairly and not using trickery. Online quizzes and tests are great ways to check your sales people know all the rules and are competent to sell. Observational assessments using tablets or smartphones also let supervisors check team members.

Customer Care

In customer care, a challenge is high staff turnover, requiring lots of training. As in sales, the customer care team need product and process knowledge and need to satisfy customers. Quizzes and tests motivate learning, maintain focus and enable recognition of people who “get it”.

Technical Support

Lastly, every company has challenges when products or services don’t work. How you deal with problems impacts your reputation. The challenge for technical support is to delight the customer and fix problems on the first call.

Quizzes and tests are useful in technical support, but something that works really well for technical teams is a certification program. Skills and knowledge required are often complex, and using assessments to certify gives technical support teams career progression. It also encourages pride in their jobs, leading to better employee retention and better service.


I hope this article helps you realize that online assessments help solve one of the biggest challenges facing business – mitigating risk to reputation. Next time you are making an internal case for online assessments, consider whether your senior management might find reducing reputation risk a compelling reason to deploy assessments.

The Fraud Triangle: Understanding threats to test security

Posted by Julie Delazyn

There can be lots of worry and planning around assessment security, but there’s not a one-size-fits-all solution. The Fraud Triangle described by criminologist Donald Cressey provides a useful lens for identifying security threats and understanding how to deal with them effectively.

The diagram below shows the key issues in fraud. In order for someone to commit fraud (e.g. cheat at a test), he or she must have Motivation, Opportunity and Rationalization.

Motivation

Understanding the stakes of assessments is key to determining motivation and therefore the appropriate level of security for it. For example, if an employee’s promotion is at stake, then the risk of fraud is higher than if the individual were simply taking a low-stakes survey. So it’s important to evaluate the risk before deciding the measures to take and to apply security where the risk is higher.

Opportunity

There are three main areas of opportunity to be addressed:

  • Identity fraud: a participant asks a friend or co-worker to take the test on their behalf
  • Content theft: questions are circulated from one test taker to another; e.g. someone copies an exam and shares it with someone else
  • Cheating: a test taker sits with a friend or uses Internet searches to help answer questions

Rationalization

In order to cheat at a test, people need to justify to themselves why it’s right to do so — perhaps reasoning that the process is unfair or that the test is unimportant. You can do a lot to reduce rationalization for cheating by setting up a fair program and clearly communicating that it is fair. (It’s notable that having a positive, long-term relationship with test takers lowers the risk of cheating: where there is strong trust, people generally would not want to break it over something like an exam.)
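The Fraud Triangle’s key insight is that all three factors must coincide for fraud to occur, so removing any one of them mitigates the risk. As a purely hypothetical sketch (the class and field names below are illustrative, not part of any Questionmark product), the model can be expressed as a simple risk screen:

```python
# Hypothetical sketch of the Fraud Triangle as a risk screen: fraud risk
# is present only when motivation, opportunity and rationalization all
# coincide, so eliminating any single factor breaks the triangle.

from dataclasses import dataclass

@dataclass
class FraudTriangle:
    motivation: bool       # e.g. a promotion or certification is at stake
    opportunity: bool      # e.g. unproctored delivery, reusable content
    rationalization: bool  # e.g. the program is perceived as unfair

    def at_risk(self) -> bool:
        # All three conditions must hold for fraud to be a realistic threat.
        return self.motivation and self.opportunity and self.rationalization

# A high-stakes exam delivered without proctoring, but run as a program
# that test takers perceive as fair:
exam = FraudTriangle(motivation=True, opportunity=True, rationalization=False)
print(exam.at_risk())  # False: no rationalization, so the triangle is broken
```

This mirrors the advice in the post: evaluate stakes (motivation), close the three areas of opportunity, and build a fair, trusted program (rationalization).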

For a fuller description of the Fraud Triangle, see Questionmark CEO Eric Shepherd’s blog articles: Assessment Security and How to Reduce Fraud and Oversight Monitoring and Delivery of Higher Stakes Assessments Safely and Securely. Another good source of information, particularly in the context of compliance, is our white paper: The Role of Assessments in Mitigating Risk for Financial Services Organizations. You can download it free here, after login.

What I Learned Back in the Big Easy

Posted by John Kleeman

Questionmark customers and staff pose for a picture at the dessert reception

I’m just back from the 2012 Questionmark user conference in New Orleans “the Big Easy”. This is our second user conference in New Orleans – we were there in 2005, and it was great to re-visit a lovely city. Here is some of what I learned from Questionmark customers and speakers.

Most striking session: Fantastic to hear from a major US government agency about how they deployed Questionmark for electronic testing after 92 years of paper testing. The agency has a major programme – with 800,000 items and 300,000 test takers. I hope to cover what they are doing in a future blog post.

Most memorable saying: Bill Coscarelli (co-author of Criterion-Referenced Test Development) noting that in his extensive experience, the two things in testing that really make a difference are “test above knowledge” and “use the Angoff method to set a cut/pass score for your tests”.
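For readers unfamiliar with it, the (modified) Angoff method asks each judge to estimate, for every item, the probability that a minimally competent candidate would answer correctly; the cut score is the sum across items of the judges’ average ratings. A minimal sketch, with a hypothetical function name and made-up ratings:

```python
# Minimal illustration of the Angoff standard-setting method: average the
# judges' per-item probability estimates, then sum the averages to get a
# recommended cut score in raw points. Names and data are hypothetical.

def angoff_cut_score(ratings):
    """ratings: one list per judge, each a list of per-item probabilities
    (0-1) that a minimally competent candidate answers the item correctly.
    Returns the recommended cut score in raw points."""
    n_items = len(ratings[0])
    # Average the judges' estimates for each item...
    item_means = [
        sum(judge[i] for judge in ratings) / len(ratings)
        for i in range(n_items)
    ]
    # ...then sum the item averages to get the cut score.
    return sum(item_means)

# Three judges rating a five-item test:
judges = [
    [0.9, 0.6, 0.7, 0.5, 0.8],
    [0.8, 0.5, 0.6, 0.6, 0.9],
    [0.7, 0.6, 0.8, 0.5, 0.7],
]
cut = angoff_cut_score(judges)  # 3.4 out of 5 raw points
```

In practice a candidate scoring at or above the rounded cut score would pass; real Angoff studies also involve judge training and discussion rounds, which this sketch omits.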

Meeting a hero: You’ll have seen me writing on this blog about Bruce C. Aaron’s A-model, and it was great to meet Bruce in person for the first time and hear him talk so intelligently about the A-model as a way of measuring the effectiveness of training – starting from a business problem.

Most fun time: Got to mention the amazing food in New Orleans – especially the catfish and the oysters. But the highlight was being part of a parade down Bourbon Street with a marching band, all the Questionmark conference attendees and a full New Orleans ensemble.

Best conversation: Discussion with a user who’d come to the conference uncertain that Questionmark was the right system for them, but realized on talking to other customers and the Questionmark team that they were only using 10% of what Questionmark could offer them, and left the conference enthused about all they would do going forward.

Session I learned most from: a security panel chaired by Questionmark CEO Eric Shepherd with executives from Pearson Vue, ProctorU, Innovative Exams and Shenandoah University. There was a great discussion about the advantages of test center delivery vs remote proctoring. Test center delivery is highly secure, but testing at home with a remote proctor observing by video link is very convenient, and it was great to hear real experts debating pros and cons.

Most actionable advice: Keynote speaker Jane Bozarth: “what you measure is what you get”, a reminder that what we do with measurement will define behavior in our organization.

It was great to spend time with and learn from customers, partners and our own team. I look forward to seeing many of you again at the 2013 Questionmark user conference at Baltimore’s Inner Harbor, March 3-6, 2013.

The SIF Association’s Assessment Life Cycle

Posted by Julie Delazyn

While educators rely on a wide range of technologies, their primary focus and strength is teaching. Having data systems that work together seamlessly supports them in their efforts and helps drive innovation.

The SIF (Schools Interoperability Framework) Association is working to make this happen. SIF, whose members include software vendors as well as federal, state and local educators, has been defining specifications for efficient, accurate and automatic data movement between applications.

Enhancing interoperability options between assessment management systems is an important part of this. Questionmark’s CEO, Eric Shepherd, wrote about this aspect of SIF’s work in a recent post on his own blog. He drills down into the processes involved in six key data-consuming/generating activities within the life cycle and how they relate to each other:

  • Content Development
  • Pre-Test Administration
  • Test Administration
  • Scoring
  • Reporting
  • Post-Test Administration

To learn more about SIF’s assessment life cycle, click here to read the whole post. If this and other topics about assessment and learning interest you, check out Eric’s blog.

The Value of Open Standards

Posted by Julie Delazyn

We can hardly imagine navigating through our daily tasks without social technologies such as Facebook, LinkedIn, Twitter, and so on. We constantly use them to connect with colleagues and share information – activities that hold a lot of value for learning and assessment communities as well as for individuals! But how do these networks communicate data and information in ways that seem so effortless?

Questionmark CEO Eric Shepherd’s latest blog post, Open Standards, notes how interoperability standards enable us to log in safely to our accounts, work with apps and send information back and forth. He describes several important elements that are defined by open standards:

  • Data
  • Signatures
  • Orchestration
  • Transfer Protocol
  • Wire Line Transport

To illustrate the value of these technologies, Eric highlights OpenID and OAuth and muses on the capabilities these open standards can bring to the world of learning and assessment.

If these and other topics about assessment and learning interest you, check out Eric’s blog.

Key Innovation Drivers for Learning Environments

Posted by Julie Delazyn

It seems we’re in the midst of a revolution that will dramatically change the way we learn. In a recent post about Key Innovation Drivers for Learning Environments on his own blog, our CEO, Eric Shepherd, suggests that key goals of these new environments will be to:

  • Use competency maps to understand where learners are and help them navigate to where they want to be
  • Magically expose content at the moment of need and in the right context

Eric regards technology as crucial to turning us away from the formal, academic learning model (reliant on hard-to-find content such as books stored in libraries) to learning that is more personalized, accessible and shareable. This change will rely on learning content that is available and discoverable, inter-system data sharing that allows personalization and data flows that keep stakeholders engaged.

Eric describes innovation drivers that would help make all this happen and in the process reduce learner distraction, make learning easier anywhere, anytime, and create a more enjoyable, personalized learning experience. These drivers include:

  • Funding of open educational resources based on open standards
  • Inter-system data sharing to allow personalization
  • Standard integrations that allow one environment to launch another system with learner context
  • The creation of registries that make learning resources easier to find and share
  • The use of library science techniques for content classification

There’s a lot more detail in Eric’s post, so click here to read the whole article. If this and other topics about assessment and learning interest you, check out Eric’s blog.