Predicting Success at Entel Begins with Trustworthy Assessment Results [Case Study]

Posted by Julie Delazyn

Entel is one of Chile’s largest telecommunications firms, serving both the consumer and commercial sectors. With more than 12,000 employees across its extended enterprise, Entel provides a broad range of mobile and fixed communications services, IT and call center outsourcing, and network infrastructure services.

The Challenge

Success in the highly competitive mobile and telecommunications market takes more than world-class infrastructure, great connectivity, an established dealer network and an extensive range of retail locations. Achieving optimal on-the-job performance yields a competitive edge in the form of satisfied customers, increased revenues and lower costs. Yet actually accomplishing this objective is no small feat – especially for industry job roles notorious for high turnover rates.

With these challenges in mind, Entel embarked on a new strategy to enhance the predictability of its hiring, onboarding, training and development practices for its nationwide team of 6,000+ retail store and call center representatives.

Certification as a Predictive Metric

Entel conducted an exhaustive analysis – a “big data” initiative that mapped correlations between dozens of disparate data points mined from business systems, HR systems and assessment results – to develop a comprehensive model of the factors contributing to employee performance.
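To make that kind of correlation mapping concrete, here is a minimal sketch in Python. It is not Entel’s actual model: the file name, the column names (assessment_score, sales_per_shift and so on) and the simple Pearson correlation are all illustrative assumptions.

```python
# Illustrative sketch only: the data file and column names below are
# hypothetical stand-ins for merged HR, business-system and assessment data.
import pandas as pd

# Assume one row per representative, with assessment results already
# joined to performance metrics from other systems.
df = pd.read_csv("representatives.csv")

metrics = ["assessment_score", "sales_per_shift",
           "customer_satisfaction", "tenure_months"]

# Map pairwise correlations between the data points.
correlations = df[metrics].corr(method="pearson")

# How strongly do assessment results track each performance outcome?
print(correlations["assessment_score"].sort_values(ascending=False))
```

A production model would go beyond pairwise correlations – regression or survival analysis for turnover, for instance – but a correlation map of this kind is the natural first pass.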

Working with Questionmark OnDemand enabled Entel to create the valid and reliable tests and exams necessary to measure and document representatives’ knowledge, skills and abilities.

Find out more about Entel’s program planning and development, which helped identify and set benchmarks for required knowledge and skills, optimal behaviors and performance metrics; its use of SAP SuccessFactors to manage and monitor performance against many of the key behavioral aspects of the program; and the growing role its trustworthy assessment results play in future product launches and the business as a whole.

Click here to read the full case study.

The tips and tools you need to get the most out of your assessments [Webinars]

Posted by Chloe Mendonca

What’s the big deal about assessments anyway? Though they’ve been around for decades, the assessment and eLearning industry is showing no sign of slowing down. Organisations large and small are using a wide variety of assessment types to measure knowledge, skills, abilities, personality and more.

Join us for one of our upcoming 60-minute webinars and discover the tools, technologies and processes organisations are using worldwide to increase the effectiveness of their assessment programs.

How to transform recruitment and hiring with online testing

This webinar, presented by Dr. Glen Budgell, Senior Strategic HR Advisor at Human Resource Systems Group (HRSG), will discuss the importance and effectiveness of using online testing within HR. This is a must-attend event for anyone exploring the potential of online testing for improving recruitment.

How to Build a Highly Compliant Team in a Fast Moving Market

Organisations across highly regulated industries contend with both stringent regulatory requirements and the need for rigorous assessment programs. With life, limb, and livelihood on the line, safety and compliance require much more than “checking a box”. During this webinar, hosted by Questionmark and SAP, we will examine ways in which organisations can use online assessment to enhance and strengthen their compliance initiatives.

Introduction to Questionmark’s Assessment Management System

Join us for a live demonstration and learn how Questionmark’s online assessment platform provides organisations with the tools to efficiently develop and deliver assessments.

You can also catch this introductory webinar in Portuguese!

Conhecendo a Questionmark e seu Portal de Gestão de Avaliações (“Getting to Know Questionmark and its Assessment Management Portal”) [Portuguese]

Trustworthy Assessment Results – A Question of Transparency

Posted by Austin Fossey

Do you trust the results of your test? As with many questions in psychometrics, the answer is that it depends. Like trust between two people, trust in assessment results has to be earned by the testing body.

Many of us want to trust the testing body implicitly, be it a certification organization, a department of education, or our HR department. When I fill a car with gas, I don’t want to have to siphon the gas back out to make sure the amount matches the volume shown on the pump – I just assume it’s accurate. We put the same faith in our testing bodies.

Just as gas pumps are certified and periodically calibrated, many high-stakes assessment programs are also reviewed. In the U.S., state testing programs are reviewed by the U.S. Department of Education, peer review groups, and technical advisory boards. Certification and licensure programs are sometimes reviewed by third-party accreditation programs, though these accreditations usually check only that certain requirements are met, without evaluating how well they were executed.

In her op-ed, Can We Trust Assessment Results?, Eva Baker argues that the trustworthiness of assessment results is dependent on the transparency of the testing program. I agree with her. Participants should be able to easily get information on the purpose of the assessment, the content that is covered, and how the assessment was developed. Baker also adds that appropriate validity studies should be conducted and shared. I was especially pleased to see Baker propose that “good transparency occurs when test content can be clearly summarized without giving away the specific questions.”

For test results to be trustworthy, transparency also needs to extend beyond the development of the assessment to include its maintenance. Participants and other stakeholders should have confidence that the testing body is monitoring its assessments, and that a plan is in place should their results become compromised.

In their article, Cheating: Its Implications for ABFM Examinees, Kenneth Royal and James Puffer discuss cases where widespread cheating affects the statistics of the assessment, which in turn mislead test developers by making items appear easier. The effect can be an assessment that yields invalid results. Though specific security measures should be kept confidential, testing bodies should have a public-facing security plan that explains their policies for addressing improprieties. This plan should address policies for the participants as well as for how the testing body will handle test design decisions that have been impacted by compromised results.
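To see why compromised results mislead test developers, consider classical item difficulty – the p-value, or proportion of respondents answering an item correctly. The toy simulation below is my own illustration, not drawn from Royal and Puffer’s article; it shows how a modest share of examinees with pre-knowledge inflates an item’s p-value, making the item look easier than it really is.

```python
# Toy illustration of item-statistic drift under cheating; all numbers
# are invented for demonstration, not taken from the ABFM study.
import numpy as np

rng = np.random.default_rng(0)
n_honest, n_cheaters = 900, 100
true_difficulty = 0.55  # an honest examinee answers correctly 55% of the time

honest = rng.random(n_honest) < true_difficulty
cheaters = np.ones(n_cheaters, dtype=bool)  # pre-knowledge: always correct

p_clean = honest.mean()                                     # close to 0.55
p_contaminated = np.concatenate([honest, cheaters]).mean()  # inflated

print(f"p-value, honest examinees only: {p_clean:.2f}")
print(f"p-value with 10% pre-knowledge: {p_contaminated:.2f}")
```

A test developer who takes the inflated p-value at face value might retire the item as “too easy” or miscalibrate cut scores – exactly the kind of design decision a public-facing security plan should cover.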

Even under ideal circumstances, mistakes can happen. Readers may recall that, in 2006, thousands of students received incorrect scores on the SAT, arguably one of the best-developed and most carefully scrutinized assessments in U.S. education. The College Board (the testing body that runs the SAT) handled the situation as well as it could, publicly sharing the impact of the issue, the reasons it happened, and its policies for handling the incorrect results. Others may feel differently, but I trust SAT scores more now that I have observed how the College Board communicated and rectified the mistake.

Most testing programs are well-run, professional operations backed by qualified teams of test developers, but there are occasional junk testing programs, such as predatory certificate programs, that yield useless, untrustworthy results. It can be difficult to tell the difference, but like Eva Baker, I believe that organizational transparency is the right way for a testing body to earn the trust of its stakeholders.

Licensing Open Standards: What Can We Learn From Open Source?

Posted by Steve Lay

At the recent Questionmark Users Conference I gave an introductory talk on Open Source Software in Learning, Education and Training. When preparing for the talk, it really came home to me how important the work of the Open Source Initiative (OSI) and Creative Commons is. These organizations take a very complex subject – the licensing of intellectual property – and distill it into a small set of common licenses that can be widely understood.

I’ve always been an advocate of distributing technical standards under these standard licenses where possible. Standard licenses allow developers who use them to be confident of the legal foundations of their work without the cumbersome process of evaluating each license on a case-by-case basis. So I was delighted to see an excellent blog post by Chuck Allen from the HR-XML consortium discussing this issue and providing a detailed analysis of several such licenses, highlighting the different approaches taken by different consortia.

The community reaction to the temporary withdrawal of the draft QTI specification has already been discussed by John Kleeman in this blog, in Why QTI Really Matters. What struck me in that case was the uncertainty among community members about the license, and about the impact of the withdrawal on their rights to develop and maintain software based on the draft.

This problem is not unique to e-learning, as Chuck Allen demonstrates with his analysis of the licenses used by the organizations he studied in the related HR field. I’d echo his call for more convergence on the licenses used for technical standards. In fact, I’d go further. The W3C publishes much of the core work on which other standards rely – for example, HTML, used for web pages, and XML, used by almost all modern standards initiatives. Surely adopting the same approach would be the simplest way to license open standards based on these technologies?

Just as organizations like GNU, BSD, MIT and Apache have given their names to commonly used open source code licenses, I look forward to a time when I can choose the “W3C” open standards license and everyone will know what I mean.