How the railway industry uses assessments to promote safety

Posted by John Kleeman

I’ve been learning how the railway industry uses assessments to maintain competence. As a frequent train passenger, I find it reassuring that the industry and its regulators carefully enforce a safety-first mantra, and that railway and rapid transit companies (sometimes with Questionmark software!) use assessments to check the competency of rail workers, especially those in safety-critical roles.

There are many government and industry bodies that oversee and promote safety, including the US Federal Railroad Administration, the UK Rail Safety and Standards Board (RSSB) and the European Railway Agency.

The RSSB have produced a very useful document, Good practice on Competence Review and Assessments. Here is a table, summarized from that document, giving the pros and cons of different kinds of assessment.

| Type of assessment | What it means | Pros | Cons |
|---|---|---|---|
| Observational | Observer watches the participant doing normal work | Valid and reliable, as it provides first-hand information about performance in real conditions. Captures information about process and behavior, not just outcomes | Risk of a “special performance” as someone behaves differently whilst being observed. Will not cover emergencies and other non-routine work. Needs good planning |
| Simulation | Participant completes an activity which is not real work but replicates real work closely | Provides performance evidence for non-routine work. Measures response to emergencies | Heavy on resources. Needs careful planning to be valid and reliable |
| Tests | Formal assessment of knowledge on paper or on screen | Consistent and objective. Good for assessing technical knowledge. Cost-effective for large numbers of people | Requires skill to make valid and reliable |
| Work products | Examining the outcomes of work done, e.g. a document written or a machine serviced | Provides evidence of performance in real work conditions | Need to verify authenticity. Shows the outcome but not the route taken to get there |
| Written reports | Report from the participant or a colleague describing competence on the job | Provides evidence to support other methods | Need to check authenticity. Memory is fallible. Requires writing skills |
| Oral interview | Conversation where performance is described and questioned | Allows in-depth exploration of knowledge and understanding | Relies on the skills of assessors. Hard to make consistent and objective |

It’s interesting to see the strengths and weaknesses of different kinds of assessments from such a safety-focused industry and to consider how using different assessments together can cover more ground and reduce safety risks.

How can assessments help prevent bribery?

Posted by John Kleeman

Increasingly, companies around the world are being required to improve their procedures and policies to avoid participating in bribery and corruption. The US government is enforcing its Foreign Corrupt Practices Act: in 2010, the US imposed a total of US$1 billion in penalties under this act. And in 2009, a UK insurance company was fined more than £5 million for bribery. In giving this penalty, the UK Financial Services Authority advised the insurance company that it

“should have ensured that appropriate members of staff … received focused training in relation to this area and were tested on their understanding of the relevant risks involved. Effective training and testing in this regard would have emphasized to staff the importance of carrying out effective due diligence”

Assessments can help in two main ways. First, as indicated above, a quiz or test after training allows you to confirm that the training has been understood. This is put very well in an excellent 2010 blog article from the US lawyers Jones Day, who said:

Part and parcel of adequately communicating and training on company policies and processes is assessing the effectiveness of the training. At a minimum, a meaningful training assessment includes a “quiz” during or at the end of the training that is “graded” to ensure that the employee has learned at the least the required concepts. The results of such grades also provide important feedback regarding the content of the training materials and where the training needs to be clarified or improved.*

Second, employee surveys (including course evaluation surveys, employee attitude surveys and quizzes that allow comments) can enable employees to alert you that something may not be right. UK Ministry of Justice guidance says:

Staff surveys, questionnaires and feedback from training can also provide an important source of information on effectiveness and a means by which employees and other associated persons can inform continuing improvement of anti-bribery policies.

On July 1st (this Friday), the UK’s new Bribery Act comes into force, strengthening the law in the UK. An interesting facet of the new law is that it applies to companies that do business in the UK, not just UK companies. Companies that implement adequate anti-bribery procedures gain a defense under the Act, so there is a real incentive to put effective procedures in place.

Here is a quiz to help you learn about the UK Bribery Act; the quiz will be slightly different each time you take it, as some of the questions are pulled at random from a pool. Feel free to answer the quiz embedded in the blog below or pass the URL on to colleagues: www.questionmark.com/go/briberyact. This quiz won’t let you master the Bribery Act on its own, but quizzes and tests like this one can help ensure that your employees understand the rules and help them follow them.
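For the technically curious, drawing some questions at random from a larger pool is what makes each attempt different. Here is a minimal sketch of that mechanism in Python; the placeholder questions are purely illustrative and are not the actual quiz content.

```python
import random

# Illustrative sketch: a quiz is built from some fixed questions plus a
# random, non-repeating draw from a larger pool, so each attempt looks
# slightly different. These placeholder prompts are NOT the real quiz.

FIXED = ["Which companies does the UK Bribery Act apply to?"]
POOL = [
    "What defense does implementing adequate procedures provide?",
    "When does the UK Bribery Act come into force?",
    "Who enforces the US Foreign Corrupt Practices Act?",
]

def build_quiz(num_from_pool=2):
    # random.sample draws without repetition within a single attempt
    return FIXED + random.sample(POOL, num_from_pool)

print(build_quiz())  # differs from run to run
```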

*Jones Day have not reviewed nor endorsed this blog post.

Using the Test Center Analysis report in Questionmark Analytics

This week’s “how to” article highlights the “Test Center Analysis report,” one of the Questionmark Analytics reports now available in Questionmark OnDemand.

  • What it does: The Test Center Analysis Report allows users to evaluate the performance of participants who have been scheduled to take assessments at Test Centers. The report flags abnormal patterns that may indicate statistically significant differences in test scores between centers.
  • Who should use it: Assessment, learning and education professionals can use this report to review performance at various Test Centers and evaluate whether there are significant differences between them. They could use the report to spot potential test security issues: for example, do participants at a specific Test Center do much better on assessments than those at others?
  • How it looks: This report offers a lot of information graphically and compactly, making it easy to interpret data quickly. It is broken down into two components (graphs).

1. Average assessment scores achieved by participants at each Test Center

  • The triangles represent the mean assessment score at each Test Center. Alongside each triangle are vertical bars: 95% confidence intervals for that mean. The length of each interval reflects how much the underlying results vary; long bars indicate varied data, while short bars indicate high confidence in the mean value.
  • Whether the bars overlap indicates whether the means differ significantly. If the vertical bars of two Test Centers overlap, there is not a statistically significant difference between their means; if they do not overlap, there is a statistically significant difference (a computational sketch of this comparison follows this list).
  • Note that in this example, Test Center 1 had the highest mean. You will also see that the confidence intervals do not overlap: the results for this Test Center are statistically significantly higher than for other Test Centers.
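Questionmark has not published the report’s internal calculations, but the comparison the graph draws can be sketched in a few lines of Python. This is a minimal sketch assuming results are available as (test center, score) pairs and using the common normal approximation (mean ± 1.96 × standard error) for the 95% interval; the names are illustrative, not Questionmark’s API.

```python
import math
from collections import defaultdict

def center_intervals(results):
    """Compute the mean score and a 95% confidence interval per Test Center.

    `results` is an iterable of (test_center, score) pairs; each center is
    assumed to have at least two results so the sample variance is defined.
    """
    by_center = defaultdict(list)
    for center, score in results:
        by_center[center].append(score)

    intervals = {}
    for center, scores in by_center.items():
        n = len(scores)
        mean = sum(scores) / n
        variance = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
        half_width = 1.96 * math.sqrt(variance / n)  # normal-approximation 95% CI
        intervals[center] = (mean - half_width, mean + half_width, mean, n)
    return intervals

def significantly_different(a, b):
    """Non-overlapping 95% intervals flag a significant difference in means."""
    return a[1] < b[0] or b[1] < a[0]
```

The count n kept alongside each interval is the same volume information that the second graph shows.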

2. Number of results that occurred in each Test Center

  • This volume information can help in planning for Test Center logistics.

A PDF and an analysis-friendly CSV version of the report can also be generated.

ETL: The Power behind Questionmark Analytics

Readers of this blog will be aware that Questionmark is transforming the way we provide reports using our new Questionmark Analytics tool.  The screenshots shared in previous posts demonstrate the new report types and the modern user interface, but what they don’t show is the technology that now makes it possible to provide rapid reporting even for the largest data sets.  In this post we’ll take a look “under the hood” at this technology.

E is for Extract

Running reports can be a time-consuming job. Thousands or even millions of database records need to be collected together, cross-matched and reformatted before being sent to the user. Unfortunately, data doesn’t stand still. There is nothing to stop you from taking an assessment and generating a new result in the database while someone else is trying to get a report on the very same assessment. Conflicts between these two demands can cause databases to slow down, especially when you start scaling up. The solution is to extract the data from the main database and generate the reports elsewhere.
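Questionmark hasn’t published this code, but the pattern is straightforward to sketch. Here is a minimal illustration of the extract step in Python, assuming a watermark (the highest result ID already copied) so each run touches the live database only briefly and only for new rows; the table and column names are invented for the example.

```python
import sqlite3

def extract_new_results(live_db_path, last_extracted_id):
    """Copy result rows added since the previous extraction out of the
    live database, so reporting never competes with participants
    writing new results."""
    conn = sqlite3.connect(live_db_path)
    try:
        rows = conn.execute(
            "SELECT result_id, participant, assessment_id, score, finished_at"
            " FROM results WHERE result_id > ? ORDER BY result_id",
            (last_extracted_id,),
        ).fetchall()
    finally:
        conn.close()  # release the live database as quickly as possible
    return rows
```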

T is for Transform

When you take an assessment with Questionmark Perception the results are written to the database in the simplest, fastest way. This enables us to scale up the delivery of assessments to cater for vast numbers of participants. But there is a cost to this speed and simplicity. The data is not organized in a very convenient way for reporting; as a result, complex reports require much more data processing to create. The solution is to transform the data so that reports can be created quickly when they are needed.
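Continuing the sketch from above (same invented schema), the transform step reshapes those raw delivery-time rows into flat, report-friendly records, pre-computing derived values such as percentage scores so that later queries stay simple.

```python
def transform(raw_rows, max_score=100.0):
    """Reshape raw result rows into flat records ready for reporting.

    Derived values are computed once here rather than in every report.
    """
    records = []
    for result_id, participant, assessment_id, score, finished_at in raw_rows:
        records.append((
            result_id,
            participant,
            assessment_id,
            round(100.0 * score / max_score, 1),  # percentage score
            finished_at[:10],                     # ISO date, handy for grouping
        ))
    return records
```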

L is for Load

Having extracted and transformed our data, the last step in the process is to load it into a special-purpose database for generating the reports. We call this database our “Results Warehouse”, because the science of ETL is part of the larger field of Data Warehousing.
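To complete the sketch, the load step bulk-inserts the transformed records into the separate reporting database, which plays the Results Warehouse role here; again the schema is illustrative, not Questionmark’s actual design.

```python
import sqlite3

def load(warehouse_db_path, records):
    """Bulk-insert transformed records into the reporting-only database."""
    conn = sqlite3.connect(warehouse_db_path)
    with conn:  # one transaction for the whole batch
        conn.execute(
            "CREATE TABLE IF NOT EXISTS result_facts ("
            " result_id INTEGER PRIMARY KEY, participant TEXT,"
            " assessment_id INTEGER, score_pct REAL, finished_date TEXT)"
        )
        conn.executemany(
            "INSERT OR REPLACE INTO result_facts VALUES (?, ?, ?, ?, ?)",
            records,
        )
    conn.close()
```

Reports then query result_facts directly, without ever touching the live database.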

This new architecture provides rapid, scalable reporting that doesn’t interfere with the participant experience. Learn more about Questionmark Analytics here.

Moments of Contingency: How Black and Wiliam conceptualize formative assessment


Paul Black (left) and Dylan Wiliam

Posted by John Kleeman

I’ve always believed instinctively that assessment is the cornerstone of learning. I’ve recently read an interesting paper by the eminent Professors Paul Black and Dylan Wiliam that conceptualizes this powerfully.

In Developing the theory of formative assessment, published in 2009 in the journal Educational Assessment, Evaluation and Accountability, they describe how formative assessment gives “Moments of Contingency” in instruction – critical points where learning changes direction depending on an assessment.

In their model, assessment gives you information to take decisions to direct learning, and so makes instruction and learning more effective than it would have been otherwise. There are many paths that instruction can go down, and formative assessment helps people choose the right path.

Black and Wiliam’s formal definition of formative assessment describes how “evidence about student achievement is elicited, interpreted, and used by teachers, learners, or their peers, to make decisions about the next steps in instruction that are likely to be better, or better founded, than the decisions they would have taken in the absence of the evidence that was elicited”.

Like Professor David Nicol, about whom I blogged earlier, they make the important point that formative assessment is not only instructor-led but also involves interaction with peers and self-assessment. Black and Wiliam have done most of their work in education, but their message resonates with the 70:20:10 model currently sweeping corporate learning. Increasingly we are realizing that interaction with learning peers is a critical part of learning: peers can give you feedback, questions or insights that help you learn. As a learner, you can regulate your own learning and are responsible for it, and assessments help you make the decisions on how to adjust your learning.

Importing content into Questionmark Live

Questionmark Live, Questionmark’s browser-based authoring tool, makes it easy for subject matter experts to collaborate in writing and editing assessment questions. They can create new questions or import content from elsewhere.

They can import text-based content from many different systems:

  • LXR Test 6.1 Merge
  • Blackboard 6/8/9 Pool
  • Blackboard Upload
  • Questionmark Live CSV – This new format lets users quickly create questions in Excel or any other tool that can save a CSV file.
  • Questionmark Qpacks (Questionmark QML files)
  • Moodle XML (see the sketch after this list)
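To give a feel for what such text-based content looks like, here is a minimal sketch that builds a Moodle XML file containing one multiple-choice question, using only the Python standard library. The question content is invented for the example; the element names follow Moodle’s published XML format.

```python
import xml.etree.ElementTree as ET

def multichoice(name, prompt, correct, wrong):
    """Build one Moodle XML multiple-choice question element."""
    q = ET.Element("question", type="multichoice")
    ET.SubElement(ET.SubElement(q, "name"), "text").text = name
    questiontext = ET.SubElement(q, "questiontext", format="html")
    ET.SubElement(questiontext, "text").text = prompt
    # fraction="100" marks the correct answer; "0" marks distractors
    for answer_text, fraction in [(correct, "100")] + [(w, "0") for w in wrong]:
        answer = ET.SubElement(q, "answer", fraction=fraction)
        ET.SubElement(answer, "text").text = answer_text
    ET.SubElement(q, "single").text = "true"  # exactly one correct answer
    return q

quiz = ET.Element("quiz")
quiz.append(multichoice(
    "Capital cities",
    "What is the capital of France?",
    "Paris",
    ["Lyon", "Marseille"],
))
ET.ElementTree(quiz).write("questions.xml", encoding="UTF-8", xml_declaration=True)
```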

The imported questions will be stored in their own question sets, where they can be reviewed and modified before they are exported to Questionmark Perception.

To import questions, go to the Question Sets page and click Import Questions or Import Qpack from the Import menu, then assign a title and description to your question set.

You’ll also need to select the format of the questions you want to import and browse to the file you wish to import. You’ll be led through the process of uploading your file to the Questionmark Live server. Once your question set is there you will be able to view and edit the questions just as you would any questions created within Questionmark Live.
