Assessment types and their uses: Needs Assessments

Posted by Julie Delazyn

Assessments have many different purposes, and to use them effectively it’s important to understand their context and uses within the learning process.

Last week I wrote about formative assessments, and today I’ll explore needs assessments.

Typical uses:

  • Determining the knowledge, skills, abilities and attitudes of a group to assist with gap analysis and courseware development
  • Determining the difference between what a learner knows and what they are required to know
  • Measuring against requirements to determine a gap that needs to be filled
  • Helping training managers, instructional designers, and instructors work out what courses to develop or administer to satisfy their constituents’ needs
  • Determining if participants were routed to the right kind of learning experiences

Types:

  • Job task analysis (JTA) survey
  • Needs analysis survey
  • Skills gap survey

Stakes: low


Example:
A food service company can run a needs analysis survey to identify differences between the knowledge of subject matter experts and people on the job. Evaluating the different groups’ scores, as shown in the gap analysis chart below, reveals the overall differences between the experts’ and workers’ knowledge. But more significantly, it diagnoses a strong need for workers to improve their understanding of food safety. Information like this can inform the organization’s decision about further staff development plans and learning programs.
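
To make the arithmetic behind such a chart concrete, here is a minimal sketch in Python. The topic names and scores are invented for illustration and are not taken from real survey data:

```python
# Hypothetical sketch: comparing expert and worker survey scores by topic.
# All names and numbers below are invented for illustration.

expert_scores = {"Food safety": 92, "Hygiene": 88, "Equipment handling": 85}
worker_scores = {"Food safety": 61, "Hygiene": 79, "Equipment handling": 80}

# Compute the knowledge gap per topic and list the largest gaps first.
gaps = {topic: expert_scores[topic] - worker_scores[topic] for topic in expert_scores}
for topic, gap in sorted(gaps.items(), key=lambda item: item[1], reverse=True):
    print(f"{topic}: gap of {gap} points")

# Output:
# Food safety: gap of 31 points
# Hygiene: gap of 9 points
# Equipment handling: gap of 5 points
```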

For more details about assessments and their uses, check out the white paper, Assessments Through the Learning Process. You can download it free here, after login. Another good source for testing and assessment terms is our glossary.

In the coming weeks I’ll take a look at two other assessment types:

  • Reaction
  • Summative

The Fraud Triangle: Understanding threats to test security

Posted by Julie Delazyn

There can be lots of worry and planning around assessment security, but there’s not a one-size-fits-all solution. The Fraud Triangle described by criminologist Donald Cressey provides a useful lens for identifying security threats and understanding how to deal with them effectively.

The diagram below shows the key issues in fraud. In order for someone to commit fraud (e.g. cheat at a test), he or she must have Motivation, Opportunity and Rationalization.

Motivation:

Understanding the stakes of an assessment is key to determining motivation, and therefore the appropriate level of security. For example, if an employee’s promotion is at stake, the risk of fraud is higher than if the individual were simply taking a low-stakes survey. So it’s important to evaluate the risk before deciding which measures to take, and to apply stronger security where the risk is higher.

Opportunity:

There are three main areas of opportunity to be addressed:

  • Identity fraud: a participant might ask a friend or co-worker to take the test instead of him/her
  • Content theft: where questions are circulated from one test taker to another; e.g. someone copies an exam and shares it with someone else
  • Cheating: where a test taker sits with a friend or uses Internet searches to help answer questions

Rationalization:

In order to cheat at a test, people need to justify to themselves why it’s right to do so — perhaps reasoning that the process is unfair or that the test is unimportant. You can do a lot to reduce rationalization for cheating by setting up a fair program and clearly communicating that it is fair. (It’s notable that having a positive, long-term relationship with test takers lowers the risk of cheating: where there is strong trust, people generally would not want to break it over something like an exam.)

For a fuller description of the Fraud Triangle, see Questionmark CEO Eric Shepherd’s blog articles: Assessment Security and How to Reduce Fraud and Oversight Monitoring and Delivery of Higher Stakes Assessments Safely and Securely. Another good source of information, particularly in the context of compliance, is our white paper: The Role of Assessments in Mitigating Risk for Financial Services Organizations. You can download it free here, after login.

What is the Angoff Method?

Posted by Julie Delazyn

When creating tests that define levels of competency as they relate to performance, it’s essential to use a reliable method for establishing defensible pass/fail scores.

One of these is the Angoff Method, which uses a focus-group approach for this process. This method has a strong track record and is widely accepted by testing professionals and courts.

Subject-matter experts (SMEs) review each test question and then predict what proportion of minimally qualified candidates would answer the item correctly. The average of the judges’ predictions across all test questions is used to calculate the passing percentage (cut score) for the test.
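
As a rough illustration of that calculation, here is a minimal sketch in Python. The judges’ ratings are invented, and a real Angoff study would typically involve more items, more judges and rounds of discussion:

```python
# Hypothetical sketch of an Angoff cut-score calculation.
# Each judge estimates the probability that a minimally qualified
# candidate would answer each item correctly (ratings are invented).

from statistics import mean

# ratings[judge][item]: three SMEs rating a five-item test
ratings = [
    [0.70, 0.60, 0.85, 0.50, 0.75],  # Judge 1
    [0.65, 0.55, 0.90, 0.45, 0.70],  # Judge 2
    [0.75, 0.60, 0.80, 0.55, 0.70],  # Judge 3
]

# Average each item's ratings across judges, then average across items
# to obtain the passing percentage (cut score).
item_means = [mean(item) for item in zip(*ratings)]
cut_score = mean(item_means) * 100
print(f"Cut score: {cut_score:.1f}%")  # Cut score: 67.0%
```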

Basing cut scores on empirical data instead of choosing arbitrary passing scores helps test developers produce legally defensible tests that meet the Standards for Educational and Psychological Testing. The Angoff Method offers a practical way to achieve this.

View this SlideShare presentation to learn more:

Ten assessment types that can help mitigate risk

Posted by Julie Delazyn

Mitigating risk – most notably the risk of non-compliance – is a key component of success in the financial services industry. Other risks abound, too, such as losing customers and/or good employees.

If employees don’t understand and follow the processes that organizations put in place to mitigate risk and maintain compliance, the risk of non-compliance increases – and a business is less likely to succeed.

Online assessments do a lot to help ensure that employees know the right procedures and follow them. Here are 10 assessment types that play essential roles:

(1) Internal exams – check your employees are competent

Some companies administer internal competency exams annually – and some do so more frequently. It’s also good to give these exams when regulations change and new products are introduced. These exams address compliance with competency requirements and at the same time help employees prove they know how to do their jobs.

(2) Knowledge checks – confirm learning and document understanding

Running knowledge checks or post-course tests (also called Level 2s) right after training helps you find out whether the training has been understood. This also helps reduce forgetting.

(3) Needs analysis / diagnostic tests – allow testing out

These tests, which measure current skills and knowledge about particular topics, can be used as training needs assessments and/or pre-requisites for training. And if someone already has the critical skills and knowledge, he or she can “test out” and avoid unnecessary and costly training.

(4) Observational assessments – measure skills via role plays, customer visits

When checking practical skills, it’s common to have an observer monitor an employee to see if they are following correct procedures. With so many people using smartphones and tablets, such as the Apple iPad, it’s viable to use a mobile device for these assessments — which are great for measuring behavior, not just knowledge.

(5) Course evaluation surveys

These surveys, also called “level 1” or “smile sheet” surveys, let you check employee reaction following training. They are a key step in evaluating training effectiveness. You can also use them to gather qualitative information on topics such as how well policies are applied in the field. Here is an example fragment from a course evaluation survey:

(6) Employee attitude surveys

Employee attitude surveys ask questions of your workforce or sections of it. HR departments often use them to measure employee satisfaction, but they can also be used in corporate compliance to determine attitudes about ethical and cultural issues.

(7) Job task analysis surveys – to fairly identify tasks against which to check compliance

How do you know that your competency assessments are valid and that they address what is really needed for competence in a job role? A job task analysis (JTA) survey asks people who are experts in a job how important each task is for the job role and how often it is done. Analysis of JTA data lets you weight the number of questions associated with topics and tasks so that a competency test fairly reflects the importance of different elements of a job role. Here is an extract from a typical JTA survey:
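
To make the weighting step concrete, here is a minimal sketch in Python. The task names, the ratings and the importance-times-frequency weighting are all assumptions for illustration; other weighting schemes are equally possible:

```python
# Hypothetical sketch: turning JTA ratings into question counts for a test.
# Importance and frequency are average expert ratings on a 1-5 scale
# (all values invented for illustration).

tasks = {
    "Take customer orders": {"importance": 4.5, "frequency": 5.0},
    "Handle food safely":   {"importance": 5.0, "frequency": 4.0},
    "Operate the register": {"importance": 3.0, "frequency": 4.5},
}
total_questions = 40

# One common approach: weight each task by importance x frequency,
# then allocate questions in proportion to its share of the total weight.
# (Rounding may require a small manual adjustment to hit the exact total.)
weights = {name: r["importance"] * r["frequency"] for name, r in tasks.items()}
total_weight = sum(weights.values())
for name, weight in weights.items():
    n_questions = round(total_questions * weight / total_weight)
    print(f"{name}: {n_questions} questions")
```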

(8) Practice tests

These often use questions that have been retired from the exam question pool but remain valid. Candidates can take practice tests to assess their study needs and/or gain experience with the technology and user interface before they take a real exam. This helps to reduce exam anxiety, which is especially important for less computer-literate candidates. Practice tests are also helpful when deploying new exam delivery technology.

(9) Formative quizzes during learning – to help learning

These are the quizzes we are all familiar with: short quizzes during learning that inform instructors and learners about whether learners have understood the material or need deeper instruction. Such quizzes can also diagnose misconceptions and help reduce forgetting.

(10) 360-degree assessments of employees

A 360-degree assessment solicits opinions about an employee’s competencies from his/her superiors, direct reports and peers. It will usually cover job-specific competencies as well as general competencies such as integrity and communication skills. In compliance, such surveys can help you identify issues in people’s behavior and competencies that need review.

For more in-depth coverage of this subject, read our white paper, The Role of Assessments in Mitigating Risk for Financial Services Organizations, which you can download free after login or sign-up.

Easier collaboration in Questionmark Live

Posted by Julie Delazyn

We always like sharing news about Questionmark Live, our browser-based assessment authoring tool, on the blog, including the recent addition of hierarchical topics. Another big change is the new interface, which makes it easier than ever for subject matter experts and test designers to collaborate on an assessment. We’ve made it very easy to share topics and subtopics.

How does it work? Simply click the share button, as you can see in the screenshot below, and type the email address of the person you would like to share the subtopic with. The recipient will receive an invitation to view the topic in Questionmark Live (to see context) and the subtopic they can work in. It’s that easy.

The subtopic folder that you shared will now display a green arrow, as shown in the screenshot below. Click on the number beneath the “No. of Revisions” column to track a question’s revision history as well as to see who edited the question and which changes were made. You can also compare the different versions of the same question and roll back to a previous version. Learn more about revision history in Questionmark Live in this blog post.

Video: Applying the A-model to a business problem

Posted by Julie Delazyn

In a recent post on his own blog, Questionmark CEO Eric Shepherd offered some insights about the 70+20+10 learning model, in which social and informal learning play key roles in knowledge transfer and performance improvement.

The A-model, developed by Dr. Bruce C. Aaron, helps organizations make the most of all types of learning initiatives – both formal and informal — by providing an effective framework for defining a problem and its solution, implementing the solution, and tracking the results.

Eric’s post, A-model and Assessment: The 7-Minute Tutorial, notes the great feedback we have received in promoting awareness of the A-model and includes a brief video that walks viewers through a business problem and explains how to approach it using the A-model.

You can check out this video – by Questionmark’s Doug Peterson – right here. If you’d like more details, you can download the white paper we collaborated on with Dr. Aaron: Alignment, Impact and Measurement with the A-model.