Video: Secure and Collaborative Assessment Authoring

Posted by Doug Peterson
 
Questionmark Live is Questionmark’s easy-to-use online item and assessment authoring tool, which enables authors to collaborate securely.
 
In this video, we’ll take a quick look at sharing topics in Questionmark Live.
 
 

Conceptual Assessment Framework: Building the Student Model

Posted by Austin Fossey

The student model is one of the three sections of the Conceptual Assessment Framework (CAF) in Evidence-Centered Design (ECD). In the student model, we define the variables from which we make inferences about the participant’s knowledge, skills, or abilities. We also define how those variables and inferences are related to each other.

The most basic form of a student model is a pass/fail indicator variable. The participant takes a test, and the test yields a pass/fail decision. These student model variables can be interpreted with respect to the proficiencies defined in a domain model for the assessment. For example, if the student passes, then we may infer that the student is proficient with the knowledge defined in the assessment’s domain.

Quite often, we have stakeholders who want more than just a pass/fail decision. Participants may want to know how close they were to passing, instructors may want to know which areas are strengths or weaknesses for their students, and researchers may want to classify participants based on similarities in response patterns.

In these cases, we need to define a more detailed student model. For example, instead of reporting a pass/fail indicator, we might report a score (e.g., percentage correct, scale score) so that participants understand how their performance relates to some criterion. We may also provide scores or outcomes for topics and subtopics within the assessment so that participants and instructors can look for patterns of strengths and weaknesses.
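
To make this concrete, here is a minimal sketch of how topic-level reporting might be computed from item responses tagged by topic. The data structure and function are hypothetical illustrations for this post, not Questionmark code.

```python
from collections import defaultdict

def score_report(responses):
    """Compute an overall percentage-correct score plus per-topic scores.

    `responses` is a list of (topic, is_correct) pairs -- a stand-in for
    however your delivery system records item outcomes.
    """
    topic_totals = defaultdict(lambda: [0, 0])  # topic -> [correct, attempted]
    for topic, is_correct in responses:
        topic_totals[topic][0] += int(is_correct)
        topic_totals[topic][1] += 1

    overall_correct = sum(correct for correct, _ in topic_totals.values())
    overall_attempted = sum(attempted for _, attempted in topic_totals.values())

    return {
        "overall_pct": 100.0 * overall_correct / overall_attempted,
        "topic_pct": {
            topic: 100.0 * correct / attempted
            for topic, (correct, attempted) in topic_totals.items()
        },
    }

# Example: a six-item quiz covering two topics
print(score_report([
    ("Road signs", True), ("Road signs", True), ("Road signs", False),
    ("Right of way", True), ("Right of way", False), ("Right of way", False),
]))
```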

Student model diagram illustrating relationships between student model variables.

Evidence-Centered Assessment Design
(Mislevy, Steinberg, & Almond, 2012)

The relationships between the variables in the student model are important because they determine many of the inferences we will make about the participants. Topic and subtopic scores and outcomes are a good example of dependencies between student model variables: the overall assessment outcome and its inference may be dependent on the inferences about the topic scores.

For example, you may have an assessment where participants must demonstrate a level of proficiency in one topic area, otherwise they do not pass the assessment. Driving tests are an example of this. We make different inferences about a participant based on their performance on the written test and their performance on the road test, and the overall evaluation (i.e., awarding of a driver’s license) depends on both of these student model variables.
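
To see what this dependency looks like in practice, here is a minimal sketch of a conjunctive scoring rule in which the overall outcome is a pass only when both component variables meet their own standard. The cut score and variable names are hypothetical, not an actual licensing rule.

```python
def license_decision(written_pct, road_outcome, written_cutoff=80.0):
    """Conjunctive student model: the overall inference depends on
    separate inferences about two component variables.

    written_pct  -- score on the written (knowledge) test, 0-100
    road_outcome -- "pass" or "fail" from the observed road test
    """
    written_pass = written_pct >= written_cutoff
    road_pass = road_outcome == "pass"

    return {
        "written": "pass" if written_pass else "fail",
        "road": road_outcome,
        # Strength on one component cannot compensate for weakness on the other.
        "overall": "pass" if (written_pass and road_pass) else "fail",
    }

# A strong written score does not offset a failed road test.
print(license_decision(92.5, "fail"))
```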

Simplified example of a student model diagram for a driver’s license test.

If you are interested in learning more about this, I’ll be at the Questionmark Users Conference in San Antonio March 4 – 7 and will be happy to talk with you.

Top 10 uses of assessments for compliance

Posted by Julie Delazyn

We recently announced a webinar on September 18th about why it’s good to use assessments for compliance.

Today I’d like to focus on how to use them, particularly within financial services organizations – for whom mitigating the risk of non-compliance is essential.

You can find out more about these in a complimentary white paper that highlights good practices in using assessments for regulatory compliance: The Role of Assessments in Mitigating Risk for Financial Services Organizations. But here, for quick reference, are ten of the most useful applications of assessment in a compliance program:

1) Internal exams — Internal competency exams are the most commonly used assessments in financial services.

2) Knowledge checks — It’s common to give knowledge checks or post-course tests (also called Level 2s) immediately after training to ensure that the training has been understood and to help reduce forgetting. These assessments confirm learning and document understanding.

3) Needs analysis / diagnostic tests — These tests measure employees’ current skills and help drive decisions about development needs. They can be used to allow employees to test out when it’s clear they already understand a particular subject.

4) Observational assessments — When checking practical skills, it’s common to have an observer monitor employees to see if they are following correct procedures. A key advantage of an observational assessment is that it measures behavior, not just knowledge. Using mobile devices for these assessments streamlines the process.

5) Course evaluation surveys — “Level 1” or “smile sheet” surveys let you check employee reaction following training. They are a key step in evaluating training effectiveness. In the compliance field, you can use them to gather qualitative information on topics such as how well policies are applied in the field.

6) Employee attitude surveys — Commonly used by HR for measuring employee satisfaction, these surveys can also be used to determine attitudes about ethical and cultural issues.

7) Job task analysis surveys — How do you know that your competency assessments are valid and that they address what is really needed for competence in a job role? A job task analysis (JTA) survey asks people who are experts in a job how important each task is for the job role and how often it is done. Analysis of the JTA data lets you weight the number of questions associated with topics and tasks so that a competency test fairly reflects the importance of different elements of a job role (see the sketch after this list).

8) Practice tests — Practice tests often use questions that are retired from the exam question pool but remain valid. Practice tests are usually accompanied by question and topic feedback. As well as allowing candidates to assess their further study needs, practice tests give candidates experience with the technology and user interface before they take a real exam.

9) Formative quizzes — These are the quizzes we are all familiar with: given during learning, they inform instructors and learners about whether the material has been understood or needs deeper instruction, diagnose misconceptions, and help reduce forgetting. They provide the key evidence that helps instructors vary the pace of learning. Computerized formative quizzes are especially useful in remote or e-learning settings, where an instructor cannot interact face-to-face with learners.

10) 360-degree assessments — This kind of assessment solicits opinions about an employee’s competencies from his or her superiors, direct reports and peers. It will usually cover job-specific competencies as well as general competencies such as integrity and communication skills. In compliance, such surveys can help you identify issues in people’s behavior and competencies that need review.
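
As an illustration of the JTA weighting described in item 7, here is a minimal sketch that turns importance and frequency ratings into a suggested number of questions per task for a fixed-length test. The rating scales, task names and the simple importance-times-frequency weighting are assumptions made for this example, not a prescribed method.

```python
def jta_blueprint(task_ratings, total_questions=60):
    """Turn JTA ratings into a rough test blueprint.

    `task_ratings` maps each task to (mean importance, mean frequency),
    here assumed to be on 1-5 scales as rated by job experts.
    """
    weights = {task: imp * freq for task, (imp, freq) in task_ratings.items()}
    total_weight = sum(weights.values())

    # Rounded counts may need a small manual adjustment to hit the exact total.
    return {
        task: round(total_questions * weight / total_weight)
        for task, weight in weights.items()
    }

# Example: expert ratings for three tasks in a financial services job role
print(jta_blueprint({
    "Verify client identity": (4.8, 4.5),
    "Report suspicious activity": (4.9, 2.0),
    "Archive transaction records": (3.0, 4.0),
}))
```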

Click here for details and registration for the webinar, 7 Reasons to Use Online Assessments for Compliance.

You can download the white paper, The Role of Assessments in Mitigating Risk for Financial Services Organizations, here.

Writing Good Surveys, Part 3: More Question Basics

Posted by Doug Peterson

In part 2 of this series, we looked at several tips for writing good survey questions. To recap:

  • Make sure to ask the right question so that the question returns the data you actually want.
  • Make sure the question is one the respondent can actually answer: typically something they can observe or their own personal feelings, not the thoughts, feelings or intentions of others.
  • Make sure the question doesn’t lead or pressure the respondent towards a certain response.
  • Stay away from jargon.
  • Provide an adequate rating scale. Yes/No or Dislike/Neutral/Like may not provide enough options for the respondent to reply honestly.

In this installment, I’d like to look at two more tips. The first is called “barreling”, and it basically refers to asking two or more questions at once. An example might be “The room was clean and well-lit.” Clearly the survey is trying to uncover the respondent’s opinion about the atmosphere of the training room, but it’s conceivable that the room could have been messy yet well-lit, or clean but dimly lit. This is really two questions:

  • The room was clean.
  • The room was well-lit.

I always look for the words “and” and “or” when I’m writing or reviewing questions. If I see an “and” or an “or”, I immediately check to see if I need to split the question out into multiple questions.
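
If you have a large bank of survey questions to review, this check is easy to automate as a first pass. Here is a minimal sketch (a hypothetical helper, not a feature of any authoring tool) that flags question text containing a whole-word “and” or “or” for human review:

```python
import re

# Whole-word match so that "or" does not trigger on words like "instructor".
CONJUNCTIONS = re.compile(r"\b(and|or)\b", re.IGNORECASE)

def flag_possible_barreling(questions):
    """Return questions containing 'and' or 'or', which may be asking two
    things at once. Every hit still needs human review -- many uses of
    'and'/'or' are perfectly legitimate."""
    return [q for q in questions if CONJUNCTIONS.search(q)]

print(flag_possible_barreling([
    "The room was clean and well-lit.",
    "The instructor explained the material clearly.",
]))
```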

The second tip is to keep your questions as short, as clear, and as concise as possible. Long and complex questions tend to confuse the respondent; they get lost along the way. If a sentence contains several commas, phrases or clauses inserted with dashes – you know, like this – or relative or dependent clauses, which are typically set off by commas and words like “which”, it may need to be broken out into several sentences, or may contain unneeded information that can be deleted. (Did you see what I did there?)

In the next few entries in this series, we’re going to take a look at some other topics involved in putting together good surveys. These will include how to construct a rating scale as well as some thoughts about the flow of the survey itself. In the meantime, here are some resources you might want to review:

“Problems with Survey Questions” by Patti J. Phillips. This covers much of what we looked at in this and the previous post, with several good examples.
“Performance-Focused Smile Sheets” by Will Thalheimer. This is an excellent commentary on writing level 2 and level 3 surveys.
“Correcting Four Types of Error in Survey Design” by Patti P. Phillips. In this blog article, Patti gives a quick run-down of coverage error, sampling error, response rate error, and measurement error.
“Getting the Truth into Workplace Surveys” by Palmer Morrel-Samuels in the February 2002 Harvard Business Review. You have to register to read the entire article, or you can purchase it for $6.95 (registration is free).

If you are interested in authoring best practices, be sure to register for the 2014 Questionmark Users Conference in San Antonio, Texas, March 4 – 7. See you there!

An Introduction to Building Assessments with Evidence-Centered Design

Posted by Austin Fossey

Questionmark users have a flexible set of assessment tools, including a wide variety of item types, conditional item blocks, and weighted scoring. But when should we use these tools, and how do they fit in with our overall measurement goals?

Many of us follow a set of guidelines for developing our assessments, be it The Standards for Educational and Psychological Testing or simply a set of development practices defined by our organization. These guidelines are often special cases of a framework known as evidence-centered design (ECD).

ECD is a formal yet flexible structure for designing and delivering assessments. You may recall my post about argument-based validity, where we discussed using evidence to support a claim, thus creating a validity argument. ECD is a common method for designing an assessment that provides the evidence needed for a validity argument about a participant’s knowledge or abilities.

But ECD is not just another checklist for how to build an assessment: it guides the decision-making process. As test developers, we are accountable for every design and content decision, and ECD helps us to map those choices to the assessment inferences.

For example, the task model is part of the Conceptual Assessment Framework (CAF) in ECD, and it is used to specify which tasks should be used to elicit the types of behavior we want to observe to support our inference about the participant. We use the task model to document why we chose a specific selected-response item format (e.g., the use of drag and drop items as opposed to multiple choice items).

I see more and more research in which assessments are described using the vocabulary of ECD, including fixed-form assessments, adaptive assessments, educational games, and simulations (e.g., Journal of Educational Data Mining, 4(1)). ECD can be used to describe any assessment, not just the standardized formats we are used to.

As technology allows us to explore new ways of assessing participants, ECD provides a common thread to help define our design choices, make comparisons between designs, and support our inferences.

ECD has five parts:

  • Domain Analysis
  • Domain Modeling
  • Conceptual Assessment Framework (CAF)
  • Assessment Implementation
  • Assessment Delivery

If you are a Questionmark client, Questionmark may play a role in all five of these areas—we are almost certainly a part of the final three. In my next three posts, I will focus on the three parts of the CAF:

  • The student model
  • The evidence model
  • The task model

By exploring the CAF, we will learn about how to break up our assessment design into its functional components so that we can determine how different Questionmark tools can be leveraged to improve the validity of our inferences.

If you are interested in learning more, there are many great articles about ECD, and the links in this post are a good starting point. Have a good example of ECD being used to implement an assessment? Please share it by leaving a comment!

European Users Conference agenda and early-bird savings

Posted by Chloe Mendonca

If you haven’t registered for the Questionmark 2013 European Users Conference, make sure you do so this month. Early-bird registration ends on September 30. All it takes is a few clicks to set you on your way to some fantastic learning sessions, one-to-one meetings and networking in Barcelona, November 10-12.

The program will cover an array of topics. Here are some of the breakout sessions we’ve scheduled to date:

Best Practice

  • Seven Reasons to Use Online Assessments for Regulatory Compliance
  • Assessment Feedback – What Can We Learn from Psychology Research?
  • Reporting and Analytics: Understanding Assessment Results

      


Case Studies

  • Advanced Item Banking and Psychometric Analyses with Randomly Generated Assessments – ECABO
  • Get your Questionmark environment back in control with a QMWise based dashboard – Wageningen University
  • Questionmark Data Analysis for High Stakes Assessments – HBO Raad
  • Expanding and Testing Knowledge with Computer Assisted Learning – CFL (National Railway Company of Luxembourg)
  • Collaboration Facilities for Question Development Using QMlive – As-a-queue


Features and Functions

  • Introduction to Questionmark for Beginners
  • Assessment Authoring Part One: Creating Items and Topics
  • Assessment Authoring Part Two: Assessments and Qpacks
  • Reporting and Analytics: Understanding Assessment Results
  • Integrating with Questionmark’s Open Assessment Platform
  • Customising the Participant Interface
  • Deploying Questionmark Perception On Premise


Drop-in Discussions and Demos

  • Drop-in Demo Session: Latest Questionmark Features
  • Tips for Delivering your Assessments to Mobile Devices

More sessions will be added soon!

Conference participation is a valuable investment. Visit our cost justification page and download the ROI Toolkit to see why this is an essential experience for you and your organisation.

