Assessment types and their uses: Formative

Posted by Julie Delazyn

Assessments have many different purposes, and to use them effectively it’s important to understand their context and uses within the learning process.

Last week I wrote about diagnostic assessments, and today I’ll explore formative assessments.

Typical uses:

  • Strengthening memory recall and correcting misconceptions
  • Promoting confidence in one’s knowledge
  • Enhancing learning by directing attention to and creating intrigue about a given subject
  • Measuring learners’ knowledge or skills and telling them how they’re doing
  • Giving learners search and retrieval practice and prescriptive feedback

Types:

  • Quizzes
  • Practice tests and exams
  • Self-assessments

Stakes: low

Example: An instructor gives a quiz to help reassure students that they’re actually learning — or alert them that they are not learning and provide feedback to correct any misconceptions. Students can use this feedback as a study guide to understand where they’re going right and where they’re going wrong. Students also benefit from the search and retrieval practice they’ve had while taking the quiz – which can help them remember the material in the future. Formative assessments give instructors a way to ask students: “Did you get that?” Sometimes, a series of quizzes is used to collect data that contribute to overall grades – but generally, formative assessments serve as check-ups on learners’ understanding and guideposts for further progress.

For a fuller analysis of assessments and their uses, check out the white paper, Assessments Through the Learning Process. You can download it free here, after login. Another good source for testing and assessment terms is our glossary.

In the coming weeks I’ll take a look at three remaining assessment types:

  • Needs
  • Reaction
  • Summative

Writing Test Items – Test Design and Delivery Part 4

Posted By Doug Peterson

In Part 1, Part 2, and Part 3 of this blog series, we looked at the planning that goes into developing a test before the first item is ever written. Let’s assume that planning has taken place and it’s time to write some items.

There is a LOT of information out there about writing items. I have a blog on Improving Multiple-Choice Questions and another on Mastering Multiple-Choice Questions, as well as one on True/False questions. Any time anyone asks me about item writing do’s and don’ts, I always mention the book “Criterion-Referenced Test Development” by Shrock & Coscarelli, especially chapter 7. In this blog I’d like to touch on a few item-writing tips that apply to pretty much any item type.

Always strive for clarity and readability. Remember, you only want to test for one thing, so make sure you’re not testing the learner’s reading or comprehension ability as well as the specific piece of knowledge you’re testing for. Avoid “window dressing” – superfluous information that isn’t necessary to ask the question and get a response. Make sure that you’re truly asking a single question. You don’t want to ask something like “What color is a stop sign and how many sides does it have?” or “Which of the following Dallas attractions is the most popular and why?” You also want to avoid negative phrasing such as “Which of the following is not an acceptable way to …”, since it increases cognitive load and introduces confusion without increasing the value of the question.

Use a style guide for consistency. You don’t want anything to distract the learner, so use the same font size and family throughout, apply bolding and italics consistently, and so on. Test-takers are nervous enough as it is; you don’t want to unfairly add to their cognitive load by making them wonder why you used “item-writing” in one place and “item writing” in another. Was it on purpose? Is there a hidden meaning they need to pick up on? Does the hyphenated one have a different meaning? A nervous learner may obsess over meaningless details like these, wasting time and failing to show you what they truly know.

If you’re writing items that use distractors (e.g., multiple-choice questions), make sure the distractors are plausible. An obviously wrong distractor is a wasted distractor that only helps the learner guess correctly, making it look like they know something they don’t. At the same time, make sure that you have only one truly correct choice and that you’re not tricking the learner. Also be careful about using keywords in the choice that are also used in the main body of the question, as this can clue the learner to the correct answer, or unfairly trick the learner into selecting the wrong answer.

Also, keep in mind that the verb you use can raise or lower the cognitive level of the question. A simple recall question would use verbs like define, list, or identify. You could take that item to the next level by using “interpretation” verbs such as differentiate, contrast, categorize and distinguish. An even higher cognitive level can be achieved by using “problem solving” verbs such as formulate, value, rate, revise and evaluate.

Charles Jennings on Measuring Informal and Workplace Learning: Questionmark 2013 Users Conference Keynote

Posted by Joan Phaup

We are delighted to announce that Charles Jennings, one of the world’s leading thinkers and practitioners in learning and development, will deliver the keynote address at the Questionmark 2013 Users Conference, which will take place in Baltimore, Maryland, March 3 – 6.

His presentation, “Meeting the Challenge of Measuring Informal and Workplace Learning,” will focus on the widespread adoption of the 70:20:10 framework — based on studies that show high performers learn approximately 70% from experience, 20% from others and 10% from formal study.


Charles will show how the 70:20:10 framework serves as a strategy for extending development beyond formal, structured learning to include informal and experiential learning opportunities. He will pose some important questions about which approaches support informal and workplace learning most effectively and how to effectively measure the success of those approaches.

As head of Duntroon Associates, Charles helps clients with learning and performance strategy, change management and with implementing improved approaches to workforce and leadership development. Previously, he served as Chief Learning Officer for Reuters and Thomson Reuters, where he led a team of 350 learning professionals for the firm’s workforce of 55,000.

This will be our 11th annual North American conference and, incidentally, a celebration of Questionmark’s 25th anniversary! We are busy planning the conference program and are welcoming case study and peer discussion proposals from experienced Questionmark users.

If you have a success story to tell or would like to get together with your colleagues to discuss a topic that concerns you, please check out our call for proposals and send in your ideas by September 14. Even if you are not yet sure you’ll be at the conference, that’s okay. We’d still like to hear from you!

Assessment types and their uses: Diagnostic

Posted by Julie Delazyn

Assessments have many different purposes, and to use them effectively it’s important to understand their context and uses within the learning process. I’ll explore each of these five key assessment types over the next few weeks:

  • Diagnostic
  • Formative
  • Needs
  • Reaction
  • Summative

Let’s start with diagnostic assessments.

Typical uses:

  • Identifying the needs and prior knowledge of participants for the purpose of directing them to the most appropriate learning experience
  • Determining knowledge and identifying skills gaps and needs
  • Placing learners in appropriate courses and tailoring instruction to their needs
  • Providing instructors and mentors information on a student’s abilities
  • Giving feedback to participants and providing recommendations for products, services and/or learning activities
  • Setting benchmarks for comparison with post-course tests
  • Analyzing personality traits in order to predict behaviors
  • Creating intrigue about the content of a learning activity, which can in turn actually enhance the learning experience

Types:

  • Pre-tests
  • Placement tests
  • Self-diagnostic tools
  • Personality assessments

Stakes: low/medium

Example: A diagnostic assessment might report that a learner has mastered every competency in using Microsoft Word but can only perform 50 percent of those required to use Excel. The results of the assessment would prescribe a course on Excel. In addition, a diagnostic assessment can help place students within suitable learning experiences by asking questions such as, “Do you prefer instructor-led training or online training?”

For a fuller analysis of assessments and their uses, check out the white paper, Assessments Through the Learning Process. You can download it free here, after login. Another good source for testing and assessment terms is our glossary.

Tune in next week for a post on formative assessments.

Final Planning Considerations – Test Design and Delivery Part 3

Posted By Doug Peterson

In Part 2 of this series, we looked at how to determine how many items you needed to write for each content area covered by your assessment. In this installment, we’ll take a look at a few more things that must be considered before you start writing items.

You must balance the number of items you’ve determined that you need with any time constraints imposed on the actual taking of the test. Take a look at the following table showing the average amount of time a participant spends on different question types:

It’s easy to see that it will take much longer for a participant to complete an assessment containing 100 short-answer questions than one with 100 True/False questions. Therefore a time limit on the assessment might constrain which question types can be used; conversely, the time limit may be influenced by how many questions you calculate you need and the question types you want to use. There’s no hard and fast rule here; it’s just something that needs to be considered before you go off to write a bunch of items.
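This time-budget arithmetic is easy to sketch in code. The per-item averages below are hypothetical placeholders, not figures from the table above; substitute your own timing data.

```python
# Hypothetical average seconds per item, by question type.
# Replace with timing data from your own pilot testing.
AVG_SECONDS = {
    "true_false": 30,
    "multiple_choice": 60,
    "short_answer": 120,
    "essay": 600,
}

def estimated_minutes(item_counts):
    """Estimate total testing time in minutes for a blueprint of item counts."""
    total_seconds = sum(AVG_SECONDS[kind] * n for kind, n in item_counts.items())
    return total_seconds / 60

# A 100-item test of True/False vs. short-answer items:
print(estimated_minutes({"true_false": 100}))    # 50.0 minutes
print(estimated_minutes({"short_answer": 100}))  # 200.0 minutes
```

With these (assumed) averages, the same 100-item count yields a four-fold difference in seat time, which is exactly why the blueprint and the time limit have to be negotiated together.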

The time a participant requires to complete an item is not the only thing that should be considered when determining the item format for your assessment. True/False questions might have some problems because there is a 50% chance of guessing correctly, making it hard to know if the participant knew the material or was just lucky. (I look at this in more depth in this blog article.)
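To see how much that 50% guessing chance matters, here is a quick binomial calculation of the probability that a pure guesser reaches a given pass mark; the 10-item test and 70% pass mark are illustrative assumptions, not recommendations.

```python
from math import comb, ceil

def p_pass_by_guessing(n_items, pass_fraction, p_correct=0.5):
    """Probability that a participant answering at random (probability
    p_correct per item) reaches the pass mark on n_items questions."""
    k_needed = ceil(n_items * pass_fraction)
    return sum(
        comb(n_items, k) * p_correct**k * (1 - p_correct) ** (n_items - k)
        for k in range(k_needed, n_items + 1)
    )

# On a 10-item True/False test with a 70% pass mark:
print(round(p_pass_by_guessing(10, 0.7), 3))  # 0.172
```

Roughly one guesser in six would pass that short True/False test knowing nothing at all, which is why longer tests or other item formats are often preferred when the stakes rise.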

Multiple Choice questions are good for testing recall and are easy, quick and objective to score, but many assessment developers feel they really only test the participant’s ability to memorize. Some learning professionals are also critical of presenting incorrect information in the distractors, which could lead to a wrong answer being recalled at some point in the future. I discuss writing Multiple Choice questions here and here.

Essay questions are good for testing higher cognitive skills such as formulating a correct answer from scratch and applying concepts to a new situation. However, they take longer for the participant to answer, and scoring takes longer and is more subjective than for other question types.

Finally, you need to decide on the presentation format, which boils down to basically two choices: paper and pencil, or computer-based (including over the Internet). Each has its pros and cons.

Paper and pencil is not susceptible to technical problems, and it is comfortable for people unfamiliar with computers. However, it’s labor-intensive when it comes to distributing materials, grading, tracking, reporting and so on.

Computer-based assessments are faster and easier to update. Results can be provided immediately. Computer-based assessments allow for question types such as software simulations that paper and pencil can’t provide. However, not everyone is comfortable with computers or can type at a reasonable rate – you don’t want someone who knows the material to fail a test because they couldn’t answer questions fast enough using a keyboard.

10 Reasons for Using an Assessment Management System for Compliance

Posted by John Kleeman

Most LMSs (learning management systems) have the capability to deliver basic quizzes and surveys. So is an LMS good enough to deliver online compliance assessments? Or do you need an assessment management system?

A strength of LMSs is that they roll up all training, for example face-to-face classroom events, and they’re often used as a system of record for compliance training events. But many companies that are professional about their use of assessments in compliance use an assessment management system as well as (or sometimes instead of) an LMS. Here are 10 of the reasons I hear for doing this.

Observational assessment

1. A key trend in compliance is to measure behaviour, not just knowledge. A great way to do this is observational assessments, during which an observer watches someone do something (e.g. interview a customer, use a machine) and rates them on an iPad or smartphone. The ability to deliver assessments in many different environments is a leading advantage of a comprehensive assessment management system.

2. Running a professional assessment programme needs an item bank, where all your questions are organized by topic and metadata, so you can re-use questions and easily review and update them. Many LMSs tie questions and assessments to courses, but once you reach a certain volume of assessments you need a searchable item bank.

3. Assessment management systems usually provide an easier and more friendly user interface for Subject Matter Experts (SMEs) to author questions, for instance our easy-to-use Questionmark Live collaborative authoring environment.


4. As mentioned in my earlier blog post, How Topic Feedback can give Compliance Assessments Business Value, being able to score and give feedback at the topic level lets you provide actionable feedback in compliance. You don’t just know that people are weak; you know where they are weak and how to improve.

5. Assessment management systems offer more question types, allowing more variety and more engaging and realistic questions.

6. An assessment management system like Questionmark lets you deliver assessments on paper as well as on-screen, and also on mobile devices including smartphones and iPads; you can deliver assessments in more places.

7. Most LMSs have only basic assessment reporting – but to make your assessments valid, reliable and legally defensible, you need item and test statistics reports and other professional reports.

8. Assessments can continue to be delivered even if you change LMS – and many organizations are thinking of changing LMS or moving the LMS to the cloud.

9. Often an LMS is provisioned only for employees, but you may need to assess partners or contractors; it’s easy to allow direct login to Questionmark if desired.

10. Last but not least, the typical LMS does not major in test security. Most employees taking compliance assessments will not want to cheat, but it’s useful to have the stronger security — allowing monitoring, preventing cheating and avoiding fraud — of a professional assessment management system.
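As an illustration of the item statistics mentioned in reason 7, here is a minimal sketch of two classic measures: item difficulty (the proportion answering correctly, or p-value) and point-biserial discrimination (the correlation between the item score and the total test score). The score data are invented for the example.

```python
from statistics import mean, pstdev

def item_stats(item_scores, total_scores):
    """Difficulty (p-value) and point-biserial discrimination for one
    dichotomously scored item.
    item_scores:  1/0 per test-taker for this item
    total_scores: total test score per test-taker"""
    p = mean(item_scores)
    mu = mean(total_scores)
    # Point-biserial = Pearson correlation between item and total scores
    cov = mean(i * t for i, t in zip(item_scores, total_scores)) - p * mu
    r_pb = cov / (pstdev(item_scores) * pstdev(total_scores))
    return p, r_pb

# Six (made-up) test-takers: item right/wrong, and their total scores
p, r = item_stats([1, 1, 0, 1, 0, 0], [9, 8, 4, 7, 5, 3])
print(round(p, 2), round(r, 2))  # 0.5 0.93
```

A difficulty near 0.5 with a strongly positive discrimination is what a healthy item looks like; a discrimination near zero (or negative) flags an item that may be miskeyed or confusing, which is the kind of finding these professional reports surface.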

Bottom line: an assessment management system gives you trustable results that you can rely on. A large organization depends on employees who are geographically and functionally separated, but all of them must act competently and follow proper procedures for the business to run effectively. Assessments delivered online via an assessment management system are one of the few ways – and likely the best way – to reach each individual in your workforce and ensure they understand their role in your business and what is required of them to meet business and regulatory needs.
