Learning Styles: Fiction?

Posted by Doug Peterson

Last week, I wrote about learning styles and the importance many educators place on them. Today, let’s look at the downside of this approach.

Do a Google search on “debunking 4 learning styles” and you’ll find a lot of information. For example, a few years ago the Association for Psychological Science published an article stating that there is no scientific support for learning styles. But there are a couple of points in this article that I would like to bring out.

The first is that the article isn’t really saying the learning styles theory has been disproved; it’s saying the theory has never been properly proven. In other words, learning styles may still exist, but proponents of the theory simply haven’t demonstrated it yet. That’s different from the theory being proven false.

Second, note the little bit that says “the participants would need to take the same test at the end of the experiment.” We know that for an assessment to be fair, valid and reliable, one of the things it must do is allow the participant to display his/her level of knowledge, skill or ability without interference and without testing multiple skills simultaneously (like reading comprehension along with the actual knowledge objective).

So how should we be looking at the relationship between learning styles and assessments? Should proponents of learning styles use assessments that take those styles into consideration? If a person is a visual learner, would they be better able to communicate their understanding with a visual question, say a Hot Spot, than with a multiple choice question? And maybe an auditory learner would better communicate his/her understanding with a spoken answer. Would forcing a visual learner to prove their understanding in a non-visual way be fair? Would it truly be testing only their knowledge, or would it also be testing their ability to overcome the learning-style barrier presented by the question itself?

Those who don’t support the learning style theory feel that anyone can learn from any presentation style—people just have preferred styles. In other words, they feel the evidence shows that if you took two groups of self-identified visual learners and taught both the same subject matter, presenting it visually to one group and in a different style to the other, both groups would still end up learning the same amount. Their learning style is not a limitation that prevents them from learning as much or as well when material is presented in other styles; it’s just a preference.

I can’t say that I accept learning styles as fact, but I also can’t say that I believe they are fiction. What I can say is that I believe that learning has to do with two things:

1. Engagement
2. Learner motivation

I don’t believe that “learning styles” and “engagement” are the same thing. I can see where, assuming that learning styles exist, it would be easier to engage a visual learner with visual content, but if you have boring visual content, even a visual learner will not learn. I also believe that a podcast done really well can engage a (supposedly) visual or tactile learner. True, according to the theory, the visual or tactile learner may not learn as much as when the material is presented in their style, but I think you get my point that learning must be engaging, and that engagement is independent of learning style.

My experience has also shown me that when a learner is motivated, nothing will stand in his or her way. If passing that eLearning course means a promotion and a raise, that auditory learner will do what it takes to learn the material and pass, even if the material is nothing but charts and graphs. Conversely, if the visual learner couldn’t care less about the material, the greatest graphs in the world won’t make one whit of difference.

I would love to hear your thoughts and opinions on learning styles. Do you think they’re real, and that a learner simply cannot learn as well from material not presented in their style as they can from material that is? Or do you think that learning style is more of a preference, and that learning will take place regardless of the way in which it is presented as long as it is engaging and the learner is motivated?

Questionmark Live: Watching the numbers grow

Posted by Jim Farrell

I hope everyone is having a wonderful holiday season.

Typically I come to you telling you about amazing new features, but this time I want to talk about the number of items created in Questionmark Live over the past year.

Let me set the stage. We started the year with approximately 30,000 questions being created in Questionmark Live each month.
As part of the team that gave birth to our newest authoring tool, I was over the moon about this strong start. This proved to me that the features we were releasing were helping people build up their item banks. I was excited enough as it was, but then came September, October and November.

September saw our most impressive increase in usage. More than 97,000 questions were created — almost 100,000 questions in just one month.

October followed with more than 74,000 questions, including more than 20,000 questions on October 16th alone. This is the number our development teams were most excited about: 20,000 questions! And the system ran flawlessly, speaking to the scalability of the software and the OnDemand Platform.

November proved that the usage was legitimate. More than 121,000 questions were created that month.

It’s clear now that Questionmark Live is the preferred authoring tool among Questionmark users — with ease of use and scalability as its foundation.

You didn’t think I would end without talking about something new, did you?

I can’t resist telling you that you no longer need to approve authors to have access to Questionmark Live. Anyone with a valid Questionmark Communities account can gain access to Questionmark Live. We hope this makes it easier for you to crowd-source content within your organization and write good questions to solve real business problems.

***

The Questionmark 2014 Users Conference will include bring-your-own-laptop sessions on Creating Items and Topics as well as Collaborative Assessment Authoring in Questionmark Live. The early-bird registration discount of $200 is available through tomorrow, December 12th, so sign up now!

Writing Good Surveys, Part 3: More Question Basics

Posted by Doug Peterson

In part 2 of this series, we looked at several tips for writing good survey questions. To recap:

  • Make sure to ask the right question so that the question returns the data you actually want.
  • Make sure the question is one the respondent can actually answer, typically being about something they can observe or their own personal feelings, but
    not the thoughts/feelings/intentions of others.
  • Make sure the question doesn’t lead or pressure the respondent towards a certain response.
  • Stay away from jargon.
  • Provide an adequate rating scale. Yes/No or Dislike/Neutral/Like may not provide enough options for the respondent to reply honestly.

In this installment, I’d like to look at two more tips. The first is called “barreling”, and it basically refers to asking two or more questions at once. An example might be “The room was clean and well-lit.” Clearly the survey is trying to uncover the respondent’s opinion about the atmosphere of the training room, but it’s conceivable that the room could have been messy yet well-lit, or clean but dimly lit. This is really two questions:

  • The room was clean.
  • The room was well-lit.

I always look for the words “and” and “or” when I’m writing or reviewing questions. If I see an “and” or an “or”, I immediately check to see if I need to split the question out into multiple questions.
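
To show what that review step might look like in practice, here is a minimal sketch in Python (purely illustrative, and not part of any Questionmark tooling) that flags questions containing “and” or “or” so a reviewer can decide whether they need to be split:

```python
import re

# Conjunctions that often signal a double-barreled ("barreled") question.
CONJUNCTIONS = {"and", "or"}

def flag_barreled(questions):
    """Return the questions that contain 'and'/'or' and may need splitting."""
    flagged = []
    for question in questions:
        words = set(re.findall(r"[a-z']+", question.lower()))
        if words & CONJUNCTIONS:
            flagged.append(question)
    return flagged

survey = [
    "The room was clean and well-lit.",
    "The instructor answered my questions.",
]

for question in flag_barreled(survey):
    print("Review for possible splitting:", question)
```

Of course, a flag like this is only a prompt for human judgment; plenty of perfectly good questions contain an “and” or an “or”.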

The second tip is to keep your questions as short, as clear, and as concise as possible. Long and complex questions tend to confuse the respondent; they get lost along the way. If a sentence contains several commas, phrases or clauses inserted with dashes – you know, like this – or relative or dependent clauses, which are typically set off by commas and words like “which”, it may need to be broken out into several sentences, or may contain unneeded information that can be deleted. (Did you see what I did there?)

In the next few entries in this series, we’re going to take a look at some other topics involved in putting together good surveys. These will include how to construct a rating scale as well as some thoughts about the flow of the survey itself. In the meantime, here are some resources you might want to review:

“Problems with Survey Questions” by Patti J. Phillips. This covers much of what we looked at in this and the previous post, with several good examples.
“Performance-Focused Smile Sheets” by Will Thalheimer. This is an excellent commentary on writing level 2 and level 3 surveys.
“Correcting Four Types of Error in Survey Design” by Patti P. Phillips. In this blog article, Patti gives a quick run-down of coverage error, sampling error, response rate error, and measurement error.
“Getting the Truth into Workplace Surveys” by Palmer Morrel-Samuels in the February 2002 Harvard Business Review. You have to register to read the entire article, or you can purchase it for $6.95 (registration is free).

If you are interested in authoring best practices, be sure to register for the 2014 Questionmark Users Conference in San Antonio, Texas, March 4-7. See you there!

Simulating real life: Questions that test application of knowledge

Posted by Doug Peterson

Questionmark offers over 20 different question formats. We have the standard multiple choice, multiple response, true/false and yes/no question types. We also have question types with more interaction, to more fully engage the participant: hot spot and drag-and-drop, fill in blanks and select a blank, and matching and ordering. We also have text match and essay questions that allow participants to show their understanding by formulating a response from scratch.

Questionmark has two other question types that I’d like to discuss, because I feel they are very powerful yet often overlooked: the Adobe Captivate and Flash question types.

Prior to joining Questionmark, I worked for a large telecommunications company, and one of my group’s responsibilities was to train customer call center employees. The call center representatives had to know all about hooking up set-top boxes and DVD players as well as configuring routers and setting up wireless connections. They had to know how to use several different trouble-shooting applications as well as the main trouble ticket application. We used Captivate and Flash question types in a few different ways to effectively and accurately assess the participants’ knowledge as they went through their 13 weeks of training.

  1.  We used Flash and ActionScript to create a duplicate of the trouble ticket application. And I mean *duplicate*. It looked and behaved exactly like the real thing. Honestly, there were a couple of times when I had the two open side by side and got confused as to which was the real application and which was the simulation; that’s how realistic it was. With this simulation, we were able to go beyond multiple choice questions that just asked, “What value would you select for Trouble Reason?” or “What error code would you use, given the customer’s description?” Instead, we presented the participant with (what appeared to be and behaved exactly like) the trouble ticket application and said, “Fill out the ticket.” We gave a point or two for every value they entered correctly, every checkbox they checked correctly, and every radio button they selected correctly (there’s a small scoring sketch after this list). In this way we could assess their understanding of what fields needed to be populated at every stage of the process and their overall ability to use the software, as well as their understanding of what values to use in each field.
  2.  Similar to #1, we created simulations for setting up a wireless connection or configuring a router. We presented the participant with a Windows desktop and they had to go through the process of setting up a connection to a local router – entering the SSID, entering the WEP key, etc. We didn’t give points for individual steps in this one, as the instructions were to set up a connection – either you did it all correctly and established the connection, or you didn’t.
  3.  The class was typically taught in a classroom, and at one point the instructor would wheel in an audio/visual cart with a television, a set-top box, a DVD player, and a home theater sound system. The members of the class would then wire the components together correctly, or troubleshoot the instructor’s incorrect wiring job. Then one day we were asked to teach the class remotely, with students taking the training in their own homes. How could we do the wiring exercise if the students weren’t all in the same physical location? As I’m sure you’ve guessed, we used a Flash simulation. The simulation presented the backs of the various components along with the ability to select different types of wiring (HDMI cable, coax cable, and RCA cables). Students could click and drag the selected wire from a connector on one component to a connector on another component.
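
To make the scoring approach in point 1 concrete, here is a minimal sketch of partial-credit scoring for a simulated form. It is written in Python rather than ActionScript purely for readability, and the field names and values are made up for illustration; the real trouble ticket fields were different:

```python
# Hypothetical partial-credit scoring for a simulated trouble-ticket form.
# One point is awarded for each field the participant fills in correctly.

EXPECTED = {
    "trouble_reason": "no_dial_tone",   # illustrative field names and values
    "error_code": "E42",
    "dispatch_required": True,
}

def score_submission(submitted, expected=EXPECTED):
    """Return (points_earned, points_possible) for one simulated ticket."""
    earned = sum(
        1 for field, value in expected.items() if submitted.get(field) == value
    )
    return earned, len(expected)

participant_answer = {
    "trouble_reason": "no_dial_tone",
    "error_code": "E99",            # wrong value, so no point for this field
    "dispatch_required": True,
}

print(score_submission(participant_answer))  # prints (2, 3)
```

The same idea extends naturally to checkboxes and radio buttons: each control the participant sets correctly simply contributes another point to the total.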

Because this way of assessing a learner’s understanding is not all that common, we used similar simulations as formative quizzes during the training and provided a practice assessment prior to the first “real” assessment. This helped the participants get comfortable with the format by the time it really counted, which is important: We want to be fair to the learner and make sure we give them every opportunity to prove their knowledge, skill or ability without interference or stress. It’s not fair to suddenly spring a new question format on them that they’ve never seen before.

One great way to learn more about this topic is to attend the Questionmark 2014 Users Conference March 4-7 in San Antonio. I typically present a session on using Flash and Captivate in e-learning and assessments. Looking forward to seeing you there!

Authoring compliance-related assessments: good practice recommendations

Posted by Julie Delazyn

Last week I wrote about deploying compliance-related assessments, as part of a series of posts offering good practice recommendations from our white paper, The Role of Assessments in Mitigating Risk for Financial Services Organizations.

This paper describes five stages of deploying legally defensible assessments, along with specific recommendations for people in different job roles. Some of these recommendations are specific to Questionmark technologies, but most can be applied to any testing and assessment system.

The five stages:

[Image: the five stages of deploying legally defensible assessments]
Today, let’s look at good practice for the third stage: authoring. You will find more recommendations in the White Paper:

[Image: chart of authoring recommendations]

Sharon Shrock & William Coscarelli’s Criterion-Referenced Test Development: Technical and Legal Guidelines for Corporate Training provides actionable, practical advice on test development. Sharon and Bill will conduct a workshop on writing valid, reliable tests in Baltimore on Sunday, March 3. Participants will explore testing best practices and will learn how to meet rigorous competency testing standards.

You can register for this workshop when you register for the Questionmark Users Conference or add the workshop later. It’s up to you!