Role-Based Permissions: A How-To Guide (Part 2)

Posted by Bart Hendrickx

In my previous post on this subject (How-To Guide Part 1), I described a situation where managing permissions in the classic version of Questionmark Enterprise Manager can quickly turn into a complicated task. The new version of Questionmark, which we are starting to roll out to Questionmark OnDemand customers, offers a more efficient approach: managing permissions based on the tenets of role-based access control.

Interested in learning more about role-based permissions? Drop in on my session on this topic at Questionmark Conference 2016. Register before March 3 to take advantage of our final early-bird discounts.

The principle of role-based access control is that you use roles to define what users can do in the system. You are free to choose what a role is in your organization. You can tie it to a job title and create a role such as Learning and Development Specialist. You can map it to a role on a project team (e.g. the role of setting up a project for an employee satisfaction survey) and create a role like Project Owner. Or you can use any of the default roles that ship with the new version of Questionmark OnDemand, such as Admin and Reporter.

Roles contain permissions. For example, the Reporter role contains a set of permissions to run all reports on all results. When you add that role to a user, that user inherits those permissions. So far, this is similar to how profiles work in the classic version of Questionmark.

The power of the new role-based access control system becomes obvious when you want to give more roles to a user. In the classic version of Questionmark, you can assign only one profile to a user. In the new version, you can assign multiple roles to a user. Do you have a role for creating test items and another one for running reports, and do you have a user who will take on both roles? No problem: assign both roles to the user.

Another advantage of the new role-based access control system is that you can change the permissions of a role, which will automatically trickle down to all users who have that role. Do you want to remove the permission to run a Grade Book report from all users who have the Reporter role? Remove the permission from the Reporter role and you are done.

To ensure there are no loopholes, the new version of Questionmark OnDemand makes it impossible to assign permissions directly to users. Instead, all permissions will be granted within roles.
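To make these rules concrete, here is a minimal sketch of role-based access control in plain Python. The class, role, and permission names are illustrative assumptions, not Questionmark's actual API: a user's effective permissions are simply the union of their roles' permissions, so multiple roles combine naturally, a change to a role is immediately visible to every user who holds it, and there is no way to grant a permission to a user directly.

```python
# A minimal sketch of role-based access control.
# Class, role, and permission names are illustrative, not Questionmark's API.

class Role:
    def __init__(self, name, permissions):
        self.name = name
        self.permissions = set(permissions)


class User:
    def __init__(self, name, roles=()):
        self.name = name
        self.roles = list(roles)  # a user may hold several roles at once

    @property
    def permissions(self):
        # Effective permissions are the union of all role permissions;
        # nothing can be granted to a user directly.
        return set().union(*(role.permissions for role in self.roles))

    def can(self, permission):
        return permission in self.permissions


# One user taking on both an authoring role and a reporting role.
author = Role("Author", {"create_items"})
reporter = Role("Reporter", {"run_reports", "run_grade_book_report"})
alice = User("Alice", [author, reporter])
print(alice.can("create_items"))           # True
print(alice.can("run_grade_book_report"))  # True

# Editing a role trickles down to every user who holds it.
reporter.permissions.discard("run_grade_book_report")
print(alice.can("run_grade_book_report"))  # False
```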

If you are a Questionmark OnDemand user interested in moving to the new version, contact your account manager. And if you are attending Questionmark Conference 2016, April 12-15, feel free to drop in on my session on this topic. Register before March 3 to take advantage of our final early-bird discounts.

Item Development – Five Tips for Organizing Your Drafting Process

Posted by Austin Fossey

Once you’ve trained your item writers, they are ready to begin drafting items. But how should you manage this step of the item development process?

There is an enormous amount of literature about item design and item writing techniques—which we will not cover in this series—but as Cynthia Schmeiser and Catherine Welch observe in their chapter in Educational Measurement (4th ed.), there is very little guidance about the item writing process. This is surprising, given that item writing is critical to effective test development.

It may be tempting to let your item writers loose in your authoring software with a copy of the test specifications and see what comes back, but if you invest time and effort in organizing your item drafting sessions, you are likely to retain more items and better support the validity of the results.

Here are five considerations for organizing item writing sessions:

  • Assignments – Schmeiser and Welch recommend giving each item writer a specific assignment to set expectations and to ensure that you build an item bank large enough to meet your test specifications. If possible, distribute assignments evenly so that no single author has undue influence over an entire area of your test specifications (see the sketch after this list). Set realistic goals for your authors, keeping in mind that some of their items will likely be dropped later in item reviews.
  • Instructions – In the previous post, we mentioned the benefit of a style guide for keeping item formats consistent. You may also want to give item writers instructions or templates for specific item types, especially if you are working with complex item types. (You should already have defined the types of items that can be used to measure each area of your test specifications in advance.)
  • Monitoring – Monitor item writers’ progress and spot-check their work. This is not a time to engage in full-blown item reviews, but periodic checks can help you to provide feedback and correct misconceptions. You can also check in to make sure that the item writers are abiding by security policies and formatting guidelines. In some item writing workshops, I have also asked item writers to work in pairs to help check each other’s work.
  • Communication – With some item designs, several people may be involved in building the item. One team may be in charge of developing a scoring model, another team may draft content, and a third team may add resources or additional stimuli, like images or animations. These teams need to be organized so that materials are handed off on time, but they also need to be able to provide iterative feedback to each other. For example, if the content team finds a loophole in the scoring model, they need to be able to alert the other teams so that it can be resolved.
  • Be Prepared – Be sure to have a backup plan in case your item writing sessions hit a snag. Know what you are going to do if an item writer does not complete an assignment or if content is compromised.
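To illustrate the "Assignments" point above, here is a minimal sketch of one way to distribute writing quotas evenly across a blueprint. The content areas, writer names, and 50% drop allowance are hypothetical, not figures from Schmeiser and Welch:

```python
import itertools
import math
from collections import defaultdict

# Hypothetical blueprint: items needed per content area.
blueprint = {"Domain A": 20, "Domain B": 30, "Domain C": 10}
writers = ["Writer 1", "Writer 2", "Writer 3"]
drop_allowance = 1.5  # draft extra items, expecting some to be dropped in review

assignments = defaultdict(lambda: defaultdict(int))
for area, needed in blueprint.items():
    target = math.ceil(needed * drop_allowance)
    # Round-robin across writers so no single author has undue
    # influence over an entire area of the test specifications.
    for _, writer in zip(range(target), itertools.cycle(writers)):
        assignments[writer][area] += 1

for writer, areas in sorted(assignments.items()):
    print(writer, dict(areas))
```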

Many of the details of the item drafting process will depend on your item types, resources, schedule, authoring software, and availability of item writers. Determine what you need to accomplish, and then organize your item writing sessions as much as possible so that you meet your goals.

In my next post, I will discuss the benefits of conducting an initial editorial review of the draft items before they are sent to review committees.

Writing Good Surveys, Part 2: Question Basics

Posted by Doug Peterson

In the first installment in this series, I mentioned the ASTD book, Survey Basics, by Phillips, Phillips and Aaron. The fourth chapter, “Survey Questions,” is especially good, and it’s the basis for this installment.

The first thing to consider when writing questions for your survey is whether the questions return the data you're looking for. For example, let's say one of the objectives for your survey is to "determine the amount of time per week spent reading email."

Which of these questions would best meet that objective?

  1. How many emails do you receive per week, on average?
  2. On average, how many hours do you spend responding to emails every week?
  3. How long does it take to read the average email?
  4. On average, how many hours do you spend reading emails every week?

All four questions are related to dealing with email, but only one pertains directly to the objective. Numbers 1 and 3 could be combined to satisfy the objective if you're willing to assume that every email received is read – a bit of a risky assumption, in my opinion (and experience). Number 2 is close, but there is a difference between reading an email and responding to it, and again, you may not respond to every email you read.
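For instance, combining questions 1 and 3 amounts to multiplying the two answers – a quick sketch with hypothetical numbers, valid only under that risky assumption:

```python
# Hypothetical answers, combining questions 1 and 3.
# Only valid if every email received is actually read.
emails_per_week = 200      # answer to question 1
minutes_per_email = 1.5    # answer to question 3
hours_per_week = emails_per_week * minutes_per_email / 60
print(f"{hours_per_week:.1f} hours/week reading email")  # 5.0
```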

The next thing to consider is whether the question can be answered at all and, if it can, whether it leads the respondent toward a desired answer.

The authors give two examples in the book. The first describes a situation where the author was asked to respond to the question, “Were you satisfied with our service?” with a yes or no. He was not dissatisfied with the service he received, but he wasn’t satisfied with it, either. However, there was no middle ground, and he was unable to answer the question.

The second example involves one of the authors checking out of a hotel. When she tells the clerk that she enjoyed her stay, the clerk tells her that they rate customer satisfaction on a scale of one to ten, and asks if she would give them a ten. She felt pressured into giving the suggested response instead of feeling free to give a nine or an eight.

Another basic rule for writing survey questions is to make sure the respondent can understand the question. If they can’t understand it at all, they won’t answer or they will answer randomly (which is worse than not answering, as it is garbage data that skews your results). If they misunderstand the question, they’ll be answering a question that you didn’t ask. Remember, the question author is a subject matter expert (SME); he or she understands the big words and fancy jargon. Of course the question makes sense to the SME! But the person taking the survey is probably not an SME, which means the question needs to be written in plain language. You’re writing for the respondent, not the SME.

Even more basic than providing enough options for the respondent to use (see the “yes or no” example above) is making sure the respondent even has the knowledge to answer. This is typically a problem with “standard” surveys. For example, a standard end-of-course survey might ask if the room temperature was comfortable. While this question is appropriate for an instructor-led training class where the training department has some control over the environment, it really doesn’t apply to a self-paced, computer-based e-learning course.

Another example of a question for which the respondent would have no way of knowing the answer would be something like, "Does your manager provide monthly feedback to his/her direct reports?" How would you know? Unless you have access to your manager's schedule and can verify that he or she met with each direct report and discussed their performance, the only question you could answer is, "Does your manager provide you with monthly feedback?" The same thing is true about asking questions that start off with, "Do your coworkers consider…" – the respondent has no idea what his/her coworkers' thoughts and feelings are, so only ask questions about observable behaviors.

Finally, make sure to write questions in a way that respondents are willing to answer. Asking a question such as "I routinely refuse to cooperate with my coworkers" is probably not going to get an honest answer from someone who is, in fact, uncooperative. Something like "Members of my workgroup routinely cooperate with each other" is not threatening and does not make the respondent look bad, yet they can still answer with "disagree" and provide you with insight into the work atmosphere within the group.

Here’s an example of a course evaluation survey that gives the respondent plenty of choices.

Summer webinars — including tips on better test planning and delivery

Joan Phaup HeadshotPosted by Joan Phaup

Students (and teachers) may be clicking their heels about summer vacation, but the joy of learning continues year-round for us!

Helping our customers understand how to use assessments effectively is as important to us as providing good testing and assessment technologies — so we're keeping our web seminars going strong during the summer months.

Here’s the current line-up:

Questionmark Customers Online: Using Questionmark and SAP for Effective Learning and Compliance — June 20 at 1 p.m. Eastern Time:

Learn about the use of Questionmark and SAP for a wide array of learning and compliance needs, including safety training, certifications and regulatory compliance testing. This presentation by Kati Sulzberger of BNSF Railway also describes how Questionmark helped the company meet some unique test delivery requirements.

Five Steps to Better Tests: Best Practices for Design and Delivery — July 18 at noon Eastern Time:

Get practical tips for planning tests, creating items, and building, delivering and evaluating tests that yield actionable, meaningful results. Questionmark Product Owner Doug Peterson, who will present this webinar, previously spent more than 12 years in workforce development. During that time, Doug created training materials, taught in the classroom and over the Web, and created many online surveys, quizzes and tests.

Questionmark Customers Online: Achieving a Better Assessment-Development Process — August 22 at 1 p.m. Eastern Time:

Need a better assessment building process? Find out how enterprise architecture principles can help you and your team work more efficiently. Tom Metzler, Knowledge Assessment Administrator at TIBCO Software, Inc., will explain how the company's certification team uses well-established software architecture principles to continually improve the efficiency of its assessment development process. Find out how using systematic processes and thorough documentation results in better information for subject matter experts, time savings and higher-quality assessments.

Introduction to Questionmark’s Assessment Management System — Choose from a variety of dates and times

This primer explains and demonstrates key features and functions available in Questionmark OnDemand and Questionmark Perception. Spend an hour with a Questionmark expert learning the basics of authoring, delivering and reporting on surveys, quizzes, tests and exams.

Click here for more details and free online registration.

Revision History in Questionmark Live

Revision history is a collaborative authoring capability of Questionmark Live. Here are the basics:

What it does: This feature enables users to track and manage edits made to questions:

    • View a question’s full revision history
    • Compare different versions side-by-side with marked-up changes
    • Roll back to previous versions of questions to undo edits made by others

Who should use it: Subject matter experts can use this feature to review a question's history, keep track of changes, and roll back to a previous version when needed.

How it looks: Each revision of a question is listed in a table with the following columns:

  • Revision – the revision number, indicating how many revisions have been created
  • Version ID – identifies the version of the question (and, after a rollback, which version was restored)
  • Modified On – states the date that the revision was made
  • Modified By – highlights who made the actual revision
  • Change Type – describes what the revision entailed
  • Comments – lists the notes made by the author who revised the question

Selecting a revision lets you view that version of the question, and selecting more than one lets you compare the differences. To roll back to a previous revision, select it and click Rollback.
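For readers who think in code, here is a minimal sketch of the model described above, with hypothetical field names rather than Questionmark Live's actual internals. The key idea is that a rollback appends a new revision restoring earlier content, so the history itself is never rewritten:

```python
from dataclasses import dataclass, field
from datetime import datetime
import difflib

# Illustrative model of a question's revision history
# (hypothetical field names, not Questionmark Live's internals).

@dataclass
class Revision:
    version_id: int
    content: str
    modified_on: datetime
    modified_by: str
    change_type: str
    comments: str = ""


@dataclass
class QuestionHistory:
    revisions: list = field(default_factory=list)

    def save(self, content, author, change_type, comments=""):
        self.revisions.append(Revision(
            version_id=len(self.revisions) + 1,
            content=content,
            modified_on=datetime.now(),
            modified_by=author,
            change_type=change_type,
            comments=comments,
        ))

    def compare(self, a, b):
        # Marked-up comparison of two versions (1-based version IDs).
        old = self.revisions[a - 1].content.splitlines()
        new = self.revisions[b - 1].content.splitlines()
        return "\n".join(difflib.unified_diff(old, new, lineterm=""))

    def rollback(self, version_id, author):
        # A rollback appends a new revision restoring the earlier content,
        # so the full history is preserved.
        old = self.revisions[version_id - 1]
        self.save(old.content, author, "Rollback",
                  f"Reverted to version {version_id}")


history = QuestionHistory()
history.save("What is 2 + 2?", "Alice", "Created")
history.save("What is 2 + 3?", "Bob", "Edited stem")
print(history.compare(1, 2))   # marked-up differences between versions
history.rollback(1, "Alice")   # undo Bob's edit; recorded as revision 3
```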

Creating an Extended Matching Question Type

Extended Matching Questions are similar to multiple choice questions but test knowledge in a far more applied, in-depth way. This question type is now available in Questionmark Live browser-based authoring.

What it does: An Extended Matching question provides an "extended" list of answer options for use in questions relating to at least two related scenarios or vignettes. (The number of answer options depends on the logical number of realistic options for the test taker.) The same answer choice could be correct for more than one question in the set, and some answer choices may not be the correct answer for any of the questions – so it is difficult to answer this type of question correctly by chance. A well-written lead-in question is so specific that students understand what kind of response is expected, without needing to look at the answer options.

Who should use it: It is often used in medical education and other healthcare subject areas to test diagnostic reasoning.

What’s the process for creating it? This diagram shows how to create this question type in Questionmark Live:

How it looks: Here is an example of an Extended Matching Question.
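As a rough sketch of the structure (the vignettes and option list are hypothetical, not taken from Questionmark Live), an Extended Matching question can be modeled as one shared option list plus several lead-in scenarios, each keyed to a single option:

```python
# Hypothetical Extended Matching question (illustrative content and format).
options = {
    "A": "Iron-deficiency anemia",
    "B": "Vitamin B12 deficiency",
    "C": "Hemolytic anemia",
    "D": "Anemia of chronic disease",
    "E": "Thalassemia trait",
}

# Each scenario (vignette) is keyed to exactly one option; the same option
# may answer more than one scenario, and some options answer none.
scenarios = [
    {"stem": "A 30-year-old woman with heavy periods and low ferritin...", "key": "A"},
    {"stem": "A 70-year-old vegan with macrocytosis and glossitis...", "key": "B"},
    {"stem": "A 25-year-old with microcytosis but normal ferritin...", "key": "E"},
]

def score(responses):
    """responses: one option letter per scenario, in order."""
    return sum(r == s["key"] for r, s in zip(responses, scenarios))

print(score(["A", "B", "A"]))  # 2 of 3 correct
```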
