Building on SharePoint for your learning infrastructure: A SlideShare presentation

Posted by Julie Delazyn

Social networking, wikis, blogs, portals and collaboration tools play an increasingly important role, offering powerful ways to increase participation and sustain momentum in learning. Enterprise portal applications such as Microsoft SharePoint offer content management and facilitate information sharing across boundaries.

According to Bill Finegan, Vice President of Enterprise Technology Solutions at GP Strategies Corporation, the “portalization” of learning and development enables workers to gain the knowledge and skills they need to succeed, using technologies they engage with every day. How can tools like these fit in with your organization’s learning needs? And how can you make them work effectively together?

Bill answered these and other questions during a presentation called Get to the Point! Leverage SharePoint to meet your Learning and Development Needs at the 2013 Questionmark Users Conference.

Now available on SlideShare, this presentation explores the elements of a dynamic learning ecosystem and explains how to combine SharePoint, Questionmark and other technologies to provide a learning environment suitable for today’s workers. This is just one example of what people learn about at our Users Conferences. Registration is already open for the 2014 Users Conference March 4 – 7 at the Grand Hyatt on the beautiful Riverwalk in San Antonio, Texas. Plan to be there!

Why mobile assessment matters

Posted by John Kleeman

Assessment on mobile phones and tablets matters because so many people have these devices, and there is a huge opportunity to use them. Sometimes it’s easy to forget how rapid this change has been!

I’m indebted to my colleague Ivan Forward for this visualization showing the increase in mobile phone ownership in 10 years. It’s based on data from the South African census reported by the BBC.

See the BBC report for the figures behind this graph.

As you can see, in the decade from 2001 to 2011, more South Africans gained access to electricity, flush toilets and higher education, but the change in use of mobile phones has been far more dramatic.

Figures in other countries will vary, but in every country mobile phone use has increased hugely.

Not only does this explain why mobile assessment matters, it also explains why so many organizations (including Questionmark customers) are moving to Software as a Service / on-demand systems. Because of the rise of mobile phones, the parallel rise in tablets and the fast changing nature of mobile technology, you need your software to be up to date. And for most organizations, this is easier to do if you delegate it to a system like Questionmark OnDemand than if you have to update and re-install your own software frequently.

Questionmark training courses scheduled in US & UK

Posted by Julie Delazyn

Do you want to quickly and easily learn to create, deliver and analyze the results of Questionmark assessments? Sign up for a hands-on three-day course taught by an expert trainer who will help you create professional-looking surveys, quizzes, tests and exams right from the start. By the end of the course you will have a solid working knowledge of Questionmark’s major components and features.

Learn to:

  • Create questions, topics and assessments
  • Add feedback to questions
  • Create participant accounts
  • Schedule assessments
  • Get acquainted with reports and analytics

Registration is now open for these open enrollment courses:

In the US:

  • August 6 – 8 in San Francisco, CA
  • October 1 – 3 in Washington, DC

In the UK:

  • July 16 – 18 in London

Seats are limited, so register soon!

Secure Testing in Remote Environments: A SlideShare Presentation

Posted by Julie Delazyn

How can you be sure that someone taking an online exam away from a testing center or classroom is adhering to the guidelines put in place by your instructional staff?

This SlideShare presentation will demonstrate how instructors can prevent or catch cheating and ensure a secure environment for employees or students taking tests in their homes, offices and other locations.

The slides are from a Best Practices session at the 2013 Questionmark Users Conference: Don Kassner of ProctorU discussed strategies for reducing incidents of dishonesty online, and Maureen Woodruff of Thomas Edison State College explained how online proctoring enables the college to administer tests securely to thousands of online learners.

This presentation offers a glimpse into the kind of discussions and sessions you can find at our Users Conferences. Registration is already open for the 2014 Users Conference March 4 – 7 at the Grand Hyatt on the beautiful Riverwalk in San Antonio, Texas. Discounts are available for groups and early registrants. Sign up soon and plan to be there!

Standard Setting: Angoff Method Considerations

Posted by Austin Fossey

In my last few posts, I spoke about validity. You may recall that our big takeaway was that validity has to do with the inferences we make about assessment results.

With many of our customers conducting criterion-referenced assessments, I’d like to use my next few posts to talk about setting standards that guide inferences about outcomes (e.g., pass and fail).

I’ll start by discussing the Angoff Method – about which Questionmark has some great resources including a recorded webinar on the subject. I encourage you to use this as a reference if you plan on using this method to set standards. Just to summarize, there are five key steps in the Angoff Method:

  1. Select the raters.
  2. Take the assessment.
  3. Rate the items.
  4. Review the ratings.
  5. Determine the cut score.
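The arithmetic behind steps 3–5 can be sketched as follows. In the traditional Angoff Method, each rater estimates, for each item, the probability that a minimally qualified participant would answer it correctly; averaging a rater's estimates across items gives that rater's recommended cut score, and averaging across raters gives the panel's cut score. The ratings below are hypothetical, purely for illustration:

```python
# Hypothetical Angoff ratings: for each rater, the estimated probability
# that a minimally qualified participant answers each item correctly.
ratings = {
    "rater_1": [0.90, 0.60, 0.75, 0.80],
    "rater_2": [0.85, 0.50, 0.70, 0.90],
    "rater_3": [0.95, 0.55, 0.65, 0.85],
}

def angoff_cut_score(ratings):
    """Average each rater's mean item rating, then average across raters."""
    per_rater = [sum(r) / len(r) for r in ratings.values()]
    return sum(per_rater) / len(per_rater)

cut = angoff_cut_score(ratings)
print(f"Recommended cut score: {cut:.1%}")  # 75.0% with this sample data
```

A spreadsheet like the one shown below does exactly this: one row per item, one column per rater, with row and column averages rolling up to the panel's recommended cut score.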

Expert Ratings Spreadsheet Example

Using the Angoff Method to Set Cut Scores (Wheaton & Parry, 2012)

Some psychometricians repeat steps 3-5 in a modified version of this method. In my experience, when raters compare their results, their second rating often regresses to the mean. If the assessment has been field tested, the psychometrician may also use the first round of ratings to tell the raters how many of the participants would have passed based on their recommended cut score (impact data).
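An impact-data check of the kind just described is simple to compute once you have field-test scores: count the proportion of field-test participants who would have met the panel's recommended cut score. The scores here are made up for illustration:

```python
# Hypothetical field-test scores (proportion of items answered correctly).
field_test_scores = [0.55, 0.62, 0.70, 0.74, 0.76, 0.81, 0.88, 0.93]

def pass_rate(scores, cut_score):
    """Fraction of field-test participants at or above the cut score."""
    passed = sum(1 for s in scores if s >= cut_score)
    return passed / len(scores)

# With a recommended cut score of 0.75, half of this sample would pass.
print(f"Pass rate at a 0.75 cut score: {pass_rate(field_test_scores, 0.75):.0%}")
```

Sharing this number with raters gives them a concrete sense of the consequences of their recommendation before any second round of rating.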

Whether or not you choose to do a second round of rating depends on your preference. A second round means that your raters’ results may be biased by the group’s ratings and impact data, but this also serves to rein in outliers that may skew the group’s recommended cut score. This latter problem can also be mitigated by having a large number of representative raters, as discussed in a previous post.

As part of step 3, psychometricians train raters to rate items, and the toughest part is defining a minimally qualified participant. I find that raters often make the mistake of discussing what a minimally qualified participant should be able to do rather than what they can do. Taking the time to nail down this definition will help to calibrate the group and temper that one overzealous rater who insists that participants get 100% of items correct to pass the assessment.

The definition of a minimally qualified participant depends on the observable variables you are measuring to make inferences about the construct. If your assessment has a blueprint, your psychometrician may guide the raters in a discussion of what a minimally qualified participant is able to do in each content area.

For example, if the assessment has a content area about ingredients in a peanut butter sandwich, there may be a brief discussion to confirm that a minimally qualified participant knows how to unscrew the lid of the peanut butter jar.

This example is silly (and delicious), but this level of detail is valuable when two of your raters disagree about what a minimally qualified participant is able to do. Resolving these disagreements before rating items helps to ensure that differences in ratings are a result of raters’ opinions and not artifacts of misunderstandings about what it means to be minimally qualified.

On-demand or on-premise: Which is better for talent management? Part 2

Posted by John Kleeman

In an earlier post, I explained 6 reasons why the Cloud is usually better for deploying talent management software.

These were:

  1. On-demand gives you access to innovation and use of mobile devices
  2. Deployment is easier with on-demand and allows quick pilots
  3. On-demand requires less corporate IT bandwidth
  4. You don’t need to worry about scalability with on-demand
  5. On-demand is easier to make secure
  6. On-demand is usually more reliable

We offer the on-premise Questionmark Perception as well as Questionmark OnDemand, our SaaS solution, so I have no “axe to grind”.

It’s important to consider all the angles when deciding between on-demand and on-premise — so now I’d like to identify 4 reasons why on-premise can be better:

1. Data protection is simpler if everything is in house

What are some of the reasons against on-demand deployment? One is data protection.

With an on-premise installation, you have full control of your own data protection.

With an on-demand installation, you need to ensure that you keep control of your data and that the Cloud provider processes it responsibly. Most reputable providers do a good job on data protection, so you can usually resolve this concern, but you do need to stay in control and be vigilant, especially when your data passes through a network of different providers.

2. The US Patriot Act can be a concern for non-US organizations

Usually an organization will be reasonably confident that data in an on-premise system is inaccessible to governments or other outside parties, at least without a legal process. But there is concern that a government might force an on-demand provider to share data without the organization’s permission.

In particular, the US Patriot Act gives the US government the right to demand data from a US provider. An organization concerned about this would want to use an on-demand provider that is not US-owned and whose data center is outside the US. (Questionmark has a European data center for exactly this reason.)

3. There is less risk of lock-in with on-premise

Technology, suppliers and needs all change, and every organization needs to be able to plan to move systems in the future. For both on-premise and on-demand, you need to make sure that your data is accessible in a documented format. But for on-demand, you also need to make sure that your contract permits you to access and export your data, to avoid lock-in.

4. You can customize on-premise

You can configure an on-demand system and set up your own templates and branding, but major customization is harder. Most on-demand providers use the same software instance for all their customers; this is one of the key economies of scale that make on-demand successful.

An on-premise installation is much easier to customize, so a strong reason to go on-premise can be to do deep customization. For instance, you can usually access data via web services in the Cloud, but if you need direct database access or connections, you may need to go on-premise.

Of course, if you do customize, too extensive a change can make things difficult when a new version of the software is produced. This goes back to the first reason on my list of arguments in favour of on-demand: it gives you easier access to new versions and innovation.

Of course, other factors come into play as well – functionality, cost, support, and organizational culture, to name a few. Both routes are viable. There are advantages to on-demand services, but some organizations prefer on-premise installations for good reason.

If you’re trying to decide what’s right for you, I hope both of these posts (part 1 is here) help highlight some of the issues.