Assessment types and their uses: Needs Assessments

Posted by Julie Delazyn

Assessments have many different purposes, and to use them effectively it’s important to understand their context and uses within the learning process.

Last week I wrote about formative assessments, and today I’ll explore needs assessments.

Typical uses:

  • Determining the knowledge, skills, abilities and attitudes of a group to assist with gap analysis and courseware development
  • Determining the difference between what a learner knows and what they are required to know
  • Measuring against requirements to determine a gap that needs to be filled
  • Helping training managers, instructional designers, and instructors work out what courses to develop or administer to satisfy their constituents’ needs
  • Determining if participants were routed to the right kind of learning experiences

Types:

  • Job task analysis (JTA) survey
  • Needs analysis survey
  • Skills gap survey

Stakes: low


Example:
A food service company can run a needs analysis survey to identify differences between the knowledge of subject matter experts and people on the job. Evaluating the different groups’ scores, as shown in the gap analysis chart below, reveals the overall differences between the experts’ and workers’ knowledge. But more significantly, it diagnoses a strong need for workers to improve their understanding of food safety. Information like this can inform the organization’s decision about further staff development plans and learning programs.
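
To make the gap-analysis step concrete, here is a minimal sketch in Python of how per-topic gaps might be computed from the two groups' scores. The topics and numbers are hypothetical, and this illustrates the general technique rather than how Questionmark itself calculates or charts the results.

    # Hypothetical per-topic mean scores (percentages) for each group.
    expert_scores = {"food safety": 92, "customer service": 85, "stock control": 78}
    worker_scores = {"food safety": 58, "customer service": 80, "stock control": 71}

    # Gap = expert mean minus worker mean for each topic.
    gaps = {topic: expert_scores[topic] - worker_scores[topic]
            for topic in expert_scores}

    # List topics by gap size so the biggest training needs surface first.
    for topic, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{topic}: {gap}-point gap")
    # food safety: 34-point gap  <- strongest candidate for new courseware

Sorting by gap size makes the priority order obvious: the larger the difference between experts and workers on a topic, the stronger the case for targeted training on it.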

For more details about assessments and their uses, check out the white paper, Assessments Through the Learning Process. You can download it free here, after login. Another good source for testing and assessment terms is our glossary.

In the coming weeks I’ll take a look at two other assessment types:

  • Reaction
  • Summative

Feeding back from eAssessment Scotland

Posted by Steve Lay

eAssessment Scotland is an annual event hosted by the University of Dundee in Scotland.

This year’s conference had a very clear theme: Feeding Back, Forming the Future. I have to say that the programme was managed very well to fit this theme, and that the theme also fits well with the current mood of the wider community. For example, in the UK as a whole, JISC has an ongoing programme on assessment and feedback, and this event provided an opportunity for some of those projects to report on their progress.

I do find that “feedback” can be a very general term. In the opening keynote, Professor David Boud of the University of Technology Sydney provided an analysis of the subject through a three-generation model of feedback. At one point he encouraged us to “position feedback as part of learning and not as an adjunct to assessment”.

I sensed that assessment was being used in an “Assessment of Learning” sense here. This contrasts with “Assessment for Learning”; the two phrases are simpler ways of expressing the basic ideas behind summative and formative assessment respectively. It is the latter that generates the type of feedback which could potentially meet the challenge posed by Dr Steve Draper, University of Glasgow: What If Feedback Only Counted When it Changed the Learner?

From the tone of the discussion at the conference, I do sense that the higher-education community is trying hard to adapt to the new perceptions of formal, informal and experiential learning reflected in the 70:20:10 model of education and development — by continuing to embrace the value of formal learning while adopting other modes of learning.

The 10% is sometimes summarised as the part of our learning that comes from formal courses (and reading). Feedback is reserved for the 20%, where we learn from our peers. Many of the presentations were about embracing social systems in an attempt to exploit these modes of feedback.

Clearly, assessment has an important role to play in assessment for learning, but I took away the impression that this community sometimes needs reminding that understanding the purpose of an assessment is vital to its success. Combining assessment for learning and assessment of learning may not be fruitful.

Avoiding Bias and Stereotypes – Test Design & Delivery Part 5

Posted By Doug Peterson

As you write and evaluate your assessment items, it is critical to avoid bias and stereotyping, as they can inhibit the impartiality, and therefore the fairness, of your assessment.

“Bias” refers to giving a preference to one group over another. There are a number of ways that bias can creep into your item writing. For example, if you use language that is familiar to a group in a specific geographical location, it would give them an unfair advantage over participants from other parts of the globe. You can avoid bias by doing the following (a simple screening sketch follows the list):

  • Use neutral terms, for example, sales agent instead of salesman.
  • Strive for a balanced representation of various groups in diverse roles.
  • Use standard, formal English. Avoid slang, idioms and colloquialisms. Also avoid obscure language or ambiguous acronyms unless they are standard, recognized terms with regard to the subject matter of the assessment.
  • Be wary of using a condescending tone, for example one that implies that a person with differing abilities is incapable of caring for himself or herself, or that a person of lower socio-economic status is less intelligent than someone of higher status.
  • Avoid references to race, ethnicity, gender, age, etc. unless they specifically apply to the question. For example, it would be appropriate to mention age in a question about a medical diagnosis if age is pertinent and could change the diagnosis, but it is not appropriate to mention age when age has nothing to do with the knowledge or skill being assessed.
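
Some of the wording checks above can be partially automated. Here is a minimal sketch in Python of a screen for non-neutral terms; the term list is hypothetical and deliberately tiny, and no automated check of this kind can substitute for human review.

    import re

    # Hypothetical map of flagged terms to neutral alternatives; a real list
    # would be much longer and maintained by your reviewers.
    NEUTRAL_ALTERNATIVES = {
        "salesman": "sales agent",
        "chairman": "chairperson",
        "policeman": "police officer",
    }

    def flag_biased_terms(item_text: str) -> list:
        """Return review notes for any flagged terms found in an item."""
        notes = []
        for term, neutral in NEUTRAL_ALTERNATIVES.items():
            if re.search(rf"\b{term}\b", item_text, re.IGNORECASE):
                notes.append(f"Consider '{neutral}' instead of '{term}'.")
        return notes

    print(flag_biased_terms("A salesman records each order in the till."))
    # ["Consider 'sales agent' instead of 'salesman'."]

A screen like this only catches obvious wording. Tone, condescension and stereotyping still require human judgment, which is one reason to have items reviewed by a diverse group of subject matter experts.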

Stereotyping is when you make generalizations or assumptions about a person based on his or her membership in a group. There are several ways to avoid stereotyping:

  • Include positive depictions of individuals in non-traditional roles. For example, don’t assume that all nurses are female and all doctors are male.
  • Make sure your items are reviewed by a diverse group of subject matter experts.
  • Present people with disabilities in active, capable and independent positions.
  • Avoid common racial/ethnic stereotyping.
  • Do not portray either sex as submissive or having an inferior status.
  • Do not demean the elderly by portraying them as feeble, lonely or dependent.

By avoiding bias and stereotypes, you help ensure that your assessment is testing only what it should be testing, and that nothing is interfering with or distracting from participants’ ability to demonstrate their true knowledge levels.

Focus on compliance at London Breakfast Briefing

Posted by Chloe Mendonca

Earlier this week, Questionmark held a Breakfast Briefing at Microsoft’s London office. Questionmark users and other assessment professionals got together to learn about the latest assessment technologies and discuss the various benefits and applications of online assessment.

There were some thought-provoking questions and answers, particularly about how to create tests for compliance. The demonstration of our Questionmark Live browser-based authoring tool was another key feature of the seminar.

We also learned how one financial services organization has built up its use of Questionmark for online assessments during the last seven years and now deploys them globally in more than 80 countries. It was fantastic to see so many people together and to hear how they use assessments now and plan to implement them in the future.

If you missed the briefing, check out the SlideShare presentation below.

eAssessment in the Cloud, Sunshine or Thunderstorm?

Posted by John Kleeman

Earlier this week, I presented at the online part of the eAssessment Scotland conference on the advantages and disadvantages for academic institutions of using eAssessment in the Cloud “on-demand” or installing it “on-premise” within the institution. Does an on-demand eAssessment service give continual sunshine to a university or college? Or is it safer to install it locally and go on-premise? What questions do you need to ask about the potential thunderstorms of using the Cloud?

Questionmark offers both Questionmark Perception, an installable assessment management system, and Questionmark OnDemand, our scalable software-as-a-service system, so we can see the pros and cons of both approaches, and can offer some unbiased advice.

Here is the presentation I gave – you can see it embedded at the end of the post or else view it on the Questionmark Slideshare site.

The presentation suggests that for a university or college, on-demand may be stronger in these areas:

  • Access to innovation
  • Speed/flexibility of deployment
  • Reliability and uptime
  • Scalability
  • Security and cheating
  • Getting IT bandwidth

And that on-premise may be stronger in these:

  • Ease of customization/integration
  • Connectivity
  • Governments accessing your data

In these areas, you need to look into the details to determine what would work best in your situation:

  • Data protection
  • Can you change providers?
  • Costs, features and other factors

I believe that on-demand offers a lot of value for many universities and colleges, especially if their IT department is focused elsewhere and does not easily have the bandwidth to manage eAssessment. But it is very important to get your solution right, and if you’re looking at on-demand, you might like to read a paper I presented at the 2012 International Computer Assisted Assessment Conference on How to Decide between On-demand and On-premise eAssessment. It includes many useful questions to ask providers when evaluating potential on-demand solutions. You can see the paper here.

I hope you find the presentation useful.

Auto-sensing device and browser – now more important than ever!

Posted by Brian McNamara

The field of widely used web browsers, once dominated by Microsoft’s Internet Explorer (IE), is becoming more complex and diverse.

A recent review of web traffic stats from StatCounter indicates that IE still sees a significant market share worldwide, but that web traffic from other browsers such as Firefox, Chrome and Safari has grown significantly in the past three years. Indeed, in several countries and regions, traffic from Firefox and Chrome browsers now exceeds that from IE.

Worldwide browser stats
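
If you would like to explore the numbers yourself, StatCounter makes its data available for download. As a rough sketch, assuming a CSV export with Date, Browser and Share columns (the file name and layout here are assumptions, not a documented format), the average share per browser could be summarised like this:

    import csv
    from collections import defaultdict

    totals = defaultdict(float)
    months = defaultdict(int)

    # Assumed layout: one row per month per browser, e.g. Date,Browser,Share
    with open("browser-ww-monthly.csv", newline="") as f:
        for row in csv.DictReader(f):
            totals[row["Browser"]] += float(row["Share"])
            months[row["Browser"]] += 1

    # Average market share per browser over the period, largest first.
    for browser in sorted(totals, key=totals.get, reverse=True):
        print(f"{browser}: {totals[browser] / months[browser]:.1f}% average share")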

IE still commands the largest share of web traffic in the US, though it has declined from about 66% to 42% since 2008:

US Browser Stats

In Europe, three browsers split the lion’s share of traffic: Chrome, Firefox and IE each account for about 30%, with Chrome seeing steady increases, apparently at IE’s expense:

Browser stats - Europe

If we drill down a bit further to the ‘country’ level, we can see IE and Chrome nearly equal in the UK, with a similar trend in Chrome’s growth at IE’s expense.

Browser stats - UK

But we see a different story in Germany, Europe’s largest economy, where Firefox has led IE in market share for more than three years, with IE now accounting for only about 25% of browser usage.

Browser Stats - Germany

Chrome’s growth is even more pronounced in South America, where it now accounts for more than 52% of traffic, more than double the share of either Firefox or IE.

Browser stats - South America

So what does all this have to do with online assessment? As more and more organizations deliver online assessments to increasingly diverse and globally dispersed audiences, they must consider the impact that different browser technologies could have on the experience of the employee, student or candidate who is taking the assessment.

Fortunately, Questionmark has taken the guesswork out of the authoring and delivery process with “auto sensing” delivery – in which the Questionmark delivery system senses the participant’s device, screen resolution and browser, and then delivers the assessment in a format best suited for the participant’s environment:

Auto Sensing - Auto Sizing
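
As a rough illustration of the idea behind auto-sensing (this is not Questionmark's actual implementation, just a sketch of the general technique), a delivery system can inspect the browser's User-Agent header and select a matching presentation template. The template names here are hypothetical:

    def choose_template(user_agent: str) -> str:
        """Pick a presentation template from a raw User-Agent string."""
        ua = user_agent.lower()
        if "ipad" in ua or "tablet" in ua:
            return "tablet_template"
        if "mobile" in ua or "iphone" in ua or "android" in ua:
            return "small_screen_template"
        return "desktop_template"

    print(choose_template(
        "Mozilla/5.0 (iPhone; CPU iPhone OS 6_0 like Mac OS X) Mobile/10A403"
    ))  # -> small_screen_template

Real-world user-agent detection is considerably messier than this sketch suggests, which is a good argument for letting the delivery system handle it rather than building it yourself.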
Plus, Questionmark’s assessment management system supports delivery to the latest versions of many major browsers, including Internet Explorer, Firefox, Chrome, Safari and Opera. This means that you can author an assessment once, deliver it to multiple types of devices and browsers, and obtain centralized, trustable results for analysis and sharing with stakeholders.

 
