Scheduling Observational Assessments

Observational Assessments enable an “observer,” such as an instructor or supervisor, to watch someone complete a task and assess the participant’s performance. This type of assessment, available in Questionmark OnDemand, provides a way to assess participants in their everyday tasks and rate knowledge or abilities that would not normally be reflected in answers to a standard assessment. For example, an observational assessment could be used to rate areas such as skills, safety practices and adherence to required procedures.

In many cases, it’s convenient to conduct observational assessments using mobile devices, and this is easy to do with Questionmark Perception’s mobile delivery capabilities and our free apps for Apple and Android devices.

Whether you plan to deliver an observational assessment via PC or mobile device, here’s how to schedule it in Perception Enterprise Manager:

  • Set up a Perception administrator as a “monitor” who performs the role of observer
  • Assign participants to groups
  • Assign the monitor to administer the groups
  • Schedule the participant or a group of participants to an assessment and set it to require monitoring

The observer logs in, selects the desired assessment, chooses the participant to be rated, and then completes the assessment. Reports for the assessment will appear under the participant’s name. In addition, the name of the monitor who observed the tasks can be included in reports such as the Coaching Report, Test Analysis Report and Survey Report.

For more details about observational assessments, including example applications and a video tutorial, click here.

New Item Analysis Report in Questionmark Analytics: The Summary Page

 Posted by Jim Farrell

When I visit customers, I find that the Item Analysis report is one of the most useful reporting capabilities of Questionmark Perception. By using it, you can tell which questions are effective and which are not – and if you don’t use it, you are “running blind”: you hope your questions are good, but do not really know whether they are.

Our most recent update to Questionmark OnDemand provides a new classical test theory item analysis report — one of several reports now available in Questionmark Analytics. This report supports all question types commonly used on quizzes, tests and exams and is fully scalable to large pools of participants. Let’s take a look at the report!

[Screenshot: Item Analysis Report summary page]

This is the summary page. The graph shows the performance of questions in relation to one another in terms of their difficulty (p-value) and discrimination (item-total correlation). The p-value is a number from 0 to 1 representing the proportion of people who answer the question correctly. So a question with a p-value of 0.5 means that half the participants get it right and half get it wrong, and a question with a p-value of 0.9 means that 90% of respondents get it right.

A rule of thumb is that it’s often useful to use questions with p-values reasonably close to the pass score of the assessment. For instance, if your pass score is 60%, then questions with a p-value of around 0.6 will give you good information about your participants. However, a question with a very high or very low p-value does not tell you much about the person answering it. If the purpose of the test is to measure someone’s knowledge or skills, you will get more information from a question with a medium p-value. Running the item analysis report is an easy way to get p-values.

The other key statistic in the item analysis report is the item-total correlation, a measure of discrimination that shows the correlation between the question score and the assessment score. Higher positive values indicate that participants who obtain high question scores also obtain high assessment scores, and conversely that participants with low question scores also obtain low assessment scores. Questions with low values here could be unhelpful and are worth drilling down on.
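To make the arithmetic behind these two statistics concrete, here is a minimal sketch in Python. The 0/1 response matrix is invented for illustration, and Questionmark Analytics calculates these values for you — the sketch simply shows how a p-value and an item-total correlation can be derived from question scores.

```python
import numpy as np

# Hypothetical 0/1 score matrix: one row per participant, one column per question.
scores = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [1, 0, 0, 0],
])

# p-value: proportion of participants answering each question correctly.
p_values = scores.mean(axis=0)

# Item-total correlation: correlation between each question's score
# and the participant's overall assessment score.
totals = scores.sum(axis=1)
for q in range(scores.shape[1]):
    r = np.corrcoef(scores[:, q], totals)[0, 1]
    print(f"Question {q + 1}: p-value = {p_values[q]:.2f}, "
          f"item-total correlation = {r:.2f}")
```

In a matrix like this, questions with mid-range p-values and clearly positive item-total correlations are the ones doing most of the work in separating stronger participants from weaker ones.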

Below the graph is a table showing high-level details of each question in the assessment. The table can be sorted by any of the columns, and clicking a row takes you to that question’s detail page, which we will discuss in our next blog post on this subject.

If you are running a medium- or high-stakes assessment that has to be legally defensible, you cannot confirm that the assessment is valid without running item analysis. And for all quizzes, tests and exams, running an item analysis report will give you information to help make the assessment better.

An update on electronic standards for higher education

Posted by Steve Lay

Last month I attended the PESC Spring Data Summit. PESC stands for Postsecondary Electronic Standards Council, a forum in which users and vendors can come together to help align the way data in Higher Education (HE) is collected and exchanged.

The world of electronic standards is a minefield of acronyms, and I heard the usual joke about there being “so many to choose from” at least once during the 3-day meeting. However, each organization tends to represent a specific relationship between a group of users and their technology suppliers. College admission is similar to recruitment: applicants are selected and test scores are often involved, but the two processes are carried out by different groups of stakeholders with different data requirements. This is why standards bodies like HR-XML and PESC are both important: there is no single electronic standard that can satisfy both sets of use cases, even though tools like Questionmark Perception can be used in both applications.

During the summit, the US Department of Education (DoE) announced the conclusion of their investigation into electronic standards in the assessment domain. They received detailed responses to a Request for Information from a wide range of communities and have published a summary of the responses on the ED.gov website.

The document contains a useful model of the key elements of the assessment process and the standards that help integrate them together:

I. Assessment Instruments and Items: Format and Packaging (Questionmark: Authoring)

II. Initiation and Return of Assessment Administrations (Questionmark: Scheduling)

III. Administration of Assessments (Questionmark: Delivery)

IV. Learning Outcomes Management (Questionmark: Reporting)

V. Learning Records Management

(Customers familiar with the Questionmark wheel logo may see a similarity with our own model of the assessment process!)

The summary will not be surprising to anyone who has worked with the standards bodies. In my opinion it is a fair description of the current state of affairs. It reaffirms my belief that IMS QTI/Common Cartridge used in conjunction with Learning Tools Interoperability (LTI) remains the best route forward to improving integration in the content Authoring and Scheduling elements of the process.

Finally, one area to watch for the future:  At the data summit, PESC announced a strategic partnership with InCommon, the access management federation for US Higher Education. The federation provides a legal and technical framework to help improve access management across this community. It applies across all tools, not just assessment. On reading the DoE recommendations in the Technology Landscape section of the summary, I see a strong resonance with the goals and activities of the federation.

“Follow Friday” — Some thought leaders to watch on Twitter

Posted by John Kleeman

I find Twitter a great way to learn every day and stay up to date with e-assessment and learning technology. With Twitter, I can see a summary of important information that others think is worth my knowing.

If you’ve not tried Twitter and would like to get started, go to www.twitter.com and sign up. Choose a short, memorable user name, set up your profile and choose some people to follow. You can then view their tweets in a browser or – more useful for many – on your mobile phone or iPad. And if you want to contribute, it’s easy to post your own tweets back.

Twitter has a “Follow Friday” tradition of recommending who to follow at the end of a week. Here are a few of the people I personally follow and suggest many of you might find interesting … I’ll mention more in a future post!

@charlesjennings

Charles Jennings used to be head of learning at Reuters and is now a freelance learning specialist. He’s one of the people who’ve popularized the 70+20+10 model for informal learning.

Example tweet: “An effective social media security strategy starts with user education” http://bit.ly/iU0Wh8

@drdjwalker

David Walker is Senior Learning Technologist at the University of Dundee. He’s also on the board of the E-Assessment Association and tweets on e-assessment.

Example tweet: Interesting article in this weeks @timeshighered about online exams and allowing students access to Internet/search tools. Worth a read.

@mfeldstein67


Michael Feldstein is the author of a longstanding educational technology blog and is a knowledgeable commentator on the academic market and its LMSs.

Example tweet: Hmm. The IMS has removed the word “standards” from the mission statement. #LI11

@questionmark

The Questionmark marketing team said I’d better put this one here :). No, seriously: if you follow @questionmark on Twitter, you’ll hear about all our announcements and blog articles and can follow up on those you’re interested in.

Example tweet: The fraud triangle: understanding and mitigating threats to test and exam security http://slidesha.re/mP0Q6E a slideshare presentation

@sprabu


Prashanth Padmanabhan is a product manager at SAP. He’s great at finding new ideas in business, software design and talent management and condensing them into a stream of tweets.

Example tweet: “How Do We Prepare Kids for Jobs We Can’t Imagine Yet? Teach Imagination” http://bit.ly/jZAqma

@WillWorkLearn


Will Thalheimer is one of the gurus of learning research. He reviews research from academics and applies it in a practical context.

Example tweet: Learning and Forgetting Curves — Implications Explained in New Video: http://tinyurl.com/3k3omyy

If you want to follow me on Twitter, you can find me as @johnkleeman. Check the tab at the left for Twitter addresses for several Questionmark colleagues – including our CEO @ericshepherd. I hope you find Twitter as useful as I do to learn every day.

Podcast: Essentials of Data Security for Online Assessments

Posted by Joan Phaup


Data security is a crucial component of Questionmark’s D3 platform for Questionmark OnDemand hosted and subscription solutions, so I recently got together with Questionmark Chairman John Kleeman and Sean Decker, our IT architect, to learn more about how we ensure the safety and security of confidential data.


I peppered John and Sean with questions about everything from intrusion detection systems to precautions for preventing the loss of data. We talked about the extensive protections at our SAS 70 Type II-certified data center, employee training on data security, multiple firewalls, encryption and other safeguards, as well as the ways in which our software development process addresses potential security issues.

Questionmark takes the subject of data security very seriously, and our conversation was both serious and fascinating. If you’d like to know more about this subject, I hope you will listen in.

Minimizing bias when assessing across culture and language

Student numbers graph

Posted by John Kleeman

I attended a thought-provoking presentation last week by Dr. Janette Ryan of the Teaching International Students project about the rising numbers of international students at universities and the challenges of teaching and assessing them. This inspired me to do some research about the cultural and linguistic challenges involved with assessing in such contexts.

As you will see in the graph on the right, there is an increasing trend for countries to send students for university education overseas, so they can learn from other cultures as well as their own. There are around 3.5 million international students worldwide. The USA is the world’s most popular destination, with the UK and Australia coming second and third.

The UK Teaching International Students project has a page on assessment and feedback. I also found a paper from Oxford Brookes University, Sitting exams in a second language: minimising bias, maximising potential, and an Australian guide to Assessing students unfamiliar with assessment practices in Australian higher education.

Here is some advice from these documents. It’s aimed at a university and higher education context, but much of it will also be relevant to corporate training.

1. Consider giving extra time for people who are taking an assessment in a language that is foreign to them. You should consider an accommodation in the same way as you would for others who read more slowly, e.g. dyslexic students.

2. Make your questions and instructions clear and unambiguous; use as few words as you need. For someone not working in their native language, each extra word increases cognitive load.

3. Expectations for how to write essays vary between countries and cultures. In some cultures, presenting a contentious statement and asking the student to discuss it is a normal means of assessment; in others this is novel.

4. In some educational settings, the more closely a student replicates the work or words of an expert, the greater the student’s learning or mastery of the subject is considered to be. Elsewhere, replicating the words of someone else is regarded as plagiarism and cheating! Whichever approach you take, tell your students what is expected.

5. Explain well how you are going to run assessments. Styles of teaching, language for grades and ways of assessing vary in different cultures and countries. Your methods may be expected by students from your own country but novel for students from another country.

6. Many international students have a high level of language proficiency but a low level of cultural knowledge. Ensure your assessments do not presume cultural knowledge; using case study questions that make assumptions about prior knowledge or context is a common mistake. The question below is meaningful to Europeans but not to others.

(from Oxford Brookes paper) "As an anthropologist, how would you study Eurovision?"

7. Give plenty of opportunities for students to practice assessments. With software like Questionmark Perception, it’s easy to set up practice tests. This is especially valuable for international students so they can understand what is expected.

8. If your assessment involves participation or work in a group, remember that different cultures have different conventions in group communication — for example about interrupting others or being seen to criticize another in public.

9. Feedback is really important in all assessment. Ensure that it is meaningful and includes any necessary context and doesn’t assume prior knowledge for people who have come from different backgrounds.

10. Above all, set tasks which give all students a chance to succeed.
