New Questionmark App for Android™ phones available from Android Market


Posted by Jim Farrell

As Android phones and other mobile devices become the constant companions of learners the world over, mobile delivery offers an affordable way to administer quick knowledge checks, surveys and other assessments to people on the move or at work.

So we are pleased to announce that a new Questionmark App that enables streamlined assessment delivery to Android phones is now available for free download from the Android Market.

The app is completely configurable, so your participants can access assessments scheduled for them in your Questionmark Perception v5 repository. The app — currently available in English, Spanish, French, German, Portuguese, Dutch, Swedish, Russian, Chinese, Japanese and Korean — is designed to detect the language setting of each learner’s device.

If you don’t have Questionmark Perception v5 yet, you can still try this new app: Simply install it and choose the “demo” option, and the app will let you try out several assessments available from Questionmark’s demonstration site.

Click here for more info about Questionmark apps for Android and other mobile devices.

Learning Environments – The Learning Registry

Posted by Julie Delazyn

A fascinating video of Steve Midgley from the US Department of Education talking about “The Learning Registry” recently appeared in Questionmark CEO Eric Shepherd’s latest blog post about Learning Environments.

Eric regards the Learning Registry – an informal collaboration among several federal agencies designed to make federal learning resources and primary source materials easier to find, access and integrate into educational environments – as an important building block for presenting learning content at the right time and in the right context for individual learners.

Click here for more details about the Learning Registry. If this and other topics about assessment and learning interest you, check out Eric’s blog.

Embedding Questionmark Assessments in Socialtext

Embed a Questionmark Perception survey or quiz inside Socialtext

  • To see how this would look, see a snapshot of an assessment embedded within a Socialtext wiki page.
  • Check out this how-to on our developer Web site.
  • Socialtext is a wiki-centric software platform designed to enable people within a company to collaborate and share information with one another, thereby increasing the company’s productivity. Using social networking tools such as microblogging, blogs and dashboards, Socialtext allows people to circulate information, ideas and updates quickly. To embed your Perception assessment within your Socialtext wiki page, you will need to use an IFrame.
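As a rough sketch of what that IFrame embed might look like, the snippet below generates the markup for a wiki page. The URL pattern (`perception.php`) and the `session` parameter are illustrative assumptions, not Questionmark's documented launch scheme; consult the how-to on the developer site for the actual URL format for your installation.

```python
# Sketch: generate an IFrame tag for embedding a Perception assessment in a
# wiki page. The host, path and "session" query parameter below are
# hypothetical placeholders -- check Questionmark's developer documentation
# for the real launch URL for your repository.
from urllib.parse import urlencode

def iframe_snippet(server, session_id, width=600, height=400):
    """Return an HTML IFrame tag pointing at a hypothetical assessment URL."""
    query = urlencode({"session": session_id})
    url = f"https://{server}/perception.php?{query}"
    return (f'<iframe src="{url}" width="{width}" height="{height}" '
            f'frameborder="0"></iframe>')

print(iframe_snippet("assessments.example.com", "12345"))
```

You would paste the resulting tag into the wiki page's HTML source; the width and height can be adjusted to suit the page layout.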

Happenings at the Questionmark Boston Briefing and User Group Meeting

Jeff Place demonstrates the Apple iPad app

Posted by Joan Phaup

I enjoyed attending yesterday’s Breakfast Briefing and Questionmark User Group meeting in Boston.

The day included some lively questions and answers, with participants showing particular interest in how to use various Questionmark Perception reports. Demonstrations of browser-based authoring in Questionmark Live and of mobile assessment delivery were also big hits. In the afternoon, Perception users had the opportunity to drill down into more detail about coming features, ask questions and tap into each other’s experience as well as that of the Questionmark team.

Briefing Session

Mobile Delivery Options

Boston was the first of seven U.S. cities that Questionmark staff members are visiting. The others are New York (tomorrow, September 23rd), Chicago, Dallas, Washington, D.C. (Bethesda), Ft. Lauderdale and Los Angeles (Agoura Hills).

Breakfast Briefings in the morning are followed by User Group lunches and in-depth discussions in the afternoons. I’m glad I went, and I’d encourage you to do so, too, if you have the opportunity!

How should we measure an organization’s level of psychometric expertise?


Posted by Greg Pope

A colleague recently asked for my opinion on an organization’s level of knowledge, experience and sophistication in applying psychometrics to its assessment program. I found it difficult to summarize in words, which got me thinking about why. I concluded that there is currently no common language for describing how advanced an organization is in terms of the psychometric expertise it has and the rigour it applies to its assessment program. With such a common vocabulary, conversations like the one I had would be a whole lot easier.

I thought it might be fun (and perhaps helpful) to come up with a proposed first cut of a shared vocabulary around levels of psychometric expertise. I wanted to keep it simple, yet effective in allowing people to quickly and easily communicate where an organization falls in terms of psychometric sophistication. I thought it might make sense to break it out by areas (I thought of seven) and assign points according to the expertise and rigour an organization possesses and applies. Not all areas are always led by psychometricians directly, but usually psychometricians play a role.

1.    Item and test level psychometric analysis

  • Classical Test Theory (CTT) and/or Item Response Theory (IRT)
  • Pre hoc analysis (beta testing analysis)
  • Ad hoc analysis (actual assessment)
  • Post hoc analysis (regular reviews over time)

2.    Psychometric analysis of bias and dimensionality

  • Factor analysis or principal component analysis to evaluate dimensionality
  • Differential Item Functioning (DIF) analysis to ensure that items are performing similarly across groups (e.g., gender, race, age, etc.)

3.    Form assembly processes

  • Blueprinting
  • Expert review of forms or item banks
  • Fixed forms, computerized adaptive testing (CAT), automated test assembly

4.    Equivalence of scores and performance standards

  • Standard setting
  • Test equating
  • Scaling scores

5.    Test security

  • Test security plan in place
  • Regular security audits are conducted
  • Statistical analyses are conducted regularly (e.g., collusion and plagiarism detection analysis)

6.    Validity studies

  • Validity studies conducted on new assessment programs and ongoing programs
  • Industry experts review and provide input on study design and findings
  • Improvements are made to the program if required as a result of studies

7.    Reporting

  • Provide information clearly and meaningfully to all stakeholders (e.g., students, parents, instructors, etc.)
  • High quality supporting documentation designed for non-experts (interpretation guides)
  • Frequently reviewed by assessment industry experts and improved as required

Expertise/rigour points
0.    None: Not rigorous, no expertise whatsoever within the organization
1.    Some: Some rigour, marginal expertise within the organization
2.    Full: Highly rigorous, organization has a large amount of experience

So an organization that has decades of expertise in each area would be at the top level of 14 (7 areas x 2 for expertise/rigour in each area = 14). An elementary school doing simple formative assessment would probably be at the lowest level (7 areas x 0 expertise/rigour = 0). I have provided some examples of how organizations might fall into various ranges in the illustration below.
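The proposed scoring is simple enough to sketch in a few lines of code. The area names below are paraphrased from the seven areas above, and the two example organizations are the ones mentioned in the text; this is just an illustration of the arithmetic, not a tool the post proposes.

```python
# Sketch of the proposed scoring: seven areas, each rated 0 (none),
# 1 (some) or 2 (full) for expertise/rigour, so totals range from 0 to 14.
AREAS = [
    "Item and test level psychometric analysis",
    "Psychometric analysis of bias and dimensionality",
    "Form assembly processes",
    "Equivalence of scores and performance standards",
    "Test security",
    "Validity studies",
    "Reporting",
]

def expertise_score(ratings):
    """Sum the per-area ratings after checking each is 0, 1 or 2."""
    assert len(ratings) == len(AREAS)
    assert all(r in (0, 1, 2) for r in ratings)
    return sum(ratings)

# A high-stakes testing organization with full expertise in every area:
print(expertise_score([2] * 7))   # 14
# An elementary school doing simple formative assessment:
print(expertise_score([0] * 7))   # 0
```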

There are obviously lots of caveats and considerations here. One thing to keep in mind is that not all organizations need to have full expertise in all areas. For example, an elementary school that administers formative tests to facilitate learning doesn’t need to have 20 psychometricians working for them doing DIF analysis and equipercentile test equating. Their organization being low on the scale is expected. Another consideration is expense: To achieve the highest level requires a major investment (and maintaining an army of psychometricians isn’t cheap!). Therefore, one would expect an organization that is conducting high stakes testing where people’s lives or futures are at stake based on assessment scores to be at the highest level. It’s also important to remember that some areas are more basic than others and are a starting place. For example, it would be pretty rare for an organization to have a great deal of expertise in the psychometric analysis of bias and dimensionality but no expertise in item and test analysis.

I would love to get feedback on this idea and start a dialog. Does this seem roughly on target? Would it be useful? Is something similar out there that is better that I don’t know about? Or am I just plain out to lunch? Please feel free to comment to me directly or on this blog.

On a related note, Questionmark CEO Eric Shepherd has given considerable thought to the concept of an “Assessment Maturity Model,” which focuses on a broader assessment context; interested readers should check out his blog.

What’s new in online & mobile assessment? Find out at a Breakfast Briefing

Posted by Joan Phaup

Our first U.S. Questionmark Breakfast Briefings for 2010 are set for Boston and New York next week, so it seems like a good time to remind you about our annual visits to various cities. Questionmark staff look forward to greeting you at a briefing and sharing the latest news about online and mobile assessment.

Each briefing will include a complimentary breakfast at 8 a.m., followed by presentations and discussions from 8:30 to about 11:30 a.m. There will be an overview of Questionmark Perception and a demonstration of Questionmark Live browser-based authoring, which makes it easy for subject matter experts to write everything from individual questions to entire assessments, including course evaluations.

We’ll cover lots of other topics, too, including:

* Delivering a single assessment to multiple devices, with centralized processing of results
* Using mobile devices such as Android phones and the Apple iPhone and iPad for quick course evaluations, observational assessments and more
* Assessments embedded within learning mashups on wikis, blogs and portals
* Integrations of Questionmark with SAP, SharePoint and other systems
* New tools for analyzing the results of surveys, quizzes and tests
* More meaningful reporting on the results of course evaluations

There will be a Questionmark User Group lunch and meeting after each of these briefings for people who are already using Questionmark Perception. User Groups offer an excellent opportunity to talk in depth about new technologies, to exchange ideas and to ask questions and get answers.

Here’s the schedule:

* Boston, MA (Woburn) – Tuesday, September 21
* New York, NY – Thursday, September 23
* Chicago, IL – Monday, September 27
* Dallas, TX – Tuesday, October 19
* Washington, DC (Bethesda) – Thursday, October 21
* Ft. Lauderdale, FL – Monday, October 26
* Los Angeles, CA (Agoura Hills) – Tuesday, November 9

You can find complete details and sign up for a Breakfast Briefing and/or User Group meeting by visiting