Certification in the Cloud and the Move to Online Proctoring: An interview with SAP’s manager of global certification


Posted by John Kleeman

I recently interviewed Ralf Kirchgaessner, SAP’s manager of global certification, about how the cloud is changing SAP certification. This is a shortened version of my conversation with Ralf; to read the full, previously published post, check out this SAP blog.

John: What are the key reasons why SAP has a certification program?

Ralf: The overall mission of the program is that every SAP solution should be implemented and supported ideally by a certified SAP resource. This is to ensure that implementation projects go well for customers, and to increase customer productivity while reducing their operating costs. Customers value certification. In a survey of SAP User Group customers in Germany and the US, 80 percent responded that it was very important to have their employees certified and over 60 percent responded that certification was one of the criteria used to select external consultants for implementation projects.

John: What important trends do you see in high tech and IT certification?

Ralf: What comes first to mind is the move to the cloud. Throughout the technology industry, the cloud drives flexibility and makes everything available on demand. One aspect of this is that release cycles are getting quicker and quicker.

For certification, this means that consultants and others have to show that they are always up to date and are certified on the latest release. It’s not enough to become certified once in your lifetime: you have to continually learn and stay up to date. But of course, if you are taking certification exams more often, certification costs have to be much lower. In some regions, people have to travel large distances to get to a test centre, and with more frequent certification it’s not practical to make that trip every time you take an exam. So our aim is to allow certification anytime and anywhere using the cloud.

John: How does online proctoring work for the candidate?

Ralf: A remote proctor monitors the candidate via a webcam, and there are a lot of security checks done by the proctor and by the system. For example, a secure browser is used, the candidate has to do a 360-degree check of his or her room, and there are lots of specific controls. For instance, you aren’t allowed to mouth the questions as you read them, in case someone is watching or listening.

The great advantage to the candidate is flexibility. If someone says, “I’d like to do my exam in the middle of the night or on weekends because during the week I’m so busy with my project,” they can. They might say that they’d like to do their exam on Saturday afternoon: “After spending two hours playing with my kids, I’m relaxed to do my exam!” It’s such a flexible way to get certified and to quickly demonstrate that they have up-to-date knowledge and are allowed to provision customer systems.

John: Who benefits from certification in the cloud? Candidates, customers, partners or SAP?

Ralf: Of course, I think all benefit! Candidates have flexibility and lower cost. Customers can be sure that partner consultants who work for them are enabled and up to date. For partners, it’s a competitive advantage to show that their consultants are up to date, especially for new technologies like S/4HANA and Simple Finance. A partner is much more likely to be chosen to deploy new technologies if they can demonstrate that they have several consultants already certified in something that’s just been released. And for SAP, our goal is to have engaged consultants, happy partners and lower support costs. So everyone genuinely benefits.

John: What are some of the challenges?

Ralf: One example is that it’s important in cloud certification to get data protection right. SAP has very detailed requirements that we ensure vendors like Questionmark meet.

Security is also a challenge: you need to prevent cheating and the theft of questions. Interfaces and integration also need to be right. We have worked out how we get the data from our HR systems, how people book and subscribe to exams, and then how they can authenticate with single sign-on into the certification hub to take cloud exams.

The delta concept also presents challenges. You need very precise prerequisite management logic, where the certification software checks, for example, that you have already passed the core exam before you can take the delta exam. It can also be difficult to prepare a good delta exam, particularly if a new release has very specific or detailed features, including some that apply in only certain industries.
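
To make that prerequisite logic concrete, here is a minimal sketch in Python of the kind of check Ralf describes. The exam codes, record structure and function names are illustrative assumptions, not SAP’s actual certification hub.

```python
from dataclasses import dataclass, field

@dataclass
class CandidateRecord:
    """Hypothetical candidate record: which exams this person has passed."""
    candidate_id: str
    passed_exams: set = field(default_factory=set)

# Hypothetical mapping from each delta exam to the core exam it requires.
# The exam codes are made up for illustration, not real SAP exam IDs.
DELTA_PREREQUISITES = {
    "FIN_DELTA_2016": "FIN_CORE",
}

def can_take_delta(candidate: CandidateRecord, delta_exam: str) -> bool:
    """Allow registration only if the required core exam has been passed."""
    core_exam = DELTA_PREREQUISITES.get(delta_exam)
    if core_exam is None:
        raise ValueError(f"{delta_exam} is not a known delta exam")
    return core_exam in candidate.passed_exams

candidate = CandidateRecord("c-001", passed_exams={"FIN_CORE"})
print(can_take_delta(candidate, "FIN_DELTA_2016"))  # True: core already passed
```

In a real system a check like this would sit in front of the exam-booking workflow, so a delta registration is refused before any scheduling happens.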

Lastly, providing seamless support is a challenge when using multiple vendors. The candidate doesn’t care where a problem happened: he or she just wants it fixed.

John: Where do you see the long term future of high-tech certification? Will there still be test centres, or will all certification be done via the cloud?

Ralf: Test centres won’t disappear at once, but there is a trend of moving from classroom-based learning and testing to learning and certification in the cloud. The future will belong to anytime, anywhere testing. The trend is for test centre use to decline, but it won’t happen overnight!

John: If another organization is thinking of moving towards certification in the cloud, what advice would you give them?

Ralf: Ensure that you are aware of the challenges I mentioned and can deal with them. And do some pilots before you try to scale.

Interested in learning more about Online Proctoring? I will be presenting a session on ensuring exam integrity with online proctoring at Questionmark Conference 2016: Shaping the Future of Assessment in Miami, April 12-15. I look forward to seeing you there! Click here to register and learn more about this important learning event.

Cloud vs. On-premise: A customer’s viewpoint

Posted by Chloe Mendonca

The cloud or on-premise: which is best? Many organisations are asking this question, and migration to the cloud is undoubtedly one of the biggest trends in the technology industry right now.

Many organisations are finding vast benefits from moving to the cloud. A recent research study by the School of Computing, Creative Technologies & Engineering at Leeds Beckett University highlights how the cloud plays an instrumental role in reducing environmental and financial costs – often a 50% saving on costs related to the installation and maintenance of IT infrastructure in academic institutions. I was interested to learn more about the customer experience of moving from an on-premise system into the cloud.

My recent conversation with Paul Adriaans, IT Coordinator in Education at the Faculty of Law of the University of Maastricht in the Netherlands, gave me some insight into what prompted their move from Questionmark Perception, our on-premise assessment management system, to Questionmark OnDemand, our cloud-based solution. Paul explained how the transition worked for them.

What is your history with Questionmark?
We began using Questionmark Perception at the University in the Faculty of Law department in 2007 and in December 2013 we moved to OnDemand.

What made you decide to go OnDemand?
With the on-premise system we always needed to get someone onsite to do the upgrades, as it required us to install the latest version of the software and set up our servers. This took planning, time and resources. We deployed the on-demand system within a few hours, and the preparation was easy. Cost was another factor: the money spent internally to maintain our systems was higher than the cost of going OnDemand. Moving to OnDemand also gives us greater flexibility to expand and grow our usage in the future.

Did you have any concerns about moving to OnDemand?
Understandably, we were cautious about security. With our on-premise installation, we were accustomed to being fully in control of our own data security. Moving to the cloud meant entrusting Questionmark with data protection, but this new approach provides excellent security while still giving us complete access to our data. The university worked with SURF, the collaborative ICT organisation for Dutch higher education and research, and online learning services provider Up Learning, to test and approve Questionmark’s security. (The Questionmark OnDemand environment is located in an ISO-accredited EU data center with multiple layers of security.)

How did you find the transition?
The upgrade process didn’t take a lot of work from our end, as we started with a clean database. We chose not to transfer old exams, users and schedules, but only to keep questions and a selection of the most recent exams. So we added all of the questions and exams that we still needed by exporting and importing QPacks (encrypted zip files) ourselves. We had support from Up Learning as well as from Questionmark’s help desk and customer care teams. They were very supportive and provided useful emails to guide us throughout the process.

How has the switch to Questionmark OnDemand affected your work?
OnDemand gives us access to all of the latest features as soon as they’re available. Due to the internal IT resources required to carry out an upgrade for our on-premise system, we were often several versions behind the OnDemand system. Now we always have the latest version and don’t have to worry about upgrading. If and when we decide to grow our assessment programme across the university, we know that OnDemand is flexible enough to accommodate that.

Caveon Q&A: Enhanced security of high-stakes tests

Posted by Julie Delazyn

Questionmark and Caveon Test Security, an industry leader in protecting high-stakes test programs, have recently joined forces to provide clients of both organizations with additional resources for their test administration toolboxes.

Questionmark’s comprehensive platform offers many features that help ensure security and validity throughout the assessment process. This emphasis on security, along with Caveon’s services, which include analyzing data to identify validity risks as well as monitoring the internet for any leak that could affect intellectual property, adds a strong layer of protection for customers using Questionmark for high-stakes assessment management and delivery.

I sat down with Steve Addicott, Vice President of Caveon, to ask him a few questions about the new partnership, what Caveon does and what security means to him. Here is an excerpt from our conversation:

Who is Caveon? Tell me about your company.

At Caveon Test Security, we fundamentally believe in quality testing and trustworthy test results. That’s why Caveon offers test security and test item development services dedicated to helping prevent test fraud and better protect our clients’ items, tests, and reputations.

What does security mean to you, and why is it important?

High-stakes test programs make important education and career decisions about test takers based on test results. These programs also spend a tremendous amount of time creating, administering, scoring, and reporting results. With increased security pressure from pirates and cheats, we are here to make sure that those results are trustworthy, reflecting the true knowledge and skills of test takers.

Why a partnership with Questionmark and why now?

With a growing number of Questionmark clients engaging in high-stakes testing, Caveon’s experience in protecting the validity of test results is a natural extension of Questionmark’s security features. For Caveon, we welcome the chance to engage with a vendor like Questionmark to help protect exam results.

And how does this synergy help Questionmark customers who deliver high-stakes tests and exams?

As the stakes in testing continue to rise, so do the challenges involved in protecting your program. Both organizations are dedicated to providing clients with the most secure methods for protecting exam administrations, test development investments, exam result validity and, ultimately, their programs’ reputations.

For more information on Questionmark’s dedication to security, check out this video and download the white paper: Delivering Assessments Safely and Securely.

Q&A: High-stakes online tests for nurses

Posted by Julie Delazyn

I spoke recently with Leanne Furby, Director of Testing Services at the National League for Nursing (NLN), about her case study presentation at the Questionmark 2015 Users Conference in Napa Valley March 10-13.

Leanne’s presentation, Transitioning 70 Years of High-Stakes Testing to Questionmark, explains NLN’s switch from a proprietary computer- and paper-based test delivery engine to Questionmark OnDemand for securely delivering standardized exams worldwide. I’m happy to share a snippet of our conversation:

Tell me about the NLN

The NLN is a national organization for faculty nurses and leaders in nurse education. We offer faculty development, networking opportunities, testing services, nursing research grants and public policy initiatives to more than 26,000 members.

Why did you switch to Questionmark?

Our main concern was delivering our tests and exams to a variety of different devices. We wanted our students to be able to take a test on a tablet or take a quiz on their own mobile devices, and this wasn’t something we could do with our proprietary test delivery engine.

Our second major reason to go with Questionmark was the Customized Assessment Reports and the analytics tools. Before making the switch, we had to create reports and analyze results manually, which took time and resources. Now this is all integrated in Questionmark.

How do you use Questionmark assessments?

We have 90 different exam lines and deliver approximately 75,000 to 100,000 secure exams a year, both nationally and internationally, in multiple languages. The NLN partnered with Questionmark in 2014 to transition the delivery of these exams through a custom-built portal. Questionmark is now NLN’s turnkey solution—from item banking and test development with SMEs all over the world to inventory control, test delivery and analytics.

This transition has had positive outcomes for both our organization and our customers. We have developed a new project management policy, procedures for system transition and documentation for training at all levels. This has transformed the way we develop, deliver and analyze exams and the way we collect data for business and education purposes.

What are you looking forward to at the conference?

I am most looking forward to the opportunity to speak to other users and product developers to learn tips, tricks and little secrets surrounding the product. It’s so important to speak to people who have experience and can share ways of utilizing the software in ways you hadn’t thought of.

Thank you Leanne for taking time out of your busy schedule to discuss your session with us!

***

You have the opportunity to save $100 on your own conference registration: Just sign up by January 29 to receive this special early-bird discount.

An easier approach to job task analysis: Q&A

Posted by Julie Delazyn

Part of the assessment development process is understanding what needs to be tested. When you are testing what someone needs to know in order for them to do their job well, subject matter experts can help you harvest evidence for your test items by observing people at work. That traditionally manual process can take a lot of time and money.

Questionmark’s new job task analysis (JTA) capabilities enable SMEs to harvest information straight from the person doing the job. These tools also offer an easier way to see the frequency, importance, difficulty and applicability of a task in order to know if it’s something that needs to be included in an assessment.

Now that JTA question authoring, assessment creation and reporting are available to users of Questionmark OnDemand and Questionmark Perception 5.7, I wanted to understand what makes this special and important. Questionmark Product Manager Jim Farrell, who has been working on the JTA question since its conception, was kind enough to speak to me about its value, why it was created, and how it can now benefit our customers.

Here is a snippet of our conversation:

So … first things first … what exactly IS job task analysis and how would our customers benefit from using it?

Job task analysis, or JTA, is a survey that you send out containing a list of tasks, which are broken down into dimensions. Those dimensions are typically difficulty, importance, frequency, and applicability. You want to find out things like this from someone who fills out the survey: Do they find the task difficult? Do they deem it important? How frequently do they do it? When you correlate all this data, you’ll quickly see the tasks that are most important to test on and collect information about.

We have a JTA question type in Questionmark Live where you can either build your task list and your dimensions or import your tasks through a simple import process—so if you have a spreadsheet with all of your tasks, you can easily import it. You would then add those to a survey and send it out to collect information. We also have two JTA reports that allow you to break down results by a single dimension—for example, just the difficulty of all the tasks—or to look at a summary view of all of your tasks and all the dimensions at one time, as a snapshot.
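
To make the correlation step concrete, here is a minimal sketch of turning JTA ratings into a per-task priority score. The 1–5 scales, the weights and the task names are illustrative assumptions, not Questionmark’s actual JTA reporting logic.

```python
from statistics import mean

# Each task maps respondents' 1-5 ratings on three of the JTA dimensions.
# Task names, scales and weights are all illustrative assumptions.
responses = {
    "Configure user roles": {"difficulty": [4, 5, 4], "importance": [5, 5, 4], "frequency": [2, 3, 2]},
    "Reset passwords":      {"difficulty": [1, 2, 1], "importance": [3, 2, 3], "frequency": [5, 5, 4]},
}

# Hypothetical weights: importance counts most when deciding what to test.
WEIGHTS = {"difficulty": 0.3, "importance": 0.5, "frequency": 0.2}

def priority(ratings):
    """Weighted average of the mean rating per dimension; higher = test it."""
    return sum(WEIGHTS[dim] * mean(scores) for dim, scores in ratings.items())

for task in sorted(responses, key=lambda t: priority(responses[t]), reverse=True):
    print(f"{task}: {priority(responses[task]):.2f}")
```

Sorting tasks by a score like this is one simple way to decide which of them deserve test items.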

That sounds very interesting and easy to use! I’m interested in how this question type actually came to be.

We initially developed the job task analysis survey for the US Navy. Prior to this, trainers would have to travel with paper and clipboards to submarines, battleships and aircraft carriers and watch sailors and others in the Navy do their jobs. We developed the JTA survey to help them be more efficient, collecting this data more easily and a lot more quickly than they did before.

What do you think is most valuable and exciting about JTA?

To me, the value comes in the ease of creating the questions and sending them out. And I am probably most excited for our customers. Most customers probably harvest information by walking around with paper and a clipboard and watching people do their jobs. That’s a very expensive and time-consuming task, so by sending this survey directly to subject matter experts you’re getting more authentic data, because you are getting it right from the SMEs rather than from someone observing the behavior.

 

It was fascinating for me to understand how JTA was created and how it works … Do you find this kind of question type interesting? How do you see yourself using it? Please share your thoughts below!

How can a randomized test be fair to all?

Posted by Joan Phaup

James Parry, who is test development manager at the U.S. Coast Guard Training Center in Yorktown, Virginia, will answer this question during a case study presentation at the Questionmark Users Conference in San Antonio March 4 – 7. He’ll be co-presenting with LT Carlos Schwarzbauer, IT Lead at the USCG Force Readiness Command’s Advanced Distributed Learning Branch.

James and I spoke the other day about why tests created from randomly drawn items can be useful in some cases—but also about their potential pitfalls and some techniques for avoiding them.

When are randomly designed tests an appropriate choice?


There are several reasons to use randomized tests. Randomization is appropriate when you think there’s a possibility of participants sharing the contents of their test with others who have not taken it. Another reason would be a computer-lab-style testing environment where you are testing many people on the same subject at the same time with no blinders between the computers. Even if participants look at the screens next to them, chances are they won’t see the same items.

How are you using randomly designed tests?

We use randomly generated tests at all three levels of testing: low-, medium- and high-stakes. The low- and medium-stakes tests are used primarily at the schoolhouse level for knowledge- and performance-based quizzes and tests. We are also generating randomized tests for on-site testing using tablet computers or locally installed workstations.

Our most critical use is for our high-stakes enlisted advancement tests, which are administered both on paper and by computer. Participants are permitted to retake this test every 21 days if they do not achieve a passing score. Before we were able to randomize the test, there were only three parallel paper versions. Candidates knew this, so some would “test sample” without studying to get an idea of every possible question. They would retake the first version, then the second, and so forth until they passed. With randomization, the word has gotten out that this is no longer possible.

What are the pitfalls of drawing items randomly from an item bank?

The biggest pitfall is the potential for producing tests that have different levels of difficulty or that don’t present a balance of questions across all the subjects you want to cover. A completely random test can be unfair. Suppose you produce a 50-item randomized test from an entire test item bank of 500 items. Participant “A” might get an easy test, “B” might get a difficult test, and “C” might get a test with 40 items on one topic and 10 on the rest, and so on.
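
That 50-from-500 draw is easy to simulate. The sketch below, with made-up item difficulties, shows how widely the average difficulty of a purely random form can swing from one candidate to the next.

```python
import random

random.seed(1)

# Hypothetical 500-item bank: each item's difficulty is its chance of being
# answered incorrectly, spread widely to mimic a mixed bank.
bank = [random.uniform(0.1, 0.9) for _ in range(500)]

# Draw 1,000 candidates' 50-item forms completely at random.
forms = [random.sample(bank, 50) for _ in range(1000)]
mean_difficulties = sorted(sum(form) / len(form) for form in forms)

print(f"easiest form: {mean_difficulties[0]:.3f}")
print(f"hardest form: {mean_difficulties[-1]:.3f}")
```

The gap between the easiest and hardest forms is exactly the unfairness James describes: two candidates sitting “the same test” face measurably different difficulty.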

How do you equalize the difficulty levels of your questions?

This is a multi-step process. The item author has to make sure they develop enough items in each topic to provide at least three to five items for each enabling objective. They have to think outside the box to produce items at several cognitive levels, to ensure there will be a variety of possible levels of difficulty. This is the hardest part for them, because most are not trained test writers.

Once the items are developed, edited, and approved in workflow, we set up an Angoff rating session to assign a cut score for the entire bank of test items. Based upon the Angoff score, each item is assigned a difficulty level of easy, moderate or hard, and tagged with a matching metatag within Questionmark. We use a spreadsheet to calculate the number and percentage of available items at each level of difficulty in each topic. Based upon the results, the spreadsheet tells us how many items to select from the database at each difficulty level and from each topic. The test is then designed to match these numbers, so that each time it is administered it is parallel, with the same level of difficulty and the same cut score.
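
The spreadsheet logic James describes amounts to a stratified draw: fix how many items each form takes from each topic-by-difficulty cell, then sample randomly within each cell. Here is a minimal sketch of that idea; the topics, difficulty labels, item IDs and counts are invented for illustration and are not the Coast Guard’s actual blueprint.

```python
import random
from collections import defaultdict

# Hypothetical item bank: every item is tagged with a topic and an
# Angoff-derived difficulty label, as in the metatagging described above.
item_bank = (
    [{"id": f"NAV-E{i}", "topic": "navigation", "difficulty": "easy"} for i in range(20)]
    + [{"id": f"NAV-M{i}", "topic": "navigation", "difficulty": "moderate"} for i in range(20)]
    + [{"id": f"ENG-H{i}", "topic": "engineering", "difficulty": "hard"} for i in range(20)]
)

# The blueprint plays the role of the spreadsheet's output: how many items
# every form must draw from each (topic, difficulty) cell.
BLUEPRINT = {
    ("navigation", "easy"): 3,
    ("navigation", "moderate"): 4,
    ("engineering", "hard"): 3,
}

def build_form(bank, blueprint):
    """Draw a random form that always matches the blueprint, so every
    form is parallel in topic coverage and difficulty mix."""
    cells = defaultdict(list)
    for item in bank:
        cells[(item["topic"], item["difficulty"])].append(item)
    form = []
    for cell, count in blueprint.items():
        if len(cells[cell]) < count:
            raise ValueError(f"not enough items in cell {cell}")
        form.extend(random.sample(cells[cell], count))
    return form

form = build_form(item_bank, BLUEPRINT)
print(len(form), "items drawn to the same blueprint every time")
```

Because every form draws the same counts from the same cells, candidates still see different items, but each administration is parallel in topic balance and difficulty.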

Is there anything audience members should do to prepare for this session?

Come with an open mind and a willingness to think outside of the box.

How will your session help audience members ensure their randomized tests are fair?

I will give them the tools to use, starting with a quick review of using the Angoff method to set a cut score, and then discuss the inner workings of the spreadsheet that I developed to ensure each test is fair and equal.

***

See more details about the conference program here and register soon.
