Information Security: from Human Factors to FIPS

Posted by Steve Lay

Last year I wrote about Keeping up with Software Security and touched on the issue of cryptography and the importance of keeping up with modern standards to ensure data is stored and communicated securely. In this post, I return to this subject and dive a little deeper into maintaining information security and the role that software plays in it.

The first thing to say is that software can only ever be part of an information security strategy. It doesn’t matter how complex your password policy or how strong your data encryption standards are if you allow authorised individuals to share information inappropriately. Amongst all the outcry following the information disclosed by Edward Snowden, it is surprising to me that more of the focus hasn’t been on why an organization that plays such a crucial role in security (in its widest sense) was so easily undone by a third-party contractor. According to this article from the Reuters news agency, one of the techniques Snowden used to gain access to information was simply asking co-workers for their passwords.

The computing press will sometimes try and persuade you that there is a purely technical solution to this type of problem. It reminds me of the time my son’s school introduced fingerprint recognition for canteen payments, in part, to allow parents to check up on what their children had been spending — and eating!  It wasn’t long before the phrase, “Can I borrow your finger?” was commonplace amongst the pupils.

I hope my preamble has brought home to you the basic idea that these ‘human factors’, as the engineers like to call them, are just as important as getting the technology right. Earlier this week I had to go through my own routine security testing as part of our security process here at Questionmark, so this stuff is fresh in my mind!

Security Standards

Information security, in this wider context, is covered by a whole series of international standards commonly known as the ISO 27000 series. This series of standards and codes of practice covers a wide range of security processes and computer systems. To an engineer looking for a simple answer it can be frustrating, though. ISO 27002 contains advice on the use of cryptography, but it reads more like a policy checklist: it won’t tell you which algorithms are safe to use. In part, this is a recognition of how dynamic the field is. The recommendations might change too fast for something like an ISO standard, which takes a long time to develop and is designed to have a fairly long shelf-life.

For more practical help, engineers developing software can turn to the Federal Information Processing Standards (known as FIPS). FIPS was developed by the U.S. government to fill the gaps where externally defined standards weren’t available. It ranges across many areas of information processing, not just security, but one of the gaps FIPS fills is specifying which cryptographic algorithms are fit for modern software and, by implication, which ones need to be retired (FIPS 140). This standard has become so important that the word FIPS is often used to refer only to FIPS 140! It isn’t restricted to the U.S. either; it is being freely adopted by other governments, including my own here in the UK.

FIPS 140 also has a certification programme. The purpose of the programme is to certify the implementation of cryptographic code to check that it does indeed correctly implement the security standard. Microsoft have a technical article to explain which parts of their platform and which versions have been certified. There is even a “FIPS mode” in which the system can be instructed to use only cryptographic algorithms that have been certified.

Concentrating the cryptographic features of an application into a small number of modules distributed with the underlying operating system, rather than having each application developer implement or incorporate the code individually, will over time make it easier to use appropriate cryptography and to implement policies such as those described by ISO 27002.
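To make this pattern concrete, here is a minimal Python sketch (Python is my choice of illustration here, not a platform the post describes). The application asks the platform’s crypto module — `hashlib`, which delegates to the system’s OpenSSL — for a FIPS-approved algorithm such as SHA-256, rather than shipping its own implementation. On a FIPS-enforcing build of the underlying library, a request for a disallowed algorithm such as MD5 can simply be refused by the platform.

```python
import hashlib

def secure_digest(data: bytes) -> str:
    """Hash data using a FIPS-approved algorithm from the platform's crypto module."""
    # SHA-256 is on the FIPS 140 approved list; MD5 is not.
    # Because hashlib delegates to the OS crypto library (OpenSSL),
    # a FIPS-enforcing system can reject non-approved algorithms for us --
    # there, hashlib.md5() raises ValueError, so we never reach for it here.
    return hashlib.sha256(data).hexdigest()

print(secure_digest(b"exam results"))
```

The point is the division of labour: the application states its intent, and the certified platform module decides which primitives are acceptable under the current policy.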

Keeping up with Software Security

Posted by Steve Lay

Readers of this blog will be familiar with many of the techniques that can be used to improve the security of your testing programmes. Part of that picture involves securing the software you use. With cyber-warfare very much in the news, I thought I’d take this opportunity to tell you a little about some of the challenges involved in keeping up with the bad guys.

It seems obvious that we all want the software we use to be secure, but security isn’t just a feature that can be implemented once like a new menu option in an application. The security landscape is constantly changing and software needs to change to remain secure.

Anyone who uses a PC or a web browser will know that software updates are pushed out routinely. Even my mobile phone receives updates on a regular basis. It isn’t always easy to tell why software is being updated, but in many cases the updates do contain important security fixes that address newly discovered vulnerabilities.

To give just one example of why this environment is so challenging, in late December 2011, the US Computer Emergency Readiness Team (US-CERT) issued a bulletin concerning a very basic weakness in many web applications (see http://www.kb.cert.org/vuls/id/903934). The weakness involved the parameters passed to web applications by your browser when you fill in an online form. Most web frameworks store these parameters in a hash table, and by sending carefully selected parameter names that all collide, a malicious user could force the server to do far more work than normal and slow it down. There was no threat to the confidentiality of the information on the server, but there was a threat to service availability. Suddenly it was possible for a suitably motivated individual to take almost any website offline. This type of attack is known as a “Denial of Service” or DoS attack.

Fortunately, software developers quickly addressed the problem and patches were distributed for affected systems including the frameworks we use. The team that runs Questionmark OnDemand was quick to act and the patches were tested and deployed very rapidly.

Although the breadth of systems affected was unusual in this case, in many respects this was a typical incident. Nobody sets out to make software that is insecure, but as we learn more about computing we discover new ways that systems can be misused. Keeping software up to date is critical to maintaining security. Users of Questionmark OnDemand can be confident that our team of system administrators puts a very high priority on keeping the service current. You can read more about the trustworthy platform we use for Questionmark OnDemand in our white paper, Security of Questionmark’s OnDemand Service.

I’ve chosen to highlight a fairly dramatic case in this blog post, but sometimes we do get more warning. Cryptography is an important tool in the security arsenal. Computer scientists and mathematicians develop the algorithms on which cryptography depends. Cryptography doesn’t make it impossible to read confidential information; it is designed to make it impractical with the technology available today. As computers get faster, the codes become more vulnerable until eventually they have to be retired and replaced with new, stronger codes. You can actually see this effect in the results of the RSA Challenges, in which RSA Laboratories offered prizes to people who could crack codes of increasing strength. Just looking at the years in which each prize was awarded gives a feeling for the dynamic nature of this field: 1991, 1992, 1993, 1994, 1996, 1999, 2003 and the last prize was awarded in 2005 (US$20,000).
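To see why growing computing power forces codes into retirement, consider a toy version of the problem behind RSA: recovering two secret primes from their product. The modulus below, n = 3233, is a textbook example (not from any real challenge) and falls to naive trial division instantly; the challenge numbers were hundreds of digits long, and each prize marked the point where hardware and algorithms had caught up with another size.

```python
def trial_factor(n: int):
    """Factor n by trial division -- feasible only for tiny moduli."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f  # found the smaller prime factor
        f += 1
    return n, 1  # n itself is prime

p, q = trial_factor(3233)  # toy 12-bit RSA modulus
print(p, q)
```

Real challenge numbers resisted far better algorithms than trial division, of course; the point is only that any fixed key size loses ground as attackers’ resources improve, which is why recommendations must keep changing.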

This may all sound esoteric, but behind the scenes our developers are working on issues like these to ensure that Questionmark technologies continue to meet the strictest requirements, and that we stay ahead of the code-breakers.