To test or not to test

Large technology-oriented organisations that make strategic use of IT systems operate in an environment where both the benefits and risks of IT are critical to the organisation’s overall performance. The sheer complexity of the IT environment creates a whole new class of risk not experienced by other organisations and the management of risk becomes a fundamental discipline. One of the most effective risk management tools, and one that is often undervalued, is software testing.

Exploring software systems
Software testing, if planned and implemented in a structured way, is a systematic exploration of a software system. The value of this testing in complex environments is obvious, and the risks of failing to test adequately should be equally clear. How else can we be sure that every requirement has been satisfied, that performance, security, usability and the other non-functional features are satisfactory, and that there are no nasty surprises lurking when the system is used for the first time in its real environment?
    
The pitfalls of inadequate testing are, like the software's behaviour itself, hard to predict, and herein lies the most significant risk. We may see any of a wide variety of symptoms:

  • Popular web sites that crash when the number of people accessing them gets too high
  • Interactive systems that run too slowly for their users
  • Systems that need to integrate with other systems but do not
  • Systems that fail occasionally but unpredictably
  • Systems that can do most of their designed functions but not all of them.

All these are examples of system failures that have been well documented in the press, and all could have been avoided, or at least mitigated, by more attention to systematic and structured testing before release. Our experience with software systems reminds us often that anything untried will not work, and in a complex environment there may well be more at stake than just the embarrassment of a minor failure. Government statistics on software failure in major systems make grim reading. A structured approach to testing is essential to avoiding these pitfalls.
    
If we sometimes implement systems that do not work, there must be some weakness in our software testing approach. This is not to condone weaknesses in other key areas, such as requirements management, but weaknesses in other areas of development can only affect users if the system gets past the testing we set up as a safeguard against failure.
    
Poor requirements will lead to poor systems no matter how much testing we do, but the testers are the last people to touch the system before it is released to users, so they tend to get the blame. Testers have the job of ensuring that systems are released in a state that users can work with. Making the risk of system release clear, visible and quantified is the testers’ key responsibility, and that is why structured and systematic testing is essential.

Budget for testing
Testing budgets may be too small in some cases, but before we increase them we need to know that we are getting real value for money from software testing. Real value for money in this case means reducing the risk of failure, with its potentially massive direct and indirect costs, without significantly increasing the cost of developing the system. In fact, the benefits can be achieved without increasing the system development costs at all if we can avoid the rework and delay that account for a large chunk of the costs of failure.
    
The potential benefits of good software testing are seldom achieved in practice, and we need to understand why. Here are a few examples of where theory and practice do not meet:

  • Not basing testing on the requirements. We can only be sure we have tested every requirement if testing is planned so that every requirement has an identified test or tests (the sketch after this list shows a simple coverage check of this kind). Using a tester’s or a user’s knowledge of the system is not an adequate substitute: no matter how much they know about what happens now, they cannot know what should happen in the future unless they test against the requirements.

What if there are no written requirements? Then the testers need to produce something as a basis for testing, using their analysis skills and involving the stakeholders. Yes, of course it would be much cheaper and easier for analysts to do this at the beginning of the project.

  • Starting testing too late in the development life cycle. The simple rule of thumb is that testing that starts after development will be based on the products actually built; testing that starts during development will be based on specifications and may influence the product that is built; testing that starts before development will be based on the requirements and will help to define the product that is built. Only the last of these can both influence quality and provide a systematic exploration of the product against its requirements.
  • Not relating testing to risk. Testing is hardly ever completed according to plan: there is never enough time or resource, especially if development delivers late, and testing gets truncated. Truncated testing is bound to have some impact, but if the tests were planned around risk and prioritised in risk order, we can at least be sure that the tests that do get done are the most important ones, and we can still offer an assessment of how much risk remains if we stop the testing at the planned release point (again, see the sketch after this list).
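
To make the first and last of these points concrete, here is a minimal sketch, in Python, of what a requirements coverage check and risk-ordered prioritisation might look like. It is purely illustrative: the requirement IDs, test names, scores and thresholds are invented, and a real project would draw these from its own requirements and risk analysis.

    # Illustrative only: invented requirement IDs, test names and risk scores.
    from dataclasses import dataclass

    @dataclass
    class TestCase:
        name: str
        requirement_id: str   # the requirement this test exercises
        likelihood: int       # how likely this area is to fail (1-5)
        impact: int           # how damaging a failure would be (1-5)

        @property
        def risk(self) -> int:
            # a simple risk score: likelihood multiplied by impact
            return self.likelihood * self.impact

    requirements = {"REQ-01", "REQ-02", "REQ-03", "REQ-04"}  # from the specification

    tests = [
        TestCase("login_under_load", "REQ-01", likelihood=4, impact=5),
        TestCase("monthly_report_totals", "REQ-02", likelihood=2, impact=4),
        TestCase("export_to_partner_system", "REQ-03", likelihood=3, impact=5),
    ]

    # 1. Coverage: every requirement should have at least one identified test.
    untested = requirements - {t.requirement_id for t in tests}
    if untested:
        print("Requirements with no identified test:", sorted(untested))

    # 2. Risk order: if testing is cut short, the highest-risk tests run first.
    for t in sorted(tests, key=lambda t: t.risk, reverse=True):
        print(f"{t.name:30} risk={t.risk}")

Even a simple scheme like this makes the conversation with the project sponsor factual: which requirements are untested, and which tests will be sacrificed if the schedule is cut.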

Avoiding problems
These problems are not actually hard to avoid. They occur because project sponsors are inclined to commit to the idea of method and structure until they see the price tag; the effort and cost estimates kill their enthusiasm before they get as far as examining the real impact on project costs, which is positive if the testing avoids the need to repeat major elements of development or to release software with only half the functionality in place.
    
There is an argument for recognising that you have got ‘cold feet’ and withdrawing support for the best practice methods; at least you would know what you were up against and you would have saved the cost of testing to offset the cost of dealing with the risks when they materialise. Alternatively, you could commit the budget for good testing and put in place what is needed to keep quality on track and reduce the risk to an acceptable level. The budget would need to cover the main drivers of good testing:

  • Setting up a test team to start early, during requirements analysis, and establishing a risk-based strategy while also reviewing the requirements and other specifications for quality and for testability
  • Identifying and implementing the most effective structured testing techniques to address the identified risks
  • Tracking the defects identified in reviews and later in testing, to understand the dynamics of quality as the project passes through development and to keep the project manager informed with measurements that can predict when the risk of release will have reached an acceptable level (a sketch of this kind of measurement follows this list).
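
As a rough illustration of the last of these drivers, the short sketch below (again with invented weekly numbers and thresholds) shows the kind of defect measurements that let a project manager see whether the risk of release is trending towards an acceptable level. A real project would take this data from its defect tracking tool and agree the release criteria with the stakeholders.

    # Illustrative only: invented weekly defect counts and release thresholds.
    weekly_defects = {
        # week: (newly raised, closed, open high-severity at end of week)
        1: (25, 5, 12),
        2: (18, 14, 10),
        3: (9, 16, 4),
        4: (3, 8, 1),
    }

    MAX_NEW_PER_WEEK = 5    # arrival rate low enough to suggest quality has stabilised
    MAX_OPEN_HIGH_SEV = 0   # no outstanding high-severity defects at release

    for week, (raised, closed, open_high) in weekly_defects.items():
        ready = raised <= MAX_NEW_PER_WEEK and open_high <= MAX_OPEN_HIGH_SEV
        print(f"week {week}: raised={raised} closed={closed} "
              f"open high-sev={open_high} release risk acceptable: {ready}")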

What makes no sense at all is to commit the budget for only part of this structured approach. That gives us no effective leverage on quality, yet still leaves a sizeable bill for testing and the inevitability of having to recover from the problems we did not find.

System testing training
The good news is that a reliable and effective approach to system testing is not only available, it is also well documented and there is accredited training available to anyone who wants to use it to make their testing more effective.
    
The Information Systems Examination Board (ISEB) of the British Computer Society (BCS) has a progressive certification programme for software testers, based on a very accessible Foundation level that provides a sound basis for setting up a systematic and structured testing regime. The Foundation certificate covers all the ground identified in this article and more. For those who want to understand the key ideas in more detail, or to prepare for the Foundation certificate examination, Software Testing: an ISEB Foundation (ed. Brian Hambling) provides everything needed, including practice examination questions for those intending to take the examination.
    
The Foundation level provides a very broad base on which to build software testing experience. The certificate offers unexpected advantages to large organisations, among them the enormous value of acquiring a standard technical vocabulary that can simplify communications between testers in different projects and between development and testing specialists. Equally valuable is the recognition of where testing fits most effectively into alternative development life cycles. The scheme’s success is evident from the more than 39,000 certificates already issued.
    
Beyond Foundation there is a Practitioner level for more experienced testers, and a professional-level Specialist qualification is under development.
    
Implementing large, high risk, highly integrated software systems demands the very best from everyone involved. To improve system failure statistics we must ensure that systems are tested effectively, and the right kind of software testing is really not that hard to achieve.

Brian Hambling is a Chartered Member of the BCS and Chairman of the ISEB Software Testing Examinations Panel.

For more information
Website: www.bcs.org
