Testing is a crucial element of software development. It can also be a complex activity to structure correctly, and in a way that supports maximum efficiency. Because of this complexity, it is always helpful to review processes and guidelines to ensure you are following best practice, and a great place to start is with the ISTQB (International Software Testing Qualifications Board), which lists seven fundamental principles of testing.
These are the principles that the ISTQB has collated and established as testing and software development have evolved over the years, and they are recognised as the absolute core of testing.
This is part of the reason why we are proud to say all our testers at Box UK are ISTQB qualified!
If you’re involved with any aspect of software testing, it’s worth fully reviewing and understanding these principles, and checking that you are following them within your organisation and teams. They will help you achieve high standards of quality and give your clients confidence that their software is production-ready.
We test software to discover issues, so that they can be fixed before the software is deployed to live environments – this gives us confidence that our systems are working. However, this testing process does not confirm that any software is completely correct and entirely free of issues. Testing greatly reduces the number of undiscovered defects hiding in software, but finding and resolving these issues is not itself proof that the software or system is 100% issue-free. Teams should always accept this concept, and effort should be made to manage client expectations accordingly.
It is important to remember, however, that while testing shows the presence of bugs and not their absence, thorough testing will give everyone confidence that the software is unlikely to fail. Having a comprehensive test strategy that includes thorough test plans, reports and statistics, along with testing release plans, can all help with this; reassuring clients as to testing progress and providing confidence that the right areas are being tested.
Additionally, ongoing monitoring and testing after systems have gone into production is vital. Thinking forward to potential issues that could arise is another good way to help mitigate future problems – for example, considering load testing if a site is launching a new marketing campaign, so you can be confident the software will withstand any anticipated larger volumes of traffic.
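As a rough illustration of the load-testing idea, the sketch below fires a batch of concurrent calls at a stand-in request handler and checks that they all succeed. It is a minimal sketch only – `handle_request` is a hypothetical placeholder, and a real load test would use a dedicated tool and exercise the actual system:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Hypothetical stand-in for a call to the real endpoint."""
    time.sleep(0.01)  # simulate network/processing latency
    return 200        # simulate an HTTP 200 response

# Fire 200 concurrent "requests" and confirm they all succeed.
start = time.monotonic()
with ThreadPoolExecutor(max_workers=50) as pool:
    statuses = list(pool.map(lambda _: handle_request(), range(200)))
elapsed = time.monotonic() - start

assert all(status == 200 for status in statuses)
print(f"200 requests completed in {elapsed:.2f}s")
```

The pass/fail criteria for a real test would come from the anticipated traffic volumes – peak concurrent users, acceptable response times – rather than the arbitrary numbers used here.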
As much as we would like to believe or wish it true(!), it is absolutely impossible to test EVERYTHING – all combinations of inputs and preconditions – and you could also argue that attempting to do so is not an efficient use of time and budget. However, one of the skills of testing is assessing risks and planning your tests around these – you can then cover vast areas, while making sure you are testing the most important functions. With careful planning and assessment, your test coverage can remain excellent and enable that necessary confidence in your software, without requiring that you test every single line of code.
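To make the scale of “everything” concrete, here is a small sketch (with made-up field counts for a hypothetical checkout form) totting up the input combinations for just four independent fields:

```python
# Hypothetical example: a checkout form with four independent inputs.
# Even modest per-field ranges explode combinatorially.
field_values = {
    "country": 50,          # 50 supported countries
    "currency": 20,         # 20 currencies
    "payment_method": 6,    # card, PayPal, bank transfer, etc.
    "quantity": 100,        # order quantities 1-100
}

total_combinations = 1
for count in field_values.values():
    total_combinations *= count

print(total_combinations)  # 600,000 cases for four fields alone
```

Even this toy form yields 600,000 combinations before preconditions are considered; risk-based selection – boundary values, equivalence partitions, the highest-value user journeys – is how coverage stays meaningful without attempting them all.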
Testing early is fundamentally important in the software lifecycle. This could even mean testing requirements before coding has started, for example – amending issues at this stage is a lot easier and cheaper than doing so right at the end of the product’s lifecycle, by which time whole areas of functionality might need to be re-written, leading to overruns and missed deadlines.
Involving testing early is also a fundamental Agile principle, which sees testing as an activity throughout, rather than a phase (which in a traditional waterfall approach would be at the end) because it enables quick and timely continuous feedback loops. When a team encounters hurdles or impediments, early feedback is one of the best ways to overcome these, and testers are essential for this. Consider the tester as the ‘information provider’ – a valuable role to play.
Essentially, testing early can even help you prevent defects in the first place!
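One concrete way testing can start before coding is writing a test directly from an agreed requirement. The sketch below uses a hypothetical discount rule, with a minimal stand-in implementation; drafting the test first forces ambiguities (is exactly £100 included?) to be answered while they are still cheap to resolve:

```python
# Hypothetical requirement, agreed before development starts:
# "Orders of £100 or more receive a 10% discount; smaller orders none."

def apply_discount(total):
    """Minimal stand-in implementation satisfying the requirement above."""
    return total * 0.9 if total >= 100 else total

def test_boundary_is_included():
    assert apply_discount(100) == 90       # exactly £100 gets the discount

def test_below_boundary_no_discount():
    assert apply_discount(99.99) == 99.99  # just under the boundary: none

# pytest would collect these by name; run them directly here:
test_boundary_is_included()
test_below_boundary_no_discount()
```

Notice that the boundary question is settled in the test before any production code exists – exactly the kind of early feedback the principle describes.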
This is the idea that certain components or modules of software usually contain the greatest number of issues, or are responsible for most operational failures. Testing, therefore, should be focused on these areas (proportionally to the expected – and later observed – defect density of these areas). The Pareto principle of 80:20 can be applied – 80 percent of defects are due to 20 percent of the code!
This is particularly the case with large and complex systems, but defect density can vary for a range of reasons. Issues are not evenly distributed throughout the whole system, and the more complicated a component, or the more third-party dependencies there are, the more likely it is that there will be defects. Inheriting legacy code, and developing new features in certain components that are undergoing frequent changes and are therefore more volatile, can also cause defect clustering.
Knowing this can prove very valuable for your testing: if we find one defect in a particular module or area, there is a strong chance of discovering many more there. Identifying the more complex components, or the areas that have more dependencies or are changing the most, can help you concentrate your testing on these crucial risk areas.
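Defect clustering can be checked against your own data. This sketch (using an invented bug-tracker export) tallies defects per module to surface the hotspots:

```python
from collections import Counter

# Hypothetical export from a bug tracker: one module name per defect.
defects = [
    "checkout", "checkout", "search", "checkout", "auth",
    "checkout", "search", "checkout", "checkout", "profile",
]

density = Counter(defects)
# Modules ranked by defect count; hotspots merit proportionally more testing.
for module, count in density.most_common():
    print(module, count)
```

In line with the Pareto principle, a ranking like this usually shows a small set of modules accounting for most of the defects – and those are the areas to weight your test effort towards.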
This is based on the theory that when you use pesticide repeatedly on crops, insects will eventually build up an immunity, rendering it ineffective. Similarly with testing, if the same tests are run continuously then – while they might confirm the software is working – eventually they will fail to find new issues. It is important to keep reviewing your tests and modifying or adding to your scenarios to help prevent the pesticide paradox from occurring – perhaps by applying different testing techniques and approaches in parallel.
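One way to work against the pesticide paradox is to vary test inputs on every run rather than replaying identical cases. In the sketch below (the `normalise_postcode` function is hypothetical), the shape of each input is randomised per run, with the seed logged so any failure can be reproduced:

```python
import random

def normalise_postcode(code):
    """Hypothetical function under test: trim, uppercase, drop spaces."""
    return code.strip().upper().replace(" ", "")

# A fixed example input catches the same issues on every run; randomising
# the padding and casing each run keeps the test probing new combinations.
seed = random.randrange(10_000)
rng = random.Random(seed)

for _ in range(100):
    core = "".join(rng.choices("ABC123", k=6))
    noisy = " " * rng.randint(0, 3) + core.lower() + " " * rng.randint(0, 3)
    assert normalise_postcode(noisy) == core, f"failed with seed {seed}"
```

Dedicated property-based testing libraries take this idea much further, but even a seeded loop like this refreshes the “pesticide” on every run.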
Testing is ALL about the context. The methods and types of testing carried out can completely depend on the context of the software or systems – for example, an e-commerce website can require different types of testing and approaches to an API application, or a database reporting application. What you are testing will always affect your approach.
If your software or system is unusable (or does not fulfill users’ wishes) then it does not matter how many defects are found and fixed – it is still unusable. In this sense, it is irrelevant how issue- or error-free your system is; if the usability is so poor that users are unable to navigate, and/or it does not match business requirements, then it has failed, despite having few bugs.
It is important, therefore, to run tests that are relevant to the system’s requirements. You should also be testing your software with users – this can be done against early prototypes (at the usability testing phase), to gather feedback that can be used to ensure and improve usability. Remember, just because there might be a low number of issues, it does not mean your software is shippable – meeting client expectations and requirements is just as important as ensuring quality.
Applying these principles thoughtfully to your testing can help you become more efficient and focused, and can even improve the quality of your overall testing strategy. Additionally, applying one principle will sometimes cause others to fall naturally into place. For example, testing early can help mitigate the “absence of errors” fallacy – if testers are involved at the requirements level, you can help ensure the software will match client requirements and expectations.
Combine all these points, and you can really achieve maximum efficiency by utilising your time and endeavours effectively and economically.