When features are developed and a working system is passed into testing, we check that the functionality meets the agreed requirements. We build our own input data and compare the system's output against expected results to verify that the functionality is correct.
Each new feature being developed carries the risk of inadvertently affecting existing functionality. When we do regression testing, we look for these knock-on effects.
When you develop multiple related features in parallel, there is a risk of conflicts. Each feature may work in isolation, so to check that they also work together, we use integration testing.
We automate tests to streamline processes and to handle jobs that aren't practical to do manually. As the system grows in size, so do the regression packs, and we automate their execution to reduce delays that increase with every release. Real-world users can arrive in unpredictably large numbers, and we use load testing to simulate them at a scale no manual tester could hope to match. For companies with business-critical systems where uptime matters, we use automated testing to monitor availability 24/7.
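As a rough illustration of the load-testing idea above, the sketch below simulates many concurrent users with a thread pool. The `simulated_request` function is a hypothetical stand-in; a real load test would call the system under test over the network.

```python
import concurrent.futures
import time

def simulated_request(user_id):
    """Hypothetical stand-in for a real HTTP call to the system under test."""
    time.sleep(0.01)  # pretend network latency
    return 200        # assume the request succeeds

def load_test(num_users):
    """Fire one request per simulated concurrent user and collect the results."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=num_users) as pool:
        return list(pool.map(simulated_request, range(num_users)))

statuses = load_test(50)
print(sum(1 for s in statuses if s == 200), "of", len(statuses), "requests succeeded")
```

In practice a dedicated tool would handle ramp-up, timing, and reporting; the point here is only that concurrency at this scale is trivial to automate and impossible to reproduce by hand.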
A bad user experience causes frustration and forces people to find their own way to use a system. Through user testing, we help remove sticking points for the user.
To test a system, we first need to understand it. Before we start to help, we take the time to learn about the system, the business logic behind it and the people who use it.
From high-level requirements through to implementation, our skilled testers refine their work at every level of detail.
We start by asking whether the requested functionality meets the intended purpose. Finding and fixing issues at these early stages can be up to ten times cheaper than fixing them once the system has been developed. The test strategy is written at this point; it sets the basis for all future test cases.
Working with business analysts and product owners, we agree when a feature is considered to be done. We formalize this agreement as documented Acceptance Criteria and start designing test cases to verify them.
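To make this concrete, here is a minimal sketch of how an acceptance criterion might translate into test cases. The business rule (`order_total`, a 10% discount on orders over 100) is entirely hypothetical, invented for illustration; the point is that each criterion maps to a named test.

```python
def order_total(subtotal):
    """Hypothetical acceptance criterion: orders over 100 receive a 10% discount."""
    return round(subtotal * 0.9, 2) if subtotal > 100 else subtotal

# One test per acceptance criterion, named after the behaviour it verifies.
def test_discount_applied_above_threshold():
    assert order_total(200) == 180.0

def test_no_discount_at_or_below_threshold():
    assert order_total(100) == 100

test_discount_applied_above_threshold()
test_no_discount_at_or_below_threshold()
print("acceptance tests passed")
```

Naming tests after the agreed criteria keeps the mapping between the documented agreement and the test pack visible to both testers and product owners.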
Working alongside the development team, we write our test cases and start to build up the regression tests in areas that may be affected by the development. Having tests written at this stage lets us give feedback to the developers soon after they complete a feature for testing.
As developed features are sent to us, we begin to run our functional tests and work closely with the development team to resolve any test failures. Because our tests are written in advance, we can feed back to the developers while the features are still fresh in their minds.
Individual features start coming together into a larger change set, and we take every care to make sure that they work together as a full product. The benefit of integration testing is that it catches bugs caused by communication issues, which particularly affect larger or remote development teams.
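A classic communication bug of this kind is two components disagreeing on units. The sketch below, with hypothetical pricing and invoicing components, shows an integration test verifying that they agree when wired together: each component could pass its own unit tests and still break the product if one worked in pounds and the other in pence.

```python
# Hypothetical components, perhaps built by two different teams.
def calculate_price_pence(quantity, unit_price_pence):
    """Pricing component: works in integer pence to avoid floating-point errors."""
    return quantity * unit_price_pence

def format_invoice_line(amount_pence):
    """Invoicing component: expects pence, renders pounds."""
    return f"£{amount_pence / 100:.2f}"

# Integration test: the components agree on units across the boundary.
def test_price_flows_into_invoice():
    assert format_invoice_line(calculate_price_pence(3, 250)) == "£7.50"

test_price_flows_into_invoice()
print("integration test passed")
```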
All new developments carry the risk of affecting existing functionality. Before any changes are released, we run our regression packs to check that there are no unexpected side effects. As the system grows, this process takes longer and longer, so we automate our regression packs wherever possible so that they fit into any continuous integration strategy.
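A regression pack suited to continuous integration can be as simple as a table of named checks that runs unattended and reports a single pass/fail result. The sketch below is a minimal illustration; `legacy_vat` is a hypothetical piece of existing behaviour that a new release must not change.

```python
def legacy_vat(amount):
    """Hypothetical existing behaviour: add 20% VAT, rounded to pence."""
    return round(amount * 1.2, 2)

# Each entry pairs a descriptive name with a check on existing behaviour.
REGRESSION_PACK = [
    ("vat on whole pounds", lambda: legacy_vat(10) == 12.0),
    ("vat rounds to pence", lambda: legacy_vat(0.99) == 1.19),
]

def run_pack(pack):
    """Run every check and return the names of any that fail."""
    return [name for name, check in pack if not check()]

failures = run_pack(REGRESSION_PACK)
print("regression pack:", "PASS" if not failures else f"FAIL {failures}")
```

A CI server can run such a pack on every commit and block the release when the failure list is non-empty, which is exactly the side-effect check described above, performed without manual delay.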