How to do performance tests: Defining your test strategy

Why do I need a test strategy?

Running a performance test in good conditions involves a number of steps, material and immaterial means, and people, all organized to obtain results that cover the objectives.

The test strategy is a kind of contract that covers all of that. The first thing to define in your strategy is the objectives you are trying to reach. Too often, you are asked to run “that kind of test” without a clearly defined objective, sometimes resulting in a gap between what has been done and what has been demonstrated. Agreeing on clear objectives, between you and the person ordering the tests, ensures that the tests match the desired results.

Your objective can be something like “supporting X concurrent users while the response time stays below Y and the error rate stays below Z%”. I consider standard values for Y to be 3 s for web browsing and 1 s for web services. The error rate Z is usually set at 0.1%, with the reported value usually being close to 0.
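As an illustration, such an objective can be turned into an automated pass/fail check on the aggregated results. Here is a minimal sketch in Python; the `results` dictionary and its field names are hypothetical placeholders for whatever your tool reports, and the use of the 95th percentile as “the” response time is my own assumption:

```python
# Minimal sketch: checking aggregated test results against the
# objectives "X concurrent users, response time < Y, error rate < Z%".
# The `results` dict and its keys are hypothetical placeholders for
# the aggregated output of your load-testing tool.

OBJECTIVES = {
    "min_concurrent_users": 500,   # X
    "max_response_time_s": 3.0,    # Y (web browsing)
    "max_error_rate_pct": 0.1,     # Z
}

def objectives_met(results: dict) -> bool:
    checks = [
        results["concurrent_users"] >= OBJECTIVES["min_concurrent_users"],
        results["p95_response_time_s"] <= OBJECTIVES["max_response_time_s"],
        results["error_rate_pct"] <= OBJECTIVES["max_error_rate_pct"],
    ]
    return all(checks)

if __name__ == "__main__":
    # Example values, as if read from a test report.
    report = {"concurrent_users": 520,
              "p95_response_time_s": 2.4,
              "error_rate_pct": 0.05}
    print("PASS" if objectives_met(report) else "FAIL")
```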

Once the objectives are defined, you can look at how to meet them.

Choosing the appropriate benchmarks to cover your objectives

Once we have our objectives, we need to define what kind of tests we will run to achieve them.

These are the most common performance tests I have had to execute:

[Table: performance testing flavors comparison]

The performance benchmarks are usually not part of the final test strategy as such; they mainly help you test your scenarios in a more realistic context, and they are mentioned here as a reminder. Note that when multiple scenarios have to run simultaneously (possibly at different load levels), adjusting these benchmarks can take some time.

The endurance benchmark is usually the last objective you will cover, as it answers the question: “Will my architecture support all the concurrent scenarios under the projected load?”

The stress benchmark and the stressed endurance benchmark depend on each other to give significant results. The aim here is to measure the maximum load the architecture will accept for a single scenario and, if needed, to determine the limiting factor. The stress test first estimates the maximum load, and the stressed endurance test then runs slightly below that value, both to verify that the estimate is accurate and to identify which architecture element limits the load increase.
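For example, a stress test is often implemented as a stepped ramp-up that keeps adding users until the system breaks. Here is a minimal sketch using Locust’s LoadTestShape, assuming Locust as the tool; the step sizes and limits are arbitrary example values:

```python
# Minimal sketch of a stepped stress ramp with Locust.
# Step sizes and the hard stop are arbitrary example values.
from locust import LoadTestShape

class SteppedStressShape(LoadTestShape):
    step_users = 50      # users added at each step
    step_duration = 120  # seconds spent on each step
    max_users = 2000     # hard stop for the ramp

    def tick(self):
        run_time = self.get_run_time()
        step = int(run_time // self.step_duration) + 1
        users = step * self.step_users
        if users > self.max_users:
            return None  # returning None ends the test
        # (target user count, spawn rate in users/second)
        return (users, self.step_users)
```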

Configuration benchmarks are usually short tests that measure how configuration changes affect the performance of the system under test, in order to determine which configuration is the best.
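In practice this often means running the same short scenario once per candidate configuration and comparing the resulting statistics. A possible sketch, again assuming Locust; the `apply_config` helper and the configuration names are hypothetical:

```python
# Hypothetical sketch: run the same short Locust scenario against
# several candidate configurations, keeping one CSV report per run.
import subprocess

CONFIGS = ["baseline", "bigger-pool", "gzip-on"]  # hypothetical names

def apply_config(name: str) -> None:
    # Placeholder: switch the system under test to configuration
    # `name` (e.g. via an API call, a config push, or a redeploy).
    print(f"[placeholder] applying configuration {name}")

for config in CONFIGS:
    apply_config(config)
    subprocess.run(
        ["locust", "-f", "locustfile.py", "--headless",
         "-u", "100", "-r", "10", "--run-time", "5m",
         "--host", "https://sut.example.com",
         "--csv", f"results_{config}"],
        check=True,
    )
# Compare the generated results_<config>_stats.csv files afterwards.
```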

Finally, robustness tests look at how failsafe an architecture is when a forced incident occurs, and how the system recovers when things go back to normal.

Choosing the indicators and the stakeholders

A tester usually cannot run a performance project alone: they need input and help from stakeholders, who must be clearly identified and available. Otherwise, the testing phase will last longer, which will impact the overall duration of the project. Defining who is responsible for what, and when, helps to plan a more accurate timeframe for the project.

The test strategy should also state what will be done and what will not be done (for example, it may be useful to state that no actions will be taken following the results obtained).

Here are the most common needs of a performance tester:

  • A working platform on which to develop the tests. That platform should run the same version your tests will target. A dedicated platform is not mandatory for developing your scenarios, but you will then be impacted by its availability (both its uptime and other users affecting your data).
  • A dedicated platform to run the tests. Your running tests should not be disturbed by anyone.
  • Standard performance testing indicators are provided by your test framework (statistics and details about the number of users, requests per second, response times, and errors), and you should be able to collect and interpret them autonomously.
  • Access to a number of statistics regarding the architecture: CPU usage, memory, swap, DB connections, network load, etc. You need to obtain all the required data, either by yourself (making sure the data is actually being collected, then gathering it; see the sketch after this list) or with the support of someone else who will be responsible for providing accurate data on time.
  • Competence to analyze the collected statistics. The tester may need help (are they fully qualified?) to interpret all the collected data and provide all the expected explanations about it.
  • The tester will also probably need support from the application team to write and parameterize the scenarios, and to understand why some steps don’t work as expected. Getting support from someone who understands the application and has access to its logs is often required.
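When you have to collect system-level statistics yourself, a small sampler is often enough. Below is a minimal sketch in Python using the psutil library; it covers host metrics only (DB connection counts would need the database’s own tooling), and the file name, interval, and duration are arbitrary example values:

```python
# Minimal sketch: sample system-level metrics during a test run
# and append them to a CSV file. Requires `pip install psutil`.
# File name, interval, and duration are arbitrary example values.
import csv
import time
import psutil

def sample_system_stats(path="system_stats.csv",
                        interval_s=5, duration_s=3600):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "cpu_pct", "mem_pct",
                         "swap_pct", "bytes_sent", "bytes_recv"])
        end = time.time() + duration_s
        while time.time() < end:
            net = psutil.net_io_counters()
            writer.writerow([
                int(time.time()),
                psutil.cpu_percent(),   # CPU usage since previous call
                psutil.virtual_memory().percent,
                psutil.swap_memory().percent,
                net.bytes_sent,
                net.bytes_recv,
            ])
            f.flush()
            time.sleep(interval_s)

if __name__ == "__main__":
    sample_system_stats()
```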

The scenarios to be implemented

Your strategy should also define which scenarios must be implemented to cover the objectives. Scenarios should be described in enough detail to be well defined, while remaining understandable by every stakeholder. They should cover the majority of the application's usage, i.e. its biggest use cases in terms of volume; a sketch of such a scenario follows below.
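As an illustration, here is what a weighted scenario could look like in Locust (assuming Locust as the tool; the endpoints and task weights are made-up examples and should come from your own traffic analysis):

```python
# Hypothetical sketch of a weighted user scenario in Locust.
# Endpoints and task weights are made-up examples; real weights
# should reflect the observed traffic mix of the application.
from locust import HttpUser, task, between

class ShopVisitor(HttpUser):
    # Simulated think time between two user actions.
    wait_time = between(1, 5)

    @task(6)  # most frequent action: browsing
    def browse_catalog(self):
        self.client.get("/catalog")

    @task(3)  # less frequent: viewing a product page
    def view_product(self):
        self.client.get("/product/42")

    @task(1)  # rare but business-critical: checking out
    def checkout(self):
        self.client.post("/cart/checkout",
                         json={"payment": "card"})
```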

The major problems you can encounter while designing scenarios are:

  • Trying to cover every functionality: the aim of performance testing is not to verify the performance and/or integrity of each functionality, but the global performance of the application, by focusing on its main usage.
  • Anti-bot protections: anti-bot protections such as captchas are specifically designed to keep bots away from the application’s main functionalities. Since performance testing relies on bots, you must either disable those protections or abort the tests (as it is forbidden to work around such protections).
  • Anti-DoS functionalities: for the same reasons, anti-DoS (Denial of Service) capabilities will probably interfere with your performance tests and should be disabled beforehand.

Planning the project

Once you know what must be done and who must do it, it’s time to decide when things need to be done, and to plan while taking into consideration the availability of the platforms and the stakeholders. Schedule your milestones with enough margin to absorb the inevitable difficulties, unexpected leave, holidays, or whatever else could occur during a project.

Validating the test strategy

Your test strategy has no value unless it has been properly validated. Make sure all the stakeholders agree with it and validate it.

Conclusion

The test strategy is the first step towards a smooth performance test. It must define what will be done (types of tests, indicators collected), how, and by whom (the stakeholders). It also sets a reasonable timeframe for completing the tests.

Written by

Yann Couturier

Technical test expert at CCQA, a Worldline company