Optimization work is naturally iterative: you will run the same test again and again to measure the effect of each change, such as increasing the pool size.
The direct implication of this workflow is that you need to be prepared to store many reports in an organized way. There are many solutions for this, and the choice mostly depends on the tools you already rely on. But at a very high level, you need to store at least:
- The benchmark report.
- The benchmark date (so you can sort the reports; it is often useful to replay past iterations later).
- The benchmark configuration (you can store the full configuration or just write it in a file, named CHANGES.txt, for instance, ...
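The items above can be archived with a small helper. The sketch below is one possible layout, not a prescribed one: the `archive_benchmark` function and the `benchmarks/` directory structure are assumptions for illustration, using a UTC timestamp as the directory name so runs sort chronologically.

```python
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path


def archive_benchmark(report_path: str, config: dict, root: str = "benchmarks") -> Path:
    """Copy a benchmark report into a timestamped directory
    alongside the configuration used to produce it.

    Hypothetical helper: layout and names are illustrative only.
    """
    # A timestamped directory name keeps runs sortable by date.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    run_dir = Path(root) / stamp
    run_dir.mkdir(parents=True, exist_ok=True)

    # 1. The benchmark report itself.
    shutil.copy(report_path, run_dir / Path(report_path).name)

    # 2. The configuration that was benchmarked (e.g. the pool size).
    (run_dir / "config.json").write_text(json.dumps(config, indent=2))

    # 3. A short human-readable note on what changed in this iteration.
    (run_dir / "CHANGES.txt").write_text(config.get("changes", ""))
    return run_dir
```

Sorting the directory names then gives the chronological sequence of iterations, which makes it easy to replay or compare earlier runs.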