Hi,
I am working with a performance test team at the moment. Their standard process for performance testing a release is to run a load test and compare it against a previous result to assess whether there is any regression from a performance perspective.
App D is a fairly new addition to their toolset.
When TEST 1 is run, we save its time period as a custom time period. When new code is released, a second test (TEST 2) is run and its time period is saved in the same way.
We then use the compare releases screen to compare the two runs.
This works well when the tests are close together. However, when the tests are a couple of weeks apart, the data from TEST 1 has been summarised into hourly data points. While this is fine for comparing average response times, server metrics etc., where there are differences we cannot tell whether the increase is due to a small spike in response times or a consistent increase across the test period, as the level of detail has been lost.
What would be exceptionally useful would be the ability to mark the time period of TEST 1 so that the detailed data is retained, allowing better comparisons against future tests.
I know that this could be achieved to some extent by altering the thresholds for when data is summarised from 1 min to 10 min and from 10 min to 1 hour, but I believe even those have limits on the maximum values that can be set.
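In the meantime, one workaround we have been sketching is to export the minute-level data points ourselves as soon as a test finishes, while they still exist on the controller, using the Metric and Snapshot REST API with rollup=false. A minimal sketch follows; the controller URL, application name, credentials, metric path and timestamps are all placeholders for whatever applies in your environment:

import json
import requests  # assumes the requests library is installed

CONTROLLER = "https://mycontroller.example.com:8090"  # placeholder controller URL
APP = "MyApp"                                         # placeholder application name
AUTH = ("apiuser@customer1", "secret")                # user@account, password

def export_metric(metric_path, start_ms, end_ms, out_file):
    """Pull individual data points for a time window and save them to disk."""
    resp = requests.get(
        f"{CONTROLLER}/controller/rest/applications/{APP}/metric-data",
        params={
            "metric-path": metric_path,
            "time-range-type": "BETWEEN_TIMES",
            "start-time": start_ms,   # epoch milliseconds
            "end-time": end_ms,
            "rollup": "false",        # return each data point, not a single rolled-up average
            "output": "JSON",
        },
        auth=AUTH,
    )
    resp.raise_for_status()
    with open(out_file, "w") as f:
        json.dump(resp.json(), f, indent=2)

# Example: save TEST 1's response-time detail right after the test completes,
# before the 1-minute data points are summarised away.
export_metric(
    "Overall Application Performance|Average Response Time (ms)",
    start_ms=1700000000000,
    end_ms=1700003600000,
    out_file="test1_response_times.json",
)

That at least preserves the detail outside the controller for later analysis, but having it retained against the saved time period in the UI would obviously be much better.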