I recently came across a series of articles written in 2001 by Alberto Savoia which impressed me very much. If you search for these titles, you can still find them:
“Web load test planning”
“Trade secrets from a web testing expert”
“Web page response time 101”
The second of Savoia's articles covers three main topics:
Misunderstanding concurrent users
Miscalculating user abandonment
Over-averaging page response times
When I read these I was interested (and somewhat relieved!) to find that much of what Savoia recommends aligns with my own approach to web performance testing.
In this article, I’ll outline a practical approach to avoiding misunderstandings about concurrent user load.
Savoia rightly pointed out that when defining load for a web performance test, the starting point should be “number of user sessions started per hour”. (It matters less how long each of these individually takes from start to end, though as I will point out in a subsequent article, it cannot be completely ignored.)
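To make the starting point concrete, here is a minimal sketch of turning a target session rate into a per-user pacing interval. The figures (600 sessions per hour, 50 test users) are illustrative assumptions, not taken from Savoia's articles.

```python
def pacing_interval(sessions_per_hour: float, virtual_users: int) -> float:
    """Seconds between consecutive session starts for each test user."""
    sessions_per_user_per_hour = sessions_per_hour / virtual_users
    return 3600.0 / sessions_per_user_per_hour

# e.g. 600 sessions/hour spread evenly across 50 test users
print(pacing_interval(600, 50))  # 300.0, i.e. each user starts a session every 5 minutes
```

The point is that the load is defined by the session start rate, not by how many users happen to be mid-session at any instant.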
Most well-known load testing tools allow for “pacing” of a test user: you can arrange for a test user to repeat the same session with the start times of successive sessions spaced apart at the pacing interval.
It is tempting to ignore this and perhaps disable pacing, so that each session starts immediately after the previous one completes. Believe me, this is almost always a bad idea, and the reason is quite simple: because the time each user session actually takes varies (particularly under load), the rate at which sessions start will be uneven, and after the test you will be unable to explain to anyone what load you actually applied.
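A small simulation illustrates the difference. The session-duration model and the 60-second pacing interval below are assumptions purely for illustration; the point is that unpaced start times drift with the random durations, while paced start times stay on a fixed grid.

```python
import random

random.seed(1)
PACING = 60.0  # assumed pacing interval, seconds

def session_duration() -> float:
    # Session times vary, especially under load; model that with some skewed noise.
    return random.uniform(20, 55) + random.expovariate(1 / 5)

# Without pacing: each session starts the moment the previous one ends,
# so the start rate wanders with the random durations.
t, unpaced_starts = 0.0, []
for _ in range(10):
    unpaced_starts.append(t)
    t += session_duration()

# With pacing: starts sit on a regular grid, regardless of duration
# (assuming every session finishes within the pacing interval).
paced_starts = [i * PACING for i in range(10)]

print([round(b - a, 1) for a, b in zip(unpaced_starts, unpaced_starts[1:])])  # uneven gaps
print([b - a for a, b in zip(paced_starts, paced_starts[1:])])                # all 60.0
```

With pacing, the applied load is something you can state plainly after the test: one session per user per minute.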
So keen am I to ensure that the load applied by a test user is even, that I employ a trick taught to me by a seasoned load testing professional (you know who you are, Neil!) to measure the pacing actually achieved in the test. All decent load testing tools allow you to time transactions by inserting a “start” and “end” in a test script. All you do is start a transaction and immediately end it, at the very start of your script processing loop. This creates a transaction whose duration is always zero. At first sight this does not appear very useful. However, the time interval between successive zero-duration transactions should match the pacing interval. After the test, you can extract the transaction data for the test users, pop them in an Excel file and add a column which uses a simple subtraction to calculate the intervals between the transactions. These should match the defined pacing.
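The same subtraction-and-compare check can of course be scripted instead of done in Excel. This is a sketch of the idea, with hypothetical marker timestamps and a tolerance value I have chosen for illustration:

```python
PACING = 300.0     # configured pacing interval, seconds
TOLERANCE = 1.0    # allowable jitter, seconds (an assumed figure)

# Start times of the zero-duration "pacing marker" transactions for one
# test user, extracted from the tool's results (hypothetical data).
marker_starts = [0.0, 300.2, 600.1, 900.3, 1200.2]

# The gap between consecutive markers should match the pacing interval.
intervals = [b - a for a, b in zip(marker_starts, marker_starts[1:])]
for i, gap in enumerate(intervals, start=1):
    ok = abs(gap - PACING) <= TOLERANCE
    print(f"session {i} -> {i + 1}: {gap:.1f}s {'OK' if ok else 'CHECK'}")
```

Any interval that drifts well outside the tolerance tells you the pacing you think you applied is not the pacing you actually applied.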
Another trick in the same area concerns the ramp-up and ramp-down graphs you will often see generated by load testing tools. I like to make each test user perform an exact number of sessions, and using the pacing I can predict the time by which the last one should have ended. If just one of the sessions takes longer than expected, you will see some unevenness during the ramp-down. If all the sessions take longer, all test users are affected and you will see the ramp-down delayed. I have seen this in real life, and it always points to either unexpected behaviour in the web system being tested or incorrect setup of the pacing.
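Predicting the end of the ramp-down is simple arithmetic: the last session starts one pacing interval after the second-to-last, then runs for one session's duration. The numbers below (12 sessions, 300 s pacing, roughly 90 s sessions) are assumptions for illustration:

```python
def predicted_end(start: float, sessions: int, pacing: float,
                  typical_session: float) -> float:
    """Time at which a user's final session should finish: the last
    session starts at start + (sessions - 1) * pacing and then runs
    for roughly one session's duration."""
    return start + (sessions - 1) * pacing + typical_session

# e.g. 12 sessions at 300 s pacing, sessions typically taking about 90 s
print(predicted_end(0.0, 12, 300.0, 90.0))  # 3390.0 seconds after the user starts
```

If the observed ramp-down runs noticeably past this predicted time, something took longer than it should have, and that is worth investigating.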
Let me know what you think of this approach, and whether you have any ideas for future articles in the same area.