Tuesday, 18 October 2011

Performance Life Cycle


6.1 What is Performance Testing?

§  The primary objective of performance testing is “to demonstrate that the system works functionally as per specifications, within a given response time, on a production-sized database”.

6.2 Why Performance Testing:

§  To assess the system's capacity for growth
The load and response data gained from the tests can be used to validate the capacity-planning model and assist decision making.
§  To identify weak points in the architecture
The controlled load can be increased to extreme levels to stress the architecture and break it; bottlenecks and weak components can then be fixed or replaced.
§  To detect obscure bugs in software
Tests executed for extended periods can expose failures caused by memory leaks and reveal obscure contention problems or conflicts.
§  To tune the system
Repeat runs of tests can be performed to verify that tuning activities are having the desired effect – improving performance.
§  To verify resilience & reliability
Executing tests at production loads for extended periods is the only way to assess the system's resilience and reliability and to ensure required service levels are likely to be met.

6.3 Performance-Tests:

§  Used to test each part of the web application to find out which parts of the website are slow and how we can make them faster.
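As a sketch of this idea, the hypothetical snippet below times individual parts of handling a request so the slowest part can be singled out. The section names and the work inside them are illustrative stand-ins, not a real application:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(part, timings):
    """Record how long one named part of the application takes."""
    start = time.perf_counter()
    yield
    timings[part] = time.perf_counter() - start

timings = {}
with timed("template rendering", timings):
    "".join(str(i) for i in range(10000))  # stand-in for real work
with timed("db query", timings):
    sorted(range(10000), reverse=True)     # stand-in for real work

# The part with the largest elapsed time is the first tuning candidate.
slowest = max(timings, key=timings.get)
```

Measuring each part separately, rather than only the total page time, is what makes it possible to say *which* part of the website to make faster.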

6.4 Load-Tests:

§  This type of test is done to test the website using the load that the customer expects to have on the site. This is something like a “real-world test” of the website.
§  First we have to define the maximum request times we want customers to experience. This is done from a business and usability point of view, not from a technical point of view; at this point we need to calculate the impact of a slow website on the company's sales and support costs.
§  Then we have to calculate the anticipated load and load pattern for the website (refer to Annexure I for details on load calculation), which we then simulate using the tool.
§  At the end we compare the test results with the request times we wanted to achieve.

6.5 Stress-Tests:

  • They simulate brute-force attacks with excessive load on the web server. In the real world, situations like this can be created by a massive spike of users, far above normal usage, e.g. caused by a large referrer (imagine the website being mentioned on national TV…).
  • The goals of stress tests are to learn under what load the server generates errors, whether it will come back online at all after such a massive spike or crash instead, and when it will come back online.
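A stepwise ramp toward that first goal, finding the load at which errors begin, can be sketched as below. The server is modelled by a toy probability function; `CAPACITY`, the step sizes, and the 5% error threshold are all assumptions for illustration:

```python
import random

CAPACITY = 200  # requests/sec the imagined server can sustain (assumed)

def send_request(load):
    """Toy server model: failures become likely once load exceeds capacity."""
    return random.random() > max(0.0, (load - CAPACITY) / CAPACITY)

def find_breaking_point(start=50, step=50, max_load=1000, requests=200):
    """Step the load up until more than 5% of requests fail."""
    for load in range(start, max_load + 1, step):
        failures = sum(1 for _ in range(requests) if not send_request(load))
        if failures / requests > 0.05:
            return load  # the load level at which the server starts erroring
    return None  # the server survived every load level tried

breaking_load = find_breaking_point()
```

A real stress test would replace `send_request` with actual traffic from a tool, but the search loop, raising load in steps until the error rate crosses a threshold, is the same.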

6.6 When should we start Performance Testing:

  • It is even a good idea to start performance testing before a single line of code is written! Testing the base technology (network, load balancer, application, database and web servers) for the expected load levels early can save a lot of money if you discover at this stage that your hardware is too slow. The first stress tests can also be a good idea at this point.
  • The cost of correcting a performance problem rises steeply from the start of development until the website goes into production, and can be unbelievably high for a website that is already online.
  • As soon as several web pages are working, the first load tests should be conducted, and from there on they should be part of the regular testing routine, each day or week, or for each build of the software.

6.7 Popular tools used to conduct Performance Testing:

  • LoadRunner from Mercury Interactive
  • AstraLoad from Mercury Interactive
  • Silk Performer from Segue
  • Rational Suite Test Studio from Rational
  • Rational Site Load from Rational
  • Webload from Radview
  • RSW eSuite from Empirix
  • MS Stress tool from Microsoft

6.8 Performance Test Process:


§  This is a general process for performance testing. It can be customized according to the project needs: a few more steps can be added to the existing process, but deleting any of its steps may result in an incomplete process. If the client is using one of the tools, the respective process demonstrated by that tool can be followed instead.
General Process Steps:

     Setting up the test environment
§  The installation of the tool and agents
§  Directory structure creation for the storage of the scripts and results
§  Installation of additional software, if essential, to collect the server statistics
§  It is also essential to ensure the correctness of the environment by performing a dry run.
Record & playback in standby mode
§  The scripts are generated using the script generator and played back to ensure that there are no errors in the script.
     Enhancement of the script to support multiple users
§  Variables like logins, user inputs etc. should be parameterised to simulate the live environment.
§  It is also essential since in some of the applications no two users can login with the same id.
Configuration of the scenarios
§  Scenarios should be configured to run the scripts on different agents and to schedule the scenarios
§  Distribute the users onto different scripts, collect the data related to the database, etc.
·         Hosts
The next important step in the testing approach is to run the virtual users on different host machines to reduce the load on the client machine by sharing the resources of the other machines.
·         Users
The number of users who need to be activated during the execution of the scenario.
·         Scenarios
A scenario might comprise either a single script or multiple scripts.  The main intention of creating a scenario is to simulate load on the server similar to the live/production environment.
·         Ramping
In the live environment, not all users log in to the application simultaneously.  At this stage we can simulate the virtual users similarly to the live environment by deciding -
1.     How many users should be activated at a particular point of time as a batch?
2.     What should be the time interval between every batch of users?
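These two ramping decisions, batch size and interval, can be captured in a small schedule calculation. The sketch below uses made-up numbers (100 users, batches of 25, 30-second intervals):

```python
def ramp_schedule(total_users, batch_size, interval_seconds):
    """List (start_time, users_to_activate) pairs for a ramp-up."""
    schedule = []
    started = 0
    t = 0
    while started < total_users:
        batch = min(batch_size, total_users - started)  # last batch may be smaller
        schedule.append((t, batch))
        started += batch
        t += interval_seconds
    return schedule

# 100 virtual users, activated 25 at a time, every 30 seconds
plan = ramp_schedule(100, 25, 30)
# plan == [(0, 25), (30, 25), (60, 25), (90, 25)]
```

Load-testing tools compute an equivalent schedule internally from the same two inputs; laying it out explicitly makes it easy to check against the expected live login pattern.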
     Execution for fixed users and reporting the status to the developers
§  The script should initially be executed for one user, and the results/inputs should be verified to check whether the server response time for a transaction is less than or equal to the acceptable limit (benchmark).
§  If the results are found adequate, the execution should be continued for different sets of users.  At the end of every execution the results should be analysed.
§  If a stage is reached where the time taken for the server to respond to a transaction is above the acceptable limit, then the inputs should be given to the developers.
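A sketch of this benchmark check, using made-up per-user-level measurements and an assumed 3-second acceptable limit:

```python
BENCHMARK = 3.0  # acceptable server response time per transaction, seconds (assumed)

# Hypothetical average response times as measured at each user level.
measured = {1: 0.8, 25: 1.2, 50: 2.1, 100: 3.6}

def first_failing_load(results, benchmark):
    """Return the smallest user count whose response time exceeds the benchmark."""
    for users in sorted(results):
        if results[users] > benchmark:
            return users  # this user level goes back to the developers for tuning
    return None  # all tested levels met the benchmark
```

In this invented data the 100-user run breaches the limit, so 100 users is the level reported to the developers and re-tested after tuning.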

Re-execution of the scenarios after the developers fine tune the code
§   After the fine-tuning, the scenarios should be re-executed for the specific set of users for which the response was inadequate.
§   If found satisfactory, then the execution should be continued until the decided load.
Final report
At the end of the performance testing, a final report should be generated, comprising the following –
·         Introduction – about the application.
·         Objectives – set / specified in the test plan.
·         Approach – summary of the steps followed in conducting the test
·         Analysis & Results – a brief explanation of the results and the analysis of the report.
·         Conclusion – the report should conclude by stating whether the objectives set before the test were met.
·         Annexure – can consist of graphical representation of the data with a brief description, comparison statistics if any etc.
