Thursday, 20 October 2011

Types of Testing:
Functional Testing
Functional Testing :- In functional testing, we test the functionality of the system without regard to its implementation.
Functional tests fill in the gap left by unit tests and give the team even more confidence in the code. Unit tests miss many bugs. They may give you all the code coverage you need, but they might not give you all the system coverage you need. The functional tests will expose problems that your unit tests are missing. A maintained, automated suite of functional tests might not catch everything either, but it will catch more than the best suite of unit tests can catch alone.
Functional testing is also known as ‘requirements-based testing’ as it deals with requirements only.
Advantage of Functional Testing :-
  Simulates actual system usage.
  Makes no system structure assumptions.
Disadvantage of Functional Testing :-
  Potential of missing logical errors in software
  Possibility of redundant testing
Objective :- The objective of Functional Testing is to ensure that the requirements are properly satisfied by the application system. The functions are those tasks that the system is designed to accomplish. Functional testing is not concerned with how the processing occurs, but rather with the results of that processing.
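For illustration, here is a minimal functional-test sketch in Python (the apply_discount function and the 10% discount rule are hypothetical): the test is written purely from the stated requirement and makes no assumptions about how the function is implemented.

import unittest

# Hypothetical system under test: in a real project this would live in the
# application code, not alongside the test.
def apply_discount(order_total):
    return order_total * 0.9 if order_total >= 1000 else order_total

class TestDiscountRequirement(unittest.TestCase):
    # Requirement: orders of 1000 or more receive a 10% discount.
    def test_discount_applied_at_threshold(self):
        self.assertEqual(apply_discount(1000), 900)

    def test_no_discount_below_threshold(self):
        self.assertEqual(apply_discount(999), 999)

if __name__ == "__main__":
    unittest.main()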
Functional Testing covers :-
  Unit Testing
  Smoke testing / Sanity testing
  Integration Testing (Top Down, Bottom up Testing)
  Interface & Usability Testing (including Independent Focus Groups)
  System Testing
  Regression Testing
  Pre User Acceptance Testing (Alpha & Beta)
  User Acceptance Testing
  White Box Testing, Black Box Testing
  Globalization and Localization Testing (Regional Settings, Languages etc.)
Integration Testing
Integration Testing :- Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Incremental Integration Testing :- Continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
Integration testing hierarchy :-
Big-bang Integration :- In this approach, all or most of the developed modules are coupled together to form a complete software system or a major part of the system, which is then used for integration testing. The Big Bang method can save time in the integration testing process, but it makes failures harder to trace back to a particular module.
Bottom-up Integration :- Bottom-up integration testing begins with unit testing, followed by tests of progressively higher-level combinations of units called modules or builds.
Top-down Integration :- In top-down integration testing, the highest-level modules are tested first and progressively lower-level modules are tested after that.
Sandwich Integration :- Uses top-down tests for upper levels of program structure, coupled with bottom-up tests for subordinate levels.
Limitations :- Any conditions not stated in the specified integration tests, outside of the confirmation of the execution of design items, will generally not be tested. Integration tests cannot include system-wide (end-to-end) change testing.
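As a rough sketch of how top-down integration works in practice (the order and payment modules here are hypothetical), a lower-level module that is not yet integrated is replaced by a stub so the higher-level flow can be tested first; bottom-up integration works the other way round, using test drivers to call the lower-level modules.

# Hypothetical example: place_order is the high-level module under test,
# and the real payment module is replaced by a stub returning a canned result.
class PaymentGatewayStub:
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

def place_order(items, gateway):
    total = sum(price for _, price in items)
    result = gateway.charge(total)
    return result["status"] == "approved"

def test_place_order_with_stubbed_payment():
    assert place_order([("book", 250), ("pen", 50)], PaymentGatewayStub())

test_place_order_with_stubbed_payment()
print("top-down integration check with a stubbed payment module passed")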
System Testing:
System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. The aim is to verify all system elements and validate conformance against the SRS.
  System testing is aimed at revealing bugs that cannot be attributed to a component as such, but to inconsistencies between components or to planned interactions between components.
  Concerns: issues and behaviours that can only be exposed by testing the entire integrated system (e.g. performance, security, recovery, etc.)
  System testing is categorised into the following 15 types.
  The type(s) of testing to be performed are chosen depending on the customer / system requirements.
Compatibility / Conversion Testing :- Where the software developed is a plug-in into an existing system, the compatibility of the developed software with the existing system has to be tested.
Configuration Testing :- Configuration testing includes either or both of the following:
  testing the software with the different possible hardware configurations
  testing each possible configuration of the software
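As a minimal sketch of the second kind of configuration testing (render_page and its theme/language options are hypothetical), the same checks can be run against every combination of software configuration options:

import itertools
import unittest

# Hypothetical system under test, configured by two options.
def render_page(theme, language):
    return "<html data-theme='%s' lang='%s'></html>" % (theme, language)

class TestConfigurations(unittest.TestCase):
    def test_every_configuration(self):
        themes = ("light", "dark")
        languages = ("en", "de", "fr")
        # Run the same assertions for each configuration combination.
        for theme, language in itertools.product(themes, languages):
            with self.subTest(theme=theme, language=language):
                page = render_page(theme, language)
                self.assertIn(theme, page)
                self.assertIn(language, page)

if __name__ == "__main__":
    unittest.main()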
Documentation Testing :- Documentation testing is concerned with the accuracy of the user documentation.
Facility Testing :- Facility testing determines whether each facility / functionality mentioned in the SRS is actually implemented.
Installability Testing :- Certain software systems have complicated procedures for installing the system, e.g. the system generation (sysgen) process on IBM mainframes.
Performance Testing :- Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all phases of testing.
Procedure Testing :- When the software forms a part of a large and not completely automated system, the interfaces of the developed software with the other components in the larger system shall be tested.
Recovery Testing :- Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed.
Reliability Testing :- Test any specific reliability factors that are stated explicitly in the SRS.
Security Testing :- Verify that protection mechanisms built into a system will protect it from improper penetration.
  Design test cases that try to penetrate the system using all possible mechanisms.
Serviceability Testing :- Serviceability testing covers the serviceability or maintainability characteristics of the software.
  The requirements stated in the SRS may include :-
  service aids to be provided with the system, e.g., storage-dump programs and diagnostic programs
  the mean time to debug an apparent problem
  the maintenance procedures for the system
  the quality of the internal-logic documentation
Storage Testing :- Storage testing ensures that the primary and secondary storage requirements are within the specified bounds.
Stress Testing :- Stress tests are designed to confront programs with abnormal situations.
  Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency or volume.
Usability Testing :- Attempts to uncover usability problems in the software involving human factors.
Volume Testing :- Volume testing ensures that the software can handle the volume of data specified in the SRS: it does not crash with heavy volumes of data, but gives an appropriate message and/or makes a clean exit.
Performance Testing :- Performance tests are tests that determine the end-to-end timing of various time-critical business processes and transactions under a particular workload.
Purpose of performance testing :-
  It can demonstrate that the system meets performance criteria.
  It can compare two systems to find which performs better.
  It can measure what parts of the system or workload cause the system to perform badly.
Performance is concerned with achieving response times, throughput, and resource utilization levels that meet the performance objectives for the application under test.
There are different types of performance testing :-
  Load Testing
  Volume Testing
  Stress Testing
What is Load Testing?
Load Testing :-
  Varying the number of users
  Varying think times, i.e. transaction fire rates
  Observing system behaviour at various loads
Purpose :-
  How do response times vary with load?
  How does system resource utilization change as the load increases?
  Estimate the hardware capacity needed for future workloads
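A minimal load-test sketch using only the Python standard library (process_request is a hypothetical stand-in for a real business transaction; real load tests would normally drive a deployed system with a dedicated tool): it fires the transaction from a varying number of concurrent virtual users and reports how response times change with load.

import time
from concurrent.futures import ThreadPoolExecutor

def process_request():
    # Hypothetical transaction; in practice this would call the system under test.
    time.sleep(0.01)

def timed_request(_):
    start = time.perf_counter()
    process_request()
    return time.perf_counter() - start

for users in (10, 50, 100):  # varying the number of virtual users
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(timed_request, range(users * 10)))
    avg_ms = sum(times) / len(times) * 1000
    print("%3d users: average %.1f ms, worst %.1f ms" % (users, avg_ms, max(times) * 1000))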
What is Volume Testing?
Volume Testing :-
  Vary database entities like policies and accounts
    e.g. 100000, 200000, 500000 policies
    e.g. 100GB, 200GB, 500GB of data
  Vary the number of users
    e.g. 100, 200, 500 users
Purpose :-
  How do response times change as database entities increase?
  Are query plans changing?
  How does system resource consumption change as the database size grows?
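A minimal volume-test sketch using the standard sqlite3 module as a stand-in for the real database (the policies table and the query are hypothetical): it loads an increasing number of rows and times the same query at each volume.

import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE policies (id INTEGER PRIMARY KEY, holder TEXT)")

for batch in (10000, 50000, 100000):
    # Grow the data volume, then time the same query against the larger table.
    conn.executemany("INSERT INTO policies (holder) VALUES (?)",
                     (("holder-%d" % i,) for i in range(batch)))
    conn.commit()
    total_rows = conn.execute("SELECT COUNT(*) FROM policies").fetchone()[0]
    start = time.perf_counter()
    matches = conn.execute(
        "SELECT COUNT(*) FROM policies WHERE holder LIKE 'holder-9%'").fetchone()[0]
    elapsed_ms = (time.perf_counter() - start) * 1000
    print("%7d rows: query matched %d in %.1f ms" % (total_rows, matches, elapsed_ms))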
What is Stress Testing?
Stress Testing :-
  Max out the number of users
  Increase the transaction rates
  Run for a long duration
Purpose :-
  When does the system break?
  What is the maximum load the system can support?
  To find memory leaks
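A minimal stress-test sketch along the same lines as the load example (process_request is again a hypothetical stand-in whose latency grows with load): concurrency is doubled until too many transactions miss their response-time budget, which approximates the point where the system breaks.

import time
from concurrent.futures import ThreadPoolExecutor

TIMEOUT = 0.05  # response-time budget per transaction, in seconds

def process_request(load):
    # Hypothetical transaction whose latency grows with the applied load.
    time.sleep(0.0005 * load)

def fraction_within_budget(users):
    def one(_):
        start = time.perf_counter()
        process_request(users)
        return (time.perf_counter() - start) <= TIMEOUT
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(one, range(users)))
    return sum(results) / len(results)

users = 10
while True:
    ok = fraction_within_budget(users)
    print("%4d users: %.0f%% of transactions within budget" % (users, ok * 100))
    if ok < 0.95:  # break point: more than 5% of transactions too slow
        break
    users *= 2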
Different Types Of Testing:
Compatibility Testing :- Testing to ensure compatibility of an application or Web site with different browsers, OSs, and hardware platforms. Compatibility testing can be performed manually or can be driven by an automated functional or regression test suite.
Conformance Testing :- Verifying implementation conformance to industry standards. Producing tests for the behavior of an implementation to be sure it provides the portability, interoperability, and/or compatibility a standard defines.
Regression Testing :- Similar in scope to a functional test, a regression test allows a consistent, repeatable validation of each new release of a product or Web site. Such testing ensures reported product defects have been corrected for each new release and that no new quality problems were introduced in the maintenance process. Though regression testing can be performed manually an automated test suite is often used to reduce the time and resources needed to perform the required testing.
Smoke Testing :- A quick-and-dirty test that the major functions of a piece of software work without bothering with finer details. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
Unit Testing :- Functional and reliability testing in an Engineering environment. Producing tests for the behavior of components of a product to ensure their correct behavior prior to system integration.
Acceptance Testing :- Formal testing conducted to determine whether a system satisfies its acceptance criteria and thus whether the customer should accept the system.
Sanity Testing :- Typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
Usability Testing :- Testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
Installation/Uninstallation Testing :- Testing of full, partial, or upgrade install/uninstall processes.
Security Testing :- Testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.
Ad-hoc Testing :- Testing the application in an informal, unscripted manner, without predefined test cases.
Alpha Testing :- Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
Beta Testing :- Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
GUI Software Testing :- Testing a product that uses a graphical user interface, to ensure it meets its written specifications. This is normally done through the use of a variety of test cases.
Scalability Testing :-  Testing of a software application for measuring its capability to scale up or scale out - in terms of any of its non-functional capability - be it the user load supported, the number of transactions, the data volume etc.
Exploratory Testing :- Exploratory testing is the tactical pursuit of software faults and defects driven by challenging assumptions. It is an approach in software testing with simultaneous learning, test design and test execution. While the software is being tested, the tester learns things that together with experience and creativity generates new good tests to run.
Recovery Testing :- Recovery testing is the activity of testing how well the software is able to recover from crashes, hardware failures and other similar problems. Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is properly performed.
Maintenance Testing :- Maintenance testing is testing performed either to identify equipment problems, to diagnose equipment problems, or to confirm that repair measures have been effective. It can be performed at the system level (e.g., the HVAC system), the equipment level (e.g., the blower in an HVAC line), or the component level (e.g., a control chip in the control box for the blower in the HVAC line).



