Tuesday, 18 October 2011

Testing


8.1 Test Strategy
§   A test strategy is a statement of the overall approach to testing, defined to meet the business and test objectives.
§   It is a plan-level document and has to be prepared during the requirements stage of the project.
§   It identifies the methods, techniques and tools to be used for testing.
§   It can be project-specific or organization-specific.
§   Developing a test strategy that effectively meets the needs of the organization/project is critical to the success of the software development effort.
§   An effective strategy has to meet the project and business objectives.
§   Defining the strategy upfront, before the actual testing, helps in planning the test activities.
A test strategy will typically cover the following aspects:
§   Definition of test objective
§   Strategy to meet the specified objective
§   Overall testing approach
§   Test Environment
§   Test Automation requirements
§   Metric Plan
§   Risk Identification, Mitigation and Contingency plan
§   Details of Tools usage
§   Specific Document templates used in testing
8.2 Testing Approach
§   The test approach will be based on the objectives set for testing.
§   The test approach will detail the way the testing is to be carried out:
§   Types of testing to be done, viz. unit, integration and system testing (a minimal unit-test sketch follows this list).
§   The method of testing, viz. black-box, white-box, etc.
§   Details of any automated testing to be done.
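To make the unit-testing and black-box ideas concrete, here is a minimal sketch in Python, written in the pytest style of plain assert-based test functions. The calculate_discount function and its discount rules are hypothetical examples, not part of any real system; the point is that the test checks only specified inputs and outputs, not the internals.

def calculate_discount(order_total):
    """Return the discount rate for an order total (hypothetical business rule)."""
    if order_total >= 1000:
        return 0.10
    if order_total >= 500:
        return 0.05
    return 0.0

def test_discount_boundaries():
    # Black-box view: only specified inputs and expected outputs are checked,
    # not how calculate_discount is implemented internally.
    assert calculate_discount(499) == 0.0
    assert calculate_discount(500) == 0.05
    assert calculate_discount(1000) == 0.10

Running this file with pytest would execute test_discount_boundaries automatically; the same function could also be exercised with white-box tests chosen from its internal branches.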

8.3 Test Environment
§   All the Hardware and Software requirements for carrying out testing shall be identified in detail.
§   Any specific tools required for testing will also be identified
§   If the testing is going to be done remotely, this has to be factored into the estimation.
8.4 Risk Analysis
§   Risk analysis should be carried out for the testing phase.
§   Risk identification will be accomplished by identifying causes-and-effects or effects-and-causes.
§   The identified risks are classified into internal and external risks.
-        Internal risks are things that the test team can control or influence.
-        External risks are things beyond the control or influence of the test team.
§   Once risks are identified and classified, the following activities will be carried out (a minimal sketch follows this list):
-        Identify the probability of occurrence.
-        Identify the impact areas, if the risk were to occur.
-        Risk mitigation plan: how to avoid this risk?
-        Risk contingency plan: if the risk were to occur, what do we do?
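As a minimal sketch of the probability and impact steps above, the Python snippet below ranks identified risks by exposure (probability times impact), which is one common way to decide where mitigation effort goes first. The risk names and ratings are invented purely for illustration.

# Rank identified test risks by exposure = probability x impact.
# The risks and their ratings are illustrative values, not from a real project.

risks = [
    {"name": "Test environment unavailable",   "type": "external", "probability": 0.3, "impact": 8},
    {"name": "Unstable requirements",          "type": "external", "probability": 0.6, "impact": 9},
    {"name": "Test data not prepared in time", "type": "internal", "probability": 0.4, "impact": 5},
]

for risk in sorted(risks, key=lambda r: r["probability"] * r["impact"], reverse=True):
    exposure = risk["probability"] * risk["impact"]
    print(f'{risk["name"]} ({risk["type"]}): exposure = {exposure:.1f}')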
8.5 Testing Limitations
§   You cannot test a program completely
§   We can only test against system requirements
-        May not detect errors in the requirements.
-        Incomplete or ambiguous requirements may lead to inadequate or incorrect testing.
§   Exhaustive (total) testing is impossible in practice (see the calculation after this list).
§   Time and budget constraints normally require very careful planning of the testing effort.
§   Compromise between thoroughness and budget.
§   Test results are used to make business decisions for release dates.
§   Even if you do find the last bug, you’ll never know it
§   You will run out of time before you run out of test cases
§   You cannot test every path
§   You cannot test every valid input
§   You cannot test every invalid input
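A quick back-of-the-envelope calculation shows why exhaustive testing is impossible even for a trivial program. The Python sketch below assumes a function taking just two 32-bit integer arguments and an optimistic rate of one million test executions per second; both figures are assumptions chosen only to make the arithmetic concrete.

# Why exhaustive testing is impossible: count the valid inputs of a tiny function.
# Assumes two 32-bit integer arguments and 1,000,000 test executions per second.

inputs_per_argument = 2 ** 32
total_inputs = inputs_per_argument ** 2          # every valid input pair
tests_per_second = 1_000_000

seconds = total_inputs / tests_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{total_inputs:.3e} input pairs -> about {years:,.0f} years of testing")
# Roughly 585,000 years -- and that is before invalid inputs or paths are considered.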
8.6 Testing Objectives
§   You cannot prove a program correct (because it isn’t!)
§   The purpose of testing is to find problems
§   The purpose of finding problems is to get them corrected
8.7 Testing Metrics
  • Time
        Time per test case
        Time per test script
        Time per unit test
        Time per system test
  • Sizing
        Function points
        Lines of code
  • Defects
        Numbers of defects
        Defects per sizing measure
        Defects per phase of testing
        Defect origin
        Defect removal efficiency

Defect Removal Efficiency = Number of defects found in producer testing / Number of defects found during the life of the product

Size Variance = (Actual size - Planned size) / Planned size

Delivery Variance = (Actual end date - Planned end date) / (Planned end date - Planned start date)

Effort Variance = (Actual effort - Planned effort) / Planned effort

Productivity = Size / Effort

Review Efficiency = Number of defects found during review / Review effort
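A minimal worked example of these metrics in Python is sketched below; all of the numbers are invented purely for illustration.

# Worked example of the testing metrics above; every figure is illustrative.

defects_in_testing = 45          # defects found by producer (pre-release) testing
defects_in_product_life = 50     # total defects found over the product's life
actual_size, planned_size = 520, 500          # e.g. function points
actual_effort, planned_effort = 130, 120      # e.g. person-days
review_defects, review_effort = 12, 3         # defects found in reviews, person-days spent

dre = defects_in_testing / defects_in_product_life
size_variance = (actual_size - planned_size) / planned_size
effort_variance = (actual_effort - planned_effort) / planned_effort
productivity = actual_size / actual_effort        # size delivered per unit of effort
review_efficiency = review_defects / review_effort

print(f"Defect Removal Efficiency: {dre:.0%}")
print(f"Size Variance: {size_variance:.1%}   Effort Variance: {effort_variance:.1%}")
print(f"Productivity: {productivity:.2f} FP/person-day   Review Efficiency: {review_efficiency:.1f} defects/person-day")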

8.8 Test Stop Criteria
§   Minimum number of test cases successfully executed.
§   A minimum number of defects uncovered (e.g. 16 defects per 1,000 statements).
§   Target statement coverage reached.
§   Testing becomes uneconomical.
§   Reliability model criteria met (a minimal check is sketched below).
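A minimal sketch of how some of these stop criteria might be checked is shown below. The thresholds (95% of planned cases executed, 85% statement coverage, and the 16 defects per 1,000 statements figure from the list) are example values only, not recommended targets.

# Example check of simple test-stop criteria; all thresholds are illustrative.

def can_stop_testing(executed_cases, total_cases, defects_found, statements, coverage):
    min_execution_rate = 0.95          # at least 95% of planned cases executed
    min_defect_density = 16 / 1000     # at least ~16 defects uncovered per 1,000 statements
    min_statement_coverage = 0.85      # at least 85% statement coverage

    return (executed_cases / total_cases >= min_execution_rate
            and defects_found / statements >= min_defect_density
            and coverage >= min_statement_coverage)

print(can_stop_testing(executed_cases=480, total_cases=500,
                       defects_found=40, statements=2000, coverage=0.88))  # True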
8.9 Six Essentials of Software Testing
  1. The quality of the test process determines the success of the test effort.
  2. Prevent defect migration by using early life-cycle testing techniques.
  3. The time for software testing tools is now.
  4. A real person must take responsibility for improving the testing process.
  5. Testing is a professional discipline requiring trained, skilled people.
  6. Cultivate a positive team attitude of creative destruction.
8.10 What are the five common problems in the software development process?
Poor requirements: If the requirements are unclear, incomplete, too general or not testable, there will be problems.
Unrealistic schedules: If too much work is crammed into too little time, problems are inevitable.
Inadequate testing: No one will know whether the system is good or not until users complain or the system crashes.
Featuritis: Requests to pile on new features after development is underway; extremely common.
Miscommunication: If the developers don't know what is needed, or customers have erroneous expectations, problems are guaranteed.

8.11 What are five common solutions to software development problems?
Solid requirements: Ensure the requirements are solid: clear, complete, detailed, cohesive, attainable and testable.
Realistic schedules: Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be able to complete the project without burning out.
Adequate testing: Do testing that is adequate. Start testing early, re-test after fixes or changes, and plan sufficient time for both testing and bug fixing.
Firm requirements: Avoid new features; stick to the initial requirements as much as possible.
Communication: Ensure adequate communication; require walkthroughs and inspections when appropriate.
8.12 What should be done if there is not enough time for testing?
Use risk analysis to determine where testing should be focused. Consider the following (a prioritization sketch follows this list):
§   Which functionality is most important to the project's intended purpose?
§    Which functionality is most visible to the user?
§    Which functionality has the largest safety impact?
§    Which functionality has the largest financial impact on users?
§    Which aspects of the application are most important to the customer?
§    Which aspects of the application can be tested early in the development cycle?
§    Which parts of the code are most complex and thus most subject to errors?
§    Which parts of the application were developed in rush or panic mode?
§    Which aspects of similar/related previous projects caused problems?
§    Which aspects of similar/related previous projects had large maintenance expenses?
§    Which parts of the requirements and design are unclear or poorly thought out?
§    What do the developers think are the highest-risk aspects of the application?
§    What kinds of problems would cause the worst publicity?
§    What kinds of problems would cause the most customer service complaints?
§    What kinds of tests could easily cover multiple functionalities?
§    Which tests will have the best high-risk-coverage to time-required ratio?
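The last question in the list lends itself to a simple calculation: rank candidate tests by the ratio of risk coverage to time required and run the best-value tests first. The Python sketch below does exactly that; the test names, risk scores and durations are made-up examples.

# Rank candidate tests by (risk covered) / (time required); the data is illustrative.

candidate_tests = [
    {"name": "payment workflow end-to-end", "risk_covered": 9, "hours": 3.0},
    {"name": "login and session handling",  "risk_covered": 7, "hours": 1.0},
    {"name": "report formatting",           "risk_covered": 2, "hours": 2.0},
]

for test in sorted(candidate_tests, key=lambda t: t["risk_covered"] / t["hours"], reverse=True):
    ratio = test["risk_covered"] / test["hours"]
    print(f'{test["name"]}: {ratio:.1f} risk points per hour')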
8.13 How do you know when to stop testing?
Common factors in deciding when to stop are...
§    Deadlines, e.g. release deadlines, testing deadlines;
§    Test cases completed with certain percentage passed;
§    Test budget has been depleted;
§    Coverage of code, functionality, or requirements reaches a specified point;
§    Bug rate falls below a certain level; or
§    Beta or alpha testing period ends.
8.14 Why does software have bugs?
§   Miscommunication or No communication
§   Software Complexity
§   Programming Errors
§   Changing Requirements
§   Time Pressures
§   Poorly Documented Code
8.15 Different Types of Errors in Software
  • User Interface Errors
  • Error Handling
  • Boundary-related errors (see the sketch after this list)
  • Calculation errors
  • Initial and Later states
  • Control flow errors
  • Errors in Handling or Interpreting Data
  • Race Conditions
  • Load Conditions
  • Hardware
  • Testing Errors
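As an example of the boundary-related and calculation errors listed above, the sketch below shows a hypothetical off-by-one mistake and the boundary test that exposes it; the grading rule is invented for illustration.

# A hypothetical off-by-one (boundary) error and the test that exposes it.

def is_passing_grade(score):
    # Intended rule: 40 or above passes.  The '>' should be '>=' -- a boundary bug.
    return score > 40

def test_pass_boundary():
    assert is_passing_grade(41) is True
    assert is_passing_grade(39) is False
    assert is_passing_grade(40) is True   # fails: exposes the off-by-one error

# Calling test_pass_boundary() raises AssertionError on the boundary value 40,
# which is exactly why boundary values deserve their own test cases.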

 