Software Testing Definitions
— Software testing is the process used to help identify the correctness, completeness, security, and quality of developed computer software.
— Software testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate.
— Software testing furnishes a criticism or comparison, comparing the state and behavior of the product against a specification.
Features of Software Testing
— Software testing can prove the presence of errors, but never their absence.
— Software testing is a constructive destruction process.
— Software testing involves operating a system or application under controlled conditions and evaluating the results.
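A minimal sketch of that last point, assuming a hypothetical `discount` function as the system under test: the test fixes the inputs (the controlled conditions), runs the operation, and evaluates the observed result against the expected one.

```python
def discount(price: float, rate: float) -> float:
    """System under test: apply a fractional discount (illustrative)."""
    return price - price * rate

def test_discount():
    # Controlled condition: fixed, known inputs.
    result = discount(100.0, 0.25)
    # Evaluation: compare the observed result with the expected one.
    assert result == 75.0

test_discount()
```

Real test suites delegate this pattern to a framework such as unittest or pytest, but the structure is the same: arrange the conditions, act on the system, assert on the outcome.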
Objective of testing
— Finding defects.
— Gaining confidence about the level of quality and providing information.
— Preventing defects.
Reasons for bugs
— Miscommunication or no communication
— Software complexity
— Programming errors
— Changing requirements
— Time pressures
— Egos
— Poorly documented code
— Software development tools
Error, Fault, Failure
— Error
— An incorrect action or calculation performed by the software.
— Fault
— An accidental condition that causes a functional unit to fail to perform its required function.
— Failure
— The inability of the system to produce the intended result.
— Defect or Bug
— Non-conformance of software to its requirements is commonly called a defect.
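The three terms can be seen in one small, deliberately buggy function (a hypothetical example: the fault is written into the code, executing it performs an incorrect calculation, and the caller observes the failure).

```python
def average(values):
    # Fault: the divisor should be len(values); the hard-coded 2 is an
    # incorrect condition written into the code.
    return sum(values) / 2

# Executing the fault performs an incorrect calculation (the error) ...
result = average([10, 20, 30])

# ... and the failure is the externally visible wrong result:
print(result)  # 30.0, although the intended result is 20.0
```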
Quality Assurance
— Quality assurance is an activity that establishes and evaluates the processes that produce products.
Quality Control
— Quality control is the process by which the product quality is compared with applicable standards, and the action taken when nonconformance is detected.
Quality Assurance | Quality Control
It helps establish the process | It helps execute the process
It is a management responsibility | It is the producer's responsibility
It identifies weaknesses in the process | It identifies weaknesses in the product
Challenges of Effective and Efficient Testing
— Time pressure
— Software complexity
— Choosing the right approach
— Wrong process
The Limits of Software Testing
— Testing does not guarantee a 100% defect-free product.
— Testing proves the presence of errors, but not their absence.
— Testing by itself will not improve the development process.
Prioritizing Tests
— Test case prioritization techniques schedule test cases in an order that increases their effectiveness in meeting some performance goal. One performance goal, rate of fault detection, is a measure of how quickly faults are detected within the testing process; an improved rate of fault detection can provide faster feedback on the system under test, and let software engineers begin locating and correcting faults earlier than might otherwise be possible.
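The rate-of-fault-detection goal can be sketched with a greedy heuristic (the test names, fault counts, and runtimes below are invented for illustration): order the suite by historical faults detected per second of runtime, so defects tend to surface as early as possible.

```python
# Each entry: (test name, faults it detected historically, runtime in seconds).
history = [
    ("test_login",    2, 4.0),
    ("test_checkout", 6, 10.0),
    ("test_search",   1, 1.0),
    ("test_profile",  0, 3.0),
]

def prioritize(tests):
    # Greedy heuristic: highest fault-detection rate (faults per second) first.
    return sorted(tests, key=lambda t: t[1] / t[2], reverse=True)

ordered = [name for name, _, _ in prioritize(history)]
# → ['test_search', 'test_checkout', 'test_login', 'test_profile']
```

Real prioritization techniques also weigh coverage, code churn, and dependency information, but the scheduling idea is the same: spend the earliest test minutes where faults are most likely to appear.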
Cost of quality
— Prevention cost
— Money required to prevent errors and to do the job right the first time.
— Ex. Establishing methods and procedures, Training workers, acquiring tools.
— Appraisal cost
— Money spent to review completed products against requirements.
— Ex. Cost of inspections, testing, reviews.
— Failure cost
— All costs associated with defective products that have been delivered to the user or moved into production.
— Ex. Repairing cost, cost of operating faulty products, damage incurred by using them.
Software quality factors
— Correctness
— Reliability
— Efficiency
— Integrity
— Usability
— Maintainability
— Testability
— Flexibility
— Portability
— Reusability
— Interoperability
The Fundamental Test Process
— Assess development plan and status
— Develop the test plan
— Test software requirements
— Test software design
— Program (build) phase testing
— Execute and record results
— Acceptance test
— Report test results
— The software installation
— Test software changes
— Evaluate test effectiveness
Black box testing
— Also known as functional testing.
— A software testing technique whereby the internal workings of the item being tested are not known by the tester.
White box testing
— Also known as glass box, structural, clear box and open box testing.
— A software testing technique whereby explicit knowledge of the internal workings of the item being tested is used to select the test data. Unlike black box testing, white box testing uses specific knowledge of the programming code to examine outputs.
Traceability matrix
— In a software development process, a traceability matrix is a table that correlates any two baselined documents that require a many-to-many relationship, to determine the completeness of the relationship. It is often used to map high-level requirements (sometimes known as marketing requirements) and detailed requirements of the software product to the matching parts of the high-level design, detailed design, test plan, and test cases.
Black Box Testing Techniques
— Boundary value analysis
— Boundary value analysis is a software testing technique for deriving test cases that cover known areas of frequent problems at the boundaries of a software component's input ranges.
— Boundary value analysis typically yields six test cases per input range: n-1, n, and n+1 at the lower limit, and n-1, n, and n+1 at the upper limit.
— Equivalence partitioning
— Equivalence partitioning is a software testing technique with two goals:
— To reduce the number of test cases to a necessary minimum.
— To select the right test cases to cover all possible scenarios.
— Example: the valid range for the month is 1 to 12, standing for January to December. This valid range is called a partition. In this example there are two further partitions of invalid ranges: the first invalid partition is <= 0 and the second is >= 13.
Test case
— A test case is a set of conditions or variables under which a tester determines whether a requirement upon an application is partially or fully satisfied.
Test script
— A test script is a short program, written in a programming language, used to test part of the functionality of a software system.
Best practices
— Avoid unnecessary duplication of test cases
— Map all test cases to their requirements
— Provide sufficient information in the test cases, with appropriate naming conventions
Manual Test Execution
— Collect the requirements from the user requirement document
— Analyze the requirements
— Identify the areas to be tested
— Prepare the detailed test plan and test cases
— Prepare the environment for testing
— Execute the test cases in the planned manner
— Observe the behavior
— Record any abnormality in the defect log
Test Reporting
— Test reporting is a process to collect data, analyze the data, and supplement the data with metrics, graphs, charts, and other pictorial representations that help developers and users interpret that data.
— Prerequisites to test reporting:
— Define the test status data to be collected
— Define the test metrics to be used in reporting test results
— Define effective test metrics
Defect Management System
— Identify the defect
— Identify the priority and severity
— Report the bug to the programmer
— Once fixed, run the same test case again to verify that the bug has indeed been fixed
— Run the related test cases to verify there are no side effects
— Close the bug
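The boundary value analysis and equivalence partitioning techniques described earlier can be combined for the month example (`is_valid_month` is an illustrative stand-in for the component under test): equivalence partitioning contributes one representative value per partition, and boundary value analysis adds n-1, n, and n+1 around each limit.

```python
def is_valid_month(month: int) -> bool:
    """Illustrative system under test: accept months 1..12."""
    return 1 <= month <= 12

# Equivalence partitioning: one representative value per partition.
partitions = {
    "invalid_low":  0,    # partition <= 0
    "valid":        6,    # partition 1..12
    "invalid_high": 13,   # partition >= 13
}

# Boundary value analysis: n-1, n, n+1 around each limit (n = 1 and n = 12).
boundary_cases = [0, 1, 2, 11, 12, 13]

for name, value in partitions.items():
    expected = (name == "valid")
    assert is_valid_month(value) == expected, name

for value in boundary_cases:
    expected = 1 <= value <= 12  # oracle for this simple example
    assert is_valid_month(value) == expected, value
```

Three partition tests plus six boundary tests cover the input range with far fewer cases than testing every month value, which is exactly the reduction equivalence partitioning aims for.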