SE Unit-4
NOTES
• Testing Strategies: A strategic approach to software testing, test strategies
for conventional software, Black-Box and White-Box testing, Validation
testing, System testing, the art of Debugging.
• Product metrics: Software Quality, Metrics for Analysis Model, Metrics for
Design Model, Metrics for source code, Metrics for testing, Metrics for
maintenance.
• Metrics for Process and Projects: Software Measurement, Metrics for
software quality.
UNIT TESTING:
• It focuses verification effort on the smallest unit of software design, i.e., each module or component.
• It focuses on the module's internal processing logic and data structures.
Unit test Considerations:
• The module interface is tested to ensure that information properly flows into and out of the program unit.
• Local data structures are tested to ensure that stored data maintains its integrity during
execution.
• Figure: Unit testing
• Independent paths are tested to ensure that all statements in a module are executed at least once.
• Boundary conditions are tested to ensure that the module operates properly at its boundaries.
• Finally, all error-handling paths are tested.
• Test cases are designed to find errors due to erroneous computations, incorrect
comparisons, or improper flow.
Unit test procedures:
• The next figure shows the unit test environment.
• A driver is nothing more than a "main program" that accepts test data, passes that data to the component, and prints the results.
• Stubs serve to replace modules that are called by the component to be tested.
• Drivers and stubs represent overhead.
• That is, both are software that must be written but is not delivered with the final software product.
Figure: Unit testing environment
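To make drivers and stubs concrete, here is a minimal Python sketch (illustrative only; the names compute_discount and get_customer_tier_stub are hypothetical, not taken from the notes). The stub stands in for a subordinate module and returns canned data, while the driver acts as a throwaway "main program" that feeds test data to the component and prints the results.

def get_customer_tier_stub(customer_id):
    """Stub: replaces the real subordinate module and returns canned data."""
    return "gold" if customer_id == 1 else "standard"

def compute_discount(customer_id, amount, tier_lookup):
    """Component under test: relies on a subordinate module via tier_lookup."""
    tier = tier_lookup(customer_id)
    return amount * (0.10 if tier == "gold" else 0.02)

def driver():
    """Driver: a throwaway 'main program' that feeds data in and prints results."""
    cases = [(1, 100.0, 10.0), (2, 100.0, 2.0)]
    for customer_id, amount, expected in cases:
        result = compute_discount(customer_id, amount, get_customer_tier_stub)
        status = "OK" if abs(result - expected) < 1e-9 else "FAIL"
        print(customer_id, amount, result, status)

if __name__ == "__main__":
    driver()

Because the stub returns canned data, the component can be tested before the real subordinate modules exist; both the driver and the stub are later discarded rather than delivered.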
INTEGRATION TESTING:
• Once all modules have been unit tested individually, they must be combined.
• If they all work individually, why should we doubt that they will work when we put them together?
• The problem is "putting them together", i.e., interfacing.
• Data can be lost across an interface, one module can have an adverse effect on another when combined, data structures can present problems, and so on.
• Integration testing is a systematic technique for constructing the software architecture while conducting tests to uncover errors associated with interfacing.
Top down Integration testing:
• Modules are integrated by moving downward through the control hierarchy, beginning with the main module (main program).
• Subordinate modules are integrated in either a depth-first or a breadth-first manner.
• Figure 1 (next figure) shows top-down integration testing.
• In depth-first integration, selecting the left-hand path, components M1, M2, and M5 are integrated first; next M8 or M6 is integrated, and then the right-hand paths are tested.
• Breadth-first integration incorporates components moving horizontally, level by level.
• Components M2, M3, and M4 are integrated first, next M5, M6, and so on (a small sketch of both orders follows below).
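The Python sketch below derives both integration orders from an assumed module hierarchy. The tree used here (M1 calls M2, M3, M4; M2 calls M5, M6; M5 calls M8) is only a guess at the structure of figure 1 and is for illustration.

from collections import deque

# Assumed module hierarchy: parent -> list of subordinate modules.
tree = {
    "M1": ["M2", "M3", "M4"],
    "M2": ["M5", "M6"],
    "M5": ["M8"],
}

def depth_first(root):
    order, stack = [], [root]
    while stack:
        node = stack.pop()
        order.append(node)
        # Push children in reverse so the left-hand path is integrated first.
        stack.extend(reversed(tree.get(node, [])))
    return order

def breadth_first(root):
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(tree.get(node, []))
    return order

print(depth_first("M1"))    # ['M1', 'M2', 'M5', 'M8', 'M6', 'M3', 'M4']
print(breadth_first("M1"))  # ['M1', 'M2', 'M3', 'M4', 'M5', 'M6', 'M8']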
Regression Testing:
• Each time a new module is added, the software changes: new data flow paths and new input/output may occur.
• These changes can cause problems with functions that previously worked properly.
• In the context of integration testing, regression testing is the re-execution of tests on the changed software to make sure the changes have not produced side effects elsewhere in the software.
• Regression testing makes sure that changes do not introduce additional errors (a small sketch of regression test selection follows below).
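A tiny Python sketch of the idea: keep a record of which modules each existing test exercises and re-run the affected subset whenever modules change. All test and module names here are hypothetical.

# Hypothetical mapping from test case to the modules it exercises.
test_to_modules = {
    "test_login":    {"auth", "session"},
    "test_checkout": {"cart", "payment"},
    "test_report":   {"reporting"},
}

def regression_suite(changed_modules):
    """Select every test that exercises at least one changed module."""
    return [name for name, mods in test_to_modules.items()
            if mods & changed_modules]

print(regression_suite({"payment"}))            # ['test_checkout']
print(regression_suite({"auth", "reporting"}))  # ['test_login', 'test_report']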
Smoke Testing:
• It is also an integration testing approach that is used when software is being developed.
• Smoke testing allows a project to be tested on a frequent (daily) basis.
• Whenever the software is rebuilt, it is smoke tested so that errors are found daily.
• These frequent tests allow easy detection and correction of errors.
• They also make it easy to assess the progress of the software.
SOFTWARE TESTING:
• There are two major categories of software testing:
Black-box testing and white-box testing
2. Equivalence partitioning:
• Divides all possible inputs into classes such that there is a finite number of equivalence classes.
• Equivalence class
-- A set of objects that can be linked by a relationship
• Reduces the cost of testing
• Example: the input consists of values from 1 to 10
• Then the classes are n < 1, 1 <= n <= 10, and n > 10
• Choose one valid class with a value within the allowed range and two invalid classes, one with a value greater than the maximum and one with a value smaller than the minimum (a small sketch follows below).
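A minimal Python sketch of the 1-to-10 example: one representative value is taken from the valid class and one from each invalid class. The function accept() is a hypothetical unit under test.

def accept(n):
    """Hypothetical unit under test: valid only for 1 <= n <= 10."""
    return 1 <= n <= 10

# Equivalence classes: n < 1 (invalid), 1 <= n <= 10 (valid), n > 10 (invalid).
test_cases = {
    "invalid_below": 0,
    "valid": 5,
    "invalid_above": 11,
}

for name, value in test_cases.items():
    expected = (name == "valid")
    assert accept(value) == expected, f"{name} failed for input {value}"
print("all equivalence-class test cases passed")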
3. Boundary Value analysis:
• Select inputs from the equivalence classes such that the input lies at the edge of an equivalence class.
• The test data lie on the edge (boundary) of a class of input data, or generate output that lies at the boundary of a class of output data.
• Example: if 0.0 <= x <= 1.0
• Then the test cases are (0.0, 1.0) for valid input and (-0.1, 1.1) for invalid input.
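A companion Python sketch for the 0.0-to-1.0 example: boundary value analysis picks test values at and just beyond the edges of the valid range. in_range() is again a hypothetical unit under test.

def in_range(x):
    """Hypothetical unit under test: valid only for 0.0 <= x <= 1.0."""
    return 0.0 <= x <= 1.0

boundary_cases = [
    (0.0, True),    # lower boundary, valid
    (1.0, True),    # upper boundary, valid
    (-0.1, False),  # just below the lower boundary, invalid
    (1.1, False),   # just above the upper boundary, invalid
]

for x, expected in boundary_cases:
    assert in_range(x) == expected, f"boundary case {x} failed"
print("all boundary-value test cases passed")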
4. Orthogonal array Testing:
• Applied to problems in which the input domain is relatively small but too large for exhaustive testing.
• Example: three inputs A, B, C, each having three values, would require 27 test cases for exhaustive testing.
• L9 orthogonal array testing reduces the number of test cases to 9, as shown below:
A B C
1 1 1
1 2 2
1 3 3
2 1 2
2 2 3
2 3 1
3 1 3
3 2 1
3 3 2
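The defining property of the L9 array is that, for every pair of factors, all nine value combinations appear exactly once, so pairwise interactions are covered with only 9 of the 27 possible tests. The short Python check below verifies this for the array listed above.

from itertools import combinations

L9 = [
    (1, 1, 1), (1, 2, 2), (1, 3, 3),
    (2, 1, 2), (2, 2, 3), (2, 3, 1),
    (3, 1, 3), (3, 2, 1), (3, 3, 2),
]

factors = ["A", "B", "C"]
for i, j in combinations(range(3), 2):
    pairs = {(row[i], row[j]) for row in L9}
    assert len(pairs) == 9, f"pair ({factors[i]}, {factors[j]}) is not fully covered"
print("every pair of factors covers all 9 value combinations in just 9 tests")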
Cyclomatic complexity:
• It is a software metric that provides a quantitative measure of the logical complexity of a program.
• It defines the number of independent paths in the basis set of a program.
• It provides an upper bound on the number of tests that must be conducted to make sure that all statements are executed at least once.
• Cyclomatic complexity is computed in one of three ways:
• 1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
• 2. Cyclomatic complexity, V(G), for a flow graph G, is defined as:
• V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
• 3. Also, V(G) = P + 1, where P is the number of predicate nodes.
• The computed cyclomatic complexity for the flow graph of figure 2(b):
• 1. The flow graph has four regions.
• 2. V(G) = 11 edges - 9 nodes + 2 = 4
• 3. V(G) = 3 predicate nodes + 1 = 4
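The formulas can be applied mechanically once the flow graph is known. The Python sketch below computes V(G) for a small illustrative flow graph (not the figure from the notes) represented as an adjacency list; it happens to have 9 nodes, 11 edges, and 3 predicate nodes, so both formulas agree on V(G) = 4.

# Illustrative flow graph: node -> list of successor nodes.
graph = {
    1: [2],
    2: [3, 7],   # predicate node
    3: [4, 5],   # predicate node
    4: [8],
    5: [6],
    6: [8],
    7: [8],
    8: [2, 9],   # predicate node (loop back or exit)
    9: [],
}

nodes = len(graph)
edges = sum(len(succ) for succ in graph.values())
predicates = sum(1 for succ in graph.values() if len(succ) > 1)

print("V(G) = E - N + 2 =", edges - nodes + 2)  # 11 - 9 + 2 = 4
print("V(G) = P + 1     =", predicates + 1)     # 3 + 1 = 4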
Steps of Basis Path Testing:
1. Draw the flow graph from flow chart of the program.
2. Calculate the cyclomatic complexity of the resultant flow graph.
3. Determine the basis set of independent paths.
4. Prepare test cases that will force execution of each path of basis set.
VALIDATION TESTING:
• Validation succeeds when the software functions in a manner that can reasonably be expected by the customer.
• Those expectations are defined in a document that describes all user-visible attributes of the software.
Validation Test Criteria:
Software validation is achieved through a series of tests.
• A test plan provides the class of tests to be conducted, and a test procedure defines
specific test cases.
Both plan and procedure are designed to make sure that:
• All functional requirements are satisfied, behavioral characteristics are achieved, all performance requirements are met, and documentation is correct.
• After each validation test case, one of two possible conditions exists:
• The function or performance characteristic conforms to the specification and is accepted.
• A deviation from the specification is found and a deficiency list is created.
Configuration Review:
• The intention of review is to make sure that all elements of software configuration have
been developed.
• It is sometimes called an audit.
ALPHA AND BETA TESTING:
• When custom software is built for one customer, a series of acceptance tests is conducted to allow the customer to validate all requirements.
• These tests are conducted by the end user rather than software engineers and are a kind of informal "test drive".
• If software is developed to be used by many customers, it is not practical to perform acceptance tests with each one.
• Most software builders use a process called alpha and beta testing to uncover errors that only the end users are able to find.
Alpha Testing:
• It is conducted at the developer’s site by end users.
• The developer is present with the user, recording errors and usage problems; alpha tests are conducted in a controlled environment.
Beta Testing:
• It is conducted at end user sites.
• Unlike alpha testing, the developer is generally not present.
• The beta test is therefore a "live" application of the software in an environment that cannot be controlled by the developer.
• The end user records all problems encountered and reports them to the developer at regular intervals.
• Based on the problems reported during beta tests, software engineers make modifications and then release the software to the customer base.
SYSTEM TESTING:
• Its primary purpose is to test the complete software.
• It is actually a series of different tests whose purpose is to fully test the computer based
system.
1) Recovery Testing:
• Recovery testing is a system test that forces the software to fail in different ways and
verifies that recovery is properly performed.
• If recovery is automatic, then re-initialization, data recovery, and restart are evaluated for
correctness.
• If recovery requires human intervention, the mean time to repair (MTTR) is evaluated.
• Many computer-based systems must recover from faults and resume processing quickly.
• In some cases, a system must be fault tolerant, i.e., faults must not cause overall system function to stop.
• In other cases, a system fault must be corrected within a specified time or severe damage will occur.
2) Security Testing:
• Security testing verifies that the protection mechanisms built into a system will protect it from improper penetration.
• Any system that manages sensitive information is a target for improper or illegal
penetration.
Penetration spans a broad range of activities:
• Hackers who attempt to penetrate the system
• Disgruntled employees who attempt to penetrate for revenge
• And dishonest individuals who penetrate for illicit personal gain.
• During security testing, the tester plays the role of an individual who tries to penetrate the system.
• The tester may try to acquire passwords, attack the system, cause deliberate system errors, use insecure data, or find the key to system entry.
• Given enough time and resources, good security testing will ultimately penetrate a system.
3. Stress Testing:
• Stress testing is performed to check how the program deals with abnormal situations.
• The tester who performs stress testing asks: how high can we push this before it fails?
• Stress testing executes a system that demands resources in abnormal quantity, frequency,
or volume.
• For example :
• 1. Tests are performed that generate ten interrupts per second, when one or two interrupts per second is the average rate.
• 2. The input data rate is increased to check how input functions respond.
• 3. Tests that require maximum memory and other resources are executed.
• 4. Test cases that cause memory management problems are executed.
• A variation of stress testing is called sensitivity testing.
• Sensitivity testing attempts to uncover the data that cause unstable or improper processing.
4) Performance Testing:
• It is designed to test run time performance of software.
• At the unit level, performance of individual module is tested.
• It is often necessary to measure resource utilization.
• Performance testing often requires both hardware and software instrumentation.
• Through instrumentation, the tester can uncover situations that lead to degradation and possible system failure.
PRODUCT METRICS:
• Product metrics: Describe the characteristics of the product such as size,
complexity, design features, performance, and quality level.
• Product metrics for computer software help us to assess quality.
• Measurement: is "The action of measuring something or the size, length, or amount of
something, as established by measuring."
• Key element of any engineering process is measurement.
• Measures are used to assess the quality of products we develop and to understand the
properties of models we create.
• A "measure" is a number that is derived from taking a measurement.
• So measurement relates more to the action of measuring.
• Measurements are the raw data
• Measurement can be used by software engineers to help assess the quality of technical
work products and to assist in decision making as a project proceeds.
• Software measurement is a quantified attribute of a software product or software process.
• It is a discipline within software engineering.
• The content of software measurement is defined and governed by the ISO/IEC 15939 standard (Software Measurement Process).
• Metrics: are "A method of measuring something, or the results obtained from this“.
• Metrics are derived combinations of measurements.
• In contrast, a "metric" is a calculation involving two or more measures.
• Metrics relate more to the method of measuring.
• In software testing, a metric is a quantitative measure of the degree to which a system, system component, or process possesses a given attribute.
• In other words, metrics help in estimating the progress, quality, and health of a software testing effort.
• A software metric is a standard of measure of a degree to which a software system or
process possesses some property.
• Metric is not a measurement (metrics are functions, while measurements are the
numbers obtained by the application of metrics).
• Still these two terms are used as synonyms sometimes
• It is a calculated or composite indicator based upon two or more measures.
• Metrics are defined as "standards of measurement" and are used to indicate a method of measuring the effectiveness and efficiency of a particular activity within a project.
• An example of a metric would be that there were only two user-discovered errors in the
first 18 months of operation.
• This provides more meaningful information than a statement that the delivered system is
of top quality.
• This metric indicates the quality of the product under test.
• It can be used as a basis for estimating defects to be addressed in the next phase or the
next release.
• This is an Organizational Measurement.
• Test metrics are a mechanism to measure, quantitatively, how effective the testing is.
• They are a feedback mechanism for improving the testing process that is currently followed.
• Process metrics can be used to improve software development and maintenance
• Metrics can be defined as “STANDARDS OF MEASUREMENT”.
• Software Metrics are used to measure the quality of the project.
• Metric is a unit used for describing an attribute.
• Measure :
• -- Provides a quantitative indication of the extent, amount, dimension, capacity or size of
some attribute of a product or process
Metric (IEEE 93 definition):
• -- A quantitative measure of the degree to which a system, component, or process possesses a given attribute
• Indicator :
• -- A metric or a combination of metrics that provide insight into the software process, a
software project or a product itself
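The measure -> metric -> indicator chain can be illustrated with a short Python sketch; the raw numbers and the 2-defects-per-KLOC threshold below are made-up placeholders, not real project data.

# Measures: raw, directly collected quantities.
defects_found = 18        # defects found during testing
lines_of_code = 12_000    # size of the delivered source

# Metric: a computed combination of measures (defects per KLOC).
defect_density = defects_found / (lines_of_code / 1000)

# Indicator: the metric compared against a threshold gives insight into
# whether the product or process needs attention.
THRESHOLD = 2.0  # assumed organizational target: at most 2 defects per KLOC
print(f"defect density = {defect_density:.2f} defects/KLOC")
print("within target" if defect_density <= THRESHOLD else "needs attention")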
SOFTWARE QUALITY:
• Everyone will agree that high-quality software is important.
• But what is quality?
• Software quality is Conformance to functional and performance requirements,
documented development standards, and characteristics that are expected from developed
software.
• Factors that affect software quality can be categorized in two broad groups:
• Factors that can be directly measured (e.g. defects found during testing)
• Factors that can be indirectly measured (e.g. usability or maintainability).
• In each case, measurement should occur.
SOFTWARE MEASUREMENT:
Software measurement can be categorized as:
1) Direct Measure
2) Indirect Measure
Direct Measurement:
Direct measures of the software process include cost and effort.
Direct measures of the product include lines of code, execution speed, memory size, and defects per reporting time period.
Indirect Measurement:
Indirect measures examine the quality of the software product itself (e.g., functionality, complexity, efficiency, reliability, and maintainability).
Reasons for measurement:
1. To gain a baseline for comparison with future assessments
2. To determine status with respect to the plan
3. To predict size, cost, and duration estimates
4. To improve product quality and the process