
Software Engineering 1

Unit 4
Software Testing
Introduction
Once the source code has been developed, testing is required to uncover errors in it before the software is put into use. In order to
perform software testing, a series of test cases is designed. Since testing is a complex process, testing activities are broken into smaller
activities to keep the process manageable. For this reason, incremental testing is generally preferred for a project: the system is broken
into a set of subsystems, and these subsystems are tested separately before being integrated to form the complete system for system
testing.

Definition of Testing
1. According to IEEE – “Testing means the process of analyzing a software item to detect the differences between existing and
required conditions (i.e. bugs) and to evaluate the features of the software item”.
2. According to Myers – “Testing is the process of analyzing a program with the intent of finding an error”.

Primary objectives of Software Testing


According to Glen Myers the primary objectives of testing are:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as yet undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.
Testing cannot show the absence of errors and defects; it can only show that errors and defects are present. Hence, the objective of testing is to design
tests that systematically uncover different errors and to do so with a minimum amount of time and effort.

ERRORS, FAULT AND FAILURE


Error – The term error refers to the discrepancy between a computed, observed or measured value and the specified value. In other
terms, an error can be defined as the difference between the actual output of the software and the correct output.
Fault – It is a condition that causes a system to fail in performing its required function.
Failure – A software failure occurs if the behavior of the software differs from the specified behavior. It is the stage at which the system becomes
unable to perform a required function according to the specification.

TEST ORACLES
A test oracle is a mechanism, different from the program itself, that can be used to check the correctness of the output of the program for the test
cases. In order to test any program we need a description of its expected behavior and a method of determining whether the observed
behavior conforms to the expected behavior. For this, a test oracle is needed.

Test oracles generally use the system specification of the program to decide what the correct behavior of the program should be. In order to
determine the correct behavior, it is important that the behavior of the system be unambiguously specified and that the specification itself be
error free.
Fig (a) Test Oracles – test cases are fed both to the software under test and to the test oracle; a comparator compares the two results of testing.
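As a minimal illustration (not part of the original notes; the square-root routine, tolerance and all names are invented), the sketch below shows the oracle idea: the expected behavior is derived from the specification independently of the implementation, and a comparator decides whether the observed output conforms to it.

```python
import math

def sqrt_under_test(x):
    # Hypothetical software under test.
    return x ** 0.5

def oracle(x):
    # Test oracle: derives the expected result from the specification,
    # independently of the implementation under test.
    return math.sqrt(x)

def comparator(observed, expected, tolerance=1e-9):
    # Decides whether the observed behavior conforms to the expected behavior.
    return abs(observed - expected) <= tolerance

for x in [0.0, 1.0, 2.0, 144.0]:
    verdict = comparator(sqrt_under_test(x), oracle(x))
    print(f"input={x}: {'pass' if verdict else 'fail'}")
```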


Testing Principles
Before applying methods to design effective test cases, a software engineer must understand the basic principles that guide software testing.
Davis suggested a set of testing principles as follows:
1. All tests should be traceable to customer requirements – From the customer’s point of view, the most severe defects are those
that cause the program to fail to meet its requirements. Hence, tests should be designed keeping the customer’s requirements
in view.
2. Tests should be planned long before testing begins – Test planning can begin as soon as the requirements model is complete.
Detailed definition of test cases can begin as soon as the design model has been solidified. Therefore, all tests can be planned
and designed before any code has been generated.
3. The Pareto principle applies to software testing – Stated simply, the Pareto principle implies that 80 percent of all errors
uncovered during testing will likely be traceable to 20 percent of all program components. The problem, of course, is to isolate
these suspect components and to thoroughly test them.
4. Testing should begin “in the small” and progress toward testing “in the large” – The first tests planned and executed generally
focus on individual components. As testing progresses, focus shifts in an attempt to find errors in integrated clusters of components
and ultimately in the entire system.
5. Exhaustive testing is not possible – The number of path permutations for even a moderately sized program is exceptionally large.
For this reason, it is impossible to execute every combination of paths during testing. It is possible, however, to adequately cover
program logic and to ensure that all conditions in the component-level design have been exercised.
6. To be most effective, testing should be conducted by an independent third party – By most effective, we mean testing that has
the highest probability of finding errors. The software engineer who created the system is not the best person to conduct all tests for
the software, so, to be most effective, an independent third party should be involved in performing the tests.

Attributes of Good Test


Kaner, Falk, and Nguyen suggest the following attributes of a “good” test:
1. A good test has a high probability of finding an error.
To achieve this goal, the tester must understand the software and attempt to develop a mental picture of how the software
might fail. Ideally, the classes of failure are probed. For example, one class of potential failure in a GUI (graphical user interface)
is a failure to recognize proper mouse position. A set of tests would be designed to exercise the mouse in an attempt to
demonstrate an error in mouse position recognition.
2. A good test is not redundant.
Testing time and resources are limited. There is no point in conducting a test that has the same purpose as another test.
Every test should have a different purpose (even if it is subtly different). For example, a module of software is designed to
recognize a user password to activate and deactivate the system. In an effort to uncover an error in password input, the tester
designs a series of tests that input a sequence of passwords. Valid and invalid passwords (four numeral sequences) are input as
separate tests. However, each valid/invalid password should probe a different mode of failure. For example, the invalid password
1234 should not be accepted by a system programmed to recognize 8080 as the valid password. If it is accepted, an error is
present. Another test input, say 1235, would have the same purpose as 1234 and is therefore redundant. However, the invalid input
8081 or 8180 has a subtle difference, attempting to demonstrate that an error exists for passwords “close to” but not identical with
the valid password.
3. A good test should be “best of breed”.
In a group of tests that have a similar intent, time and resource limitations may mitigate toward the execution of only a
subset of these tests. In such cases, the test that has the highest likelihood of uncovering a whole class of errors should be used.
4. A good test should be neither too simple nor too complex.
Although it is sometimes possible to combine a series of tests into one test case, the possible side effects associated with
this approach may mask errors. In general, each test should be executed separately.

Test Case Design


Test case design provides the developer with a systematic approach to testing and also provides a mechanism that can help to ensure
the completeness of the tests and give the highest likelihood of uncovering errors in the software.
Effort and time are needed to design and execute the test cases, machine time is needed to execute the program for those test cases, and
further effort is needed to evaluate the results. Hence, the fundamental goals of a testing activity are to minimize the number of test cases and
maximize the errors uncovered. According to Yamaura there is only one rule in designing test cases: cover all features,
but do not make too many test cases.
An ideal set of test cases is one that includes all the possible inputs to the program. This is called exhaustive testing. However, such
testing is practically impossible because the number of elements in the input domain is very large, and testing them all through one ideal set is not
feasible. Hence, a realistic goal for testing is to select a set of test cases that is close to the ideal.

White Box Testing (Glass Box Testing) (Structural Testing)


1. In this testing technique the internal logic of the software components is tested.
2. It is a test case design method that uses the control structure of the procedural design to derive test cases.
3. It is done in the early stages of software development.
4. Using this testing technique the software engineer can derive test cases that ensure (a minimal sketch follows this list):
a) All independent paths within a module have been exercised at least once.
b) Both the true and false paths of every logical decision have been exercised.
c) All loops are executed at their boundaries.
d) Internal data structures are exercised to ensure their validity.
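The following minimal sketch (not from the original notes; the function and values are invented) shows what these criteria look like in practice: one test per independent path, both outcomes of the logical decision, and the loop exercised at its boundaries.

```python
def count_positives(values, limit):
    """Counts positive numbers in `values`, but never more than `limit`."""
    count = 0
    for v in values:           # loop: exercise with zero, one and many elements
        if v > 0:              # decision: exercise both the true and false paths
            count += 1
        if count == limit:     # boundary: exercise reaching the limit exactly
            break
    return count

# White-box test cases exercising the paths and boundaries above.
assert count_positives([], 5) == 0           # loop body never executed
assert count_positives([3], 5) == 1          # one iteration, decision true
assert count_positives([-1], 5) == 0         # one iteration, decision false
assert count_positives([1, 2, 3], 2) == 2    # early exit at the limit boundary
print("all white-box cases passed")
```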
Advantages:
1. As knowledge of the internal coding structure is a prerequisite, it becomes very easy to find out which type of input/data can help in
testing the application effectively.
2. The other advantage of white box testing is that it helps in optimizing the code.
3. It helps in removing the extra lines of code, which can bring in hidden defects.
4. We can test the structural logic of the software.
5. Every statement is tested thoroughly.
6. Forces test developer to reason carefully about implementation.
7. Approximate the partitioning done by execution equivalence.
8. Reveals errors in "hidden" code.
Disadvantages:
1. It does not ensure that the user requirements are fulfilled.
2. As knowledge of code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing, which
increases the cost.
3. It is nearly impossible to look into every bit of code to find out hidden errors, which may create problems, resulting in failure of
the application.
4. The tests may not be applicable in real world situations.
5. Cases omitted in the code could be missed out.

Black Box Testing (Behavioral Testing) (Closed box Testing)


1. This technique exercises the input and output domain of the program to uncover errors in program function, behavior and
performance.
2. Software requirements are exercised using “black box” test case design technique.
3. It is done in the later stage of the software development.
4. Black box testing technique attempts to find errors related to:
a) Missing functions or incorrect functions.
b) Errors created due to interfaces.
c) Errors in accessing external databases.
d) Performance related errors.
e) Behavior related errors.
f) Initialization and termination errors.
5. In black box testing the main focus is on the information domain. Tests are designed to answer the following questions:
a) How is functional validity tested?
b) What class of input will make good test cases?
c) Is the system particularly sensitive to certain input values?
d) How are the boundaries of a data class isolated?
e) What data rates and data volume can the system tolerate?
f) What effects will specific combinations of data have on system operation?
Advantages:
1. More effective on larger units of code than glass box testing.
2. Tester needs no knowledge of implementation, including specific programming languages.
3. Tester and programmer are independent of each other.
4. Tests are done from a user's point of view.
5. Will help to expose any ambiguities or inconsistencies in the specifications.
6. Test cases can be designed as soon as the specifications are complete.
Disadvantages:
1. Only a small number of possible inputs can actually be tested, to test every possible input stream would take nearly forever.
2. Without clear and concise specifications, test cases are hard to design.
3. There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried.
4. May leave many program paths untested.
5. Cannot be directed toward specific segments of code which may be very complex (and therefore more error prone).
6. Most testing related research has been directed toward glass box testing.
Difference between Black box and White box testing
1. Black box testing focuses on the functional requirements of the software, i.e., it enables the software engineer to derive sets of input
   conditions that will fully exercise all functional requirements for a system.
   White box testing focuses on procedural details, i.e., the internal logic of a program.
2. Black box testing is not an alternative to the white box technique; rather, it is a complementary approach that is likely to uncover a
   different class of errors.
   White box testing concentrates mainly on internal logic.
3. Black box testing is applied during the later stages of testing.
   White box testing is performed early in the testing process.
4. Black box testing attempts to find errors in the following categories: (a) incorrect or missing functions, (b) interface errors,
   (c) errors in data structures or external database access, (d) performance errors, and (e) initialization and termination errors.
   White box testing attempts to find errors in the following cases: (a) the internal logic of the program and (b) the status of the program.
5. Black box testing disregards the control structure of the procedural design (i.e., we do not consider what the control structure of
   the program is).
   White box testing uses the control structure of the procedural design to derive test cases.
6. Black box testing broadens our focus on the information domain and might be called “testing in the large”, i.e., testing bigger
   monolithic programs.
   White box testing is “testing in the small”, i.e., testing small program components (e.g. modules or small groups of modules).
7. Using black box testing techniques, we derive a set of test cases that satisfy the following criteria: (a) test cases that reduce, by a
   count that is greater than one, the number of additional test cases that must be designed to achieve reasonable testing, and (b) test
   cases that tell us something about the presence or absence of classes of errors rather than an error associated only with the specific
   tests at hand.
   Using white box testing, the software engineer can derive test cases that (a) guarantee that all independent paths within a module
   have been exercised at least once, (b) exercise all logical decisions on their true and false sides, (c) execute all loops at their
   boundaries and within their operational bounds, and (d) exercise internal data structures to ensure their validity.
8. Black box testing includes the tests that are conducted at the software interface.
   In white box testing a close examination of procedural detail is done.
9. Black box tests are used to uncover errors.
   In white box testing, logical paths through the software are tested by providing test cases that exercise specific sets of conditions
   or loops.
10. Black box testing demonstrates that software functions are operational, i.e., that input is properly accepted, output is correctly
    produced, and the integrity of external information (e.g. a database) is maintained.
    White box testing, however, examines only a limited set of logical paths.

A Software Testing Strategy


The software engineering process may be viewed as the spiral illustrated in Figure below. Initially, system engineering defines the role of
software and leads to software requirements analysis, where the information domain, function, behavior, performance, constraints, and
validation criteria for software are established. Moving inward along the spiral, we come to design and finally to coding. To develop
computer software, we spiral inward along streamlines that decrease the level of abstraction on each turn.

Fig A testing Strategy


A strategy for software testing may also be viewed in the context of the spiral. Unit testing begins at the vortex of the spiral and concentrates
on each unit (i.e., component) of the software as implemented in source code. Testing progresses by moving outward along the spiral to
integration testing, where the focus is on design and the construction of the software architecture. Taking another turn outward on the spiral,
we encounter validation testing, where requirements established as part of software requirements analysis are validated against the software
that has been constructed. Finally, we arrive at system testing, where the software and other system elements are tested as a whole. To test
computer software, we spiral out along streamlines that broaden the scope of testing with each turn.

Various Levels of Testing


If we view the process of testing in the context of software engineering, it is actually a series of four steps that are implemented sequentially.
Each step, known as a level, is meant to test a different aspect of the system. The basic levels of testing are
 Unit Testing
 Integration Testing
 Validation Testing
 System Testing
The relation between the phase in which a fault is introduced and the level of testing that detects it is shown below:
System engineering – System Testing
Requirements – Validation Testing
Design – Integration Testing
Code – Unit Testing

1. Unit Testing – This testing is essentially for verification of the code produced during the coding phase. The basic objective of this
level is to test the internal logic of the module. Unit testing makes heavy use of the white box testing technique, exercising specific
paths in a module’s control structure to ensure complete coverage and maximum error detection. In unit testing the module interface is
tested to ensure that information properly flows into and out of the program unit. The local data structure is also tested to ensure that data
stored temporarily maintains its integrity during execution.
2. Integration Testing – This testing addresses the issues of verification and program construction. The black box testing technique is widely
used in this testing strategy, although a limited amount of white box testing may be used to ensure coverage of control paths. The basic
emphasis of this testing is on the interfaces between the modules.

3. Validation Testing – The criteria established during requirements analysis are tested, providing final assurance that the software
meets all functional, behavioral and performance requirements.
4. System Testing – Here the entire software is tested. The last high order testing stage falls outside the boundary of software
engineering. System testing verifies that all elements mesh properly and that overall system function/performance is achieved.

Guidelines for a Successful Testing Strategy


1. Specify product requirements in a quantifiable manner long before testing commences.
2. State testing objectives explicitly.
3. Understand the users of the software and develop a profile for each user category.
4. Develop a testing plan that emphasizes “rapid cycle testing.”
5. Build “robust” software that is designed to test itself.
6. Use effective formal technical reviews as a filter prior to testing.
7. Conduct formal technical reviews to assess the test strategy and test cases themselves.
8. Develop a continuous improvement approach for the testing process.

Unit Testing
Unit testing focuses verification effort on the smallest unit of software design—the software component or module. Using the component-
level design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The relative
complexity of tests and uncovered errors is limited by the constrained scope established for unit testing. The unit test is white-box oriented,
and the step can be conducted in parallel for multiple components.

Unit Test Considerations


The tests that occur as part of unit tests are illustrated schematically in Figure below. The module interface is tested to ensure that
information properly flows into and out of the program unit under test. The local data structure is examined to ensure that data stored
temporarily maintains its integrity during all steps in an algorithm's execution. Boundary conditions are tested to ensure that the module
operates properly at boundaries established to limit or restrict processing. All independent paths (basis paths) through the control structure
are exercised to ensure that all statements in a module have been executed at least once. And finally, all error handling paths are tested.

Fig Unit Test


Tests of data flow across a module interface are required before any other test is initiated. If data do not enter and exit properly, all other tests
are moot. In addition, local data structures should be exercised and the local impact on global data should be ascertained (if possible) during
unit testing.
Selective testing of execution paths is an essential task during the unit test. Test cases should be designed to uncover errors due to erroneous
computations, incorrect comparisons, or improper control flow. Basis path and loop testing are effective techniques for uncovering a broad
array of path errors.
Among the more common errors in computation are
(1) Misunderstood or incorrect arithmetic precedence,
(2) Mixed mode operations,
(3) Incorrect initialization,
(4) Precision inaccuracy,
(5) Incorrect symbolic representation of an expression.
Comparison and control flow are closely coupled to one another (i.e., change of flow frequently occurs after a comparison). Test cases
should uncover errors such as
(1) Comparison of different data types,
(2) Incorrect logical operators or precedence,
(3) Expectation of equality when precision error makes equality unlikely,
(4) Incorrect comparison of variables,
(5) Improper or nonexistent loop termination,
(6) Failure to exit when divergent iteration is encountered, and
(7) Improperly modified loop variables.
Among the potential errors that should be tested when error handling is evaluated are
1. Error description is unintelligible.
2. Error noted does not correspond to error encountered.
3. Error condition causes system intervention prior to error handling.
4. Exception-condition processing is incorrect.
5. Error description does not provide enough information to assist in the location of the cause of the error.

Boundary testing is the last (and probably most important) task of the unit test step. Software often fails at its boundaries. That is, errors often
occur when the nth element of an n-dimensional array is processed, when the ith repetition of a loop with i passes is invoked, when the
maximum or minimum allowable value is encountered. Test cases that exercise data structure, control flow, and data values just below, at,
and just above maxima and minima are very likely to uncover errors.

Unit Test Procedures


Unit testing is normally considered as an adjunct to the coding step. After source level code has been developed, reviewed, and verified for
correspondence to component level design, unit test case design begins. A review of design information provides guidance for establishing
test cases that are likely to uncover errors in each of the categories discussed earlier. Each test case should be coupled with a set of expected
results.
Fig Unit Test Environment
Because a component is not a stand-alone program, driver and/or stub software must be developed for each unit test. The unit test
environment is illustrated in Figure above. In most applications a driver is nothing more than a "main program" that accepts test case data,
passes such data to the component (to be tested), and prints relevant results. Stubs serve to replace modules that are subordinate (called by)
the component to be tested. A stub or "dummy subprogram" uses the subordinate module's interface, may do minimal data manipulation,
prints verification of entry, and returns control to the module undergoing testing. Drivers and stubs represent overhead. That is, both are
software that must be written (formal design is not commonly applied) but that is not delivered with the final software product. If drivers and
stubs are kept simple, actual overhead is relatively low. Unfortunately, many components cannot be adequately unit tested with "simple"
overhead software. In such cases, complete testing can be postponed until the integration test step (where drivers or stubs are also used).
Unit testing is simplified when a component with high cohesion is designed. When only one function is addressed by a component, the
number of test cases is reduced and errors can be more easily predicted and uncovered.
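As a minimal sketch (not from the original notes; the invoice and billing names are invented), the code below shows a driver acting as a small "main program" that feeds test case data to the component under test, and a stub standing in for a subordinate module the component would normally call.

```python
def billing_stub(order_total):
    # Stub ("dummy subprogram"): replaces the real subordinate billing module.
    # It uses the subordinate module's interface, does minimal data
    # manipulation, prints verification of entry, and returns control.
    print(f"  billing_stub entered with order_total={order_total}")
    return 0.0  # pretend no tax or surcharge is added

def compute_invoice(order_total, billing_module):
    # Component under test: it normally calls the real billing module.
    return order_total + billing_module(order_total)

def driver():
    # Driver: accepts test case data, passes it to the component under test,
    # and prints the relevant results.
    for order_total, expected in [(0.0, 0.0), (100.0, 100.0)]:
        actual = compute_invoice(order_total, billing_stub)
        print(f"order_total={order_total}: expected={expected}, actual={actual}")

if __name__ == "__main__":
    driver()
```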

Advantage of Unit Testing


1. Can be applied directly to object code and does not require processing source code.
2. Performance profilers commonly implement this measure.
Disadvantages of Unit Testing
1. Insensitive to some control structures (number of iterations)
2. Does not report whether loops reach their termination condition
3. Statement coverage is completely insensitive to the logical operators (|| and &&).

Integration Testing
Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors
associated with interfacing. The objective is to take unit tested components and build a program structure that has been dictated by design.
There is often a tendency to attempt non incremental integration; that is, to construct the program using a "big bang" approach. All
components are combined in advance. The entire program is tested as a whole. And chaos usually results! A set of errors is encountered.
Correction is difficult because isolation of causes is complicated by the vast expanse of the entire program. Once these errors are corrected,
new ones appear and the process continues in a seemingly endless loop.
Incremental integration is the antithesis of the big bang approach. The program is constructed and tested in small increments, where errors
are easier to isolate and correct; interfaces are more likely to be tested completely; and a systematic test approach may be applied.
Top-down Integration
Top-down integration testing is an incremental approach to construction of program structure. Modules are integrated by moving downward
through the control hierarchy, beginning with the main control module (main program). Modules subordinate (and ultimately subordinate) to
the main control module are incorporated into the structure in either a depth-first or breadth-first manner.

Fig Top – Down Integration


Referring to Figure above, depth-first integration would integrate all components on a major control path of the structure. Selection of a
major path is somewhat arbitrary and depends on application-specific characteristics. For example, selecting the left-hand path, components
M1, M2, M5 would be integrated first. Next, M8 or (if necessary for proper functioning of M2) M6 would be integrated. Then, the central
and right-hand control paths are built. Breadth-first integration incorporates all components directly subordinate at each level, moving
across the structure horizontally. From the figure, components M2, M3, and M4 (a replacement for stub S4) would be integrated first. The
next control level, M5, M6, and so on, follows.

The integration process is performed in a series of five steps:


1. The main control module is used as a test driver and stubs are substituted for all components directly subordinate to the main control
module.
2. Depending on the integration approach selected (i.e., depth or breadth first), subordinate stubs are replaced one at a time with
actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been
introduced. The process continues from step 2 until the entire program structure is built.
The top-down integration strategy verifies major control or decision points early in the test process. In a well-factored program structure,
decision making occurs at upper levels in the hierarchy and is therefore encountered first. If major control problems do exist, early
recognition is essential. If depth-first integration is selected, a complete function of the software may be implemented and demonstrated. For
example, consider a classic transaction structure in which a complex series of interactive inputs is requested, acquired, and validated via an
incoming path. The incoming path may be integrated in a top-down manner. All input processing (for subsequent transaction dispatching)
may be demonstrated before other elements of the structure have been integrated. Early demonstration of functional capability is a
confidence builder for both the developer and the customer.
Top-down strategy sounds relatively uncomplicated, but in practice, logistical problems can arise. The most common of these problems
occurs when processing at low levels in the hierarchy is required to adequately test upper levels. Stubs replace low-level modules at the
beginning of top-down testing; therefore, no significant data can flow upward in the program structure. The tester is left with three choices:
(1) Delay many tests until stubs are replaced with actual modules,
(2) Develop stubs that perform limited functions that simulate the actual module, or
(3) Integrate the software from the bottom of the hierarchy upward.
The first approach (delay tests until stubs are replaced by actual modules) causes us to lose some control over correspondence between
specific tests and incorporation of specific modules. This can lead to difficulty in determining the cause of errors and tends to violate the
highly constrained nature of the top-down approach. The second approach is workable but can lead to significant overhead, as stubs become
more and more complex.

Bottom-up Integration
Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., components at the lowest
levels in the program structure). Because components are integrated from the bottom up, processing required for components subordinate to
a given level is always available and the need for stubs is eliminated.

A bottom-up integration strategy may be implemented with the following steps:


1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software sub-function.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.

Fig Bottom up Integration


Integration follows the pattern illustrated in Figure above. Components are combined to form clusters 1, 2, and 3. Each of the clusters is
tested using a driver (shown as a dashed block). Components in clusters 1 and 2 are subordinate to Ma. Drivers D1 and D2 are removed and
the clusters are interfaced directly to Ma. Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. Both Ma and Mb
will ultimately be integrated with component Mc, and so forth.
As integration moves upward, the need for separate test drivers lessens. In fact, if the top two levels of program structure are integrated top
down, the number of drivers can be reduced substantially and integration of clusters is greatly simplified.

Regression Testing
Each time a new module is added as part of integration testing, the software changes. New data flow paths are established, new I/O may
occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly. In the context
of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted to ensure that
changes have not propagated unintended side effects.
In a broader context, successful tests (of any kind) result in the discovery of errors, and errors must be corrected. Whenever software is
corrected, some aspect of the software configuration (the program, its documentation, or the data that support it) is changed. Regression
testing is the activity that helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behavior or additional
errors.
Regression testing may be conducted manually, by re-executing a subset of all test cases or using automated capture/playback tools.
Capture/playback tools enable the software engineer to capture test cases and results for subsequent playback and comparison.
The regression test suite (the subset of tests to be executed) contains three different classes of test cases:
 A representative sample of tests that will exercise all software functions.
 Additional tests that focus on software functions that are likely to be affected by the change.
 Tests that focus on the software components that have been changed.

As integration testing proceeds, the number of regression tests can grow quite large. Therefore, the regression test suite should be designed
to include only those tests that address one or more classes of errors in each of the major program functions. It is impractical and inefficient
to re-execute every test for every program function once a change has occurred.
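A hedged sketch of how such a regression subset might be organized (the functions under test and the tagging scheme are invented; a real project would typically use the tagging facilities of its test framework instead):

```python
def add(a, b):
    return a + b

def discount(price, percent):
    return price * (1 - percent / 100)

# Each entry: (test name, classes it belongs to, callable returning True on pass).
REGRESSION_SUITE = [
    ("add_basic",         {"representative"},      lambda: add(2, 3) == 5),
    ("discount_changed",  {"changed_component"},   lambda: discount(200, 10) == 180),
    ("discount_affected", {"affected_by_change"},  lambda: discount(0, 50) == 0),
]

def run_regression(selected_classes):
    # Re-execute only the tests that fall into the requested classes.
    for name, classes, test in REGRESSION_SUITE:
        if classes & selected_classes:
            print(f"{name}: {'pass' if test() else 'FAIL'}")

# After a change to discount(), re-run the subset covering the changed
# component and the functions likely to be affected by the change.
run_regression({"changed_component", "affected_by_change"})
```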

Smoke Testing
Smoke testing is an integration testing approach that is commonly used when “shrink – wrapped” software products are being developed. It
is designed as a pacing mechanism for time-critical projects, allowing the software team to assess its project on a frequent basis. In essence,
the smoke testing approach encompasses the following activities:
1. Software components that have been translated into code are integrated into a “build.” A build includes all data files, libraries,
reusable modules, and engineered components that are required to implement one or more product functions.
2. A series of tests is designed to expose errors that will keep the build from properly performing its function. The intent should be
to uncover “show stopper” errors that have the highest likelihood of throwing the software project behind schedule.
3. The build is integrated with other builds and the entire product (in its current form) is smoke tested daily. The integration approach may be
top down or bottom up.
The daily frequency of testing the entire product may surprise some readers. However, frequent tests give both managers and practitioners a
realistic assessment of integration testing progress. McConnell describes the smoke test in the following manner:
“The smoke test should exercise the entire system from end to end. It does not have to be exhaustive, but it should be capable of exposing
major problems. The smoke test should be thorough enough that if the build passes, you can assume that it is stable enough to be tested more
thoroughly”.
Smoke testing provides a number of benefits when it is applied on complex, time critical software engineering projects:
 Integration risk is minimized. Because smoke tests are conducted daily, incompatibilities and other show-stopper errors are
uncovered early, thereby reducing the likelihood of serious schedule impact when errors are uncovered.
 The quality of the end-product is improved. Because the approach is construction (integration) oriented, smoke testing is likely to
uncover both functional errors and architectural and component-level design defects. If these defects are corrected early, better
product quality will result.
 Error diagnosis and correction are simplified. Like all integration testing approaches, errors uncovered during smoke testing are
likely to be associated with “new software increments”—that is, the software that has just been added to the build(s) is a probable
cause of a newly discovered error.
 Progress is easier to assess. With each passing day, more of the software has been integrated and more has been demonstrated
to work. This improves team morale and gives managers a good indication that progress is being made.

Validation Testing
At the culmination of integration testing, software is completely assembled as a package, interfacing errors have been uncovered and
corrected, and a final series of software tests—validation testing—may begin. Validation can be defined in many ways, but a simple (albeit
harsh) definition is that validation succeeds when software functions in a manner that can be reasonably expected by the customer. At this
point a battle-hardened software developer might protest: "Who or what is the arbiter of reasonable expectations?"
Reasonable expectations are defined in the Software Requirements Specification— a document that describes all user-visible attributes of
the software. The specification contains a section called Validation Criteria. Information contained in that section forms the basis for a
validation testing approach.

Validation Test Criteria


Software validation is achieved through a series of black-box tests that demonstrate conformity with requirements. A test plan outlines the
classes of tests to be conducted and a test procedure defines specific test cases that will be used to demonstrate conformity with
requirements. Both the plan and procedure are designed to ensure that all functional requirements are satisfied, all behavioral characteristics
are achieved, all performance requirements are attained, documentation is correct, and human engineered and other requirements are met
(e.g., transportability, compatibility, error recovery, maintainability).
After each validation test case has been conducted, one of two possible conditions exists:
(1) The function or performance characteristics conform to specification and are accepted or
(2) A deviation from specification is uncovered and a deficiency list is created. Deviation or error discovered at this stage in a project can
rarely be corrected prior to scheduled delivery. It is often necessary to negotiate with the customer to establish a method for resolving
deficiencies.

Configuration Review
An important element of the validation process is a configuration review. The intent of the review is to ensure that all elements of the
software configuration have been properly developed, are cataloged, and have the necessary detail to bolster the support phase of the
software life cycle.

Alpha Testing
The alpha test is conducted at the developer's site by a customer. The software is used in a natural setting with the developer "looking over
the shoulder" of the user and recording errors and usage problems. Alpha tests are conducted in a controlled environment.
Beta Testing
The beta test is conducted at one or more customer sites by the end-user of the software. Unlike alpha testing, the developer is generally not
present. Therefore, the beta test is a "live" application of the software in an environment that cannot be controlled by the developer. The
customer records all problems (real or imagined) that are encountered during beta testing and reports these to the developer at regular
intervals. As a result of problems reported during beta tests, software engineers make modifications and then prepare for release of the
software product to the entire customer base.

System Testing
Software is only one element of a larger computer-based system. Ultimately, software is incorporated with other system elements (e.g.,
hardware, people, information), and a series of system integration and validation tests are conducted. These tests fall outside the scope of the
software process and are not conducted solely by software engineers. However, steps taken during software design and testing can greatly
improve the probability of successful software integration in the larger system.
A classic system testing problem is "finger-pointing." This occurs when an error is uncovered, and each system element developer blames the
other for the problem. Rather than indulging in such nonsense, the software engineer should anticipate potential interfacing problems and
(1) Design error-handling paths that test all information coming from other elements of the system,
(2) Conduct a series of tests that simulate bad data or other potential errors at the software interface,
(3) Record the results of tests to use as "evidence" if finger-pointing does occur, and
(4) Participate in planning and design of system tests to ensure that software is adequately tested.

System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test
has a different purpose, all work to verify that system elements have been properly integrated and perform allocated functions.

Recovery Testing
Many computer based systems must recover from faults and resume processing within a pre-specified time. In some cases, a system must be
fault tolerant; that is, processing faults must not cause overall system function to cease. In other cases, a system failure must be corrected
within a specified period of time or severe economic damage will occur.
Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed.
If recovery is automatic (performed by the system itself), re-initialization, check-pointing mechanisms, data recovery, and restart are
evaluated for correctness. If recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to determine whether it is
within acceptable limits.

Security Testing
Any computer-based system that manages sensitive information or causes actions that can improperly harm (or benefit) individuals is a target
for improper or illegal penetration. Penetration spans a broad range of activities: hackers who attempt to penetrate systems for sport;
disgruntled employees who attempt to penetrate for revenge; dishonest individuals who attempt to penetrate for illicit personal gain.
Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration.
To quote Beizer: "The system's security must, of course, be tested for invulnerability from frontal attack—but must also be tested for
invulnerability from flank or rear attack." During security testing, the tester plays the role(s) of the individual who desires to penetrate the
system. Anything goes! The tester may attempt to acquire passwords through external clerical means; may attack the system with custom
software designed to break down any defenses that have been constructed; may overwhelm the system, thereby denying service to others;
may purposely cause system errors, hoping to penetrate during recovery; may browse through insecure data, hoping to find the key to
system entry.
Given enough time and resources, good security testing will ultimately penetrate a system. The role of the system designer is to make
penetration cost more than the value of the information that will be obtained.

Stress Testing
During earlier software testing steps, white-box and black-box techniques resulted in thorough evaluation of normal program functions and
performance. Stress tests are designed to confront programs with abnormal situations. In essence, the tester who performs stress testing asks:
"How high can we crank this up before it fails?"
Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example,
(1) Special tests may be designed that generate ten interrupts per second, when one or two is the average rate,
(2) Input data rates may be increased by an order of magnitude to determine how input functions will respond,
(3) Test cases that require maximum memory or other resources are executed,
(4) Test cases that may cause thrashing in a virtual operating system are designed,
(5) Test cases that may cause excessive hunting for disk-resident data are created. Essentially, the tester attempts to break
the program.
A variation of stress testing is a technique called sensitivity testing. In some situations (the most common occur in mathematical algorithms),
a very small range of data contained within the bounds of valid data for a program may cause extreme and even erroneous processing or
profound performance degradation. Sensitivity testing attempts to uncover data combinations within valid input classes that may cause
instability or improper processing.

Performance Testing
For real-time and embedded systems, software that provides required function but does not conform to performance requirements is
unacceptable. Performance testing is designed to test the run-time performance of software within the context of an integrated system.
Performance testing occurs throughout all steps in the testing process. Even at the unit level, the performance of an individual module may
be assessed as white-box tests are conducted. However, it is not until all system elements are fully integrated that the true performance of a
system can be ascertained.
Performance tests are often coupled with stress testing and usually require both hardware and software instrumentation. That is, it is often
necessary to measure resource utilization (e.g., processor cycles) in an exacting fashion. External instrumentation can monitor execution
intervals, log events (e.g., interrupts) as they occur, and sample machine states on a regular basis. By instrumenting a system, the tester can
uncover situations that lead to degradation and possible system failure.

Software Testing Techniques with Test Case Design Examples


What is Software Testing Technique?
Software testing techniques help you design better test cases. Since exhaustive testing is not possible, these techniques help reduce
the number of test cases to be executed while increasing test coverage. They help identify test conditions that are otherwise difficult to recognize.

 Boundary Value Analysis (BVA)


Boundary value analysis is based on testing at the boundaries between partitions. It includes maximum, minimum, inside or outside boundaries,
typical values and error values.

It is generally seen that a large number of errors occur at the boundaries of the defined input values rather than the center. It is also known as BVA
and gives a selection of test cases which exercise bounding values.

This black box testing technique complements equivalence partitioning. It is based on the principle that, if a system
works well for these particular values, then it will work perfectly well for all values which come between the two boundary values.
Guidelines for Boundary Value analysis

 If an input condition is restricted between values x and y, then the test cases should be designed with values x and y as well as values
which are above and below x and y.
 If an input condition specifies a large number of values, test cases should be developed which exercise the minimum and maximum
numbers. Here, values above and below the minimum and maximum values are also tested.
 Apply guidelines 1 and 2 to output conditions. It gives an output which reflects the minimum and the maximum values expected. It also
tests the below or above values.

Example:

 Input condition is valid between 1 to 10


 Boundary values 0,1,2 and 9,10,11
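A minimal sketch (not from the original notes; the validator is invented) of boundary value test cases for the 1-to-10 range above:

```python
def accepts(value):
    # Hypothetical validator: the input condition is valid between 1 and 10.
    return 1 <= value <= 10

# Boundary value analysis: test just below, at, and just above each boundary.
boundary_cases = {0: False, 1: True, 2: True, 9: True, 10: True, 11: False}

for value, expected in boundary_cases.items():
    assert accepts(value) == expected, f"boundary case {value} failed"
print("all boundary cases passed")
```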

Equivalence Class Partitioning


 Equivalence Class Partitioning allows you to divide the set of test conditions into partitions which should be considered the same. This
software testing method divides the input domain of a program into classes of data from which test cases should be designed.
 The concept behind this test case design technique is that a test of a representative value of each class is equivalent to a test of any other
value of the same class. It allows you to identify valid as well as invalid equivalence classes.

Example:

Input conditions are valid between

1 to 10 and 20 to 30
Hence there are five equivalence classes

--- to 0 (invalid)
1 to 10 (valid)
11 to 19 (invalid)
20 to 30 (valid)
31 to --- (invalid)
You select values from each class, i.e.,

-2, 3, 15, 25, 45
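A minimal sketch (not from the original notes; the validator is invented) that picks one representative value from each of the five equivalence classes listed above:

```python
def accepts(value):
    # Hypothetical validator: the input is valid between 1-10 and 20-30.
    return 1 <= value <= 10 or 20 <= value <= 30

# One representative per equivalence class: <=0, 1-10, 11-19, 20-30, >=31.
representatives = {-2: False, 3: True, 15: False, 25: True, 45: False}

for value, expected in representatives.items():
    assert accepts(value) == expected, f"class representative {value} failed"
print("one test per equivalence class passed")
```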

Decision Table Based Testing


A decision table is also known as a Cause-Effect table. This software testing technique is used for functions which respond to a combination of
inputs or events. For example, a submit button should be enabled only if the user has entered all required fields.

The first task is to identify functionalities where the output depends on a combination of inputs. If there is a large set of input combinations, then
divide it into smaller subsets which are helpful for managing a decision table.

For every function, you need to create a table and list all combinations of inputs and their respective outputs. This helps to identify a
condition that is overlooked by the tester.

Following are steps to create a decision table:


 Enlist the inputs in rows
 Enter all the rules in the column
 Fill the table with the different combination of inputs
 In the last row, note down the output against the input combination.

Example: A submit button in a contact form is enabled only when all the inputs are entered by the end user.
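A hedged sketch of the submit-button example (the field names are invented): each combination of the three boolean conditions is one rule of the decision table, and the expected output is recorded for every rule.

```python
from itertools import product

def submit_enabled(name_filled, email_filled, message_filled):
    # Hypothetical rule: the submit button is enabled only when every
    # required field of the contact form has been entered.
    return name_filled and email_filled and message_filled

# Decision table: with three boolean conditions there are 2**3 = 8 rules.
for rule in product([True, False], repeat=3):
    expected = all(rule)                  # output column of the decision table
    assert submit_enabled(*rule) == expected
    print(f"name={rule[0]!s:5} email={rule[1]!s:5} message={rule[2]!s:5} "
          f"-> submit enabled: {expected}")
```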
State Transition
In State Transition technique changes in input conditions change the state of the Application Under Test (AUT). This testing technique allows the
tester to test the behavior of an AUT. The tester can perform this action by entering various input conditions in a sequence. In State transition
technique, the testing team provides positive as well as negative input test values for evaluating the system behavior .

Guideline for State Transition:

 State transition should be used when a testing team is testing the application for a limited set of input values.
 The Test Case Design Technique should be used when the testing team wants to test sequence of events which happen in the application
under test.

Example:

In the following example, if the user enters a valid password in any of the first three attempts, the user will be able to log in successfully. If the user
enters an invalid password on the first or second try, the user will be prompted to re-enter the password. When the user enters the password incorrectly
a third time, action is taken and the account is blocked.

State Transition Diagram

In this diagram when the user gives the correct PIN number, he or she is moved to Access granted state. Following Table is created based on the
diagram above-

State Transition Table

State               Correct PIN   Incorrect PIN
S1) Start           S5            S2
S2) 1st attempt     S5            S3
S3) 2nd attempt     S5            S4
S4) 3rd attempt     S5            S6
S5) Access Granted  –             –
S6) Account blocked –             –
In the above table, when the user enters the correct PIN, the state is transitioned to Access Granted. If the user enters an incorrect
password, he or she is moved to the next state. If the user does the same a third time, he or she will reach the Account blocked state.
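A minimal sketch (not from the original notes; the PIN value and state names are invented, and the transitions follow the table above literally) of a table-driven state machine and test sequences through it:

```python
# Transition table: state -> (next state on correct PIN, next state on incorrect PIN).
TRANSITIONS = {
    "S1_start":          ("S5_access_granted", "S2_first_attempt"),
    "S2_first_attempt":  ("S5_access_granted", "S3_second_attempt"),
    "S3_second_attempt": ("S5_access_granted", "S4_third_attempt"),
    "S4_third_attempt":  ("S5_access_granted", "S6_account_blocked"),
}

def run_sequence(pin_attempts, correct_pin="1234"):
    state = "S1_start"
    for pin in pin_attempts:
        on_correct, on_incorrect = TRANSITIONS[state]
        state = on_correct if pin == correct_pin else on_incorrect
        if state in ("S5_access_granted", "S6_account_blocked"):
            break  # terminal states
    return state

# Positive tests: a correct PIN grants access from any attempt state.
assert run_sequence(["1234"]) == "S5_access_granted"
assert run_sequence(["0000", "1234"]) == "S5_access_granted"
# Negative test: repeated incorrect PINs walk through every attempt state
# until the account is blocked.
assert run_sequence(["0000", "1111", "2222", "3333"]) == "S6_account_blocked"
print("state transition tests passed")
```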
Error Guessing
Error Guessing is a software testing technique based on guessing the errors which may be present in the code. The technique relies heavily on
experience: the test analysts use their experience to guess the problematic parts of the application under test. Hence, the test analysts must be
skilled and experienced for better error guessing.
The technique enumerates a list of possible errors or error-prone situations. The tester then writes test cases to expose those errors. To design test cases
based on this software testing technique, the analyst can use past experience to identify the conditions.

Guidelines for Error Guessing:

 The tester should use previous experience of testing similar applications
 Understanding of the system under test
 Knowledge of typical implementation errors
 Remember previously troubled areas
 Evaluate Historical data & Test results

Conclusion
 Test case design techniques allow you to design better test cases. There are five primarily used techniques.
 Boundary value analysis is testing at the boundaries between partitions.
 Equivalent Class Partitioning allows you to divide set of test condition into a partition which should be considered the same.
 Decision Table software testing technique is used for functions which respond to a combination of inputs or events.
 In State Transition technique changes in input conditions change the state of the Application Under Test (AUT)
 Error guessing is a software testing technique which is based on guessing the error which can prevail in the code.

Mutation Testing
What is mutation testing?

Mutation testing, also known as code mutation testing, is a form of white box testing in which testers change specific components of an
application's source code to ensure a software test suite can detect the changes. Changes introduced to the software are intended to cause errors in
the program. Mutation testing is designed to ensure the quality of a software testing tool, not the applications it analyzes.

Mutation testing is typically used to conduct unit tests. The goal is to ensure a software test can detect code that isn't properly tested or hidden
defects that other testing methods don't catch. Changes called mutations can be implemented by modifying an existing line of code. For example, a
statement could be deleted or duplicated, true or false expressions can be changed or other variables can be altered. Code with the mutations is
then tested and compared to the original code.

If the tests with the mutants detect the same number of issues as the test with the original program, then either the code has failed to execute, or the
software testing suite being used has failed to detect the mutations. If this happens, the software test is worked on to become more effective. A
successful mutation test will have different test results from the mutant code. After this, the mutants are discarded.

The software test tool can then be scored using the mutation score. The mutation score is the number of killed mutants divided by the total number
of mutants, multiplied by 100.

Mutation score = (number of killed mutants / total number of mutants) x 100

A mutation score of 100% means the test was adequate.
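A small sketch of the score calculation (the numbers are invented):

```python
def mutation_score(killed_mutants, total_mutants):
    # Mutation score = (killed mutants / total mutants) * 100.
    return killed_mutants / total_mutants * 100

# Example: 45 of 50 generated mutants were detected (killed) by the test suite.
print(f"mutation score: {mutation_score(45, 50):.1f}%")  # 90.0%
```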


Reasons mutations can appear

A mutation is a small syntactic change made to a program statement. Mutations typically contain one variable that causes a fault or bug. For
example, a mutation could look like the statement (A<B) changed to (A>B).

Testers intentionally introduce mutations to a program's code. Multiple versions of the original program are made, each with its own mutation, or
mutant. The mutants are then tested along with the original application. After testing, testers compare the results to the original program test.

Once the testing software has been fixed, the mutants can be kept and reused in another code mutation test. If the test results from the mutant code
and the original program are different, then the mutants can be discarded, or killed.

Mutants that are still alive after running the test are typically called live mutants, while those killed after mutation testing are called killed
mutants. Equivalent mutants have the same meaning as the original source code even though they have different syntax. Equivalent mutants aren't
counted as part of the mutation score.

Types of mutation testing

There are three main types of mutation testing:

 Statement mutation. Statements are deleted or replaced with a different statement. For example, the statement "A=10 by B=5" is replaced
with "A=5 by B=15."
 Value mutation. Values are changed to find errors. For example, "A= 15" is changed to "A= 10" or "A=25."
 Decision mutation. Arithmetic or logical operators are changed to detect errors. For example, "(A<B)" is changed to "(A>B)."
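A hedged sketch (the function and values are invented) showing a decision mutation and a unit test strong enough to kill it:

```python
def is_minor(age):
    # Original decision.
    return age < 18

def is_minor_mutant(age):
    # Decision mutation: the relational operator < is changed to >.
    return age > 18

def test_is_minor(candidate):
    # The test checks values on which the original and the mutant disagree,
    # so running it against the mutant exposes (kills) the mutation.
    return candidate(17) is True and candidate(30) is False

print("original passes test:", test_is_minor(is_minor))              # True
print("mutant killed by test:", not test_is_minor(is_minor_mutant))  # True
```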

Advantages and disadvantages

Code mutation provides the following advantages:

 Helps to ensure the identification of weak tests or code.


 Offers a high level of error detection.
 Increases the use of object-oriented frameworks and unit tests if an organization uses them.
 Offers more mutation testing tools due to the increased frameworks and unit tests.
 Helps organizations determine the usefulness of their testing tool through the use of scoring.

Disadvantages of code mutation include the following:

 Isn't practical without the use of an automation tool.


 Can be time-consuming and costly due to the large number of mutants being tested.

Mutation testing tools

Mutation testing tools can help speed up the mutant generation process. The following are examples of open source mutation testing tools:

 Insure++.
 Jester for JUnit.
 PIT for Java and the Java Virtual Machine.
 MuClipse for Eclipse.

A mutation testing tool can be used to run unit tests against automatically modified code. Tools can also create reports that show killed and live
mutations as well as no coverage, timeouts, memory and run errors.

How to conduct mutation testing

Mutation tests are typically completed using the following steps:

1. Write a unit test.

2. Write code that passes the test.

3. Run the test against the code, ensuring that the code and the test work together.

4. Make some mutations to the code and run the mutated code through the test. Every mutant should contain exactly one change, to validate the test's effectiveness.

5. Compare the results from the original code and the mutated version.

a. If the results don't match, the test successfully identified and killed the mutant.

b. If the test produces the same result for the original and the mutant code, the test has failed to detect the change and the mutant survives.

6. A mutation score can also be calculated. The score is given as a percentage that's determined using the formula noted above.
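As a minimal sketch of these steps, assuming Python's built-in unittest module and a hypothetical is_adult function (the names, values, and mutants are assumptions for illustration, not part of the original text):

import unittest

# Steps 1-2: the code under test and a unit test for it
def is_adult(age):
    return age >= 18

def make_test(impl):
    # Build a test case bound to a particular implementation
    class IsAdultTest(unittest.TestCase):
        def test_boundary(self):
            self.assertTrue(impl(18))     # boundary value kills many mutants
            self.assertFalse(impl(17))
    return IsAdultTest

# Step 4: mutants, each containing exactly one small change
def mutant_flipped(age):
    return age > 18                       # decision mutation: >= changed to >

def mutant_equivalent(age):
    return not (age < 18)                 # equivalent mutant: different syntax, same meaning

# Step 5: run the same test against the original and each mutant and compare
if __name__ == "__main__":
    for impl in (is_adult, mutant_flipped, mutant_equivalent):
        suite = unittest.defaultTestLoader.loadTestsFromTestCase(make_test(impl))
        result = unittest.TextTestRunner(verbosity=0).run(suite)
        if impl is is_adult:
            status = "original code: test should pass"
        else:
            status = "mutant killed" if not result.wasSuccessful() else "mutant survives"
        print(f"{impl.__name__}: {status}")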

Mutation testing vs. regression testing

At first glance, regression testing could be confused with mutation testing. Regression testing exercises new changes to a program to ensure that the existing program still works with these changes; test engineers develop test scenarios that exercise new units of code after they are written.

While regression testing checks whether new changes to a program cause an issue, mutation testing makes small changes to the code to ensure that the software test suite itself works as intended.

Static Testing vs. Dynamic Testing


Static Testing

Static testing checks the application without executing the code. It is a verification process. Some of the essential activities done under static testing are business requirement review, design review, code walkthroughs, and test documentation review.

Static testing is performed in the white-box testing phase, where the programmer checks every line of the code before handing it over to the test engineer.

Static testing can be done manually or with the help of tools to improve the quality of the application by finding errors at an early stage of development; that is why it is also called the verification process.

Document reviews, high- and low-level design reviews, and code walkthroughs take place in the verification process.
Dynamic Testing

Dynamic testing is performed when the code is executed in the run-time environment. It is a validation process in which functional testing (unit, integration, and system testing) and non-functional testing are performed.

Dynamic testing checks whether the application or software works as expected during and after installation, without any errors.

Difference between Static testing and Dynamic Testing


1. In static testing, we check the code or the application without executing the code. In dynamic testing, we check the code/application by executing the code.

2. Static testing includes activities such as code reviews and walkthroughs. Dynamic testing includes functional and non-functional testing activities such as unit testing, integration testing, system testing, and user acceptance testing.

3. Static testing is a verification process. Dynamic testing is a validation process.

4. Static testing is used to prevent defects. Dynamic testing is used to find and fix defects.

5. Static testing is the more cost-effective process. Dynamic testing is the less cost-effective process.

6. Static testing can be performed before the compilation of the code. Dynamic testing can be done only after the executables are prepared.

7. Static testing uses techniques such as reviews, walkthroughs, and inspections. Techniques such as statement coverage, equivalence partitioning, and boundary value analysis are performed under dynamic testing.

8. Static testing involves the checklists and processes followed by the test engineer. Dynamic testing requires test cases for the execution of the code.

Reliability Metrics
Reliability metrics are used to quantitatively express the reliability of the software product. The choice of which metric to use depends upon the type of system to which it applies and the requirements of the application domain.

Some reliability metrics which can be used to quantify the reliability of the software product are as follows:
1. Mean Time to Failure (MTTF)

MTTF is described as the time interval between two successive failures. An MTTF of 200 means that one failure can be expected every 200 time units. The time units depend entirely on the system, and they can even be stated in terms of the number of transactions. MTTF is relevant for systems with long transactions, i.e., where system processing takes a long time.

For example, it is suitable for computer-aided design systems, where a designer will work on a design for several hours, as well as for word-processor systems.

To measure MTTF, we can record the failure data for n failures. Let the failures appear at the time instants t1, t2, ..., tn.

MTTF can be calculated as

                  MTTF = (sum of (t(i+1) - t(i)) for i = 1 to n-1) / (n - 1)
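For instance, with hypothetical failures observed at t1 = 10, t2 = 26 and t3 = 45 hours, MTTF = ((26 - 10) + (45 - 26)) / 2 = 17.5 hours.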

2. Mean Time to Repair (MTTR)

Once a failure occurs, some time is required to fix the error. MTTR measures the average time it takes to track down the errors causing the failure and to fix them.

3. Mean Time Between Failures (MTBF)

We can merge MTTF & MTTR metrics to get the MTBF metric.

                  MTBF = MTTF + MTTR

Thus, an MTBF of 300 denotes that once a failure appears, the next failure is expected to appear only after 300 hours. In this case, the time measurements are real time, not execution time as in MTTF.

4. Rate of occurrence of failure (ROCOF)

It is the number of failures appearing in a unit time interval, i.e., the number of unexpected events over a specific period of operation. ROCOF is the frequency with which unexpected behaviour is likely to appear. A ROCOF of 0.02 means that two failures are likely to occur in each 100 operational time unit steps. It is also called the failure intensity metric.

5. Probability of Failure on Demand (POFOD)

POFOD is described as the probability that the system will fail when a service is requested. It is the number of system failures for a given number of system inputs.

A POFOD of 0.1 means that one out of ten service requests may fail. POFOD is an essential measure for safety-critical systems and is relevant for protection systems where services are demanded occasionally.

6. Availability (AVAIL)

Availability is the probability that the system is available for use at a given time. It takes into account the repair time and the restart time for the system. An availability of 0.995 means that in every 1000 time units, the system is likely to be available for 995 of them. It is the percentage of time that a system is available for use, taking into account planned and unplanned downtime. If a system is down an average of four hours out of every 100 hours of operation, its AVAIL is 96%.
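The following is a minimal Python sketch of how these metrics could be computed from a hypothetical failure log; the data values, variable names, and the request counts used for POFOD are assumptions for illustration only.

# Hypothetical failure log: (time of failure in hours, hours needed to repair it)
failure_log = [(120.0, 2.5), (260.0, 4.0), (410.0, 3.5)]

failure_times = [t for t, _ in failure_log]
repair_times = [r for _, r in failure_log]
n = len(failure_log)

# MTTF: average gap between successive failures
mttf = sum(failure_times[i + 1] - failure_times[i] for i in range(n - 1)) / (n - 1)

# MTTR: average time taken to fix a failure
mttr = sum(repair_times) / n

# MTBF combines the two metrics
mtbf = mttf + mttr

# ROCOF: number of failures per unit of operational time
total_operational_time = failure_times[-1]
rocof = n / total_operational_time

# POFOD: failed service requests divided by total service requests (hypothetical counts)
pofod = 3 / 1000

# AVAIL: proportion of time the system is up, accounting for downtime
downtime = sum(repair_times)
avail = (total_operational_time - downtime) / total_operational_time

print(f"MTTF={mttf:.1f}h  MTTR={mttr:.1f}h  MTBF={mtbf:.1f}h  "
      f"ROCOF={rocof:.4f}  POFOD={pofod}  AVAIL={avail:.3f}")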
Software Metrics for Reliability
These metrics are used to improve the reliability of the system by identifying the areas of the requirements, design, code, and testing that need attention.

Different Types of Software Metrics are:-

Requirements Reliability Metrics

Requirements denote what features the software must include. They specify the functionality that must be contained in the software. The requirements must be written so that there is no misconception between the developer and the client, and they must have a valid structure to avoid the loss of valuable data.

The requirements should be thorough and detailed so that the design stage is straightforward, and they should not include inadequate data. Requirements reliability metrics evaluate these quality factors of the requirements document.

Design and Code Reliability Metrics

The quality factors that exist in design and coding are complexity, size, and modularity. Complex modules are tough to understand, and there is a high probability of bugs occurring in them. Reliability will be reduced if modules combine high complexity with large size or high complexity with small size. These metrics also apply to object-oriented code, but additional metrics are required there to evaluate quality.

Testing Reliability Metrics

These metrics use two methods to calculate reliability.

First, it ensures that the system performs the tasks specified in the requirements; because of this, bugs due to a lack of functionality are reduced.

The second method is evaluating the code, finding the bugs, and fixing them. To ensure that the system includes the functionality specified, test plans are written that include multiple test cases. Each test case is based on one system state and tests some tasks that are based on an associated set of requirements. The goal of an effective verification program is to ensure that every element is tested, the implication being that if the system passes the tests, the requirements' functionality is contained in the delivered system.
Reliability Growth Models

The reliability growth group of models measures and predicts the improvement of reliability through the testing process. A growth model represents the reliability or failure rate of a system as a function of time or the number of test cases. Models included in this group are as follows.
1. Coutinho Model – Coutinho adapted the Duane growth model to represent the software testing process. Coutinho plotted the cumulative number of deficiencies discovered and the number of correction actions made versus the cumulative testing weeks on log-log paper. Let N(t) denote the cumulative number of failures and let t be the total testing time. The failure rate, λ(t), of the model can be expressed as

                  λ(t) = N(t) / t = β0 * t^(-β1)

where β0 and β1 are the model parameters. The least squares method can be used to estimate the parameters of this model.
2. Wall and Ferguson Model – Wall and Ferguson proposed a model similar to the Weibull growth model for predicting the failure rate of software during testing. The cumulative number of failures at time t, m(t), can be expressed as

                  m(t) = a0 * [b(t)]^β

where a0 and β are the unknown parameters. The function b(t) can be obtained as the number of test cases or the total testing time. Similarly, the failure rate function at time t is given by

                  λ(t) = m'(t) = a0 * β * [b(t)]^(β-1) * b'(t)

Wall and Ferguson tested this model using several sets of software failure data and observed that the failure data correlate well with the model.

Reliability growth models are mathematical models used to predict the reliability of a system over time. They are commonly used in software engineering to predict the reliability of software systems and to guide the testing and improvement process.

There are several types of reliability growth models, including:

1. Non-homogeneous Poisson Process (NHPP) Model: This model is based on the assumption that the number of failures in a system
follows a Poisson distribution. It is used to model the reliability growth of a system over time, and to predict the number of failures that
will occur in the future.
2. Duane Model: This model is based on the assumption that the rate of failure of a system decreases over time as the system is improved. It
is used to model the reliability growth of a system over time, and to predict the reliability of the system at any given time.
3. Gooitzen Model: This model is based on the assumption that the rate of failure of a system decreases over time as the system is improved,
but that there may be periods of time where the rate of failure increases. It is used to model the reliability growth of a system over time,
and to predict the reliability of the system at any given time.
4. Littlewood Model: This model is based on the assumption that the rate of failure of a system decreases over time as the system is
improved, but that there may be periods of time where the rate of failure remains constant. It is used to model the reliability growth of a
system over time, and to predict the reliability of the system at any given time.
Reliability growth models are useful tools for software engineers, as they can help to predict the reliability of a system over time and to guide the testing and improvement process. They can also help organizations to make informed decisions about the allocation of resources and to prioritize improvements to the system.

It is important to note that reliability growth models are only predictions, and actual results may differ from them. Factors such as changes in the system, changes in the environment, and unexpected failures can impact the accuracy of the predictions.
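To make the NHPP idea described above concrete, here is a minimal sketch that fits one common exponential mean-value function, m(t) = a * (1 - e^(-b*t)), to hypothetical cumulative failure counts; the functional form, the library calls (NumPy and SciPy), and the data are assumptions for illustration, not something prescribed by the text.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cumulative failure counts observed at the end of each test week
weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
cum_failures = np.array([12, 21, 27, 32, 35, 37, 38, 39], dtype=float)

# One common NHPP mean-value function: m(t) = a * (1 - exp(-b * t)),
# where a is the expected total number of failures and b is the detection rate
def mean_value(t, a, b):
    return a * (1.0 - np.exp(-b * t))

(a_hat, b_hat), _ = curve_fit(mean_value, weeks, cum_failures, p0=(40.0, 0.3))

# Predicted failures still to be found and the current failure intensity (dm/dt)
remaining = a_hat - mean_value(weeks[-1], a_hat, b_hat)
intensity = a_hat * b_hat * np.exp(-b_hat * weeks[-1])

print(f"a = {a_hat:.1f}, b = {b_hat:.2f}, "
      f"expected remaining failures = {remaining:.1f}, "
      f"current failure intensity = {intensity:.2f} per week")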

Advantages of Reliability Growth Models:

1. Predicting Reliability: Reliability growth models are used to predict the reliability of a system over time, which can help organizations to
make informed decisions about the allocation of resources and the prioritization of improvements to the system.
2. Guiding the Testing Process: Reliability growth models can be used to guide the testing process, by helping organizations to determine
which tests should be run, and when they should be run, in order to maximize the improvement of the system’s reliability.
3. Improving the Allocation of Resources: Reliability growth models can help organizations to make informed decisions about the allocation
of resources, by providing an estimate of the expected reliability of the system over time, and by helping to prioritize improvements to the
system.
4. Identifying Problem Areas: Reliability growth models can help organizations to identify problem areas in the system, and to focus their
efforts on improving these areas in order to improve the overall reliability of the system.

Disadvantages of Reliability Growth Models:

1. Predictive Accuracy: Reliability growth models are only predictions, and actual results may differ from the predictions. Factors such as
changes in the system, changes in the environment, and unexpected failures can impact the accuracy of the predictions.
2. Model Complexity: Reliability growth models can be complex, and may require a high level of technical expertise to understand and use
effectively.
3. Data Availability: Reliability growth models require data on the system’s reliability, which may not be available or may be difficult to
obtain.

What is Risk?
"Tomorrow problems are today's risk." Hence, a clear definition of a "risk" is a problem that could cause some loss or threaten the progress of the
project, but which has not happened yet.

These potential issues might harm the cost, schedule, or technical success of the project, the quality of our software product, or project team morale.

Risk Management is the process of identifying, addressing, and eliminating these problems before they can damage the project.

We need to differentiate risks, as potential issues, from the current problems of the project.


Different methods are required to address these two kinds of issues.

For example, a staff shortage because we have not been able to recruit people with the right technical skills is a current problem, but the threat of our technical staff being hired away by the competition is a risk.

Risk Management
A software project can be affected by a large variety of risks. In order to systematically identify the significant risks that might affect a software project, it is essential to classify risks into different classes. The project manager can then check which risks from each class are relevant to the project.

There are three main classifications of risks which can affect a software project:

1. Project risks
2. Technical risks
3. Business risks

1. Project risks: Project risks concern different forms of budgetary, schedule, personnel, resource, and customer-related problems. A vital project risk is schedule slippage. Since software is intangible, it is very tough to monitor and control a software project; it is very difficult to control something that cannot be identified. For any manufacturing program, such as the manufacturing of cars, the project manager can see the product taking shape.

2. Technical risks: Technical risks concern potential design, implementation, interfacing, testing, and maintenance issues. They also include ambiguous specifications, incomplete specifications, changing specifications, technical uncertainty, and technical obsolescence. Most technical risks appear due to the development team's insufficient knowledge about the project.

3. Business risks: This type of risk includes the risk of building an excellent product that no one needs, losing budgetary or personnel commitments, etc.

Other risk categories

1. Known risks: Those risks that can be uncovered after careful evaluation of the project plan, the business and technical environment in which the plan is being developed, and other reliable information sources (e.g., an unrealistic delivery date).
2. Predictable risks: Those risks that are hypothesized from previous project experience (e.g., past staff turnover).
3. Unpredictable risks: Those risks that can and do occur, but are extremely tough to identify in advance.

Principle of Risk Management


1. Global Perspective: In this, we review the bigger system description, design, and implementation. We look at the chance and the impact
the risk is going to have.
2. Take a forward-looking view: Consider the threat which may appear in the future and create future plans for directing the next events.
3. Open Communication: This is to allow the free flow of communications between the client and the team members so that they have
certainty about the risks.
4. Integrated management: In this method risk management is made an integral part of project management.
5. Continuous process: In this phase, the risks are tracked continuously throughout the risk management paradigm.

Risk Management Activities


Risk management consists of three main activities, as described below:

Risk Assessment
The objective of risk assessment is to rank the risks in terms of their loss-causing potential. For risk assessment, first, every risk should be rated in two ways:

o The probability of the risk coming true (denoted as r).

o The consequence of the problems associated with that risk (denoted as s).

Based on these two factors, the priority of each risk can be estimated:

          p = r * s

where p is the priority of the risk, r is the probability of the risk becoming true, and s is the severity of the loss caused if the risk does become true. If all identified risks are prioritized, then the most likely and most damaging risks can be controlled first, and more comprehensive risk abatement methods can be designed for these risks.
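A minimal Python sketch of this prioritization, using hypothetical risks and ratings (the names and numbers are assumptions for illustration):

# Each risk is (name, r = probability of occurring, s = severity of loss)
risks = [
    ("Schedule slippage",     0.6, 8),
    ("Key developer leaving", 0.3, 9),
    ("Requirement changes",   0.7, 5),
]

# Priority p = r * s; handle the most likely and most damaging risks first
prioritized = sorted(risks, key=lambda risk: risk[1] * risk[2], reverse=True)

for name, r, s in prioritized:
    print(f"{name}: p = {r * s:.2f}")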
1. Risk Identification: The project organizer needs to anticipate the risks in the project as early as possible so that their impact can be reduced by effective risk management planning.

A project can be affected by a large variety of risks. In order to identify the significant risks that might affect a project, it is necessary to categorize risks into different classes.

There are different types of risks which can affect a software project:

1. Technology risks: Risks that arise from the software or hardware technologies used to develop the system.
2. People risks: Risks that are associated with the people in the development team.
3. Organizational risks: Risks that arise from the organizational environment where the software is being developed.
4. Tools risks: Risks that arise from the software tools and other support software used to create the system.
5. Requirement risks: Risks that arise from changes to the customer requirements and the process of managing the requirements change.
6. Estimation risks: Risks that arise from the management estimates of the resources required to build the system.

2. Risk Analysis: During the risk analysis process, you have to consider every identified risk and make a judgment about the probability and seriousness of that risk.

There is no simple way to do this. You have to rely on your perception and experience of previous projects and the problems that arise in them.

It is not possible to make an exact numerical estimate of the probability and seriousness of each risk. Instead, you should assign the risk to one of several bands:

1. The probability of the risk might be determined as very low (0-10%), low (10-25%), moderate (25-50%), high (50-75%) or very high (+75%).
2. The effect of the risk might be determined as catastrophic (threaten the survival of the plan), serious (would cause significant delays),
tolerable (delays are within allowed contingency), or insignificant.

Risk Control
It is the process of managing risks to achieve the desired outcomes. After all the identified risks of a project have been assessed, plans must be made to contain the most harmful and the most likely risks. Different risks need different containment methods; in fact, most risks require ingenuity on the part of the project manager in tackling them.

There are three main methods to plan for risk management:

1. Avoid the risk: This may take several forms, such as discussing with the client to change the requirements to reduce the scope of the work, or giving incentives to the engineers to avoid the risk of staff turnover, etc.
2. Transfer the risk: This method involves getting the risky component developed by a third party, buying insurance cover, etc.
3. Risk reduction: This means planning ways to contain the loss due to a risk. For instance, if there is a risk that some key personnel might leave, new recruitment can be planned.

Risk Leverage: To choose between the various methods of handling a risk, the project manager must consider the cost of controlling the risk and the corresponding reduction in risk. For this, the risk leverage of the various risks can be estimated.

Risk leverage is the difference in risk exposure divided by the cost of reducing the risk.

Risk leverage = (risk exposure before reduction - risk exposure after reduction) / (cost of reduction)
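For example, using hypothetical figures, if the risk exposure before reduction is $100,000, the exposure after reduction is $20,000, and the reduction costs $10,000, then the risk leverage is (100,000 - 20,000) / 10,000 = 8, suggesting the mitigation is well worth its cost.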

1. Risk planning: The risk planning method considers each of the key risks that have been identified and develops ways to manage these risks.

For each of the risks, you have to think of the actions that you may take to minimize the disruption to the plan if the issue identified in the risk occurs.

You should also think about the data that you might need to collect while monitoring the plan so that issues can be anticipated.

Again, there is no easy process that can be followed for contingency planning. It relies on the judgment and experience of the project manager.

2. Risk Monitoring: Risk monitoring is the process of checking that your assumptions about the product, process, and business risks have not changed.

Reactive and Proactive Software Risk Management


Introduction
A "risk" is a situation that could result in a loss or threaten the project's progress but still hasn't happened. The process of identifying risks and
implementing solutions to limit their impact on the project is known as risk management. Risk management's objective is to prevent accidents or
substantial losses.
Reactive risk management aims to minimise the impact of potential dangers and accelerate an organisation's recovery, but it anticipates that threats
will occur at some point. 
Proactive risk management seeks to identify risks and prevent them from arising in the first place. Proactive risk management is a discipline that an organisation must practice and embed into its overall business strategy, not a one-off process or program. It can't be defined in a day, and it can't be done alone. It is a constant process until it becomes ingrained in the organisation's risk culture.
Reactive Software Risk Management
A firefighting scenario is frequently used to visualise reactive risk management. Reactive risk management kicks in when an accident occurs or
concerns are discovered following an audit. The incident is being investigated, and actions are being taken to avert future occurrences. In addition,
steps will be taken to minimise the impact of the incident on the profitability and long-term viability of the company.
Reactive risk management gathers and documents all past incidents to identify the errors that created the problem. The reactive risk management
strategy is used to advise and execute preventive measures. This is the earlier model of risk management. Due to the unpreparedness for new errors,
reactive risk management can cause substantial delays in the workplace. The lack of preparation complicates the resolution process because the
cause of the disaster necessitates inquiry, and the solution is expensive and requires dramatic transformation.
Below are the measures included in reactive risk management:
 Preventing possible incidents from occurring
 Mitigating the effects of incidents
 Preventing minor dangers from becoming more serious
 Keeping important business activities running in the face of incidents
 Identifying the fundamental cause of each incident and rectifying it
 Keeping an eye on the situation to make sure it doesn't happen again
Proactive Software Risk Management
In contrast to reactive risk management, proactive risk management aims to identify all relevant risks before an incident happens. The current
organisation must deal with an age of fast environmental change brought on by technological advancements, deregulation, intense competition,
and raising public awareness. As a result, risk management based on previous accidents is not a suitable decision for any company. As a result, new
risk management thinking was required, paving the way for proactive risk management.
"Dynamic, closed-loop feedback control strategy based on measurement, surveillance of the current safety level, and planned explicit target safety
level with a creative intellectuality" is the definition of proactive risk management. The description refers to the adaptability and inventiveness of
humans who are concerned with safety. Humans are a source of error, but they can also be an essential source of security for proactive risk
management. Furthermore, the closed-loop technique relates to the establishment of operating boundaries. These limits are thought to provide a
safe level of competency.
Accident analysis is a component of proactive risk management, in which accident scenarios are created and the key personnel and stakeholders who could cause an accident are identified. As a result, prior accidents are also significant in proactive risk management.
The following are included in the proactive risk management strategies:
 Identifying existing risks to the enterprise, business unit, or project
 Crafting a risk-response strategy
 Organising identified threats into categories based on the severity of the danger
 Evaluating risks to decide the best course of action for each.
 Putting in place the essential controls to keep risks from becoming threats or events
 Continuously monitoring the threat environment.
Difference Between Reactive and Proactive Risk Management
Proactive risk management is a versatile, closed-loop feedback control technique based on measurement and observation. In contrast, reactive risk
management is a response-based risk management approach that is dependent on accident analysis and audit-based discoveries.
The way risks are analysed, disclosed, and mitigated distinguishes a proactive risk management approach from a reactive approach. It entails
thoroughly examining a situation or evaluating processes to identify potential risks, identifying risk drivers to understand the root cause, estimating
probability and impact to prioritise risks, and formulating a contingency plan fittingly. Risk managers must learn to analyse the strength of the
organisation's innovation component and use that knowledge appropriately to combat existing and new risks in order to achieve this. Also, to
engage in strategic risk usage, focus on utilising the expertise of experienced risk managers.
Now we'll compare and contrast the two approaches to risk management.
Definition
Reactive risk management: A response-based risk management strategy based on accident analysis and audit-based discoveries.
Proactive risk management: Adaptive, closed-loop feedback control technique based on measurement, observation of the current safety level,
and predicted explicit target safety level with creative intellectuality.
Purpose
Reactive risk management: Reactive risk management intends to minimise the probability of similar or identical accidents occurring again in the
future.
Proactive risk management: Proactive risk management aims to decrease the likelihood of an accident occurring in the future by identifying
activity boundaries where a breach can result in an accident.
Time Frame
Reactive risk management: Reactive risk management is exclusively based on the study and response to previous accidents.
Proactive risk management: Before identifying solutions to avoid risks, proactive risk management employs a hybrid strategy of past, present, and
future prediction.
Flexibility
Reactive risk management: Reactive risk management's methodology does not account for humans' abilities to anticipate, develop, and resolve
issues, making it less adaptable to changes and obstacles.
Proactive risk management: Proactive risk management entails creative thinking and foresight. Furthermore, eliminating the accident is primarily
dependent on the accident source, which is a human attribute. As a result, it is highly adaptable to changing environments.
