Unit 4
Software Testing
Introduction
Once the source code has been developed, testing is required to uncover errors before the software is deployed. To perform software testing, a series of test cases is designed. Because testing is a complex process, testing activities are broken into smaller activities to make the process simpler. For this reason, incremental testing is generally preferred for a project: the system is broken into a set of subsystems, and these subsystems are tested separately before they are integrated to form the complete system for system testing.
Definition of Testing
1. According to IEEE – “Testing means the process of analyzing a software item to detect the differences between existing and required conditions (i.e. bugs) and to evaluate the features of the software item”.
2. According to Myers – “Testing is the process of executing a program with the intent of finding an error”.
TEST ORACLES
A test oracle is a mechanism, different from the program itself, that can be used to check the correctness of the program's output for the test cases. In order to test any program we need a description of its expected behavior and a method of determining whether the observed behavior conforms to the expected behavior. This is what a test oracle provides.
Test oracles generally use the system specification of the program to decide what the correct behavior of the program should be. In order to determine the correct behavior, it is important that the behavior of the system be unambiguously specified and that the specification itself be error free.
[Figure: test cases are run against both the software under test and the test oracle, and the outputs are compared to decide correctness.]
Software is generally tested at the following levels:
1. Unit Testing – This testing is essentially the verification of the code produced during the coding phase. The basic objective of this phase is to test the internal logic of the module. Unit testing makes heavy use of white box testing techniques, exercising specific paths in a module's control structure to ensure complete coverage and maximum error detection. In unit testing the module interface is tested to ensure that information properly flows into and out of the program unit. The data structures are also tested to ensure that data stored temporarily maintain their integrity during execution.
2. Integration Testing – This testing addresses the issues of verification and program construction. Black box testing techniques are most widely used in this strategy, although a limited amount of white box testing may be used to ensure coverage of control paths. The basic emphasis of this testing is on the interfaces between the modules.
3. Validation Testing – The criteria established during requirements analysis are tested, providing final assurance that the software meets all functional, behavioral, and performance requirements.
4. System Testing – Here the entire software system is tested. This last high-order testing stage falls outside the boundary of software engineering. System testing verifies that all elements mesh properly and that overall system function/performance is achieved.
Unit Testing
Unit testing focuses verification effort on the smallest unit of software design—the software component or module. Using the component-
level design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The relative
complexity of tests and uncovered errors is limited by the constrained scope established for unit testing. The unit test is white-box oriented,
and the step can be conducted in parallel for multiple components.
Boundary testing is the last (and probably most important) task of the unit test step. Software often fails at its boundaries. That is, errors often
occur when the nth element of an n-dimensional array is processed, when the ith repetition of a loop with i passes is invoked, when the
maximum or minimum allowable value is encountered. Test cases that exercise data structure, control flow, and data values just below, at,
and just above maxima and minima are very likely to uncover errors.
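To make this concrete, the following minimal sketch (in Python, using the standard unittest module) applies boundary testing to a hypothetical accept_value function whose valid range of 1 to 100 is assumed purely for illustration:

# A minimal sketch of boundary testing for a hypothetical function that
# accepts values in the range 1..100 (function and range are assumed for
# illustration, not taken from the text above).
import unittest


def accept_value(x):
    """Return True if x lies in the valid range 1..100, else False."""
    return 1 <= x <= 100


class BoundaryTests(unittest.TestCase):
    def test_just_below_minimum(self):
        self.assertFalse(accept_value(0))      # just below the minimum

    def test_at_minimum(self):
        self.assertTrue(accept_value(1))       # at the minimum

    def test_at_maximum(self):
        self.assertTrue(accept_value(100))     # at the maximum

    def test_just_above_maximum(self):
        self.assertFalse(accept_value(101))    # just above the maximum


if __name__ == "__main__":
    unittest.main()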
Integration Testing
Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors
associated with interfacing. The objective is to take unit tested components and build a program structure that has been dictated by design.
There is often a tendency to attempt non incremental integration; that is, to construct the program using a "big bang" approach. All
components are combined in advance. The entire program is tested as a whole. And chaos usually results! A set of errors is encountered.
Correction is difficult because isolation of causes is complicated by the vast expanse of the entire program. Once these errors are corrected,
new ones appear and the process continues in a seemingly endless loop.
Incremental integration is the antithesis of the big bang approach. The program is constructed and tested in small increments, where errors
are easier to isolate and correct; interfaces are more likely to be tested completely; and a systematic test approach may be applied.
Top-down Integration
Top-down integration testing is an incremental approach to construction of program structure. Modules are integrated by moving downward
through the control hierarchy, beginning with the main control module (main program). Modules subordinate (and ultimately subordinate) to
the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
Bottom-up Integration
Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., components at the lowest
levels in the program structure). Because components are integrated from the bottom up, processing required for components subordinate to
a given level is always available and the need for stubs is eliminated.
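The sketch below illustrates the idea of a test driver used during bottom-up integration; the low-level modules (tax_rate and compute_tax) and their values are assumptions made only for this example:

# A minimal sketch of a test driver used during bottom-up integration.
# The low-level modules below are hypothetical; a simple driver exercises
# them before the higher-level module that will eventually call them exists.

def tax_rate(region):
    """Low-level (atomic) module: return the tax rate for a region."""
    rates = {"EU": 0.20, "US": 0.07}
    return rates.get(region, 0.0)


def compute_tax(amount, region):
    """Low-level module that depends on tax_rate()."""
    return amount * tax_rate(region)


def driver():
    """Test driver: stands in for the not-yet-integrated caller."""
    assert tax_rate("EU") == 0.20
    assert compute_tax(100.0, "US") == 7.0
    print("low-level cluster passed driver checks")


if __name__ == "__main__":
    driver()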
Regression Testing
Each time a new module is added as part of integration testing, the software changes. New data flow paths are established, new I/O may
occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly. In the context
of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted to ensure that
changes have not propagated unintended side effects.
In a broader context, successful tests (of any kind) result in the discovery of errors, and errors must be corrected. Whenever software is
corrected, some aspect of the software configuration (the program, its documentation, or the data that support it) is changed. Regression
testing is the activity that helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behavior or additional
errors.
Regression testing may be conducted manually, by re-executing a subset of all test cases or using automated capture/playback tools.
Capture/playback tools enable the software engineer to capture test cases and results for subsequent playback and comparison.
The regression test suite (the subset of tests to be executed) contains three different classes of test cases:
A representative sample of tests that will exercise all software functions.
Additional tests that focus on software functions that are likely to be affected by the change.
Tests that focus on the software components that have been changed.
As integration testing proceeds, the number of regression tests can grow quite large. Therefore, the regression test suite should be designed
to include only those tests that address one or more classes of errors in each of the major program functions. It is impractical and inefficient
to re-execute every test for every program function once a change has occurred.
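A minimal sketch of how such a regression subset might be assembled with Python's standard unittest module is shown below; the test names and the choice of which tests belong to which class are illustrative assumptions:

# A minimal sketch of building a regression test suite from existing tests,
# assuming the standard unittest library (module and test names are
# illustrative, not from the text).
import unittest


class BillingTests(unittest.TestCase):
    def test_invoice_total(self):          # representative sample test
        self.assertEqual(2 * 5, 10)

    def test_discount_applied(self):       # exercises a recently changed component
        self.assertEqual(100 - 10, 90)


def regression_suite():
    """Select only the subset of tests relevant to the latest change."""
    suite = unittest.TestSuite()
    # tests that focus on the software components that have been changed
    suite.addTest(BillingTests("test_discount_applied"))
    # a representative sample of tests that exercise overall functions
    suite.addTest(BillingTests("test_invoice_total"))
    return suite


if __name__ == "__main__":
    unittest.TextTestRunner().run(regression_suite())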
Smoke Testing
Smoke testing is an integration testing approach that is commonly used when “shrink-wrapped” software products are being developed. It
is designed as a pacing mechanism for time-critical projects, allowing the software team to assess its project on a frequent basis. In essence,
the smoke testing approach encompasses the following activities:
1. Software components that have been translated into code are integrated into a “build.” A build includes all data files, libraries,
reusable modules, and engineered components that are required to implement one or more product functions.
2. A series of tests is designed to expose errors that will keep the build from properly performing its function. The intent should be
to uncover “show stopper” errors that have the highest likelihood of throwing the software project behind schedule.
3. The build is integrated with other builds and the entire product (in its current form) is smoke tested daily. The integration approach may be
top down or bottom up.
The daily frequency of testing the entire product may surprise some readers. However, frequent tests give both managers and practitioners a
realistic assessment of integration testing progress. McConnell describes the smoke test in the following manner:
“The smoke test should exercise the entire system from end to end. It does not have to be exhaustive, but it should be capable of exposing
major problems. The smoke test should be thorough enough that if the build passes, you can assume that it is stable enough to be tested more
thoroughly”.
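A minimal sketch of what a daily smoke test script might look like is shown below; the start_application and run_core_workflow functions stand in for whatever critical paths a real build would exercise and are assumptions made for illustration:

# A minimal sketch of a daily smoke test: a short script that exercises a
# few critical paths of a hypothetical build and fails loudly on any
# "show stopper" error.
import sys


def start_application():
    return {"status": "up"}


def run_core_workflow(app):
    return app["status"] == "up"


def smoke_test():
    checks = []
    app = start_application()
    checks.append(("application starts", app["status"] == "up"))
    checks.append(("core workflow runs end to end", run_core_workflow(app)))

    failed = [name for name, ok in checks if not ok]
    for name, ok in checks:
        print(("PASS" if ok else "FAIL"), "-", name)
    return 0 if not failed else 1    # non-zero exit marks the build unstable


if __name__ == "__main__":
    sys.exit(smoke_test())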
Smoke testing provides a number of benefits when it is applied on complex, time critical software engineering projects:
Integration risk is minimized. Because smoke tests are conducted daily, incompatibilities and other show-stopper errors are
uncovered early, thereby reducing the likelihood of serious schedule impact when errors are uncovered.
The quality of the end-product is improved. Because the approach is construction (integration) oriented, smoke testing is likely to
uncover both functional errors and architectural and component-level design defects. If these defects are corrected early, better
product quality will result.
Error diagnosis and correction are simplified. Like all integration testing approaches, errors uncovered during smoke testing are
likely to be associated with “new software increments”—that is, the software that has just been added to the build(s) is a probable
cause of a newly discovered error.
Progress is easier to assess. With each passing day, more of the software has been integrated and more has been demonstrated
to work. This improves team morale and gives managers a good indication that progress is being made.
Validation Testing
At the culmination of integration testing, software is completely assembled as a package, interfacing errors have been uncovered and
corrected, and a final series of software tests—validation testing—may begin. Validation can be defined in many ways, but a simple (albeit
harsh) definition is that validation succeeds when software functions in a manner that can be reasonably expected by the customer. At this
point a battle-hardened software developer might protest: "Who or what is the arbiter of reasonable expectations?"
Reasonable expectations are defined in the Software Requirements Specification— a document that describes all user-visible attributes of
the software. The specification contains a section called Validation Criteria. Information contained in that section forms the basis for a
validation testing approach.
Configuration Review
An important element of the validation process is a configuration review. The intent of the review is to ensure that all elements of the
software configuration have been properly developed, are cataloged, and have the necessary detail to bolster the support phase of the
software life cycle.
Alpha Testing
The alpha test is conducted at the developer's site by a customer. The software is used in a natural setting with the developer "looking over
the shoulder" of the user and recording errors and usage problems. Alpha tests are conducted in a controlled environment.
Beta Testing
The beta test is conducted at one or more customer sites by the end-user of the software. Unlike alpha testing, the developer is generally not
present. Therefore, the beta test is a "live" application of the software in an environment that cannot be controlled by the developer. The
customer records all problems (real or imagined) that are encountered during beta testing and reports these to the developer at regular
intervals. As a result of problems reported during beta tests, software engineers make modifications and then prepare for release of the
software product to the entire customer base.
System Testing
Software is only one element of a larger computer-based system. Ultimately, software is incorporated with other system elements (e.g.,
hardware, people, information), and a series of system integration and validation tests are conducted. These tests fall outside the scope of the
software process and are not conducted solely by software engineers. However, steps taken during software design and testing can greatly
improve the probability of successful software integration in the larger system.
A classic system testing problem is "finger-pointing." This occurs when an error is uncovered, and each system element developer blames the
other for the problem. Rather than indulging in such nonsense, the software engineer should anticipate potential interfacing problems and
(1) Design error-handling paths that test all information coming from other elements of the system,
(2) Conduct a series of tests that simulate bad data or other potential errors at the software interface,
(3) Record the results of tests to use as "evidence" if finger-pointing does occur, and
(4) Participate in planning and design of system tests to ensure that software is adequately tested.
System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test
has a different purpose, all work to verify that system elements have been properly integrated and perform allocated functions.
Recovery Testing
Many computer based systems must recover from faults and resume processing within a pre-specified time. In some cases, a system must be
fault tolerant; that is, processing faults must not cause overall system function to cease. In other cases, a system failure must be corrected
within a specified period of time or severe economic damage will occur.
Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed.
If recovery is automatic (performed by the system itself), re-initialization, check-pointing mechanisms, data recovery, and restart are
evaluated for correctness. If recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to determine whether it is
within acceptable limits.
Security Testing
Any computer-based system that manages sensitive information or causes actions that can improperly harm (or benefit) individuals is a target
for improper or illegal penetration. Penetration spans a broad range of activities: hackers who attempt to penetrate systems for sport;
disgruntled employees who attempt to penetrate for revenge; dishonest individuals who attempt to penetrate for illicit personal gain.
Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration.
To quote Beizer: "The system's security must, of course, be tested for invulnerability from frontal attack—but must also be tested for
invulnerability from flank or rear attack." During security testing, the tester plays the role(s) of the individual who desires to penetrate the
system. Anything goes! The tester may attempt to acquire passwords through external clerical means; may attack the system with custom
software designed to break down any defenses that have been constructed; may overwhelm the system, thereby denying service to others;
may purposely cause system errors, hoping to penetrate during recovery; may browse through insecure data, hoping to find the key to
system entry.
Given enough time and resources, good security testing will ultimately penetrate a system. The role of the system designer is to make
penetration cost more than the value of the information that will be obtained.
Stress Testing
During earlier software testing steps, white-box and black-box techniques resulted in thorough evaluation of normal program functions and
performance. Stress tests are designed to confront programs with abnormal situations. In essence, the tester who performs stress testing asks:
"How high can we crank this up before it fails?"
Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example,
(1) Special tests may be designed that generate ten interrupts per second, when one or two is the average rate,
(2) Input data rates may be increased by an order of magnitude to determine how input functions will respond,
(3) Test cases that require maximum memory or other resources are executed,
(4) Test cases that may cause thrashing in a virtual operating system are designed,
(5) Test cases that may cause excessive hunting for disk-resident data are created. Essentially, the tester attempts to break
the program.
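A minimal sketch of a stress test is shown below: a hypothetical request handler is driven with progressively higher volumes of concurrent calls until failures appear. The handler, the worker count, and the load levels are assumptions made for illustration:

# A minimal sketch of a stress test that drives a hypothetical request
# handler with an abnormally high volume of concurrent calls to see when
# failures start to appear.
from concurrent.futures import ThreadPoolExecutor


def handle_request(payload):
    """Stand-in for the component under stress."""
    return sum(payload)


def stress(level):
    """Submit 'level' concurrent requests and count failures."""
    failures = 0
    with ThreadPoolExecutor(max_workers=50) as pool:
        futures = [pool.submit(handle_request, list(range(1000)))
                   for _ in range(level)]
        for f in futures:
            try:
                f.result(timeout=5)
            except Exception:
                failures += 1
    return failures


if __name__ == "__main__":
    for level in (100, 1_000, 10_000):    # crank the load up until it fails
        print(level, "requests ->", stress(level), "failures")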
A variation of stress testing is a technique called sensitivity testing. In some situations (the most common occur in mathematical algorithms),
a very small range of data contained within the bounds of valid data for a program may cause extreme and even erroneous processing or
profound performance degradation. Sensitivity testing attempts to uncover data combinations within valid input classes that may cause
instability or improper processing.
Performance Testing
For real-time and embedded systems, software that provides required function but does not conform to performance requirements is
unacceptable. Performance testing is designed to test the run-time performance of software within the context of an integrated system.
Performance testing occurs throughout all steps in the testing process. Even at the unit level, the performance of an individual module may
be assessed as white-box tests are conducted. However, it is not until all system elements are fully integrated that the true performance of a
system can be ascertained.
Performance tests are often coupled with stress testing and usually require both hardware and software instrumentation. That is, it is often
necessary to measure resource utilization (e.g., processor cycles) in an exacting fashion. External instrumentation can monitor execution
intervals, log events (e.g., interrupts) as they occur, and sample machine states on a regular basis. By instrumenting a system, the tester can
uncover situations that lead to degradation and possible system failure.
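The following minimal sketch shows simple software instrumentation for performance testing: a hypothetical operation is timed over many runs and summary statistics are reported. The operation and the number of runs are assumptions for illustration:

# A minimal sketch of software instrumentation for performance testing:
# timing a hypothetical operation over repeated runs.
import statistics
import time


def operation_under_test():
    return sorted(range(10_000), reverse=True)


def measure(runs=100):
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        operation_under_test()
        samples.append(time.perf_counter() - start)
    return {
        "mean_s": statistics.mean(samples),
        "p95_s": sorted(samples)[int(0.95 * len(samples)) - 1],
        "max_s": max(samples),
    }


if __name__ == "__main__":
    print(measure())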
Boundary Value Analysis
It is generally seen that a large number of errors occur at the boundaries of the defined input values rather than at the center. This technique is also known as BVA and yields a selection of test cases which exercise bounding values.
This black box testing technique complements equivalence partitioning. It is based on the principle that if a system works well for these particular boundary values, then it will work well for all values that lie between the two boundary values.
Guidelines for Boundary Value Analysis
1. If an input condition is restricted to values between x and y, then design test cases with the values x and y as well as with values just above and just below x and y.
2. If an input condition specifies a large number of values, develop test cases that exercise the minimum and maximum numbers. Values just above and below the minimum and maximum are also tested.
3. Apply guidelines 1 and 2 to output conditions: design tests that produce output at the minimum and maximum expected values, and also test values just below and above them.
Example (Equivalence Class Partitioning):
Suppose the valid input ranges are 1 to 10 and 20 to 30. Hence there are five equivalence classes:
up to 0 (invalid)
1 to 10 (valid)
11 to 19 (invalid)
20 to 30 (valid)
31 and above (invalid)
You select one value from each class, e.g., 0, 5, 15, 25, and 31.
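The sketch below turns the classes above into concrete test values and adds boundary values just below, at, and just above each limit; the is_valid function is a hypothetical validator assumed for illustration:

# A minimal sketch that combines equivalence class partitioning with
# boundary value analysis for the ranges described above (1-10 and 20-30).
def is_valid(x):
    """Hypothetical validator for the ranges described above."""
    return 1 <= x <= 10 or 20 <= x <= 30


# one representative value from each equivalence class
class_representatives = {
    "up to 0 (invalid)": 0,
    "1 to 10 (valid)": 5,
    "11 to 19 (invalid)": 15,
    "20 to 30 (valid)": 25,
    "31 and above (invalid)": 31,
}

# boundary values just below, at, and just above each limit
boundary_values = [0, 1, 2, 9, 10, 11, 19, 20, 21, 29, 30, 31]

if __name__ == "__main__":
    for label, value in class_representatives.items():
        print(label, "->", value, "->", is_valid(value))
    for value in boundary_values:
        print("boundary", value, "->", is_valid(value))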
Decision Table Testing
The first task is to identify functionalities where the output depends on a combination of inputs. If there is a large set of input combinations, divide it into smaller subsets, which makes the decision table easier to manage.
For every function, create a table and list all combinations of inputs with their respective outputs. This helps to identify conditions that might otherwise be overlooked by the tester.
Example: a submit button in a contact form is enabled only when all the inputs are entered by the end user. The sketch below enumerates the input combinations for this rule.
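A minimal sketch of a decision table for this contact-form rule is shown below; the three input fields and the submit_enabled rule are assumptions chosen to match the example:

# A minimal sketch of a decision table for the contact-form example above:
# the submit button is enabled only when every input has been entered.
from itertools import product


def submit_enabled(name_entered, email_entered, message_entered):
    """Rule under test: all inputs must be present."""
    return name_entered and email_entered and message_entered


if __name__ == "__main__":
    # enumerate every combination of inputs, one row per decision-table rule
    print("name  email  message -> submit")
    for name, email, message in product([False, True], repeat=3):
        print(f"{name!s:5} {email!s:6} {message!s:7} -> "
              f"{submit_enabled(name, email, message)}")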
State Transition
In the State Transition technique, changes in input conditions change the state of the Application Under Test (AUT). This technique allows the tester to test the behavior of the AUT by entering various input conditions in a sequence. The testing team provides positive as well as negative input values to evaluate the system behavior.
State transition testing should be used when the testing team is testing the application for a limited set of input values.
It should also be used when the testing team wants to test the sequence of events which happen in the application under test.
Example:
In the following example, if the user enters a valid password in any of the first three attempts, the user is logged in successfully. If the user enters an invalid password on the first or second try, the user is prompted to re-enter the password. When the user enters the password incorrectly a third time, action is taken and the account is blocked.
In the original state diagram, when the user gives the correct PIN, he or she is moved to the Access Granted state, and a transition table is derived from that diagram. The sketch below models the same behavior in code.
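A minimal sketch of this login state machine and two test sequences (one negative, one positive) is shown below; the class and method names are assumptions made for illustration:

# A minimal sketch of the login state machine described above: a valid
# password in any of the first three attempts grants access, and a third
# consecutive invalid password blocks the account.
class LoginStateMachine:
    def __init__(self, correct_password):
        self.correct_password = correct_password
        self.failed_attempts = 0
        self.state = "awaiting password"

    def enter_password(self, password):
        if self.state in ("access granted", "account blocked"):
            return self.state                     # terminal states
        if password == self.correct_password:
            self.state = "access granted"
        else:
            self.failed_attempts += 1
            self.state = ("account blocked" if self.failed_attempts >= 3
                          else "awaiting password")
        return self.state


if __name__ == "__main__":
    # negative test sequence: three invalid attempts should block the account
    machine = LoginStateMachine("secret")
    for attempt in ("a", "b", "c"):
        print(attempt, "->", machine.enter_password(attempt))

    # positive test sequence: a valid password on the second attempt
    machine = LoginStateMachine("secret")
    print(machine.enter_password("wrong"))
    print(machine.enter_password("secret"))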
Error Guessing
In error guessing, the tester relies on experience and intuition to guess where defects are likely to hide, drawing on:
Previous experience of testing similar applications
Understanding of the system under test
Knowledge of typical implementation errors
Previously troubled areas
Historical data and test results
Conclusion
Test case design techniques allow you to design better test cases. There are five primarily used techniques:
Boundary value analysis is testing at the boundaries between partitions.
Equivalence class partitioning allows you to divide the set of test conditions into partitions whose members should be treated the same.
The decision table technique is used for functions which respond to a combination of inputs or events.
In the state transition technique, changes in input conditions change the state of the Application Under Test (AUT).
Error guessing is a software testing technique based on guessing the errors which can prevail in the code.
Mutation Testing
What is mutation testing?
Mutation testing, also known as code mutation testing, is a form of white box testing in which testers change specific components of an
application's source code to ensure a software test suite can detect the changes. Changes introduced to the software are intended to cause errors in
the program. Mutation testing is designed to ensure the quality of a software testing tool, not the applications it analyzes.
Mutation testing is typically used to conduct unit tests. The goal is to ensure a software test can detect code that isn't properly tested or hidden defects that other testing methods don't catch. Changes, called mutations, can be implemented by modifying an existing line of code. For example, a statement could be deleted or duplicated, true or false expressions can be changed, or other variables can be altered. Code with the mutations is then run against the existing test suite.
If the tests with the mutants detect the same number of issues as the test with the original program, then either the code has failed to execute, or the software testing suite being used has failed to detect the mutations. If this happens, the software test is reworked to become more effective. A successful mutation test will have different test results from the mutant code. After this, the mutants are discarded.
The software test tool can then be scored using the mutation score. The mutation score is the number of killed mutants divided by the total number of mutants, expressed as a percentage:
Mutation score = (number of killed mutants / total number of mutants, killed or surviving) x 100
For example, if 40 of 50 mutants are killed, the mutation score is (40 / 50) x 100 = 80%.
A mutation is a small syntactic change made to a program statement. Mutations typically contain one variable that causes a fault or bug. For
example, a mutation could look like the statement (A<B) changed to (A>B).
Testers intentionally introduce mutations to a program's code. Multiple versions of the original program are made, each with its own mutation; these versions are called mutants. The mutants are then tested along with the original application. After testing, testers compare the results to those of the original program test.
Once the testing software has been fixed, the mutants can be kept and reused in another code mutation test. If the test results from the mutant code and the original program are different, then the mutants can be discarded, or killed.
Mutants that are still alive after running the test are typically called live mutants, while those killed after mutation testing are called killed mutants. Equivalent mutants have the same meaning as the original source code even though they have different syntax. Equivalent mutants cannot be killed and are not counted as part of the mutation score.
Mutations generally fall into three categories:
Statement mutation. Statements are deleted or replaced with a different statement. For example, the statement "A=10 by B=5" is replaced with "A=5 by B=15."
Value mutation. Values are changed to find errors. For example, "A=15" is changed to "A=10" or "A=25."
Decision mutation. Arithmetic or logical operators are changed to detect errors. For example, "(A<B)" is changed to "(A>B)."
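The following minimal sketch shows mutation testing by hand: an original function, a mutant produced by a decision mutation, and a small test suite that kills the mutant. The function and test values are assumptions made for illustration:

# A minimal sketch of mutation testing: the original function, a mutant
# with a decision mutation ("<" changed to ">"), and a test suite that
# kills the mutant.
def max_of_two(a, b):           # original program
    return b if a < b else a


def max_of_two_mutant(a, b):    # decision mutation: (a < b) -> (a > b)
    return b if a > b else a


def test_suite(func):
    """Return True if all assertions pass for the given implementation."""
    try:
        assert func(2, 5) == 5
        assert func(7, 3) == 7
        return True
    except AssertionError:
        return False


if __name__ == "__main__":
    print("original passes:", test_suite(max_of_two))         # expected True
    print("mutant passes:  ", test_suite(max_of_two_mutant))  # False -> killed
    killed = 0 if test_suite(max_of_two_mutant) else 1
    print("mutation score:", killed / 1 * 100, "%")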
Mutation testing tools can help speed up the mutant generation process. The following are examples of mutation testing tools:
Insure++.
Jester for JUnit.
PIT for Java and the Java Virtual Machine.
MuClipse for Eclipse.
A mutation testing tool can be used to run unit tests against automatically modified code. Tools can also create reports that show killed and live mutants.
3. Run the mutation test with the code, ensuring the code and test work.
4. Make some mutations to the code and run the new mutated code through the test. Every mutant should have one error to validate the test's
efficiency.
5. Compare the results from the original code and the mutated version.
a. If the results don't match, the test successfully identified and killed the mutant.
b. If the test case produced the same result for the original and mutant code, the test failed to detect the mutant, and the mutant survives (is saved).
6. A mutation score can also be calculated. The score is given as a percentage that's determined using the formula noted above.
At first glance, regression testing could be confused with mutation testing. Regression testing tests new changes to a program to ensure the older program still works with these changes. Test department coders develop test scenarios that exercise new units of code after they have been written.
While regression testing is used to check whether new changes to a program cause an issue, mutation tests make small changes to code to ensure the test suite can detect them.
Static Testing
Static testing checks the application without executing the code. It is a verification process. Some essential activities done under static testing are business requirement reviews, design reviews, code walkthroughs, and test documentation reviews.
Static testing is performed in the white box testing phase, where the programmer checks every line of the code before handing it over to the test engineer.
Static testing can be done manually or with the help of tools to improve the quality of the application by finding errors at an early stage of development; that is why it is also called the verification process.
Document reviews, high- and low-level design reviews, and code walkthroughs take place in the verification process.
Dynamic Testing
Dynamic testing is testing which is done when the code is executed in the run-time environment. It is a validation process in which functional testing (unit, integration, and system testing) and non-functional testing (such as user acceptance testing) are performed.
We perform dynamic testing to check whether the application or software works correctly during and after installation, without any error.
The differences between static and dynamic testing are summarized below:
Static testing | Dynamic testing
In static testing, we check the code or the application without executing the code. | In dynamic testing, we check the code/application by executing the code.
Static testing includes activities like code review, walkthrough, etc. | Dynamic testing includes activities like functional and non-functional testing such as UT (usability testing), IT (integration testing), ST (system testing) and UAT (user acceptance testing).
Static testing is used to prevent defects. | Dynamic testing is used to find and fix the defects.
Static testing is a more cost-effective process. | Dynamic testing is a less cost-effective process.
Static testing can be performed before the compilation of code. | Dynamic testing can be done only after the executables are prepared.
Under static testing, we can perform statement coverage testing and structural testing. | Equivalence partitioning and boundary value analysis techniques are performed under dynamic testing.
Static testing involves a checklist and process which is followed by the test engineer. | Dynamic testing requires test cases for the execution of the code.
Reliability Metrics
Reliability metrics are used to quantitatively express the reliability of the software product. The choice of which metric to use depends upon the type of system to which it applies and the requirements of the application domain.
Some reliability metrics which can be used to quantify the reliability of the software product are as follows:
1. Mean Time to Failure (MTTF)
MTTF is defined as the time interval between two successive failures. An MTTF of 200 means that one failure can be expected every 200 time units. The time units are entirely dependent on the system, and they can even be stated in terms of the number of transactions. MTTF is appropriate for systems with long transactions, for example computer-aided design systems, where a designer works on a design for several hours, as well as word-processor systems.
To measure MTTF, we can record the failure data for n failures. Let the failures appear at the time instants t1, t2, ..., tn. MTTF is then the average interval between successive failures:
MTTF = Σ (ti+1 - ti) / (n - 1)
2. Mean Time to Repair (MTTR)
Once a failure occurs, some time is required to fix the error. MTTR measures the average time it takes to track down the errors causing the failure and to fix them.
3. Mean Time Between Failures (MTBF)
The MTTF and MTTR metrics can be combined to give MTBF: MTBF = MTTF + MTTR. Thus, an MTBF of 300 denotes that once a failure appears, the next failure is expected to appear only after 300 hours. In this metric, the time measurements are real time and not the execution time as in MTTF.
4. Rate of Occurrence of Failure (ROCOF)
ROCOF is the number of failures appearing in a unit time interval, i.e., the number of unexpected events over a specific time of operation. It is the frequency with which unexpected behavior is likely to appear. A ROCOF of 0.02 means that two failures are likely to occur in every 100 operational time units. It is also called the failure intensity metric.
5. Probability of Failure on Demand (POFOD)
POFOD is defined as the probability that the system will fail when a service is requested. It is the number of system failures divided by the number of service requests.
A POFOD of 0.1 means that one out of ten service requests may fail. POFOD is an essential measure for safety-critical systems and is relevant for protection systems where services are demanded occasionally.
6. Availability (AVAIL)
Availability is the probability that the system is available for use at a given time. It takes into account the repair time and the restart time of the system. An availability of 0.995 means that in every 1000 time units, the system is likely to be available for 995 of them. Availability is the percentage of time that a system is available for use, taking into account planned and unplanned downtime. If a system is down an average of four hours out of 100 hours of operation, its AVAIL is 96%.
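The sketch below computes these metrics from a small set of hypothetical failure and repair data; the numbers are assumptions chosen only to show the arithmetic:

# A minimal sketch that computes the reliability metrics described above
# (MTTF, MTTR, MTBF, availability) from hypothetical failure and repair data.
failure_times = [95, 210, 290, 410, 495]      # hours at which failures occurred
repair_times = [2.0, 1.5, 3.0, 2.5, 1.0]      # hours taken to fix each failure

# MTTF: average interval between successive failures
intervals = [t2 - t1 for t1, t2 in zip(failure_times, failure_times[1:])]
mttf = sum(intervals) / len(intervals)

# MTTR: average time to track down and repair a failure
mttr = sum(repair_times) / len(repair_times)

# MTBF combines the two: MTBF = MTTF + MTTR
mtbf = mttf + mttr

# Availability: uptime / (uptime + downtime)
availability = mttf / (mttf + mttr)

if __name__ == "__main__":
    print(f"MTTF = {mttf:.1f} h, MTTR = {mttr:.1f} h, "
          f"MTBF = {mtbf:.1f} h, AVAIL = {availability:.3f}")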
Software Metrics for Reliability
These metrics are used to improve the reliability of the system by assessing the quality of the requirements, the design and code, and the testing process.
Requirements Reliability Metrics
Requirements denote what features the software must include. They specify the functionality that must be contained in the software. The requirements must be written so that there is no misunderstanding between the developer and the client. The requirements must follow a valid structure to avoid the loss of valuable data.
The requirements should be thorough and detailed so that the design stage is simpler. The requirements should not include inadequate data. Requirements reliability metrics evaluate these quality factors of the requirements document.
Design and Code Reliability Metrics
The quality factors that exist in design and coding are complexity, size, and modularity. Complex modules are difficult to understand, and there is a high probability of bugs occurring in them. Reliability decreases if modules have a combination of high complexity and large size or high complexity and small size. These metrics also apply to object-oriented code, although additional metrics are required there to evaluate the quality.
Testing Reliability Metrics
Testing reliability metrics use two methods to evaluate reliability. The first ensures that the system is equipped with the tasks that are specified in the requirements; because of this, bugs due to a lack of functionality are reduced.
The second method is evaluating the code, finding the bugs, and fixing them. To ensure that the system includes the functionality specified, test plans are written that include multiple test cases. Each test case is based on one system state and tests some tasks that are based on an associated set of requirements. The goal of an effective verification program is to ensure that every element is tested, the implication being that if the system passes the test, the requirement's functionality is contained in the delivered system.
Reliability Growth Models
The reliability growth group of models measures and predicts the improvement of reliability through the testing process. A growth model represents the reliability or failure rate of a system as a function of time or of the number of test cases. Models included in this group are as follows.
1. Coutinho Model – Coutinho adapted the Duane growth model to represent the software testing process. Coutinho plotted the cumulative number of deficiencies discovered and the number of correction actions made versus the cumulative testing weeks on log-log paper. Let N(t) denote the cumulative number of failures and let t be the total testing time. The failure rate, λ(t), of the model can be expressed as
λ(t) = N(t) / t = β0 t^(-β1)
where β0 and β1 are the model parameters. The least squares method can be used to estimate the parameters of this model.
2. Wall and Ferguson Model – Wall and Ferguson proposed a model similar to the Weibull growth model for predicting the failure rate of software during testing. The cumulative number of failures at time t, m(t), can be expressed as
m(t) = a0 [b(t)]^β
where a0 and β are the unknown parameters. The function b(t) can be obtained as the number of test cases or the total testing time. Similarly, the failure rate function at time t is given by
λ(t) = m'(t) = a0 β [b(t)]^(β-1) b'(t)
Wall and Ferguson tested this model using several sets of software failure data and observed that the failure data correlate well with the model.
More generally, reliability growth models are mathematical models used to predict the reliability of a system over time. They are commonly used in software engineering to predict the reliability of software systems and to guide the testing and improvement process. Commonly used reliability growth models include the following:
1. Non-homogeneous Poisson Process (NHPP) Model: This model is based on the assumption that the number of failures in a system
follows a Poisson distribution. It is used to model the reliability growth of a system over time, and to predict the number of failures that
will occur in the future.
2. Duane Model: This model is based on the assumption that the rate of failure of a system decreases over time as the system is improved. It
is used to model the reliability growth of a system over time, and to predict the reliability of the system at any given time.
3. Gooitzen Model: This model is based on the assumption that the rate of failure of a system decreases over time as the system is improved,
but that there may be periods of time where the rate of failure increases. It is used to model the reliability growth of a system over time,
and to predict the reliability of the system at any given time.
4. Littlewood Model: This model is based on the assumption that the rate of failure of a system decreases over time as the system is
improved, but that there may be periods of time where the rate of failure remains constant. It is used to model the reliability growth of a
system over time, and to predict the reliability of the system at any given time.
Reliability growth models are useful tools for software engineers, as they can help to predict the reliability of a system over time and to guide the testing and improvement process. They can also help organizations to make informed decisions about the allocation of resources, and to prioritize improvements to the system.
It is important to note that reliability growth models are only predictions, and actual results may differ from the predictions. Factors such as changes in the system, changes in the environment, and unexpected failures can impact the accuracy of the predictions.
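As an illustration of how such a model is used, the sketch below evaluates the Goel-Okumoto model, a commonly used instance of the NHPP class; the parameter values a and b are assumptions, since in practice they would be estimated from observed failure data:

# A minimal sketch of using an NHPP (Goel-Okumoto) reliability growth model
# to predict the expected number of failures over test time. The parameters
# a and b are assumed for illustration; in practice they are estimated from
# observed failure data (e.g., by maximum likelihood or least squares).
import math

a = 120.0   # expected total number of failures (assumed)
b = 0.02    # per-hour failure detection rate (assumed)


def expected_failures(t):
    """Mean value function of the Goel-Okumoto model: m(t) = a(1 - e^(-bt))."""
    return a * (1.0 - math.exp(-b * t))


def failure_intensity(t):
    """Failure intensity (failures per hour): lambda(t) = a*b*e^(-bt)."""
    return a * b * math.exp(-b * t)


if __name__ == "__main__":
    for t in (50, 100, 200, 400):
        print(f"after {t:4d} test hours: "
              f"{expected_failures(t):6.1f} failures expected, "
              f"intensity {failure_intensity(t):.3f}/h")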
Benefits of reliability growth models include the following:
1. Predicting Reliability: Reliability growth models are used to predict the reliability of a system over time, which can help organizations to make informed decisions about the allocation of resources and the prioritization of improvements to the system.
2. Guiding the Testing Process: Reliability growth models can be used to guide the testing process, by helping organizations to determine
which tests should be run, and when they should be run, in order to maximize the improvement of the system’s reliability.
3. Improving the Allocation of Resources: Reliability growth models can help organizations to make informed decisions about the allocation
of resources, by providing an estimate of the expected reliability of the system over time, and by helping to prioritize improvements to the
system.
4. Identifying Problem Areas: Reliability growth models can help organizations to identify problem areas in the system, and to focus their
efforts on improving these areas in order to improve the overall reliability of the system.
Limitations of reliability growth models include the following:
1. Predictive Accuracy: Reliability growth models are only predictions, and actual results may differ from the predictions. Factors such as changes in the system, changes in the environment, and unexpected failures can impact the accuracy of the predictions.
2. Model Complexity: Reliability growth models can be complex, and may require a high level of technical expertise to understand and use
effectively.
3. Data Availability: Reliability growth models require data on the system’s reliability, which may not be available or may be difficult to
obtain.
What is Risk?
"Tomorrow problems are today's risk." Hence, a clear definition of a "risk" is a problem that could cause some loss or threaten the progress of the
project, but which has not happened yet.
These potential issues might harm the cost, schedule, or technical success of the project, the quality of our software product, or project team morale. Risk management is the process of identifying, addressing, and eliminating these problems before they can damage the project.
We need to differentiate risks, as potential issues, from the current problems of the project.
For example, a staff shortage, because we have not been able to recruit people with the right technical skills, is a current problem, but the threat of our technical people being hired away by the competition is a risk.
Risk Management
A software project can be affected by a large variety of risks. In order to be able to systematically identify the significant risks which might affect a software project, it is essential to classify risks into different classes. The project manager can then check which risks from each class are relevant to the project.
There are three main classifications of risks which can affect a software project:
1. Project risks
2. Technical risks
3. Business risks
1. Project risks: Project risks concern different forms of budgetary, schedule, personnel, resource, and customer-related problems. A vital project risk is schedule slippage. Since software is intangible, it is very difficult to monitor and control a software project. It is very difficult to control something which cannot be seen. For any manufacturing project, such as the manufacturing of cars, the project manager can see the product taking shape.
2. Technical risks: Technical risks concern potential design, implementation, interfacing, testing, and maintenance problems. They also include ambiguous specifications, incomplete specifications, changing specifications, technical uncertainty, and technical obsolescence. Most technical risks appear due to the development team's insufficient knowledge about the project.
3. Business risks: This type of risk includes the risk of building an excellent product that no one wants, losing budgetary or personnel commitments, etc.
Risks can also be categorized as follows:
1. Known risks: Those risks that can be uncovered after careful assessment of the project plan, the business and technical environment in which the project is being developed, and other reliable information sources (e.g., an unrealistic delivery date).
2. Predictable risks: Those risks that are hypothesized from previous project experience (e.g., past staff turnover).
3. Unpredictable risks: Those risks that can and do occur, but are extremely difficult to identify in advance.
Risk Assessment
The objective of risk assessment is to rank the risks in terms of their loss-causing potential. For risk assessment, first, every risk should be rated in two ways:
the likelihood of the risk coming true (r)
the consequence of the problems associated with that risk (s)
Based on these two factors, the priority of each risk can be estimated:
p = r * s
where p is the priority with which the risk must be handled, r is the probability of the risk becoming true, and s is the severity of loss caused due to the risk becoming true. If all identified risks are prioritized, then the most likely and damaging risks can be controlled first, and more comprehensive risk abatement methods can be designed for these risks.
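A minimal sketch of this prioritization is shown below; the risks and their probability and severity ratings are assumptions made for illustration:

# A minimal sketch of risk prioritization using the relation above,
# p = r * s, where r is the probability of the risk occurring and s is
# the severity of the loss if it does.
risks = [
    # (risk, probability r, severity s on a 1-10 scale)
    ("key developer leaves the project", 0.30, 9),
    ("schedule slippage on integration", 0.50, 6),
    ("requirements change late in the project", 0.25, 7),
]

if __name__ == "__main__":
    prioritized = sorted(risks, key=lambda item: item[1] * item[2],
                         reverse=True)
    for name, r, s in prioritized:
        print(f"p = {r * s:4.2f}  {name}")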
1. Risk Identification: The project manager needs to anticipate the risks in the project as early as possible, so that the impact of the risks can be reduced by effective risk management planning.
A project can be affected by a large variety of risks. In order to identify the significant risks which might affect a project, it is necessary to categorize risks into different classes.
There are different types of risks which can affect a software project:
1. Technology risks: Risks that arise from the software or hardware technologies that are used to develop the system.
2. People risks: Risks that are associated with the people in the development team.
3. Organizational risks: Risks that arise from the organizational environment where the software is being developed.
4. Tools risks: Risks that arise from the software tools and other support software used to create the system.
5. Requirement risks: Risks that arise from changes to the customer requirements and the process of managing the requirements change.
6. Estimation risks: Risks that arise from the management estimates of the resources required to build the system.
2. Risk Analysis: During the risk analysis process, you have to consider every identified risk and make a judgment about the probability and seriousness of that risk.
There is no simple way to do this. You have to rely on your own judgment and experience of previous projects and of the problems that arose in them.
It is not possible to make an exact numerical estimate of the probability and seriousness of each risk. Instead, you should assign the risk to one of several bands:
1. The probability of the risk might be determined as very low (0-10%), low (10-25%), moderate (25-50%), high (50-75%) or very high (+75%).
2. The effect of the risk might be determined as catastrophic (threaten the survival of the plan), serious (would cause significant delays),
tolerable (delays are within allowed contingency), or insignificant.
Risk Control
It is the process of managing risks to achieve desired outcomes. After all the identified risks of a project are assessed, plans must be made to contain the most harmful and the most likely risks. Different risks need different containment methods. In fact, most risks require ingenuity on the part of the project manager in tackling the risk.
1. Avoid the risk: This may take several forms, such as discussing with the client to change the requirements to decrease the scope of the work, giving incentives to the engineers to avoid the risk of staff turnover, etc.
2. Transfer the risk: This method involves getting the risky component developed by a third party, buying insurance cover, etc.
3. Risk reduction: This means planning ways to contain the loss due to the risk. For instance, if there is a risk that some key personnel might leave, new recruitment can be planned.
Risk Leverage: To choose between the various methods of handling a risk, the project manager must consider the cost of controlling the risk and the corresponding reduction in risk exposure. For this, the risk leverage of the various risks can be computed.
Risk leverage is the difference in risk exposure divided by the cost of reducing the risk:
Risk leverage = (risk exposure before reduction - risk exposure after reduction) / (cost of reduction)
For example, if a risk's exposure is $100,000 before a reduction step and $20,000 after it, and the reduction costs $25,000, the leverage is (100,000 - 20,000) / 25,000 = 3.2.
1. Risk planning: The risk planning process considers each of the key risks that have been identified and develops ways to manage these risks.
For each of the risks, you have to think of the actions that you might take to minimize the disruption to the project if the problem identified in the risk occurs.
You should also think about the information that you might need to collect while monitoring the project so that problems can be anticipated.
Again, there is no easy process that can be followed for contingency planning. It relies on the judgment and experience of the project manager.
2. Risk Monitoring: Risk monitoring is the process of checking that your assumptions about the product, process, and business risks have not changed.