Unit-4
IMPLEMENTATION AND TESTING
Software Implementation
• Direct Cutover – Replacing an old system with a new one at a point in time.
• Big Bang – A large-scale direct cutover that impacts multiple systems and processes.
• Emergent Change – Launching changes on an extremely frequent basis, so that major change can occur with only incremental risk.
• Phased – Breaking projects into releases of manageable complexity. Change is preplanned from a set of requirements and is less fluid and responsive than an emergent implementation.
• Pilot – Implementing a change on a limited basis in order to reduce risk. Can apply to implementations both large and small.
• Parallel Run – Operating both the old and new versions of systems and processes until there is confidence that the new version is ready to support business objectives.
Software Implementation
Step-by-Step Procedure
Step 1: Assess Development Organization.
Step 2: Plan Process Implementation.
Step 3: Execute Process Implementation.
Step 4: Evaluate Process Implementation Effort.
Software Implementation Techniques
Refactoring:
• Refactoring is usually motivated by noticing a code smell.
• For example, the method at hand may be very long, or it may be a near duplicate of another nearby method.
• Once recognized, such problems can be addressed by refactoring the source code, or transforming it into a new form that behaves the same as before but that no longer "smells".
There are two general categories of benefits to the activity of refactoring.
Maintainability. It is easier to fix bugs because the source code is easy to read and the intent of
its author is easy to grasp. This might be achieved by reducing large monolithic routines into a
set of individually concise, well-named, single-purpose methods. It might be achieved by
moving a method to a more appropriate class, or by removing misleading comments.
Extensibility. It is easier to extend the capabilities of the application if it uses recognizable design patterns, and it provides some flexibility where none may have existed before.
• Before applying a refactoring to a section of code, a solid set of automatic unit tests is
needed. The tests are used to demonstrate that the behavior of the module is correct
before the refactoring.
• The tests can never prove that there are no bugs, but the important point is that this
process can be cost-effective: good unit tests can catch enough errors to make them
worthwhile and to make refactoring safe enough.
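As a minimal sketch of this idea (the function and test names below are invented for illustration, not taken from the slides), a long routine is refactored into concise, single-purpose functions, and a unit test written beforehand demonstrates that the behaviour is unchanged:

```python
# Hypothetical example: a "long method" smell refactored into small,
# well-named, single-purpose functions. The unit test guards behaviour.

def invoice_total_before(items, vip):
    # Original style: one routine mixing summing, discounting and tax.
    total = 0.0
    for price, qty in items:
        total += price * qty
    if vip:
        total = total * 0.9
    return round(total * 1.2, 2)   # assumed 20% tax

def _subtotal(items):
    return sum(price * qty for price, qty in items)

def _apply_discount(amount, vip):
    return amount * 0.9 if vip else amount

def _apply_tax(amount, rate=0.2):
    return round(amount * (1 + rate), 2)

def invoice_total(items, vip):
    # Refactored: same behaviour, expressed as three concise steps.
    return _apply_tax(_apply_discount(_subtotal(items), vip))

def test_refactoring_preserves_behaviour():
    items = [(10.0, 2), (5.0, 1)]
    assert invoice_total(items, vip=True) == invoice_total_before(items, vip=True)
    assert invoice_total(items, vip=False) == invoice_total_before(items, vip=False)
```

Running the test before and after the transformation is what makes the refactoring "safe enough" in the sense described above.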
Challenges of Large Code Base
How to ensure…
Maintainable code?
DRY code?
Readable code?
Bug-free code?
The primary purpose of code review is to make sure that the overall code health
of Google's code base is improving over time. All of the tools and processes
of code review are designed to this end. In order to accomplish this, a series of
trade-offs have to be balanced.
Code Review
Code Review: a constructive review of a fellow developer's code, and a required sign-off from another team member before a developer is permitted to check in changes or new code. Code review, or peer code review, is the act of consciously and systematically convening with one's fellow programmers to check each other's code for mistakes, and it has repeatedly been shown to accelerate and streamline the process of software development like few other practices can.
Mechanics of code reviews
What: The reviewer gives suggestions for improvement at a logical and/or structural level, so that the code conforms to a previously agreed-upon set of quality standards.
Feedback leads to refactoring, followed by a second code review.
Eventually the reviewer approves the code.
When: When the code author has finished a coherent system change that is otherwise ready for check-in.
The change shouldn't be too large or too small.
The review happens before the code is committed to the repository or incorporated into the new build.
Code reviews are a very common industry practice.
They are made easier by advanced tools that:
integrate with configuration management systems
highlight changes (i.e., a diff function)
allow traversing back into history
Code Inspection is the most formal type of review, which is a kind of static testing to
avoid the defect multiplication at a later stage.
• The main purpose of code inspection is to find defects, and it can also identify opportunities for process improvement.
• An inspection report lists the findings, which include metrics that can be used to aid
improvements to the process as well as correcting defects in the document under
review.
• Preparation before the meeting is essential, which includes reading of any source
documents to ensure consistency.
• Inspections are often led by a trained moderator, who is not the author of the code.
• The inspection process is the most formal type of review based on rules and
checklists and makes use of entry and exit criteria.
• It usually involves peer examination of the code and each one has a defined set of
roles.
• After the meeting, a formal follow-up process is used to ensure that corrective
action is completed in a timely manner.
Code Inspection
Where Code Inspection fits in
Driver and Stub Module
In the field of software testing, the terms stubs and drivers refer to replicas of modules, which act as substitutes for undeveloped or missing modules. Stubs and drivers are specifically developed to meet the necessary requirements of the unavailable modules and are immensely useful in getting the expected results.
Stubs and drivers are two types of test harness, which is a collection of software and test data configured together in order to test a unit of a program by simulating a variety of conditions while constantly monitoring its outputs and behaviour. Stubs and drivers are used in top-down integration and bottom-up integration testing respectively, and are created mainly for testing purposes.
Stubs and Driver
Stubs are used to test modules and are created by the team of testers during the process of Top-Down Integration Testing. With the assistance of these test stubs, testers are capable of simulating the behaviour of the lower-level modules that are not yet integrated with the software. Moreover, stubs help simulate the activity of the missing components.
Types of Stubs:
There are basically four types of stubs used in top-down approach of integration testing, which
are mentioned below:
• Displays a trace message.
• Displays the values of parameters.
• Returns a value that is used by the module under test.
• Returns a value selected by the parameters passed by the module being tested.
Drivers, like stubs, are used by software testers to fulfil the requirements of missing or incomplete components and modules. They are usually more complex than stubs and are developed during the Bottom-Up approach of Integration Testing. Drivers can be utilized to test the lower levels of the code when the upper-level modules are not yet developed. Drivers act as pseudo code that is mainly used when the stub modules are ready but the primary modules are not.
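A minimal sketch of both ideas in Python (the module names and values are invented for illustration): a stub stands in for an unfinished lower-level module during top-down integration, while a driver calls a finished lower-level module during bottom-up integration:

```python
# Hypothetical stub (top-down) and driver (bottom-up).

# --- Top-down: the high-level module is real, the low-level one is a stub ---
def tax_service_stub(amount):
    """Stands in for an unfinished tax module; simply returns a fixed
    value of the kind the module under test expects."""
    return 0.2 * amount

def compute_invoice(amount, tax_service=tax_service_stub):
    # Higher-level module under test in top-down integration.
    return amount + tax_service(amount)

# --- Bottom-up: the low-level module is real, a driver calls it ---
def tax_service(amount):
    # Finished lower-level module.
    return 0.2 * amount

def driver_for_tax_service():
    """Driver: calls the lower-level module and passes test cases to it,
    because the real caller (the invoice module) is not ready yet."""
    cases = [(100, 20.0), (0, 0.0)]
    for amount, expected in cases:
        assert tax_service(amount) == expected

if __name__ == "__main__":
    assert compute_invoice(100) == 120.0   # exercised through the stub
    driver_for_tax_service()               # exercised through the driver
    print("stub- and driver-based checks passed")
```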
Testing: Definition
SOFTWARE TESTING is defined as an activity to check whether the actual results match the expected results and to ensure that the software system is defect free. Software testing also helps to identify errors, gaps, or missing requirements relative to the actual requirements.
“Testing is the process of executing a program with the intention of finding errors.” –
Myers.
WHY TESTING IS NECESSARY?
To check whether the system meets the requirements and can be executed successfully in the intended environment:
• Business requirements
• Programmer code
• Hardware configuration
How do you test a system?
Input test data to the system.
Observe the output:
Check if the system behaved as expected or not?
If the program does not behave as expected:
note the conditions under which it failed.
later debug and correct.
Type
Manual Testing: Manual testing is the process of testing software by hand to learn more
about it, to find what is and isn’t working.
Automation Testing: Automation testing is the process of testing the software using an
automation tool to find the defects. In this process, testers execute the test scripts and
generate the test results automatically by using automation tools. Some of the famous
automation testing tools for functional testing are QTP/UFT and Selenium.
Method
Static Testing:
It is also known as Verification in Software Testing. Verification is a static method of checking documents and files. Verification is the process of ensuring that we are building the product right.
Activities involved here are Inspections, Reviews, and Walkthroughs.
Dynamic Testing:
It is also known as Validation in Software Testing. Validation is a dynamic process of testing the real product. Validation is the process of checking whether we are building the right product.
The activity involved here is testing the software application.
Verification Vs Validation
1. Verification is a static practice of verifying documents, design, code and program. Validation is a dynamic mechanism of validating and testing the actual product.
2. Verification does not involve executing the code. Validation always involves executing the code.
Internal and External Views
Approaches
White box Testing: It tests a software solution's internal structure, design, and coding. It is
also known as Clear Box testing, Open Box testing, Structural testing, Transparent Box testing,
Code-Based testing, and Glass Box testing. It is usually performed by developers. In white-box
testing, an internal perspective of the system, as well as programming skills, are used to design
test cases. This testing is usually done at the unit level.
Continued..
Black box Testing: It is also called as Behavioral/Specification-Based/Input-Output
Testing. Black Box Testing is a software testing method in which testers evaluate the
functionality of the software under test without looking at the internal code structure.
Continued..
Grey Box Testing: Grey box is the combination of both White Box and Black Box
Testing. The tester who works on this type of testing needs to have access to design
documents. This helps to create better test cases in this process.
Continued..
Terminologies of Testing
Error: a mistake made during development.
Failure: the manifestation of an error.
Test Case: a triplet [Input, State, Output].
Test Suite: the set of all test cases.
Testing Activities
Test Suite Design
Running test cases and checking the results to detect failures
Debugging
Error Correction
White Box Testing Techniques
Loop testing is a type of white box testing which exclusively focuses on the validity of loop constructs. Loops are simple to test unless dependencies exist between loops, or between a loop and the code it contains.
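A minimal sketch (hypothetical function) of the usual loop-testing checklist: skip the loop entirely, one pass, two passes, a typical number of passes, and a large number of passes:

```python
# Hypothetical loop testing of a simple counted loop.

def total(values):
    s = 0
    for v in values:     # loop construct under test
        s += v
    return s

def test_loop_boundaries():
    assert total([]) == 0                 # skip the loop entirely
    assert total([5]) == 5                # one pass
    assert total([1, 2]) == 3             # two passes
    assert total(list(range(10))) == 45   # typical number of passes
    assert total([1] * 1000) == 1000      # near the maximum expected size
```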
Branch Testing
• Branch testing is also known as conditional testing or decision testing, and it comes under the white box testing technique.
• It makes sure that each possible outcome from the condition is tested at least once.
Control Flow Testing
Control flow testing uses the control structure of a program to develop the test cases for the
program.
The test cases are developed to sufficiently cover the whole control structure of the program.
Data Flow Testing
Data flow testing is another type of white box testing which looks at how data moves within
a program.
In data flow testing the control flow graph is annotated with information about how the program variables are defined and used.
Condition testing
Condition Testing is another structural testing method that is useful during unit testing,
using source code or detailed pseudocode as a reference for test design. Its goal is the
thorough testing of every condition or decision that occurs in the source code. Condition coverage is also known as Predicate Coverage, in which each Boolean expression is evaluated to both TRUE and FALSE.
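A minimal sketch (hypothetical function) contrasting branch (decision) coverage with condition (predicate) coverage for a compound Boolean expression:

```python
# Hypothetical function with a compound condition.

def can_withdraw(balance, amount, overdraft_allowed):
    if amount <= balance or overdraft_allowed:   # compound Boolean expression
        return True
    return False

# Branch (decision) coverage: the decision as a whole must be TRUE at least
# once and FALSE at least once.
def test_branch_coverage():
    assert can_withdraw(100, 50, False) is True    # decision TRUE
    assert can_withdraw(10, 50, False) is False    # decision FALSE

# Condition (predicate) coverage: each atomic condition is evaluated to both
# TRUE and FALSE.
def test_condition_coverage():
    assert can_withdraw(100, 50, False) is True    # amount<=balance T, overdraft F
    assert can_withdraw(10, 50, True) is True      # amount<=balance F, overdraft T
    assert can_withdraw(10, 50, False) is False    # both conditions F
```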
Basis Path Testing
This testing allows the test case designer to produce a logical complexity measure of the procedural design and use this measure as a guide for defining a basis set of execution paths.
Test cases derived from the basis set are guaranteed to execute every statement in the program at least once.
Basis path testing makes sure that each independent path through the code is taken in a predetermined order.
Basis Path Testing
Step 1: Draw the flow graph of the function/program under consideration (the graph itself is not reproduced here).
Step 2 : Determine the independent paths.
Path 1: 1 - 2 - 5 - 7
Path 2: 1 - 2 - 5 - 6 -5- 7
Path 3: 1 - 2 - 3 - 2 - 5 - 6 - 7
Path 4: 1 - 2 - 3 - 4 - 2 - 5 - 6 - 7
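Since the flow graph from the slide is not reproduced, here is a self-contained sketch of the same idea with a hypothetical function: its cyclomatic complexity is 3, so a basis set of three independent paths is exercised by three test cases:

```python
# Hypothetical basis path testing: two decisions + 1 = cyclomatic complexity 3,
# so three independent paths form the basis set, one test case per path.

def classify(n):
    if n < 0:            # decision 1
        label = "negative"
    elif n == 0:         # decision 2
        label = "zero"
    else:
        label = "positive"
    return label

def test_basis_paths():
    assert classify(-3) == "negative"   # path taken when decision 1 is true
    assert classify(0) == "zero"        # path taken when decision 2 is true
    assert classify(7) == "positive"    # path taken when both decisions are false
```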
Advantages of White Box Testing
Black Box Testing
Techniques
Following are some techniques that can be used for designing black box tests.
• Equivalence Partitioning: It is a software test design technique that involves
dividing input values into valid and invalid partitions and selecting representative
values from each partition as test data.
• Boundary Value Analysis: It is a software test design technique that involves the
determination of boundaries for input values and selecting values that are at the
boundaries and just inside/ outside of the boundaries as test data.
• Cause-Effect Graphing: It is a software test design technique that involves
identifying the cases (input conditions) and effects (output conditions),
producing a Cause-Effect Graph, and generating test cases accordingly.
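A minimal sketch of equivalence partitioning and boundary value analysis, assuming a hypothetical specification that a valid age lies between 18 and 60 inclusive:

```python
# Hypothetical specification: valid age is 18..60 inclusive.

def is_valid_age(age):
    return 18 <= age <= 60

def test_equivalence_partitions():
    assert is_valid_age(35) is True      # representative of the valid partition
    assert is_valid_age(5) is False      # representative of the "too low" partition
    assert is_valid_age(75) is False     # representative of the "too high" partition

def test_boundary_values():
    assert is_valid_age(17) is False     # just below the lower boundary
    assert is_valid_age(18) is True      # on the lower boundary
    assert is_valid_age(60) is True      # on the upper boundary
    assert is_valid_age(61) is False     # just above the upper boundary
```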
Black Box Testing
Example
A tester, without knowledge of the internal structures of a website, tests the web
pages by using a browser; providing inputs (clicks, keystrokes) and verifying the
outputs against the expected outcome.
Levels Applicable To
Black Box Testing method is applicable to the following levels of software testing:
• Integration Testing
• System Testing
• Acceptance Testing
The higher the level, and hence the bigger and more complex the box, the more the black box testing method comes into use.
Black Box Testing
Advantages
Tests are done from a user’s point of view and will help in exposing discrepancies in the
specifications.
Tester need not know programming languages or how the software has been implemented.
Tests can be conducted by a body independent from the developers, allowing for an objective
perspective and the avoidance of developer-bias.
Test cases can be designed as soon as the specifications are complete.
Disadvantages
Only a small number of possible inputs can be tested and many program paths will be left
untested.
Without clear specifications, which is the situation in many projects, test cases will be difficult
to design.
Tests can be redundant if the software designer/developer has already run a test case.
Ever wondered why a soothsayer closes their eyes when foretelling events? That is almost the case in black box testing.
Unit Testing
UNIT TESTING is a level of software testing where individual units/ components of a
software are tested. The purpose is to validate that each unit of the software performs as
designed. A unit is the smallest testable part of any software. It usually has one or a few
inputs and usually a single output.
Objective
Key reasons to perform unit testing.
• Unit tests help to fix bugs early in the development cycle and save costs.
• They help the developers to understand the code base and enable them to make changes quickly.
• Good unit tests serve as project documentation.
• Unit tests help with code reuse. Migrate both your code and your tests to your new project, then tweak the code until the tests run again.
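A minimal sketch of a unit test using Python's built-in unittest framework (the tools listed below, such as JUnit and NUnit, follow the same pattern); the function and test names are hypothetical:

```python
import unittest

def add(a, b):
    """Unit under test: the smallest testable part of the software."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()
```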
Technique and example
Techniques
Code coverage techniques used in unit testing are listed below:
Statement Coverage
Decision Coverage
Branch Coverage
Condition Coverage
Finite State Machine Coverage
Tools
• JUnit
• NUnit
• JMockit
• EMMA
• PHPUnit
• Symfony Lime
• Test Unit
• RSpec
Advantage and Disadvantage
Advantage
• Developers looking to learn what functionality is provided by a unit and how to use
it can look at the unit tests to gain a basic understanding of the unit API.
• Unit testing allows the programmer to refactor code at a later date, and make sure
the module still works correctly (i.e. Regression testing). The procedure is to write
test cases for all functions and methods so that whenever a change causes a fault, it
can be quickly identified and fixed.
• Due to the modular nature of the unit testing, we can test parts of the project
without waiting for others to be completed.
Disadvantages
• Unit testing can't be expected to catch every error in a program. It is not possible to
evaluate all execution paths even in the most trivial programs
• Unit testing by its very nature focuses on a unit of code. Hence it can't catch
integration errors or broad system level errors.
Regression Testing
REGRESSION TESTING is defined as a type of software testing to confirm that a recent program or
code change has not adversely affected existing features. Regression Testing is nothing but a full or
partial selection of already executed test cases which are re-executed to ensure existing
functionalities work fine.
Regression Testing is required when there is a
• Change in requirements and code is modified according to the requirement
• New feature is added to the software
• Defect fixing
• Performance issue fix
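A minimal sketch (hypothetical functions and an assumed defect fix) of how a regression suite works in practice: existing tests are re-executed unchanged, and a new test pins down the fixed defect so it cannot silently reappear after later changes:

```python
# Hypothetical defect fix: members previously received no discount on
# prices under 100; the fix applies the discount uniformly.

def discount(price, is_member):
    if is_member:
        return round(price * 0.9, 2)
    return price

def test_existing_behaviour_unchanged():
    # Re-executed on every change to confirm existing functionality still works.
    assert discount(200, is_member=False) == 200

def test_regression_member_discount_small_price():
    # Guards the defect fix; if the old bug returns, this test fails.
    assert discount(50, is_member=True) == 45.0
```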
Regression Testing Tools
If your software undergoes frequent changes, regression testing costs will escalate.
In such cases, Manual execution of test cases increases test execution time as well as costs.
Automation of regression test cases is the smart choice in such cases. The extent of automation
depends on the number of test cases that remain re-usable for successive regression cycles.
Following are the most important tools used for both functional and regression testing in software
engineering.
• Selenium
• Quick Test Professional (QTP)
• Rational Functional Tester (RFT)
Re-Testing and Regression Testing
Retesting means testing the functionality or bug again to ensure the code is fixed. If it is not
fixed, Defect needs to be re-opened. If fixed, Defect is closed.
Regression testing means testing your software application when it undergoes a code change to
ensure that the new code has not affected other parts of the software.
Following are the major testing problems for doing regression testing:
• With successive regression runs, test suites become fairly large. Due to time and budget
constraints, the entire regression test suite cannot be executed
• Minimizing the test suite while achieving maximum Test coverage remains a challenge
• Determination of frequency of Regression Tests, i.e., after every modification or every build
update or after a bunch of bug fixes, is a challenge.
Integration Testing
INTEGRATION TESTING is a level of software testing where individual units are combined
and tested as a group. The purpose of this level of testing is to expose faults in the interaction
between integrated units. Test drivers and test stubs are used to assist in Integration Testing.
For Example, software and/or hardware components are combined and tested progressively until
the entire system has been integrated
Types of Integration testing
• Big Bang integration testing.
• Top Down integration testing.
• Bottom Up integration testing.
• Mixed or sandwiched integration testing.
Big-Bang Approach
Here all the modules are integrated in a single step, i.e., linked together to make up the system, and then tested as a whole.
Problems:
• Once an error is found during testing, it is very difficult to localize, as the error may potentially lie in any of the modules.
• So it is very expensive.
Big-Bang Approach
(Diagram: individually unit-tested modules A, B, C, D, E and F are combined directly into a single system test.)
Top-down Approach
The subsystem in the top layer of the call hierarchy is tested first, using stubs in place of the lower-level subsystems. Then combine the subsystems that are called by the already tested subsystems and test the resulting collection of subsystems. This is repeated until all subsystems are included.
(Diagram: call hierarchy with A in Layer I; B, C and D in Layer II; E, F and G in Layer III. Test sequence: Test A → Test A, B, C, D → Test A, B, C, D, E, F, G.)
Bottom-up Approach
The subsystems in the lowest layer of the call hierarchy are tested individually. Then the next subsystems are tested that call the previously tested subsystems. This is done repeatedly until all subsystems are included in the testing. A special program is needed to do the testing:
Test Driver: a routine that calls a subsystem and passes a test case to it.
(Diagram: the same call hierarchy. Test sequence: Test E and Test F feed into Test B, E, F; Test G feeds into Test D, G; Test C is run on its own; finally Test A, B, C, D, E, F, G.)
Sandwich Approach
Combines the top-down strategy with the bottom-up strategy. Here testing can start as soon as modules become available after unit testing. Both stubs and drivers are required to be designed. This overcomes the problems of the pure top-down and bottom-up approaches.
(Diagram: the bottom-up layer tests – Test E and Test F into Test B, E, F; Test G into Test D, G – run alongside the top-down Test A, B, C, D, and all converge on Test A, B, C, D, E, F, G.)
System Testing
1. Recovery testing
– Tests for recovery from system faults
– Forces the software to fail in a variety of ways and verifies that recovery
is properly performed
– If recovery is automatic, reinitialization, checkpointing mechanisms, data recovery, and restart are evaluated for correctness
– If recovery requires human intervention, the mean-time-to-repair is evaluated to
determine whether it is within acceptable limits
2. Security testing
– Verifies that protection mechanisms built into a system will, in fact,
protect it from improper access
– During security testing, the tester plays the role of the individual
who desires to penetrate the system.
– Anything goes! The tester may attempt to acquire passwords through
external clerical means.
– may attack the system with custom software designed to break down any
defenses that have been constructed
3. Stress testing
– Executes a system in a manner that demands resources in abnormal
quantity, frequency, or volume.
– Stress tests are designed to confront programs with abnormal situations. In
essence, the tester who performs stress testing asks: How high can we
crank this up before it fails?
– A variation of stress testing is a technique called sensitivity testing.
• A very small range of data contained within the bounds of valid data for a
program may cause extreme and even erroneous processing or profound
performance degradation.
• Sensitivity testing attempts to uncover data combinations within valid input
classes that may cause instability or improper processing.
4. Performance Testing
• Performance testing is designed to test the run-time performance of
software within the context of an integrated system.
• Performance testing occurs throughout all steps in the testing process.
Even at the unit level, the performance of an individual module may be
assessed as tests are conducted
• Performance tests are often coupled with stress testing and usually require both hardware and software instrumentation.
• That is, it is often necessary to measure resource utilization
(e.g., processor cycles) in an exacting fashion
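A minimal sketch (hypothetical operation and an assumed time budget) of a stress-flavoured performance check: it times a run under abnormal volume and fails when the budget is exceeded. Real performance testing would additionally instrument resource utilization such as processor cycles:

```python
import time

def process(records):
    # Stand-in for the operation whose run-time performance is being measured.
    return sorted(records)

def test_performance_under_load():
    records = list(range(200_000, 0, -1))     # abnormal volume, stress-style input
    start = time.perf_counter()
    process(records)
    elapsed = time.perf_counter() - start
    # Assumed budget of 1 second; a real budget would come from requirements.
    assert elapsed < 1.0, f"too slow: {elapsed:.2f}s"
```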
System Testing
5. Usability Testing - mainly focuses on the user's ease of using the application, flexibility in handling controls, and the ability of the system to meet its objectives.
6.Load Testing- is necessary to know that a software solution will perform under real-life loads.
7.Regression Testing- involves testing done to make sure none of the changes made over the
course of the development process have caused new bugs. It also makes sure no old bugs appear
from the addition of new software modules over time.
8.Migration Testing- is done to ensure that the software can be moved from older system
infrastructures to current system infrastructures without any issues.
9.Functional Testing - Also known as functional completeness testing, Functional Testing involves
trying to think of any possible missing functions. Testers might make a list of additional
functionalities that a product could have to improve it during functional testing.
User Acceptance Testing
Validation Testing
Validation testing is the process of ensuring if the tested and developed software satisfies
the client /user needs. The business requirement logic or scenarios have to be tested in detail.
The Art of Debugging
• Debugging occurs as a consequence of successful testing. When a test
case uncovers an error, debugging is an action that results in the removal of the
error.
The debugging process
The debugging process attempts to match symptom with cause, thereby
leading to error correction.
The debugging process will usually have one of two outcomes:
(1) the cause will be found and corrected, or
(2) the cause will not be found. In the latter case, the person performing debugging may suspect a cause, design a test case to help validate that suspicion, and work toward error correction in an iterative fashion.
– The debugging process begins with the execution of a test case. Results are assessed, and a lack of correspondence between expected and actual performance is observed.
The Art of Debugging
• Characteristics of bugs
– The symptom and the cause may be geographically remote.
– The symptom may disappear (temporarily) when another error is
corrected
– The symptom may actually be caused by non-errors
– The symptom may be caused by human error that is not easily
traced
– The symptom may be a result of timing problems, rather than processing problems
– It may be difficult to accurately reproduce input conditions
– The symptom may be intermittent.
– The symptom may be due to causes that are distributed across a
number of tasks running on different processors
2. Psychological Considerations
• Unfortunately, there appears to be some evidence that debugging ability is an innate human trait. Some people are good at it and others are not.
• Although experimental evidence on debugging is open to many
interpretations, large variances in debugging ability have been reported for
programmers with the same education and experience.
3. Debugging Strategies
• Objective of debugging is to find and correct the cause of a software error or
defect.
• Bugs are found by a combination of systematic evaluation, intuition, and luck.
• Debugging methods and tools are not a substitute for careful evaluation
based on a complete design model and clear source code
• There are three main debugging strategies
1. Brute force 2. Backtracking 3. Cause elimination
Brute Force
• Most commonly used and least efficient method for isolating the cause of a
software error
• Used when all else fails.
• Involves the use of memory dumps, run-time traces, and
output statements
• Leads many times to wasted effort and time
Backtracking
– Can be used successfully in small programs
– The method starts at the location where a symptom has been uncovered
– The source code is then traced backward (manually) until the location
of the cause is found
– In large programs, the number of potential backward paths may become
unmanageably large
• Cause Elimination
– Involves the use of induction or deduction and introduces the concept of
binary partitioning
• Induction (specific to general): prove that a specific starting value is true; then prove that the general case is true.
• Deduction (general to specific): show that a specific conclusion follows from a set of general premises.
– Data related to the error occurrence are organized to isolate potential
causes
– A cause hypothesis is devised, and the mentioned data are used to
prove or disprove the hypothesis
– Alternatively, a list of all possible causes is developed, and tests are conducted to eliminate each cause
– If initial tests indicate that a particular cause hypothesis shows promise,
data are refined in an attempt to isolate the bug.
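A minimal sketch of the binary-partitioning idea behind cause elimination, applied here to input data (the failing condition is invented for illustration, and the failure is assumed to be reproducible): repeatedly split the suspect input and keep the half that still reproduces the failure, narrowing down what provokes the bug:

```python
# Hypothetical binary partitioning over failing input data.

def failing(data):
    # Stand-in for "running the program on this input reproduces the failure";
    # here we assume negative values trigger the bug.
    return any(x < 0 for x in data)

def isolate(data):
    while len(data) > 1:
        mid = len(data) // 2
        left, right = data[:mid], data[mid:]
        data = left if failing(left) else right   # keep the half that still fails
    return data

print(isolate([4, 7, 1, -3, 9, 2]))   # -> [-3], the input that provokes the failure
```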
CASE TOOL
CASE tools are a set of software application programs which are used to automate SDLC activities. CASE tools are used by software project managers, analysts and engineers to develop software systems.
There are a number of CASE tools available to simplify various stages of the Software Development Life Cycle, such as analysis tools, design tools, project management tools, database management tools and documentation tools, to name a few.
The use of CASE tools accelerates the development of the project to produce the desired result and helps to uncover flaws before moving ahead to the next stage of software development.
CASE TOOL Types
Diagram tools
These tools are used to represent system components, data and control flow among various
software components and system structure in a graphical form. For example, Flow Chart Maker
tool for creating state-of-the-art flowcharts.
Process Modeling Tools
Process modeling is a method to create a software process model, which is used to develop the software. Process modeling tools help managers to choose a process model or modify it as per the requirements of the software product. For example, EPF Composer.
Project Management Tools
These tools are used for project planning, cost and effort estimation, project scheduling and
resource planning. Managers have to ensure that project execution complies with every step of
software project management. Project management tools help in storing and sharing project
information in real-time throughout the organization. For example, Creative Pro Office, Trac
Project, Basecamp.
Analysis Tools
These tools help to gather requirements, automatically check for any inconsistency, inaccuracy in
the diagrams, data redundancies or erroneous omissions. For example, Accept 360, Accompa, Case
Complete for requirement analysis, Visible Analyst for total analysis
CASE TOOL Types
Documentation Tools
Documentation tools generate documents for technical users and end users. Technical users are mostly in-house professionals of the development team who refer to the system manual, reference manual, training manual, installation manual, etc. The end-user documents describe the functioning and how-to of the system, such as the user manual. For example, Doxygen, DrExplain, Adobe RoboHelp for documentation.
Design Tools
These tools help software designers to design the block structure of the software, which may
further be broken down into smaller modules using refinement techniques. These tools provide detailing of each module and the interconnections among modules. For example, Animated Software Design.
Configuration Management Tools
An instance of software is released under one version. Configuration Management tools deal with
1. Version and revision management
2. Baseline configuration management
3. Change control management
CASE tools help in this by automatic tracking, version management and release management. For example, Fossil, Git, AccuRev.
Test case Design Technique
Test case design refers to how you set up your test cases. It is important that your tests are designed
well, or you could fail to identify bugs and defects in your software during testing. There are many
different test case design techniques used to test the functionality and various features of your
software
Following are the typical design techniques in software engineering:
1. Specification-Based techniques
2. Structure-Based techniques
3. Experience-Based techniques
1. Deriving test cases directly from a requirement specification, also called the black box test design technique.
The Techniques include:
Boundary Value Analysis (BVA)
Equivalence Partitioning (EP)
Decision Table Testing
State Transition Diagrams
Use Case Testing
Test case Design Technique
2. Deriving test cases directly from the structure of a component or system:
Statement Coverage
Branch Coverage
Path Coverage
LCSAJ Testing
3. Deriving test cases based on the tester's experience with similar systems or on the tester's intuition:
Error Guessing
Exploratory Testing
What is Reliability?
Failure
A failure is said to occur if the observable outcome of a program execution is different from the
expected outcome.
Fault
The adjudged cause of failure is called a fault.
Example: A failure may be caused by a defective block of code.
Time
Time is a key concept in the formulation of reliability. If the time gap between two successive
failures is short, we say that the system is less reliable.
Mean Time To Failure (MTTF) - describes the time up to the first failure.
Availability = MTBF / (MTBF+MTTR)
MTBF = Mean Time Between Failure
MTTR = Mean Time to Repair
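A small worked example of the availability formula above, using assumed figures for MTBF and MTTR:

```python
# Availability = MTBF / (MTBF + MTTR), with assumed figures.

mtbf_hours = 500.0     # mean time between failures (assumed)
mttr_hours = 2.0       # mean time to repair (assumed)

availability = mtbf_hours / (mtbf_hours + mttr_hours)
print(f"Availability = {availability:.4f}")   # ~0.9960, i.e. about 99.6% uptime
```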
What is Reliability?
Two ways to measure reliability
Counting failures in periodic intervals
Observe the trend of the cumulative failure count – µ(τ).
Failure intensity
Observe the trend of the number of failures per unit time – λ(τ).
µ(τ)
This denotes the total number of failures observed until execution time τ, measured from the beginning of system execution.
λ(τ)
This denotes the number of failures observed per unit time after τ time units of executing the system from the beginning. This is also called the failure intensity at time τ.
Definitions of Software Reliability
First definition
Software reliability is defined as the probability of failure-free operation of a software system
for a specified time in a specified environment.
Key elements of the above definition
Probability of failure-free operation
Length of time of failure-free operation
A given execution environment
Example
The probability that a PC in a store is up and running for eight hours without crash is
0.99.
Second definition
Failure intensity is a measure of the reliability of a software system operating in a given
environment.
Example: An air traffic control system fails once in two years.
Comparing the two
The first puts emphasis on MTTF, whereas the second puts emphasis on the failure count.
Factors Influencing Software Reliability
A user’s perception of the reliability of a software depends upon two categories of information.
The number of faults present in the software.
The ways users operate the system.
This is known as the operational profile.
The fault count in a system is influenced by the following.
Size and complexity of code
Characteristics of the development process used
Education, experience, and training of development personnel
Operational environment
Applications of Software Reliability
Comparison of software engineering technologies
What is the cost of adopting a technology?
What is the return from the technology -- in terms of cost and quality?
Measuring the progress of system testing
Key question: How much testing has been done?
The failure intensity measure tells us about the present quality of the system: high intensity
means more tests are to be performed.
Controlling the system in operation
The amount of change to a software for maintenance affects its reliability. Thus the amount of
change to be effected in one go is determined by how much reliability we are ready to
potentially lose.
Better insight into software development processes
Quantification of quality gives us a better insight into the development processes.
SEI and CMM
1) SEI
SEI stands for ‘Software Engineering Institute' at Carnegie-Mellon University, initiated by the U.S.
Defense Department to help improve software development processes.
2) CMM
CMM stands for ‘Capability Maturity Model', developed by the SEI. It's a model of 5 levels of
organizational ‘Maturity' that determine effectiveness in delivering quality software.
It is geared to large organizations such as large U.S. Defense Department contractors. However,
many of the QA processes involved are appropriate to any organization, and if reasonably applied
can be helpful.
Organizations can receive CMM ratings by undergoing assessments by qualified auditors.
Level 1 – Characterized by chaos, periodic panics, and heroic efforts required by individuals to
successfully complete projects. Few if any processes in place; successes may not be repeatable.
Level 2 – Software project tracking, requirements management, realistic planning, and configuration
management processes are in place, successful practices can be repeated.
Level 3 – Standard software development and maintenance processes are integrated throughout an
organization, a Software Engineering Process Group is in place to oversee software processes, and
training programs are used to ensure understanding and compliance.
Level 4 – Metrics are used to track productivity, processes, and products. Project performance is
predictable, and quality is consistently high.
Level 5 – The focus is on continuous process improvement. The impact of new processes and
technologies can be predicted and effectively implemented when required
3) ISO
ISO stands for ‘International Organization for Standardization' – the ISO 9001, 9002, and 9003 standards
concern quality systems that are assessed by outside auditors, and they apply to many kinds of
production and manufacturing organizations, not just software.
The most comprehensive is 9001, and this is the one most often used by software development
organizations. It covers documentation, design, development, production, testing, installation,
servicing, and other processes.
ISO 9000-3 (not the same as 9003) is a guideline for applying ISO 9001 to software development
organizations. The U.S. version of the ISO 9000 series standards is exactly the same as the
international version and is called the ANSI/ASQ Q9000 series.
The U.S. version can be purchased directly from the ASQ (American Society for Quality) or the ANSI
organizations.
To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically
good for about 3 years, after which a complete reassessment is required.
Note that ISO 9000 certification does not necessarily indicate quality products; it indicates only that documented processes are followed.
IEEE and ANSI
4) IEEE
IEEE stands for ‘Institute of Electrical and Electronics Engineers'.
Among other things, it creates standards such as the ‘IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), the ‘IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), the ‘IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.
5) ANSI
ANSI stands for ‘American National Standards Institute', the primary industrial standards body in the U.S.; it publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).
Software Maintenance
Software maintenance is a widely accepted part of the SDLC nowadays. It stands for all the modifications and updates done after the delivery of the software product. There are a number of reasons why modifications are required; some of them are briefly mentioned below:
Market Conditions - Policies that change over time, such as taxation, and newly introduced constraints, such as how to maintain bookkeeping, may trigger the need for modification.
Client Requirements - Over time, the customer may ask for new features or functions in the software.
Host Modifications - If any of the hardware and/or platform (such as the operating system) of the target host changes, software changes are needed to keep it adaptable.
Organization Changes - If there is any business-level change at the client end, such as a reduction in organizational strength, acquisition of another company, or the organization venturing into a new business, the need to modify the original software may arise.
Types of Software Maintenance
Types of maintenance
In a software lifetime, the type of maintenance may vary based on its nature. It may be just a routine maintenance task, such as a bug discovered by a user, or it may be a large event in itself, depending on the size or nature of the maintenance. Following are some types of maintenance based on their characteristics:
Corrective Maintenance - This includes modifications and updates done in order to correct or fix problems, which are either discovered by users or concluded from user error reports.
Adaptive Maintenance - This includes modifications and updates applied to keep the software product up to date and tuned to the ever-changing world of technology and business environment.
Perfective Maintenance - This includes modifications and updates done in order to keep the software usable over a long period of time. It includes new features and new user requirements for refining the software and improving its reliability and performance.
Preventive Maintenance - This includes modifications and updates to prevent future problems with the software. It aims to address problems which are not significant at the moment but may cause serious issues in the future.
Software Maintenance Activities
These activities go hand-in-hand with each of the following phases:
Identification & Tracing - It involves activities pertaining to identifying the requirement for modification or maintenance. The requirement is generated by a user, or the system may itself report it via logs or error messages. Here, the maintenance type is also classified.
Analysis - The modification is analyzed for its impact on the system, including safety and security implications. If the probable impact is severe, an alternative solution is looked for. A set of required modifications is then materialized into requirement specifications. The cost of modification/maintenance is analyzed and an estimation is concluded.
Design - New modules, which need to be replaced or modified, are designed against the requirement specifications set in the previous stage. Test cases are created for validation and verification.
Implementation - The new modules are coded with the help of the structured design created in the design step. Every programmer is expected to do unit testing in parallel.
System Testing - Integration testing is done among the newly created modules. Integration testing is also carried out between the new modules and the system. Finally the system is tested as a whole, following regression testing procedures.
Software Maintenance Activities
Acceptance Testing - After testing the system internally, it is tested for acceptance with the help of users. If at this stage the user reports any issues, they are addressed or noted to be addressed in the next iteration.
Delivery - After the acceptance test, the system is deployed all over the organization, either by a small update package or by a fresh installation of the system. The final testing takes place at the client end after the software is delivered.
A training facility is provided if required, in addition to a hard copy of the user manual.
Maintenance management - Configuration management is an essential part of system maintenance. It is aided by version control tools that handle version, semi-version and patch management.
Software Re-engineering
When we need to update the software to keep it current with the market, without impacting its functionality, it is called software re-engineering. It is a thorough process where the design of the software is changed and programs are re-written.
Legacy software cannot keep pace with the latest technology available in the market. As the hardware becomes obsolete, updating the software becomes a headache. Even if software grows old with time, its functionality does not.
For example, Unix was initially developed in assembly language. When the language C came into existence, Unix was re-engineered in C, because working in assembly language was difficult.
Other than this, programmers sometimes notice that a few parts of the software need more maintenance than others, and these also need re-engineering.
Re-Engineering Process
Decide what to re-engineer. Is it the whole software or a part of it?
Perform Reverse Engineering, in order to obtain specifications of the existing software.
Restructure the Program if required. For example, changing function-oriented programs into object-oriented programs.
Re-structure data as required.
Apply Forward Engineering concepts in order to get the re-engineered software.
Reverse Engineering
It is a process to recover the system specification by thoroughly analyzing and understanding the existing system. This process can be seen as a reverse SDLC model, i.e., we try to reach a higher abstraction level by analyzing lower abstraction levels.
An existing system is a previously implemented design about which we know nothing. Designers then do reverse engineering by looking at the code and trying to recover the design. With the design in hand, they try to conclude the specifications. Thus, they go in reverse from code to system specification.