MODULE 3
• The first stage is to develop an understanding of the relationships between the software that
is being designed and its external environment.
• This is essential for deciding how to provide the required system functionality and how to
structure the system to communicate with its environment.
• System context models and interaction models present relationships between a system and
its environment:
1.System context model - a structural model that demonstrates the other systems in the environment of
the system being developed.
2.Interaction model - a dynamic model that shows how the system interacts with its environment as it is
used.
• When you model the interactions of a system with its environment, you
should use an abstract approach that does not include too much detail.
• One way to do this is to use a use case model.
(Figures: use case diagram; weather station context diagram.)
2. Architectural Design:
• Once the interactions between the software system and the system’s
environment have been defined, use this information for designing the system
architecture.
• Identify the major components that make up the system and their interactions.
• Then design the system organization using an architectural pattern such as a
layered or client–server model.
• The weather station is composed of independent subsystems that communicate
by broadcasting messages on a common infrastructure, shown in the architecture
diagram as a communication link.
• Each subsystem listens for messages on that infrastructure and picks up the
messages that are intended for it.
• This “listener model” is a commonly used architectural style for distributed
systems.
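The listener model above can be sketched in a few lines of Python. This is a minimal illustration, not the actual weather station software; the class and subsystem names are assumptions made for the example.

```python
# Minimal sketch of the "listener model": subsystems share a broadcast
# infrastructure and each picks up only the messages addressed to it.

class MessageBus:
    """The common communication link on which messages are broadcast."""
    def __init__(self):
        self.listeners = []

    def attach(self, subsystem):
        self.listeners.append(subsystem)

    def broadcast(self, destination, payload):
        # Every subsystem sees every message; filtering happens in the listener.
        for listener in self.listeners:
            listener.receive(destination, payload)

class Subsystem:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def receive(self, destination, payload):
        if destination == self.name:   # pick up only messages intended for it
            self.inbox.append(payload)

bus = MessageBus()
weather = Subsystem("weather_station")
satellite = Subsystem("satellite_link")
bus.attach(weather)
bus.attach(satellite)

bus.broadcast("weather_station", "report_weather")
bus.broadcast("satellite_link", "transmit_data")
```

Because the bus does not need to know anything about the subsystems other than that they offer `receive`, new subsystems can be added without changing existing ones, which is why this style suits distributed systems.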
3. Object Class Identification:
• The use case description helps to identify objects and operations in the system.
• Various ways of identifying object classes in object-oriented systems are:
1. Use a grammatical analysis of a natural language description
2. Use tangible entities (things) in the application domain such as aircraft, roles such as manager,
events such as request, interactions such as meetings, locations such as offices, organizational
units such as companies, and so on.
3. Use a scenario-based analysis where various scenarios of system use are identified and
analysed in turn.
4. Design Models:
• Design models show the objects or object classes in a system.
• They also show the associations and relationships between these entities.
• These models are the bridge between the system requirements and the
implementation of a system.
• They have to be abstract.
• They also have to include enough detail for programmers to make
implementation decisions.
• 2 kinds of design model:
1. Structural models - describe the static structure of the system using
object classes and their relationships.
2. Dynamic models - describe the dynamic structure of the system and
show the expected runtime interactions between the system objects.
5. Interface Specification:
• Interface design is concerned with specifying the detail of the interface to an object or to a
group of objects.
• Interfaces can be specified in the UML using the same notation as a class diagram.
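In code, an interface specification separates what an object offers from how it is implemented. A hedged sketch using Python's `abc` module follows; the weather station comes from the running example, but the method names are illustrative assumptions.

```python
# Interface specification sketch: the abstract class states WHAT operations
# are offered; the concrete class decides HOW they are implemented.
from abc import ABC, abstractmethod

class WeatherStationInterface(ABC):
    @abstractmethod
    def report_weather(self) -> str:
        """Return the current weather readings as a report string."""

    @abstractmethod
    def restart(self, instruments: str) -> None:
        """Restart the named instrument subsystem."""

class WeatherStation(WeatherStationInterface):
    def report_weather(self) -> str:
        return "temperature=12C wind=8kph"

    def restart(self, instruments: str) -> None:
        print(f"restarting {instruments}")

station = WeatherStation()
report = station.report_weather()
```

Clients written against `WeatherStationInterface` keep working if the implementation class is swapped, which is the point of specifying the interface separately.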
• “Patterns and Pattern Languages are ways to describe best practices, good
designs, and capture experience in a way that it is possible for others to reuse
this experience”
• Patterns are a way of reusing the knowledge and experience of other
designers.
• Published patterns often rely on object characteristics such as inheritance and
polymorphism to provide generality.
• 4 essential elements of design patterns
1. A name that is a meaningful reference to the pattern.
2. A description of the problem area that explains when the pattern may be applied.
3. A solution description of the parts of the design solution, their relationships and their
responsibilities.
4. A statement of the consequences—the results
The problem description is broken down into:
motivation (a description of why the pattern is useful) and
applicability (a description of situations in which the pattern may be used).
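The four essential elements can be seen in a concrete pattern. Below is a sketch of the widely published Observer pattern with the elements marked in comments; the class names and the notification detail are illustrative, not taken from any particular pattern catalog.

```python
# 1. Name: Observer.
# 2. Problem: one object changes state and several others must stay
#    consistent with it, without the objects being tightly coupled.

class Subject:
    # 3. Solution: the Subject keeps a list of Observers and notifies each one
    #    when its state changes; Observers register themselves with the Subject.
    def __init__(self):
        self._observers = []
        self.state = None

    def attach(self, observer):
        self._observers.append(observer)

    def set_state(self, state):
        self.state = state
        for obs in self._observers:
            obs.update(state)

class Observer:
    def __init__(self):
        self.seen = []

    def update(self, state):
        self.seen.append(state)

# 4. Consequences: Subject and Observer are decoupled (either side can change
#    independently), at the cost of a notification per registered observer
#    on every state change.
display = Observer()
logger = Observer()
subject = Subject()
subject.attach(display)
subject.attach(logger)
subject.set_state(42)
```

Note how the pattern relies on polymorphism, as stated above: the Subject calls `update` without knowing the concrete observer class.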
Most modern software is constructed by reusing existing components or
systems.
When you are developing software, you should make as much use as possible
of existing code.
Software reuse is possible at a number of different levels:
1. Abstraction level →
At this level, you don’t reuse software directly but rather use knowledge
of successful abstractions in the design of your software.
Design patterns and architectural patterns are ways of representing
abstract knowledge for reuse.
2. The object level →
At this level, you directly reuse objects from a library rather than writing
the code yourself.
To implement this type of reuse, you have to find appropriate libraries and
discover whether the objects and methods offer the functionality that you need.
3. Component level →
At this level, you reuse components, which are collections of objects and object
classes that operate together to provide related functions and services.
You often have to adapt and extend the component by adding some code of your own.
4. System level →
At this level, you reuse entire application systems.
This usually involves some kind of configuration of these systems.
This may be done by adding and modifying code or by using the system’s own
configuration interface.
By reusing existing software, you can develop new systems more quickly, with fewer development
risks and at lower cost.
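Object-level reuse is the easiest of these levels to demonstrate. The sketch below reuses an object from Python's standard library instead of writing counting code by hand; the sensor-reading data is invented for the example.

```python
# Object-level reuse: rather than writing list-counting code yourself,
# reuse an object from an existing library whose methods already offer
# the functionality you need.
from collections import Counter

readings = ["rain", "sun", "rain", "cloud", "rain"]

# Counter already provides counting and ranking; no new code is needed.
frequency = Counter(readings)
most_common = frequency.most_common(1)[0]   # ("rain", 3)
```

The "discover whether the objects and methods offer the functionality that you need" step above corresponds here to reading the `collections.Counter` documentation before committing to it.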
II. Configuration Management:
During the development process, many different versions of each software component are
created.
If you don’t keep track of these versions in a configuration management system, you are
liable to include the wrong versions of these components in your system.
4 fundamental configuration management activities:
1. Version management →
where support is provided to keep track of the different versions of software
components.
Version management systems include facilities to coordinate development by several
programmers.
They stop one developer from overwriting code that has been submitted to the system by
someone else.
2. System integration →
where support is provided to help developers define what versions of
components are used to create each version of a system.
This description is then used to build a system automatically by compiling
and linking the required components.
3. Problem tracking →
where support is provided
to allow users to report bugs and other problems
to allow all developers to see who is working on these problems and
when they are fixed.
4. Release management →
where new versions of a software system are released to customers.
Release management is concerned with planning the functionality of new releases
and organizing the software for distribution.
III. Host-target development
Production software does not usually execute on the same computer as the software
development environment.
Rather, you develop it on one computer (the host system) and execute it on a
separate computer (the target system).
The host and target systems are sometimes of the same type, but often they are
completely different.
Most professional software development is based on a host-target model.
A platform includes the installed operating system plus other supporting software
such as a database management system.
Simulators are often used when developing embedded systems.
Simulators speed up the development process
A software development platform should provide a range of tools to support software engineering
processes. These may include:
An integrated compiler and syntax-directed editing system that allows you to create, edit, and compile
code.
A language debugging system
Graphical editing tools, such as tools to edit UML models.
Testing tools, such as JUnit, that can automatically run a set of tests on a new version of a program.
Tools to support refactoring and program visualization.
Configuration management tools to manage source code versions and to integrate and build systems.
Software development tools are now usually installed within an integrated development environment
(IDE).
An IDE is a set of software tools that supports different aspects of software development.
The best-known general-purpose IDE is the Eclipse environment.
OPEN SOURCE DEVELOPMENT
Here source code of a software system is published and
volunteers are invited to participate in the development
process.
It is the backbone of the Internet and software engineering.
The Linux operating system is the most widely used server
system, as is the open-source Apache web server.
Other important and universally used open-source products are
Java, the Eclipse IDE, and the MySQL database management
system.
✔ It is usually cheap or even free to acquire open-source
software.
✔ The other key benefit is the user community: open-source
systems have a large population of users who are willing to fix
problems themselves rather than report these problems to the
developer and wait for a new release of the system.
✔ Bugs are discovered and repaired more quickly than is
usually possible with proprietary software.
Open-source licensing
A fundamental principle of open-source development is that source code should be
freely available.
The developer of the code owns the code.
They can place restrictions on how it is used by including legally binding conditions
Licensing issues are important because if you use open-source software as part of a
software product, then you may be obliged by the terms of the license to make your
own product open source.
Most open-source licenses are variants of one of three general models:
1. The GNU General Public License (GPL).
This is a so-called reciprocal license, which means that if you use open-source
software that is licensed under the GPL, then you must make that software
open source as well.
2. The GNU Lesser General Public License (LGPL).
This is a variant of the GPL license
you can write components that link to open-source code without having to publish
the source of these components.
However, if you change the licensed component, then you must publish this as
open source.
3. The Berkeley Software Distribution (BSD) License.
This is a nonreciprocal license,
which means you are not obliged to re-publish any changes or modifications made
to open-source code.
If you use open-source components, you must acknowledge the original creator of
the code. The MIT License is a similar nonreciprocal license.
Review
Software reviews are a “filter” for the software process.
Reviews are applied at various points during software engineering.
To uncover errors and defects that can then be removed.
Software reviews “purify” software engineering work products, including:
• requirements
• design models
• code
• testing data
A review is a way of using the diversity of a group of people to:
1. Point out needed improvements in the product of a single person or team
2. Confirm those parts of a product in which improvement is either not desired
or not needed
COST IMPACT OF SOFTWARE DEFECTS
A defect is a quality problem that is found only after the software has been released to end users.
The primary objective of technical reviews is to find errors during the process so that they
do not become defects after release of the software.
Error- a quality problem found before the software is released to end user.
Defects- a quality problem found after the software is released to end user.
The obvious benefit of technical reviews is the early discovery of errors
So that they do not propagate to the next step in the software process.
By detecting and removing a large percentage of these errors, the review process
substantially reduces the cost of subsequent activities in the software process.
DEFECT AMPLIFICATION AND REMOVAL
A defect amplification model can be used to
illustrate the generation and detection of
errors during the design and code generation
actions of a software process.
REVIEW METRICS AND THEIR USE
• Technical reviews are one of many actions that are required as part of good software engineering practice.
• Each action requires dedicated human effort
• Preparation effort, Ep—the effort (in person-hours) required to review a work product prior to the actual
review meeting
• Assessment effort, Ea— the effort (in person-hours) that is expended during the actual review
• Rework effort, Er — the effort (in person-hours) that is dedicated to the correction of those errors
uncovered during the review
• Work product size, WPS—a measure of the size of the work product that has been reviewed
(e.g., the number of UML models, the number of document pages, or the number of lines of code)
• Minor errors found, Errminor—the number of errors found that can be categorized as minor (requiring
less than some prespecified effort to correct)
• Major errors found, Errmajor—the number of errors found that can be categorized as major (requiring
more than some prespecified effort to correct)
Analyzing Metrics
• The total review effort and the total number of errors discovered are defined as:
• Ereview = Ep + Ea + Er
• Errtot = Errminor + Errmajor
• Error density represents the errors found per unit of work product reviewed.
• Error density = Errtot/ WPS
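The three formulas above are simple arithmetic; a quick sketch computes them directly from their definitions. The sample figures are invented purely to illustrate the calculation.

```python
# Review metrics computed from their definitions.
Ep, Ea, Er = 6.0, 4.0, 8.0     # preparation, assessment, rework (person-hours)
Err_minor, Err_major = 12, 3   # errors found during the review
WPS = 400                      # work product size, e.g. lines of code reviewed

E_review = Ep + Ea + Er              # total review effort in person-hours
Err_tot = Err_minor + Err_major      # total number of errors discovered
error_density = Err_tot / WPS        # errors per unit of work product reviewed
```

With these sample numbers the review cost 18 person-hours and found 0.0375 errors per line reviewed; tracking such densities over many reviews is what makes the metrics useful.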
(Figure: effort expended with and without reviews.)
REVIEWS : A FORMALITY SPECTRUM
• The formality of a review increases when
(1) distinct roles are explicitly defined for the reviewers,
(2) there is a sufficient amount of planning and preparation for the review, and
(3) a distinct structure for the review is defined.
• The results of the review would be formally recorded.
• The team would decide on the status of the work product
• Members of the review team might also verify that the corrections have been made properly.
Technical reviews:
• Informal reviews
• Formal technical reviews.
INFORMAL REVIEWS
• Informal reviews include a
simple desk check of a software engineering work product with a
colleague, or a casual meeting for the purpose of reviewing a work
product.
One way to improve the efficiency of a desk-check review is to develop a set
of simple review checklists for each major work product produced by the
software team.
Checklist for interfaces:
• Is the layout designed using standard conventions? Left to right? Top to
bottom?
• Are color and placement, typeface, and size used effectively?
• Are all navigation options or functions represented at the same level of
abstraction?
• Are all navigation choices clearly labeled?
• Pair programming can be characterized as a continuous desk check.
• Rather than scheduling a review at some point in time, pair programming
encourages continuous review as a work product is created.
• The benefit is immediate discovery of errors and better work product
quality.
• Unit testing focuses verification effort on the smallest unit of software design:
the software component or module.
• Using the component-level design description, important control paths are
tested to uncover errors within the boundary of the module.
• The unit test focuses on the internal processing logic and data structures
within the boundaries of a component.
• This type of testing can be conducted in parallel for multiple components.
Unit Test Considerations
• The module interface is tested to ensure that information properly flows into and out of the program unit
• Local data structures are examined to ensure that data stored temporarily maintains its integrity during all
steps in an algorithm’s execution.
• All independent paths through the control structure are exercised to ensure that all statements in a module
have been executed at least once.
• Boundary conditions are tested to ensure that the module operates properly at boundaries established to
limit or restrict processing.
• And finally, all error handling paths are tested.
• Test cases should be designed to uncover errors due to erroneous computations, incorrect comparisons, or
improper control flow.
• Boundary testing is one of the most important unit testing tasks.
• Software often fails at its boundaries.
• That is, errors often occur when the nth element of an n-dimensional array is processed, when the ith
repetition of a loop with i passes is invoked, or when the maximum or minimum allowable value is encountered.
• Test cases that exercise data structure, control flow, and data values just below, at, and
just above maxima and minima are very likely to uncover errors.
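The "just below, at, and just above" rule can be shown with a tiny example. The function under test is made up for the illustration; the point is the shape of the test cases around each boundary.

```python
# Boundary-testing sketch: exercise values just below, at, and just above
# each boundary of the valid range [0, 10].

def clamp(value, low, high):
    """Restrict value to the range [low, high]."""
    return max(low, min(value, high))

# (input, expected) pairs around the minimum and maximum boundaries.
cases = [
    (-1, 0),   # just below the minimum
    (0, 0),    # at the minimum
    (1, 1),    # just above the minimum
    (9, 9),    # just below the maximum
    (10, 10),  # at the maximum
    (11, 10),  # just above the maximum
]
for value, expected in cases:
    assert clamp(value, 0, 10) == expected
```

A common boundary bug this catches is writing `min(value, high - 1)` or using `<` where `<=` was intended; either mistake fails at the `(10, 10)` case.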
• A good design anticipates error conditions and establishes error-handling paths to reroute or
cleanly terminate processing when an error does occur (this practice is sometimes called antibugging).
Potential errors that should be tested when error handling is evaluated:
(1) error description is unintelligible,
• (2) error noted does not correspond to error encountered,
• (3) error condition causes system intervention prior to error handling,
• (4) exception-condition processing is incorrect, or
• (5) error description does not provide enough information to assist in the location of the
cause of the error.
Unit-Test Procedures.
• Unit testing is normally considered as an adjunct to the coding step.
• The design of unit tests can occur before coding begins or after source code has been generated.
• A review of design information provides guidance for establishing test cases that are likely to uncover errors
in each of the categories .
• Each test case should be coupled with a set of expected results.
• Because a component is not a stand-alone program, driver and/or stub software
must often be developed for each unit test.
• In most applications a driver is nothing more than a “main program” that accepts test-case data, passes such
data to the component (to be tested), and prints relevant results.
• Stubs serve to replace modules that are subordinate to (invoked by) the component to be tested.
• If drivers and stubs are kept simple, actual overhead is relatively low. Unfortunately, many components
cannot be adequately unit tested with “simple” overhead software.
• In such cases, complete testing can be postponed until the integration test step (where drivers or stubs are also
used).
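A driver and a stub can be sketched in a few lines. All names here are illustrative: the component under test normally calls a subordinate sensor module, and a stub stands in for that module so the component can be tested in isolation.

```python
def read_sensor_stub():
    """Stub: replaces the real subordinate sensor module with canned data."""
    return 21.5

def compute_report(read_sensor):
    """Component under test; its subordinate is injected so it can be stubbed."""
    temperature = read_sensor()
    return f"temperature={temperature}"

def driver():
    """Driver: a 'main program' that feeds test-case data and checks results."""
    result = compute_report(read_sensor_stub)
    assert result == "temperature=21.5"
    return result

report = driver()
```

Both pieces are "simple overhead software" in the sense above: the stub returns a constant and the driver is one call plus one check, so the cost of unit testing this component stays low.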
Integration Testing
• Components must be assembled or integrated to form the complete
software package.
• Integration testing addresses the issues associated with the dual
problems of verification and program construction.
• Integration testing is a systematic technique for constructing the
software architecture while at the same time conducting tests to
uncover errors associated with interfacing.
• The objective is to take unit-tested components and build a
program structure that has been dictated by design.
• There is often a tendency to attempt nonincremental integration: to construct the program
using a “big bang” approach.
• All components are combined in advance and the entire program is tested as a
whole.
• Errors are encountered, but correction is difficult because isolation of causes is
complicated by the vast expanse of the entire program.
• Incremental integration is the antithesis of the big bang approach.
• The program is constructed and tested in small increments, where errors are
easier to isolate and correct; interfaces are more likely to be tested completely;
and a systematic test approach may be applied.
Top-Down Integration.
• Top-down integration testing is an incremental approach to construction of the
software architecture.
• Modules are integrated by moving downward through the control hierarchy,
beginning with the main control module (main program).
• Modules subordinate to the main control module are incorporated into the structure
in either a depth-first or breadth-first manner.
• Depth-first integration integrates all components on a major control path of the
program structure.
• Selection of a major path is somewhat arbitrary and depends on application-
specific characteristics
• For example, selecting the left-hand path, components M1, M2 , M5 would be
integrated first.
• Next, M8 or (if necessary for proper functioning of M2) M6 would be
integrated.
• Then, the central and right-hand control paths are built.
• Breadth-first integration incorporates all components directly subordinate at
each level, moving across the structure horizontally.
• From the figure, components M2, M3, and M4 would be integrated first.
• The next control level, M5, M6, and so on, follows.
• The integration process is performed in a series of five steps:
• 1. The main control module is used as a test driver and stubs are
substituted for all components directly subordinate to the main
control module.
• 2. Depending on the integration approach selected (i.e., depth or
breadth first), subordinate stubs are replaced one at a time with
actual components.
• 3. Tests are conducted as each component is integrated.
• 4. On completion of each set of tests, another stub is replaced with
the real component.
• 5. Regression testing may be conducted to ensure that new errors have not
been introduced.
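The five steps above amount to replacing stubs with real components one at a time and re-testing after each replacement. The sketch below follows the M1/M2 naming of the example; the behavior of the modules is invented for the illustration.

```python
# Top-down integration sketch: M1 (the main control module) is first tested
# against a stub for its subordinate M2, then the stub is replaced by the
# real component and the tests are re-run.

def m2_stub():
    return "stub-data"       # stands in for the real M2

def m2_real():
    return "real-data"       # the actual subordinate component

class M1:
    """Main control module, used as the test driver (step 1)."""
    def __init__(self, subordinate):
        self.subordinate = subordinate

    def run(self):
        return f"M1 received {self.subordinate()}"

main = M1(m2_stub)
assert main.run() == "M1 received stub-data"    # step 3: test with the stub

main.subordinate = m2_real                      # steps 2/4: replace the stub
assert main.run() == "M1 received real-data"    # step 5: regression re-test
```

The same test (`main.run()`) is executed before and after the replacement, which is exactly what step 5's regression check requires.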
Bottom-Up Integration
• Bottom-up integration testing, as its name implies, begins
construction and testing with atomic modules (i.e., components at
the lowest levels in the program structure).
• Because components are integrated from the bottom up, the
functionality provided by components subordinate to a given level is
always available and the need for stubs is eliminated.
• A bottom-up integration strategy may be implemented with the
following steps:
• 1. Low-level components are combined into clusters (sometimes
called builds) that perform a specific software subfunction.
• 2. A driver (a control program for testing) is written to coordinate
test-case input and output.
• 3. The cluster is tested.
• 4. Drivers are removed and clusters are combined moving upward
in the program structure.
• Components are combined to form clusters 1, 2, and 3.
• Each of the clusters is tested using a driver (shown as a dashed block).
• Components in clusters 1 and 2 are subordinate to Ma .
• Drivers D1 and D2 are removed and the clusters are interfaced directly to Ma .
• Similarly, driver D3 for cluster 3 is removed prior to integration with module
Mb.
• Both Ma and Mb will ultimately be integrated with component Mc , and so
forth.
• As integration moves upward, the need for separate test drivers lessens.
• In fact, if the top two levels of program structure are integrated top down, the
number of drivers can be reduced substantially and integration of clusters is
greatly simplified.
VALIDATION TESTING
The process of evaluating software during the development process or at the
end of the development process to determine whether it satisfies specified
business requirements.
Validation Testing ensures that the product actually meets the client's needs.
It can also be defined as to demonstrate that the product fulfills its intended
use.
A. Validation-Test Criteria
Software validation is achieved through a series of tests that demonstrate
conformity with requirements.
A test procedure defines specific test cases that are designed to ensure that
all functional requirements are satisfied
all behavioral characteristics are achieved
all content is accurate and properly presented
all performance requirements are attained
documentation is correct
usability and other requirements are met (e.g., transportability, compatibility, error
recovery, maintainability).
If a deviation from specification is uncovered, a deficiency list is created.
A method for resolving deficiencies (acceptable to stakeholders) must be established.
B. Alpha and Beta Testing
SYSTEM TESTING
A. Recovery Testing
B. Security Testing
C. Stress Testing
D. Performance Testing
E. Deployment Testing
A. Recovery Testing
Many computer-based systems must recover from faults and resume processing with
little or no downtime.
In some cases, a system must be fault tolerant;
That is, processing faults must not cause overall system function to cease.
In other cases, a system failure must be corrected within a specified period of time or
severe economic damage will occur.
Recovery testing is a system test that forces the software to fail in a variety of ways and
verifies that recovery is properly performed.
If recovery is automatic (performed by the system itself), then
reinitialization
checkpointing mechanisms
data recovery
restart are evaluated for correctness.
If recovery requires human intervention, the mean-time-to-repair (MTTR)
is evaluated to determine whether it is within acceptable limits.
B. Security Testing
Any computer-based system that manages sensitive information or causes actions that can improperly harm
(or benefit) individuals is a target for improper or illegal penetration.
Penetration spans a broad range of activities: hackers who attempt to penetrate systems for
sport, disgruntled employees who attempt to penetrate for revenge, and dishonest individuals
who penetrate for illicit personal gain.
Security testing is an integral part of software testing. It is used
to discover the weaknesses, risks, or threats in the software application
to help stop malicious attacks from outsiders
to make sure that our software applications are secure.
The primary objective of security testing is to find all the potential
ambiguities and vulnerabilities of the application so that the software does
not stop working.
Performing security testing helps us to identify all the possible
security threats and also helps the programmer to fix those errors.
C. Stress Testing
Stress Testing is a type of software testing that verifies stability & reliability of software
application.
The goal of stress testing is to measure the robustness and error-handling
capability of software under extremely heavy load conditions, ensuring that the
software doesn’t crash under crunch situations.
It even tests beyond normal operating points and evaluates how software works under
extreme conditions.
It is also known as endurance testing, fatigue testing, or torture testing.
Stress testing includes testing beyond normal operational capacity, often to a
breaking point, in order to observe the results.
It highlights error handling and robustness under a heavy load.
D. Performance Testing
Performance testing is a non-functional software testing technique that
determines how the stability, speed, scalability, and responsiveness of an
application holds up under a given workload.
For real-time and embedded systems, software that provides required function
but does not conform to performance requirements is unacceptable.
Performance testing is designed to test the run-time performance of software
Performance testing occurs throughout all steps in the testing process.
Even at the unit level, the performance of an individual module may be
assessed as tests are conducted.
Performance tests are often coupled with stress testing and usually require
both hardware and software instrumentation.
That is, it is often necessary to measure resource utilization (e.g., processor
cycles) in an exacting fashion.
E. Deployment Testing
Software must execute on a variety of platforms and under more
than one operating system environment.
Deployment testing, sometimes called configuration testing,
exercises the software in each environment in which it is to
operate.
In addition, deployment testing examines all installation
procedures and specialized installation software (e.g.,“installers”)
that will be used by customers, and all documentation that will be
used to introduce the software to end users.
THE ART OF DEBUGGING
Debugging occurs as a consequence of successful testing.
That is, when a test case uncovers an error, debugging is the process that results
in the removal of the error.
A. The Debugging Process
Debugging is not testing but often occurs as a consequence of testing, the
debugging process begins with the execution of a test case.
Results are assessed and a lack of correspondence between expected and actual
performance is encountered.
The debugging process attempts to match symptom with cause, thereby leading to error
correction.
The debugging process will usually have one of two outcomes:
(1) the cause will be found and corrected or
(2) the cause will not be found.
In the latter case, the person performing debugging may suspect a cause, design a test case
to help validate that suspicion, and work toward error correction in an iterative fashion.
INTERNAL AND EXTERNAL VIEWS OF TESTING
Any engineered product can be tested in one of two ways:
(1) Knowing the specified function that a product has been designed to perform, tests can
be conducted that demonstrate each function is fully operational while at the same time
searching for errors in each function.
The first test approach takes an external view and is called black-box testing.
2) Knowing the internal workings of a product, tests can be conducted to ensure that “all
gears mesh,” that is, internal operations are performed according to specifications and all
internal components have been adequately exercised.
The second requires an internal view and is termed white-box testing.
WHITE-BOX TESTING
White-box testing is sometimes called glass-box testing or structural testing.
It is a test-case design philosophy that uses the control structure described as part of
component-level design to derive test cases.
Using white-box testing methods, we can derive test cases that
(1) guarantee that all independent paths within a module have been exercised at least once
(2) exercise all logical decisions on their true and false sides
(3) execute all loops at their boundaries and within their operational bounds
(4) exercise internal data structures to ensure their validity.
A. BASIS PATH TESTING
Path Testing is a method that is used to design the test cases.
In the path testing method, the control flow graph of a program is designed to find a set of
linearly independent execution paths.
In this method, cyclomatic complexity is used to determine the number of linearly
independent paths, and then test cases are generated for each path.
Path Testing Process
1. Control Flow Graph: Draw the corresponding control flow graph of the
program in which all the executable paths are to be discovered.
•2. Cyclomatic Complexity: After the generation of the control flow graph,
calculate the cyclomatic complexity of the program
•Cyclomatic Complexity, V(G) = E - N + 2P
•Where, E = Number of edges in the control flow graph
•N = Number of nodes in the control flow graph
•P = Number of connected components (P = 1 for a single program)
• 3. Make Set: Make a set of all linearly independent paths according to the control flow
graph; the number of paths equals the calculated cyclomatic complexity.
• 4. Create Test Cases: Create a test case for each path of the set.
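The formula V(G) = E - N + 2P is a simple count. The sketch below applies it to an edge list consistent with the independent paths listed later (nodes 1..11, with paths such as 1-2-3-4-5-10-1-11); the exact edge list is an assumption reconstructed from those paths, since the figure itself is not reproduced here.

```python
# Cyclomatic complexity V(G) = E - N + 2P computed from an edge list.
edges = [
    (1, 2), (2, 3), (3, 4), (4, 5), (3, 6), (6, 7), (6, 8),
    (7, 9), (8, 9), (9, 10), (5, 10), (10, 1), (1, 11),
]
nodes = {n for edge in edges for n in edge}

E = len(edges)       # 13 edges
N = len(nodes)       # 11 nodes
P = 1                # one connected component: a single program

V = E - N + 2 * P    # cyclomatic complexity
```

Here V = 13 - 11 + 2 = 4, matching the four independent paths (Path 1 through Path 4) identified for this flow graph, so four test cases are needed.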
Path Testing Techniques
1. Control Flow Graph: The program is converted into control flow graph by
representing the code into nodes and edges.
•2. Independent paths: Independent path is a path through a Decision-to-Decision
path graph which cannot be reproduced from other paths by other methods.
•3. Graph matrices: A graph matrix is a square matrix whose size (i.e., number of
rows and columns) is equal to the number of nodes on the flow graph.
A.1. Flow Graph Notation
The flow graph depicts logical control flow using the notation illustrated in
Figure
A.2. Independent Program Paths
Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-8-9-10-1-11
Path 4: 1-2-3-6-7-9-10-1-11
Note that each new path introduces a new edge.
The path 1-2-3-4-5-10-1-2-3-6-8-9-10-1-11 is not considered an independent path,
because it is simply a combination of already specified paths and does not traverse
any new edges.
A.3 Graph Matrices
Each row and column corresponds to an identified node, and matrix entries
correspond to connections (an edge) between nodes.
A simple example of a flow graph and its corresponding graph matrix is shown in the figure
below.
CONTROL STRUCTURE TESTING
• Control structure testing is used to increase the coverage area by testing various control
structures present in the program.
• The different types of testing performed under control structure testing are as follows:
1. Condition Testing
2. Data Flow Testing
3. Loop Testing
1. Condition testing is a test-case design method which ensures that the logical conditions and
decision statements are free from errors.
• The errors present in logical conditions can be incorrect Boolean operators, missing
parentheses in a Boolean expression, errors in relational operators, or errors in
arithmetic expressions.
2. The data flow test method chooses the test paths of a program based on the locations of the
definitions and uses of the variables in the program.
• 3. Loop testing is actually a white box testing technique.
• It specifically focuses on the validity of loop construction.
• Three types of loops: simple, structured and unstructured.
BLACK-BOX TESTING
Black-box testing, also called behavioral testing or functional testing focuses
on the functional requirements of the software.
That is, black-box testing techniques enable you to derive sets of input
conditions that will fully exercise all functional requirements for a program
Black-box testing attempts to find errors in the following categories:
(1) incorrect or missing functions
(2) interface errors
(3) errors in data structures or external database access
(4) behavior or performance errors
(5) initialization and termination errors.
1.Graph-Based Testing Methods
2. Equivalence Partitioning
3. Boundary Value Analysis
4. Orthogonal Array Testing
5. Model Based Testing
1. Graph-Based Testing Methods
The first step in black-box testing is to understand the objects that are modeled in software
and the relationships that connect these objects.
Once this has been accomplished, the next step is to define a series of tests that verify that “all
objects have the expected relationship to one another.”
To accomplish these steps, you begin by creating a graph:
• a collection of nodes that represent objects
• links that represent the relationships between objects
• node weights that describe the properties of a node
• link weights that describe some characteristic of a link.
The symbolic representation of a graph is shown in Figure (a).
Nodes are represented as circles connected by links that take a number of different forms.
A directed link (represented by an arrow) indicates that a relationship moves in only one direction.
A bidirectional link, also called a symmetric link, implies that the relationship applies in both
directions.
Parallel links are used when a number of different relationships are established between graph nodes.
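A sketch of graph-based testing in Python, modeled loosely on the classic word-processor example (the object and relationship names are assumed): nodes are objects, directed links carry a relationship as a link weight, and the tests check that each expected relationship holds.

```python
# Directed links between objects; each value is the link weight (the
# relationship's characteristic).
graph = {
    ("new_file_menu_select", "document_window"): "generates",
    ("document_window", "document_text"): "contains",
    ("new_file_menu_select", "document_text"): "is_represented_as",
}

def related(g, source, target):
    return (source, target) in g

# Verify "all objects have the expected relationship to one another".
assert related(graph, "new_file_menu_select", "document_window")
assert graph[("document_window", "document_text")] == "contains"
# The links are directed, so the reverse relationship does not hold.
assert not related(graph, "document_window", "new_file_menu_select")
```
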
2. Equivalence Partitioning
Equivalence partitioning is a black-box testing method that divides the input
domain of a program into classes of data from which test cases can be
derived.
Test-case design for equivalence partitioning is based on an evaluation of
equivalence classes for an input condition.
If a set of objects can be linked by relationships that are symmetric,
transitive, and reflexive, an equivalence class is present.
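A sketch of equivalence partitioning for an assumed input condition, a month number that must be an integer from 1 to 12: one representative test per equivalence class stands in for every value in that class.

```python
def valid_month(m):
    # Input condition: an integer in the range 1..12.
    return isinstance(m, int) and 1 <= m <= 12

# One representative from the valid class...
assert valid_month(6) is True
# ...and one from each invalid class.
assert valid_month(0) is False        # below the valid range
assert valid_month(13) is False       # above the valid range
assert valid_month("June") is False   # not an integer at all
```
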
3. Boundary Value Analysis
A greater number of errors occurs at the boundaries of the input domain
rather than in the “center.”
It is for this reason that boundary value analysis (BVA) has been developed as
a testing technique.
Boundary value analysis leads to a selection of test cases that exercise
bounding values.
BVA leads to the selection of test cases at the “edges” of the class.
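A BVA sketch for an assumed input condition, a value that must lie in the range 1 to 12: the test cases sit exactly at and just outside each edge of the class.

```python
def valid_month(m):
    # Input condition: a value in the range 1..12 (assumed example).
    return 1 <= m <= 12

assert valid_month(0) is False    # just below the lower bound
assert valid_month(1) is True     # exactly the lower bound
assert valid_month(12) is True    # exactly the upper bound
assert valid_month(13) is False   # just above the upper bound
```
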
4. Orthogonal Array Testing
Orthogonal array testing can be applied to problems in which the input domain
is relatively small but too large to accommodate exhaustive testing.
The orthogonal array testing method is particularly useful in finding region
faults, an error category associated with faulty logic within a software component.
When orthogonal array testing occurs, an L9 orthogonal array of test cases is
created.
The L9 orthogonal array has a “balancing property”.
That is, test cases (represented by dark dots in the figure) are “dispersed
uniformly throughout the test domain,” as illustrated in the right-hand cube
in the figure.
Test coverage across the input domain is more complete.
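The standard L9 array can be written out and its balancing property checked directly. Factors and levels are abstract here; in practice each column would be mapped to one input parameter with three possible values.

```python
# L9(3^4): nine test cases for up to four factors, each with three levels.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

# Balancing property: in every column, each level occurs the same number
# of times, so test cases are dispersed uniformly over the test domain.
for col in range(4):
    counts = {1: 0, 2: 0, 3: 0}
    for row in L9:
        counts[row[col]] += 1
    assert counts == {1: 3, 2: 3, 3: 3}
```
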
5. MODEL-BASED TESTING
Model-based testing (MBT) is a black-box testing technique that uses
information contained in the requirements model as the basis for the
generation of test cases
In many cases, the model-based testing technique uses UML state diagrams as
the basis for the design of test cases.
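A minimal MBT sketch (the state model is assumed, not taken from the source): transitions come from a table derived from a state diagram, and each generated test case is an event sequence plus the expected final state.

```python
# Transition table derived from an assumed ATM-like state diagram.
transitions = {
    ("idle", "insert_card"): "awaiting_pin",
    ("awaiting_pin", "valid_pin"): "ready",
    ("awaiting_pin", "cancel"): "idle",
    ("ready", "eject_card"): "idle",
}

def run(events, start="idle"):
    state = start
    for event in events:
        state = transitions[(state, event)]   # illegal events raise KeyError
    return state

# Test cases generated by walking paths through the model.
assert run(["insert_card", "valid_pin", "eject_card"]) == "idle"
assert run(["insert_card", "cancel"]) == "idle"
assert run(["insert_card", "valid_pin"]) == "ready"
```
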
TESTING DOCUMENTATION
Documentation testing can be approached in two phases.
The first phase, technical review, examines the document for editorial clarity.
The second phase, live test, uses the documentation in conjunction with the actual
program.
Graph-based testing can be used to describe the use of the program; equivalence
partitioning and boundary value analysis can be used to define various classes of
input and associated interactions.
MBT can be used to ensure that documented behavior and actual behavior coincide.
TEST AUTOMATION
• Automated testing is based on the idea that tests should be executable.
• An executable test includes
• ✗ the input data to the unit that is being tested
• ✗ the expected result
• ✗ a check that the unit returns the expected result.
• We run the test and the test passes if the unit returns the expected result.
• ✔ Normally we should develop hundreds or thousands of executable tests for a
software product.
• ✔ The development of automated testing frameworks, such as JUnit for Java in
the 1990s, reduced the effort involved in developing executable tests.
• ✔ Testing frameworks are now available for all widely used programming languages.
• A test report shows the tests that have passed and failed.
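A minimal executable test with the three parts listed above, using Python's built-in unittest framework (the unit under test, `discount`, is an assumed example):

```python
import unittest

def discount(price, rate):
    # Unit under test (assumed example).
    return price * (1 - rate)

class DiscountTest(unittest.TestCase):
    def test_half_price(self):
        expected = 100.0                    # the expected result
        actual = discount(200.0, 0.5)       # the input data to the unit
        self.assertEqual(actual, expected)  # the check on the result

# Run the suite and produce a pass/fail report.
suite = unittest.TestLoader().loadTestsFromTestCase(DiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```
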
• Two approaches to reduce the chances of test errors:
• 1. Make tests as simple as possible.
• The more complex the test, the more likely that it will be buggy.
• 2. Review all tests along with the code that they test.
• As part of the review process, someone apart from the test programmer should
check the tests for correctness.
TEST DRIVEN DEVELOPMENT
• Test-driven development (TDD) is an approach to program development based on the general
idea that you should write an executable test or tests for code before you write that
code.
• TDD was introduced by early users of the Extreme Programming agile method, but it can be used
with any incremental development approach.
• Every time we add some functionality, we develop a new test and add it to the test suite.
• All of the tests in the test suite must pass before we move on to developing the next increment.
• A disadvantage of test driven development is that programmers focus on the details of passing
tests rather than considering the broader structure of their code and algorithms used.
Stages of Test Driven Development
• Identify a partial implementation
• Break down the implementation of the functionality required into smaller mini units.
• Choose one of these mini units for implementation.
• Write mini unit tests
• Write one or more automated tests for the mini unit that you have chosen for implementation.
• The mini unit should pass these tests if it is properly implemented.
• Write a code stub that will fail the test
• Write incomplete code that will be called to implement the mini unit.
• You know this code will fail.
• Run all automated tests
• Run all existing automated tests.
• All previous tests should pass.
• The test for the incomplete code should fail.
• Implement code that should cause the failing test to pass
• Write code to implement the mini unit, which should cause it to operate
correctly.
• Rerun all automated tests
• If any tests fail, your code is incorrect.
• Keep working on it until all tests pass.
• Refactor code if required
• If all tests pass, you can move on to implementing the next mini unit.
• If you see ways of improving your code, you should do this before the next
stage of implementation.
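The stages above can be sketched in Python for one assumed mini unit, a leap-year check: the test is written first, fails against a stub, and passes once the mini unit is implemented.

```python
def test_is_leap_year():
    # The automated test, written before the implementation.
    return (is_leap_year(2024) is True and
            is_leap_year(1900) is False and
            is_leap_year(2000) is True)

# Stage: code stub that will fail the test.
def is_leap_year(year):
    return False

assert test_is_leap_year() is False   # the new test fails, as expected

# Stage: implement the mini unit so the failing test passes.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

assert test_is_leap_year() is True    # rerun: all tests now pass
```
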
SECURITY TESTING
• The goal of program testing is to find bugs.
• Security testing aims to find vulnerabilities that an attacker may exploit
and to provide convincing evidence that the system is sufficiently secure.
• The tests should demonstrate that the system can resist
• attacks on its availability,
• attacks that try to inject malware, and
• attacks that try to corrupt or steal users’ data and identity.
• Discovering vulnerabilities is much harder than finding bugs.
• Functional tests to discover bugs are driven by an understanding of what the software
should do.
Examples of security risks
• ✗ An unauthorized attacker gains access to a system using authorized credentials.
• ✗ Authorized individual accesses resources that are forbidden to that person.
• ✗ Authentication system fails to detect unauthorized attacker.
• ✗ Attacker gains access to database using SQL poisoning attack.
• ✗ Improper management of HTTP sessions.
• ✗ HTTP session cookies are revealed to an attacker.
• ✗ Confidential data are unencrypted.
• ✗ Encryption keys are leaked to potential attackers.
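A sketch of a security test for the SQL injection risk above, using an in-memory SQLite database (the table and data are assumed for illustration): with a parameterized query, an injection payload is treated as plain data and retrieves nothing.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup(name):
    # Attacker-controlled input is passed as a parameter, never spliced
    # into the SQL text, so it cannot change the query's structure.
    return db.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "alice' OR '1'='1"          # classic injection attempt
assert lookup("alice") == [("s3cret",)]
assert lookup(payload) == []          # the attack retrieves no data
```
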
• Once you have identified security risks, analyze them to assess how they might arise.
• For example, for the first risk (an unauthorized attacker gaining access), there are several possibilities:
• 1. The user has set weak passwords that an attacker can guess.
• 2. The system’s password file has been stolen and an attacker has discovered the passwords.
• When you navigate away from a secure application, the software should automatically log you out.
• ✔ Otherwise, if someone gains access to your computer, they could use the BACK button to re-enter the application.
• The original development team was sometimes responsible for implementing software
changes.
• Everything that can be automated should be automated. All activities involved in testing,
deployment, and management should be automated.
• Measure first, change later. DevOps should be driven by a measurement program where
you collect data about the system and its operation. You then use the collected data to
inform decisions about changing DevOps processes and tools.
• Faster repair DevOps teams work together to get the software up and running again as soon
as possible. There is no need to discover which team was responsible for the problem and to
wait for them to fix it.
• More productive teams DevOps teams are happier and more productive than the teams
involved in the separate activities. Because team members are happier, they are less likely to
leave to find jobs elsewhere.
CODE MANAGEMENT
• Code management is a set of software supported practices used to manage an evolving
codebase.
• Code management ensures that changes made by different developers do not interfere
with each other, and supports the creation of different product versions.
• Code management tools make it easy to create an executable product from its source code
files and to run automated tests on that product.
• DevOps automation and measurement tools all interact with the code management
system
Set of features
• 1. Code transfer Developers take code into their personal file store to work on
it; then they return it to the shared code management system.
• 2. Version storage and retrieval Files may be stored in several different
versions, and specific versions of these files can be retrieved.
• 3. Merging and branching Parallel development branches may be created for
concurrent working. Changes made by developers in different branches may
be merged.
• 4. Version information Information about the different versions maintained in
the system may be stored and retrieved.
All source code management systems have a shared repository and a set of
features to manage the files in that repository:
Features
• Version and release identification Managed versions of a code file are uniquely identified when they are
submitted to the system and can be retrieved using their identifier and other file attributes.
• Change history recording The reasons changes to a code file have been made are recorded and
maintained.
• Independent development Several developers can work on the same code file at the same time. When this is
submitted to the code management system, a new version is created so that files are never overwritten by
later changes.
• Project support All of the files associated with a project may be checked out at the same time. There is no
need to check out files one at a time.
• Storage management The code management system includes efficient storage mechanisms so that it doesn’t
keep multiple copies of files that have only small differences
DEVOPS AUTOMATION
• DevOps, with automated support, can reduce the time and costs of integration, deployment, and delivery.
• “Everything that can be should be automated” is a fundamental principle of DevOps.
• Continuous deployment A new release of the system is made available to users every time a change is made to the master
branch of the software.
• Infrastructure as code Machine-readable models of the infrastructure (network,
servers, routers, etc.) on which the product executes are used by configuration management tools to build the software’s
execution platform.
CONTINUOUS INTEGRATION(CI)
• System integration (system building) is the process of
• ✔ gathering all of the elements required in a working system
• ✔ moving them into the right directories
• ✔ putting them together to create an operational system.
• ✔ This involves more than compiling the system.
• ✔ It builds the system and runs tests on your development computer or project
integration server.
• Continuous integration simply means that an integrated version of the system is created and tested
every time a change is pushed to the system’s shared code repository.
• On completion of the push operation, the repository sends a message to an integration server to
build a new version of the product.
• In a continuous integration environment, developers have to make sure that
they don’t “break the build.”
• ✗ Breaking the build means pushing code to the project repository that,
when integrated, causes some of the system tests to fail.
• ✗ If this happens, your priority is to discover and fix the problem so that
normal development can continue.
CONTINUOUS DELIVERY AND DEPLOYMENT (CD)