Unit 4

Explain the top-down and bottom-up approaches in testing

Top-Down and Bottom-Up Approaches


● Parts of the program are tested before testing the entire program.
● In incremental testing, some parts of the system are first tested
independently.
● Then these parts are combined to form a (sub) system, which is then tested
independently.
● Can be done in two ways:
▪ Modules that have been tested independently are combined.
▪ Some new untested modules are combined with tested modules.
● The order in which modules are to be tested and integrated has to be planned
before starting testing.
Top-Down Approach:
● In the top-down strategy, testing starts from the top of the hierarchy; the modules it calls are incrementally added, and the new combined system is tested.
● Requires stubs to be written.
● A stub is a dummy routine that simulates a module.
● To allow the modules to be tested before their subordinates have been coded,
stubs simulate the behavior of the subordinates.
Bottom-Up Approach:
● Starts from the bottom of the hierarchy.
● First, the modules at the very bottom, which have no subordinates, are
tested.
● Then these modules are combined with higher level modules for testing.
● Drivers are needed to set up the appropriate environment and invoke the
module.
● It is the job of the driver to invoke the module under test with different sets of test cases (see the sketch below).
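As a minimal sketch in Python (the module names and values are assumed for illustration, not taken from any specific system), a stub and a driver might look like this:

# Stub: a dummy routine that simulates a subordinate module that has not
# been coded yet (used in top-down testing).
def calculate_tax_stub(amount):
    return 10.0  # fixed, plausible value instead of real tax logic

# Higher-level module under test; it calls the subordinate through the stub.
def generate_invoice(amount, tax_fn=calculate_tax_stub):
    return amount + tax_fn(amount)

# Bottom-level module with no subordinates (tested first in bottom-up testing).
def apply_discount(price, percent):
    return price - price * percent / 100

# Driver: sets up the environment and invokes the module under test
# with different sets of test cases.
def driver_for_apply_discount():
    test_cases = [(100, 10, 90.0), (200, 0, 200.0), (50, 50, 25.0)]
    for price, percent, expected in test_cases:
        actual = apply_discount(price, percent)
        print(price, percent, actual, "PASS" if actual == expected else "FAIL")

if __name__ == "__main__":
    print(generate_invoice(100))   # top-down: subordinate simulated by the stub
    driver_for_apply_discount()    # bottom-up: driver exercises the module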

Top-Down and Bottom-Up Approaches


● Both methods are incremental.
● Top-down is advantageous if major flaws occur toward the top of the
hierarchy.
● Bottom-up is advantageous if major flaws occur toward the bottom of the
hierarchy.
● Writing stubs is more difficult than writing drivers.
● It is best to select the testing method to conform with the development
method.
What is software testing? Explain Error, Fault and Failure in software testing

In a software development project, errors can be introduced at any stage during


development. Though errors are detected after each phase by techniques like
inspections, some errors remain undetected. Ultimately, these remaining errors are
reflected in the code. Hence, the final code is likely to have some requirements errors
and design errors, in addition to errors introduced during the coding activity. To ensure
the quality of the final delivered software, these defects have to be removed. There are
two types of approaches for identifying defects in software: static and dynamic. In
static analysis, the code is not executed but is evaluated through some process or
some tools for locating defects. In dynamic analysis, the code is executed, and the
execution is used for determining defects. Testing is the most common dynamic
technique that is employed. Indeed, testing is the most commonly used technique for
detecting defects, and it plays a very critical role in ensuring quality.

Testing Concepts

Error, Fault, and Failure


While discussing testing, we commonly use terms like error, fault, and failure.
Let us start by defining these concepts clearly. The term error is used in two
different ways. It refers to the discrepancy between a computed, observed, or
measured value and the true, specified, or theoretically correct value. That is,
error refers to the difference between the actual output of the software and the correct
output. Error is also used to refer to human action that results in software containing a
defect or fault. This definition is quite general and encompasses all the phases.
Fault is a condition that causes a system to fail in performing its required
function. A fault is the basic reason for software malfunction and is practically
synonymous with the commonly used term bug, or the somewhat more general term
defect.
Failure is the inability of a system or component to perform a required function
according to its specifications. A software failure occurs if the behavior of the software
is different from the specified behavior. Failures may be caused by functional or
performance factors. Note that the definition does not imply that a failure must be
observed. It is possible that a failure may occur but not be detected.
There are some implications of these definitions. Presence of an error (in the
state) implies that a failure must have occurred, and the observance of a failure implies
that a fault must be present in the system.
Explain the psychology of testing

As mentioned, in testing, the software under test (SUT) is executed with a set of test
cases. As discussed, devising a set of test cases that will guarantee that all errors will
be detected is not feasible. Moreover, there are no formal or precise methods for
selecting test cases. Even though there are a number of heuristics and rules of thumb
for deciding the test cases, selecting test cases is still a creative activity that relies on
the ingenuity of the tester. Because of this, the psychology of the person performing
the testing becomes important. A basic purpose of testing is to detect the errors that
may be present in the program. Hence, one should not start testing with the intent of
showing that a program works; rather, the intent should be to show that a program does
not work, that is, to reveal any defects that may exist. Due to this, testing has also been
defined as the process of executing a program with the intent of finding errors.
This emphasis on proper intent of testing is not a trivial matter because test cases are
designed by human beings, and human beings have a tendency to perform actions to
achieve the goal they have in mind. So, if the goal is to demonstrate that a program
works, we may consciously or subconsciously select test cases that will try to
demonstrate that goal, and that will defeat the basic purpose of testing. On the other
hand, if the intent is to show that the program does not work, we will challenge our
intellect to find test cases toward that end, and we are likely to detect more errors.
Testing is essentially a destructive process, where the tester has to treat the program
as an opponent that must be beaten by showing the
presence of errors. This is one of the reasons why many organizations employ
independent testing in which testing is done by a team that was not involved in
building the system.

Explain the process of testing with test oracle and test cases
A test oracle is a mechanism, different from the program itself, that can be used to
check the correctness of the output of the program for the test cases. Conceptually, we
can consider testing to be a process in which the test cases are given to the test oracle
and the program under test.
The output of the two is then compared to determine if the program behaved correctly
for the test cases. To help the oracle determine the correct behavior, it is important that
the behavior of the system or component be unambiguously specified and that the
specification itself is error free.
There are some systems where oracles are automatically generated from specifications
of programs or modules. With such oracles, we are assured that the output of the
oracle is consistent with the specifications.
A test case is a document that has a set of test data, preconditions, expected results,
and postconditions, developed for a particular test scenario in order to verify
compliance with a specific requirement.
A test case acts as the starting point for test execution; after applying a set of
input values, the application has a definitive outcome and leaves the system at some
end point, also known as the execution postcondition.
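As a small sketch of this process (the square-root program and its oracle are assumed examples, not from the text), the same test cases are given to the program under test and to the oracle, and their outputs are compared:

import math

# Program under test (assumed example implementation).
def program_under_test(x):
    return x ** 0.5

# Test oracle: a mechanism, separate from the program, that gives the
# correct output as derived from the specification.
def oracle(x):
    return math.sqrt(x)

# Test cases: the input data applied to both the program and the oracle.
test_cases = [0, 1, 4, 2.25, 10000]

for x in test_cases:
    expected = oracle(x)
    actual = program_under_test(x)
    verdict = "PASS" if abs(actual - expected) < 1e-9 else "FAIL"
    print(f"input={x} expected={expected} actual={actual} {verdict}")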

Explain Functional testing with any method


Boundary value analysis and Equivalence Class Partitioning both are test case
design techniques in black box testing.
Boundary value analysis
Boundary value analysis is one of the widely used test case design techniques for black box
testing. It is used to test boundary values because the input values near the boundary
have higher chances of error.
When testing with boundary value analysis, the tester focuses on whether the software
produces the correct output when boundary values are entered.
Boundary values are those at the upper and lower limits of a variable. Assume that age
is a variable of some function, its minimum value is 18, and its maximum value is 30;
both 18 and 30 will be considered boundary values.
The basic assumption of boundary value analysis is that test cases created using
boundary values are the most likely to cause an error. Here, 18 and 30 are the
boundary values, which is why the tester pays more attention to these values, but this
does not mean that middle values like 19, 20, 21, 27, and 29 are ignored. Test cases
are developed for each and every value of the range.
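For the age example above, a small sketch of boundary value test cases (the validation function itself is assumed for illustration) could look like this:

# Assumed function under test: accepts ages in the range 18..30 inclusive.
def is_valid_age(age):
    return 18 <= age <= 30

# Boundary value analysis: values at and just around the boundaries 18 and 30.
boundary_cases = {
    17: False,  # just below the lower boundary
    18: True,   # lower boundary
    19: True,   # just above the lower boundary
    29: True,   # just below the upper boundary
    30: True,   # upper boundary
    31: False,  # just above the upper boundary
}

for age, expected in boundary_cases.items():
    actual = is_valid_age(age)
    print(f"age={age} expected={expected} actual={actual} "
          f"{'PASS' if actual == expected else 'FAIL'}")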
Explain the equivalence class partitioning
Boundary value analysis and Equivalence Class Partitioning both are test case
design techniques in black box testing.
Equivalence partitioning

Equivalence partitioning is a test case design technique that divides the input data of
software into different equivalence data classes. Test cases are designed for each
equivalence data class. The equivalence partitions are frequently derived from the
requirements specification for input data that influence the processing of the test
object. Using this method reduces the time necessary for testing software, because
fewer but more effective test cases are needed.

Equivalence Partitioning = Equivalence Class Partitioning = ECP

It can be used at any level of software testing and is a good technique to use first. In
this technique, only one condition needs to be tested from each partition, because we
assume that all the conditions in one partition are treated in the same manner by the
software. If one condition in a partition works, the others will definitely work; likewise,
we assume that if one of the conditions does not work, then none of the conditions in
that partition will work.

Equivalence partitioning is a testing technique where input values are divided into classes for
testing.

● Valid Input Class = Keeps all valid inputs.


● Invalid Input Class = Keeps all invalid inputs.

Example of Equivalence Class Partitioning?

● A text field permits only numeric characters


● Length must be 6-10 characters long

Partitions according to the requirement should be like this:

● Invalid partition: length 0-5
● Valid partition: length 6-10
● Invalid partition: length 11-14

While evaluating equivalence partitioning, values within a partition are equivalent; that is
why 0-5 are equivalent, 6-10 are equivalent, and 11-14 are equivalent.

At the time of testing, test a value of length 4 and a value of length 12 as invalid values and a value of length 7 as a valid one.

It is easy to test an input range like 6-10 but harder to test a range like 2-600. Testing is
easier when there are fewer test cases, but you should be very careful: by assuming
that a single valid input such as 7 is enough, you are trusting that the developer has
coded the correct valid range (6-10).
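A small sketch of the text-field example (the validation function is assumed for illustration), testing one representative value from each partition:

# Assumed function under test: a text field that permits only numeric
# characters and requires a length of 6 to 10 characters.
def is_valid_field(text):
    return text.isdigit() and 6 <= len(text) <= 10

# One representative value from each equivalence partition.
representatives = [
    ("1234", False),          # length 4: invalid partition (0-5)
    ("1234567", True),        # length 7: valid partition (6-10)
    ("123456789012", False),  # length 12: invalid partition (11-14)
]

for value, expected in representatives:
    actual = is_valid_field(value)
    print(f"{value!r}: expected={expected} actual={actual} "
          f"{'PASS' if actual == expected else 'FAIL'}")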

Explain the cause effect graphing

A cause-effect graph graphically shows the connection between a given outcome and all
the issues that influence that outcome. Cause-effect graphing is a black box testing technique.
It was originally used for hardware testing but has since been adapted to software testing,
and it usually tests the external behavior of a system. It is a technique that aids in choosing
test cases by logically relating causes (inputs) to effects (outputs).
A “Cause” stands for a distinct input condition that brings about an internal change
in the system. An “Effect” represents an output condition, a system transformation, or a
state resulting from a combination of causes.
Drawing Cause-Effect Graphs
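The graph relates causes to effects with Boolean operators such as AND, OR, and NOT. As a small illustration (the causes and the effect here are assumed, not taken from a specific system), the same relations can be written as Boolean expressions, and each combination of causes becomes a candidate test case:

# Causes (input conditions) and effect (output condition), assumed example:
#   c1: the first character of the input is a letter
#   c2: the remaining characters are digits
#   e1: the identifier is accepted (e1 = c1 AND c2)
def effect_accepted(c1, c2):
    return c1 and c2

# Derive test cases from the combinations of the causes.
for c1 in (True, False):
    for c2 in (True, False):
        print(f"c1={c1} c2={c2} -> accepted={effect_accepted(c1, c2)}")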
Explain the structural testing with an approach
Structural testing, also known as glass box testing or white box testing, is an approach
where the tests are derived from the knowledge of the software's structure or internal
implementation.
The other names of structural testing include clear box testing, open box testing, logic
driven testing or path driven testing.
Structural Testing Techniques:
● Statement Coverage - This technique is aimed at exercising all programming
statements with minimal tests.
● Branch Coverage - This technique runs a series of tests to ensure that all
branches are tested at least once.
● Path Coverage - This technique corresponds to testing all possible paths, which
means that each statement and branch is covered.

Structural testing is basically related to the internal design and implementation of the
software i.e. it involves the development team members in the testing team. It
basically tests different aspects of the software according to its types. Structural testing
is just the opposite of behavioral testing.
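The three coverage techniques listed above can be illustrated with a small sketch (the function and the tests are assumed for illustration):

# Assumed function under test.
def absolute_value(x):
    result = x
    if x < 0:
        result = -x
    return result

# Statement coverage: the single test absolute_value(-5) executes every
# statement, because the body of the if is entered.
assert absolute_value(-5) == 5

# Branch coverage: both outcomes of the decision x < 0 must be exercised,
# so a second test that skips the if body is also needed.
assert absolute_value(3) == 3

# Path coverage: for this function the two tests above already cover both
# paths; with more decisions, the number of paths grows much faster.
print("statement, branch, and path coverage exercised")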

Control Flow Testing:


Control flow testing is a type of structural testing that uses the program's control flow
as a model. The entire code, design and structure of the software have to be known for
this type of testing. Often this type of testing is used by the developers to test their
own code and implementation. This method is used to test the logic of the code so that
the required result can be obtained.
Data Flow Testing:
It uses the control flow graph to explore the unreasonable things that can happen to
data.
The detection of data flow anomalies is based on the associations between values and
variables, for example, variables being used without being initialized, or initialized
variables never being used.
Slice Based Testing:
It is useful for software debugging, software maintenance, program understanding and
quantification of functional cohesion. It divides the program into different slices and
tests the slices that can have a major effect on the entire software.
Advantages of Structural Testing:

● It provides thorough testing of the software.


● It helps in finding out defects at an early stage.
● It helps in elimination of dead code.
● It is not time consuming as it is mostly automated.

Disadvantages of Structural Testing:

● It requires knowledge of the code to perform the tests.


● It requires training in the tool used for testing.
● Sometimes it is expensive.

Explain the control flow based criteria


Control flow testing is a testing technique that comes under white box testing. The aim
of this technique is to determine the execution order of statements or instructions of
the program through a control structure. The control structure of a program is used to
develop a test case for the program. In this technique, a particular part of a large
program is selected by the tester to set the testing path. It is mostly used in unit
testing. Test cases are represented by the control flow graph of the program.
A control flow graph is formed from nodes, edges, decision nodes, and junction nodes to
specify all possible execution paths.
Notations used for Control Flow Graph

1. Node
2. Edge
3. Decision Node
4. Junction node

Node
Nodes in the control flow graph are used to create a path of procedures. Basically, they
represent the sequence of procedures, that is, which procedure comes next, so the tester
can determine the order in which the procedures occur.
In the example below, the first node represents the start procedure, and the next
procedure assigns the value of n. After the value is assigned, there is a decision node
that decides the next procedure according to the value of n: if it is 18 or more, the
Eligible procedure executes; otherwise, if it is less than 18, the Not Eligible procedure
executes. The next node is the junction node, and the last node is the stop node that
ends the procedure.
Edge
Edges in the control flow graph are used to link the nodes and show the direction of flow.
In the example below, the arrows link the nodes in the appropriate direction.
Decision node
A decision node in the control flow graph is used to decide the next procedure according to
a value.
In the example below, the decision node decides the next procedure according to the
value of n: if it is 18 or more, the Eligible procedure executes; otherwise, if it is
less than 18, the Not Eligible procedure executes.
Junction node
A junction node in the control flow graph is a point where at least three links meet.
Diagram - control flow graph
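As a minimal sketch (the function name is assumed; the messages follow the notes), the program behind this control flow graph can be written as:

# Program whose control flow graph is described in these notes.
def check_voting_eligibility():
    n = int(input("Enter age: "))                 # node: assign the value of n
    if n >= 18:                                   # decision node
        print("You are eligible for voting")      # Eligible node
    else:
        print("You are not eligible for voting")  # Not Eligible node
    # junction node: both branches join here, followed by the stop node

check_voting_eligibility()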
The above example shows the age eligibility criteria for voting: if the age is 18 or more
than 18, the message “You are eligible for voting” is printed; if it is less than 18, then
“You are not eligible for voting” is printed.
The program for this scenario is written above, and the control flow graph is designed for
the testing purpose.
In the control flow graph, start, age, eligible, not eligible, and stop are the nodes;
n>=18 is a decision node that decides which part (if or else) will execute for the given
value. Both the eligible node and the not eligible node connect to the stop node.
Test cases are designed from the flow graph of the program to determine whether the
execution path is correct or not. All the nodes, junctions, edges, and decision nodes are
essential parts of designing test cases.

Explain the data flow based testing with an example code


Data flow testing is used to analyze the flow of data in the program. It is the process of
collecting information about how variables carry data through the program, and it tries to
obtain particular information about each point in the process.
Data flow testing is a group of testing strategies that examine the control flow of
programs in order to explore the sequence of variables according to the sequence of
events. It mainly focuses on the points at which values are assigned to variables and
the points at which these values are used; by concentrating on both points, the data flow
can be tested.
Data flow testing uses the control flow graph to detect illogical things that can interrupt
the flow of data. Anomalies in the flow of data are detected from the associations
between values and variables (see the code sketch after this list), for example when:

● Variables are used without initialization.


● Initialized variables are not used even once.
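Since the question asks for example code, here is a small sketch (the function and variable names are assumed) showing the two kinds of data flow anomalies listed above:

# Anomaly 1: a variable is used without being initialized.
def compute_total(prices):
    for p in prices:
        total = total + p   # 'total' is used before any value is assigned to it;
                            # calling this function raises UnboundLocalError
    return total

# Anomaly 2: an initialized variable is never used.
def compute_discount(price):
    rate = 0.10             # 'rate' is defined here...
    return price * 0.10     # ...but that definition is never used afterwards

# Data flow testing pairs each definition of a variable with the uses that
# definition reaches and derives test cases that exercise those pairs.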

Software maintenance
Software maintenance is the process of changing, modifying, and updating software to
keep up with customer needs. Software maintenance is done after the product has
launched for several reasons, including improving the software overall, correcting issues
or bugs, boosting performance, and more.
Software maintenance is a natural part of SDLC (software development life cycle).
Software developers don’t have the luxury of launching a product and letting it run;
they constantly need to be on the lookout to both correct and improve their software to
remain competitive and relevant.
Using the right software maintenance techniques and strategies is a critical part of
keeping any software running for a long period of time and keeping customers and
users happy.

Why is software maintenance important?


Creating a new piece of software and launching it into the world is an exciting step for
any company. A lot goes into creating your software and its launch including the actual
building and coding, licensing models, marketing, and more. However, any great piece
of software must be able to adapt to the times.
This means monitoring and maintaining properly. As technology is changing at the
speed of light, software must keep up with the market changes and demands.
What are the 4 types of software maintenance?
The four different types of software maintenance are each performed for different
reasons and purposes. A given piece of software may have to undergo one, two, or all
types of maintenance throughout its lifespan.
The four types are:
● Corrective Software Maintenance
● Preventative Software Maintenance
● Perfective Software Maintenance
● Adaptive Software Maintenance

Corrective Software Maintenance


Corrective software maintenance is the typical, classic form of maintenance (for
software and anything else for that matter). Corrective software maintenance is
necessary when something goes wrong in a piece of software including faults and
errors. These can have a widespread impact on the functionality of the software in
general and therefore must be addressed as quickly as possible.
Many times, software vendors can address issues that require corrective maintenance
due to bug reports that users send in. If a company can recognize and take care of
faults before users discover them, this is an added advantage that will make your
company seem more reputable and reliable (no one likes an error message after all).

Preventative Software Maintenance


Preventative software maintenance is looking into the future so that your software can
keep working as desired for as long as possible.
This includes making necessary changes, upgrades, adaptations and more.
Preventative software maintenance may address small issues which at the given time
may lack significance but may turn into larger problems in the future. These are called
latent faults which need to be detected and corrected to make sure that they won’t
turn into effective faults.

Perfective Software Maintenance


As with any product on the market, once the software is released to the public, new
issues and ideas come to the surface. Users may see the need for new features or
requirements that they would like to see in the software to make it the best tool
available for their needs. This is when perfective software maintenance comes into
play.
Perfective software maintenance aims to adjust software by adding new features as
necessary and removing features that are irrelevant or not effective in the given
software. This process keeps software relevant as the market, and user needs, change.

Adaptive Software Maintenance


Adaptive software maintenance has to do with the changing technologies as well as
policies and rules regarding your software. These include operating system changes,
cloud storage, hardware, etc. When these changes are performed, your software must
adapt in order to properly meet new requirements and continue to run well.
The Software Maintenance Process
The software maintenance process involves various software maintenance techniques
that can change according to the type of maintenance and the software maintenance
plan in place.
Most software maintenance process models include the following steps:
1. Identification & Tracing – The process of determining what part of the software
needs to be modified (or maintained). This can be user-generated or identified by the
software developer itself depending on the situation and specific fault.
2. Analysis – The process of analyzing the suggested modification including
understanding the potential effects of such a change. This step typically includes cost
analysis to understand if the change is financially worthwhile.
3. Design – Designing the new changes using requirement specifications.
4. Implementation – The process of implementing the new modules by
programmers.
5. System Testing – Before being launched, the software and system must be
tested. This includes the module itself, the system and the module, and the whole
system at once.
6. Acceptance Testing – Users test the modification for acceptance. This is an
important step as users can identify ongoing issues and generate recommendations for
more effective implementation and changes.
7. Delivery – Software updates or in some cases new installation of the software.
This is when the changes arrive at the customers.
Software Maintenance Cost
The cost of software maintenance can be high. However, this doesn’t negate the
importance of software maintenance. In certain cases, software maintenance can cost
up to two-thirds of the entire software process cycle or more than 50% of the SDLC
processes.
The costs involved in software maintenance are due to multiple factors and vary
depending on the specific situation. The older the software, the more maintenance will
cost, as technologies (and coding languages) change over time. Revamping an old
piece of software to meet today’s technology can be an exceptionally expensive
process in certain situations.
In addition, engineers may not always be able to target the exact issues when looking
to upgrade or maintain a specific piece of software. This causes them to use a trial and
error method, which can result in many hours of work.
When creating new software as well as taking on maintenance projects for older
models, software companies must take software maintenance costs into consideration.
Without maintenance, any software will be obsolete and essentially useless over time.
