Code: 17D25201

M.Tech I Semester Supplementary Examinations May/June 2022

ADVANCES IN SOFTWARE TESTING

(Common to CN, SE, CSE & CS)

(For students admitted in 2017 (LC), 2018, 2019 & 2020 only)

Time: 3 hours                                                Max. Marks: 60


1(a) Discuss dominators and post-dominators. 6M
In computer science, a node d of a control-flow graph dominates a node n if every path from
the entry node to n must go through d. Notationally, this is written as d dom n (or sometimes d ≫ n).
By definition, every node dominates itself.

There are a number of related concepts:

A node d strictly dominates a node n if d dominates n and d does not equal n.

The immediate dominator or idom of a node n is the unique node that strictly dominates n but does
not strictly dominate any other node that strictly dominates n. Every node, except the entry node, has
an immediate dominator.[1]

The dominance frontier of a node d is the set of all nodes ni such that d dominates an immediate
predecessor of ni, but d does not strictly dominate ni. It is the set of nodes where d's dominance
stops.

A dominator tree is a tree where each node's children are those nodes it immediately dominates.
Because the immediate dominator is unique, it is a tree. The start node is the root of the tree.
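The dominator sets described above can be computed with the classic iterative dataflow algorithm. Below is a minimal Python sketch; the four-node diamond CFG, the node names, and the dict-of-successors representation are assumptions chosen purely for illustration:

```python
def dominators(cfg, entry):
    """Iterative dataflow computation of dominator sets.
    cfg: dict mapping each node to its list of successors."""
    nodes = set(cfg) | {s for succs in cfg.values() for s in succs}
    preds = {n: [] for n in nodes}
    for n, succs in cfg.items():
        for s in succs:
            preds[s].append(n)
    dom = {n: set(nodes) for n in nodes}   # start from an over-approximation
    dom[entry] = {entry}                   # the entry dominates only itself
    changed = True
    while changed:                         # iterate to a fixed point
        changed = False
        for n in nodes - {entry}:
            new = {n} | (set.intersection(*(dom[p] for p in preds[n]))
                         if preds[n] else set())
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

# Hypothetical CFG: entry A branches to B and C, which rejoin at D.
cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
doms = dominators(cfg, "A")
# A dominates every node; D is dominated only by A and itself, since
# control can reach D through either B or C.
```

Note that D's dominator set excludes B and C: neither lies on every path from the entry to D, which is exactly the definition above.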

Post-dominators
A vertex v post-dominates a vertex w if all paths from w to the end of the program
must pass through v.

As in the case of dominators, we define the notion of generalized post-dominators
by considering multiple-vertex post-dominators. The generalized post-dominator
information for a CFG is the same as the generalized dominator information for the
reverse CFG, which is constructed by simply reversing the direction of all edges in
the CFG. Thus, the generalized post-dominators can be computed by applying the
same dominator algorithms to the reverse CFG. Alternatively, the
post-dominator set of a vertex v can be computed from the post-dominator sets of
the successors of v. The all-vertex inclusion property is also defined in the context of
post-dominators; applying the all-vertex inclusion transformation to the
reverse CFG ensures that this property is satisfied.
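The reverse-CFG idea can be sketched directly: a vertex's predecessors in the reversed graph are exactly its successors in the original, so the iterative dominator computation can run over original successor sets. The diamond CFG below is a hypothetical example used only for illustration:

```python
def post_dominators(cfg, exit_node):
    """Post-dominators via the reverse CFG: running the iterative dominator
    algorithm over each node's ORIGINAL successors is equivalent to running
    it over predecessors in the edge-reversed graph."""
    nodes = set(cfg) | {s for ss in cfg.values() for s in ss}
    pdom = {n: set(nodes) for n in nodes}  # over-approximation
    pdom[exit_node] = {exit_node}          # the exit post-dominates only itself
    changed = True
    while changed:
        changed = False
        for n in nodes - {exit_node}:
            succs = cfg.get(n, [])
            new = {n} | (set.intersection(*(pdom[s] for s in succs))
                         if succs else set())
            if new != pdom[n]:
                pdom[n], changed = new, True
    return pdom

# Hypothetical diamond CFG: A branches to B and C, which rejoin at exit D.
cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
pdom = post_dominators(cfg, "D")
# D post-dominates A, B, and C: every path to the end passes through D.
```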
1(b) What is the Category Partition Method? Explain with an example. 6M
As we saw in the previous exploration, the Category Partition Method is a way of
reducing the potentially thousands of different combinations of testable features
(i.e. test cases) down to a number that can be realistically implemented.
Identify Testable Features
In this step, you will take the software specification and identify the individually
testable units. What does this sound like? That's right! It sounds like identifying
unit tests.
Identify Categories
A category can be thought of as a way of describing an input. Going back to our
password example, the input we are providing is a string. How might we describe
a given input string? One way is to describe the string's length; another way is to
describe the string's content. We have just identified two categories for this
particular testable feature.
Partition Categories into Choices
This step is all about identifying subdomains. We do this by identifying unique
cases that are likely to be worth testing. This requires using everything we have
learned about what makes a good test case. For example, an interesting case for
our password string would be at length 7 based on our knowledge of edge cases.
For the content of the string a good subdomain would be valid passwords that are
only missing a number.
length
0
7
8
etc.
content
all numbers
all lowercase
all special characters
etc.
Identify Constraints
Remember that to have an effective test suite, we need to combine all our
subdomains/choices together. You can see how this would quickly get out of
hand. That is where constraints come in. Some combinations just don't make
sense. For example, there is no reason to test more than a single combination that
includes length: 0. By putting in this constraint, we are able to eliminate a large
number of possible test cases.

Produce/Evaluate Test Case Specification


This is where the categories' choices are combined according to the identified
constraints to produce test frames. You may be asking yourself what that actually
means. It means we take all the possible subdomains for each category and match
them up with the subdomains of the other categories. We then use the
constraints to eliminate some of the combinations. What we are left is a series of
choice combinations (e.g. passwords of length 7 with all numbers) which are
called test frames. Each test frame is a formal description of a test case. This
covers the production portion of this step.
We also need to evaluate the test frames. This will usually mean realizing that we
have produced too many test frames to realistically implement. When this
happens we need to re-examine our constraints, choices, and maybe even our
categories. Yes, our testing won't be as thorough, but at least will have some
tests!

If this sounds like a lot of work, you aren't wrong. Luckily, there are tools out there
that can automate the test frame generation!

Generate Test Cases


We are finally ready to actually convert the test frames into actual tests cases in
our testing framework!
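The steps above can be sketched end-to-end for the password example. The categories, their choices, and the single constraint below are assumptions chosen for illustration:

```python
from itertools import product

# Hypothetical categories and choices for a password-validation feature.
categories = {
    "length":  [0, 7, 8, 12],
    "content": ["all numbers", "all lowercase", "mixed, no number", "valid mix"],
}

def violates_constraint(frame):
    """Constraint: there is no reason to test more than one combination
    with length 0, so keep only a single representative for it."""
    return frame["length"] == 0 and frame["content"] != "all numbers"

# Combine every choice of every category, then prune with the constraint;
# each surviving dict is one test frame.
frames = [
    dict(zip(categories, combo))
    for combo in product(*categories.values())
    if not violates_constraint(dict(zip(categories, combo)))
]
print(len(frames))  # 4 * 4 = 16 raw combinations, minus 3 pruned = 13
```

Each remaining frame (e.g. `{"length": 7, "content": "all numbers"}`) is a formal description of one test case, ready to be converted into an actual test.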

2.a Write a note on Control Dependence 6M


Consider the following code sequence −
mul r1, r2, r3;
jz zproc;
sub r4, r7, r1;
...
zproc: load r1, x;
...
:
In this example, the actual direction of execution depends on the result of the
multiplication. This shows that the instructions following a conditional branch are
dependent on it. In a similar manner, all conditional control instructions, including
conditional branches, calls, skips, etc., propagate dependencies onto the logically
subsequent instructions; these dependencies are known as control dependencies.
The term general-purpose program covers compilers, operating systems, and non-
numeric application programs. Measurements indicate that general-purpose programs
have a high percentage of branches, up to 20-30%. In contrast, scientific/technical
programs contain fewer branches; the frequency can be as low as 5-10%.
The ratio of conditional branches to all branches seems to be quite stable across
different programs, remaining within the range of 75-85%. As a consequence, the
expected frequency of conditional branches in general-purpose code is about 20%,
whereas in scientific programs it is merely 5-10%.
Frequent conditional branches impose a huge execution constraint on ILP-Processor.
ILP-Processor can boost performance mainly by executing more and more instructions
in parallel.
To achieve this, the processor must incorporate more and more EUs and is forced to raise
the instruction issue rate. But the more instructions are issued in each cycle, the higher
the probability of encountering a conditional control dependency in each cycle.
For example, let us consider a code sequence where every sixth instruction is a
conditional branch, as shown in the figure. Let us assume that the code sequence does
not contain any data or resource dependencies, so the instruction issue
mechanism can issue two, three, or six instructions at will. As the issue rate is raised
from two to three to six instructions/cycle, every third, every second, or even every
single issue will contain a conditional branch, giving rise to progressively more severe
performance degradation.

Control dependency graphs


Just as data dependencies, control dependencies can also be defined by directed graphs.
Instructions transferring control dependencies are generally defined by nodes with two
successor arcs, as displayed in the figure. The outgoing arcs define the true (T) and false
(F) paths and are generally labeled accordingly.

Nodes with only one outgoing arc define either an operational instruction or a sequence
of conditional branch-free operational instructions (straight-line code). The general
method for directed graphs representing control dependencies is Control Dependency
Graph (CDG).
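The CDG for the earlier mul/jz/sub sequence can be sketched as a small dict of labeled arcs; the node labels and the dict representation are illustrative assumptions, not a standard API:

```python
# Hypothetical CDG for the mul/jz/sub sequence: the branch node carries
# labeled True/False outgoing arcs, while straight-line nodes have a
# single unlabeled successor.
cdg = {
    "mul r1,r2,r3": {"next": "jz zproc"},
    "jz zproc":     {"T": "zproc: load r1,x", "F": "sub r4,r7,r1"},
}

# Instructions reached through a labeled T/F arc are control dependent
# on the branch that owns the arc.
dependents = [succ for label, succ in cdg["jz zproc"].items()
              if label in ("T", "F")]
print(dependents)
```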
2(b) Explain Equivalence class partitioning with an example 6M

Equivalence Partitioning Method, also known as Equivalence Class
Partitioning (ECP), is a black-box software testing technique that divides the
input domain into classes of data, from which test cases can be derived. An
ideal test case identifies a class of errors that might otherwise require many
arbitrary test cases to be executed before the general error is observed.
In equivalence partitioning, equivalence classes are evaluated for the given
input conditions. Whenever an input is given, its input condition is checked,
and for that input condition the equivalence class represents a set of valid
or invalid states.
Guidelines for Equivalence Partitioning :
• If a range condition is given as an input, then one valid and two invalid
equivalence classes are defined.
• If a specific value is given as input, then one valid and two invalid
equivalence classes are defined.
• If a member of a set is given as an input, then one valid and one invalid
equivalence class is defined.
• If a Boolean value is given as an input condition, then one valid and one
invalid equivalence class is defined.
Example-1:
Let us consider a college admission process. A college gives admission to
students based on their percentage. Consider a percentage field that accepts
only values between 50% and 90%; anything above or below this range is not
accepted, and the application redirects the user to an error page. If the
percentage entered by the user is less than 50% or more than 90%, the
equivalence partitioning method classifies it as an invalid percentage. If the
percentage entered is between 50% and 90%, the method classifies it as a
valid percentage.
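The admission example can be sketched by picking one representative value per equivalence class; the function name and return strings below are assumptions for illustration:

```python
def classify_percentage(p):
    """Equivalence-class check for the admission example:
    the valid class is 50..90 inclusive; below and above are
    the two invalid classes for the range condition."""
    if p < 50:
        return "invalid: below range"
    if p > 90:
        return "invalid: above range"
    return "valid"

# One representative per class is enough to exercise each partition:
for rep in (30, 70, 95):
    print(rep, classify_percentage(rep))
```

Note that 30 and 95 each stand for an entire invalid class, and 70 for the whole valid class, which is how three test values replace testing every possible percentage.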
3.a What is Conformance Testing? Why is it required? explain 6M

There are many types of testing including testing for


performance, robustness, behavior, functions and
interoperability. Conformance testing may include some of these
kinds of tests but it has one fundamental difference.
Conformance testing is testing to see if an implementation meets
the requirements of a standard or specification. The
requirements or criteria for conformance must be specified in the
standard or specification, usually in a conformance clause or
conformance statement. Some standards have subsequent
standards for the test methodology and assertions to be tested. If
the criteria or requirements for conformance are not specified
there can be no conformance testing.
The general definition for conformance has changed over time
and been refined for specific standards. In 1991, ISO/IEC DIS
10641 defined conformance testing as "test to evaluate the
adherence or nonadherence of a candidate implementation to a
standard." ISO/IEC TR 13233 defined conformance and
conformity as "fulfillment by a product, process or service of all
relevant specified conformance requirements." In recent years,
the term conformity has gained international use and has
generally replaced the term conformance in ISO documents.
In 1996 ISO/IEC Guide 2 defined the three major terms used in
this field.
"conformity - fulfillment of a product, process or service of
specified requirements."
"conformity assessment - any activity concerned with determining
directly or indirectly that relevant requirements are fulfilled."
"conformity testing - conformity evaluation by means of testing."
ISO/IEC Guide 2 also mentions that "Typical examples of
conformity assessment activities are sampling, testing and
inspection; evaluation, verification and assurance of conformity
(supplier's declaration, certification); registration, accreditation
and approval as well as their combinations."
Conformity assessment is meant to provide the users of
conforming products some assurance or confidence that the
product behaves as expected, performs functions in a known
manner, or has an interface or format that is known. Conformity
assessment is NOT a way to judge if one product is better than
another. Conformity assessment is a neutral mechanism to judge
a product against the criteria of a standard or specification.
2. Conformity Assessment Program
Not all standards or specifications have a conformity assessment
program or a testing program. Usually assessment programs are
limited to those standards or specifications that are critical for
applications to run correctly, for interoperability with other
applications, or for security of the systems. The decision to
establish a program is based on the risk of nonconformance
versus the costs of creating and running a program.
A conformity assessment program usually requires:

• Standard or specification
• Test method standard or conformance clause
• Test suite or test tools
• Procedures for testing
• Qualified body to do testing
The first two requirements in any conformity assessment
program are to have a standard or specification and something
that defines what conformance is. If there is no conformance
clause or test method standard then there is no definition of
conformance for that standard or specification.
The next requirement is for some mechanism for doing the
testing, a test suite or testing tools. Development of the test suite
or testing tools is the costliest part of the conformity assessment
program. The costs are dependent on the type of testing that is
required (see below).
The other two requirements for a conformity assessment
program are the procedures to do the testing and someone to do
the testing following the specified procedures. The quality of the
test suite or testing tools, the detail of the procedures, and the
expertise of the tester, determine the quality, reliability, and
repeatability of the test results. The procedures have to be
detailed enough to ensure that they can be repeated with no
change in test results. They are the documentation of how the
testing is done and the directions for the tester to follow. These
procedures should also contain information on what must be
done when failures occur. Most testing programs strive to obtain
impartial and objective results, i.e. to remove subjectivity as
much as possible both in the procedures and the testing tools.
3. Types of Testing
A standard or specification may require one or more types of
testing. However, the type of testing required has a significant
impact on the costs of testing. To illustrate this IEEE Std 2003-
1997 defines three types of testing:
Exhaustive testing - "seeks to verify the behavior of every aspect
of an element, including all permutations. For example,
exhaustive testing of a given user command would require
testing the command with no options, with each option, with each
pair of options, and so on up to every permutation of options."
Exhaustive testing or developing tests for all requirements of a
standard or specification can take many staff years and be
prohibitively expensive. In some cases it is impossible to test all
of the possible test cases in a reasonable amount of time.
"As an example, there are approximately 37 unique error
conditions in POSIX.1. The occurrence of one error can (and
often does) affect the proper detection of another error. An
exhaustive test of the 37 errors would require not just one test
per error but one test per possible permutation of errors. Thus,
instead of 37 tests, billions of tests would be needed (2 to the
37th power)." Even in a simpler example, if thirteen fields on
a page have three possible inputs per field, the number of
possible test cases is 1,594,323. Thus the number of test cases
for a specification can grow exponentially very quickly.
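The arithmetic behind both counts quoted above is a simple power: the number of exhaustive cases is the number of levels raised to the number of fields.

```python
# Exhaustive-test counts from the examples above: levels ** fields.
posix_error_permutations = 2 ** 37   # subsets of 37 POSIX.1 error conditions
page_cases = 3 ** 13                 # 13 fields, 3 possible inputs each
print(posix_error_permutations, page_cases)  # 137438953472 1594323
```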
Thorough testing - "seeks to verify the behavior of every aspect
of an element, but does not include all permutations. For
example, to perform thorough testing of a given command, the
command shall be tested with no options, then with each option
individually. Possible combinations of options may also be
tested." Usually a test method or conformance clause may
specify these boundaries which can be used for thorough testing
or suggest a range of possibilities which could be tested.
Identification testing - "seeks to verify some distinguishing
characteristic of the element in question. It consists of a cursory
examination of the element, invoking it with the minimal
command syntax and verifying its minimal function." An example
might be to simply determine if any value is in a field, if the field
exists, as opposed to testing all of the acceptable values.
4. Factors for Success

For any testing program to be successful it must meet the


specific goals of the conformity assessment program. Usually a
conformity assessment program must be efficient, effective and
repeatable.
To minimize costs and the burden on participants a program
must be efficient. Test tools must be optimized to maximize
automation and minimize human intervention. Critical areas need
to be identified for testing. Other areas which are not critical and
don't require testing also need to be identified. It is too expensive
to "just test everything." Testing procedures and procedures for
processing test results need to be automated where possible.
The testing program must be effective. It must test the critical
areas required by the specification or standard and must provide
the desired level of assurance for its customer base. To meet
international guidelines, test results must be repeatable and
reproducible. Repeatable results mean that different testers,
following the same procedures and test methodology, should be
able to get the same results on the same platform. Some testing
programs also require reproducibility: different testers, following
the same procedures and test methodology, should be able to
repeat the results on different platforms.
3(b) Explain Multi-Valued factors with an example 6M
For designing Test Cases the following factors are considered:
1. Correctness
2. Negative
3. User Interface
4. Usability
5. Performance
6. Security
7. Integration
8. Reliability
9. Compatibility
Correctness : Correctness is the minimum requirement of software, the essential purpose of
testing. The tester may or may not know the inside details of the software module under test
e.g. control flow, data flow etc.
Negative : In this factor we check what the product is not supposed to do.

User Interface : In UI testing we check the user interfaces. For example in a web page we may
check for a button. In this we check for button size and shape. We can also check the navigation
links.

Usability : Usability testing measures the suitability of the software for its users, and is directed
at measuring the following factors with which specified users can achieve specified goals in
particular environments.
1. Effectiveness : The capability of the software product to enable users to
achieve specified goals with the accuracy and completeness in a specified
context of use.

2. Efficiency : The capability of the product to enable users to expend


appropriate amounts of resources in relation to the effectiveness achieved in
a specified context of use.
Performance : In software engineering, performance testing is testing that is performed from
one perspective to determine how fast some aspect of a system performs under a particular
workload.

Performance testing can serve various purposes. It can demonstrate that the system meets its
performance criteria.
1. Load Testing: This is the simplest form of performance testing. A load test
is usually conducted to understand the behavior of the application under a
specific expected load.

2. Stress Testing: Stress testing focuses on the ability of a system to handle


loads beyond maximum capacity. System performance should degrade
slowly and predictably without failure as stress levels are increased.

3. Volume Testing: Volume testing belongs to the group of non-functional


values tests. Volume testing refers to testing a software application for a
certain data volume. This volume can in generic terms be the database size
or it could also be the size of an interface file that is the subject of volume
testing.
Security : Process to determine that an Information System protects data and maintains
functionality as intended. The basic security concepts that need to be covered by security testing
are the following:
1. Confidentiality : A security measure which protects
against the disclosure of information to parties other
than the intended recipient that is by no means the
only way of ensuring

2. Integrity: A measure intended to allow the receiver


to determine that the information which it receives
has not been altered in transit other than by the
originator of the information.

3. Authentication: A measure designed to establish the


validity of a transmission, message or originator.
Allows a receiver to have confidence that the
information it receives originated from a specific
known source.

4. Authorization: The process of determining that a


requester is allowed to receive a service/perform an
operation.
Integration : Integration testing is a logical extension of unit testing.
In its simplest form, two units that have already been tested are
combined into a component and the interface between them is
tested.
Reliability : Reliability testing is to monitor a statistical measure of
software maturity over time and compare this to a desired reliability
goal.
Compatibility : Compatibility testing is a part of software's non-
functional tests. It is conducted on the application to
evaluate the application's compatibility with the computing
environment. Browser compatibility testing can be more
appropriately referred to as user experience testing. This requires
that web applications are tested on various web browsers to
ensure the following:
Users have the same visual experience irrespective of the browser through
which they view the web application.
In terms of functionality, the application must behave and respond the same
way across various browsers.

4(a) Explain Test Optimization with an example 6M


At its core, test optimization refers to making the test process more time and cost-
efficient without compromising the accuracy of results. When implemented the right
way, test optimization helps QA teams get the best out of their testing efforts. It also
helps teams streamline processes to deliver consistent results in the long run.
Test optimization involves focusing on the following activities:

1. Reducing the size of test suites
2. Creating a minimal subset of test suites sufficient for achieving given
requirements
3. Eliminating redundant test cases
4. Evaluating the ideal test coverage criteria
5. Increasing the maintainability of test cases
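Finding a minimal subset of tests that still meets the requirements (activity 2 above) is a set-cover problem; a greedy heuristic is a common sketch. The test names and requirement sets below are hypothetical:

```python
# Greedy sketch of test-suite minimization: repeatedly pick the test
# that covers the most still-uncovered requirements (set-cover heuristic).
coverage = {            # hypothetical test -> requirements it exercises
    "t1": {"r1", "r2"},
    "t2": {"r2", "r3"},
    "t3": {"r1", "r2", "r3"},
    "t4": {"r4"},
}

needed = set().union(*coverage.values())   # all requirements to cover
selected = []
while needed:
    best = max(coverage, key=lambda t: len(coverage[t] & needed))
    selected.append(best)
    needed -= coverage[best]

print(sorted(selected))  # ['t3', 't4'] — two tests replace four
```

Greedy selection does not always find the absolute minimum subset, but it is fast and usually close, which is why it is a popular practical choice for suite reduction.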
Optimizing test suites right from the initial stages is integral to ensuring software quality.
There are several useful strategies to optimize test suites for providing efficient results.
Listed below are a few proven test optimization techniques teams can adopt to streamline
the pipeline
Key Test Optimization Techniques
1. Incorporate testing from the early stages of development
The test process must be performed simultaneously with development from the
initial stages. One must consider testing as a continuous process rather than a
phase to be executed and completed only after a particular point.
As the code base is considerably smaller in the initial phase, identifying and
resolving critical bugs becomes much easier as compared to fixing them in the latter
stages. Naturally, this approach proves to be cost-effective and helps both
developers and testers deliver better software.

2. Creating precise and valuable test suites


QA engineers must focus on creating test scenarios that evaluate the critical
functionalities of an application. Grouping the most essential tests in small suites
and executing them helps qualify the application for further testing. For example, if
it’s an e-commerce app, the key tests would include testing product categories,
selecting any product, adding multiple products to the cart, making payments, and
checking out.
Naturally, if the products are not visible in the cart, it won’t make sense to proceed with
further tests. Such tests are termed as build verification or build acceptance

tests. Teams must create these test suites, and if test suites are already formed,
one needs to ensure that they run quickly and successfully.

3. Select the right tools or frameworks


Although one might think of this step as too obvious, it plays a significant role in the
long run. It’s very critical for teams to understand their test goals and accordingly
choose the right tool or framework for automation testing. Teams must consider the
following parameters before choosing an automation tool:

• Nature of Application Under Test:


o Is the application under test mobile-based or web-based?

o Which platforms or browsers does it need to be tested for?


For running cross-browser tests, consider an open-source tool like Selenium, and for
test automation on mobile platforms tools like Appium are ideal.

• Programming skillset: Teams must choose tools that support the programming
languages QAs are comfortable with, while also meeting the needs of the software
being tested.
4. Conduct reviews at regular intervals
Reviews can be conducted either formally or informally among team members in
both development and QA teams. Formal reviews include meetings for code
reviews, walkthroughs, and inspections. Reviews at regular intervals help monitor
overall progress. They also allow teams to evaluate whether the product is meeting
the predetermined requirements and ensure that code quality standards are
maintained.
Regular reviews are also necessary to track the project’s progress as well as keep
team members aligned with their goals.

5. Prioritize the use of Wait commands


Conventionally, testers use Sleep commands to deal with delays in server
response or to wait for a resource (a web element, for example) to load. However,
this makes test scripts inefficient, as it does not guarantee successful test
execution: sleep commands pause the execution of test scripts for a fixed amount
of time, while the back-end response time or the time taken to load an element can
be less than or greater than the time specified in the script. Naturally, this makes
tests flakier.
In order to tackle this issue, replace all sleep commands with wait commands. Proper
application of implicit and explicit wait commands enables QAs to handle certain elements
that take more time to load. As soon as Selenium WebDriver locates the element, the test
will continue its execution. Additionally, it also eliminates unnecessary delays in execution
as in the case of the Sleep
command. Thus, wait commands optimize the overall test scripts and make
execution more reliable.
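The wait-vs-sleep idea can be shown without Selenium at all: an explicit wait polls a condition and returns as soon as it holds, instead of pausing for a fixed worst-case duration. The helper name and the simulated element below are assumptions for illustration:

```python
import time

def wait_until(condition, timeout=10.0, poll=0.25):
    """Explicit-wait sketch: poll a condition until it returns a truthy
    value or the timeout expires. Returns immediately once satisfied,
    unlike a fixed sleep."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")

# Simulated "element" that only becomes available after ~0.5 s.
start = time.monotonic()
element = wait_until(
    lambda: "button" if time.monotonic() - start > 0.5 else None
)
print(element)  # button
```

A fixed `time.sleep(10)` would always pay the full ten seconds; the polling wait above finishes as soon as the element appears, which is the behavior Selenium's implicit and explicit waits provide for real web elements.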

6. Opt for parallel testing on real devices


An ideal way to speed up the execution of automated test scripts is to execute tests
simultaneously across multiple device-browser-OS combinations. This will help
complete the execution of the entire test suite in much less time.
For example, if there are twenty tests to be run, execute each test on different
unique devices in parallel. If each test takes five seconds to run, the entire test suite
will be completed in five seconds. On the other hand, if the test were executed on a
single device, it would have taken a hundred seconds.
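The timing argument above can be demonstrated with threads; the sleep duration standing in for real test execution is an assumption:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_test(name, duration=0.1):
    """Stand-in for one automated test; the sleep models execution time."""
    time.sleep(duration)
    return f"{name}: passed"

tests = [f"test_{i}" for i in range(20)]

start = time.monotonic()
with ThreadPoolExecutor(max_workers=len(tests)) as pool:
    results = list(pool.map(run_test, tests))
elapsed = time.monotonic() - start

# Serially this would take ~20 * 0.1 s = 2 s; run in parallel, the whole
# suite finishes in roughly the duration of a single test.
print(len(results))  # 20
```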
Teams can opt for cloud-based platforms like BrowserStack that offer automated
parallel testing in real environments. Its real device cloud empowers teams to
choose from 2000+ real devices and browsers for running automated and manual
tests for both websites and mobile apps.
Optimizing test scripts increases the reliability and speed of test execution. It
enhances stability and helps eliminate flaky tests. Incorporating the test optimization
techniques mentioned above will enable teams to speed up test execution and get
the best out of their efforts.

4(b) What is an Orthogonal Array? What is its significance? Explain. 6M
Orthogonal Array Testing (OAT) is a software testing technique that uses
orthogonal arrays to create test cases. It is a statistical testing approach
that is especially useful when the system to be tested has huge data inputs.
Orthogonal array testing helps maximize test coverage by pairing and combining
the inputs and testing the system with a comparatively smaller number of test
cases, saving time.
For example, when a train ticket has to be verified, factors such as – the number
of passengers, ticket number, seat numbers, and train numbers have to be
tested. One by one testing of each factor/input is cumbersome. It is more
efficient when the QA engineer combines more inputs together and does
testing. In such cases, we can use the Orthogonal Array testing method.
This type of pairing or combining of inputs and testing the system to save time
is called Pairwise testing. OATS technique is used for pairwise testing.
In the present scenario, delivering a quality software product to the customer
has become challenging due to the complexity of the code.
In the conventional method, test suites include test cases that have been
derived from all combination of input values and pre-conditions. As a result, n
number of test cases has to be covered.
But in a real scenario, the testers won’t have the leisure to execute all the test
cases to uncover the defects as there are other processes such as
documentation, suggestions, and feedback from the customer that has to be
taken into account while in the testing phase.
Hence, the test managers wanted to optimize the number and quality of the
test cases to ensure maximum Test coverage with minimum effort. This effort is
called Test Case Optimization.
• Systematic and statistical way to test pairwise interactions.
• Interactions and integration points are a major source of defects.
• Execute a well-defined, concise set of test cases that are likely to uncover
most (not all) bugs.
• The orthogonal approach guarantees the pairwise coverage of all variables.
An orthogonal array is denoted LN(V^K), where:

Runs (N) – the number of rows in the array, which translates into the number of
test cases that will be generated.
Factors (K) – the number of columns in the array, which translates into the
maximum number of variables that can be handled.
Levels (V) – the maximum number of values that any single factor can take. If
each factor has 2 to 3 inputs to be tested, that maximum number of inputs
decides the Levels.
How to do Orthogonal Array Testing: Examples
1. Identify the independent variables (factors) for the scenario.
2. Find the smallest array with the number of runs.
3. Map the factors to the array.
4. Choose the values for any "leftover" levels.
5. Transcribe the runs into test cases, adding any particularly suspicious
combinations that aren't generated.
Example 1
A web page has three distinct sections (Top, Middle, Bottom) that can be individually
shown or hidden from a user.

No. of Factors = 3 (Top, Middle, Bottom)
No. of Levels (Visibility) = 2 (Hidden or Shown)
Array Type = L4(2^3)
(4 is the number of runs arrived at after creating the OAT array)

If we go for the conventional testing technique, we need all combinations of levels,
i.e. 2 x 2 x 2 = 8 test cases, whereas the orthogonal array covers all pairs in just 4 runs.

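The claim that 4 runs suffice can be checked mechanically. The sketch below lists the standard L4(2^3) orthogonal array and verifies that every pair of factors takes on all four combinations of levels (pairwise coverage); the Hidden/Shown encoding follows the example above.

```python
from itertools import combinations, product

# The standard L4(2^3) orthogonal array: 4 runs, 3 two-level factors.
# 0 = Hidden, 1 = Shown for each of the Top/Middle/Bottom sections.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def is_pairwise_covered(array, levels=2):
    """Return True if every pair of factors takes on every
    combination of levels somewhere in the array."""
    factors = len(array[0])
    for c1, c2 in combinations(range(factors), 2):
        seen = {(run[c1], run[c2]) for run in array}
        if seen != set(product(range(levels), repeat=2)):
            return False
    return True

print(is_pairwise_covered(L4))  # True
print(len(L4), "runs vs", 2 ** 3, "exhaustive combinations")
```

Dropping any one of the four runs breaks the pairwise guarantee, which is why L4 is the smallest array for this scenario.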
5(a) Write a note on JUnit tool 6M
JUnit is a unit testing open-source framework for the Java programming language. Java
Developers use this framework to write and execute automated tests. In Java, there are test
cases that have to be re-executed every time a new code is added. This is done to make
sure that nothing in the code is broken. JUnit has several graphs that represent the progress
of a test. When the test runs smoothly, the graph displays a green color, and it turns red if
the test fails. JUnit Testing enables developers to develop highly reliable and bug-free code.

JUnit plays a huge role when it comes to regression testing. Regression Testing is a type of
software testing that checks if the recent changes made to the code do not adversely affect
the previously written code.

To have a better answer to the question ‘What is JUnit’, let's have a look at what Unit Testing
is.

What is Unit Testing

Unit testing, as the name suggests, refers to the testing of small segments of code. Here, a
unit indicates the smallest bit of code that can be fetched out of the system. This small bit
can be a line of the code, a method, or a class. The smaller the chunk of code, the better it is,
as smaller chunks will tend to run faster. And this provides a better insight into the code and
its performance.

When the chunk is small, it is easy to identify the defects from the dormant phase itself. The
developers now spend more time reading the code than writing it. A successful code boosts
the confidence of the developer and makes them work better.

What is the need for JUnit Testing?

The top reasons to take up JUnit Testing are:

To find bugs early in the development phase, which increases the code’s reliability

The framework enables the developer to invest more time in reading the code than writing it

This makes the code more readable, reliable, and bug-free

It boosts the confidence of the developer and motivates them immensely

Features of JUnit

There are several features of JUnit that make it so popular. Some of them are as follows:

Open-Source Framework:

JUnit is an open-source framework that enables developers to write code fast and with
better quality.

Provides Annotations:
It provides several annotations to identify test methods.
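JUnit itself is a Java framework, so a runnable JUnit listing would need the JUnit jars on the classpath. As a self-contained sketch of the same xUnit pattern it popularized (a test class, test methods the framework discovers, assertions that turn the run green or red), here is an analogous example using Python's standard-library unittest; the Calculator class is a hypothetical unit under test.

```python
import unittest

class Calculator:
    """Hypothetical unit under test."""
    def add(self, a, b):
        return a + b

class CalculatorTest(unittest.TestCase):
    # In JUnit, the @Test annotation marks test methods; unittest
    # instead discovers methods whose names start with "test".
    def test_add_positive(self):
        self.assertEqual(Calculator().add(2, 3), 5)

    def test_add_negative(self):
        self.assertEqual(Calculator().add(-2, -3), -5)

if __name__ == "__main__":
    # Run the suite: all assertions passing is the "green" outcome.
    runner = unittest.TextTestRunner(verbosity=2)
    runner.run(unittest.defaultTestLoader.loadTestsFromTestCase(CalculatorTest))
```

A failing assertion in any test method marks that test red without stopping the rest of the suite, which is exactly the behavior JUnit's graphs visualize.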

5(b) Discuss on the Effectiveness of testing 6M

Test Effectiveness can be defined as how effectively testing is
done, i.e., whether the goal of meeting the customer requirement is achieved. In
SDLC (Software development Life Cycle), we have requirements
gathering phase where SRS (Software Requirements Specification)
and FRD (Functional Requirements Document) are prepared and
based on that development team starts building the software
application, at the same time test cases are carved out of SRS and
FRD documents by the testing team. Test effectiveness starts right
at the beginning of the development and execution of test cases
and after development is completed to count the number of
defects. Defects can be valid or invalid. Valid defects are required
to be fixed in the application or product and invalid ones need to
be closed or ignored. Thus, mathematically it is calculated as a
percentage of a number of valid defects fixed in software
application divided by the sum of a total number of defects
injected and a total number of defects escaped.

6(a) Explain Infeasibility in test adequacy. 6M
Most of the white box testing approaches we have discussed so far are associated
with application of an adequacy criterion. Testers are often faced with the
decision of which criterion to apply to a given item under test given the nature of
the item and the constraints of the test environment (time, costs, resources) One
source of information the tester can use to select an appropriate criterion is the
test adequacy criterion hierarchy as shown in Figure 5.5 which describes a
subsumes relationship among the criteria. Satisfying an adequacy criterion at the
higher levels of the hierarchy implies a greater thoroughness in testing [1,14-16].
The criteria at the top of the hierarchy are said to subsume those at the lower
levels. For example, achieving all definition-use (def-use) path adequacy means
the tester has also achieved both branch and statement adequacy. Note from the
hierarchy that statement adequacy is the weakest of the test adequacy criteria.
Unfortunately, in many organizations achieving a high level of statement
coverage is not even included as a minimal testing goal.
As a conscientious tester you might at first reason that your testing goal
should be to develop tests that can satisfy the most stringent criterion. However,
you should consider that each adequacy criterion has both strengths and
weaknesses. Each is effective in revealing certain types of defects. Application
of the so-called stronger criteria usually requires more tester time and resources.
This translates into higher testing costs. Testing conditions, and the nature of the
software should guide your choice of a criterion.

Support for evaluating test adequacy criteria comes from a theoretical


treatment developed by Weyuker. She presents a set of axioms that allow testers
to formalize properties which should be satisfied by any good program-based
test data adequacy criterion. Testers can use the axioms to

• recognize both strong and weak adequacy criteria; a tester may decide to use a
weak criterion, but should be aware of its weakness with respect to the
properties described by the axioms;

• focus attention on the properties that an effective test data adequacy criterion
should exhibit;

• select an appropriate criterion for the item under test;


• stimulate thought for the development of new criteria; the axioms are the
framework with which to evaluate these new criteria.
The axioms are based on the following set of assumptions :
(i) programs are written in a structured programming language;
(ii) programs are SESE (single entry/single exit);
(iii) all input statements appear at the beginning of the program;
(iv) all output statements appear at the end of the program.

6(b) What is Defect Leakage ratio? Explain with an example 6M

Defect Leakage is the metric which is used to identify the efficiency of
QA testing, i.e., how many defects are missed/slipped during
QA testing.
Defect Leakage = (No. of Defects found in UAT / No. of Defects found
in QA testing.)
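A minimal sketch of the ratio above, with invented defect counts:

```python
def defect_leakage(defects_in_uat, defects_in_qa):
    """Fraction of defects that slipped past QA testing and were
    only found in UAT (often reported as a percentage)."""
    return defects_in_uat / defects_in_qa

# Illustrative numbers: QA found 80 defects, UAT found 20 more.
print(defect_leakage(20, 80) * 100)  # 25.0 percent leakage
```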

Defects happen. It is a fact of life and as software developers we are in


a constant war that we will never fully win. As software developers,
we may even create defects purposefully as requirements or timelines
require us to make a decision that introduces necessary risk. But how
can we eliminate the unwanted defects that make our software
difficult to use and tarnish our reputation?

Good test logging, regular reporting, customer involvement, and


transparency in your product can go a long way to mitigating defects.
You can have the best logging in the world, but it is somewhat
worthless if you do not address the defects. More time should be
dedicated to handling errors from the production site as the project
importance increases. These reports help everyone on the team to
understand what problems are being faced by customers.

Depending on how the team best responds, these are some ways to
share feedback with the team.

1. To get a quick sense of the overall health of a mission critical


application, develop an understandable, clear report which can
be used in communications with and across the business and
company leadership.
2. Automatically log defects from production. This approach can
get challenging quickly, so have a separate place to log defects,
and then pull in relevant issues.
3. If your team is large enough, or the project critical enough,
create a small SWAT team, or rapid response team that can react
to critical issues quickly and cure problems.
4. And, as a general practice, every developer should be aware of
what is happening with their software and be actively engaged
and responsible for their code, even when in production.

Keeping track of defects found and repaired prior to release is an


indicator of good software development health and maintains a
reasonable defect removal efficiency. Equally important is to keep
records of all defects found after release and bring those back to the
product, development and quality teams so that test cases can be
updated, and process adjusted when necessary. Transparency with
software defects is just as important as identification and resolution,
because your customers want to know that you own the problems and
are working to resolve them.

7(a) What is Regression Testing? Why is this test required? 6M

Regression Testing is defined as a type of software testing to confirm that a
recent program or code change has not adversely affected existing features.
Regression Testing is nothing but a full or partial selection of already executed
test cases which are re-executed to ensure existing functionalities work fine.
This testing is done to make sure that new code changes do not have side
effects on the existing functionalities. It ensures that the old code still works
once the latest code changes are done.

Need of Regression Testing


The need for Regression Testing mainly arises whenever there is a requirement
to change the code and we need to test whether the modified code affects the
other parts of the software application or not. Moreover, regression testing is
needed, when a new feature is added to the software application and for defect
fixing as well as performance issue fixing.

How to do Regression Testing


In order to do Regression Testing process, we need to first debug the code to
identify the bugs. Once the bugs are identified, required changes are made to fix
it, then the regression testing is done by selecting relevant test cases from the
test suite that covers both modified and affected parts of the code.
Software maintenance is an activity which includes enhancements, error
corrections, optimization and deletion of existing features. These modifications
may cause the system to work incorrectly. Therefore, Regression Testing
becomes necessary. Regression Testing can be carried out using the following
techniques:
Retest All
• This is one of the methods for Regression Testing in which all the tests in
the existing test bucket or suite should be re-executed. This is very
expensive as it requires huge time and resources.

Regression Test Selection


Regression Test Selection is a technique in which some selected test cases
from test suite are executed to test whether the modified code affects the
software application or not. Test cases are categorized into two parts, reusable
test cases which can be used in further regression cycles and obsolete test cases
which can not be used in succeeding cycles.
Prioritization of Test Cases
• Prioritize the test cases depending on business impact, critical &
frequently used functionalities. Selection of test cases based on priority
will greatly reduce the regression test suite.

Selecting test cases for regression testing


It was found from industry data that a good number of the defects reported by
customers were due to last minute bug fixes creating side effects and hence
selecting the Test Case for regression testing is an art and not that
easy. Effective Regression Tests can be done by selecting the following test
cases –

• Test cases which have frequent defects


• Functionalities which are more visible to the users
• Test cases which verify core features of the product
• Test cases of Functionalities which has undergone more and recent
changes
• All Integration Test Cases
• All Complex Test Cases
• Boundary value test cases
• A sample of Successful test cases
• A sample of Failure test cases

Regression Testing Tools


If your software undergoes frequent changes, regression testing costs will
escalate. In such cases, Manual execution of test cases increases test execution
time as well as costs. Automation of regression test cases is the smart choice in
such cases. The extent of automation depends on the number of test cases that
remain re-usable for successive regression cycles.

Following are the most important tools used for both functional and regression
testing in software engineering:

1) Avo Assure
Avo Assure is a technology agnostic, no-code test automation solution that
helps you test end-to-end business processes with a few clicks of the buttons.
This makes regression testing more straightforward and faster.

Features

• Autogenerate test cases with a 100% no-code approach


• Test across the web, desktop, mobile, ERP applications, Mainframes,
associated emulators, and more with a single solution.
• Enable accessibility testing
• Execute test cases in a single VM independently or in parallel with Smart
Scheduling
• Integrate with Jira, Jenkins, ALM, QTest, Salesforce, Sauce Labs, TFS, etc.
• Define test plans and design test cases through the Mindmaps feature.

7(b) Discuss on Random selection in Regression Testing 6M

Regression Testing is the process of testing the
modified parts of the code and the parts that might get affected due to the
modifications to ensure that no new errors have been introduced in the
software after the modifications have been made. Regression means return
of something and in the software field, it refers to the return of a bug.

When to do regression testing

• When a new functionality is added to the system and the code has
been modified to absorb and integrate that functionality with the
existing code.
• When some defect has been identified in the software and the code
is debugged to fix it.
• When the code is modified to optimize its working.

Process of Regression testing

Firstly, whenever we make some changes to the source code for any
reasons like adding new functionality, optimization, etc. then our program
when executed fails in the previously designed test suite for obvious
reasons. After the failure, the source code is debugged in order to identify
the bugs in the program. After identification of the bugs in the source code,
appropriate modifications are made. Then appropriate test cases are
selected from the already existing test suite which covers all the modified
and affected parts of the source code. We can add new test cases if
required. In the end, regression testing is performed using the selected test cases.
Techniques for the selection of Test cases for Regression Testing:
• Select all test cases: In this technique, all the test cases are
selected from the already existing test suite. It is the most simple
and safest technique but not much efficient.
• Select test cases randomly: In this technique, test cases are
selected randomly from the existing test-suite but it is only useful if
all the test cases are equally good in their fault detection capability
which is very rare. Hence, it is not used in most of the cases.
• Select modification traversing test cases: In this technique, only
those test cases are selected which covers and tests the modified
portions of the source code the parts which are affected by these
modifications.
• Select higher priority test cases: In this technique, priority codes
are assigned to each test case of the test suite based upon their
bug detection capability, customer requirements, etc. After
assigning the priority codes, test cases with highest priorities are
selected for the process of regression testing.
Test case with highest priority has highest rank. For example, test
case with priority code 2 is less important than test case with
priority code 1.
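The priority-code selection described above can be sketched as a simple filter-and-sort; the test-case records and the cutoff value below are hypothetical:

```python
# Each test case carries a priority code: 1 is the most important.
test_suite = [
    {"name": "login_flow",     "priority": 1},
    {"name": "report_export",  "priority": 3},
    {"name": "checkout",       "priority": 1},
    {"name": "profile_update", "priority": 2},
]

def select_by_priority(suite, cutoff):
    """Keep only test cases whose priority code is at least as
    important as the cutoff (numerically <= cutoff), most
    important first."""
    selected = [t for t in suite if t["priority"] <= cutoff]
    return sorted(selected, key=lambda t: t["priority"])

for case in select_by_priority(test_suite, cutoff=2):
    print(case["name"])  # login_flow, checkout, profile_update
```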
Tools for regression testing: In regression testing, we generally select the
test cases from the existing test suite itself and hence, we need not
compute their expected output; it can be easily automated for this
reason. Automating the process of regression testing will be very much
effective and time saving.
Most commonly used tools for regression testing are:
• Selenium
• WATIR (Web Application Testing In Ruby)
• QTP (Quick Test Professional)
• RFT (Rational Functional Tester)
• Winrunner
• Silktest
Advantages of Regression Testing:
• It ensures that no new bugs has been introduced after adding new
functionalities to the system.
• As most of the test cases used in Regression Testing are selected
from the existing test suite and we already know their expected
outputs. Hence, it can be easily automated by the automated tools.
• It helps to maintain the quality of the source code.
Disadvantages of Regression Testing:
• It can be time and resource consuming if automated tools are not
used.
• It is required even after very small changes in the code.
8(a) Illustrate with an example Regression Test Process 6M

Regression testing is a crucial part of software maintenance. Its main


purpose is to find bugs in the overall system that have been overlooked
after the introduction of a new feature. Here’s an example of regression
testing in software:

Example: App A is a database management tool. There are three basic


functions – Add, Save, Delete – that allow users to enter data or delete a
row. In a new build, an ‘Update’ feature has been introduced as well to
allow users to edit the changes and save the input. During regression
testing, a QA specialist will have to determine if the introduction of a new
feature didn’t impact the way ‘Add’, ‘Save’, and ‘Delete’ buttons work.

There are several approaches to regression testing. Here’s a brief rundown


of the most widely used techniques.

• Retest everything. This approach implies that all the tests of the
system should be re-executed. While it’s the safest way to ensure
the project is bug-free, it takes a lot of time and commitment to run a
full suite of tests. That’s why the ‘retest everything’ practice is rarely
used among testers and, in the case where a team decides to go
with it, the sessions will most likely be automated.
• Regression test selection. By selecting a subset of existing test
cases, a QA specialist can cut the operating costs tremendously
compared to retesting the entire system. There are several practices
testers use to select a case of regression test sessions. To start with,
you can only test a suite that yields coverage to the modified section
of the original program. Another popular approach is a Safe
Technique where a tester works with the number of cases that
expose one or multiple faults in the modified program. Other
approaches to test selection include Data Flow Coverage
Techniques and Random Techniques.
• Prioritization of test cases. This approach allows a QA specialist to
focus on testing the most frequently used functionalities and cases
that have a crucial business impact while temporarily putting all the
secondary features aside. By prioritizing test cases, you will cut the
size of the testing suite tremendously and have more time to
thoroughly assess the performance of the crucial parts of the system.
Unfortunately, it’s hard to imagine a product that would never need to
undergo changes. In order to stay relevant and attract more users,
developers have to upgrade their projects with new features, change the
back-end to make the tool’s performance more effective, and adapt to
managing a bigger amount of incoming traffic.

Maintaining a software product without regression testing will result in


massive tech debt and the fall of user satisfaction.

The regression testing meaning for developers consists of the following:

• Ensure a bug-free performance after changing the back-end


code. Introducing a new feature can impact the entire system – and
not necessarily in a good way. Testers run regression testing
sessions in order to ensure the performance of the system didn’t take
a hit after as much as a small code modification.
• Test the performance of the application after adding a new
feature. Regression testing allows developers to know that new
functionalities align well with the old ones, that the infrastructure of
the product is capable of executing more complex actions without
losing in load time speed or crashing, and so on.
• Detect and fix performance issues. Even if you did no major code
changes, it’s still wise to run a regression testing session in case a
performance defect has been recorded. Coming back to retest the
original test suite allows developers to save time as they don’t have
to write new cases.
• Define the parts of an application that are at the highest risk of
failure. Regression testing will help the development team
understand which parts of the system are the most vulnerable to
changes and have the highest odds of crashing. Testers will know to
pay more attention to the maintenance of these features in the future.

Types of Regression Testing

There isn’t a single defined approach to regression testing. Apart from the
techniques discussed above (those that have to do with the size of the test
suite), there are a few types of regression testing. Let’s take a look at go-to
approaches testers normally use:

• Corrective regression testing. This approach is used when the


program hasn’t been changed significantly. For such sessions,
developers rarely write new test cases – instead, they prefer reusing
the old ones.
• Progressive regression testing. This approach is used to test the
impact of a new component on the system. In order to use
progressive regression testing, team members should be well aware
of the exact number and the nature of code changes. For this type of
testing, new cases are written.
• Selective regression testing. In this case, a tester chooses a range
of test cases in order to speed up the progress. While this approach
is a good way to put less money and effort into retesting, it’s quite
challenging for developers to set the conditions between the
experiment and the range of covered program elements.
• Complete regression testing. This approach is normally used when
the development team struggles to define the number of changes
made or the impact of these modifications. Complete regression
testing gives a QA professional a complete snapshot of a system as
a whole. Normally, this is deployed in the final stages of development
before the release of a build.

Creating a strategy during the early stages of development and aligning


with it until the product release is a good way to do regression testing. The
good news is, building a testing framework is relatively straightforward.
Here are the steps QA specialists normally take in order to get started.

Step 1. Gather tests for execution


The first step in designing a regression test strategy is collecting all cases
a QA specialist intends to re-execute. Here are a few tips on smart test
selection:

• Include cases in error-prone areas of the program as they are likely


to be most vulnerable to system changes as well.
• Add cases that verify the main functions of the product. This includes
the homepage, the login page, the checkout gateway, and so on.
• Include complex cases such as GUI event sequences.

Step 2. Estimate the time for test cases execution


Be sure to estimate the time needed to test every chosen feature. Keep in
mind that, apart from a session, your testers might need to take some time
in order to get to know the range of tools used to execute and report
particular tests and add it to the schedule. Here are a few other factors that
can influence the amount of estimated time for testing:

• Creation of test data;


• Regression test planning (especially for a beginning QA specialist);
• Reviewing test cases.

Step 3. Outline which tests can be automated


Automated tests are faster and more reliable than manual ones. In the long
run, you’ll be able to reuse such scripts for your next project – this
improves the efficiency of software maintenance and creates a set of
standards within the team.

When it comes to regression testing, developers tend to automate most


cases. However, if you’re looking at a complex sequence of events – it’s
better to execute a manual check. The same stays true for all GUI-related
cases – here, manual testing is often the only option.

Dividing manual and automated tests into two separate groups is the best
way to avoid miscommunication within the team and keep reports in
order.

Step 4. Prioritize test cases


It’s always helpful for a tester to determine which cases are the most
relevant for the program and focus on executing them as a first priority. In
order to manage sessions productively, it is crucial to prioritize. Here's a
simple framework you can follow while grading the value of test cases.

• Priority 0. All the sanity test cases fall into the category. The tests of
the basic functionality of the product and pre-system acceptance are
the first a QA specialist should concentrate on as they provide the
most value both for users and engineers.
• Priority 1. If your program has features that are crucial but not core
(in other words, a tool would still work without them but the
performance wouldn’t be satisfactory), the cases to test them fall
under Priority 1 and are to be handled as soon as all the scenarios
labeled as Priority 0 are checked.
• Priority 2. Includes test cases that are not providing high project
value but are crucial to avoid tech debt and complications for
developers. On a user’s side, the impact of these features is not
noticeable.
Step 5. Use tools to speed up the testing process
There’s a wide range of tools for regression testing that help QA specialists
handle planning, preparation, and reporting. Using these off-the-shelf
solutions allows the team to speed up the process and use the best
practices of regression testing.

Here are some tools developers can consider using to improve the
efficiency of testing:

• Selenium – a portable framework for web application testing;


• QTP – provides regression testing for environments and software;
• Watir – a Ruby library that enables automated testing of web
applications;
• Rational Functional Tester – an automated testing tool that skillfully
mimics the actions of a human tester.

Examples of Regression Tests

Regression tests have a broad range of applications. Let’s take a look at


the most popular regression testing example list.

• Bug regression – a tester checks if a specific bug that has allegedly


been fixed is in fact eliminated;
• General functional regression – a range of broad tests across all
areas of the app to ensure if recent changes have resulted in code
destabilization;
• Conversion and port testing – a suite of test cases is executed to
ensure that the application has been successfully ported to a new
platform;
• Localization testing – in case a program has been modified and
rewritten in a new programming language, a tester assesses the
performance of the interface and ensures that the application follows
its new set of cultural rules. In order to execute such a test, you may
have to modify old cases taking the change of a programming
language into account or even write new ones.
• Build verification testing – a series of small tests aimed at verifying
if a build is worth fixing or the damage is irreparable. A failed test
would result in a build rejection.
8(b) What is Test Prioritization? Explain with an example 6M
As the name suggests, test case prioritization refers to prioritizing test cases in the
test suite on the basis of different factors. Factors could be code coverage,
risk/critical modules, functionality, features, etc.
Why should test cases be prioritized?
As the size of software increases, test suite also grows bigger and also requires
more efforts to maintain test suite. In order to detect bugs in software as early as
possible, it is important to prioritize test cases so that important test cases can
be executed first.
Types of Test Case Prioritization :
• General Prioritization :
In this type of prioritization, test cases that will be useful for the
subsequent modified versions of product are prioritized. It does not
require any information regarding modifications made in the product.
• Version – Specific Prioritization :
Test cases can also be prioritized such that they are useful on specific
version of product. This type of prioritization requires knowledge
about changes that have been introduced in product.
Prioritization Techniques :
1. Coverage – based Test Case Prioritization :
This type of prioritization is based on code coverage i.e. test cases are prioritized
on basis of their code coverage.
• Total Statement Coverage Prioritization –
In this technique, total number of statements covered by test case is
used as factor to prioritize test cases. For example, test case covering
10 statements will be given higher priority than test case covering 5
statements.
• Additional Statement Coverage Prioritization –
This technique involves iteratively selecting test case with maximum
statement coverage, then selecting test case which covers statements
that were left uncovered by previous test case. This process is repeated
till all statements have been covered.
• Total Branch Coverage Prioritization –
Using total branch coverage as factor for ordering test cases,
prioritization can be achieved. Here, branch coverage refers to
coverage of each possible outcome of condition.
• Additional Branch Coverage Prioritization –
Similar to the additional statement coverage technique, it first selects the
test case with maximum branch coverage and then iteratively selects the
test case which covers branch outcomes that were left uncovered by
previous test case.
• Total Fault-Exposing-Potential Prioritization –
Fault-exposing-potential (FEP) refers to ability of test case to expose
fault. Statement and Branch Coverage Techniques do not take into
account fact that some bugs can be more easily detected than others
and also that some test cases have more potential to detect bugs than
others. FEP depends on :
1. Whether test cases cover faulty statements or not.
2. Probability that faulty statement will cause test case to fail.
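The additional statement coverage technique described above is essentially a greedy loop: repeatedly pick the test case that covers the most statements not yet covered. A minimal sketch, with invented per-test coverage data:

```python
def additional_coverage_order(coverage):
    """Greedily order test cases: repeatedly pick the test that
    covers the most statements not yet covered by earlier picks."""
    remaining = dict(coverage)        # test name -> set of statements covered
    covered, order = set(), []
    while remaining:
        # Pick the test adding the most new statements.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            # Nothing new left to cover; append the rest in name order.
            order.extend(sorted(remaining))
            break
        covered |= remaining.pop(best)
        order.append(best)
    return order

# Hypothetical statement coverage per test case.
cov = {
    "t1": {1, 2, 3, 4, 5},
    "t2": {4, 5, 6, 7},
    "t3": {6, 7},
}
print(additional_coverage_order(cov))  # ['t1', 't2', 't3']
```

Note how t1 is picked first for its five statements, then t2 for the two statements (6, 7) that t1 left uncovered, exactly as the technique prescribes.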

2. Risk – based Prioritization :


This technique uses risk analysis to identify potential problem areas which if
failed, could lead to bad consequences. Therefore, test cases are prioritized
keeping in mind potential problem areas. In risk analysis, following steps are
performed :
• List potential problems.
• Assigning probability of occurrence for each problem.
• Calculating severity of impact for each problem.
After performing above steps, risk analysis table is formed to present results.
The table consists of columns like Problem ID, Potential problem identified,
Severity of Impact, Risk exposure, etc.
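One common way to turn the risk analysis table above into a test order is to compute a risk exposure score (probability of occurrence times severity of impact) and sort descending; the rows and scales below are invented for illustration:

```python
# Hypothetical risk analysis rows:
# (potential problem, probability of occurrence 0..1, severity of impact 1..10)
risks = [
    ("payment gateway timeout", 0.30, 9),
    ("report layout glitch",    0.70, 2),
    ("data loss on save",       0.10, 10),
]

def by_risk_exposure(rows):
    """Risk exposure = probability x severity; test the riskiest areas first."""
    return sorted(rows, key=lambda r: r[1] * r[2], reverse=True)

for problem, prob, severity in by_risk_exposure(risks):
    print(f"{problem}: exposure {prob * severity:.1f}")
```

Note that a moderately likely but severe problem (the gateway timeout, exposure 2.7) outranks a frequent cosmetic one, which is the point of weighting probability by impact.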
3. Prioritization using Relevant Slice :
In this type of prioritization, slicing technique is used – when program is
modified, all existing regression test cases are executed in order to make sure
that program yields same result as before, except where it has been modified.
For this purpose, we try to find part of program which has been affected by
modification, and then prioritization of test cases is performed for this affected
part. There are 3 parts to slicing technique :
• Execution slice –
The statements executed under test case form execution slice.
• Dynamic slice –
Statements executed under test case that might impact program
output.
• Relevant Slice –
Statements that are executed under test case and don’t have any
impact on the program output but may impact output of test case.

4. Requirements-based Prioritization :

Some requirements are more important than others or are more critical in
nature, hence test cases for such requirements should be prioritized first. The
following factors can be considered while prioritizing test cases based on
requirements:
• Customer-assigned priority –
The customer assigns a weight to each requirement according to his need
or his understanding of the product's requirements.
• Developer-perceived implementation complexity –
Priority is assigned by the developer on the basis of the effort or time that
would be required to implement the requirement.
• Requirement volatility –
This factor captures the frequency of change of a requirement.
• Fault proneness of requirements –
Priority is assigned based on how error-prone the requirement has been in
previous versions of the software.
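
One simple way to combine these factors is a weighted sum per requirement. The weights and factor scores in the sketch below are illustrative assumptions, not a standard scheme:

```python
# Illustrative weights for the four prioritization factors (sum to 1.0).
weights = {"customer": 0.4, "complexity": 0.2, "volatility": 0.2, "fault_proneness": 0.2}

# Hypothetical factor scores (1..10) for two requirements.
requirements = {
    "R1 login":   {"customer": 9, "complexity": 4, "volatility": 2, "fault_proneness": 8},
    "R2 reports": {"customer": 5, "complexity": 7, "volatility": 6, "fault_proneness": 3},
}

# Priority score = weighted sum of the factor scores.
scores = {name: sum(weights[f] * v for f, v in factors.items())
          for name, factors in requirements.items()}

# Test cases for the highest-scoring requirements are written and run first.
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(name, round(score, 1))
```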

Metric for measuring Effectiveness of a Prioritized Test Suite :

For measuring how effective a prioritized test suite is, we can use a metric
called APFD (Average Percentage of Faults Detected). The formula for APFD is
given by:
APFD = 1 - ( (TF1 + TF2 + ....... + TFm) / (n*m) ) + 1/(2n)

where,
TFi = position of the first test case in test suite T that exposes fault i
m = total number of faults exposed under T
n = total number of test cases in T
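
The formula translates directly into code. In this sketch, a suite of n = 5 test cases exposes m = 3 faults, first detected by the test cases in positions 1, 2, and 4 of the prioritized ordering:

```python
def apfd(fault_positions, n):
    """APFD = 1 - (TF1 + ... + TFm) / (n * m) + 1 / (2 * n)."""
    m = len(fault_positions)
    return 1 - sum(fault_positions) / (n * m) + 1 / (2 * n)

# Faults are first exposed by the 1st, 2nd, and 4th test cases in the ordering.
print(round(apfd([1, 2, 4], n=5), 3))  # 0.633
```

A higher APFD (closer to 1) means the ordering exposes faults earlier in the run.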

9(a) Discuss on Security Testing Tools 6M

Security testing is a type of software testing that identifies system flaws and
ensures that the data and resources of the system are protected from intruders.
It assures that the software system and application are free of dangers or risks
that could result in data loss. Security testing of any system aims at identifying
all conceivable flaws and weaknesses that could lead to a loss of data or of the
organization's reputation.
The following are some of the Security testing tools:
1. Zed Attack Proxy (ZAP)
2. SonarQube
3. Wapiti
4. Netsparker
5. Arachni
6. Iron Wasp
7. Grabber
8. SQLMap
9. Wfuzz
10. W3af
1. Zed Attack Proxy (ZAP)
ZAP, or Zed Attack Proxy, is a multi-platform, open-source online application
security testing tool developed by OWASP (Open Web Application Security
Project). During the development and testing phases of a web app, ZAP is used to
uncover a variety of security flaws. Zed Attack Proxy can be utilized by both
newcomers and experts thanks to its user-friendly interface. Advanced users can
utilize the security testing tool with command-line access. It has been designated
as a flagship project, in addition to being one of the most well-known OWASP
projects. ZAP is a Java application. Apart from being a scanner, ZAP may also be
used to intercept a proxy and test a webpage manually. ZAP reveals:
• Application error disclosure
• Cookie not HttpOnly flag
• SQL injection
• XSS injection
• Missing anti-CSRF tokens and security headers
• Private IP disclosure
• Session ID in URL rewrite
Key Features:
• For advanced users, it will support command-line access.
• It has the capability of being used as a scanner.
• It will perform web application scanning automatically.
• It works with a variety of operating systems, including Windows, OS X,
and Linux.
• It provides both a traditional spider and a powerful AJAX spider.
2. SonarQube
Sonar Source created this open-source security tool. It is used to verify the
quality of code and run automated reviews on web applications written in
various programming languages such as Java, C#, JavaScript, PHP, Ruby,
Cobol, C/C++, and so on by discovering bugs, code analysis, and security
exposures. The Java programming language is used to create the SonarQube
utility. It will produce reports on code coverage, code complexity, code
repetition, security flaws, and bugs. It provides comprehensive analysis using a
variety of tools such as Ant, Maven, Gradle, Jenkins, and others.
Key Features:
• It will use SonarLint plug-ins to interface with a variety of development
environments, including Visual Studio, Eclipse, and IntelliJ IDEA.
• External technologies such as GitHub, LDAP, and Active Directory are
also supported.
• It can keep track of metric history and provide graphs of evolution.
• It will assist us in identifying the more complicated issues.
• It will ensure the security of the application.
3. Wapiti
Wapiti is a free, open-source project hosted on SourceForge and one of the
leading web application security testing tools. Wapiti uses black-box testing
to look for security vulnerabilities in web applications. Because Wapiti is a
command-line tool, familiarity with the various commands is required. Wapiti is
simple to use for experienced users, but it can be challenging for newcomers. But
don't worry; all Wapiti instructions may be found in the official documentation.
Wapiti injects payloads into scripts to see if they are vulnerable. Both GET and
POST HTTP attack methods are supported by the open-source security testing
tool. Wapiti exposes the following vulnerabilities:
• Command Execution detection
• CRLF injection
• Database injection
• File disclosure
• Shellshock or Bash bug
• SSRF (Server Side Request Forgery)
• Weak .htaccess configurations that can be bypassed
• XSS injection
• XXE injection
Key Features:
• Allows for several types of authentication, such as Kerberos and NTLM.
• It includes a buster module that allows you to brute force directory and
file names on the webserver you’re targeting.
• It works in the same way that a fuzzer would.
• Attacks can be carried out using both the GET and POST HTTP
protocols.
4. Netsparker
It detects a web application's vulnerabilities in a unique way and verifies
whether the weaknesses it reports are real or false positives. It's a
Windows program that's simple to use. We can undertake automatic
vulnerability assessments and address vulnerabilities with the help of this
solution, avoiding resource-intensive manual methods. Netsparker is an
automated online application security scanner that allows you to scan websites,
web applications, and web services for security issues while remaining fully
customizable. Netsparker is capable of scanning any web application, regardless
of the platform or programming language used to build it.
Key Features:
• It will scan all forms of legacy as well as new online applications such
as Web 2.0, HTML5, and SPA (single page apps).
• It will provide a variety of out-of-the-box reports for both developers
and management for various objectives.
• With the help of our templates, we can create unique reports.
• To safeguard our application, we can use this tool in conjunction with
CI/CD platforms like Bamboo, Jenkins, or TeamCity.
5. Arachni
Arachni is a web application security scanner that is suitable for both
penetration testers and administrators. This open-source security testing
program may detect a variety of flaws, including the following:
• Invalidated redirect
• Local and remote file inclusion
• SQL injection
• XSS injection
Key Features:
• Immediately deployable
• Ruby framework that is modular and high-performing
• Support for several platforms

6. Iron Wasp

Iron Wasp is a strong open-source scanning tool that can detect over 25 different
types of web application flaws. It can also distinguish between false positives and
false negatives. Iron Wasp aids in the discovery of a wide range of flaws,
including:
• Broken authentication
• Cross-site scripting
• CSRF
• Hidden parameters
• Privilege escalation
Key Features:
• C#, Python, Ruby, or VB.NET are used to extend the system via plugins
or modules.
• HTML and RTF formats are used to create reports.
7. Grabber
The Grabber is a simple web application scanner that can be used to scan
small sites such as forums and personal websites. The Python-based
lightweight security testing tool has no graphical user interface. Grabber
detects the following vulnerabilities:
• Backup files verification
• Cross-site scripting
• File inclusion
• Hidden parameters
• Privilege escalation
• Simple AJAX verification
• SQL injection
Key Features:
• Produces a statistics analysis file.
• Simple and easy to transport
• Supports the examination of JS code.

8. SQLMap

SQLmap is an open-source tool for detecting and exploiting SQL injection
problems in penetration testing. It automates the detection and exploitation
of SQL injection. SQL injection attacks have the ability to gain control of
SQL databases. They can harm any website or online program that uses a SQL
database, including MySQL, SQL Server, Oracle, and a variety of others. Customer
information, personal data, trade secrets, financial data, and other sensitive data
are frequently stored in these systems. It’s critical to be able to detect SQL flaws
and defend against them. SQLmap can assist in the discovery of these flaws.
SQLMap is a free tool that automates the process of finding and exploiting SQL
injection vulnerabilities in a website’s database. The security testing tool has a
robust testing engine that can support six different SQL injection techniques:
• Boolean-based blind
• Error-based
• Out-of-band
• Stacked queries
• Time-based blind
• UNION query
Key Features
• This tool automates the process of locating SQL injection flaws.
• It can also be used to test a website’s security.
• A powerful detecting engine
• MySQL, Oracle, and PostgreSQL are among the databases supported.
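
To show the class of flaw SQLMap automates the discovery of, here is a minimal sketch of a boolean-based injection using Python's built-in sqlite3 and a made-up users table. The vulnerable query concatenates user input into the SQL text, while the safe version binds it as a parameter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(name, password):
    # Vulnerable: user input is concatenated straight into the SQL text.
    query = ("SELECT COUNT(*) FROM users WHERE name = '%s' AND password = '%s'"
             % (name, password))
    return conn.execute(query).fetchone()[0] > 0

def login_safe(name, password):
    # Parameterized query: input is bound as data, never parsed as SQL.
    query = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone()[0] > 0

payload = "' OR '1'='1"                 # classic boolean-based injection payload
print(login_unsafe("alice", payload))   # True: the password check is bypassed
print(login_safe("alice", payload))     # False: the payload is just a wrong password
```

The payload turns the WHERE clause into `... AND password = '' OR '1'='1'`, which is true for every row; binding the value as a parameter makes the same string an ordinary (wrong) password.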
9. Wfuzz
Wfuzz is a tool for brute-forcing web applications. It can be used to find
non-linked directories, servlets, scripts, and other resources, to brute-force
GET and POST parameters to check various types of injections (SQL, XSS,
LDAP, and so on), to brute-force form parameters (user/password), and for
fuzzing. Wfuzz, created in Python, is a popular tool for brute-forcing web
applications. The open-source security testing tool has no GUI and is usable
only via the command line. Vulnerabilities exposed by Wfuzz are:
• LDAP injection
• SQL injection
• XSS injection
Key Features:
• Numerous injection sites with multiple dictionaries, HTML output,
recursion (when performing directory brute force attacks), colored
outputs with formatting, and so on are some of the capabilities of this
application.
• Other features include brute-forcing posts, headers, authentication
data, fuzzing cookies, time delays between requests, and support for
SOCK/authentication/proxy.
• Wfuzz also allows you to combine payloads with iterators, perform
HEAD scans, use brute force HTTP methods (POST), use several proxy
servers (each request goes through a separate proxy), and hide results
using return codes, word numbers, line numbers, and responses or
regex.
10. W3af

w3af (web application attack and audit framework) is an open-source web
application security scanner. The project offers a web application vulnerability
scanner and exploitation tool. It gives information about security flaws that can
be used in penetration testing projects. The scanner offers both a graphical user
interface and a command-line interface.
The framework has been dubbed “Metasploit for the web,” but it’s much more
than that, as it also uses black-box scanning techniques to find web application
vulnerabilities! The w3af core and plugins are developed entirely in Python.
More than 130 plugins are included in the project, which detects and exploits
SQL injection, cross-site scripting (XSS), remote file inclusion, and other
vulnerabilities.
Key Features:
• Support for authentication
• It’s simple to get started with and has a user-friendly interface.
• The output can be saved to a terminal, a file, or sent through email.

9(b) Explain the significance of Load Testing 6M

Load Testing is a type of Performance Testing that determines the
performance of a system, software product, or software application under
real-life load conditions. Basically, load testing determines the behavior
of the application when multiple users use it at the same time. It measures
the response of the system under varying load conditions. Load testing is
carried out for both normal and extreme load conditions.

Objectives of Load Testing: The objectives of load testing are:

• To maximize the operating capacity of a software application.
• To determine whether the current infrastructure is capable of running the
software application.
• To determine the sustainability of the application with respect to extreme
user load.
• To find out the total count of users that can access the application at the
same time.
• To determine the scalability of the application.
• To allow more users to access the application.
Load Testing Process:

• Test Environment Setup: First, create a dedicated test environment for
performing the load testing. This ensures that testing is done in a proper
way.
• Load Test Scenario: In the second step, load test scenarios are created.
Load-testing transactions are then determined for the application and data
is prepared for each transaction.
• Test Scenario Execution: The load test scenarios created in the previous
step are now executed. Different measurements and metrics are gathered
to collect information.
• Test Result Analysis: The results of the testing performed are analyzed
and various recommendations are made.
• Re-test: If a test fails, it is performed again in order to obtain a correct
result.
Metrics of Load Testing :

Metrics are used to assess the performance of load testing under different
circumstances. They tell how accurately the load testing is working under
different test cases. Metrics are usually gathered after the preparation of
load test scripts/cases. There are many metrics to evaluate load testing;
some of them are listed below.

1. Average Response Time : It tells the average time taken to respond to the
requests generated by the clients, customers, or users. It also indicates the
speed of the application, based on the time taken to respond to all the
requests generated.

2. Error Rate : The error rate, expressed as a percentage, denotes the number
of errors that occurred during the requests relative to the total number of
requests. These errors are usually raised when the application can no longer
handle the request in the given time, or because of other technical problems.
The application becomes less efficient as the error rate keeps increasing.

3. Throughput : This metric measures the bandwidth consumed during the
load scripts or tests, i.e., the amount of data flowing between the user's
client and the application's main server for the requests being checked. It is
measured in kilobytes per second.

4. Requests Per Second : It tells how many requests are being sent to the
application server per second. The requests can be for images, documents,
web pages, articles, or any other resources.

5. Concurrent Users : This metric counts the users who are actively present
at a particular time, including those visiting the application without raising
any request. From this, we can easily know at which times a high number of
users are visiting the application or website.

6. Peak Response Time : Peak response time measures the longest time taken
to handle a request. It helps in finding the duration of the peak (longest)
request-response cycle and in finding which resource is taking longer to
respond to a request.
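
These metrics can be computed directly from raw measurements. A sketch over made-up sample data (five requests recorded during a ten-second run; each entry is response time in seconds, a success flag, and bytes transferred):

```python
# (response_time_seconds, succeeded, bytes_transferred) per request.
requests = [
    (0.20, True, 2048), (0.35, True, 4096), (1.10, False, 512),
    (0.25, True, 2048), (0.40, True, 8192),
]
duration = 10.0  # seconds of load-test run

times = [t for t, _, _ in requests]
avg_response = sum(times) / len(times)                             # seconds
peak_response = max(times)                                         # seconds
error_rate = 100 * sum(1 for _, ok, _ in requests if not ok) / len(requests)  # %
throughput_kbps = sum(b for _, _, b in requests) / 1024 / duration  # KB/s
req_per_sec = len(requests) / duration

print(round(avg_response, 2), peak_response, error_rate,
      round(throughput_kbps, 2), req_per_sec)
```

With this sample data the average response time is 0.46 s, the error rate 20%, and the throughput 1.65 KB/s.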

Load Testing Tools:

1. Apache Jmeter
2. WebLoad
3. NeoLoad
4. LoadNinja
5. HP Performance Tester
6. LoadUI Pro
7. LoadView
Advantages of Load Testing:

• Load testing enhances the sustainability of the system or software
application.
• It improves the scalability of the system or software application.
• It helps in minimizing the risks related to system downtime.
• It reduces the cost of failure of the system.
• It increases customer satisfaction.

Disadvantages of Load Testing:

• Performing load testing requires programming knowledge.
• Load testing tools can be costly.

10(a) What is GUI testing? Explain in detail. 6M

What is GUI
There are two types of interfaces for a computer application. A Command Line
Interface is where you type text and the computer responds to that command.
GUI stands for Graphical User Interface, where you interact with the computer
using images rather than text.

GUI elements such as menus, buttons, icons, and input fields are used for
interaction between the user and the application. GUI Testing is a validation
of these elements.

GUI Testing
GUI Testing is a software testing type that checks the Graphical User Interface
of the Software. The purpose of Graphical User Interface (GUI) Testing is to
ensure the functionalities of software application work as per specifications by
checking screens and controls like menus, buttons, icons, etc.
GUI is what the user sees. For example, if you visit guru99.com, the homepage
you see is the GUI (graphical user interface) of the site. A user does not see
the source code; only the interface is visible to the user. The focus is
especially on the design structure and on whether the images are working
properly.

Need of GUI Testing


Now that the basic concept of GUI testing is clear, a few questions will strike
your mind:

• Why do GUI testing?
• Is it really needed?
• Is testing the functionality and logic of the application not more than
enough? Then why waste time on UI testing?

To get the answer, think as a user, not as a tester. A user doesn't have any
knowledge about the XYZ software/application. It is the UI of the application
which decides whether a user is going to use the application further or not.

A normal user first observes the design and look of the application/software
and how easy it is for him to understand the UI. If a user is not comfortable
with the interface or finds the application complex to understand, he is never
going to use that application again. That's why GUI is a matter of concern, and
proper testing should be carried out in order to make sure that the GUI is free
of bugs.

What do you Check in GUI Testing

The following checklist will ensure detailed GUI Testing in Software Testing.

• Check all the GUI elements for size, position, width, length, and acceptance of characters or
numbers. For instance, you must be able to provide inputs to the input fields.
• Check you can execute the intended functionality of the application using the GUI
• Check Error Messages are displayed correctly
• Check for Clear demarcation of different sections on screen
• Check Font used in an application is readable
• Check the alignment of the text is proper
• Check the Color of the font and warning messages is aesthetically pleasing
• Check that the images have good clarity
• Check that the images are properly aligned
• Check the positioning of GUI elements for different screen resolution.
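
Several of these checks can be automated. The sketch below uses hypothetical element data and a fixed 1920x1080 screen resolution to verify two checklist items: that elements are positioned within the screen and that an input field enforces its character limit:

```python
from dataclasses import dataclass

@dataclass
class GuiElement:
    name: str
    x: int
    y: int
    width: int
    height: int
    max_chars: int = 0   # 0 means the element accepts no text input

SCREEN_W, SCREEN_H = 1920, 1080   # assumed target resolution

def fits_on_screen(e: GuiElement) -> bool:
    """Checklist item: element position and size stay within the screen."""
    return 0 <= e.x and 0 <= e.y and e.x + e.width <= SCREEN_W and e.y + e.height <= SCREEN_H

def accepts(e: GuiElement, text: str) -> bool:
    """Checklist item: input field accepts text up to its character limit."""
    return e.max_chars > 0 and len(text) <= e.max_chars

username = GuiElement("username", x=100, y=50, width=200, height=30, max_chars=20)
banner   = GuiElement("banner",   x=1800, y=0, width=400, height=60)

print(fits_on_screen(username))    # True
print(fits_on_screen(banner))      # False: spills past the right screen edge
print(accepts(username, "alice"))  # True
print(accepts(username, "x" * 25)) # False: exceeds the 20-character limit
```

In practice such checks run against live element geometry reported by a GUI automation tool rather than hard-coded data.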

GUI Testing Techniques


GUI Testing Techniques can be categorized into three parts: manual based
testing, record and replay (automation-based) testing, and model based
testing.

Manual Based Testing

Under this approach, graphical screens are checked manually by testers in


conformance with the requirements stated in the business requirements
document.

10(b) Discuss on Different security testing tools 6M
Software security testing tools are one of the best ways to prevent and
analyze network and application layer attacks. They are commonly used to
identify vulnerabilities in both applications and networks. Network security
testing tools aim to avoid unauthorized access and network-level attacks.
Whereas, application security tools are designed to test an application
against layer 7 attacks.


List of the top 5 software security testing tools

• Astra Pentest Platform


• NMap
• WireShark
• OpenVAS
• Metasploit

Introduction
There are certain things that make a software security testing tool better
than others. This post is about helping you understand those things so that
you can make an educated choice. Of course, we will talk about the top 5
security testing tools in some detail, starting with the following table.

Security Testing Tools and their Key Features:

Astra Pentest Platform: Continuous pentesting, CI/CD integration, scan behind
login, cloud pentest
NMap: Network exploration, port scanning, network mapping
WireShark: Packet analyzer, network troubleshooting, protocol analysis
OpenVAS: Vulnerability scanning
Metasploit: Helps you write, test, and execute exploit code

Top 5 software security testing tools


Cybercriminals are constantly working on new ways of breaching network
security and stealing valuable information, which is why software security
testing tools are becoming common. Also, you need to be thorough in
your network security testing and find vulnerabilities in networks before
hackers do. There are a lot of tools out there for network security testing, but
some of the best are listed below.

1. Astra Security

Astra's Network Security Solution is a unique product of Astra Security: a
comprehensive security assessment of your network that can help you find
and fix security risks. It helps you identify the security gaps in your network
and plug the holes.
The Astra Network Security Solution is a comprehensive solution for
performing a complete network security assessment. It scans and checks
your network to identify the network devices, network ports, and network
protocols, find the vulnerabilities in your network, and help you fix them in
a timely manner.

2. NMAP

Network Mapper, or Nmap, is an open-source utility for network exploration,


security auditing, and network discovery. It was designed to rapidly scan
large networks, although it works fine against single hosts.

Nmap uses raw IP packets in novel ways to determine what hosts are
available on the network, what services (application name and version)
those hosts are offering, what operating systems (and OS versions) they are
running, what type of packet filters/firewalls are in use, and dozens of other
characteristics.

While Nmap was developed for UNIX-based operating systems, it also runs
on Windows, and there are also versions available for most other major
operating systems.

3. Wireshark

Wireshark is a free and open-source packet analyzer. It is used for network


troubleshooting, analysis, software and communications protocol
development, and education. Wireshark can be used to capture and
interactively browse the contents of network traffic.

Wireshark is also commonly used to analyze data from a trace file, generally
in the form of a pcap (the file format of libpcap). Wireshark has a GUI and
comes in both 32-bit and 64-bit versions.


4. OpenVAS

OpenVAS is a vulnerability scanner that can perform a complete


vulnerability scan of the network infrastructure. OpenVAS is an international
project that is used by many organizations all over the world. It is available
for free and can be used with commercial products.

The OpenVAS tool is owned by Greenbone; the paid solution is called the
Greenbone Security Feed, while the free one is called the Greenbone
Community Feed.

5. Metasploit

The Metasploit Project is a computer security project that provides


information about security vulnerabilities and aids in penetration testing and
IDS signature development. It is open-source, free, and available to the
public.

The project provides information about security vulnerabilities, used by
penetration testers during security audits and by network administrators to
ensure the correct configuration of the network's devices.


What is Software Security Testing?


Software Security Testing is an essential part of the security process as it
ensures that all systems and resources accessible from outside the
organization are safe. It is recommended to do regularly scheduled software
security testing to keep up with the latest threats and vulnerabilities.

Software security testing, also known as Software penetration testing, is a


process of testing a software application for security loopholes and finding
vulnerabilities that malicious actors can exploit.

While there are many types of penetration testing, such as vulnerability


scanning, functional testing, and IDS/IPS testing, most of them focus on
finding flaws in the security of the overall infrastructure.


Why is Software Security Testing important?


Software security testing enables organizations to keep abreast of the latest
security threats and vulnerabilities. Audited software helps organizations
determine their current security posture and plan for the next stage of
software security. Software security is a continuous process and not a
one-time project.

Software security testing is performed to determine whether a network is


vulnerable to attacks from the internet or the internal network. This testing
includes a review of all software infrastructure and systems accessible from
the internet.

The main goal of software security testing is to determine the level of risk
that exists in an organization’s IT network. This testing is crucial because it
can prevent the risk of your company’s data and systems being
compromised.


Benefits of Software Security Testing Tools


Software security testing tools are an essential part of the information
security plan. Software security testing tools are used to perform security
testing on a network to identify and prevent security risks in the networks.

The results of the tests are analyzed to find any holes in the safety and to
point out weaknesses in the existing security system. These security tools
have proven to be very helpful in the network testing process.

Also, these security testing tools can increase IT security and keep data
safe by identifying the weaknesses in a company’s network and pointing out
the necessary improvements. It can also identify potential threats and
recommend immediate action to prevent potential problems.


5 Different techniques used to perform Software Security Testing
1. Network Scanning

The Network scanner is a potent tool to scan a network and get information
about the network. The network scanning tool can monitor the network,
identify the hosts connected to the network, and identify the services running
on the network like FTP, HTTP, POP3, and SMTP.

The Network scanner also identifies the operating system running on the
host and the version of the operating system.
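
A minimal sketch of the port-scanning side of network scanning, using only Python's standard library. To keep it self-contained and safe to run, the demo opens a throwaway listener on localhost so the scan has a known open port to report:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Demo: start a throwaway local service so the scan has something to find.
server = socket.socket()
server.bind(("127.0.0.1", 0))        # port 0 lets the OS pick a free port
server.listen(1)
demo_port = server.getsockname()[1]

found = scan_ports("127.0.0.1", [demo_port, demo_port + 1])
print(found == [demo_port])          # only the listening port is reported open
server.close()
```

Real scanners like Nmap go much further (service and OS fingerprinting, raw-packet techniques), but the core idea of probing ports for open services is the same. Only scan hosts you are authorized to test.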

2. Vulnerability Scanning

Vulnerability scanning is a network security process that detects and


analyzes flaws in computers and computer systems and reports the
information to administrators. This information helps plan security patches or
upgrades. It can also help in determining the security status of a network.

Vulnerability scanners have been around for a long time. Still, they have
been made more effective by using sophisticated techniques, such as
fuzzing, and they are now considered an essential tool in supporting
compliance with regulatory standards.


3. Ethical Hacking

Ethical hacking is the practice of testing a computer system, network, or web
application to find security weaknesses (holes) before a malicious hacker
does. It is surface-area testing of a system, network, or web application.

4. Password Cracking

Password cracking is of two types:

Dictionary Attack: This method uses a dictionary (a word list) to crack


passwords. The word list has all the possible passwords. So the computer
compares the password given by the user to the word list to find out the
matching password.

Brute Force Attack: This method uses an automatic program to crack


passwords. The program tries all possible combinations of characters until it
finds the correct password. Brute force attack is a time-consuming process.
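
Both attacks can be sketched with Python's hashlib. The stored hash, word list, and short brute-force target below are made up for illustration; real attacks run against leaked hash dumps with far larger search spaces:

```python
import hashlib
import string
from itertools import product

def sha256(text):
    return hashlib.sha256(text.encode()).hexdigest()

def dictionary_attack(target_hash, wordlist):
    """Hash each candidate word and compare it against the target hash."""
    for word in wordlist:
        if sha256(word) == target_hash:
            return word
    return None

def brute_force(target_hash, charset, max_len):
    """Try every combination up to max_len characters (cost grows exponentially)."""
    for length in range(1, max_len + 1):
        for combo in product(charset, repeat=length):
            guess = "".join(combo)
            if sha256(guess) == target_hash:
                return guess
    return None

stored = sha256("sunshine")  # hash a system would store instead of the password
print(dictionary_attack(stored, ["password", "123456", "letmein", "sunshine"]))  # sunshine
print(brute_force(sha256("ab1"), string.ascii_lowercase + string.digits, 3))     # ab1
```

The dictionary attack succeeds only when the password is in the word list, while brute force is exhaustive but quickly becomes infeasible as length and character set grow, which is why long passwords resist it.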

5. Penetration Testing

Penetration testing evaluates computer security by simulating an active


attack on a computer system or network. Penetration testing is typically
performed by ethical hackers, also known as white hat hackers, or by
security professionals attempting to determine the extent of damage or risk
before an actual attack.

Penetration testing differs from vulnerability scanning and compliance


auditing in that the primary aim of penetration testing is to exploit potential
vulnerabilities in a given target. In contrast, vulnerability scanning and
compliance auditing are more passive tests.


How much do Software Security Testing Tools Cost?
Security testing tools can be costly, and it depends on the tools you are
using and the number of apps you are scanning, and a lot more factors that
are usually discussed before signing a contract. A security scan should be
conducted at least twice a year to check the security and ensure it is secure
against threats. On average, the cost usually ranges from $100 to $500
per month.

3 things to know before buying a Software Security Testing Tool
With the number of different network security testing tools available,
businesses face a bewildering number of choices when selecting the best
network security testing solution. Keeping that in mind, we have prepared a
list of a few things to keep in mind while buying a network security testing
tool.

1. Ease of use and Friendly UI

One of the critical factors for organizations to choose a network security


testing tool is the ease of use. Simple interface and easy-to-follow
instructions are always appreciated. Even the most advanced tools are
rendered useless when the user does not know how to use them. A good
tool will have an easy-to-use interface, step-by-step instructions, and a
detailed user guide.

2. Comprehensive scan report

Understanding the threats against your business is crucial when it comes to


risk management. A comprehensive security testing report is essential to
keeping your business safe. A comprehensive security testing report can
uncover high-risk vulnerabilities, help you better understand your network,
and help achieve compliance.

3. Updated with Latest Vulnerabilities

No automated security testing tool is perfect. Hackers are constantly finding


and releasing new vulnerabilities. An automated network security testing tool
should have an updated database of security vulnerabilities so that no
vulnerability is left unnoticed.


Astra’s Pentest Solution: All in one Security Solution


No matter how big or small your company is, hiring a penetration testing
company to protect your network and applications is vital. Hiring a good pen
testing solution will not only protect your business but your data as well.
Astra Security is an excellent solution for your business.
Astra Security has been in the industry for many years now; it offers multiple
pen testing solutions,
including Network, Web, API, Blockchain, and Cloud penetration testing.

The Astra Penetration Testing Solution is a “Next Generation” Penetration


Testing software used by thousands of organizations worldwide. Astra’s
pentest solution is well-known for its excellent vulnerability scanner with
more than 3000 tests, making it a perfect choice for penetration testing.
