
Unit-III Levels of Testing 9

The need for Levels of Testing – Unit Test – Unit Test Planning – Designing the Unit Tests – The
Test Harness – Running the Unit tests and Recording results – Integration tests –
Designing Integration Tests – Integration Test Planning, System Testing – Acceptance testing –
Performance testing – Regression Testing – Internationalization testing – Ad-hoc testing –
Alpha, Beta Tests – Usability and Accessibility testing – Configuration testing –Compatibility
testing– Website testing.

Need For Levels Of Testing:-


• Execution-based software testing, especially for large systems, is usually carried out at
different levels.
• Major phases of testing:
• Unit Test
• Integration Test
• System Test
• Acceptance Test
At the unit level the principal goal is to detect functional and structural defects in the unit. At the
integration level several components are tested as a group, and the tester investigates
component interaction. At the system level the system as a whole is tested and a principal
goal is to evaluate attributes such as usability, reliability and performance.



Level of Testing and Software Development Paradigms
The approach used to design and develop a software system has an impact on how
testers plan and design suitable tests.
• The major approaches to system development are 1) bottom-up and 2) top-down.
• These approaches are supported by two major types of programming languages:
1) procedure-oriented and 2) object-oriented.
• Systems at the different levels can be developed with both approaches, using either
traditional procedural programming languages or object-oriented programming
languages.

• Systems developed with procedural languages
• are generally viewed as being composed of passive data and active
procedures.
• When test cases are developed, the focus is on generating input
data to pass to the procedures (or functions) in order to reveal
defects.
• Object-oriented systems
• are viewed as being composed of active data along with the
allowed operations on that data, all encapsulated within a
unit similar to an abstract data type.
Unit Test: Functions, Procedures, Classes, and Methods as Units
• A workable definition for a software unit is as follows:
• A unit is the smallest possible testable software component.
• It can be characterized in several ways. For example, a
unit in a typical procedure-oriented software system:
• Performs a single cohesive function
• Can be compiled separately
• Is a task in a work breakdown structure (from the manager's
point of view)
• Contains code that can fit on a single page or screen.
• A unit is traditionally viewed as a function or procedure
implemented in a procedural (imperative) programming language.
• In object-oriented systems both the method and the class/object have been suggested by
researchers as the unit of choice.
• A unit may also be a small-sized COTS component purchased from an
outside vendor that is undergoing evaluation by the purchaser, or a simple
module retrieved from an in-house reuse library.

Unit Test - The Need for Preparation

• The principal goal for unit testing is to ensure that each individual
software unit is functioning according to its specification.
• Good testing practice calls for unit tests that are planned and public.
• Planning includes
• designing tests to reveal defects such as functional description defects,
algorithmic defects, data defects, and control logic and sequence defects.
• Resources should be allocated and test cases should be developed,
using both white box and black box test design strategies.
• The unit should be tested by an independent tester (someone other than the developer), and the test
results and defects found should be recorded. Each unit should also be reviewed by a team of
reviewers, preferably before the unit test.
• Unit test in many cases is performed informally by the unit developer soon after the
module is completed and it compiles cleanly.
• Some developers also perform an informal review of the unit.
• To prepare for unit test the developers/testers must perform several tasks. These are:
1. Plan the general approach to unit testing
2. Design the test cases and test procedures
3. Define relationships between the tests
4. Prepare the auxiliary code necessary for unit test.
Unit Test Planning
A general unit test plan should be prepared. It may be prepared as a component of the master
test plan or as a standalone plan. It should be developed in conjunction with the master test plan and
the project plan for each project.
• Phase 1: Describe Unit Test Approach and Risks
In this phase of unit test planning the general approach to unit test is outlined.
The test planner:
1. Identifies test risks
2. Describes techniques to be used for designing the test cases for units
3. Describes techniques to be used for data validation and recording of test results
4. Describes the requirements for the test harness and other software that interfaces
with the units to be tested, e.g., any special software needed for testing object-oriented
units
• During this phase the planner also identifies completeness requirements, i.e., what will be
covered by the unit test and to what degree (states, functionality, control and data flow
patterns).
• The planner also identifies termination conditions for the unit tests.
• These include coverage requirements and special cases.
• Special cases may result in abnormal termination of unit test.
• The planner estimates the resources needed for unit test, such as hardware, software and
staff, and develops a tentative schedule under the constraints identified at that time.
Phase 2: Identify Unit Features to be Tested
• This phase requires information from the unit specification and detailed design description.
• The planner determines which features of each unit will be tested, for
example: functions, performance requirements, states and state transitions,
control structures, messages, and data flow patterns.
• If some features will not be covered by the tests, they should be mentioned and the risks of not
testing them assessed.
• Input and output of each test unit should be identified.
Phase 3: Add Levels of Detail to the Plan
• In this phase the planner refines the plan produced in the previous two phases.
• The planner adds new details to the approach, resource, and scheduling portions of the unit test
plan.
• E.g., existing test cases that can be reused for this project can be identified in this phase.
• Unit availability and integration scheduling information should be included in the
revised version of the test plan.
• The planner must be sure to include a description of how test results will be recorded.
• Test-related documents that will be required for this task, e.g., test logs and test incident reports,
should be described.
Designing the Unit Tests
• It is important to specify the following test design information with the unit test plan:
• The test cases (including the inputs and expected outputs for each test case)
• The test procedures (steps required to run the tests)
• As a part of the unit test design process, developers/testers should
also describe the relationships between the tests.
• Test suites can be defined that bind related tests together as a group.
• Test cases, test procedures and test suites may be reused from past projects if the
organization has been careful to store them so that they are easily retrievable and
reusable.
• Test case design at the unit level can be based on the use of black box and white box design
strategies.
• Both of these approaches are useful for designing test cases for functions and procedures.
• They are also useful for designing tests for the individual methods in a class. This gives the
tester the opportunity to exercise logic structures and/or data flow sequences, or to use
mutation analysis, all with the goal of evaluating the structural integrity of the unit; a small
illustrative sketch follows.
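The sketch below is illustrative only (it is not from the text): a black box style unit test for a small,
hypothetical function, written with JUnit 4, which is assumed to be available. The test cases are chosen
from equivalence classes and boundary values of the input.

// Illustrative sketch: black box unit tests for a hypothetical discount function,
// written with JUnit 4 (assumed to be on the classpath). Equivalence classes and
// boundary values drive the choice of test cases.
import org.junit.Test;
import static org.junit.Assert.*;

public class DiscountCalculatorTest {

    // Hypothetical unit under test: 10% discount for amounts of 100.0 or more, else none.
    static double discountedPrice(double amount) {
        if (amount < 0) {
            throw new IllegalArgumentException("amount must be non-negative");
        }
        return amount >= 100.0 ? amount * 0.90 : amount;
    }

    @Test
    public void noDiscountBelowThreshold() {        // equivalence class: 0 <= amount < 100
        assertEquals(99.99, discountedPrice(99.99), 0.0001);
    }

    @Test
    public void discountAtBoundary() {              // boundary value: amount == 100
        assertEquals(90.0, discountedPrice(100.0), 0.0001);
    }

    @Test(expected = IllegalArgumentException.class)
    public void negativeAmountRejected() {          // invalid equivalence class
        discountedPrice(-1.0);
    }
}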
Class as a Testable Unit
• If an organization is using the object-oriented paradigm to develop software systems, it will
need to select the component to be considered for unit test.
• The choice consists of 1) the individual methods as units or 2) the class as a whole.
• If individual methods are chosen as the unit, additional code in the form of a test harness must be
built to represent the called methods within the class. This is costly;
• building such a test harness for each individual method often requires developing code
equivalent to that already existing in the class itself.
• In spite of the potential advantages of testing each method individually, many
developers/testers consider the class to be the component of choice for unit testing. The
process of testing classes as units is sometimes called component test.
• When testing at the class level we are able to detect not only traditional types of defects, for
example those due to control or data flow errors, but also defects due to the nature of
object-oriented systems, for example defects due to encapsulation, inheritance, and
polymorphism errors.
Issue 1: Adequately Testing Classes
• There are potentially high costs for testing each individual method in a class.
• These high costs will be particularly apparent when there are many methods in a class; the number
can be as high as 20 to 30.
• If the class is selected as the unit to test, it is possible to reduce these costs, since in
many cases the methods in a single class serve as drivers and stubs for one
another.
• This has the effect of lowering the complexity of the test harness that needs to be
developed.
• In some cases driver classes that represent outside classes using the methods of the class
under test will have to be developed.
• For example, consider the create, pop, push, empty, full and show_top methods associated with a
stack class.
• When testers unit (or component) test this class, they will need to focus on the
operation of each of the methods in the class and the interaction between them.
• For example, a test sequence for a stack that can hold three items might be:

  create(s,3), empty(s), push(s,item-1), push(s,item-2), push(s,item-3),
  full(s), show_top(s), pop(s,item), pop(s,item), pop(s,item), empty(s), . . .

Issue 2: Observation of Object States and State Changes

• Methods may not return a specific value to a caller.
• They may instead change the state of an object.
• The state of an object is represented by a specific set of values for its attributes or state
variables.
• Methods often modify the state of an object, and the tester must ensure that each state
transition is proper.
• The test designer can prepare a state table that specifies the states the object can
assume, and then in the table indicate the sequences of messages and parameters that will cause
the object to enter each state.
• When the tests are run the tester can enter the results in this table. The first call to the method
push in the stack class changes the state of the stack so that empty is no longer true. It also
changes the value of the stack pointer variable, top.
• To determine whether the method push is working properly, the value of the variable top must be
visible both before and after the invocation of this method. In this case show_top within the
class may be called to perform this task.
• The methods full and empty also probe the state of the stack. A sample augmented
sequence of calls to check the value of top and the full/empty state of the three-item stack
can be built by interleaving these probing calls with the push and pop calls shown above, as in the
sketch below.
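The following sketch (illustrative only) shows how the probing methods show_top, empty and full can be
interleaved with push and pop calls so that each state transition of the stack is observable. The tiny
array-based Stack class is hypothetical and is included only to make the example self-contained.

// Sketch: a hypothetical three-item stack plus a test sequence that probes the
// stack's state (top, empty, full) before and after each state-changing call.
public class StackStateCheck {

    static class Stack {                       // minimal array-based stack of capacity n
        private final int[] items;
        private int top = -1;                  // state variable observed by the tests
        Stack(int n) { items = new int[n]; }   // corresponds to create(s, n)
        boolean empty() { return top == -1; }
        boolean full() { return top == items.length - 1; }
        int showTop() { return top; }          // exposes the state for observation
        void push(int item) { items[++top] = item; }
        int pop() { return items[top--]; }
    }

    static void check(String label, boolean condition) {
        System.out.println((condition ? "PASS " : "FAIL ") + label);
    }

    public static void main(String[] args) {
        Stack s = new Stack(3);                               // create(s,3)
        check("new stack is empty", s.empty());
        check("top is -1 before any push", s.showTop() == -1);

        s.push(1);                                            // push(s, item-1)
        check("not empty after first push", !s.empty());
        check("top advanced to 0", s.showTop() == 0);

        s.push(2);
        s.push(3);                                            // the stack now holds three items
        check("full after third push", s.full());
        check("top advanced to 2", s.showTop() == 2);

        s.pop(); s.pop(); s.pop();                            // pop(s, item) three times
        check("empty again after popping all items", s.empty());
    }
}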
Issue 3: The Retesting of Classes - I

• One of the most beneficial features of object-oriented development is encapsulation,
which is used to hide information.
• A program unit, in this case a class, can be built with a well-defined public interface that
proclaims its services to client classes. The implementation of the services is private.
Clients who use the services are unaware of the implementation details. As long as the interface is
unchanged, making changes to the implementation should not affect the client classes. A
tester of object-oriented code might therefore conclude that only the class with
implementation changes to its methods needs to be retested.
• This is not true. In an object-oriented system, if a developer changes a class implementation, that class
needs to be retested as well as all the classes that depend on it. If a superclass, for example,
is changed, then it is necessary to retest all of its subclasses.
Issue 4: The Retesting of Classes - II
• Classes are usually part of a class hierarchy where there are existing inheritance
relationships.
• Subclasses inherit methods from their superclasses.
• A tester may assume that once a method in a superclass has been tested, it does not need to be
retested in the subclasses that inherit it.
• However, there may be overriding of methods, where a subclass replaces an inherited
method with a locally defined method.
• Designing a new set of test cases may then be necessary.
• This is because the two methods may be structurally different.

Class hierarchy (each class below is a subclass of the one above it):
  shape (display(), color())
    triangle (color())
      equilateral triangle (display())

• Suppose the shape superclass has a subclass, triangle, and triangle has a subclass,
equilateral triangle. Also suppose that the method display in shape needs to call the
method color for its operation.
• Equilateral triangle could have a local definition for the method display. That display uses the
definition of color which has been defined in triangle.
• This local definition of the color method in triangle has been tested to work with the
inherited display method in shape, but not with the locally defined display in equilateral
triangle.
• This is a new context that must be retested. A set of new test cases should be developed.


• The tester must carefully examine all the relationships between members of a class to
detect such occurrences.
The Test Harness
• The auxiliary code developed to support the testing of units and components is called a test
harness.
• The harness consists of
• drivers that call the target code, and
• stubs that represent the modules it calls.
• The development of drivers and stubs requires testing resources.
• The drivers and stubs must be tested themselves to insure they are working properly
and that they are reusable for subsequent releases of the software.
• Drivers and stubs can be developed at several levels of functionality.
• E.g., a driver could have the following options and combinations of options:
1. Call the target unit
2. Do 1, and pass input parameters from a table
3. Do 1, 2, and display the input parameters
4. Do 1, 2, 3, and display the results (output parameters)
• The stub should also exhibit different levels of functionality. For example, a stub could:
1. Display a message that it has been called by the target unit
2. Do 1, and display any input parameters passed from the target unit
3. Do 1, 2, and pass back a result from a table
4. Do 1, 2, 3, and display the result from the table
(A sketch of a simple driver and stub follows.)
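The following is a minimal, illustrative sketch of a driver and a stub (all names are hypothetical, not
from the text). The driver feeds the target unit from a table of inputs and displays the parameters and
results; the stub announces that it has been called and passes back a canned result from a table.

// Illustrative test harness sketch: a table-driven driver for a target unit, and a
// stub standing in for a module (a tax-rate lookup) that the unit calls.
public class HarnessSketch {

    // Stub for a called module that is not yet available.
    static double taxRateStub(String region) {
        System.out.println("stub taxRate called with region=" + region); // announce the call
        return "EU".equals(region) ? 0.20 : 0.05;                        // canned result from a table
    }

    // Target unit under test: computes a gross price using the (stubbed) tax rate.
    static double grossPrice(double net, String region) {
        return net * (1.0 + taxRateStub(region));
    }

    // Driver: calls the target unit with inputs taken from a table and displays the results.
    public static void main(String[] args) {
        Object[][] inputTable = {                    // input parameters from a table
            {100.0, "EU"},
            {100.0, "US"},
        };
        for (Object[] row : inputTable) {
            double net = (Double) row[0];
            String region = (String) row[1];
            System.out.println("driver input: net=" + net + " region=" + region);  // display parameters
            System.out.println("driver result: gross=" + grossPrice(net, region)); // display result
        }
    }
}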

Running the Unit Tests and Recording Results
• Unit test can begin when
• the unit becomes available from the developers,
• the test cases have been designed and reviewed, and
• the test harness and any other supplemental supporting tools are available.

• The status of the test efforts for a unit, and a summary of the test results, must be
recorded in a unit test worksheet.

• It is very important for the tester, at any level of testing, to carefully record, review and check test
results.
• The tester must determine from the results whether the unit has passed or failed the test.
• If the test is failed, the nature of the problem should be recorded in what is sometimes
called a test incident report.
• Differences from expected behavior should be described. When a unit fails a test there may be
several reasons for the failure:
• a fault in the unit implementation
• a fault in the test case specification (the input or the output was not specified correctly)
• a fault in test procedure execution (the test should be rerun)
• a fault in the test environment (perhaps a database was not set up properly)
• a fault in the unit design (the code correctly adheres to the design
specification, but the latter is incorrect)
• When a unit has been completely tested and finally passes all of the required tests, it is ready for
integration.
Integration Test - Goals
• Integration test for procedural code has two major goals:
• To detect defects that occur on the interfaces of units
• To assemble the individual units into working subsystems and finally a complete
system that is ready for system test.
• In unit test the tester attempts to detect defects that are related to the functionality and structure of
the unit.
• Some simple interface checks occur at unit test, but unit interfaces are more adequately tested during
integration test, when each unit is finally connected to a full and working implementation of the units it
calls. With a few minor exceptions, integration test should only be performed on units that have successfully
passed unit testing.
• A tester might believe erroneously that since a unit has already been tested during unit test
with drivers and stubs, it does not need to be retested in combination with other units during
integration test.
• Integration testing works best as an iterative process for procedural-oriented systems.
• One unit at a time is integrated into a set of previously integrated modules which have passed a
set of integration tests.
• The interface and functionality of the new unit in combination with the previously integrated units is
tested.
• When a subsystem is built from units integrated in this stepwise manner, then performance,
security and stress tests can be performed on this subsystem.
• Integrating one unit at a time helps the tester in several ways.
• It keeps the number of new interfaces to be examined small, so the tester can focus on these interfaces
only.
• Experienced testers know that many defects occur at module interfaces.
• Another advantage is that the massive failures that often occur when multiple units are integrated at
once are avoided.
• The approach also helps the developers; it allows defect search and repair to be confined to a small,
known number of components and interfaces.
• The integration process in object-oriented systems is driven by the assembly of classes into
cooperating groups.
• The cooperating groups of classes are tested as a whole and then combined into higher-level
groups.
Designing Integration Tests
• Integration tests can be designed using a black box or white box approach. Some unit tests can be reused.
• Since many errors occur at module interfaces, test designers need to focus on
exercising all input/output parameter pairs and all calling relationships.
• The tester needs to insure the parameters are of the correct type and in the correct order.
• The author has had the personal experience of spending many hours trying to locate a fault
that was due to an incorrect ordering of parameters in the calling routine.
• Example: Procedure_b is being integrated with Procedure_a. Procedure_a calls Procedure_b with
input parameters in3 and in4. Procedure_b uses those parameters and then returns a value for the output
parameter out1. Terms such as lhs and rhs could be any variable or expression.
• The parameters could be involved in a number of def and/or use data flow patterns.
• The actual usage patterns of the parameters must be checked at integration time (a small
illustrative sketch appears at the end of this section).
• Some black box tests used for module integration may be reusable from unit testing.
• However, when units are integrated and subsystems are to be tested as a whole, new tests will
have to be designed to cover their functionality. Useful sources for functional tests at the integration
level are the requirements documents and the user manual.
• Testers need to work with requirements analysts to insure that the requirements are testable, accurate and
complete.
• Black box tests should be developed to insure proper functionality and the ability to handle subsystem
stress.
• Integration testing of clusters of classes also involves building test harnesses, which in this
case are special classes of objects built for testing.
• Whereas in class testing we evaluated intra-class method interactions, at the cluster level we test
inter-class method interactions as well.
• We want to insure that messages are being passed properly to interfacing objects, that object
state transitions are correct when specific events occur, and that the clusters are
performing their required functions.
• A group of cooperating classes is selected for test as a cluster (e.g., packages in Java).
• If developers have used Coad and Yourdon's approach, then a subject layer could be used to
represent a cluster.
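As a small, hedged illustration of the parameter-ordering point made earlier in this section (all names
are hypothetical): the integration check below asserts an expected value that would fail if the caller
passed the input parameters in the wrong order.

// Sketch: an integration check for a caller/callee parameter pair. procedureA and
// procedureB are hypothetical; because subtraction is not commutative, the expected
// value catches a defect where in3 and in4 are passed in the wrong order.
public class ParameterOrderIntegrationTest {

    // Callee: the order of the parameters matters.
    static int procedureB(int in3, int in4) {
        return in3 - in4;                      // out1 = in3 - in4
    }

    // Caller being integrated with procedureB.
    static int procedureA(int x, int y) {
        return procedureB(x, y);               // a defect here, e.g. procedureB(y, x), is caught below
    }

    public static void main(String[] args) {
        int out1 = procedureA(10, 3);
        if (out1 == 7) {
            System.out.println("PASS: parameters passed in the correct order");
        } else {
            System.out.println("FAIL: expected 7 but got " + out1);
        }
    }
}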
Integration Test Planning
• Integration tests must be planned.
• Planning can begin when high-level design is complete, so that the system architecture is defined.
• Documents relevant to integration test planning are the requirements document, the user manual and
usage scenarios. These documents contain structure charts, state charts, data dictionaries,
cross-reference tables and module interface descriptions.
• The strategy for integration of the units must be defined.
• For procedural-oriented systems
• the order of integration of the units should be defined. This depends on the strategy
selected. Consider the fact that the testing objectives are to assemble components into
subsystems and to demonstrate that the subsystem functions properly with the
integration test cases.
• For object-oriented systems
• a working definition of a cluster or similar construct must be described, and
relevant test cases must be specified. In addition, testing resources and schedules for
integration should be included in the test plan. The plan includes the following items:
• the clusters this cluster is dependent on
• a natural language description of the functionality of the cluster to be tested
• a list of the classes in the cluster
• a set of cluster test cases
Integration Testing Types:
Integration testing can be viewed as
1) a type of testing
2) a phase of testing.

Integration is defined to be a set of interactions; all defined interactions among the components need to
be tested. The architecture and design can give the details of interactions within the system; however,
testing the interactions between one system and another system requires a detailed understanding
of how they work together.
Integration Testing as a Type of Testing:
Integration testing means testing of interfaces. They are:
Internal Interfaces - these provide communication across two modules within a project or product; they are
internal to the product and not exposed to the customer or external developers.
Exported or External Interfaces - exported interfaces are those that are visible outside
the product to third-party developers and solution providers.

Integration testing as a type focuses on testing interfaces that are "implicit and explicit" and
"internal and external".
[Figure: "A Set of Modules and Interfaces" - Components 1 through 10 connected by explicit
interfaces (for which documentation is given) and implicit interfaces (for which no documentation
is given).]
In the above figure, it is clear that there are at least 12 interfaces between the modules to be
tested (9 explicit and 3 implicit). What should the order of testing these interfaces be? There are
several methodologies available to decide the order for integration testing. These are as
follows:
1. Top-Down Integration
2. Bottom-Up Integration
3. Bi-Directional Integration
4. System (Big Bang) Integration
Top-Down Integration:

Top-down integration testing involves testing the topmost component's interfaces with the other
components, in the same order as you navigate from top to bottom, till all the components are
covered. To understand this methodology, assume a new product/software development where
components become available one after another in the order of the component numbers specified.
The integration starts with testing the interface between Component 1 and Component 2. To complete
the integration testing, all the interfaces, covering all the arrows in the figure, have to be tested
together. The order in which the interfaces are to be tested is depicted in the table below. In an
incremental product development, where one or two components get added to the product in each
increment, the integration testing methodology pertains to only those new interfaces that are added.

[Figure: top-down hierarchy - Component 1 at the top; Components 2, 3 and 4 in the middle;
Components 5, 6, 7 and 8 at the lowest level.]

Order of Testing Interfaces

Steps   Interfaces Tested
1       1-2
2       1-3
3       1-4
4       1-2-5
5       1-3-6
6       1-3-6-(3-7)
7       (1-2-5)-(1-3-6-(3-7))
8       1-4-8
9       (1-2-5)-(1-3-6-(3-7))-(1-4-8)

For example, assume one component (Component 8) is added for the current release; then the
integration testing for the current release needs to include steps 4, 7, 8 and 9.
To optimize the number of steps in integration (i.e., to reduce elapsed time), some steps can be
combined; e.g., step 6 and step 7 can be executed as a single step.
Subsystem: a set of components and their related interfaces that can deliver a piece of functionality
is called a subsystem. E.g., the components in steps 4, 6 and 8 can be considered subsystems.
Bottom-Up Integration:

Bottom-up integration is just the opposite of top-down integration: the components for a new
product development become available in reverse order, starting from the bottom. Testing takes place
from the bottom of the control flow upwards, and missing components or systems are substituted by
drivers. The logic flow is from top to bottom while the integration path is from bottom to top.
Navigation in bottom-up integration starts from Component 1, covering all the subsystems, till
Component 8 is reached. The order is listed in the table below, followed by a small illustrative driver
sketch. The number of steps in bottom-up integration can be optimized into four steps, by combining
step 2 and step 3 and by combining steps 5-8 in the table.

[Figure: bottom-up hierarchy - Components 1, 2, 3 and 4 at the bottom; Components 5, 6 and 7
in the middle; Component 8 at the top.]
Order of Interfaces Tested Using Bottom-Up Integration

Steps   Interfaces Tested
1       1-5
2       2-6, 3-6
3       2-6-(3-6)
4       4-7
5       1-5-8
6       2-6-(3-6)-8
7       4-7-8
8       (1-5-8)-(2-6-(3-6)-8)-(4-7-8)
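The sketch below illustrates the driver idea used in bottom-up integration (all names are hypothetical).
The lowest-level component is real, while its not-yet-integrated caller is replaced by a simple driver
that exercises the component's interface with a table of inputs.

// Illustrative bottom-up sketch: a low-level formatting component is already
// implemented; its higher-level caller is not, so a driver exercises it directly.
public class BottomUpDriver {

    // Real low-level component under integration (a hypothetical amount formatter).
    static String formatAmount(double amount, String currencyCode) {
        return String.format("%s %.2f", currencyCode, amount);
    }

    // Driver replacing the missing higher-level caller.
    public static void main(String[] args) {
        Object[][] inputs = {
            {1234.5, "USD"},
            {0.0, "EUR"},
        };
        for (Object[] row : inputs) {
            String result = formatAmount((Double) row[0], (String) row[1]);
            System.out.println("driver input=" + row[0] + "," + row[1] + " -> " + result);
        }
    }
}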

Bidirectional Integration (Sandwich Integration):

Bi-directional integration is a combination of the top-down and bottom-up integration approaches
used together to derive the integration steps. Let us assume the software components become
available in the order mentioned by the component numbers. The individual components 1, 2, 3, 4
and 5 are tested separately, and bi-directional integration is performed initially with the use of stubs
and drivers. Drivers are used to provide upstream connectivity while stubs provide downstream
connectivity. A driver is a function which redirects the requests to some other component, and stubs
simulate the behavior of a missing component. After the functionality of these integrated components
is tested, the drivers and stubs are discarded. Once components 6, 7 and 8 become available, the
integration methodology focuses only on those components, as these are the components which need
focus and are new.

[Figure: Component 1 at the top; Components 6, 7 and 8 in the middle layer; Components 2, 3, 4
and 5 at the bottom.]

Steps for Integration Using Sandwich Testing :-

Steps   Interfaces Tested
1       6-2
2       7-3-4
3       8-5
4       (1-6-2)-(1-7-3-4)-(1-8-5)

System (Big Bang) Integration:

System integration means that all the components of the system are integrated and tested as a
single unit. Integration testing, which is testing of interfaces, can be divided into two types:

• Components or sub-system integration
• Final integration testing or system integration

Big bang integration is ideal for a product where the interfaces are stable and have fewer defects.
There are some major disadvantages that can have a bearing on the release dates and the quality
of a product. These are as follows:
1. When a failure or defect is encountered during system integration, it is very difficult to
locate the problem and to find out in which interface the defect exists. The debug cycle may
involve focusing on specific interfaces and testing them again.
2. The ownership for correcting the root cause of a defect may be a difficult issue to pinpoint.
3. When integration testing happens at the end, the pressure from the approaching release
date is very high. This pressure on the engineers may cause them to compromise on the
quality of the product.

4. A certain component may take an excessive amount of time to be ready. This precludes testing
other interfaces and wastes time till the end.
The integration testing phase focuses on finding defects which predominantly arise because of
combining various components for testing, and should not focus on defects that lie within a single
component or a few components. Integration testing as a type focuses on testing the interfaces;
it is a subset of the integration testing phase.



• Sandwich Integration
– Compromise between bottom-up and top-down testing
– Simultaneously begin bottom-up and top-down testing and meet at a predetermined
point in the middle

Integration Test Planning

• Integration test must be planned. Planning can begin when high-level design is complete so that
the system architecture is defined.
• Other documents relevant to integration test planning are the requirements document, the user
manual, and usage scenarios.
• These documents contain structure charts, state charts, data dictionaries, cross-reference tables,
module interface descriptions, data flow descriptions, messages and event descriptions, all
necessary to plan integration tests.

• Consider the fact that the testing objectives are to assemble components into subsystems and to
demonstrate that the subsystem functions properly with the integration test cases.
• For object-oriented systems a working definition of a cluster or similar construct must be
described, and relevant test cases must be specified.
• In addition, testing resources and schedules for integration should be included in the test plan.

The plan includes the following items:

(i) clusters this cluster is dependent on;

(ii) a natural language description of the functionality of the cluster to be tested;



(iii) list of classes in the cluster;

(iv) a set of cluster test cases.

• Designing Integration Tests


• Integration tests for procedural software can be designed using a black or white box
approach.
• Both are recommended. Some unit tests can be reused.
• Since many errors occur at module interfaces, test designers need to focus on exercising all
input/output parameter pairs, and all calling relationships.
• The tester needs to insure the parameters are of the correct type and in the correct order.
• The author has had the personal experience of spending many hours trying to locate a fault
that was due to an incorrect ordering of parameters in the calling routine.
• The tester must also insure that once the parameters are passed to a routine they are used
correctly.
• For conventional systems, input/output parameters and calling relationships will appear in a
structure chart built during detailed design.
• Testers must insure that test cases are designed so that all modules in the structure chart are
called at least once, and all called modules are called by every caller.
• The reader can visualize these as coverage criteria for integration test. Coverage
requirements for the internal logic of each of the integrated units should be achieved during unit test.
• Some black box tests used for module integration may be reusable from unit testing.
• However, when units are integrated and subsystems are to be tested as a whole, new tests will
have to be designed to cover their functionality and adherence to performance and other
requirements.

• Integration testing of clusters of classes also involves building test harnesses which in this case
are special classes of objects built especially for testing.
• Whereas in class testing we evaluated intraclass method interactions, at the cluster level we test
interclass method interactions as well.
• A group of cooperating classes is selected for test as a cluster. If developers have used Coad and
Yourdon's approach, then a subject layer could be used to represent a cluster.
• Jorgensen et al. have reported on a notation for a cluster that helps to formalize object-oriented
integration.
• The methods and the classes they belong to are connected into clusters of classes that are
represented by a directed graph that has two special types of entities.
• These are method-message paths, and atomic system functions that represent input port events.
• A method-message path is described as a sequence of method executions linked by
messages.
• An atomic system function is an input port event (start event) followed by a set of method-message
paths and terminated by an output port event (system response).



User Acceptance Testing (UAT)

What is UAT?
• User Acceptance Testing (UAT) is a type of testing performed by the end user or the client to
verify/accept the software system before moving the software application to the production
environment.
• UAT is done in the final phase of testing after functional, integration and system testing is
done.

Purpose of UAT

• The main Purpose of UAT is to validate end to end business flow. It does not focus on
cosmetic errors, spelling mistakes or system testing.
• User Acceptance Testing is carried out in a separate testing environment with production-
like data setup.
• It is kind of black box testing where two or more end-users will be involved.

UAT is performed by –
1. Client
2. End users



UAT Process:
1. Analysis of Business Requirements
2. Creation of UAT test plan
3. Identify Test Scenarios
4. Create UAT Test Cases
5. Preparation of Test Data(Production like Data)
6. Run the Test cases
7. Record the Results
8. Confirm business objectives

Features (of a typical UAT tool):
• All-in-one UAT solution
• Works across all ERPs and applications
• Test any process end-to-end
• Automatically capture everything
• Train and use within 30 minutes
• Instant test result notifications
• Easy annotations & comments

Performance testing

Objective of Performance Testing:


1. The objective of performance testing is to eliminate performance congestion.
2. It uncovers what is needed to be improved before the product is launched in market.
3. The objective of performance testing is to make software rapid.
4. The objective of performance testing is to make software stable and reliable.
5. The objective of performance testing is to evaluate the performance and scalability of a system or
application under various loads and conditions.

• Performance Testing is a type of software testing that ensures software applications
perform properly under their expected workload.

perform properly under their expected workload.
• It is a testing technique carried out to determine system performance in terms of
sensitivity, reactivity and stability under a particular workload.



• Performance testing is a type of software testing that focuses on evaluating the
performance and scalability of a system or application.
• The goal of performance testing is to identify bottlenecks, measure system performance under
various loads and conditions, and ensure that the system can handle the expected number of
users or transactions.

There are several types of performance testing, including:


• Load testing: Load testing simulates a real-world load on the system to see how it performs
under stress. It helps identify bottlenecks and determine the maximum number of users or
transactions the system can handle.
• Stress testing: Stress testing is a type of load testing that tests the system’s ability to handle a high
load above normal usage levels. It helps identify the breaking point of the system and any potential
issues that may occur under heavy load conditions.
• Spike testing: Spike testing is a type of load testing that tests the system’s ability to handle sudden
spikes in traffic. It helps identify any issues that may occur when the system is suddenly hit with a
high number of requests.
• Soak testing: Soak testing is a type of load testing that tests the system’s ability to handle a
sustained load over a prolonged period of time. It helps identify any issues that may occur after
prolonged usage of the system.
• Endurance testing: This type of testing is similar to soak testing, but it focuses on the long-term
behavior of the system under a constant load.
• Performance Testing is the process of analyzing the quality and capability of a product. It is a testing
method performed to determine the system performance in terms of speed, reliability and stability
under varying workload. Performance testing is also known as Perf Testing.
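A minimal load-test sketch follows (illustrative only; the request handler below is a stand-in for the
operation under test). A fixed number of concurrent worker threads repeatedly invoke the operation
and the harness reports the average and maximum response times.

// Minimal load-test sketch (not a real tool): N concurrent workers call a stand-in
// request handler and simple response-time statistics are reported.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MiniLoadTest {

    // Stand-in for the operation under test, e.g. a service call or database query.
    static void handleRequest() throws InterruptedException {
        Thread.sleep(20);                          // simulate roughly 20 ms of work
    }

    public static void main(String[] args) throws Exception {
        int users = 10, requestsPerUser = 50;
        ExecutorService pool = Executors.newFixedThreadPool(users);
        List<Future<long[]>> results = new ArrayList<>();

        for (int u = 0; u < users; u++) {
            results.add(pool.submit(() -> {
                long total = 0, max = 0;
                for (int i = 0; i < requestsPerUser; i++) {
                    long start = System.nanoTime();
                    handleRequest();
                    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                    total += elapsedMs;
                    max = Math.max(max, elapsedMs);
                }
                return new long[] { total, max };   // per-worker totals
            }));
        }

        long total = 0, max = 0;
        for (Future<long[]> f : results) {
            long[] r = f.get();
            total += r[0];
            max = Math.max(max, r[1]);
        }
        pool.shutdown();
        long count = (long) users * requestsPerUser;
        System.out.println("requests: " + count
                + ", avg response: " + (total / count) + " ms, max: " + max + " ms");
    }
}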

Performance Testing Attributes:


• Speed:
It determines whether the software product responds rapidly.
• Scalability:
It determines amount of load the software product can handle at a time.
• Stability:
It determines whether the software product is stable in case of varying workloads.
• Reliability:
It determines whether the software product can consistently perform its required functions without failure.

Types of Performance Testing:


1. Load testing:
It checks the product’s ability to perform under anticipated user loads. The objective is to
identify performance congestion before the software product is launched in market.
2. Stress testing:
It involves testing a product under extreme workloads to see whether it handles high traffic or not.
The objective is to identify the breaking point of a software product.
3. Endurance testing:
It is performed to ensure the software can handle the expected load over a long period of time.

4. Spike testing:
It tests the product’s reaction to sudden large spikes in the load generated by users.



5. Volume testing:
In volume testing a large volume of data is stored in a database and the overall software system's behavior is
observed. The objective is to check the product's performance under varying database volumes.

6.Scalability testing:
In scalability testing, software application’s effectiveness is determined in scaling up to support an
increase in user load. It helps in planning capacity addition to your software system.

Advantages of Performance Testing :


• Performance testing ensures the speed, load capability, accuracy and other performances of the
system.
• It identifies, monitors and resolves the issues if anything occurs.
• It ensures great optimization of the software and also allows a large number of users to use it at the
same time.
• It ensures client as well as end-customer satisfaction.

Internationalization Testing

Why is internationalization testing done?


• To ensure the proper encoding of characters when a language is converted to another language.
• To check that if a search query or string is not supported in the targeted language, the
software will not crash or malfunction.
• To attract audiences globally by providing convenience on using the application in their
preferred languages.
• To make sure that the look and feel of the font and font size are rendered accordingly.

• Internationalization Testing :

Internationalization testing is a process of ensuring the adaptability of software to different
cultures and languages around the world accordingly without any modifications in source
code.
• It is also shortly known as i18n, in which 18 represents the number of characters in
between I & N in the word Internationalization.



Internationalization simply makes applications ready for localization.

What are types of internationalization testing?


Internationalization focuses on
1. compatibility testing,
2. functionality testing,
3. interoperability testing,
4. usability testing,
5. installation testing,
6. user interface validation testing

Where is i18n testing done ?


Internationalization testing is done on several important aspects that are classified into the following two parts.
1. Internationalization testing at the Front end :
i18n test done at the user side of the application.

• Content localization –
Localization of the static contents like labels, buttons, tabs and other fixed elements in
applications, and the dynamic contents like dialogue boxes, pop-ups, toolbars, etc.
• Local/Cultural Awareness –
Cultural awareness testing has to be done to ensure the appropriate rendering of time, date,
currencies, telephone numbers, zip codes, special events and festivals on calendars used in
different regions

• Feature-based Testing –

Several features of an application work for certain regional users and not for others.
• So those features should be hidden for non-applicable users, and should be visible and
functional to the users for whom they work. This is ensured by feature-based testing.
• File transferring and rendering –

Property files of different languages need to be tested whether the interface of file
transfer is localized as per the language being selected.
• Rendering means providing or displaying contents (scripts) that are appropriately
displayed without misalignment or random words.
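A small sketch of the kind of check cultural-awareness testing performs is shown below (JDK classes
only; the locales, date and amount are arbitrary examples). The same date and currency amount are
rendered under several locales; a tester would compare each rendering against the localization
specification for that region.

// Illustrative i18n check: render the same date and currency amount under
// different locales so region-specific formats can be verified.
import java.text.NumberFormat;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.FormatStyle;
import java.util.Locale;

public class LocaleRenderingCheck {
    public static void main(String[] args) {
        LocalDate date = LocalDate.of(2024, 7, 4);
        double amount = 1234.56;
        Locale[] locales = { Locale.US, Locale.GERMANY, Locale.JAPAN, new Locale("hi", "IN") };

        for (Locale locale : locales) {
            String formattedDate = date.format(
                    DateTimeFormatter.ofLocalizedDate(FormatStyle.LONG).withLocale(locale));
            String formattedAmount = NumberFormat.getCurrencyInstance(locale).format(amount);
            // Expected values for each locale come from the localization specification.
            System.out.println(locale + ": " + formattedDate + " | " + formattedAmount);
        }
    }
}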

• 2. Internationalization testing at the Back end :


Internationalization testing at the back end requires an in-depth understanding of the
database.
• This testing includes the support of Unicode characters in the database.
This testing also facilitates the back end (server-side) of an application to handle different
languages, currencies, encoding, site search and form data submission.
Benefits of internationalization testing:



• Increased visibility and reach of target audience around the world with personalized content
rendering.
• Single source code with the international standard for all versions of the application.
• The global release of the product (application) with lesser cost and time.
• Improved good quality and architecture with simpler maintenance.
• Reduced ownership cost for the various versions of the product with compliance with
international standards.

Regression testing:
Regression Testing is the process of testing the modified parts of the code and the parts that might get
affected due to the modifications to ensure that no new errors have been introduced in the software after the
modifications have been made.

Regression means return of something and in the software field, it refers to the return of a bug.

When to do regression testing?


• Typically, regression testing is applied under these circumstances:
• A new requirement is added to an existing feature
• A new feature or functionality is added
• The codebase is fixed to solve defects
• The source code is optimized to improve performance
• Patch fixes are added
• A new version of the software is released
• When changes to the User Interface are made
• Changes in configuration
• A new third-party system is integrated with the current system
Process of Regression testing:
• Firstly, whenever we make changes to the source code for any reason, such as adding new
functionality or optimization, the program, when executed, may fail against the previously designed
test suite.
• After the failure, the source code is debugged in order to identify the bugs in the program.
• After identification of the bugs in the source code, appropriate modifications are made.
• Then appropriate test cases are selected from the already existing test suite, covering
all the modified and affected parts of the source code.
• We can add new test cases if required.
• In the end, regression testing is performed using the selected test cases.

Techniques for the selection of Test cases for Regression Testing:


• Select all test cases: In this technique, all the test cases are selected from the already existing test
suite. It is the simplest and safest technique but not very efficient.
• Select test cases randomly: In this technique, test cases are selected randomly from the existing test
suite, but this is only useful if all the test cases are equally good in their fault-detection capability,
which is very rare. Hence, it is not used in most cases.
• Select modification-traversing test cases: In this technique, only those test cases are selected
which cover and test the modified portions of the source code and the parts which are affected by
these modifications.
• Select higher-priority test cases: In this technique, priority codes are assigned to each test case of the
test suite based upon their bug-detection capability, customer requirements, etc. The higher-priority
test cases are then selected and executed first.
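A hedged sketch of combining the last two techniques is shown below (all data is made up): test cases
whose coverage touches a modified module are selected (modification-traversing selection), and the
selected tests are then ordered by their priority codes.

// Illustrative regression test selection: keep only the test cases that traverse a
// modified module, then order them by priority code (lower number = higher priority).
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class RegressionSelection {
    public static void main(String[] args) {
        // Test case -> modules it covers (would normally come from coverage records).
        Map<String, Set<String>> coverage = new LinkedHashMap<>();
        coverage.put("TC1", Set.of("login", "session"));
        coverage.put("TC2", Set.of("report"));
        coverage.put("TC3", Set.of("payment", "session"));

        // Priority codes assigned from bug-detection history, customer requirements, etc.
        Map<String, Integer> priority = Map.of("TC1", 1, "TC2", 3, "TC3", 2);

        Set<String> modifiedModules = Set.of("session");   // modules changed in this release

        List<String> selected = new ArrayList<>();
        for (Map.Entry<String, Set<String>> e : coverage.entrySet()) {
            // modification-traversing: select the test if it touches any modified module
            if (!Collections.disjoint(e.getValue(), modifiedModules)) {
                selected.add(e.getKey());
            }
        }
        selected.sort(Comparator.comparingInt(priority::get)); // higher-priority tests run first

        System.out.println("Selected regression tests in execution order: " + selected);
    }
}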



Advantages of Regression Testing:
• It ensures that no new bugs have been introduced after adding new functionality to the system.
• Most of the test cases used in regression testing are selected from the existing test suite, and their
expected outputs are already known. Hence, regression testing can be easily automated with
automated tools.
• It helps to maintain the quality of the source code.

Disadvantages of Regression Testing:


• It can be time and resource consuming if automated tools are not used.
• It is required even after very small changes in the code.

Ad hoc testing

Adhoc Testing:
Adhoc testing is a type of software testing which is performed informally and randomly after the formal
testing is completed, to find out any loopholes in the system.
• For this reason, it is also known as Random testing or Monkey testing.
• Adhoc testing is not performed in a structured way, so it is not based on any
methodological approach.

That's why Adhoc testing is a type of unstructured software testing.

Adhoc testing has –

• No Documentation.

• No Test cases.

• No Test Design.
As it is not based on any test cases and does not require documentation or test design, resolving
issues that are identified late becomes very difficult for developers.
Adhoc testing saves a lot of time. A good example of Adhoc testing is when the client needs the
product by 6 PM today but the product development will be completed only at 4 PM the same day.

So only a limited time, i.e. 2 hours, is in hand; within those 2 hours the developer and tester team can
test the system as a whole by taking some random inputs and can check for any errors.
Types of Adhoc Testing:
Adhoc testing is divided into three types as follows.

1. Buddy Testing –
Buddy testing is a type of Adhoc testing where two people are involved, one from the development
team and one from the testing team. After a module is completed and its unit testing is done,
the tester can test it by giving random inputs and the developer can fix the issues early, based on the
currently designed test cases.
2. Pair Testing –
Pair testing is a type of Adhoc testing where two people from the testing team are involved in testing
the same module.
One tester performs the random tests while the other tester maintains the record of the
findings.
When two testers are paired they exchange their ideas, opinions and knowledge, so good
testing is performed on the module.
3. Monkey Testing –
Monkey testing is a type of Adhoc testing in which the system is tested based on random inputs
without any test cases; the behavior of the system is tracked and it is monitored whether all the
functionalities of the system are working or not. As a randomness approach is followed and there is
no constraint on inputs, it is called Monkey testing.
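A tiny monkey-testing sketch is shown below (illustrative; the function under test is hypothetical and
has a deliberately planted defect). Random, unconstrained inputs are generated with no predefined
test cases, and the harness only watches for unexpected crashes.

// Illustrative monkey test: feed random inputs to a hypothetical function and
// record any unexpected exception (crash) that it throws.
import java.util.Random;

public class MonkeyTest {

    // Hypothetical unit under test: parses an "age" field and classifies it.
    // Planted defect: an all-whitespace input makes charAt(0) throw an unexpected exception.
    static String classifyAge(String input) {
        String trimmed = input.trim();
        if (trimmed.charAt(0) == '+') trimmed = trimmed.substring(1);
        int age = Integer.parseInt(trimmed);
        if (age < 0 || age > 150) throw new IllegalArgumentException("out of range: " + age);
        return age < 18 ? "minor" : "adult";
    }

    public static void main(String[] args) {
        Random random = new Random(42);            // fixed seed so failures can be reproduced
        String alphabet = "0123456789 -abc";
        int failures = 0;

        for (int run = 0; run < 1000; run++) {
            StringBuilder input = new StringBuilder();
            int length = 1 + random.nextInt(5);
            for (int i = 0; i < length; i++) {
                input.append(alphabet.charAt(random.nextInt(alphabet.length())));
            }
            try {
                classifyAge(input.toString());
            } catch (IllegalArgumentException expected) {
                // rejecting invalid input is acceptable behavior, not a defect
            } catch (RuntimeException unexpected) {
                failures++;
                System.out.println("Unexpected " + unexpected.getClass().getSimpleName()
                        + " for input \"" + input + "\"");
            }
        }
        System.out.println("Monkey test finished with " + failures + " unexpected failures");
    }
}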



Characteristics of Adhoc Testing :

• Adhoc testing is performed randomly.

• Based on no documentation, no test cases and no test designs.

• It is done after formal testing.

• It follows an unstructured way of testing.

• It takes comparatively lesser time than other testing techniques.

• It is good for finding bugs and inconsistencies which are not mentioned in test cases.

When to conduct Adhoc testing :

• When there is limited time in hand to test the system.

• When there are no clear test cases to test the product.

• When formal testing is completed.

• When the development is mostly complete.

Advantages of Adhoc testing :

• The errors which can not be identified with written test cases can be identified by Adhoc testing.

• It can be performed within very limited time.

• Helps to create unique test cases.

• This test helps to build a strong product which is less prone towards future problems.

• This testing can be performed any time during Software Development Life Cycle Process
(SDLC)

Five practices to follow to conduct Adhoc testing :

1. Good Software Knowledge.

2. Find Out Error-Prone Areas.

3. Prioritize Test Areas.

4. Roughly Plan The Test Plan.


5. Use of the right kind of tools.
ALPHA TESTING & BETA TESTING
Alpha testing – An in-house virtual user environment can be created for this type of testing.
Testing is done at the end of development.
Minor design changes may still be made as a result of such testing.
• Alpha Testing (Verification testing)
– real-world operating environment
– simulated data, in a lab setting
– systems professionals present
• observers record errors, usage problems, etc.

Example of alpha testing:


For example, suppose that a software product was intended to support many simultaneous users.
Alpha testing might include load tests to ensure the underlying code and physical architecture can
support the product's functionality under various conditions.

Types of alpha testing are there:


Types of testing done by the tester in the Alpha phase include:
Smoke Testing, Integration Testing, System Testing, UI and Usability Testing, Functional Testing,
Security Testing, Performance Testing, Regression Testing, Sanity Testing and Acceptance Testing.

Beta testing
Beta testing – Testing typically done by end-users or others.
Final testing before releasing application for commercial
purpose.
• Beta Testing (Validation Testing)
– live environment, using real data
– no systems professional present
– performance (throughput, response-time)
– peak workload performance, human factors test, methods and procedures, backup and
recovery - audit test

Example of a beta test:


For example, Microsoft conducted the largest of all beta tests for its operating system Windows 8
before officially releasing it.

Technical Beta Testing:
The product is released to a group of employees of an organization, and feedback/data is collected
from the employees of the organization.

Types of beta testing are there:


There are various types of Beta testing like
1. traditional beta testing,
2. public beta testing,
3. technical beta testing, and
4. focused and post-release beta testing

steps in beta testing:

How to perform Beta Testing for Apps?


1. Fix the number and type of testers required. ...
2. Set a time limit for testing. ...
3. Locate your beta testers. ...
4. Release the App's Beta Version. ...
5. Keep the testers engaged. ...
6. Seek valuable feedback. ...
7. Implement the changes.

Alpha Testing Beta Testing

Alpha testing involves both the white box and black box testing. Beta testing commonly uses black-box testing.

Alpha testing is performed by testers who are usually internal employees of the organization. Beta testing is performed by clients who are not part of the organization.

Alpha testing is performed at the developer's site. Beta testing is performed at the end-user's site.

Reliability and security testing are not checked in alpha testing. Reliability, security and robustness are checked during beta testing.



Alpha testing ensures the quality of the product before forwarding to beta testing. Beta testing also concentrates on the quality of the product, but collects users' input on the product and ensures that the product is ready for real-time users.

Alpha testing requires a testing environment or a lab. Beta testing doesn’t require a testing environment or lab.

Alpha testing may require a long execution cycle. Beta testing requires only a few weeks of execution.

Developers can immediately address the critical issues or fixes in alpha testing. Most of the issues or feedback collected from the beta testing will be implemented in future versions of the product.

Multiple test cycles are organized in alpha testing. Only one or two test cycles are there in beta testing.

System Testing
Definition: System testing is defined as testing of a complete and fully integrated software product.
This testing falls under black-box testing, wherein knowledge of the inner design of the code is not a
prerequisite, and the testing is done by the testing team.

System Testing – Approach


• It is carried out after the Integration Testing has been finished.
• It is mostly a sort of Black-box testing.
• With the use of a specification document, this testing assesses the system's functionality from the
perspective of the user.
• It does not need any internal system expertise, such as code design or structure.
It includes both functional and non-functional application/product domains.
• It is basically defined as a type of testing which verifies that each function of the software
application works in conformance with the requirement and specification.
• This testing is not concerned about the source code of the application.
• Each functionality of the software application is tested by providing appropriate test input,
expecting the output and comparing the actual output with the expected output.

• Non-functional testing is defined as a type of software testing to check non-functional aspects
of a software application.

It is designed to test the readiness of a system as per nonfunctional parameters which are never
addressed by functional testing. Non-functional testing is as important as functional testing.



Non-Functional Testing Techniques
• Compatibility testing: A type of testing to ensure that a software program or system is
compatible with other software programs or systems.
• Compliance testing: A type of testing to ensure that a software program or system meets a
specific compliance standard, such as HIPAA or Sarbanes-Oxley.
• Endurance testing: A type of testing to ensure that a software program or system can handle a
long-term, continuous load.
• Load testing: A type of testing to ensure that a software program or system can handle a large
number of users or transactions.

• Performance testing: A type of testing to ensure that a software program or system meets
specific performance goals, such as response time or throughput.
• Recovery testing: A type of testing to ensure that a software program or system can be
recovered from a failure or data loss.
• Security testing: A type of testing to ensure that a software program or system is secure from
unauthorized access or attack.
• Scalability testing: A type of testing to ensure that a software program or system can be scaled up
or down to meet changing needs.
• Stress testing: A type of testing to ensure that a software program or system can handle an
unusually high load.

• Usability testing: A type of testing to ensure that a software program or system is easy to use.
• Volume testing: A type of testing to ensure that a software program or system can handle a large volume of data.
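As a rough illustration of load testing from the list above, the sketch below fires a batch of concurrent requests at an assumed endpoint and checks an assumed error-rate and response-time target. It is a toy stand-in for dedicated tools such as JMeter or LoadRunner; the URL, user count, and time budget are hypothetical.

```python
# Minimal load-testing sketch: simulate concurrent users and check assumed goals.

import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://example.com/api/health"   # hypothetical endpoint under test
USERS = 50                                # simulated concurrent users
MAX_AVG_SECONDS = 2.0                     # assumed performance goal


def single_request(_):
    start = time.perf_counter()
    try:
        ok = requests.get(URL, timeout=10).status_code == 200
    except requests.RequestException:
        ok = False
    return ok, time.perf_counter() - start


if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=USERS) as pool:
        results = list(pool.map(single_request, range(USERS)))

    failures = sum(1 for ok, _ in results if not ok)
    avg_time = sum(t for _, t in results) / len(results)
    print(f"failures={failures}, average response={avg_time:.2f}s")
    assert failures == 0 and avg_time <= MAX_AVG_SECONDS
```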



Functional Testing vs Non-functional Testing

• Functional testing verifies the operations and actions of an application; non-functional testing verifies the behavior of an application.
• Functional testing is based on the requirements of the customer; non-functional testing is based on the expectations of the customer.
• Functional testing helps to enhance the behavior of the application; non-functional testing helps to improve the performance of the application.
• Functional testing is easy to execute manually; non-functional testing is hard to execute manually.
• Functional testing tests what the product does; non-functional testing describes how the product does it.
• Functional testing is based on the business requirements; non-functional testing is based on the performance requirements.
• Examples of functional testing: Unit Testing, Smoke Testing, Integration Testing, Regression Testing. Examples of non-functional testing: Performance Testing, Load Testing, Stress Testing, Scalability Testing.

Here are a few common tools used for System Testing:


1. HP Quality Center/ALM
2. IBM Rational Quality Manager
3. Microsoft Test Manager
4. Selenium
5. Appium
6. LoadRunner
7. Gatling
8. JMeter
9. Apache JServ
10. SoapUI
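As an illustration of how one of these tools (Selenium, item 4 above) can drive a black-box system test, here is a hedged sketch. The URL, element locators, credentials, and expected title are assumptions for illustration only, not the API of any real product under test.

```python
# Hedged sketch of a black-box system test driven through Selenium WebDriver.

from selenium import webdriver
from selenium.webdriver.common.by import By


def test_login_end_to_end():
    driver = webdriver.Chrome()           # requires a local ChromeDriver setup
    try:
        driver.get("https://example.com/login")              # hypothetical page
        driver.find_element(By.NAME, "username").send_keys("demo_user")
        driver.find_element(By.NAME, "password").send_keys("demo_pass")
        driver.find_element(By.ID, "submit").click()
        # Verify observable behaviour only -- no knowledge of internal code.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```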

Advantages of System Testing:


• Verifies the overall functionality of the system.
• Detects and identifies system-level problems early in the development cycle.
• Helps to validate the requirements and ensure the system meets the user needs.
• Improves system reliability and quality.
• Facilitates collaboration and communication between development and testing teams.
• Enhances the overall performance of the system.
• Increases user confidence and reduces risks.
• Facilitates early detection and resolution of bugs and defects.
• Supports the identification of system-level dependencies and inter-module interactions.

Disadvantages of System Testing :


• This testing is a more time-consuming process than other testing techniques, since it checks the entire product or software.
• The cost of this testing will be high, since it covers the testing of the entire software.
• It needs a good debugging tool; otherwise, hidden errors will not be found.

Web testing
• Web testing is a software testing technique to test web applications or websites for finding errors and
bugs.
• A web application must be tested properly before it goes to the end-users.
• Also, testing a web application does not only mean finding common bugs or errors but also testing the quality-related risks associated with the application.
• Software Testing should be done with proper tools and resources and should be done effectively.
• We should know the architecture and key areas of a web application to effectively plan and
execute the testing.
• Testing a web application involves many of the same activities as testing any other application, such as testing functionality, configuration, or compatibility.
• Testing a web application also includes analysis of web-specific faults in addition to general software faults.
• Web applications are required to be tested on different browsers and platforms so that we can identify the areas that need special focus while testing a web application.

Types of Web Testing:


Basically, there are 4 types of web-based testing that are available and all four of them are discussed below:
• Static Website Testing: A static website is a type of website in which the content shown or displayed is exactly the same as it is stored on the server. This type of website has a great UI but does not have any dynamic features that a user or visitor can use.

• Dynamic Website Testing:


A dynamic website is a type of website that consists of both the frontend (i.e., the UI) and the backend of the website, such as a database.
• This type of website gets updated or changed regularly as per the user's requirements.
• In this website, there are a lot of functionalities involved, such as what a button does when it is pressed and whether error messages are shown properly at their defined time.

• E-Commerce Website Testing:


• An e-commerce website is very difficult to maintain, as it consists of many different pages and functionalities.
• In this testing, the tester or developer has to check various things, such as whether the shopping cart works as per the requirements and whether user registration and login functionality work properly.

• Mobile-Based Web Testing: In this testing, the developer or tester checks the website's compatibility on different devices, generally mobile devices, because many users open the website on their mobile devices.
• So, keeping that in mind, we must check that the site is responsive on all devices and platforms.

Website Testing
• Web Page Fundamentals
• Black-Box Testing
• Gray-Box Testing
• White-Box Testing
• Configuration and Compatibility Testing
• Usability Testing

Web Page Fundamentals


Internet Web pages are just documents of text, pictures, sounds, video, and hyperlinks.

Web page features:
• Text of different sizes, fonts, and colors
• Graphics and photos
• Hyperlinked text and graphics
• Varying advertisements
• Drop-down selection boxes
• Fields in which the users can enter data

Features that make the Web site much more complex:
• Customizable layout that allows users to change where information is positioned onscreen
• Customizable content that allows users to select what news and information they want to see
• Dynamic drop-down selection boxes
• Dynamically changing text
• Dynamic layout and optional information based on screen resolution
• Compatibility with different Web browsers, browser versions, and hardware and software platforms
• Lots of hidden formatting, tagging, and embedded information that enhances the Web page's usability

Testing techniques that apply to Web page testing:
• basic white-box and black-box techniques
• configuration and compatibility testing
• usability testing
1) Black-Box Testing
Consider the screen image of Apple's Web site, www.apple.com, a fairly straightforward and typical Web site. It has all the basic elements: text, graphics, hyperlinks to other pages on the site, and hyperlinks to other Web sites.

The easiest place to start is by treating the Web page or the entire Web site as a black box. What would you test? What would you choose not to test?

When testing a Web site, you first should create a state table, treating each page as a different state with the hyperlinks as the lines connecting them. A completed state map will give you a better view of the overall task (a small crawling sketch follows below).

Web pages are made up of just text, graphics, links, and the occasional form. Testing them isn't difficult.
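The following sketch shows one way such a state table could be built automatically, assuming the requests and BeautifulSoup libraries are available: each page is a state and each hyperlink is a recorded transition. The start URL and the crawl limit are illustrative assumptions.

```python
# Sketch: build a rough "state table" of pages and outgoing hyperlinks.

from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

START = "https://www.apple.com/"    # site used as the example above
LIMIT = 20                          # stop after a handful of pages for the sketch


def build_state_table(start_url, limit=LIMIT):
    state_table = {}                         # page URL -> list of outgoing links
    to_visit, seen = [start_url], set()
    while to_visit and len(seen) < limit:
        url = to_visit.pop()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            state_table[url] = ["<unreachable>"]
            continue
        links = [urljoin(url, a["href"])
                 for a in BeautifulSoup(html, "html.parser").find_all("a", href=True)]
        state_table[url] = links
        to_visit.extend(link for link in links if link.startswith(start_url))
    return state_table


if __name__ == "__main__":
    for page, links in build_state_table(START).items():
        print(page, "->", len(links), "links")
```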

Text
Check:
• the audience level,
• the terminology,
• the content and subject matter,
• the accuracy, especially of information that can become outdated,
• the spelling, and
• that each page has a correct title.

An often overlooked type of text is called ALT text, for ALTernate text. The figure shows an example of ALT text. When a user puts the mouse cursor over a graphic on the page, he gets a pop-up description of what the graphic represents. Web browsers that don't display graphics use ALT text. Also, with ALT text, blind users can use graphically rich Web sites: an audible reader interprets the ALT text and reads it out through the computer's speakers.
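A basic ALT-text check can be automated. The sketch below, using requests and BeautifulSoup against an assumed URL, flags every image on a page that has no ALT text or an empty one; the URL is a hypothetical placeholder.

```python
# Small sketch of an automated ALT-text check for a single page.

import requests
from bs4 import BeautifulSoup

URL = "https://example.com/"    # hypothetical page under test

html = requests.get(URL, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

missing_alt = [img.get("src", "<inline image>")
               for img in soup.find_all("img")
               if not img.get("alt", "").strip()]

print(f"{len(missing_alt)} image(s) without ALT text")
for src in missing_alt:
    print("  missing alt:", src)
```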

Hyperlinks



Links can be tied to text or graphics. Each link should be checked to make sure that it jumps to the correct destination and opens in the correct window.

Check:
• Text links are usually underlined, and the mouse pointer should change to a hand pointer when it's over any kind of hyperlink, whether text or graphic.
• Look for orphan pages, which are part of the Web site but can't be accessed through a hyperlink.
• Do all graphics load and display properly? If a graphic is missing or is incorrectly named, it won't load, and the Web page will display an error where the graphic was to be placed.
• If text and graphics are intermixed on the page, make sure that the text wraps properly around the graphics. Try resizing the browser's window to see if strange wrapping occurs around the graphic.
• How's the performance of loading the page? Are there so many graphics on the page, resulting in a large amount of data to be transferred and displayed, that the Web site's performance is too slow? What if it's displayed over a slow dial-up modem connection on a poor-quality phone line?

If a graphic can't load onto a Web page, an error box is put in its location.
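The link and graphics checks above can be partially automated. The sketch below, under the assumption that requests and BeautifulSoup are available, verifies that every hyperlink and image target on a page resolves and times how long the page itself takes to fetch; the URL and the 5-second budget are assumptions.

```python
# Hedged sketch: broken-link / missing-graphic check plus a crude load-time check.

import time
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

URL = "https://example.com/"     # hypothetical page under test
LOAD_BUDGET_SECONDS = 5.0        # assumed acceptable load time

start = time.perf_counter()
response = requests.get(URL, timeout=15)
load_time = time.perf_counter() - start
print(f"page loaded in {load_time:.2f}s (budget {LOAD_BUDGET_SECONDS}s)")

soup = BeautifulSoup(response.text, "html.parser")
targets = [urljoin(URL, tag.get("href") or tag.get("src"))
           for tag in soup.find_all(["a", "img"])
           if tag.get("href") or tag.get("src")]

for target in targets:
    try:
        status = requests.head(target, allow_redirects=True, timeout=10).status_code
    except requests.RequestException:
        status = None
    if status is None or status >= 400:
        print("broken link or missing graphic:", target)
```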

Forms
Forms are the text boxes, list boxes, and other fields for entering or selecting information on a Web page. In the example, a signup form for potential Mac developers has fields for entering your first name, middle initial, last name, and email address.



Make sure your Web site's form fields are positioned properly. Notice in this Apple Developer signup form that the middle initial (M.I.) field is misplaced (a form-test sketch follows below).
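Here is a hedged sketch of a black-box form test with Selenium: fill each field of a signup form and confirm a visible success message. The URL, field names, and the confirmation text are hypothetical stand-ins, not the real form shown above.

```python
# Sketch of a black-box form test: fill fields, submit, check the visible result.

from selenium import webdriver
from selenium.webdriver.common.by import By


def test_signup_form_accepts_valid_input():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/signup")              # hypothetical form
        driver.find_element(By.NAME, "first_name").send_keys("Ada")
        driver.find_element(By.NAME, "middle_initial").send_keys("M")
        driver.find_element(By.NAME, "last_name").send_keys("Lovelace")
        driver.find_element(By.NAME, "email").send_keys("ada@example.com")
        driver.find_element(By.ID, "submit").click()
        assert "Thank you" in driver.page_source               # assumed confirmation
    finally:
        driver.quit()
```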

Configuration Testing

Configuration Testing is the process of testing the system under each configuration of the supported software
and hardware.

Here, the different configurations of hardware and software means the multiple operating system
versions, various browsers, various supported drivers, distinct memory sizes, different hard drive types,
various types of CPU etc.




Objectives of Configuration Testing:
The objective of configuration testing is:

• To determine whether the software application fulfills the configurability requirements.
• To identify the defects that were not efficiently found during different testing processes.
• To determine an optimal configuration of the application under test.


• To analyze the performance of the software application by changing the hardware and software resources.
• To analyze the system efficiency based on prioritization.
• To verify how easily bugs can be reproduced irrespective of the configuration changes.

Configuration Testing Process

Various configurations:
• Operating System Configuration: Win XP, Win 7 32/64 bit, Win 8 32/64 bit, Win 10, etc.
• Database Configuration: Oracle, DB2, MySQL, MS SQL Server, Sybase, etc.
• Browser Configuration: IE 8, IE 9, FF 16.0, Chrome, Microsoft Edge, etc.
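One common way to run the same test across such a configuration matrix is to parameterize it. The sketch below uses pytest and Selenium to repeat a single check in several browsers; the browser list mirrors the configurations above, and the application URL and expected title are assumptions.

```python
# Sketch: one test repeated across several browser configurations via pytest.

import pytest
from selenium import webdriver

BROWSERS = ["chrome", "firefox", "edge"]     # configurations under test


def make_driver(name):
    # Each branch assumes the matching driver binary is installed locally.
    if name == "chrome":
        return webdriver.Chrome()
    if name == "firefox":
        return webdriver.Firefox()
    return webdriver.Edge()


@pytest.mark.parametrize("browser", BROWSERS)
def test_home_page_loads_in_each_browser(browser):
    driver = make_driver(browser)
    try:
        driver.get("https://example.com/")     # hypothetical application URL
        assert "Example" in driver.title        # same expected result per config
    finally:
        driver.quit()
```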

Types of Configuration Testing:


Configuration testing is of 2 types:
1. Software Configuration Testing:
Software configuration testing is done over the Application Under Test with various operating system versions, various browser versions, etc. It is time-consuming testing, as it takes a long time to install and uninstall the various software to be used for testing. When the build is released, software configuration testing begins after the build has passed through the unit test and integration test.

2. Hardware Configuration Testing:


Hardware configuration testing is typically performed in labs where physical machines are
used with various hardware connected to them.

When a build is released, the software is installed in all the physical machines to which the
hardware is attached and the test is carried out on each and every machine to confirm that the
application is working fine

Configuration testing can also be classified into the following 2 types:


1. Client level Testing:
Client level testing is associated with usability and functionality testing. This testing is done from the point of view of the direct interest of the users.
2. Server level Testing:
Server level testing is carried out to determine the communication between the software and the external environment when it is planned to be integrated after the release.

Compatibility Testing:

• Compatibility testing is software testing which comes under the non-functional testing category; it is performed on an application to check its compatibility (running capability) on different platforms/environments.
• This testing is done only when the application becomes stable.
• Simply put, this compatibility test aims to check the developed software application's functionality on various software and hardware platforms, networks, browsers, etc.
• This compatibility testing is very important from a product production and implementation point of view, as it is performed to avoid future issues regarding compatibility.

Types of Compatibility Testing :


Several examples of compatibility testing are given below.
1. Software :
• Testing the compatibility of an application with an Operating System like Linux, Mac, Windows
• Testing compatibility on Database like Oracle SQL server, MongoDB server.
• Testing compatibility on different devices like in mobile phones, computers.

Types based on Version Testing :


There are two types of compatibility testing based on version testing
1. Forward compatibility testing : When the behavior and compatibility of a software or
hardware is checked with its newer version then it is called as forward compatibility testing.
2. Backward compatibility testing : When the behavior and compatibility of a software or
hardware is checked with its older version then it is called as backward compatibility testing.
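To illustrate backward compatibility in code, here is a hedged sketch in which an assumed current-version settings loader must still read a file written by an assumed older release. The file format, key names, and loader function are hypothetical.

```python
# Hedged backward-compatibility sketch: the current parser must still accept
# a settings file produced by an assumed older release of the product.

import json


def load_settings(raw: str) -> dict:
    """Assumed current-version loader: tolerates the old flat key names."""
    data = json.loads(raw)
    # Old releases stored "theme" at the top level; newer releases nest it under "ui".
    if "ui" not in data and "theme" in data:
        data = {"ui": {"theme": data["theme"]}}
    return data


def test_backward_compatibility_with_v1_settings():
    v1_file = '{"theme": "dark"}'             # format produced by the older version
    assert load_settings(v1_file)["ui"]["theme"] == "dark"


def test_current_format_still_works():
    v2_file = '{"ui": {"theme": "light"}}'
    assert load_settings(v2_file)["ui"]["theme"] == "light"
```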

2. Hardware :
Checking compatibility with a particular size of
• RAM
• ROM
• Hard Disk
• Memory Cards
• Processor
• Graphics Card

3.Smartphones :
Checking compatibility with different mobile platforms like android, iOS etc.
4.Network :
Checking compatibility with different :
• Bandwidth
• Operating speed
• Capacity

Testing the application with the same version but in different environments.


For example, to test the compatibility of the Facebook application on your Android mobile, first check the compatibility of a lower version of the Facebook application with Android 10.0 (or a version of your choice), and then check a higher version of the Facebook application with the same version of Android.

Why compatibility testing is important ?


1. It ensures complete customer satisfaction.
2. It provides service across multiple platforms.
3. It identifies bugs during the development process.
Compatibility testing defects:
1. Variety of user interface.
2. Changes with respect to font size.
3. Alignment issues.
4. Issues related to the existence of broken frames.
5. Issues related to overlapping of content.



Usability and Accessibility testing

Accessibility Testing
• Accessibility Testing is a type of software testing that measures the degree of ease with which individuals with certain disabilities can use a software application.
• It is performed to ensure that any new component can easily be accessible by physically
disabled individuals despite any respective handicaps.
• Accessibility testing is part of the system testing process and is somehow similar to usability
testing.
• In the accessibility testing process, the tester uses the system or component as it would be used by
individuals with disabilities.
• Individuals can have disabilities like visual disability, hearing disability, learning disability, or non-functional organs.
• Accessibility testing is a subset of usability testing where in the users under consideration are
specific people with disabilities.
This testing focuses on verifying both usability and accessibility. Some examples of such assistive software are:
• Speech recognition software: This software changes the spoken words to text and works as an
input to the computer system.
• Screen reader software: This software is used to help low vision or blind individuals to read the text
on the screen with a braille display or voice synthesizer.
• Screen magnification software: This software is used to help vision-impaired persons as it will
enlarge the text and objects on the screen, thus making reading easier.
• Special keyboard: There are some specially designed keyboards for individuals with motor control problems. These keyboards help them to type quickly.

Factors to Measure Web Accessibility
• Pop-ups: Pop-ups can confuse visually disabled users. The screen reader reads out the page from top to bottom, and if a sudden pop-up arrives, the reader will start reading it first, before the actual content.
• Language: It is very important to make sentences simple and easily readable for cognitively
disabled users as they have learning difficulties.
• Navigation: It is important to maintain the consistency of the website and not to modify the web
pages on a regular basis. Adjusting to new layouts is time-consuming.
• Marque text: It is best practice to avoid shiny text and keep the text on the website simple.

Purpose of Accessibility Testing


1. Cater to the market of disabled people: To serve the market of people with disabilities, such as individuals who are blind, deaf, or otherwise handicapped, and to support social inclusion for people with disabilities as well as other categories of people, such as older people and people living in rural areas.

2. Abide by accessibility legislation: Government agencies have come out with legislation that requires IT products to be accessible to disabled people. Some of the legal acts by various government agencies…
3. Avoid potential lawsuits: In the past, a few companies like Netflix, Blue Apron, and Winn-Dixie were sued because their products were not disabled-friendly.



Example of Accessibility Testing
Types of Disability
These are the following types of disability:
• Visual Impairment:
• Physical Impairment:
• Hearing Impairment:
• Cognitive Impairment:
• Learning Impairment

How to Perform Accessibility Testing:


Accessibility Testing can be performed in 2 ways:
1. Manual
There are various tools available in the market to test the accessibility of a software application, but the available tools may be highly costly and/or may not meet the skill requirements. Therefore, manual testing is performed to check the accessibility of the software product.

For example:
1. Test brightness of software:
2. Test the sound of software:
3. Testing for captions:
4. Modifying font size to large:
5. Use high contrast mode:
6. Turning off cascading style sheet (CSS):
7. Use field label:
8. Testing zooming:
9. Skip Navigation:

2. Automated
Automation is widely used in different testing techniques. In the automated process, there are
several automated tools for accessibility testing. These tools include:
• WebAnywhere: It is a screen reader tool and it requires no special installation.
• Hera: It is used to check the style of the software application.
• aDesigner: This tool is useful for testing the software from the viewpoint of visually impaired
people.
• Vischeck: This tool helps to reproduce the image in various forms and helps to visualize how the
image will look when it is accessed by different types of users.
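Alongside such tools, a very small automated check can be scripted directly. The sketch below is an assumption-laden, far less capable stand-in for the tools above: it flags missing ALT text, inputs without labels, and a missing page language attribute on an assumed URL.

```python
# Minimal accessibility audit sketch using requests and BeautifulSoup.

import requests
from bs4 import BeautifulSoup

URL = "https://example.com/"    # hypothetical page under test

soup = BeautifulSoup(requests.get(URL, timeout=10).text, "html.parser")
issues = []

# Screen readers rely on the page language declaration.
if not (soup.html and soup.html.get("lang")):
    issues.append("page has no lang attribute for screen readers")

# Images without ALT text are unreadable to audible readers.
issues += [f"image without alt text: {img.get('src', '?')}"
           for img in soup.find_all("img") if not img.get("alt")]

# Form inputs should be associated with a <label for="..."> element.
labelled_ids = {lbl.get("for") for lbl in soup.find_all("label")}
issues += [f"input without a label: {inp.get('name', '?')}"
           for inp in soup.find_all("input")
           if inp.get("type") not in ("hidden", "submit")
           and inp.get("id") not in labelled_ids]

print("\n".join(issues) or "no obvious accessibility issues found")
```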

Usability testing

What is usability in software testing?


Usability testing refers to evaluating a product or service by testing it with representative users. Typically, during a test, participants try to complete typical tasks while observers watch, listen, and take notes.

Usability Testing also known as User Experience (UX) Testing, is a testing method for
measuring how easy and user-friendly a software application is.



A small set of target end-users use the software application to expose usability defects.
Usability testing mainly focuses on the user's ease of using the application, the flexibility of the application in handling controls, and the ability of the application to meet its objectives.

main purpose of usability testing:


The primary purpose of a usability test is to gather the data needed to identify usability issues and
improve a website's or app's design.

Prepare your product or design to test:


• The first phase of usability testing is choosing a product and then making it ready for usability testing.
• Usability testing requires more functions and operations to be in place than earlier phases, and this phase provides for that type of requirement.
• Hence this is one of the most important phases in usability testing.

Find your participants:


• The second phase of usability testing is finding the participants who will help you perform usability testing.
• Generally, the number of participants that you need is based on the number of case studies.
• Generally, five participants are able to find almost as many usability problems as you'd find using many more test participants.
.:
Write a test plan:
• This is the third phase of usability testing. One of the first steps in each round of usability testing is to develop a plan for the test.
• The main purpose of the plan is to document what you are going to do, how you are going to conduct
the test, what metrics you are going to find, the number of participants you are going to test, and
what scenarios you will use.

Take on the role of the moderator:


• This is the fourth phase of usability testing and here the moderator plays a vital role that involves
building a partnership with the participant.
• Most of the research findings are derived by observing the participant's actions and gathering verbal feedback. To be an effective moderator, you need to be able to make instant decisions while simultaneously overseeing various aspects of the research session.

Present your findings/ final report:


• This phase generally involves combining your results into an overall score and presenting it
meaningfully to your audience.
• An easy method to do this is to compare each data point to a target goal and represent this as one
single metric based on a percentage of users who achieved this goal.
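As a worked example of that single metric, the snippet below computes the task-completion percentage for each task against an assumed 80% goal. The tasks and pass/fail outcomes are hypothetical session data for five participants.

```python
# Worked example: task-completion rate per task versus an assumed target goal.

results = {                        # hypothetical session data for 5 participants
    "create account":    [True, True, True, False, True],
    "find price page":   [True, False, True, False, True],
    "complete checkout": [True, True, False, True, True],
}
TARGET = 0.80                      # assumed goal: 80% task completion

for task, outcomes in results.items():
    completion_rate = sum(outcomes) / len(outcomes)
    status = "meets goal" if completion_rate >= TARGET else "below goal"
    print(f"{task:<18} {completion_rate:.0%}  ({status})")
```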

Methods of Usability Testing: 2 Techniques


There are two methods available to do usability testing –
1. Laboratory Usability Testing
2. Remote Usability Testing



Laboratory Usability Testing:
This testing is conducted in a separate lab room in the presence of observers. The testers are assigned tasks to execute. The role of the observer is to monitor the behavior of the testers and report the outcome of the testing. The observer remains silent during the course of testing. In this testing, both observers and testers are present in the same physical location.

Remote Usability Testing:


Under this testing, observers and testers are remotely located.
Testers access the System Under Test remotely and perform assigned tasks. The tester's voice, screen activity, and facial expressions are recorded by automated software.
Observers analyze this data and report the findings of the test.

Example Usability Testing Test Cases

Usability Testing Advantages


As with anything in life, usability testing has its merits and de-merits. Let’s look at them
• It helps uncover usability issues before the product is marketed.
• It helps improve end-user satisfaction
• It makes your system highly effective and efficient
• It helps gather true feedback from your target audience who actually use your system during a usability test. You do not need to rely on "opinions" from random people.
