Unit-3 STA 2024
The need for Levels of Testing – Unit Test – Unit Test Planning – Designing the Unit Tests – The Test Harness – Running the Unit Tests and Recording Results – Integration Tests – Designing Integration Tests – Integration Test Planning – System Testing – Acceptance Testing – Performance Testing – Regression Testing – Internationalization Testing – Ad-hoc Testing – Alpha, Beta Tests – Usability and Accessibility Testing – Configuration Testing – Compatibility Testing – Website Testing.
• Suppose the shape superclass has a subclass, triangle, and triangle has a subclass, equilateral triangle. Also suppose that the method display in shape needs to call the method color for its operation.
• Equilateral triangle could have a local definition for the method display. That definition may use the version of color which has been defined in triangle.
• This local definition of the color method in triangle has been tested to work with the inherited display method in shape, but not with the locally defined display in equilateral triangle.
• This is a new context that must be retested. A set of new test cases should be developed.
• The status of the test efforts for a unit and a summary of test results must be recorded in a unit test worksheet.
• It is very important that the tester, at any level of testing, carefully record, review and check test results.
• The tester must determine from the results whether the unit has passed or failed the test
• If a test fails, the nature of the problem should be recorded in what is sometimes called the test incident report.
• Differences from expected behavior should be described. When a unit fails a test there may be several reasons for the failure:
• A fault in the unit implementation
• A fault in the test case specification (the input or the output was not specified correctly)
• A fault in test procedure execution (the test should be rerun)
• A fault in the test environment (perhaps a database was not set up properly)
• A fault in the unit design (the code correctly adheres to the design specification, but the latter is incorrect)
• When a unit has been completely tested and finally passes all of the required tests it is ready for
integration
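As a minimal, hypothetical sketch (not taken from the source text) of what a unit test and its recorded outcome look like, the fragment below uses Python's unittest. The function discount_price is an invented unit under test; the runner's pass/fail summary is the kind of result a tester would enter in the unit test worksheet or a test incident report.

```python
import unittest

def discount_price(price, percent):
    """Hypothetical unit under test: apply a percentage discount."""
    if not (0 <= percent <= 100):
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class TestDiscountPrice(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(discount_price(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount_price(200.0, 150)

if __name__ == "__main__":
    # The runner's summary (tests run, failures, errors) is what gets
    # recorded in the unit test worksheet or the test incident report.
    unittest.main(verbosity=2)
```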
Integration Test-Goals
• Integration test for procedural code has two major goals:
• To detect defects that occur on the interfaces of units
• To assemble the individual units into working subsystems and finally a complete system that is ready for system test.
• In unit test the tester attempts to detect defects that are related to the functionality and structure of
the unit.
• Some simple unit interfaces are more adequately tested during integration test, when each unit is finally connected to a working implementation of the units it calls and the units that call it. With a few minor exceptions, integration test should only be performed on units that have successfully passed unit testing.
• A tester might believe erroneously that since a unit has already been tested during unit test with drivers and stubs, it does not need to be retested in combination with other units during integration test.
• Integration testing works best as an iterative process for procedure-oriented systems.
• One unit at a time is integrated into a set of previously integrated modules which have passed a set of integration tests.
• The interface and functionality of the new unit, in combination with the previously integrated units, are tested.
• When a subsystem is built from units integrated in this stepwise manner, then performance, security and stress tests can be performed on this subsystem.
• Integrating one unit at a time helps the tester in several ways.
• It keeps the number of new interfaces to be examined small, so that the tester can focus on these interfaces only.
• Experienced testers know that many defects occur at module interfaces.
• Another advantage is that the massive failures that often occur when multiple units are integrated at once are avoided.
• The approach also helps the developers; it allows defect search and repair to be confined to a small, known number of components and interfaces.
• The integration process in object-oriented systems is driven by the assembly of the classes into cooperating groups.
• The cooperating groups of classes are tested as a whole and then combined into higher level
groups.
Designing Integration tests
• Integration tests can be designed using a black box or white box approach; some unit test cases can be reused.
• Since many errors occur at module interfaces, test designers need to focus on exercising all input/output parameter pairs and all calling relationships.
• The tester needs to ensure that the parameters are of the correct type and in the correct order.
• The author has had the personal experience of spending many hours trying to locate a fault that was caused by parameters being passed in the wrong order.
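A hedged sketch of this kind of interface fault, with invented function names: the caller passes its arguments to compute_shipping in the wrong order. A unit test that stubs the callee would not notice this, but an integration test exercising the real caller/callee pair fails and exposes the defect.

```python
# Callee: expects (weight_kg, distance_km).
def compute_shipping(weight_kg, distance_km):
    return 2.0 * weight_kg + 0.5 * distance_km

# Caller: interface defect -- arguments passed in the wrong order.
def order_total(item_price, weight_kg, distance_km):
    return item_price + compute_shipping(distance_km, weight_kg)

def test_order_total_integration():
    # Integration test exercising the real caller/callee interface.
    # Expected: 100 + (2.0*3 + 0.5*40) = 126.0, but the swapped
    # parameters yield 100 + (2.0*40 + 0.5*3) = 181.5, so the test fails.
    assert order_total(100.0, 3, 40) == 126.0

if __name__ == "__main__":
    try:
        test_order_total_integration()
        print("PASS")
    except AssertionError:
        print("FAIL: interface defect (parameter order) detected")
```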
Since integration is defined to be a set of interactions, all defined interactions among the components need to be tested. The architecture and design can give the details of interactions within the system; however, testing the interactions between one system and another system requires a detailed understanding of how they work together.
Integration Testing As a Type of Testing :-
Integration testing means testing of interfaces. They are:
Internal Interfaces – provide communication across two modules within a project or product; they are internal to the product and not exposed to the customer or external developers.
Exported or External Interfaces – exported interfaces are those that are visible outside the product to third-party developers and solution providers.
Integration testing as a type focuses on testing interfaces that are "implicit and explicit" and "internal and external".
[Figure: Diagram of Components 1–10 and their interactions, distinguishing explicit interfaces (documentation given) from implicit interfaces (no documentation given).]
Top-Down Integration:-
Integration testing involves testing the topmost component interface with other components in the same order as you navigate from top to bottom, till we cover all the components. To understand this methodology, we will assume a new product/software development where components become available one after another in the order of the component numbers specified. The integration starts with testing the interface between Component 1 and Component 2. To complete the integration testing, all interfaces mentioned, covering all the arrows, have to be tested together. The order in which the interfaces are to be tested is depicted in the table below. In an incremental product development, where one or two components get added to the product in each increment, the integration testing needs to cover only the newly added interfaces and the related interfaces they affect.
Steps   Interfaces Tested
1       1-2
2       1-3
3       1-4
4       1-2-5
5       1-3-6
For example, assume one component (Component 8) is added for the current release; then the integration testing for the current release needs to include steps 4, 7, 8 and 9.
To optimize the number of steps in integration (that is, to reduce the elapsed time), the following steps can be combined:
• Steps 6 and 7 can be executed as a single step.
Subsystem: a set of components and their related interfaces that together deliver a specific functionality is called a subsystem. For example, the components in steps 4, 6 and 8 can be considered subsystems.
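A minimal sketch of the top-down idea using hypothetical component names: the top component is first exercised against a stub standing in for a lower-level component; when the real component becomes available, the stub is replaced and the new 1-2 interface is tested.

```python
class Component2Stub:
    """Stub standing in for Component 2 until it is available."""
    def process(self, data):
        return "stubbed-result"

class Component2:
    """The real lower-level component, integrated later."""
    def process(self, data):
        return data.upper()

class Component1:
    """Top-level component; depends on a Component-2-like collaborator."""
    def __init__(self, worker):
        self.worker = worker

    def handle(self, data):
        return self.worker.process(data)

# Step 1: test Component 1 against the stub (top-down, before Component 2 exists).
assert Component1(Component2Stub()).handle("abc") == "stubbed-result"

# Step 2: replace the stub with the real Component 2 and test the 1-2 interface.
assert Component1(Component2()).handle("abc") == "ABC"
print("1-2 interface integrated and tested")
```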
Bottom-Up Integration:-
Bottom-up integration is just the opposite of top-down integration, where the components for a new product development become available in reverse order, starting from the bottom. Testing takes place from the bottom of the control flow upwards. Components or systems are substituted by drivers. The logic flow is from top to bottom and the integration path is from bottom to top. Navigation in bottom-up integration starts from Component 1, covering all subsystems, till Component 8 is reached. The order is listed in the table below; the number of steps in the bottom-up approach can be reduced by combining related steps.
[Figure: Bottom-up integration diagram of Components 1–8 showing the order in which the components are integrated.]
Once Components 6, 7 and 8 become available, the integration methodology focuses only on those components, as these are the components which need focus and are new.
Steps   Interfaces Tested
1       6-2
2       7-3-4
3       8-5
4       (1-6-2)-(1-7-3-4)-(1-8-5)
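For the bottom-up direction, the missing higher-level component is replaced by a test driver. A brief sketch with hypothetical names:

```python
class Component6:
    """Lower-level component that becomes available first."""
    def lookup(self, key):
        return {"a": 1, "b": 2}.get(key, 0)

def driver_for_component1(component6):
    """Test driver standing in for the not-yet-available Component 1:
    it calls Component 6 the way Component 1 eventually will."""
    assert component6.lookup("a") == 1
    assert component6.lookup("missing") == 0
    return "Component 6 exercised via driver"

print(driver_for_component1(Component6()))
```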
4. Certain components may take an excessive amount of time to be ready. This precludes testing other interfaces and wastes time till the end. The integration testing phase focuses on finding defects which predominantly arise because of combining various components for testing; it should not focus on defects within a single component or a few components, which are covered by component testing. Integration testing as a type focuses on testing the interfaces. It is a subset of the integration testing phase.
Integration Test Planning
• Integration test must be planned. Planning can begin when high-level design is complete so that the system architecture is defined.
• Other documents relevant to integration test planning are the requirements document, the user
manual, and usage scenarios.
• These documents contain structure charts, state charts, data dictionaries, cross-reference tables,
module interface descriptions, data flow descriptions, messages and event descriptions, all
necessary to plan integration tests.
• Consider the fact that the testing objectives are to assemble components into subsystems and to
demonstrate that the subsystem functions properly with the integration test cases.
• For object-oriented systems a working definition of a cluster or similar construct must be
described, and relevant test cases must be specified.
• In addition, testing resources and schedules for integration should be included in the test plan.
• Integration testing of clusters of classes also involves building test harnesses which in this case
are special classes of objects built especially for testing.
• Whereas in class testing we evaluated intraclass method interactions, at the cluster level we test interclass method interactions as well.
• A group of cooperating classes is selected for test as a cluster. If developers have used the Coad
and Yourdon’s approach, then a subject layer could be used to represent a cluster.
• Jorgenson et al. have reported on a notation for a cluster that helps to formalize object- oriented
integration.
The methods and the classes they belong to are connected into clusters of classes that are
represented by a directed graph that has two special types of entities
• These are method-message paths, and atomic systems functions that represent input port events.
• A method-message path is described as a sequence of method executions linked by
messages.
• An atomic system function is an input port event (start event) followed by a set of method-message paths and terminated by an output port event (system response).
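As a hedged illustration of a method-message path exercised at cluster level (the Account and ATM classes are invented for this example): an input port event triggers a sequence of method calls across two cooperating classes and terminates in an output port event, the system response.

```python
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

class ATM:
    """Cooperating class: ATM.dispense -> Account.withdraw -> response
    forms a method-message path."""
    def __init__(self, account):
        self.account = account

    def dispense(self, amount):                    # input port event (start)
        new_balance = self.account.withdraw(amount)
        return f"Dispensed {amount}, balance {new_balance}"   # output port event

# Cluster-level test exercising the interclass interaction as a whole.
atm = ATM(Account(100))
assert atm.dispense(40) == "Dispensed 40, balance 60"
```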
What is UAT?
• User Acceptance Testing (UAT) is a type of testing performed by the end user or the client to
verify/accept the software system before moving the software application to the production
environment.
• UAT is done in the final phase of testing after functional, integration and system testing is
done.
Purpose of UAT
The main purpose of UAT is to validate the end-to-end business flow. It does not focus on cosmetic errors, spelling mistakes or system testing. User Acceptance Testing is carried out in a separate testing environment with a production-like data setup. It is a kind of black box testing where two or more end-users will be involved.
UAT is performed by –
1. Client
2. End users
Performance testing
4. Spike testing:
It tests the product’s reaction to sudden large spikes in the load generated by users.
6. Scalability testing:
In scalability testing, the software application's effectiveness in scaling up to support an increase in user load is determined. It helps in planning capacity addition to your software system.
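A rough sketch, not from the source, of how a load/scalability check might be scripted: the same hypothetical operation (handle_request, simulated here with a short sleep) is driven at increasing levels of concurrency and the elapsed time is compared across load levels.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Hypothetical operation standing in for a real service call."""
    time.sleep(0.01)          # simulated processing time
    return True

def measure(concurrent_users, requests_per_user=20):
    """Run the workload with the given number of concurrent users and
    return the total elapsed time in seconds."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(handle_request)
                   for _ in range(concurrent_users * requests_per_user)]
        for f in futures:
            f.result()
    return time.perf_counter() - start

# Scalability check: how does elapsed time behave as the user load is scaled up?
for users in (1, 5, 25):
    print(f"{users:>3} users -> {measure(users):.2f} s")
```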
Internationalization Testing
• Internationalization testing is a process of ensuring the adaptability of software to different cultures and languages around the world without any modifications in the source code.
• It is also shortly known as i18n, in which 18 represents the number of characters in
between I & N in the word Internationalization.
• Content localization –
Localization of the static contents like labels, buttons, tabs and other fixed elements in
applications, and the dynamic contents like dialogue boxes, pop-ups, toolbars, etc.
• Local/Cultural Awareness –
Cultural awareness testing has to be done to ensure the appropriate rendering of time, date,
currencies, telephone numbers, zip codes, special events and festivals on calendars used in
different regions
• Feature-based Testing –
Several features of an application work for certain regional users and not for others. So those features should be hidden for non-applicable users, and they should be visible and functional to the users for whom they work. This is ensured by feature-based testing.
• File transferring and rendering –
Property files of different languages need to be tested to check whether the interface of the file transfer is localized as per the language selected. Rendering means providing or displaying contents (scripts) appropriately, without misalignment or random words.
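One internationalization check that is easy to automate is verifying that every locale's resource bundle defines the same message keys as the default locale. The sketch below uses invented, in-memory dictionaries standing in for real per-locale property files.

```python
# Hypothetical resource bundles; in practice these would be loaded
# from per-locale property or JSON files.
bundles = {
    "en": {"login.title": "Sign in", "login.button": "Submit"},
    "de": {"login.title": "Anmelden", "login.button": "Absenden"},
    "ja": {"login.title": "サインイン"},          # missing a key
}

default_keys = set(bundles["en"])
for locale, messages in bundles.items():
    missing = default_keys - set(messages)
    extra = set(messages) - default_keys
    if missing or extra:
        print(f"{locale}: missing={sorted(missing)} extra={sorted(extra)}")
    else:
        print(f"{locale}: OK")
```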
Regression testing:
Regression Testing is the process of testing the modified parts of the code and the parts that might get
affected due to the modifications to ensure that no new errors have been introduced in the software after the
modifications have been made.
Regression means return of something and in the software field, it refers to the return of a bug.
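A hedged example of the idea: after a hypothetical rounding bug in invoice_total is fixed, a regression test is added that pins the corrected behaviour. The whole suite is re-run after every later modification, so any change that reintroduces the bug fails the suite.

```python
import unittest

def invoice_total(amounts):
    """Hypothetical fixed version: rounds each line amount and the total
    to 2 decimals (an earlier, buggy release returned the raw float sum)."""
    return round(sum(round(a, 2) for a in amounts), 2)

class RegressionTests(unittest.TestCase):
    def test_rounding_regression(self):
        # Captures the once-failing case; rerun after every modification.
        # The raw float sum of thirty 0.1 values is 2.9999999999999996,
        # so the unrounded version would fail this assertion.
        self.assertEqual(invoice_total([0.1] * 30), 3.0)

if __name__ == "__main__":
    unittest.main()
```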
Ad hoc testing
Adhoc Testing: Adhoc testing is a type of software testing which is performed informally and randomly after the formal testing is completed, to find out any loophole in the system.
• For this reason, it is also known as Random testing or Monkey testing.
• Adhoc testing is not performed in a structured way, so it is not based on any methodological approach.
• No Documentation.
• No Test cases.
• No Test Design.
As it is not based on any test cases and does not require documentation or test design, resolving issues that are identified later becomes very difficult for developers.
Adhoc testing saves a lot of time. One example of Adhoc testing is when the client needs the product by 6 PM today but product development will only be completed at 4 PM the same day. With only 2 hours in hand, the developer and tester team can test the system as a whole within that time by taking some random inputs and checking for any errors.
Types of Adhoc Testing :
Adhoc testing is divided into three types as follows.
1. Buddy Testing –
Buddy testing is a type of Adhoc testing where two members are involved, one from the development team and one from the testing team, so that after completing one module and its unit testing, the tester can test it by giving random inputs and the developer can fix the issues early, based on the currently designed test cases.
2. Pair Testing –
Pair testing is a type of Adhoc testing where two members from the testing team are involved to test the same module: one tester performs the random tests while the other tester maintains the record of the findings. When two testers are paired they exchange their ideas, opinions and knowledge, so good testing is performed on the module.
3. Monkey Testing –
Monkey testing is a type of Adhoc testing in which the system is tested based on random inputs without any test cases; the behavior of the system is tracked and it is monitored whether all the functionalities of the system are working or not. As a randomness approach is followed and there is no constraint on inputs, it is called Monkey testing.
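A minimal sketch of the monkey-testing idea with an invented function under test: feed random, unconstrained inputs, use no predesigned test cases, and simply watch for unexpected crashes.

```python
import random
import string

def parse_age(text):
    """Hypothetical unit under test."""
    value = int(text.strip())
    if value < 0 or value > 150:
        raise ValueError("age out of range")
    return value

random.seed(1)
for _ in range(1000):
    # Random, unstructured input: no test cases, no test design.
    junk = "".join(random.choice(string.printable)
                   for _ in range(random.randint(0, 8)))
    try:
        parse_age(junk)
    except (ValueError, TypeError):
        pass                      # rejected input is acceptable behaviour
    except Exception as exc:      # anything else is a finding worth reporting
        print(f"unexpected {type(exc).__name__} for input {junk!r}")
```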
• It is good for finding bugs and inconsistencies which are not mentioned in test cases.
• The errors which can not be identified with written test cases can be identified by Adhoc testing.
• This test helps to build a strong product which is less prone towards future problems.
• This testing can be performed any time during Software Development Life Cycle Process
(SDLC)
Beta testing
Beta testing – Testing typically done by end-users or others; the final testing before releasing the application for commercial purposes.
• Beta Testing (Validation Testing)
– live environment, using real data
– no systems professional present
– performance (throughput, response-time)
– peak workload performance, human factors test, methods and procedures, backup and
recovery - audit test
• Alpha testing involves both white box and black box testing; beta testing commonly uses black-box testing.
• Alpha testing is performed by testers who are usually internal employees of the organization; beta testing is performed by clients who are not part of the organization.
• Alpha testing is performed at the developer's site; beta testing is performed at the end-user's site.
• Reliability and security testing are not checked in alpha testing; reliability, security and robustness are checked during beta testing.
• Alpha testing requires a testing environment or a lab; beta testing doesn't require a testing environment or lab.
• Alpha testing may require a long execution cycle; beta testing requires only a few weeks of execution.
• Developers can immediately address critical issues or fixes in alpha testing; most of the issues or feedback collected from beta testing will be implemented in future versions of the product.
• Multiple test cycles are organized in alpha testing; only one or two test cycles are there in beta testing.
System Testing
Definition: System testing is defined as testing of a complete and fully integrated software product.
This testing falls in black-box testing wherein knowledge of the inner design of the code is not a prerequisite and is done by the testing team.
It is designed to test the readiness of a system as per nonfunctional parameters which are never
addressed by functional testing. Non-functional testing is as important as functional testing.
Non-Functional Testing Techniques
• Compatibility testing: A type of testing to ensure that a software program or system is
compatible with other software programs or systems.
• Compliance testing: A type of testing to ensure that a software program or system meets a
specific compliance standard, such as HIPAA or Sarbanes-Oxley.
• Endurance testing: A type of testing to ensure that a software program or system can handle a
long-term, continuous load.
• Load testing: A type of testing to ensure that a software program or system can handle a large
number of users or transactions.
• Performance testing: A type of testing to ensure that a software program or system meets
specific performance goals, such as response time or throughput.
• Recovery testing: A type of testing to ensure that a software program or system can be
recovered from a failure or data loss.
• Security testing: A type of testing to ensure that a software program or system is secure from
unauthorized access or attack.
• Scalability testing: A type of testing to ensure that a software program or system can be scaled up
or down to meet changing needs.
• Stress testing: A type of testing to ensure that a software program or system can handle an
unusually high load.
• Usability testing: A type of testing to ensure that a software program or system is easy to use.
• Volume testing: A type of testing to ensure that a software program or system can handle a large volume of data.
• Functional testing helps to enhance the behavior of the application; non-functional testing helps to improve the performance of the application.
• Functional testing is easy to execute manually; it is hard to execute non-functional testing manually.
• Functional testing tests what the product does; non-functional testing describes how the product does it.
• Functional testing is based on the business requirements; non-functional testing is based on the performance requirements.
• Examples of functional testing: Unit Testing, Smoke Testing, Integration Testing, Regression Testing. Examples of non-functional testing: Performance Testing, Load Testing, Stress Testing, Scalability Testing.
Web testing
• Web testing is a software testing technique to test web applications or websites for finding errors and
bugs.
• A web application must be tested properly before it goes to the end-users.
• Also, testing a web application does not only mean finding common bugs or errors but also testing the quality-related risks associated with the application.
• Software Testing should be done with proper tools and resources and should be done effectively.
• We should know the architecture and key areas of a web application to effectively plan and
execute the testing.
• Testing a web application covers many of the same concerns as testing any other application, such as testing functionality, configuration, or compatibility.
• Testing a web application also includes the analysis of web-specific faults as compared to general software faults.
Web applications are required to be tested on different browsers and platforms so that we can identify the
areas that need special focus while testing a web application
• Mobile-Based Web Testing: In this testing, the developer or tester basically checks the website
compatibility on different devices and generally on mobile devices because many of the users open the
website on their mobile devices.
• So, keeping that thing in mind, we must check that the site is responsive on all devices or platforms.
Website Testing
• Web Page Fundamentals
• Black-Box Testing
• Gray-Box Testing
• White-Box Testing
• Configuration and Compatibility Testing
• Usability Testing
Typical web page features to consider include:
• information positioned on screen
• customizable content that allows users to select what news and information they want to see
• dynamic drop-down selection boxes
• dynamically changing text boxes
What would you test? What would you choose not to test?
Web pages are made up of just text, graphics, links, and the occasional form. Testing them isn’t
difficult.
Text
Check:
• the audience level,
• the terminology,
• the content and subject matter,
• the accuracy (especially of information that can become outdated),
• the spelling, and
• that each page has a correct title.
An often overlooked type of text is called ALT text, for ALTernate text. The figure shows an example of ALT text. When a user puts the mouse cursor over a graphic on the page he gets a pop-up description of what the graphic represents. Web browsers that don't display graphics use ALT text. Also, with ALT text, blind users can use graphically rich Web sites: an audible reader interprets the ALT text and reads it out through the computer's speakers.
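One ALT-text check that can be automated is flagging every <img> tag that lacks an alt attribute. The sketch below uses Python's standard html.parser on a made-up HTML snippet.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects <img> tags that are missing a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.problems.append(attrs.get("src", "<unknown src>"))

page = """
<html><body>
  <img src="logo.png" alt="Company logo">
  <img src="banner.png">
</body></html>
"""

checker = AltTextChecker()
checker.feed(page)
print("Images missing ALT text:", checker.problems)   # -> ['banner.png']
```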
Hyperlinks
Check:
• Text links are usually underlined, and the mouse pointer should change to a hand pointer when it's over any kind of hyperlink, text or graphic.
• Look for orphan pages, which are part of the Web site but can't be accessed through a hyperlink.
Graphics
• Do all graphics load and display properly? If a graphic is missing or is incorrectly named, it won't load and the Web page will display an error where the graphic was to be placed.
• If text and graphics are intermixed on the page, make sure that the text wraps properly around the graphics. Try resizing the browser's window to see if strange wrapping occurs around the graphic.
• How's the performance of loading the page? Are there so many graphics on the page, resulting in a large amount of data to be transferred and displayed, that the Web site's performance is too slow?
• What if it's displayed over a slow dial-up modem connection on a poor-quality phone line?
• If a graphic can't load onto a Web page, an error box is put in its location.
Forms
Forms are the text boxes, list boxes, and other fields for entering or selecting information on a Web page. In the example, a signup form for potential Mac developers has fields for entering your first name, middle initial, last name, and email address.
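A hedged sketch of form testing at the field level: validate_signup is an invented server-side validator for the signup form's fields, exercised with valid, boundary, and invalid values.

```python
import re

def validate_signup(first_name, middle_initial, last_name, email):
    """Hypothetical validation for the signup form fields."""
    errors = []
    if not first_name.strip():
        errors.append("first name required")
    if len(middle_initial) > 1:
        errors.append("middle initial must be a single character")
    if not last_name.strip():
        errors.append("last name required")
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors.append("invalid email")
    return errors

# Typical form tests: valid data, a boundary value, and invalid input.
assert validate_signup("Ada", "B", "Lovelace", "ada@example.com") == []
assert validate_signup("Ada", "BC", "Lovelace", "ada@example.com") == \
    ["middle initial must be a single character"]
assert "invalid email" in validate_signup("Ada", "", "Lovelace", "not-an-email")
print("form field validation tests passed")
```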
Configuration Testing
Configuration Testing is the process of testing the system under each configuration of the supported software
and hardware.
Here, the different configurations of hardware and software means the multiple operating system
versions, various browsers, various supported drivers, distinct memory sizes, different hard drive types,
various types of CPU etc.
Objectives of Configuration Testing:
• To analyse the performance of the software application by changing the hardware and software resources.
• To analyse the efficiency of the system based on prioritization.
• To verify how easily bugs are reproducible irrespective of configuration changes.
Various Configurations:
• Operating System Configuration:Win XP, Win 7 32/64 bit, Win 8 32/64 bit, Win 10 etc.
• Database Configuration:Oracle, DB2, MySql, MSSQL Server, Sybase etc.
• Browser Configuration:IE 8, IE 9, FF 16.0, Chrome, Microsoft Edge etc.
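Configuration coverage is often scripted by parameterizing one test over the supported combinations. The sketch below assumes pytest is available; launch_app and the configuration values are hypothetical stand-ins for deploying the application on each configuration.

```python
import itertools
import pytest

OSES      = ["Win 10", "Win 8 64-bit"]
BROWSERS  = ["Chrome", "Microsoft Edge"]
DATABASES = ["MySql", "MSSQL Server"]

def launch_app(os_name, browser, database):
    """Hypothetical stand-in for deploying the application on a machine
    with the given configuration and returning its startup status."""
    return "OK"

@pytest.mark.parametrize("os_name,browser,database",
                         list(itertools.product(OSES, BROWSERS, DATABASES)))
def test_app_starts_on_configuration(os_name, browser, database):
    # One test run per supported configuration (2 x 2 x 2 = 8 here).
    assert launch_app(os_name, browser, database) == "OK"
```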
Compatibility Testing:
• Compatibility testing is software testing which comes under the non-functional testing category, and it is performed on an application to check its compatibility (running capability) on different platforms/environments.
• This testing is done only when the application becomes stable.
• Simply put, this compatibility test aims to check the developed software application's functionality on various software and hardware platforms, networks, browsers, etc.
This compatibility testing is very important from a product production and implementation point of view, as it is performed to avoid future issues regarding compatibility.
2. Hardware :
Checking compatibility with a particular size of
• RAM
• ROM
• Hard Disk
• Memory Cards
• Processor
• Graphics Card
3. Smartphones:
Checking compatibility with different mobile platforms like android, iOS etc.
4. Network:
Checking compatibility with different :
• Bandwidth
• Operating speed
• Capacity
Accessibility Testing
• Accessibility Testing is a type of software testing in which the degree of ease of use of a software application for individuals with certain disabilities is tested.
• It is performed to ensure that any new component can easily be accessible by physically
disabled individuals despite any respective handicaps.
• Accessibility testing is part of the system testing process and is somehow similar to usability
testing.
• In the accessibility testing process, the tester uses the system or component as it would be used by
individuals with disabilities.
• Individuals can have the disabilities like visual disability, hearing disability, learning disability, or
non-functional organs.
• Accessibility testing is a subset of usability testing where in the users under consideration are
specific people with disabilities.
This testing focuses to verify both usability and accessibility. Some examples of such software are:
• Speech recognition software: This software changes the spoken words to text and works as an
input to the computer system.
• Screen reader software: This software is used to help low vision or blind individuals to read the text
on the screen with a braille display or voice synthesizer.
• Screen magnification software: This software is used to help vision-impaired persons as it will
enlarge the text and objects on the screen, thus making reading easier.
• Special keyboard: There are some specially designed keyboards for individuals with motor control problems. These keyboards help them to type quickly.
Factors to Measure Web Accessibility
• Pop-ups: Pop-ups can confuse visually disabled users. The screen reader reads out the page from top to bottom, and if a sudden pop-up arrives the reader will start reading it first, before the actual content.
• Language: It is very important to make sentences simple and easily readable for cognitively
disabled users as they have learning difficulties.
• Navigation: It is important to maintain the consistency of the website and not to modify the web
pages on a regular basis. Adjusting to new layouts is time-consuming.
• Marquee text: It is best practice to avoid scrolling (marquee) text and keep the text on the website simple.
3. Abide by accessibility legislation: Government agencies have come out with legislation that requires IT products to be accessible to disabled people. Some of the legal acts by various government agencies…:
4. Avoid potential lawsuits: In the past, a few companies like Netflix, Blue Apron, and Winn-Dixie were sued because their products were not disabled-friendly.
For example:
1. Test brightness of software:
2. Test the sound of software:
3. Testing for captions:
4. Modifying font size to large:
5. Use high contrast mode:
6. Turning off cascading style sheet (CSS):
7. Use field label:
8. Testing zooming:
9. Skip Navigation:
2. Automated
Automation is widely used in different testing techniques. In the automated process, there are
several automated tools for accessibility testing. These tools include:
• WebAnywhere: It is a screen reader tool and it requires no special installation.
• Hera: It is used to check the style of the software application.
• aDesigner: This tool is useful for testing the software from the viewpoint of visually impaired
people.
• Vischeck: This tool helps to reproduce the image in various forms and helps to visualize how the
image will look when it is accessed by different types of users.
Usability testing
Usability Testing also known as User Experience (UX) Testing, is a testing method for
measuring how easy and user-friendly a software application is.