Unit 01: Basics of Software Testing
Software testing
Software testing is a method to check whether the actual software product matches the expected requirements and to ensure that the software product is defect free.
The purpose of software testing is to identify errors, gaps, or missing requirements in contrast to the actual requirements.
The goal of a software tester is to find whether the developed software or product meets the specified requirements, and to identify defects so that the product is defect free and a quality product is produced.
In simple words, software testing is an activity to check that the software system is defect free.
So the goal of the software tester is to find bugs, find them as early as possible, and make sure they are fixed.
Testing can be done in two ways: manual testing and automated testing. Manual testing is performed by human testers who exercise the software and check for bugs by hand. Automated testing, on the other hand, uses tools and scripts to execute tests against the system.
Quality:
"Fit for use or purpose." Quality is all about meeting the needs and expectations of customers with respect to the functionality, design, reliability, durability, and price of the product.
Quality of a product or service is its ability to satisfy the needs and expectations of the
customer.
Quality can briefly be defined as “a degree of excellence”. High quality software
usually conforms to the user requirements.
Software Quality:
Software quality measures how well the software is designed (quality of design) and how well the software conforms to that design (quality of conformance). It is often described as the "fitness for purpose" of a piece of software.
Fault: A fault (defect) is introduced into the software as the result of an error. It is an anomaly in the software that may cause it to behave incorrectly, not according to its specification.
Bug: A bug is the result of a coding error: an error found in the development environment before the product is shipped to the customer. It is a programming error that causes a program to work poorly, produce incorrect results, or crash; more broadly, an error in software or hardware that causes a program to malfunction. "Bug" is the tester's terminology for such a defect.
Objectives of testing
● Executing a program with the intent of finding an error.
● To check whether the system meets the requirements and can be executed successfully in the intended environment.
● To check whether the system is "fit for purpose".
● To check whether the system does what it is expected to do.
● A good test case is one that has a high probability of finding an as-yet-undiscovered error.
● Gaining confidence in the software application and providing information about the level of quality.
● Verifying that the final result meets the business and user requirements.
● Ensuring that the system satisfies the BRS (Business Requirement Specification) and SRS (System Requirement Specification).
● Gaining customers' confidence by providing them a quality product.
TEST CASE
• A TEST CASE is a set of actions executed to verify a particular feature or functionality of your software application. A test case contains test steps, test data, preconditions, expected results, and actual results, developed for a specific test scenario to verify a requirement. The test case includes specific variables or conditions, using which a testing engineer can compare expected and actual results to determine whether a software product is functioning as per the requirements of the customer.
• It is a detailed document that contains all possible inputs (positive as well as negative) and the navigation steps used during the test execution process.
• A test case gives detailed information about the testing strategy, testing process, preconditions, and expected output. Test cases are executed during the testing process to check whether the software application performs the task for which it was developed.
• A test case helps the tester in defect reporting by linking each defect with a test case ID.
• Detailed test case documentation works as a safeguard for the testing team: if the developers missed something, it can be caught during execution of these test cases.
• A test case is the set of steps that need to be done in order to test a specific function of the software.
They are developed for various scenarios so that testers can determine whether the software is
working the way it should and producing the expected results.
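The fields listed above (test steps, test data, precondition, expected result, actual result) can be sketched as a small data structure. This is an illustrative sketch in Python; the login scenario, field names, and values are hypothetical, not from any specific project.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    precondition: str
    steps: list            # navigation / test steps
    test_data: dict
    expected_result: str
    actual_result: str = ""  # filled in during test execution

    def passed(self) -> bool:
        # A test case passes when actual and expected results match.
        return self.actual_result == self.expected_result

tc = TestCase(
    case_id="TC-001",
    precondition="User account exists",
    steps=["Open login page", "Enter credentials", "Click Login"],
    test_data={"username": "alice", "password": "secret"},
    expected_result="Dashboard is displayed",
)
tc.actual_result = "Dashboard is displayed"  # recorded after execution
print(tc.case_id, "PASS" if tc.passed() else "FAIL")
```

Linking the defect log to `case_id` is what makes the defect-reporting traceability described above possible.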
What is SDLC?
SDLC is a process followed for a software project, within a software organization. It consists of a
detailed plan describing how to develop, maintain, replace and alter or enhance specific software. The
life cycle defines a methodology for improving the quality of software and the overall development
process.
Quality Assurance
• Quality Assurance is also known as QA Testing. QA is defined as an activity to ensure that an
organization is providing the best product or service to the customers.
• Quality Assurance is a systematic way of creating an environment to ensure that the software product
being developed meets the quality requirements. It is a preventive process whose aim is to establish
the correct methodology and standard to provide a quality environment to the product being
developed. Quality Assurance focuses on process standard, projects audit, and procedures for
development.
• QA is also known as a set of activities designed to evaluate the process to produce a product.
• Software Quality assurance is all about the Software Development lifecycle that includes requirements
management, software design, coding, testing, and release management.
• QA establishes and maintains set requirements for developing or manufacturing reliable products. A
quality assurance system is meant to increase customer confidence and a company's credibility, while
also improving work processes and efficiency, and it enables a company to better compete with
others.
• Quality assurance helps a company create products and services that meet the needs, expectations
and requirements of customers. It yields high-quality product offerings that build trust and loyalty
with customers. The standards and procedures defined by a quality assurance program help prevent
product defects before they arise.
• It is a preventive technique.
• It is performed before Quality Control.
Quality Control
Quality Control, also known as QC, is a sequence of tasks to ensure the quality of software by identifying defects and correcting them in the developed software. It is a reactive process whose main purpose is to correct all types of defects before releasing the software. This is done by eliminating the sources of problems (which lower quality) through corrective tools, so that the software meets the customer's requirements with high quality.
Quality control is the responsibility of a specific team, known as the testing team, which tests the software for defects using validation and corrective tools.
Quality control involves testing of units and determining if they are within the specifications for the
final product. The purpose of the testing is to determine any needs for corrective actions in the
manufacturing process. Good quality control helps companies meet consumer demands for better
products.
QA vs QC:
● QA does not involve executing the program; QC always involves executing the program.
● QA is the procedure to create the deliverables; QC is the procedure to verify those deliverables.
● QA is involved in the full software development life cycle; QC is involved in the full software testing life cycle.
● In order to meet the customer requirements, QA defines standards and methodologies; QC confirms that those standards are followed while working on the product.
● QA is a low-level activity that can identify errors and mistakes which QC cannot; QC is a high-level activity that can identify errors which QA cannot.
● QA's main motive is to prevent defects in the system, and it is a less time-consuming activity; QC's main motive is to identify defects or bugs in the system, and it is a more time-consuming activity.
● QA ensures that everything is executed in the right way, which is why it falls under verification; QC ensures that whatever has been done is as per the requirement, which is why it falls under validation.
● QA requires the involvement of the whole team; QC requires the involvement of the testing team.
● The statistical technique applied to QA is known as Statistical Process Control (SPC); the statistical technique applied to QC is known as Statistical Quality Control (SQC).
KEY DIFFERENCE
● Quality Assurance aims to prevent defects whereas Quality Control aims to identify and fix defects.
● Quality Assurance provides assurance that the requested quality will be achieved whereas Quality Control is a procedure that focuses on fulfilling that quality request.
● Quality Assurance is done throughout the software development life cycle whereas Quality Control is done in the software testing life cycle.
● Quality Assurance is a proactive measure whereas Quality Control is a reactive measure.
● Quality Assurance requires the involvement of all team members whereas Quality Control needs only the testing team.
● Quality Assurance is performed before Quality Control.
Verification
is a process of checking documents, design, code, and program in order to check whether the software has been built according to the requirements. The main goal of the verification process is to ensure the quality of the software application, design, architecture, etc. The verification process involves activities like reviews, walk-throughs and inspections.
Verification is the process of evaluating work-products of a development phase to determine whether
they meet the specified requirements.
Verification ensures that the product is built according to the requirements and design specifications.
It answers the question, "Are we building the product right?"
Verification includes activities such as business requirements review, system requirements review, design review, and code walkthrough while developing a product.
It is also known as static testing: we ensure that we are building the product right, and that the application under development fulfils all the requirements given by the client.
Validation
The process of evaluating software during the development process or at the end of the development
process to determine whether it satisfies specified business requirements.
Validation Testing ensures that the product actually meets the client's needs. It can also be defined as demonstrating that the product fulfills its intended use when deployed in an appropriate environment.
It answers the question, "Are we building the right product?"
Validation is testing in which the tester performs functional and non-functional testing.
Here functional testing includes Unit Testing (UT), Integration Testing (IT) and System Testing (ST), and non-functional testing includes User Acceptance Testing (UAT).
Validation is also known as dynamic testing: we ensure that we have developed the right product, one that meets the business needs of the client.
● Validation is done at the end of the development process and takes place after verifications are
completed.
● It is a High level activity.
● Performed after a work product is produced, against established criteria, ensuring that the product integrates correctly into its environment.
● Determines the correctness of the final software product with respect to the user's needs and requirements.
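The verification/validation distinction above can be illustrated with a minimal Python sketch, assuming a hypothetical `add` function and a review rule of our own choosing: verification inspects the work product without running it, while validation executes it.

```python
def add(a, b):
    """Return the sum of a and b (per the requirement spec)."""
    return a + b

# Verification (static): examine the work product without running it,
# e.g. a review checklist item such as "every function is documented".
review_passed = add.__doc__ is not None
print("Verification (review): documented?", review_passed)

# Validation (dynamic): execute the product to confirm it meets the need.
assert add(2, 3) == 5
print("Validation (test): add(2, 3) ==", add(2, 3))
```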
Waterfall Model:
The Waterfall Model was the first Process Model to be introduced. It is also referred to as a linear-
sequential life cycle model. It is very simple to understand and use. In a waterfall model, each phase
must be completed before the next phase can begin and there is no overlapping in the phases.
● The Waterfall model is the earliest SDLC approach that was used for software development.
● The waterfall Model illustrates the software development process in a linear sequential flow. This
means that any phase in the development process begins only if the previous phase is complete. In this
waterfall model, the phases do not overlap.
● Each phase is designed for performing specific activity during the SDLC phase. It was introduced in
1970 by Winston Royce.
Requirement Gathering and analysis − All possible requirements of the system to be developed are
captured in this phase and documented in a requirement specification document.
In this phase, a large document called the Software Requirement Specification (SRS) is created, which contains a detailed description, in plain language, of what the system will do.
Design phase − The requirement specifications from the first phase are studied in this phase and the system design is prepared. This system design helps in specifying hardware and system requirements and in defining the overall system architecture.
This phase aims to transform the requirements gathered in the SRS into a suitable form which permits
further coding in a programming language. It defines the overall software architecture together with
high level and detailed design. All this work is documented as a Software Design Document (SDD).
Implementation: During this phase, design is implemented. If the SDD is complete, the implementation
or coding phase proceeds smoothly, because all the information needed by software developers is
contained in the SDD.
The system is first developed in small programs called units, which are integrated in the next phase.
Each unit is developed and tested for its functionality, which is referred to as Unit Testing.
Testing:
All the units developed in the implementation phase are integrated into a system after testing of each
unit. Post integration the entire system is tested for any faults and failures.
Deployment of system − Once the functional and non-functional testing is done; the product is
deployed in the customer environment or released into the market. The product or application is
deemed fully functional and is deployed to a live environment.
Maintenance − There are some issues which come up in the client environment. To fix those issues,
patches are released. Also to enhance the product some better versions are released. Maintenance is
done to deliver these changes in the customer environment.
The V-model
The V-model is a type of SDLC model in which the process executes sequentially in a V shape. It is also known as the Verification and Validation model. It is based on associating a testing phase with each corresponding development stage: for every development activity there is a corresponding testing activity, and the next phase starts only after completion of the previous one.
Verification: It involves static analysis techniques (reviews) done without executing code. It is the process of evaluating the products of a development phase to find whether the specified requirements are met.
Validation: It involves dynamic analysis techniques (functional and non-functional testing) done by executing code. Validation is the process of evaluating the software after the completion of the development phase to determine whether it meets the customer's expectations and requirements.
● So the V-Model contains the Verification phases on one side and the Validation phases on the other. The two sides are joined by the coding phase, forming a V shape; hence the name V-Model.
Design Phase:
● Requirement Analysis: This phase contains detailed communication with the customer to understand
their requirements and expectations. This stage is known as Requirement Gathering.
● System Design: This phase contains the system design and the complete hardware and
communication setup for developing product.
● Architectural Design: System design is broken down further into modules taking up different
functionalities. The data transfer and communication between the internal modules and with the
outside world (other systems) is clearly understood.
● Module Design: In this phase the system is broken down into small modules, and the detailed design of each module is specified. This is also known as Low-Level Design (LLD).
Coding Phase: After designing, the coding phase starts. Based on the requirements, a suitable programming language is chosen, following coding guidelines and standards. Before the code is checked into the repository, the final build is optimized for better performance, and the code goes through many code reviews.
Testing Phases:
● Unit Testing: Unit Test Plans are developed during module design phase. These Unit Test Plans are
executed to eliminate bugs at code or unit level.
● Integration Testing: After completion of unit testing, integration testing is performed. The modules are integrated and the system is tested as a whole. Integration Test Plans are developed during the architectural design phase. This test verifies the communication of the modules among themselves.
● System Testing: System testing tests the complete application, including its functionality, interdependencies, and communication. It tests the functional and non-functional requirements of the developed application.
● User Acceptance Testing (UAT): UAT is performed in a user environment that resembles the production environment. It verifies that the delivered system meets the user's requirements and is ready for use in the real world.
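The first two test levels above can be sketched in Python. The two modules and their checks are hypothetical; the point is only that unit tests exercise one module in isolation, while integration tests exercise modules communicating with each other.

```python
def validate_user(name):
    # Module developed in the coding phase.
    return bool(name) and name.isalnum()

def greet(name):
    # Module that depends on validate_user.
    if not validate_user(name):
        raise ValueError("invalid user")
    return f"Hello, {name}"

# Unit testing: each module in isolation (planned during module design).
assert validate_user("alice") is True
assert validate_user("") is False

# Integration testing: the modules communicating with each other
# (planned during architectural design).
assert greet("alice") == "Hello, alice"
try:
    greet("")
except ValueError:
    pass  # the integrated behaviour rejects invalid users as designed
print("unit and integration checks passed")
```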
Advantages:
● This is a highly disciplined model and Phases are completed one at a time.
● V-Model is used for small projects where project requirements are clear.
● Simple and easy to understand and use.
● This model focuses on verification and validation activities early in the life cycle thereby enhancing the
probability of building an error-free and good quality product.
● It enables project management to track progress accurately.
Disadvantages:
● High risk and uncertainty.
● It is not a good model for complex and object-oriented projects.
● It is not suitable for projects where the requirements are unclear or carry a high risk of changing.
● This model does not support iteration of phases.
● It does not easily handle concurrent events.
Spiral Model:
Spiral Model was first described by Barry W. Boehm (American Software Engineer) in 1986.
The spiral model works in an iterative nature. It is a combination of both the Prototype development
process and the Linear development process (waterfall model). This model places more emphasis on
risk analysis. Mostly this model adapts to large and complicated projects where risk is high. Every
Iteration starts with planning and ends with the product evaluation by the client.
The Radius of the spiral at any point represents the expenses(cost) of the project so far, and the
angular dimension represents the progress made so far in the current phase.
Spiral model is one of the most important Software Development Life Cycle models, which provides
support for Risk Handling. In its diagrammatic representation, it looks like a spiral with many loops. The
exact number of loops of the spiral is unknown and can vary from project to project. Each loop of the
spiral is called a Phase of the software development process. The exact number of phases needed to
develop the product can be varied by the project manager depending upon the project risks. Since the project manager dynamically determines the number of phases, he or she plays an important role in developing a product using the spiral model.
A software project repeatedly passes through these phases in iterations called Spirals.
Planning Phase: Requirements are gathered during the planning phase, in documents such as the BRS (Business Requirement Specification) and the SRS (System Requirement Specification).
The analyst gathers information about the requirements and starts to understand what needs to be done.
This phase also includes understanding the system requirements by continuous communication
between the customer and the system analyst.
Risk Analysis: In the risk analysis phase, a process is undertaken to identify risk and alternate
solutions. A prototype is produced at the end of the risk analysis phase. If any risk is found during the
risk analysis then alternate solutions are suggested and implemented.
Engineering Phase: In this phase software is developed, along with testing at the end of the phase.
Hence in this phase the development and testing is done.
Evaluation phase: This phase allows the customer to evaluate the output of the project to date before
the project continues to the next spiral.
Advantages
● Suitable for high-risk projects.
● Project monitoring is easy.
● Changing requirements can be accommodated.
● Users see the system early.
● Additional functionality can be added at a later date.
Disadvantages
● Can be a costly model to use.
● Does not work well for smaller projects.
● The spiral may go on indefinitely.
● The large number of intermediate stages requires excessive documentation.
● Management is more complex.
Unit 02 Types of testing
White Box Testing
• White box testing techniques analyze the internal structure of the software: the data structures used, the internal design, the code structure, and the working of the software.
• White Box Testing is a software testing technique in which the internal structure, design and coding of the software are tested to verify the flow of input and output and to improve design, usability and security.
• White box testing is a software testing method in which the internal structure/design is known to the tester. The main aim of white box testing is to check how the system performs based on the code. It is mainly performed by developers or white box testers who have knowledge of programming.
• In white box testing, the code is visible to the testers, so it is also called clear box testing, open box testing, transparent box testing, code-based testing, glass box testing, or structural testing.
1. Static Testing: Static Testing is a software testing method performed to check for defects in a software application without actually executing its code, whereas in Dynamic Testing the code is executed to detect defects.
Static testing is done to avoid errors at an early stage of development, when it is easier to identify and solve them. It also helps find errors that may not be found by dynamic testing.
Static Testing is a software testing technique in which the software is tested without executing the code. It has two parts, as listed below:
● Review - typically used to find and eliminate errors or ambiguities in documents such as requirements, design, test cases, etc.
● Static analysis - the code written by developers is analysed (usually by tools) for structural defects that may lead to failures.
• Static testing involves manual or automated reviews of the documents. This review is done during an initial phase of testing to catch defects early in the STLC. It examines work documents and provides review comments. It is also called non-execution testing or verification testing.
1. Informal Reviews: This is a type of review that does not follow any formal process to find errors in the document. Under this technique, you simply review the document and give informal comments on it.
2. Walkthrough: The author of the document being reviewed explains the document to their team. Participants ask questions, and any notes are written down.
A walkthrough can be a formal or informal review.
Team members do not need detailed knowledge of the content, as the author is well prepared; it is a kind of knowledge-transfer session.
The main objective is to enable learning and to give other team members knowledge about the content.
3. Inspection: A designated moderator conducts a strict review as a process to find defects.
• Inspection is one of the most formal kinds of review.
• It is led by a trained moderator who is not the author of the document.
• Reviewers are well prepared before the meeting on the documents and on what needs to be discussed.
• Rules and checklists are used in this meeting, during which the product is examined and defects are logged.
• Defects found in the meeting are documented in the issue log or logging list.
• The meeting has proper entry and exit criteria.
• Reports created during the meeting are shared with the author to take appropriate action.
4. Technical Reviews: Technical specifications are reviewed by peers in order to detect any errors.
• It is a well-documented defect-detection technique that involves peers and technical experts.
• It is usually led by a trained moderator, not the author.
• In a technical review, the product is examined and the defects found are mainly technical ones.
• There is no management participation in a technical review.
• A full report is prepared listing the issues to be addressed.
A team of your peers reviews the technical specification of the software product and checks whether it is suitable for the project, trying to find any discrepancies from the specifications and standards followed. This review concentrates mainly on technical documentation related to the software, such as the Test Strategy, Test Plan, and requirement specification documents.
Summary:
● Static testing aims to find defects as early as possible.
● Static testing is not a substitute for dynamic testing; the two find different types of defects.
● Reviews are an effective technique for static testing.
● Reviews not only help to find defects but also uncover missing requirements, design defects, and non-maintainable code.
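As a rough illustration of static analysis, the sketch below examines a piece of source code as an abstract syntax tree without ever executing it, flagging functions that are defined but never called (dead-code candidates). The reviewed snippet and the rule are illustrative; real static-analysis tools apply many such rules.

```python
import ast

# The code under review is handled purely as text; it is never executed.
source_under_review = '''
def divide(a, b):
    return a / b          # no guard against b == 0

def unused():             # dead-code candidate: never called below
    pass

result = divide(10, 2)
'''

tree = ast.parse(source_under_review)

# Collect every function definition and every plain-name function call.
defined = {node.name for node in ast.walk(tree)
           if isinstance(node, ast.FunctionDef)}
called = {node.func.id for node in ast.walk(tree)
          if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)}

possibly_dead = defined - called
print("Functions defined but never called:", sorted(possibly_dead))
```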
Structural Testing:
Structural testing can be defined as a type of software testing that tests the code's structure and intended flows: for example, verifying the actual code for aspects such as the correct implementation of conditional statements, and whether every statement in the code is correctly executed. It is also known as structure-based testing.
● To carry out this type of testing, we need to thoroughly understand the code. This is why this
testing is usually done by the developers who wrote the code as they understand it best.
● Structural testing is the type of testing carried out to test the structure of the code. It is also known as white box testing or glass box testing. This type of testing requires knowledge of the code, so it is mostly done by the developers. It is more concerned with how the system does something than with what the system does, and it provides greater coverage to the testing. For example, to test a certain error message in an application, we need to test its trigger condition, but there may be many triggers for it. It is possible to miss one while testing only the requirements drafted in the SRS. With structural testing, each trigger is more likely to be covered, since structural testing aims to cover all the nodes and paths in the structure of the code.
● Structural testing is a type of software testing which uses the internal design of the software for testing; in other words, testing performed by a team that knows the internals of the software's development is known as structural testing.
● Structural testing is basically related to the internal design and implementation of the
software i.e. it involves the development team members in the testing team. It basically
tests different aspects of the software according to its types. Structural testing is just the
opposite of behavioral testing.
● Knowledge of the code's internal execution and of how the software is implemented is a necessity for the test engineer performing structural testing.
● Throughout structural testing, the test engineer focuses on how the software performs, and it can be used at all levels of testing.
The intention behind this testing process is to find out how the system works, not just its functionality. To be more specific, if an error message pops up in an application, there is a reason behind it; structural testing can be used to find that reason and fix the issue.
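The trigger-condition idea above can be sketched in Python. The `withdraw` function is hypothetical; it shows two internal branches producing the same user-facing error, which a purely specification-driven test might cover only once but structural testing covers explicitly.

```python
def withdraw(balance, amount):
    # Two distinct internal triggers produce the same user-facing error.
    if amount <= 0:
        return "error: invalid amount"        # trigger 1
    if amount > balance:
        return "error: invalid amount"        # trigger 2
    return balance - amount

# Black-box testing against the SRS might exercise only one trigger;
# structural testing covers each branch explicitly.
assert withdraw(100, -5) == "error: invalid amount"   # covers trigger 1
assert withdraw(100, 500) == "error: invalid amount"  # covers trigger 2
assert withdraw(100, 40) == 60                        # happy path
print("all branches exercised")
```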
Advantages of Structural Testing:
· Forces the test developer to reason carefully about the implementation.
· Reveals errors in "hidden" code.
· Spots dead code and other deviations from best programming practices.
Disadvantages of Structural Testing:
· Expensive, as one has to spend both time and money to perform white box testing.
· There is every possibility that a few lines of code are missed accidentally.
· In-depth knowledge of the programming language is necessary to perform white box testing.
Functional Testing
• FUNCTIONAL TESTING is a type of software testing that validates the software system
against the functional requirements/specifications. The purpose of Functional tests is to test
each function of the software application, by providing appropriate input, verifying the
output against the Functional requirements.
• Functional testing mainly involves black box testing and it is not concerned about the source
code of the application. This testing checks User Interface, APIs, Database, Security,
Client/Server communication and other functionality of the Application Under Test. The
testing can be done either manually or using automation.
• Functional Testing is a type of Software Testing in which the system is tested against the
functional requirements and specifications. Functional testing ensures that the requirements
or specifications are properly satisfied by the application. This type of testing is particularly
concerned with the result of processing. It focuses on simulation of actual system usage but
does not develop any system structure assumptions.
Functional Testing:
It is a type of software testing used to verify the functionality of the software application: whether each function works according to the requirement specification. In functional testing, each function is tested by supplying input values, determining the output, and verifying the actual output against the expected value. Functional testing is performed as black-box testing, which is carried out to confirm that the functionality of an application or system behaves as expected. It is done to verify the functionality of the application.
Functional testing is also called black-box testing because it focuses on the application's specification rather than its actual code: the tester tests the program's behaviour, not its internal structure.
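A minimal black-box sketch, assuming a hypothetical grading requirement: the tests supply inputs and compare the actual output against the expected output from the specification, without looking at the implementation.

```python
def grade(score):
    # Implementation under test; the black-box tester never reads this.
    if score >= 90:
        return "A"
    if score >= 75:
        return "B"
    if score >= 60:
        return "C"
    return "F"

# Each test supplies an input and compares actual vs expected output,
# derived only from the (hypothetical) requirement specification.
requirement_examples = {95: "A", 80: "B", 60: "C", 10: "F"}
for score, expected in requirement_examples.items():
    actual = grade(score)
    assert actual == expected, f"grade({score}) = {actual}, expected {expected}"
print("all functional checks passed")
```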
Code Coverage:
Code coverage is a software testing metric that determines the number of lines of code that are successfully validated under a test procedure, which in turn helps in analyzing how comprehensively the software is verified.
Code coverage, also termed code coverage testing, is a software testing metric that helps determine how much of the source code is tested, which in turn helps in assessing the quality of the test suite and analyzing how comprehensively the software is verified. In simple terms, code coverage refers to the degree to which the source code of the software has been tested. Code coverage analysis is considered a form of white box testing.
At the end of development, each client wants a quality software product, and the development team is responsible for delivering one. Here, quality refers to the product's performance, functionality, behavior, correctness, reliability, effectiveness, security, and maintainability. The code coverage metric helps in determining the performance and quality aspects of any software.
Code coverage is one such software testing metric that can help in assessing the test performance and
quality aspects of any software.
Such insight is equally beneficial to the development and QA teams. For developers, this metric
can help in dead code detection and elimination. For QA, it can help to check for missed or uncovered
test cases. Both can track the health and quality of the source code while paying more attention to the
uncaptured parts of the code.
• Code coverage testing determines how much code is being tested. It can be calculated using
the formula:
• Code Coverage = (Number of lines of code executed / Total number of lines of code in a
system component) * 100
Code Coverage Criteria
1. Statement coverage: how many of the statements in the program have been executed.
2. Branch coverage: how many of the branches of the control structures (if statements, for
instance) have been executed.
3. Condition coverage: how many of the boolean sub-expressions have been tested for both a true
and a false value.
4. Line coverage: how many lines of source code have been tested.
5. Function coverage: how many of the functions defined have been called.
These metrics are usually represented as the number of items actually tested, the number of items
found in your code, and a coverage percentage (items tested / items found).
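The difference between statement and branch coverage can be illustrated with a small sketch (the function and values here are hypothetical):

```python
def apply_discount(price, is_member):
    # The 'if' statement is one branch point with two outcomes
    if is_member:
        price = price * 0.9
    return price

# A single test input executes every statement...
assert apply_discount(100, True) == 90.0   # 100% statement coverage

# ...but full branch coverage also requires the 'if' to evaluate False:
assert apply_discount(100, False) == 100   # now both branches are covered
```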
3. Decision Table
• The Decision Table technique is a systematic approach where various input combinations and
their respective system behavior are captured in tabular form. It is appropriate for
functions that have a logical relationship between two or more inputs.
• A decision table is also known as a Cause-Effect Table. This test technique is appropriate for
functionalities that have logical relationships between inputs (if-else logic). In the decision
table technique, we deal with combinations of inputs. To identify test cases with a decision
table, we consider conditions and actions: conditions are taken as inputs and actions as outputs.
• In some instances, the input combinations can become very complicated, with several
possibilities to track.
• Such complex situations rely on decision tables, as they offer testers an organized view
of the input combinations and the expected output.
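A decision table can be encoded directly as test data. The sketch below (a hypothetical two-condition login example) turns each rule of the table into a test case:

```python
# Decision table for a login form: two input conditions, one action.
# Each rule maps a combination of conditions to the expected outcome.
decision_table = {
    # (valid_username, valid_password): expected action
    (True,  True):  "show home page",
    (True,  False): "show error message",
    (False, True):  "show error message",
    (False, False): "show error message",
}

def login_outcome(valid_username, valid_password):
    # Implementation under test (hypothetical)
    if valid_username and valid_password:
        return "show home page"
    return "show error message"

# Each rule in the decision table becomes one test case:
for conditions, expected in decision_table.items():
    assert login_outcome(*conditions) == expected
print("all decision table rules verified")
```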
State transition table for a login page with three allowed attempts:
S1 (first attempt): invalid attempt -> S2
S2 (second attempt): invalid attempt -> S3
S3 (third attempt): invalid attempt -> S5
S4: home page (reached on a valid attempt)
S5: error page
In the above state transition table, we see that state S1 denotes first login attempt. When the first
attempt is invalid, the user will be directed to the second attempt (state S2). If the second attempt is
also invalid, then the user will be directed to the third attempt (state S3). Now if the third and last
attempt is invalid, then the user will be directed to the error page (state S5).
But if the third attempt is valid, then it will be directed to the homepage (state S4).
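The state transition table described above can be encoded and checked with a small sketch (state and event names follow the text; the code itself is illustrative):

```python
# State transition table from the text, encoded as a dictionary:
# (current_state, event) -> next_state
transitions = {
    ("S1", "invalid"): "S2",   # first invalid attempt
    ("S2", "invalid"): "S3",   # second invalid attempt
    ("S3", "invalid"): "S5",   # third invalid attempt -> error page
    ("S1", "valid"):   "S4",   # a valid attempt leads to the home page
    ("S2", "valid"):   "S4",
    ("S3", "valid"):   "S4",
}

def next_state(state, event):
    return transitions[(state, event)]

# Three consecutive invalid attempts lead to the error page (S5):
state = "S1"
for _ in range(3):
    state = next_state(state, "invalid")
assert state == "S5"

# A valid third attempt leads to the home page (S4):
assert next_state("S3", "valid") == "S4"
```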
6. Positive Testing
Positive Testing is a type of testing which is performed on a software application by providing valid
data sets as input. It checks whether the software application behaves as expected with positive
inputs. Positive testing is performed in order to check whether the software application does exactly
what it is expected to do.
• Example: there is a text box in an application which can accept only numbers. Values from 0 up
to 99999 are acceptable to the system, and any other values should not be accepted. To do
positive testing, supply valid input values from 0 to 99999 and check whether the system
accepts them.
Positive testing is a testing process where the system is validated against valid input data. In this
testing, the tester checks only valid sets of values and verifies whether the application behaves as
expected with its expected inputs. The main intention of this testing is to check whether the software
application does what it is supposed to do. Positive testing tries to prove that a given product or project
meets its requirements and specifications. It covers the normal, day-to-day scenarios and checks the
expected behavior of the application.
Example of positive testing: consider a scenario where you want to test an application that contains a
simple text box to enter an age, and the requirements say that it should take only numerical values.
Providing only valid numerical values and checking whether the application works as expected is
positive testing.
7. Negative Testing
• Negative Testing is a testing method performed on the software application by providing
invalid or improper data sets as input. It checks whether the software application behaves as
expected with the negative or unwanted user inputs. The purpose of negative testing is to
ensure that the software application does not crash and remains stable with invalid data
inputs.
• For example -
• Negative testing can be performed by entering characters A to Z or from a to z. Either
software system should not accept the values or else it should throw an error message for
these invalid data inputs.
In negative testing the system is validated by providing invalid data as input. A negative test checks
whether an application behaves as expected with invalid inputs, and verifies that the application does
not do anything it is not supposed to do. Such testing is carried out from a negative point of view,
executing test cases only for invalid sets of input data.
The main reason behind negative testing is to check the stability of the software application against a
variety of incorrect validation data sets. Negative testing helps to find more defects and improve the
quality of the software application under test, but it should be done once positive testing is complete.
Example of negative testing: we know that a phone number field accepts only numbers and does not
accept alphabets or special characters. Typing alphabets and special characters into the phone number
field to check whether it accepts them is negative testing.
Here, the expectation is that the text box will not accept invalid values and will display an error
message for the wrong entry.
In both types of testing, the following need to be considered:
• Input data
• An action which needs to be performed
• Output result
Testing techniques used for positive and negative testing:
The following techniques are used for positive and negative validation testing:
• Boundary Value Analysis
• Equivalence Partitioning
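Combining positive testing, negative testing, and boundary value analysis for the numeric text box example above might look like this sketch (the validator function is hypothetical):

```python
def accepts_value(value):
    """Hypothetical validator for the text box described above:
    accepts only integers from 0 to 99999."""
    return isinstance(value, int) and 0 <= value <= 99999

# Positive testing: valid inputs, including boundary values (BVA)
for valid in [0, 1, 50000, 99998, 99999]:
    assert accepts_value(valid) is True

# Negative testing: invalid inputs must be rejected,
# including boundary values just outside the valid range
for invalid in [-1, 100000, "abc", "A", "@#!"]:
    assert accepts_value(invalid) is False

print("positive and negative test cases passed")
```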
UNIT TESTING :
Unit testing is a type of software testing where individual units or components of a software are tested.
The purpose is to validate that each unit of the software code performs as expected. Unit Testing is
done during the development (coding phase) of an application by the developers. Unit Tests isolate a
section of code and verify its correctness. A unit may be an individual function, method, procedure,
module, or object.
Unit testing is a White Box testing technique that is usually performed by the developer.
In unit testing, individual modules are tested by the developer himself to determine whether there are
any issues. It is concerned with the functional correctness of the standalone modules.
The main aim is to isolate each unit of the system to identify, analyze and fix the defects.
Unit testing involves the testing of each unit or individual component of the software application. It
is the first level of functional testing. The aim behind unit testing is to validate the behavior of
individual unit components.
The purpose of unit testing is to test the correctness of isolated code. A unit component is an individual
function or piece of code of the application. The white-box testing approach is used for unit testing,
and it is usually done by the developers.
Whenever the application is ready and given to the Test engineer, he/she will start checking every
component of the module or module of the application independently or one by one, and this process
is known as Unit testing or components testing.
UNIT TESTING, also known as COMPONENT TESTING, is the first level of software testing, where
individual units/components of a software are tested. The purpose is to validate that each unit of the
software performs as designed.
In a testing level hierarchy, unit testing is the first level of testing done before integration and other
remaining levels of the testing.
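As a minimal sketch, a unit test isolates one function and verifies its correctness (pytest-style test functions, called directly here for illustration):

```python
# Unit under test: a single standalone function
def add(a, b):
    return a + b

# Unit tests isolate the function and verify its correctness
def test_add_positive_numbers():
    assert add(2, 3) == 5

def test_add_negative_numbers():
    assert add(-1, -1) == -2

# A test runner such as pytest would discover and run these
# automatically; here we call them directly for illustration.
test_add_positive_numbers()
test_add_negative_numbers()
print("all unit tests passed")
```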
INTEGRATION TESTING
INTEGRATION TESTING is defined as a type of testing where software modules are integrated logically
and tested as a group. A typical software project consists of multiple software modules, coded by
different programmers. The purpose of this level of testing is to expose defects in the interaction
between these software modules when they are integrated.
Integration testing is the second level of the software testing process and comes after unit testing. In
this testing, units or individual components of the software are tested as a group. The focus of the
integration testing level is to expose defects at the time of interaction between integrated components
or units.
The modules tested individually in unit testing are combined and tested together in integration
testing. The software is developed from a number of software modules that are coded by
different coders or programmers. The goal of integration testing is to check the correctness of
communication among all the modules.
Integration testing is the process of testing the interface between two software units or modules. It
focuses on determining the correctness of the interface. The purpose of integration testing is to
expose faults in the interaction between integrated units. Once all the modules have been unit tested,
integration testing is performed.
Once all the components or modules are working independently, we then need to check the data flow
between the dependent modules; this is known as integration testing.
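A minimal sketch of an integration test between two hypothetical modules, checking the interface between them rather than each unit in isolation:

```python
# Module A: data access (assumed already unit tested in isolation)
def fetch_user(user_id):
    users = {1: {"first": "Ada", "last": "Lovelace"}}
    return users[user_id]

# Module B: presentation (assumed already unit tested in isolation)
def full_name(user):
    return f"{user['first']} {user['last']}"

# Integration test: verifies the *interface* between the two modules --
# the dict returned by fetch_user must have the keys full_name expects.
def test_fetch_and_format_integration():
    user = fetch_user(1)
    assert full_name(user) == "Ada Lovelace"

test_fetch_and_format_integration()
print("integration test passed")
```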
Reason behind Integration Testing
Although all modules of a software application have already been tested in unit testing, errors may still
exist due to the following reasons:
1. Each module is designed by an individual software developer whose programming logic may differ
from that of the developers of other modules, so integration testing becomes essential to determine
that the software modules work together.
2. To check the interaction of the software modules with the database, whether it is erroneous or not.
3. Requirements can be changed or enhanced at the time of module development. These new
requirements may not have been tested at the unit testing level, hence integration testing becomes
mandatory.
4. Incompatibility between modules of the software could create errors.
When we speak about the types of integration testing, we usually mean different approaches
Big-Bang Integration (non-incremental integration)
Incremental Testing: which is further divided into the following
Top Down Integration
Bottom Up Integration
Sandwich/ Hybrid Integration
Top-Down Integration
Advantages:
Fault localization is easier
The test product is extremely consistent
The stubs can be written in less time compared to drivers
Critical modules are tested on priority
Major design flaws are detected as early as possible
Disadvantages:
Requires several stubs
Poor support for early release
Basic functionality is tested at the end of the cycle
Bottom-Up Integration
Advantages:
Development and testing can be done together, so the product will be efficient
Test conditions are much easier to create
Disadvantages:
Requires several drivers
Data flow is tested very late
The need for drivers complicates test data management
Poor support for early release
Key interface defects are detected late
Sandwich/Hybrid Integration
Advantages:
Top-down and bottom-up testing techniques can be performed in parallel or one after the other
Very useful for large enterprises and huge projects that have several subprojects
Disadvantages:
The cost requirement is very high
Cannot be used for smaller systems with huge interdependence between the modules
Different skill sets are required for testers at different levels
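The stubs and drivers mentioned above can be sketched as follows (a hypothetical checkout/tax example; a stub replaces a lower-level module in top-down integration, while a driver simulates a caller in bottom-up integration):

```python
# Top-down integration: a high-level module is tested first, with a STUB
# standing in for a lower-level module that is not yet ready.
def payment_gateway_stub(amount):
    # Stub: returns a canned response instead of calling the real gateway
    return {"status": "approved", "amount": amount}

def checkout(amount, gateway=payment_gateway_stub):
    result = gateway(amount)
    return result["status"] == "approved"

assert checkout(100) is True  # high-level logic tested via the stub

# Bottom-up integration: a low-level module is tested first, with a DRIVER
# that calls it the way the (not yet written) higher module eventually will.
def tax(amount):
    return round(amount * 0.18, 2)

def tax_driver():
    # Driver: simulates the caller of the low-level module
    assert tax(100) == 18.0

tax_driver()
print("stub and driver sketches passed")
```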
System Testing
SYSTEM TESTING is a level of testing that validates the complete and fully integrated software product.
The purpose of a system test is to evaluate the end-to-end system specifications. Usually, the software
is only one element of a larger computer-based system. Ultimately, the software is interfaced with
other software/hardware systems. System Testing is actually a series of different tests whose sole
purpose is to exercise the full computer-based system.
System Testing is a type of software testing that is performed on a complete, integrated system to
evaluate the compliance of the system with the corresponding requirements.
In system testing, components that have passed integration testing are taken as input. While the goal
of integration testing is to detect any irregularity between the units that are integrated together,
system testing detects defects both within the integrated units and across the whole system. The
result of system testing is the observed behavior of a component or a system when it is tested.
System testing is defined as testing of a complete and fully integrated software product. This testing
falls in black-box testing wherein knowledge of the inner design of the code is not a pre-requisite and is
done by the testing team.
System testing is performed in the context of a System Requirement Specification (SRS) and/or a
Functional Requirement Specifications (FRS). It is the final test to verify that the product to be
delivered meets the specifications mentioned in the requirement document. It should investigate both
functional and non-functional requirements.
There are various types of system testing and the team should choose which ones they would need
before application deployment.
a) Usability Testing
Usability Testing also known as User Experience(UX) Testing, is a testing method for measuring how
easy and user-friendly a software application is. A small set of target end-users, use software
application to expose usability defects. Usability testing mainly focuses on user's ease of using
application, flexibility of application to handle controls and ability of application to meet its objectives.
This testing is recommended during the initial design phase of SDLC, which gives more visibility on the
expectations of the users.
It is a type of non-functional testing.
Its purpose is to make sure that the system is easy to use, learn, and operate.
Usability testing is testing, which checks the defect in the end-user interaction of software or the
product.
It is also known as User Experience (UX) Testing.
Checking the user-friendliness, efficiency, and accuracy of the application is known as usability
testing.
USABILITY TESTING is a type of software testing done from an end-user’s perspective to determine if
the system is easily usable. It falls under non-functional testing.
It is a broad form of testing that requires knowledge of the application.
Usability testing makes sure that the developed software is easy to use without the end user facing any
problems, and makes the end user's life easier.
Here, the user-friendliness can be described in many aspects, such as:
Easy to understand
Easy to access
Look & feel
Faster to Access
Effective Navigation
Good Error Handling
b) Regression Testing
REGRESSION TESTING is defined as a type of software testing to confirm that a recent program or code
change has not adversely affected existing features.
Regression testing is a full or partial re-execution of already executed test cases to ensure that
existing functionalities work fine.
This testing is done to make sure that new code changes do not have side effects on the existing
functionalities. It ensures that the old code still works after the latest code changes are made.
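A regression suite can be as simple as a list of previously passing test cases that is re-executed after every change (a hypothetical discount example):

```python
# A regression suite is simply the set of already-passing test cases
# that is re-executed after every code change.
def discount(price):
    return price * 0.9  # existing feature

regression_suite = [
    (100, 90.0),   # (input, expected output)
    (200, 180.0),
    (0, 0.0),
]

def run_regression():
    for price, expected in regression_suite:
        assert discount(price) == expected
    return "all regression tests passed"

# Re-run after each change to confirm old behaviour still works:
print(run_regression())
```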
c) Performance Testing
Performance Testing is a software testing process used for testing the speed, response time, stability,
reliability, scalability and resource usage of a software application under particular workload. The main
purpose of performance testing is to identify and eliminate the performance bottlenecks in the
software application. It is a subset of performance engineering and also known as “Perf Testing”.
Performance testing will determine whether their software meets speed, scalability and stability
requirements under expected workloads. Applications sent to market with poor performance metrics
due to nonexistent or poor performance testing are likely to gain a bad reputation and fail to meet
expected sales goals.
The focus of Performance Testing is checking a software program's
Speed - Determines whether the application responds quickly
Scalability - Determines maximum user load the software application can handle.
Stability - Determines if the application is stable under varying loads
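A minimal performance check measures average response time against a budget (the operation and the 50 ms budget below are assumptions for illustration):

```python
import time

def respond():
    # Stand-in for the operation whose speed is being measured
    return sum(range(10000))

# Measure response time over several runs and compare to a budget
runs = 100
start = time.perf_counter()
for _ in range(runs):
    respond()
elapsed = time.perf_counter() - start
average = elapsed / runs

# Speed check: average response time must stay under the assumed budget
assert average < 0.05, f"too slow: {average:.4f}s per call"
print(f"average response time: {average * 1000:.2f} ms")
```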
d) Load Testing
Load Testing is a non-functional software testing process in which the performance of software
application is tested under a specific expected load. It determines how the software application
behaves while being accessed by multiple users simultaneously. The goal of Load Testing is to improve
performance bottlenecks and to ensure stability and smooth functioning of software application before
deployment.
Load testing examines how the system behaves during normal and high loads and determines if a
system, piece of software, or computing device can handle high loads given a high demand of end-
users.
This testing usually identifies -
The maximum operating capacity of an application
Determine whether the current infrastructure is sufficient to run the application
Sustainability of application with respect to peak user load
Number of concurrent users that an application can support, and scalability to allow more users to
access it.
It is a type of non-functional testing. In Software Engineering, Load testing is commonly used for the
Client/Server, Web-based applications - both Intranet and Internet.
e) Stress Testing
Stress Testing is a type of software testing that verifies stability & reliability of software application.
The goal of Stress testing is measuring software on its robustness and error handling capabilities under
extremely heavy load conditions and ensuring that software doesn't crash under crunch situations. It
even tests beyond normal operating points and evaluates how software works under extreme
conditions.
For example, the application under test is stressed when 5 GB of data copied from a website is pasted
into Notepad. Notepad comes under stress and gives a 'Not Responding' error message.
Stress testing is also extremely valuable for the following reasons:
To check whether the system works under abnormal conditions.
Displaying appropriate error message when the system is under stress.
System failure under extreme conditions could result in enormous revenue loss
It is better to be prepared for extreme conditions by executing Stress Testing.
The goal of stress testing is to analyze the behavior of the system after a failure. For stress testing to be
successful, a system should display an appropriate error message while it is under extreme conditions.
Stress testing refers to testing the software to determine whether its performance is satisfactory under
extreme load conditions or not.
It is a type of non-functional testing
It involves testing beyond normal operational capacity, often to a breaking point, in order to observe
the results
It is a form of software testing that is used to determine the stability of a given system
The main purpose of stress testing is to make sure that the system recovers after failure which is called
as recoverability.
f) Recovery Testing
Recovery testing is non-functional testing that determines the capability of the software to recover
from failures such as software/hardware crashes or any network failures.
Recovery testing is done in order to check how fast and better the application can recover after it has
gone through any type of crash or hardware failure etc.
To perform recovery testing, the software/hardware is forcefully failed to verify:
Whether recovery is successful or not.
Whether the further operations of the software can be performed or not.
The duration it will take to resume the operations.
Whether lost data can be recovered completely or not.
The percentage of scenarios in which the system can recover.
Before this testing is performed, backup is taken and saved to a secured location to avoid any data loss
in case data is not recovered back successfully.
Common failures that should be tested for recovery:
1. Network issue
2. Power failure
3. External server not reachable
4. Server not responding
5. dll file missing
6. Database overload
7. Stopped services
8. Physical conditions
9. External device not responding
10. Wireless network signal loss
Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is
properly performed.
For example: When an application is receiving data from a network, unplug the connecting cable. After
some time, plug the cable back in and analyze the application’s ability to continue receiving data from
the point at which the network connection was broken.
g) Compatibility Testing
Compatibility Testing is a type of Software testing to check whether your software is capable of running
on different hardware, operating systems, applications, network environments or mobile devices.
Compatibility Testing is a type of Non-functional testing
Let's look into compatibility testing types
Hardware: It checks software to be compatible with different hardware configurations.
Operating Systems: It checks your software to be compatible with different Operating Systems like
Windows, Unix, Mac OS etc.
Software: It checks your developed software to be compatible with other software. For example, MS
Word application should be compatible with other software like MS Outlook, MS Excel, VBA etc.
Network: Evaluation of performance of a system in a network with varying parameters such as
Bandwidth, Operating speed, Capacity. It also checks application in different networks with all
parameters mentioned earlier.
Browser: It checks the compatibility of your website with different browsers like Firefox, Google
Chrome, Internet Explorer etc.
Devices: It checks the compatibility of your software with different devices like USB devices, printers
and scanners, other media devices, and Bluetooth.
Mobile: Checking that your software is compatible with mobile platforms like Android, iOS, etc.
Versions of the software: It verifies that your software application is compatible with different
versions of the software, for instance checking that Microsoft Word is compatible with Windows 7,
Windows 8, and Windows 10.
h) Security Testing
Security Testing is a type of software testing that uncovers vulnerabilities of the system and
determines that the data and resources of the system are protected from possible intruders. It ensures
that the software system and application are free from any threats or risks that can cause a loss.
Security testing of any system focuses on finding all possible loopholes and weaknesses of the system
which might result in the loss of information or damage to the reputation of the organization.
Security testing is an integral part of software testing, which is used to discover the weaknesses, risks,
or threats in the software application; it also helps us to stop malicious attacks from outsiders and
ensure the security of our software applications.
The primary objective of security testing is to find all the potential ambiguities and vulnerabilities of
the application so that the software does not stop working. Performing security testing helps us to
identify all the possible security threats and also helps the programmer to fix those errors.
It is a testing procedure which is used to ensure that the data stays safe and that the software
continues to work.
Goals of Security Testing:
To identify the threats in the system.
To measure the potential vulnerabilities of the system.
To help in detecting every possible security risk in the system.
To help developers in fixing the security problems through coding.
Acceptance Testing
Acceptance Testing is a method of software testing where a system is tested for acceptability. The
major aim of this test is to evaluate the compliance of the system with the business requirements and
assess whether it is acceptable for delivery or not.
Acceptance Testing is the last phase of software testing performed after System Testing and before
making the system available for actual use.
Acceptance testing is formal testing based on user requirements and function processing. It determines
whether the software conforms to the specified requirements and user requirements or not. It is
conducted as a kind of black-box testing where the required number of users are involved in testing the
acceptance level of the system. It is the fourth and last level of software testing.
User acceptance testing (UAT) is a type of testing, which is done by the customer before accepting the
final product. Generally, UAT is done by the customer (domain expert) for their satisfaction, and check
whether the application is working according to given business scenarios, real-time scenarios.
In this, we concentrate only on those features and scenarios which are regularly used by the customer
or mostly user scenarios for the business or those scenarios which are used daily by the end-user or the
customer.
Even though the software has passed through three testing levels (unit testing, integration testing,
system testing), there may still be minor errors which can be identified only when the system is used
by the end user in the actual scenario.
Acceptance criteria
the acceptance criteria consists of various predefined requirements and conditions, which are required
to be met and accomplished, to make the software acceptable for end users and customers.
Acceptance criteria validates the development of the software as well as ensures that it fulfills its
expected purpose, without any hindrance or issue.
In short, acceptance criteria can be termed as a validation technique, which ascertains every aspect of
the functional and nonfunctional requirements of the software and ensures that it meets the specified
acceptance criteria accurately.
Acceptance criteria are conditions which a software application should satisfy to be accepted by a
user or customer. They state the defined standards a software product must meet. These are a set
of rules which cover the system behavior and from which we can derive acceptance scenarios.
Acceptance criteria are a set of statements, each with a clear pass/fail result, covering both the
functional and non-functional requirements of the project at the current stage. These functional and
non-functional requirements are the conditions that can be accepted. There is no partial acceptance in
acceptance criteria; a criterion is either passed or failed.
Acceptance criteria aim to consider the problem from the customer's point of view, so they must be
written in the context of how a user actually experiences the particular application. They define the
user stories by considering all the predefined requirements of the customer: the user requirements and
the scope of the software application that the developers need to complete, taking the user story into
consideration.
Acceptance criteria are defined on the basis of the following attributes:
Functional Correctness and Completeness
Data Integrity
Data Conversion
Usability
Performance
Timeliness
Confidentiality and Availability
Installability and Upgradability
Scalability
Documentation
Alpha Testing
Alpha Testing is a type of acceptance testing; performed to identify all possible issues and bugs before
releasing the final product to the end users. Alpha testing is carried out by the testers who are internal
employees of the organization. The main goal is to identify the tasks that a typical user might perform
and test them.
Alpha testing is performed at the developer's site.
Beta Testing
Beta Testing is performed by "real users" of the software application in "real environment" and it can
be considered as a form of external user acceptance testing. It is the final test before shipping a
product to the customers. Direct feedback from customers is a major advantage of Beta Testing. This
testing helps to test products in customer's environment.
Beta testing is performed at the end user's site, i.e., at the client's location.
Alpha Testing vs. Beta Testing:
• Alpha: reliability and security testing are not performed in-depth. Beta: reliability, security, and
robustness are checked during beta testing.
• Alpha: involves both white-box and black-box techniques. Beta: typically uses black-box testing.
• Alpha: requires a lab environment or testing environment. Beta: does not require any lab or testing
environment; the software is made available to the public, in what is said to be a real-time
environment.
• Alpha: a long execution cycle may be required. Beta: only a few weeks of execution are required.
• Alpha: critical issues or fixes can be addressed by developers immediately. Beta: most of the issues or
feedback collected will be implemented in future versions of the product.
• Alpha: ensures the quality of the product before moving to beta testing. Beta: also concentrates on
the quality of the product, but gathers user input on the product and ensures that the product is ready
for real-time users.
Special Tests
1.Smoke Testing
Smoke Testing is a software testing process that determines whether the deployed software
build is stable or not. Smoke testing is a confirmation for QA team to proceed with further
software testing. It consists of a minimal set of tests run on each build to test software
functionalities. Smoke testing is also known as "Build Verification Testing" or “Confidence
Testing.”
Smoke Testing is a software testing technique performed post software build to verify that the
critical functionalities of software are working fine. It is executed before any detailed functional
or regression tests are executed. The main purpose of smoke testing is to reject a software
application with defects so that QA team does not waste time testing broken software
application.
The smoke tests qualify the build for further formal testing. The main aim of smoke testing is to
detect major issues early. Smoke tests are designed to demonstrate system stability and
conformance to requirements.
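A smoke suite can be sketched as a short list of critical checks that gate the build (the checks here are hypothetical stand-ins):

```python
# A smoke suite: a minimal set of checks run on each build. If any
# critical function is broken, the build is rejected before detailed testing.
def can_connect():
    return True   # stand-in for "application starts / database reachable"

def can_login(user, password):
    return user == "admin" and password == "secret"

smoke_checks = [
    ("application starts", lambda: can_connect()),
    ("login works", lambda: can_login("admin", "secret")),
]

def run_smoke_tests():
    for name, check in smoke_checks:
        if not check():
            return f"BUILD REJECTED: smoke check failed: {name}"
    return "BUILD STABLE: proceed with detailed testing"

print(run_smoke_tests())
```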
2. Sanity testing
Sanity testing is a kind of Software Testing performed after receiving a software build, with minor
changes in code, or functionality, to ascertain that the bugs have been fixed and no further issues
are introduced due to these changes. The goal is to determine that the proposed functionality
works roughly as expected. If sanity test fails, the build is rejected to save the time and costs
involved in a more rigorous testing
In other words, we can say that sanity testing is performed to make sure that all the reported defects
have been fixed and that no new issues have been introduced because of these modifications.
3. GUI Testing
GUI Testing is a software testing type that checks the Graphical User Interface of the Software.
The purpose of Graphical User Interface (GUI) Testing is to ensure the functionalities of software
application work as per specifications by checking screens and controls like menus, buttons,
icons, etc.
Graphical User Interface (GUI) Testing is the process of ensuring proper functionality of the
graphical user interface for a specific application. GUI testing generally evaluates the design of
elements such as layout, colors, fonts, font sizes, labels, text boxes, text formatting, captions,
buttons, lists, icons, links, and content. GUI testing processes may be either manual or automatic
and are often performed by third-party companies, rather than by developers or end users.
1. Class Testing
Class testing is also known as unit testing.
In class testing, every individual class is tested for errors or bugs.
Class testing ensures that the attributes of the class are implemented as per the design and
specifications. It also checks whether the interfaces and methods are error-free or not.
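As a sketch of class (unit) testing, the hypothetical BankAccount class below is tested in isolation with Python's unittest to confirm that its attribute and methods behave as specified:

```python
# Illustrative class test: a hypothetical BankAccount class is checked in
# isolation to confirm its attribute and methods match the specification.
import unittest

class BankAccount:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

class TestBankAccount(unittest.TestCase):
    def test_initial_balance(self):
        self.assertEqual(BankAccount().balance, 0)

    def test_deposit_updates_balance(self):
        acct = BankAccount()
        acct.deposit(50)
        self.assertEqual(acct.balance, 50)

    def test_invalid_deposit_rejected(self):
        with self.assertRaises(ValueError):
            BankAccount().deposit(-5)

# Run the class's tests and report the outcome.
suite = unittest.TestLoader().loadTestsFromTestCase(TestBankAccount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("tests run:", result.testsRun, "failures:", len(result.failures))
```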
2. Inter-Class Testing
It is also called integration or subsystem testing.
Inter class testing involves the testing of modules or sub-systems and their coordination with
other modules.
3. System Testing
In system testing, the system is tested as a whole, and primarily functional testing techniques are
used to test the system. Non-functional requirements like performance, reliability, usability and
testability are also tested.
5. Application Testing
Client-server Software: Client-server is a software architecture consisting of client and server systems
which communicate with each other either over a computer network or on the same machine. In
client-server application testing, the client system sends a request to the server system and
the server system sends a response back to the client. This is also known as a two-tier
application. Such applications are developed in Visual Basic, VC++, C, C++, Core Java, etc., and the
back-end database could be IBM DB2, MS Access, Oracle, Sybase, SQL Server, Quadbase, MySQL,
etc.
What is Client Server Testing?
Client-server applications run on two or more systems, and testing them requires knowledge of
networking. The system is installed on the server, and an executable file is run on the client
machines in the intranet. In this type of testing we test the application GUI on both systems (server
and client); we check the functionality, load, database, and the interaction between client and server.
In client-server testing the tester needs to find load and performance issues and work on the
relevant code areas. The test cases and test scenarios for this type of testing are derived from the
requirements and from experience.
This type of testing is usually done for two-tier applications (usually developed for a LAN). Here we
have a front end and a back end: the application launched on the front end has forms and reports
which monitor and manipulate data.
E.g.: applications developed in VB, VC++, Core Java, C, C++, D2K, PowerBuilder, etc.
The back end for these applications would be MS Access, SQL Server, Oracle, Sybase, MySQL, or
Quadbase.
The tests performed under web application testing would be:
User interface testing
Functionality testing
Security testing
Browser compatibility testing
Operating System compatibility testing
Load testing and performance testing, stress testing
Inter-operability testing
Storage and data volume testing
Unit 04: Test Management
Test Plan
A Test Plan is a detailed document that describes the test strategy, objectives, schedule, estimation,
deliverables, and resources required to perform testing for a software product. Test Plan helps us determine
the effort needed to validate the quality of the application under test.
The test plan serves as a blueprint to conduct software testing activities as a defined process, which is
minutely monitored and controlled by the test manager.
As per ISTQB definition: “Test Plan is A document describing the scope, approach, resources, and schedule of
intended test activities.”
A TEST PLAN is a document describing software testing scope and activities. It is the basis for formally testing
any software / product in a project.
Test Plan is a dynamic document. The success of a testing project depends upon a well-written Test Plan
document that is current at all times. Test Plan is more or less like a blueprint of how the testing activity is
going to take place in a project.
Given below are a few pointers on a Test Plan:
#1) Test Plan is a document that acts as a point of reference, and testing is carried out within the
QA team only based on it.
#2) It is also a document that we share with the Business Analysts, Project Managers, Dev team and the other
teams. This helps to enhance the level of transparency of the QA team’s work to the external teams.
#3) It is documented by the QA manager/QA lead based on the inputs from the QA team members.
What is Lifecycle
Lifecycle in the simple term refers to the sequence of changes from one form to other forms. In a similar
fashion, Software is also an entity. Just like developing software involves a sequence of steps, testing also has
steps which should be executed in a definite sequence.
STLC Phases
There are different phases in STLC which are given below. The testing activities start from the Requirements
analysis phase and goes through all the phases one by one before completing with the Test cycle closure
phase.
There are 6 STLC Phases in the STLC Lifecycle
The entry criteria must be fulfilled before each phase can start
The exit criteria should be fulfilled before exiting a phase
Every phase has one or more deliverables that are produced at the end of the phase
The phases are executed in a sequence
Each of the step mentioned above has some Entry Criteria (it is a minimum set of conditions that should be
met before starting the software testing) as well as Exit Criteria (it is a minimum set of conditions that should
be completed in order to stop the software testing) on the basis of which it can be decided whether we can
move to the next phase of Testing Life cycle or not.
Each of these stages has a definite Entry and Exit criteria, Activities & Deliverables associated with it.
What is Entry and Exit Criteria in STLC?
Entry Criteria: Entry Criteria gives the prerequisite items that must be completed before testing can begin.
Exit Criteria: Exit Criteria defines the items that must be completed before testing can be concluded
You have Entry and Exit Criteria for all levels in the Software Testing Life Cycle (STLC)
You already know that making a Test Plan is the most important task of the Test Management Process. Follow
the eight steps below to create a test plan:
1. Analyze the product
2. Design the Test Strategy
3. Define the Test Objectives
4. Define Test Criteria
5. Resource Planning
6. Plan Test Environment
7. Schedule & Estimation
8. Determine Test Deliverables
Step 1) Analyze the product
How can you test a product without any information about it? The answer is Impossible. You must learn a
product thoroughly before testing it.
The product under test is ABC banking website. You should research clients and the end users to know their
needs and expectations from the application
Who will use the website?
What is it used for?
How will it work?
What are software/ hardware the product uses?
Risk Mitigation
Risk: Team members lack the required skills for website testing.
Mitigation: Plan a training course to skill up your members.
Risk: The project schedule is too tight; it's hard to complete this project on time.
Mitigation: Set a test priority for each test activity.
Risk: The Test Manager has poor management skills.
Mitigation: Plan leadership training for the manager.
Risk: A lack of cooperation negatively affects your employees' productivity.
Mitigation: Encourage each team member in his task, and inspire them to greater efforts.
Risk: Wrong budget estimates and cost overruns.
Mitigation: Establish the scope before beginning work, pay a lot of attention to project planning, and
constantly track and measure the progress.
Step 2.4) Create Test Logistics
In Test Logistics, the Test Manager should answer the following questions:
Who will test?
When will the test occur?
Who will test?
You may not know the exact names of the testers who will test, but the type of tester can be defined.
To select the right member for a specified task, you have to consider whether his skills qualify him for the
task, and also estimate the project budget. Selecting the wrong member for the task may cause the project
to fail or be delayed.
Person having the following skills is most ideal for performing software testing:
Ability to understand the customer's point of view
Strong desire for quality
Attention to detail
Good cooperation
In your project, the member who takes charge of test execution is the tester. Based on the project
budget, you can choose an in-source or outsourced member as the tester.
When will the test occur?
Test activities must be matched with associated development activities.
You will start to test when you have all the required items shown in the following:
Test Specification
Human Resources
Test Environment
Exit Criteria
It specifies the criteria that denote a successful completion of a test phase. The exit criteria are the targeted
results of the test and are necessary before proceeding to the next phase of development. Example: 95% of all
critical test cases must pass.
Some methods of defining exit criteria are specifying a targeted run rate and pass rate.
Run rate is the ratio of the number of test cases executed to the total test cases in the test specification.
For example, the test specification has a total of 120 TCs, but the tester only executed 100 TCs, so the run
rate is 100/120 = 0.83 (83%).
Pass rate is the ratio of the number of test cases passed to the test cases executed. For example, of the
above 100 TCs executed, 80 TCs passed, so the pass rate is 80/100 = 0.8 (80%).
This data can be retrieved in Test Metric documents.
The run rate is mandated to be 100% unless a clear reason is given.
The pass rate depends on the project scope, but achieving a high pass rate is always a goal.
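The run-rate and pass-rate formulas above can be computed directly; the numbers below match the worked example (120 TCs specified, 100 executed, 80 passed):

```python
# Run rate and pass rate, computed exactly as in the worked example above.

def run_rate(executed, total_specified):
    # ratio of test cases executed to total test cases in the specification
    return executed / total_specified

def pass_rate(passed, executed):
    # ratio of test cases passed to test cases executed
    return passed / executed

print(f"Run rate:  {run_rate(100, 120):.0%}")   # 83%
print(f"Pass rate: {pass_rate(80, 100):.0%}")   # 80%
```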
3. Developer: Implements the test cases, test program, test suite, etc.
4. Test Administrator: Builds up and ensures the test environment and assets are managed and maintained;
supports the tester in using the test environment for test execution.
You should ask the developer some questions to understand the web application under test clearly. Here
are some recommended questions; of course, you can ask other questions if you need to.
What is the maximum number of user connections this website can handle at the same time?
What are hardware/software requirements to install this website?
Does the user's computer need any particular setting to browse the website?
Test Management
Test Management is a process of managing the testing activities in order to ensure high quality and high-end
testing of the software application.
The method consists of organizing, controlling, ensuring traceability and visibility of the testing process in
order to deliver the high quality software application.
It ensures that the software testing process runs as expected.
As suggested by its name, test management is a process of managing testing activities, such as planning,
execution, monitoring, and controlling. Test management involves several crucial activities that cater to
both manual and automation testing. With the assistance of this process, a team lead can easily manage
the entire testing team while monitoring their activities and paying close attention to various details of
the SDLC.
Test management is the process of managing tests. It is also performed using tools to manage both types
of tests, automated and manual, that have been previously specified by a test procedure.
Test Management Responsibilities:
Test Management has a clear set of roles and responsibilities for improving the quality of the product.
Test management helps the development and maintenance of product metrics during the course of project.
Test management enables developers to make sure that there are fewer design or coding faults.
Planning
Test Estimation
An estimate is a forecast or prediction. Test Estimation is approximately determining how long a task would
take to complete. Estimating effort for the test is one of the major and important tasks in Test Management.
Benefits of correct estimation:
1. Accurate test estimates lead to better planning, execution and monitoring of tasks under a test manager's
attention.
2. Allow for more accurate scheduling and help realize results more confidently.
Test Planning
A Test Plan can be defined as a document describing the scope, approach, resources, and schedule of
intended Testing activities.
A project may fail without a complete Test Plan. Test planning is particularly important in large software
system development.
In software testing, a test plan gives detailed testing information regarding an upcoming testing effort,
including:
Test Strategy
Test Objective
Exit /Suspension Criteria
Resource Planning
Test Deliverables
Test Organization
Test Organization in software testing is the procedure of defining roles in the testing process. It defines
who is responsible for which activities in the testing process. Test functions, facilities and activities are
also explained in the same process. The competencies and knowledge of the people involved are also
defined; however, everyone is responsible for the quality of the testing process.
Execution
1. Test Monitoring and Control
Monitoring
Monitoring is a process of collecting, recording, and reporting information about project activity that the
project manager and stakeholders need to know.
To monitor, the Test Manager does the following activities:
Define the project goal, or project performance standard
Observe the project performance, and compare the actual and the planned performance
expectations
Record and report any detected problem which happens to the project
Controlling
Project Controlling is a process of using data from monitoring activity to bring actual performance to planned
performance.
In this step, the Test Manager takes action to correct the deviations from the plan. In some cases, the plan has
to be adjusted according to project situation.
While testing work is in progress, i.e. while the testers are executing the test plan, all this work progress
must be monitored; one should keep track of all the testing work. If test monitoring is done, the test team
and test manager get feedback on how the testing is progressing.
Using this feedback, the test manager can guide the team members to improve the quality of further
testing work. With the help of test monitoring, the project team gains visibility into the test results. It also
helps to know about test coverage.
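A minimal sketch of the monitoring comparison described above, with illustrative numbers for planned versus actual executed test cases:

```python
# Sketch of a monitoring comparison: planned vs. actual test-case execution
# (illustrative numbers). The Test Manager records any deviation from plan.

def monitor(planned, actual):
    deviation = actual - planned
    status = "on track" if deviation >= 0 else f"behind by {-deviation} test cases"
    return deviation, status

deviation, status = monitor(planned=50, actual=42)
print(status)  # behind by 8 test cases
```

Controlling then uses this data: the manager either corrects the deviation (e.g. reassigns testers) or adjusts the plan to the project situation.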
2. Issue Management
In the life cycle of any project, there will always be unexpected problems and questions that crop up. For
example:
The company cuts down your project budget
Your project team lacks the skills to complete the project
The project schedule is too tight for your team to finish the project by the deadline.
Risks to be avoided while testing:
Missing the deadline
Exceeding the project budget
Losing the customer's trust
Every software requires an infrastructure to perform its actions. Infrastructure testing is the testing process
that covers hardware, software, and networks. It involves testing of any code that reads configuration values
from different things in the IT framework and compares them to intended results.
It reduces the risk of failure. This testing incorporates testing exercises and procedures to guarantee that
IT applications and the underlying infrastructure are tuned to deliver the required performance,
scalability, reliability, adaptability and availability. The aim is to test the infrastructure across test
environments, test tools, and office environments.
Test Process
Testing is not a single activity; instead, it is a set of processes.
It includes the following activities:
1. Baselining a test plan
2. Test case specification
3. Updating the traceability matrix
Test processes are a vital part of the Software Development Life Cycle (SDLC) and consist of various
activities which are carried out to improve the quality of the software product. From planning to execution,
each stage of the process is systematically planned and requires discipline to act upon. These steps and
stages are extremely important, as they have their own entry criteria and deliverables, which are combined
and evaluated to get the expected results and outcomes.
Therefore, we can say that the quality and effectiveness of the software testing is primarily determined by the
quality of the test processes used by the software testers. Moreover, by following a fundamental test process,
testers can simplify their work and keep a track of every major and minor activity.
In Requirement Traceability Matrix or RTM, we set up a process of documenting the links between the user
requirements proposed by the client to the system being built. In short, it’s a high-level document to map and
trace user requirements with test cases to ensure that for each and every requirement adequate level of
testing is being achieved.
The traceability matrix is typically a worksheet that contains the requirements with all their possible test
scenarios and cases and their current state, i.e. whether they have passed or failed. This helps the testing
team understand the level of testing activity done for the specific product. A typical RTM contains the
following columns:
Requirement ID
Requirement Type and Description
Test Cases with Status
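A minimal RTM holding the columns listed above could be sketched as a Python dictionary (the requirement IDs and test cases are hypothetical):

```python
# A minimal Requirement Traceability Matrix: requirement ID, description, and
# test cases with status. All data here is hypothetical, for illustration only.

rtm = {
    "REQ-001": {"description": "User can log in",
                "test_cases": {"TC-01": "Passed", "TC-02": "Failed"}},
    "REQ-002": {"description": "User can reset password",
                "test_cases": {}},  # no test cases yet: a coverage gap
}

def uncovered_requirements(rtm):
    """Requirements with no test cases, i.e. gaps the testing team must close."""
    return [rid for rid, row in rtm.items() if not row["test_cases"]]

print(uncovered_requirements(rtm))  # ['REQ-002']
```

Walking this structure is how the RTM ensures that an adequate level of testing is achieved for each and every requirement.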
Test Report
Test Report is a document which contains a summary of all test activities and final test results of a testing
project. Test report is an assessment of how well the Testing is performed. Based on the test report,
stakeholders can evaluate the quality of the tested product and make a decision on the software release.
For example, if the test report informs that there are many defects remaining in the product, stakeholders can
delay the release until all the defects are fixed.
A test report is an organized summary of testing objectives, activities, and results. It is created and used to
help stakeholders (product manager, analysts, testing team, and developers) understand product quality and
decide whether a product, feature, or a defect resolution is on track for release.
Test reporting is essential for making sure your web or mobile app is achieving an acceptable level of quality.
A test report is a document containing information about the performed actions (run test cases, detected
bugs, time spent, etc.) and the results of this performance (failed/passed test cases, the number of bugs
and crashes, etc.).
Test Objective
As mentioned in test planning, the Test Report should include the objective of each round of testing, such
as Unit Test, Performance Test, System Test, etc.
Test Summary
This section includes the summary of testing activity in general. Information detailed here includes
The number of test cases executed
The number of test cases passed
The number of test cases failed
Pass percentage
Fail percentage
Comments
Defect
One of the most important pieces of information in a test report is the defect summary. The report should
contain the following information:
Total number of bugs
Status of bugs (open, closed, reopened)
Number of bugs open, resolved, closed
Breakdown by severity and priority
Like test summary, you can include some simple metrics like Defect density, % of fixed defects.
The project team sent you the defect information as follows:
Defect density is 20 defects per 1000 lines of code on average
90% of defects fixed in total
When tests are executed, information about test execution is collected in test logs and other files. The basic
measurements from running the tests are then converted into meaningful metrics by the use of appropriate
transformations and formulae.
Percentage test cases executed= (No of test cases executed/ Total no of test cases written) X 100
Passed Test Cases Percentage = (Number of Passed Tests/Total number of tests executed) X 100
Failed Test Cases Percentage = (Number of Failed Tests/Total number of tests executed) X 100
Blocked Test Cases Percentage = (Number of Blocked Tests/Total number of tests executed) X 100
Fixed Defects Percentage = (Defects Fixed/Defects Reported) X 100
Accepted Defects Percentage = (Defects Accepted as Valid by Dev Team /Total Defects Reported) X 100
Defects Deferred Percentage = (Defects deferred for future releases /Total Defects Reported) X 100
Critical Defects Percentage = (Critical Defects / Total Defects Reported) X 100
Number of tests run per time period = Number of tests run/Total time
Test design efficiency = Number of tests designed /Total time
Test review efficiency = Number of tests reviewed /Total time
Bug find rate, or number of defects per test hour = Total number of defects/Total number of test hours
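The formulas above can be written as small functions; the run data below is illustrative:

```python
# The test-report metric formulas above as functions (illustrative run data).

def percentage(part, total):
    return part / total * 100

executed, written = 90, 100
passed, failed, blocked = 72, 12, 6

print(f"Executed: {percentage(executed, written):.1f}%")  # 90.0%
print(f"Passed:   {percentage(passed, executed):.1f}%")   # 80.0%
print(f"Failed:   {percentage(failed, executed):.1f}%")   # 13.3%
print(f"Blocked:  {percentage(blocked, executed):.1f}%")  # 6.7%

# Defect density, as used in the report example (defects per 1000 lines of code)
def defect_density(defects, loc):
    return defects / loc * 1000

print(f"Defect density: {defect_density(40, 2000):.1f} defects/KLOC")  # 20.0
```

Note that the pass, fail and blocked percentages are taken over the tests executed, while the executed percentage is taken over the tests written.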
What is Bug?
A bug is the consequence/outcome of a coding fault.
The main purpose of DMP for different projects or organization is given below:
Operational support for simply resolving and retesting defects being found.
To give input for status and progress report regarding defect.
To give input for advice regarding release of defect.
To identify the main reason that the defect occurred and how to handle it
So, during unit testing, if the developer finds some issues, they are not called defects, as these issues
are identified before the milestone deadline. Once coding and unit testing have been completed, the
developer hands over the code for system testing; you can then say that the code is "baselined" and
ready for the next milestone, in this case "system testing".
Now, if issues are identified during testing, they are called defects, as they are identified after the
completion of the earlier milestone, i.e. coding and unit testing.
Basically, the deliverables are baselined when the changes in the deliverables are finalized and all
possible defects are identified and fixed. Then the same deliverable passes on to the next group who
will work on it.
Even these minor defects give an opportunity to learn how to improve the process and prevent the
occurrences of any defect which may impact system failure in the future. Identification of a defect
having a lower impact on the system may not be a big deal but the occurrences of such defect in the
system itself is a big deal.
For process improvement, everyone in the project needs to look back and check from where the defect
was originated. Based on that you can make changes in the validation process, base-lining document,
review process which may catch the defects early in the process which are less expensive.
Defect Status
Defect Status or Bug Status in the defect life cycle is the present state that a defect or bug is currently
in. The goal of defect status is to precisely convey the current state or progress of a defect or bug in
order to better track and understand the actual progress of the defect life cycle.
The bug passes through different states in its life cycle:
New: When a new defect is logged and posted for the first time. It is assigned a status as NEW.
Assigned: Once the bug is posted by the tester, the lead of the tester approves the bug and assigns the
bug to the developer team
Open: The developer starts analysing and works on the defect fix
Fixed: When a developer makes a necessary code change and verifies the change, he or she can make
bug status as "Fixed."
Pending retest: Once the defect is fixed the developer gives a particular code for retesting the code to
the tester. Since the software testing remains pending from the testers end, the status assigned is
"pending retest."
Retest: Tester does the retesting of the code at this stage to check whether the defect is fixed by the
developer or not and changes the status to "Re-test."
Verified: The tester re-tests the bug after it got fixed by the developer. If there is no bug detected in
the software, then the bug is fixed and the status assigned is "verified."
Reopen: If the bug persists even after the developer has fixed the bug, the tester changes the status to
"reopened". Once again the bug goes through the life cycle.
Closed: If the bug no longer exists, then the tester assigns the status "Closed."
Duplicate: If the defect is repeated twice or the defect corresponds to the same concept of the bug,
the status is changed to "duplicate."
Rejected: If the developer feels the defect is not a genuine defect then it changes the defect to
"rejected."
Deferred: If the present bug is not of a prime priority and if it is expected to get fixed in the next
release, then status "Deferred" is assigned to such bugs
Not a bug: If it does not affect the functionality of the application then the status assigned to a bug is
"Not a bug".
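The states above can be modelled as a simplified transition map. The exact transitions allowed vary from one organization's workflow to another, so this is only an illustration:

```python
# The defect states above modelled as a simplified, illustrative transition map.

TRANSITIONS = {
    "New": {"Assigned"},
    "Assigned": {"Open"},
    "Open": {"Fixed", "Rejected", "Duplicate", "Deferred", "Not a bug"},
    "Fixed": {"Pending retest"},
    "Pending retest": {"Retest"},
    "Retest": {"Verified", "Reopened"},
    "Verified": {"Closed"},
    "Reopened": {"Assigned"},  # the bug goes through the life cycle again
}

def can_transition(current, target):
    """True if the workflow allows moving a defect from current to target."""
    return target in TRANSITIONS.get(current, set())

print(can_transition("Retest", "Reopened"))  # True
print(can_transition("New", "Closed"))       # False
```

A defect-tracking tool enforces exactly this kind of map, so a bug cannot jump straight from New to Closed without being analysed and verified.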
Classification of bug
Defects are classified from the QA team's perspective as Priority, and from the development perspective
as Severity (complexity of the code needed to fix it). These are two major classifications that play an
important role in the timeframe and the amount of work that goes into fixing defects.
Severity
Bug/Defect severity can be defined as the impact of the bug on the application. It can be Critical, Major
or Minor. In simple words, how much effect will be there on the system because of a particular defect?
For example: if an application or web page crashes when a remote link is clicked, clicking the remote
link is rare for a user, but the impact of the application crashing is severe. So the severity is high but
the priority is low.
What is Priority?
Priority is defined as the order in which defects should be fixed. The higher the priority, the sooner the
defect should be resolved.
Defect priority can be defined as the impact of the bug on the customer's business, with the main focus
on how soon the defect should be fixed. It gives the order in which defects should be resolved;
developers decide which defect to take up next based on priority. It can be High, Medium or Low.
For example: If the company name is misspelled in the home page of the website, then the priority is
high and severity is low to fix it.
Severity Listing
Severity can be categorized in the following ways −
Critical / Severity 1 − The defect impacts the most crucial functionality of the application, and the QA
team cannot continue with the validation of the application under test without fixing it. For example,
the app/product crashes frequently.
Major / Severity 2 − The defect impacts a functional module; the QA team cannot test that particular
module, but can continue with the validation of other modules. For example, flight reservation is not
working.
Medium / Severity 3 − The defect has an issue with a single screen or a single function, but the system
is still functioning; the defect does not block any functionality. For example, the Ticket# field does not
follow the expected format of the first five characters alphabetic and the last five numeric.
Low / Severity 4 − It does not impact the functionality. It may be a cosmetic defect, a UI inconsistency
for a field, or a suggestion to improve the end-user experience on the UI side. For example, the
background colour of the Submit button does not match that of the Save button.
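The two worked examples (a crash behind a rarely used link, and a misspelled company name) can be recorded with independent severity and priority values. As the text notes, developers pick the next defect by priority, not severity:

```python
# The two worked examples above, recorded in a hypothetical triage table with
# independent severity and priority values.

defects = [
    {"summary": "App crashes when rarely-used remote link is clicked",
     "severity": "Critical", "priority": "Low"},
    {"summary": "Company name misspelled on home page",
     "severity": "Low", "priority": "High"},
]

def fix_order(defects):
    """Priority, not severity, decides what developers take up next."""
    rank = {"High": 0, "Medium": 1, "Low": 2}
    return sorted(defects, key=lambda d: rank[d["priority"]])

print(fix_order(defects)[0]["summary"])  # the misspelling is fixed first
```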
Priority Types
In most companies, a defect tracking tool is used and the elements of a defect report can vary from
one tool to the other. However, in general, a defect report can consist of the following elements.
ID: Unique identifier given to the defect (usually automated).
Module: Specific module of the product where the defect was detected.
Detected Build Version: Build version of the product where the defect was detected (e.g. 1.2.3.5).
Summary: Summary of the defect. Keep this clear and concise.
Steps to Replicate: Step-by-step description of the way to reproduce the defect. Number the steps.
Actual Result: The actual result you received when you followed the steps.
Assigned To: The name of the person assigned to analyse/fix the defect.
Fixed Build Version: Build version of the product where the defect was fixed (e.g. 1.2.3.9).
Techniques to Identify Defects:
Different techniques are used to identify defects. These techniques are categorized into three
categories, as given below:
1. Static Techniques:
A static technique, as the name suggests, is a technique in which the software is tested without any
execution of the program or system. The software products are examined manually, or with the help
of the various automation tools available, but the code is not executed.
Different types of causes of defects can be found by this technique, such as:
Missing requirements
Design defects
Deviation from standards
Inconsistent interface specification
Non-maintainable code
Insufficient maintainability, etc.
2. Dynamic Techniques:
A dynamic technique, as the name suggests, is a technique in which the software is tested by executing
the program or system. This technique can only be applied to software code, as testing is done only by
execution of the program or software code. Different types of defects can be found by this technique,
such as:
Functional defects –
These defects arise when the functionality of the system or software does not work as per the Software
Requirement Specification (SRS). Defects that are very critical largely affect the essential functionalities
of the system, which in turn affects the software product; it might not work properly or might stop
working. These defects are simply related to the working of the system.
Non-functional defects –
These defects largely affect the non-functional aspects of a software product; they can affect
performance, usability, etc.
3. Operational Techniques:
Operational techniques, as the name suggests, are techniques in which a deliverable, i.e. the product, is
produced; then the user, customer, or control personnel identify defects by inspection, checking,
reviewing, etc. In simple words, the defect is found as a result of a failure.
Manual Testing
Manual testing is a software testing process in which test cases are executed manually, without using any
automated tool. All test cases are executed manually by the tester from the end user's perspective. It
ensures whether the application is working as mentioned in the requirement document or not. Test cases
are planned and implemented to cover almost 100 percent of the software application. Test case reports
are also generated manually.
Manual testing requires more resources and time as the complete process is manual.
It's prone to human error for the cases where intensive data calculation is involved.
Testing is a repetitive process. For each release there are certain test cases which are executed just to
make sure that nothing is broken by new features; these are called regression cases, and executing the
same cases again and again manually is tedious and time consuming.
It is not suitable for large-scale or time-bounded projects.
Verifying a large amount of data manually is not possible.
Performance testing and load testing are impractical to do manually.
GUI differences, such as component sizes and colour combinations, are difficult to verify manually.
Running tests manually is a very time-consuming job.
Automation Testing
Automation testing is the application of tools and technology to testing software with the goal of reducing
testing efforts, delivering capability faster and more affordably. It helps in building better quality software
with less effort.
Software Test automation makes use of specialized tools to control the execution of tests and compares the
actual results against the expected result. Usually, regression tests, which are repetitive actions, are
automated.
Testing Tools not only helps us to perform regression tests but also helps us to automate data set up
generation, product installation, GUI interaction, defect logging, etc. Automation tools are used for both
Functional and Non-Functional testing.
5. Reusability: The scripts are reusable, and you don't need new scripts every time; you can replay exactly
the same steps as before.
6. Bugs: Automation helps you find bugs in the early stages of software development, reducing expenses
and working hours to fix these problems as well.
8 Better Insights: Automated testing provides better insights than manual testing when some tests fail.
Automated software testing not only gives insights into the application but also shows you the memory
contents, data tables, file contents, and other internal program states. This helps developers determine
what’s gone wrong.
9 Improved Accuracy
Even the best testing engineer will make mistakes during manual testing. Especially when testing a complex
use case, faults can occur. On the other side, automated tests can execute tests with 100-percent accuracy
as they produce the same result every time you run them.
12 Fewer Human Resources: You just need a test automation engineer to write your scripts to automate
your tests, instead of a lot of people doing boring manual tests over and over again.
Automation testing: Execution is done through software tools, so it is faster than manual testing and needs fewer human resources.
Manual testing: Execution of test cases is time consuming and needs more human resources.

Automation testing: Test cases can be run in parallel, reducing test execution time.
Manual testing: It is not easy to execute test cases in parallel; doing so needs more human resources and becomes more expensive.

Automation testing: Build verification testing (BVT) is highly recommended.
Manual testing: Build verification testing (BVT) is not recommended.
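The parallel-execution advantage can be illustrated with a minimal sketch: the same set of simulated, hypothetical test cases finishes noticeably faster when run concurrently than one after another:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_test_case(name):
    """Stand-in for one automated test case (e.g. a browser test)."""
    time.sleep(0.2)  # simulated test execution time
    return (name, "PASS")

cases = ["login", "search", "checkout", "logout"]

# Sequential execution: total time is roughly the sum of individual times.
start = time.perf_counter()
sequential = [run_test_case(c) for c in cases]
t_seq = time.perf_counter() - start

# Parallel execution: total time is roughly that of the slowest single case.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(run_test_case, cases))
t_par = time.perf_counter() - start

print(f"sequential: {t_seq:.2f}s, parallel: {t_par:.2f}s")
```

Real automation frameworks apply the same idea at a larger scale, distributing suites across browsers or machines (as Selenium Grid does).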
Selenium
Selenium is a popular open-source (released under the Apache License 2.0) automation testing framework.
Originally developed in 2004 by Jason Huggins, Selenium remains a widely known and widely used tool for testing web applications. It operates across multiple browsers and platforms (macOS, Windows, and Linux), and tests can be written in various programming languages, such as Python, Java, C#, Scala, Groovy, Ruby, Perl, and PHP.
Bugzilla
Bugzilla is an open-source issue/bug tracking system that allows developers to keep track of outstanding problems with their product. It is written in Perl and uses a MySQL database.
Bugzilla is primarily a defect tracking tool; however, it can also serve as a test management tool, since it can easily be linked with test case management tools such as Quality Center and TestLink.
QTP
QTP is an automated functional testing tool that helps testers execute automated tests in order to identify errors, defects, or gaps in contrast to the expected results of the application under test. It was designed by Mercury Interactive, later acquired by HP, and is now owned by Micro Focus. QTP stands for QuickTest Professional, while UFT means Unified Functional Testing.
Why QTP is the best testing tool?
It is an icon-based tool that automates regression and functional testing of an application
Both technical and non-technical testers can use Micro Focus QTP
It provides both record and playback features
It can test desktop as well as web-based applications
It allows Business Process Testing (BPT)
QTP testing is based on the scripting language VBScript
Selenium
Selenium is often used for regression testing. It offers testers a playback tool that allows them to record and play back regression tests. In fact, Selenium is not a single tool but rather a suite of software that includes various tools (or components):
Selenium IDE (Integrated Development Environment)
Selenium WebDriver
Selenium client API
Selenium Remote Control
Selenium Grid
Mantis
Mantis (MantisBT) is an open-source bug tracking tool that can be used to track software defects for various software projects. You can easily download and install Mantis for your own use; MantisBT now also provides a hosted version of the software. You can easily customize Mantis to map your software development workflow.
Some salient features of MantisBT are:
Email notifications: It sends out emails of updates, comments, and resolutions to the concerned stakeholders.
Access control: You can control user access at the project level.
Customization: You can easily customize Mantis as per your requirements.
Mobile support: Mantis supports the iPhone, Android, and Windows Phone platforms.
LambdaTest
LambdaTest is a cloud-based automation testing tool for desktop and mobile applications. This tool allows
for manual and automated cross-browser testing across more than 2000 operating systems, browsers, and
devices.
LambdaTest allows testers to record real-time browser compatibility testing. Plus, it enables screen
recording and automated screenshot testing on several combinations at a time.
Ranorex
Ranorex is a test automation tool for web, desktop, and mobile. This tool provides numerous benefits, such
as codeless test creation, recording and replaying testing phases, and reusable test scripts.
Appium
Appium is an open-source test automation framework. This framework supports multiple programming languages (Python, Java, PHP, JavaScript, etc.) for writing tests and can integrate with CI/CD tools (e.g., Jenkins).
Eggplant
Eggplant was developed by TestPlant to let testers perform different types of testing.
Similar to Selenium, Eggplant is not a single tool but rather a suite of tools for automation testing, and each
tool performs different types of testing.
Kobiton
Kobiton is a cloud-based platform that can perform both manual and automated mobile and web testing.
Its AI-driven scriptless approach can automate performance, visual and UX, functional, and compatibility
testing. In addition, Kobiton offers automated crash detection, which ensures comprehensive quality.
Dynamic Test Tools: These tools test the software system with 'live' data. Dynamic test tools include the following:
1) Test drivers: These input data into a module under test (MUT).
2) Test beds: These simultaneously display the source code along with the program under execution.
3) Emulators: These emulate parts of the system that have not yet been developed, so their responses can be used during testing.
4) Mutation analyzers: Errors are deliberately 'fed' into the code in order to test the fault tolerance of the system.
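The idea behind a mutation analyzer can be sketched in a few lines of Python: an operator in the code is deliberately flipped, and a good test suite should then fail, "killing" the mutant. Both functions and the tiny suite below are hypothetical illustrations:

```python
def is_even(n):
    """Original implementation."""
    return n % 2 == 0

def is_even_mutant(n):
    """Mutant: the comparison operator has been deliberately flipped,
    as a mutation analyzer would do automatically."""
    return n % 2 != 0

def run_suite(func):
    """A tiny test suite: returns True only if every check passes."""
    checks = [(2, True), (3, False), (0, True)]
    return all(func(n) == expected for n, expected in checks)
```

The suite passes on the original and fails on the mutant, which is evidence that the tests actually exercise that piece of logic; a mutant that survives points to a gap in the test suite.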
Testing metrics are quantitative measures used to estimate the progress, quality, productivity, and health of the software testing process. The goal of software testing metrics is to improve the efficiency and effectiveness of the software testing process and to help make better decisions for further testing by providing reliable data about it.
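A few widely used testing metrics can be computed directly from raw counts; the numbers below are purely illustrative:

```python
# Illustrative raw counts from a hypothetical test cycle.
tests_executed = 180
tests_passed = 171
defects_found = 27
defects_fixed = 24
size_kloc = 12.0  # size of the code base in thousands of lines of code

pass_rate = tests_passed / tests_executed * 100          # % of tests passing
defect_density = defects_found / size_kloc               # defects per KLOC
fix_rate = defects_fixed / defects_found * 100           # % of defects fixed

print(f"pass rate:       {pass_rate:.1f}%")
print(f"defect density:  {defect_density:.2f} defects/KLOC")
print(f"defect fix rate: {fix_rate:.1f}%")
```

Tracked across releases, trends in such numbers (rather than any single value) are what support decisions about where further testing effort should go.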