
MANUAL TESTING

Manual Testing vs Automated Testing

1) Manual testing requires human intervention for test execution, whereas Automation Testing is the use of tools to execute test cases.
2) Manual testing requires skilled labour, takes a long time and implies high costs, whereas Automation Testing saves time, cost and manpower; once recorded, it is easier to run an automated test suite.
3) Any type of application can be tested manually, and certain testing types such as ad-hoc and monkey testing are better suited for manual execution, whereas automated testing is recommended only for stable systems and is mostly used for Regression Testing.
4) Manual testing can become repetitive and boring, whereas in Automation Testing the boring part of executing the same test cases time and again is handled by the automation software.

1) Software Development Life Cycle and SDLC Model

> Software Development Life Cycle is a systematic approach to develop software. It is a Process followed
by Software Developers and Software Testing is an integral part of Software Development, so it is also
important for Software Testers...

i) Requirement Gathering

ii) Analysis

iii) Design

iv) Coding / Development

v) Testing

vi) Deployment & Maintenance

1) Requirement Gathering :
Requirement Gathering is the most important phase in the software development life cycle.
The Business Analyst collects the requirements from the Customer/Client as per the client's
business needs, documents the requirements in the Business Requirement Specification
and provides the same to the Development Team.
2) Analysis:
Once the Requirement Gathering is done the next step is to define and document the product
requirements and get them approved by the customer. This is done through SRS (Software
Requirement Specification) document. SRS consists of all the product requirements to be
designed and developed during the project life cycle.
3) Design
> In the Design phase, Senior Developers and Architects define the architecture of the software
product to be developed. It has two steps: HLD (High Level Design) or Global Design, and
LLD (Low Level Design) or Detailed Design.
> High Level Design (HLD) is the overall system design; it covers the system architecture and
database design and describes the relation between the various modules and functions of the system.
> Low Level Design (LLD) is the detailed system design; it covers how each and every feature in the
product should work and how every component should work.
> The outcome of this phase is the High Level Design document and the Low Level Design document,
which work as input to the next phase, Coding.

4) Development

Developers (senior, junior and fresher) are involved in this phase; this is the phase where we
start building the software and start writing the code for the product.

> The outcome of this phase is Source Code Document (SCD) and the developed product.

5) Testing

> Once the software is complete, it is deployed in the testing environment.
The testing team starts testing (either testing the software manually or using automated test tools,
depending on the process defined in the STLC).

6) Deployment and Maintenance

> After successful testing, the product is delivered (deployed) to the customer for their use.
Deployment is done by the Deployment/Implementation engineers, and once the customers start
using the developed system, the actual problems come up and need to be solved from time to time.

> Fixing the issues found by the customer comes under the maintenance phase. 100% testing is not possible,
because the way testers test the product is different from the way customers use the product.

SDLC Models

i) Waterfall Model
ii) V&V model
iii) Prototype Model

1) Waterfall Model:
Advantages of Waterfall Model:
a) Simple and easy to use

b) Easy to manage due to the rigidity of the model- each phase has specific deliverables and a
review process.

c) Phases are processed and completed one at a time.

d) Works well for smaller projects where requirements are very well understood

Disadvantages of waterfall model:

a) Once an application is in the testing stage, it is very difficult to go back and change something
b) No working software is produced until late during the life cycle.
c) High amounts of risk
d) Not a good model for complex and object-oriented projects
e) Poor model for long and ongoing projects

2) V&V

V & V MODEL (Verification and Validation Model ) :


This model came up in order to overcome the drawbacks of the waterfall model – here testing starts from the
requirement stage itself. The V & V model is shown in the figure.

1) In the first stage, the client sends the CRS to both developers and testers. The developers translate the
CRS into the SRS. The testers do the following tests on the CRS,

1. Review CRS

a. conflicts in the requirements

b. missing requirements
c. wrong requirements

2. Write Acceptance Test plan

3. Write Acceptance Test cases
The testing team reviews the CRS, identifies mistakes and defects, and sends them to the development
team for correcting the bugs. The development team updates the CRS and continues developing the SRS
simultaneously.

2 ) In the next stage, the SRS is sent to the testing team for review and the developers start building the
HLD of the product. The testers do the following tests on SRS,

1. Review SRS against CRS

a. every CRS is converted to SRS

b. CRS not converted properly to SRS

2. Write System Test plan

3. Write System Test case

The testing team reviews every detail of the SRS to check whether the CRS has been converted properly into the SRS.

3 ) In the next stage, the developers start building the LLD of the product. The testers do the following
tests on HLD,

1. Review HLD

2. Write Integration test plan

3. Write Integration test case

4 ) In the next stage, the developers start with the coding of the product. The testing team carries out
the following tasks,

1. Review LLD

2. Write Functional test plan

3. Write Functional Test case

After coding, the developers themselves carry out unit testing, also known as white box testing. Here
the developers check each and every line of code and verify whether the code is correct. After white-box
testing, the s/w product is sent to the testing team, which tests the product and carries out functional
testing, integration testing, system testing and acceptance testing, and finally the product is delivered to the client.
Types of verification:

1. Peer Reviews –
The easiest and most informal way of reviewing documents or programs/software for the purpose of
finding faults during the verification process is the peer-review method. In this method, we give the
document or software program to others and ask them to review it, where we expect their views about
the quality of our product and also expect them to find faults in the program/document. The activities
involved in this method may include SRS document verification, SDD verification, and program
verification. In this method, the reviewers may also prepare a short report on their observations or findings.
2. Walk-through –
Walk-throughs are a more formal and systematic type of verification method compared to
peer review. In a walkthrough, the author of the software document presents the document to
other persons, who can range from 2 to 7 in number. Participants are not expected to prepare anything.
The presenter is responsible for preparing the meeting. The document(s) is/are distributed to all
participants. At the walk-through meeting, the author introduces the content
in order to make the participants familiar with it, and all the participants are free to ask their doubts.
3. Inspections –
Inspections are the most structured and most formal type of verification method. A team of three to
six participants is constituted, led by an impartial moderator. Every person in the group participates
openly and actively and follows the rules about how such a review is to be conducted. Everyone gets
time to express their views, potential faults, and critical areas. After the meeting, a final report is
prepared by the moderator after incorporating the necessary suggestions.
4. Desk Checking-
Desk checking is an informal verification method in which a person manually works through (dry runs) the application or program code to check its logic, similar to debugging the code by hand.

Difference between Verification and Validation:

1) Verification includes checking documents, design, code and programs, whereas Validation includes testing and validating the actual product.
2) Verification is static testing, whereas Validation is dynamic testing.
3) Verification does not include the execution of the code, whereas Validation includes the execution of the code.
4) Methods used in Verification are reviews, walkthroughs, inspections and desk-checking, whereas methods used in Validation are Black Box Testing, White Box Testing and non-functional testing.
5) The target of Verification is the application and software architecture and specification, whereas the target of Validation is the actual product.
6) Verification checks whether the software conforms to the specifications or not, whereas Validation checks whether the software meets the requirements and expectations of the customer or not.

Advantages of V&V model

1) Testing starts in very early stages of product development which avoids downward flow of defects
which in turn reduces lot of rework

2) Testing is involved in every stage of product development

3) Deliverables are parallel/simultaneous – as the developers are building the SRS, the testers are testing the CRS and
also writing the ATP and ATC, and so on. Thus, as soon as the developers give the finished product to the testing team, the
testing team is ready with all the test plans and test cases, and thus the project is completed fast.

4) Total investment is less – as there is no downward flow of defects. Thus there is less or no re-work

Drawbacks of V&V model

1) Initial investment is more – because right from the beginning a testing team is needed

2) More documentation work – because of the test plans and test cases and all other documents
Applications of V&V model

We go for V&V model in the following cases,

1) for long term projects

2) for complex applications

3) when the customer is expecting a very high quality product within a stipulated time frame, because every
stage is tested and the developers & testing team are working in parallel

3) Prototype Model

Prototyping Model is a software development model in which a prototype is built, tested, and reworked
until an acceptable prototype is achieved. It also creates a base to produce the final system or software. It
works best in scenarios where the project's requirements are not known in detail. It is an iterative,
trial-and-error method which takes place between the developer and the client.

Prototyping Model has following six SDLC phases as follow:

Step 1: Requirements gathering and analysis


A prototyping model starts with requirement analysis. In this phase, the requirements of the system are
defined in detail. During the process, the users of the system are interviewed to know what their
expectations from the system are.

Step 2: Quick design


The second phase is a preliminary design or a quick design. In this stage, a simple design of the system is
created. However, it is not a complete design. It gives a brief idea of the system to the user. The quick
design helps in developing the prototype.

Step 3: Build a Prototype


In this phase, an actual prototype is designed based on the information gathered from quick design. It is
a small working model of the required system.

Step 4: Initial user evaluation


In this stage, the proposed system is presented to the client for an initial evaluation. It helps to find out
the strengths and weaknesses of the working model. Comments and suggestions are collected from the
customer and provided to the developer.
Step 5: Refining prototype
If the user is not happy with the current prototype, you need to refine the prototype according to the
user’s feedback and suggestions.

This phase will not be over until all the requirements specified by the user are met. Once the user is
satisfied with the developed prototype, a final system is developed based on the approved final
prototype.

Step 6: Implement Product and Maintain


Once the final system is developed based on the final prototype, it is thoroughly tested and deployed to
production. The system undergoes routine maintenance to minimize downtime and prevent
large-scale failures.

Advantages of the Prototyping Model


Here, are important pros/benefits of using Prototyping models:

● Users are actively involved in development. Therefore, errors can be detected in the initial stage
of the software development process.
● Missing functionality can be identified, which helps to reduce the risk of failure as Prototyping is
also considered as a risk reduction activity.
● Helps team members communicate effectively
● Customer satisfaction exists because the customer can feel the product at a very early stage.

Disadvantages of the Prototyping Model


Here, are important cons/drawbacks of prototyping model:

● Prototyping is a slow and time-consuming process.


● The cost of developing a prototype is a total waste, as the prototype is ultimately thrown away.
● Prototyping may encourage excessive change requests.
● Sometimes customers may not be willing to participate in the iteration cycle for a long duration.
● There may be far too many variations in software requirements each time the prototype is
evaluated by the customer.

● There are 3 types of s/w testing, namely,

● 1) White box testing – also called unit testing, structural testing, glass box testing, transparent
testing or open-box testing
● 2) Grey box testing
● 3) Black box testing – also called functional testing or behavioral testing

WHITE BOX TESTING (WBT)


● The entire WBT is done by developers. It is the testing of each and every line of code in the program.
Developers do WBT and send the s/w to the testing team. The testing team does black box testing,
checks the s/w against the requirements, finds any defects and sends them to the developers. The
developers fix the defects, do WBT again and send the s/w back to the testing team. Fixing a defect means
the defect is removed and the feature is working fine.

● Test engineers should not be involved in fixing the bug because,

● 1) if they spend time fixing the bug, they lose time to catch some more defects in the s/w
● 2) fixing a defect might break a lot of other features. Thus, testers should always identify defects
and developers should always be involved in fixing defects.

● WBT consists of the following tests :


● a) Path testing
● Write flow graphs and test all the independent paths.
● Writing flow graphs means representing the flow of the program, i.e. how each part of the
program is interlinked with the others.

Test all independent paths – consider a path from main( ) to function 7. Set the parameters and test whether
the program correctly follows that path. Similarly, test all other paths and fix defects.

b) Condition testing
Test all the logical conditions for both true and false values, i.e. we check both the “if” and the “else”
branch.
if (condition)    // true branch
{
    .......
}
else              // false branch
{
    .......
}
The program should work correctly in both cases: when the condition is true the if-block should execute,
and when it is false the else-block should execute.
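As a minimal Java sketch of condition testing (the method isWithinLimit, the amounts and the limit are assumptions for illustration, not part of the notes), both branches of the condition are exercised with separate inputs:

// Hypothetical example: exercising both branches of a condition.
public class ConditionTestingDemo {

    // Method under test: returns true only when the amount is within the limit.
    static boolean isWithinLimit(int amount, int limit) {
        if (amount <= limit) {   // true branch
            return true;
        } else {                 // false branch
            return false;
        }
    }

    public static void main(String[] args) {
        // Test the "if" (true) branch.
        System.out.println(isWithinLimit(500, 1000));   // expected: true
        // Test the "else" (false) branch.
        System.out.println(isWithinLimit(1500, 1000));  // expected: false
    }
}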
c) Loop testing
Test the loops (for, while, do-while, etc.) for all the cycles, check whether the terminating condition
works properly, and check whether the size of the loop counter is sufficient.
For example, consider a program in which the developer has written a loop of about 1 lakh (100,000) cycles:
{
while (count < 100000)
    .......
    .......
}
We cannot test this manually for all 1 lakh cycles, so we write a small test program that exercises the loop for us:
testA( )
{
    ......
}
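A minimal Java sketch of such a test program (the method runLoop and the cycle counts are assumptions, not from the notes): instead of stepping through 1 lakh cycles by hand, the test counts the iterations automatically and checks the terminating condition.

// Hypothetical example: verifying loop count and termination automatically.
public class LoopTestingDemo {

    // Method under test: runs a loop 'n' times and returns how often the body executed.
    static int runLoop(int n) {
        int executed = 0;
        for (int i = 0; i < n; i++) {
            executed++;          // body of the loop
        }
        return executed;
    }

    public static void main(String[] args) {
        // Check zero, one and a very large number of cycles instead of testing manually.
        System.out.println(runLoop(0) == 0);            // expected: true
        System.out.println(runLoop(1) == 1);            // expected: true
        System.out.println(runLoop(100000) == 100000);  // expected: true (1 lakh cycles)
    }
}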

Difference between White Box Testing and Black Box Testing

a) White Box Testing is done by developers; Black Box Testing is done by test engineers.

b) In White Box Testing we look into the source code and test the logic of the code; in Black Box Testing
we verify the functionality of the application against the requirement specifications.

c) White Box Testing requires knowledge of the internal design of the code; Black Box Testing does not
require knowledge of the internal design of the code.

d) White Box Testing requires knowledge of programming; Black Box Testing does not require
knowledge of programming.

BLACK BOX TESTING


It is verifying the functionality ( behavior ) against requirement specifications.

Types of Black Box Testing

1) FUNCTIONAL TESTING
Also called component testing. Testing each and every component thoroughly (rigorously) against the
requirement specifications is known as functional testing.
For example, let us consider that Citibank wants a s/w for banking purposes and asks the company Iflex to
develop this s/w. The s/w is something as shown below. When the user enters his valid user name and
password, he is taken to the homepage. Once inside the homepage, he clicks on amount transfer and the
below page is displayed. He enters his valid account number and then the account number to which the
money is to be transferred. He then enters the necessary amount and clicks on transfer. The amount must
be transferred to the other account number.
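A minimal Java sketch of a functional check for one component of this flow, the login page (the method login, the username "user1" and the password "secret" are assumptions for illustration, not part of the notes):

// Hypothetical example: functional check for the login component of the banking s/w.
public class LoginFunctionalTestDemo {

    // Component under test: returns the page shown after a login attempt.
    static String login(String username, String password) {
        if ("user1".equals(username) && "secret".equals(password)) {
            return "homepage";            // valid credentials take the user to the homepage
        }
        return "login-error";             // anything else stays on an error page
    }

    public static void main(String[] args) {
        // Functional testing: check the component rigorously against the requirement.
        System.out.println(login("user1", "secret").equals("homepage"));    // expected: true
        System.out.println(login("user1", "wrong").equals("login-error"));  // expected: true
        System.out.println(login("", "").equals("login-error"));            // expected: true
    }
}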

2) INTEGRATION TESTING
Testing the data flow or interface between two features is known as integration testing.
Take 2 features A & B. Send some data from A to B. Check if A is sending data and also check if B is
receiving data.
Now let us consider the example of banking s/w as shown in the figure above ( amount transfer ).

Scenario 1 – Login as A to amount transfer – send 100rs amount – message should be displayed saying
‘amount transfer successful’ – now logout as A and login as B – go to amount balance and check balance
– balance is increased by 100rs – thus integration test is successful.

Scenario 2 – also we check if amount balance has decreased by 100rs in A

Scenario 3 – click on transactions – in A and B, message should be displayed regarding the data and time
of amount transfer

Thus in Integration Testing, we must remember the following points,


1) Understand the application thoroughly, i.e. understand how each and every feature works. Also
understand how each and every feature is related or linked to the others.
2) Identify all possible scenarios
3) Prioritize all the scenarios for execution
4) Test all the scenarios
5) If you find defects, communicate defect report to developers
6) Do positive and negative integration testing (a small sketch follows this list). Positive – if there is a
total balance of 10,000, send Rs 1,000 and see if the amount transfer works fine; if it does, the test passes.
Negative – if there is a total balance of 10,000, send Rs 15,000 and see if the amount transfer happens; if it
does not happen, the test passes; if it does happen, there is a bug in the program, and a defect report is sent
to the development team for fixing.
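A minimal Java sketch of the positive and negative integration checks described in point 6 (the in-memory account store, the account names A and B and the balances are assumptions, not part of the notes):

// Hypothetical example: positive and negative integration checks for amount transfer.
import java.util.HashMap;
import java.util.Map;

public class IntegrationTestDemo {

    static Map<String, Integer> balances = new HashMap<>();   // stand-in account store

    static boolean transfer(String from, String to, int amount) {
        if (amount <= 0 || balances.getOrDefault(from, 0) < amount) {
            return false;                                      // transfer rejected
        }
        balances.put(from, balances.get(from) - amount);
        balances.put(to, balances.getOrDefault(to, 0) + amount);
        return true;
    }

    public static void main(String[] args) {
        balances.put("A", 10000);
        balances.put("B", 0);

        // Positive integration test: send 1000 from A to B and verify both balances changed.
        boolean positive = transfer("A", "B", 1000)
                && balances.get("A") == 9000
                && balances.get("B") == 1000;
        System.out.println("Positive scenario pass: " + positive);   // expected: true

        // Negative integration test: sending 15000 with only 9000 left must NOT go through.
        boolean negative = !transfer("A", "B", 15000)
                && balances.get("A") == 9000
                && balances.get("B") == 1000;
        System.out.println("Negative scenario pass: " + negative);   // expected: true
    }
}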
Let us consider the Gmail software as shown above. We first do functional testing for the username and
password fields and for the submit and cancel buttons. Then we do integration testing for the above. The
following scenarios can be considered,
Scenario 1 – Login as A and click on compose mail. We then do functional testing for the individual fields.
Now we click on send and also check save drafts. After we send a mail to B, we should check the sent
items folder of A to see if the sent mail is there. Now we logout as A and login as B, go to the inbox and
check if the mail has arrived.
Scenario 2 – We also do integration testing for spam folders. If a particular contact has been marked as
spam, then any mail sent by that user should go to the spam folder and not to the inbox.

We also do functional testing for each and every feature like – inbox,sent items etc

There are two types of integration testing,


Incremental Integration Testing :

Take two modules. Check if data flow between the two is working
fine. If it is, then add one more module and test again. Continue like
this. Incrementally add the modules and test the data flow between
the modules.
There are two ways,
a) Top-down Incremental Integration Testing
b) Bottom – up Incremental Integration Testing

Top-down Incremental Integration Testing:

Incrementally add the modules and test the data flow between the
modules. Make sure that the module that we are adding is child of
previous one.
Child3 is child of child2 and so on.

Bottom-up Integration Testing :


Testing starts from the last child up to the parent. Incrementally add the modules and test the data flow between
the modules. Make sure that the module you are adding is the parent of the previous one.
Non-incremental Integration Testing
We use this method when,
a) the data flow is very complex
b) it is difficult to identify which module is the parent and which is the child. It is also called the Big-Bang
method: combine all the modules at a shot and start testing the data flow between the modules.
The disadvantages of this are,
a) we may miss testing some of the interfaces
b) root cause analysis of a defect is difficult – identifying where the bug came from is a problem; we
don't know the origin of the bug.

STUB and DRIVER

A stub is a dummy module which just receives data and generates the expected results, but it behaves
like a real module. When data is sent from the real module A to stub B, B just accepts the data without
validating or verifying it and generates the expected results for the given data. A driver checks the data
from A and sends it to the stub, and also checks the expected data from the stub and sends it back to A.
The driver is the one which sets up the test environment, takes care of communication, analyses results
and sends the report. Stubs and drivers are used in incremental integration testing when some modules
are not yet available: top-down integration uses stubs in place of missing lower-level modules, and
bottom-up integration uses drivers in place of missing higher-level modules.
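A minimal Java sketch of a stub and a driver (the PaymentModule interface, the confirmation code and the order message are assumptions for illustration, not part of the notes): module A is real, module B is not yet built, so a stub stands in for B and the driver runs the check.

// Hypothetical sketch of a stub and a driver for incremental integration testing.
public class StubDriverDemo {

    // Real module A under test: asks a payment module for a confirmation code.
    static String moduleA(PaymentModule payment) {
        return "Order placed, confirmation: " + payment.confirm(500);
    }

    // Interface of the not-yet-developed module B.
    interface PaymentModule {
        String confirm(int amount);
    }

    // STUB: dummy replacement for module B that returns a canned, expected result
    // without doing any real validation or processing.
    static class PaymentStub implements PaymentModule {
        public String confirm(int amount) {
            return "OK-12345";   // fixed expected data
        }
    }

    // DRIVER: sets up the test, calls module A with the stub and reports the result.
    public static void main(String[] args) {
        String result = moduleA(new PaymentStub());
        System.out.println(result);
        System.out.println(result.equals("Order placed, confirmation: OK-12345")); // expected: true
    }
}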


3 SYSTEM TESTING
It is end-to-end testing wherein the testing environment is similar to the production environment. In
end-to-end testing, we navigate through all the features of the software and test whether the end
business flow / end feature works. We just test the end-to-end flow; we don't check the data flow or do
functional testing here.

Testing Environment and why it should be similar to the Production Environment: After the
requirements have been collected and the design of the s/w has been developed, the CRS is given to the
development team for coding and building the modules and the s/w. The development team stores all the
modules and the code it builds in a development server, which they name, say, REX (any name can be
given to the server). The development team builds module A of the s/w, does WBT, installs the s/w at
http://qa.citibank.com, zips the code of module A and stores it in REX. The team lead of the development
team then emails the zip file of module A to the test lead and tells him that module A has been built, WBT
has been performed and the testing team can start testing module A. The test lead first unzips module A
and installs it on the testing team's server, named QA. The test lead then calls in the test engineers in his
team and assigns them different parts of module A for testing – this is the first cycle. The testing team does
functional testing on A; let's say the testing team finds 100 bugs in module A. For each bug found, the
testing team prepares a report on the bug in a Word document, and each bug is assigned a number. Each
test engineer, when he finds a bug, immediately emails the bug report to the development team for defect
repair. The testing team takes 5 days to test module A. The developers read the defect reports, go through
the code and fix the problems – while the testing team is testing the s/w, the developers are fixing defects,
preparing another module and also doing WBT on the repaired program. Now the developers fix the
majority of the defects (say 70) and also build module B. The team lead of the development team installs
the s/w at the above website, zips the code of module B and sends a mail to the test lead containing the
code. The test lead first uninstalls the old s/w.

Whenever a new build comes in, the testing team concentrates on testing the new feature first, because
the probability of finding bugs there is higher – we expect more bugs in the new feature. As soon as a new
build comes in: a) test the new features b) do integration testing c) retest all the fixed defects d) test the
unchanged (old) features to make sure they are not broken e) in the new build, we retest only the fixed
defects f) each test engineer retests only his own bugs which were fixed; he is not responsible for bugs
found by other test engineers. We find new bugs in old features because – a) fixing bugs may lead to
other bugs b) new features (modules) were added c) we might have missed them in the earlier test cycle.
In the second cycle we do both functional and integration testing for A and B and find, say, 80 bugs; each
bug is sent in a report in Word format. The developers repair about 40 bugs and also repair 5 of the
remaining 30 bugs from the first test cycle. We carry on like this, do about 20 cycles and reach a stage
where the developers are developing the 20th build, say module L. Now the testing team gets a server
which is similar to the production server (the real-time server on which the s/w will run at the client's
place), installs the s/w there and starts system testing. We start System Testing when –
a) the minimum number of features are ready
b) the basic functionality of all the modules is working
c) the testing environment is similar to the production environment.
We say that the product is ready for release when –
a) all the features requested by the customer are ready
b) all the functionality, integration and end-to-end scenarios are working fine
c) there are no critical bugs
d) bugs are there, but all are minor and few in number
e) by this time, we would have met the deadline or the release date is very near.
The entire period, right from collecting requirements to delivering the s/w to the client, is known as a release.

ACCEPTANCE TESTING
Acceptance testing is done by end users. Here, they use the s/w for the business for a particular
period of time and check whether the s/w can handle all kinds of real-time business scenarios /
situations.
For Acceptance testing, let us consider the example shown below.

Fed-ex, with its requirements, asks Wipro to develop the s/w, and Wipro agrees to deliver the s/w in 2
releases as below: Release 1 – Jan 2010 to Sept 2010, 25 crores; Release 2 – Sept 2010 to Feb 2011,
18 crores. On September 8th, the test manager tells the project manager that there is a critical bug in the
application which will take another 5 days to fix. But the project manager says: just deliver the application –
by the time they implement it at Fed-ex, another 25 days will pass, so we can fix the bug in the meantime;
otherwise we will have to pay a penalty for each day after the agreed release date. Is this the real
scenario? No. What actually happens, and who really does the acceptance testing, we will see now in 3 cases.

Alpha Testing
Alpha Testing is a type of software testing performed to identify bugs before releasing the software
product to the real users or public. It is a type of acceptance testing. The main objective of alpha testing
is to refine the software product by finding and fixing the bugs that were not discovered through
previous tests.
This testing is referred to as an alpha testing only because it is done early on, near the end of the
development of the software, and before Beta Testing.
Who is involved in Alpha testing?
Alpha testing has two phases,

1. The first phase of testing is done by in-house developers. They use either hardware-assisted
debuggers or debugger software. The aim is to catch bugs quickly. Usually, while alpha testing, a
tester will come across plenty of bugs, crashes, missing features, and missing docs.
2. The second phase of alpha testing is done by software QA staff, for additional testing in an
environment. It involves both black box and white box testing.

Alpha Testing Process Example


Usually, alpha testing takes place in a test lab environment on a separate system. In this technique,
the project manager teams up with the developer to define specific goals for alpha testing, and to integrate
the results into evolving project plans.

Since alpha testing is done on a prototype, in-depth reliability testing, installation testing, and
documentation testing can be ignored.

A good alpha test must have a well-defined test plan with comprehensive test cases. Various activities involved in
alpha testing are logging defects, fixing defects, retesting, several iterations, etc.

Although the alpha build is not completely functional, the QA team must ensure that whatever is on hand
is thoroughly tested, especially the parts which have to be sent to the customer.

As a best practice, the QA team should gather all additional information early in the alpha stage, like
usability feedback, the look and feel of the software, the navigation scheme, etc.

Also, an e-mail to the customer citing all the details about the test is recommended, to make the customer
aware of the current condition of the software.

BETA TESTING

Beta Testing is one of the Acceptance Testing types, which adds value to the product as the end-user
(intended real user) validates the product for functionality, usability, reliability, and compatibility.

Inputs provided by the end users help in enhancing the quality of the product further and lead to its
success. This also helps in decision making about investing further in future products or in improving the
same product.

Since Beta Testing happens at the end user's side, it cannot be a controlled activity.

Purpose of Beta Testing


The points mentioned below can even be considered as the objectives for Beta Test and are very much
required to produce far better results for a product.

#1) Beta Test provides a complete overview of the true experience gained by the end users while
experiencing the product.
#2) It is performed by a wide range of users, and the reasons for which the product is being used vary
highly. Marketing managers focus on the target market's opinion of each and every feature, while usability
engineers / common real users focus on product usage and ease of use, technical users focus on the installation
and uninstallation experience, etc.
But the actual perception of the end users clearly shows why they need this product and how they are
going to use it.

#3) Real world compatibility for a product can be ensured to a greater extent through this testing, as a
great combination of real platforms is used here for testing on a wide range of devices, OS, Browsers,
etc.

Software Testing Life Cycle (STLC)


is a sequence of specific activities conducted during the testing process to ensure software quality goals
are met. STLC involves both verification and validation activities. Contrary to popular belief, Software
Testing is not just a single, isolated activity. It consists of a series of activities carried out
methodically to help certify your software product. STLC stands for Software Testing Life Cycle.
STLC Phases
There are following six major phases in every Software Testing Life Cycle Model (STLC Model):

1. Requirement Analysis
2. Test Planning
3. Test case development
4. Test Environment setup
5. Test Execution
6. Test Cycle closure
What is Entry and Exit Criteria in STLC?

● Entry Criteria: Entry Criteria gives the prerequisite items that must be completed before testing
can begin.
● Exit Criteria: Exit Criteria defines the items that must be completed before testing can be
concluded

You have Entry and Exit Criteria for all levels in the Software Testing Life Cycle (STLC)

In an ideal world, you will not enter the next stage until the exit criteria for the previous stage are met. But
practically this is not always possible. So, for this tutorial, we will focus on the activities and deliverables for
the different stages in the STLC life cycle. Let's look into them in detail.

Requirement Phase Testing


Requirement Phase Testing, also known as Requirement Analysis, is the phase in which the test team studies the
requirements from a testing point of view to identify testable requirements, and the QA team may
interact with various stakeholders to understand the requirements in detail. Requirements can be either
functional or non-functional. Automation feasibility for the testing project is also assessed in this stage.
Activities in Requirement Phase Testing

● Identify types of tests to be performed.


● Gather details about testing priorities and focus.
● Identify test environment details where testing is supposed to be carried out.
● Automation feasibility analysis (if required).
Test Planning in STLC
Test Planning in STLC is a phase in which a Senior QA manager determines the test plan strategy along
with efforts and cost estimates for the project. Moreover, the resources, test environment, test
limitations and the testing schedule are also determined. The Test Plan gets prepared and finalized in the
same phase.
Test Planning Activities

● Preparation of test plan/strategy document for various types of testing


● Test tool selection
● Test effort estimation
● Resource planning and determining roles and responsibilities.
● Training requirement

Deliverables of Test Planning

● Test plan /strategy document.


● Effort estimation document.

Test Case Development Phase


The Test Case Development Phase involves the creation, verification and rework of test cases & test
scripts after the test plan is ready. Initially, the Test data is identified then created and reviewed and then
reworked based on the preconditions. Then the QA team starts the development process of test cases
for individual units.
Test Case Development Activities

● Create test cases, automation scripts (if applicable)


● Review and baseline test cases and scripts
● Create test data (If Test Environment is available)

Deliverables of Test Case Development

● Test cases/scripts
● Test data

Test Environment Setup


Test Environment Setup decides the software and hardware conditions under which a work product is
tested. It is one of the critical aspects of the testing process and can be done in parallel with the Test
Case Development Phase. Test team may not be involved in this activity if the development team
provides the test environment. The test team is required to do a readiness check (smoke testing) of the
given environment.
Test Environment Setup Activities

● Understand the required architecture, environment set-up and prepare hardware and software
requirement list for the Test Environment.
● Setup test Environment and test data
● Perform smoke test on the build

Deliverables of Test Environment Setup

● Environment ready with test data set up


● Smoke Test Results.

Test Execution Phase


Test Execution Phase is carried out by the testers, in which testing of the software build is done based on
the prepared test plans and test cases. The process consists of test script execution, test script maintenance
and bug reporting. If bugs are reported, they are sent back to the development team for correction, and
retesting is performed.
Test Execution Activities

● Execute tests as per plan


● Document test results, and log defects for failed cases
● Map defects to test cases in RTM
● Retest the Defect fixes
● Track the defects to closure

Deliverables of Test Execution

● Completed RTM with the execution status


● Test cases updated with results
● Defect reports

Test Cycle Closure


Test Cycle Closure phase is the completion of test execution, which involves several activities like test
completion reporting, collection of test completion metrics and test results. Testing team members
meet, discuss and analyze testing artifacts to identify strategies that have to be implemented in the future,
taking lessons from the current test cycle. The idea is to remove process bottlenecks for future test cycles.
Test Cycle Closure Activities

● Evaluate cycle completion criteria based on Time, Test coverage, Cost, Software, Critical Business
Objectives, Quality
● Prepare test metrics based on the above parameters.
● Document the learning out of the project
● Prepare Test closure report
● Qualitative and quantitative reporting of quality of the work product to the customer.
● Test result analysis to find out the defect distribution by type and severity.

SMOKE TESTING
Testing the basic or critical features of an application before doing thorough or rigorous testing is
called smoke testing. It is also called Build Verification Testing, because we check whether the build is
broken or not. Whenever a new build comes in, we always start with smoke testing, because for every
new build there might be some changes which have broken a major feature (fixing a bug or
adding a new feature could have affected a major portion of the original software). In smoke testing, we
do only positive testing, i.e. we enter only valid data and not invalid data.
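A minimal Java sketch of such a build-verification pass (the three critical checks, the username "admin" and the password "admin123" are assumptions for illustration, not part of the notes): only quick positive checks on critical features are run, and the build is rejected if any of them fails.

// Hypothetical example: smoke test that checks only critical features with valid data.
public class SmokeTestDemo {

    // Stand-ins for the critical features of the build (hypothetical).
    static boolean applicationStarts()            { return true; }
    static boolean loginWorks(String u, String p) { return "admin".equals(u) && "admin123".equals(p); }
    static boolean homePageLoads()                { return true; }

    public static void main(String[] args) {
        // Positive checks only: valid data, critical features. If any fails, the build is
        // rejected and no further (functional/integration/system) testing is done on it.
        boolean buildOk = applicationStarts()
                && loginWorks("admin", "admin123")
                && homePageLoads();
        System.out.println(buildOk ? "Smoke test passed: continue detailed testing"
                                   : "Smoke test failed: reject the build");
    }
}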

From the above diagram, it may be confusing when we actually do smoke testing. We have to understand
that smoke testing is done at every testing level, before proceeding deeper into the testing. The example
below will make it clearer when to do smoke testing. Developers develop the application and give it for
testing. The testing team starts with FT (functional testing); suppose we are given 5 days for FT. On the
1st day, we check one module, and on the 2nd day we go for another module. On the 5th day, we find a
critical bug; when it is given to the developer, he says it will take another 3 days to fix it. Then we have to
stretch the release date by an extra 3 days.

How smoke testing is covered

Here we are doing smoke testing in each phase.


What is Sanity Testing?
Generally, Sanity testing is performed on stable builds, and it is also known as a variant of regression
testing.
Sanity testing is performed when we receive a software build (with minor code changes) from the
development team. It is a checkpoint to assess whether testing for the build can proceed or not.

In other words, we can say that sanity testing is performed to make sure that all the reported defects have
been fixed and no new issues have come into existence because of the modifications.

Sanity testing also ensures that the modifications in the code or functions do not affect the associated
modules. Consequently, it is applied only on the connected modules that can be impacted.

Sanity Testing Process


The main purpose of performing sanity testing is to check for incorrect outcomes or defects introduced in
the modified components, and also to ensure that the newly added features do not affect the
functionality of the existing features.

Therefore, we need to follow the below steps to implement the sanity testing process gradually:

o Identification
o Evaluation
o Testing

Step1: Identification

The first step in the sanity testing process is Identification, where we identify the newly added
components and features, as well as the modifications introduced in the code while fixing the bug.

Step2: Evaluation

After completing the identification step, we analyze the newly implemented components, attributes and
modifications to check that they work as intended and as per the given requirements.

Step3: Testing

Once the identification and evaluation steps are successfully completed, we move to the next step,
which is testing.
In this step, we inspect and assess all the linked parameters, components and essentials of the
analyzed attributes and modifications to make sure that they are working fine.
If all the above steps pass, the build can be subjected to more detailed and exhaustive
testing, and the release can be passed on for thorough testing.
Test case Design Technique or BBT method

Test Case Design Techniques are,

• Error Guessing

• Equivalence Partitioning

• Boundary Value Analysis (BVA)

Error Guessing : guessing where errors are likely to occur. If the Amount text field asks for only integers, we
enter all other kinds of values, like decimals, special characters, negative values, blanks, etc., and check the
behaviour for all the values mentioned above.
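A minimal Java sketch of error guessing on such an integer-only Amount field (the validator isValidAmount and the guessed values are assumptions, not part of the notes):

// Hypothetical example: error guessing on an integer-only Amount field.
public class ErrorGuessingDemo {

    // Validator under test: accepts only positive whole-number amounts.
    static boolean isValidAmount(String input) {
        try {
            return Integer.parseInt(input.trim()) > 0;
        } catch (NumberFormatException e) {
            return false;                 // non-integer input rejected
        }
    }

    public static void main(String[] args) {
        // Guessed "error-prone" inputs: decimal, special character, negative, blank, plus one valid value.
        String[] guesses = { "10.5", "@#$", "-100", "", "100" };
        for (String value : guesses) {
            System.out.println("\"" + value + "\" -> " + isValidAmount(value));
        }
        // Only "100" should print true; every other guess should be rejected.
    }
}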

Equivalence Class Partitioning : it reduces the test data to a manageable level.

Ex: consider an input field in which 1 to 50 characters are allowed; here we have 2 classes of inputs, valid
and invalid.

Valid input: enter data of length between 1 and 50 characters.

Invalid input : enter data of length above 50 or less than 1.
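A minimal Java sketch of equivalence class partitioning for this 1-50 character field (the validator isValidLength and the representative lengths 25, 0 and 60 are assumptions, not part of the notes): one representative value is picked from each class instead of testing every possible length.

// Hypothetical example: equivalence class partitioning for a 1-50 character field.
public class EquivalencePartitioningDemo {

    // Validator under test: length must be between 1 and 50 characters.
    static boolean isValidLength(String input) {
        return input.length() >= 1 && input.length() <= 50;
    }

    // Helper to build a string of a given length.
    static String ofLength(int n) {
        return "a".repeat(n);
    }

    public static void main(String[] args) {
        System.out.println(isValidLength(ofLength(25)));  // valid class (1-50): expected true
        System.out.println(isValidLength(ofLength(0)));   // invalid class (<1): expected false
        System.out.println(isValidLength(ofLength(60)));  // invalid class (>50): expected false
    }
}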

Boundary Value Analysis : sometimes the application behaves differently at the boundary conditions, so
we do this analysis with the formula below:

Min-1, Min, a mid value, Max, Max+1

Consider an input field in which 1 to 50 characters are allowed; the boundary value analysis is as follows.

As per the formula, the input values are

0, 1, 45, 50, 51.
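A minimal Java sketch exercising exactly these boundary values for the same field (the validator isValidLength is an assumption, not part of the notes):

// Hypothetical example: boundary value analysis for a 1-50 character field.
public class BoundaryValueDemo {

    // Validator under test: length must be between 1 and 50 characters.
    static boolean isValidLength(String input) {
        return input.length() >= 1 && input.length() <= 50;
    }

    public static void main(String[] args) {
        int[] boundaries = { 0, 1, 45, 50, 51 };   // Min-1, Min, a mid value, Max, Max+1
        for (int n : boundaries) {
            String data = "a".repeat(n);
            System.out.println(n + " characters -> " + isValidLength(data));
        }
        // Expected: 0 -> false, 1 -> true, 45 -> true, 50 -> true, 51 -> false
    }
}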

Test Scenario
What needs to be tested at a higher level is a test scenario.
Ex: Check the login functionality of Gmail.
Test Cases
How to test a particular functionality is described by test cases. Refer to the excel sheet for an example.
Test Suite:
A collection of test cases is a test suite.

Defect:
The difference between the expected result and the actual result is known as a defect.
Regression Testing:
Regression testing is a software testing practice that ensures an application still functions as expected
after any code changes, updates, or improvements

Types:

1) Unit Regression Testing:

In this, we test only the changed unit and not the impact area, even though the change may affect other
components of the same module.
Example 1
In the application below, in the first build, the developer develops the Search button so that it accepts
1-15 characters. The test engineer then tests the Search button with the help of the test case design
techniques.

Now, the client asks for a modification in the requirement: the Search button should accept 1-35
characters. The test engineer will test only the Search button, to verify that it takes 1-35 characters, and
does not check any other feature of the first build.
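A minimal Java sketch of retesting only the changed unit after the limit changes from 15 to 35 characters (the validator isValidSearchTerm is an assumption, not part of the notes):

// Hypothetical example: unit regression test for the modified Search button limit.
public class UnitRegressionDemo {

    // Changed unit: after the new requirement, the search term may be 1-35 characters.
    static boolean isValidSearchTerm(String term) {
        return term.length() >= 1 && term.length() <= 35;
    }

    public static void main(String[] args) {
        // Retest only the changed unit against the new 1-35 character rule.
        System.out.println(isValidSearchTerm("a".repeat(1)));   // expected: true
        System.out.println(isValidSearchTerm("a".repeat(35)));  // expected: true
        System.out.println(isValidSearchTerm("a".repeat(36)));  // expected: false (above new limit)
        System.out.println(isValidSearchTerm(""));              // expected: false (below minimum)
    }
}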

2) Regional Regression
In this, we test the modification along with the impacted areas or regions; this is called Regional
Regression testing. Here, we test the impacted area because if there are dependent modules, the change
will affect the other modules also.
For example:
In the image below, we have four different modules, Module A, Module B, Module C, and Module D,
which are provided by the developers for testing during the first build. Now, the test engineer identifies
bugs in Module D. The bug report is sent to the developers, the development team fixes those defects and
sends the second build.

In the second build, the previous defects are fixed. Now the test engineer understands that the bug
fixing in Module D has impacted some features in Module A and Module C. Hence, the test engineer
first tests Module D, where the bug has been fixed, and then checks the impacted areas in Module A
and Module C. Therefore, this testing is known as Regional Regression testing.

3)Full Regression :

During the second and third releases of the product, the client asks for 3-4 new features to be added, and
also some defects from the previous release need to be fixed. The testing team then does the Impact
Analysis and identifies that the above modifications require us to test the entire product.
When we perform Full Regression testing?

We perform FRT when we have the following conditions:


o When the modification happens in the root/source file of the product. For example, the JVM is the
root of a JAVA application, and if any change happens in the JVM, then the entire JAVA program
has to be retested.
o When we have to perform an n-number of changes.

Note:
Regional regression testing is the ideal approach to regression testing, but the issue is that we may miss
lots of defects while performing only Regional Regression testing.
We solve this issue with the help of the following approach:
o When the application is given for testing, the test engineer will test the first 10-14 cycles using RRT
(Regional Regression Testing).
o Then, for the 15th cycle, we do FRT. Again, for the next 10-15 cycles, we do Regional Regression
testing, and for the 31st cycle, we do Full Regression testing, and we continue like this.
o But for the last ten cycles of the release, we perform only complete (full) regression testing.

Therefore, if we follow the above approach, we can catch more defects.


The drawbacks of doing regression testing manually, repeatedly:
o Productivity decreases.
o It is a difficult job to do.
o There is no consistency in test execution.
o The test execution time also increases.

Hence, we go for automation to get over these issues; when we have an n-number of regression test
cycles, we go for the automated regression testing process.
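A minimal Java sketch of bundling repetitive regression checks into a small automated suite that can be rerun on every build (the two validators and the check names are assumptions, not part of the notes):

// Hypothetical example: a tiny automated regression suite rerun on every build.
import java.util.LinkedHashMap;
import java.util.Map;

public class RegressionSuiteDemo {

    static boolean isValidSearchTerm(String term) {            // feature under regression
        return term.length() >= 1 && term.length() <= 35;
    }

    static boolean isValidAmount(String input) {               // another feature under regression
        try {
            return Integer.parseInt(input.trim()) > 0;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Each entry: check name -> pass/fail result. Rerunning this main is one regression cycle.
        Map<String, Boolean> results = new LinkedHashMap<>();
        results.put("search accepts 35 chars", isValidSearchTerm("a".repeat(35)));
        results.put("search rejects 36 chars", !isValidSearchTerm("a".repeat(36)));
        results.put("amount accepts 100", isValidAmount("100"));
        results.put("amount rejects -5", !isValidAmount("-5"));

        results.forEach((name, passed) ->
                System.out.println((passed ? "PASS " : "FAIL ") + name));
    }
}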

Retesting:

Retesting is the process of re-checking the specific test cases that were found to have bugs in the final execution.
Generally, testers find these bugs while testing the software application and assign them to the developers
to fix. The developers then fix the bugs and assign them back to the testers for verification. This
continuous process is called Retesting.
