Manual Console
> Software Development Life Cycle (SDLC) is a systematic approach to developing software. It is a process
followed by software developers, and since software testing is an integral part of software development,
it is equally important for software testers. The SDLC phases are:
i) Requirement Gathering
ii) Analysis
iii) Design
iv) Development
v) Testing
1) Requirement Gathering:
Requirement Gathering is the most important phase in the software development life cycle. The
Business Analyst collects the requirements from the Customer/Client as per the client's business needs,
documents them in the Business Requirement Specification (BRS) and provides the document to the
Development Team.
2) Analysis:
Once the Requirement Gathering is done, the next step is to define and document the product
requirements and get them approved by the customer. This is done through the SRS (Software
Requirement Specification) document. The SRS consists of all the product requirements to be designed
and developed during the project life cycle.
3) Design
> In the Design phase, Senior Developers and Architects define the architecture of the software product
to be developed. It has two parts: HLD (High Level Design), also called Global Design, and LLD
(Low Level Design), also called Detailed Design.
> High Level Design (HLD) is the overall system design. It covers the system architecture and database
design and describes the relation between the various modules and functions of the system.
> Low Level Design (LLD) is the detailed system design. It covers how each and every feature in the
product should work and how every component should work.
> The outcome of this phase is the High Level Design document and the Low Level Design document,
which work as input to the next phase, Coding.
4) Development
Developers (seniors, juniors and freshers) are involved in this phase. This is the phase where we start
building the software and writing the code for the product.
> The outcome of this phase is the Source Code Document (SCD) and the developed product.
5) Testing
> Once the software is complete, it is deployed in the testing environment. The testing team starts
testing it (either manually or using automated test tools, depending on the process defined in the STLC).
> After successful testing, the product is delivered (deployed) to the customer for their use. Deployment
is done by the Deployment/Implementation engineers. Once the customers start using the developed
system, the actual problems come up and need to be solved from time to time.
> Fixing the issues found by the customer comes under the maintenance phase. 100% testing is not
possible, because the way testers test the product is different from the way customers use the product.
SDLC Models
i) Waterfall Model
ii) V&V model
iii) Prototype Model
1) Waterfall Model:
Advantages of Waterfall Model:
a) Simple and easy to use
b) Easy to manage due to the rigidity of the model – each phase has specific deliverables and a
review process.
c) Works well for smaller projects where requirements are very well understood
Disadvantages of Waterfall Model:
a) Once an application is in the testing stage, it is very difficult to go back and change something
b) No working software is produced until late during the life cycle
c) High amounts of risk
d) Not a good model for complex and object-oriented projects
e) Poor model for long and ongoing projects
2) V&V Model
1) In the first stage, the client sends the CRS (Customer Requirement Specification) to both the
developers and the testers. The developers translate the CRS into the SRS. The testers do the
following on the CRS:
1. Review the CRS, looking for
a. missing requirements
b. wrong requirements
2. Write the Acceptance Test Plan (ATP)
3. Write Acceptance Test Cases (ATC)
The testing team reviews the CRS, identifies mistakes and defects and sends them to the development
team for correction. The development team updates the CRS and continues developing the SRS
simultaneously.
2) In the next stage, the SRS is sent to the testing team for review, and the developers start building the
HLD of the product. The testers do the following on the SRS:
1. Review the SRS – the testing team reviews every detail of the SRS to check whether the CRS has been
converted properly into the SRS.
3) In the next stage, the developers start building the LLD of the product. The testers do the following
on the HLD:
1. Review the HLD
4) In the next stage, the developers start with the coding of the product. The testing team carries out
the following tasks:
1. Review the LLD
After coding, the developers themselves carry out unit testing, also known as white-box testing. Here
the developers check each and every line of code to see whether the code is correct. After white-box
testing, the s/w product is sent to the testing team, which tests the product and carries out functional
testing, integration testing, system testing and acceptance testing, and finally delivers the product to
the client.
Types of verification:
1. Peer Reviews –
The easiest and most informal way of reviewing documents or programs/software in order to find
faults during the verification process is the peer-review method. In this method, we give the document
or software program to others and ask them to review it; we expect their views about the quality of
our product and also expect them to find faults in the program/document. The activities involved in
this method may include SRS document verification, SDD verification, and program verification. The
reviewers may also prepare a short report on their observations or findings.
2. Walk-through –
Walk-throughs are a more formal and systematic verification method than peer review. In a
walkthrough, the author of the software document presents the document to other persons, who can
range from 2 to 7 in number. Participants are not expected to prepare anything; the presenter is
responsible for preparing the meeting, and the document(s) is/are distributed to all participants. At the
walk-through meeting, the author introduces the content to make the participants familiar with it, and
all the participants are free to ask questions and raise doubts.
3. Inspections –
Inspections are the most structured and most formal type of verification method. A team of three to
six participants is constituted, led by an impartial moderator. Every person in the group participates
openly and actively and follows the rules about how such a review is to be conducted. Everyone gets
time to express their views, potential faults, and critical areas. After the meeting, a final report is
prepared by the moderator after incorporating the necessary suggestions.
4. Desk Checking-
Desk checking is a manual (dry-run) review of an application's or program's code, carried out at one's
desk without executing it.
Verification vs Validation:
● Verification includes checking documents, design, code and programs; Validation includes testing and
validating the actual product.
● Verification is static testing; Validation is dynamic testing.
● Verification does not include the execution of the code; Validation includes the execution of the code.
● Methods used in verification are reviews, walkthroughs, inspections and desk-checking; methods used
in validation are Black Box Testing, White Box Testing and non-functional testing.
● The goal of verification is the application and software architecture and specification; the goal of
validation is the actual product.
● Verification checks whether the software conforms to the specification; Validation checks whether the
software meets the requirements and expectations of the customer.
Advantages of V&V model:
1) Testing starts in the very early stages of product development, which avoids the downward flow of
defects and in turn reduces a lot of rework.
2) Deliverables are parallel/simultaneous – while the developers are building the SRS, the testers are
testing the CRS and also writing the ATP and ATC, and so on. Thus, by the time the developers hand the
finished product to the testing team, the testing team is ready with all the test plans and test cases, and
the project is completed faster.
3) Total investment is less – as there is no downward flow of defects, there is little or no re-work.
Disadvantages of V&V model:
1) Initial investment is more – because a testing team is needed right from the beginning.
2) More documentation work – because of the test plans, test cases and all the other documents.
Applications of V&V model
When the customer expects a very high quality product within a stipulated time frame, because every
stage is tested and the developers and the testing team work in parallel.
3) Prototype Model
The Prototyping Model is a software development model in which a prototype is built, tested, and
reworked until an acceptable prototype is achieved. It also creates a base to produce the final system or
software. It works best in scenarios where the project's requirements are not known in detail. It is an
iterative, trial-and-error method which takes place between the developer and the client.
This phase is not over until all the requirements specified by the user are met. Once the user is satisfied
with the developed prototype, a final system is developed based on the approved final prototype.
Advantages of Prototype Model:
● Users are actively involved in development. Therefore, errors can be detected in the initial stage
of the software development process.
● Missing functionality can be identified, which helps to reduce the risk of failure, as prototyping is
also considered a risk-reduction activity.
● Helps team members communicate effectively.
● Customer satisfaction exists because the customer can feel the product at a very early stage.
White Box Testing techniques:
a) Path testing
Test all independent paths – consider a path from main( ) to function 7. Set the parameters and test
whether the program works correctly along that path. Similarly, test all the other paths and fix the
defects.
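As an illustration only (not from the original notes), here is a minimal sketch in Java, assuming a hypothetical classifyAmount method and JUnit 5; each test forces the program down one independent path:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical module with three independent paths (one per branch).
class AmountClassifier {
    static String classifyAmount(int amount) {
        if (amount <= 0) {
            return "INVALID";         // path 1
        } else if (amount > 100000) {
            return "NEEDS_APPROVAL";  // path 2
        }
        return "OK";                  // path 3
    }
}

class AmountClassifierPathTest {
    @Test void path1_invalidAmount() { assertEquals("INVALID", AmountClassifier.classifyAmount(0)); }
    @Test void path2_largeAmount()   { assertEquals("NEEDS_APPROVAL", AmountClassifier.classifyAmount(200000)); }
    @Test void path3_normalAmount()  { assertEquals("OK", AmountClassifier.classifyAmount(500)); }
}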
b) Condition testing
Test all the logical conditions for both true and false values, i.e., we check both the "if" and the "else"
branches.
if (condition)      // true branch
{
…….
…….
}
else                // false branch
{
…..
…..
}
The program should work correctly for both outcomes, i.e., when the condition is true the "if" block
should execute, and when it is false the "else" block should execute.
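As an illustrative sketch (the canTransfer rule is hypothetical and JUnit 5 is assumed), the tests below exercise both the true and the false outcome of the condition:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical rule from the amount-transfer example: a transfer is allowed
// only when the amount is positive AND does not exceed the balance.
class TransferRule {
    static boolean canTransfer(int amount, int balance) {
        if (amount > 0 && amount <= balance) {
            return true;   // "if" (condition true) branch
        } else {
            return false;  // "else" (condition false) branch
        }
    }
}

class TransferRuleConditionTest {
    @Test void conditionTrue()                 { assertTrue(TransferRule.canTransfer(100, 500)); }
    @Test void conditionFalse_negativeAmount() { assertFalse(TransferRule.canTransfer(-10, 500)); }
    @Test void conditionFalse_amountTooLarge() { assertFalse(TransferRule.canTransfer(600, 500)); }
}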
c) Loop testing
Test the loops (for, while, do-while, etc.) for all the cycles, and also check whether the terminating
condition works properly and whether the loop size is sufficient.
For example, consider a program in which the developer has written a loop of about 1 lakh (100,000)
cycles:
while (count < 100000)
{
…….
…….
}
We cannot test this manually for all 100,000 cycles, so we write a small test program:
Test A
{
……
}
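One way such a small test program could look (a sketch only – the summing loop is hypothetical and JUnit 5 is assumed): instead of running all 100,000 cycles by hand, we drive the loop with boundary cycle counts – 0, 1, a typical value and the maximum:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical loop under test: adds a fixed deposit 'cycles' times.
class InterestLoop {
    static long totalAfterCycles(int cycles, long depositPerCycle) {
        long total = 0;
        for (int i = 0; i < cycles; i++) {   // must terminate after exactly 'cycles' iterations
            total += depositPerCycle;
        }
        return total;
    }
}

class InterestLoopTest {
    @Test void zeroIterations()    { assertEquals(0, InterestLoop.totalAfterCycles(0, 10)); }
    @Test void oneIteration()      { assertEquals(10, InterestLoop.totalAfterCycles(1, 10)); }
    @Test void typicalIterations() { assertEquals(1_000, InterestLoop.totalAfterCycles(100, 10)); }
    @Test void maxIterations()     { assertEquals(1_000_000, InterestLoop.totalAfterCycles(100_000, 10)); }
}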
1) Look into the source code and test the logic of the code.
2) Verify the functionality of the application against the requirement specifications.
1) FUNCTIONAL TESTING
Also called component testing. Testing each and every component thoroughly (rigorously) against the
requirement specifications is known as functional testing.
For example, let us consider that Citibank wants a s/w for banking purposes and asks the company Iflex
to develop this s/w. The s/w works as follows: when the user enters his valid user name and password,
he is taken to the homepage. Once inside the homepage, he clicks on amount transfer and the
amount-transfer page is displayed. He enters his valid account number and then the account number to
which the money is to be transferred. He then enters the necessary amount and clicks on transfer. The
amount must be transferred to the other account number.
2) INTEGRATION TESTING
Testing the data flow or interface between two features is known as integration testing.
Take two features, A and B. Send some data from A to B. Check whether A is sending the data and also
whether B is receiving it.
Now let us consider the example of the banking s/w described above (amount transfer).
Scenario 1 – Login as A and go to amount transfer – send Rs. 100 – a message should be displayed saying
'amount transfer successful' – now logout as A and login as B – go to amount balance and check the
balance – the balance has increased by Rs. 100 – thus the integration test is successful.
Scenario 3 – click on transactions – in both A and B, a message should be displayed regarding the date
and time of the amount transfer.
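As an illustration, Scenario 1 could be written as an automated integration check roughly like the sketch below (the Account and TransferService classes are hypothetical and JUnit 5 is assumed): feature A sends the data and we verify that feature B received it.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical modules: Account (balance feature) and TransferService (amount-transfer feature).
class Account {
    private long balance;
    Account(long openingBalance) { this.balance = openingBalance; }
    long getBalance()            { return balance; }
    void credit(long amount)     { balance += amount; }
    void debit(long amount)      { balance -= amount; }
}

class TransferService {
    // Feature A sends the data (the amount) to feature B (the receiving account).
    void transfer(Account from, Account to, long amount) {
        from.debit(amount);
        to.credit(amount);
    }
}

class AmountTransferIntegrationTest {
    @Test
    void scenario1_transferIncreasesReceiverBalanceByRs100() {
        Account a = new Account(1000);
        Account b = new Account(500);
        new TransferService().transfer(a, b, 100);
        assertEquals(900, a.getBalance());  // A sent the data (amount debited)
        assertEquals(600, b.getBalance());  // B received it (balance increased by Rs. 100)
    }
}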
Now consider the example of a mail application such as Gmail.
Scenario 1 – Login as A and click on compose mail. We then do functional testing for the individual
fields. Now we click on send and also check save drafts. After we send the mail to B, we check the sent
items folder of A to see whether the sent mail is there. Now we logout as A and login as B, go to the
inbox and check whether the mail has arrived.
Scenario 2 – We also do integration testing for the spam folder. If a particular contact has been marked
as spam, then any mail sent by that user should go to the spam folder and not to the inbox.
We also do functional testing for each and every feature, such as inbox, sent items, etc.
Incremental Integration Testing:
Take two modules. Check whether the data flow between the two is working fine. If it is, then add one
more module and test again. Continue like this: incrementally add the modules and test the data flow
between the modules.
There are two ways:
a) Top-down Incremental Integration Testing
b) Bottom-up Incremental Integration Testing
In the top-down approach, we incrementally add the modules and test the data flow between them,
making sure that the module being added is a child of the previous one – Child3 is a child of Child2,
and so on.
A stub is a dummy module which just receives data and generates the expected data, but it behaves like
a real module. When data is sent from the real module A to stub B, B simply accepts the data without
validating or verifying it and generates the expected results for the given data. The function of a driver
is to check the data from A and send it to the stub, and to check the expected data from the stub and
send it back to A. The driver is the one which sets up the test environment, takes care of the
communication, analyses the results and sends the report.
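A minimal sketch in plain Java of how a stub and a driver fit together (all names here are hypothetical, for illustration only):

// Hypothetical interface that the real module B would implement.
interface ModuleB {
    String process(String data);
}

// Stub: a dummy ModuleB that accepts the data without validating it
// and simply returns a pre-decided, expected result.
class ModuleBStub implements ModuleB {
    public String process(String data) {
        return "EXPECTED_RESULT";
    }
}

// Real module under test: sends data to whatever ModuleB it is given.
class ModuleA {
    String sendTo(ModuleB b, String data) {
        return b.process(data);
    }
}

// Driver: sets up the test environment, passes data from A to the stub,
// checks the result and reports it.
class ModuleADriver {
    public static void main(String[] args) {
        ModuleA a = new ModuleA();
        String result = a.sendTo(new ModuleBStub(), "test-data");
        System.out.println("ModuleA -> stub returned: " + result
                + (result.equals("EXPECTED_RESULT") ? " [PASS]" : " [FAIL]"));
    }
}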
Testing Environment, and why it should be similar to the Production Environment:
After the
requirements have been collected and the design of the s/w has been developed, the CRS is given to the
development team for coding and building of the modules and the s/w. The development team stores
all the modules and the code it builds in a development server, which they name REX (any name can be
given to the server).
The development team builds module A of the s/w – does WBT – installs the s/w at
http://qa.citibank.com – zips the code of module A and stores it in REX. The team lead of the
development team then emails the zip file of module A to the test lead and tells him that module A has
been built, that WBT has been performed and that the testing team can start testing module A. The test
lead first unzips module A and installs it in the testing team's server, named QA. The test lead then calls
in the test engineers in his team and assigns them different parts of module A for testing – this is the
first cycle.
The testing team does functional testing on A. Let's say the testing team finds 100 bugs in module A.
For each bug found, the testing team prepares a report in a Word document, and each bug is assigned a
number. Each test engineer, when he finds a bug, immediately emails the bug report to the
development team for defect repair. The testing team takes 5 days to test module A. The developers
read the defect reports, go through the code and fix the problems – so while the testing team is testing
the s/w, the developers are fixing defects, preparing another module and also doing WBT on the
repaired program.
Now the developers fix the majority of the defects (say 70) and also build module B. The team lead of
the development team installs the s/w at the above website, zips the code of module B and sends a
mail to the test lead containing the code. The test lead first uninstalls the old s/w and installs the new
build.
Whenever a new build comes in, the testing team concentrates on testing the new feature first, because
the probability of finding bugs there is higher – we expect more bugs in the new feature. As soon as a
new build comes in, we:
a) Test the new features
b) Do integration testing
c) Retest all the fixed defects
d) Test the unchanged (old) features to make sure they are not broken
e) In the new build, retest only the fixed defects
f) Each test engineer retests only his own bugs which were fixed; he is not responsible for the bugs
found by other test engineers.
We find new bugs in old features because:
a) fixing the bugs may lead to other bugs
b) new features (modules) were added
c) we might have missed them in the earlier test cycle.
In the second cycle, we do both functional and integration testing for A and B – we find 80 bugs – each
bug is sent in a report in Word format – the developers repair about 40 of these bugs and also repair 5
of the 30 bugs remaining from the first test cycle. Like this we carry on, doing about 20 cycles, and reach
a stage where the developers are developing the 20th build, say module L. Now the testing team gets a
server which is similar to the production server (the real-time server on which the s/w will run at the
client's place), installs the s/w there and starts off with system testing.
We start System Testing when:
a) the minimum number of features is ready
b) the basic functionality of all the modules is working
c) the testing environment is similar to the production environment.
We say that the product is ready for release when:
a) all the features requested by the customer are ready
b) all the functionality, integration and end-to-end scenarios are working fine
c) there are no critical bugs
d) bugs exist, but they are all minor and few in number
e) by this time, we would have met the deadline or the release date is very near.
The entire period, right from collecting the requirements to delivering the s/w to the client, is known as
a release.
ACCEPTANCE TESTING
Acceptance testing is done by the end users. Here, they use the s/w for their business for a particular
period of time and check whether the s/w can handle all kinds of real-time business scenarios /
situations.
For acceptance testing, let us consider the example below. Fed-ex, with its requirements, asks Wipro to
develop the s/w, and Wipro agrees to deliver the s/w in 2 releases (for example, a first release worth
25 crores from Jan 2010 to Sept 2010, and a second release worth 18 crores from Sept 2010 to
Feb 2011). On September 8th, the test manager tells the project manager that there is a critical bug in
the application which will take another 5 days to fix. But the project manager says: just deliver the
application – by the time they implement it at Fed-ex, another 25 days will pass, so we can fix the bug in
that time, otherwise we will have to pay a penalty for each day after the agreed release date. Is this the
real scenario? No. What really happens, and who really does the acceptance testing, we will see now in
3 cases.
Alpha Testing
Alpha Testing is a type of software testing performed to identify bugs before releasing the software
product to the real users or the public. It is a type of acceptance testing. The main objective of alpha
testing is to refine the software product by finding and fixing the bugs that were not discovered through
previous tests.
This testing is referred to as alpha testing because it is done early on, near the end of the development
of the software, and before Beta Testing.
Who is involved in Alpha testing?
Alpha testing has two phases:
1. The first phase of testing is done by in-house developers. They use either hardware-assisted
debuggers or debugger software. The aim is to catch bugs quickly. Usually during alpha testing, a
tester will come across plenty of bugs, crashes, missing features, and missing documentation.
2. The second phase of alpha testing is done by the software QA staff, for additional testing in an
environment. It involves both black-box and white-box testing.
As alpha testing is done on a prototype, in-depth reliability testing, installation testing, and
documentation testing can be ignored.
A good alpha test must have a well-defined test plan with comprehensive test cases. The activities
involved in alpha testing are logging defects, fixing defects, retesting, several iterations, etc.
Although the product under alpha testing is not completely functional, the QA team must ensure that
whatever is on hand is thoroughly tested, especially those parts which have to be sent to the customer.
For best practice, the QA team should gather all additional information early, such as usability feedback
at the alpha stage, the look and feel of the software, the navigation scheme, etc.
Also, an e-mail to the customer citing all the details about the test is recommended, to make the
customer aware of the current condition of the software.
BETA TESTING
Beta Testing is one of the acceptance testing types, in which the end user (the intended real user)
validates the product for functionality, usability, reliability, and compatibility, thereby adding value to
the product.
The inputs provided by the end users help in enhancing the quality of the product further and lead to
its success. They also help in deciding whether to invest further in future products or in improving the
same product.
Since beta testing happens at the end user's side, it cannot be a controlled activity.
#1) Beta testing provides a complete overview of the true experience gained by the end users while
using the product.
#2) It is performed by a wide range of users, and the reasons for which the product is used vary highly.
Marketing managers focus on the target market's opinion of each and every feature, usability
engineers / common real users focus on product usage and ease of use, technical users focus on the
installation and uninstallation experience, etc.
But the actual perception of the end users clearly shows why they need this product and how they are
going to use it.
#3) Real-world compatibility of the product can be ensured to a greater extent through this testing, as a
great combination of real platforms is used for testing on a wide range of devices, operating systems,
browsers, etc.
The Software Testing Life Cycle (STLC) consists of the following phases:
1. Requirement Analysis
2. Test Planning
3. Test Case Development
4. Test Environment Setup
5. Test Execution
6. Test Cycle Closure
What is Entry and Exit Criteria in STLC?
● Entry Criteria: Entry Criteria gives the prerequisite items that must be completed before testing
can begin.
● Exit Criteria: Exit Criteria defines the items that must be completed before testing can be
concluded
You have Entry and Exit Criteria for all levels in the Software Testing Life Cycle (STLC)
In an ideal world, you will not enter the next stage until the exit criteria for the previous stage are met.
But practically this is not always possible. So, for this tutorial, we will focus on the activities and
deliverables for the different stages of the STLC. Let's look into them in detail.
Test Case Development – deliverables:
● Test cases/scripts
● Test data
Test Environment Setup – activities:
● Understand the required architecture and environment set-up, and prepare a hardware and software
requirement list for the test environment.
● Set up the test environment and test data.
● Perform a smoke test on the build.
Test Cycle Closure – activities:
● Evaluate cycle completion criteria based on time, test coverage, cost, software, critical business
objectives and quality.
● Prepare test metrics based on the above parameters.
● Document the learnings from the project.
● Prepare the test closure report.
● Qualitative and quantitative reporting of the quality of the work product to the customer.
● Test result analysis to find out the defect distribution by type and severity.
SMOKE TESTING
Testing the basic or critical features of an application before doing thorough or rigorous testing is called
smoke testing. It is also called Build Verification Testing, because we check whether the build is broken
or not. Whenever a new build comes in, we always start with smoke testing, because with every new
build there might be some changes which have broken a major feature (fixing a bug or adding a new
feature could have affected a major portion of the original software). In smoke testing, we do only
positive testing – i.e., we enter only valid data, not invalid data.
It may be confusing when exactly smoke testing is done: smoke testing is done at the start of every kind
of testing, before proceeding deep into it. The example below makes it clearer. The developers develop
the application and give it for testing. The testing team starts with functional testing (FT); suppose we
are given 5 days for FT. On the 1st day we check one module, on the 2nd day another module, and so
on. On the 5th day we find a critical bug; when it is given to the developer, he says it will take another
3 days to fix. Then we have to stretch the release date by an extra 3 days. Had we smoke-tested the
critical features on day 1, this bug would have been found immediately and the delay avoided.
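A minimal sketch of such a smoke suite (the BankApp facade is hypothetical and JUnit 5 is assumed) – only the critical features, only valid data:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical application facade; in a real project these calls would hit the deployed build.
class BankApp {
    boolean login(String user, String password) { return user != null && password != null; }
    boolean openAmountTransferPage(String user) { return user != null; }
    boolean viewBalance(String user)            { return user != null; }
}

// Smoke suite: only the critical features, tested only with valid (positive) data.
class BankAppSmokeTest {
    private final BankApp app = new BankApp();

    @Test void criticalFeature_loginWorksWithValidUser() { assertTrue(app.login("userA", "validPassword")); }
    @Test void criticalFeature_amountTransferPageOpens() { assertTrue(app.openAmountTransferPage("userA")); }
    @Test void criticalFeature_balancePageOpens()        { assertTrue(app.viewBalance("userA")); }
}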
SANITY TESTING
Sanity testing is performed to make sure that all the defects have been resolved and that no new issues
have come into existence because of those modifications.
Sanity testing also ensures that the modifications in the code or functions do not affect the associated
modules. Consequently, it is applied only to the connected modules that can be impacted.
Therefore, we follow the steps below to implement the sanity testing process:
o Identification
o Evaluation
o Testing
Step 1: Identification
The first step in the sanity testing process is identification, where we detect the newly added
components and features as well as the modifications introduced in the code while fixing the bug.
Step 2: Evaluation
After completing the identification step, we analyse the newly implemented components, attributes
and modifications to check that they work as intended, as per the given requirements.
Step 3: Testing
Once the identification and evaluation steps are successfully completed, we move to the next step,
which is testing. In this step, we inspect and assess all the linked parameters, components, and
essentials of the above-analysed attributes and modifications to make sure that they are working fine.
If all the above steps work fine, the build can be subjected to more detailed and exhaustive testing, and
the release can be passed on for thorough testing.
Test Case Design Techniques (Black Box Testing methods)
• Error Guessing
• Equivalence Partitioning
• Boundary Value Analysis
Error Guessing: guessing where errors are likely to occur. If the Amount text field accepts only integers,
we enter all other kinds of values – decimal, special characters, negative values, etc. – and check the
behaviour for each of them.
Equivalence Class Partitioning: reduces the test data to a manageable level. For example, consider an
input field that allows up to 50 characters; the inputs fall into two equivalence classes, valid and invalid,
and we pick representative values from each class.
Boundary Value Analysis: sometimes the application behaves differently at the boundary conditions, so
we test the values around the boundaries (min-1, min, max-1, max, max+1). For an input field that
allows 1 to 50 characters, the boundary values are 0, 1, 49, 50 and 51.
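As an illustration (the FieldValidator below is hypothetical – an input field accepting 1 to 50 characters – and JUnit 5 is assumed), the tests combine equivalence partitioning, boundary value analysis and one error-guessing check:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical validator for an input field that allows 1 to 50 characters.
class FieldValidator {
    static boolean isValid(String input) {
        return input != null && input.length() >= 1 && input.length() <= 50;
    }
}

class FieldValidatorBlackBoxTest {
    private static String chars(int n) { return "x".repeat(n); }

    // Equivalence partitioning: one representative from the valid class, one from the invalid class.
    @Test void validClassRepresentative()   { assertTrue(FieldValidator.isValid(chars(25))); }
    @Test void invalidClassRepresentative() { assertFalse(FieldValidator.isValid(chars(60))); }

    // Boundary value analysis: 0, 1, 49, 50 and 51 characters.
    @Test void lengthZeroIsInvalid()     { assertFalse(FieldValidator.isValid(chars(0))); }
    @Test void lengthOneIsValid()        { assertTrue(FieldValidator.isValid(chars(1))); }
    @Test void lengthFortyNineIsValid()  { assertTrue(FieldValidator.isValid(chars(49))); }
    @Test void lengthFiftyIsValid()      { assertTrue(FieldValidator.isValid(chars(50))); }
    @Test void lengthFiftyOneIsInvalid() { assertFalse(FieldValidator.isValid(chars(51))); }

    // Error guessing: an unexpected input such as null.
    @Test void nullInputIsInvalid()      { assertFalse(FieldValidator.isValid(null)); }
}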
Test Scenario
A test scenario describes, at a high level, what needs to be tested.
Ex: Check the login functionality of Gmail.
Test Cases
A test case describes how to test a particular functionality. Refer to the excel sheet for an example.
Test Suite:
A collection of test cases is called a test suite.
Defect:
The difference between the expected result and the actual result is known as a defect.
Regression Testing:
Regression testing is a software testing practice that ensures an application still functions as expected
after any code changes, updates, or improvements.
Types:
1) Unit Regression
In this, we test only the changed unit and not the impact area, even though the change may also affect
other components of the same module.
Example 1:
In the first build, the developer delivers a Search button that accepts 1-15 characters. The test engineer
tests the Search button with the help of the test case design techniques.
Now, the client makes a modification to the requirement and asks that the Search button accept 1-35
characters. The test engineer tests only the Search button, to verify that it accepts 1-35 characters, and
does not check any other feature of the first build.
2) Regional Regression
In this, we test the modification along with the impact areas or regions; this is called Regional
Regression testing. We test the impact area because, if there are dependent modules, the change may
affect those modules as well.
For example:
Suppose we have four different modules, Module A, Module B, Module C, and Module D, which are
provided by the developers for testing during the first build. The test engineer finds bugs in Module D.
The bug report is sent to the developers, and the development team fixes those defects and sends the
second build.
In the second build, the previous defects are fixed. Now the test engineer understands that the bug
fixing in Module D has impacted some features in Module A and Module C. Hence, the test engineer
first tests Module D, where the bug has been fixed, and then checks the impact areas in Module A and
Module C. Therefore, this testing is known as Regional Regression testing.
3) Full Regression:
During the second and third releases of the product, the client asks for 3-4 new features to be added,
and some defects from the previous release also need to be fixed. The testing team then does an
Impact Analysis and identifies that these modifications require us to test the entire product. This is
when we perform Full Regression testing.
Note:
Regional Regression testing is the ideal approach to regression testing, but the issue is that we may miss
lots of defects while performing only Regional Regression testing.
We solve this issue with the help of the following approach:
o When the application is given for testing, the test engineer tests the first 10-14 cycles and performs
Regional Regression testing (RRT).
o For the 15th cycle, we do Full Regression testing (FRT). Then again, for the next 10-15 cycles, we do
Regional Regression testing, and for the 31st cycle we do Full Regression testing, and we continue like
this.
o But for the last ten cycles of the release, we perform only Full Regression testing.
Hence, to get past these issues we go for automation: when we have an n-number of regression test
cycles, we automate the regression testing process.
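One common way to automate this (a sketch only, not from the original notes) is to tag the regression test cases, for example with JUnit 5's @Tag annotation, so that the build tool (Maven or Gradle) can be configured to run just the tagged suite on every new build; the SearchFeature class below is hypothetical:

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical Search feature from the earlier example (now accepts 1-35 characters).
class SearchFeature {
    static boolean accepts(String keyword) {
        return keyword != null && keyword.length() >= 1 && keyword.length() <= 35;
    }
}

// Tagged regression cases: the old limit (15) and the new limit (35) are rechecked on every build.
class SearchRegressionTest {
    @Test @Tag("regression")
    void oldLimitStillAccepted() { assertTrue(SearchFeature.accepts("x".repeat(15))); }

    @Test @Tag("regression")
    void newLimitAccepted()      { assertTrue(SearchFeature.accepts("x".repeat(35))); }
}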
Retesting:
Retesting is the process of re-executing the specific test cases in which bugs were found during their
last execution. Generally, testers find these bugs while testing the software application and assign them
to the developers to fix. The developers fix the bugs and assign them back to the testers for verification.
This continuous process is called retesting.