Manual Testing Material

MANUAL TESTING

( By V. Mallikarjuna Reddy, [email protected] )
Definition of Quality
● Bug free
● Meets the customer requirements
● User friendly
Requirement
It is the functionality and the expectations from the software, as defined by the customers.
Advantages of Testing
● Quality product
● Client satisfaction
● More business
Tester Qualities/Skills
➔ Negative thinking
Product
Software applications developed to be sold to many customers. Ex: JIRA, Google Docs/Drive
Testing Methodologies
★ Black box testing
★ White box testing
★ Grey box testing
Black box testing: The engineers who perform the black box testing are called Black box testers. They test the functionality and the UI of the application. They do not have knowledge of how the code is written.

White box/ Glass box/ Clear box testing: Testing the application code (i.e., the internal code, its structure and logic). Since this requires programming knowledge, it is usually done by the developers.

Grey box: A testing method which is a combination of the Black box and White box methods. Usually test engineers who have knowledge of the structural parts will be involved in grey box testing.
SDLC (Software Development Life Cycle)

1. Requirements gathering: Business Analysts gather the requirements from the customer and prepare the Business Requirement Specification (BRS) document. The BA's responsibility is to gather as much information as possible from the customers. The BRS is prepared by the BA and is submitted to the Project Managers.
2. Requirements Analysis: The PMs analyze the Business Requirements (BRS) to check whether they are possible to develop or not. The technology selection, resource plan and HW/SW plan are also created. Finally, a Functional Requirement Specification (FRS) document will be created. The FRS is also called the System Requirement Specification (SRS).
3. Design: Before starting the actual coding, it is highly important to understand what we are going to create and what it should look like. The requirement specifications are studied in this phase and the system design is prepared. The system design defines the overall system architecture and describes the desired features, including screen layouts, HLDs and LLDs.
4. Coding: Developers/programmers write the code based on the design docs and follow the coding standards.
5. Testing: Testers review the BRS and the FRS/SRS and create test cases. They start testing when the build is released by the development team.
6. Deployment: The application will be deployed into Live (i.e., Production) when the testing is completed. The public can start using the application once it is Live.

7. Maintenance: Maintenance starts after the application is launched into Production. Maintenance means fixing the bugs found in Live and making any enhancements. It continues as long as the project is Live.
There are various models for software development. Each model contains a series of phases to
build the software successfully.
Waterfall Model: In this model, the software development progresses sequentially from one
phase to another phase. This model is also referred to as Linear sequential model. Each phase
must be completed before the next phase can begin. The output of each stage becomes input
for the next stage. The sequential execution of all the phases in the SDLC is known as the
Waterfall model. Testing is carried out once the code has been fully developed.
Testing starts only after the implementation is done. If you are working on a large project where the systems are complex, it is easy to miss key details in the requirements phase itself. In such cases, an entirely wrong product may be delivered to the client. The earlier in the life cycle a defect is detected, the cheaper it is to fix. As the saying goes, "A stitch in time saves nine."
V model / Verification and Validation (V&V):

The V-model is a modified version of the Waterfall model: for each development phase in the waterfall model there is a corresponding testing phase. There are Verification phases on one side of the 'V' and Validation phases on the other.

Verification: It is also known as SQA (Software Quality Assurance). QA checks that the software is being developed as per the guidelines and specifications. This is also called 'Static Testing'.

Validation: It is also known as SQC (Software Quality Control). Test engineers check that the software is developed as per the requirements. This is also called 'Dynamic Testing', meaning validating the software through actual test execution.
The test team participates from the Requirements phase and starts reviewing and creating the test documentation.

URS (User Requirement Specification): It contains the business requirements, i.e., from the customer.
SRS/FRS: These are prepared from the URS by the Project/Product owners. These are easy to understand by the technical team.
HLD (High Level Design): HLDs and LLDs are prepared by the Designers. The HLD contains the high-level modules.
LLD (Low Level Design): This contains very detailed designs. Each module is broken into submodules with more details.
Unit Testing: Developers do this testing. They check the modules individually; each such module is called a unit.
Integration Testing: Conducted after unit testing, to check the data flow between the combined modules.
System Testing: This is conducted by the Testers after the integration testing, to ensure the whole system works well.
UAT: User Acceptance Testing. Client-side testers will do this testing before going to Production.
Reviews - These are done to check whether the documents contain correct and complete details. They are not conducted by the owner (i.e., Author) of the document; usually a third person (colleagues/Leads/customer) will do the review. Eg: Requirements review, Code review, Test plan review, Test cases review.

Walkthrough - The owner (i.e., Author) of the document explains or discusses it with team members/peers/lead.

Inspection - This is like a Review. Here all the documents are cross-checked to ensure everyone in the team is following the same requirements and has a common understanding.
Advantages of V Model
● The test team is involved from the Requirements phase, so defects are found early, when they are cheaper to fix.

Disadvantages of V-model
● If any changes happen midway, not only the requirements documents but also the test documents need to be updated.
● Investment is more because all the teams work from the first phase.
Advantages of the Agile model:
● Working s/w is delivered frequently (weeks rather than months)
● Daily conversations among Clients, BAs, POs, Devs and Testers
● Working s/w will be more useful than just presenting docs to clients in the meetings
● Customer collaboration - requirements can't be fully collected at the beginning, therefore continuous customer/stakeholder involvement is very important.
Disadvantages of Agile model:
● Less documentation, because the requirements can change just in time
● Difficult for new starters because of the less documentation
Agile methodologies:
Agile Unified Process (AUP), Scrum, Dynamic Systems Development Method (DSDM), etc.

Scrum
Scrum is one of the Agile methodologies. Scrum is focused on delivering business value all the
time.
Scrum Roles:
There are 3 roles: Product Owner, Scrum Master & the Development Team
Product Owner: The Product Owner represents the stakeholders and also the voice of
the customer. Accountable for ensuring that the team delivers value to the business. The
Product Owner maintains the product backlog i.e, adding or removing the user
stories(requirements).
Scrum Master: Facilitates the Scrum events, documents the impediments raised by the team and works to resolve them.
Team: The team is responsible for delivering the working s/w at the end of each Sprint. The team includes the Design, Dev & QA people.
Sprint:
Sprint is a fixed time period which is also known as Iteration. It could be 1 to 4 weeks.
User Story:
A feature/requirement is called a user story in the Scrum model.
The structure of a story is: "As a <user type> I want to <do some action> so that <desired result>".
Eg: As a user I want the statement enquiry so that I can view the monthly statement.
Meetings/Ceremonies/Scrum Events:
Sprint planning meeting – Each sprint begins with a "Sprint planning meeting" (SPM), where the items from the product backlog are discussed. The team selects what work is to be done in the sprint. The Product Owner describes the user stories to the development team, and the team estimates the stories based on complexity. Teams use the Fibonacci series: 1, 1, 2, 3, 5, 8. If any story takes more than 8, the team breaks it into smaller stories.
Daily Scrum - The status meeting that happens each day during the sprint is called the Daily Scrum (stand-up).
All members of the Development Team come prepared with the updates for the meeting.
The meeting starts precisely on time even if some Development team members are missing.
The meeting should happen at the same location and same time every day.
The meeting length is set (time boxed) to 15 minutes.
Other interested parties can come, but normally only the core roles speak.
Any impediment/stumbling block identified in this meeting is documented by the Scrum Master
and worked towards resolution outside of this meeting. No detailed discussions shall happen in
this meeting.
Sprint Review Meeting – At the end of the sprint, the team reviews the output of the sprint: the work that was completed and the work that was not completed. The completed work is presented (demoed) to the stakeholders. Incomplete work cannot be demonstrated.
Sprint retrospective - Held to make continuous process improvements. Two main questions are asked in the sprint retrospective: What (i.e., the process, the relationships among people/other teams and the tools) went well during the sprint? What could be improved in the next sprint? The SM records action points for further sprints.
Artifacts:
Product backlog: A prioritized list of all the requirements (user stories) for the product, maintained by the Product Owner.
Sprint backlog: A prioritized list of requirements to be completed during the sprint. This list is chosen from the Product backlog.
Sprint Board: A board showing the sprint's user stories and bugs in columns by their current status (the original board listed User stories 1-8 and Bugs 1-3 across its columns).
STLC (Software Testing Life Cycle)

Requirements analysis: The testing team starts understanding the requirements from the Requirements and the Design documents. (Testing team: QA Lead, Sr Test Engineer, Test Engineer)
Test Planning: After reviewing the requirements, a test plan will be created. The test plan describes the process to conduct the software testing. It is prepared by the Test Lead or Sr Test Engineers.
Test Design: Writing the test cases and test scenarios is called test design. Test Engineers or Sr Test Engineers are responsible for writing the test cases, creating the test data and maintaining the Traceability Matrix.
Test Execution: Test execution starts when the TCs are ready and the code is ready. Test engineers compare the Actual result with the Expected result.

Bug Reporting: The bugs will be reported to the developers after analyzing the test results.
Bug Verification: Bugs will be retested when they are fixed by the development team and a new build is given to the testing team.
Test Closure or Testing Sign-Off: Testing will be closed by creating the Test Summary report. This report is prepared by the Team Lead and contains an overall summary of the testing: the build number, the total no. of TCs executed, passed and failed, and the no. of bugs reported and fixed.
Test Case Design Techniques

These techniques help us to avoid exhaustive testing (testing with all the possible combinations/input values) and to design more effective test cases. These techniques also help us achieve maximum test coverage with minimal combinations. Main objective: better test coverage and less duplicate test execution. They are:
a. Boundary Value Analysis(BVA)
b. Equivalence Class Partitioning(ECP)
c. Decision Table
d. State Transition
e. Error guessing
Boundary Value Analysis:
BVA is useful to design test cases for a range of input. Developers often make mistakes while implementing conditions such as <, <=, >, >=, and BVA is useful to find defects in these kinds of conditions.

Eg: Suppose the field 'Age' accepts values from 1 to 50. Instead of writing 50 TCs to test every value, with BVA we test only the boundary values: 0, 1, 2, 49, 50 and 51 (i.e., min-1, min, min+1, max-1, max, max+1).
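As a minimal sketch of BVA in code (assuming a hypothetical validate_age() function that should accept 1 to 50; the function and values are illustrative, not from any real project):

# Hypothetical validator: accepts ages 1..50 inclusive (assumption for illustration).
def validate_age(age: int) -> bool:
    return 1 <= age <= 50

# BVA: exercise only the boundaries instead of all 50 values.
# (input, expected) pairs at min-1, min, min+1, max-1, max, max+1.
bva_cases = [(0, False), (1, True), (2, True), (49, True), (50, True), (51, False)]
for age, expected in bva_cases:
    assert validate_age(age) == expected, f"Age {age}: expected {expected}"
print("All BVA cases passed")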
Equivalence Class Partitioning (ECP):
This can be used when the input is a combination of different types of data. In this technique, the input data is divided into equivalent classes/groups. Only one value is taken from each group, because the rest of the values in that group produce the same output.
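A sketch of ECP for the same hypothetical 'Age' field (the class boundaries are assumptions carried over from the BVA sketch above):

# Same hypothetical validator as in the BVA sketch.
def validate_age(age: int) -> bool:
    return 1 <= age <= 50

# ECP: partition the input into classes and test one representative per class.
ecp_classes = {
    "invalid_low (<= 0)": (-5, False),
    "valid (1..50)": (25, True),
    "invalid_high (>= 51)": (60, False),
}
for name, (age, expected) in ecp_classes.items():
    assert validate_age(age) == expected, f"Class {name} failed"
print("One representative value tested per equivalence class")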
Error Guessing:
This technique basically depends on the experience of the testers who can think about the
problematic areas.
Eg:
Submit the form without filling the mandatory fields
Type only white spaces into the Facebook comment box and submit
Decision table Testing:
This technique is useful when there are multiple fields. If there are two fields then the combinations will be 4, i.e., 2^2. If there are ten fields then the combinations will be 1024, i.e., 2^10. So we choose a rich subset to test the minimum combinations and save time.
Eg:
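For instance, an illustrative decision table for a simple Login form with two fields (Username and Password), giving 2^2 = 4 rules (the form and outcomes are invented for the example):

Rule       R1         R2         R3         R4
Username   Valid      Valid      Invalid    Invalid
Password   Valid      Invalid    Valid      Invalid
Expected   Home page  Error msg  Error msg  Error msg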
State Transition:
This technique is used to test the different states/statuses in the application. We need to check whether the application moves correctly from one state to the next.
Eg:
1. We can design test cases to test the grades of a student: Distinction, First, Second and Fail.
2. Ecommerce purchase or online food delivery: Ordered -> Shipped -> Out For Delivery -> Delivered
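A small sketch of state-transition checks for the delivery example (the transition map is an assumption for illustration):

# Valid order-status transitions for the food-delivery example above.
ALLOWED = {
    "Ordered": {"Shipped"},
    "Shipped": {"Out For Delivery"},
    "Out For Delivery": {"Delivered"},
    "Delivered": set(),  # terminal state
}

def can_transition(current: str, nxt: str) -> bool:
    return nxt in ALLOWED.get(current, set())

assert can_transition("Ordered", "Shipped")
assert not can_transition("Ordered", "Delivered")  # skipping states is invalid
print("State-transition checks passed")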
Levels of Testing

1. Unit testing
This testing is carried out by the Developers to know whether the code/program is working properly or not. The smallest program or function in the code is known as a Unit. Since this is done by the developers, it comes under White box testing. Unit testing is also referred to as Component/Module level testing.
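A minimal illustration of a unit test in code (the total_amount() function and its values are hypothetical, invented for the example):

import unittest

# Hypothetical unit under test: order total including a 10% tax.
def total_amount(price: float, qty: int, tax_rate: float = 0.10) -> float:
    return round(price * qty * (1 + tax_rate), 2)

class TestTotalAmount(unittest.TestCase):
    def test_basic_total(self):
        self.assertEqual(total_amount(100.0, 2), 220.0)

    def test_zero_quantity(self):
        self.assertEqual(total_amount(100.0, 0), 0.0)

if __name__ == "__main__":
    unittest.main()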
2. Integration testing
The process of combining one module with another module and testing them together is called Integration testing. It checks the data communication/data flow (request data and response data) among the modules. This is carried out by the Developers at the code level; Testers do the integration testing at the application level. (Eg: Amazon Registration, login, delivery address, products list, add to cart etc.)
a. Incremental integration testing:
1. Top Down Integration: It is a process of incrementally adding the modules from Top to Bottom. Stubs are needed to simulate the bottom level modules (i.e., child modules). The module added is the child of the previous module.
Stub: In the top down integration, if a bottom level module is not ready then it will be replaced by a Stub.
2. Bottom Up Integration: It is a process of incrementally adding the modules from Bottom to Top. Drivers are needed to simulate the top level units. The module added is the parent of the previous module.
Driver: In the bottom up integration, if a top level unit is not ready then it will be replaced by a Driver.
b. Non-incremental integration testing (or Bigbang): Integrating all the modules at once and testing them together is called Bigbang. This method can be used to save time in the integration process.
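A sketch of the Stub and Driver ideas in code (the checkout/payment module names are invented for illustration):

# Top-down: the parent module (checkout) is ready, the child (payment) is not,
# so a stub stands in for the child with a hard-coded response.
def payment_stub(amount: float) -> str:
    return "SUCCESS"  # canned answer instead of the real payment module

def checkout(cart_total: float, pay=payment_stub) -> str:
    return "Order placed" if pay(cart_total) == "SUCCESS" else "Order failed"

print(checkout(250.0))  # parent tested against the stub

# Bottom-up: the child module (payment) is ready, its parent is not,
# so a throwaway driver plays the parent's role and calls the child.
def payment(amount: float) -> str:
    return "SUCCESS" if amount > 0 else "FAILED"

def payment_driver() -> None:
    assert payment(100.0) == "SUCCESS"
    assert payment(0.0) == "FAILED"

payment_driver()
print("Driver exercised the finished child module")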
3. System testing
Performed after integration testing to ensure that the entire system (the collection of all the modules which makes up the system) works as per the FRS/SRS. It tests whether all the features/modules are working properly or not.
System testing includes: Functional testing, User Interface (GUI) testing, Usability testing and Non-functional testing.
Functional Testing: Functional testing refers to testing the features of the software as mentioned in the FRS document: providing inputs/data, data saving, testing calculations (total amount, tax calculation, discounts calculation, total marks), error validations, receipts.
GUI testing: Checking the front-end, i.e., the elements on the screens or forms. Eg: text boxes, buttons, hyperlinks, images, animations, radio buttons, check boxes, calendar, video or audio buttons etc.
Usability testing: Testing as an end user to check the content and help texts on the application, i.e., whether they are meaningful or difficult to understand. The application should be self-explanatory, i.e., simple to understand.
Performance Testing: Load testing (gradually increasing the load, i.e., adding users to access the application), Stress testing (sudden increase/decrease of the load) and Volume testing (how much data the system can handle).
Security Testing: Authentication (testing whether the user exists in the system or not), Authorization (the user is already in the system, but does the user have the permissions/privileges to access certain features or pages?).
4. User Acceptance Testing (UAT)
UAT is of 2 types:
Alpha testing: This is conducted before the beta testing, in the Staging environment, which is similar to the Production environment. The people involved are Testers, Developers, and a limited number of people inside the organization.
Beta testing: This is conducted after the Alpha testing, also on the Staging environment. The people involved are end users (real users), i.e., people outside the organization; they can do the testing from their homes or anywhere. This is done before releasing the product into the market.
Test Environments

Dev Environment: This env is used by the developers for developing and testing the code written by them. Eg: www.gmaildev.com

Test Environment: This is used by the test engineers for test execution. Testers use this environment to execute their test cases.

Staging Environment: This is also called the Release environment. It is similar to the Production environment, so it is also called a Production-like environment. Client-side people use this environment to test the application. The software is tested in the Staging env before it goes to Production.

Production Environment: When the testing is completed on the Staging env, the software will be deployed into the Production env. This is also called the LIVE environment. The public can use the software when it is in the Production environment.
Test case
A test case is a sequence of steps/actions to be performed on the application in order to compare the Actual result with the Expected result.
1. GUI Test cases:
a. Check for the availability of the GUI elements, eg: buttons, text boxes, check boxes, radio buttons, hyperlinks etc.
b. Check the alignments and the look & feel of the UI elements/objects (i.e., whether they are placed properly on the web pages)
c. Check the spellings and grammar
d. Check the consistency of the elements (eg: one page says Sign In and another page says Log In)
2. Functional Test cases: Check the functionality of the application by testing the forms: entering the data, clicking the hyperlinks, buttons etc. These can be divided into two types.
a. Positive test cases: Check the positive flow of the application by giving valid inputs.
b. Negative test cases: Check the negative behavior by giving invalid inputs.
3. Non-Functional TCs: These test cases will be designed to test the Performance of the
application. This is done by performance testers. Eg: Load testing and Stress testing
Note: See the example test cases in the separate xls file.
TC Priority: This describes how important it is to execute the test case, eg: P1, P2, P3 (High, Medium & Low). P1 -> the main/critical functionality, P2 -> major functionality (field-level validations), P3 -> minor.
Test Data: The input data which is needed to execute the test cases.
Pre-condition: It is the setup (or preparation) that should already exist to execute the test cases.
Test steps: The steps/actions to be performed on the application.
Expected Result: While creating the TCs, testers write the expected result/behavior as mentioned in the requirements.
Actual result (Test Result): While executing the TCs, testers note down the actual result/behavior observed in the application. Testers compare the ER & AR and note down the result: either Pass or Fail.
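Putting these fields together, a sample test case might look like this (an illustrative Login example; all values are invented):

TC ID: TC_Login_01
Priority: P1
Pre-condition: A registered user account exists
Test Data: username = testuser, password = Test@123
Test Steps:
  1. Open the Login page
  2. Enter the username and password
  3. Click 'Sign In'
Expected Result: The user is logged in and the Home page is displayed
Actual Result: (noted during execution) Pass/Fail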
Bug Life Cycle

When a bug is found, it is assigned different statuses. The bug moves through various statuses during its life cycle.
Bug/Defect/Issue: If the Expected result and the Actual result are not the same, then we call it a Bug. (Or) If the application deviates from the requirement, we call it a Bug. (Or) If the application doesn't work as expected, then it's a Bug.
Note: A human being (Developer) can make an error (mistake) in the code. These mistakes produce defects (fault, bug) in the application. Due to these bugs the system can fail, which is a Failure/Incident.
▪ New: This is the initial state for any newly reported issue. In this state a 'Triage' can be conducted (depending on the project). The Product Owner/TL sets the Priority.
▪ In progress: The bug is moved to In Progress when a developer has accepted the bug and it is assigned to him/her.
▪ Fixed (or Verify): When the bug is resolved, it is changed to Fixed (or Verify). The developer assigns it back to the Reporter or the testing team.
▪ Closed: When the fix is working fine after the re-testing, the bug is moved to Closed status.
▪ Deferred (Unsupported): The bug is moved to 'Deferred (or Unsupported)' if the resolution is not needed at present. These can be resolved in future builds and then closed.
▪ Rejected: Developers can reject a bug for these reasons: Duplicate (another tester has already created a similar bug), Not Reproducible (the developer is unable to reproduce the bug) and WAI ('Working As Intended': the developer says the behavior is correct; in this case the tester needs to modify the test cases).
Bug Severity: It indicates the impact of the defect on the application. Severity can be categorized as: Critical, High, Medium, Low (or) S1, S2, S3 etc.
▪ High: Defects that cause a system crash or loss of critical data, with no workaround available. The product can't be released live; it's a showstopper and testing can't proceed further.
Ex: Unsuccessful installation, can't sign in/register, data entered into the forms is not saved.
▪ Medium: Defects that affect major functionality. There is a workaround, but it is not satisfactory. Significant impact on the users; the product can't be released live.
Ex: a) When the user is logged into a website, the home page is not displayed and another page is displayed instead. Workaround: users can go to the Home page by clicking the 'Home page' link on the other pages.
b) The Log out button is missing on other pages and is only available on the Home page, so users need to come to the Home page to log out from the website.
▪ Low: Defects that have very little impact and can be ignored. They can be cosmetic or grammatical.
Ex: Spelling mistakes, text alignments, color of the pages or fields etc.
Bug Priority: It indicates the importance or urgency of resolving the defect. Though the priority may be set initially by a Tester, it is usually finalized by the Project/Product Manager in triage meetings.
High: High priority bugs need to be fixed immediately. Developers stop working on other tasks and work on these bugs straight away. The fix needs to be ready before the end of the current sprint or iteration.
Medium: These bugs need to be fixed as soon as there are no High priority bugs left. The fix needs to be ready before the end of the current sprint or iteration.
Low: These bugs may or may not be fixed at all. These are just nice to have.

High Severity - Low Priority
Eg: The application crashes in a very rarely used flow; the impact is severe but the fix is not urgent, because very few people do this. Another example: I can add 2000 products to the cart, but on adding the 2001st product the cart removes all the previously added products.
High Severity - High Priority
Eg: Unable to log in to the website; the site crashes when you try to open it; the app crashes when you try to open a game.

Low Severity - Low Priority
Eg: A spelling mistake or some typo errors on the website.
Common fields in a Bug report
Bug Summary/Title: A short title for the bug describing the actual behavior of the system.
Environment: The environment being used, in terms of OS, browser, URL, device etc.
Steps to Reproduce the Bug (Or) Description: Detailed steps to reproduce the bug, along with the reproducibility frequency.
Attachments: Screenshots and log files of the bug, to help the developers understand the defect.
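A sample bug report with these fields might look like this (all details invented for illustration):

Title: Login page crashes when the Password field is left blank
Environment: Windows 10, Chrome 120, www.example.com
Steps to Reproduce:
  1. Open the Login page
  2. Enter a valid username
  3. Leave the Password field blank
  4. Click 'Sign In'
Expected: An error message 'Password is required' is shown
Actual: The page crashes with a server error
Reproducibility: 10/10
Attachments: login_crash.png, server.log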
Test Plan

A Test Plan provides a detailed approach to manage and execute the software testing effectively. Contents of a Test Plan:
Objective
Revision History
Scope
1. Features to be Tested
2. Features not to be Tested
Environment
1. Hardware
2. Software
Entry & Exit Criteria
Assumptions
Test Strategy
1. Testing Types
2. Risk Management
3. Configuration Management
4. Defect Management
Staffing
Communication Plan
1. Meetings
2. Reporting
Schedule
Test Design
Test Deliverables
References
Glossary
Objective: It describes the purpose of the testing covered by this test plan.
Eg: The main objective is to test the funds transfer between the savings accounts.

Revision History: It gives the history of the test plan, i.e., who created and modified the versions, and the dates.
Eg:
Author  Date         Version    Comments
Mann    14-Jun-2011  Draft 1.0  Initial version
Mann                            Manager comments received
Mann    20-Jul-2011  Issue 1.0  Issue version
Scope: It talks about the features to be tested and not to be tested.
1. Features to be Tested: The list of features/functionalities that need to be tested by the testing team. Eg: Funds transfer from account A to B, account balance of A & B.
2. Features not to be Tested: The list of features which are not tested by the testing team. Eg: A/c balance messages on mobile phones, emails about transactions etc.
Environment: This describes the list of Hardware and Software needed to test the application.
Eg:
Machine: Windows Server Enterprise; OS: Windows 2007; Processor: Intel Xeon; Memory: 4 GB; Hard Disk: 150 GB; Database: SQL Server 2008; Browsers: IE 10, FF 32.0
Entry & Exit Criteria: This specifies when to start the test execution and when to stop the test execution.
1. Entry criteria: Eg: The test cases are ready and the build is deployed into the Test environment.
2. Exit criteria: Eg: All the planned test cases are executed and there are no open critical bugs.
Test Strategy: It describes the approach for the testing. It covers:
1. Testing Types
2. Risk Management
3. Configuration Management
4. Defect Management
1. Testing types: This section deals with the types of testing to be performed by the testing team.
Eg:
Unit testing and Integration testing will be conducted by the Developers.
Smoke testing, System testing and Regression testing will be conducted by the Testing team.
UAT (Alpha & Beta) will be conducted by the Client.
2. Risk Management: (Prevention is better than cure.) It deals with the risks that may occur during the testing. Specify the mitigation plan and the contingency plan for each risk.

Risk: a condition/situation that may or may not occur.
Mitigation plan: preventive measures to be taken before the risk occurs.
Contingency plan: the plan to implement when the risk has occurred.

Eg:
Risk: Delay of the builds from the dev team.
Mitigation plan: Review the dev progress daily and chase the devs to deliver the build on a specified date.
Contingency plan: Take additional resources or work more hours/night shifts to meet the delivery date.
3. Configuration Management: It describes the repository/tool which will be used for keeping the deliverables securely.
Eg: Visual Source Safe will be used to keep the test deliverables.
4. Defect Management: It talks about managing the defects that occur during the test execution. It defines the bug statuses and the defect Severity and Priority levels. Also mention the tools (JIRA, Bugzilla, Quality Center) which will be used for managing the bugs.
Eg: Bug statuses will be New, In Progress, Verify etc.
Severity & Priority levels like High, Medium & Low.
Staffing: It lists the team members and their roles.
Eg: Trisha - Developer
Communication Plan: This describes the frequency of the communication and the tools (Skype, Google Hangout, Gmail, Team Viewer etc.) used for communication.
1. Meetings:
Eg: Dev and Test teams daily meeting, Weekly meeting with Client via Skype to discuss the
Testing status.
2. Reports:
Eg: Daily status reports to the client, Weekly status report to the Client
Test Design: The test case template and the test cases will be created in this section.
Test Deliverables: It tells about the items which will be delivered to the client by the testing team.
Eg: Test Plan, Test cases, Test Results, Traceability Matrix, Bug report and Test Summary Report.
References: This contains the list of documents (eg: FRS, design docs) which are referred to while creating the test plan and the test cases.
Eg:
Document Name                  Author  Version
FundTransfer-Requirements.doc  John    1.0
...sfer.pdf
Glossary: Defines the terms and acronyms used in this document, to eliminate confusion in testing and communications.

Traceability Matrix (RTM): It maps the requirements to the test cases, to make sure every requirement is covered by at least one test case.
Eg:
Req ID   TC ID
R1       TC1
R2       TC2, TC3
...
Rn       TCn
Daily Status Report: This includes the tasks that were completed on that particular day. Each test engineer sends his/her report to the Team Lead. The Team Lead collects the reports from the team members and sends a consolidated report to the PM or the Client, daily or weekly.

Sample report:
Tasks for Today:
1. Test case creation is completed for the Funds transfer functionality
2. Test execution completed for the Registration and Login pages (Total: 20, Executed: 20, Pass: 15, Fail: 5)
Test Summary Report: It is an overall testing status report. This will be prepared by the Team Lead at the end of the testing.
Build: An executable application. The build will be created by the Developers and given to the
Test team.
Smoke Testing (Build Verification Testing):
It is conducted on every new build to know whether the build is stable or unstable. If the basic test cases pass, the test team continues further testing; otherwise the build is rejected. The tester touches all the main areas of the application without going too deep.
Regression Testing:
Regression testing is performed when the code is changed, to make sure that the existing functionality still works fine and is not broken by the changes in the code. The code can change when new functionality is added or when bugs are fixed.
Sanity Testing: It is conducted on a stable build. If we don't have time to do a full regression, we identify a few important functionalities and test those, which is called Sanity testing. It's a part of Regression testing.
Retesting:
Retesting is performed when a bug is fixed. We execute the steps which were mentioned in the 'Steps to reproduce' of the bug. This is called Re-testing.
Usability Testing:
Usability means checking whether the application is user-friendly or not. Make sure that the button names, label names, help messages etc. are meaningful.
Eg: If the drop-down field 'Country Name' has the country names jumbled (not in alphabetical order), then it is very difficult to locate a country name, so it's not usable.
It would be good if the Password field shows some help text while creating the password; some applications show the help text only after submitting the password.
Compatibility testing:
Testing the application on different environments. There are two types.
a> Browser compatibility (Cross Browser testing): Testing the application on different browsers like FF, IE, Chrome, Safari, Opera etc. Also testing on different versions of a browser, eg: IE8, IE9, IE10.
b> OS compatibility: Testing on different OSs like Windows 7, 98, XP, Vista, Unix, Linux and Mac etc.
Again, combinations like:
Windows machine with IE, FF & Chrome
Mac machine with Safari, Chrome
Linux machine with FF, Opera etc.
Exploratory testing:
Domain experts perform this testing without knowing the functionality/requirements: they explore and understand the application and then test it. It is as if the domain experts review the entire application.
Security testing:
Eg: Check whether users can log in with valid details or not.
Submit some code in the fields on the forms (the code will be given by the developers).
Check whether critical info (passwords etc.) is encrypted or not; check the browser back/forward behavior and accessing direct URLs without logging in.
Localization Testing:
Testing the application in different languages (Chinese, Urdu, Hindi, UK English etc.) is known as Localization testing. In this, the application is tested for the currency symbols, date & time formats and the text alignment.
Eg: The Google website can be tested in multiple languages.
End-to-End testing:
Testing the complete flow of the application from start to end.
Eg: Test the registration on Bookmyshow.com -> Test the login to the site -> Search the movies -> Test the seat booking -> Make the payment -> Test the account for the balance -> Test the ticket details on Phone/Email -> Check the printed details on the ticket -> Log out -> Go to the theater and show the ticket.
Ad-hoc testing:
Test engineers perform this testing in their own way. The testers know the functionality of the application, but test cases are not designed for this testing. It can be performed when the test engineers have some spare time after completing the planned testing.
Monkey testing: Testing an application by doing abnormal actions, or in a zigzag way, to find defects is called Monkey/Zigzag testing. We can find defects like crashing or hanging.
Eg: Clicking the 'Submit' button quickly 2 to 3 times
Scrolling the page quickly from top to bottom and vice versa
Installation/uninstallation testing: To verify that the builds are installed and uninstalled successfully. This also covers upgrading the build from a lower to a higher version, or downgrading from a higher to a lower version.
https://artoftesting.com/test-cases-water-bottle
What do you cover/consider when writing the Test cases: 1) GUI test cases (i.e., cross-check that the application looks as per the designs: all links, buttons, radio buttons, check boxes, dropdowns, colors, images, animations). 2) Positive flow test cases 3) Negative test cases (i.e., using the test case design techniques)
Interview Questions

8) What is a Deferred bug?
9) What are the fields or details you give when writing a bug?
10) What is Regression Testing? Smoke testing? Sanity testing?
11) What are the fields in the Test case template?
12) Do you use any techniques to write test cases? Or what are the test case design techniques?
13) What is BVA?
14) What is the RTM (Requirement Traceability Matrix)? Or how do you make sure of the test coverage?
15) Scenario-based cases. Example: How do you test a Login page? How do you test a Shopping cart?
16) How do you test a water bottle? How do you test a pen?
17) What is your response if the developer says your bug is invalid (or Rejected)?
18) If the customer says a bug is in Production, then what is your response?
20) How many test cases do you write every day? How many test cases do you execute every day?
25) Where do you write your test cases? Or what do you use to write the test cases?
26) Where do you post the bugs? Or what do you use to create a new bug?