Manual Testing Guide
Table of Contents
• Introduction to Manual Testing
• Types of Testing
• Bug and Defect Tracking
• Verification and Validation
• Testing Process
• Tools and Resources
• Images and Illustrations
• Conclusion
Introduction to Manual Testing
Manual testing is an important part of software quality assurance. It ensures that the application meets
user needs and requirements before release 1 . By having humans test the system, teams can validate
things like layout, ease of use, and workflow from a real-user perspective. This complements automated
testing, which excels at repeatable, high-volume checks.
Parameter | Manual Testing | Automated Testing
Definition | Tests are executed by a human tester 2 | Tests are executed by software tools 2
Speed & Effort | Time-consuming (manual effort for each test run) 3 | Fast execution once scripts exist (initial setup overhead) 3
The table above compares key differences: in manual testing, a human conducts the tests step-by-step, whereas automated testing relies on scripts and tools 2 . Manual testing is slower and more labor-intensive, but it excels at exploratory and usability tests 4 . Automated testing is faster and more consistent (especially for regression), but requires upfront scripting and maintenance 3 . The two approaches are complementary in a QA strategy.
Types of Testing
Manual testing encompasses various test types to validate different aspects of software. Key types include:
• Functional Testing: Verifies that each feature works according to requirements. In functional
testing, testers check user actions and application responses against specifications 6 . For example,
ensuring a form submission behaves as intended.
• Regression Testing: Re-runs previous test cases after changes to ensure existing functionality still
works. Regression testing ensures that recent code changes or fixes have not broken anything
previously working 7 . It is critical after bug fixes or new feature integration to catch any
unintended side effects.
• Integration Testing: Checks that different modules or components work together. Testers combine
parts of the system (which were unit-tested individually) and validate the interfaces between them
8 . For instance, testing how a login module interacts with a database and a UI component
together.
• User Acceptance Testing (UAT): Validates the software in real-world conditions by actual users. UAT
is typically the final testing phase, performed by the intended audience (often end-users or clients)
9 . The goal is to confirm the software meets business requirements and user needs before
release. For example, a customer trying out key workflows to ensure the product fits their day-to-day
use 9 .
• Exploratory Testing: Involves simultaneous learning, test design, and execution without pre-set
scripts 10 . Testers freely navigate the application, guided by their experience and intuition, to
discover unexpected issues. This unscripted approach helps find bugs that structured tests might
miss 10 .
• Smoke Testing: A quick, shallow set of tests on a new build to ensure basic functionality works 11 .
Also called build verification testing, smoke tests check if the most important features function
correctly so that deeper testing can proceed. If a smoke test fails (e.g. the app crashes on launch),
the build is rejected immediately 11 .
• Sanity Testing: A narrow, focused round of testing after receiving a build, used to verify specific bug fixes or functionality. Sanity tests quickly confirm that critical changes work and haven’t introduced new problems. It is a subset of regression testing, limited in scope and used when time is short 12 . For
example, after a fix is made for a checkout page bug, sanity testing would quickly check that
payment still processes correctly 12 .
Each testing type has a clear purpose, and often multiple types are used together. For example, after
integration testing confirms module interfaces, regression and UAT might follow to fully validate the
system.
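Purely as an illustration (the guide prescribes no particular format), a manual test case such as the form-submission example above can be recorded as structured data so it is easy to review, execute, and track. Every field name and value in the sketch below is an assumption, not a mandated template.

```python
# Hypothetical structured record for a manual functional test case.
# Field names and values are illustrative only.
functional_test_case = {
    "id": "TC-01",
    "type": "Functional",
    "title": "Contact form submission",
    "preconditions": ["User is on the Contact page"],
    "steps": [
        "Fill in name, email, and message fields with valid data",
        "Click the Submit button",
    ],
    "expected_result": "A confirmation message is shown and the message is stored",
    "status": "Not run",  # updated to Pass/Fail during execution
}

if __name__ == "__main__":
    print(f"{functional_test_case['id']}: {functional_test_case['title']}")
    for i, step in enumerate(functional_test_case["steps"], 1):
        print(f"  {i}. {step}")
```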
Bug and Defect Tracking
Defects are often classified by type and severity. Common defect types include design defects (e.g. incorrect UI layout), logical defects (errors in algorithms or logic), integration defects (issues when modules interact), and performance defects (problems under load) 15 . Severity levels describe impact: e.g. Critical (system down), Major (function fails but the system partially works), Medium (undesirable but tolerable), and Low (minor issue) 16 . Priority (urgent fix vs. can wait) is a separate attribute, but defects with critical severity usually receive a higher priority.
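As a loose illustration (not taken from any particular tracker), the severity and priority attributes described above can be modeled as simple enumerations so that a defect backlog can be ordered for triage. The level names and field names here are assumptions based on the levels listed in this section.

```python
from dataclasses import dataclass
from enum import IntEnum

# Severity levels as described above: Critical > Major > Medium > Low.
class Severity(IntEnum):
    LOW = 1
    MEDIUM = 2
    MAJOR = 3
    CRITICAL = 4

# Priority is a separate attribute: how urgently the fix is needed.
class Priority(IntEnum):
    CAN_WAIT = 1
    NORMAL = 2
    URGENT = 3

@dataclass
class Defect:
    defect_id: str
    title: str
    severity: Severity
    priority: Priority

def triage_order(defects):
    """Sort defects so the most urgent, most severe items come first."""
    return sorted(defects, key=lambda d: (d.priority, d.severity), reverse=True)

if __name__ == "__main__":
    backlog = [
        Defect("BUG-101", "Typo on help page", Severity.LOW, Priority.CAN_WAIT),
        Defect("BUG-102", "Checkout crashes on submit", Severity.CRITICAL, Priority.URGENT),
    ]
    for d in triage_order(backlog):
        print(d.defect_id, d.title)
```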
A well-documented bug report is essential for efficient defect tracking. An effective bug report should include:
• Title/ID: A concise summary of the issue.
• Environment: Software version, OS, browser, or device where the bug was found.
• Description: A clear description of the problem.
• Steps to Reproduce: Numbered steps that reliably reproduce the bug, with details of inputs and actions 17 .
• Expected vs Actual Results: What you expected to happen, and what actually happened 17 .
• Attachments: Screenshots, logs, or videos showing the issue in context 18 . Visual evidence (e.g. a screenshot of an error message) and log files can greatly speed debugging by providing context.
• Severity/Priority: The suspected impact (e.g. high severity if it crashes the app) and the priority for fixing.
By following this structured format, developers can quickly understand and reproduce the issue 17 . For
example, “Steps: 1) Open app; 2) Click X; 3) Observe crash. Expected: navigate to Y page. Actual: app crashes
with error. Screenshot attached 17 18 .” Clear bug reports lead to faster resolutions and better QA
outcomes.
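To make the structure above concrete, here is a minimal, hypothetical sketch of a bug report built from structured fields and rendered as plain text. The field names mirror the list above; the sample values and the render helper are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BugReport:
    # Fields mirror the bug report structure described above.
    title: str
    environment: str
    description: str
    steps_to_reproduce: List[str]
    expected_result: str
    actual_result: str
    severity: str
    priority: str
    attachments: List[str] = field(default_factory=list)

    def render(self) -> str:
        """Render the report as plain text suitable for a tracker entry."""
        steps = "\n".join(f"  {i}. {s}" for i, s in enumerate(self.steps_to_reproduce, 1))
        return (
            f"Title: {self.title}\n"
            f"Environment: {self.environment}\n"
            f"Description: {self.description}\n"
            f"Steps to Reproduce:\n{steps}\n"
            f"Expected: {self.expected_result}\n"
            f"Actual: {self.actual_result}\n"
            f"Severity/Priority: {self.severity} / {self.priority}\n"
            f"Attachments: {', '.join(self.attachments) or 'none'}"
        )

if __name__ == "__main__":
    report = BugReport(
        title="App crashes when clicking X",
        environment="v2.3.1, Windows 11, Chrome 126",
        description="The application crashes instead of navigating to page Y.",
        steps_to_reproduce=["Open app", "Click X", "Observe crash"],
        expected_result="Navigate to Y page",
        actual_result="App crashes with error",
        severity="Critical",
        priority="Urgent",
        attachments=["crash_screenshot.png"],
    )
    print(report.render())
```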
Verification and Validation
According to IEEE definitions, Verification is the process of evaluating whether the product complies with requirements and design specifications 19 . This is often done through static testing and reviews. Examples of verification activities include:
• Inspections: Carefully examining design documents or code to spot flaws before execution 20 .
• Reviews/Walkthroughs: Team meetings to go through requirements or designs step-by-step 20 .
• Desk-checking: A developer manually walking through their own code logic for errors 20 .
These activities do not execute the code, but they catch issues early by ensuring specifications are followed.
Validation is the process of checking whether the product actually fulfills its intended use and customer requirements 19 . This is done through dynamic testing. Examples include:
• Unit Testing: Checking that individual functions or components work correctly 21 .
• Integration Testing: Verifying that combined components interact correctly (overlaps with the “Integration Testing” type above) 22 .
• System/Acceptance Testing: Running end-to-end scenarios in an environment similar to production.
• User Acceptance Testing (UAT): Real users testing to ensure the product meets their needs (e.g. business scenarios) 9 .
Validation ensures the software does what users need in practice, while verification ensures the software
was built to spec. Together, V&V activities form a robust QA strategy. For example, one might verify the
payment module design via a review, then validate it by actually processing a test transaction.
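As a small, hypothetical illustration of the dynamic (validation) side, the unit test below exercises an assumed discount_price function and compares its actual output to the expected result, the same expected-vs-actual comparison used throughout this guide. The function, the 10% rule, and the values are invented for the example.

```python
# Minimal validation sketch: a unit test compares actual behaviour to the
# expected behaviour derived from a requirement. discount_price and the
# 10% discount rule are assumptions made up for this example.

def discount_price(price: float, percent: float) -> float:
    """Apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)

def test_discount_price_applies_ten_percent():
    expected = 90.00                      # what the requirement says should happen
    actual = discount_price(100.00, 10)   # what the code actually does
    assert actual == expected

if __name__ == "__main__":
    test_discount_price_applies_ten_percent()
    print("validation check passed: actual matched expected")
```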
Testing Process
The manual testing lifecycle typically follows these stages: planning, test case design, environment setup, execution, defect reporting and retesting, and closure. Each stage has clear goals:
• Test Planning: Defining what to test and how. The QA team reviews requirements and decides test
objectives, scope, resources, and schedule 23 . A test plan document is created, detailing test
strategy (types of tests, tools), environments needed, roles, and milestones 23 . Estimations of effort
and deliverables (e.g. number of test cases) are made. By the end of planning, the team has a clear
roadmap of the testing approach.
• Test Case Design: Developing how to test. Test engineers write detailed test cases and scripts based
on requirements 24 . Each test case includes input conditions, actions, and expected outcomes 24 .
Test data (e.g. sample inputs) are prepared. Reviewers then validate test cases for correctness and
completeness. A Requirement Traceability Matrix (RTM) is often updated to ensure all requirements have corresponding test cases 24 (a small traceability sketch appears after this list). The goal is a comprehensive suite of tests that cover functional and non-functional requirements.
• Test Environment Setup: Preparing where to test. This involves configuring hardware, software, and
network needed for testing (e.g. test servers, databases, tools). Although sometimes parallel to test
design, the environment must be stable before execution.
• Test Execution: Running the tests. Testers execute the prepared test cases against the build 25 . For
each test, they check if the actual result matches the expected result. When discrepancies (defects)
are found, they are logged in a defect tracking system 25 . For example, “Test Case 5 failed: Login
button unresponsive (see bug #123)”. Testers record the outcome of each test (pass/fail) and attach
evidence (screenshots/logs) for failures.
• Defect Reporting and Retesting: Any defects found are reported to developers. The cycle often
repeats: developers fix bugs, a new build is provided, and testers re-run affected tests (regression) to
verify fixes 25 . Throughout execution, the team may also perform smoke tests on new builds to
ensure stability before full testing.
• Test Closure: Finalizing testing. Once testing objectives are met (or deadlines are reached), the team compiles the testing results and documents lessons learned. Activities include preparing a Test Summary Report (executed cases, defect counts, pass rate) and a Test Closure Report 26 . The defect log is reviewed to confirm that all critical defects are resolved or deferred. Test environments are cleaned up and test artifacts (cases, logs, reports) are archived 26 . Knowledge is transferred (e.g. sharing any workarounds or remaining risks). The closure report ensures stakeholders understand the testing
coverage, remaining issues, and readiness of the software 26 .
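As referenced in the Test Case Design stage, a Requirement Traceability Matrix can also be checked mechanically. The sketch below uses made-up requirement and test case IDs to flag requirements that have no covering test case; it is illustrative only, not a prescribed tool or format.

```python
# Hypothetical RTM check: map test cases to the requirement IDs they cover
# and report any requirement with no coverage. All IDs are invented.

requirements = ["REQ-001", "REQ-002", "REQ-003"]

rtm = {
    "TC-01": ["REQ-001"],             # e.g. login test covers REQ-001
    "TC-02": ["REQ-001", "REQ-002"],  # e.g. checkout test covers two requirements
}

def uncovered_requirements(reqs, matrix):
    """Return requirements not referenced by any test case."""
    covered = {r for reqs_for_tc in matrix.values() for r in reqs_for_tc}
    return [r for r in reqs if r not in covered]

if __name__ == "__main__":
    missing = uncovered_requirements(requirements, rtm)
    if missing:
        print("Requirements without test cases:", ", ".join(missing))
    else:
        print("All requirements are covered.")
```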
By following these stages in sequence, the QA process remains organized and traceable, ensuring high test
coverage and clear documentation of results 23 26 .
Tools and Resources
• Bug/Issue Trackers:
JIRA – A popular issue-tracking and project management tool by Atlassian. It provides a centralized platform to capture, assign, and monitor bugs 27 (a minimal API sketch appears after this list). Teams can record bugs with descriptions, attachments, and workflow statuses, ensuring nothing falls through the cracks.
Bugzilla – A mature, open-source defect-tracking system. Bugzilla lets teams track outstanding bugs,
issues, and change requests across releases 28 . It is web-based and customizable, widely used
when organizations need robust bug tracking without licensing costs.
• Other Tools: While manual testing is done without automation scripts, tools can still assist test management. For example, spreadsheet templates for test cases, screenshot tools, or browser developer tools help testers document and inspect issues. Exploratory testers may use mind-mapping or session-logging tools (e.g. TestBuddy) to record their test paths.
• Learning Resources: For training, many QA professionals use online courses (e.g. Udemy’s Manual
Testing courses, Coursera/Simplilearn software testing tracks) and certification materials (ISTQB
guides). Books like “Foundations of Software Testing” or “Lessons Learned in Software Testing” are
recommended for concepts and best practices. Community forums and blogs are invaluable – for
example, Stack Overflow and the Ministry of Testing community (MoT) host active discussions on
testing problems. Sites like Software Testing Help, Guru99, and QA blogs offer tutorials and QA tips.
Engaging with these resources helps testers stay current with techniques and tools.
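For teams that file bugs programmatically, here is a minimal sketch of creating an issue through Jira's REST API (v2) using the requests library. The base URL, project key, credentials, and field values are placeholders; in day-to-day manual testing, bugs are usually filed through the Jira UI instead.

```python
# Minimal sketch of filing a bug in Jira via its REST API (v2).
# URL, credentials, and project key below are placeholders.
import requests

JIRA_BASE_URL = "https://your-company.atlassian.net"   # placeholder
AUTH = ("[email protected]", "api-token")           # placeholder credentials

def create_bug(summary: str, description: str, project_key: str = "QA") -> str:
    payload = {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Bug"},
        }
    }
    response = requests.post(
        f"{JIRA_BASE_URL}/rest/api/2/issue",
        json=payload,
        auth=AUTH,
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["key"]  # e.g. "QA-123"

if __name__ == "__main__":
    key = create_bug(
        "Login button unresponsive",
        "Steps: 1) Open app 2) Click Login 3) Nothing happens. Expected: login form submits.",
    )
    print("Created issue", key)
```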
The tools and citations above highlight a few mainstream options; many teams choose the ones that fit their workflows best. The key is to use tools that facilitate good documentation and tracking, and to keep learning through courses, books, and community forums.
Images and Illustrations
Manual vs Automated Testing Comparison Chart (figure)
Conclusion
Manual testing is a foundational aspect of software quality assurance. By manually executing test cases,
testers provide the human perspective needed to catch usability issues, logical errors, and other defects
that automation may overlook 1 10 . This guide covered the core concepts: from the definition and
importance of manual testing, through the various test types and defect tracking practices, to the full
testing process lifecycle. Key takeaways include the need for clear documentation (comparing expected vs
actual results in bug reports 17 ), the distinction between verification and validation activities 19 30 , and
the value of good tools (like JIRA and TestRail) and communities for supporting QA work 27 29 .
In practice, manual testing and automation go hand-in-hand. While automation speeds up regression
checks, manual testing remains essential for creative exploration and user-focused validation. By
understanding both methods and following a structured testing process, QA teams can ensure they are
building the right product and building the product right 19 . Manual testing thus plays a crucial role in
delivering reliable, high-quality software.
References: Authoritative sources on testing are cited throughout by number; the corresponding links are listed below.
6 Functional testing - Wikipedia
https://en.wikipedia.org/wiki/Functional_testing
28 About - Bugzilla
https://www.bugzilla.org/about/
29 TestRail Introduction
https://www.tutorialspoint.com/testrail/testrail_introduction.htm