
Manual Testing Guide

Table of Contents
• Introduction to Manual Testing
• Types of Testing
• Bug and Defect Tracking
• Verification and Validation
• Testing Process
• Tools and Resources
• Images and Illustrations
• Conclusion

Introduction to Manual Testing


Manual testing is the process where testers manually execute test cases without automation tools [1].
The tester acts as an end user, using the application's features to find defects and ensure the software
behaves correctly [1]. Manual testing remains crucial in QA because it allows human insight into usability
and visual aspects that automated tests may miss. Testers can intuitively explore the application and catch
issues such as UI/UX problems or unexpected behavior that scripted tests might overlook.

Manual testing is an important part of software quality assurance. It ensures that the application meets
user needs and requirements before release [1]. By having humans test the system, teams can validate
things like layout, ease of use, and workflow from a real-user perspective. This complements automated
testing, which excels at repeatable, high-volume checks.

Manual vs Automated Testing

| Parameter | Manual Testing | Automated Testing |
|---|---|---|
| Definition | Tests are executed by a human tester [2] | Tests are executed by software tools [2] |
| Execution | Human-driven, flexible and intuitive (good for exploratory tests) | Script-driven, fast and repeatable (ideal for regression) |
| Speed & Effort | Time-consuming (manual effort for each test run) [3] | Fast execution once scripts exist (initial setup overhead) [3] |
| Coverage & Scope | Good for usability and ad-hoc checks; allows on-the-fly scenario changes | Good for broad coverage of repeatable tests (regression, performance) |
| Reliability | Prone to human error [4] | More reliable for repetitive tasks (no fatigue) [4] |
| Tools/Skills | No special tools or programming needed | Requires test automation frameworks and programming skills [5] |
| Use Cases | Usability, exploratory testing, client demos | Regression, load/performance, large test suites |

The table above compares key differences: in manual testing, a human conducts the tests step-by-step,
whereas automated testing relies on scripts and tools [2]. Manual testing is slower and labor-intensive, but it
excels at exploratory and usability tests [4]. Automated testing is faster and consistent (especially
for regression), but requires upfront scripting and maintenance [3]. The two approaches are
complementary in a QA strategy.

Types of Testing
Manual testing encompasses various test types to validate different aspects of software. Key types include:

• Functional Testing: Verifies that each feature works according to requirements. In functional
testing, testers check user actions and application responses against specifications [6]. For example,
ensuring a form submission behaves as intended.

• Regression Testing: Re-runs previous test cases after changes to ensure existing functionality still
works. Regression testing ensures that recent code changes or fixes have not broken anything that
previously worked [7]. It is critical after bug fixes or new feature integration to catch any
unintended side effects.

• Integration Testing: Checks that different modules or components work together. Testers combine
parts of the system (which were unit-tested individually) and validate the interfaces between them [8].
For instance, testing how a login module interacts with a database and a UI component together.

• User Acceptance Testing (UAT): Validates the software in real-world conditions with actual users. UAT
is typically the final testing phase, performed by the intended audience (often end users or clients) [9].
The goal is to confirm the software meets business requirements and user needs before
release. For example, a customer tries out key workflows to ensure the product fits their day-to-day
use [9].

• Exploratory Testing: Involves simultaneous learning, test design, and execution without pre-set
scripts [10]. Testers freely navigate the application, guided by their experience and intuition, to
discover unexpected issues. This unscripted approach helps find bugs that structured tests might
miss [10].

• Smoke Testing: A quick, shallow set of tests on a new build to ensure basic functionality works [11].
Also called build verification testing, smoke tests check whether the most important features function
correctly so that deeper testing can proceed. If a smoke test fails (e.g. the app crashes on launch),
the build is rejected immediately [11].

• Sanity Testing: Narrow, focused testing after receiving a build, to verify specific bug fixes or
functionality. Sanity tests quickly ensure that critical changes work and haven't introduced new
problems. It is a subset of regression testing, limited in scope, used when time is short [12]. For
example, after a fix for a checkout-page bug, sanity testing would quickly check that
payment still processes correctly [12].

Each testing type has a clear purpose, and often multiple types are used together. For example, after
integration testing confirms module interfaces, regression and UAT might follow to fully validate the
system.

Bug and Defect Tracking


Bugs and defects refer to flaws in the software. A bug is a fault in the code that causes incorrect or
unexpected behavior [13] (e.g., a crash or wrong output). A defect generally means any deviation from
requirements observed in testing [14]. In practice, “bug” and “defect” are often used interchangeably.

Defects are often classified by type and severity. Common defect types include design defects (e.g.
incorrect UI layout), logical defects (errors in algorithms or logic), integration defects (issues when
modules interact), and performance defects (problems under load) [15]. Severity levels describe impact:
e.g. Critical (system down), Major (function fails but system partially works), Medium (undesirable but
tolerable), and Low (minor issue) [16]. Priority (urgent fix vs. can wait) is a separate attribute, but
defects that are critical in severity often get higher priority.

A well-documented bug report is essential for efficient defect tracking. An effective bug report should include:

• Title/ID: A concise summary of the issue.
• Environment: Software version, OS, browser, or device where the bug was found.
• Description: A clear description of the problem.
• Steps to Reproduce: Numbered steps that reliably reproduce the bug, with details of inputs and actions [17].
• Expected vs Actual Results: What you expected to happen, and what actually happened [17].
• Attachments: Screenshots, logs, or videos showing the issue in context [18]. Visual evidence (e.g. a screenshot of an error message) and log files can greatly speed debugging by providing context.
• Severity/Priority: The suspected impact (e.g. high severity if it crashes the app) and the priority for fixing.

By following this structured format, developers can quickly understand and reproduce the issue [17]. For
example: “Steps: 1) Open app; 2) Click X; 3) Observe crash. Expected: navigate to Y page. Actual: app crashes
with error. Screenshot attached.” [17][18] Clear bug reports lead to faster resolutions and better QA
outcomes.
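
To make the checklist concrete, a bug report can be modeled as a simple record. The sketch below is illustrative only; all field names and sample values are hypothetical and do not reflect any particular tracker's schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BugReport:
    """Minimal bug-report structure mirroring the checklist above (illustrative)."""
    bug_id: str
    title: str                     # concise summary of the issue
    environment: str               # version, OS, browser, or device
    description: str
    steps_to_reproduce: List[str]  # numbered steps with inputs and actions
    expected_result: str
    actual_result: str
    severity: str                  # e.g. Critical / Major / Medium / Low
    priority: str                  # e.g. P1 / P2 / P3
    attachments: List[str] = field(default_factory=list)  # screenshot/log paths

report = BugReport(
    bug_id="BUG-123",
    title="App crashes when clicking X",
    environment="v2.1.0, Windows 11, Chrome 126",
    description="Clicking X on the home screen crashes the app.",
    steps_to_reproduce=["Open app", "Click X", "Observe crash"],
    expected_result="Navigate to Y page",
    actual_result="App crashes with an error dialog",
    severity="Critical",
    priority="P1",
    attachments=["crash_screenshot.png"],
)
```

A structure like this maps directly onto the standard fields of trackers such as JIRA or Bugzilla, which is what makes well-formed reports easy to file and triage.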

Verification and Validation


Verification and Validation are two complementary quality activities. Verification asks “Are we building the
product right?” – it checks conformance to specifications through internal activities. Validation asks “Are we
building the right product?” – it checks that the final product meets user needs.

According to IEEE definitions, Verification is the process of evaluating whether the product complies with
requirements and design specifications [19]. This is often done through static testing and reviews.
Examples of verification activities include:

• Inspections: Carefully examining design documents or code to spot flaws before execution [20].
• Reviews/Walkthroughs: Team meetings to go through requirements or designs step-by-step [20].
• Desk-checking: A developer manually walking through their own code logic for errors [20].

These activities do not execute the code, but they catch issues early by ensuring specifications are followed.

Validation is the process of checking whether the product actually fulfills its intended use and customer
requirements [19]. This is done through dynamic testing. Examples include:

• Unit Testing: Checking that individual functions or components work correctly [21] (see the sketch below).
• Integration Testing: Verifying that combined components interact correctly (overlaps with the “Integration Testing” type above) [22].
• System/Acceptance Testing: Running end-to-end scenarios in an environment similar to production.
• User Acceptance Testing (UAT): Real users testing to ensure the product meets their needs (e.g. business scenarios) [9].

Validation ensures the software does what users need in practice, while verification ensures the software
was built to spec. Together, V&V activities form a robust QA strategy. For example, one might verify the
payment module design via a review, then validate it by actually processing a test transaction.
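
As a small illustration of dynamic validation at the unit level, here is a minimal sketch using Python's built-in unittest module; the apply_discount function and its requirements are hypothetical:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        # Expected result comes from the requirement: 10% off 100.00 is 90.00
        self.assertEqual(apply_discount(100.00, 10), 90.00)

    def test_invalid_percent_rejected(self):
        # Validation also covers error handling against the spec
        with self.assertRaises(ValueError):
            apply_discount(100.00, 150)

if __name__ == "__main__":
    unittest.main()
```

The key point is that the expected values in the assertions are derived from the requirements, so a passing run is evidence the product does what users asked for, not merely that it matches its own design.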

Testing Process
The manual testing lifecycle typically follows these stages: planning, design, execution, defect reporting,
and closure. Each stage has clear goals:

• Test Planning: Defining what to test and how. The QA team reviews requirements and decides test
objectives, scope, resources, and schedule [23]. A test plan document is created, detailing the test
strategy (types of tests, tools), environments needed, roles, and milestones [23]. Estimations of effort
and deliverables (e.g. number of test cases) are made. By the end of planning, the team has a clear
roadmap of the testing approach.

• Test Case Design: Developing how to test. Test engineers write detailed test cases and scripts based
on requirements [24]. Each test case includes input conditions, actions, and expected outcomes [24].
Test data (e.g. sample inputs) are prepared. Reviewers then validate the test cases for correctness and
completeness. A Requirement Traceability Matrix (RTM) is often updated to ensure all requirements
have corresponding test cases [24] (see the sketch at the end of this section). The goal is a
comprehensive suite of tests that covers functional and non-functional requirements.

• Test Environment Setup: Preparing where to test. This involves configuring hardware, software, and
network needed for testing (e.g. test servers, databases, tools). Although sometimes parallel to test
design, the environment must be stable before execution.

• Test Execution: Running the tests. Testers execute the prepared test cases against the build [25]. For
each test, they check whether the actual result matches the expected result. When discrepancies (defects)
are found, they are logged in a defect tracking system [25]. For example: “Test Case 5 failed: Login
button unresponsive (see bug #123)”. Testers record the outcome of each test (pass/fail) and attach
evidence (screenshots/logs) for failures.

• Defect Reporting and Retesting: Any defects found are reported to developers. The cycle often
repeats: developers fix bugs, a new build is provided, and testers re-run affected tests (regression) to
verify the fixes [25]. Throughout execution, the team may also perform smoke tests on new builds to
ensure stability before full testing.

• Test Closure: Finalizing testing. Once testing objectives are met (or deadlines arrive), the team
compiles the testing results and documents lessons learned. Activities include preparing a Test
Summary Report (executed cases, defect counts, pass rate) and a Test Closure Report [26]. The team
confirms that all critical defects are resolved or deferred. Test environments are cleaned up, and test
artifacts (cases, logs, reports) are archived [26]. Knowledge is transferred (e.g. sharing any
workarounds or remaining risks). The closure report ensures stakeholders understand the testing
coverage, remaining issues, and the readiness of the software [26].

By following these stages in sequence, the QA process remains organized and traceable, ensuring high test
coverage and clear documentation of results [23][26].
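
To make the test-case and RTM ideas from the Test Case Design step concrete, here is a minimal sketch; the field names, requirement IDs, and sample cases are hypothetical:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class TestCase:
    """A single manual test case: preconditions, steps, and expected outcome."""
    case_id: str
    requirement_id: str   # links the case back to a requirement
    preconditions: str
    steps: List[str]
    expected_result: str

def build_rtm(cases: List[TestCase]) -> Dict[str, List[str]]:
    """Requirement Traceability Matrix: requirement ID -> covering test cases."""
    rtm: Dict[str, List[str]] = {}
    for case in cases:
        rtm.setdefault(case.requirement_id, []).append(case.case_id)
    return rtm

cases = [
    TestCase("TC-01", "REQ-LOGIN-1", "User account exists",
             ["Open login page", "Enter valid credentials", "Click Login"],
             "User lands on the dashboard"),
    TestCase("TC-02", "REQ-LOGIN-1", "User account exists",
             ["Open login page", "Enter wrong password", "Click Login"],
             "Error message is shown; no session created"),
]

# A requirement absent from this mapping has no test coverage yet.
print(build_rtm(cases))  # {'REQ-LOGIN-1': ['TC-01', 'TC-02']}
```

Inverting the case list into a requirement-keyed map is what lets reviewers spot uncovered requirements at a glance, which is the whole point of an RTM.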

Tools and Resources


Effective manual testing often uses specialized tools and community resources:

• Bug/Issue Trackers:
JIRA – A popular issue-tracking and project management tool by Atlassian. It provides a centralized
platform to capture, assign, and monitor bugs [27] (a hedged API sketch appears at the end of this
section). Teams can record bugs with descriptions, attachments, and workflow statuses, ensuring
nothing falls through the cracks.
Bugzilla – A mature, open-source defect-tracking system. Bugzilla lets teams track outstanding bugs,
issues, and change requests across releases [28]. It is web-based and customizable, and widely used
when organizations need robust bug tracking without licensing costs.

• Test Case Management:
TestRail – A web-based test management tool. It helps testers organize test cases, group them into
runs/suites, execute tests, and report results [29]. Using TestRail, QA teams can maintain a central
repository of test cases, see coverage dashboards, and integrate with bug trackers. Other similar
tools include TestLink, Zephyr, and qTest.

• Other Tools: While manual testing is done without automation scripts, tools can still assist test
management. For example, spreadsheet templates for test cases, screenshot tools, or browser developer
tools to inspect issues. Exploratory testers may use mind-mapping or session-logging tools (e.g.
TestBuddy) to record their test paths.

• Learning Resources: For training, many QA professionals use online courses (e.g. Udemy’s Manual
Testing courses, Coursera/Simplilearn software testing tracks) and certification materials (ISTQB
guides). Books like “Foundations of Software Testing” or “Lessons Learned in Software Testing” are
recommended for concepts and best practices. Community forums and blogs are invaluable – for
example, Stack Overflow and the Ministry of Testing community (MoT) host active discussions on
testing problems. Sites like Software Testing Help, Guru99, and QA blogs offer tutorials and QA tips.
Engaging with these resources helps testers stay current with techniques and tools.

The sections above highlight a few mainstream tools; many teams choose the ones that fit their
workflows best. The key is to use tools that facilitate good documentation and tracking, and to keep
learning through courses, books, and community forums.
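
As one illustrative sketch (not official Atlassian documentation), a bug found during manual testing can be filed programmatically through Jira's widely documented REST endpoint for creating issues; the base URL, credentials, and project key below are placeholders:

```python
import requests

JIRA_BASE = "https://your-company.atlassian.net"  # placeholder URL
AUTH = ("[email protected]", "api-token")          # placeholder credentials

def create_bug(summary: str, description: str, project_key: str = "QA") -> str:
    """Create a Bug issue via Jira's REST API and return its key (sketch)."""
    payload = {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Bug"},
        }
    }
    resp = requests.post(f"{JIRA_BASE}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "QA-123"

# Usage (sketch):
# key = create_bug("Login button unresponsive",
#                  "Steps: 1) Open login page 2) Click Login 3) No response")
```

Most teams file bugs through Jira's web UI; a small wrapper like this is mainly useful for integrating a test-management tool or a triage script with the tracker.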

Images and Illustrations

Testing Lifecycle Flowchart

Figure: Software Testing Life Cycle flowchart.


A typical testing lifecycle flows through sequential phases as shown above: from Requirement Analysis
(understanding what to test), to Test Planning and Test Design, then setting up the Test Environment,
executing tests and logging results, and finally Test Reporting/Test Closure. Each phase builds on the previous
one to ensure a thorough and organized QA process [23][26].

Sample Bug Report Layout

[Insert sample bug report layout diagram here]


A standard bug report format includes fields such as Bug ID, Summary/Title, Environment, Description,
Steps to Reproduce, Expected Result, Actual Result, and Attachments (screenshots or logs). For example, a
bug report might list numbered steps to recreate the issue, with a clear distinction between what should
happen and what did happen. Including a severity level and any supporting attachments (e.g. an error
screenshot) makes the report actionable for developers [17][18].

Severity vs Priority Matrix

[Insert severity vs priority matrix diagram here]


Severity and priority are often visualized in a matrix to help triage defects. Severity measures the technical
impact on the system (critical/major/medium/low) [16], while priority indicates how soon the issue should be
fixed. For instance, a high-severity, high-priority bug (e.g. login broken) blocks users and must be fixed
immediately, whereas a low-severity, low-priority bug (e.g. a minor UI typo) can be deferred. A matrix helps
teams agree on which bugs to address first, balancing impact and urgency.
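
A minimal sketch of such a triage matrix, with illustrative (non-standard) bucket labels:

```python
# Severity (first key) x Priority (second key) -> suggested triage action.
# Bucket labels are illustrative, not a standard; teams define their own.
TRIAGE = {
    ("critical", "high"): "fix immediately (blocker)",
    ("critical", "low"):  "fix before this release ships",
    ("major",    "high"): "fix this sprint",
    ("major",    "low"):  "schedule for next release",
    ("low",      "high"): "quick win: fix if cheap",
    ("low",      "low"):  "defer to backlog",
}

def triage(severity: str, priority: str) -> str:
    """Look up the agreed action for a severity/priority pair."""
    return TRIAGE.get((severity, priority), "needs manual triage")

print(triage("critical", "high"))  # fix immediately (blocker)
print(triage("low", "low"))        # defer to backlog
```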

Manual vs Automated Testing Comparison Chart

Figure: Manual vs. Automated Testing Comparison chart.


Manual and automated testing each have distinct strengths, as illustrated above. Manual testing (left) is
human-driven and intuitive, making it good for exploratory and usability testing. Automated testing (right)
is script-driven and repeatable, making it ideal for regression and performance testing. Understanding both
methods helps teams choose the right approach for each testing need [2].

Conclusion
Manual testing is a foundational aspect of software quality assurance. By manually executing test cases,
testers provide the human perspective needed to catch usability issues, logical errors, and other defects
that automation may overlook [1][10]. This guide covered the core concepts: from the definition and
importance of manual testing, through the various test types and defect tracking practices, to the full
testing process lifecycle. Key takeaways include the need for clear documentation (comparing expected vs.
actual results in bug reports [17]), the distinction between verification and validation activities [19][30], and
the value of good tools (like JIRA and TestRail) and communities for supporting QA work [27][29].

In practice, manual testing and automation go hand in hand. While automation speeds up regression
checks, manual testing remains essential for creative exploration and user-focused validation. By
understanding both methods and following a structured testing process, QA teams can ensure they are
building the right product and building the product right [19]. Manual testing thus plays a crucial role in
delivering reliable, high-quality software.

References: Authoritative sources on testing are cited throughout by bracketed number; the full list follows.

[1] Manual testing – Wikipedia
https://en.wikipedia.org/wiki/Manual_testing

[2][3][4][5] Manual Testing vs Automated Testing – GeeksforGeeks
https://www.geeksforgeeks.org/software-engineering-differences-between-manual-and-automation-testing/

[6] Functional testing – Wikipedia
https://en.wikipedia.org/wiki/Functional_testing

[7] Regression testing – Wikipedia
https://en.wikipedia.org/wiki/Regression_testing

[8] Integration testing – Wikipedia
https://en.wikipedia.org/wiki/Integration_testing

[9] What is User Acceptance Testing (UAT)? – TechTarget
https://www.techtarget.com/searchsoftwarequality/definition/user-acceptance-testing-UAT

[10] Exploratory Testing – GeeksforGeeks
https://www.geeksforgeeks.org/exploratory-testing/

[11] What is Smoke Testing? – TechTarget
https://www.techtarget.com/searchsoftwarequality/definition/smoke-testing

[12] Sanity Testing – Software Testing – GeeksforGeeks
https://www.geeksforgeeks.org/sanity-testing/

[13][14][15] Bug vs Defect: Core Differences – BrowserStack
https://www.browserstack.com/guide/bug-vs-defect

[16] Severity in Testing vs Priority in Testing – GeeksforGeeks
https://www.geeksforgeeks.org/severity-in-testing-vs-priority-in-testing/

[17][18] How to write an Effective Bug Report – BrowserStack
https://www.browserstack.com/guide/how-to-write-a-bug-report

[19] Verification and validation – Wikipedia
https://en.wikipedia.org/wiki/Verification_and_validation

[20][21][22][30] Verification and Validation in Software Engineering – GeeksforGeeks
https://www.geeksforgeeks.org/software-engineering-verification-and-validation/

[23][24][25][26] Software Testing Life Cycle (STLC) – GeeksforGeeks
https://www.geeksforgeeks.org/software-testing-life-cycle-stlc/

[27] Bug Tracking with Jira – Atlassian
https://www.atlassian.com/software/jira/features/bug-tracking

[28] About – Bugzilla
https://www.bugzilla.org/about/

[29] TestRail Introduction – TutorialsPoint
https://www.tutorialspoint.com/testrail/testrail_introduction.htm
