
Manual Testing

By Prof. Aalia Shaikh


What is Testing?

 Software testing assesses software quality and helps reduce the risk of software failure in operation.
 Software testing is a set of activities to discover defects and evaluate
the quality of software artifacts.
 Testing involves verification, i.e., checking whether the system meets specified requirements.
 It also involves validation, i.e., checking whether the system meets users’ and other stakeholders’ needs in its operational environment.
The typical test objectives are:
 Evaluating work products such as requirements, user stories, designs, and code
 Triggering failures and finding defects
 Ensuring required coverage of a test object
 Reducing the level of risk of inadequate software quality
 Verifying whether specified requirements have been fulfilled
 Verifying that a test object complies with contractual, legal, and regulatory
requirements
 Providing information to stakeholders to allow them to make informed decisions
 Building confidence in the quality of the test object
 Validating whether the test object is complete and works as expected by the
stakeholders
Testing and Debugging

 Testing can trigger failures that are caused by defects in the software
(dynamic testing) or can directly find defects in the test object (static
testing).
 When dynamic testing triggers a failure, debugging is concerned with
finding causes of this failure (defects), analyzing these causes,
and eliminating them.
 The typical debugging process in this case involves:
 Reproduction of a failure
 Diagnosis (finding the root cause)
 Fixing the cause
Why is Testing Necessary?

 Testing components, systems, and associated documentation helps to identify defects in software.
 Testing provides a cost-effective means of detecting defects.
 Testing is a form of quality control (QC).
 QC is a product-oriented, corrective approach that focuses on those activities supporting the achievement of appropriate levels of quality.
 QA is a process-oriented, preventive approach that focuses on the implementation and improvement of processes.
 It works on the basis that if a good process is followed correctly, it will generate a good product.
Errors, Defects, Failures, and Root Causes
 Human beings make errors (mistakes),
 which produce defects (faults, bugs),
 which in turn may result in failures.
 A root cause is a fundamental reason for the occurrence of a problem.
 Root causes are identified through root cause analysis.
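
As a minimal, hypothetical illustration of this chain (the average() function below is invented for this example, not from the slides): a programmer's error introduces a defect into the code, and executing the defective code produces a failure.

```python
def average(values):
    """Intended to return the arithmetic mean of the values."""
    # Defect (fault/bug): the programmer's error was writing 2
    # instead of len(values).
    return sum(values) / 2

# Dynamic testing triggers the failure caused by the defect:
result = average([2, 4, 6])
print(result)  # prints 6.0, but the expected mean is 4.0 -> failure
```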
Testing Principles

1. Testing shows the presence, not the absence, of defects.
2. Exhaustive testing is impossible.
3. Early testing saves time and money.
4. Defects cluster together.
5. Tests wear out.
6. Testing is context dependent.
7. Absence-of-defects fallacy.
Test Activities

 Test planning
 Test monitoring and control
 Test analysis – what to test
 Test design – how to test
 Test implementation
 Test execution
 Test completion
Test Process: the way testing is carried out depends on a number of contextual factors, including:
 Stakeholders (needs, expectations, requirements, willingness to cooperate, etc.)
 Team members (skills, knowledge, level of experience, availability, training
needs, etc.)
 Business domain (criticality of the test object, identified risks, market needs,
specific legal regulations, etc.)
 Technical factors (type of software, product architecture, technology used, etc.)
 Project constraints (scope, time, budget, resources, etc.)
 Organizational factors (organizational structure, existing policies, practices
used, etc.)
 Software development lifecycle (engineering practices, development methods,
etc.)
 Tools (availability, usability, compliance, etc.)
Testware is created as output work products from the test activities
 Test planning work products include: test plan, test schedule, risk register, and entry and exit
criteria
 Risk register is a list of risks together with risk likelihood, risk impact and information about risk
mitigation
 Test monitoring and control work products include: test progress reports, documentation of control directives, and risk information.
 Test analysis work products include: (prioritized) test conditions and defect reports regarding
defects in the test basis (if not fixed directly).
 Test design work products include: (prioritized) test cases, test charters, coverage items, test
data requirements and test environment requirements.
 Test implementation work products include: test procedures, automated test scripts, test suites,
test data, test execution schedule, and test environment elements. Examples of test environment
elements include: stubs, drivers, simulators, and service virtualizations.
 Test execution work products include: test logs, and defect reports
 Test completion work products include: test completion report, action items for improvement of subsequent projects or iterations, documented lessons learned, and change requests (e.g., as product backlog items).
Traceability between the Test Basis and Testware
 Accurate traceability supports coverage evaluation
 The coverage criteria can function as key performance indicators to drive
the activities that show to what extent the test objectives have been
achieved

 Traceability of test cases to requirements can verify that the requirements are covered by test cases.
 Traceability of test results to risks can be used to evaluate the level of
residual risk in a test object.

 Good traceability also makes test progress and completion reports more
easily understandable by including the status of test basis elements.
Generic Skills Required for Testing

 Testing knowledge (to increase effectiveness of testing, e.g., by using test techniques)
 Thoroughness, carefulness, curiosity, attention to details, being methodical
(to identify defects, especially the ones that are difficult to find)
 Good communication skills, active listening, being a team player (to interact
effectively with all stakeholders, to convey information to others, to be
understood, and to report and discuss defects)
 Analytical thinking, critical thinking, creativity (to increase effectiveness of
testing)
 Technical knowledge (to increase efficiency of testing, e.g., by using
appropriate test tools)
 Domain knowledge (to be able to understand and to communicate with end
users/business representatives)
Testing as a Driver for Software Development
Test-Driven Development (TDD)
 Directs the coding through test cases (instead of extensive software design)
(Beck 2003)
 Tests are written first, then the code is written to satisfy the tests, and then
the tests and code are refactored
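
A minimal sketch of this cycle, assuming a pytest-style test runner; the fizzbuzz() function is a hypothetical example invented here, not taken from the slides:

```python
# Step 1 (red): write the tests first; they fail because
# fizzbuzz() does not exist yet.
def test_multiple_of_three_returns_fizz():
    assert fizzbuzz(9) == "Fizz"

def test_other_numbers_are_returned_as_strings():
    assert fizzbuzz(7) == "7"

# Step 2 (green): write just enough code to make the tests pass.
def fizzbuzz(n):
    if n % 3 == 0:
        return "Fizz"
    return str(n)

# Step 3 (refactor): clean up the tests and the code while
# keeping the whole suite passing.
```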
Acceptance Test-Driven Development (ATDD)
 Derives tests from acceptance criteria as part of the system design process
(Gärtner 2011)
 Tests are written before the part of the application is developed to satisfy the
tests
Behavior-Driven Development (BDD)
 Expresses the desired behavior of an application with test cases written in a
simple form of natural language, which is easy to understand by
stakeholders – usually using the Given/When/Then format. (Chelimsky 2010)
 Test cases are then automatically translated into executable tests
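
For example, a scenario might read: "Given an empty shopping cart, When the user adds an item priced at 10.00, Then the cart total is 10.00." A minimal sketch of one possible executable translation follows; the ShoppingCart class is hypothetical, invented for this example:

```python
class ShoppingCart:
    """Hypothetical system under test."""
    def __init__(self):
        self.prices = []

    def add_item(self, price):
        self.prices.append(price)

    @property
    def total(self):
        return sum(self.prices)

def test_adding_an_item_updates_the_total():
    cart = ShoppingCart()       # Given an empty shopping cart
    cart.add_item(10.00)        # When the user adds an item priced at 10.00
    assert cart.total == 10.00  # Then the cart total is 10.00
```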
Retrospectives and Process Improvement
 Retrospectives (also known as “post-project meetings” and project retrospectives) are often held at the end of a project or an iteration, at a release milestone, or when needed.

 What was successful, and should be retained?
 What was not successful and could be improved?
 How to incorporate the improvements and retain the successes in the future?

 The results should be recorded and are normally part of the test
completion report
Test Levels
 Component testing (also known as unit testing)
focuses on testing components in isolation
 Component integration testing (also known as unit integration
testing)
focuses on testing the interfaces and interactions between components
 System testing
focuses on the overall behavior and capabilities of an entire system or
product, often including functional testing of end-to-end tasks and the
non-functional testing of quality characteristics
 System integration testing
focuses on testing the interfaces between the system under test and other systems and external services.
 Acceptance testing
focuses on validation and on demonstrating readiness for deployment,
which means that the system fulfills the user’s business needs.
Test Types
Functional testing evaluates the functions that a component or
system should perform

Non-functional testing is the testing of “how well the system behaves”.
 Performance efficiency
 Compatibility
 Usability
 Reliability
 Security
 Maintainability
 Portability
 Black-box testing The main objective of black-box testing is
checking the system's behavior against its specifications.
 White-box testing is structure-based and derives tests from the system's implementation or internal structure (e.g., code, architecture, workflows, and data flows).

 Confirmation testing (retesting) confirms that an original defect has been successfully fixed.
 Regression testing confirms that no adverse consequences have been caused by a change, including a fix that has already been confirmation tested.
Differences between Static Testing and Dynamic Testing

Static Testing
 Examines the code and other work products directly
 Can be applied to non-executable work products
 Can evaluate characteristics such as maintainability

Dynamic Testing
 Examines the behavior of the actual product in execution
 Can only be applied to executable work products
 Can evaluate characteristics such as performance efficiency
Typical defects to find through static testing
 Defects in requirements (e.g., inconsistencies, ambiguities,
contradictions, omissions, inaccuracies, duplications)
 Design defects (e.g., inefficient database structures, poor
modularization)
 Certain types of coding defects (e.g., variables with undefined values,
undeclared variables, unreachable or duplicated code, excessive code
complexity)
 Deviations from standards (e.g., lack of adherence to naming
conventions in coding standards)
 Incorrect interface specifications (e.g., mismatched number, type or
order of parameters)
 Specific types of security vulnerabilities (e.g., buffer overflows)
 Gaps or inaccuracies in test basis coverage (e.g., missing tests for an
acceptance criterion)
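
A small, invented Python snippet containing two of the coding defects listed above; a review or static analysis tool can find both without running the code:

```python
def apply_discount(price):
    if price > 100:
        rate = 0.10
    # Defect: 'rate' has an undefined value whenever price <= 100,
    # which static analysis can flag without executing the code.
    discounted = price - price * rate
    return discounted
    print("discount applied")  # Defect: unreachable code after return
```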
Review Process Activities

 Planning
 Review initiation
 Individual review
 Communication and analysis
 Fixing and reporting
Roles and Responsibilities in Reviews
 Manager – decides what is to be reviewed and provides resources, such as staff and time for the review
 Author – creates and fixes the work product under review
 Moderator (also known as the facilitator) – ensures the effective
running of review meetings, including mediation, time management,
and a safe review environment in which everyone can speak freely
 Scribe (also known as recorder) – collates anomalies from reviewers
and records review information, such as decisions and new anomalies
found during the review meeting
 Reviewer – performs reviews. A reviewer may be someone working on
the project, a subject matter expert, or any other stakeholder
 Review leader – takes overall responsibility for the review such as
deciding who will be involved, and organizing when and where the
review will take place
Review Types

 Informal review
 Walkthrough
 Technical Review
 Inspection
Black-Box Test Techniques

 Equivalence Partitioning
 Boundary Value Analysis
 Decision Table Testing
 State Transition Testing
Equivalence Partitioning

 Equivalence Partitioning (EP) divides data into partitions (known as equivalence partitions) based on the expectation that all the elements of a given partition are to be processed in the same way by the test object, as the sketch below illustrates.
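
A minimal sketch, assuming a hypothetical classify_age() test object (invented for this example) and pytest-style tests; one representative value is chosen from each partition:

```python
def classify_age(age):
    """Hypothetical test object: classifies an age value."""
    if age < 0 or age > 120:
        return "invalid"
    return "minor" if age < 18 else "adult"

# Partitions for 'age': below 0 (invalid), 0-17 (minor),
# 18-120 (adult), above 120 (invalid).
def test_one_value_per_partition():
    assert classify_age(-5) == "invalid"   # partition: age < 0
    assert classify_age(10) == "minor"     # partition: 0..17
    assert classify_age(40) == "adult"     # partition: 18..120
    assert classify_age(130) == "invalid"  # partition: age > 120
```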
Boundary Value Analysis

 Boundary Value Analysis (BVA) is a technique based on exercising the boundaries of equivalence partitions, as in the sketch below.
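
Continuing the hypothetical classify_age() example from the previous sketch, 2-value BVA exercises each boundary and its nearest neighbour:

```python
def classify_age(age):  # same hypothetical test object as above
    if age < 0 or age > 120:
        return "invalid"
    return "minor" if age < 18 else "adult"

def test_boundary_values():
    assert classify_age(-1) == "invalid"   # just below the 0 boundary
    assert classify_age(0) == "minor"      # lower boundary of 0..17
    assert classify_age(17) == "minor"     # upper boundary of 0..17
    assert classify_age(18) == "adult"     # lower boundary of 18..120
    assert classify_age(120) == "adult"    # upper boundary of 18..120
    assert classify_age(121) == "invalid"  # just above the 120 boundary
```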
Decision Table Testing

 Decision tables are used for testing the implementation of system requirements that specify how different combinations of conditions result in different outcomes. Decision tables are an effective way of recording complex logic, such as business rules (see the example below).
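
A minimal sketch, assuming an invented discount rule with two conditions; each column of the decision table becomes one test case:

```python
# Decision table for a hypothetical discount rule:
#
#   Conditions          R1    R2    R3    R4
#   member?             T     T     F     F
#   order >= 100?       T     F     T     F
#   Action: discount    15%   10%   5%    0%

def discount(is_member, order_total):
    if is_member:
        return 0.15 if order_total >= 100 else 0.10
    return 0.05 if order_total >= 100 else 0.00

def test_one_case_per_rule():
    assert discount(True, 150) == 0.15    # R1
    assert discount(True, 50) == 0.10     # R2
    assert discount(False, 150) == 0.05   # R3
    assert discount(False, 50) == 0.00    # R4
```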
State Transition Testing

 A state transition diagram models the behavior of a system by showing its possible states and valid state transitions.
 A transition is initiated by an
event, which may be additionally
qualified by a guard condition.
 The transitions are assumed to be
instantaneous and may sometimes
result in the software taking action.
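
A minimal sketch, assuming an invented three-state document workflow; the tests exercise every valid transition plus one invalid event/state combination:

```python
# Valid transitions of a hypothetical document workflow.
TRANSITIONS = {
    ("DRAFT", "submit"): "SUBMITTED",
    ("SUBMITTED", "approve"): "APPROVED",
    ("SUBMITTED", "reject"): "DRAFT",
}

def next_state(state, event):
    # An invalid (state, event) pair raises KeyError.
    return TRANSITIONS[(state, event)]

def test_every_valid_transition():
    assert next_state("DRAFT", "submit") == "SUBMITTED"
    assert next_state("SUBMITTED", "approve") == "APPROVED"
    assert next_state("SUBMITTED", "reject") == "DRAFT"

def test_invalid_transition_is_rejected():
    try:
        next_state("DRAFT", "approve")  # no such transition
        assert False, "expected KeyError for an invalid transition"
    except KeyError:
        pass
```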
White-Box Test Techniques

 White-box techniques can be used in static testing (e.g., during dry runs of code).
 They are well suited to reviewing code that is not yet ready for
execution

 Statement testing
 Branch testing
Statement Testing

 In statement testing, the coverage items are executable statements. The aim is to design test cases that exercise statements in the code until an acceptable level of coverage is achieved.
 When 100% statement coverage is achieved, it ensures that all
executable statements in the code have been exercised at least once.
Branch Testing and Branch Coverage
 A branch is a transfer of control between two nodes in the control
flow graph, which shows the possible sequences in which source code
statements are executed in the test object.
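
A minimal sketch showing the difference between the two coverage levels, using an invented grant_access() function:

```python
def grant_access(is_admin):
    access = False
    if is_admin:       # the 'if' creates two branches
        access = True
    return access

# This single test executes every statement (100% statement
# coverage), because the 'if' body runs:
def test_admin_is_granted_access():
    assert grant_access(True) is True

# But the False branch of the 'if' is never taken; 100% branch
# coverage needs a second test:
def test_non_admin_is_denied_access():
    assert grant_access(False) is False
```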
Experience-based Test Techniques

 Error Guessing
 Exploratory Testing
 Checklist-based testing
Acceptance Criteria

 Define the scope of the user story
 Reach consensus among the stakeholders
 Describe both positive and negative scenarios
 Serve as a basis for the user story acceptance testing
 Allow accurate planning and estimation
Purpose and Content of a Test Plan

 Documents the means and schedule for achieving test objectives
 Helps to ensure that the performed test activities will meet the
established criteria
 Serves as a means of communication with team members and other
stakeholders
 Demonstrates that testing will adhere to the existing test policy and
test strategy
The typical content of a test plan includes:
 Context of testing (e.g., scope, test objectives, constraints, test basis)
 Assumptions and constraints of the test project
 Stakeholders (e.g., roles, responsibilities, relevance to testing, hiring and
training needs)
 Communication (e.g., forms and frequency of communication,
documentation templates)
 Risk register (e.g., product risks, project risks)
 Test approach (e.g., test levels, test types, test techniques, test deliverables,
entry criteria and exit criteria, independence of testing, metrics to be
collected, test data requirements, test environment requirements,
deviations from the organizational test policy and test strategy)
 Budget and schedule
Entry Criteria and Exit Criteria

 Entry criteria define the preconditions for undertaking a given activity.
 If entry criteria are not met, it is likely that the activity will prove to
be more difficult, time-consuming, costly, and riskier.

 Exit criteria define what must be achieved in order to declare an activity completed.

 Entry criteria and exit criteria should be defined for each test level,
and will differ based on the test objectives.
To be continued..

What is Sanity testing?

 Sanity testing is testing done at the release level to test the main functionalities. It is also considered an aspect of regression testing.
