QA Manual
Examples of software being delivered that does not work as expected, or does not work at all, are common. Why is this?
• Pressure
• Deadlines
• Lack of time
• Complexity
• Lack of tests
• etc.
Error -> Defect -> Failure: a human error introduces a defect into the code or documentation, and executing the defective code may produce a failure.
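A minimal sketch of that chain in Python (the function and figures are illustrative, not from the source):

```python
# Error -> Defect -> Failure, illustrated.
# The programmer's ERROR (a wrong assumption about range()) leaves a DEFECT
# in the code; executing the defective code then produces a FAILURE.

def sum_first_n(n):
    """Intended to return 1 + 2 + ... + n."""
    total = 0
    for i in range(n):  # DEFECT: range(n) yields 0..n-1; should be range(1, n + 1)
        total += i
    return total

# FAILURE: the observable wrong result when the defect is executed.
assert sum_first_n(3) == 6, "expected 6, got %d" % sum_first_n(3)
```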
Testing Principles
• Testing shows the presence of bugs
• Exhaustive testing is impossible
• Early testing
• Defect clustering
• The pesticide paradox
• Testing is context dependent
• Absence of errors fallacy
PENTASTAGIU QA 2019 – WEEK #2
SDLC MODELS: 1. WATERFALL, 2. V-MODEL, 3. ITERATIVE

SDLC: WATERFALL
Key characteristics:
•Each phase must be completed fully before the next phase can begin;
•Testing is done only after the software is fully developed;
•All requirements are defined;
•Model phases do not overlap.
Pros:
• Simple to use;
• Easy to manage because of its rigidity;
• Works well for smaller projects where requirements are well understood.
Cons:
• High risks if major bugs are discovered at the end of development;
• Once an application is in the testing stage, it is very difficult to go back and change something that was not well thought out in the concept stage;
• No working software is produced until late in the life cycle;
• High amounts of risk and uncertainty.
SDLC: V-MODEL
Key characteristics:
•V-Shaped life cycle is a sequential path of execution of processes;
•Each phase must be completed before the next phase begins;
•In each phase, corresponding test activities are planned in parallel with development.
Pros:
• Defects are found at early stages;
• Good for small projects with clearly defined requirements.
Cons:
• Very rigid and least flexible;
• Software is developed during the implementation phase, so no early prototypes of the software are produced;
• If any changes happen midway, the test documents along with the requirement documents have to be updated.
SDLC: ITERATIVE-INCREMENTAL
Key characteristics:
•Cycles are divided up into smaller, more easily managed modules;
•Each module passes through the requirements, design, implementation and testing phases;
•Each subsequent release of the module adds functionality to the previous release;
•The process continues until the complete system is achieved.
Pros:
• Generates working software quickly and early during the SDLC;
• This model is more flexible – less costly to change scope and requirements;
• It is easier to test and debug during a smaller iteration;
• In this model, the customer can respond to each build (fast and frequent feedback);
• Lowers initial delivery cost;
• Easier to manage risk because risky pieces can be identified and handled during each iteration.
Cons:
• Needs good planning and design;
• Needs a clear and complete definition of the whole system before it can be broken down and built incrementally;
• Total cost is higher than waterfall;
• Lack of formal documentation.
AGILE METHODOLOGY
Agile principles:
• Collaboration
• Flexibility
• Adaptability
From the Agile Manifesto:
• Individuals and interactions over processes and tools;
• Working software over comprehensive documentation;
• Customer collaboration over contract negotiation;
• Responding to change over following a plan.
TEST LEVELS
UNIT TESTING
• UNIT TESTING is a level of software testing where individual units/components of the software are tested.
• The purpose is to validate that each unit of the software performs as designed. A unit is the smallest
testable part of any software. It usually has one or a few inputs and usually a single output.
• In procedural programming, a unit may be an individual program, function, procedure, etc. In object-oriented programming, the smallest unit is a method, which may belong to a base/super class, abstract class or derived/child class. (Some treat a module of an application as a unit. This is to be discouraged as there will probably be many individual units within that module.)
• Unit testing frameworks, drivers, stubs, and mock/fake objects are used to assist in unit testing.
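A minimal unit testing sketch, assuming Python's built-in unittest framework; the unit names (add, report_sum) and the mocked logger are illustrative assumptions, not from the source:

```python
# Unit test sketch with Python's unittest; a mock object replaces a
# collaborator so the unit is tested in isolation.
import unittest
from unittest import mock

def add(a, b):
    """The unit under test: a small, individually testable function."""
    return a + b

def report_sum(a, b, logger):
    """A unit with a collaborator (logger) that tests replace with a mock."""
    result = add(a, b)
    logger.info("sum=%d" % result)
    return result

class AddTests(unittest.TestCase):
    def test_add_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_report_sum_logs_the_result(self):
        fake_logger = mock.Mock()  # mock/fake object assisting the unit test
        self.assertEqual(report_sum(2, 3, fake_logger), 5)
        fake_logger.info.assert_called_once_with("sum=5")

if __name__ == "__main__":
    unittest.main()
```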
INTEGRATION TESTING
•INTEGRATION TESTING is a level of software testing where individual units are combined and tested
as a group.
•The purpose of this level of testing is to expose faults in the interaction between integrated units.
•Test drivers and test stubs are used to assist in Integration Testing.
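A minimal integration testing sketch in Python; Basket and PriceServiceStub are illustrative names, with the stub standing in for a unit that is not yet available:

```python
# Integration test sketch: two units combined and tested as a group,
# with a stub replacing a collaborator that is not yet available.

class PriceServiceStub:
    """Test stub: returns canned prices instead of calling a real service."""
    def get_price(self, item):
        return {"apple": 2, "pear": 3}.get(item, 0)

class Basket:
    def __init__(self, price_service):
        self.price_service = price_service
        self.items = []

    def add(self, item):
        self.items.append(item)

    def total(self):
        # The interaction under test: Basket calling the price service.
        return sum(self.price_service.get_price(i) for i in self.items)

# The test targets the interface between the two units, not their internals.
basket = Basket(PriceServiceStub())
basket.add("apple")
basket.add("pear")
assert basket.total() == 5
```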
SYSTEM TESTING
•SYSTEM TESTING is a level of software testing where the complete, integrated software is tested.
•The purpose of this test is to evaluate the system’s compliance with the specified requirements.
ACCEPTANCE TESTING
•ACCEPTANCE TESTING is a level of software testing where a system is tested for acceptability.
•The purpose of this test is to evaluate the system’s compliance with the business requirements and
assess whether it is acceptable for delivery.
TEST TYPES
•FUNCTIONAL TESTING
•NON-FUNCTIONAL TESTING
•STRUCTURAL TESTING
•TESTING AFTER CODE HAS BEEN CHANGED
Functional Testing
•Functional Testing is a type of software testing whereby the system is tested against the functional
requirements/specifications;
•Functions (or features) are tested by feeding them input and examining the output. Functional
testing ensures that the requirements are properly satisfied by the application;
•This type of testing is not concerned with how processing occurs, but rather, with the results of
processing;
•Functional testing is normally performed during the levels of System Testing and Acceptance Testing.
Non-functional Testing
•Non-functional testing is a type of testing to check non-functional aspects of a software application.
It is designed to check the readiness of a system considering parameters that are never addressed by
functional testing.
•Non-functional testing focuses on the software's performance, i.e. "how well it works".
•Examples: performance, load, stress, usability, reliability, security, compliance etc.
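A simple performance check as one hedged illustration; the operation and the 0.5-second threshold are assumptions, not from the source:

```python
# Non-functional check: asserts HOW WELL the operation works (its speed),
# not WHAT it computes. Threshold and workload are illustrative.
import time

def operation_under_test():
    return sorted(range(100_000))

start = time.perf_counter()
operation_under_test()
elapsed = time.perf_counter() - start

assert elapsed < 0.5, "non-functional requirement violated: %.3f s" % elapsed
```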
Structural Testing
•This type of testing is used to measure how much testing has been carried out. In functional testing, this could be the number of functional requirements tested against the total number of requirements;
•In structural testing, we change our measure to focus on the structural aspects of the system. This could be the code itself, or an architectural definition of the system. A common measure is how much of the code that has been written has been exercised by tests;
•Structural testing can be carried out at any test level.
PENTASTAGIU QA 2019 – WEEK #3
STATIC TESTING
• Static Testing is a technique by which we can check for defects in software without actually executing it.
Review
Reviews can be used to test anything that is written or typed; this can include documents such as
requirement specifications, system designs, code, test plans and test cases.
The types of defects most typically found by reviews are:
• Deviations from standards, whether internally defined and managed or defined externally by regulation, legislation or a trade organization.
• Requirements defects – for example, the requirements are ambiguous, or there are missing elements.
• Design defects – for example, the design does not match the requirements.
• Insufficient maintainability – for example, the code is too complex to maintain.
• Incorrect interface specifications – for example, the interface specification does not match the design or the receiving or sending interface.
Review - Roles
• Manager: decides on what is to be reviewed, ensures there is sufficient time allocated in the project
plan for all of the required review activities, and determines if the review objectives have been met.
• Moderator (review leader): the person who leads the review of the document or set of documents,
including planning the review, running the meeting, and follow-ups after the meeting.
• Author: the writer or person with chief responsibility for the development of the document(s) to be reviewed. The author will in most instances also take responsibility for fixing any agreed defects.
• Reviewer(s): individuals with a specific technical or business background (also called checkers or
inspectors) who, after the necessary preparation, identify and describe findings (e.g. defects) in the
product under review.
• Scribe (recorder): attends the review meeting and documents all of the issues and defects, problems
and open points that were identified during the meeting.
Static Analysis
• Static analysis is carried out by tools that examine the code without executing it; typical defects found include the following (illustrated in the sketch after this list):
• Referencing a variable with an undefined value, e.g. using a variable as part of a calculation before the variable has been given a value.
• Variables that are never used. This is not strictly an error, but if a programmer declares a variable in a program and does not use it, there is a chance that some intended part of the program has inadvertently been omitted.
• Unreachable (dead) code. This means lines of code that cannot be executed because the logic of the
program does not provide any path in which that code is included.
• Programming standards violations, e.g. if the standard is to add comments only at the end of the
piece of code, but there are notes throughout the code, this would be a violation of standards.
• Security vulnerabilities, e.g. password structures that are not secure.
• Syntax violations of code and software models, e.g. incorrect use of the programming or modelling
language.
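A short Python sketch containing several of the defects listed above; a static analysis tool (a linter, for example) can flag all of them without running the program:

```python
# Defects a static analysis tool can report WITHOUT executing the code.

def compute(flag):
    unused = 42          # variable that is never used
    if flag:
        return 1
    else:
        return 0
    print("done")        # unreachable (dead) code: both branches return first

def average(values):
    if values:
        total = sum(values)
    return total / len(values)  # 'total' may be referenced before assignment
```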
Static Analysis tool – SonarQube
BLACK-BOX
• The main thing about specification-based techniques is that they derive test cases directly from the
specification or from some other kind of model of what the system should do. The source of information on
which to base testing is known as the ‘test basis’.
•Equivalence partitioning (see the sketch after this list)
•Boundary value analysis (see the sketch after this list)
•Decision table testing
•State transition testing
•Use case testing
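A minimal sketch of the first two techniques, assuming an illustrative rule (ages 18–65 are valid) that is not from the source:

```python
# Equivalence partitioning and boundary value analysis for an assumed rule:
# applicants aged 18..65 inclusive are accepted.

def is_eligible(age):
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition.
assert is_eligible(10) is False  # partition: below the valid range
assert is_eligible(40) is True   # partition: inside the valid range
assert is_eligible(80) is False  # partition: above the valid range

# Boundary value analysis: values on and next to each boundary.
assert is_eligible(17) is False
assert is_eligible(18) is True
assert is_eligible(65) is True
assert is_eligible(66) is False
```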
BLACK-BOX – Decision Table Testing
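As a sketch of the technique (the discount rule below is an assumed example, not from the source), each rule/column of a decision table becomes one test:

```python
# Decision table testing sketch. Conditions: registered user? order > 100?
# Action: which discount applies? Each rule in the table becomes a test.

def discount(registered, order_total):
    if registered and order_total > 100:
        return 0.10
    if registered:
        return 0.05
    return 0.0

assert discount(True, 150) == 0.10   # Rule 1: registered, large order
assert discount(True, 50) == 0.05    # Rule 2: registered, small order
assert discount(False, 150) == 0.0   # Rule 3: unregistered, large order
assert discount(False, 50) == 0.0    # Rule 4: unregistered, small order
```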
BLACK-BOX – State Transition Testing
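As a sketch of the technique (the two-state door model is an assumed example), tests exercise both valid and invalid transitions between states:

```python
# State transition testing sketch for an assumed model:
# a door that is either OPEN or CLOSED.

VALID_TRANSITIONS = {
    ("CLOSED", "open"): "OPEN",
    ("OPEN", "close"): "CLOSED",
}

def next_state(state, event):
    if (state, event) not in VALID_TRANSITIONS:
        raise ValueError("invalid transition: '%s' while %s" % (event, state))
    return VALID_TRANSITIONS[(state, event)]

# Valid transitions:
assert next_state("CLOSED", "open") == "OPEN"
assert next_state("OPEN", "close") == "CLOSED"

# Invalid transition (opening an already open door) must be rejected:
try:
    next_state("OPEN", "open")
    assert False, "expected a ValueError"
except ValueError:
    pass
```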
WHITE-BOX – Pseudocode, flowcharts
•Code is visible; the starting point and focus of testing is the code
WHITE-BOX
•Statement testing and statement coverage
•Decision testing and decision coverage
WHITE-BOX – Decision coverage
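A minimal sketch of the difference between the two coverage measures; the bonus rule is an assumed example, not from the source:

```python
# Statement vs. decision coverage on an assumed example.

def grant_bonus(sales, years):
    bonus = 0
    if sales > 100 and years > 2:
        bonus = 50
    return bonus

# grant_bonus(150, 5) alone executes every statement: 100% statement coverage.
# But the decision "sales > 100 and years > 2" has only evaluated to True.
# Decision coverage also requires a test where the decision is False:
assert grant_bonus(150, 5) == 50  # decision outcome: True
assert grant_bonus(50, 5) == 0    # decision outcome: False
```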
EXPERIENCE BASED TECHNIQUES
Experience-based techniques are those that you fall back on when there is no adequate specification
from which to derive specification-based test cases or no time to run the full structured set of tests. They
use the users’ and the testers’ experience to determine the most important areas of a system and to
exercise these areas in ways that are both consistent with expected use (and abuse) and likely to be the
sites of errors – this is where the experience comes in.
•Error guessing is a very simple technique that takes advantage of a tester's skill, intuition and experience with similar applications to identify special tests that may not be easy to capture with the more formal techniques.
•Error guessing is commonly used in risk analysis to "guess" where errors are likely to occur and to assign a higher risk to the error-prone areas. Error guessing as a testing technique is employed by the tester to determine the potential errors that might have been introduced during software development and to devise methods to detect those errors as they manifest into defects and failures.
• Exploratory testing is a technique that combines the experience of testers with a structured
approach to testing where specifications are either missing or inadequate and where there is severe time
pressure. It exploits concurrent test design, test execution, test logging and learning within time-boxes
and is structured around a test charter containing test objectives. In this way exploratory testing
maximizes the amount of testing that can be achieved within a limited time frame, using test objectives to
maintain focus on the most important areas.
• Remember that exploratory testing is not ad hoc testing. Exploratory testing occurs when the tester
plans, designs, and executes tests concurrently and learns about the product while executing the tests. As
testing proceeds, the tester adjusts what will be tested next based on what has been discovered.
Exploratory tests are planned and usually guided by a test charter that provides a general description of
the goal of the test. The process is interactive and creative, ensuring that the tester's knowledge is directly
and immediately applied to the testing effort. Documentation for exploratory testing is usually
lightweight, if it exists at all.
PENTASTAGIU QA 2019 – WEEK #4
Project risks:
• Supplier issues (client side): third-party delivery, contractual issues;
• Organizational factors: skills, personnel issues, political/cultural issues, bugs not solved, no improvements;
• Technical: spaghetti code, requirements not defined or not understood, environments/certificates/licenses not ready.
Product risks:
• Failure risks: integration, failure-prone modules;
• Possibility to cause harm;
• Poor software characteristics: missed scenarios;
• Poor data integrity and quality: database migration, API;
• Software that does not meet expectations: we might deliver something other than what the client expects.
Risk mitigation:
• Contractual agreements
• Agree upon methodology, processes and procedures
• Use formal reviews
• Document all the information you can obtain
• Ask for validation at each step
• Use risk-based techniques
• Learn from the past
Tester tasks:
• Reviewing and contributing to the development of test plan
• Analyzing, reviewing and assessing user requirements, specifications and models for testability
• Creating test specifications from the test basis
• Setting up the test environment
• Preparing and acquiring/copying/creating test data
• Implementing tests at all test levels
• Using test administration or management and test monitoring tools as required
• Automating tests
• Running the tests and measuring the performance of components and systems
• Reviewing tests developed by other testers
Test approaches:
A test approach is the implementation of a test strategy on a particular project. The test approach defines how testing will be implemented.
All of this will depend on the context within which the test team is working, and may consider risks, hazards and safety, available resources and skills, the technology and the nature of the system. A test approach can be:
• Developed early in the life cycle, which is known as preventative
• Left until just before the start of test execution, which is known as reactive
• Analytical approaches such as risk-based (testing is directed to area of greatest risk)
• Model-based approaches (using statistical information about failure rates or usage)
• Methodical approaches – failure based (including error guessing and fault attacks, checklist based and
quality characteristic based)
• Standard-compliant approaches (specified by industry-specific standards)
• Process-compliant approaches (Waterfall, Agile, etc)
• Dynamic and heuristic approaches – exploratory
• Consultative approaches (advice and guidance of technology and/or business domain experts outside or
within the test team)
A test plan typically covers (remembered by the mnemonic 'SPACE DIRT'):
S – scope (including test items, features to be tested and features not to be tested)
P – people (including responsibilities, staffing, training and approvals)
A – approach
C – criteria (including item pass/fail criteria and suspension and resumption requirements)
E – environment needs
D – deliverables (test)
I – identifier and introduction (test plan)
R – risks and contingencies
T – testing tasks and schedule
Entry criteria:
• Test environment available and ready for use (it functions).
• Test tools installed in the environment are ready for use.
• Testable code is available.
• All test data is available and correct.
• All test design activity has been completed.
Exit criteria:
• All tests planned have been run.
• A certain level of requirements coverage has been achieved.
• No high-priority or severe defects are left outstanding.
• All high-risk areas have been fully tested, with only minor residual risks left outstanding.
• Cost – when the budget has been spent.
• The schedule has been achieved, e.g. the release date has been reached and the product has to go live.
Test estimation:
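One common estimation technique (an illustrative assumption, not necessarily the method intended here) is three-point estimation, which averages optimistic, most likely and pessimistic figures:

```python
# Three-point (PERT) estimation sketch; all figures are illustrative.

def three_point_estimate(optimistic, most_likely, pessimistic):
    """Weighted average: E = (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Estimated effort in hours for a test activity:
print(round(three_point_estimate(8, 12, 20), 2))  # -> 12.67
```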
Test reporting
Test reporting is the process whereby test metrics are reported in a summarized format to update the reader regarding the testing tasks undertaken:
• Defects found
• Open/closed defects
• Risks
• Activities
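A minimal sketch of summarizing such metrics for a report; the data and field names are illustrative assumptions:

```python
# Summarizing defect metrics for a test report; the data is illustrative.

defects = [
    {"id": "D-1", "status": "open", "severity": "high"},
    {"id": "D-2", "status": "closed", "severity": "low"},
    {"id": "D-3", "status": "open", "severity": "low"},
]

total = len(defects)
open_count = sum(1 for d in defects if d["status"] == "open")

print("Defects found: %d (open: %d, closed: %d)"
      % (total, open_count, total - open_count))
```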
Test control:
• Making decisions based on information from test monitoring.
• Reprioritize tests when an identified project risk occurs (e.g. software delivered late).
• Change the test schedule due to availability of a test environment.
• Set an entry criterion requiring fixes to be retested (confirmation tested) by a developer before accepting
them into a build (this is particularly useful when defect fixes continually fail again when retested).
• Review of product risks and perhaps changing the risk ratings to meet the target.
• Adjusting the scope of the testing (perhaps the amount of tests to be run) to manage the testing of late
change requests.
Configuration management
• Tools
• Code versioning
• Documentation versioning