
Software Quality Testing

Software Development Life Cycle (SDLC):


SDLC contains five phases:
1. Analysis
2. Design
3. Coding
4. Testing
5. Maintenance

Analysis:
Inputs: User Requirements Document, Business Rules
This phase involves planning and requirement gathering

1. Planning
 Understanding of client requirements and specifications
 Performing feasibility analysis
 Develop solution strategy
 Determining acceptance criteria
 Planning the development process
The output of this process is the project plan

2. Requirement Gathering
 Analyze allocated requirements
Segregate requirements into technical and non-technical categories
 Prepare a traceability matrix for each gathered requirement
 Review all allocated requirements and traceability matrix
The outputs of this process are the SRS (Software Requirements Specification) and the RTM (Requirement Traceability Matrix)

Design:
The design is implemented based on the SRS and the Project Plan
Here the design is divided into two groups: HLD & LLD

High Level Design (HLD):


Determining the overall architecture of the system from the root module to the leaf modules
It gives the overall system design in terms of functional architecture and database design
Class diagrams, object diagrams and UML diagrams are used in this phase
It gives an outline of the project

Low Level Design (LLD):


Here the high level design is divided into modules and programs
Logic design is done for every program and documented as program specifications

Record architecture designs and detailed design in DDD – Detailed Design Document
Prepare system test plan based on SRS and DDD
Prepare unit test cases, integration test cases and system test cases

The output of this phase is detailed design document and system test plan

Coding:
 Component diagrams are used in this phase



 The detailed design is used to produce the required software in the specified
programming language
 Follows the naming and coding standards to develop the source code
 Conduct reviews on source code
 Execute all unit test cases
 Review and close all reported bugs
 Place the software source code under software configuration management

The output of this phase is executable code and Unit test report

Testing:
Testing is done to validate the software against client specifications
 Setup test environment
 All test cases are executed
 Review and close all reported bugs

The output of this phase is system test report

Maintenance:
 Error corrections
 Code modifications

Software Models

 Waterfall Model
 V Model
 Prototype Model
 Iterative & Incremental Model
 Spiral Model
 Agile Scrum

Waterfall Model:
It is the classical approach to the software development life cycle
The approach is linear and sequential
Each phase begins only after the previous phase is done
This model is used for known requirements or repetitive projects

Advantage:
 Easy to understand and implement
Phases are processed and completed one at a time



 No phase is complete until documents are done and checked by SQA group
 Identifies deliverables and milestones
 Works well for smaller projects where requirements are very well understood

Disadvantage:
It is a document-driven model
Clients cannot easily evaluate the system from documents alone
It assumes feasibility before implementation
Working software is produced only after the last phase
Not all requirements are received at once; customer requirements keep getting added even after the end of the "Requirement Gathering and Analysis" phase, which affects the system development process and its success negatively

V Model:
The V model is an extension of the waterfall model
This model is also called the verification and validation model
It means verification and validation are done side by side
 The testing procedures are developed early in the life cycle
 Testing starts from requirements phase
 Each phase must be completed before the next phase begins



Advantages:
 Each phase has specific deliverables
 Testing will be done in all phases
Errors found in a phase are corrected in that phase itself
 It follows a strict process to develop a quality product

Disadvantage:
This model is very rigid
If any changes happen in the middle, not only the requirements documents but also the testing documents must be updated
It is costly and requires more human resources
It needs an established process to implement
It can be implemented only by big companies

Prototype Model:
This approach is used to develop a software product quickly
Based on the client requirements a prototype is designed and sent for client feedback; only after approval does the actual engineering start

Advantages:
 Reduced time and cost
 Savings of development resources
 Client involvement will be more

Disadvantage:
 Insufficient analysis
 Excessive development time of the prototype

Iterative and Incremental Model:


It starts with initial planning and ends with deployment, with cyclic interactions in between
In the incremental model, we construct a partial implementation of the total system and then gradually add functionality.



 The incremental model prioritizes requirements of the system and then
implements them in groups

Advantages:
 Generates working software quickly and early during the software life cycle.
 More flexible – less costly to change scope and requirements
 Easier to test and debug during a smaller iteration.
 Easier to manage risk because risky pieces are identified and handled during
its iteration.
 Each iteration is an easily managed milestone

Disadvantage:
 Design issues may arise because not all requirements are gathered up front
for the entire lifecycle

Spiral Model:
The spiral model is similar to the incremental model, with more emphasis placed on risk analysis.
The spiral model has four phases: Planning, Risk Analysis, Engineering and Evaluation
This is the first model to explain why iteration matters
Iterations were typically 6 months to 2 years long
 Requirements are gathered during the planning phase.
 In the risk analysis phase, a process is undertaken to identify risk and alternate
solutions.
 A prototype is produced at the end of the risk analysis phase.
 Software is produced in the engineering phase, along with testing at the end of
the phase.
 The evaluation phase allows the customer to evaluate the output of the project to
date before the project continues to the next spiral



Advantages:
 High amount of risk analysis
 Good for large and mission-critical projects

Disadvantage:
 Can be a costly model to use
 Risk analysis requires highly specific expertise.
 Project’s success is highly dependent on the risk analysis phase.
 Doesn’t work well for smaller projects

Agile - Scrum:
Agile Testing can be defined as a testing practice that follows the agile manifesto, treating development as the customer of testing
Agile testing is used whenever customer requirements change dynamically
In the agile process the customer is present at the desk at every moment of developing or testing the application, so it is easier to do the testing. Testing starts with the exploration of the requirements and what the customer really wants.

Unique about Scrum:


Of all the agile methodologies, Scrum is unique because it introduced the idea of "empirical process control." That is, Scrum uses the real-world progress of a project to plan and schedule releases.
In Scrum, projects are divided into sprints, which are typically one week, two weeks, or three weeks in duration.



 At the end of each sprint, Product Owner and team members meet to
assess the progress of a project and plan its next steps.
 This allows a project’s direction to be adjusted or reoriented based on
completed work, not speculation or predictions.

The product is described as a list of features called the product backlog


 The features are described in terms of user stories
 The scrum team estimates the work associated with each story
 Features in the backlog are ranked in order of importance
Each sprint starts with a planning meeting and ends with a retrospective
At the planning meeting we commit to an amount of work
Every day we have a daily scrum meeting

Participants in scrum are


 Product Owner
 Scrum Master
 Scrum Team

Testing Process Workflow:



People              Things             Behavior
Product Owner       Product Backlog    Planning meeting
Scrum Master        Stories            Sprints
Scrum Team          Estimates          Retrospective
                                       Daily Meetings

Scrum Glossary:
Product Backlog: The to-do list that contains the project goals and priorities; it is managed by the product owner
Product Owner: The person responsible for the product backlog, who makes sure that the project is working on the right things from a business perspective
Release Backlog: The same as the product backlog but restricted to a release of the product
Scrum Master: The team leader of the scrum team. The Scrum Master does not manage the team; instead, he or she works to remove any impediments that are obstructing the team from achieving its sprint goals
Sprint: An iteration
Sprint Backlog: The to-do list for a specific sprint
Sprint Review: An informal meeting (about 4 hours) at the end of a sprint, in which the team presents what has been done in that sprint
Sprint Retrospective: A meeting (about 3 hours) held after each sprint. The Scrum Master and the scrum team review what went well and what should be improved in the next sprint
Timebox: A fixed period during which something is to be carried out. A sprint is a result of timebox thinking
Burn-down chart: A diagram that tracks how much work remains to implement a segment of the software being developed during a sprint

Why Iterative?
 Prototype leads to product
 Rapid Feedback
 Reduced Risk



Software Testing Life Cycle (STLC):
1. Requirement Analysis
2. Test Planning
3. Test Case Development
4. Test Environment Setup
5. Test Execution
6. Test Cycle Closure

Requirement Analysis
During this phase, test team studies the requirements from a testing point of view to identify
the testable requirements. The QA team may interact with various stakeholders to
understand the requirements in detail
 Identify types of tests to be performed
 Gather details about testing priorities
 Prepare Requirement Traceability Matrix (RTM)
 Identify test environment details where testing is supposed to be carried out.
 Automation feasibility analysis (if required).

Deliverables
 RTM
 Automation feasibility report (If applicable)

Test Planning
This phase is also called Test Strategy phase. Typically, in this stage, a Senior QA manager
will determine effort and cost estimates for the project and would prepare and finalize the
Test Plan.
 Preparation of test plan/strategy document for various types of testing
 Test tool selection
 Test effort estimation
 Resource planning and determining roles and responsibilities
 Training requirement

Deliverables
 Test plan /strategy document
 Effort estimation document

Test Case Development


This phase involves the creation, verification and rework of test cases & test scripts. Test data is identified/created, reviewed and then reworked as well.
Activities
 Create test cases, automation scripts (if applicable)
 Review and baseline test cases and scripts
 Create test data (If Test Environment is available)

Deliverables
 Test cases/scripts
 Test data

Test Environment Setup


Test environment decides the software and hardware conditions under which a work product
is tested. Test environment set-up is one of the critical aspects of testing process
 Understand the required architecture, environment set-up and prepare hardware and
software requirement list for the Test Environment.
 Setup test Environment and test data
 Perform smoke test on the build
Deliverables
 Environment ready with test data set up
 Smoke Test Results

Test Execution
During this phase test team will carry out the testing based on the test plans and the test
cases prepared. Bugs will be reported back to the development team for correction and
retesting will be performed.
 Execute tests as per plan
 Document test results, and log defects for failed cases
 Map defects to test cases in RTM
 Retest the defect fixes
 Track the defects to closure

Deliverables
 Completed RTM with execution status
 Test cases updated with results
 Defect reports

Test Cycle Closure


Testing team will meet, discuss and analyze testing artifacts to identify strategies that have
to be implemented in future, taking lessons from the current test cycle. The idea is to
remove the process bottlenecks for future test cycles and share best practices for any
similar projects in future.
 Evaluate cycle completion criteria based on Time, Test coverage, Cost, Software,
Critical Business Objectives and Quality
 Prepare test metrics based on the above parameters.
 Document the learning out of the project
 Prepare Test closure report
 Qualitative and quantitative reporting of quality of the work product to the customer
 Test result analysis to find out the defect distribution by type and severity.

Deliverables
 Test Closure report
 Test metrics

Software Testing
The goal of a software tester is to find bugs and make sure they get fixed

Testing: It’s a process of executing a program with the intent of finding bugs

Software Testing: It’s a process used to identify the correctness, completeness and the
quality of the software

Software Quality: Conformance to requirements and absence of bugs

Note: If we want to improve our software, we should not just test more; we should develop better

Goals of Testing:
Find the cases where the program does not do what it is supposed to do
Find the cases where the program does what it is not supposed to do
Ensure that the system performs all the functions listed in the specifications



Principles of Testing:

 Early testing
 Testing shows presence of defects
 Exhaustive testing is impossible
 Testing is context dependent
 Defect clustering
 Pesticide paradox
 Absence of errors fallacy

Early testing
Testing activities should start as early as possible in the software or system development life
cycle and should be focused on defined objectives

Testing shows presence of defects


Testing can show that defects are present, but cannot prove that there are no defects.
Testing reduces the probability of undiscovered defects remaining in the software but, even
if no defects are found, it is not a proof of correctness.

Exhaustive testing is impossible


Testing everything (all combinations of inputs and preconditions) is not feasible except for
trivial cases. Instead of exhaustive testing, we use risks and priorities to focus testing
efforts.

Testing is context dependent


Testing is done differently in different contexts. For example, safety-critical software is
tested differently from an e-commerce site.

Defect clustering
A small number of modules contain most of the defects discovered during pre-release
testing or show the most operational failures

Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases
will no longer find any new bugs. To overcome this 'pesticide paradox', the test cases need
to be regularly reviewed and revised, and new and different tests need to be written to
exercise different parts of the software or system to potentially find more defects.

Absence of errors fallacy


Finding and fixing defects does not help if the system built is unusable and does not fulfill
the users' needs and expectations.

Testing & Quality:


Testing: Testing is an activity to achieve quality
Quality: Quality is usually a journey towards excellence

Black box testing:

Tests are based on requirements and functionality
Verify how the application is going to work
Check for functionality

White box testing:

Tests are based on the internal logic of the application code and coverage of code statements
Verify how the application is developed



Testing Techniques:
1. Static Testing
Examining and reviewing the code without execution
 Reviews
 Walkthroughs
 Inspections

2. Dynamic Testing
Testing by executing and validating the application

 Specification Based / Black Box Techniques


o Equivalence class partition
o Boundary value analysis
o Decision Table testing
o State Transition testing
o Use Case testing

 Structured Based / White Box Techniques


o Basis Path Testing
o Control Structure Testing
 Statement Testing
 Condition Testing
 Looping Testing
 Data Flow Testing

 Experienced Based Technique


o Error Guessing
o Exploratory Testing

Black Box Testing Techniques

Equivalence class partition:


 It divides the input domain of a program into classes of data from which test cases
can be derived
1. If an input condition specifies a range, one valid and two invalid equivalence classes
are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence
classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid
equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.
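As an illustration, the rules above can be applied directly in code. Below is a minimal Python sketch of equivalence class partitioning for a hypothetical age field that accepts 18 to 60; the function name and values are assumptions for illustration, not from a real system.

def is_valid_age(age):
    # Hypothetical rule: accept ages in the inclusive range 18..60.
    return 18 <= age <= 60

# One representative value per equivalence class is enough.
test_values = {
    "valid class (18..60)": (35, True),
    "invalid class (< 18)": (10, False),
    "invalid class (> 60)": (75, False),
}

for name, (value, expected) in test_values.items():
    assert is_valid_age(value) == expected, name
    print(name, "-> passed")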

Boundary Value Analysis:


Boundary value analysis is a technique for test data selection. Here we check values at the lower boundary and the higher boundary using the formula n, n-1, n+1
If a text field accepts values in the range 18 to 60, the test data w.r.t. BVA is:

Lower Boundary (n = 18)        Higher Boundary (n = 60)
n   = 18 → Valid               n   = 60 → Valid
n-1 = 17 → Invalid             n-1 = 59 → Valid
n+1 = 19 → Valid               n+1 = 61 → Invalid
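As a sketch, the same boundary values can be generated automatically in Python; the 18..60 range and the validation function are the assumptions from the example above.

def is_valid_age(age):
    return 18 <= age <= 60

def boundary_values(n):
    # The classic BVA triple around a boundary n.
    return [n - 1, n, n + 1]

for boundary in (18, 60):
    for value in boundary_values(boundary):
        status = "Valid" if is_valid_age(value) else "Invalid"
        print("age =", value, "->", status)
# Prints: 17 Invalid, 18 Valid, 19 Valid, 59 Valid, 60 Valid, 61 Invalid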

Decision Table Testing:



Decision table testing is a method used to build a complete set of test cases without using the internal structure of the program.
Decision table testing is a good way to deal with combinations of inputs which produce different results
When there are a large number of test scenarios and a small amount of time, we go for decision table testing
In order to create test cases we use a table which contains the input and output values of a program.

Inputs                     Rule 1   Rule 2   Rule 3   Rule 4
Fly From                   F        F        T        T
Fly To                     F        T        F        T
Outcome (Flight Button)    F        F        F        T

Here we have three false outcomes and one true outcome
From the three false rules we select the richest false condition
So we test one true condition and one false condition
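As a sketch, each rule in the table becomes one test case. The Python below assumes a hypothetical flights_button_enabled function whose intended behaviour matches the table: the button is enabled only when both fields are selected.

def flights_button_enabled(fly_from_selected, fly_to_selected):
    # Assumed behaviour: enabled only when both inputs are selected.
    return fly_from_selected and fly_to_selected

decision_table = [
    # (Fly From, Fly To, expected outcome)
    (False, False, False),  # Rule 1
    (False, True,  False),  # Rule 2
    (True,  False, False),  # Rule 3
    (True,  True,  True),   # Rule 4
]

for rule, (fly_from, fly_to, expected) in enumerate(decision_table, start=1):
    assert flights_button_enabled(fly_from, fly_to) == expected, "Rule %d" % rule
print("All 4 rules pass")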

State Transition Testing:

This kind of testing is used when testing critical systems
State transition testing focuses on the testing of transitions from one state (e.g., open, closed) of an object (e.g., an account) to another state.
A state transition table consists of four columns: Start State, Input, Output and Finish State

Start State     Off           On
Input           Switch On     Switch Off
Output          Light On      Light Off
Finish State    On            Off
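A minimal Python sketch of the table above: the state transition table is represented as a mapping from (start state, input) to (output, finish state), and each column of the table becomes one test case. All names are illustrative.

TRANSITIONS = {
    ("Off", "Switch On"):  ("Light On",  "On"),
    ("On",  "Switch Off"): ("Light Off", "Off"),
}

def apply_event(state, event):
    # Returns (output, finish_state) for a start state and an input.
    return TRANSITIONS[(state, event)]

assert apply_event("Off", "Switch On") == ("Light On", "On")
assert apply_event("On", "Switch Off") == ("Light Off", "Off")
print("All transitions verified")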

Use case testing:


 Use case testing is a technique that helps us identify test cases that exercise the
whole system on a transaction by transaction basis from start to finish.
 A use case is a description of a particular use of the system by an actor (a user of the
system). Each use case describes the interactions the actor has with the system in
order to achieve a specific task
 Use cases are defined in terms of the actor, not the system

White Box Testing Techniques:


 Basis Path Testing
 Control Structure Testing

Basis Path Testing:


 A basis path is a unique path through the software where no iterations are allowed
 Testing is designed to execute all selected paths through a computer program
Steps:
 Draw a control flow graph
 Calculate Cyclomatic complexity
 Choose a “basis set” of paths
 Generate test cases to exercise each path

Control Flow Graph:



 Lines (or arrows) called edges represent flow of control
 Circles called nodes represent one or more actions
 Areas bounded by edges and nodes called regions
 A predicate node is a node containing a condition

Cyclomatic Complexity, V(G) = Edges - Nodes + 2P

Where P = number of unconnected parts of the graph

Cyclomatic Complexity:
 Cyclomatic complexity is a software metric used to measure the complexity of
a program.
 It directly measures the number of linearly independent paths through a
program's source code
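As a sketch, V(G) can be computed directly from an edge list. The graph below is a hypothetical if/else construct with 4 nodes and 4 edges, which gives V(G) = 2, i.e. two linearly independent paths (the then-branch and the else-branch).

def cyclomatic_complexity(edges, nodes, parts=1):
    # V(G) = Edges - Nodes + 2P, where P = number of unconnected parts.
    return len(edges) - nodes + 2 * parts

edges = [(1, 2), (1, 3), (2, 4), (3, 4)]  # if/else control flow graph
print(cyclomatic_complexity(edges, nodes=4))  # prints 2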

Control Structure Testing:


Control Structure Testing is a white-box testing technique that tests three types of program control, namely condition testing, looping testing and data flow testing

Statement Testing:
Ensuring that all statements have been executed at least once

Condition Testing
It is a test case design method that tests the logical conditions contained in a procedural specification. It focuses on testing each condition in the program by providing possible combinations of values
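A minimal Python sketch of condition testing against a hypothetical compound condition (a made-up discount rule); each atomic condition is exercised as both true and false.

def gets_discount(is_member, total):
    # Compound condition with two atomic conditions.
    return is_member and total > 100

cases = [
    (True,  150.0, True),   # both conditions true
    (True,   50.0, False),  # second condition false
    (False, 150.0, False),  # first condition false
]
for is_member, total, expected in cases:
    assert gets_discount(is_member, total) == expected
print("Condition combinations covered")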



Looping Testing
It is a test case design method that focuses exclusively on the validity of iterative
constructs or repetition.

Data Flow Testing


Testing in which test cases are designed based on variable usage within the code
Variable defined but not used
Variable used but not defined
Variable defined twice before being used
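A minimal Python sketch showing the three anomalies above in one hypothetical function; data flow testing would flag each commented line.

def compute_total(prices):
    tax_rate = 0.2        # defined ...
    tax_rate = 0.25       # ... and defined again before any use (anomaly 3)
    unused = len(prices)  # defined but never used (anomaly 1)
    total = sum(prices)
    # 'discount' would be used but never defined (anomaly 2)
    # if the next line were uncommented:
    # total -= discount
    return total * (1 + tax_rate)

print(compute_total([10.0, 20.0]))  # prints 37.5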

Experienced Based Technique

Error Guessing:
Test cases can be developed for invalid data to guess errors

Exploratory Testing:
Testing that is not based on formal test plan or test case. Tester will learn the software as
they test it using their experience

Levels of testing
1. Unit / Component Testing
2. Integration Testing
3. System Testing
4. Acceptance Testing

1. Unit Testing:
Unit testing is also called component testing
It is a method of testing the correctness of a particular module of source code
The goal of unit testing is to isolate each part of the program and show that the individual parts are correct
Here white box testing techniques are applied
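As a sketch, here is what an isolated unit test looks like with Python's built-in unittest module; the add function is a hypothetical stand-in for the real module under test.

import unittest

def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()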

2. Integration Testing:
 In which software units of an application are combined and tested for evaluating
the interaction between them

Integration Approach:
 Top – Down
 Bottom –Up
 Big Bang / Sandwich

Top – Down:
 Integrating individual components from top level
 Test stubs are needed

Bottom – Up:
 Integrating individual components from bottom level
 Test drivers are needed

Big Bang / Sandwich:


 Software components of applications are combined all at once into an
overall system

3. System Testing:
In which the software is integrated into the overall system



 Test to see if all functional and non-functional requirements have been met
 Testing against system requirements

4. Acceptance Testing / Build Verification Testing:


 It is a final testing based on specifications of end user

Alpha Testing:
 Testing will be done with test data in controlled environment
 Testing will be done at our location

Beta Testing:
 Testing will be done with live data in live environment
 Testing will be done at client location

Reviews:
A review is a process or meeting during which a work product is examined
The main goal is to identify defects at an early stage
There are three types of reviews
 Peer Review
 Walkthrough
 Inspection

Peer Review:
It is generally a one-to-one meeting
Generally we exchange our test cases with our teammates and perform a review to check whether anything was missed

Walkthrough:
 It is an informal meeting for evaluation or informational purpose
 We can discuss/raise the issue at peer level
 A team of 8 to 10 people
 The issues raised are captured and published in a report distributed to the
participants

Inspection:
Inspections are formal reviews
 Inspections are strict and close examinations conducted on specifications,
requirements, design, code and testing

Verification & Validation:


Verification:
It is static testing
Verification is done to ensure that the product meets specifications
Verification ensures that the system complies with the organization's standards and processes, relying on reviews
It answers "Are we building the product right?"

Validation:
It is dynamic testing
Validation is done to ensure that the product meets client requirements
Validation physically ensures that the system operates according to plan by executing a series of tests
It answers "Are we building the right product?"



QA & QC:
QA (Quality Assurance):
QA is process oriented, oriented to prevention
QA activities ensure that the process is defined and appropriate
Monitoring and improving the process from the beginning
QA asks "Are we doing the right things in the right way?"

QC (Quality Control):
QC is product oriented, oriented to detection
QC activities focus on finding defects in specific deliverables
Inspecting and ensuring the quality of the work product
QC checks that "what we have done is what we expected"

Sanity & Smoke Testing:


Sanity Testing:
Setting up the environmental conditions before testing the project
 PC / HW / SW / OS / Tools …

Smoke Testing:
When a build is received, a smoke test is run to determine whether the build is stable and can be considered for further testing
Testing of the major functionalities
The TL (test lead) will perform smoke testing
Smoke testing test cases are positive test cases only

End to End Testing & System Testing:


End to End Testing:
It is a methodology to validate the flow of the application from the start point to the end point
 For example, while testing a web page, the start point will be logging in to the
page and the end point will be logging out of the application.
 It involves the testing of complete application environment such as interacting
with database, using network communications, interacting with other hardware,
applications or systems

System Testing:
It is a methodology to validate the system as a whole
Testing will be limited to functional and non-functional requirements

Retesting and Regression Testing


Retesting means executing the same test case after fixing a bug, to ensure that the bug was fixed

Regression:
Testing of all test cases/functionality to ensure that the code fix hasn't introduced any problem to existing code/functionality which was working fine earlier
Regression testing is carried out both manually and with automation; automated tools are mainly used for regression testing
Regression testing occurs at three times:
1. after bug fixes



2. When new features are added
3. When a new module is integrated
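A minimal sketch of automating such a regression run in Python, assuming the saved suite lives in a tests/ folder: after every fix, the whole suite is re-executed.

import unittest

# Re-discover and re-run the entire saved suite after a bug fix,
# a new feature, or a newly integrated module.
suite = unittest.defaultTestLoader.discover("tests")
result = unittest.TextTestRunner(verbosity=2).run(suite)
print("Regression passed" if result.wasSuccessful() else "Regression failed")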

Test Case:
It is a document which describes input, expected and actual results to determine if a feature
of an application is working correctly or not

Test Scenario:
A set of test cases ensures that the business process flow will be tested from end to end

Test Suite:
A collection of test cases that are related to each other

Test Environment:
An environment that is created for testing purpose

Test Bed:
It's an environment where testing is supposed to be done

Ad hoc Testing:
Testing that is not based on a formal test plan or test cases; however, the tester should have significant knowledge of the software before using it

Setup Testing:
Testing of an installation and un-installation of the software

Installations are of two types


1. Normal Installation
2. Silent / unattended Installation

Normal Installation:
It will have an install shield (installation wizard)
It will ask for Next → Next in every window while installing
It will ask to restart the machine

Silent / Unattended Installation:

It won't have an install shield
No (Next → Next) windows will be displayed
It won't ask to restart the machine

After installing a product, the effects can be seen in


 Start programs menu
 Add/Remove programs
 Registry (regedit)
 Services (services.msc)
 Primary Drive
 Desktop Icon

Registry:
Start → Run → regedit → HKEY_LOCAL_MACHINE → Software → (Installed Product)

Services:
Start → Run → services.msc → (Installed Product) → the status should be "Started" and the startup type should be "Automatic"



Traceability matrix:
 Traceability Matrix is a document that maps the test cases to the customer
requirements
 It is in the form of table which shows the relationship between test requirements and
test cases
 Usually each requirement is listed in a row of the matrix and the columns of the
matrix are used to identify how and where each requirement has been addressed.
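A minimal Python sketch of such a matrix as a requirement-to-test-case mapping, with a simple coverage check in each direction; all IDs are hypothetical.

rtm = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],  # not yet covered
}

# Forward direction: requirements that no test case covers.
uncovered = [req for req, cases in rtm.items() if not cases]
print("Requirements without test cases:", uncovered)  # ['REQ-003']

# Backward direction: which requirements a test case traces to.
def requirements_for(test_case):
    return [req for req, cases in rtm.items() if test_case in cases]

print(requirements_for("TC-002"))  # ['REQ-001']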

Why Traceability Matrix?


If a requirement is changed then we can change the related test cases easily
To ensure the QA team has covered all requirements
System maintenance: if a tester who developed test cases without using traceability leaves the organization, the new tester cannot easily tell which test cases cover which requirements

There are three types of traceability matrix


1. Horizontal / Forward Traceability matrix
2. Vertical / Backward Traceability matrix
3. Transversal / Bidirectional Traceability Matrix

Horizontal / Forward Traceability matrix:


This is used for coverage analysis.
When a requirement is changed, it is used to identify the test cases prepared for that requirement.
We can check whether the requirements are covered by test cases or not

Vertical / Backward Traceability matrix:


It is a high-level document which maps the requirements to all phases of the software development cycle, i.e. unit testing, component integration testing, system integration testing, smoke/sanity testing, system testing, acceptance testing, etc.
This will help us in identifying test cases that do not trace to any coverage item, in which case the test case is not required and should be removed
It is also very helpful if you want to identify how many requirements a particular test case covers

Transversal / Bidirectional Traceability matrix:


If we add a "Defects" column to our traceability matrix then it is a bi-directional traceability matrix; otherwise it is a uni-directional traceability matrix.
In a uni-directional traceability matrix we can only check whether our test cases cover all the requirements or not
In a bi-directional traceability matrix, after getting the defects we can see which test case and which requirement each defect relates to.
So in a bi-directional traceability matrix we have two options for mapping.

Test Matrix and Test Metrics:


Test Matrix:
It keeps track of testing activities, like the traceability matrix and the defect matrix

Test Metrics:
They are used as measurements to track performance
For example, the number of defects found in a specific cycle, or cost / effort estimations

Software Configuration Management:


It is a process of controlling and recording of changes that are made to the software and
documentation throughout the software development life cycle



Change Management:
If any change is made due to a client request, it is called a "change request"

Version Control:
Whenever there is a change in the software or documentation, its version will be changed accordingly.
Commonly used version control tools are WinCVS, Perforce and Microsoft VSS (Visual SourceSafe)

Test Plan:
It is a document which describes the objective, scope, approach and focus of the software testing effort
It contains what activities need to be done and on what time schedule
What resources are needed?
What products are to be delivered also needs to be considered in advance

Test Strategy:
 It is a formal description of how software product will be tested
 Test strategy is developed for all levels of testing
 It defines what methods, techniques and tools to be used
 Test strategy indicates how testing is carried out

Test Plan Template:

1. Test Plan Identifier


2. References
3. Introduction
4. Test Items
5. Software Risk Issues
6. Features to be tested
7. Features not to be tested
8. Approach
9. Item Pass/fail criteria
10. Entry criteria and exit criteria
11. Suspension criteria and resumption requirements
12. Test Deliverables
13. Environmental Needs
14. Staffing and training Needs
15. Responsibilities
16. Schedule
17. Planning Risk and Contingencies
18. Glossary

Test Plan Identifier:


A unique number to identify test plan which will also have a revision numbers

References:
List all documents that support Test Plan
 Project Plan
 Requirement Specifications Document
 High level Design Document
 Low Level Design Document
 Methodology guidelines and examples



Introduction:
 State the executive summary of the test plan
 Identify the scope of the plan

Test Items (Functions):


 List out what is to be tested

Software Risk Issues:


 Identify what software is to be tested and what the critical areas are such as
 New version of software
 Ability to use and understand a new tool

Features To be tested:
 This is a listing of what is to be tested from the USERS viewpoint of what the system
does.
 This is not a technical description of the software, but a USERS view of the functions.
 Set the level of risk for each feature. Use a simple rating scale such as (H, M, L): High,
Medium and Low. These types of levels are understandable to a User. You should be
prepared to discuss why a particular level was chosen

Features not to be tested:


 List out what is not tested from user point of view
Identify WHY the feature is not to be tested; there can be any number of reasons.
 Not to be included in this release of the Software.
 Will be released but not tested or documented as a functional part of the release
of this version of the software.

Approach (Test Strategy):


This is the overall Test Strategy
 Testing Levels
 Unit testing
 Integration Testing
 System testing
 Acceptance Testing

 Test Tools
 Are any special tools to be used and what are they?
 Will the tool require special training?

 Meetings

 Measures and Metrics


 Defects by module and severity
 Time spent on defect resolution

Items Pass/fail criteria:


 Specify the criteria to be used to determine whether each test item has passed or
failed
 To define the criteria for pass and fail, consider issues such as the following:
 How significant is the problem? Does it affect a critical function or a peripheral
one?
 How likely is someone to encounter the problem?
 Is there a way to circumvent the problem?

Entry criteria & Exit criteria:


Entry Criteria:
Unit/Integration testing should be complete



 All hardware and software are installed and working properly
Test cases should be signed off
 Test data should be available
 Show stopper bugs should be closed

Exit Criteria:
 Hardware / Software are not available at the time of testing
 If application contains one or more show stopper defects
 When all test cases are executed
 When all defects are closed
When deadlines are reached

Suspension criteria and resumption requirements:


Suspension Criteria:
In sanity testing, if more than 30% of the test cases fail then testing will be suspended
If we are unable to open the application, testing will be suspended

Resumption Criteria:
If testing is suspended, resumption will only occur when the problem that caused the suspension has been resolved

Test Deliverables:
 Unit test Plan
 Integration test plan
 System test plan
 Acceptance test plan
 Defect reports and summaries
 Test logs

Environmental Needs:
Are there any special requirements for this test plan, such as
 Special hardware such as simulators, static generators etc
How will test data be provided? Are there special collection requirements or specific ranges of data that must be provided?
 How much testing will be done on each component of a multi-part feature?
 Specific versions of other supporting software
 Restricted use of the system during testing

Staffing and training needs:


 Training on the application/system
 Training for any test tools to be used

Responsibilities:
Who is in charge?
 Setting risks
 Selecting features to be tested and not tested.
 Setting overall strategy for this level of plan
Ensuring all required elements are in place for testing.
 Providing for resolution of scheduling conflicts, especially, if testing is done on the
production system.
 Who provides the required training?
 Who makes the critical go/no go decisions for items not covered in the test plans?

Schedule:
Test estimations
 Estimations for test plan



Estimations for test case preparation
 Estimations for test executions
 Estimations for test cycles

Planning risk and contingencies:


What are the overall risks to the project with an emphasis on the testing process?
 Lack of personnel resources when testing is to begin
 Lack of availability of required hardware, software, data or tools
 Late delivery of the software, hardware or tools
 Delays in training on the application and/or tools
 Changes to the original requirements or designs

Glossary:
 Used to define terms and acronyms used in the document

Web Testing:

1. Usability Testing
2. Checking Links
3. Browser Compatibility Testing
4. Functionality Testing
5. Credit Card Testing
6. Security Testing
7. Performance Testing
8. Database Testing

Usability Testing: Testing how user-friendly the application is


 Ease of use: Screens and functionality should be understandable to the end user
 Look & Feel: Attractiveness of screen like colors, fonts and graphics
 Speed in Interface: Fast navigation

Checking Links: Proper functionality of hyperlinks


 Broken Links
 Missing Links
 Wrong Links
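A minimal Python sketch of a broken-link check using only the standard library; the URLs are placeholders, and a real check would first collect the links from the site's pages.

from urllib.request import urlopen
from urllib.error import HTTPError, URLError

links = ["https://example.com/", "https://example.com/missing-page"]

for url in links:
    try:
        status = urlopen(url, timeout=5).status
        print(url, "->", status)                    # reachable
    except HTTPError as e:
        print(url, "-> broken (HTTP", e.code, ")")  # broken link
    except URLError as e:
        print(url, "-> unreachable:", e.reason)     # wrong/missing host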

Browser Compatibility Testing: Test to validate the application on different types of browsers with different configurations
 Fonts and Graphics position
 Screen resolution 1024 x 768 …
 Support for different script and software (Flash)

Functionality Testing: Test to validate the correctness and completeness of every functionality, like:
Error Handling: Prevention of negative navigation
Input domain coverage: Correctness of size and value, using the boundary value analysis and equivalence class partitioning techniques
Calculations: Correctness of output
Backend coverage: Inputs from front-end operations should be present in the back-end tables

Credit Card Testing:


Here we perform either of two checks.
We check whether the given number is a valid credit card number or not, based on the card type and prefix values
The other is sandbox testing, where a dummy PayPal account is created for test purposes



Security Testing: Testing how well the system protects against application-level security threats

Performance Testing: To determine how fast the system performs under a particular workload
Types of performance testing
 Baseline Testing
 Load Testing
 Stress Testing
 Soak Testing
 Scalability Testing

Baseline Testing
Baseline Testing examines how a system performs under expected or normal load and
creates a baseline against which the other types of tests can be related

Load Testing
Load testing involves increasing the load and seeing how the system behaves under the higher load

Stress Testing
The goal of a stress test is exactly that: to find the volume of load where the system actually breaks or is close to breaking. The load is applied while decreasing the resources (RAM / processor)

Soak Testing
In order to find system instabilities that occur over time, we need to conduct tests that run
over a long period

Scalability Testing
Scalability testing is very much like load testing, but instead of increasing the number of requests, we increase the size or the complexity of the requests sent. This involves sending large requests, large attachments, or deeply nested requests.

Database Testing:
Verify the consistency, accuracy and correctness of data stored in database

In database testing we check for


 Entity Integrity
 Domain Integrity
 Referential Integrity
 User Defined Integrity

Entity Integrity:
Entity integrity ensures that each row in a table is uniquely identified
Example: Two customers do not have same ID

Domain Integrity:
Here we check for correct data types, null status, and field size

Referential Integrity:
Keeping the relationships between the tables. This ensures every foreign key matches a primary key

User Defined Integrity:


It refers to the specific business rules which are implemented through triggers and stored procedures
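A minimal Python sketch using the standard sqlite3 module to demonstrate entity and referential integrity on a hypothetical customer/order schema; a real database test would run similar checks against the application's own tables.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customer(id))""")

conn.execute("INSERT INTO customer (id) VALUES (1)")
conn.execute("INSERT INTO orders VALUES (10, 1)")  # valid foreign key

try:
    # Entity integrity: a duplicate primary key must be rejected.
    conn.execute("INSERT INTO customer (id) VALUES (1)")
except sqlite3.IntegrityError as e:
    print("Entity integrity violation:", e)

try:
    # Referential integrity: an orphan foreign key must be rejected.
    conn.execute("INSERT INTO orders VALUES (11, 999)")
except sqlite3.IntegrityError as e:
    print("Referential integrity violation:", e)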



Bug Life Cycle
When we find a bug in the application, the bug life cycle begins

Bug Statuses
 New
 Open
 Assigned
 Fixed
Deferred
 Verified
 Closed
 Re-Open
 Invalid
 Duplicate
 Hold

Defect Age: The time gap between the defect introduction time and the defect close time
Defect Density: The number of defects raised relative to a measure of the size of the program
Latent Bug: A bug that is found only after two or more releases
Bug Leakage: A bug which should have been found in the analysis phase but is found in design/code/test
Bug Triage: A meeting to examine the open bugs and divide them into categories:
o Bugs to fix now
o Bugs to fix later
o Bugs we will never fix

Note: Developer will fix the bug


Note: QA / Test Lead will close the reported bug

Difference between defects and enhancements

Defect: An error found in the application while testing
Enhancement: An additional feature or functionality added to the application as desired by the client
An enhancement is done to improve the quality of the software, whereas a defect is removed to maintain the quality

Localization Testing:
 It checks how well the build has been translated into a particular target language
 It includes the translation of the application user interface

Internationalization Testing:
It makes sure that the code can handle all international support without breaking functionality, causing data loss or creating display problems
It checks the proper functionality of the product with any locale settings, using all types of international inputs

Linguistic Testing:
Checking for grammatical and contextual errors, language-specific settings and spell checks

