ST - Unit I


Unit I

Introduction
Testing as an Engineering Activity

• Software Engineering
– The application of a systematic, disciplined,
– quantifiable approach to develop effective and efficient
software
• Software Testing
• An integral part of software engineering
• Verification and validation of software
• Execution of software with actual test data
• It is focused on quality issues; poor-quality software is not
acceptable to users.
• Software failure can lead to catastrophic losses.
• Software Product
– It is a universal requirement
– It is built within time, budget, and quality constraints
– Quality is based on reliability, correctness, usability,
satisfying all user requirements, etc.
• Engineering Discipline:
– Engineers are normally educated and trained based on,
• Standards
• Tools and measurements
• Best practices
• Engineering processes
• Basic scientific principles
• Test Specialist:
 A well-trained and well-educated domain expert is called a "test
specialist"
 A test specialist concentrates on the,
o Principles,
o Practices,
o Processes of the software engineering discipline
 A test specialist must know,
o Test processes and principles
o Test measurements
o Testing standards
o Test plans
o Testing tools
o Testing methods
o Testing elements
TESTING AS A PROCESS
• Software process: Fundamental activities are,
– Software specification
– Software development
– Software validation
– Software evolution
• The purpose of testing is to uncover the defects in the
system

[Diagram: a generic process turns raw material into a finished product;
in the same way, the software process turns a problem definition into a
software product.]
[Diagram: source code is built into executable code; testing takes the
test input and the executable code and produces test output, which
feeds defect analysis.]
Testing objectives:
– To find errors before the user does
– To uncover all the defects before launch
– Testing is related to 2 processes, they are
• Verification
• Validation
– Verification and validation are done after the
implementation process
– Verification: "Are we building the system right?"
– Validation: "Are we building the right system?"
Testing purpose:
To check the quality attributes of software, such as,
– Reliability
– Security
– Usability
– Correctness
Debugging:
– The process of finding and correcting errors
Process:
1. Test cases are taken
2. Test cases are executed
3. Errors are found
4. The code is repaired
5. The code is retested
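The five debugging steps above can be sketched in a few lines. This is a minimal illustration, not code from the text; the function names and the off-by-one defect are hypothetical.

```python
# Steps 1-5 of the debug cycle: take a test case, execute it, find the
# error, repair the code, and retest.

def average_buggy(numbers):
    # Hypothetical defect: divides by the wrong count (off by one)
    return sum(numbers) / (len(numbers) - 1)

def average_fixed(numbers):
    # Repaired version: divides by the actual number of elements
    return sum(numbers) / len(numbers)

# Steps 1-2: take a test case and execute it
test_input, expected = [2, 4, 6], 4.0
# Step 3: the error is found -- the buggy version returns 6.0, not 4.0
assert average_buggy(test_input) != expected
# Steps 4-5: after the repair, the same test case is re-executed
assert average_fixed(test_input) == expected
```

Retesting with the *same* test case is the essential last step: it confirms the repair actually removed the defect the test exposed.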
Testing-Aspects:
1. Economic aspects
• A formal testing process is completed within time and
budget
2. Technical aspects
• Technical support is necessary to find defects: testing
methods, tools, advanced testing techniques
3. Managerial aspects
• Manage the test process
• Follow organizational policies and standards
• The testing process should be measured and monitored
TESTING AXIOMS
"An established rule, principle or law"
 It is impossible to test a program completely because
• The number of possible inputs is large
• The number of possible outputs is large
• The number of paths through the software is large
Types:
 Stakeholder axiom – the people who benefit from testing
 Test basis axiom – identify the targets of testing
 Scope management axiom – manage which items are in and out of scope as things change
 Coverage axiom – measure progress towards the goal
 Delivery axiom – the frequency or format in which results are provided
 Environment axiom – the environment used for testing
 Event axiom – managing and communicating planned and unplanned events
 Prioritization axiom – prioritize the tests
 Design axiom – adopt and agree on a model
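The first axiom — complete testing is impossible — can be made concrete with a quick back-of-the-envelope calculation. The testing rate below is an illustrative assumption, not a figure from the text.

```python
# Even a trivial function taking two 32-bit integers has too many
# input combinations to enumerate exhaustively.
inputs_per_argument = 2 ** 32
total_inputs = inputs_per_argument ** 2          # every possible pair

tests_per_second = 1_000_000_000                 # optimistic assumed rate
seconds_per_year = 60 * 60 * 24 * 365
years_needed = total_inputs / (tests_per_second * seconds_per_year)

print(f"{total_inputs:.3e} possible inputs")     # about 1.8e19
print(f"about {years_needed:.0f} years at one billion tests per second")
```

Running all ~1.8 × 10¹⁹ cases would take roughly 585 years even at a billion tests per second — which is why testers must sample the input space rather than cover it.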
Conclusion of Axioms

 It is not possible to test a program completely
 Testing is a risk-based exercise
 Testing cannot show the absence of bugs
 Not all bugs found will be fixed
 A software tester is not always the most popular member of the project team
 Software testing is a disciplined, technical profession
Benefits of test process improvement are the following:
 smarter testers
 higher quality software
 the ability to meet improved planning, budgeting, and
scheduling goals
 the ability to meet quantifiable testing goals

The model has 5 levels:
 Initial
 Phase definition
 Integration
 Management and measurement
 Defect prevention and control

Test Maturity Model
 When software is tested, many processes are followed in
order to attain maximum quality and minimize defects or
errors.
 The Test Maturity Model is one such model, which has a set of
structured levels.
 TMM has now been replaced by Test Maturity Model
Integration (TMMI).
 TMMI is a 5-level model which provides a framework to
measure the maturity of the testing processes.
– Initial:
– The software process is characterized as inconsistent, and occasionally
even chaotic.
– Managed:
– At this level a software development organization has basic and
consistent project management processes to track cost, schedule, and
functionality.
– Defined:
– The software processes for both management and engineering activities
are documented, standardized, and integrated into a standard software
process.
– Management and Measurement:
– The organization sets quantitative quality goals for both the software
process and software maintenance.
– Optimization:
– The key characteristic of this level is a focus on continually improving
process performance through both incremental and innovative
technological improvements.
 Maturity goals
 Each maturity level, except level 1, contains certain maturity goals.
 For an organization to reach a certain level, the
corresponding maturity goals must be met by the organization.
 Maturity subgoals
 Maturity goals are supported by maturity subgoals.
 They also address how an organization can adapt its practices so that it
can move in line with the TMM model.
 Three groups:
 Managers
 Developers and test engineers
 Customers (user/client).
 Level 1 – Initial
 There are no maturity goals to be met at this level.
 Testing begins after code is written.
 An organization performs testing to demonstrate
that the system works.
 No serious effort is made to track the progress
of testing.
 Test cases are designed and executed in an ad
hoc manner.
 In summary, testing is not viewed as a critical, distinct
phase in software development.
 Level 2 – Phase Definition: The maturity goals are as follows:
 Develop testing and debugging goals.
 Organizations form committees on testing and debugging.
 The committees develop and document testing and debugging goals.
 Initiate a test planning process. (Identify test objectives. Analyze risks. Devise
strategies. Develop test specifications. Allocate resources.)
 Assign the task of test planning to a committee.
 The committee develops a test plan template.
 Proper tools are used to create and manage test plans.
 Provisions are put in place so that customer needs constitute a part of the
test plan.
 Institutionalize basic testing techniques and methods.
Level 3 – Integration: The maturity goals are as follows:
 Establish a software test group.
 The test group is involved in all stages of the software
development.
 Trained and motivated test engineers are assigned to the group.
 The test group communicates with the customers.
 Establish a technical training program.
 Integrate testing into the software lifecycle.
 Control and monitor the testing process.
 Define a set of metrics related to the test project.
 Be prepared with a contingency plan.
Level 4 – Management and Measurement: The maturity goals are:
 Establish an organization-wide review program.
 Establish a test management program.
 Evaluate software quality.
 The organization defines quality attributes and quality goals for
products.
 The management develops policies and mechanisms to collect test
metrics to support the quality goals.
 Level 5 – Optimization, Defect Prevention and Quality Control: The maturity
goals are as follows:
 Application of process data for defect prevention
 Establish a defect prevention team.
 Document defects that have been identified and removed.
 Each defect is analyzed to get to its root cause.
 Develop an action plan to eliminate recurrence of common defects.
 Statistical quality control
 Maturity subgoals to support the above are as follows:
 Establish high-level measurable quality goals. (Ex. test case execution
rate, defect arrival rate, …)
 Ensure that the new quality goals form a part of the test plan.
 The test group is trained in statistical testing and analysis methods.
TMM maturity goals

[Diagram: the TMM maturity goals comprise test planning, a defect
repository, quality evaluation, test case development, controlling and
monitoring of testing, defect prevention, test measurement, test
process improvement, and quality control.]
Basic Definitions
Errors:
An error is a mistake, misconception, or misunderstanding on
the part of a software developer.
Faults (Defects):
A fault (defect) is introduced into the software as the result of an error.
Failures:
A failure is the inability of a software system or component to perform its
required functions within specified performance requirements.
Test Cases:
A test case is a set of input data, together with the conditions under which
the software is executed with that data, and the expected results.
Test:
A test is a group of related test cases, or a group of related test cases and
test procedures.

Test Oracle:
A test oracle is a document or piece of software that allows testers to
determine whether a test has been passed or failed.
Test Bed:
A test bed is an environment that contains all the hardware and software
needed to test a software component or a software system.
Reviews:
A review is a well-structured and regulated examination of a work product.
Inspection:
An inspection is a formal review type. It is led by a trained moderator.
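The test case and test oracle definitions can be illustrated with a small sketch. All names here are illustrative assumptions, not from the text; the expected output plays the role of the oracle.

```python
# A test case pairs input data with an expected result; the oracle is
# whatever lets us decide pass/fail -- here, the expected output itself.

def sort_under_test(items):
    # The component being tested (a stand-in for any unit under test)
    return sorted(items)

test_case = {
    "input": [3, 1, 2],
    "expected": [1, 2, 3],   # acts as the test oracle
}

actual = sort_under_test(test_case["input"])
verdict = "PASS" if actual == test_case["expected"] else "FAIL"
print(verdict)               # prints "PASS"
```

In practice the oracle may be a specification document, a reference implementation, or a property the output must satisfy — not only a hard-coded expected value.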
Software Testing Principles
• Testing principles are used as guidelines for testers.
• A principle is defined as a general 'law', 'rule', 'code of conduct', or 'fact
of nature'.
Key Principles:
• Testing shows the presence of defects
• Exhaustive testing is impossible
• Early testing
• Defect clustering
• Pesticide paradox
• Testing is context dependent
• Absence-of-errors fallacy
• Testing is a creative and challenging task
• Testing activities should be integrated into the software life cycle
• Testing should be planned
• Tests must be repeatable and reusable
• Testing should be carried out by a group
• The probability of more defects in a component is proportional to the
number of defects already found in it
• Test cases should be developed for all conditions
• A test case must contain the expected output or result
• Test results should be inspected meticulously
• A good test case has a high probability of revealing an as-yet
undetected defect
• Testing executes a selected set of test cases
The Tester's Role in a Software Development
Organization
Tester roles:
1. To reveal defects
2. To find weak points
3. To know the inconsistent behavior of the system
4. To find the situations in which software does not work
5. To gain more programming experience, which helps in,
1. Understanding the software/system
2. Understanding how the code is developed
3. Knowing the possibilities of errors
4. Knowing where and when errors may occur
• Try to produce high-quality software
• Try to satisfy user requirements and needs
• Testers work together with requirement engineers
• Testers work together with the test manager and project manager
• A tester performs,
– Providing the test plan early
– Doing execution in parallel
– Recording the test results
– Analyzing the test results
• To minimize the cost of support
• Informing developers of errors and defects
• A tester needs,
– Communication skills
– Team-working skills
– Decision-making skills
– Scripting knowledge / coding skills
– Working experience, etc.
DEFECTS, HYPOTHESES AND TESTS
Consequences of defects: users are not satisfied, software quality is
poor, and time and cost constraints are violated.
1. Origins of Defects

• The term defect is related to the terms error and failure in
the context of the software development domain.
• Defects have detrimental effects on software users, and software
engineers work very hard to produce high-quality software with a
low number of defects.
• But even under the best of development circumstances errors are
made, resulting in defects being injected into the software during the
phases of the software life cycle.
1. Education:
The software engineer did not have the proper educational
background to prepare the software, which leads to defects.
2. Communication:
Software engineers must communicate with group members properly.
3. Oversight:
The software engineer omitted to do something.
For example, a software engineer might omit an initialization statement.
4. Transcription:
The software engineer knows what to do, but makes a mistake in doing it. A
simple example is a variable name being misspelled when entering the code.
5. Process:
The process used by the software engineer misdirected her actions. For
example, a development process that did not allow sufficient time for a
detailed specification to be developed and reviewed could lead to
specification defects.
Testers, as doctors, need to have knowledge about possible defects
(illnesses) in order to develop defect hypotheses. They use
the hypotheses to:
• Design test cases;
• Design test procedures;
• Assemble test sets;
• Select the testing levels (unit, integration, etc.) appropriate for the
tests;
• Evaluate the results of the tests.
Defect Classes, the Defect Repository, and
Test Design
• Defects can be classified in many ways.
• It is important for an organization to adopt a single classification scheme and
apply it to all projects.
• No matter which classification scheme is selected, some defects will fit into more
than one class or category.
• Because of this problem, developers, testers, and SQA staff should try to be as
consistent as possible when recording defect data.
• The defects are classified into 4 types:
• Requirement and specification defects
• Design defects
• Coding defects
• Testing defects
REQUIREMENTS AND SPECIFICATION DEFECTS

The beginning of the software life cycle is critical for ensuring high quality in the
software being developed.
Defects injected in early phases can persist and be very difficult to remove in
later phases.
Since many requirements documents are written using a natural-language
representation, there are very often occurrences of ambiguous, contradictory,
unclear, redundant, and imprecise requirements.
1. Functional Description Defects
The overall description of what the product does, and how it
should behave (inputs/outputs), may be
 incorrect,
 ambiguous,
 incomplete.
2. Feature Defects
Features may be described as distinguishing characteristics of a software
component or system.
Features refer to functional aspects of the software that map to functional
requirements as described by the users and clients.
• Features also map to quality requirements such as performance and reliability.
Feature defects are due to feature descriptions that are
 missing,
 incorrect,
 incomplete.
3. Feature Interaction Defects
 These are due to an incorrect description of how the features interact with
one another.

4. Interface Description Defects
• These are defects that occur in the description of how the target software is to
interface with external software, hardware, and users.
• How are requirements/specification defects avoided?
• Requirements/specification defects are normally found by
 unit tests,
 integration tests,
 system tests,
 user acceptance tests.
DESIGN DEFECTS
 This class covers defects in the design of algorithms, control, logic, data elements,
module interface descriptions, and external software/hardware/user interface
descriptions.
 When describing these defects we assume that the detailed design description
for the software modules is at the pseudo code level, with processing steps, data
structures, input/output parameters, and major control structures defined.
 If the module design is not described in such detail, then many of the defect types
described here may be moved into the coding defects class.
1. Algorithmic and Processing Defects
If the processing steps in a given algorithm are not correct, the pseudo
code creates many defects. Examples are:
 Misplaced steps in the pseudo code
 Duplicate steps
 Unchecked processing errors, e.g., division by zero
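The division-by-zero example above can be shown in a few lines. This is a hypothetical illustration; the function names are not from the text.

```python
# An algorithmic/processing defect: a missing error check allows
# division by zero when the input list is empty.

def mean_defective(values):
    return sum(values) / len(values)      # fails for an empty list

def mean_guarded(values):
    if not values:                        # the missing error check
        return 0.0
    return sum(values) / len(values)

try:
    mean_defective([])
except ZeroDivisionError:
    print("defect exposed: division by zero")
print(mean_guarded([]))                   # prints 0.0
```

Note the defect is in the *design* of the processing steps, not in typing the code: the algorithm simply never specified what to do with an empty input.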
2. Control, Logic, and Sequence Defects
Incorrectly developed pseudo code creates control and logic defects:
 Control defects – poor logic flow in the code
 Logic defects – a logical operator applied mistakenly
 Sequence defects – conditions are not properly checked in the pseudo code

3. Data Defects
 Due to poor data structure design
Eg:
 Incorrect allocation of memory
 Incorrect type assignment
 Missing field in a record
4. Module Interface Description Defects
These are defects derived from,
 Incorrect and/or inconsistent parameter types,
 An incorrect number of parameters,
 Incorrect ordering of parameters.
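A parameter-ordering defect of this kind can be sketched as follows. The function and values are hypothetical; the point is that when two parameters share a type, swapping them is not caught by type checking.

```python
# A module interface defect: the caller passes parameters in the wrong
# order. Both are floats, so no type error occurs -- only testing against
# expected values reveals the defect.

def apply_discount(price, discount_rate):
    return price * (1 - discount_rate)

correct = apply_discount(100.0, 0.2)      # intended call: 80.0
swapped = apply_discount(0.2, 100.0)      # wrong order: -19.8

assert abs(correct - 80.0) < 1e-9
assert abs(swapped - correct) > 1.0       # wildly different, yet "valid" code
```

This is why interface descriptions should name parameters unambiguously and why tests should check actual output values, not merely that a call succeeds.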

5. Functional Description Defects
The defects in this category include incorrect, missing, and/or unclear design
elements.

6. External Interface Description Defects
These are derived from incorrect design descriptions for interfaces with
 COTS components,
 External software systems,
 Databases,
 Hardware devices (e.g., I/O devices).
CODING DEFECTS
Coding defects are derived from errors in implementing the code.

1. Algorithmic and Processing Defects
 Unchecked overflow
 Data conversion errors
 Missing parentheses
 Precision loss
 Incorrect ordering of operators

2. Control, Logic and Sequence Defects
 Incorrect expression of case statements,
 Incorrect iteration of loops (loop boundary problems),
 Missing paths.
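The loop-boundary problem listed above is the classic off-by-one defect. A hypothetical sketch:

```python
# A loop-boundary (off-by-one) coding defect: the loop stops one
# iteration early, so the last term is never added.

def sum_first_n_defective(n):
    total = 0
    for i in range(1, n):        # defect: should be range(1, n + 1)
        total += i
    return total

def sum_first_n_fixed(n):
    return sum(range(1, n + 1))  # covers all n terms

assert sum_first_n_defective(5) == 10    # 1+2+3+4: the 5 is missing
assert sum_first_n_fixed(5) == 15        # 1+2+3+4+5
```

Boundary-value test cases (n = 0, n = 1, and the exact upper limit) are the standard way to flush out this defect class, since mid-range inputs often mask the missing iteration.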
3. Typographical Defects
These are principally syntax errors, such as incorrect spelling of a variable name,
that are usually detected by a
 compiler,
 self-reviews,
 peer reviews.
4. Initialization Defects
These occur when initialization statements are omitted or are incorrect. This may
occur because of
 misunderstandings or lack of communication between programmers, or between
programmers and designers,
 misunderstanding of the programming environment.
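An initialization defect can be illustrated with a hypothetical accumulator class: state that should be initialized per object is instead shared, so stale values leak between uses.

```python
# An initialization defect: the accumulator is a class-level attribute,
# so a "fresh" object inherits stale state from earlier use.

class RunningTotalDefective:
    total = 0                      # defect: shared, never re-initialized

    def add(self, x):
        RunningTotalDefective.total += x
        return RunningTotalDefective.total

class RunningTotalFixed:
    def __init__(self):
        self.total = 0             # correctly initialized per instance

    def add(self, x):
        self.total += x
        return self.total

a = RunningTotalDefective()
a.add(5)
b = RunningTotalDefective()
assert b.add(1) == 6               # stale state leaks into the new object
c = RunningTotalFixed()
assert c.add(1) == 1               # the fixed version starts from zero
```

Defects like this often pass a single-use unit test and only surface when tests are run in sequence — one argument for making every test case independent.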
5. Data-Flow Defects
A poor operational sequence creates data-flow defects.
6. Data Defects
Due to poor data structure implementation, data defects are created.
Eg:
 Incorrect accessing of files
 Improper element count in an array
 Incorrect setting of flags and constants

7. Module Interface Defects
 Incorrect or inconsistent parameter types,
 An incorrect number of parameters,
 Improper ordering of the parameters.
8. Code Documentation Defects
• Incomplete code documentation creates code documentation defects.
• It affects testing efforts when it is
 incomplete,
 unclear,
 incorrect,
 out of date.

9. External Hardware, Software Interface Defects
These defects arise from
 system calls,
 links to databases,
 input/output sequences,
 memory usage,
 resource usage,
 interrupts and exception handling,
 data exchanges with hardware.
TESTING DEFECTS
Defects arise in test plans, test cases, test harnesses, and test procedures.

1. Test Harness Defects
Auxiliary code developed to support testing is called the test harness or
scaffolding code.
The harness code is reusable.
The harness code must be correctly
 designed,
 implemented,
 tested.
2. Test Case Design and Test Procedure Defects
Test case design gives rise to defects whenever there are
 incorrect test cases,
 incomplete test cases,
 missing test cases.
• The control, logic, sequence, and data-flow defects could be detected by using a
combination of white box and black box testing techniques.
• Black box tests may work well to reveal the algorithmic and data defects.
• The code documentation defects require a code review for detection.
• The poor quality of this small program is due to defects injected during several of
the life cycle phases.
• We must work with analysts, designers and code developers to ensure that quality
issues are addressed early in the software life cycle.
Developer/Tester Support
for Developing a
Defect Repository
 A defect and its relevant information are stored in a defect repository.
 Defect repository development is an essential part of testing and debugging.
 To collect defect-relevant data, forms and templates help. Eg:
 defect fix reports
 test analysis reports
 test reports
 testing incident reports
 Each defect and the frequency of occurrence of each defect type are
recorded after testing.
 The defect data is used for test planning. It helps
 to choose testing techniques,
 to allocate resources,
 to estimate test cost.
DEFECT PREVENTION STRATEGIES:

• Defect prevention:
• It is one of the important activities in any software project.
• It is a QA process to identify the root causes of defects, which
helps to improve the process so as to avoid introducing defects.

[Flow: Requirement from customer → Initial study → SRS document →
Clarification and verification from customers]
• Tester Role:
 Studying and reviewing the requirement specification
document
 Reviewing the requirements documentation
 While studying these documents, testers run into different
queries or misleading or unclear requirements.
 Based on the queries, stakeholders provide input and changes to
the specification document.
• Developer Roles:
1. Reviewing requirements
2. Code review
3. Static code analysis
4. Unit testing
• Defect prevention process:

[Flow: Identify critical risks → Estimate expected impact → Minimize
expected impact]

• Identify critical risks:
 Identify the risks faced by the project or system.
 These are the types of defects that could jeopardize the successful construction,
delivery and operation of the system.
• Estimate expected impact:
 For each risk, make an assessment of the financial impact if the risk becomes a problem.
• Minimize expected impact:
 If risks are identified, try to eliminate them.
 If a risk cannot be eliminated, reduce the probability that the risk will become a
problem.
 If risks are not prevented properly, then the following may occur:
 a key requirement is missed
 a function does not work properly
 performance is poor
 users are unable to actively participate
• Defect prevention strategies:
• They are classified into 3 types:
 Product approach to defect prevention
 Process approach to defect prevention
 Automation of the development process
– Product approach to defect prevention:
– Error removal techniques:
– Train and educate the developers
– Use of formal specification and formal verification
– DP based on tools, technologies, and processes
– DP by analyzing the root causes of defects
– Defect reduction through fault detection and removal
– Defect containment through failure prevention techniques
– Process approach to defect prevention
• The management of a software organization has some responsibilities
toward defect prevention, such as:
 Commitment to perform
 Ability to perform
 Activities performed
 Measurement and analysis
 Verifying implementation
– DP through Automation of the Development Process:
– Tools eliminate human-intensive defects.
– Automation tools are available from the requirements phase to the testing phase.
– Requirement tools are classified as:
 Requirement management tools
 Requirement recorder tools
 Requirement verifier tools
– Design tools include:
 Database design tools
 Application design tools
 Visual modelling tools
– The testing phase includes:
 code generation tools
 code testing tools
 code coverage tools
 code analyzer tools
 defect tracking tools


Defect Examples: The Coin Problem

Functional description defects:
• Arise because the functional description is ambiguous and incomplete.
• It does not state that the input, number_of_coins, and the
outputs, number_of_dollars and number_of_cents, should all have
values of zero or greater.
• A precondition is a condition that must be true in order for a software
component to operate properly.
• A postcondition is a condition that must be true when a software component
completes its operation properly.
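The pre- and postcondition idea can be sketched with assertions. The function name and signature are illustrative assumptions for the coin problem, not the program from the text.

```python
# Preconditions checked on entry, postconditions checked before return.

def coins_to_dollars_and_cents(number_of_coins, coin_value_in_cents):
    # Precondition: inputs must have values of zero or greater
    assert number_of_coins >= 0 and coin_value_in_cents >= 0

    total = number_of_coins * coin_value_in_cents
    dollars, cents = divmod(total, 100)

    # Postcondition: outputs are zero or greater, and cents < 100
    assert dollars >= 0 and 0 <= cents < 100
    return dollars, cents

print(coins_to_dollars_and_cents(6, 25))   # prints (1, 50)
```

Writing the conditions as executable assertions turns the missing statements in the functional description into checks a test run can actually violate.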
• Interface description defects:
Arise due to,
 poorly educated/inexperienced people,
 no relevant training,
 the incomplete nature of the specification,
 the incorrect nature of the specification.
Design defects include the following:

Control, logic, and sequencing defects. The defect in this subclass arises from
an incorrect "while" loop condition (it should be less than or equal to six).

Algorithmic and processing defects. These arise from the lack of error
checks for incorrect and/or invalid inputs, the lack of a path where users can
correct erroneous inputs, and the lack of a path for recovery from input errors. The
lack of an error check could also be counted as a functional design defect,
since the design does not adequately describe the proper functionality for the
program.

Data defects. This defect relates to an incorrect value for one of the
elements of the integer array, coin_values, which should read
1, 5, 10, 25, 50, 100.
External interface description defects.
These are defects arising from the absence of input messages or prompts that
introduce the program to the user and request inputs.

The above defects are uncovered by,
 conditional testing,
 loop and branch testing,
 control structure testing,
 path testing, etc.
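A hypothetical reconstruction of the coin program with the listed defects repaired ties the example together: the loop covers all six coin denominations ("less than or equal to six"), the coin_values array reads 1, 5, 10, 25, 50, 100, invalid inputs are rejected, and a prompt introduces the program. The function name and input format are assumptions, not the original code.

```python
# Coin problem with the discussed defects repaired.

COIN_VALUES = [1, 5, 10, 25, 50, 100]   # penny ... dollar, in cents (fixed data defect)

def coin_totals(counts):
    """counts[i] is the number of coins of denomination COIN_VALUES[i]."""
    # Fixed algorithmic defect: validate incorrect/invalid inputs
    if len(counts) != 6 or any(c < 0 for c in counts):
        raise ValueError("expected six non-negative coin counts")
    total = 0
    for i in range(6):                   # fixed loop defect: all six types
        total += counts[i] * COIN_VALUES[i]
    return divmod(total, 100)            # (dollars, cents)

# Fixed external interface defect: a prompt introduces the program
print("Enter the count of each coin type (pennies first).")
print(coin_totals([3, 1, 0, 2, 0, 1]))  # 3+5+50+100 cents -> prints (1, 58)
```

Conditional, loop/branch, and path testing of this version would exercise the validation branch, the six loop iterations, and the error-raising path that the defective design lacked.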
