
TESTING LEVELS

Testing Levels
Testing levels exist to identify missing areas of coverage and to prevent overlap and
repetition between the phases of the development life cycle. Software
development life cycle models define phases such as requirement
gathering and analysis, design, coding or implementation, testing and
deployment. Each phase is subject to testing, and hence there are
various levels of testing. The levels of testing are:

1. UNIT TESTING
2. COMPONENT TESTING
3. INTEGRATION TESTING
4. COMPONENT INTEGRATION TESTING
5. SYSTEM INTEGRATION TESTING
6. SYSTEM TESTING
7. ACCEPTANCE TESTING
8. ALPHA TESTING
9. BETA TESTING
UNIT TESTING
• Unit testing is a method by which individual units of source code are tested to
determine whether they are fit for use. A unit is the smallest testable part of an
application, such as a function/procedure, class or interface.
• Unit tests are typically written and run by software developers to ensure that
code meets its design and behaves as intended.
• The goal of unit testing is to isolate each part of the program and show that
the individual parts are correct.
• A unit test provides a strict, written contract that the piece of code must
satisfy. As a result, it affords several benefits: unit tests find problems early in
the development cycle.
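
A minimal sketch of such a test in Python (the apply_discount function below is hypothetical, introduced only for illustration): one unit is isolated and checked against its written contract.

import unittest

def apply_discount(price, percent):
    """Return the price reduced by the given percentage."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()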
INTEGRATION TESTING
• Integration testing tests integration or interfaces between components, interactions with
different parts of the system (such as the operating system, file system and hardware), or
interfaces between systems.
• Integration testing is done by a specific integration tester or test team.
• Big bang integration testing:
In big bang integration testing all components or modules are integrated simultaneously,
after which everything is tested as a whole.
Big bang testing has the advantage that everything is finished before integration testing
starts.
• The major disadvantage is that it is generally time consuming and it is difficult to trace the
cause of failures because of this late integration.
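
As a hedged sketch (the repository and report classes below are hypothetical, invented only to illustrate the idea), an integration test exercises the interface between two units that were already unit tested in isolation:

import unittest

class InMemoryOrderRepository:
    """First module: stores order amounts."""
    def __init__(self):
        self._orders = []

    def add(self, amount):
        self._orders.append(amount)

    def all(self):
        return list(self._orders)

class RevenueReport:
    """Second module: reports over whatever the repository returns."""
    def __init__(self, repository):
        self._repository = repository

    def total(self):
        return sum(self._repository.all())

class OrderReportingIntegrationTest(unittest.TestCase):
    def test_report_reflects_orders_added_through_repository(self):
        # Exercises the interface between the two modules, not either one alone.
        repo = InMemoryOrderRepository()
        report = RevenueReport(repo)
        repo.add(10)
        repo.add(15)
        self.assertEqual(report.total(), 25)

if __name__ == "__main__":
    unittest.main()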
SYSTEM TESTING
• It verifies whether all the system elements have been integrated and perform their allocated
functions.
• Once all the components are integrated, the application as a whole is tested rigorously to see
that it meets quality standards.
• Performed by a specialized testing team.
• System testing is so important because of the following reasons:
– It is the first step in the software development life cycle where the application is tested as a whole.
– The application is tested thoroughly to verify that it meets the functional and technical
specifications.
– The application is tested in an environment which is very close to the production environment
where the application will be deployed.
– It enables us to test, verify and validate both the business requirements and the application
architecture.
ACCEPTANCE TESTING
• Most important stage.
• It is conducted by the quality assurance team,
– who will gauge whether the application meets the intended specifications and satisfies
the client's requirements.
• The QA team will have a set of pre-written scenarios and test cases that will be used to
test the application.
• It can range from an informal test drive to a planned and systematically executed series of
tests.
• It can be conducted over a period of weeks or months.
• Acceptance tests are not only intended to point out simple spelling mistakes, cosmetic
errors or interface gaps, but also to point out any bugs in the application that would result in
system crashes or major errors in the application.
• There are also legal and contractual requirements for acceptance of the system.
TESTING VARIETIES

• BLACK BOX TESTING
• WHITE BOX TESTING
• UNIT TESTING
• INCREMENTAL TESTING
• INTEGRATION TESTING
• FUNCTIONAL TESTING
• SYSTEM TESTING
• END-TO-END TESTING
• SANITY TESTING
• REGRESSION TESTING
• ACCEPTANCE TESTING
• LOAD TESTING
CONT..

• USABILITY TESTING
• INSTALL/UNINSTALL TESTING
• PERFORMANCE TESTING
• RECOVERY TESTING
• SECURITY TESTING
• COMPATIBILITY TESTING
• EXPLORATORY TESTING
• AD-HOC TESTING
• USER ACCEPTANCE TESTING
• COMPARISON TESTING
• ALPHA TESTING
• BETA TESTING
• MUTATION TESTING
ALPHA TESTING
Alpha testing is one of the most common software testing strategies used in software development. It is
especially used by product development organizations.
• This test takes place at the developer's site. Developers observe the users and note problems.
• Alpha testing is testing of an application when development is about to be complete. Minor design changes
can still be made as a result of alpha testing.
• Alpha testing is typically performed by a group that is independent of the design team, but still within
the company, e.g. in-house software test engineers or software QA engineers.
• Alpha testing is the final testing before the software is released to the general public. It has two phases:
• In the first phase of alpha testing, the software is tested by in-house developers. They use either
debugger software or hardware-assisted debuggers. The goal is to catch bugs quickly.
• In the second phase of alpha testing, the software is handed over to the software QA staff for additional
testing in an environment that is similar to the intended use.
• Alpha testing is simulated or actual operational testing by potential users/customers or an independent
test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of
internal acceptance testing, before the software goes to beta testing.
BETA TESTING
• It is also known as field testing. It takes place at the customer's site. It sends the
system to users who install it and use it under real-world working conditions.
• A beta test is the second phase of software testing, in which a sampling of the
intended audience tries the product out. (Beta is the second letter of the Greek
alphabet.) Originally, the term alpha test meant the first phase of testing in a
software development process. The first phase includes unit testing, component
testing, and system testing. Beta testing can be considered "pre-release testing".
• The goal of beta testing is to place your application in the hands of real users
outside of your own engineering team, to discover any flaws or issues from the
user's perspective that you would not want to have in your final, released version
of the application.
ALPHA TESTING & BETA TESTING
BLACK BOX TESTING
• Specification-based testing technique is also known as ‘black-box’ or input/output driven
testing techniques because they view the software as a black-box with inputs and outputs.
• The testers have no knowledge of how the system or component is structured inside the
box. In black-box testing the tester is concentrating on what the software does, not how it
does it.
• The definition mentions both functional and non-functional testing. Functional testing is
concerned with what the system does: its features or functions. Non-functional testing is
concerned with how well the system does it, covering characteristics such as
performance, usability, portability and maintainability.
• Specification-based techniques are appropriate at all levels of testing (component testing
through to acceptance testing) where a specification exists. For example, when performing
system or acceptance testing, the requirements specification or functional specification may
form the basis of the tests.
BLACK BOX TESTING CONT..

There are four specification-based or black-box techniques:


1. Equivalence partitioning
2. Boundary value analysis
3. Decision tables
4. State transition testing
EQUIVALENCE PARTITIONING
• Equivalence partitioning is a software testing technique that divides the input
and/or output data of a software unit into partitions of data from which test
cases can be derived.
• The equivalence partitions are usually derived from the requirements
specification for input attributes that influence the processing of the test object.
• Test cases are designed to cover each partition at least once.

WHAT CAN BE FOUND USING EQUIVALENCE PARTITIONING?


WHAT CAN BE PARTITIONED?

• Usually it is the input data that is partitioned.


• However, depending on the software unit to be tested, output data can be
partitioned as well.
• Each partition shall contain a set or range of values, chosen such that all the
values can reasonably be expected to be treated by the component in the
same way (i.e. they may be considered 'equivalent').
RECOMMENDATIONS ON DEFINING PARTITIONS
A number of items must be considered:
• All valid input data for a given condition are likely to go through the same process.
• Invalid data can go through various processes and needs to be evaluated more carefully. For
example:
▪ A blank entry may be treated differently than an incorrect entry.
▪ A value that is less than a range of values may be treated differently than a value that is
greater.
▪ If there is more than one error condition within a particular function, one error may
override the other, which means the subordinate error does not get tested unless the other
value is valid.
EQUIVALENCE PARTITIONING EXAMPLE

• Example of a function which takes a parameter “month”.


• The valid range for the month is 1 to 12, representing January to December. This valid range
is called a partition.
• In this example there are two further partitions of invalid ranges.

• Test cases are chosen so that each partition would be tested.
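
A minimal sketch of these test cases in Python (the is_valid_month function is hypothetical, assumed only for illustration): one representative value is chosen from each of the three partitions.

def is_valid_month(month):
    """Hypothetical function under test: True only for months 1..12."""
    return 1 <= month <= 12

# One representative value per equivalence partition.
partitions = {
    "invalid: below range (month < 1)":  (-3, False),
    "valid: 1 <= month <= 12":           (7,  True),
    "invalid: above range (month > 12)": (20, False),
}

for name, (value, expected) in partitions.items():
    assert is_valid_month(value) == expected, name
print("All equivalence-partition test cases passed.")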


BOUNDARY VALUE ANALYSIS

• Equivalence partitioning is not a stand-alone method to determine
test cases. It is usually supplemented by boundary value analysis.
• Boundary value analysis focuses on values on the edge of an
equivalence partition or at the smallest value on either side of an
edge.
EQUIVALENCE PARTITIONING WITH BOUNDARY VALUE ANALYSIS
We use the same example as before.
Test cases are supplemented with boundary values.
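
A minimal sketch of the supplemented test cases (again assuming the hypothetical is_valid_month function from the previous sketch): each edge of the valid partition is tested together with the closest value on the other side of the edge.

def is_valid_month(month):
    """Same hypothetical function as in the equivalence-partitioning sketch."""
    return 1 <= month <= 12

# Boundary values: each edge of the valid partition plus the nearest
# value on the other side of that edge.
boundary_cases = [
    (0,  False),  # just below the lower edge
    (1,  True),   # lower edge of the valid partition
    (12, True),   # upper edge of the valid partition
    (13, False),  # just above the upper edge
]

for value, expected in boundary_cases:
    assert is_valid_month(value) == expected, f"month={value}"
print("All boundary-value test cases passed.")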
DECISION TABLES

Decision tables are a precise yet compact way to model complicated logic.
Decision tables, like if-then-else and switch-case statements, associate
conditions with actions to perform.

But, unlike the control structures found in traditional programming languages,


decision tables can associate many independent conditions with several
actions in an elegant way.
DECISION TABLES - USAGE
Decision tables make it easier to observe that all possible conditions are
accounted for.
Decision tables can be used for:
Specifying complex program logic
Generating test cases (also known as logic-based testing)
Logic-based testing is considered as:
Structural testing when applied to structure (i.e. the control flow graph of an
implementation).
Functional testing when applied to a specification.
DECISION TABLES - STRUCTURE
Conditions (Condition stub)   |   Condition alternatives (Condition entry)
Actions (Action stub)         |   Action entries (Action entry)

• Each condition corresponds to a variable, relation or predicate
• Possible values for conditions are listed among the condition alternatives:
  • Boolean values (true/false) – limited-entry decision tables
  • Several values – extended-entry decision tables
  • Don't care value
• Each action is a procedure or operation to perform
• The entries specify whether (or in what order) the action is to be performed
To express the program logic we can use a limited-entry decision table consisting of 4 areas
called the condition stub, condition entry, action stub and the action entry:
DECISION TABLES – STRUCTURE CONT..
• We can specify default rules to indicate the action to be taken when none of the other rules
apply.
• When using decision tables as a test tool, default rules and their associated predicates must
be explicitly provided.
DECISION TABLE EXAMPLE
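A hedged sketch of a limited-entry decision table and of logic-based test generation from it (the login rules and the login_action function below are hypothetical, not taken from the slides): every rule (column) of the table becomes one test case.

# A hypothetical limited-entry decision table for a login screen.
#
#                         R1     R2     R3     R4
# valid username          T      T      F      F
# valid password          T      F      T      F
# --------------------------------------------------
# grant access            X      -      -      -
# show error message      -      X      X      X

rules = [
    # (valid_username, valid_password, expected_action)
    (True,  True,  "grant access"),
    (True,  False, "show error message"),
    (False, True,  "show error message"),
    (False, False, "show error message"),
]

def login_action(valid_username, valid_password):
    """Hypothetical implementation whose logic the table specifies."""
    if valid_username and valid_password:
        return "grant access"
    return "show error message"

# Logic-based testing: each rule is executed as a test case.
for valid_username, valid_password, expected in rules:
    assert login_action(valid_username, valid_password) == expected
print("All decision-table rules verified.")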
STATE TRANSITION TESTING
What is state transition testing?
State transition testing is a black-box testing technique in which outputs are triggered by changes to the input conditions
or changes to the 'state' of the system. In other words, tests are designed to execute valid and invalid state transitions.
When to use?
• When we have a sequence of events that occur and associated conditions that apply to those events
• When the proper handling of a particular event depends on the events and conditions that have occurred in the
past
• It is used for real-time systems with various states and transitions involved
Deriving test cases:
• Understand the various states and transitions and mark each valid and invalid state
• Define a sequence of events that leads to an allowed test-ending state
• Note down each visited state and traversed transition
• Repeat steps 2 and 3 until all states have been visited and all transitions traversed
• For test cases to have good coverage, actual input values and actual output values have to be generated
STATE TRANSITION TESTING CONT..
Example:
A system's transitions (a simple light switch) are represented in the state diagram below:
STATE TRANSITION TESTING CONT..
The tests are derived from the above states and transitions, and below are the possible
scenarios that need to be tested.

Tests          Test 1       Test 2        Test 3
Start State    OFF          ON            ON
Input          Switch ON    Switch OFF    Switch ON
Output         Light ON     Light OFF     Fault
Finish State   ON           OFF           ON
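
A minimal sketch of the same example in Python (the LightSwitch class is hypothetical, and it assumes Test 3 exercises the invalid "switch on while already on" transition): valid transitions are encoded in a table, anything else is reported as a fault, and each row of the test table becomes one test case.

class LightSwitch:
    """Minimal state machine for the light-switch example (a sketch)."""

    VALID_TRANSITIONS = {
        ("OFF", "switch on"):  ("ON",  "light on"),
        ("ON",  "switch off"): ("OFF", "light off"),
    }

    def __init__(self, state):
        self.state = state

    def apply(self, event):
        key = (self.state, event)
        if key not in self.VALID_TRANSITIONS:
            return "fault"            # invalid transition: state is unchanged
        self.state, output = self.VALID_TRANSITIONS[key]
        return output

# (start state, input, expected output, expected finish state)
tests = [
    ("OFF", "switch on",  "light on",  "ON"),   # Test 1: valid transition
    ("ON",  "switch off", "light off", "OFF"),  # Test 2: valid transition
    ("ON",  "switch on",  "fault",     "ON"),   # Test 3: invalid transition
]

for start, event, expected_output, expected_state in tests:
    switch = LightSwitch(start)
    assert switch.apply(event) == expected_output
    assert switch.state == expected_state
print("All state-transition test cases passed.")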


WHITE BOX TESTING
• This testing is based on knowledge of the internal logic of an application's code.
• Also known as glass-box testing.
• Internal software and code workings should be known for this type of testing.
• Tests are based on coverage of code statements, branches, paths and conditions.
PSEUDOCODE AND CONTROL FLOW GRAPHS

INPUT(Y)
IF (Y <= 0)
THEN
    Y := -Y
END_IF
WHILE (Y > 0)
DO
    INPUT(X)
    Y := Y - 1
END_WHILE
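
A hedged translation of this pseudocode into Python, with white-box test cases chosen from the control flow rather than a specification (the process function and its xs parameter, which stands in for the values read by INPUT(X), are only a sketch, not from the slides):

def process(y, xs):
    """Python translation of the pseudocode; collects the values read by INPUT(X)."""
    read = []
    if y <= 0:            # decision 1: taken when y is zero or negative
        y = -y
    i = 0
    while y > 0:          # decision 2: loop body executed y times
        read.append(xs[i])
        i += 1
        y = y - 1
    return read

# Test cases chosen so that every statement and both outcomes of each decision are exercised.
assert process(-2, ["a", "b"]) == ["a", "b"]           # IF taken, loop runs twice
assert process(3, ["a", "b", "c"]) == ["a", "b", "c"]  # IF not taken, loop runs
assert process(0, []) == []                            # IF taken, loop never entered
print("Statement and branch coverage achieved by these three cases.")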
UNIT TESTING
• Testing of individual software components or modules.
• Typically done by the programmer and not by testers, as it requires detailed knowledge of
the internal program design and code.
• May require developing test driver modules or test harnesses.

INTEGRATION TESTING
• Testing of integrated modules to verify combined functionality after integration.
• Modules are typically code modules, individual applications, client and server applications
on a network, etc.
• This type of testing is especially relevant to client/server and distributed systems.
END –TO – END & SANITY TESTING
• End-to-end testing
– Similar to system testing,
– Involves testing of a complete application environment in a situation that
mimics real-world use, such as interacting with a database, using network
communications, or interacting with other hardware, applications, or
systems if appropriate.
• Sanity testing –
– Testing to determine if a new software version is performing well enough to
accept it for a major testing effort.
– If the application crashes during initial use, the system is not stable enough for
further testing, and the build or application is sent back to be fixed.
REGRESSION TESTING
• Whenever a change in a software application is made it is quite possible that
other areas within the application have been affected by this change.
• The intent of regression testing is to ensure that a change,
– such as a bug fix, did not introduce another fault elsewhere in the application.

• Regression testing is so important because of the following reasons:


– Minimizes the gaps in testing when an application with changes has to be tested.
– Tests the new changes to verify that the change made did not affect any other area of the
application.
– Mitigates risks when regression testing is performed on the application.
– Increases test coverage without compromising timelines.
– Increases speed to market the product.
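
A minimal sketch of the practice (the calculate_total function and the earlier defects it refers to are hypothetical): the regression suite keeps one test per previously fixed defect and is re-run after every change.

import unittest

def calculate_total(prices, tax_rate=0.0):
    """Hypothetical function that has already been through earlier bug fixes."""
    subtotal = sum(prices)
    return round(subtotal * (1 + tax_rate), 2)

class CalculateTotalRegressionSuite(unittest.TestCase):
    def test_basic_total(self):
        self.assertEqual(calculate_total([10.0, 5.0]), 15.0)

    def test_empty_cart_regression(self):
        # Pins the fix for a hypothetical earlier defect: an empty cart
        # used to raise an error instead of returning 0.0.
        self.assertEqual(calculate_total([]), 0.0)

    def test_tax_rounding_regression(self):
        # Pins the fix for a hypothetical rounding defect.
        self.assertEqual(calculate_total([19.99], tax_rate=0.2), 23.99)

if __name__ == "__main__":
    unittest.main()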
PERFORMANCE TESTING
• It is mostly used to identify any bottlenecks or performance issues rather than finding the bugs
in software.
• There are different causes which contribute to lowering the performance of software:
– Network delay
– Client-side processing
– Database transaction processing
– Load balancing between servers
– Data rendering
• Performance testing is considered an important and mandatory testing type in terms
of the following aspects:
– Speed (i.e. response time, data rendering and accessing)
– Capacity
– Stability
– Scalability
• It can be either a qualitative or quantitative testing activity and can be divided into different sub-types
such as load testing and stress testing.
LOAD TESTING
• Testing the behavior of the software by applying maximum load in terms of software
accessing and manipulating large input data.
• It can be done at both normal and peak load conditions.
• Most of the time, load testing is performed with the help of automated tools such as
– LoadRunner, AppLoader, IBM Rational Performance Tester, Apache JMeter, Silk Performer,
Visual Studio Load Test, etc.
• Virtual users (VUsers) are defined in the automated testing tool and the script is executed to
verify the load testing for the software.
• The quantity of users can be increased or decreased concurrently or incrementally based
upon the requirements.
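
A minimal sketch of the idea using only the Python standard library (the URL, user count and request count are hypothetical; real load tests would normally use the tools listed above): a pool of virtual users issues concurrent requests and latencies are summarized.

import concurrent.futures
import time
import urllib.request

URL = "http://localhost:8080/health"   # hypothetical endpoint under test
VIRTUAL_USERS = 50                      # concurrent simulated users
REQUESTS_PER_USER = 10

def virtual_user(user_id):
    """Each virtual user issues a series of requests and records latencies."""
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=5) as response:
            response.read()
        latencies.append(time.perf_counter() - start)
    return latencies

if __name__ == "__main__":
    with concurrent.futures.ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        results = list(pool.map(virtual_user, range(VIRTUAL_USERS)))
    all_latencies = [latency for user in results for latency in user]
    print(f"requests: {len(all_latencies)}, "
          f"average latency: {sum(all_latencies) / len(all_latencies):.3f}s, "
          f"max latency: {max(all_latencies):.3f}s")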
STRESS TESTING

• This testing type includes the testing of software behavior under abnormal
conditions.
• Taking away the resources, applying load beyond the actual load limit is stress
testing.
• This testing can be performed by testing different scenarios such as:
– Shutdown or restart of network ports randomly.
– Turning the database on or off.
– Running different processes that consume resources such as CPU, memory,
server, etc.
USABILITY TESTING

• How efficient and effective the system is to use.
• There are standards, quality models and methods which define usability in the
form of attributes and sub-attributes, such as ISO 9126, ISO 9241-11, ISO 13407
and IEEE Std 610.12.
SECURITY TESTING
• Security testing involves the testing of software in order to identify any flaws and gaps from a security
and vulnerability point of view.
• Following are the main aspects which security testing should ensure:
– Confidentiality, integrity
– Authentication, availability
– Authorization, non-repudiation
– Software is secure against known and unknown vulnerabilities
– Software data is secure
– Software complies with all security regulations
– Input checking and validation
– SQL injection attacks
– Injection flaws
– Session management issues
– Cross-site scripting attacks
– Buffer overflow vulnerabilities
– Directory traversal attacks
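
A minimal sketch of one such check (the search_users function and the users table are hypothetical): an in-memory SQLite database is used to verify that a parameterized query treats an SQL injection payload as plain data instead of executable SQL.

import sqlite3

def search_users(connection, name):
    """Hypothetical search using a parameterized query, so input is treated as data."""
    cursor = connection.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    )
    return [row[0] for row in cursor.fetchall()]

# In-memory test fixture.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

# Security test case: an injection payload must not return every row.
payload = "' OR '1'='1"
assert search_users(conn, payload) == []
assert search_users(conn, "alice") == ["alice"]
print("Injection payload was treated as plain data.")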
INSTALL/UNINSTALL TESTING

• Tested for full, partial, or upgrade install/uninstall processes on
different operating systems, under different hardware and software
environments.
RECOVERY TESTING

• Testing how well a system recovers from crashes, hardware
failures, or other catastrophic problems.
COMPATIBILITY TESTING

• Testing how well software performs in a particular
– hardware
– software
– operating system
– network environment, and different combinations of the above.
