
CHAPTER ONE

INTRODUCTION

CS448 - SOFTWARE TESTING


GENERAL TESTING PROCESS

 Testing starts with individual program units such as functions or objects. These are then
integrated into sub-systems and systems, and the interactions of these units are tested.
 Finally, after delivery of the system, the customer may carry out a series of acceptance tests to check
that the system performs as specified.
 This model of the testing process is appropriate for large system development — but for smaller
systems, or for systems that are developed through scripting or reuse, there are often fewer distinct
stages in the process.
ABSTRACT VIEW OF SOFTWARE TESTING

Figure 1: Testing phases

 The two fundamental testing activities are component testing (testing the parts of the system) and
system testing (testing the system as a whole).
Component testing
 Component testing aims to uncover defects by testing individual program components such as functions,
objects, or reusable components.
 System testing involves integrating these components into sub-systems or the entire system. The focus
of system testing is to ensure that the system meets functional and non-functional requirements and
behaves as expected.
 Defects in components that were missed during earlier testing are typically discovered during system
testing.
TWO DISTINCT GOALS OF SOFTWARE TESTING PROCESS

 To demonstrate to the developer and the customer that the software meets its requirements.
– For custom software, this means that there should be at least one test for every requirement in the user and
system requirements documents. For generic software products, it means that there should be tests for all of the
system features that will be incorporated in the product release. Some systems may have an explicit acceptance
testing phase where the customer formally checks that the delivered system conforms to its specification.
 To discover faults or defects in the software where the behavior of the software is incorrect,
undesirable or does not conform to its specification.
– Defect testing is concerned with rooting out all kinds of undesirable system behavior, such as system crashes,
unwanted interactions with other systems, incorrect computations and data corruption.
CONT...

 The first goal leads to validation testing, where you expect the system to perform correctly using a
given set of test cases that reflect the system’s expected use.
 The second goal leads to defect testing, where the test cases are designed to expose defects. The test
cases can be deliberately obscure and need not reflect how the system is normally used.
 For validation testing, a successful test is one where the system performs correctly.
 For defect testing, a successful test is one that exposes a defect that causes the system to perform
incorrectly.
Testing cannot demonstrate that the software is free of defects or that it will behave as specified in every circumstance. It is always
possible that a test that you have overlooked could discover further problems with the system.
“Testing can only show the presence of errors, not their absence.”
Edsger W. Dijkstra, 1972
 Overall, therefore, the goal of software testing is to convince system developers and customers that the
software is good enough for operational use. Testing is a process intended to build confidence in the
software.
GENERAL MODEL OF THE TESTING PROCESS

Figure 2: A model of the software testing process

 Test cases are specifications of the inputs to the test and the expected output from the system plus
a statement of what is being tested.
 Test data are the inputs that have been devised to test the system. Test data can sometimes be
generated automatically, but automatic test case generation is impossible: the outputs of the tests
can only be predicted by people who understand what the system should do.
SUBSET OF POSSIBLE TEST CASES

 Exhaustive testing, where every possible program execution sequence is tested, is impossible.
 Testing, therefore, has to be based on a subset of possible test cases.
 Ideally, software companies should have policies for choosing this subset rather than leave this to the
development team.
 These policies might be based on general testing policies, such as a policy that all program statements
should be executed at least once. Alternatively, the testing policies may be based on experience of
system usage and may focus on testing the features of the operational system.
 For example:
– All system functions that are accessed through menus should be tested.
– Combinations of functions (e.g., text formatting) that are accessed through the same menu must be tested.
– Where user input is provided, all functions must be tested with both correct and incorrect input.
CONT...

 Experience with major software products like word processors and spreadsheets shows that similar
guidelines are typically followed in product testing. When individual software features are used in
isolation, they usually function correctly. However, issues arise when combinations of features have not
been tested together. For instance, using footnotes with multicolumn layout in a word processor may
result in incorrect text layout.
WHO IS RESPONSIBLE FOR THE DIFFERENT STAGES OF
TESTING

 During the V & V (Verification and Validation) planning process, decisions are made by the managers
regarding the responsibilities for different testing stages.
 Typically, programmers test the components they have developed. An integration team then
integrates modules from different developers, builds the software, and tests the system as a whole.
 However, for critical systems, an independent testing process may be employed, where dedicated
testers are responsible for all stages of testing. In critical system testing, tests are developed separately,
and detailed records of test results are maintained.
CONT...

 Component testing by developers is usually based on an intuitive understanding of how the components
should operate.
 System testing, however, has to be based on a written system specification. This can be a detailed
system requirements specification, or it can be a higher-level user-oriented specification of the features
that should be implemented in the system.
 A separate team is normally responsible for system testing. The system testing team works from the
user and system requirements documents to develop system-testing plans (see Figure 3).

Figure 3: Testing phases in the software process


IDEAL OUTCOMES IN SOFTWARE TESTING

 In an ideal world, software testing will ensure that a system is ready to use under all conditions.
 Good test coverage ensures that all areas, such as functionality, compatibility, and performance, are
covered by the testing and that the application is deemed reliable. Ideally, testing also determines that
deployment can be achieved easily and without roadblocks.
 Essentially, the ideal outcome of software testing is an application that’s easy to install, understand, and
use, while also working as expected under real-world scenarios.
CONT...

 The major objectives of software testing could be summarized as follows:


– Spotting bugs
– Ensuring quality
– Avoiding future defects
– Assuring the product is fit for purpose
– Ensuring compliance with relevant requirements
– Creating a valuable product for customers
WHAT ARE THE LIMITATIONS OF TESTING?

 Testing runs throughout development, providing a framework that streamlines the process while
maximizing the quality of the end product.
 While this should result in a product that works effectively and consistently, handling every surprise input
that’s thrown at it, there are limitations to the testing process.
 Balancing these is about knowing how much testing is enough and what type of testing is
necessary to get as close to these ideal outcomes as possible.
LIMITATIONS OF TESTING IN SOFTWARE DEVELOPMENT

 Software testing has certain limitations that need to be considered. It’s important to acknowledge these
limitations and implement strategies to mitigate their impact during the testing process.
Key Limitations of Testing in Software Development:
 Testing cannot typically uncover things that are missing or unknown:
Tests can be written to find and identify known issues. Still, if there are issues that aren’t well understood, there’s no way to
design an appropriate test to find them. Therefore, testing simply cannot guarantee that an application is free from errors.
 It’s impossible to test for all conditions:
There is effectively an infinite number and variety of inputs that can go into an application, and testing them all, even if
theoretically possible, is never going to be practical. Testing must cover the known probable inputs, and this is a balance
between quality and deadlines.
 Testing usually gives no insight into the root causes of errors:
These causes need to be identified to prevent repeating the same mistakes in the future. Yet, testing will only tell whether
or not the issues are present.
CONT...

Other limitations:
 Time and resource constraints:
Limited time and resources may restrict the extent of testing that can be performed.
 Reliance on test data:
Testing relies on the availability of representative and comprehensive test data for accurate results.
 Inability to guarantee absolute correctness:
Testing can provide confidence in the software’s functionality, but it cannot guarantee that it is entirely error-free.
 In summary, the limitations of software testing come from practical, financial, and time restrictions. The result of
these restrictions is a set of hard truths: testing cannot be exhaustive, can only be completed to a particular budget, and
will always be a compromise between the release date and the quality of the software.
HOW TO OVERCOME THE LIMITATIONS OF TESTING IN
SOFTWARE DEVELOPMENT

 These limitations are inherent in all software releases. Even the best applications by diligent developers
have these testing limitations. Instead of fearing them, the key is to mitigate these limitations to deliver
high-quality products on time and meet customer needs.
How to overcome the three key limitations we listed above:
 Testing cannot typically uncover things that are missing or unknown.
– The obvious solution to this problem is to know as much as possible about what to expect. Of course, this
might be easier said than done. Still, when you consider that testing is as complex and difficult as the
engineering process itself, it seems reasonable to assign an equally-qualified team to the testing process.
– There will always be unknown unknowns, but with an experienced team of testers, you’ll have a wealth of
insights into what to expect and what to test for, and this dramatically reduces the number of unknown errors
that will slip through the cracks.
CONT...

 It’s impossible to test for all conditions.


– The key to working around this issue is finding the balance between testing speed, accuracy, scope, and budget
for your particular software. Automation solves many problems for smaller tests running numerous times. If
you’ve designated enough resources to your testing team, you’ll solve many problems by identifying what
conditions to test.
– Modern software is usually too complex to test in every feasible way and still release on a competitive schedule,
so learning to prioritize the requirements and functions that are most likely to contain critical errors is the key.
– Another important thing to keep in mind is the potential for requirement-specification changes down the line. If
you design your tests with maintenance in mind, you’ll cover many future testing needs too.
CONT...

 Testing usually gives no insight into the root causes of errors.


– For this, your testing methodologies need to incorporate or at least accommodate root cause analyses. These
typically involve defining the problem, brainstorming the possible root causes, and implementing corrective
procedures to fix it. From there, your testing processes need to be altered to prevent the cause from being
repeated.
– Of course, the best way to avoid this process is to make testing something that happens early and often, and to
build the software with as few errors as possible. Still, when errors inevitably appear, it’s important to know how to
use them as a learning opportunity and a way to improve your processes.
CONT...

 So, working around the limitations involves targeting your testing approach with the most powerful tools
available to you. Whether that’s a highly-experienced team for testing or the most up-to-date automation
solutions, it’s generally a matter of coming in prepared to do the best you can under the circumstances.
 In this way, your testing objectives and attitudes should be aligned with what is optimal rather than
what is perfect, the latter of which is a physical impossibility.
CONT...

Conclusion
 So, testing should ensure that a product is fit for purpose and can handle anything the customers throw
at it without compromising quality, speed, or reliability. And while it’s impossible to guarantee this, it’s
certainly possible to get very close.
 Anticipating future needs is a skill that comes with experience, and helps create future-proof software,
so using the right people is a must.
 The key to making the optimal software is to know where, how, and how much to test and factor the
limitations into your testing strategy. Where necessary, root cause analyses can identify and inform
necessary corrective procedures, and continuous testing by a qualified team should go a long way to
keeping development on track.
COMPONENT (UNIT) TESTING

 Component testing (sometimes called unit testing) is the process of testing individual components in
the system. This is a defect testing process so its goal is to expose faults in these components.
 For most systems, the developers of components are responsible for component testing.
THE DIFFERENT TYPES OF COMPONENT THAT MAY BE
TESTED AT THIS STAGE

1. Individual functions or methods within an object


2. Object classes that have several attributes and methods
3. Composite components made up of several different objects or functions. These composite components
have a defined interface that is used to access their functionality.
Individual functions or methods within an object
 Individual functions or methods are the simplest type of component and your tests are a set of calls to
these routines with different input parameters.
 You can use the approaches to test case design, discussed in the next section, to design the function or
method tests.
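As a concrete illustration, the sketch below unit-tests a single function with a set of calls using different input parameters. The `classify_triangle` function is a hypothetical example component, not one from this chapter; in a real project the test calls would normally live in a framework such as unittest or pytest.

```python
# A minimal sketch of testing an individual function: each test is one
# call to the routine with chosen input parameters and a check of the
# result. classify_triangle is a hypothetical component for illustration.
def classify_triangle(a, b, c):
    """Classify a triangle by its three side lengths."""
    if min(a, b, c) <= 0 or a + b <= c or a + c <= b or b + c <= a:
        raise ValueError("not a valid triangle")
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Tests: calls with different input parameters.
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(3, 3, 5) == "isosceles"
assert classify_triangle(3, 4, 5) == "scalene"
try:
    classify_triangle(1, 1, 10)   # defect test: invalid input must be rejected
except ValueError:
    pass
```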
COMPONENT TYPE: OBJECT CLASSES

 When you are testing object classes, you should design your tests to provide coverage of all of the
features of the object.
 Therefore, object class testing should include:
 Each method (operation) associated with the object class should be tested in isolation.
 The setting and interrogation (querying) of all attributes associated with the object.
 The exercise of the object in all possible states. This means that all events that cause a state change in the object
should be simulated.
EXAMPLE

Figure 4: The weather station object interface

 It has only a single attribute, which is its identifier. This is a constant that is set when the weather
station is installed. Therefore, you only need a test that checks whether it has been set up.
 You need to define test cases for reportWeather, calibrate, test, startup and shutdown.
 Ideally, you should test methods in isolation but, in some cases, some test sequences are necessary.
For example, to test shutdown you need to have executed the startup method.
CONT...

Figure 5: State diagram for WeatherStation


CONT...

 Using the state model in the previous slide you can identify sequences of state transitions that have to
be tested and define event sequences to force these transitions.
 In principle, you should test every possible state transition sequence, although in practice this may be
too expensive.
 Examples of state sequences that should be tested in the weather station include:
 Shutdown → Waiting → Shutdown
 Waiting → Calibrating → Testing → Transmitting → Waiting
 Waiting → Collecting → Waiting → Summarizing → Transmitting → Waiting
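The event sequences above can be checked mechanically. The sketch below encodes a transition table and drives it with an event sequence, asserting that the states visited match the expected sequence. The table is a simplified, hypothetical reading of the state diagram in Figure 5, not the chapter's actual WeatherStation implementation, and the event names are invented for illustration.

```python
# A sketch of state-based testing: a hypothetical transition table for the
# WeatherStation, and a driver that records the states visited.
TRANSITIONS = {
    ("Shutdown", "restart"):   "Waiting",
    ("Waiting", "shutdown"):   "Shutdown",
    ("Waiting", "calibrate"):  "Calibrating",
    ("Calibrating", "done"):   "Testing",
    ("Testing", "ok"):         "Transmitting",
    ("Transmitting", "done"):  "Waiting",
    ("Waiting", "clock"):      "Collecting",
    ("Collecting", "done"):    "Waiting",
}

def run(events, start="Shutdown"):
    """Drive the state machine through a sequence of events and
    return the list of states visited (a KeyError flags an illegal
    transition, which is itself a test failure)."""
    state, visited = start, [start]
    for e in events:
        state = TRANSITIONS[(state, e)]
        visited.append(state)
    return visited

# First sequence from the slide: Shutdown -> Waiting -> Shutdown
assert run(["restart", "shutdown"]) == ["Shutdown", "Waiting", "Shutdown"]
```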
CONT...

Inheritance
 If you use inheritance, this makes it more difficult to design object class tests.
 Where a superclass provides operations that are inherited by a number of subclasses, all of these
subclasses should be tested with all inherited operations.
 The reason for this is that the inherited operation may make assumptions about other operations and attributes,
which may have been changed in the subclass.
 Equally, when a superclass operation is overridden, the overriding operation must be tested.
COMPONENT TYPE: INTERFACE TESTING

 Many components in a system are not simple functions or objects but are composite components that
are made up of several interacting objects.
 The functionality of these components is accessed through their defined interfaces.
 Testing these composite components then is primarily concerned with testing that the component
interface behaves according to its specification.

Figure 6: Interface testing
CONT...

 Figure 6 above illustrates this process of interface testing.


 Assume that components A, B and C have been integrated to create a larger component or sub-system.
The test cases are not applied to the individual components but to the interface of the composite
component created by combining these components.
Interface errors
 Interface testing is particularly important for object-oriented and component-based development.
Objects and components are defined by their interfaces and may be reused in combination with other
components in different systems.
 Interface errors in the composite component cannot be detected by testing the individual objects or
components. Errors in the composite component may arise because of interactions between its parts.
TYPES OF INTERFACES BETWEEN PROGRAM COMPONENTS

 Parameter interfaces: These are interfaces where data or sometimes function references are passed
from one component to another.
 Shared memory interfaces: These are interfaces where a block of memory is shared between
components. Data is placed in the memory by one sub-system and retrieved from there by other sub-
systems.
 Procedural interfaces: These are interfaces where one component encapsulates a set of procedures
that can be called by other components. Objects and reusable components have this form of
interface.
 Message passing interfaces: These are interfaces where one component requests a service from
another component by passing a message to it. A return message includes the results of executing the
service. Some object-oriented systems have this form of interface, as do client-server systems.
INTERFACE ERRORS

 Interface errors are one of the most common forms of error in complex systems (Lutz, 1993).

 Interface errors fall into three classes:


 Interface misuse: A calling component calls some other component and makes an error in the use of its
interface. This type of error is particularly common with parameter interfaces where parameters may be of the
wrong type, may be passed in the wrong order or the wrong number of parameters may be passed.
 Interface misunderstanding: A calling component misunderstands the specification of the interface of the called
component and makes assumptions about the behavior of the called component. The called component does not
behave as expected and this causes unexpected behavior in the calling component. For example, a binary
search routine may be called with an unordered array to be searched. The search would then fail.
 Timing errors: These occur in real-time systems that use a shared memory or a message-passing interface.
The producer of data and the consumer of data may operate at different speeds. Unless particular care is taken in
the interface design, the consumer can access out-of-date information because the producer of the information
has not updated the shared interface information.
CONT...

 Interface testing is tricky because defects often emerge only under unusual conditions.
 Consider a queue object with a limited size; a problem arises if the calling object expects an unlimited
size and doesn't account for potential overflow.
 Test cases must be crafted to deliberately trigger overflow to reveal any resulting issues in the system.
 Complications increase when errors across different modules interact, making some defects noticeable
only when another part of the system fails.
 For instance, if an object receives an incorrect but valid-looking response from another object, the error
might not be evident until it leads to a later malfunction.
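The bounded-queue scenario above can be sketched as a defect test that deliberately triggers the overflow. `BoundedQueue` below is a hypothetical component written for illustration; the test exposes the interface misunderstanding of a caller that assumes unlimited capacity.

```python
# A sketch of an interface defect test: force the queue past its capacity
# to reveal how the interface behaves on overflow. BoundedQueue is a
# hypothetical component, not from the chapter.
class BoundedQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def put(self, item):
        if len(self.items) >= self.capacity:
            raise OverflowError("queue is full")
        self.items.append(item)

q = BoundedQueue(capacity=2)
q.put("a")
q.put("b")
try:
    q.put("c")            # a caller assuming unlimited size fails here
    overflowed = False
except OverflowError:
    overflowed = True
assert overflowed         # the defect test has exposed the mismatch
```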
GENERAL GUIDELINES FOR INTERFACE TESTING

 Analyze the Code: Identify every call to external components and test them with parameter values at
the extremes of their ranges to uncover inconsistencies.
 Pointer Parameters: Always include tests that pass null pointers to ensure the interface handles them
without errors.
 Procedural Interfaces: Create test scenarios that are expected to cause failures in the component, to
check for any incorrect assumptions about component behavior.
 Stress Testing: In systems that use message-passing, overload the system with an excessive number
of messages to identify potential timing issues.
 Shared Memory Interactions: Test the system with different activation sequences for components
sharing memory to detect any presumed data production and consumption order.
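The "pointer parameters" and "analyze the code" guidelines can be sketched together in a few lines. `total_length` is a hypothetical interface written for illustration (Python's `None` plays the role of a null pointer); the tests pass null and boundary values across the interface.

```python
# A sketch of interface tests following the guidelines above: pass a
# null (None) parameter and values at the extremes of the input range.
# total_length is a hypothetical component for illustration.
def total_length(strings):
    if strings is None:        # defensive check the null test exercises
        return 0
    return sum(len(s) for s in strings)

assert total_length(None) == 0          # null-pointer test case
assert total_length([]) == 0            # empty-collection boundary
assert total_length(["ab", "c"]) == 3   # normal value
```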
SOFTWARE TESTING LEVELS

 Integration Testing: involves testing two or more combined units that must work together to ensure an error-free
flow of control and data (such as consistency of parameters, file format, and so on) among combined units and their
overall correct design and integration. User interface, use-case, interaction, and big bang (integrate and test all
modules at once) are some of the integration testing types. This kind of testing is performed by testers.
 System Testing: involves testing the integrated, complete software to check its compliance with its
requirements. It verifies the overall interaction of components to ensure the correct working of all modules and
programs without error. It involves various types of both functional (tests functionality of software) testing and non-
functional (tests quality of software) testing such as performance, reliability, usability, and security testing. System
testing is performed by the testing team.
 Acceptance Testing: This testing is performed to validate the software against customer requirements, to ensure
that the software does what the customer wants it to do, and to check the acceptability of the system. User
Acceptance Testing (UAT), as it is sometimes called, comprises two testing types: Alpha testing, performed by both
the development team and users using made-up data, and Beta testing, in which users start using the software with
real data and carefully observe the software for errors.
CONT...

Figure 7: Software Testing Levels


CONT...
SOFTWARE TESTING TECHNIQUES

 Software testing involves various techniques to ensure that the software performs as expected. These
techniques specify the strategy for developing test cases, analyzing test results, and increasing test
coverage.
 Testing techniques help identify difficult-to-recognize test conditions and improve the overall quality of
the software. There are several testing techniques with each technique covering different aspects of the
software to reveal its quality.
 While it is not possible to utilize all testing techniques, testers can choose and combine multiple
techniques based on requirements, software type, budget, and time constraints. The higher the number
of testing techniques combined, the better the testing result, coverage, and quality.
 The three essential testing techniques are White-box, Black-box, and Grey-box testing.
CONT...
SOFTWARE TESTING TECHNIQUES: WHITE-BOX TESTING

 This is a testing technique in which the internal structure and implementation of software being
tested are known to the tester.
 In white-box testing, full knowledge of the source code is required because test-case selection is
based on the implementation of the software entity; an internal view of the system and the tester’s
programming skills are used to design test cases.
 The tester selects inputs to exercise program paths and compares the output with the expected output.
 White-box testing is also called Structural, Transparent Box, Glass Box, Clear Box, Logic Driven, Open
Box Testing.
 White-box testing, although usually done at the unit level, is also performed at integration and system
levels of the software testing process.
 Some white-box testing types include: Control Flow, Data flow, Branch, Loop, Path Testing.
CONT...

 Advantages:
– As the tester has knowledge of the source code, it becomes very easy to
find out which type of data can help in testing the application effectively.
– It helps in optimizing the code.
– Extra lines of code, which can harbor hidden defects, can be identified and removed.
– Due to the tester's knowledge about the code, maximum coverage is
attained during test scenario writing.
 Disadvantages:
– Due to the fact that a skilled tester is needed to perform white box testing,
the costs are increased.
– Sometimes it is impossible to look into every nook and corner to find out
hidden errors that may create problems as many paths will go untested.
– White box testing is difficult to maintain, as the use of specialized tools
like code analyzers and debugging tools is required.
COMMON WHITE-BOX TESTING TYPES: CONTROL-FLOW TESTING

 Control flow testing is a method of white-box testing that uses a program's control flow graph (CFG) to
guide the creation of test cases.
 This approach involves selecting paths, nodes, and conditions within the CFG to ensure that each
path, node, and condition is executed at least once.
 This helps to verify the program's execution order and control flow. A typical test case covers a path
from the start to the end of the CFG.
 This testing method is especially useful for unit testing new software.
 A CFG consists of nodes representing statements and edges indicating the flow of control.
 There are five main types of nodes: entry, exit, decision (for conditional branches like if or switch
statements), merge (where branches converge), and statement (sequential statements).
 Control flows from the beginning to the end of the program, with some CFGs allowing reverse flow for
additional testing depth.
 Different CFG models exist, each with distinct characteristics, yet all aim to facilitate comprehensive
testing by meeting specific coverage criteria.
CONT...

 Control-flow testing supports the following test coverage criteria:


– Statement/Node Coverage: Executes each statement in the program at least once.
– Edge Coverage: Executes each statement in the program at least once, and exercises every possible outcome of
every decision in the program at least once.
– Condition Coverage: Executes each statement in the program at least once, and exercises every individual condition
in each decision at least once, in both its true and false forms.
– Path Coverage: Executes each complete path in the program at least once, except for loops, which usually have an
infinite number of complete paths.
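The difference between the criteria can be seen on a small example. The hypothetical `grade` function below has two decisions; the comments note which inputs are needed for statement coverage and which additional input edge (branch) coverage demands.

```python
# A sketch of control-flow coverage criteria on a tiny two-decision
# function. grade is a hypothetical example, not from the chapter.
def grade(score):
    if score >= 90:
        label = "A"
    else:
        label = "B"
    if score < 0:
        label = "invalid"
    return label

# Statement coverage: every statement runs at least once.
#   score=95 executes the "A" branch; score=-1 executes "B" then "invalid".
assert grade(95) == "A"
assert grade(-1) == "invalid"
# Edge coverage additionally requires the false outcome of the second
# decision, which the two inputs above never take:
assert grade(50) == "B"
```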
COMMON WHITE-BOX TESTING TYPES: DATA FLOW TESTING

 Data-flow testing, a white-box testing technique, leverages control flow graph (CFG) paths to identify
incorrect data definitions, usage in calculations, and terminations (killing).
 It tracks how data is manipulated to uncover potential errors, focusing on anomalies such as variables
that are initialized but never used, or used before being initialized.
 This method inspects the definitions, uses, and context (whether in computations or conditions) of
variables throughout the program.
 It employs two primary approaches:
– Define/use testing, which follows specific rules and coverage metrics, and
– Program slicing, which examines program segments.
CONT...

 Data flow testing uses the following Test Coverage Criteria in creating test cases for the test:
– All-defs (AD) coverage: Every variable definition is followed by at least one use in the code.
– All-uses (AU) coverage: Ensures a path exists from every variable's definition to its use.
– All-c-uses (ACU) coverage: Tracks paths from variable definitions to their use in computations, excluding
variables without computational use.
– All-c-uses/some-p-uses (ACU+P) coverage: Covers paths from definitions to computational uses and
considers predicate uses if computational uses are absent.
– All-p-uses (APU) coverage: Ensures every variable's definition leads to its use in predicates, excluding
variables without predicate use.
– All-p-uses/some-c-uses (APU+C) coverage: Focuses on paths leading to predicate uses, and to computational
uses if predicate uses are absent.
– All-du-paths (ADUP) coverage: The most comprehensive, covering all paths between definitions and uses,
encompassing all other criteria and requiring the most extensive testing.
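The def/use terminology above can be made concrete on a small function. In the hypothetical `average` below, the comments mark each definition (def), computational use (c-use), and predicate use (p-use) of the variable `total`; all-uses coverage requires tests that reach each def-use pair.

```python
# A sketch of define/use analysis for data-flow testing. average is a
# hypothetical example; the annotations track the variable `total`.
def average(values):
    total = 0                     # def of total
    for v in values:
        total = total + v         # c-use of total, then re-def
    if total == 0:                # p-use of total
        return 0
    return total / len(values)    # c-use of total

assert average([2, 4]) == 3       # exercises the non-zero p-use outcome
assert average([]) == 0           # exercises the zero p-use outcome
```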
CONT...
SOFTWARE TESTING TECHNIQUES: BLACK-BOX TESTING

 Black-box testing is a method where the tester does not know the software's internal structure. It
focuses on testing the software against its requirement specifications, without considering its internal
workings.
 This technique can be used for both functional (e.g., integration testing) and non-functional (e.g.,
performance testing) aspects, though it's mainly applied to functional testing.
 Testers evaluate the software's basic functionalities through detailed test cases, aiming to ensure the
integrity of its external operations. They check if the software correctly accepts inputs and produces
outputs, comparing results against a predefined standard or “test oracle”.
 Black-box testing is versatile, applicable at various testing stages such as Unit, Integration, System, and
Acceptance levels, but it's primarily used in System and Integration testing.
 Also known as Opaque, Functional, Specification-based, Close-box, Behavioral, and Input-Output
testing.
 It includes techniques like Equivalence Partitioning, Boundary Value Analysis, Decision Table, and State
Transition, among others, to ensure comprehensive coverage.
CONT...

 Advantages:
– Well suited and efficient for large code segments.
– Code Access not required.
– Clearly separates user’s perspective from the developer’s
perspective through visibly defined roles.
– Large numbers of moderately skilled testers can test the
application with no knowledge of implementation, programming
language or operating systems.
 Disadvantages:
– Limited Coverage since only a selected number of test scenarios
are actually performed.
– Inefficient testing, due to the fact that the tester only has limited
knowledge about an application.
– Blind Coverage, since the tester cannot target specific code
segments or error prone areas.
– The test cases are difficult to design.
COMMON BLACK-BOX TESTING TYPES: EQUIVALENCE
PARTITIONING TESTING (EP)

 EP testing technique simplifies testing by grouping a program's input domain into equivalence classes to minimize test cases.
 By selecting one representative from each equivalence class (EC) for testing, it ensures coverage while eliminating
redundant tests.
 EC testing can be weak or strong:
– Weak Equivalence Class Testing (WECT): one value is chosen from each equivalence class of each
variable; the number of test cases equals the largest number of equivalence classes among the variables.
– Strong Equivalence Class Testing (SECT): based on the Cartesian product of the partition classes, i.e., testing all interactions of all
equivalence classes.
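The contrast between WECT and SECT can be sketched in a few lines of Python. The inputs and ranges here are hypothetical, not from the slides: an age field (valid: 18-60) and a membership tier (valid: "basic" or "gold").

```python
import itertools

# One representative value per equivalence class of each input.
age_classes = [10, 35, 70]        # below range, in range, above range
tier_classes = ["basic", "xxx"]   # valid tier, invalid tier

# WECT: classes are combined side by side, one value per class; the
# number of tests equals the largest class count (here 3), padding the
# shorter variable with one of its representatives.
wect = list(itertools.zip_longest(age_classes, tier_classes,
                                  fillvalue=tier_classes[0]))

# SECT: Cartesian product of all classes -> 3 * 2 = 6 test cases.
sect = list(itertools.product(age_classes, tier_classes))

print(len(wect))  # 3
print(len(sect))  # 6
```

The gap between the two counts grows quickly with more variables, which is why SECT is thorough but expensive.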
COMMON BLACK-BOX TESTING TYPES: BOUNDARY VALUE
ANALYSIS TESTING (BVA)

 This is a black-box test selection technique that aims at finding software errors at the boundaries of
equivalence classes. Unlike the Equivalence Partitioning technique (which uses only the input domain), BVA
uses both input and output domains in creating test cases.
 BVA complements EP in that while EP selects tests from within equivalence classes, BVA focuses
on tests at and near the boundaries of equivalence classes. Tests derived using the two
techniques may overlap.
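For a single variable with a valid range [lo, hi] (the range below is illustrative), the classic BVA selection — values at, just inside, and just outside each boundary, plus a nominal value — can be sketched as:

```python
def boundary_values(lo, hi):
    """Single-variable BVA: values at and just around the boundaries
    of a valid range [lo, hi], plus one nominal mid-range value."""
    nominal = (lo + hi) // 2
    return [lo - 1, lo, lo + 1, nominal, hi - 1, hi, hi + 1]

# Hypothetical input field valid for 1..100:
print(boundary_values(1, 100))
# [0, 1, 2, 50, 99, 100, 101]
```

The off-by-one values (lo - 1, hi + 1) come from the output/invalid side of the boundary, which is exactly what EP's in-class representatives miss.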
SOFTWARE TESTING TECHNIQUES: GREY-BOX TESTING

 Grey-box (translucent) testing is a technique that takes the straightforward approach of black-box testing
and combines it with the code-targeted methods of white-box testing.
 Some knowledge of the internal workings of the software (usually of the part to be tested) is required
when designing tests at the black-box level.
 More understanding of the internals of the software is required in grey-box testing than in black-box testing,
but less than in white-box testing.
 Grey-box testing is much more effective in integration testing, is the best approach for functional or
domain testing, and is a perfect fit for Web-based applications.
CONT...

 Advantages:
– Combined benefits of black-box and white-box testing wherever
possible.
– Grey box testers don’t rely on the source code; instead they rely
on interface definition and functional specifications.
– Based on the limited information available, a grey box tester can
design excellent test scenarios especially around communication
protocols and data type handling.
– The test is done from the point of view of the user and not the
designer.
 Disadvantages:
– Since the access to source code is not available, the ability to go
over the code and test coverage is limited.
– The tests can be redundant if the software designer has already
run a test case.
– Testing every possible input stream is unrealistic because it
would take an unreasonable amount of time; therefore, many
program paths will go untested.
COMMON GREY-BOX TESTING TYPES: REGRESSION TESTING

 Regression testing is a grey-box testing strategy that is performed every time changes are made to the
software, to ensure that the changes behave as intended and that the unchanged parts are not negatively
affected by the modification. Errors introduced into unchanged parts of the software are called
regression errors.
 Regression testing starts with a (possibly modified) specification, a modified program, and an old test
plan (which requires updating).
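A minimal sketch of the idea, assuming a hypothetical `apply_discount` function whose behaviour was recently changed (a new 100% cap): the old test cases are re-run to catch regression errors in the unchanged behaviour, alongside a new test for the modification.

```python
import unittest

def apply_discount(price, percent):
    # Modified version: a recent change capped the discount at 100%.
    # The pre-existing behaviour for normal inputs must be preserved.
    percent = min(percent, 100)
    return price * (1 - percent / 100)

class RegressionSuite(unittest.TestCase):
    # Old test cases from the existing (updated) test plan, re-run
    # after every change to detect regression errors.
    def test_unchanged_behaviour(self):
        self.assertEqual(apply_discount(200, 10), 180.0)
        self.assertEqual(apply_discount(50, 0), 50.0)

    # New test case covering the modified behaviour.
    def test_new_cap(self):
        self.assertEqual(apply_discount(200, 150), 0.0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

In practice the old suite is kept under version control with the code, so the "old test plan" in the slide is simply the previous revision of files like this one.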
COMMON GREY-BOX TESTING TYPES: ORTHOGONAL ARRAY
TESTING (OAT)

 This is a type of testing that uses pair-wise combinations of data or entities as test input parameters to
increase coverage. The selected pairs of parameters should be independent of one another.
 OAT is handy when maximum coverage is required with a minimum number of test cases and the test
data has many permutations and combinations. It is extremely valuable for testing complex
applications and e-commerce products.
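The pair-wise idea can be illustrated with the classic L4(2³) orthogonal array: four runs cover every value pair of every pair of three two-level factors, versus 2³ = 8 exhaustive runs. The factors and levels below are illustrative, and the checker simply verifies the pair-coverage property:

```python
import itertools

# L4(2^3): 4 runs over 3 two-level factors (levels 0 and 1).
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def covers_all_pairs(runs, levels=(0, 1)):
    """Check that every pair of factors sees every pair of levels."""
    n_factors = len(runs[0])
    for i, j in itertools.combinations(range(n_factors), 2):
        seen = {(run[i], run[j]) for run in runs}
        if seen != set(itertools.product(levels, repeat=2)):
            return False
    return True

print(covers_all_pairs(L4))  # True
```

Larger arrays (more factors, more levels) keep the same guarantee while the savings over exhaustive combination testing grow dramatically.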
COMPARISON OF SOFTWARE TESTING TECHNIQUES
 There is no single technique that is always better; depending on the testing requirements and needs, one technique
can have advantages over the others and vice versa.
 In testing any software, exploring and combining many testing techniques helps in eliminating more bugs thereby increasing
the overall quality of the software than sticking to one technique.

| Criteria | White-box | Black-box | Grey-box |
|---|---|---|---|
| Required knowledge | Full knowledge of the internal working of the software. | Knowledge of the internal working of the software is not required. | Limited knowledge of the internal workings of the software. |
| Performed by | Usually testers and developers. | End-users, developers, and testers. | End-users, developers, and testers. |
| Testing focus | Internal workings, coding structure, and flow of data and control. | Evaluating fundamental aspects of the software. | High-level database diagrams and data flow diagrams. |
| Granularity | High | Low | Medium |
| Time consumption | Very exhaustive and time-consuming. | The least time-consuming and exhaustive. | Partly time-consuming and exhaustive. |
| Data domain testing | Data domains and internal boundaries can be better tested. | Can be performed through trial-and-error method. | Can be done on identified data domains and internal boundaries. |
| Algorithm testing | Suitable for testing algorithms. | Unsuitable for testing algorithms. | Inappropriate for testing algorithms. |
| Also known as | Transparent-box, Open-box, Logic-driven, or code-based testing. | Closed-box, data-driven, functional, or specification-based testing. | Translucent testing. |
SOFTWARE TESTING TYPES
 Testing types are the various tests that are performed at a particular test level, based on a proper test
technique, to address testing requirements in the most effective manner [12]. There are many types of
testing, each serving a different purpose.
 Some of the most important types of testing are:

| Testing Type | Object | Technique Type | Testing Level |
|---|---|---|---|
| Functional testing | Test functions of the software | Black-box testing | Acceptance and System level |
| Performance testing | Test software responsiveness and stability under a particular workload | Black-box testing | Any level |
| Security testing | Protect data and maintain software functionality | White-box testing | Any level |
| Usability testing | Check ease of use of the software | Black-box testing | Acceptance and System level |
| Use case testing | Check that the path used by the user works as intended | Black-box testing | Acceptance, System and Integration level |
| Exploratory testing | Validate the experience of the user | Ad-hoc testing | Acceptance and System level |
LEVELS OF TESTING

 Levels of testing include the different methodologies that can be used while conducting Software Testing.
 Following are the main levels of Software Testing:
– Functional Testing.
– Non-functional Testing.
Functional Testing
 This is a type of black-box testing that is based on the specifications of the software to be tested.
The application is tested by providing input, and the results are then examined to check that they conform to
the functionality it was intended for.
 Functional Testing of the software is conducted on a complete, integrated system to evaluate the
system's compliance with its specified requirements.
CONT...

 There are five steps that are involved when testing an application for functionality:
I. The determination of the functionality that the intended application is meant to perform.
II. The creation of test data based on the specifications of the application.
III. The determination of the expected output based on the test data and the specifications of the application.
IV. The writing of Test Scenarios and the execution of test cases.
V. The comparison of actual and expected results based on the executed test cases.
 An effective testing practice will see the above steps applied to the testing policies of every organization,
helping ensure that the organization maintains the strictest standards of
software quality.
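The five steps above can be sketched end to end for a hypothetical function whose assumed specification is "return the gross price including 15% tax, rounded to two decimals" (the function and figures are illustrative):

```python
# Step I: functionality under test (hypothetical specification).
def gross_price(net):
    return round(net * 1.15, 2)

# Steps II-III: test data and expected outputs derived from the
# specification, not from the implementation.
test_cases = [(100.0, 115.0), (19.99, 22.99), (0.0, 0.0)]

# Steps IV-V: execute the cases and compare actual vs expected.
results = [(net, gross_price(net), expected)
           for net, expected in test_cases]
failures = [r for r in results if r[1] != r[2]]

print(failures)  # [] -> all functional test cases pass
```

The key discipline is in steps II-III: expected outputs come from the specification before execution, so the comparison in step V is against an independent oracle rather than whatever the code happens to produce.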
SDLC - V-MODEL

 The V-Model is an SDLC model in which execution of processes happens in a sequential manner in a V-shape.
It is also known as the Verification and Validation model.
 The V-Model is an extension of the waterfall model and is based on the association of a testing phase with each
corresponding development stage. This means that for every single phase in the development cycle
there is a directly associated testing phase.
 This is a highly disciplined model, and the next phase starts only after completion of the previous
one.
V – MODEL DESIGN
 Under V - Model, the
corresponding testing phase of
the development phase is planned
in parallel.
 So there are Verification phases
on one side of the “V” and
Validation phases on the other
side.
 Coding phase joins the two
sides of the V-Model.
 The figure illustrates the different
phases in V-Model of SDLC.
PHASES IN V-MODEL

Verification Phases
 Business Requirement Analysis:
– This is the first phase in the development cycle, where the product requirements are understood from the customer's perspective.
This phase involves detailed communication with the customer to understand their expectations and exact requirements.
– This is a very important activity and needs to be managed well, as most customers are not sure about what exactly they need.
– The acceptance test design planning is done at this stage, as business requirements can be used as an input for acceptance
testing.

 System Design:
– Once you have clear and detailed product requirements, it's time to design the complete system.
– System design comprises understanding and detailing the complete hardware and communication setup for the product
under development.
– System test plan is developed based on the system design. Doing this at an earlier stage leaves more time for actual test
execution later.
CONT...

 Architectural Design:
– Architectural specifications are understood and designed in this phase. Usually more than one technical approach is proposed
and based on the technical and financial feasibility the final decision is taken.
– System design is broken down further into modules taking up different functionality. This is also referred to as High Level Design
(HLD).
– The data transfer and communication between the internal modules and with the outside world (other systems) is clearly
understood and defined in this stage.
– With this information, integration tests can be designed and documented during this stage.

 Module Design:
– In this phase the detailed internal design for all the system modules is specified, referred to as Low Level Design (LLD).
– It is important that the design is compatible with the other modules in the system architecture and the other external systems.
– Unit tests are an essential part of any development process and help eliminate the maximum number of faults and errors at a very early
stage.
– Unit tests can be designed at this stage based on the internal module designs.
CONT...

Coding Phases
 The actual coding of the system modules designed in the design phase is taken up in the Coding phase.
 The best suitable programming language is decided based on the system and architectural requirements.
 The coding is performed based on the coding guidelines and standards.
 The code goes through numerous code reviews and is optimized for best performance before the final build is
checked into the repository.
Validation Phases
 Unit Testing:
– Unit tests designed in the module design phase are executed on the code during this validation phase.
– Unit testing is testing at the code level and helps eliminate bugs at an early stage, though not all defects can be uncovered by unit
testing.
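A minimal sketch of a unit test as executed in this phase: the function and its cases are illustrative stand-ins for tests derived from a module's low-level design, not examples from the slides.

```python
# Hypothetical module function specified in the LLD: a year is a leap
# year if divisible by 4, except century years not divisible by 400.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Unit tests designed during the module design phase, executed here
# against the actual code; each case targets one rule of the design.
def test_is_leap_year():
    assert is_leap_year(2024)        # ordinary leap year
    assert not is_leap_year(1900)    # century, not divisible by 400
    assert is_leap_year(2000)        # century, divisible by 400
    assert not is_leap_year(2023)    # not divisible by 4

test_is_leap_year()
print("unit tests passed")
```

Catching the century-year rule here, at code level, is far cheaper than discovering it later during system or acceptance testing.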
CONT...

 Integration Testing:
– Integration testing is associated with the architectural design phase.
– Integration tests are performed to test the coexistence and communication of the internal modules within the system.

 System Testing:
– System testing is directly associated with the System design phase.
– System tests check the entire system functionality and the communication of the system under development with external systems.
– Most of the software and hardware compatibility issues can be uncovered during system test execution.

 Acceptance Testing:
– Acceptance testing is associated with the business requirement analysis phase and involves testing the product in the user
environment.
– Acceptance tests uncover compatibility issues with the other systems available in the user environment.
– It also discovers non-functional issues, such as load and performance defects, in the actual user environment.
V - MODEL APPLICATION

 The application of the V-Model is almost the same as that of the waterfall model, as both are sequential models.
 Requirements have to be very clear before the project starts, because it is usually expensive to go back and make changes.
 This model is used in the medical development field, as it is a strictly disciplined domain.
 Suitable scenarios to use V-Model:
– Requirements are well defined, clearly documented and fixed.
– Product definition is stable.
– Technology is not dynamic and is well understood by the project team.
– There are no ambiguous or undefined requirements.
– The project is short.
V- MODEL PROS AND CONS

PROS
– This is a highly disciplined model and phases are completed one at a time.
– Works well for smaller projects where requirements are very well understood.
– Simple and easy to understand and use.
– Easy to manage due to the rigidity of the model; each phase has specific deliverables and a review process.

CONS
– High risk and uncertainty.
– Not a good model for complex and object-oriented projects.
– Poor model for long and ongoing projects.
– Not suitable for projects where requirements are at a moderate to high risk of changing.
– Once an application is in the testing stage, it is difficult to go back and change a functionality.
– No working software is produced until late in the life cycle.
