
ISYS6264 – Testing and System Implementation

Foundation for Testing Project


Learning Outcomes

• LO 1: Explain the foundation of a testing project


References

• Black, Rex. (2009). Managing the Testing Process: Practical Tools and Techniques for Managing Hardware and Software Testing, 3rd ed. Wiley, Indianapolis. ISBN: 9780470404157.

• Burnstein, Ilene. (2003). Practical Software Testing. Springer, New York. ISBN: 0-387-95131-8.

• Homès, Bernard. (2012). Fundamentals of Software Testing. ISTE – Wiley, London – Hoboken. ISBN: 978-1-84821-324-1.

• Black, Rex & Mitchell, Jamie. (2011). Advanced Software Testing, Vol. 3. Rocky Nook, Santa Barbara, USA. ISBN: 978-1-933952-39-0.
Sub Topics

• What is Testing?
• Testing in Software Life Cycle
• Fundamental Test Process
• Principles in Software Testing
• Test Granularity
• Test Phases
• Assuring Quality in Testing
• Quality Risk Analysis
What is Testing?
Why is testing necessary?

• In our everyday lives, we are more and more dependent on the correct execution of software, whether it is in our equipment (cell phones, engine injection, etc.), in the transactions we undertake each day (credit or debit card purchases, fund transfers, internet usage, electronic mail, etc.), or even hidden from view (back-office software for transaction processing). Software simplifies our daily lives; when it fails, the impact can be devastating.

• Testing software and systems is necessary to avoid failures reaching customers and to avoid bad publicity for the organizations involved. This is especially the case for service companies responsible for the development or testing of third-party software: the customer might not renew the contract, or might sue for damages.
Why is testing necessary?
(cont…)
• Our software and systems become more and more complex, and we rely more and more on their faultless operation. Our cell phones and personal digital assistants (PDAs) are more powerful than the mainframes of 30 years ago, simultaneously integrating agenda, notepad, and calendar functions, plus global positioning systems (GPS), cameras, email, instant messaging, games, voice recorders, music, video players, etc., not forgetting the telephone functionalities, of course. Vehicles are equipped with more and more electronic circuits and data processing systems (ESP, GPS, fuel injection, airbags, course control, cruise control, etc.), and our cell phones connect automatically (via Bluetooth) to our vehicles and their audio systems. A single small software problem can render our vehicle or our cell phone unusable.
Why is testing necessary?
(cont…)
• We also rely on other software, such as that in our credit or debit cards, where a defect can directly impact millions of users, as occurred in early 2010 when German users were victims of a major failure for over a week. We have also seen virtual high-speed trains explode (without actual victims), rail companies' customer data exposed on the internet, and problems with banking software, administration, etc.

• Because our lives rely on software, it is necessary to test this software. Software testing is undertaken to make sure that it works correctly, and to protect against defects and potentially fatal failures.
Testing is a …

Testing is a set of activities with the objective of identifying failures in a software or system and of evaluating its level of quality, in order to obtain user satisfaction. It is a set of tasks with clearly defined goals.
Defect and Failure

• The impact of a human error on a product is called a "defect"; it will produce a "failure" (a mode of operation that does not fulfill the user's expectations) if the defective code is executed when the software or system is used.

• Defects and failures can arise from different root causes, such as gaps in the developer's training, communication problems between the customer and the designers, immature design processes (from requirements gathering, to detailed design and architecture), or even oversights, misunderstandings, or incorrect transcriptions of the requirements. Among other things, these causes may result from stress or fatigue of the design teams.
Objectives for Testing

• Test objectives vary depending on the phase of the life cycle of the software. The objectives are not identical during initial design, during maintenance, or at the end of the software's usage. Similarly, they also differ according to the test level.

• During the general design or detailed design phases, testing will focus on finding the highest number of defects (or failures), in the shortest possible timescale, in order to deliver high-quality software.

• During the customer acceptance phase, testing will show that the software works properly, to obtain customer approval for usage of that software.
Objectives for Testing
(cont.)
• During the operational phases, when the software is being used, testing will focus on ensuring that the required service levels (defined in SLAs, service level agreements) are met.

• During evolutive or corrective maintenance of the software, testing aims to ensure that the corrections or evolutions are free of defects, and also that no side effects occur on the unchanged functionalities of the system or software.

• When the software is discarded and replaced by another, testing takes a snapshot of the software, to record which functionalities are present and to guarantee the quality of the data, so that migration to another platform goes smoothly, whether this is new hardware or new software. Data transfer from the old to the new environment is also important and must be tested.
Testing in Software Life Cycle
Where Testing Fits into the
Project Life Cycle
• There are many different life cycle models defined, and it is beyond the scope of this course to discuss each in detail. Testing can be applied to all of these development models.

• Testing is always present in the software development cycle; it may be implemented differently depending on the model, but the main principles are always applicable.

• In the following, we'll show the major types of life cycles and how testing fits into them.
Software Development Models

Sequential Models • where the activities are executed in sequence, one after the other, with milestones that allow the accomplishment of the objectives to be identified

Iterative Development Models • where the activities are executed iteratively until the required level of quality is reached; these models use regression testing extensively

Incremental Models • combining the two previous models
Sequential Models
(Waterfall)

Source: Homès (2012, pg. 44)


Sequential Models
(V Model)

Source: Black (2009, pg. 502)


Sequential Models
(W Model)

Source: Homès (2012, pg. 46)


Iterative Models
(Basic)

Source: Homès (2012, pg. 47)


Iterative Models
(Spiral)

Source: Black (2009, pg. 505)


Incremental Models

Source: Homès (2012, pg. 49)


Fundamental Test Process
Fundamental Test Process

Source: Homès (2012, pg. 14)


Activities in Planning

• Before any human activity, it is necessary to organize and plan the activities to be executed.

• Test planning consists of the definition of test goals, and the definition of the activities to reach these goals.

• Test planning activities include organization of the tasks, and coordination with the other stakeholders, such as development teams, support teams, user representatives, management, customers, etc.

• The level of detail will depend on the context: complex safety-critical software will not be tested with the same goals or the same focus as a video game or e-commerce software.
Activities in Control

• Control activities are executed throughout the test campaign. They are often grouped with planning activities because they ensure that what has been planned is correctly implemented.

• Control identifies deviations from planned operations, or variations with regard to planned objectives, and proposes actions to reach these objectives. This implies the ability to measure the progress of activities in terms of the resources used (including time) as well as in terms of the objectives reached.
Activities in Analysis
• Analysis of the test basis is the study of the reference documents used to design the software and the test objectives. This includes, but is not limited to:
– contractual documents, such as the contract, statement of work, and any amendments or technical attachments, etc.;
– software specifications, high-level design, detailed architecture of the components, database organization or file system organization;
– user documentation, maintenance or installation manuals, etc.;
– the risks identified or associated with the use or the development of the software;
– applicable standards, whether they are company-specific, profession-specific, or mentioned in the contractual documents.

• The analysis of the test basis allows the definition of the test objectives, their prioritization, and the evaluation of the testability of the test basis and test objectives. During this phase, we will identify the risks and test priorities (integrity level), as well as the test environments (including test data) to be acquired.
Activities in Analysis
(cont.)
• The integrity level indicates the criticality of the software for the stakeholders, and is based on attributes of the software, such as risks, safety, and security level, etc. The integrity level will impact the depth and breadth of tests to be executed, the type and level of detail of test documentation, and the minimal set of testing tasks to be executed.

• Analysis of the test basis allows you to identify the aspects to test (test objectives) and to determine how they will be tested. Traceability from the test basis to the test objectives and test conditions should be ensured, to allow quick impact analysis of change requests to the test basis (a minimal sketch of such traceability follows).
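
• As an illustration, bi-directional traceability can be kept as simple as two lookup tables. The Python sketch below is illustrative only; all requirement and test-case identifiers are hypothetical.

    # Forward mapping: test basis (requirements) to test cases.
    requirement_to_tests = {
        "REQ-01 save file as XML": ["TC-101", "TC-102"],
        "REQ-02 encrypt document": ["TC-201"],
    }

    # Reverse mapping, derived automatically, giving the bi-directional link.
    test_to_requirement = {
        tc: req for req, tests in requirement_to_tests.items() for tc in tests
    }

    def impacted_tests(changed_requirement):
        """Quick impact analysis: tests to revisit when a requirement changes."""
        return requirement_to_tests.get(changed_requirement, [])

    print(impacted_tests("REQ-01 save file as XML"))  # ['TC-101', 'TC-102']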
Activities in Design

• Test design consists of translating the test objectives into concrete test conditions, and then into test cases. Test conditions are usually abstract, while test cases are usually precise and include both test data and expected results.

• The test design phase comprises:
– identification and prioritization of the test conditions, based on an analysis of the test objects and of the structure of the software and system;
– design and prioritization of high-level test cases;
– identification of the test environment and test data required for test execution;
– provision of control and tracking information that will enable evaluation of test progress.
Activities in Implementation

• Test implementation is the conversion of test conditions into test cases and test procedures, with specific test data and precise expected results. Detailed information on the test environment and test data, as well as on the sequence of the test cases, is necessary to anticipate test execution. Test implementation tasks are (non-exhaustive list):
– finalize, implement, and order test cases based on the priorities defined. This can come from the integrity levels or other considerations such as risk analysis or the relative criticality of the components;
– develop and sequence the test procedures, by organizing test cases and test data. This can require the creation of drivers or stubs (see the sketch after this list), or even automated test cases;
– create test suites (scenarios) from test procedures and test cases, to facilitate test execution;
– define the test environment and design test data;
– ensure that the bi-directional traceability started during the analysis and test design phases is continued down to the test case level;
– provide information on the evolution of the process (metrics and measurement), so that project control and management can be efficient.
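
• The drivers and stubs mentioned above can be made concrete with a short sketch. In the Python example below, checkout() and PaymentServiceStub are hypothetical names: the stub stands in for a payment service that is not yet available in the test environment, so the test procedure can run anyway.

    class PaymentServiceStub:
        """Stub: returns canned answers in place of the real payment service."""
        def charge(self, amount):
            return {"status": "approved", "amount": amount}

    def checkout(cart_total, payment_service):
        # Hypothetical unit under test; it relies only on the charge() interface.
        result = payment_service.charge(cart_total)
        return result["status"] == "approved"

    def test_checkout_with_stub():
        # The test can be implemented before the real service exists.
        assert checkout(99.90, PaymentServiceStub())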
Activities in Execution

• Test execution in the test environment enables the identification of differences between the expected and the actual results, and includes tasks linked to the execution of test cases, test procedures, or test suites. This includes:
– ensuring that the test environment is ready for use, including the availability of test data;
– executing test cases, test procedures, and test suites, either manually or automatically, according to the planned sequence and priority;
– recording test execution results and identifying the versions of the component tested, of the test tools, and of the test environment;
– comparing expected results with actual results;
– identifying and analyzing any discrepancy between expected and actual results, and clearly defining the cause of the discrepancy, recording these differences as incidents or defects with the highest level of detail possible to facilitate later defect correction (a minimal sketch of this cycle follows);
– providing tracking and control information to allow efficient management of test activities.
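
• The execute-compare-record cycle described above can be sketched in a few lines of Python. The test cases, expected values, and version string below are hypothetical.

    def run_suite(test_cases, component_version):
        """Execute each case, compare expected with actual, record discrepancies."""
        incidents = []
        for name, run, expected in test_cases:
            actual = run()
            if actual != expected:
                incidents.append({
                    "test": name,
                    "expected": expected,
                    "actual": actual,
                    "component_version": component_version,  # identify what was tested
                })
        return incidents

    cases = [("addition", lambda: 2 + 2, 4), ("broken", lambda: 2 + 2, 5)]
    print(run_suite(cases, "1.0.3"))  # reports one incident, for "broken"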
Activities in Analysis
of Exit Criteria
• Analysis of the exit criteria evaluates the test object with regard to the test objectives and criteria defined during the planning phase. This evaluation takes place during test execution and, depending on the results, enables other test activities to be envisaged.

• Test completion evaluation includes:
– analysis of the test execution logs and reports (notes taken during test execution);
– comparison of the objectives reached versus the objectives identified during the planning phase, and evaluation of the need to test more thoroughly or to modify the exit criteria.
Activities in Reporting

• Test activity results interest a large number of stakeholders:
– testers, to evaluate their own progress and efficiency;
– developers, to evaluate the quality of their code and the remaining workload, whether on the basis of remaining defects to be fixed or components to deliver;
– quality assurance managers, to determine the required process improvement activities, whether in the requirement elicitation phase or reviews, or during design or test phases;
– customers and end users, or marketing, to advise them when the software or system will be ready and released to the market;
– upper management, to evaluate the anticipated remaining expenses and to evaluate the efficiency and effectiveness of the activities to date.

• These stakeholders must be informed, via progress reports, statistics, and graphs, of the answers to their queries, enabling them to take appropriate decisions based on adequate information. Reporting activities are based on the tracking and control data provided by each test activity.
Activities in Closure

• Once the software or system is considered ready to be delivered (to the next test phase, or to the market), or the test project is considered complete (either successfully or because it was cancelled), it is necessary to close the test activities. This consists of:
– ensuring that the planned components have been delivered;
– determining the actions that must be taken to rectify unfixed incidents or remaining defects. This can be closure without any action, raising change requests on a future version, or delivery of data to the support team to enable them to anticipate user questions;
– documenting the acceptance of the software or system;
– archiving test components and test data, drivers and stubs, test environment parameters and infrastructure for future usage (i.e. for the next version of the software or system);
– if necessary, delivering the archived components to the team in charge of software maintenance;
– identifying possible lessons learned or return on experience, so as to document them and improve future projects and deliveries, and raise the organization's maturity level.
Principles in Software Testing
Testing Principles

1. Testing is the process of exercising a software component using a selected set of test cases, with the intent of (i) revealing defects, and (ii) evaluating quality.

2. When the test objective is to detect defects, then a good test case is one that has a high probability of revealing a yet-undetected defect (or defects).

3. Test results should be inspected meticulously.

4. A test case must contain the expected output or result.

5. Test cases should be developed for both valid and invalid input conditions (see the sketch below).
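
• Principles 4 and 5 can be illustrated with a short pytest sketch. The parse_age() function below is hypothetical; the point is that every case states its expected result, and that invalid inputs are exercised alongside valid ones.

    import pytest

    def parse_age(text):
        # Hypothetical unit under test: converts user input into an age.
        value = int(text)  # raises ValueError on non-numeric input
        if not 0 <= value <= 150:
            raise ValueError("age out of range")
        return value

    @pytest.mark.parametrize("text, expected", [("0", 0), ("42", 42), ("150", 150)])
    def test_valid_input(text, expected):
        assert parse_age(text) == expected  # expected output stated in the case

    @pytest.mark.parametrize("text", ["-1", "151", "forty", ""])
    def test_invalid_input(text):
        with pytest.raises(ValueError):  # invalid conditions deserve cases too
            parse_age(text)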
Testing Principles
(cont.)
6. The probability of the existence of additional defects in a software component is proportional to the number of defects already detected in that component.

7. Testing should be carried out by a group that is independent of the development group.

8. Tests must be repeatable and reusable.

9. Testing should be planned.

10. Testing activities should be integrated into the software life cycle.

11. Testing is a creative and challenging task.
Test Granularity
Test Granularity

• Test granularity refers to the fineness or coarseness of a test's focus. A fine-grained test case allows the tester to check low-level details, often internal to the system. A coarse-grained test case provides the tester with information about general system behavior. You can think of test granularity as running along a spectrum ranging from structural (white-box) to behavioral (black-box and live) tests.

Source: Black (2009, pg. 2)


Structural (White Box) Tests

• Structural tests (also known as white-box tests and glass-box tests) find bugs in low-level structural elements such as lines of code, database schemas, chips, subassemblies, and interfaces. The tester bases structural tests on how a system operates.
– For example, a structural test might reveal that the database that stores user preferences has space to store an 80-character username, but that the field allows the user to enter only 40 characters.

• Structural testing involves a detailed technical knowledge of the system. For software, testers create structural tests by looking at the code and the data structures themselves. For hardware, testers create structural tests to compare chip specifications to readings on oscilloscopes or voltage meters.
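
• The username example above can be expressed as a tiny white-box check. The constants below are hypothetical stand-ins for values a tester would read from the actual schema and input validator.

    DB_USERNAME_COLUMN_WIDTH = 80  # from the (hypothetical) database schema
    UI_USERNAME_MAX_INPUT = 40     # from the (hypothetical) input-field validator

    def test_username_field_matches_column_width():
        # White-box check: the UI limit should not silently waste column capacity.
        assert UI_USERNAME_MAX_INPUT == DB_USERNAME_COLUMN_WIDTH

• Run against the values above, this test fails, revealing exactly the kind of low-level mismatch that structural testing is meant to catch.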
Behavioral (Black Box) Tests

• Testers use behavioral tests (also known as black-box tests) to find bugs in high-level operations, such as major features, operational profiles, and customer scenarios. Testers can create black-box functional tests based on what a system should do.
– For example, if SpeedyWriter should include a feature that saves files in XML format, then you should test whether it does so. Testers can also create black-box non-functional tests based on how a system should do what it does.
– For example, if DataRocket can achieve an effective throughput of only 10 Mbps across two 1-gigabit Ethernet connections acting as a bridge, a black-box network-performance test can find this bug.

• Behavioral testing involves a detailed understanding of the application domain, the business problem that the system solves, and the mission the system serves.
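
• A black-box test for the SpeedyWriter XML example might look like the pytest sketch below: the test states only what the feature should do, not how it is implemented. The save_as_xml() function is a hypothetical stand-in so that the sketch runs.

    import xml.etree.ElementTree as ET

    def save_as_xml(text, path):
        # Hypothetical stand-in for the real "save as XML" feature.
        ET.ElementTree(ET.Element("document", {"body": text})).write(path)

    def test_save_as_xml_is_well_formed(tmp_path):  # tmp_path: pytest fixture
        target = tmp_path / "doc.xml"
        save_as_xml("hello", target)
        root = ET.parse(target).getroot()  # parsing fails if the XML is malformed
        assert root.get("body") == "hello"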
Live Tests

• Live tests involve putting customers, content experts, early adopters, and other end users in front of the system. In some cases, we encourage the testers to try to break the system. Beta testing is a well-known form of bug-driven live testing.
– For example, if the SpeedyWriter product has certain configuration-specific bugs, live testing might be the best way to catch those bugs specific to unusual or obscure configurations. In other cases, the testers try to demonstrate conformance to requirements, as in acceptance testing, another common form of live testing.

• Live tests can follow general scripts or checklists, but live tests are often ad hoc (worst case) or exploratory (best case). Live testing is a perfect fit for technical support, marketing, and sales organizations whose members don't know formal test techniques but do know the application domain and the product intimately. This understanding, along with recollections of the nasty bugs that have bitten them before, allows them to find bugs that developers and testers miss.
Test Phases
Test Phase Sequencing

Source: Black (2009, pg. 9)


Unit Testing

• Unit testing focuses on an individual piece of code.

• Unit testing is not usually a test phase in a project-wide sense of the term, but rather the last step of writing a piece of code.

• Unit tests are white-box in the sense that the programmer knows the internal structure of the unit under test and is concerned with how the testing affects the internal operations. Therefore, programmers usually do the unit testing. Sometimes they test their own code. Sometimes they test other programmers' code, often referred to as buddy tests or code swaps. Sometimes two programmers collaborate on both the writing and unit testing of code, such as the pair programming technique advocated by practitioners of the agile development approach called Extreme Programming.
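
• A minimal sketch of a unit test written as the last step of coding, using Python's unittest module; the word_count() helper is hypothetical.

    import unittest

    def word_count(text):
        # Unit under test: counts whitespace-separated words.
        return len(text.split())

    class WordCountTest(unittest.TestCase):
        def test_simple_sentence(self):
            self.assertEqual(word_count("testing finds defects"), 3)

        def test_empty_string(self):
            self.assertEqual(word_count(""), 0)

    if __name__ == "__main__":
        unittest.main()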
Component or Subsystem
Testing
• Component testing applies to some collection of units that provide some defined set of capabilities within the system.

• Component test execution usually starts when the first component of the product becomes functional, along with whatever scaffolding, stubs, or drivers are needed to operate this component without the rest of the system.
– In our SpeedyWriter product, for example, file manipulation is a component.
– For DataRocket, the component test phase would focus on elements such as the SCSI subsystem: the controller, the hard-disk drives, the CD/DVD drive, and the tape backup unit.

• Component testing should use both structural and behavioral techniques.
Integration or Product Testing

• Integration or product testing focuses on the relationships and interfaces between pairs of components and groups of components in the system under test, often in a staged fashion. Integration testing must happen in coordination with the project-level activity of integrating the entire system: putting all the constituent components together, a few components at a time. The staging of integration and integration testing must follow the same plan (sometimes called the build plan) so that the right set of components comes together in the right way and at the right time for the earliest possible discovery of the most dangerous integration bugs.
– For SpeedyWriter, integration testing might start when the developers integrate the file-manipulation component with the graphical user interface (GUI) and continue as developers integrate more components one, two, or three at a time, until the product is feature-complete.
– For DataRocket, integration testing might begin when the engineers integrate the motherboard with the power supply, continuing until all components are in the case.
String Testing

• String testing focuses on problems in typical usage scripts and customer operational strings. This phase is a rare bird.
– In the case of SpeedyWriter, string testing might involve cases such as encrypting and decrypting a document, or creating, printing, and saving a document.
System Testing

• System testing encompasses the entire system, fully integrated. Sometimes, as in installation and usability testing, these tests look at the system from a customer or end-user point of view. Other times, these tests stress particular aspects of the system that users might not notice, but are critical to proper system behavior.
– For SpeedyWriter, system testing would address such concerns as installation, performance, and printer compatibility.
– For DataRocket, system testing would cover issues such as performance and network compatibility.

• System testing tends to be behavioral.
Acceptance
or User-Acceptance Testing
• In commercial software and hardware development, acceptance tests are sometimes called alpha tests (executed by in-house users) and beta tests (executed by current and potential customers). Alpha and beta tests, when performed, might be about demonstrating a product's readiness for market, although many organizations also use these tests to find bugs that can't be (or weren't) detected in the system testing process.

• Acceptance testing can involve live data, environments, and user scenarios. The focus is usually on typical product-usage scenarios, not extreme conditions. Therefore, marketing, sales, technical support, beta customers, and even company executives are perfect candidates to run acceptance tests.
Assuring Quality in Testing
Software Quality

• Two concise definitions for quality are found in the IEEE Standard Glossary of Software Engineering Terminology [2]:

1. Quality relates to the degree to which a system, system component, or process meets specified requirements.

2. Quality relates to the degree to which a system, system component, or process meets customer or user needs or expectations.
Examples of Quality Attributes
(IEEE)
Correctness • the degree to which the system performs its intended function

Reliability • the degree to which the software is expected to perform its required functions under stated conditions for a stated period of time

Usability • relates to the degree of effort needed to learn, operate, prepare input, and interpret output of the software

Integrity • relates to the system's ability to withstand both intentional and accidental attacks

Portability • relates to the ability of the software to be transferred from one environment to another

Maintainability • the effort needed to make changes in the software

Interoperability • the effort needed to link or couple one system to another

Quality Characteristics
(ISO 9126)

Functionality • suitability, accuracy, interoperability, security, compliance

Reliability • maturity (robustness), fault tolerance, recoverability, compliance

Usability • understandability, learnability, operability, attractiveness, compliance

Efficiency • time behavior, resource utilization, compliance

Maintainability • analyzability, changeability, stability, testability, compliance

Portability • adaptability, installability, coexistence, replaceability, compliance
Software Quality Assurance
(SQA) Group
• The software quality assurance (SQA) group is a team of people with the necessary training and skills to ensure that all necessary actions are taken during the development process so that the resulting software conforms to established technical requirements.

• They work with project managers and testers to develop quality-related policies and quality assurance plans for each project. The group is also involved in measurement collection and analysis, record keeping, and reporting. The SQA team members participate in reviews and audits (special types of reviews that focus on adherence to standards, guidelines, and procedures), record and track problems, and verify that corrections have been made. They also play a role in software configuration management.
High-Fidelity Test System

Source: Black (2009, pg. 12)


Low-Fidelity Test System

Source: Black (2009, pg. 13)


Quality Risk Analysis
Quality Risk Analysis
Techniques and Templates
• Several major types of quality risk analysis techniques:
– Informal
– ISO 9126
– Cost of exposure
– Hazard analysis
– Failure mode and effect analysis

• The two most commonly used techniques, though, are the informal technique and failure mode and effect analysis (FMEA).
Failure Mode and Effect
Analysis (FMEA)

Source: Black (2009, pg. 33)


FMEA Columns

• The System Function or Feature column is the starting point for the analysis. In most rows, you enter a concise description of a system function. If the entry represents a category, you must break it down into more specific functions or features in subsequent rows. Getting the level of detail right is a bit tricky: with too much detail, you can create an overly long, hard-to-read chart; with too little detail, you will have too many failure modes associated with each function.

• In the Potential Failure Mode(s)-Quality Risk(s) column, for each specific function or feature (but not for the category itself), you identify the ways you might encounter a failure. These are quality risks associated with the loss of a specific system function. Each specific function or feature can have multiple failure modes.
FMEA Columns
(cont.)
• In the Potential Effect(s) of Failure column, you list how each failure mode can affect the user, in one or more ways.

• In the Critical? column, you indicate whether the potential effect has critical consequences for the user. Is the product feature or function completely unusable if this failure mode occurs?

• In the Severity column, you capture the effect of the failure (immediate or delayed) on the system. This example uses a scale from 1 to 5, as follows:
– 1. Loss of data, hardware damage, or a safety issue
– 2. Loss of functionality with no workaround
– 3. Loss of functionality with a workaround
– 4. Partial loss of functionality
– 5. Cosmetic or trivial
FMEA Columns
(cont.)
• In the Potential Cause(s) of Failure column, you list possible factors that might trigger the failure: for example, operating-system error, user error, or normal use.

• In the Priority column, you rate the effect of failure on users, customers, or operators. This example uses a scale from 1 to 5, as follows:
– 1. Complete loss of system value
– 2. Unacceptable loss of system value
– 3. Possibly acceptable reduction in system value
– 4. Acceptable reduction in system value
– 5. Negligible reduction in system value
FMEA Columns
(cont.)
• In the Detection Method(s) column, you list a currently existing method or procedure, such as development activities or vendor testing, that can find the problem before it affects users, excluding any future actions (such as creating and executing test suites) you might perform to catch it.

• In the Likelihood column, you have a number that represents the vulnerability of the system, in terms of: a) existence in the product (e.g., based on technical risk factors such as complexity and defect history); b) escape from the current development process; and c) intrusion on user operations. This example uses the following 1-to-5 scale:
– 1. Certain to affect all users
– 2. Likely to impact some users
– 3. Possible impact on some users
– 4. Limited impact to few users
– 5. Unimaginable in actual usage
FMEA Columns
(cont.)
• As with the informal technique, the RPN (Risk Priority Number) column tells you how important it is to test this particular failure mode. The risk priority number is the product of the severity, the priority, and the likelihood. Because this example uses values from 1 to 5 for all three of these parameters, the RPN ranges from 1 to 125 (a minimal calculation sketch follows).
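
• The RPN arithmetic is simple enough to sketch directly in Python; the failure modes and ratings below are hypothetical. Remember that on these scales 1 is the worst rating, so a low RPN marks a high-priority risk.

    failure_modes = [
        {"risk": "file save corrupts data", "severity": 1, "priority": 1, "likelihood": 3},
        {"risk": "tooltip typo",            "severity": 5, "priority": 5, "likelihood": 2},
    ]

    for fm in failure_modes:
        # RPN = severity x priority x likelihood, so it ranges from 1 to 125.
        fm["rpn"] = fm["severity"] * fm["priority"] * fm["likelihood"]

    # Address the riskiest failure modes first (smallest RPN first).
    for fm in sorted(failure_modes, key=lambda f: f["rpn"]):
        print(fm["rpn"], fm["risk"])  # 3 file save corrupts data; 50 tooltip typo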

• The Recommended Action column contains one or more simple action items for each potential effect to reduce the related risk (which pushes the risk priority number toward 125). For the test team, most recommended actions involve creating a test case that influences the likelihood rating.
FMEA Columns
(cont.)
• The Who/When? column indicates who is responsible for each recommended action and when they are responsible for it (for example, in which test phase).

• The References column provides references for more information about the quality risk. Usually this involves product specifications, a requirements document, and the like.

• The Action Results columns allow you to record the influence of the actions taken on the priority, severity, likelihood, and RPN values. You will use these columns after you have implemented your tests, not during the initial FMEA.
Thank You
