Software Testing M1

Software Testing

Module 1: Basics of Software Testing, Basic Principles, Test Case Selection and Adequacy
Presented By,
Kavya HV
Asst Professor
Dept of MCA
PESITM
What is Software?
• Software refers to a set of instructions (or) programs that tell a computer
how to perform specific tasks.
• Software can be broadly categorized into two main types: system software
and application software.

1) System Software: System software is a type of computer program that is designed to run a computer's hardware and application programs.
Eg: Operating System: An operating system is a type of system software that
manages computer hardware and provides services for computer programs.
Examples include Microsoft Windows, Mac OS, Linux, and Android.
2) Application Software: Application software is created to execute a
certain set of tasks.
Eg: Microsoft Word: Microsoft Word is an example of application
software. It is a word processing program that allows users to create, edit,
and format documents.
• Other examples of application software include web browsers,
spreadsheet programs, and video editing software.
Testing:
• Testing refers to the process of evaluating a software application to
identify any defects, ensure that it meets specified requirements and
ensure its overall quality.
• The primary goal of testing is to detect and fix bugs or errors in the
software to ensure that it functions as intended and meets the needs of
users.
Software Testing: Software testing is a systematic process of evaluating
a software application to identify any defects, ensure that it meets specified
requirements, and verify that it behaves as expected.
• The primary objective of software testing is to ensure the quality and
reliability of the software, making it fit for its intended purpose.

Key objectives of software testing include:


1) Error Detection: Identifying and documenting defects or errors in the
software. This includes bugs, glitches, and other issues that may impact the
functionality or performance of the software.
2) Verification and Validation: Verifying that the software meets the specified
requirements and validating that it behaves as intended in the specified
environment.
3) Ensuring Quality: Ensuring that the software meets quality standards and
satisfies the needs and expectations of users.
4) Usability Assessment: Evaluating the user interface and overall usability of
the software to ensure a positive user experience.
Humans, Errors and Testing:
• Humans can make errors in any field, for example in speech, in surgery, in driving, and even in software development.
• The primary goal of testing is to determine if the thoughts, actions and
products are as desired.
• Testing of thoughts is usually designed to determine if a concept (or)
method has been understood satisfactorily.
• Testing of actions is designed to check if a skill that results in the actions
has been acquired satisfactorily.
• Testing of a product is designed to check if the product behaves as
desired.
Errors, Faults and Failures:
Error: An error occurs in the process of writing a program.
• A mistake made by a programmer during coding.
• An error can be a grammatical error in one or more of the code lines.
Fault: An incorrect step, process (or) data definition in a computer program.
• When a faulty piece of code is executed, it can lead to an incorrect state that propagates to the program’s output.
Failure: The inability of a system (or) component to perform its required
functions within specified performance requirements.
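The distinction among the three terms can be illustrated with a small hypothetical Python function: the programmer's mistake (the error) leaves a fault in the code, and executing that fault on certain inputs produces a failure.

```python
def average(numbers):
    """Return the arithmetic mean of a non-empty list.

    The programmer's error: dividing by a hard-coded 2 instead of
    len(numbers). That mistake is now a fault in the code.
    """
    return sum(numbers) / 2  # fault: should be len(numbers)

# No failure here: for a two-element list the fault stays hidden.
print(average([4, 6]))     # prints 5.0, which happens to be correct

# Failure: the fault is executed on an input that exposes it.
print(average([1, 2, 3]))  # prints 3.0, but the correct mean is 2.0
```

Note that a fault may lie dormant for many executions; a failure is observed only when some input drives the program through the faulty code into an incorrect output.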
Test Automation: Test automation is the process of using automation tools
to maintain test data, execute tests and analyze test results to improve software
quality.
• Executing a large number of tests manually is a tiring process for a human, which is why test automation is so valuable.
• Most software development organizations automate test-related tasks such as graphical user interface testing and I/O device driver testing.
Developers and Testers as two Roles:
• A developer is one who writes code, and a tester is one who tests code. The two are different and complementary roles; thus, the same individual can be both a developer and a tester. It is hard to imagine an individual who assumes the role of a developer but never that of a tester, and vice versa.
• Certainly, within a software development organization, the primary role of
individual might be to test the software or a product that is the role of a
tester.
• Similarly, the primary role of an individual who designs applications and
writes code is that of a developer.
Software Quality: Software quality is a measure of how well software meets its requirements and satisfies the expectations of its users.
There exist several measures of software quality. These can be divided into static and dynamic quality attributes.
• Static quality attributes refer to the actual code and related documentation.
• Dynamic quality attributes relate to the behavior of the application while
in use.
Dynamic quality attributes includes the following:
• Reliability
• Correctness
• Completeness
• Consistency
• Usability
• Performance

1) Reliability: It refers to the probability of failure-free operation of software over a given time interval and under given conditions.
2) Correctness: The ability of software products to perform their exact tasks, as
defined by their specification.
3) Completeness: Refers to the availability of all the features listed in the
requirements.
• An incomplete software is one that does not fully implement all features
required.
4) Consistency: Refers to whether the software behaves in a uniform and predictable way across similar situations and features.
5) Usability: Usability refers to the quality of the user’s experience when
interacting with software.
6) Performance: Refers to the time that application takes to perform a
requested task.
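The reliability attribute above can be made concrete with a small worked example. Under the common (assumed) constant-failure-rate model, the probability of failure-free operation over an interval t is R(t) = e^(−λt), where λ is the failure rate; the figures below are invented for illustration.

```python
import math

def reliability(failure_rate, hours):
    """Probability of failure-free operation over `hours`,
    assuming a constant failure rate (exponential model)."""
    return math.exp(-failure_rate * hours)

# Hypothetical figures: 0.001 failures/hour over a 100-hour interval.
print(round(reliability(0.001, 100), 4))  # ≈ 0.9048
```

This is only one of several reliability models; the definition in the text (probability of failure-free operation under given conditions) does not commit to any particular one.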
Requirements, Behavior and Correctness:
• Product (or) software are designed in response to requirements
(Requirements specify the functions that a product is expected to
perform).
• During the development of the product, the requirement might have
changed from what was stated originally.
• Regardless of any change, the expected behavior of the product is
determined by the tester’s understanding of the requirements during
testing.
Behavior and correctness:
• A program is considered correct if it behaves as desired on all possible test
inputs.
• Correctness is judged against the specification, which determines how users may interact with the software and how the software should behave when used correctly.
• If the software behaves incorrectly, a task may take a considerable amount of time to achieve, or may be impossible to achieve.
Correctness versus reliability:
Reliability: The ability of a system to perform its required functions under stated conditions whenever required; that is, it relates to the software's ability to function, in a given environment, for a particular amount of time.
Correctness: A program is considered correct if it behaves as desired on all possible test inputs.
• Correctness is the degree to which a system is free from defects in its specification, design, and implementation.
Testing and Debugging:
• Testing is the process of verifying and validating that a software or
application is bug free, meets the user requirements effectively and
efficiently with handling all the exceptional cases.
• Debugging is the process of fixing a bug in the software. It can be defined
as the identifying, analyzing and removing errors. This activity begins
after the software fails to execute properly and concludes by solving the
problem and successfully testing the software.
Figure: A test and debug cycle
Following are the main steps in Testing and Debugging Cycle:

• Preparing a test plan
• Constructing test data
• Executing the program
• Specifying program behavior
• Assessing the correctness of program behavior
• Construction of an oracle
1) Preparing a test plan:
• A test plan specifies how the sort program is to be tested so that it meets the given requirements; for example:
1) Execute the program on at least two input sequence one with “A” and
the other with “D” as request characters.
2) Execute the program on an empty input sequence
3) Test the program for robustness against erroneous input such as “R”
typed in as the request character.
4) All failures of the test program should be recorded in a suitable file
using the company failure report form.
2) Constructing Test Data:
• A test case is a pair consisting of test data to be input to the program and
the expected output.
• The test data is a set of values, one for each input variable.
• A test set is a collection of one or more cases.
Program requirements and the test plan help in the construction of test data.
Execution of the program on test data might begin after all or a few test
cases have been constructed.
Based on the results obtained, the testers decide whether to continue the
construction of additional test cases or to enter the debugging phase.
The following test cases are generated for the sort program using the test plan
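A minimal sketch of such a test set in Python, following the test plan above. Each test case pairs test data with the expected output; the request characters "A" (ascending), "D" (descending), and the erroneous "R" are taken from the plan, while the particular input sequences are illustrative assumptions.

```python
# Each test case is a pair: (test data, expected output).
# Test data here is (request_char, input_sequence).
test_set = [
    (("A", [3, 1, 2]), [1, 2, 3]),   # ascending sort
    (("D", [3, 1, 2]), [3, 2, 1]),   # descending sort
    (("A", []), []),                  # empty input sequence
    (("R", [3, 1, 2]), "error"),     # robustness: invalid request char
]

for (request_char, data), expected in test_set:
    print(request_char, data, "->", expected)
```

A test set, as defined above, is simply this collection of one or more test cases.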
3) Executing the program:
• Testers might be able to construct a test harness to aid in program
execution. The harness initializes any global variables, inputs a test case,
and executes the program. The output generated by the program may be
saved in a file for subsequent examination by a tester.
• Test harness is a system that contains a set of software, test data and
tools, which are used to perform testing of applications under various
environments.
Example: The test harness shown in figure reads an input sequence,
checks for its correctness, and then calls sort. The sorted array
sorted_sequence returned by sort is printed using print_sequence. The test
cases are assumed to be in the Test Pool.
Figure: A simple test harness to test the sort program

In preparing this test harness, the assumptions made are:
• sort is coded as a procedure.
• The test_setup procedure is invoked first to set up the test that includes
identifying and opening the file containing tests.
• The get_input procedure accepts/reads the variables in the
sequence as request_char, num_items and in_numbers.
• The input is checked prior to calling sort by the check_input
procedure.
• check_output procedure serves as the oracle that checks if the
program under test behaves correctly.
• report_failure is invoked when the output from sort is incorrect.
• print_sequence prints the sequence generated by the sort program.
This also can be saved in file for subsequent examination.
4) Specifying program behavior:
A state sequence diagram can be used to specify the behavioral requirements. The same specification can then be used during testing to check whether the application conforms to the requirements.
Figure: A state sequence for myapp showing how the application is
expected to behave when the user selects the open option under the file menu
5) Assessing the correctness of program:
An important step in testing a program is that the tester determines if the
observed behavior of the program under test is correct or not. This step can be
further divided into two smaller steps.
• In the first step, one observes the behavior and in the second, one analyzes
the observed behavior to check if it is correct or not.
The entity that performs the task of checking the correctness of the observed
behavior is known as an Oracle.
An oracle is software designed to check the behavior of the program under test.
Figure shows the relationship between the program under test and the
oracle.
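As a sketch, an oracle for a sort program need not re-implement the sort: it can check the input/output relationship directly, namely that the observed output is in order and is a permutation of the input. The function name below is an illustrative assumption.

```python
from collections import Counter

def sort_oracle(input_seq, observed_output):
    """Return True iff the observed behavior is correct: the output
    is in ascending order and is a permutation of the input."""
    ordered = all(a <= b for a, b in
                  zip(observed_output, observed_output[1:]))
    permutation = Counter(input_seq) == Counter(observed_output)
    return ordered and permutation

print(sort_oracle([3, 1, 2], [1, 2, 3]))  # True: correct behavior
print(sort_oracle([3, 1, 2], [1, 2, 2]))  # False: not a permutation
```

Checking the input/output relationship, rather than recomputing the answer, keeps the oracle independent of the implementation it is judging.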
6) Construction of oracles:
Construction of automated oracles, such as one to check a matrix multiplication program or a sort program, requires determining the input/output relationship.
When tests are generated from models such as finite-state machines (FSMs) or
state charts, both inputs and the corresponding outputs are available.
This makes it possible to construct an oracle while generating the tests.
Example:
Consider a program named HVideo that allows one to keep track of home
videos.
In the data entry mode, it displays a screen in which the user types
information about a DVD.
In search mode, the program displays a screen in which a user can type some
attribute of the video being searched for and set up a search criterion.
To test HVideo we need to create an oracle that checks whether the program functions correctly in data entry and search modes.
The input generator generates a data entry request. The input generator now
requests the oracle to test if HVideo performed its task correctly on the input
given for data entry.

The oracle uses the input to check if the information to be entered into the
database has been entered correctly or not. The oracle returns a pass or no pass
to the input generator.
Test Metrics: Test metrics are indicators of the efficiency, effectiveness,
quality and performance of software testing techniques.
• The term metric refers to a standard of measurement. In software testing,
there exist a variety of metrics.
• These metrics allow professionals to collect data about various testing
procedures and devise ways to make them more efficient.

Figure: Types of metrics used in software testing and their relationships


The four general core areas that assist in the design of metrics are:
1) Schedule-related metrics: These measure actual completion times of various activities and compare them with the estimated times to completion.
2) Resource-related metrics: These measure the resources consumed during testing, such as tester time and the number of tests executed.
3) Quality-related metrics: These measure the quality of a product or a process.
4) Size-related metrics: These measure the size of various objects, such as the source code and the number of tests in a test suite.
Types of Test Metrics:
1) Organizational metrics:
Metrics at the level of an organization are useful
in overall project planning and management.
Eg: The number of defects reported after product release, averaged over a set
of products developed and marketed by an organization is a useful metric of
product quality at the organizational level.
• Organizational metrics allow senior management to monitor the overall strength of the organization and point to areas of weakness. Thus, these metrics help senior management set new goals and plan for the resources needed to realize them.
2) Project metrics: Project metrics are the metrics used by the project
manager to check the project's progress.
• Project metrics are used to assess a project's overall quality, to estimate a project's resources, and to determine costs and productivity.
3) Process Metrics: A project’s characteristics and execution are defined by
process metrics.
4) Product Metrics: A product’s size, design, performance, quality, and
complexity are defined by product metrics.
• It deals with the quality of the software product.
• Developers can improve the quality of their software development by
utilizing these features.
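A couple of these metrics can be computed directly from project data. A hedged sketch with made-up figures: defect density is a common quality/size metric, and schedule variance a common schedule metric; the function names are illustrative assumptions.

```python
def defect_density(defects_found, kloc):
    """Quality/size metric: defects per thousand lines of code."""
    return defects_found / kloc

def schedule_variance(actual_days, estimated_days):
    """Schedule metric: deviation of actual completion from estimate."""
    return actual_days - estimated_days

# Hypothetical project data.
print(defect_density(30, 12.0))   # 2.5 defects per KLOC
print(schedule_variance(45, 40))  # finished 5 days late
```

Collected over many projects, such figures become the organizational and project metrics described above.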
Software and Hardware Testing:
1) Cost of bug fixes: Hardware testing needs to be thorough and precise because, if a bug is missed, the cost of fixing it later is huge, whereas in software it is not as costly.
2) Need for test cases: Software testing requires many test cases to be written, whereas hardware testing needs far fewer; mostly we have to verify whether the device is working correctly.
3) Cost of testing: The cost of testing software is relatively modest because all we need is a computer or mobile device. Testing hardware, however, can require specialized machines or standard testing devices that are expensive.
4) Who does the testing: Software testing is done by specialized QA engineers, whereas hardware testing is usually done by the product engineers.
5) Setting up the labs: For hardware testing, we need dedicated test labs where the hardware devices and associated infrastructure can be set up, whereas software testing requires much less lab infrastructure.
Testing and Verification:
• Program verification aims at proving the correctness of a program by showing that it contains no errors. Verification aims at showing that a given program works for all possible inputs that satisfy a set of conditions, whereas testing checks the program's behavior on a selected set of inputs.
• Testing is not a perfect technique: a program may still contain errors even after a set of tests executes successfully, but each successful test increases our confidence in the correctness of the application.
• Verification promises to establish that a program is free from errors, but a closer look reveals that it has its own weaknesses. The person who verified a program might have made a mistake in the verification process; there might be an incorrect assumption about the input conditions, or incorrect assumptions about the components that interface with the program, and so on. Thus, neither verification nor testing is a perfect technique for proving the correctness of programs.
Defect Management: Defect Management is an integral part of a
development and test process in many software development organizations. It
is a sub process of the development process. It entails the following:
 Defect prevention
 Discovery
 Classification
 Resolution
 Prediction
 Recording and reporting
1) Defect Prevention: Means analyzing defects that were encountered in the
past and taking specific actions to prevent the occurrence of those types of
defects in the future.
• A measure to ensure that the kinds of defects detected so far do not appear or occur again.
• It is achieved through a variety of processes and tools, such as:
 Good coding techniques.
 Unit test plans.
 Code Inspections.
2) Defect Discovery: Capturing and identifying defects.
• Defect discovery is the identification of defects in response to failures
observed during dynamic testing or found during static testing.
• It involves debugging the code under test.
3) Defect Classification: It involves classifying the defects found during the inspection process, using one or more classification schemes supported by the defect-tracking system.
• Defects found are classified and recorded in a database. Classification becomes important in dealing with the defects.
4) Resolution: Each defect, when recorded, is marked ‘open’, indicating that it needs to be resolved. Resolution requires careful scrutiny of the defect, identifying a fix, implementing the fix, testing the fix, and finally closing the defect, so that every recorded defect is resolved prior to release.
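The resolution life cycle above can be sketched as a simple record that moves from ‘open’ through ‘fixed’ to ‘closed’; the field names and status values are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class Defect:
    identifier: int
    description: str
    status: str = "open"   # every recorded defect starts as open
    history: list = field(default_factory=list)

    def fix(self, note):
        self.history.append("fixed: " + note)
        self.status = "fixed"

    def close(self, note):
        # a defect is closed only after its fix has been tested
        assert self.status == "fixed", "test the fix before closing"
        self.history.append("closed: " + note)
        self.status = "closed"

d = Defect(101, "sort ignores the request character")
d.fix("use request_char when calling the sort routine")
d.close("regression test passes")
print(d.status)  # closed
```

Real defect-tracking systems add many more states (e.g. deferred, duplicate), but the open-to-closed discipline is the same.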
5) Defect Prediction:
• Organizations often perform source code analysis to predict how many defects an application might contain before it enters the testing phase.
• Advanced statistical techniques are used to predict defects during the test
process.
Execution History: Execution history of a program, also known as
execution trace, is an organized collection of information about various
elements of a program during a given execution. An execution slice is an
executable subsequence of execution history. There are several ways to
represent an execution history,
• Sequence in which the functions in a given program are executed against a
given test input.
• Sequence in which program blocks are executed.
• Sequence of objects and the corresponding methods accessed, for object-oriented languages such as Java. An execution history may also include values of program variables.
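A sketch of collecting one form of execution history, the sequence of functions executed against a given test input, using Python's built-in tracing hook. The two traced functions are invented for illustration.

```python
import sys

calls = []  # execution history: sequence of functions executed

def tracer(frame, event, arg):
    # record every function entry; skip per-line tracing
    if event == "call":
        calls.append(frame.f_code.co_name)
    return None

def helper(x):
    return x * 2

def program(x):
    return helper(x) + 1

sys.settrace(tracer)   # start recording the execution history
program(5)             # run the program on a given test input
sys.settrace(None)     # stop recording
print(calls)           # ['program', 'helper']
```

The same hook can be extended to record executed blocks or variable values, yielding the richer execution histories described above; an execution slice would be any subsequence of the `calls` list.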
Test Generation Strategies: The process of creating a set of test data or
test cases for testing the adequacy of new or revised software applications.
• Test generation uses a source document. In the most informal settings, the source document resides in the mind of the tester, who generates tests based on knowledge of the requirements.
There are different types of test generation:
1) Model-based
2) Specification-based
3) Code-based
1) Model-based test generation: Here, models are used as the basis for designing and generating test cases. Such models include finite-state machines and timed I/O automata, and they are usually given in a graphical representation.
2) Specification-based test generation: Here, tests are derived from specifications written in a formal, often mathematical, notation.
3) Code-based test generation: There also exist techniques to generate tests directly from the code, i.e., code-based test generation.
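A sketch of model-based generation from a small finite-state machine. The states and inputs below are invented for illustration; the strategy, one test per transition, drives the machine to the transition's source state and then applies the transition's input, with the end state as the expected output.

```python
from collections import deque

# FSM model: (state, input) -> next state (toy model, assumed).
transitions = {
    ("idle", "open"): "editing",
    ("editing", "save"): "idle",
    ("editing", "close"): "idle",
}

def path_to(start, target):
    """Shortest input sequence driving the FSM from start to target."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, inputs = queue.popleft()
        if state == target:
            return inputs
        for (s, inp), nxt in transitions.items():
            if s == state and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, inputs + [inp]))
    return None

def generate_tests(start="idle"):
    """One test per transition: the inputs to reach the transition's
    source state plus its input, paired with the expected end state."""
    return [(path_to(start, s) + [inp], nxt)
            for (s, inp), nxt in transitions.items()]

for inputs, expected in generate_tests():
    print(inputs, "->", expected)
```

Because each generated test carries its expected end state, the oracle comes for free, which is exactly the advantage of FSM-based generation noted in the oracle section.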
Static Testing: Static testing is a software testing method that examines a
program along with any associated documents but does not require the
program to be executed.
• Static testing is carried out without executing the application under test.
• This is in contrast to dynamic testing that requires one or more executions
of the application under test.
• It is useful in that it may lead to the discovery of faults in the application, and of errors in the requirements and other application-related documents, at a relatively low cost.
• Static testing is carried out by the test team, which may also have access to one or more static testing tools.
• A static testing tool takes the application code as input and generates a
variety of data useful in the test process.
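A hedged sketch of what such a tool might do, using Python's standard `ast` module to examine code without ever executing it. Here the (assumed) check flags functions that lack a docstring; real static testing tools perform many deeper analyses.

```python
import ast

# Application code under static test (never executed, only parsed).
source = '''
def documented():
    """Has a docstring."""
    return 1

def undocumented():
    return 2
'''

tree = ast.parse(source)  # parse only: no execution takes place
findings = []
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None:
        findings.append((node.name, node.lineno))

print(findings)  # reports the undocumented function and its line
```

Because the code is analyzed rather than run, such a tool can report issues even in code paths that no dynamic test ever reaches.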
Types of static testing reviews: The first step in static testing is reviews.
They can be conducted in numerous ways to find and remove errors in
supporting documents. This process can be carried out in different ways:
1) Informal: Informal reviews will not follow any specific process to find
errors. Co-workers can review documents and provide informal comments.
2) Walk-through: The author of the document in question will explain the
document to their team. Participants will ask questions and write down any
notes.
3) Inspection: A designated moderator will conduct a strict review as a
process to find defects.
4) Peer reviews: A colleague examines the work product to identify issues, for example in the requirement specifications, before the code is executed.
2) Walk-through:
• Walkthrough is an informal process to review any application-related
document.
• For example, requirements are reviewed using a process termed
requirements walkthrough.
• In requirements walkthrough, the test team must review the requirements
document to ensure that the requirements match user needs.
• A detailed report is generated that lists items of concern regarding the
requirements.
3) Inspection: Inspection is a more formally defined process than a
walkthrough. This term is usually associated with code.
• A designated moderator will conduct a strict review as a process to find
defects.
• Several organizations consider formal code inspections a tool for improving the quality of their code.
• Code inspection is carried out by a team.
• Members of the inspection team are assigned roles of moderator, reader,
recorder, and author.
• The moderator is in charge of the process and leads the review.
• Actual code is read by the reader, with the help of a code browser and with
large monitors for all in the team to view the code.
• The recorder records any errors discovered or issues to be looked into. The
author is the actual developer whose primary task is to help others
understand code.
Inspection plan:
 Statement of purpose.
Work product to be inspected; this includes code and associated documents needed for inspection.
 Team formation, roles, and tasks to be performed.
 Rate at which the inspection task is to be completed
 Data collection forms where the team will record its findings such as
defects discovered, coding standard violations and time spent in each
task.
THANK YOU
