PMIT 6111 Lecture 2 Types of Testing
Types of Testing
Development / Functional testing
• Functional testing is the systematic checking of all components by the test engineer against the
requirement specifications. Functional testing is also known as component testing.
• In functional testing, each component is tested by giving it input values, defining the expected output, and
validating the actual output against the expected value.
Purpose of Functional Testing
Functional testing mainly involves black box testing and can be done manually or using automation.
The purpose of functional testing is to:
•Test each function of the application: Functional testing tests each function of the application by
providing the appropriate input and verifying the output against the functional requirements of the
application.
•Test primary entry function: In functional testing, the tester tests each entry function of the
application to check all the entry and exit points.
•Test flow of the GUI screen: In functional testing, the flow of the GUI (Graphical User Interface)
screen is checked so that the user can navigate throughout the application.
What to Test in Functional Testing?
The goal of functional testing is to check the functionalities of the application under test. It concentrates
on:
•Basic Usability: Functional testing involves basic usability testing to check whether the user can freely
navigate through the screens without any difficulty.
•Mainline functions: This involves testing the main features and functions of the application.
•Accessibility: This involves testing the accessibility of the system for the user.
•Error Conditions: Functional testing involves checking whether the appropriate error messages are
being displayed or not in case of error conditions.
Functional Testing Process
Identify the testing goals: Functional testing goals are the features the software is expected to have based on the project
requirements. Testing goals include validating that the application works as it was intended to, and that it handles errors and
unexpected scenarios.
Create test scenarios: Develop a list of all possible (or at least all the most important) test scenarios for a given feature. Test
scenarios describe the different ways the feature will be used. For instance, for a payment module, the test scenarios may include
multiple currencies, handling invalid or expired card numbers, and generating a notification on successful transaction completion.
Create test data: Create test data that simulates normal use conditions based on the test scenarios you identified. Test data can be
entered manually (for example, from an Excel spreadsheet) or automatically by a test tool that reads and inputs the data from a
database, flat file, XML, or spreadsheet. Each set of input data should also have associated data that describes the expected result the input data should generate.
Design test cases: Create test cases based on the different desired outcomes for the test inputs. For example, if you enter an invalid
credit card number, the application should display a meaningful error message (a sketch of such a test follows this list).
Execute the test cases: Run the test cases through the application and compare actual outcomes against expected results. If actual
and expected outputs are different, the feature has failed the test and a defect should be recorded.
Deliberate on, track and resolve defects: Once a defect is identified, it should be recorded on a formal tracking system that’s
accessible to the entire project team. The requisite changes should be made to the application and the test case executed again to
confirm resolution before a defect is marked as closed.
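As a concrete illustration of the "design" and "execute" steps above, here is a minimal sketch of a functional test for the invalid-card scenario, assuming pytest as the test runner. The charge() function and its 16-digit rule are invented stand-ins for a real payment module, not part of any actual product.

def charge(card_number, amount):
    # Illustrative stand-in for the payment module under test:
    # a "valid" card here is simply 16 digits long.
    if len(card_number) == 16 and card_number.isdigit():
        return {"success": True, "message": "payment accepted"}
    return {"success": False, "message": "Invalid card number, please check and retry"}

def test_valid_card_is_charged():
    # Normal-use test data together with its expected result.
    result = charge("4111111111111111", 25.00)
    assert result["success"] is True

def test_invalid_card_shows_meaningful_error():
    # Error condition: a meaningful message must be shown for a bad card number.
    result = charge("1234", 25.00)
    assert result["success"] is False
    assert "invalid card" in result["message"].lower()

If the actual output disagrees with the expected result, the test fails and a defect is recorded, exactly as in the execution step above.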
Functional Testing vs Non-Functional Testing
• Definition: Functional testing verifies the operations and actions of an application; non-functional testing verifies the behavior of an application.
• Testing based on: Functional testing is based on the requirements of the customer; non-functional testing is based on the expectations of the customer.
• Objective: The objective of functional testing is to validate software actions; the objective of non-functional testing is the performance of the software system.
• Requirements: Functional testing is carried out using the functional specification; non-functional testing is carried out using the performance specifications.
• Functionality: Functional testing describes what the product does; non-functional testing describes how the product works.
• Examples: Functional testing includes unit testing, integration testing, sanity testing, smoke testing, and regression testing; non-functional testing includes performance testing, load testing, stress testing, volume testing, and usability testing.
Functional Testing Types
Unit Testing
[Figure: unit testing, in which the software engineer applies test cases to the module to be tested and examines the results.]
Unit testing
• Unit testing is the process of testing the smallest parts of your code, like
individual functions or methods, to make sure they work correctly.
• It’s a key part of software development that improves code quality by
testing each unit in isolation.
• You write unit tests for these code units and run them automatically every
time you make changes. If a test fails, it helps you quickly find and fix the
issue.
• Unit testing promotes modular code, ensures better test coverage, and
saves time by allowing developers to focus more on coding than manual
testing.
Unit test effectiveness
• The test cases should show that, when used as expected, the
component that you are testing does what it is supposed to do.
• If there are defects in the component, these should be revealed by
test cases.
• This leads to 2 types of unit test case:
• The first of these should reflect normal operation of a program and should
show that the component works as expected.
• The other kind of test case should be based on testing experience of where
common problems arise. It should use abnormal inputs to check that these
are properly processed and do not crash the component.
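A minimal sketch of these two kinds of unit test case, using Python's unittest module. The discount() function and its rules are invented purely for illustration; they stand in for whatever component is actually under test.

import unittest

def discount(price, percent):
    # Component under test (illustrative): reduce price by percent,
    # rejecting out-of-range input instead of failing silently.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class DiscountTests(unittest.TestCase):
    def test_normal_operation(self):
        # First kind: normal operation, the component does what is expected.
        self.assertAlmostEqual(discount(200, 10), 180)

    def test_abnormal_input_is_rejected(self):
        # Second kind: abnormal input must be handled cleanly, not crash the component.
        with self.assertRaises(ValueError):
            discount(200, -5)

if __name__ == "__main__":
    unittest.main()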
Integration Testing
• Once unit testing has been completed successfully, we move on to integration testing. It is the
second level of functional testing, in which we test the data flow between dependent modules, or the
interface between two features.
Why Integration Testing?
1. Each module is designed by a different software developer, whose programming
logic may differ from that of the developers of other modules, so integration testing
becomes essential to determine that the software modules work together.
2. To check whether the interaction of the software modules with the database is
correct or not (see the sketch after this list).
3. Customer requirements can change during module development. These new
requirements may not have been tested at the unit testing level, hence
integration testing becomes mandatory.
4. Incompatibility between modules of the software could create errors.
5. To test the hardware's compatibility with the software.
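The sketch below illustrates point 2: an integration test exercises the data flow across the boundary between a business module and a storage module, rather than testing either unit alone. Both classes are hypothetical stand-ins invented for illustration.

import unittest

class InMemoryStore:
    # Storage module (stand-in for a real database layer).
    def __init__(self):
        self.rows = {}
    def save(self, key, value):
        self.rows[key] = value
    def load(self, key):
        return self.rows.get(key)

class OrderService:
    # Business module that depends on the storage module.
    def __init__(self, store):
        self.store = store
    def place_order(self, order_id, item):
        self.store.save(order_id, {"item": item, "status": "placed"})
    def order_status(self, order_id):
        record = self.store.load(order_id)
        return record["status"] if record else "unknown"

class OrderStoreIntegrationTest(unittest.TestCase):
    def test_order_flows_through_storage(self):
        service = OrderService(InMemoryStore())
        service.place_order("A-1", "keyboard")
        # Passes only if data written by OrderService can be read back
        # correctly through the storage interface.
        self.assertEqual(service.order_status("A-1"), "placed")

if __name__ == "__main__":
    unittest.main()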
Integration Test Approaches
Integration Testing: Big-Bang Approach
[Figure: Big-Bang approach, in which unit-tested modules A, B, C, D, E, and F are combined at once and exercised in a single system test.]
Bottom-up Integration
[Figure: bottom-up integration with modules in three layers (A in Layer I; B, C, D in Layer II; E, F, G in Layer III). The lowest-level modules are tested first (Test E, Test F, Test G), then combined with their callers (Test B,E,F; Test C; Test D,G), and finally the whole set is tested together (Test A, B, C, D, E, F, G).]
Stubs are placeholder components used during testing to simulate the behavior of a lower-level module or component that has not yet
been implemented or is unavailable. They are primarily used in Top-Down Integration Testing, where higher-level modules are tested
before the lower-level modules.
Drivers are temporary modules or programs used during Bottom-Up Integration Testing to simulate the behavior of higher-level
modules or components that are not yet implemented or available. Drivers help test lower-level modules by providing the necessary
inputs and invoking their functionalities.
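A minimal sketch of a stub and a driver in Python. The report-building scenario and every name in it are invented for illustration; the point is only the role each piece plays.

def fetch_sales_stub(region):
    # Stub: returns canned data in place of a lower-level data layer
    # that has not yet been implemented (top-down testing).
    return [100, 250, 175]

def build_report(region, fetch_sales=fetch_sales_stub):
    # Module under test; it depends on a lower-level fetch function.
    sales = fetch_sales(region)
    return {"region": region, "total": sum(sales)}

def driver():
    # Driver: a throwaway caller that feeds test input to the module under
    # test and checks its output, because the real caller does not exist yet
    # (bottom-up testing).
    report = build_report("north")
    assert report["total"] == 525, "unexpected total in report"
    print("build_report passed:", report)

if __name__ == "__main__":
    driver()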
Sandwich Testing Strategy
• Mixed integration testing is also called sandwich integration testing. It follows a
combination of the top-down and bottom-up testing approaches.
• In the top-down approach, testing can start only after the top-level modules have been
coded and unit tested.
• In the bottom-up approach, testing can start only after the bottom-level modules are
ready.
• The sandwich, or mixed, approach overcomes this shortcoming of the top-down and
bottom-up approaches. It is also called hybrid integration testing. Both stubs and
drivers are used in mixed integration testing.
[Figure: sandwich integration. The bottom layer (E, F, G in Layer III) is tested bottom-up (Test E, Test F, Test G, then Test B,E,F and Test D,G), the upper layers are tested top-down (Test A,B,C,D), and the two strands converge in a combined test of A, B, C, D, E, F, G.]
Drivers
1. Used in bottom-up integration testing.
2. Code that simulates a calling function.
3. Drivers pass test cases to other code and invoke the modules under test.
4. Created when high-level modules are not yet developed and the lower-level
modules are being tested.
Modified Sandwich Testing Strategy
• Test in parallel:
• Middle layer with drivers and stubs
• Top layer with stubs
• Bottom layer with drivers
• Test in parallel:
• Top layer accessing middle layer (top layer replaces drivers)
• Bottom accessed by middle layer (bottom layer replaces stubs)
Modified Sandwich Testing Strategy
Advantages:
• Allows parallel testing of various elements of the software system.
• Enables early testing of user interface components.
• Performs more coverage with the same stubs.
• It is a time saving process as several components are tested
simultaneously.
Disadvantages:
• Requires development of many stubs and drivers.
• As the need for stubs and drivers is high, this model of testing
becomes quite expensive.
Object-Oriented Testing
• Testing is a continuous activity during software development. In object-oriented systems,
testing encompasses three levels, namely, unit testing, subsystem testing, and system
testing.
• Testing strategy changes
• the concept of the ‘unit’ broadens due to encapsulation
• Integration focuses on classes and their execution across a ‘thread’ or in the
context of a usage scenario
• validation uses conventional black box methods
• Test case design draws on conventional methods, but also encompasses
special features
Characteristics of Object-Oriented Testing
1. Testing Units:
• The primary unit of testing is the class rather than a function or
procedure. Classes encapsulate both data (attributes) and behavior
(methods).
2. Focus on Relationships:
• Object-oriented systems rely on relationships such as inheritance,
composition, and polymorphism, which need thorough testing.
3. Dynamic Nature:
• Polymorphism and dynamic method binding make it essential to test
how objects interact during runtime.
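To make point 3 concrete, here is a small sketch of a test that exercises polymorphic behavior at runtime. The Shape classes are invented for illustration and do not come from any particular system.

import math
import unittest

class Shape:
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return math.pi * self.radius ** 2

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

class PolymorphismTest(unittest.TestCase):
    def test_area_dispatches_per_subclass(self):
        # The same call, area(), is bound to a different implementation
        # at runtime depending on the object's class.
        shapes = [Circle(1), Square(2)]
        self.assertAlmostEqual(shapes[0].area(), math.pi)
        self.assertEqual(shapes[1].area(), 4)

if __name__ == "__main__":
    unittest.main()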
Issues in Testing Classes
❑ Testing Dependencies:
• Additional testing techniques are required to test dependencies between classes.
❑ Testing Class vs. Object:
• Classes cannot be tested dynamically; only their instances (objects) can be tested.
❑ Challenges with Inheritance:
• Changes to a parent class (superclass) can affect subclasses, making it difficult to test subclasses in
isolation. Errors in larger systems are harder to isolate to a specific class when inheritance is
involved.
❑ Importance of Object State in Testing:
• Testing objects requires focusing on the object's state since it impacts behavior across
method invocations.
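Because an object's state affects its behavior across method invocations, class tests typically drive an object through a sequence of calls and check its state after each step. The BankAccount class below is an invented example used only to illustrate this style of test.

import unittest

class BankAccount:
    # Illustrative class whose behavior depends on accumulated state.
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount
    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class BankAccountStateTest(unittest.TestCase):
    def test_behaviour_depends_on_state(self):
        account = BankAccount()
        # In the initial state, any withdrawal must be refused.
        with self.assertRaises(ValueError):
            account.withdraw(10)
        # After a deposit changes the state, the same call succeeds.
        account.deposit(50)
        account.withdraw(10)
        self.assertEqual(account.balance, 40)

if __name__ == "__main__":
    unittest.main()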
Techniques of object-oriented testing
1. Fault Based Testing: This type of checking develops test cases based on
consumer specifications and code to identify areas with the highest probability of
faults. It aims to expose these faults and ensure each code segment is executed.
However, it may not catch all errors, such as incorrect specifications and interface
errors. These can be detected through traditional function testing or scenario-
based testing in object-oriented models, but interface errors may still be missed.
System testing
• Done as the Development progresses
• Users Stories, Features and Functions are Tested
• Testing done in Production Environment
• Quality Tests are executed (Performance, Reliability, etc.)
• Defects are reported
• Tests are automated where possible
Requirements based testing
• Requirements-based testing involves examining each requirement
and developing a test or tests for it.
Requirements tests
• Set up a patient record with no known allergies. Prescribe medication for allergies
that are known to exist. Check that a warning message is not issued by the system.
• Set up a patient record with a known allergy. Prescribe the medication that the
patient is allergic to, and check that the warning is issued by the system (the first two tests are sketched after this list).
• Set up a patient record in which allergies to two or more drugs are recorded.
Prescribe both of these drugs separately and check that the correct warning for
each drug is issued.
• Prescribe two drugs that the patient is allergic to. Check that two warnings are
correctly issued.
• Prescribe a drug that issues a warning and overrule that warning. Check that the
system requires the user to provide information explaining why the warning was
overruled.
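The first two of these requirements tests could be automated roughly as shown below. The PatientRecord class and its prescribe() method are hypothetical stand-ins for whatever interface the MHC-PMS actually exposes.

import unittest

class PatientRecord:
    # Invented stand-in for the real MHC-PMS patient record API.
    def __init__(self, allergies=()):
        self.allergies = set(allergies)
    def prescribe(self, drug):
        # Returns a warning message when the drug matches a recorded allergy.
        if drug in self.allergies:
            return "WARNING: patient is allergic to " + drug
        return None

class AllergyWarningTests(unittest.TestCase):
    def test_no_warning_without_known_allergy(self):
        record = PatientRecord(allergies=[])
        self.assertIsNone(record.prescribe("penicillin"))

    def test_warning_for_known_allergy(self):
        record = PatientRecord(allergies=["penicillin"])
        warning = record.prescribe("penicillin")
        self.assertIsNotNone(warning)
        self.assertIn("allergic", warning)

if __name__ == "__main__":
    unittest.main()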
Features tested by scenario
• Authentication by logging on to the system.
• Downloading and uploading of specified patient records to a laptop.
• Home visit scheduling.
• Encryption and decryption of patient records on a mobile device.
• Record retrieval and modification.
• Links with the drugs database that maintains side-effect information.
• The system for call prompting.
A usage scenario for the MHC-PMS
Kate is a nurse who specializes in mental health care. One of her responsibilities is to visit patients at
home to check that their treatment is effective and that they are not suffering from medication side-effects.
On a day for home visits, Kate logs into the MHC-PMS and uses it to print her schedule of home visits
for that day, along with summary information about the patients to be visited. She requests that the
records for these patients be downloaded to her laptop. She is prompted for her key phrase to encrypt the
records on the laptop.
One of the patients that she visits is Jim, who is being treated with medication for depression. Jim feels that
the medication is helping him but believes that it has the side-effect of keeping him awake at night. Kate
looks up Jim’s record and is prompted for her key phrase to decrypt the record. She checks the drug
prescribed and queries its side effects. Sleeplessness is a known side effect so she notes the problem in Jim’s
record and suggests that he visits the clinic to have his medication changed. He agrees so Kate enters a
prompt to call him when she gets back to the clinic to make an appointment with a physician. She ends the
consultation and the system re-encrypts Jim’s record.
After finishing her consultations, Kate returns to the clinic and uploads the records of patients visited to the
database. The system generates a call list for Kate of those patients who she has to contact for follow-up
information and make clinic appointments.
Regression Testing
• Regression testing is the re-execution of some subset of tests that
have already been conducted to ensure that changes have not
propagated unintended side effects.
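In practice this usually means re-running an automated subset of the existing test suite after every change. A rough sketch, assuming pytest and a project-chosen marker name ("regression" is a convention here, not a pytest built-in):

import pytest

@pytest.mark.regression
def test_discount_still_correct_after_refactor():
    # Guards behaviour that already worked before the latest change.
    assert round(200 * (1 - 10 / 100), 2) == 180.0

def test_new_feature_under_development():
    # Not part of the regression subset; runs only in the full suite.
    assert True

Running "pytest -m regression" then executes only the tagged subset; registering the marker in the pytest configuration avoids an unknown-marker warning.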
Smoke Testing
• A common approach for creating “daily builds” for product software.
• Smoke testing is the initial testing process exercised to check whether the
software under test is ready/stable for further testing.
• The term ‘smoke testing’ comes from hardware testing, where an initial pass is made
after switching a device on to check that it does not catch fire or give off smoke.
Smoke testing steps
• Software components that have been translated into code are integrated into a
“build.”
• A build includes all data files, libraries, reusable modules, and engineered components that are
required to implement one or more product functions.
• A series of tests is designed to expose errors that will keep the build from properly
performing its function.
• The intent should be to uncover “show stopper” errors that have the highest likelihood of
throwing the software project behind schedule.
• The build is integrated with other builds and the entire product (in its current form)
is smoke tested daily.
• The integration approach may be top down or bottom up.
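A minimal sketch of what such a daily smoke test might look like for a hypothetical web build. The module names, base URL, and health endpoint are assumptions chosen for illustration; the script only makes sense when run against the assumed freshly deployed build.

import importlib
import urllib.request

BASE_URL = "http://localhost:8000"  # assumed address of the deployed daily build

def check_modules_import():
    # Show-stopper check: the build is broken if its core modules cannot even be imported.
    for name in ("app.orders", "app.payments", "app.reports"):
        importlib.import_module(name)

def check_service_responds():
    # Show-stopper check: the build is broken if it does not answer a basic health request.
    with urllib.request.urlopen(BASE_URL + "/health", timeout=5) as response:
        assert response.status == 200

if __name__ == "__main__":
    check_modules_import()
    check_service_responds()
    print("Smoke test passed: build is stable enough for further testing.")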
Sanity Testing
• Sanity testing is performed on stable builds and it is also known as a variant of
regression testing.
• Sanity testing is performed when we receive a software build (with minor
code changes) from the development team. It is a checkpoint to assess whether testing
of the build can proceed or not.
• In other words, we can say that sanity testing is performed to make sure that the
reported defects have been fixed and that no new issues have been introduced
by these modifications.
• Attributes
• A narrow and deep method in which a limited set of components is tested in depth.
• It is a subdivision of regression testing, which mainly focuses on the less critical units of the
application.
• Unscripted
• Not documented
• Performed by testers
Non-Functional Testing
• The next part of black-box testing is non-functional testing. It
provides detailed information on software product performance and
the technologies used.
• Non-functional testing will help us minimize the risk of production
and related costs of the software.
• Non-functional testing is a combination of performance, load, stress,
usability and, compatibility testing.
Non-Functional Testing
• Stress testing: stress the limits of the system (maximum number of users, peak demands, extended operation).
• Volume testing: test what happens if large amounts of data are handled.
• Configuration testing: test the various software and hardware configurations.
• Compatibility testing: test backward compatibility with existing systems.
• Security testing: try to violate security requirements.
• Timing testing: evaluate response times and the time to perform a function (see the sketch after this list).
• Environmental testing: test tolerances for heat, humidity, motion, and portability.
• Quality testing: test the reliability, maintainability, and availability of the system.
• Recovery testing: test the system's response to the presence of errors or loss of data.
• Human factors testing: test the user interface with users.
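As an example of the timing category, here is a rough sketch of a test that checks a response-time budget. The search_catalogue() function and the 200 ms threshold are assumptions invented purely for illustration.

import time
import unittest

def search_catalogue(term):
    # Stand-in for the real operation whose response time is being evaluated.
    time.sleep(0.01)
    return [term]

class ResponseTimeTest(unittest.TestCase):
    BUDGET_SECONDS = 0.2  # assumed performance requirement: 200 ms

    def test_search_meets_response_time_budget(self):
        start = time.perf_counter()
        search_catalogue("printer")
        elapsed = time.perf_counter() - start
        self.assertLess(elapsed, self.BUDGET_SECONDS,
                        "search took %.3fs, budget is %.1fs" % (elapsed, self.BUDGET_SECONDS))

if __name__ == "__main__":
    unittest.main()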
Test Cases for Performance Testing
User testing
• User or customer testing is a stage in the testing process in which
users or customers provide input and advice on system testing.
• User testing is essential, even when comprehensive system and
release testing have been carried out.
• The reason for this is that influences from the user’s working environment
have a major effect on the reliability, performance, usability and robustness
of a system. These cannot be replicated in a testing environment.
Types of user testing
• Alpha testing
• Users of the software work with the development team to test the software
at the developer’s site.
• Beta testing
• A release of the software is made available to users to allow them to
experiment and to raise problems that they discover with the system
developers.
• Acceptance testing
• Customers test a system to decide whether or not it is ready to be accepted from
the system developers and deployed in the customer environment. Primarily for
custom systems.
The acceptance testing process
Stages in the acceptance testing process
• Define acceptance criteria
• Plan acceptance testing
• Derive acceptance tests
• Run acceptance tests
• Negotiate test results
• Reject/accept system
Automation Testing
• One of the most significant parts of software testing is automation testing. It uses specific
tools to run manually designed test cases without any human intervention.
• Automation testing is the best way to enhance the efficiency, productivity, and
coverage of software testing.
• It is used to re-run, quickly and repeatedly, test scenarios that were previously
executed manually.
• In other words, we can say that whenever we test an application using
tools, it is known as automation testing.
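A minimal sketch of automating a previously manual scenario with pytest's parametrization, so the same check is re-run over many inputs on every build. The password rule and the data values are invented for illustration.

import pytest

def is_valid_password(password):
    # Stand-in for the rule under test: at least 8 characters and one digit.
    return len(password) >= 8 and any(ch.isdigit() for ch in password)

@pytest.mark.parametrize("password, expected", [
    ("secret123", True),              # normal, valid input
    ("short1", False),                # too short
    ("longenoughbutnodigit", False),  # missing digit
])
def test_password_rule(password, expected):
    # One parametrized test replaces repeated manual runs of the same scenario.
    assert is_valid_password(password) is expected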
Reference
1. Roger S. Pressman, “Software Engineering: A Practitioner’s Approach”, 7th
edition, Chapter 17: Software Testing Strategies.
2. Ian Sommerville, “Software Engineering”, 9th edition, Chapter 8: Software
Testing.
3. https://www.javatpoint.com/
Thank You