Unit 3 SE

Object Oriented Analysis and Design

INTRODUCTION:

Object-Oriented Analysis and Design (OOAD) is a software engineering methodology that uses object-oriented concepts to design and implement software systems. OOAD involves a number of techniques and practices, including object-oriented programming, design patterns, UML diagrams, and use cases. Here are some important aspects of OOAD:

1. Object-Oriented Programming: Object-oriented programming models real-world objects as software objects, with properties and methods that represent the behavior of those objects. OOAD uses this approach to design and implement software systems.
2. Design Patterns: Design patterns are reusable solutions to common problems
in software design. OOAD uses design patterns to help developers create
more maintainable and efficient software systems.
3. UML Diagrams: Unified Modeling Language (UML) is a standardized notation
for creating diagrams that represent different aspects of a software system.
OOAD uses UML diagrams to represent the different components and
interactions of a software system.
4. Use Cases: Use cases are a way of describing the different ways in which
users interact with a software system. OOAD uses use cases to help
developers understand the requirements of a system and to design software
systems that meet those requirements.
There are several advantages to using OOAD in software engineering:

1. Reusability: OOAD emphasizes the use of reusable components and design patterns, which can save time and effort in software development.
2. Scalability: OOAD can help developers design software systems that are scalable and can handle changes in user demand and business requirements over time.
3. Maintainability: OOAD emphasizes modular design and can help developers create software systems that are easier to maintain and update over time.
4. Flexibility: OOAD can help developers design software systems that are flexible and can adapt to changing business requirements over time.

However, there are also some potential disadvantages to using OOAD:

1. Complexity: OOAD can be complex and may require significant expertise to implement effectively.
2. Time-consuming: OOAD can be a time-consuming process that involves significant upfront planning and documentation.
3. Rigidity: Once a software system has been designed using OOAD, it can be difficult to make changes without significant time and expense.
4. Cost: OOAD can be more expensive than other software engineering methodologies due to the upfront planning and documentation required.

Overall, OOAD can be an effective approach to designing and implementing software systems, particularly for complex or large-scale projects. However, it is important to weigh the advantages and disadvantages carefully before adopting this approach.
Object-Oriented Analysis (OOA) is the first technical activity performed as part
of object-oriented software engineering. OOA introduces new concepts to
investigate a problem. It is based on a set of basic principles, which are as
follows-
1. The information domain is modeled.
2. Behavior is represented.
3. The function is described.
4. Data, functional, and behavioral models are divided to uncover greater detail.
5. Early models represent the essence of the problem, while later ones provide
implementation details.
These principles form the foundation for the OOA approach.

Object-Oriented Design (OOD): An analysis model created using object-oriented analysis is transformed by object-oriented design into a design model that works as a plan for software creation. OOD results in a design having several different levels of modularity: the major system components are partitioned into subsystems (a system-level "module"), and data manipulation operations are encapsulated into objects (a modular form that is the building block of an OO system). In addition, OOD must specify some data organization of attributes and a procedural description of each operation. The design pyramid for object-oriented systems has the following four layers.
1. The Subsystem Layer : It represents the subsystem that enables software to
achieve user requirements and implement technical frameworks that meet
user needs.
2. The Class and Object Layer : It represents the class hierarchies that enable
the system to develop using generalization and specialization. This layer also
represents each object.
3. The Message Layer : It represents the design details that enable each object
to communicate with its partners. It establishes internal and external interfaces
for the system.
4. The Responsibilities Layer : It represents the data structure and algorithmic
design for all the attributes and operations for each object.
The Object-Oriented design pyramid specifically emphasizes the design of a specific product or system. Note, however, that another design layer exists, which forms the base on which the pyramid rests. This base layer focuses on the design of domain objects, which play an important role in building the infrastructure for the Object-Oriented system by providing support for human/computer interface activities and task management.

Some of the terminologies that are often encountered while studying Object-
Oriented Concepts include:
1. Attributes: a collection of data values that describe a class.
2. Class: encapsulates the data and procedural abstractions required to describe
the content and behavior of some real-world entity. In other words, A class is a
generalized description that describes the collection of similar objects.
3. Objects: instances of a specific class. Objects inherit a class’s attributes and
operations.
4. Operations: also called methods and services, provide a representation of one
of the behaviors of the class.
5. Subclass: specialization of the super class. A subclass can inherit both
attributes and operations from a super class.
6. Superclass: also called a base class, is a generalization of a set of classes
that are related to it.

Advantages of OOAD:

1. Improved modularity: OOAD encourages the creation of small, reusable objects that can be combined to create more complex systems, improving the modularity and maintainability of the software.
2. Better abstraction: OOAD provides a high-level, abstract representation of a
software system, making it easier to understand and maintain.
3. Improved reuse: OOAD encourages the reuse of objects and object-oriented
design patterns, reducing the amount of code that needs to be written and
improving the quality and consistency of the software.
4. Improved communication: OOAD provides a common vocabulary and
methodology for software developers, improving communication and
collaboration within teams.
5. Reusability: OOAD emphasizes the use of reusable components and design
patterns, which can save time and effort in software development by reducing
the need to create new code from scratch.
6. Scalability: OOAD can help developers design software systems that are
scalable and can handle changes in user demand and business requirements
over time.
7. Maintainability: OOAD emphasizes modular design and can help developers
create software systems that are easier to maintain and update over time.
8. Flexibility: OOAD can help developers design software systems that are
flexible and can adapt to changing business requirements over time.
9. Improved software quality: OOAD emphasizes the use of encapsulation,
inheritance, and polymorphism, which can lead to software systems that are
more reliable, secure, and efficient.

Disadvantages of OOAD:

1. Complexity: OOAD can add complexity to a software system, as objects and their relationships must be carefully modeled and managed. It may require significant expertise to implement effectively, and novice developers may find OOAD principles difficult to understand and apply.
2. Overhead: OOAD can result in additional overhead, as objects must be instantiated, managed, and interacted with, which can slow down the performance of the software.
3. Steep learning curve: OOAD can have a steep learning curve for new software developers, as it requires a strong understanding of OOP concepts and techniques.
4. Time-consuming: OOAD can be a time-consuming process that involves significant upfront planning and documentation. This can lead to longer development times and higher costs.
5. Rigidity: Once a software system has been designed using OOAD, it can be difficult to make changes without significant time and expense. This can be a disadvantage in rapidly changing environments where new technologies or business requirements may require frequent changes to the system.
6. Cost: OOAD can be more expensive than other software engineering methodologies due to the upfront planning and documentation required.

Difference Between Object And Class

A class is a detailed description, the definition, and the template of what an object will be, but it is not the object itself. A class is the building block that leads to Object-Oriented Programming. It is a user-defined data type that holds its own data members and member functions, which can be accessed and used by creating an instance of that class. It is the blueprint of any object. Once we have written and defined a class, we can use it to create as many objects based on that class as we want. In Java, a class contains fields, constructors, and methods. For example, consider a class Account. There may be many accounts with different names and types, but all of them will share some common properties: all of them will have common attributes like balance, account holder name, etc. So here, Account is the class.
An object is an instance of a class. All data members and member functions of the class can be accessed with the help of objects. When a class is defined, no memory is allocated; memory is allocated when the class is instantiated (i.e., an object is created). For example, objects of the class Account could be an SBI account, an ICICI account, etc.

Fig-1: Class and object

Fig-2: Class Diagram to Understand Class and Object

Difference Between Class And Object:

There are many differences between a class and an object. Some of them are given below:

1. A class is used as a template for declaring and creating objects. An object is an instance of a class.
2. When a class is created, no memory is allocated. Objects are allocated memory space whenever they are created.
3. A class has to be declared first, and only once. An object can be created many times, as per requirement.
4. A class cannot be manipulated, as it is not available in memory. Objects can be manipulated.
5. A class is a logical entity. An object is a physical entity.
6. A class is declared with the class keyword. An object is created with the class name in C++ and with the new keyword in Java.
7. A class does not contain any values that can be associated with its fields. Each object has its own values, which are associated with it.
8. A class is used to bind data as well as methods together as a single unit. Objects are like variables of the class.
9. Example: Bike is a class; Ducati, Suzuki, and Kawasaki are objects.

Syntax for declaring a class in C++:

class <classname> {};

Syntax for declaring a class and instantiating an object of it in C++:

#include <iostream>
using namespace std;

class Student {          // The class is declared here
public:
    void put() {
        cout << "Function Called" << endl;
    }
};

int main() {
    Student s1;          // Object created
    s1.put();
    return 0;
}

Software Engineering | Software Evolution

Software Evolution is a term which refers to the process of developing software initially and then updating it over time for various reasons, e.g., to add new features or to remove obsolete functionalities. The evolution process includes the fundamental activities of change analysis, release planning, system implementation, and releasing a system to customers.
The cost and impact of these changes are assessed to see how much of the system is affected by the change and how much it might cost to implement the change. If the proposed changes are accepted, a new release of the software system is planned. During release planning, all the proposed changes (fault repair, adaptation, and new functionality) are considered.

A decision is then made on which changes to implement in the next version of the system. The process of change implementation is an iteration of the development process in which the revisions to the system are designed, implemented, and tested.

a) Change in requirements over time: With the passage of time, an organization's needs and modus operandi can change substantially, so the tools (software) that it uses need to change in order to maximize performance.
b) Environment change: As the working environment changes, the things (tools) that enable us to work in that environment also change proportionally. The same happens in the software world: as the working environment changes, organizations need to reintroduce old software with updated features and functionality to adapt to the new environment.
c) Errors and bugs: As deployed software within an organization ages, its preciseness and reliability decrease, and its ability to bear an increasingly complex workload also continually degrades. In that case, it becomes necessary to avoid using obsolete and aged software. All such obsolete software needs to undergo the evolution process in order to remain robust under the workload complexity of the current environment.
d) Security risks: Using outdated software within an organization may put you at risk of various software-based cyberattacks and could illegally expose confidential data associated with the software in use. So it becomes necessary to avoid such security breaches through regular assessment of the security patches/modules used within the software. If the software is not robust enough to withstand current cyberattacks, it must be changed (updated).
e) New functionality and features: In order to improve performance, speed up data processing, and add other functionality, an organization needs to continuously evolve its software throughout its life cycle so that the stakeholders and clients of the product can work efficiently.

Laws used for Software Evolution:

1. Law of continuing change: This law states that any software system that represents some real-world reality undergoes continuous change or becomes progressively less useful in that environment.
2. Law of increasing complexity: As an evolving program changes, its structure becomes more complex unless effective efforts are made to avoid this phenomenon.
3. Law of conservation of organizational stability: Over the lifetime of a program, the rate of development of that program is approximately constant and independent of the resources devoted to system development.
4. Law of conservation of familiarity: This law states that during the active lifetime of the program, the changes made in successive releases are almost constant.

Software Engineering | User Interface Design

The user interface is the front-end application view with which the user interacts in order to use the software. The software becomes more popular if its user interface is:

 Attractive
 Simple to use
 Responsive in a short time
 Clear to understand
 Consistent on all interface screens

There are two types of User Interface:
1. Command Line Interface: A command line interface provides a command prompt, where the user types a command and feeds it to the system. The user needs to remember the syntax of each command and its use.
2. Graphical User Interface: A graphical user interface provides a simple, interactive way to interact with the system. A GUI can be a combination of both hardware and software. Using a GUI, the user interacts with the software through graphical elements such as windows, icons, and menus.

User Interface Design Process:

The analysis and design process of a user interface is iterative and can be
represented by a spiral model. The analysis and design process of user interface
consists of four framework activities.

1. User, task, environmental analysis, and modeling: Initially, the focus is on the profile of the users who will interact with the system, i.e., their understanding, skill and knowledge, type of user, etc. Based on their profiles, users are grouped into categories, and requirements are gathered from each category. Based on the requirements, the developer understands how to develop the interface. Once all the requirements are gathered, a detailed analysis is conducted. In the analysis part, the tasks that the user performs to establish the goals of the system are identified, described, and elaborated. The analysis of the user environment focuses on the physical work environment. Among the questions to be asked are:
 Where will the interface be located physically?
 Will the user be sitting, standing, or performing other tasks unrelated to the
interface?
 Does the interface hardware accommodate space, light, or noise
constraints?
 Are there special human factors considerations driven by environmental
factors?
2. Interface Design: The goal of this phase is to define the set of interface
objects and actions i.e. Control mechanisms that enable the user to perform
desired tasks. Indicate how these control mechanisms affect the system.
Specify the action sequence of tasks and subtasks, also called a user
scenario. Indicate the state of the system when the user performs a particular
task. Always follow the three golden rules stated by Theo Mandel. Design
issues such as response time, command and action structure, error handling,
and help facilities are considered as the design model is refined. This phase
serves as the foundation for the implementation phase.
3. Interface construction and implementation: The implementation activity
begins with the creation of prototype (model) that enables usage scenarios to
be evaluated. As iterative design process continues a User Interface toolkit
that allows the creation of windows, menus, device interaction, error
messages, commands, and many other elements of an interactive
environment can be used for completing the construction of an interface.
4. Interface Validation: This phase focuses on testing the interface. The
interface should be in such a way that it should be able to perform tasks
correctly and it should be able to handle a variety of tasks. It should achieve
all the user’s requirements. It should be easy to use and easy to learn. Users
should accept the interface as a useful one in their work.

Golden Rules:
The following are the golden rules stated by Theo Mandel that must be followed
during the design of the interface.

Place the user in control:


 Define the interaction modes in such a way that does not force the user into
unnecessary or undesired actions: The user should be able to easily enter and
exit the mode with little or no effort.
 Provide for flexible interaction: Different people will use different interaction
mechanisms, some might use keyboard commands, some might use mouse,
some might use touch screen, etc, Hence all interaction mechanisms should
be provided.
 Allow user interaction to be interruptible and undoable: When a user is doing
a sequence of actions the user must be able to interrupt the sequence to do
some other work without losing the work that had been done. The user should
also be able to do undo operation.
 Streamline interaction as skill level advances and allow the interaction to be
customized: Advanced or highly skilled user should be provided a chance to
customize the interface as user wants which allows different interaction
mechanisms so that user doesn’t feel bored while using the same interaction
mechanism.
 Hide technical internals from casual users: The user should not be aware of
the internal technical details of the system. He should interact with the
interface just to do his work.
 Design for direct interaction with objects that appear on screen: The user
should be able to use the objects and manipulate the objects that are present
on the screen to perform a necessary task. By this, the user feels easy to
control over the screen.
Reduce the user’s memory load:
 Reduce demand on short-term memory: When users are involved in some
complex tasks the demand on short-term memory is significant. So the
interface should be designed in such a way to reduce the remembering of
previously done actions, given inputs and results.
 Establish meaningful defaults: Always initial set of defaults should be provided
to the average user, if a user needs to add some new features then he should
be able to add the required features.
 Define shortcuts that are intuitive: Mnemonics, i.e., keyboard shortcuts for performing actions on the screen, should be intuitive so that the user can remember them easily.
 The visual layout of the interface should be based on a real-world metaphor:
Anything you represent on a screen if it is a metaphor for real-world entity then
users would easily understand.
 Disclose information in a progressive fashion: The interface should be
organized hierarchically i.e. on the main screen the information about the task,
an object or some behavior should be presented first at a high level of
abstraction. More detail should be presented after the user indicates interest
with a mouse pick.
Make the interface consistent:
 Allow the user to put the current task into a meaningful context: Many interfaces have dozens of screens, so it is important to provide indicators consistently so that the user knows the context of the work being done. The user should also know from which page they have navigated to the current page, and where they can navigate from the current page.
 Maintain consistency across a family of applications: A set of related applications should all follow and implement the same design rules so that consistency is maintained among them.
 If past interactive models have created user expectations do not make
changes unless there is a compelling reason.
User interface design is a crucial aspect of software engineering, as it is the
means by which users interact with software applications. A well-designed user
interface can improve the usability and user experience of an application, making
it easier to use and more effective.

There are several key principles that software engineers should follow
when designing user interfaces:
1. User-centered design: User interface design should be focused on the needs
and preferences of the user. This involves understanding the user’s goals,
tasks, and context of use, and designing interfaces that meet their needs and
expectations.
2. Consistency: Consistency is important in user interface design, as it helps
users to understand and learn how to use an application. Consistent design
elements such as icons, color schemes, and navigation menus should be used
throughout the application.
3. Simplicity: User interfaces should be designed to be simple and easy to use,
with clear and concise language and intuitive navigation. Users should be able
to accomplish their tasks without being overwhelmed by unnecessary
complexity.
4. Feedback: Feedback is important in user interface design, as it helps users to
understand the results of their actions and confirms that they are making
progress towards their goals. Feedback can take the form of visual cues,
messages, or sounds.
5. Accessibility: User interfaces should be designed to be accessible to all users,
regardless of their abilities. This involves considering factors such as color
contrast, font size, and assistive technologies such as screen readers.
6. Flexibility: User interfaces should be designed to be flexible and customizable,
allowing users to tailor the interface to their own preferences and needs.

Software Testing Strategies

Software testing is the process of evaluating a software application to determine whether it meets specified requirements and to identify any defects. The following are common testing strategies:

1. Black box testing – Tests the functionality of the software without looking at
the internal code structure.
2. White box testing – Tests the internal code structure and logic of the
software.
3. Unit testing – Tests individual units or components of the software to ensure
they are functioning as intended.
4. Integration testing – Tests the integration of different components of the
software to ensure they work together as a system.
5. Functional testing – Tests the functional requirements of the software to
ensure they are met.
6. System testing – Tests the complete software system to ensure it meets the
specified requirements.
7. Acceptance testing – Tests the software to ensure it meets the customer’s
or end-user’s expectations.
8. Regression testing – Tests the software after changes or modifications have
been made to ensure the changes have not introduced new defects.
9. Performance testing – Tests the software to determine its performance
characteristics such as speed, scalability, and stability.
10. Security testing – Tests the software to identify vulnerabilities and ensure
it meets security requirements.
Software testing is a type of investigation performed to find out whether there is any defect or error present in the software, so that errors can be reduced or removed to increase the quality of the software, and to check whether it fulfills the specified requirements or not.
According to Glen Myers, software testing has the following objectives:
 Testing is the process of investigating and checking a program to find whether there is an error or not and whether it fulfills the requirements or not.
 When the number of errors found during testing is high, it indicates that the testing was good and is a sign of a good test case.
 Finding an unknown error that was not discovered before is a sign of a successful and good test case.
The main objective of software testing is to design the tests in such a way that it
systematically finds different types of errors without taking much time and effort
so that less time is required for the development of the software. The overall
strategy for testing software includes:
1. Before testing starts, it is necessary to identify and specify the requirements of the product in a quantifiable manner. The software has several quality characteristics, such as maintainability (the ability to update and modify), probability (the ability to find and estimate any risk), and usability (how easily the software can be used by customers or end-users). All these quality characteristics should be specified in a particular order to obtain clear test results without any error.
2. Specifying the objectives of testing in a clear and detailed
manner. Several objectives of testing are there such as effectiveness that
means how effectively the software can achieve the target, any failure that
means inability to fulfill the requirements and perform functions, and the cost
of defects or errors that mean the cost required to fix the error. All these
objectives should be clearly mentioned in the test plan.
3. For the software, identifying the categories of users and developing a profile for each. Use cases describe the interactions and communication among different classes of users and the system to achieve the target; they help to identify the actual requirements of the users and then to test the actual use of the product.
4. Developing a test plan that gives value to and focuses on rapid-cycle testing. Rapid-cycle testing is a type of testing that improves quality by identifying and measuring any changes that are required to improve the software process. A test plan is therefore an important and effective document that helps the tester perform rapid-cycle testing.
5. Robust software is developed that is designed to test itself. The software
should be capable of detecting or identifying different classes of errors.
Moreover, software design should allow automated and regression testing
which tests the software to find out if there is any adverse or side effect on the
features of software due to any change in code or program.
6. Before testing, using effective formal reviews as a filter. A formal technical review is a technique used to identify errors that have not yet been discovered. Effective technical reviews conducted before testing reduce a significant amount of the testing effort and the time required for testing, so that the overall development time of the software is reduced.
7. Conduct formal technical reviews to evaluate the nature, quality or
ability of the test strategy and test cases. The formal technical review helps
in detecting any unfilled gap in the testing approach. Hence, it is necessary to
evaluate the ability and quality of the test strategy and test cases by technical
reviewers to improve the quality of software.
8. For the testing process, developing an approach for continuous improvement. As part of a statistical process control approach, a test strategy that is already measured should be used for software testing to measure and control quality during the development of the software.

Advantages or Disadvantages:

Advantages of software testing:

1. Improves software quality and reliability – Testing helps to identify and fix
defects early in the development process, reducing the risk of failure or
unexpected behavior in the final product.
2. Enhances user experience – Testing helps to identify usability issues and
improve the overall user experience.
3. Increases confidence – By testing the software, developers and stakeholders
can have confidence that the software meets the requirements and works as
intended.
4. Facilitates maintenance – By identifying and fixing defects early, testing
makes it easier to maintain and update the software.
5. Reduces costs – Finding and fixing defects early in the development process
is less expensive than fixing them later in the life cycle.

Disadvantages of software testing:

1. Time-consuming – Testing can take a significant amount of time, particularly if thorough testing is performed.
2. Resource-intensive – Testing requires specialized skills and resources, which
can be expensive.
3. Limited coverage – Testing can only reveal defects that are present in the test
cases, and it is possible for defects to be missed.
4. Unpredictable results – The outcome of testing is not always predictable, and
defects can be hard to replicate and fix.
5. Delays in delivery – Testing can delay the delivery of the software if testing
takes longer than expected or if significant defects are identified.

Differences between Black Box Testing vs White Box Testing

1. Black Box Testing is a software testing method in which the internal structure/design/implementation of the item being tested is not known to the tester. Only the external design and structure are tested.
2. White Box Testing is a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester. The implementation and impact of the code are tested.

Black box testing and white box testing are two different approaches to
software testing, and their differences are as follows:

Black box testing is a testing technique in which the internal workings of the
software are not known to the tester. The tester only focuses on the input and
output of the software. White box testing, by contrast, is a testing technique in
which the tester has knowledge of the internal workings of the software and can
test individual code snippets, algorithms, and methods.

Testing objectives: Black box testing is mainly focused on testing the
functionality of the software, ensuring that it meets the requirements and
specifications. White box testing is mainly focused on ensuring that the internal
code of the software is correct and efficient.
Knowledge level: Black box testing does not require any knowledge of the
internal workings of the software, and can be performed by testers who are not
familiar with programming languages. White box testing requires knowledge of
programming languages, software architecture and design patterns.

Testing methods: Black box testing uses methods like equivalence partitioning,
boundary value analysis, and error guessing to create test cases. Whereas, white
box testing uses methods like control flow testing, data flow testing and statement
coverage.
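The black-box test design methods just named can be sketched in Python. The `classify_score` function and its expected outputs below are hypothetical, chosen only to illustrate equivalence partitioning and boundary value analysis:

```python
# Hypothetical function under test: classifies an exam score (0-100).
# The tester treats it as a black box: only inputs and outputs matter.
def classify_score(score):
    if score < 0 or score > 100:
        return "invalid"
    if score >= 50:
        return "pass"
    return "fail"

# Equivalence partitioning: one representative value per input partition.
partitions = {25: "fail", 75: "pass", -10: "invalid", 150: "invalid"}

# Boundary value analysis: values at and just around each boundary.
boundaries = {-1: "invalid", 0: "fail", 49: "fail", 50: "pass",
              100: "pass", 101: "invalid"}

def run_black_box_tests(cases):
    """Return the inputs whose actual output differs from the expected one."""
    return [x for x, expected in cases.items() if classify_score(x) != expected]
```

Note that the test cases are derived purely from the specification ("0 to 100, pass from 50"), never from the code itself.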
Scope: Black box testing is generally used for testing the software at the
functional level. White box testing is used for testing the software at the unit level,
integration level and system level.
Advantages and disadvantages:
Black box testing is easy to use, requires no programming knowledge and is
effective in detecting functional issues. However, it may miss some important
internal defects that are not related to functionality. White box testing is
effective in detecting internal defects, and ensures that the code is efficient
and maintainable. However, it requires programming knowledge and can be
time-consuming.

In conclusion, both black box testing and white box testing are important for
software testing, and the choice of approach depends on the testing objectives,
the testing stage, and the available resources.
Differences between Black Box Testing vs White Box Testing:

Definition:
Black Box: It is a way of software testing in which the internal structure, code, or program is hidden and nothing is known about it.
White Box: It is a way of testing the software in which the tester has knowledge about the internal structure, code, or program of the software.

Code implementation:
Black Box: Implementation of code is not needed.
White Box: Code implementation is necessary.

Performed by:
Black Box: Mostly done by software testers.
White Box: Mostly done by software developers.

Implementation knowledge:
Black Box: No knowledge of implementation is needed.
White Box: Knowledge of implementation is required.

Also referred to as:
Black Box: Outer or external software testing.
White Box: Inner or internal software testing.

Nature of test:
Black Box: It is a functional test of the software.
White Box: It is a structural test of the software.

Starting point:
Black Box: Can be initiated based on the requirement specification document.
White Box: Started after the detailed design document.

Programming knowledge:
Black Box: Not required.
White Box: Mandatory.

What is tested:
Black Box: The behavior of the software.
White Box: The logic of the software.

Applicable levels:
Black Box: Higher levels of software testing.
White Box: Lower levels of software testing.

Other name:
Black Box: Also called closed testing.
White Box: Also called clear box testing.

Time:
Black Box: Less time consuming.
White Box: More time consuming.

Algorithm testing:
Black Box: Not suitable or preferred for algorithm testing.
White Box: Suitable for algorithm testing.

Approach:
Black Box: Can be done by trial-and-error ways and methods.
White Box: Data domains along with inner or internal boundaries can be better tested.

Example:
Black Box: Searching something on Google using keywords.
White Box: Checking and verifying loops by choosing inputs.

Test design techniques:
Black Box: Decision table testing, all-pairs testing, equivalence partitioning, error guessing.
White Box: Control flow testing, data flow testing, branch testing.

Types:
Black Box: Functional testing, non-functional testing, regression testing.
White Box: Path testing, loop testing, condition testing.

Exhaustiveness:
Black Box: Less exhaustive as compared to white box testing.
White Box: Comparatively more exhaustive than black box testing.
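As a contrast to the black-box view, a white-box branch test is designed by reading the code. The `abs_diff` function below is a hypothetical example; the test cases are chosen specifically so that each of its two branches is executed:

```python
# White-box sketch: the tester knows the code, so test cases are chosen to
# exercise every branch. `abs_diff` is a hypothetical function with two branches.
def abs_diff(a, b):
    if a >= b:          # branch 1
        return a - b
    else:               # branch 2
        return b - a

# Branch testing: one test case per branch, chosen by reading the code.
branch_cases = [
    ((5, 3), 2),   # drives the a >= b branch
    ((3, 5), 2),   # drives the a < b branch
]

def branches_covered(cases):
    """Report which of the two branches the given cases execute."""
    covered = set()
    for (a, b), _ in cases:
        covered.add("a>=b" if a >= b else "a<b")
    return covered
```

A black-box tester might never think to test both orderings of the arguments; the white-box tester picks them because the `if` statement is visible.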
Software Testing - Validation Testing

The process of evaluating software during the development process or at the
end of the development process to determine whether it satisfies specified
business requirements.

Validation Testing ensures that the product actually meets the client's needs.
It can also be defined as demonstrating that the product fulfills its intended
use when deployed in an appropriate environment.

It answers the question: Are we building the right product?

Validation Testing - Workflow:

Validation testing can be best demonstrated using the V-Model. The
software/product under test is evaluated during this type of testing.

System Testing
System Testing is a type of software testing that is performed on a complete integrated
system to evaluate the compliance of the system with the corresponding requirements. In
system testing, integration testing passed components are taken as input. The goal of
integration testing is to detect any irregularity between the units that are integrated
together. System testing detects defects within both the integrated units and the whole
system. The result of system testing is the observed behavior of a component or a system
when it is tested. System Testing is carried out on the whole system in the context of
either system requirement specifications or functional requirement specifications or in
the context of both. System testing tests the design and behavior of the system and also
the expectations of the customer. It is performed to test the system beyond the bounds
mentioned in the software requirements specification (SRS). System Testing is basically
performed by a testing team that is independent of the development team, which helps to
test the quality of the system impartially. It includes both functional and non-functional
testing. System Testing is black-box testing, performed after integration testing and
before acceptance testing.

System Testing Process: System Testing is performed in the following steps:


 Test Environment Setup: Create a testing environment for better-quality testing.
 Create Test Case: Generate test cases for the testing process.
 Create Test Data: Generate the data that is to be tested.
 Execute Test Case: After the generation of the test cases and the test data, test cases
are executed.
 Defect Reporting: Defects found in the system are reported.
 Regression Testing: It is carried out to test the side effects of the testing process.
 Log Defects: Defects are logged and fixed in this step.
 Retest: If a test is not successful, the test is performed again after the fix.
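The execute-and-report steps above can be sketched as a small harness. The system under test, the test cases, and the expected outputs here are all hypothetical stand-ins:

```python
# Stand-in for the integrated system's behaviour (hypothetical login check).
def system_under_test(username, password):
    return "welcome" if (username, password) == ("alice", "secret") else "denied"

# Hypothetical test cases: test data plus the expected system behaviour.
test_cases = [
    {"id": "TC1", "data": ("alice", "secret"), "expected": "welcome"},
    {"id": "TC2", "data": ("alice", "wrong"),  "expected": "denied"},
    {"id": "TC3", "data": ("", ""),            "expected": "denied"},
]

def execute_and_report(cases):
    """Execute each test case and log a defect for every mismatch."""
    defects = []
    for case in cases:
        actual = system_under_test(*case["data"])
        if actual != case["expected"]:
            defects.append({"case": case["id"], "actual": actual})
    return defects
```

In a real system-testing tool the execution and defect-logging steps would feed a defect tracker rather than return a list, but the workflow is the same.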

Types of System Testing:


 Performance Testing: Performance Testing is a type of software testing that is
carried out to test the speed, scalability, stability and reliability of the software product
or application.
 Load Testing: Load Testing is a type of software Testing which is carried out to
determine the behavior of a system or software product under extreme load.
 Stress Testing: Stress Testing is a type of software testing performed to check the
robustness of the system under varying loads.
 Scalability Testing: Scalability Testing is a type of software testing which is carried
out to check the performance of a software application or system in terms of its
capability to scale the number of user requests up or down.
Tools used for System Testing:
1. JMeter
2. Galen Framework
3. Selenium
4. HP Quality Center/ALM
5. IBM Rational Quality Manager
6. Microsoft Test Manager
7. Appium
8. LoadRunner
9. Gatling
10. SoapUI
Note: The choice of tool depends on various factors like the technology used, the size
of the project, the budget, and the testing requirements.
Advantages of System Testing :
 The testers do not require deep knowledge of programming to carry out this testing.
 It tests the entire product or software, so errors or defects that cannot be identified
during unit testing and integration testing are easily detected.
 The testing environment is similar to that of the real-time production or business
environment.
 It checks the entire functionality of the system with different test scripts, and it also
covers the technical and business requirements of clients.
 After this testing, the product will cover almost all possible bugs or errors, and
hence the development team can confidently go ahead with acceptance testing.

Here are some advantages of System Testing:

 Verifies the overall functionality of the system.
 Detects and identifies system-level problems early in the development cycle.
 Helps to validate the requirements and ensure the system meets the user needs.
 Improves system reliability and quality.
 Facilitates collaboration and communication between development and testing teams.
 Enhances the overall performance of the system.
 Increases user confidence and reduces risks.
 Facilitates early detection and resolution of bugs and defects.
 Supports the identification of system-level dependencies and inter-module
interactions.
 Improves the system’s maintainability and scalability.
Disadvantages of System Testing :
 This testing is a more time-consuming process than other testing techniques since it
checks the entire product or software.
 The cost of the testing is high since it covers the testing of the entire software.
 It needs a good debugging tool, otherwise hidden errors will not be found.

Here are some disadvantages of System Testing:

 Can be time-consuming and expensive.
 Requires adequate resources and infrastructure.
 Can be complex and challenging, especially for large and complex systems.
 Dependent on the quality of requirements and design documents.
 Limited visibility into the internal workings of the system.
 Can be impacted by external factors like hardware and network configurations.
 Requires proper planning, coordination, and execution.
 Can be impacted by changes made during development.
 Requires specialized skills and expertise.
 May require multiple test cycles to achieve desired results.
Debugging
Debugging is the process of identifying and resolving errors, or bugs, in a
software system. It is an important aspect of software engineering because bugs
can cause a software system to malfunction, and can lead to poor performance
or incorrect results. Debugging can be a time-consuming and complex task, but
it is essential for ensuring that a software system is functioning correctly.
There are several common methods and techniques used in debugging,
including:
1. Code Inspection: This involves manually reviewing the source code of a
software system to identify potential bugs or errors.
2. Debugging Tools: There are various tools available for debugging such as
debuggers, trace tools, and profilers that can be used to identify and resolve
bugs.
3. Unit Testing: This involves testing individual units or components of a
software system to identify bugs or errors.
4. Integration Testing: This involves testing the interactions between different
components of a software system to identify bugs or errors.
5. System Testing: This involves testing the entire software system to identify
bugs or errors.
6. Monitoring: This involves monitoring a software system for unusual behavior
or performance issues that can indicate the presence of bugs or errors.
7. Logging: This involves recording events and messages related to the
software system, which can be used to identify bugs or errors.
It is important to note that debugging is an iterative process, and it may take
multiple attempts to identify and resolve all bugs in a software system.
Additionally, it is important to have a well-defined process in place for reporting
and tracking bugs, so that they can be effectively managed and resolved.
In summary, debugging is an important aspect of software engineering: it is the
process of identifying and resolving errors, or bugs, in a software system.
Common methods and techniques used in debugging include code inspection,
debugging tools, unit testing, integration testing, system testing, monitoring,
and logging. It is an iterative process that may take multiple attempts to
identify and resolve all bugs in a software system.
In the context of software engineering, debugging is the process of fixing a bug
in the software. In other words, it refers to identifying, analyzing, and removing
errors. This activity begins after the software fails to execute properly and
concludes by solving the problem and successfully testing the software. It is
considered to be an extremely complex and tedious task because errors need to
be resolved at all stages of debugging.
A better approach is to run the program within a debugger, which is a specialized
environment for controlling and monitoring the execution of a program. The basic
functionality provided by a debugger is the insertion of breakpoints within the
code. When the program is executed within the debugger, it stops at each
breakpoint. Many IDEs, such as Visual C++ and C++Builder, provide built-in
debuggers.
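As a minimal sketch, Python's built-in pdb debugger supports exactly this breakpoint workflow via the `breakpoint()` call (Python 3.7+). The `average` function is a hypothetical example; the breakpoint is left commented out so the code runs without pausing:

```python
# Sketch of breakpoint-style debugging with Python's built-in pdb debugger.
# Uncommenting the breakpoint() call stops execution there so the developer
# can inspect variables; it is left commented so the function runs normally.
def average(values):
    total = 0
    for v in values:
        # breakpoint()  # execution would pause here on every iteration
        total += v
    return total / len(values)
```

When the commented line is enabled and the function is run, execution pauses at a `(Pdb)` prompt where commands such as `p total` (print a variable), `n` (step to the next line), and `c` (continue to the next breakpoint) can be used.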
Debugging Process: Steps involved in debugging are:
 Problem identification and report preparation.
 Assigning the report to a software engineer to verify that the defect is genuine.
 Defect analysis using modeling, documentation, finding and testing candidate
flaws, etc.
 Defect resolution by making the required changes to the system.
 Validation of corrections.
The debugging process will always have one of two outcomes :
1. The cause will be found and corrected.
2. The cause will not be found.
Later, the person performing debugging may suspect a cause, design a test case
to help validate that suspicion and work toward error correction in an iterative
fashion.
During debugging, we encounter errors that range from mildly annoying to
catastrophic. As the consequences of an error increase, the amount of pressure
to find the cause also increases. This pressure sometimes forces a software
developer to fix one error and at the same time introduce two more.
Debugging Approaches/Strategies:
1. Brute Force: Study the system for a longer duration in order to understand
the system. It helps the debugger construct different representations of the
system to be debugged, depending on the need. The system is also studied
actively to find recent changes made to the software.
2. Backtracking: Backward analysis of the problem, which involves tracing the
program backward from the location of the failure message in order to identify
the region of faulty code. A detailed study of the region is conducted to find
the cause of defects.
3. Forward analysis: Tracing the program forward using breakpoints or print
statements at different points in the program and studying the results. The
region where the wrong outputs are obtained is the region that needs to be
focused on to find the defect.
4. Using past experience: Debug the software using experience with problems
similar in nature. The success of this approach depends on the expertise
of the debugger.
5. Cause elimination: It introduces the concept of binary partitioning. Data
related to the error occurrence are organized to isolate potential causes.
6. Static analysis: Analyzing the code without executing it to identify potential
bugs or errors. This approach involves analyzing code syntax, data flow, and
control flow.
7. Dynamic analysis: Executing the code and analyzing its behavior at runtime
to identify errors or bugs. This approach involves techniques like runtime
debugging and profiling.
8. Collaborative debugging: Involves multiple developers working together to
debug a system. This approach is helpful in situations where multiple modules
or components are involved, and the root cause of the error is not clear.
9. Logging and Tracing: Using logging and tracing tools to identify the
sequence of events leading up to the error. This approach involves collecting
and analyzing logs and traces generated by the system during its execution.
10. Automated Debugging: The use of automated tools and techniques to
assist in the debugging process. These tools can include static and dynamic
analysis tools, as well as tools that use machine learning and artificial
intelligence to identify errors and suggest fixes.
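The logging and tracing approach above can be sketched with Python's standard logging module. The `apply_discount` function is hypothetical; the point is that the recorded messages reconstruct the sequence of events leading up to an error:

```python
import logging

# Sketch of the logging approach: record the sequence of events so a failure
# can be traced back to the inputs that caused it.
logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("checkout")

def apply_discount(price, percent):
    log.debug("apply_discount called with price=%s percent=%s", price, percent)
    if not 0 <= percent <= 100:
        # The error log entry pinpoints the bad input before the failure.
        log.error("invalid discount percent: %s", percent)
        raise ValueError("percent must be between 0 and 100")
    result = price * (100 - percent) / 100
    log.debug("apply_discount returning %s", result)
    return result
```

If this function ever raises in production, the surrounding DEBUG entries show exactly which price and percent triggered the failure, without attaching a debugger.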
Debugging Tools:
A debugging tool is a computer program that is used to test and debug other
programs. A lot of public-domain software, like gdb and dbx, is available for
debugging. They offer console-based command-line interfaces. Examples of
automated debugging tools include code-based tracers, profilers, interpreters,
etc. Some of the widely used debuggers are:
 Radare2
 WinDbg
 Valgrind
Difference Between Debugging and Testing:
Debugging is different from testing. Testing focuses on finding bugs, errors, etc
whereas debugging starts after a bug has been identified in the software. Testing
is used to ensure that the program does what it is supposed to do with a
certain minimum success rate. Testing can be manual or automated. There are
several different types of testing: unit testing, integration testing, alpha and
beta testing, etc. Debugging requires a lot of knowledge, skills, and expertise. It can
be supported by some automated tools available but is more of a manual process
as every bug is different and requires a different technique, unlike a pre-defined
testing mechanism.
Advantages of Debugging:

Several advantages of debugging in software engineering:

1. Improved system quality: By identifying and resolving bugs, a software
system can be made more reliable and efficient, resulting in improved overall
quality.
quality.
2. Reduced system downtime: By identifying and resolving bugs, a software
system can be made more stable and less likely to experience downtime,
which can result in improved availability for users.
3. Increased user satisfaction: By identifying and resolving bugs, a software
system can be made more user-friendly and better able to meet the needs of
users, which can result in increased satisfaction.
4. Reduced development costs: By identifying and resolving bugs early in the
development process, it can save time and resources that would otherwise be
spent on fixing bugs later in the development process or after the system has
been deployed.
5. Increased security: By identifying and resolving bugs that could be exploited
by attackers, a software system can be made more secure, reducing the risk
of security breaches.
6. Facilitates change: With debugging, it becomes easy to make changes to
the software as it becomes easy to identify and fix bugs that would have been
caused by the changes.
7. Better understanding of the system: Debugging can help developers gain
a better understanding of how a software system works, and how different
components of the system interact with one another.
8. Facilitates testing: By identifying and resolving bugs, it makes it easier to
test the software and ensure that it meets the requirements and specifications.
In summary, debugging is an important aspect of software engineering, as it helps
to improve system quality, reduce system downtime, increase user satisfaction,
reduce development costs, increase security, facilitate change and testing, and
give developers a better understanding of the system.

Disadvantages of Debugging:

While debugging is an important aspect of software engineering, there are also
some disadvantages to consider:
1. Time-consuming: Debugging can be a time-consuming process, especially
if the bug is difficult to find or reproduce. This can cause delays in the
development process and add to the overall cost of the project.
2. Requires specialized skills: Debugging can be a complex task that requires
specialized skills and knowledge. This can be a challenge for developers who
are not familiar with the tools and techniques used in debugging.
3. Can be difficult to reproduce: Some bugs may be difficult to reproduce,
which can make it challenging to identify and resolve them.
4. Can be difficult to diagnose: Some bugs may be caused by interactions
between different components of a software system, which can make it
challenging to identify the root cause of the problem.
5. Can be difficult to fix: Some bugs may be caused by fundamental design
flaws or architecture issues, which can be difficult or impossible to fix without
significant changes to the software system.
6. Limited insight: In some cases, debugging tools can only provide limited
insight into the problem and may not provide enough information to identify
the root cause of the problem.
7. Can be expensive: Debugging can be an expensive process, especially if it
requires additional resources such as specialized debugging tools or
additional development time.
In summary, debugging is an important aspect of software engineering, but it
also has some disadvantages: it can be time-consuming, requires specialized
skills, can be difficult to reproduce, diagnose, and fix, may offer limited
insight, and can be expensive.

Product Metrics in Software Engineering


Product metrics are software product measures at any stage of their
development, from requirements to established systems. Product metrics are
related to software features only. Product metrics fall into two classes:
1. Dynamic metrics that are collected by measurements made from a program
in execution.
2. Static metrics that are collected by measurements made from system
representations such as design, programs, or documentation.
Dynamic metrics help in assessing the efficiency and reliability of a program,
while static metrics help in understanding and maintaining the complexity of a
software system. Dynamic metrics are usually quite closely related to software
quality attributes. It is relatively easy to measure the execution time required
for particular tasks and to estimate the time required to start the system;
these are directly related to the efficiency of the system. Failures and the
type of failure can be logged and directly related to the reliability of the
software. On the other hand, static metrics have an indirect relationship with
quality attributes. A large number of these metrics have been proposed to try
to derive and validate the relationship between complexity, understandability,
and maintainability. Several static metrics that have been used for assessing
quality attributes are given in the table below. Of these, program or component
length and control complexity seem to be the most reliable predictors of
understandability, system complexity, and maintainability.

Software Product Metrics:
(1) Fan-in/Fan-out: Fan-in is a measure of the number of functions that call
some other function (say X). Fan-out is the number of functions which are
called by function X. A high value for fan-in means that X is tightly coupled
to the rest of the design and changes to X will have extensive knock-on
effects. A high value for fan-out suggests high overall complexity of the
control logic needed to coordinate the called components.

(2) Length of code: This is a measure of the size of a program. Generally, the
larger the size of the code of a program component, the more complex and
error-prone that component is likely to be.

(3) Cyclomatic complexity: This is a measure of the control complexity of a
program. This control complexity may be related to program understandability.

(4) Length of identifiers: This is a measure of the average length of distinct
identifiers in a program. The longer the identifiers, the more understandable
the program is likely to be.

(5) Depth of conditional nesting: This is a measure of the depth of nesting of
if statements in a program. Deeply nested if statements are hard to understand
and are potentially error-prone.

(6) Fog index: This is a measure of the average length of words and sentences
in documents. The higher the value of the Fog index, the more difficult the
document may be to understand.
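As an illustration of the fan-in/fan-out metric above, both values can be computed from a static call map. The `calls` dictionary below describes a hypothetical system: each key maps a function to the functions it calls:

```python
# Sketch of computing fan-in and fan-out from a static call map.
# `calls` maps each function to the list of functions it calls (hypothetical).
calls = {
    "main":     ["parse", "report"],
    "parse":    ["validate", "log"],
    "report":   ["log"],
    "validate": ["log"],
    "log":      [],
}

def fan_out(func):
    """Number of distinct functions that func calls."""
    return len(set(calls[func]))

def fan_in(func):
    """Number of distinct functions that call func."""
    return sum(1 for caller, callees in calls.items() if func in callees)
```

Here `log` has a high fan-in (three callers), so a change to `log` would have knock-on effects in three places, exactly the coupling risk the metric is meant to flag.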
Measuring Software Quality using
Quality Metrics
In Software Engineering, Software Measurement is done based on some Software
Metrics where these software metrics are referred to as the measure of various
characteristics of a Software.
In software engineering, Software Quality Assurance (SQA) assures the quality of the
software. A set of SQA activities is continuously applied throughout the software
process. Software quality is measured based on some software quality metrics.
There are a number of metrics available based on which software quality is measured. But
among them, there are a few most useful metrics which are essential in software quality
measurement. They are –

1. Code Quality
2. Reliability
3. Performance
4. Usability
5. Correctness
6. Maintainability
7. Integrity
8. Security
1. Code Quality – Code quality metrics measure the quality of code used for software
project development. Maintaining the software code quality by writing Bug-free and
semantically correct code is very important for good software project development.
In code quality, both Quantitative metrics like the number of lines, complexity,
functions, rate of bugs generation, etc, and Qualitative metrics like readability, code
clarity, efficiency, and maintainability, etc are measured.
2. Reliability – Reliability metrics express the reliability of software in different
conditions. Whether the software is able to provide exact service at the right time
is checked. Reliability can be checked using Mean Time Between Failure (MTBF) and
Mean Time To Repair (MTTR).
3. Performance – Performance metrics are used to measure the performance of the
software. Each software has been developed for some specific purposes. Performance
metrics measure the performance of the software by determining whether the software
is fulfilling the user requirements or not, by analyzing how much time and resource it
is utilizing for providing the service.
4. Usability – Usability metrics check whether the program is user-friendly or not.
Each software is used by an end-user, so it is important to measure whether the
end-user is happy using this software.
5. Correctness – Correctness is one of the important software quality metrics as this
checks whether the system or software is working correctly without any error by
satisfying the user. Correctness gives the degree of service each function provides as
per developed.
6. Maintainability – Each software product requires maintenance and up-gradation.
Maintenance is an expensive and time-consuming process. So if the software product
provides easy maintainability then we can say software quality is up to mark.
Maintainability metrics include the time required to adapt to new
features/functionality, Mean Time to Change (MTTC), performance in changing
environments, etc.
7. Integrity – Software integrity is important in terms of how easy it is to
integrate with other required software, which increases software functionality,
and how well integration from unauthorized software is controlled, since such
integration increases the chances of cyberattacks.
8. Security – Security metrics measure how secure the software is. In the age of cyber
terrorism, security is the most essential part of every software. Security assures that
there are no unauthorized changes, no fear of cyber attacks, etc when the software
product is in use by the end-user.
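As a small illustration of the reliability metrics named above, MTBF and MTTR can be computed directly from incident data. The figures below are hypothetical:

```python
# Hypothetical incident data, in hours.
# uptimes[i] is the time the system ran between failure i and failure i+1;
# repairs[i] is the time taken to fix failure i.
uptimes = [120.0, 200.0, 160.0]
repairs = [2.0, 4.0, 3.0]

def mtbf(uptimes):
    """Mean Time Between Failures."""
    return sum(uptimes) / len(uptimes)

def mttr(repairs):
    """Mean Time To Repair."""
    return sum(repairs) / len(repairs)

def availability(uptimes, repairs):
    """Fraction of time the system is operational: MTBF / (MTBF + MTTR)."""
    return mtbf(uptimes) / (mtbf(uptimes) + mttr(repairs))
```

With these numbers MTBF is 160 hours and MTTR is 3 hours, so the system is available roughly 98% of the time.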
Metrics for the Design Model of the
Product
Metrics simply measures quantitative assessment that focuses on countable values most
commonly used for comparing and tracking performance of system. Metrics are used in
different scenarios like analyzing model, design model, source code, testing, and
maintenance. Metrics for design modeling allows developers or software engineers to
evaluate or estimate quality of design and include various architecture and component-
level designs.
Metrics by Glass and Card :
In designing a product, it is very important to have efficient management of complexity.
Complexity means being very difficult to understand. We know that systems are generally
complex as they have many interconnected components that make them difficult to
understand. Glass and Card are two researchers who have suggested three design
complexity measures. These are given below:
1. Structural Complexity –
Structural complexity depends upon the fan-out of a module. It can be defined as:
S(k) = [fout(k)]^2
Where fout(k) represents the fan-out of module k (fan-out means the number of modules
directly subordinate to module k).
2. Data Complexity –
Data complexity is the complexity within the interface of an internal module. It is the
size and intricacy of data. For some module k, it can be defined as:
D(k) = tot_var(k) / [fout(k) + 1]
Where tot_var(k) is the total number of input and output variables going to and coming
from the module.
3. System Complexity –
System complexity is the combination of structural and data complexity. It can be
denoted as:
Sy(k) = S(k) + D(k)

When structural, data, and system complexity increase, overall architectural
complexity also increases.
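The three Glass and Card measures can be sketched directly from the formulas above; the numeric inputs used to check them are hypothetical:

```python
# Sketch of the Glass and Card design complexity measures for a module k.
def structural_complexity(fan_out):
    # S(k) = [fout(k)]^2
    return fan_out ** 2

def data_complexity(total_vars, fan_out):
    # D(k) = tot_var(k) / [fout(k) + 1]
    return total_vars / (fan_out + 1)

def system_complexity(total_vars, fan_out):
    # Sy(k) = S(k) + D(k)
    return structural_complexity(fan_out) + data_complexity(total_vars, fan_out)
```

For example, a module with a fan-out of 3 and 8 interface variables has S(k) = 9, D(k) = 2, and Sy(k) = 11.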

Complexity metrics –
Complexity metrics are used to measure the complexity of the overall software. The
computation of complexity metrics can be done with the help of a flow graph. This metric
is called cyclomatic complexity, and it is a useful indicator of the complexity of a
software system. Without the use of complexity metrics, it is very difficult and
time-consuming to determine the complexity of a product design and where risk and cost
emanate from, and it is difficult for the project team and management to solve problems.
Measuring software complexity leads to improved code quality, increased productivity,
meeting architectural standards, reduced overall cost, increased robustness, etc. To
calculate cyclomatic complexity, the following equation is used:
Cyclomatic complexity = E - N + 2

Where E is the total number of edges and N is the total number of nodes in the flow graph.

Example –
In the diagram given below, you can see the number of edges and the number of nodes.
So, the cyclomatic complexity can be calculated as –

Given,
E = 10, N = 8
So,
Cyclomatic complexity = E - N + 2 = 10 - 8 + 2 = 4
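The same calculation can be sketched in code, representing the flow graph as an adjacency list. The graph below is a hypothetical 8-node, 10-edge graph matching the worked example:

```python
# Sketch: computing cyclomatic complexity V(G) = E - N + 2 from a flow graph
# represented as an adjacency list (hypothetical 8-node, 10-edge graph).
graph = {
    1: [2],
    2: [3, 6],
    3: [4, 5],
    4: [7],
    5: [7],
    6: [7, 8],
    7: [8],
    8: [],
}

def cyclomatic_complexity(graph):
    """E - N + 2, with E counted from the adjacency lists."""
    nodes = len(graph)
    edges = sum(len(targets) for targets in graph.values())
    return edges - nodes + 2
```

The result, 4, also equals the number of linearly independent paths through the graph, which is why the metric bounds the number of test cases needed for branch coverage.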

Software Metrics
A software metric is a measure of software characteristics which are measurable or
countable. Software metrics are valuable for many reasons, including measuring software
performance, planning work items, measuring productivity, and many other uses.

Within the software development process, there are many metrics that are all connected.
Software metrics are related to the four functions of management: Planning, Organization,
Control, and Improvement.

Classification of Software Metrics


Software metrics can be classified into two types as follows:

1. Product Metrics: These are the measures of various characteristics of the software
product. The two important software characteristics are:

1. Size and complexity of software.
2. Quality and reliability of software.

These metrics can be computed for different stages of SDLC.

2. Process Metrics: These are the measures of various characteristics of the software
development process. For example, the efficiency of fault detection. They are used to measure
the characteristics of methods, techniques, and tools that are used for developing software.
Types of Metrics
Internal metrics: Internal metrics are the metrics used for measuring properties that are viewed
to be of greater importance to a software developer. For example, Lines of Code (LOC) measure.

External metrics: External metrics are the metrics used for measuring properties that are
viewed to be of greater importance to the user, e.g., portability, reliability, functionality,
usability, etc.

Hybrid metrics: Hybrid metrics are the metrics that combine product, process, and resource
metrics. For example, cost per FP where FP stands for Function Point Metric.

Project metrics: Project metrics are the metrics used by the project manager to
check the project's progress. Data from past projects are used to collect various
metrics, like time and cost; these estimates are used as a baseline for new software.
As the project proceeds, the project manager checks its progress from time to time
and compares the effort, cost, and time with the original estimates. These metrics
are used to decrease development cost, time, effort, and risk. Project quality can
also be improved; as quality improves, the number of errors, as well as the time and
cost required, is also reduced.


Advantage of Software Metrics
 Comparative study of various design methodologies of software systems.
 Analysis, comparison, and critical study of different programming languages with respect to their characteristics.
 Comparing and evaluating the capabilities and productivity of the people involved in software development.
 Preparation of software quality specifications.
 Verification of compliance of software systems with requirements and specifications.
 Making inferences about the effort to be put into the design and development of software systems.
 Getting an idea of the complexity of the code.
 Deciding whether further division of a complex module is needed or not.
 Guiding resource managers toward proper utilization of resources.
 Comparing and making design tradeoffs between software development and maintenance cost.
 Providing feedback to software managers about progress and quality during various phases of the software development life cycle.
 Allocation of testing resources for testing the code.
Disadvantage of Software Metrics
 The application of software metrics is not always easy, and in some cases it is difficult and costly.
 The verification and justification of software metrics are based on historical/empirical data whose validity is difficult to verify.
 They are useful for managing software products but not for evaluating the performance of the technical staff.
 The definition and derivation of software metrics are usually based on assumptions which are not standardized and may depend upon the tools available and the working environment.
 Most predictive models rely on estimates of certain variables which are often not known precisely.

Software Testing Metrics, their Types and Example
Software testing metrics are quantifiable indicators of the software
testing process progress, quality, productivity, and overall health. The purpose
of software testing metrics is to increase the efficiency and effectiveness of the
software testing process while also assisting in making better decisions for
future testing by providing accurate data about the testing process. A metric
expresses the degree to which a system, system component, or process
possesses a certain attribute in numerical terms. A weekly mileage of an
automobile compared to its ideal mileage specified by the manufacturer is an
excellent illustration of metrics. Here, we discuss the following points:
1. Importance of Metrics in Software Testing.
2. Types of Software Testing Metrics.
3. Manual Test Metrics: What Are They and How Do They Work?
4. Other Important Metrics.
5. Test Metrics Life Cycle.
6. Formula for Test Metrics.
7. Example of Software Test Metrics Calculation.

Importance of Metrics in Software Testing

Test metrics are essential in determining the software’s quality and
performance. Developers may use the right software testing metrics to improve
their productivity.

 Test metrics help to determine what types of enhancements are required in
order to create a defect-free, high-quality software product.
 Make informed judgments about the testing phases that follow, such as
project schedule and cost estimates.
 Examine the current technology or procedure to see if it needs any more
changes.

Types of Software Testing Metrics

Software testing metrics are divided into three categories:

1. Process Metrics: A project’s characteristics and execution are defined by
process metrics. These features are critical to the improvement and
maintenance of the SDLC (Software Development Life Cycle) process.
2. Product Metrics: A product’s size, design, performance, quality, and
complexity are defined by product metrics. Developers can improve the
quality of their software development by utilizing these features.
3. Project Metrics: Project Metrics are used to assess a project’s overall
quality. It is used to estimate a project’s resources and deliverables, as well
as to determine costs, productivity, and flaws.
It is critical to determine the appropriate testing metrics for the process. A few
points to keep in mind:

 Before creating the metrics, carefully select your target audiences.
 Define the aim for which the metrics were created.
 Prepare measurements based on the project’s specific requirements. Assess
the financial gain associated with each metric.
 Match the measurements to the project lifecycle phase for the best results.
The major benefit of automated testing is that it allows testers to complete more
tests in less time while also covering a large number of variations that would be
practically impossible to cover manually.

Manual Test Metrics: What Are They and How Do They Work?

Manual testing is carried out in a step-by-step manner by quality assurance
experts. In automated testing, test automation frameworks, tools, and software
are used to execute tests. There are advantages and disadvantages to both
manual and automated testing. Manual testing is a time-consuming technique,
but it allows testers to deal with more complicated circumstances. There are
two sorts of manual test metrics:

1. Base Metrics: Analysts collect data throughout the development and
execution of test cases to provide base metrics. These metrics are sent to test
leads and project managers in a project status report, and they serve as inputs
for the calculated metrics. Examples include:
 The total number of test cases.
 The total number of test cases completed.
2. Calculated Metrics: Data from base metrics are used to create calculated
metrics. The test lead collects this information and transforms it into more useful
information for tracking project progress at the module, tester, and other levels.
It’s an important aspect of the SDLC since it allows developers to make critical
software changes.

Other Important Metrics

The following are some of the other important software metrics:

 Defect metrics: Defect metrics help engineers understand the many


aspects of software quality, such as functionality, performance, installation
stability, usability, compatibility, and so on.
 Schedule Adherence: Schedule Adherence’s major purpose is to determine
the time difference between a schedule’s expected and actual execution
times.
 Defect Severity: The severity of the problem allows the developer to see
how the defect will affect the software’s quality.
 Test case efficiency: Test case efficiency is a measure of how effective test
cases are at detecting problems.
 Defects finding rate: It is used to determine the pattern of flaws over a
period of time.
 Defect Fixing Time: The amount of time it takes to remedy a problem is
known as defect fixing time.
 Test Coverage: It specifies the number of test cases assigned to the
program. This metric ensures that the testing is carried out thoroughly. It also
aids in the verification of code flow and the testing of functionality.
 Defect cause: It is used to determine the root cause of the defect.
Test Metrics Life Cycle

The below diagram illustrates the different stages in the test metrics life cycle.

Test Metrics Lifecycle

The various stages of the test metrics lifecycle are:


1. Analysis:
 Identify the metrics to be used.
 Define the identified QA metrics.
2. Communicate:
 Stakeholders and the testing team should be informed about the
requirement for metrics.
 Educate the testing team on the data points that must be collected in order
to process the metrics.
3. Evaluation:
 Data should be captured and verified.
 Use the collected data to calculate the value of the metrics.
4. Report:
 Create a strong conclusion for the report.
 Distribute the report to the appropriate stakeholder and representatives.
 Gather input from stakeholder representatives.

Formula for Test Metrics

To get the percentage execution status of the test cases, the following formula
can be used:
Percentage test cases executed = (No of test cases executed / Total no of
test cases written) x 100
Similarly, it is possible to calculate for other parameters also such as test cases
that were not executed, test cases that were passed, test cases that were
failed, test cases that were blocked, and so on. Below are some of the
formulas:
1. Test Case Effectiveness:
Test Case Effectiveness = (Number of defects detected / Number of test
cases run) x 100
2. Passed Test Cases Percentage: This metric indicates the percentage of test
cases that passed.
Passed Test Cases Percentage = (Total number of passed test cases / Total
number of tests executed) x 100
3. Failed Test Cases Percentage: This metric measures the proportion of all
failed test cases.
Failed Test Cases Percentage = (Total number of failed test cases / Total
number of tests executed) x 100
4. Blocked Test Cases Percentage: During the software testing process, this
parameter determines the percentage of test cases that are blocked.
Blocked Test Cases Percentage = (Total number of blocked tests / Total
number of tests executed) x 100
5. Fixed Defects Percentage: Using this measure, the team may determine
the percentage of defects that have been fixed.
Fixed Defects Percentage = (Total number of defects fixed / Number of
defects reported) x 100
6. Rework Effort Ratio: This measure helps to determine the rework effort
ratio.
Rework Effort Ratio = (Actual rework efforts spent in that phase/ Total
actual efforts spent in that phase) x 100
7. Accepted Defects Percentage: This measures the percentage of defects
that are accepted as valid out of the total defects reported.
Accepted Defects Percentage = (Defects Accepted as Valid by Dev Team /
Total Defects Reported) x 100
8. Defects Deferred Percentage: This measures the percentage of the defects
that are deferred for future release.
Defects Deferred Percentage = (Defects deferred for future releases / Total
Defects Reported) x 100

Example of Software Test Metrics Calculation

Let’s take an example to calculate test metrics:


S.No.  Testing Metric                                         Data retrieved during test case development
1      No. of requirements                                    5
2      Average no. of test cases written per requirement      40
3      Total no. of test cases written for all requirements   200
4      Total no. of test cases executed                       164
5      No. of test cases passed                               100
6      No. of test cases failed                               60
7      No. of test cases blocked                              4
8      No. of test cases unexecuted                           36
9      Total no. of defects identified                        20
10     Defects accepted as valid by the dev team              15
11     Defects deferred for future releases                   5
12     Defects fixed                                          12

1. Percentage test cases executed = (No. of test cases executed / Total no. of
test cases written) x 100
= (164 / 200) x 100 = 82%
2. Test Case Effectiveness = (Number of defects detected / Number of test
cases run) x 100
= (20 / 164) x 100 = 12.2%
3. Failed Test Cases Percentage = (Total number of failed test cases / Total
number of tests executed) x 100
= (60 / 164) x 100 = 36.59%
4. Blocked Test Cases Percentage = (Total number of blocked tests / Total
number of tests executed) x 100
= (4 / 164) x 100 = 2.44%
5. Fixed Defects Percentage = (Total number of defects fixed / Number of
defects reported) x 100
= (12 / 20) x 100 = 60%
6. Accepted Defects Percentage = (Defects accepted as valid by dev team /
Total defects reported) x 100
= (15 / 20) x 100 = 75%
7. Defects Deferred Percentage = (Defects deferred for future releases / Total
defects reported) x 100
= (5 / 20) x 100 = 25%
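The calculations above can be reproduced with a short Python sketch; the variable and helper names below are illustrative, not taken from the text:

```python
# Data from the example table: test cases written/executed and defect counts.
executed, written = 164, 200
defects, failed, blocked = 20, 60, 4
fixed, accepted, deferred = 12, 15, 5

def pct(part, whole):
    """(part / whole) x 100, rounded to two decimal places."""
    return round(part / whole * 100, 2)

print(pct(executed, written))   # 82.0  -> percentage test cases executed
print(pct(defects, executed))   # 12.2  -> test case effectiveness
print(pct(failed, executed))    # 36.59 -> failed test cases percentage
print(pct(blocked, executed))   # 2.44  -> blocked test cases percentage
print(pct(fixed, defects))      # 60.0  -> fixed defects percentage
print(pct(accepted, defects))   # 75.0  -> accepted defects percentage
print(pct(deferred, defects))   # 25.0  -> deferred defects percentage
```

Because every metric is a ratio times 100, one generic helper covers all of them; only the choice of numerator and denominator changes per metric.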
