
International Journal of Advanced Engineering, Management and Science (IJAEMS)

Peer-Reviewed Journal
ISSN: 2454-1311 | Vol-9, Issue-9; Sep, 2023
Journal Home Page: https://ijaems.com/
Article DOI: https://dx.doi.org/10.22161/ijaems.99.1

Data Centre Network Optimization


Tajammul Hussain
[email protected]

Received: 11 Jul 2023; Received in revised form: 15 Aug 2023; Accepted: 25 Aug 2023; Available online: 03 Sep 2023

Abstract— Technology has advanced rapidly in recent years, and IT and telecom infrastructure is constantly growing. Owing to the increased importance of cloud computing, many growing and emerging enterprises have shifted their computing needs to the cloud, which increases both inter-server data traffic and the bandwidth required of the Data Centre Communication Network (DCCN). The multi-tier hierarchical architecture used in modern data centres is based on traditional Ethernet/fabric switches. The researcher's goal was to improve the data centre communication network design such that the majority of its problems can be solved while still using the existing network infrastructure and with lower capital expenditure. This is achieved through the deployment of OpenFlow (OF) switches and Microelectromechanical Systems (MEMS) based all-Optical Switching (MAOS). This is beneficial in decreasing network latency, power consumption, and CAPEX, in addition to helping with scalability concerns, traffic management, and congestion control. Additionally, by implementing new virtualization techniques, we may improve the DCCN's resource utilization and reduce cabling problems. To address data centre challenges, the researcher proposes an entirely new flat data centre communication network architecture named "Hybrid Flow based Packet Filtering/Forwarding and MEMS based all-Optical Switching" (HFPFMAOS), which is ultimately controlled by a Software Defined Networking (SDN) controller.
Keywords— Data Centre, Network Optimization, Software defined Networking, OpenFlow

I. INTRODUCTION
1.1 Background
A data centre is a specialized building that houses various IT resources, such as servers, data storage systems, and networking and communication devices. Data centres have become increasingly important in IT and data networks over the last few years as a result of the explosive expansion of information technology and internet consumption. Additionally, as a result of the emergence of cloud computing, server virtualization, social media, and mobile data, the number of servers deployed, the storage space, and the interconnected network equipment in data centres are growing tremendously. A data centre IT network's primary elements include:
• Data Centre Communication Network
• High Performance Computing Network
• Storage Area Network
Data centre traffic can be categorized into three types:
• Traffic flows within the data centre.
• Traffic flows between data centres.
• Traffic flows between the data centre and the end user.
Data centre IP traffic is increasing at a fast rate [1], as depicted in the figure below. The primary driver of this expansion is cloud computing, which by 2023 is predicted to account for four-fifths of all traffic in data centres [2].

Fig 1.1: Global Data Centre Traffic, 2016–2021

Global data centre traffic by destination is shown in the figure below.
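The three traffic categories above can be captured in a few lines of code. The category labels and the (src, dst) encoding below are illustrative assumptions, not terminology from the paper:

```python
from typing import Optional

# Toy classifier for the three data-centre traffic categories listed
# above. An endpoint is the name of the data centre hosting it, or
# None if it is an end user outside any data centre.

def classify_flow(src_dc: Optional[str], dst_dc: Optional[str]) -> str:
    if src_dc is None or dst_dc is None:
        return "dc-to-user"   # traffic between a data centre and an end user
    if src_dc == dst_dc:
        return "intra-dc"     # traffic that stays inside one data centre
    return "inter-dc"         # traffic travelling between data centres

print(classify_flow("dc-east", "dc-east"))  # intra-dc
print(classify_flow("dc-east", "dc-west"))  # inter-dc
print(classify_flow("dc-east", None))       # dc-to-user
```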
This article can be downloaded from here: www.ijaems.com
©2023 The Author(s). Published by Infogain Publication. This work is licensed under a Creative Commons Attribution 4.0 License.
http://creativecommons.org/licenses/by/4.0/
Hussain International Journal of Advanced Engineering, Management and Science, 9(9) -2023

Fig 1.2: Global Data Centres Traffic by Destination

It is evident from the above chart that the majority of traffic occurs inside data centres, with most of that traffic travelling between servers and edge switches. The figure below depicts the traffic forecast between the various tiers of the data centre communication network.

Fig 1.3: Traffic Forecast at Different Layers

1.2 Problem Statement
Limited server capacity and high oversubscription rates, network congestion and latency issues, management of internal data-centre traffic, support for a variety of traffic patterns, scalability, agility, effective resource utilization, management, fault handling, and troubleshooting are among the problems that the DCCN must deal with. When building a data centre, these issues must be taken into consideration. The problems or difficulties that the DCCN will encounter are as follows:
• Limited capacity between servers and a high oversubscription rate as traffic moves through the layers of switches in a hierarchical fashion.
• Problems with congestion and high network latency, which have a negative impact on delay-sensitive applications and affect network performance as a whole.
• Controlling the nearly 75% of the overall traffic that is generated inside the data centre (i.e., east-west traffic between racks).
• Ease of management, fault handling, and troubleshooting in the network.
• Support for a range of traffic patterns, such as long-lasting, high-bandwidth "elephant" flows and short-lasting persistent "mouse" flows.
• As a network grows, difficulties with scalability, agility, and efficient resource use arise.
1.3 Research Contribution
This research develops a low-cost, capital-efficient solution that improves the scalability, traffic management, congestion control, and resource usage of data centre design. The academic contribution is the proposed solution, HFPFMAOS, which uses OpenFlow tools for:
• Congestion reduction
• Improvement in network latency
• Implementing scalability
• Traffic management
• Reducing power consumption
• CAPEX reduction
The practical contribution of this research is an innovative data centre topology architecture with improved performance.
1.4 Objectives
• Application of Hybrid Flow based packet filtering/forwarding and MEMS based all-optical switching by utilizing different virtualization techniques.
• Efficient utilization of data centre network resources:
o Reducing congestion
o Improving network latency
o Implementing scalability
o Traffic management
• Reducing power consumption and saving CAPEX.
• Provisioning of centralized intelligence for the whole network.
1.5 Research Questions
• How can a solution based on HFPFMAOS and MEMS-based all-optical switching be designed?
• How can efficient utilization of data centre network resources be achieved?
• How can congestion be reduced, and network latency, scalability, and traffic management be improved?
• How can power consumption be reduced and CAPEX saved?
• How can centralized intelligence for the whole network be implemented?
• How can network virtualization techniques be implemented?
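The oversubscription problem named in the problem statement can be made concrete with a small calculation: the ratio of a switch tier's aggregate server-facing bandwidth to its uplink bandwidth. The port counts and speeds below are hypothetical values chosen only to illustrate the arithmetic:

```python
# Oversubscription ratio of a switch tier: total downstream capacity
# divided by total upstream capacity. Values greater than 1 mean the
# attached servers cannot all burst to line rate at once.

def oversubscription(down_ports, down_gbps, up_ports, up_gbps):
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# Hypothetical TOR: 48 x 10G server-facing ports, 4 x 40G uplinks.
tor = oversubscription(48, 10, 4, 40)
print(f"TOR oversubscription: {tor:.1f}:1")  # 3.0:1

# End-to-end oversubscription compounds across tiers.
agg = oversubscription(16, 40, 4, 100)       # 1.6:1 at aggregation
print(f"End-to-end: {tor * agg:.2f}:1")      # 4.80:1
```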


1.6 Research Methodology
OpenFlow (OF) switches were used in the access layer as TOR switches. OF messages were captured using Wireshark. Traffic was generated between various hosts (i.e., server machines) using the iperf testing tool. The performance of the network and the delays were calculated and analyzed. For clarity, a reference data centre topology was used. The MEMS switch was aggregated with a conventional switch and connected to the OpenFlow (OF) switches installed at the access layer. Network traffic was generated between various hosts and the flow delays were observed. Flows were then directed via the MEMS switch while the performance of the network was simultaneously monitored.
1.7 Software
The researcher used the following software for carrying out the requisite analysis:
a) Mininet
b) PuTTY
c) Wireshark
d) Oracle VM VirtualBox
e) MiniEdit
f) ODL Controller
g) Xming

II. DATA CENTRE (DC)
2.1 Introduction
A data centre is a purpose-built facility designed to contain, supervise, and provide IT resources [3]. A data centre structure includes very specific building infrastructure, backup plans for power shortages or unavailability, cooling systems and special equipment such as chillers, servers, mainframes, and High Performance Computing (HPC) networks to handle a variety of communication data, as well as well-structured space, sophisticated cabling, various software applications, monitoring centres, and well-equipped physical security systems.

Fig. 2.1 Basic Data Centre Diagram

2.2 Evolution
The first data centre phase (called Phase 1.0) began in the 1950s and consisted of computer rooms with mainframe systems' CPUs and peripheral devices including storage, terminals, and printers, among other things. These monolithic, software-based centralized systems gave users little control over IT infrastructure and made heavy use of the available resources. Phase 2.0 of data centres began in the 1980s, when client-server application models gained prominence. In this phase, servers replaced mainframe computers; servers are fairly compact and accessible through apps installed on client PCs.
In this phase, servers that carry out specific tasks are typically installed close to clients rather than in the main IT infrastructure, to avoid paying excessive bandwidth costs. When the internet began to take off in the 1990s and the use of web-based applications increased, this mandated that servers be placed centrally in properly constructed data centres. Thus, the data centre that arose from the mainframe computer room gained significance in the 1990s. Fig. 2.2 [3] depicts the timeline of data centre evolution.

Fig. 2.2 Evolution Phases

Phase 3.0 began around the year 2000; because technology evolved so quickly at this time, data centre space, electricity, and the IT network began to reach capacity. As a result, it became expensive to expand existing facilities or to build new ones. According to 2005 Cisco IT research, around this time DC networks and servers were typically used at only 20% of their capacity. Application silos with discrete sets of servers, networks, and storage resources were the primary contributors to this problem. Later, a number of network consolidation projects were completed that provided new features, enhanced resource utilization, decreased the number of network components and processes, and improved operational effectiveness.
2.3 Types
There are two types of data centres [4][5]:
• Private

• Cloud Based
2.3.1 Private
This is on-site hardware that offers computing services and stores data within a local network managed by an organization's IT department. These data centres are owned and run by small private/public businesses, governmental organizations, etc.
2.3.2 Cloud Based
Another name used for cloud data centres is co-location/managed services provider. Co-located data centres are created and maintained to offer specialized infrastructure and provide various services to external sources and parties. The major elements that force businesses either to go for cloud computing or to develop their own data centres are:
• Business market requirements
• Data privacy issues
• The increasing cost of related equipment/infrastructure
2.4 Data Centre Communication Network (DCCN)
Data services and data transit from the server to clients or other servers are the major goals of the DCCN. The following qualities are now required of data centre communication networks from the perspectives of dependability, expansion, and efficiency:
• Availability
• Scalability
• Flexibility
• Efficiency
• Predictability
Ethernet is the most ubiquitous and popular DCCN protocol for data lines, with interface speeds ranging from 10 Mbps to 100 Gbps. However, as the volume of traffic through the DCCN increased, various constraints compelled the development and adoption of new virtualization technologies as well as the optimization of the data centre network through the use of cutting-edge networking strategies like MEMS all-optical switching alongside conventional packet switching.
2.5 DC Tiers
The various tiers of data centres, from Tier-1 to Tier-4, are shown in the figure below.

Fig. 2.3 DC Tiers

2.6 Design Factors for Data Centre Network
The following factors must be taken into account when planning and deploying a DCCN architecture [3]:
• Failure Impact & Application Resilience: All local and distant users will be unable to access their apps if the DCCN fails.
• Connectivity between the server and host: Servers must be linked together using several redundant Ethernet connections.
• Traffic Direction: In the DCCN, the majority of traffic is between servers inside the data centre.
• Agility: This term refers to the capacity for assigning any service or application to any server at any moment in the data centre network while maintaining sufficient performance isolation and security amongst various applications.
• Growth Rate: Increasing numbers of servers, switches, and switch ports, etc., as customers and their data traffic grow.
• Application Bandwidth Demand & Oversubscription, in case demand increases.
2.7 DCCN Challenges
Challenges for the data centre communication network are as follows:
• Because router links carry traffic going in and out of the data centre, the bandwidth available over these links for server communication across various data centre areas is constrained [8].
• Network congestion and latency issues. In a data centre, where many applications are active and producing distinct packet flows that move both within and across the DCCN, a multi-tier hierarchical structure with numerous hops, there are times when packet aggregation creates a bandwidth constraint. As the receiver's buffer space for absorbing packet


flows is running short, the incoming rate from numerous transmitters surpasses the rate at which the receiver can handle packet flows. This congestion causes the delay time to lengthen and the receiver to begin dropping packets, which has an impact on various applications, particularly those that depend on latency, and lowers the network's overall performance.
• Managing the read/write, backup, and replication traffic that moves within the data centre due to the separation of application servers, databases, and storage, which accounts for over 75% of all traffic (i.e., east-west traffic between racks). Additionally, because jobs are distributed across numerous servers and assigned in parallel processing, the DCCN's internal traffic is increased.
• Support for a range of traffic patterns, such as long-lasting, high-bandwidth "elephant" flows and short-lasting persistent "mouse" flows. Examples of very large data producers include particle accelerators, planes, trains, metros, undergrounds, self-driving cars, patient data in healthcare centres, etc. In 2014, the Boeing 787 produced 40 terabytes of onboard data every hour, of which 0.5 terabytes per hour was transmitted to a data centre. The majority of flows within a data centre are typically mouse flows, while only a few elephant flows contain the majority of the data. Managing every sort of traffic at once while keeping the overall network delay within bounds is therefore a major issue.
• As a network grows, difficulties with scalability, agility, and efficient resource use arise. In addition to making fault handling and debugging more challenging as the number of communication devices increases, this will also result in an increase in management overhead bytes on the network.
• As additional devices are added, energy requirements and consumption will increase. Currently, in traditional data centres, the top-of-rack (TOR) switch, also known as the fabric interconnect, is where all of the servers in a rack are connected. Each rack has a number of servers, either rack-mount servers or chassis-based servers. On one side, the aggregation layer provides communication between data centre clusters, allowing packet-based east-west traffic to pass through, while on the other, it also provides connectivity with the core network, allowing north-south traffic to pass. As a result, the aggregation layer is the layer where problems arise, because 75% of the data centre's east-west traffic passes through it, causing a bandwidth bottleneck and an increase in latency.
2.8 DCCN Topologies
The architecture of a data centre communication network (DCCN), which serves as the foundation for its applications, must be scalable, reliable, latency-free, and have adequate capacity to prevent network congestion. These qualities heavily rely on the network architecture in which the DCCN is installed and are crucial to the overall effectiveness of the data centre. The figure below depicts the common DCCN architecture, as stated by Cisco [9].

Fig 2.4 Common DCCN Architecture

A layered architecture has the advantage of enhancing the flexibility, modularity, and resilience of networks. Every layer in this design performs a separate role for a unique profile; for example, the routers in the core layer (i.e., the aggregation router and the border router) use routing protocols to decide how to forward both ingress and egress traffic. Core switches, which offer incredibly flexible, scalable connections with numerous aggregation layer switches, make up the Layer 2 domain. This layer serves as a default gateway and a focal point for server IP subnets. Between numerous aggregation layer switches, it is typically used to transfer traffic between servers, and stateful network devices such as server load balancers and firewalls are attached to this layer. The switches that make up the access layer are typically used by servers to connect to and communicate with one another. This layer contributes to better network administration and is in charge of exchanging any kind of traffic between servers, including broadcast, multicast, and unicast.
The bottom of the above illustration shows a server layer stacked in server racks. Numerous virtual machines are operated by these servers and assigned to various data centre applications. To manage and respond to requests from external clients arriving through the internet, an application is typically linked to several public IP addresses. These requests are split up among the pool of servers for processing by

specialized equipment called a load balancer that is redundantly connected to the core switches [10].
2.8.1 Fat-Tree
Al-Fares and colleagues [11] proposed this DCCN architecture. It makes use of the fat-tree idea and aims to boost fault tolerance, end-to-end cross-sectional bandwidth, scalability, and cost-effectiveness. With this topology, the core and several pods make up the entire infrastructure. Servers, TORs (access switches), and aggregation switches make up the pods.

Fig 2.5 Fat-Tree

The topology adapts the data lines in the DCCN architecture to give a customized IP addressing scheme and multipath routing algorithm. The figure below compares 3-tier and fat-tree systems.

Fig 2.6 Comparison of 3 Tier/Fat-Tree Topologies

2.8.2 B-Cube
This structure [12] is suggested for the modular DCs created within containers and offers quick installation and migration, but scalability is limited because container data centres are not meant to be scaled up. Layers of COTS (commodity off-the-shelf) switches and servers make up this architecture, and they are in charge of packet forwarding. At level 0, the BCube architecture is made up of many BCube0 modules, each of which has a switch with n ports connected to n servers. n switches make up the BCube1 module at level 1, which is connected to n BCube0 modules or networks. One server from each BCube0 module is linked to each switch of BCube1.

Fig 2.7 B-Cube Topology

2.8.3 DCell
Guo et al. proposed a recursively specified design. Mini switches and servers are used in this very marketable design to forward packets. According to the figure below, DCell1 is made up of n+1 DCell0 modules, each of which is connected to the others by a single link via its servers.

Fig 2.8 DCell Topology Architecture

2.8.4 VL2
A fat-tree-based DCCN design called Virtual Layer 2 (VL2) aims to provide a flat, automated IP addressing scheme that makes it possible to install servers anywhere in the network without manually configuring their IP addresses. This makes it possible for any service to be allocated to any network server. In order to scale to a large server pool, it makes use of address resolution based on end systems [14].

Fig 2.9 Network Architecture VL2

In addition to all these architectures, there is a need for an architecture which can handle the present issues the DCCN is encountering as well as give backward compatibility with existing infrastructure. In this paper, I suggest the HFPFMAOS data centre design, which can

partially address the shortcomings of the existing architecture while simultaneously being backwards compatible with it.
2.8.5 HFPFMAOS
The data centre communication network (DCCN) architecture called HFPFMAOS is suggested as a solution to the various problems that data centres presently face, while maintaining the existing infrastructure. In particular, in this architecture standard switches are used at the aggregation layer. By monitoring all of the flows across the OF-equipped switches, this gives consolidated network intelligence and enables high-bandwidth data paths to be established dynamically between the data centre servers whenever and wherever they are needed. Additionally, it lowers network management overhead bytes and aids quick defect detection and correction. The conceptualized design is shown below.

Fig 2.10 Architecture – HFPFMAOS - DCCN's

III. MEMS OPTICAL SWITCHING
3.1 Introduction
Interconnection between DCNs demands a significant capacity increase due to the exponential daily growth of traffic. Therefore, the best way to relieve network congestion caused by electronic switches is to use optical switching. Although optical fibre has a very high bandwidth, it is constrained by electronic switching and transmission capacity. As switching is done in the optical domain, all-optical switching can therefore play an important role.
3.2 OOO Switching (All Optical)
By modifying optical cross-connections, a circuit is created from the ingress node to the egress node in all-optical switching, and data travels via this circuit entirely in optical form. This can help address the electronic switched network bottleneck, since data is switched directly in the optical domain rather than going through many optical-electrical-optical conversions.
3.3 OOO Switching Benefits
• POD-based architecture is typically used in data centres, which results in low use of computing resources. However, optical switching enables sharing of computing resources among several PODs for optimal effectiveness.
• Increases revenue by quickly deploying new services.
• Less power is lost compared to a traditional electrical switch.
• On-demand capacity creation and reallocation.
• A remarkable increase in the data centre's operational efficiency as a result of the smooth functioning of applications and the efficient use of computational resources and 3D optical MEMS switching technology.
• Boosts defect detection and quality of service.
3.4 Technology using OOO Switching
Different optical switching methods [17] are used by different switches, as shown in the table below.

Table 3.1 Contrast of various Optical switching technologies

3.5 Micro Electro Mechanical Systems (MEMS)
MEMS (Micro-Electro-Mechanical Systems) is the branch of engineering that merges computer technology with minuscule mechanical components found in semiconductor chips, including actuators, gears, sensors, valves, and mirrors [31]. A MEMS device contains mechanical components, like micro-mirrors or sensors, which can be easily adjusted as needed and reflect optical signals from input to output fibres placed within a tiny silicon chip with embedded micro-circuitry. A light beam can be reflected from the input to the output by tilting/rotating the mirror at varied angles, thus directing the traffic to the desired ports.
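Functionally, an all-optical MEMS switch behaves like a reconfigurable one-to-one mapping of input fibres to output fibres: steering a mirror pair installs one entry of the mapping, and light then crosses transparently with no optical-electrical-optical conversion. The class below is a minimal sketch of that behaviour; the class and method names are invented for illustration and do not come from the paper:

```python
# Minimal model of a MEMS optical cross-connect: a reconfigurable
# one-to-one input->output port mapping. Configuring a mirror pair is
# modelled as adding one mapping entry; light follows the mapping
# transparently (no O-E-O conversion), so data rate never appears here.

class MemsSwitch:
    def __init__(self, ports):
        self.ports = ports
        self.cross = {}                 # input port -> output port

    def connect(self, inp, out):
        """Tilt the mirror pair so light entering `inp` exits at `out`."""
        if out in self.cross.values():
            raise ValueError(f"output {out} already in use")
        self.cross[inp] = out

    def forward(self, inp):
        """Return the output port for light on `inp`, or None if unset."""
        return self.cross.get(inp)

sw = MemsSwitch(ports=8)
sw.connect(0, 5)
sw.connect(1, 2)
print(sw.forward(0))  # 5
print(sw.forward(3))  # None (no circuit configured)
```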


3.6 Design and Principle
The figure below shows the structure of a 3D MEMS optical switch [16].

Fig. 3.1 Optical Switch

Fig 3.2 Standard Format OS

Fig 3.3 A toroidal concave mirror is used in a 3D MEMS optical switch.

The tilt angle of the MEMS mirror determines the incident angle at which the optical beam from the input collimator array meets the concave mirror after being reflected by the input MEMS mirror array. The light beam is then converged again, with a shift in position Δ, by this toroidal concave mirror, which performs an optical Fourier transform. This shift Δ can be expressed quantitatively as follows using the concave mirror focal length f and the MEMS mirror tilt angle δ:

Δ = f × 2δ

512 ports can be obtained by using a 2×2 array of collimator arrays with 128 optical ports and MEMS mirror arrays with 128 ports. This makes it easier to fabricate high-capacity optical switches with a negligibly small cumulative pitch error.

Fig 3.4 Schematic of 3D 512 MEMS optical ports switch

3.7 MEMS Mirror Structure
The mirror structure is shown in the figure below.

Fig 3.5 MEMS tilt mirror cross-sectional schematic

In order to generate an air gap between the electrodes and the mirrors, the substrates are individually produced and then linked together by flip chips. An electrostatic force is produced when a voltage is applied between the electrodes and the mirrors, and this force moves the mirrors. Therefore, the tilt angle of these mirrors can be adjusted by supplying a driving voltage to each electrode.
3.7.1 Mirror Substrate
The mirrors are formed of a single silicon crystal, giving the mirror incredibly steady movement. The MEMS mirror is supported on the X-axis by two folded torsion springs, and the gimbal ring is connected to the base on the Y-axis by a second pair of folded torsion springs [16]. The mirror cannot be dragged down and come into contact with the electrodes because of the great rigidity of these springs in the Z direction relative to the torsion direction. As a result, moving the mirror on the X and Y axes makes it simple to reflect an optical beam in 3D space in a particular direction.

Fig 3.6 SEM image of the high-aspect torsion spring and MEMS mirror
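A quick numerical check of the beam-shift relation Δ = f × 2δ and of the 512-port arrangement described above; the focal length and tilt angle used below are arbitrary illustrative values, not parameters from [16]:

```python
import math

# Beam shift at the collimator plane: delta = f * 2 * tilt, where f is
# the concave-mirror focal length and tilt is the MEMS mirror tilt
# angle in radians (reflection doubles the deflection angle).
def beam_shift(focal_len_mm, tilt_deg):
    return focal_len_mm * 2 * math.radians(tilt_deg)

print(round(beam_shift(30.0, 2.0), 3))  # ~2.094 mm for f = 30 mm, 2 deg tilt

# Port count of the 3D switch: a 2x2 arrangement of 128-port
# collimator/mirror arrays yields 512 optical ports.
print(2 * 2 * 128)  # 512
```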

3.7.1.1 Mirror Electrode Substrate Fabrication Method


A layer of polyimide is then spun over the created mirror
pattern in step 7[16] (a) (b). The third process involves dry
etching to create the pattern for the mirror opening (d) and
resistant mask (c) on the opposite side of the bulk Si. The
polyimide coating serves as an etching stopper. Following
the dicing procedure, the polyimide layer is removed using
oxygen plasma (g). "In-process sticking of the mirror" is the
name of the dry procedure used to fabricate mirrors. AS
Since the top surface of the mirror has an au coating, which
gives optical beams high reflectivity, both sides of the
mirror have this coating. Peak-to-valley difference can be Fig 3.9 Driving Electrode Substrate’s Fabrication flow
utilized to assess the mirror surface's flatness, which in our process
case is 0.05 m, as the optical features of the switch made up
of a MEMS mirror array depend on it.
3.8 MEMS Mirror Movement
Figure below shows connection between the voltage applied
and tilt angle of mirror.

Fig 3.7 Mirror substrate fabrication process flow diagram


3.7.2 Driving of Electrode Substrate
Mirrors move due to this electrostatic force.
3.7.2.1 Fabrication of Driving Electrode Substrate
Fig 3.10 Relationship between Voltage applied &
Tilt Angle of Mirror+

Size, flatness, fill factor, and scan angle of the micro


mirrors, along with their scan angle and fill factor, all have
an impact on how many ports the 3D MEMS switch can
support.
3.9 Key Advantages of MEMS Switches in DCCN
• They offer any-to-any communication between
servers, allowing for very low latency and the
transfer of enormous amounts of data. which is
Fig 3.8 Electrodes’ SEM snap much lower than IP switches' latency.
• Can handle any data rate.
• They can circumvent the packet-based aggregation
network and offer direct, high-capacity pure
optical data links between any TOR switches to
reduce network latency and shift sensitive traffic
between servers as needed.
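As a rough illustration of the voltage–tilt relationship in Fig 3.10: in an electrostatically actuated torsion mirror the driving torque grows with the square of the applied voltage, so at small angles the tilt is approximately proportional to V². The gain constant below is an arbitrary illustrative value, not a parameter of the fabricated device:

```python
# Illustrative small-angle model of a torsion-mirror actuator:
# electrostatic torque ~ V^2, linear spring restoring torque ~ angle,
# so the equilibrium tilt is roughly K * V^2.

K = 0.0004  # deg/V^2 -- arbitrary illustrative gain, not a measured value

def tilt_angle_deg(voltage):
    """Approximate mirror tilt (degrees) for a given drive voltage (V)."""
    return K * voltage ** 2

for v in (0, 50, 100, 150):
    print(f"{v:3d} V -> {tilt_angle_deg(v):.2f} deg")
```

The quadratic shape, not the absolute numbers, is the point: doubling the drive voltage roughly quadruples the tilt in the small-angle regime.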

This article can be downloaded from here: www.ijaems.com 9



IV. SOFTWARE DEFINED NETWORKING (SDN)

4.1 Introduction
Software Defined Networking (SDN) is an architectural approach to creating and maintaining networks. The major aim of this kind of architecture is to facilitate programmability of the control plane by separating the devices' control plane from their data/forwarding plane.

4.2 SDN Architecture
Control and data planes at present coexist on the same network device; to put it another way, all data-flow functions, such as switching, forwarding, and routing, as well as the various protocols used to make these decisions, are housed on that one device. The fundamental objective of SDN, according to the definition in [7], is to "provide open user-controlled administration of the forwarding hardware of a network element." The following elements are part of an architecture based on software defined networking (SDN), as shown in the below figure:

Fig 4.1: Architecture for Software Defined Networking (SDN)

4.2.1 Application Layer
Applications can include network characteristics like forwarding schemes, manageability, and security policies, among others. With the aid of the controller, the application layer can abstract the overall view of all network components and use that data to give the control layer the proper direction. In our approach, management programs that offer a CLI/GUI for the network devices operate alongside network monitoring tools.

4.2.2 Control Plane
It is the network's "brain" and is in charge of managing and programming the forwarding plane. The control plane employs a topology database to give an abstract picture of all the network components.

4.2.3 Data Plane
The infrastructure layer is the lowest layer in the SDN network architecture and includes forwarding network elements (routers, switches). Data transmission, statistics collection, and information monitoring are among its primary functions. In our design, MEMS switches are present at the aggregation layer while OF switches are located at the access layer.

4.2.4 Northbound APIs
These are software interfaces between the controller's software modules and the SDN applications. The interface between the application and control planes is informally called the Northbound interface.

4.2.5 Southbound APIs
Southbound APIs are standardized APIs (application program interfaces) that allow SDN controllers to communicate with switches, routers, and other forwarding network hardware [18].

4.2.6 East-West Protocols
In a multi-controller-based architecture, these protocols are used to control how different controllers communicate with one another. In a broader sense, SDN presents a networking architecture in which choices regarding the routing of data traffic are made external to the actual switching hardware. Therefore, genuine network intelligence can be achieved in data centres when SDN, PFFS, and MOOOS are applied. Second, a secure standard protocol should exist to enable communication between SDN controllers and network devices. This logical architecture may be implemented differently by different vendors' equipment, but from the standpoint of an SDN controller, it performs like a consistent logical switch. OpenFlow (OF) is the first and most extensively utilized interface between the infrastructure layer (data plane) and the control plane because it satisfies both of these needs.

4.3 Open Flow (OF)
The Open Networking Foundation (ONF) has standardized OpenFlow as the most popular southbound interface of the SDN architecture. OF's first version, version 1.0, was created by Stanford University and then adopted by ONF. Version 1.5 of OF was released in December 2014. It is an open standard communication protocol that defines how one or more control servers interact with switches that are SDN compliant. The flow table entries are installed in the OF-compliant switches by an OF controller so that traffic is sent in line with these flow entries. These flow tables can be used by network administrators to alter network configuration and data traffic patterns. Additionally, this protocol offers management tools for


packet filtering and topology change control. The OF protocol is supported by nearly all of the well-known vendors, including Cisco, HP, IBM, Brocade, and others. There are two types of switches for OF or SDN compliance.

4.3.1 Open Flow-Only Switches
These switches, which solely support OF operation and process all packets using the OF pipeline, are also known as Pure SDN switches or SDN-only switches.

Fig 4.2 Architecture of Open Flow-Only Switch

4.3.2 Hybrid Switches with Open Flow
The same are shown in the below figure.

Fig 4.3 Architecture of Hybrid Switch

4.3.3 Basic Architecture of Open Flow (OF)
The below figure shows the basic OpenFlow architecture.

Fig 4.4 Architecture of Basic Open Flow (OF)

Open Flow switches are deployed in the Access layer as TORs, which are connected to servers or end hosts on one end and to traditional switches and MEMS switches at the Aggregation layer on the other. As a control plane, the ODL controller is set up to connect with the OF switch using the OF protocol via a secure channel using SSL or TLS (Transport Layer Security). Three tables make up the OF switch's logical design: the Flow Table, the Group Table, and the Meter Table.

4.3.4 Flow Table
It is the fundamental component of the logical switch design that determines the action to be taken on each incoming packet by comparing it to a specific flow table made up of numerous flow entries. A packet might go via one or more flow tables that operate as a pipeline. As demonstrated in fig. 4.5, a controller can add, remove, and alter flow entries in an OF switch's flow tables either proactively or reactively (in reaction to an incoming packet).

Fig 4.5 Flow Table Entries Model

In the reactive model, flow entries can't be generated until something happens, i.e. after the receipt of the respective packet, and the subsequent action is taken in compliance with the instructions of a controller; in the proactive model, flow entries are generated as required in advance and the process is completed without checking with the controller. When a packet first reaches a switch, the switch's OF agent software looks up the hardware flow table (ASIC, for hardware switches) or the software flow table (for virtual switches). This information exchange is shown in the figure below.

Fig 4.6 Pipeline Processing

The figure below depicts the packet processing flow chart.
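The core of this processing — a priority-based lookup across flow entries — can be sketched in a few lines. The entry fields, actions, and values here are illustrative assumptions, not OpenFlow wire structures:

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    priority: int
    match: dict      # match fields, e.g. {"eth_dst": "...", "tcp_dst": 80}
    action: str      # e.g. "output:8" or "to_controller"
    counter: int = 0

def lookup(flow_table, packet):
    """Return the action of the highest-priority matching flow entry,
    or 'table_miss' (hand the packet to the controller) when none match."""
    matches = [e for e in flow_table
               if all(packet.get(k) == v for k, v in e.match.items())]
    if not matches:
        return "table_miss"
    best = max(matches, key=lambda e: e.priority)
    best.counter += 1                      # update per-entry statistics
    return best.action

table = [
    FlowEntry(priority=10,  match={"eth_dst": "aa:bb:cc:dd:ee:01"}, action="output:7"),
    FlowEntry(priority=200, match={"ip_dst": "10.0.0.4", "tcp_dst": 80}, action="output:8"),
]

print(lookup(table, {"ip_dst": "10.0.0.4", "tcp_dst": 80}))   # output:8
print(lookup(table, {"ip_dst": "192.0.2.9"}))                 # table_miss
```

Note how a packet that matches no entry yields the table-miss result, which in a real switch triggers the Packet-In message to the controller.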

Fig 4.7 Open Flow (OF) Packet Processing Flow Chart

Table 4.1 Main components of Flow Entry

4.3.4.1 Match Fields Values
It is important here not to pick any packets without matching the field values shown in the below table:

Table 4.2 Match Field of Flow Entry

4.3.4.2 Priority Allocation
Setting a priority along with the match fields results in a unique identity for a flow entry.

4.3.4.3 Counter Checks
It contains information such as the number of bytes actually received and the number of missed packets.

4.3.4.4 Instructions
Instructions are provided for side-by-side modifications in pipeline processing or a desired set of actions.

4.3.4.5 Timeouts
It is defined as the maximum time for which any switch remains idle before the flow entry expires.

4.3.4.6 Cookies
These are opaque data values selected by the relevant controller. The OF controller can utilize them to filter requests for flow deletion and modification as well as flow statistics entries. When processing packets, these values are not used.

4.3.5 Group Table
In order to perform various operations that may have an impact on several flows, a Flow Table may send its flows to a Group Table. A group table is made up of group entries that have group identifiers. The principal elements of a group entry are shown below:

Table 4.3 Group Table

4.3.5.1 Identifier
A 32-bit unique identifier identifies the group entry.

4.3.5.2 Type of Group
It is used to manage/maintain group types, depicting them as "Required" and "Optional".

4.3.5.3 Packet Counters
These provide the actual count of packets generated and processed by a group.

4.3.5.4 Action Buckets
An action bucket consists of a complete flow of processes which are performed in a specific pattern for changing packets and then sending them to a port. This collection of actions is always carried out as a set. A group entry may have zero or more buckets, with the exception of groups specified as "Required: Indirect," which include exactly one bucket. If a group doesn't have a bucket, it simply drops the packet.

Fig 4.8 OpenFlow packets pipeline Processing

4.3.6 Meter Table
The meter table makes it possible to perform flow-related actions. It is made up of meter entries that specify per-flow meters. DSCP (Differentiated Services Code Point) based metering also enables the division of packets into different classes according to data rate. Instead of being attached to ports, meters are directly connected to flow entries. After measuring, they can influence the overall rate of all packet flows and be configured to perform a specific action. The key parts of a meter entry are presented below:

Table 4.4 Main components of Meter Entry

4.3.6.1 Identifier of Meter
A meter entry is identified by a distinct 32-bit identifier.

4.3.6.2 Meter Bands
This defines how to handle packets by listing the rates of a meter's bands in an unordered list. A meter band is used for measuring the response of the meter towards packets of various rate ranges, which are calculated from all packets received by that particular meter from all inputs. Only one meter band processes a packet.

Table 4.5 Meter Band

4.3.6.2.1 Band Types
The ways packets are processed.

4.3.6.2.2 Target Rate
The required rate of a meter band.

4.3.6.2.3 Burst
It establishes the meter band's granularity.

4.3.6.2.4 Counters
Processing of packets by a meter band updates the counters.

4.3.6.2.5 Type-Specific Arguments
Parameters which are in essence optional for particular band types. Meter bands are ranked depending on the desired rate; a packet is processed by only the one meter band whose target rate is exceeded by the measured rate.

4.3.6.3 Counters
They are used to figure out how many packets a meter has processed. In the same flow table, multiple flow entries may utilize various meters, the same meter, or no meter at all.

Fig. 4.9 Hierarchical DSCP metering and meters

4.3.7 Protocols for an Open Flow Channel
A fully secure channel is provided by the OF channel, which is typically encrypted by TLS (Transport Layer Security) or SSL. The protocol that it employs for these purposes is known as the OF protocol [20].

4.3.7.1 Controller-to-Switch Messages
Messages that the controller initiates to control the switch's logical state, such as configuration, flow tables, and group tables.

4.3.7.2 Asynchronous Messages
Initiated by the switch, these messages comprise status updates to inform the controller of changes to the switch's condition and network events. This category also includes the "Packet-In" message that switches utilize to transfer packets to OF controllers when their flow tables have no match.

4.3.7.3 Symmetric Messages
Hello messages are initiated by the controller or switch right after the connection is made between the two devices, while Echo messages are used to measure the latency and bandwidth of the connection between the switch and controller as well as to confirm that the device is functioning as intended.

4.4 Benefits of SDN
SDN has the potential to be a promising technology for managing and solving different problems in data centre networks. Using the SDN technique, the network administrator only needs to declare settings in a single place, from which all devices are managed and directed by the SDN controller. Data centres are having scaling problems, particularly as the number of servers and virtual machines (VMs) that run on them rises and, later, as the requirement for migration (VM motion) rises. Significant bandwidth is needed for virtual machine migration and updating the MAC address table, which raises network latency and lowers overall network performance. In typical data centre architectures, users may encounter interruptions when accessing apps. Consequently, this researcher suggests that SDN, "all-optical switching," and other virtualization technologies like OTV can be used to address this study area/gap.
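The single-band selection rule of the meter table (a packet is handled by just one band, chosen when the measured rate exceeds that band's target rate) can be sketched as follows. The band set is an illustrative assumption, not a configuration taken from this work:

```python
def select_band(measured_rate_kbps, bands):
    """Pick the single meter band that applies: the band with the
    highest target rate that the measured rate has exceeded.
    Returns None when the flow is under every band's rate."""
    exceeded = [b for b in bands if measured_rate_kbps > b["rate_kbps"]]
    if not exceeded:
        return None
    return max(exceeded, key=lambda b: b["rate_kbps"])

# Illustrative meter with two bands: remark DSCP at 1 Mb/s, drop at 5 Mb/s.
bands = [
    {"type": "dscp_remark", "rate_kbps": 1000},
    {"type": "drop",        "rate_kbps": 5000},
]

print(select_band(500,  bands))   # conforming traffic: no band applies
print(select_band(2000, bands))   # dscp_remark band
print(select_band(8000, bands))   # drop band
```

This mirrors the hierarchical DSCP metering idea: moderate overshoot only remarks the packet's class, while gross overshoot drops it.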



Table 4.6 OpenFlow Messages

V. HFPFMAOS

5.1 Conceptualized HFPFMAOS Solution
As per this research contribution, HFPFMAOS (Hybrid Flow based Packet Filtering, Forwarding & MEMS based All Optical Switching) is suggested as a remedy for the problems and challenges facing DCCN. Eight phases make up the implementation of this research concept:
• The deployment of an SDN controller and of OF in TOR switches that link to the controller through SSL.
• Aggregation-level MOOOS plane deployment and connection to the SDN controller.
• Development of a database for the conceptualized network topology.
• Flow based Packet Filtering and Forwarding – FPFSF.
• Monitoring of outgoing ports, computation of links' usage, and alerting the SDN controller.
• Flow Table lookup and flow entry inspection, used to find the flows consuming more bandwidth and to identify the source and destination of those flows.
• Building high-bandwidth data connections between the TOR switches that correspond to the indicated source and destination.
• Moving a particular flow of traffic to a high-bandwidth link constructed using the MOOOS plane, then moving it back to its normal path and tearing down the high-bandwidth link when that particular flow vanishes or resumes operating at its regular data rate.

5.1.1 Installation of Open Flow in Access and SDN Controller
At the initial stage, the researcher first deploys an SDN controller and then turns on OpenFlow in all of the access layer switches (TOR), without disturbing the arrangement of the rest of the network's switches, allowing them to work just as before the inclusion of this setup. A secure encrypted SSL (secure socket layer)/TCP connection is then established between each OpenFlow-enabled access switch and the SDN controller. Through the OF protocol, this link is utilized to communicate between the SDN controller and an OF switch.
This connectivity is further utilized by the SDN controller as well as by a number of north-bound applications. It carries the initial hello messages (OFPT_HELLO, then a FEATURE REQUEST message from controller to switch and a FEATURE REPLY message in the same manner from switch to controller), topology discovery using LLDP (Link Layer Discovery Protocol) & BDDP (Broadcast Domain Discovery Protocol) to start building the databases, and message alerts (PACKET_IN to the controller & PACKET_OUT) for traffic which specifically flows over the all-optical path between OF switches.

5.1.2 Setting up the MAOS Plane
At the aggregation level, the researcher deployed the MAOS (optical switching based on MEMS) plane alongside conventional switches. In this approach the current packet-based switching continues to function together with the newly proposed MAOS to provide dynamically built high-speed data routes between servers for switching elephant traffic as needed. In addition, centralized control is exercised by connecting it to a central SDN controller for network element (NE) supervision and control of traffic flow. The figure below shows the conceptualized architecture.


Fig 5.1 Conceptualized Architecture of HPMOOOS for DCCN

At the aggregation layer, the MAOS plane is implemented side by side with the conventional switches.

5.1.3 Developing Topology Database

Fig 5.2 Database Topology Development Steps

5.1.3.1 Revelation of OpenFlow OF Switches

Fig 5.3 OpenFlow OF Switches Process

5.1.3.2 Revelation of Active Links
OFDP can detect indirectly accessible multi-hop links (routes) between OpenFlow OF switches by leveraging the L2 discovery protocol known as BDDP [22]. Most well-known open-source controllers, including ODL and Floodlight, adopt this technique.

Fig 5.4 Structural Framework of LLDP

Fig 5.5 BDDP Frame Structure

A switch's identification, basic capabilities, and other characteristics are determined by an OF controller via the OF protocol, which includes sending "OFPT_FEATURES_REQUEST" messages to all switches as part of the first handshake.

Fig 5.6 Links discovery via OFDP

The messages exchanged between OpenFlow OF switches and the controller for link discovery are shown below.

Fig 5.7 Messages flow between controller and Open Flow OF switches for Links
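The discovery exchange of Figs 5.6 and 5.7 can be mimicked with a toy counter. This sketch assumes the reference topology used in this work (4 OF switches with 2 active ports each); the names and data structures are illustrative:

```python
# Toy OFDP-style link discovery: the controller emits one Packet-Out probe
# per active switch port, and every working link returns a Packet-In from
# each of its two ends, so Packet-Ins total twice the number of links.

def discover(active_ports, links):
    """Simulate one discovery round; return (packet_outs, packet_ins, found)."""
    packet_outs = sum(len(ports) for ports in active_ports.values())
    found = set()
    packet_ins = 0
    for a, b in links:
        # probe sent out of end a arrives at end b -> Packet-In, and vice versa
        packet_ins += 2
        found.add(frozenset([a, b]))
    return packet_outs, packet_ins, found

# Reference topology: 4 OF switches, 2 active ports each, 3 inter-switch links.
active_ports = {f"OSW{i}": {1, 2} for i in range(1, 5)}
links = [(("OSW1", 2), ("OSW2", 1)),
         (("OSW2", 2), ("OSW3", 1)),
         (("OSW3", 2), ("OSW4", 1))]

outs, ins, found = discover(active_ports, links)
print(outs, ins, len(found))   # 8 Packet-Outs, 2*L = 6 Packet-Ins, 3 links
```

The counts reproduce the relations used in the next subsection: Packet-Outs equal the total active ports (2+2+2+2 = 8), and Packet-Ins equal twice the number of active links.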


The metadata of the OpenFlow OF switches where BDDP packets are received is included in "OFPT_PACKET_IN" messages. Between OpenFlow OF switches and their controller, these messages are exchanged in both directions. Based on the BDDP data and metadata, the controller can identify indirect multi-hop links between OpenFlow OF switches after receiving these "OFPT_PACKET_IN" signals and store them in its database to create a network topology. Most crucially, the controller counts the number of hops between OpenFlow OF switches using the TTL value. The default time interval for this topology discovery operation is 5 seconds. The total number of "OFPT_PACKET_IN" messages received by the controller during this entire discovery process is equal to double the number "L" of active links accessible in the domain, and this number may be calculated as:

Total OFPT_PACKET_IN messages = 2 × L

However, the total "OFPT_PACKET_OUT" messages sent by the controller can be computed as the sum of the active ports over all switches:

Total OFPT_PACKET_OUT messages = P1 + P2 + … + PS

The number of OpenFlow OF switches is denoted by S, whereas the active ports of each switch are denoted by P. Four OpenFlow OF switches with 2 active ports each make up our reference topology, hence the "OFPT_PACKET_OUT" messages issued by the controller total 2+2+2+2 = 8.
The total number of BDDP OFPT_PACKET_OUT messages sent by a controller can be reduced by implementing OFDPv2 [23], in which the Port ID TLV field is set to 0 and ignored, while the source MAC address field is set to the MAC address of the port through which the packet is to be sent out. One "OFPT_PACKET_OUT" message is then sent by the controller for each switch, and the total number of messages transmitted equals the number of switches under OFDPv2.
As each OpenFlow OF switch's port is physically connected to a MEMS switch's port, we don't need link discovery to establish connectivity with the MAOS plane; instead, the controller can create paths between any two points dynamically, and the topology database can statically store all the data related to flows and port connections.

Table 5.1 Topology Database

5.1.3.3 Discovery of Host
Two methods have been applied for discovering the available hosts connected to the OpenFlow OF switches. (i) As all active ports on all switches send BDDP "Packet Out" messages, ports for which OpenFlow OF switches do not send "Packet In" messages are recognized as host ports during link discovery. (ii) A GARP message is sent by the host when the host is connected to an OpenFlow OF switch. HSRP and VRRP [24] use GARP to update the MAC address table of L2 switches. In a broadcast domain, GARP is an advance notification mechanism that keeps the controller aware of host discovery, inserts MAC-address flow entries into the flow tables of OpenFlow OF switches, updates other hosts' ARP tables before their ARP requests are made, and finally updates the ARP tables of other hosts when a new host is connected to a switch or when a host's IP address or MAC address is changed due to failover. Generated ARP requests are actually special ARP packets in which the source (host) IP address equals the destination IP address; the destination MAC address field carries the broadcast MAC address (ff:ff:ff:ff:ff:ff) and the Ethertype field is set to 0x0806 (ARP). The parameters listed below are part of a GARP request message:
• Destination MAC address: FF:FF:FF:FF:FF:FF (broadcast)
• Source MAC address: MAC address of the host
• Source IP address = Destination IP address = IP address of the host transmitting the GARP
• Ethertype: ARP (0x0806)
An example of GARP is shown in the below figure.

Fig 5.8 GARP Request Message
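A gratuitous-ARP request with the fields listed above can be assembled byte-for-byte with Python's standard library; the host MAC and IP below are illustrative placeholders:

```python
import struct

def build_garp(src_mac: bytes, ip: bytes) -> bytes:
    """Build a gratuitous ARP request Ethernet frame:
    broadcast destination, Ethertype 0x0806, sender IP == target IP."""
    eth = b"\xff" * 6 + src_mac + struct.pack("!H", 0x0806)
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)  # htype, ptype, hlen, plen, op=request
    arp += src_mac + ip                # sender hardware / protocol address
    arp += b"\x00" * 6 + ip            # target MAC unknown, target IP = own IP
    return eth + arp

frame = build_garp(bytes.fromhex("aabbccddee01"), bytes([10, 0, 0, 1]))
assert frame[:6] == b"\xff" * 6          # broadcast destination MAC
assert frame[12:14] == b"\x08\x06"       # Ethertype: ARP
assert frame[28:32] == frame[38:42]      # sender IP == target IP (gratuitous)
```

The final assertion captures what makes the ARP "gratuitous": the sender announces its own IP-to-MAC binding rather than querying someone else's.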



Each host will ping its neighbor after GARP is finished to verify connectivity and reachability. At this point, all hosts are aware of one another's MAC addresses and assigned IP addresses, which are visible in their ARP tables. To stop BDDP packet propagation, SDN controllers disable link discovery on the OpenFlow OF switch ports to which hosts are connected after discovery. Similar to the BPDU guard security function in conventional switches, this suppression also prevents BPDU propagation [27].

Table 5.2 Host-1 linked with OSW1-1 - ARP

Table 5.3 Host 2 linked with OSW1-1 - ARP

5.1.4 Discovered Routes & Forwarding Table Buildup
Information relating to the interfaces of OpenFlow (OF) switches is received by the controller, and the list of all routes as well as destinations (MAC/IP addresses) also becomes visible through it, because the controller holds a topology map of the entire network and a centralised database. Each route found is given a Route-Tag by the SDN controller following topology discovery. The main aim of allocating a Route-Tag is to distinctly recognize each discovered route and then generate a forwarding table that lists all of the destination (MAC/IP) addresses that may be reached using the same route. This forwarding table is installed by the SDN Controller into the OF switch in order to send a flow to an outgoing interface based on the Route-Tag. The source address and destination address are used to traditionally forward a flow frame to the next hop, because the switch's content-addressable memory table also includes a list of MAC addresses that can be reached over each link (port). Later, when the same flow touches an OpenFlow OF switch, it searches all the flow tables again and looks for the best possible match before forwarding the flow to the correct host port after verifying its destination address.

5.1.5 Building up of Flow Table by inserting Flow Entries
After a flow enters an OpenFlow OF switch, each incoming packet is compared with one or more flow tables, each of which contains multiple flow entries, and the action to be taken is then determined. Any sort of matching can be used, for example matching an ingress port, a source or destination MAC address or IP address, a VLAN ID, a TCP/UDP port number, etc.
For providing maximum flexibility and support to the diverse types of data traffic passing through DCCN while supplying delay-sensitive traffic with minimal latency, this work employs a hybrid model of flow table entries that combines the proactive and reactive modes. The use of the proactive model for flow entry is required for delay-sensitive applications that demand low latency, for example audio/video calls, live radio/TV transmissions, and financial bank transactions, as well as for routine heavy traffic like web browsing, data file/folder transfer, and peer-to-peer traffic.

Fig 5.9 Anatomy - OpenFlow OF

Upon discovery of the flow entry whose fields most closely match the data traffic, the switch either executes the particular action of that flow entry or forwards the packet to a group table to execute multiple actions. The importance of the priority parameter cannot be overstated: flow entries are prioritised, and if many flow entries match, the one with the highest priority will be used and the others will be ignored. The table below shows the different types of counters.

Table 5.4 Types of Counters

By issuing the command OFPMP_PORT_STATS to the OpenFlow OF switch, the SDN controller continuously receives the statistics of outgoing active links (ports), such as the quantity of Rx packets, Tx packets, Rx bytes, and Tx bytes, as well as the time in seconds, dropped packets, and Tx/Rx errors. The link utilisation of all the desired ports is calculated by the controller based on the statistics that are returned. The Link Utilization is calculated using the following formula [28]:
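A sketch of this computation from two successive OFPMP_PORT_STATS samples. The conventional byte-delta form (byte delta × 8 bits, divided by interval × link capacity) is assumed here, and the sample values are invented for illustration:

```python
def link_utilization(bytes_t1, bytes_t2, interval_s, capacity_bps):
    """Percentage utilisation of a link between two port-statistics samples:
    (byte delta * 8 bits) / (interval * link capacity) * 100."""
    return (bytes_t2 - bytes_t1) * 8 * 100 / (interval_s * capacity_bps)

# Illustrative sample: a 1 Gb/s port that moved 625 MB in 10 s.
util = link_utilization(0, 625_000_000, 10, 1_000_000_000)
print(f"link utilisation: {util:.0f}%")   # 50%
```

The controller would run this per outgoing port on every statistics round and compare the result against the per-interface threshold described next.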

Fig 5.10 Formula to calculate Link Utilization

For each of the outgoing interfaces, the controller maintains a specific link utilisation threshold. When the threshold is exceeded, it consults its flow table to filter the flows whose outgoing interfaces have exceeded the threshold. Following the specification of ports, the SDN controller determines whether any ports are available before creating a high-bandwidth data path across the MEMS plane.
If there is no match, the lookup is considered a miss, and the controller is notified by default. The controller then instructs the switch to take certain actions. The "Flow Mod" message contains a variety of information, including Buffer IDs, Timeouts, Actions, Priorities, and more. Additionally, flow entries can be permanent or valid for a limited period of time, and there are two types of timeout: "Hard Timeout" and "Idle Timeout." Idle timeout refers to the removal of an entry from the flow table if there is no matching flow request during that time period. Hard timeout refers to the maximum amount of time an entry can remain in the flow table, regardless of whether a matching entry is present. If the hard timeout is set to 0, it is deactivated.
As an illustration, in our reference topology diagram, Host 1 sends an HTTP request to Host 2 (say, a web server) following host discovery by GARP. It begins with a SYN message. Host 1 sends a SYN message to the OSW1-1 switch, which checks its flow table upon receiving the packet; because it is the initial packet, there are likely no flow entries that match it. This is termed a table-miss flow entry. Therefore, by encapsulating this TCP packet in a "Packet In" message, the switch passes it to the controller by default. This Packet In message contains the entire TCP packet or its Buffer ID (for example, Buffer ID = 250, which designates the location where the switch stores the whole TCP message). The controller then takes a few actions, such as returning a "Flow Mod" or "Packet Out" message to the switch, where "Packet Out" includes information regarding the switch's handling of that particular packet as well as the whole encapsulated TCP packet or the reference buffer ID under which the switch stores this packet. If the switch OSW1-1 receives a "Packet Out" message, it sends the TCP SYN packet referenced by buffer ID 250 out of port 8 to Host 2. The switch is then directed to add another new flow entry to its flow table through the "Flow Mod" message. When a similar packet arrives at the switch in the future and matches its fields and masks, this flow entry guides the switch on what to do. In this way the message informs the related switch to route any TCP requests from Host 1's IP address or MAC address towards Host 2's IP address or MAC address, and finally to Port 8. Additionally, it tells the switch to release the packet it had been buffering with the BufferID of 250 and to carry out the instructions in this message. Host 2 replies by sending a SYN/ACK packet to the switch, which receives it but finds no flow entry from Host 2 to Host 1 (yet another table miss). In order to send this SYN/ACK packet to the controller for additional analysis, the switch encapsulates it in a "Packet In" message and gives it the reference buffer ID 251. In response, the controller instructs the switch to add a flow entry to the flow table and perform an action, namely to forward the SYN/ACK message to port 7. The controller also sends the switch a Packet Out and a Flow Mod message. After all of this, the remainder of the communication between Host 1 and Host 2 does not reach the controller, because the switches have flow entries in their flow tables that tell them what to do with the packets. The switch routes the HTTP reply and ACK messages directly, as demonstrated in the following figures:

Fig 5.11 (a) HTTP request with Open Flow messaging
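The reactive exchange just described (table miss → Packet-In → Flow-Mod/Packet-Out → direct forwarding) can be condensed into a toy model; the buffer IDs and port numbers mirror the example above, while the "messages" are plain dicts rather than real OpenFlow encodings:

```python
flow_table = []        # the switch's flow table, initially empty
buffers = {}           # packets the switch has parked, keyed by buffer id

def switch_receive(pkt, buffer_id):
    """Switch side: match against the flow table, else raise a Packet-In."""
    for entry in flow_table:
        if all(pkt.get(k) == v for k, v in entry["match"].items()):
            return {"forwarded": entry["out_port"]}
    buffers[buffer_id] = pkt                       # table miss: park the packet
    return {"packet_in": buffer_id}

def controller_handle(buffer_id, out_port):
    """Controller side: install a flow entry (Flow-Mod) and release the
    buffered packet (Packet-Out)."""
    pkt = buffers.pop(buffer_id)
    flow_table.append({"match": {"src": pkt["src"], "dst": pkt["dst"]},
                       "out_port": out_port})
    return {"forwarded": out_port}

syn = {"src": "H1", "dst": "H2", "tcp": "SYN"}
r1 = switch_receive(syn, buffer_id=250)            # first packet: table miss
assert r1 == {"packet_in": 250}
controller_handle(250, out_port=8)                 # Flow-Mod + Packet-Out
r2 = switch_receive({"src": "H1", "dst": "H2", "tcp": "ACK"}, buffer_id=252)
assert r2 == {"forwarded": 8}                      # now matched locally
```

After the one round trip to the controller, later packets of the same flow never leave the switch, which is exactly why the rest of the Host 1/Host 2 conversation bypasses the controller.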

Fig 5.11 (b) HTTP reply with OpenFlow messaging

The following table shows the flow entries of the OF switch:

Table 5.5 Flow Table of OF Switch OSW1-1 Flow Entries

The very first entry instructs the switch to broadcast packets that arrive from the controller port to all other switch ports, regardless of their EtherType, indicating that they are BDDP messages, and to update the counter. The second entry instructs the switch to issue a GARP request and update the counter, regardless of the other fields, whenever the EtherType is 0x0806. The third, fourth, and fifth entries perform L2 matching and forwarding: if a packet arrives with a given destination MAC, the switch performs the specific action given in the Action field and updates the counter, regardless of the other fields. The sixth and seventh entries route IP packets to a particular port based on their destination IP. The eighth entry is the default route for our reference topology, and the final two flow entries are TCP flow entries for the HTTP request and reply.

When the bulk of brief flows, sometimes referred to as "mouse flows," start flowing alongside long, persistent, high-bandwidth flows, or "elephant flows," a bandwidth bottleneck results. High-bandwidth data flows, which can be caused by, among other things, VM migration, database backups, or software such as Hadoop, make the switches' buffers overflow, which slows down packet processing and hurts applications that require low latency. Nowadays, many virtual machines run on a single server for various applications. Suppose one virtual machine running an application in Server Rack 1 starts consuming a great deal of bandwidth and CPU, adversely affecting the other applications on the same server. Because the switch buffers start to overflow, the administrator is compelled to move that virtual machine to Server Rack 2 of the same POD, which results in a bandwidth bottleneck and increased latency on the network links, as shown in the figure below.

Fig 5.12 Congestion

Links coloured red indicate congestion on an active packet-switched network link that a packet is attempting to cross. As seen in the figure, the switches alert their management/control plane to the network congestion and increased delay.

Fig 5.13 Congestion Notification by Switches to Control Plane

In reply, the management/control plane inspects the packets to determine their source and destination, consults its topology database, and calculates the endpoints between which high-bandwidth data routes must be established.

Resultantly, the optical control plane signals the MAOS control plane, which is made up of MEMS-based optical switches.

Fig 5.14 MEMS switches produce a high-data-rate optical path

As seen above, data travels across both the packet-switched network and a temporary high-bandwidth channel, depicted in green, which is enabled by the MEMS-based switches. Once the highly persistent flow has subsided and traffic has returned to normal, the control plane tears down the temporarily established optical link and reallocates it as needed.

VI. SOFTWARE IMPLEMENTATION
6.1 Reference Network Topology
To implement the proposed solution in software, the researcher devised the conceptualised architecture below.

Fig 6.1 Prototype testing reference network topology

Our network topology consists of two pods, POD 1 and POD 2, with each pod expected to include eight hosts or servers, two aggregation switches, and two access switches. POD1 is made up of two aggregation switches (SW3 and SW4), two access (TOR) switches (SW5 and SW6), and eight hosts (Host1 through Host8) that produce traffic. POD2 likewise consists of eight hosts (Host1 to Host8), two TOR switches (SW9 and SW10), and two aggregation switches (SW7 and SW8).

6.2 Software used
The tools incorporated are shown in the figures below:

Fig 6.2 Software Tool Used: Mininet

Fig 6.3 Software Tool Used: MiniEdit

Fig 6.4 Software Tool Used: OpenDaylight

Fig 6.5 Software Tool Used: IPERF

Fig 6.6 Software Tool Used: WireShark

6.3 Preparation/Implementation
The following actions must be taken to prepare this setup:
• First, before running the Mininet and OpenDaylight virtual machines, I downloaded and installed Oracle VM VirtualBox.
• Next, I constructed a VM in VirtualBox, downloaded the Mininet virtual machine image, mounted it on top of the fresh virtual machine, and then installed Mininet.
• Third, I created a second VirtualBox virtual machine and installed the ODL controller setup.

Mininet and its GUI program MiniEdit (running on one VM in VirtualBox) are used by the researcher to plan out and simulate the reference topology. X forwarding is required in order to run MiniEdit and connect to Mininet over SSH; to do this, I used PuTTY and Xming. OpenDaylight (ODL Beryllium), an external SDN-based OpenFlow controller, was utilised to operate the OpenFlow virtual switches and the MEMS switches. I generated data flows from the hosts and servers and measured several performance characteristics, such as link bandwidth and network latency, using IPERF. I used Wireshark to investigate data packets and analyse a variety of protocol messages on several interfaces across the whole network topology.

6.4 Applications
The real deployment of HFPFMAOS was achieved in two main steps.

Fig 6.7 Deployment Steps of HFPFMAOS

Fig 6.8 OF-switch-based network topology with access switches as TOR switches

Links between hosts and OF switches are 4 Mb/s with a 10 msec delay, while links between all traditional switches and between OF switches are 10 Mb/s with a 5 msec delay, as demonstrated in the following figures. Hosts, nodes, links, and interfaces can be verified with the commands mininet> nodes, mininet> net and mininet> dump.

Fig 6.9 Creating Links Between Switches

Fig 6.10 Network/Nodes Verification
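From the stated per-link parameters, the end-to-end characteristics of a host-to-host path through the packet-switched tiers follow directly: the bottleneck is the slowest link, and the one-way delay is the sum of the link delays. A small sketch, with the path shape assumed from the reference topology:

```python
# Derive end-to-end properties of an assumed Host1 -> Host2 path from the
# per-link parameters in the text: host<->TOR links are 4 Mb/s / 10 ms,
# switch<->switch links are 10 Mb/s / 5 ms.

PATH = [            # (bandwidth in Mb/s, delay in ms) per hop
    (4, 10),        # Host1 <-> access (TOR) switch
    (10, 5),        # access <-> aggregation switch
    (10, 5),        # aggregation <-> access switch
    (4, 10),        # access (TOR) switch <-> Host2
]

def path_properties(path):
    bottleneck_mbps = min(bw for bw, _ in path)   # slowest link caps throughput
    one_way_delay_ms = sum(d for _, d in path)    # delays add along the path
    return bottleneck_mbps, one_way_delay_ms

bw, delay = path_properties(PATH)   # 4 Mb/s bottleneck, 30 ms one way
```

This also shows why the IPERF measurements later in the paper are dominated by the 4 Mb/s host links rather than the 10 Mb/s switch links.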


Fig 6.11 OF Switch interfaces verification

Flow entries for the proposed HFPFMAOS solution are shown by the researcher in the flow tables of OF switches OSW1-1, OSW1-2, OSW1-3 and OSW1-4, as follows.

For discovery of hosts and the ARP table:
mininet> pingall

Fig 6.12 Host Discovery by the pingall Command

Fig 6.13 Host Pinging

For links discovery, GARP requests:

For links discovery, BDDP requests:

To enable Layer 2 forwarding:

Generation of Traffic by IPERF

Fig 6.14 Generation of Traffic and Latency (Delay)

STEP 2: Mininet network topology with MEMS
In the second phase, another OpenFlow switch, called MEMSSW1, is added to the aggregation layer beside the conventional switches and connected to the OF (TOR) switches to test the concept of MEMS switching. The bandwidth and latency of the links to MEMSSW1 are intentionally set on the higher side, 1000 Mb/s and 1 msec, to test the MEMS idea. The buffer size and throughput (speed-up) of these links are set equal to the link bandwidth, 1000 Mb/s, to demonstrate all-optical switching. The resulting network topology is displayed as follows:

Fig 6.15 Network Topology with MEMS

6.5 Code for Topology Creation:
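The topology-creation code itself appears only as a figure in the original. As a structural sketch, the two-pod layout and link parameters described in Section 6 could be expressed as follows; this is plain Python mirroring what Mininet's addSwitch/addHost/addLink calls would build, with switch names taken from the text and host numbering assumed, not the author's actual script.

```python
# Structural sketch of the two-pod reference topology. Each add_link call
# mirrors a Mininet self.addLink(a, b, bw=..., delay=...); host names and
# numbering are assumed for illustration.

def build_two_pod_topology():
    switches, hosts, links = [], [], []

    def add_link(a, b, bw, delay):
        # Mininet equivalent: self.addLink(a, b, bw=bw, delay=delay)
        links.append((a, b, bw, delay))

    pods = [(("sw3", "sw4"), ("sw5", "sw6")),    # POD1: aggregation, TOR
            (("sw7", "sw8"), ("sw9", "sw10"))]   # POD2: aggregation, TOR
    for pod_index, (agg_pair, tor_pair) in enumerate(pods):
        switches.extend(agg_pair + tor_pair)
        for tor_index, tor in enumerate(tor_pair):
            for agg in agg_pair:                 # full mesh TOR <-> aggregation
                add_link(tor, agg, bw=10, delay="5ms")
            for h in range(4):                   # four hosts per TOR switch
                host = "h{}".format(pod_index * 8 + tor_index * 4 + h + 1)
                hosts.append(host)
                add_link(host, tor, bw=4, delay="10ms")
    return switches, hosts, links

switches, hosts, links = build_two_pod_topology()
# 8 switches, 16 hosts, 24 links (8 inter-switch + 16 host links)
```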


