
Software-defined load-balanced data center: design, implementation and performance analysis


Ahmadreza Montazerolghaem

To cite this version:


Ahmadreza Montazerolghaem. Software-defined load-balanced data center: design, implementation and performance analysis. Cluster Computing, In press, 10.1007/s10586-020-03134-x. hal-03048010

HAL Id: hal-03048010


https://hal.archives-ouvertes.fr/hal-03048010
Submitted on 9 Dec 2020

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Software-defined Load-balanced Data Center:
Design, Implementation and Performance Analysis

Ahmadreza Montazerolghaem∗
Department of Computer Engineering, Quchan University of Technology, Quchan,
Khorasan Razavi, Iran

Abstract

Data centers are growing rapidly and provide a wide range of services to millions of
users through a limited pool of servers. As a result, large-scale data center
servers are threatened by overload. In this paper, we propose
a framework for data centers based on Software-Defined Networking
(SDN) that exploits this technology to balance the
load between servers and to prevent any single server from being overloaded. In addition, the
framework delivers the required services quickly and with low computational
complexity. The proposed framework is implemented on a real testbed,
and a wide variety of experiments are carried out in comprehensive scenarios
to evaluate its performance. Furthermore, the framework is evaluated
on four data center topologies: Three-layer, Fat-Tree, BCube, and Dcell. In the testbed, Open vSwitch v2.4.1 and Floodlight v1.2 are used to
implement the OpenFlow switches and controller. The results show that in all four
SDN-based architectures the load is well balanced between the servers, and significant improvements are achieved in parameters such as
throughput, delay, and resource consumption.
Keywords: Software-defined networking (SDN), Data center design, Load
balancing, PID controller, Throughput, Delay

∗ Corresponding author
Email address: [email protected] (Ahmadreza Montazerolghaem)

Preprint submitted to Cluster Computing Journal October 4, 2020


1. Introduction

Software-Defined Networking (SDN), introduced recently, is a major trend in the telecommunication industry that can enhance data center networks [1]. The software-defined data center integrates software-defined concepts into data center technology. Data center networks are the backbone of Internet services, and SDN significantly simplifies and improves their management [2]. In SDN, the control and data planes are separated and logically centralized using the OpenFlow protocol [3, 4, 5], which can meet the requirements of the data center communication network [6]. In this case, each network switch simply forwards the traffic and enforces policy according to instructions received from the controller. This makes the network programmable in a way that promises to be more flexible, scalable, and secure than traditional networks [7]. SDN has therefore attracted attention for developing data center applications that require a higher degree of network awareness [8, 9, 10, 11].

Data centers are being transformed from purely connectivity-oriented networks into service-oriented networks. This new paradigm changes the framework and architecture of networks as well as their design and construction. Instead of layering switches and routers, the new paradigm tries to reduce the complexity of the components and to create new capacity by homogenizing equipment, simplifying configurations, and developing programmable capabilities in data centers. The quickest and easiest way to achieve this goal is a specific operating model with more coordinated services, which leads us towards the technology offering the greatest change and flexibility available: Software-Defined Networking. Deploying SDN and other new approaches is one of the biggest challenges of the next-generation data center architecture, which aims to move networks towards being service-oriented. Traditionally, data center scaling was done by building a subnet and adding a new domain. In contrast, the SD-DC1 approach controls resources according to demand through a centralized view. This approach prevents the saturation of server resources, overload, and loss of service quality.

1 Software-Defined Data Center

1.1. Related work

Darabseh et al. [1] introduce an experimental simulation (not implementation) framework that provides a virtualized testbed environment for SDN-based data centers. The work is built on the Mininet simulator, in which the core components, the host, the switch, and the controller, are set up. [12] describes a technique based on the Domain Name System (DNS) for distributed server load balancing over resources across multiple data centers. [13] explores micro-level power management in software-defined data centers and assesses the feasibility of this new power management paradigm by characterizing the resource and power impact of various management operations. [10] proposes a novel algorithm for an intelligent, user-aware SDDC that performs run-time analysis of user storage system activity with minimal impact on performance and provides accurate estimations of future user activity. [14] implements SDN-based routing in the CamCube data center topology, which outperforms the traditional OSPF protocol in the CamCube infrastructure network. [15] designs new physical-layer control instances and combines SDN control with encoding in a data center. Yifei Xu et al. [16] present a management model for data center networks that manages information at three levels: network managers, regional controllers, and tenants. The authors in [17] design a novel network fabric and shortest-path algorithm for data center networks based on SDN. A new SDN-based TCP slow-start mechanism for data centers is proposed in [18]; in this method, the available bandwidth is obtained by the SDN controller in every time slot, and the slow-start threshold and the congestion window are then tuned accordingly. An OpenFlow- and SDN-based QoS traffic method to achieve effective usage of data center network resources is proposed by [19]. In [20], the authors attempt to enhance the QoS of multi-tenanted data center network clouds. E3MC [21] is a method to improve data center energy efficiency by leveraging SDN; the energy optimization of both the forwarding and control planes is investigated by dynamic control mapping. [22]
introduces dynamic timeouts for SDN-based data centers, which assign an appropriate timeout to each flow according to its characteristics. In [23], Hwang et al. introduce an approach for fast failover of both the control and data planes in SDN-based data centers. A dynamic load management method based on SDN technology that optimizes data center link utilization by flow priority is proposed in [24].
While much prior research has suggested the potential benefits of applying SDN in computer networks to facilitate network management, there have been only a few studies on practical approaches to applying SDN in data centers. Additionally, the whole concept of SD-DC is in its infancy, and standardization efforts in terms of frameworks, protocols, applications, and assessment tools are still underway. Moreover, the proposed ideas and related work are mostly preliminary proposals. Here, we pay particular attention to the management of resources as well as the QoS of data centers. To the best of our knowledge, there are no studies on a comprehensive approach that combines data center server load balancing with a QoS mechanism from an operational point of view. Therefore, the exploration of such an approach is timely and crucial, especially considering the rapid development of data center applications and the emergence of SDN. In this paper, we propose an SDN-based architecture for data center applications in which both the path and the server selection are managed together to improve QoS for the users and to balance the traffic between servers simultaneously.

Given the above studies, there is no comprehensive study on SDN-based data center load balancing. Therefore, in this paper, we propose an SDN-based architecture for data centers that balances the load between servers and prevents overloading. The proposed framework is implemented on a real testbed, and a wide variety of experiments are carried out under various scenarios.

1.2. Motivation

Why do we need a scalable DC communication infrastructure? Since the DC infrastructure is a key service in next-generation networks, it is vital to design and develop a high-performance, scalable communication infrastructure with quality-of-service support. Recently, DCs have been utilized to support new services. In addition, DCs are rapidly growing and spreading across different geographical locations. Moreover, future large-scale DCs will need to collect and transfer data. This will create a huge traffic load that must be efficiently routed and balanced across the network and the servers. To summarize, with the overall modern network road map, both the number of users and the volume of data will increase dramatically. Consequently, a huge amount of data will pass through the servers, which imposes a great challenge on the scalability of the traditional DC communication network. Over the next few years, not only will DC servers face overload, but the communication network will also fail to satisfy the QoS requirements of the traffic. This article addresses this challenge and proposes a scalable SDN-based DC framework.

1.3. Contributions

Our study mainly aims at managing traffic through the SDN concept. By applying SDN in the data center, we provide a resource- and QoS-aware framework. In this framework, we seek to choose the best among the existing paths for each data center traffic class such that the load of the data center servers is balanced and the QoS of the traffic class is satisfied. To this end, we design a modular controller that uses a PID-based system for resource management; in other words, the PID controller decides how to balance the load between servers.

The main innovation is therefore the architectural design of a modular, SDN-based controller that combines Proportional-Integral-Derivative (PID) control with the Least Connection and Least Response Time algorithms to achieve scalable data centers with a higher quality of service. In this regard, the main contributions of this paper can be summarized as follows:

- Theoretical aspect

• Designing a novel SDN-based control and management framework for the data center (avoiding overload on data center servers while increasing QoS),

• Designing a proactive heuristic load balancing method based on a PID controller,

• Designing the modules of the proposed framework,

- Implementation aspect

• Designing and preparing a real test platform to evaluate the performance of the proposed framework (applied to the well-known topologies),

• Designing comprehensive scenarios to evaluate the efficiency of the proposed framework.

1.4. Organization

The rest of this article is organized as follows: in Section 2, we propose a data center framework as well as an SDN controller for the proper distribution of load between servers. In Section 3, we discuss the details of the implementation of the proposed schemes and their evaluation. Section 4 presents the conclusion and future work.

2. Designing an SDN-based enterprise data center framework

The framework consists of two parts: infrastructure and control. We first present the overall architecture of four important data center topologies and then propose the controller details of the framework.

Figure 1: SDN-based three-layer data center architecture

2.1. Three-layer architecture

The three-layer architecture based on SDN technology is illustrated in Fig. 1. The architecture consists of three layers of switches that establish the communication between the servers. These switches use OpenFlow technology and are controlled by an SDN controller. Flows between the servers are managed by commands sent from the controller to the switches. The traditional structure of this well-known data center architecture is illustrated in Fig. 2.

Figure 2: Traditional three-layer data center architecture

As can be seen, this architecture has three layers, edge, aggregation, and core, and each layer contains a desired number of switches with different capacities. In this example, there are 8 edge switches, 4 aggregation switches, and 2 core switches.

2.2. Fat-Tree architecture
Fat-Tree architecture is illustrated in Fig.3. The topology of this architecture
is tree-like.
Figure 3: SDN-based data center tree architecture

The traditional architecture of this model of data center is shown in Fig. 4. As can be seen, the data center consists of 4 pods, each consisting of 8 aggregation and edge switches. A total of 4 core switches establish the communication between the pods.

Figure 4: Traditional data center tree architecture

2.3. BCube architecture


BCube architecture is illustrated in Fig. 5. As can be seen, two levels of switches establish the communication between the servers. Communication between each group of four adjacent servers is provided via the low-level switches, and communication between non-adjacent servers is provided via the high-level switches.

Figure 5: SDN-based BCube data center architecture

The traditional architecture of this type of data center is shown in Fig. 6. The architecture has two levels. At the lower level, 4 four-port switches are connected to the 16 adjacent servers. At the top level, 4 four-port switches provide communication between non-adjacent servers.

Figure 6: Traditional BCube data center architecture

2.4. Dcell architecture

Dcell architecture is illustrated in Fig. 7. The framework of this architecture consists of several cells, each containing one switch and four servers. Communication between servers of different cells is provided through the switches, and the SDN controller controls the flows and switches.

The traditional architecture of a Dcell data center is shown in Fig. 8. In the traditional architecture, the communication between cells is provided through traditional (non-OpenFlow) switches, which increases the complexity of the infrastructure level and hence makes the architecture unscalable.

Figure 7: SDN-based Dcell data center architecture

In traditional networks, once the flow management (forwarding policy) has been defined, the only way to adjust the policy is via changes to the configuration of the network equipment. This has proven restrictive for network operators who are keen to scale their networks in response to changing traffic demands and the increasing use of mobile devices.

2.5. Designing the SDN controller

In this section, we present the proposed architecture of the SDN controller for balancing the server load in the aforementioned schemes. As shown in Fig. 9, there are two layers: infrastructure and control. In the infrastructure layer, there are servers, switches, and computers. The data flow is distributed through the switches between the servers. The SDN controller operates in the control layer. The Network Statistics module collects traffic information from the switches, and server load information is collected by the Servers Load Monitoring module. In the SDN controller, a Proportional-Integral-Derivative (PID) controller is used to decide how to balance the load between servers. In this regard, the PID controller seeks to regulate the server load towards the target load.

Figure 8: Traditional Dcell data center architecture

To address this issue, we propose a control-theoretic approach that dynamically (re)adjusts the load threshold depending on the system behavior and the desired utilization levels, allowing the SDN controller to learn over time from the current system behavior. We use a PID controller that readjusts the load threshold, i.e., changes the level of load the system is willing to accept, in order to reach a desired level of resource utilization for each dimension (CPU, memory) in a safe way. Let r denote the desired load level, y the measured level, e the difference between the current load level and the target one, and u the increment or decrement in the actual load threshold needed to achieve the target utilization. The value of u at time t can be obtained from Equation (1):

u(t) = K_P \, e(t) + K_I \int_0^{t} e(\tau)\, d\tau + K_D \, \frac{de(t)}{dt}    (1)
In this equation, increasing the proportional gain (K_P) leads to a faster response but also to overshooting and oscillation problems. Increasing the integral gain (K_I) reduces stationary errors but at the expense of larger oscillations. Finally, the derivative gain (K_D) reduces the oscillations but may lead to a slower response. Setting these gains alone turned out to be insufficient to achieve good performance (a stable and fast response), so a filter is applied to the final value (P + I + D). This filter keeps the threshold load level between certain values, i.e., between 0.2 and 0.8 instead of the full [0, 1] range. Limiting the spectrum of feasible values for the load threshold reduces fluctuations caused by fast switching between accepting too many and too few services.
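To make the regulation loop concrete, the following minimal Java sketch implements Equation (1) in discrete time together with the [0.2, 0.8] output filter described above. The class name, the fixed sampling period dt, and the initial threshold value of 0.5 are illustrative assumptions and are not taken from the paper's Floodlight module.

public class PidThresholdRegulator {

    private final double kP;          // proportional gain (K_P)
    private final double kI;          // integral gain (K_I)
    private final double kD;          // derivative gain (K_D)
    private final double dt;          // sampling period in seconds (assumed fixed)

    private double integral = 0.0;    // discrete approximation of the integral term
    private double previousError = 0.0;
    private double threshold = 0.5;   // current load threshold, kept within [0.2, 0.8]

    public PidThresholdRegulator(double kP, double kI, double kD, double dt) {
        this.kP = kP;
        this.kI = kI;
        this.kD = kD;
        this.dt = dt;
    }

    /**
     * @param r desired load level
     * @param y measured load level of the servers
     * @return the updated (and clamped) load threshold
     */
    public double update(double r, double y) {
        double e = r - y;                                    // error e(t)
        integral += e * dt;                                  // integral term
        double derivative = (e - previousError) / dt;        // derivative term
        previousError = e;

        double u = kP * e + kI * integral + kD * derivative; // Equation (1)
        threshold += u;                                      // u increments/decrements the threshold

        // Output filter: keep the threshold in [0.2, 0.8] instead of the full [0, 1] range.
        threshold = Math.min(0.8, Math.max(0.2, threshold));
        return threshold;
    }
}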

Figure 9: The proposed architecture of the SDN controller

Finally, if the server load tolerance is lower than the threshold (δ), the server with the lowest response time is selected; otherwise, the server with the fewest connections is selected. The Least Response Time method uses the response information from the Network Statistics module to determine the server that is responding fastest at a particular time. In the Least Connection method, the current request goes to the server that is servicing the smallest number of active sessions at that time. The reasoning is that if the load tolerance is high and the servers' load is more imbalanced, the Least Connection method can better detect the less utilized servers and rebalance the system by directing load to them, compared to the Least Response Time method. If the server load tolerance is low, the Least Response Time method keeps the server load balanced better and faster.

In the end, depending on the selected server, the Flow Manager module adjusts the appropriate path of flows to that server by installing the appropriate rules on the OpenFlow switches. Algorithm 1 below shows the details of the proposed load balancing approach.

Algorithm 1: Load Balancing


Result: Select the most appropriate server
Data: r denotes the desired load level;
y the measured load level of server;
e the difference between the current load level and the target one;
u is the increment or decrement in the actual load threshold needed to
achieve the target utilization.
while traffic is flowing do
– The Flow Manager module receives the incoming packets as Packet-In
messages;
– The Network Statistics module extracts the necessary information
such as time from the messages;
– Response time is given to the Least Response Time module;
– The Servers Load Monitoring module monitors and records y;
– The PID Controller module calculates u with regard to r;
if u ≥ δ then
Least Connection method runs;
The server with the fewest active connections is selected;
else
Least Response Time method runs;
The server with the least response time is selected;
end
end
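As an illustration of the decision step of Algorithm 1, the Java sketch below selects a server from the monitored set according to the PID output u and the threshold δ (delta). The Server record and its fields are assumptions introduced for this example (Java 16 or later); the connection counts and response times are assumed to come from the Servers Load Monitoring and Network Statistics modules.

import java.util.Comparator;
import java.util.List;

/** Illustrative sketch of the server-selection step of Algorithm 1. */
final class ServerSelector {

    /** Per-server state assumed to be collected by the monitoring modules. */
    record Server(String name, int activeConnections, double responseTimeMs) {}

    /**
     * If the PID output u reaches the threshold delta, pick the server with the
     * fewest active connections (Least Connection); otherwise pick the server
     * with the lowest response time (Least Response Time).
     */
    static Server select(List<Server> servers, double u, double delta) {
        if (u >= delta) {
            return servers.stream()
                    .min(Comparator.comparingInt(Server::activeConnections))
                    .orElseThrow();
        }
        return servers.stream()
                .min(Comparator.comparingDouble(Server::responseTimeMs))
                .orElseThrow();
    }
}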

The Flow Manager module is in charge of sending LLDP (Link Layer Discovery Protocol) packets to all the connected switches through Packet-Out messages. These messages instruct the switches to send LLDP packets out of all their ports. Once a switch receives the Packet-Out message, the LLDP packets are sent out on all of its ports. If the neighboring device is an OpenFlow switch, it performs a flow lookup; since it does not have a flow entry for this LLDP message, it sends the packet to the controller by means of a Packet-In message. When the controller receives the Packet-In, it analyzes the packet and creates a connection in its discovery table for the two switches. All remaining switches in the network similarly send such packets to the controller, which thereby builds a complete network topology. LLDP messages are exchanged periodically, and events are reported to the controller when links go up or down or when links are added or removed. Information on switches and links is maintained in the Network Statistics module, while the server load information and the load level of the servers (y) are recorded by the Servers Load Monitoring module. Load monitoring is defined in one place for the entire network, in the control plane module, and it is easy to scale by replicating the control plane. As mentioned, the PID module is used to decide how to balance the load between servers: the PID controller seeks to regulate the server load (y) towards the target load (r). The PID controller output u is the increment or decrement in the actual load threshold needed to achieve the target utilization, and its value at time t is obtained from Equation (1). Finally, if the server load tolerance (u) is higher than the threshold (δ), the server with the fewest connections is selected; otherwise, the server with the lowest response time is selected. The Least Connection load balancing algorithm selects the server with the fewest active connections; neither Round-Robin nor Random takes the current server load into consideration when distributing messages, whereas the Least Connection algorithm does. In turn, the Least Response Time algorithm uses the response data from a server to determine the server that is responding fastest at a given moment.
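As a rough, framework-independent illustration of the discovery table described above, the sketch below records a link whenever an LLDP frame sent out of one switch port comes back as a Packet-In from a neighboring switch, and ages out links that are no longer confirmed. The Endpoint type and the method names are assumptions made for this example and do not correspond to the Floodlight API.

import java.util.HashMap;
import java.util.Map;

/** Simplified sketch of LLDP-based link discovery in the controller. */
final class DiscoveryTable {

    /** A (switch, port) endpoint; identifiers are illustrative. */
    record Endpoint(String switchId, int port) {}

    // Link key -> timestamp of the most recent LLDP observation.
    private final Map<String, Long> links = new HashMap<>();

    /**
     * Called when an LLDP frame originally sent out of src is reported back
     * by the neighboring switch as a Packet-In received on dst.
     */
    void onLldpPacketIn(Endpoint src, Endpoint dst) {
        String key = src.switchId() + ":" + src.port() + " <-> " + dst.switchId() + ":" + dst.port();
        links.put(key, System.currentTimeMillis());
    }

    /** Drop links not confirmed by LLDP within maxAgeMs (link down or removed). */
    void expireStaleLinks(long maxAgeMs) {
        long now = System.currentTimeMillis();
        links.values().removeIf(timestamp -> now - timestamp > maxAgeMs);
    }
}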

3. Testing and evaluating the proposed schemes

In this section, we first describe the implementation details. We then evaluate the performance and present the results of the proposed approach in the following subsections.

In the testbed, we employ Open vSwitch v2.4.1 and Floodlight v1.2 to implement the OpenFlow switch and controller, and we modify them as described in the previous section. Floodlight is a Java-based controller, and the PID controller is coded and tuned as a Floodlight module in Java. Open vSwitch is a virtual software switch that supports the OpenFlow protocol. We use SIPp to inject traffic, and OProfile is used to observe resource consumption. If we need to inject background traffic between the servers, we use iperf to send packets at a fixed rate. We run each experiment three times and report the mean as the result.

3.1. Implementation testbed

The testbed provided in the IP-PBX Laboratory of Ferdowsi University of Mashhad2 for the implementation of the proposed architectures is shown in Fig. 10. The list of equipment needed to implement the proposed architectures is as follows:

• HP G9 DL580 server

• Open vSwitch v2.4.1 switch

• Floodlight v1.2 controller

• SIPp for traffic injection

2 http://voip-lab.um.ac.ir/index.php?lang=en

Figure 10: Implementation platform in the IP-PBX type approval laboratory of Ferdowsi
University of Mashhad

3.2. Performance evaluation and experimental results

In the following, we present experimental results for the Three-layer architecture3. According to Fig. 1, we use 7 OpenFlow switches and 8 servers to test the architecture.

3 The results of the other architectures are given in the appendix.

3.2.1. Experiment 1: Fixed load


The first experiment involves two scenarios with different background traffic. In Scenario 1, each server's background traffic is equal to 500 bps. In Scenario 2, the background traffic for Servers 1 to 4 (P1 to P4) is 1000 bps, and for Servers 5 to 8 (P5 to P8) it is 500 bps. A fixed offered load at a rate of 1500 requests per second (rps) is also injected into the system by a traffic generator for 400 seconds. Fig. 10 shows the performance of the servers.
Three different methods are used to select a server: Round-Robin, Random, and SDN-based (the proposed approach). The evaluation criteria include the servers' throughput (the number of requests serviced by the servers per unit time), their average response time (the time between sending a request from a computer and receiving the acknowledgment from the server), and the resources consumed by the servers. The goal is to achieve the maximum throughput and minimum response time without overloading and with respect to the available resources.

The proposed method achieves better results than the Round-Robin and Random methods in both scenarios (Fig. 10). In addition, the results of the proposed method are similar in both scenarios, whereas the results of Round-Robin and Random in Scenario 2 are worse than in Scenario 1. The reason is that the different background traffic of the servers in Scenario 2 makes the blind load distribution of these two methods worse over time. As can be seen from the comparison of Fig. 11a and 11b, the SDN-based method is able to achieve a throughput near the offered load in both scenarios, as it can estimate the servers' load well using both the response time and the number of connections. Not only is the resource consumption of the Round-Robin and Random methods higher than that of the SDN-based method, but their average response time is also longer. The servers' resource utilization in the SDN-based approach is approximately equal across servers, indicating a conscious and fair distribution of the load by this method (Fig. 10e to 10p).
In the Round-Robin and Random methods, the unequal distribution of load over time causes unanswered messages to accumulate in the server queues. The resulting redundant retransmissions and manipulation of the timers increase CPU and memory occupation and worsen the server overload. In other words, failure to perform one task in due time affects subsequent tasks, causing CPU and memory overload, and the same holds for the tasks that follow. As a result, the server processor and memory are constantly occupied with a large number of old or retransmitted messages, all because of the lack of fair distribution. Besides that, the flow of new requests leads to overflowing queues and lost packets. In this regard, the SDN-based framework uses its global view to distribute the load fairly and according to the server capacities. On the contrary, the Random and Round-Robin approaches neglect the capacity of the servers, which over time causes the queues to fill up and the servers to become overloaded.

(a) Servers' throughput in Scenario 1
(b) Servers' throughput in Scenario 2
(c) Servers' average response time in Scenario 1
(d) Servers' average response time in Scenario 2
(e) Servers' average CPU usage in Scenario 1 (SDN-based)
(f) Servers' average CPU usage in Scenario 1 (Round-Robin)
(g) Servers' average CPU usage in Scenario 1 (Random)
(h) Servers' average CPU usage in Scenario 2 (SDN-based)
(i) Servers' average CPU usage in Scenario 2 (Round-Robin)
(j) Servers' average CPU usage in Scenario 2 (Random)
(k) Servers' average memory utilization in Scenario 1 (SDN-based)
(l) Servers' average memory utilization in Scenario 1 (Round-Robin)
(m) Servers' average memory utilization in Scenario 1 (Random)
(n) Servers' average memory utilization in Scenario 2 (SDN-based)
(o) Servers' average memory utilization in Scenario 2 (Round-Robin)
(p) Servers' average memory utilization in Scenario 2 (Random)

Figure 10: Comparison of the data center servers' performance in two scenarios
Fig. 11 shows the performance of the proposed controller. The controller throughput is defined as the number of flows serviced per unit time, and the average controller response time is the time between sending a Packet-In message from the switch and receiving the Flow-Mod from the controller.

As shown in Fig. 11a and 11b, the controller performance is scenario-independent, with an average throughput of approximately 1450 fps and an average response time of approximately 7 ms. This indicates that the controller is able to achieve high throughput with very low delay, while its modules do not impose an additional load on the resources. The controller consumes fewer resources than the servers because each server is responsible for establishing and terminating all requests, whereas the controller only manages a limited number of switches. This can also be deduced from Fig. 12, which shows that the number of packets processed per unit time by the servers is approximately 7 times that of the controller. Roughly 7 messages are involved in establishing a request, while the process of managing the rules on the switches by the controller starts with a Packet-In message and ends with a Flow-Mod message. Therefore, overload is much less likely on the controller than on the servers. Moreover, the servers are protected from overload by the fair distribution of the load by the controller.

Another reason the servers may get into trouble is the sudden failure of network components and the resulting sudden loss of capacity, since the load of a failed server is imposed on the other servers. To test this situation, we let P1 to P4 fail at second 80 and recover at second 160 in the first scenario; similarly, we let P1 to P4 fail at second 240 and recover at second 320 in the second scenario. Fig. 13 shows the performance under these conditions. In Scenario 1, the controller is able to transfer the entire load to P5 to P8 from second 80 to 160, indicating the speed of the controller's reaction: despite the sudden failure of P1 to P4, the overall system throughput is kept near the offered load. The resource consumption of P5 to P8, as well as that of the controller, increases during this period. Under normal circumstances, server resource usage is almost equal across servers and the controller's resource consumption is very low.

(a) Controller throughput
(b) The average response time of the controller
(c) The average controller CPU usage
(d) The average controller memory usage

Figure 11: Comparison of the controller performance in two scenarios
In Scenario 2, the same process is repeated between seconds 240 and 320, except that the load on P1 to P4 is not equal to that on P5 to P8; nevertheless, the controller is able to transfer the entire load to P5 to P8 at second 240 and to redistribute the load across all servers at second 320.

Figure 12: Number of packets processed per second
3.2.2. Experiment 2: Variable load
In the previous section, a constant load of 1500 requests per second (1500 rps) was injected into the system; in this section, we evaluate the performance under variable load. In the previous section, the servers were not exposed to overload because their capacity exceeded 1500 rps (and thus they did not face a resource shortage). In this section, as the load increases, we create the overload phenomenon and test the performance of the servers and the controller. The results are shown in Fig. 14. The load starts at 1500 and increases to 6000 requests per second in four steps (up to second 400). Then, as a result of a sudden drop at second 400, it returns to 1500 requests per second. At second 500, it jumps suddenly to 6000 requests per second, and it settles at 3000 requests per second in the last 100 seconds. Before second 200, the server and controller throughputs are very close to the offered load. At second 200, the servers become overloaded and remain so until second 400. During these 200 seconds, the average throughput of the servers is approximately 3000 rps, and the rest of the offered load is rejected. The servers' overload occurs due to a lack of resources, especially the processor (Fig. 14c and 14d).

(a) Throughput
(b) Average CPU usage
(c) Average memory usage

Figure 13: Controller performance when P1 to P4 servers fail
For example, Fig. 14c shows the server CPU saturation at seconds 250, 350, and 550. In a traditional architecture, server CPU saturation usually drives the throughput towards zero; the SDN architecture, however, is able to prevent a severe throughput loss under overload and to utilize the maximum server capacity (about 3000 rps). Unlike the servers, the controller does not become overloaded at the offered peak load, and its throughput always follows the offered load; for example, its average throughput between seconds 300 and 400 is approximately 5978 fps. Depending on the throughput, the servers' response time also varies (Fig. 14b): when overload occurs, response times increase several-fold. An overload control algorithm may not be able to restore the servers' throughput to normal after the load drops back below the server capacity (i.e., a return to the state without overload), but when the overload ends (at second 400), the SDN-based architecture is again able to bring the system throughput up to the offered load using proper load distribution. Unlike the incremental load offered in the first 400 seconds, an immediate congestion occurs at second 500. Even then, despite the servers' processors being 100 percent busy (Fig. 14c), the SDN-based architecture is still able to maximize the server throughput. Immediate congestion occurs when a large number of computers send requests simultaneously. The high throughput of the servers under sudden load fluctuations indicates the stability of the system. Finally, in the last 100 seconds, the call rejection rate is negligible, the throughput is at its maximum, and the response time is approximately 10 milliseconds. It is worth noting that in the intervals from second 200 to 400 and from 500 to 600, in which the input load exceeds the network resources, a throughput near the offered load could be achieved by increasing the server resources and removing their hardware limitations.

3.2.3. Experiment 3: Comparison with the traditional Three-layer architecture

Table 1 provides a comparison of the traditional Three-layer architecture and the SDN-based architecture in terms of throughput, delay, and resource consumption.

(a) Throughput
(b) Average response time of servers
(c) Average CPU usage
(d) Average memory usage

Figure 14: Performance over time and with different offered-loads
Table 1: Comparison of traditional Three-layer and SDN architectures

                             1500 rps          2000 rps          2500 rps
                         Traditional  SDN  Traditional  SDN  Traditional  SDN
Throughput (rps)             956     1432     1432     1897     1998     2456
Delay (ms)                    28       12       57       13       98       14
Average CPU Usage (%)         25       16       34       18       44       23
Average Memory Usage (%)      27       17       37       20       55       24

As can be seen from Table 1, SDN technology significantly improves the quality of service of the requests between servers, including throughput, delay, and resource consumption.

3.3. Discussion

The higher performance of our proposed method comes from an efficient and integrated framework based on the SDN concept, which decouples network control from data forwarding through direct programming. With its inherent decoupling of the control plane from the data plane, SDN offers greater control of a network through programming. This brings the potential benefits of enhanced configuration, improved performance, and encouraged innovation in network architecture and operations. In particular, SDN offers a promising alternative for traffic steering by programmatically configuring forwarding rules. Our proposed architecture thus owes its performance improvement to the SDN concept and the global view of the entire network. Compared to the SDN-based approach, the traditional approach leads to inefficiency in the use and management of infrastructure hardware resources. As opposed to traditional hardware-centric products, where the control and data planes are embedded in closed equipment, SDN allows the control plane logic to be decoupled from the network hardware (data plane) and moves the control logic to a programmable software component, the SDN controller4. The SDN controller and APIs (such as OpenFlow) are capable of L2/L3/L4-based policy enforcement.

So overall, by implementing a new orchestration level, SDN can tackle the inflexibility and complexity of the traditional network. SDN provides enterprises with the ability to control their networks programmatically and to scale them without affecting performance, reliability, or user experience. The data- and control-plane abstractions constitute the immense worth of SDN. By eliminating the complexity of the infrastructure layer and adding visibility for applications and services, SDN simplifies network management and brings virtualization to the network. It abstracts flow control from individual devices to the network level. Network-wide data-flow control gives administrators the power to define network flows that meet connectivity requirements and address the specific needs of discrete user communities.

4 We want the control of the network to be centralized rather than having each device be its own island, which greatly simplifies the network discovery, connectivity, and control issues that have arisen with the current system. Having this overarching control makes the whole network programmable instead of having to configure each device individually every time an application is added or something moves.

4. Conclusion

In this paper, we proposed an SDN-based framework for load balancing in data centers. The framework was applied to the well-known Three-layer, Fat-Tree, BCube, and Dcell topologies, each implemented on a real platform and subjected to multiple experiments. We also proposed a proactive load-balancing mechanism that uses a PID controller to balance the load between data center servers; this mechanism is implemented as an SDN controller module and is applicable to all four data center architectures. In the testbed, we used Open vSwitch v2.4.1 and Floodlight v1.2 to implement the OpenFlow switches and controller. We evaluated each architecture under three experiments: injecting a fixed load into the data center, injecting a variable load into the data center, and comparing the traditional architecture with the SDN-based architecture. The observations and experiments show that all four SDN-based architectures are able to distribute the load well using algorithms such as the proposed method or even Round-Robin. They also score higher on the evaluation criteria than the traditional architectures and show significant improvements in parameters such as throughput, delay, and resource consumption. As future work, we plan to present a framework that combines SDN and NFV (Network Functions Virtualization) technologies, providing virtual servers in data centers as VNFs (Virtual Network Functions) and optimizing their network communication via the SDN controller; in this case, the energy consumption of data centers can also be optimized. Mathematical modeling of this framework will also be pursued in our future work.

Acknowledgment

This work was supported by the Quchan University of Technology (Grant No. 11942).

References

[1] A. Darabseh, M. Al-Ayyoub, Y. Jararweh, E. Benkhelifa, M. Vouk, A. Rindos, SDDC: A software defined datacenter experimental framework, in: 2015 3rd International Conference on Future Internet of Things and Cloud, 2015, pp. 189–194. doi:10.1109/FiCloud.2015.127.

[2] T. Hu, Z. Guo, P. Yi, T. Baker, J. Lan, Multi-controller based software-defined networking: A survey, IEEE Access 6 (2018) 15980–15996.

[3] Y. Zhang, L. Cui, W. Wang, Y. Zhang, A survey on software defined networking with multiple controllers, Journal of Network and Computer Applications 103 (2018) 101–118.

[4] J. Xie, F. R. Yu, T. Huang, R. Xie, J. Liu, C. Wang, Y. Liu, A survey of machine learning techniques applied to software defined networking (SDN): Research issues and challenges, IEEE Communications Surveys & Tutorials 21 (1) (2018) 393–430.

[5] V. S. N. Amulothu, A. Kapur, K. Khani, V. Shukla, Adaptive software defined networking controller, US Patent App. 10/257,073 (Apr. 9, 2019).

[6] K. TaKeaWays, The software-defined data center is the future of infrastructure architecture, strategies.

[7] M. Carlson, A. Yoder, L. Schoeb, D. Deel, C. Pratt, C. Lionetti, D. Voigt, Software defined storage, Storage Networking Industry Assoc. working draft (2014) 20–24.

[8] U. Bayram, D. Divine, P. Zhou, E. W. Rozier, Improving reliability with dynamic syndrome allocation in intelligent software defined data centers, in: 2015 45th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, IEEE, 2015, pp. 219–230.

[9] V. Törhönen, Designing a software-defined datacenter, Master's thesis (2014).

[10] E. W. Rozier, P. Zhou, D. Divine, Building intelligence for software defined data centers: modeling usage patterns, in: Proceedings of the 6th International Systems and Storage Conference, ACM, 2013, p. 20.

[11] U. Paščinski, J. Trnkoczy, V. Stankovski, M. Cigale, S. Gec, QoS-aware orchestration of network intensive software utilities within software defined data centres, Journal of Grid Computing 16 (1) (2018) 85–112.

[12] L. Cui, X. Hailing, D. Chen, Distributed global load-balancing system for software-defined data centers, US Patent 9,998,530 (Jun. 12, 2018).

[13] Y. Hu, C. Li, L. Liu, T. Li, Hope: Enabling efficient service orchestration in software-defined data centers, in: Proceedings of the 2016 International Conference on Supercomputing, ACM, 2016, p. 10.

[14] R. Touihri, S. Alwan, A. Dandoush, N. Aitsaadi, C. Veillon, Novel optimized SDN routing scheme in CamCube server only data center networks, in: 2019 16th IEEE Annual Consumer Communications & Networking Conference (CCNC), 2019, pp. 1–2. doi:10.1109/CCNC.2019.8651677.

[15] M. Yang, H. Rastegarfar, I. B. Djordjevic, Physical-layer adaptive resource allocation in software-defined data center networks, IEEE/OSA Journal of Optical Communications and Networking 10 (12) (2018) 1015–1026. doi:10.1364/JOCN.10.001015.

[16] Y. Xu, Y. Yan, Z. Dai, X. Wang, A management model for SDN-based data center networks, in: 2014 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), 2014, pp. 113–114. doi:10.1109/INFCOMW.2014.6849181.

[17] W. Hou, L. Shi, Y. Wang, F. Wang, H. Lyu, M. St-Hilaire, An improved SDN-based fabric for flexible data center networks, in: 2017 International Conference on Computing, Networking and Communications (ICNC), 2017, pp. 432–436. doi:10.1109/ICCNC.2017.7876167.

[18] H. Yao, W. Muqing, L. Shen, An SDN-based slow start algorithm for data center networks, in: 2017 IEEE 2nd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), 2017, pp. 687–691. doi:10.1109/ITNEC.2017.8284820.

[19] J. Di, Q. Ma, Design and implementation of SDN-based QoS traffic control method for electric power data center network, in: 2016 2nd IEEE International Conference on Computer and Communications (ICCC), 2016, pp. 2669–2672. doi:10.1109/CompComm.2016.7925182.

[20] P. Kathiravelu, Software-defined networking-based enhancements to data quality and QoS in multi-tenanted data center clouds, in: 2016 IEEE International Conference on Cloud Engineering Workshop (IC2EW), 2016, pp. 201–203. doi:10.1109/IC2EW.2016.19.

[21] K. Xie, X. Huang, S. Hao, M. Ma, P. Zhang, D. Hu, E3MC: Improving energy efficiency via elastic multi-controller SDN in data center networks, IEEE Access 4 (2016) 6780–6791. doi:10.1109/ACCESS.2016.2617871.

[22] H. Zhu, H. Fan, X. Luo, Y. Jin, Intelligent timeout master: Dynamic timeout for SDN-based data centers, in: 2015 IFIP/IEEE International Symposium on Integrated Network Management (IM), 2015, pp. 734–737. doi:10.1109/INM.2015.7140363.

[23] R. Hwang, Y. Tang, Fast failover mechanism for SDN-enabled data centers, in: 2016 International Computer Symposium (ICS), 2016, pp. 171–176. doi:10.1109/ICS.2016.0042.

[24] U. Zakia, H. Ben Yedder, Dynamic load balancing in SDN-based data center networks, in: 2017 8th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), 2017, pp. 242–247. doi:10.1109/IEMCON.2017.8117206.

5. Appendix: Supplementary experiments and results

5.1. Experimental results of the Fat-Tree architecture

We use 10 OpenFlow switches and 8 servers according to the architecture of Fig. 3 and repeat the experiments of the previous sections to test this architecture.

5.1.1. Experiment 1: Fixed load


There are two scenarios in this experiment. In Scenario 1, each server's background traffic is equal to 500 bps. In Scenario 2, the background traffic for Servers 1 to 4 (P1 to P4) is 1000 bps and for Servers 5 to 8 (P5 to P8) it is 500 bps. A fixed offered load at a rate of 1500 requests per second is also injected into the system by a traffic generator for 400 seconds. Figures 15 and 16 show the performance of the servers.

As can be seen from Figures 15 and 16, the throughput and average delay in both scenarios are slightly improved compared to the Three-layer architecture.

(a) Servers' throughput in Scenario 1
(b) Servers' throughput in Scenario 2
(c) Servers' average response time in Scenario 1
(d) Servers' average response time in Scenario 2

Figure 15: Comparison of the data center servers' performance in two scenarios
Table 2: Comparison of traditional Fat-Tree architecture with SDN

1500 rps 2000 rps 2500 rps


Traditional SDN Traditional SDN Traditional SDN
Throughput (rps) 844 1389 1376 1787 1997 2497
Delay (ms) 24 11 55 12 96 12
Average CPU Usage (%) 26 15 33 16 42 20
Average Memory Usage (%) 23 14 35 18 53 21

The reason is that the Fat-Tree architecture (compared to the Three-layer architecture) is divided into two general sections. The OpenFlow switches are split into two subsystems in this architecture, which results in better load balancing among the 8 servers. As in the previous sections, the results of Scenario 2 are slightly worse than those of Scenario 1 due to the heavier background traffic in that scenario.

5.1.2. Experiment 2: Variable load


In the previous section, a constant load of 1500 requests per second (1500 rps) is injected into the system; in this section, we evaluate performance under a variable load. The results are shown in Fig. 17. These results are almost similar to those obtained with the Three-layer architecture.

5.1.3. Experiment 3: Comparison with the traditional Fat-Tree architecture


Table 2 provides a comparison of traditional Fat-Tree and SDN-based architectures in terms of throughput, delay, and resource consumption. As can be seen, SDN technology has been able to significantly improve the quality of service of server requests, including throughput, delay, and resource consumption.
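The Average CPU Usage and Average Memory Usage rows of Table 2 can be obtained with a simple sampling loop on each server. The sketch below assumes the psutil library is available; the paper does not state which measurement tooling was used.

# Sketch of sampling per-server CPU and memory usage during a run,
# assuming psutil is installed; an assumption, not the paper's tooling.
import time
import psutil


def sample_usage(duration_s=400, period_s=1.0):
    cpu, mem = [], []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        cpu.append(psutil.cpu_percent(interval=period_s))   # blocks period_s
        mem.append(psutil.virtual_memory().percent)
    return sum(cpu) / len(cpu), sum(mem) / len(mem)


if __name__ == '__main__':
    avg_cpu, avg_mem = sample_usage(duration_s=60)
    print(f'Average CPU {avg_cpu:.1f}%, average memory {avg_mem:.1f}%')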

5.2. Experimental results for BCube architecture

We use 8 OpenFlow switches and 16 servers in accordance with Fig. 5 to


test the BCube architecture.
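Assuming Fig. 5 follows the standard BCube(4,1) construction, which matches this scale exactly (two levels of four 4-port switches and 16 servers), the wiring rule can be sketched as follows; the device names are illustrative only.

# Sketch of BCube(4,1) wiring at this testbed's scale (8 switches, 16
# servers): server i attaches to level-0 switch i // 4 and to level-1
# switch i % 4. Names are illustrative assumptions.
N = 4                                   # switch port count


def bcube_links():
    links = []
    for i in range(N * N):              # 16 servers, two links each
        links.append((f'srv{i:02d}', f'l0-sw{i // N}'))
        links.append((f'srv{i:02d}', f'l1-sw{i % N}'))
    return links


if __name__ == '__main__':
    for server, switch in bcube_links():
        print(server, '<->', switch)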

Figure 16: Comparison of data center servers' resource usage in two scenarios. (a) Servers' average CPU usage in scenario 1; (b) servers' average CPU usage in scenario 2; (c) servers' average memory usage in scenario 1; (d) servers' average memory usage in scenario 2.
Figure 17: Performance over time and with different offered loads. (a) Throughput (offered load, controller throughput, servers' throughput, rejection rate); (b) servers' average response time; (c) average CPU usage; (d) average memory usage.
5.2.1. Experiment 1: Constant load
The first experiment, as described in the previous sections, consists of two scenarios with different background traffic. In Scenario 1, each server's background traffic is equal to 500 packets per second. However, in the second scenario, the background traffic of the servers is not equal. The results are shown in Fig. 18.
As shown in Fig. 18 and the figures in the previous sections, the servers are less efficient in the BCube architecture than in the Fat-Tree architecture, but relatively more efficient than in the Three-layer architecture. This is also reflected in the servers' resource usage in Fig. 19.
As can be seen, the servers' resource usage in this architecture is slightly higher than in the Fat-Tree architecture. This is because, in the Fat-Tree architecture, all the components are divided into two parts and each server can be reached with fewer hops (links), whereas in the BCube architecture, reaching the destination server from the source server requires more hops.
It is worth noting that all three examined architectures distribute the load correctly, and the behavior of the three algorithms (SDN-based, Round-Robin, and Random) is consistent across all three architectures, with no significant difference. For example, Scenario 2 consumes more resources than Scenario 1 (Fig. 19).
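How evenly the load is spread across the servers can be quantified with Jain's fairness index over the per-server throughputs, where a value of 1.0 means a perfectly even split. The sketch below is a minimal illustration with hypothetical values, not measurements from these experiments.

# Jain's fairness index over per-server throughputs (1.0 = perfectly
# balanced). The sample values are hypothetical placeholders.
def jain_fairness(throughputs):
    n = len(throughputs)
    total = sum(throughputs)
    squares = sum(x * x for x in throughputs)
    return (total * total) / (n * squares) if squares else 1.0


if __name__ == '__main__':
    per_server_rps = [190, 185, 192, 188, 186, 191, 189, 187]  # hypothetical
    print('Jain fairness index: %.3f' % jain_fairness(per_server_rps))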

5.2.2. Experiment 2: Variable load


In the previous section, a constant load of 1500 requests per second (1500 rps) is injected into the system; in this section, we evaluate performance under a variable load. The results are shown in Fig. 20. These results are almost similar to those obtained with the previous architectures.
As can be seen from Fig. 20c and 20d, the amount of resources consumed by the controller is not comparable to that of the servers, and therefore the probability of the controller becoming a bottleneck is very low.
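One way to keep an eye on the controller side during such a run is to poll Floodlight's REST interface, as sketched below. The controller address, the default REST port 8080, and the /wm/core/controller/switches/json endpoint are assumptions that should be verified against the deployed Floodlight version.

# Sketch of polling the controller for its connected switches, e.g. to
# confirm that all OpenFlow switches stay attached while the controller
# remains lightly loaded. Address, port, and endpoint are assumptions.
import json
import urllib.request

CONTROLLER = 'http://127.0.0.1:8080'    # assumed controller address


def connected_switches():
    url = CONTROLLER + '/wm/core/controller/switches/json'
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.loads(resp.read().decode('utf-8'))


if __name__ == '__main__':
    switches = connected_switches()
    print(f'{len(switches)} switches connected')
    for sw in switches:
        print(sw.get('switchDPID', sw))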

Figure 18: Comparison of the performance of data center servers in two scenarios. (a) Servers' throughput in Scenario 1; (b) servers' throughput in Scenario 2; (c) servers' average delay in Scenario 1; (d) servers' average delay in Scenario 2.

Figure 19: Comparison of resource usage by the data center servers in two scenarios. (a) Average CPU usage in Scenario 1; (b) average CPU usage in Scenario 2.

5.2.3. Experiment 3: Comparison with the traditional BCube architecture

Table 3 provides a comparison of traditional BCube and SDN-based architectures in terms of throughput, delay, and resource consumption. As can be seen, SDN technology has been able to significantly improve the quality of service of server requests, including throughput, delay, and resource consumption. Up to this point, in all three studied architectures, SDN technology has had a great impact on server performance.

Table 3: Comparison of traditional BCube architecture with SDN

‫‪1500 rps‬‬ ‫‪2000 rps‬‬ ‫‪2500 rps‬‬


‫‪Traditional‬‬ ‫‪SDN‬‬ ‫‪Traditional‬‬ ‫‪SDN‬‬ ‫‪Traditional‬‬ ‫‪SDN‬‬
‫)‪Throughput (rps‬‬ ‫‪824‬‬ ‫‪1442‬‬ ‫‪1354‬‬ ‫‪1987‬‬ ‫‪2021‬‬ ‫‪2496‬‬
‫)‪Delay (ms‬‬ ‫‪23‬‬ ‫‪12‬‬ ‫‪56‬‬ ‫‪13‬‬ ‫‪97‬‬ ‫‪11‬‬
‫)‪Average CPU Usage (%‬‬ ‫‪25‬‬ ‫‪14‬‬ ‫‪34‬‬ ‫‪17‬‬ ‫‪41‬‬ ‫‪23‬‬
‫)‪Average Memory Usage (%‬‬ ‫‪27‬‬ ‫‪17‬‬ ‫‪36‬‬ ‫‪20‬‬ ‫‪54‬‬ ‫‪22‬‬

‫‪41‬‬
‫آزمایش دوم‪ :‬بار متغیر‬ ‫‪-2‬‬
‫آزمایش دوم‪ :‬بار متغیر‬ ‫‪-2‬‬
‫در بخش قبل بارِ ثابتِ ‪ 1500‬درخواست در ثانی (‪ )1500 rps‬ب سیست تزریق گشت اما در این بخرش بر‬
‫در بخش قبل بارِ ثابتِ ‪ 1500‬درخواست در ثانی (‪ )1500 rps‬ب سیست تزریق گشت اما در این بخرش بر‬
‫ارزیابر عملکرد با بارِ متغیر مرپردازی ‪ .‬نتایج در شکل ‪ 20-5‬نشان دادم شدم است‪ .‬این نتایج تقریبرا مشراب‬
‫ارزیابر عملکرد با بارِ متغیر مرپردازی ‪ .‬نتایج در شکل ‪ 20-5‬نشان دادم شدم است‪ .‬این نتایج تقریبرا مشراب‬
‫نتایج بدست آمدم از معماریرای قبل مر باشد‪.‬‬
‫نتایج بدست آمدم از معماریرای قبل مر باشد‪.‬‬
‫‪7500‬‬ ‫‪Offered Load‬‬
‫‪7500‬‬
‫‪7500‬‬ ‫‪OfferedController‬‬
‫‪Load‬‬ ‫‪Throughput‬‬
‫‪Rejection‬‬
‫‪Controller‬‬ ‫‪Rate‬‬
‫‪Throughput‬‬
‫‪7500‬‬
‫‪6000‬‬ ‫‪Servers‬‬ ‫‪6000‬‬
‫‪Rejection‬‬ ‫‪Rate Throughput‬‬
‫‪6000‬‬ ‫‪6000‬‬

‫)‪Flow rate (fps‬‬


‫)‪Call rate (rps‬‬
‫‪Servers Throughput‬‬
‫‪4500‬‬ ‫‪4500‬‬

‫)‪Flow rate (fps‬‬


‫)‪Call rate (rps‬‬
‫‪4500‬‬ ‫‪4500‬‬
‫‪3000‬‬ ‫‪3000‬‬
‫‪3000‬‬ ‫‪3000‬‬
‫‪1500‬‬ ‫‪1500‬‬
‫‪1500‬‬ ‫‪1500‬‬
‫‪0‬‬
‫‪0‬‬ ‫‪0‬‬ ‫‪100 200 300 400 500 600 700‬‬
‫‪0 100 200 300 400‬‬ ‫‪500 600 700‬‬
‫)‪Time (Second‬‬
‫)‪Time (Second‬‬
‫الف) گذردهر‬
‫‪(a) Throughput‬‬
‫الف) گذردهر‬
‫‪1000‬‬
‫‪1000‬‬
‫)‪Average response time (ms‬‬
‫)‪Average response time (ms‬‬

‫‪100‬‬
‫‪100‬‬

‫‪10‬‬
‫‪10‬‬

‫‪1‬‬
‫‪1‬‬ ‫‪100 200‬‬ ‫‪300‬‬ ‫‪400‬‬ ‫‪500 600‬‬ ‫‪700‬‬
‫‪100 200‬‬ ‫‪300‬‬ ‫‪400 (Second)500 600‬‬
‫‪Time‬‬ ‫‪700‬‬
‫)‪Time (Second‬‬
‫‪(b) Average‬‬ ‫ب)‪delay‬‬
‫میانگین تاخیر‬
‫‪100‬‬ ‫ب) میانگین تاخیر‬
‫)‪Average CPU Usage (%‬‬

‫‪100‬‬ ‫‪Controller‬‬
‫)‪Average CPU Usage (%‬‬

‫‪Controller‬‬
‫‪Servers‬‬
‫‪Servers‬‬
‫‪50‬‬
‫‪50‬‬

‫‪10‬‬
‫‪10‬‬
‫‪0‬‬ ‫‪0‬‬
‫‪0‬‬ ‫‪0150 150‬‬
‫‪250 250350350 450450 550‬‬
‫‪550‬‬ ‫‪650‬‬
‫‪650‬‬
‫)‪Time (Second‬‬
‫)‪Time (Second‬‬
‫ج) میانگین پردازندم مصرفر کنترلر و سرورها‬
‫سرورها‪(c) Average‬‬
‫کنترلر و‪CPU‬‬
‫مصرفر‪usage‬‬ ‫میانگین پردازندم‬
‫‪of controllers‬‬ ‫‪and‬‬ ‫‪ servers‬ج)‬
‫)‪Average Memory Usage (%‬‬

‫‪100‬‬
‫)‪Average Memory Usage (%‬‬

‫‪100‬‬ ‫‪Controller‬‬
‫‪Controller‬‬
‫‪Servers‬‬
‫‪Servers‬‬
‫‪50‬‬

‫‪50‬‬

‫‪0‬‬
‫‪0‬‬ ‫‪150‬‬ ‫‪250‬‬ ‫‪350‬‬ ‫‪450‬‬ ‫‪550‬‬ ‫‪650‬‬
‫)‪Time (Second‬‬
‫‪0‬‬
‫‪0‬‬ ‫سرورها‬ ‫‪150‬‬
‫مصرفر‪of‬کنترلر و‬ ‫‪250‬‬
‫میانگین حافظ‬ ‫‪350‬‬ ‫‪450‬‬
‫‪ servers‬د)‬ ‫‪550‬‬ ‫‪650‬‬
‫‪(d) Average memory‬‬ ‫‪usage‬‬ ‫‪controllers‬‬ ‫‪and‬‬
‫)‪Time (Second‬‬
‫شکل ‪ -20-5‬کارایر در طول زمان و با بارهای ارائ شدم متفاوت‬
‫‪Figure 20: Performance over time and with different offered-loads‬‬
‫د) میانگین حافظ مصرفر کنترلر و سرورها‬
‫همانگون ک از شکل های ج و د ‪ 20-5‬مشخص است میزان مصرف منابع توسط کنترل کنندم قابل مقایس‬
‫است‪.‬متفاوت‬
‫پایین شدم‬ ‫کنندم بسیار‬
‫بارهای ارائ‬ ‫کنترل‬
‫زمان و با‬ ‫شدنطول‬ ‫دلیل‪-20-5‬‬
‫احتمال تنگنا‬
‫کارایر در‬ ‫با سرورها نیست و ب همینشکل‬

‫استیِ‪42‬‬
‫‪BCube‬مصرف منابع توسط کنترل کنندم قابل مقایس‬
‫میزان‬ ‫‪20‬با مشخص‬
‫معماری سنت‬ ‫سومو‪ :‬د ‪-5‬‬
‫مقایسه‬ ‫های ج‬ ‫همانگون‪ -‬ک از شکل‬
‫آزمایش‬ ‫‪3‬‬
‫‪BCube‬است‪.‬‬
‫سنتر و مبتنر بر‬ ‫بسیار پایین‬
‫کنندممعماری‬
‫کنترلبین دو‬
‫شدن منابع‬
‫تنگنامصرف‬ ‫احتمال‬
‫تاخیر و‬ ‫همیننظردلیل‬
‫گذردهر‪،‬‬ ‫نیست و ب‬
‫مقایس ای از‬ ‫سرورها ذیل‬
‫بادر جدول‬
‫‪ SDN‬آوردم شدم است‪.‬‬
‫آزمایش سوم‪ :‬مقایسه با معماری سنتیِ ‪BCube‬‬ ‫‪-3‬‬
‫جدول ‪ : 3-5‬مقایس معماری ‪ BCube‬سنتر با ‪SDN‬‬

‫منابع بین دو معماری ‪ BCube‬سنتر و مبتنر بر‬ ‫گذردهر‪ ،‬تاخیر و مصرف‬


‫‪1500 rps‬‬ ‫ذیل مقایس ای از نظر‬
‫‪2000 rps‬‬ ‫جدول ‪2500‬‬
‫‪rps‬‬ ‫در‬
‫‪Methods‬‬

‫‪11‬‬
‫‪ SDN‬آوردم شدم است‪.‬‬
‫جدول ‪ : 3-5‬مقایس معماری ‪ BCube‬سنتر با ‪SDN‬‬
5.3. Experimental results of Dcell architecture

We use 5 OpenFlow switches and 20 servers in accordance with the archi-


tecture in Fig. 7, to test the Dcell architecture.
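Assuming Fig. 7 follows the standard DCell construction with n = 4 at level 1 (five DCell0 cells, each with one mini-switch and four servers, matching the 5 switches and 20 servers used here), the wiring rule can be sketched as follows; the naming is illustrative only.

# Sketch of DCell_1 wiring with n = 4: five cells, each with one switch and
# four servers. Server [i, j] connects to its cell switch, and cells are
# interconnected by linking server [i, j-1] with server [j, i] for every i
# and every j > i (the rule of the original DCell construction). Names are
# illustrative assumptions.
N = 4                                   # servers per cell; N + 1 = 5 cells


def dcell1_links():
    links = []
    for cell in range(N + 1):           # intra-cell: server <-> cell switch
        for j in range(N):
            links.append((f'srv[{cell},{j}]', f'sw{cell}'))
    for i in range(N + 1):              # inter-cell: [i, j-1] <-> [j, i]
        for j in range(i + 1, N + 1):
            links.append((f'srv[{i},{j - 1}]', f'srv[{j},{i}]'))
    return links


if __name__ == '__main__':
    for a, b in dcell1_links():
        print(a, '<->', b)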

5.3.1. Experiment 1: Constant load


This experiment is also a repetition of the first experiment in the previous
620 sections. The first experiment involves two scenarios with different background
traffic. In Scenario 1, each server’s background traffic is equal to 500 bps. In
Scenario 2, the background traffic for servers 1 to 10 (P1 to P10) is 1000 and
for servers 11 to 20 (P11 to P20) is 500 bps. Fig. 21 illustrates the performance
of SIP servers in this architecture.
As can be seen, the delay difference among the three methods (SDN-based, Round-Robin, and Random) is smaller than in the previous architectures. In other words, the performance of these three algorithms is closer to one another in this architecture than in the previous ones.
The resource-consumption results, such as CPU and memory usage, are almost similar to those of the BCube architecture, so we do not repeat them here. Nevertheless, as can be seen from the results above, the SDN controller has been able to distribute the load among the servers with high throughput and low latency.

5.3.2. Experiment 2: Variable load


This section evaluates performance under variable loads. However, since the results are similar to those of the BCube architecture, we do not repeat them here. It should be noted that the architecture considered for Dcell consists of five cells, and transferring packets from one cell to another incurs slightly more delay than in the other architectures. For example, according to our observations, this architecture had a slightly longer delay than the previous architectures when sending load from server 1 in cell 1 to server 2 in cell 2. It should be noted, however, that the SDN-based architectures provide considerably better service quality than the traditional ones.

Figure 21: Server performance over time. (a) Servers' throughput in Scenario 1; (b) servers' throughput in Scenario 2; (c) servers' delay in Scenario 1; (d) servers' delay in Scenario 2.
Table 4: Comparison of traditional Dcell architecture with SDN

1500 rps 2000 rps 2500 rps


Traditional SDN Traditional SDN Traditional SDN
Throughput (rps) 1163 1445 1451 1998 2065 2498
Delay (ms) 22 12 55 11 98 12
Average CPU Usage (%) 24 11 33 18 42 22
Average Memory Usage (%) 25 16 33 18 55 23

5.3.3. Experiment 3: Comparison with traditional Dcell architecture


Table 4 provides a comparison of traditional Dcell and SDN-based architec-
tures in terms of throughput, delay, and resource consumption.
As can be seen, SDN technology has been able to significantly improve the quality of service of server requests, including throughput, delay, and resource consumption.
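As a quick sanity check on Table 4, the short script below derives the relative throughput gain and delay reduction of the SDN-based design at each offered load directly from the values reported in the table.

# Relative improvements computed from the Table 4 values
# (traditional vs. SDN-based Dcell).
loads = ('1500 rps', '2000 rps', '2500 rps')
throughput = {'traditional': (1163, 1451, 2065), 'sdn': (1445, 1998, 2498)}
delay_ms = {'traditional': (22, 55, 98), 'sdn': (12, 11, 12)}

for i, load in enumerate(loads):
    t_old, t_new = throughput['traditional'][i], throughput['sdn'][i]
    d_old, d_new = delay_ms['traditional'][i], delay_ms['sdn'][i]
    thr_gain = 100.0 * (t_new - t_old) / t_old
    delay_cut = 100.0 * (d_old - d_new) / d_old
    print(f'{load}: throughput +{thr_gain:.0f}%, delay -{delay_cut:.0f}%')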

