Final Copy of Computer Networks
GROUP ONE
Table of Contents
Quality of Service (QoS)
Fundamentals of congestion control and traffic management
Congestion Control
SDN (Software Defined Networking)
Network Function Virtualization (NFV)
Benefits
Integrated and Differentiated Services
Integrated Services
Differentiated Services
Protocol Support for QoS
Resource Reservation Protocol (RSVP)
Multiprotocol Label Switching (MPLS)
TCP Congestion Control
Quality of Service (QoS)
QoS refers to any technology that manages data traffic to reduce packet loss, latency and jitter
on a network. QoS controls and manages network resources by setting priorities for specific
types of data on the network.
When one or more packets fail to reach the intended destination, this is called packet loss.
Latency is the time it takes for a data packet to travel from one designated point to another.
Jitter is the variation in the time between arriving data packets, caused by network
congestion or route changes. The longer data packets take to transmit, the more jitter affects
audio quality. The standard jitter measurement is in milliseconds (ms).
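As a concrete illustration, jitter can be estimated from packet arrival timestamps as the average change between consecutive inter-arrival gaps. The following Python sketch is illustrative only (the function name jitter_ms is our own, not from any standard):

```python
def jitter_ms(arrival_times_ms):
    """Mean absolute variation between consecutive inter-arrival gaps, in ms."""
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    if len(gaps) < 2:
        return 0.0
    return sum(abs(g2 - g1) for g1, g2 in zip(gaps, gaps[1:])) / (len(gaps) - 1)

# Packets arriving every 20 ms have zero jitter; uneven gaps do not.
print(jitter_ms([0, 20, 40, 60]))   # 0.0
print(jitter_ms([0, 20, 50, 60]))   # gaps are 20, 30, 10 -> (10 + 20) / 2 = 15.0
```

Perfectly even spacing yields zero jitter; any unevenness in the gaps raises the value.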
Several techniques can be used to improve quality of service, including scheduling,
traffic shaping, admission control, and resource reservation.
1. Scheduling
There are three common scheduling techniques: FIFO queuing, priority queuing, and weighted
fair queuing.
a. FIFO queuing – Packets wait in a buffer (queue) until the node (router or
switch) is ready to process them. If the average arrival rate is higher than the
average processing rate, the queue fills up and new packets are discarded. A
FIFO queue is familiar to anyone who has had to wait for a bus at a bus stop.
b. Priority Queuing – Packets are first assigned to a priority class. Packets
in the highest-priority queue are processed first; packets in the lowest-priority
queue are processed last. A priority queue can provide better QoS than a FIFO
queue because higher-priority traffic, such as multimedia, can reach the destination
with less delay. However, there is a potential drawback: if there is a continuous
flow in a high-priority queue, the packets in the lower-priority queues will never
have a chance to be processed, a condition called starvation.
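The behaviour of a priority scheduler, including the starvation risk just described, can be sketched with a small Python class (a toy model built on the standard heapq module; the class name and traffic labels are hypothetical):

```python
import heapq

class PriorityScheduler:
    """Always serves the lowest-numbered (highest) priority class first.

    If high-priority packets keep arriving, lower classes starve.
    """
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps FIFO order within one class

    def enqueue(self, priority, packet):
        heapq.heappush(self._heap, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

s = PriorityScheduler()
s.enqueue(2, "email")
s.enqueue(0, "voice-1")
s.enqueue(1, "web")
s.enqueue(0, "voice-2")
print([s.dequeue() for _ in range(4)])  # ['voice-1', 'voice-2', 'web', 'email']
```

Voice traffic (class 0) is always served before web and email, which is exactly why a sustained class-0 flow would starve the others.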
2. Traffic Shaping – a mechanism to control the amount and the rate of the traffic sent into
the network. Two techniques can shape traffic: leaky bucket and token bucket.
a. Leaky bucket - If a bucket has a small hole at the bottom, the water leaks from
the bucket at a constant rate as long as there is water in the bucket. The rate at
which the water leaks does not depend on the rate at which the water is input to
the bucket unless the bucket is empty. The input rate can vary, but the output rate
remains constant. Similarly, in networking, a technique called leaky bucket can
smooth out bursty traffic. Bursty chunks are stored in the bucket and sent out at an
average rate. A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic
by averaging the data rate. It may drop the packets if the bucket is full.
b. Token bucket - The leaky bucket is very restrictive. It does not credit an idle
host. For example, if a host is not sending for a while, its bucket becomes empty.
Now if the host has bursty data, the leaky bucket allows only an average rate. The
time when the host was idle is not taken into account. On the other hand, the
token bucket algorithm allows idle hosts to accumulate credit for the future in the
form of tokens. For each tick of the clock, the system sends n tokens to the
bucket. The system removes one token for every cell (or byte) of data sent. For
example, if n is 100 and the host is idle for 100 ticks, the bucket collects 10,000
tokens. The token bucket can easily be implemented with a counter. The counter is
initialized to zero. Each time a token is added, the counter is incremented by 1.
Each time a unit of data is sent, the counter is decremented by 1. When the
counter is zero, the host cannot send data. The token bucket allows bursty traffic
at a regulated maximum rate.
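The counter-based implementation described above can be sketched in a few lines of Python. This is a toy model: the class name and numbers are illustrative, and a capacity field is added here to bound the maximum burst, which goes slightly beyond the simple counter in the text:

```python
class TokenBucket:
    """Counter-based token bucket: n tokens per clock tick, one token per unit sent."""
    def __init__(self, n, capacity):
        self.n = n                # tokens added to the bucket per tick
        self.capacity = capacity  # bucket size, bounding the largest burst
        self.tokens = 0           # the counter, initialized to zero

    def tick(self):
        self.tokens = min(self.capacity, self.tokens + self.n)

    def send(self, units):
        """Send as much of `units` as the accumulated tokens allow; return units sent."""
        sent = min(units, self.tokens)
        self.tokens -= sent
        return sent

tb = TokenBucket(n=100, capacity=20_000)
for _ in range(100):      # host idle for 100 ticks, as in the text
    tb.tick()
print(tb.tokens)          # 10000 tokens of accumulated credit
print(tb.send(12_000))    # the burst is limited to that credit: 10000
```

When the counter reaches zero the host cannot send, so the bucket allows bursts only up to the credit accumulated while idle.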
The objective of traffic management is to ensure that each connection gets the quality of service
it was promised.
Congestion
Congestion in a network may occur if the load on the network (number of packets sent to the
network) is greater than the capacity of the network (number of packets a network can handle).
Causes of Congestion
Congestion occurs when a router receives data faster than it can send it: insufficient
bandwidth, slow hosts, or data arriving simultaneously from several inputs destined for the
same outgoing line.
The system is not balanced; correcting the problem at one router will probably just move
it to another router.
Senders that are trying to transmit to a congested destination also become congested; they
must continually resend packets that have been dropped or have timed out, and they
must continue to hold unacknowledged messages in memory.
Queues have a finite size; overflowing queues cause packets to be dropped, long
queuing delays cause packets to be resent, and dropped packets are themselves resent.
Congestion Control
Congestion control refers to techniques and mechanisms that can either prevent congestion,
before it happens, or remove congestion, after it has happened.
In general, congestion control mechanisms are divided into two broad categories: open-loop
(prevention) and closed-loop (removal) congestion control.
In open-loop congestion control, policies are applied to prevent congestion before it happens. In
these mechanisms, congestion control is handled by either the source or the destination.
a. Discarding Policy - A good discarding policy by the routers may prevent congestion and
at the same time may not harm the integrity of the transmission. For example, in audio
transmission, if the policy is to discard less sensitive packets when congestion is likely to
happen, the quality of sound is still preserved and congestion is prevented or alleviated.
b. Window Policy - The type of window at the sender may also affect congestion. The
Selective Repeat window is better than the Go-Back-N window for congestion control.
In the Go-Back-N window, when the timer for a packet times out, several packets may be
resent, although some may have arrived safe and sound at the receiver. This duplication
may make the congestion worse. The Selective Repeat window, on the other hand, tries
to resend only the specific packets that have been lost or corrupted.
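The difference between the two window policies can be made concrete with a toy calculation of how many packets are resent after a single loss (a simplified model that ignores cumulative ACKs and timer details; the function name is our own):

```python
def retransmissions(window, lost_index, selective):
    """Packets resent after one loss among `window` outstanding packets.

    Go-Back-N resends the lost packet and everything after it in the
    window; Selective Repeat resends only the lost packet.
    """
    return 1 if selective else window - lost_index

# Window of 8 outstanding packets, the packet at offset 2 is lost:
print(retransmissions(8, 2, selective=False))  # Go-Back-N resends 6 packets
print(retransmissions(8, 2, selective=True))   # Selective Repeat resends 1
```

The five extra packets Go-Back-N resends may already have arrived safely, which is precisely the duplication that can make congestion worse.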
Closed-loop congestion control mechanisms try to alleviate congestion after it happens. Several
mechanisms have been used by different protocols.
Warning Bit
A special bit in the packet header is set by the router to warn the source when congestion is
detected. The bit is copied and piggybacked on the ACK sent back to the sender. The sender
monitors the number of ACK packets it receives with the warning bit set and adjusts its
transmission rate accordingly.
Choke Packets
A more direct way of telling the source to slow down. A choke packet is a control packet
generated at a congested node and transmitted back to restrict traffic flow. On receiving the
choke packet, the source must reduce its transmission rate by a certain percentage.
Implicit Signaling
In implicit signaling, there is no communication between the congested node or nodes and the
source. The source guesses that there is congestion somewhere in the network from other
symptoms. For example, when a source sends several packets and there is no acknowledgment
for a while, one assumption is that the network is congested. The delay in receiving an
acknowledgment is interpreted as congestion in the network; the source should slow down.
Explicit Signaling
The node that experiences congestion can explicitly send a signal to the source or destination.
The explicit signaling method, however, is different from the choke packet method. In the choke
packet method, a separate packet is used for this purpose; in the explicit signaling method, the
signal is included in the packets that carry data. Explicit signaling can occur in either the
forward or the backward direction.
i. Backward Signaling- A bit can be set in a packet moving in the direction opposite to the
congestion. This bit can warn the source that there is congestion and that it needs to slow down
to avoid the discarding of packets.
ii. Forward Signaling- A bit can be set in a packet moving in the direction of the congestion.
This bit can warn the destination that there is congestion. The receiver in this case can use
policies, such as slowing down the acknowledgments, to alleviate the congestion.
SDN (Software Defined Networking)
Application layer – This layer contains the typical network applications that organizations use,
e.g. intrusion detection systems, load balancing or firewalls.
Control layer – This represents the centralized SDN controller, which acts as the brain of the
SDN network. The controller resides on a server and manages policies and traffic flow
throughout the network.
Benefits of SDN
A system administrator can change any network switch’s rules when necessary –
prioritizing, deprioritizing or even blocking some packets for security purposes.
Centralized network positioning
Lower operating costs - SDN also virtualizes hardware and services that were previously
carried out by dedicated hardware.
More granular Security - SDN Controller provides a central point of control to distribute
security and policy information consistently throughout the enterprise
Challenges with SDN
The centralized SDN controller presents a single point of failure and, if targeted by an
attacker, can compromise the entire network.
Migration to SDN - Aside from the technical and cost implications involved, there is also the
challenge of convincing users to switch from legacy networks to SDN, especially if those
networks are performing optimally
Controller Design / Performance – The SDN control plane may have multiple controllers
depending on the network topology design. If not properly designed, there could be failures
in the network.
Integration of SDN with legacy Networks
Network Function Virtualization (NFV)
NFV allows network operators to manage and expand their network capabilities on demand
using virtual, software-based applications where physical boxes once stood in the network
architecture. NFV technology offers the ability to virtualize the network service over the
cloud in its entirety.
This makes it easier to load-balance, scale up and down, and move functions across distributed
hardware resources. With continual updates, operators can keep things running on the latest
software without interruption to their customers. NFV uses virtualized networking components
to support an infrastructure totally independent of hardware.
Firewalls, traffic control and virtual routing are three of the most common virtual network
functions (VNFs). Other functions include working as an alternative to load balancers and
routers. Not only does this framework robustly install software across network locations, but it
also eliminates the need for any hardware infrastructure.
Benefits
SDN operates in a campus, data center and/or cloud environment, while NFV targets the
service provider network
SDN uses OpenFlow as a communication protocol, while NFV has no dedicated protocol
SDN reduces the cost of the network because expensive switches and routers are no
longer needed, while NFV increases scalability and agility and speeds up time-to-market,
as it dynamically allots hardware capacity to the network functions needed at a
particular time.
Integrated Services
Integrated services refer to an architecture that ensures the Quality of Service (QoS) on a
network. Moreover, these services allow the receiver to watch and listen to video and sound
without any interruption. Each router in the network implements integrated services.
Furthermore, every application that requires some kind of guarantee must make an individual
reservation.
Reserved resources: The router must know the amount of its resources (link capacity and
router buffers) currently reserved for ongoing sessions.
Call setup: A session requiring QoS guarantees must first be able to reserve sufficient
resources at each network router on its source-to-destination path to ensure that its end-to-
end QoS requirement is met.
Call Setup Process
Traffic characterization and specification of the desired QoS:
Traffic specification: Characterizes/defines the traffic the sender will be sending
into the network.
Resource specification: Characterizes/defines the QoS being requested by the
connection.
Signaling for call setup: A session's Tspec and Rspec must be carried to the
intermediate routers, e.g. by RSVP.
Per-element call admission: Once a router receives the Tspec and Rspec for a session, it
determines whether or not it can admit the call.
Service Classes
Differentiated Services
Traffic conditioning – Ensures that the traffic enters the Differentiated Services
domain.
Packet classification – Categorizes the packet within a specific group using the traffic
descriptor.
Multiprotocol Label Switching (MPLS)
A number of different technologies were previously deployed with essentially identical goals,
such as Frame Relay and ATM.
Frame Relay and ATM use "labels" to move frames or cells throughout a network. The header of
the Frame Relay frame and the ATM cell refers to the virtual circuit that the frame or cell resides
on.
The similarity between Frame Relay, ATM, and MPLS is that at each hop throughout the
network, the “label” value in the header is changed.
This is different from the forwarding of IP packets. MPLS technologies have evolved with the
strengths and weaknesses of ATM in mind.
MPLS is designed to have lower overhead than ATM while providing connection-oriented
services for variable-length frames, and has replaced much use of ATM in the market.
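The label-swapping idea shared by Frame Relay, ATM and MPLS can be sketched with per-hop forwarding tables. The router names, label values and interface names below are invented purely for illustration:

```python
# Hypothetical per-router label tables: in_label -> (out_label, out_interface)
ROUTERS = {
    "R1": {17: (22, "to-R2")},
    "R2": {22: (39, "to-R3")},
    "R3": {39: (None, "egress")},   # label removed at the edge of the domain
}

def forward(path, label):
    """Follow a packet's label hop by hop, swapping the label at each router,
    as Frame Relay, ATM and MPLS all do."""
    hops = []
    for router in path:
        out_label, iface = ROUTERS[router][label]
        hops.append((router, label, out_label, iface))
        label = out_label
    return hops

for hop in forward(["R1", "R2", "R3"], 17):
    print(hop)  # (router, incoming label, outgoing label, outgoing interface)
```

Each router looks up only the incoming label, rewrites it, and forwards, which is what distinguishes this scheme from hop-by-hop IP destination lookup.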
TCP Congestion Control
Network congestion may occur when a sender floods the network with too many packets.
During congestion, the network cannot handle this traffic properly, which results in a
degraded quality of service (QoS). The typical symptoms of congestion are:
excessive packet delay,
packet loss, and
retransmission.
The common causes of congestion are:
insufficient link bandwidth,
legacy network devices,
poorly designed or configured network infrastructure, and
greedy network applications.
Greedy network applications or services, such as file sharing or video streaming over
UDP, that lack TCP flow or congestion control mechanisms can significantly
contribute to congestion as well.
The function of TCP is to control the transfer of data so that it is reliable. The main TCP features
are:
Connection management,
Connection management includes connection initialization (a 3-way handshake) and its
termination. The source and destination TCP ports are used for creating multiple virtual
connections.
Reliability,
A reliable point-to-point transfer between hosts is achieved with sequence numbers (used
for segment reordering) and retransmission. A TCP segment is retransmitted after a
timeout, when the acknowledgement (ACK) is not received by the sender, or when
three duplicate ACKs are received (called fast retransmission, since the sender does
not wait for the timeout to expire).
Flow control
Flow control ensures that a sender does not overflow a receiving host. Inside each ACK
message, the receiver informs the sender how much data it can send without waiting for
another ACK. This value is called the receive window (rwnd) and is expressed in bytes.
Thanks to the sliding window mechanism, a receiving host dynamically adjusts the amount
of data that can be received from the sender.
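The sender-side arithmetic of the sliding window can be sketched as follows (a simplified model; the variable names follow the usual LastByteSent/LastByteAcked convention, and the function name is our own):

```python
def usable_window(rwnd, last_byte_sent, last_byte_acked):
    """Bytes the sender may still transmit without overflowing the receiver.

    The receiver advertises rwnd in every ACK; the sender keeps
    LastByteSent - LastByteAcked <= rwnd.
    """
    in_flight = last_byte_sent - last_byte_acked
    return max(0, rwnd - in_flight)

# Receiver advertises a 64 KB window; 24 KB are sent but unacknowledged:
print(usable_window(65_536, 24_576, 0))  # 40960 bytes may still be sent
```

As ACKs arrive, last_byte_acked advances and the usable window opens again, which is what makes the window "slide".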
Congestion control.
Congestion control ensures that the sender does not overflow the network. Compared with
flow control, where the mechanism ensures that the source host does not overwhelm the
destination host, congestion control is more global: it ensures that the routers along the
path are not overwhelmed.
TCP uses end-to-end congestion control, which addresses two fundamental questions:
How do senders react to congestion when it occurs, as signified by dropped packets?
How does the sender determine the available capacity for a flow at any point in time?
Fast Recovery is the most recent of the improvements to TCP discussed here. With Fast
Retransmit alone, the congestion window is dropped to 1 each time network congestion is
detected, so it takes some time to return to high link utilization. Fast Recovery alleviates
this problem by skipping the slow-start phase: slow start is used only at the beginning of a
connection and whenever an RTO period expires.
The reason for not performing slow start after receiving three duplicate ACKs is that duplicate
ACKs tell the sending side more than that a packet has been lost. Since the receiving side can
generate a duplicate ACK only when it receives an out-of-order packet, a duplicate ACK
shows the sender that one packet has left the network. Thus, the sender does not need to
drastically decrease cwnd to 1 and restart slow start. Instead, the sender need only halve the
current cwnd, and then increase cwnd by 1 for each further duplicate ACK it receives.
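The two reactions described above (timeout versus three duplicate ACKs) can be sketched as a small state update in the style of TCP Reno. This is a simplified model: cwnd and ssthresh are in segments, the function name is our own, and real implementations add details such as cwnd inflation during recovery:

```python
def react_to_loss(cwnd, event):
    """Sketch of a Reno-style reaction to a loss signal.

    event: 'timeout'  -> severe congestion: restart slow start from cwnd = 1
           'dup_acks' -> mild congestion: fast recovery halves cwnd instead
    Returns the new (cwnd, ssthresh) pair, in segments.
    """
    ssthresh = max(cwnd // 2, 2)     # remember half the window at the loss
    if event == "timeout":
        return 1, ssthresh            # slow start again from the beginning
    return ssthresh, ssthresh         # fast recovery: halve, skip slow start

print(react_to_loss(32, "timeout"))   # (1, 16)
print(react_to_loss(32, "dup_acks"))  # (16, 16)
```

The contrast in the two return values is the whole point of Fast Recovery: a halved window recovers high utilization far faster than restarting from a single segment.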