
Unit 5 The Transport Layer (7 hours)

Agenda:
 Functions of Transport Layer
 Elements of Transport Protocols: Addressing, Establishing and Releasing Connection,
Flow Control & Buffering, Error Control, Multiplexing & Demultiplexing, Crash
Recovery
 User Datagram Protocol(UDP): User Datagram, UDP Operations, Uses of UDP, RPC
 Principles of Reliable Data Transfer: Building a Reliable Data Transfer Protocol,
Pipeline Reliable Data Transfer Protocol, Go Back-N(GBN), Selective Repeat(SR)
 Transmission Control Protocol(TCP): TCP Services, TCP features, TCP Segment
Header
 Principle of Congestion Control

Functions of Transport Layer


 The primary duties of the transport layer are to transport and regulate the flow of
information from a source to a destination, reliably and accurately. End-to-end control and
reliability are provided by sliding windows, sequence numbers, and acknowledgements.
 To understand reliability and flow control, think of someone who studies a foreign language
for one year and then visits the country where the language is used. In conversation, words
must be repeated for reliability, and people must speak slowly so that the conversation is
understood; this relates to flow control.
 The transport layer establishes a logical connection between two end points of a network.
Protocols in the transport layer segment data sent by upper-layer applications and reassemble
the segments into a single transport-layer data stream. This transport-layer data stream
provides end-to-end transport services.
 Functions of transport layer are:
o Error handling
o Flow control
o Multiplexing and Demultiplexing
o Connection set-up and release
o Segmentation and reassembly (with ACK)
o Addressing (Port addressing)

Elements of Transport Protocols

 The transport service is implemented by a transport protocol used between the two
transport entities. In some ways, transport protocols resemble the data link protocols.

 At the data link layer, two routers communicate directly via a physical channel, whereas
at the transport layer, this physical channel is replaced by the entire subnet.
Figure 6-7. (a) Environment of the data link layer. (b) Environment of the transport layer.

 The elements of Transport protocols are:

a. Addressing

b. Connection Establishment

c. Connection Release

d. Flow Control and Buffering

e. Multiplexing and De-multiplexing

f. Crash Recovery

a. Addressing

 When an application (e.g., a user) process wishes to set up a connection to a remote
application process, it must specify which one to connect to. The method normally used is
to define transport addresses to which processes can listen for connection requests.

 In the Internet, these end points are called ports. We will use the generic term TSAP,
(Transport Service Access Point). The analogous end points in the network layer (i.e.,
network layer addresses) are then called NSAP (Network Service Access Point). IP
addresses are examples of NSAPs.

 Figure 6-8 illustrates the relationship between the NSAP, TSAP and transport connection.
Application processes, both clients and servers, can attach themselves to a TSAP to establish
a connection to a remote TSAP. These connections run through NSAPs on each host, as
shown.
Figure 6-8. TSAPs, NSAPs, and transport connections.

b. Connection Establishment

 Establishing a connection sounds easy, but it is actually surprisingly tricky. At first glance,


it would seem sufficient for one transport entity to just send a CONNECTION REQUEST
TPDU to the destination and wait for a CONNECTION ACCEPTED reply.

 Suppose, for example, that connections are established by having host 1 send a
CONNECTION REQUEST TPDU containing the proposed initial sequence number and
destination port number to a remote peer, host 2. The receiver, host 2, then acknowledges
this request by sending a CONNECTION ACCEPTED TPDU back. If the CONNECTION
REQUEST TPDU is lost but a delayed duplicate CONNECTION REQUEST suddenly
shows up at host 2, the connection will be established incorrectly.

Figure 6-11. Three protocol scenarios for establishing a connection using a three-way
handshake. CR denotes CONNECTION REQUEST.

(a) Normal operation. (b) Old duplicate CONNECTION REQUEST appearing out of nowhere.

(c) Duplicate CONNECTION REQUEST and duplicate ACK.


c. Connection Release

 Releasing a connection is easier than establishing one. Nevertheless, there are more pitfalls
than one might expect. As we mentioned earlier, there are two styles of terminating a
connection: asymmetric release and symmetric release.

o Asymmetric release is the way the telephone system works: when one party hangs
up, the connection is broken.

o Symmetric release treats the connection as two separate unidirectional connections
and requires each one to be released separately.

Figure 6-12. Abrupt disconnection with loss of data.


Figure 6-14. Four protocol scenarios for releasing a connection.

(a) Normal case of three-way handshake.

(b) Final ACK lost.

(c) Response lost.

(d) Response lost and subsequent DRs lost.

d. Flow Control and Buffering

 Having examined connection establishment and release in some detail, let us now look at
how connections are managed while they are in use. One of the key issues has come up
before: flow control.

 In some ways the flow control problem in the transport layer is the same as in the data link
layer, but in other ways it is different. The basic similarity is that in both layers a sliding
window or other scheme is needed on each connection to keep a fast transmitter from
overrunning a slow receiver.
 In the data link layer, the sending side must buffer outgoing frames because they might
have to be retransmitted. If the subnet provides datagram service, the sending transport
entity must also buffer, and for the same reason. If the receiver knows that the sender buffers
all TPDUs until they are acknowledged, the receiver may or may not dedicate specific
buffers to specific connections, as it sees fit. The receiver may, for example, maintain a
single buffer pool shared by all connections. When a TPDU comes in, an attempt is made
to dynamically acquire a new buffer. If one is available, the TPDU is accepted; otherwise,
it is discarded. Since the sender is prepared to retransmit TPDUs lost by the subnet, no harm
is done by having the receiver drop TPDUs, although some resources are wasted. The sender
just keeps trying until it gets an acknowledgement.

 Even if the receiver has agreed to do the buffering, there still remains the question of the
buffer size. If most TPDUs are nearly the same size, it is natural to organize the buffers
as a pool of identically-sized buffers, with one TPDU per buffer, as in Fig. 6-15(a).
However, if there is wide variation in TPDU size, from a few characters typed at a
terminal to thousands of characters from file transfers, a pool of fixed-sized buffers
presents problems. If the buffer size is chosen equal to the largest possible TPDU, space
will be wasted whenever a short TPDU arrives. If the buffer size is chosen less than the
maximum TPDU size, multiple buffers will be needed for long TPDUs, with the
attendant complexity.

Figure 6-15. (a) Chained fixed-size buffers. (b) Chained variable-sized buffers.

(c) One large circular buffer per connection.

 Another approach to the buffer size problem is to use variable-sized buffers, as in Fig. 6-15(b).
The advantage here is better memory utilization, at the price of more complicated
buffer management. A third possibility is to dedicate a single large circular buffer per
connection, as in Fig. 6-15(c). This system also makes good use of memory, provided that
all connections are heavily loaded, but is poor if some connections are lightly loaded.
e. Multiplexing

 Multiplexing several conversations onto connections, virtual circuits, and physical links
plays a role in several layers of the network architecture. In the transport layer the need for
multiplexing can arise in a number of ways.

 For example, if only one network address is available on a host, all transport connections on
that machine have to use it. When a TPDU comes in, some way is needed to tell which
process to give it to. This situation, called upward multiplexing, is shown in Fig. 6-17(a).
In this figure, four distinct transport connections all use the same network connection (e.g.,
IP address) to the remote host.

Figure 6-17. (a) Upward multiplexing. (b) Downward multiplexing.

 Multiplexing can also be useful in the transport layer for another reason. Suppose, for
example, that a subnet uses virtual circuits internally and imposes a maximum data rate on
each one. If a user needs more bandwidth than one virtual circuit can provide, a way out is
to open multiple network connections and distribute the traffic among them on a round-robin
basis, as indicated in Fig. 6-17(b).
 Downward multiplexing: With k network connections open, the effective bandwidth is
increased by a factor of k. A common example of downward multiplexing occurs with home
users who have an ISDN line. This line provides for two separate connections of 64 kbps
each. Using both of them to call an Internet provider and dividing the traffic over both lines
makes it possible to achieve an effective bandwidth of 128 kbps.

f. Crash Recovery

 If hosts and routers are subject to crashes, recovery from these crashes becomes an issue. If
the transport entity is entirely within the hosts, recovery from network and router crashes is
straightforward. If the network layer provides datagram service, the transport entities expect
lost TPDUs all the time and know how to cope with them. If the network layer provides
connection-oriented service, then loss of a virtual circuit is handled by establishing a new
one and then probing the remote transport entity to ask it which TPDUs it has received and
which ones it has not received. The latter ones can be retransmitted.
 A more troublesome problem is how to recover from host crashes. In particular, it may be
desirable for clients to be able to continue working when servers crash and then quickly
reboot. To illustrate the difficulty, let us assume that one host, the client, is sending a long
file to another host, the file server, using a simple stop-and-wait protocol.
 The transport layer on the server simply passes the incoming TPDUs to the transport user,
one by one. Partway through the transmission, the server crashes. When it comes back up,
its tables are reinitialized, so it no longer knows precisely where it was.

 Three events are possible at the server: sending an acknowledgement (A), writing to the
output process (W), and crashing (C). The three events can occur in six different orderings:
AC(W), AWC, C(AW), C(WA), WAC, and WC(A), where the parentheses are used to indicate
that neither A nor W can follow C (i.e., once it has crashed, it has crashed). Figure 6-18
shows all eight combinations of client and server strategy and the valid event sequences for
each one. Notice that for each strategy there is some sequence of events that causes the
protocol to fail. For example, if the client always retransmits, the AWC event will generate
an undetected duplicate, even though the other two events work properly.

Figure 6-18. Different combinations of client and server strategy.

Transport Layer Services


 Unreliable, unordered, unicast or multicast delivery: UDP
 Reliable, in-order unicast delivery: TCP

Connectionless Transport: UDP (User Datagram Protocol)


 UDP is the connectionless transport protocol in the TCP/IP protocol stack. UDP is a simple
protocol that exchanges datagrams without guaranteed delivery. It relies on higher-layer
protocols to handle errors and retransmit data.
 UDP does not use windowing or ACKs; reliability, if needed, is provided by application-layer
protocols. UDP is designed for applications that do not need to put sequences of segments
together.
 The following application layer protocols use UDP: TFTP, SNMP, DHCP, and DNS.
UDP Segment Structure:

Source port – number of the port that sends data.


Destination port – Number of the port that receives data.
Length – total number of bytes in the header and data.
Checksum – calculated checksum of the header and data field.
Data – upper-layer protocol data.
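
As a small illustration (not from the original notes), the 8-byte UDP header can be packed with Python's struct module; the ports and payload below are hypothetical examples.

```python
import struct

def build_udp_header(src_port: int, dst_port: int, payload: bytes, checksum: int = 0) -> bytes:
    """Pack the 8-byte UDP header: source port, destination port,
    length (header + data), and checksum, all 16-bit big-endian fields."""
    length = 8 + len(payload)     # the header itself is always 8 bytes
    # A checksum of 0 means "not computed" in UDP over IPv4.
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

# Hypothetical example: a segment addressed to a DNS server (port 53)
header = build_udp_header(54321, 53, b"example query payload")
print(header.hex())
```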
Well-known UDP Ports
 Some network services use different transport layer protocols and their ports. The network
services that uses UDP Ports and their Port Values are given below:
• DNS (Port 53)
• DHCP (Port 67)
• TFTP (Port 69)
• SNMP (Port 161)
• SNMP Trap (Port 162)
• RIP (Port 520)
 Port numbers can be any value between 0 and 65535, and the length field gives the total
length of the UDP header plus the data.
 Now you might be wondering why an application developer would ever choose to build an
application over UDP rather than over TCP. Isn’t TCP always preferable, since TCP provides
a reliable data transfer service, while UDP does not? The answer is no, as many applications
are better suited for UDP for the following reasons:
 Finer application-level control over what data is sent, and when. Under UDP, as soon
as an application process passes data to UDP, UDP will package the data inside a UDP
segment and immediately pass the segment to the network layer. TCP, on the other
hand, has a congestion-control mechanism that throttles the transport-layer TCP sender
when one or more links between the source and destination hosts become excessively
congested. TCP will also continue to resend a segment until the receipt of the segment
has been acknowledged by the destination, regardless of how long reliable delivery takes.
Since real-time applications often require a minimum sending rate, do not want to
overly delay segment transmission, and can tolerate some data loss, TCP’s service
model is not particularly well matched to these applications’ needs.
 No connection establishment. TCP uses a three-way handshake before it starts to
transfer data. UDP just blasts away without any formal preliminaries. Thus UDP does not
introduce any delay to establish a connection. This is probably the principal reason why
DNS runs over UDP rather than TCP—DNS would be much slower if it ran over TCP.

 No connection state. TCP maintains connection state in the end systems. This
connection state includes receive and send buffers, congestion-control parameters, and
sequence and acknowledgment number parameters. UDP, on the other hand, does not
maintain connection state and does not track any of these parameters. For this reason, a
server devoted to a particular application can typically support many more active clients
when the application runs over UDP rather than TCP.
 Small packet header overhead. The TCP segment has 20 bytes of header overhead
in every segment, whereas UDP has only 8 bytes of overhead.
Uses of UDP:
The following lists some uses of the UDP protocol:
 UDP is suitable for a process that requires simple request-response communication with little
concern for flow and error control. It is not usually used for a process such as FTP that needs
to send bulk data.
 UDP is suitable for a process with internal flow and error control mechanisms. For example,
the Trivial File Transfer Protocol (TFTP) process includes flow and error control. It can easily
use UDP.
 UDP is a suitable transport protocol for multicasting. Multicasting capability is embedded in
the UDP software but not in the TCP software.
 UDP is used for management processes such as SNMP.
 UDP is used for some route updating protocols such as Routing Information Protocol (RIP)

UDP Checksum
 The UDP checksum provides for error detection. That is, the checksum is used to determine
whether bits within the UDP segment have been altered (for example, by noise in the links or
while stored in a router) as it moves from source to destination.
 UDP at the sender side performs the 1s complement of the sum of all the 16-bit words in
the segment, with any overflow encountered during the sum being wrapped around.
This result is put in the checksum field of the UDP segment.
 Here we give a simple example of the checksum calculation. As an example, suppose that
we have the following three 16-bit words:

0110011001100000
0101010101010101
1000111100001100

The sum of first two of these 16-bit words is

0110011001100000
0101010101010101
1011101110110101
Adding the third word to the above sum gives

1011101110110101
1000111100001100
0100101011000010

 Note that this last addition had overflow, which was wrapped around. The 1s complement
is obtained by converting all the 0s to 1s and converting all the 1s to 0s. Thus the 1s
complement of the sum 0100101011000010 is 1011010100111101, which becomes the
checksum. At the receiver, all four 16-bit words are added, including the checksum. If no
errors are introduced into the packet, then clearly the sum at the receiver will be
1111111111111111. If one of the bits is a 0, then we know that errors have been introduced
into the packet.
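
The following sketch reproduces this calculation in Python; the folding of overflow (end-around carry) and the final complement match the worked example above.

```python
def ones_complement_checksum(words):
    """Sum 16-bit words with end-around carry, then take the 1s complement."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wrap any overflow around
    return ~total & 0xFFFF

# The three 16-bit words from the example above
words = [0b0110011001100000, 0b0101010101010101, 0b1000111100001100]
checksum = ones_complement_checksum(words)
print(f"{checksum:016b}")        # 1011010100111101, as derived in the text

# Receiver check: summing all words plus the checksum yields all 1s,
# so the folded complement is 0 when no errors were introduced.
assert ones_complement_checksum(words + [checksum]) == 0
```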
Connection–Oriented Transport: TCP (Transmission Control Protocol)

 TCP is a connection-oriented transport layer protocol that provides reliable full-duplex
data transmission. TCP is part of the TCP/IP protocol stack. In a connection-oriented
environment, a connection is established between both ends before the transfer of
information can begin. TCP breaks messages into segments, reassembles them at the
destination, and resends anything that is not received. TCP supplies a virtual circuit between
end-user applications.
 The following application layer protocols use TCP: FTP, HTTP, SMTP, and Telnet.
 It is a real protocol which runs in the transport layer and offers reliable
connection-oriented service between source and destination.
 It acts as if it is connecting two end points together, so that it is a point-to-point
connection between two parties.
 It doesn't support multicasting (because it is connection oriented).
 The data unit in TCP is called a segment.
 Segments are obtained after breaking big files into small pieces.
 It assists in flow control.
 It provides a buffer for each connection.

TCP segment Structure

Source port (16)                          Destination port (16)
Sequence number (32)
Acknowledgement number (32)
Header length (4) | Reserved (6) | Code bits (6) | Window size (16)
Checksum (16)                             Urgent pointer (16)
Options (0 or 32 bits, if any)
Data

Source port – number of the port that sends data.
Destination port – number of the port that receives data.
Sequence number – number used to ensure the data arrives in the correct order.
Acknowledgement number – next expected TCP octet (the number of the next byte a
party expects to receive).
Header length – length of the TCP header.
Reserved – reserved for future use.
Code bits – control functions such as setup and termination of a session:
U A P R S F
U – urgent pointer valid
A – acknowledgement valid (acknowledges received data)
P – data push is valid
R – reset valid
S – synchronization valid (initiates a connection)
F – final valid (terminates a connection)
Window size – number of octets (bytes) that a receiver is willing to accept.
Checksum – calculated checksum of the header and data fields.
Urgent pointer – indicates the end of the urgent data.
Options – used when sender and receiver negotiate the maximum segment size.
Data – upper-layer protocol data.
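
As an illustrative sketch (assumed helper names, not part of the notes), the fixed 20-byte header above can be unpacked field by field:

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Unpack the fixed 20-byte TCP header described above (options ignored)."""
    (src, dst, seq, ack, off_reserved, flags,
     window, checksum, urgent) = struct.unpack("!HHIIBBHHH", segment[:20])
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,
        "ack": ack,
        "header_len": (off_reserved >> 4) * 4,   # field counts 32-bit words
        "flags": {name: bool(flags & bit) for name, bit in
                  [("FIN", 0x01), ("SYN", 0x02), ("RST", 0x04),
                   ("PSH", 0x08), ("ACK", 0x10), ("URG", 0x20)]},
        "window": window,
        "checksum": checksum,
        "urgent_ptr": urgent,
    }
```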

 There are two types of Internet Protocol (IP) traffic: TCP (Transmission Control
Protocol) and UDP (User Datagram Protocol). TCP is connection-oriented: once a
connection is established, data can be sent bidirectionally. UDP is a simpler, connectionless
Internet protocol; multiple messages are sent as chunks of packets using UDP.

Comparison chart between TCP and UDP


TCP: It is a connection-oriented protocol.
UDP: It is a connectionless protocol.

TCP: Reads data as a stream of bytes; the message is transmitted in segments.
UDP: Messages are self-contained packets, sent one by one; integrity is checked on arrival.

TCP: Messages make their way across the Internet from one computer to another over an
established connection.
UDP: Not connection-based, so one program can send lots of packets to another without setup.

TCP: Rearranges data packets into the correct order.
UDP: Packets have no fixed order because all packets are independent of each other.

TCP: Slower.
UDP: Faster, as error recovery is not attempted.

TCP: Header size is 20 bytes.
UDP: Header size is 8 bytes.

TCP: Heavy-weight; needs three packets to set up a socket connection before any user data
can be sent.
UDP: Lightweight; no connection tracking, no ordering of messages, etc.

TCP: Does error checking and also performs error recovery.
UDP: Performs error checking but discards erroneous packets.

TCP: Uses acknowledgement segments.
UDP: No acknowledgement segments.

TCP: Uses a handshake (SYN, SYN-ACK, ACK).
UDP: No handshake (hence connectionless).

TCP: Reliable, as it guarantees delivery of data to the destination.
UDP: Delivery of data to the destination cannot be guaranteed.

TCP: Offers extensive error-checking mechanisms, providing flow control and
acknowledgment of data.
UDP: Has just a single error-checking mechanism: the checksum.
Roundtrip Time (RTT)

 RTT, also called round-trip delay, is the time required for a signal pulse or packet to travel
from a specific source to a specific destination and back again.

 Obviously, the sample RTT values will fluctuate from segment to segment due to congestion
in the routers and the varying loads on the end systems.
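
TCP's standard remedy (specified in RFC 6298, and not detailed in these notes) is to smooth the fluctuating samples with an exponentially weighted moving average and to derive the retransmission timeout from both the smoothed estimate and the deviation; a minimal sketch:

```python
def update_rtt(estimated_rtt, dev_rtt, sample_rtt, alpha=0.125, beta=0.25):
    """One EWMA update per (non-retransmitted) RTT sample, per RFC 6298."""
    # Deviation is updated against the current estimate, then the estimate moves.
    dev_rtt = (1 - beta) * dev_rtt + beta * abs(sample_rtt - estimated_rtt)
    estimated_rtt = (1 - alpha) * estimated_rtt + alpha * sample_rtt
    timeout = estimated_rtt + 4 * dev_rtt        # retransmission timeout (RTO)
    return estimated_rtt, dev_rtt, timeout

# Hypothetical samples, in milliseconds
est, dev = 100.0, 10.0
for sample in (120.0, 95.0, 180.0):
    est, dev, rto = update_rtt(est, dev, sample)
```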

Multiplexing and De-multiplexing

Fig.: Multiplexing and De-multiplexing


 Multiplexing and de-multiplexing extend the host-to-host delivery service provided by the
network layer to a process-to-process delivery service for the applications running on the hosts.
 Consider how a receiving host directs an incoming transport-layer segment to the appropriate
socket. Each transport-layer segment has a set of fields for this purpose. At the receiving
end, the transport layer examines these fields to identify the receiving socket and then
directs the segment to that socket. The job of delivering the data in a transport-layer
segment to the correct socket is called de-multiplexing. The job of gathering data chunks at
the source host from the different sockets, encapsulating each data chunk with header
information to create segments, and passing the segments to the network layer is called
multiplexing.
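
A minimal sketch of UDP de-multiplexing using the standard sockets API; the ports here are arbitrary examples. For UDP, the OS selects the socket purely by destination IP address and destination port.

```python
import socket

sock_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock_a.bind(("0.0.0.0", 9000))    # segments with destination port 9000 go here

sock_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock_b.bind(("0.0.0.0", 9001))    # segments with destination port 9001 go here

# Blocks until a segment addressed to port 9000 is de-multiplexed to sock_a.
data, (src_ip, src_port) = sock_a.recvfrom(2048)
```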
Flow Control
A TCP connection sets aside a receive buffer for the connection. When the TCP connection
receives bytes that are correct and in sequence, it places the data in the receive buffer. The
associated application process will read data from this buffer, but not necessarily at the instant
the data arrives. Indeed, the receiving application may be busy with some other task and may not
attempt to read the data until long after it has arrived. If the application is relatively slow at
reading data, the sender can very easily overflow the receive buffer by sending too much data
too quickly.
TCP provides a flow-control service to its applications to eliminate the possibility of the sender
overflowing the receive buffer. Flow control is thus a speed-matching service: matching the rate
at which the sender is sending against the rate at which the receiving application is reading.

Because TCP is not permitted to overflow the allocated buffer, we must have

LastByteRcvd − LastByteRead ≤ RcvBuffer

The receive window, denoted rwnd, is set to the amount of spare room in the buffer:

rwnd = RcvBuffer − [LastByteRcvd − LastByteRead]

Because the spare room changes with time, rwnd is dynamic. The variable rwnd is illustrated in Figure 3.38.
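
A minimal sketch of this bookkeeping, with hypothetical byte counts:

```python
def receive_window(rcv_buffer: int, last_byte_rcvd: int, last_byte_read: int) -> int:
    """Spare room in the receive buffer, advertised to the sender as rwnd."""
    used = last_byte_rcvd - last_byte_read   # bytes buffered but not yet read
    assert used <= rcv_buffer                # TCP must not overflow the buffer
    return rcv_buffer - used

# Hypothetical numbers: 64 KB buffer, 20 KB received, 12 KB already read
print(receive_window(65536, 20480, 12288))   # 57344 bytes of spare room
```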
Reliable Data Transfer (RDT)
 RDT is the mechanism whereby no transferred data bits are corrupted (flipped from 0 to 1, or
vice versa) or lost, and all are delivered in the order in which they were sent.
 TCP creates a RDT service on top of IP's unreliable best-effort service.
 TCP's RDT service ensures that the data stream that a process reads out of its TCP receive
buffer is uncorrupted, without gaps, without duplication, and in sequence; that is, the byte
stream is exactly the same byte stream that was sent by the end system on the other side of the
connection.

Building a RDT protocol


1) Reliable Data Transfer over a Perfectly Reliable Channel: rdt1.0
 We first consider the simplest case, in which the underlying channel is completely
reliable.
 The sender and receiver sides of the protocol are each described by a finite-state machine (FSM).
In this simple protocol, there is no difference between a unit of data and a packet.
Also, all packet flow is from the sender to the receiver; with a perfectly reliable channel
there is no need for the receiver side to provide any feedback to the sender, since
nothing can go wrong! Note that we have also assumed that the receiver is able to
receive data as fast as the sender happens to send data. Thus, there is no need for the
receiver to ask the sender to slow down!
2) Reliable Data Transfer over a Channel with Bit Errors: rdt2.0
 A more realistic model of the underlying channel is one in which bits in a packet may be
corrupted.
 When the receiver receives a packet, it must tell the sender whether the packet was
received error-free or not, using:
Positive Acknowledgement (ACK)
Negative Acknowledgement (NAK)
 If a NAK is received, the sender retransmits the packet.
 Such protocols are called ARQ (Automatic Repeat reQuest) protocols.
 Fundamentally, three additional protocol capabilities are required in ARQ protocols to
handle the presence of bit errors:
 Error detection
 Internet checksum field
 Error-detection and correction techniques
 These require extra bits (beyond the bits of the original data to be transferred) to be
sent from the sender to the receiver; these bits are gathered into the packet's checksum field.
 Receiver feedback
 The receiver provides feedback:
 Positive (ACK) – value 1
 Negative (NAK) – value 0
 Retransmission
 A packet that is received in error at the receiver is retransmitted by the sender.
Protocols that behave this way are known as stop-and-wait protocols.

A tricky case occurs if the ACK or NAK itself is corrupted, i.e., the sender cannot make
sense of the feedback from the receiver.
Consider three possibilities for handling corrupted ACKs or NAKs.
 A first possibility is for the sender to ask the receiver to repeat its last feedback; but that
request, or the repeated reply, may itself be garbled, so the ambiguity remains.
 A second alternative is to add enough checksum bits to allow the sender not only to
detect, but also to recover from, bit errors. This solves the immediate problem for a channel
that can corrupt packets but not lose them.
 A third approach is for the sender simply to resend the current data packet when it
receives a garbled ACK or NAK packet. This introduces duplicate packets into the
sender-to-receiver channel. The receiver doesn't know whether the ACK or NAK it last
sent was received correctly at the sender; thus, it cannot know whether an arriving packet
contains new data or is a retransmission.
 A solution to this problem is to add a new field, the "sequence number", to the data packet.
 For this stop-and-wait protocol, a 1-bit sequence number is enough.
3) Reliable Data Transfer over a Lossy Channel with Bit Errors: rdt3.0
 Suppose now that in addition to corrupting bits, the underlying channel can lose packets as
well.
 The sender must detect packet loss so that it can retransmit the lost packet.
 The sender must clearly wait at least as long as a round-trip delay between the sender and
receiver.
 If an ACK is not received within this time, the packet is retransmitted.
 If a packet experiences a particularly large delay, the sender may retransmit the packet
even though neither the data packet nor its ACK has been lost.
 This introduces the possibility of duplicate data packets in the sender-to-receiver channel.
 In every one of these cases, all the sender can do is retransmit.
 Implementing a time-based retransmission mechanism requires a countdown timer
that can interrupt the sender after a given amount of time has expired.
 The sender will thus need to be able to
 Start the timer each time a packet (either a first-time packet or a retransmission)
is sent.
 Respond to a timer interrupt (taking appropriate actions).
 Stop the timer.
rdt3.0 thus combines checksums, sequence numbers, timers, and positive and negative acknowledgements.

Because packet sequence numbers alternate between 0 and 1, protocol rdt3.0 is sometimes known as
the alternating-bit protocol.
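
A sketch of this rdt3.0 sender logic with a 1-bit sequence number; make_packet and udt_send are hypothetical placeholders for checksumming and the unreliable channel.

```python
import threading

class AlternatingBitSender:
    def __init__(self, timeout: float = 1.0):
        self.seq = 0                 # 1-bit sequence number: alternates 0, 1, 0, ...
        self.timeout = timeout
        self.timer = None
        self.last_packet = None

    def send(self, data: bytes):
        self.last_packet = make_packet(self.seq, data)  # hypothetical: adds checksum
        udt_send(self.last_packet)                      # hypothetical unreliable send
        self._start_timer()

    def on_ack(self, acked_seq: int, corrupt: bool):
        if not corrupt and acked_seq == self.seq:
            self.timer.cancel()      # packet delivered: stop the countdown timer
            self.seq ^= 1            # flip the sequence bit for the next packet

    def _start_timer(self):
        # Start (or restart) the countdown timer; on expiry, retransmit.
        self.timer = threading.Timer(self.timeout, self._on_timeout)
        self.timer.start()

    def _on_timeout(self):
        udt_send(self.last_packet)   # retransmit the unacknowledged packet
        self._start_timer()
```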
Reliable Data Transfer Protocol
1. Pipelined
2. Go-Back-N(GBN)
3. Selective Repeat (SR)
Pipelined Reliable Data Transfer Protocol:

Fig: stop-and-wait vs. pipelined protocol


 Instead of sending a single packet in stop-and-wait manner, the sender is allowed to send
multiple packets without waiting for acknowledgements, as illustrated in Fig. (b). The figure
shows that if the sender is allowed to transmit three packets before having to wait for an
acknowledgement, the utilization of the sender is essentially tripled. Since the many
in-transit sender-to-receiver packets can be visualized as filling a pipeline, this
technique is known as pipelining.
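
A standard back-of-the-envelope calculation (the numbers here are illustrative assumptions, following the usual textbook example) shows why utilization triples:

```python
# Stop-and-wait sender utilization vs. pipelining with W packets in flight.
# Assumed numbers: 1,000-byte packets over a 1 Gbps link with a 30 ms RTT.
L = 8000          # packet size in bits
R = 1e9           # link rate in bits/sec
RTT = 0.030       # round-trip time in seconds

t_trans = L / R                              # 8 microseconds to push one packet out
u_stop_and_wait = t_trans / (RTT + t_trans)  # ~0.00027: sender idle 99.97% of the time
u_pipelined = 3 * t_trans / (RTT + t_trans)  # 3 packets in flight triples utilization
print(u_stop_and_wait, u_pipelined)
```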
Consequences of pipelined protocol
 Increment in the range of sequence numbers.
 Sender and receiver have to buffer more than one packet.
 Range of sequence numbers and the buffering requirements will depend on the manner in
which a data transfer protocol responds to lost, corrupted and overly delayed packets.
Go-Back-N (GBN):
 In a GBN protocol, the sender is allowed to transmit multiple packets (when available)
without waiting for an acknowledgement, but is constrained to have no more than some
maximum allowable number, N, of unacknowledged packets in the pipeline.
 Figure shows the sender's view of the range of sequence numbers in a GBN protocol. If we
define base to be the sequence number of the oldest unacknowledged packet and
nextseqnum to be the smallest unused sequence number (i.e., the sequence number
of the next packet to be sent), then four intervals in the range of sequence numbers can
be identified. Sequence numbers in the interval [0, base−1] correspond to packets that have
already been transmitted and acknowledged. The interval [base, nextseqnum−1]
corresponds to packets that have been sent but not yet acknowledged. Sequence numbers
in the interval [nextseqnum, base+N−1] can be used for packets that can be sent
immediately, should data arrive from the upper layer. Finally, sequence numbers greater
than or equal to base+N cannot be used until an unacknowledged packet currently in
the pipeline (specifically, the packet with sequence number base) has been acknowledged.
 As suggested by the figure, the range of permissible sequence numbers for transmitted but
not yet acknowledged packets can be viewed as a window of size N over the range of
sequence numbers. As the protocol operates, this window slides forward over the sequence
number space. For this reason, N is often referred to as the window size and the GBN
protocol itself as a sliding-window protocol.
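
A minimal sketch of the GBN sender's window bookkeeping follows; transmit and retransmit are hypothetical placeholders for the actual packet I/O.

```python
class GBNSender:
    def __init__(self, window_size: int):
        self.N = window_size
        self.base = 0            # oldest unacknowledged sequence number
        self.nextseqnum = 0      # smallest unused sequence number

    def can_send(self) -> bool:
        # Only sequence numbers in [base, base+N-1] may be in flight.
        return self.nextseqnum < self.base + self.N

    def send(self, packet) -> bool:
        if not self.can_send():
            return False         # window full: refuse data from the upper layer
        transmit(packet, seq=self.nextseqnum)   # hypothetical transmit routine
        self.nextseqnum += 1
        return True

    def on_ack(self, acknum: int):
        # GBN ACKs are cumulative: acknum acknowledges everything up to it.
        self.base = acknum + 1

    def on_timeout(self):
        # Go back N: resend every packet sent but not yet acknowledged.
        for seq in range(self.base, self.nextseqnum):
            retransmit(seq)                     # hypothetical retransmit routine
```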
Selective Repeat (SR):
 GBN itself suffers from performance problems. Many packets can be in the pipeline when
the window size and bandwidth-delay product are both large. A single packet error can
thus cause GBN to retransmit a large number of packets, many unnecessarily. As the
probability of channel error increases, the pipeline can become filled with these
unnecessary retransmissions.
 As the name suggests, selective-repeat protocols avoid unnecessary retransmissions by
having the sender retransmit only those packets that it suspects were received in error (i.e.,
were lost or corrupted) at the receiver. A window size of N is again used to limit the
number of outstanding, unacknowledged packets in the pipeline. However, unlike GBN,
the sender may have already received ACKs for some of the packets in the window.

 The SR receiver will acknowledge a correctly received packet whether or not it is in
order. Out-of-order packets are buffered until any missing packets (i.e., packets with
lower sequence numbers) are received, at which point a batch of packets can be delivered
in order to the upper layer.
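
A sketch of this receiver-side buffering, with send_ack and deliver as hypothetical placeholders:

```python
class SRReceiver:
    def __init__(self, window_size: int):
        self.N = window_size
        self.rcv_base = 0        # smallest sequence number not yet delivered
        self.buffer = {}         # out-of-order packets, keyed by sequence number

    def on_packet(self, seq: int, data):
        if self.rcv_base <= seq < self.rcv_base + self.N:
            send_ack(seq)        # hypothetical: SR ACKs each packet individually
            self.buffer[seq] = data
            # Deliver any now-contiguous run of packets, in order.
            while self.rcv_base in self.buffer:
                deliver(self.buffer.pop(self.rcv_base))   # hypothetical
                self.rcv_base += 1
        elif self.rcv_base - self.N <= seq < self.rcv_base:
            send_ack(seq)        # already delivered, but the earlier ACK may be lost
```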
Congestion
 Congestion in a network may occur if the load on the network (the number of packets sent
to the network) is greater than the capacity of the network (the number of packets a network
can handle). Congestion control refers to the mechanisms and techniques used to control
congestion and keep the load below the capacity.
 When too many packets are pumped into the system, congestion occurs, leading
to degradation of performance.
 Congestion tends to feed upon itself and grow worse.
 Congestion reflects a lack of balance among the various pieces of networking equipment.
 It is a global issue.

 When too many packets are present in a subnet or a part of a subnet, performance degrades.
This situation is called congestion. When the number of packets dumped into the subnet by the
hosts is within its carrying capacity, they are all delivered (except for a few that contain
transmission errors), and the number delivered is proportional to the number sent.
However, as traffic increases too far, the routers are no longer able to cope, and they begin
losing packets. At very high traffic, performance collapses completely and almost no
packets are delivered.
Causes of Congestion
 When there are more input lines than output lines (or only a single output line).
 When there is a slow router, i.e., if routers' CPUs are slow.
 If the router has no free buffers, i.e., insufficient memory to hold the queue of packets.
 If the components used in the subnet (links, routers, switches, etc.) have different
traffic-carrying and switching capacities, congestion occurs.
 If the bandwidth of a line is low, it can't carry large volumes of packets, causing congestion.
Hence, congestion cannot be eradicated, but it can be controlled.

Congestion Control Algorithms

 In general, we can divide congestion control mechanisms into two broad categories:
open-loop congestion control (prevention) and closed-loop congestion control (removal) as
shown in Figure

Open Loop Congestion Control:
 In open-loop congestion control, policies are applied to prevent congestion before it happens.
In these mechanisms, congestion control is handled by either the source or the destination.

Retransmission Policy
 Retransmission is sometimes unavoidable. If the sender feels that a sent packet is lost or
corrupted, the packet needs to be retransmitted. Retransmission in general may increase
congestion in the network. However, a good retransmission policy can prevent congestion. The
retransmission policy and the retransmission timers must be designed to optimize efficiency
and at the same time prevent congestion. For example, the retransmission policy used by TCP
is designed to prevent or alleviate congestion.
Window Policy
 The type of window at the sender may also affect congestion. The Selective Repeat window
is better than the Go-Back-N window for congestion control. In the Go-Back-N window, when
the timer for a packet times out, several packets may be resent, although some may have arrived
safe and sound at the receiver. This duplication may make the congestion worse. The Selective
Repeat window, on the other hand, tries to send the specific packets that have been lost or
corrupted.

Acknowledgment Policy:
 The acknowledgment policy imposed by the receiver may also affect congestion. If the receiver
does not acknowledge every packet it receives, it may slow down the sender and help prevent
congestion. Several approaches are used in this case. A receiver may send an acknowledgment
only if it has a packet to be sent or a special timer expires. A receiver may decide to
acknowledge only N packets at a time. We need to know that the acknowledgments are also
part of the load in a network. Sending fewer acknowledgments means imposing less load on the
network.

Discarding Policy:
 A good discarding policy by the routers may prevent congestion and at the same time may not
harm the integrity of the transmission. For example, in audio transmission, if the policy is to
discard less sensitive packets when congestion is likely to happen, the quality of sound is still
preserved and congestion is prevented or alleviated.

Admission Policy:
 An admission policy, which is a quality-of-service mechanism, can also prevent congestion in
virtual-circuit networks. Switches in a flow first check the resource requirement of a flow before
admitting it to the network. A router can deny establishing a virtual- circuit connection if there
is congestion in the network or if there is a possibility of future congestion.

Closed-Loop Congestion Control
 Closed-loop congestion control mechanisms try to alleviate congestion after it happens.
Several mechanisms have been used by different protocols.

Back-pressure:
 The technique of backpressure refers to a congestion control mechanism in which a congested
node stops receiving data from the immediate upstream node or nodes. This may cause the
upstream node or nodes to become congested, and they, in turn, reject data from their own
upstream nodes, and so on. Backpressure is a node-to-node congestion control that starts with
a node and propagates, in the opposite direction of data flow, to the source. The backpressure
technique can be applied only to virtual-circuit networks, in which each node knows the
upstream node from which a flow of data is coming.

 Node III in the figure has more input data than it can handle. It drops some packets in
its input buffer and informs node II to slow down. Node II, in turn, may become congested
because it is slowing down the output flow of data. If node II is congested, it informs node I
to slow down, which in turn may create congestion. If so, node I informs the source of the
data to slow down. This, in time, alleviates the congestion. Note that the pressure on node III
is moved backward to the source to remove the congestion. None of the virtual-circuit
networks we have studied use backpressure. It was, however, implemented in the first
virtual-circuit network, X.25.
 The technique cannot be implemented in a datagram network because in this type of
network, a node (router) does not have the slightest knowledge of the upstream router.

Choke Packet
 A choke packet is a packet sent by a node to the source to inform it of congestion. Note the
difference between the backpressure and choke-packet methods. In backpressure, the warning
is from one node to its upstream node, although the warning may eventually reach the source
station.
 In the choke packet method, the warning is from the router that has encountered
congestion to the source station directly. The intermediate nodes through which the packet
has travelled are not warned. We have seen an example of this type of control in ICMP. When
a router in the Internet is overwhelmed with datagrams, it may discard some of them; but it
informs the source host, using a source-quench ICMP message. The warning message goes
directly to the source station; the intermediate routers do not take any action.
 Figure shows the idea of a choke packet.

Implicit Signalling
 In implicit signalling, there is no communication between the congested node or nodes
and the source. The source guesses that there is congestion somewhere in the network
from other symptoms. For example, when a source sends several packets and there is no
acknowledgment for a while, one assumption is that the network is congested. The delay in
receiving an acknowledgment is interpreted as congestion in the network; the source should
slow down. We will see this type of signalling when we discuss TCP congestion control
later in the chapter.

Explicit Signalling
 The node that experiences congestion can explicitly send a signal to the source or
destination. The explicit signalling method, however, is different from the choke packet
method. In the choke packet method, a separate packet is used for this purpose; in the explicit
signalling method, the signal is included in the packets that carry data. Explicit signalling, as
we will see in Frame Relay congestion control, can occur in either the forward or the
backward direction.
 Backward Signalling A bit can be set in a packet moving in the direction opposite to the
congestion. This bit can warn the source that there is congestion and that it needs to slow
down to avoid the discarding of packets. Forward Signalling A bit can be set in a packet
moving in the direction of the congestion. This bit can warn the destination that there is
congestion. The receiver in this case can use policies, such as slowing down the
acknowledgments, to alleviate the congestion.

Traffic Shaping
 Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the
network. Two techniques can shape traffic:
1.Leaky bucket and
2. Token bucket.

Leaky Bucket
 If a bucket has a small hole at the bottom, the water leaks from the bucket at a constant rate
as long as there is water in the bucket. The rate at which the water leaks does not depend on
the rate at which the water is input to the bucket unless the bucket is empty. The input rate
can vary, but the output rate remains constant. Similarly, in networking, a technique called
leaky bucket can smooth out bursty traffic. Bursty chunks are stored in the bucket and sent
out at an average rate. Figure shows a leaky bucket and its effects.

 In the figure, we assume that the network has committed a bandwidth of 3 Mbps for a host.
The use of the leaky bucket shapes the input traffic to make it conform to this commitment.
 In Figure 24.19 the host sends a burst of data at a rate of 12 Mbps for 2 s, for a total of 24
Mbits of data. The host is silent for 5 s and then sends data at a rate of 2 Mbps for 3 s, for a
total of 6 Mbits of data. In all, the host has sent 30 Mbits of data in 10 s. The leaky bucket
smooths the traffic by sending out data at a rate of 3 Mbps during the same 10 s. Without
the leaky bucket, the beginning burst may have hurt the network by consuming more
bandwidth than is set aside for this host. We can also see that the leaky bucket may prevent
congestion.

Leaky Bucket Implementation

A simple leaky bucket implementation is shown in Figure 24.20. A FIFO queue holds the packets.
If the traffic consists of fixed-size packets (e.g., cells in ATM networks), the process removes a
fixed number of packets from the queue at each tick of the clock. If the traffic consists of variable-
length packets, the fixed output rate must be based on the number of bytes or bits.

The following is an algorithm for variable-length packets:
1. Initialize a counter to n at the tick of the clock.
2. If n is greater than the size of the packet, send the packet and decrement the counter by the
packet size. Repeat this step until n is smaller than the packet size.
3. Reset the counter and go to step 1.

A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the data rate. It
may drop the packets if the bucket is full.
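
A sketch of this variable-length algorithm, assuming a per-tick byte budget and a finite queue; send is a hypothetical output routine.

```python
from collections import deque

class LeakyBucket:
    def __init__(self, bytes_per_tick: int, queue_capacity: int = 100):
        self.n_max = bytes_per_tick        # bytes allowed out per clock tick
        self.capacity = queue_capacity
        self.queue = deque()               # FIFO holding the buffered packets

    def arrive(self, packet: bytes):
        if len(self.queue) < self.capacity:
            self.queue.append(packet)      # otherwise the packet is dropped

    def tick(self):
        n = self.n_max                     # step 1: initialize the counter to n
        # step 2: send packets while the next one fits in the remaining budget
        while self.queue and len(self.queue[0]) <= n:
            n -= len(self.queue[0])
            send(self.queue.popleft())     # hypothetical output routine
        # step 3: the counter is reset at the next tick
```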

2. Token Bucket

• In contrast to the LB, the Token Bucket (TB) algorithm allows the output rate to vary,
depending on the size of the burst.
• In the TB algorithm, the bucket holds tokens. To transmit a packet, the host must capture and
destroy one token.
• Tokens are generated by a clock at the rate of one token every t sec.
• Idle hosts can capture and save up tokens (up to the max. size of the bucket) in order to send
larger bursts later.

Fig.: Token bucket (a) before and (b) after packet transmission


Token bucket operation
• TB accumulates fixed-size tokens in a token bucket
• Transmits a packet (from the data buffer, if any are waiting, or an arriving packet) if the
tokens in the bucket add up to the packet size
• More tokens are periodically added to the bucket (one every t sec). If a token arrives when
the bucket is full, it is discarded

Token bucket properties


• Does not bound the peak rate of small bursts, because the bucket may contain enough tokens to
cover a complete burst
• Performance depends only on the sum of the data buffer size and the token bucket size
Token bucket – example
• 2 tokens of size 100 bytes added each second to the token bucket of capacity 500 bytes
– Avg. rate = 200 bytes/sec, burst size = 500 bytes
– Packets bigger than 500 bytes will never be sent
– Peak rate is unbounded – i.e., 500 bytes of burst can be transmitted arbitrarily fast
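
A sketch of a byte-counting token bucket using the example's parameters (200 bytes/sec average rate, 500-byte capacity); a real shaper would typically queue non-conforming packets rather than just reject them.

```python
import time

class TokenBucket:
    def __init__(self, rate: float = 200.0, capacity: float = 500.0):
        self.rate = rate              # tokens (bytes) added per second
        self.capacity = capacity      # maximum tokens the bucket can hold
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, packet_len: int) -> bool:
        now = time.monotonic()
        # Refill, discarding any tokens that would overflow the bucket.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_len <= self.tokens:
            self.tokens -= packet_len   # capture (destroy) tokens to transmit
            return True
        return False                    # not enough tokens: the packet must wait
```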

Leaky Bucket vs Token Bucket


• LB discards packets; TB does not. TB discards tokens.
• With TB, a packet can only be transmitted if there are enough tokens to cover its length in
bytes.
• LB sends packets at an average rate. TB allows for large bursts to be sent faster by speeding
up the output.
• TB allows saving up tokens (permissions) to send large bursts. LB does not allow saving.
