Unit - IV: Transport Layer
1. INTRODUCTION
The transport layer is the fourth layer of the OSI model and is the core of the Internet
model.
It responds to service requests from the session layer and issues service requests to
the network layer.
The transport layer provides transparent transfer of data between hosts.
It provides end-to-end control and information transfer with the quality of service
needed by the application program.
It is the first true end-to-end layer, implemented in all End Systems (ES).
Process-to-Process Communication
The Transport Layer is responsible for delivering data to the appropriate application
process on the host computers.
This involves multiplexing of data from different application processes, i.e. forming
data packets, and adding source and destination port numbers in the header of each
Transport Layer data packet.
Together with the source and destination IP addresses, the port numbers constitute a
network socket, i.e., an identifying address for the process-to-process
communication.
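As a rough illustration (not part of the original notes; the addresses and port numbers below are made up), a socket address can be modelled as an (IP address, port number) pair, and the pair of client and server socket addresses identifies one process-to-process communication:

```python
# A minimal sketch: a transport-layer "socket address" pairs an IP address
# with a 16-bit port number; the two socket addresses together identify the flow.
from collections import namedtuple

SocketAddress = namedtuple("SocketAddress", ["ip", "port"])

# Hypothetical endpoints, used only for illustration.
client = SocketAddress(ip="192.168.1.10", port=52000)   # ephemeral client port
server = SocketAddress(ip="203.0.113.5", port=13)        # well-known daytime port

# The pair of socket addresses identifies the process-to-process communication.
connection_id = (client, server)
print(connection_id)
```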
Flow Control
Flow Control is the process of managing the rate of data transmission between two
nodes to prevent a fast sender from overwhelming a slow receiver.
It provides a mechanism for the receiver to control the transmission speed, so that the
receiving node is not overwhelmed with data from the transmitting node.
Error Control
Error control at the transport layer is responsible for
1. Detecting and discarding corrupted packets.
2. Keeping track of lost and discarded packets and resending them.
3. Recognizing duplicate packets and discarding them.
4. Buffering out-of-order packets until the missing packets arrive.
Error Control involves Error Detection and Error Correction
Congestion Control
Congestion in a network may occur if the load on the network (the number of
packets sent to the network) is greater than the capacity of the network (the number
of packets a network can handle).
Congestion control refers to the mechanisms and techniques that control the
congestion and keep the load below the capacity.
Congestion control refers to techniques and mechanisms that can either prevent
congestion before it happens or remove congestion after it has happened.
Congestion control mechanisms are divided into two categories,
1. Open loop - prevent the congestion before it happens.
2. Closed loop - remove the congestion after it happens.
2. PORT NUMBERS
A transport-layer protocol usually has several responsibilities.
One is to create a process-to-process communication.
Processes are programs that run on hosts; a process can act as either a server or a client.
A process on the local host, called a client, needs services from a process usually
on the remote host, called a server.
Processes are assigned a unique 16-bit port number on that host.
Port numbers provide end-to-end addresses at the transport layer
They also provide multiplexing and demultiplexing at this layer.
ICANN (Internet Corporation for Assigned Names and Numbers) has divided the port
numbers into three ranges:
Well-known ports
Registered ports
Ephemeral (dynamic) ports
WELL-KNOWN PORTS
These are permanent port numbers used by servers.
They range from 0 to 1023.
These port numbers cannot be chosen randomly.
They are universal port numbers for server processes.
Every client process knows the well-known port number of the corresponding server
process.
For example, while the daytime client process can use an ephemeral (temporary)
port number, say 52,000, to identify itself, the daytime server process must use the
well-known (permanent) port number 13.
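The three ranges can be summarised in a small helper function. This is only an illustrative sketch (the function name is my own); the boundaries follow the ranges listed above:

```python
# Classify a 16-bit port number into the three ICANN ranges described above.
def classify_port(port: int) -> str:
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16 bits: 0..65535")
    if port <= 1023:
        return "well-known"              # permanent server ports
    if port <= 49151:
        return "registered"
    return "ephemeral (dynamic)"         # temporary client ports

print(classify_port(13))       # well-known (daytime server)
print(classify_port(52000))    # ephemeral (example client port)
```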
REGISTERED PORTS
The ports ranging from 1024 to 49,151 are not assigned or controlled by ICANN; they
can only be registered with ICANN to prevent duplication.
EPHEMERAL (DYNAMIC) PORTS
The ports ranging from 49,152 to 65,535 are neither controlled nor registered; they can
be used as temporary port numbers by client processes.
Each transport-layer protocol provides a different type of service and should be used appropriately.
UDP - UDP is an unreliable connectionless transport-layer protocol used for its simplicity
and efficiency in applications where error control can be provided by the application-layer
process.
TCP - TCP is a reliable connection-oriented protocol that can be used in any application
where reliability is important.
SCTP - SCTP is a new transport-layer protocol designed to combine some features of UDP
and TCP in an effort to create a better protocol for multimedia communication.
UDP PORTS
Processes (server/client) are identified by an abstract locator known as a port.
A server accepts messages at a well-known port.
Some well-known UDP ports are 7–Echo, 53–DNS, 111–RPC, 161–SNMP, etc.
The <port, host> pair is used as the key for demultiplexing.
Each port is implemented as a message queue.
When a message arrives, UDP appends it to the end of the queue.
When the queue is full, the message is discarded.
When a message is read, it is removed from the queue.
When an application process wants to receive a message, one is removed from the
front of the queue.
If the queue is empty, the process blocks until a message becomes available.
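The queueing behaviour described above can be sketched as follows. This is a simplified model (keyed by port number only rather than the full <port, host> pair, with an assumed queue capacity), not an actual UDP implementation:

```python
# Demultiplexing sketch: each bound port owns a bounded message queue;
# arriving messages are appended at the tail, and a full queue discards them.
from collections import deque

MAX_QUEUE = 8                       # assumed capacity, for illustration only
ports: dict[int, deque] = {}        # demux table: port number -> message queue

def bind(port: int) -> None:
    ports[port] = deque()

def deliver(port: int, message: bytes) -> None:
    queue = ports.get(port)
    if queue is None:               # no process bound to this port
        return
    if len(queue) >= MAX_QUEUE:     # queue full: the message is discarded
        return
    queue.append(message)           # append to the end of the queue

def receive(port: int):
    queue = ports[port]
    return queue.popleft() if queue else None   # empty queue: nothing to read

bind(53)
deliver(53, b"dns query")
print(receive(53))
```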
UDP PACKET FORMAT
A UDP packet, called a user datagram, has a fixed 8-byte header followed by the data.
The header consists of four fields: source port number, destination port number,
length, and checksum.
Length
This field denotes the total length of the UDP packet (header plus data).
Since it is a 16-bit field, the total length of a UDP datagram can be at most 65,535
bytes; the minimum is 8 bytes, the length of the header alone.
Checksum
UDP computes its checksum over the UDP header, the contents of the message
body, and something called the pseudoheader.
The pseudoheader consists of three fields from the IP header—protocol number,
source IP address, destination IP address plus the UDP length field.
Data
The Data field carries the actual payload to be transmitted.
Its size is variable.
UDP SERVICES
Process-to-Process Communication
UDP provides process-to-process communication using socket addresses, a
combination of IP addresses and port numbers.
Connectionless Services
UDP provides a connectionless service.
There is no connection establishment and no connection termination.
Each user datagram sent by UDP is an independent datagram.
There is no relationship between the different user datagrams even if they are
coming from the same source process and going to the same destination program.
The user datagrams are not numbered.
Each user datagram can travel on a different path.
Flow Control
UDP is a very simple protocol.
There is no flow control, and hence no window mechanism.
The receiver's buffer may overflow with incoming messages.
The lack of flow control means that the process using UDP should provide for this
service, if needed.
Error Control
There is no error control mechanism in UDP except for the checksum.
This means that the sender does not know if a message has been lost or duplicated.
When the receiver detects an error through the checksum, the user datagram is
silently discarded.
The lack of error control means that the process using UDP should provide for this
service, if needed.
Checksum
UDP checksum calculation includes three sections: a pseudoheader, the UDP header,
and the data coming from the application layer.
The pseudoheader is the part of the header of the IP packet in which the user
datagram is to be encapsulated, with some fields filled with 0s.
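The checksum computation can be sketched as below. This is an illustrative implementation using the standard 16-bit one's-complement sum; the IP addresses, ports, and data are made-up example values:

```python
# UDP checksum sketch: one's-complement sum over pseudoheader + header + data.
import struct

def ones_complement_sum16(data: bytes) -> int:
    if len(data) % 2:                                # pad to a 16-bit boundary
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # fold the carry back in
    return total

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    # Pseudoheader: source IP, destination IP, zero byte, protocol 17, UDP length.
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    return (~ones_complement_sum16(pseudo + udp_segment)) & 0xFFFF

# Example header (source port, destination port, length, checksum = 0) plus data.
header = struct.pack("!HHHH", 52000, 13, 8 + 4, 0)
print(hex(udp_checksum(bytes([192, 168, 1, 10]),
                       bytes([203, 0, 113, 5]),
                       header + b"data")))
```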
Congestion Control
Since UDP is a connectionless protocol, it does not provide congestion control.
UDP assumes that the packets sent are small and sporadic (i.e., sent occasionally or at
irregular intervals) and cannot create congestion in the network.
This assumption may or may not be true when UDP is used for interactive real-time
transfer of audio and video.
Queuing
In UDP, queues are associated with ports.
At the client site, when a process starts, it requests a port number from the operating
system.
Some implementations create both an incoming and an outgoing queue associated
with each process.
Other implementations create only an incoming queue associated with each process.
APPLICATIONS OF UDP
UDP is used for management processes such as SNMP.
UDP is used for route updating protocols such as RIP.
UDP is a suitable transport protocol for multicasting. Multicasting capability is
embedded in the UDP software.
UDP is suitable for a process with internal flow and error control mechanisms such
as Trivial File Transfer Protocol (TFTP).
UDP is suitable for a process that requires simple request-response communication
with little concern for flow and error control.
UDP is normally used for interactive real-time applications that cannot tolerate
uneven delay between sections of a received message.
TCP SERVICES
Process-to-Process Communication
TCP provides process-to-process communication using port numbers.
Stream Delivery Service
TCP is a stream-oriented protocol.
TCP allows the sending process to deliver data as a stream of bytes and allows the
receiving process to obtain data as a stream of bytes.
TCP creates an environment in which the two processes seem to be connected by an
imaginary "tube" that carries their bytes across the Internet.
The sending process produces (writes to) the stream and the receiving process
consumes (reads from) it.
Full-Duplex Communication
TCP offers full-duplex service, where data can flow in both directions at the same
time.
Each TCP endpoint then has its own sending and receiving buffer, and segments
move in both directions.
Connection-Oriented Service
TCP is a connection-oriented protocol.
A connection needs to be established for each pair of processes.
When a process at site A wants to send to and receive data from another
process at site B, the following three phases occur:
1. The two TCPs establish a logical connection between them.
2. Data are exchanged in both directions.
3. The connection is terminated.
Reliable Service
TCP is a reliable transport protocol.
It uses an acknowledgment mechanism to check the safe and sound arrival of data.
TCP SEGMENT
A packet in TCP is called a segment.
Data units exchanged between TCP peers are called segments.
A TCP segment encapsulates the data received from the application layer.
The TCP segment is encapsulated in an IP datagram, which in turn is encapsulated in
a frame at the data-link layer.
TCP is a byte-oriented protocol, which means that the sender writes bytes into a TCP
connection and the receiver reads bytes out of the TCP connection.
TCP does not, itself, transmit individual bytes over the Internet.
TCP on the source host buffers enough bytes from the sending process to fill a
reasonably sized packet and then sends this packet to its peer on the destination host.
TCP on the destination host then empties the contents of the packet into a receive
buffer, and the receiving process reads from this buffer at its leisure.
TCP connection supports byte streams flowing in both directions.
The packets exchanged between TCP peers are called segments, since each one
carries a segment of the byte stream.
Connection Establishment
While opening a TCP connection, the two nodes (client and server) need to agree on a
set of parameters.
The parameters are the starting sequence numbers to be used for their respective byte
streams.
Connection establishment in TCP is a three-way handshake.
1. The client sends a SYN segment to the server containing its initial sequence number
(Flags = SYN, SequenceNum = x).
2. The server responds with a segment that acknowledges the client's segment and
specifies its own initial sequence number (Flags = SYN + ACK, ACK = x + 1, SequenceNum = y).
3. Finally, the client responds with a segment that acknowledges the server's sequence
number (Flags = ACK, ACK = y + 1).
The reason each side acknowledges a sequence number that is one larger than the one
sent is that the Acknowledgment field actually identifies the "next sequence number
expected."
A timer is scheduled for each of the first two segments, and if the expected
response is not received, the segment is retransmitted.
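The exchange of sequence and acknowledgment numbers can be traced with a small simulation. This sketch only illustrates the three segments described above; x and y stand for the two initial sequence numbers:

```python
# Three-way handshake sketch: SYN, SYN+ACK, ACK with 32-bit sequence numbers.
import random

MOD = 2 ** 32                                   # sequence numbers are 32 bits

def three_way_handshake() -> None:
    x = random.randrange(MOD)                   # client's initial sequence number
    y = random.randrange(MOD)                   # server's initial sequence number

    # 1. Client -> Server: SYN, SequenceNum = x
    syn = {"flags": "SYN", "seq": x}
    # 2. Server -> Client: SYN + ACK, ACK = x + 1, SequenceNum = y
    syn_ack = {"flags": "SYN+ACK", "seq": y, "ack": (syn["seq"] + 1) % MOD}
    # 3. Client -> Server: ACK, ACK = y + 1
    ack = {"flags": "ACK", "ack": (syn_ack["seq"] + 1) % MOD}

    # Each side acknowledges the next sequence number it expects.
    assert syn_ack["ack"] == (x + 1) % MOD and ack["ack"] == (y + 1) % MOD
    print(syn, syn_ack, ack, sep="\n")

three_way_handshake()
```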
Data Transfer
After connection is established, bidirectional data transfer can take place.
The client and server can send data and acknowledgments in both directions.
The data traveling in the same direction as an acknowledgment are carried on the
same segment.
The acknowledgment is piggybacked with the data.
Connection Termination
Connection termination or teardown can be done in two ways:
Three-way Close and Half-Close.
Send Buffer
The sending TCP maintains a send buffer that holds three kinds of data:
(1) acknowledged data,
(2) sent but unacknowledged data, and
(3) data yet to be transmitted.
The send buffer maintains three pointers,
(1) LastByteAcked, (2) LastByteSent, and (3) LastByteWritten,
such that:
LastByteAcked ≤ LastByteSent ≤ LastByteWritten
A byte can be sent only after it has been written, and only a sent byte can be
acknowledged.
Bytes to the left of LastByteAcked are not kept, since they have already been acknowledged.
Receive Buffer
The receiving TCP maintains a receive buffer to hold data even if it arrives out of order.
The receive buffer maintains three pointers, namely
(1) LastByteRead, (2) NextByteExpected, and (3) LastByteRcvd
such that:
LastByteRead ≤ NextByteExpected ≤ LastByteRcvd + 1
A byte cannot be read until that byte and all preceding bytes have been received.
If data is received in order, then NextByteExpected = LastByteRcvd + 1
Bytes to the left of LastByteRead are not buffered, since they have already been read by the application.
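The pointer relationships above can be captured in a toy model. This sketch only checks the send-buffer invariant; it is not a real TCP buffer implementation:

```python
# Send-buffer sketch: enforce LastByteAcked <= LastByteSent <= LastByteWritten.
class SendBuffer:
    def __init__(self) -> None:
        self.last_byte_acked = 0
        self.last_byte_sent = 0
        self.last_byte_written = 0

    def write(self, nbytes: int) -> None:            # application writes data
        self.last_byte_written += nbytes

    def send(self, nbytes: int) -> None:             # only written bytes can be sent
        self.last_byte_sent = min(self.last_byte_sent + nbytes,
                                  self.last_byte_written)

    def ack(self, last_byte_acked: int) -> None:     # only sent bytes can be acked
        self.last_byte_acked = min(last_byte_acked, self.last_byte_sent)
        assert (self.last_byte_acked <= self.last_byte_sent
                <= self.last_byte_written)

buf = SendBuffer()
buf.write(1000)   # application wrote 1000 bytes
buf.send(600)     # 600 bytes sent so far
buf.ack(400)      # 400 bytes acknowledged; bytes to their left can be freed
```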
TCP TRANSMISSION
TCP uses the following mechanisms to trigger the transmission of a segment.
They are
o Maximum Segment Size (MSS) - related to the Silly Window Syndrome
o Timeout - addressed by Nagle's Algorithm
Silly Window Syndrome
The sending TCP may create a silly window syndrome if it is serving an application
program that creates data slowly, for example, 1 byte at a time.
The result is a lot of 1-byte segments that are traveling through an internet.
The solution is to prevent the sending TCP from sending the data byte by byte.
The sending TCP must be forced to wait and collect data to send in a larger block.
Nagle’s Algorithm
If there is data to send but it is less than the MSS, the sender may want to wait some
amount of time before sending the available data.
If it waits too long, it may delay the process.
If it does not wait long enough, it may end up sending small segments, resulting in
the Silly Window Syndrome.
The solution is to introduce a timer and to transmit when the timer expires.
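Nagle's algorithm is commonly stated in a self-clocking form, where the arrival of an ACK plays the role of the timer: send a full segment whenever one is available; otherwise send a small segment only if nothing is in flight, and buffer new data until an ACK arrives. The sketch below illustrates this rule (the MSS value and class layout are assumptions for illustration):

```python
# Nagle's algorithm sketch: full segments go out immediately; small data waits
# until there is no unacknowledged data in flight.
MSS = 1460   # assumed maximum segment size, in bytes

class NagleSender:
    def __init__(self) -> None:
        self.buffered = 0            # bytes waiting to be sent
        self.unacked_in_flight = 0   # bytes sent but not yet acknowledged

    def app_write(self, nbytes: int) -> None:
        self.buffered += nbytes
        self._try_send()

    def ack_received(self, nbytes: int) -> None:
        self.unacked_in_flight -= nbytes
        self._try_send()             # the ACK "clocks" the next transmission

    def _try_send(self) -> None:
        while self.buffered >= MSS:                 # a full segment is available
            self._transmit(MSS)
        if self.buffered and self.unacked_in_flight == 0:
            self._transmit(self.buffered)           # small, but nothing in flight

    def _transmit(self, nbytes: int) -> None:
        print(f"send segment of {nbytes} bytes")
        self.buffered -= nbytes
        self.unacked_in_flight += nbytes
```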
Slow Start
Slow start is used to increase CongestionWindow exponentially from a cold start.
Source TCP initializes CongestionWindow to one packet.
TCP doubles the number of packets sent every RTT on successful transmission.
When ACK arrives for first packet TCP adds 1 packet to CongestionWindow and
sends two packets.
When two ACKs arrive, TCP increments CongestionWindow by 2 packets and sends
four packets and so on.
Instead of sending all permissible packets at once (bursty traffic), packets are sent in
a phased manner, i.e., slow start.
Initially TCP has no idea about the congestion level, so it increases
CongestionWindow rapidly until there is a timeout. On a timeout:
CongestionThreshold = CongestionWindow / 2
CongestionWindow = 1
Slow start is repeated until CongestionWindow reaches CongestionThreshold, and
thereafter CongestionWindow grows by only 1 packet per RTT.
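The slow-start and timeout rules above can be traced with a short simulation. The initial threshold and the round at which a timeout occurs are assumed values, chosen only to show the shape of the behaviour:

```python
# Slow-start sketch: double CongestionWindow each RTT until the threshold,
# then grow by one packet per RTT; on a timeout, halve the threshold and reset.
def simulate_slow_start(rounds: int, timeout_at: set) -> None:
    cwnd = 1                   # CongestionWindow, counted in packets
    threshold = 64             # assumed initial CongestionThreshold

    for rtt in range(1, rounds + 1):
        print(f"RTT {rtt}: cwnd = {cwnd} packets")
        if rtt in timeout_at:                  # a timeout is detected this round
            threshold = max(cwnd // 2, 1)      # CongestionThreshold = cwnd / 2
            cwnd = 1                           # CongestionWindow = 1
        elif cwnd < threshold:
            cwnd *= 2                          # slow start: double every RTT
        else:
            cwnd += 1                          # afterwards: +1 packet per RTT

simulate_slow_start(rounds=12, timeout_at={7})
```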
Fast Retransmit
When a TCP sender receives three duplicate ACKs for the same data, it retransmits the
missing segment without waiting for the retransmission timer to expire.
For example, suppose packets 1 and 2 are received whereas packet 3 gets lost.
o The receiver sends a duplicate ACK for packet 2 when packet 4 arrives.
o After sending packet 6, the sender has received three duplicate ACKs and retransmits packet 3.
o When packet 3 is received, the receiver sends a cumulative ACK up to packet 6.
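The example can be replayed with a short script. This is only an illustrative trace of the duplicate-ACK counting described above:

```python
# Fast-retransmit sketch: three duplicate ACKs for packet 2 trigger the
# retransmission of the lost packet 3.
def fast_retransmit_demo() -> None:
    arrivals = [1, 2, 4, 5, 6]       # packet 3 is lost in the network
    expected = 1                      # next packet the receiver expects
    last_ack = 0
    dup_acks = 0

    for pkt in arrivals:
        if pkt == expected:           # in-order packet: cumulative ACK advances
            expected += 1
            last_ack = pkt
            dup_acks = 0
        else:                         # out-of-order packet: duplicate ACK
            dup_acks += 1
        print(f"received packet {pkt}, ACK {last_ack} (duplicates: {dup_acks})")
        if dup_acks == 3:
            print(f"3 duplicate ACKs -> retransmit packet {last_ack + 1}")

fast_retransmit_demo()
```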
The resulting congestion window trace is shown in the accompanying figure.
DECbit
The idea of the DECbit scheme is to evenly split the responsibility for congestion
control between the routers and the end nodes.
Each router monitors the load it is experiencing and explicitly notifies the end nodes
when congestion is about to occur.
This notification is implemented by setting a binary congestion bit in the packets that
flow through the router; hence the name DECbit.
The destination host then copies this congestion bit into the ACK it sends back to the
source.
The source checks how many of the ACKs for the packets of the previous window have
the DECbit set.
If less than 50% of those ACKs have the DECbit set, the source increases its congestion
window by 1 packet; otherwise, it decreases its congestion window to 87.5% of the
previous value.
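The source-side rule can be written down directly. The 87.5% decrease factor is the value normally quoted for the DECbit scheme; treat the function below as an illustrative sketch rather than a full implementation:

```python
# DECbit source sketch: additive increase / multiplicative decrease driven by
# the fraction of ACKs that carry the congestion bit.
def adjust_window(cwnd: float, acks_with_bit_set: int, total_acks: int) -> float:
    if total_acks == 0:
        return cwnd
    if acks_with_bit_set / total_acks < 0.5:
        return cwnd + 1           # less than 50% congested: grow by 1 packet
    return cwnd * 0.875           # otherwise: shrink to 87.5% of the old value

print(adjust_window(8, acks_with_bit_set=2, total_acks=8))   # -> 9
print(adjust_window(8, acks_with_bit_set=5, total_acks=8))   # -> 7.0
```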
A queue length of 1 is used as the trigger for setting the congestion bit.
A router sets this bit in a packet if its average queue length is greater than or equal to
1 at the time the packet arrives.
The average queue length is measured over a time interval that spans the
last busy + idle cycle plus the current busy cycle.
The router calculates the average queue length by dividing the area under the
queue-length curve by the length of this time interval.
Random Early Detection (RED)
In RED, each router is programmed to monitor its own queue length, and when it detects
that congestion is imminent, it notifies the source to adjust its congestion window.
RED differs from the DECbit scheme in two ways:
a. DECbit explicitly notifies the source by setting a bit in the packet header,
whereas RED implicitly notifies the source by dropping one of its packets.
b. DECbit may lead to a tail drop policy, whereas RED drops packets in a
random manner based on a drop probability: each arriving packet is dropped
with some drop probability whenever the queue length exceeds some drop level.
This idea is called early random drop.
RED has two queue length thresholds that trigger certain activity: MinThreshold and
MaxThreshold.
When a packet arrives at the gateway, RED compares the current average queue length,
AvgLen, with these two values according to the following rules:
if AvgLen ≤ MinThreshold, queue the packet;
if MinThreshold < AvgLen < MaxThreshold, calculate a probability P and drop the
arriving packet with probability P;
if MaxThreshold ≤ AvgLen, drop the arriving packet.
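These rules can be sketched in code. The threshold values and the linear ramp used for the drop probability below are assumptions for illustration (real RED additionally adjusts the probability by the number of packets queued since the last drop):

```python
# RED sketch: queue, probabilistically drop, or always drop, depending on
# where AvgLen falls relative to MinThreshold and MaxThreshold.
import random

MIN_THRESHOLD = 5      # assumed values, in packets
MAX_THRESHOLD = 15
MAX_P = 0.02           # drop probability reached at MaxThreshold

def red_enqueue(avg_len: float, queue: list, packet: object) -> bool:
    """Return True if the packet is queued, False if it is dropped."""
    if avg_len <= MIN_THRESHOLD:
        queue.append(packet)                   # queue the packet
        return True
    if avg_len < MAX_THRESHOLD:
        # drop probability grows linearly between the two thresholds
        p = MAX_P * (avg_len - MIN_THRESHOLD) / (MAX_THRESHOLD - MIN_THRESHOLD)
        if random.random() < p:
            return False                       # early random drop
        queue.append(packet)
        return True
    return False                               # AvgLen >= MaxThreshold: drop

q: list = []
print(red_enqueue(avg_len=10.0, queue=q, packet="pkt"))
```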
SCTP SERVICES
Process-to-Process Communication
SCTP provides process-to-process communication.
Multiple Streams
SCTP allows multistream service in each connection, which is called association in
SCTP terminology.
If one of the streams is blocked, the other streams can still deliver their data.
Multihoming
An SCTP association supports multihoming service.
The sending and receiving host can define multiple IP addresses in each end for an
association.
In this fault-tolerant approach, when one path fails, another interface can be used for
data delivery without interruption.
Full-Duplex Communication
SCTP offers full-duplex service, where data can flow in both directions at the same
time. Each SCTP endpoint has its own sending and receiving buffers, and packets are
sent in both directions.
Connection-Oriented Service
SCTP is a connection-oriented protocol.
In SCTP, a connection is called an association.
If a client wants to send messages to and receive messages from a server, the steps are:
Step1: The two SCTPs establish the connection with each other.
Step2: Once the connection is established, the data gets exchanged in both the
directions.
Step3: Finally, the association is terminated.
Reliable Service
SCTP is a reliable transport protocol.
It uses an acknowledgment mechanism to check the safe and sound arrival of data.
SCTP PACKET FORMAT
An SCTP packet has a mandatory general header and a set of blocks called chunks.
General Header
The general header (packet header) defines the end points of each association to
which the packet belongs.
It guarantees that the packet belongs to a particular association.
It also preserves the integrity of the contents of the packet, including the header itself.
There are four fields in the general header.
Source port
This field identifies the sending port.
Destination port
This field identifies the receiving port that hosts use to route the packet to the
appropriate endpoint/application.
Verification tag
A 32-bit random value created during initialization to distinguish stale packets
from a previous connection.
Checksum
The next field is a checksum. The size of the checksum is 32 bits. SCTP uses
CRC-32 Checksum.
Chunks
Control information or user data are carried in chunks.
Chunks have a common layout.
The first three fields are common to all chunks; the information field depends on the
type of chunk.
The type field can define up to 256 types of chunks. Only a few have been defined so
far; the rest are reserved for future use.
The flag field defines special flags that a particular chunk may need.
The length field defines the total size of the chunk, in bytes, including the type, flag,
and length fields.
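The common chunk layout can be packed as shown below. This is a simplified sketch: real SCTP chunks are padded to a 4-byte boundary, and a DATA chunk carries additional fields (TSN, stream identifier, and so on) inside its information part:

```python
# Chunk sketch: 1-byte type, 1-byte flags, 2-byte length covering the type,
# flag, and length fields plus the chunk information.
import struct

def build_chunk(chunk_type: int, flags: int, info: bytes) -> bytes:
    length = 4 + len(info)                 # 4-byte common header + information
    return struct.pack("!BBH", chunk_type, flags, length) + info

# Example: a DATA chunk (type 0) carrying 12 bytes of user data.
chunk = build_chunk(chunk_type=0, flags=0, info=b"hello, SCTP!")
print(chunk.hex(), len(chunk))
```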
Types of Chunks
An SCTP association may send many packets, a packet may contain several chunks,
and chunks may belong to different streams.
SCTP defines two types of chunks - Control chunks and Data chunks.
A control chunk controls and maintains the association.
A data chunk carries user data.
SCTP ASSOCIATION
SCTP is a connection-oriented protocol.
A connection in SCTP is called an association to emphasize multihoming.
SCTP Associations consists of three phases:
Association Establishment
Data Transfer
Association Termination
Association Establishment
Association establishment in SCTP requires a four-way handshake.
In this procedure, a client process wants to establish an association with a server
process using SCTP as the transport-layer protocol.
The SCTP server needs to be prepared to receive any association (passive open).
Association establishment, however, is initiated by the client (active open).
The client sends the first packet, which contains an INIT chunk.
The server sends the second packet, which contains an INIT ACK chunk. The INIT
ACK also sends a cookie that defines the state of the server at this moment.
The client sends the third packet, which includes a COOKIE ECHO chunk. This is a
very simple chunk that echoes, without change, the cookie sent by the server. SCTP
allows the inclusion of data chunks in this packet.
The server sends the fourth packet, which includes the COOKIE ACK chunk that
acknowledges the receipt of the COOKIE ECHO chunk. SCTP allows the inclusion
of data chunks with this packet.
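The four packets can be traced with a small script. The cookie value here is just a placeholder standing in for the server state; the sketch only shows the order of the chunks:

```python
# Four-way handshake sketch: INIT, INIT ACK (+cookie), COOKIE ECHO, COOKIE ACK.
def sctp_association_setup() -> None:
    # 1. Client -> Server: INIT chunk
    init = {"chunk": "INIT", "from": "client"}

    # 2. Server -> Client: INIT ACK chunk carrying a cookie (server state)
    cookie = "opaque-server-state"                     # hypothetical value
    init_ack = {"chunk": "INIT ACK", "cookie": cookie}

    # 3. Client -> Server: COOKIE ECHO, echoing the cookie unchanged
    #    (data chunks may already be bundled in this packet)
    cookie_echo = {"chunk": "COOKIE ECHO", "cookie": init_ack["cookie"]}

    # 4. Server -> Client: COOKIE ACK (data chunks may be bundled here too)
    cookie_ack = {"chunk": "COOKIE ACK"}

    for packet in (init, init_ack, cookie_echo, cookie_ack):
        print(packet)

sctp_association_setup()
```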
Data Transfer
The whole purpose of an association is to transfer data between two ends.
After the association is established, bidirectional data transfer can take place.
The client and the server can both send data.
SCTP supports piggybacking.
Multistream Delivery
SCTP can support multiple streams, which means that the sender process
can define different streams and a message can belong to one of these
streams.
Each stream is assigned a stream identifier (SI) which uniquely defines
that stream.
SCTP supports two types of data delivery in each stream: ordered (default)
and unordered.
Association Termination
In SCTP, either of the two parties involved in exchanging data (client or server) can
close the connection.
SCTP does not allow a "half-closed" association. If one end closes the association,
the other end must stop sending new data.
If any data are left over in the queue of the recipient of the termination request, they
are sent and the association is closed.
Association termination uses three packets.
SCTP FLOW CONTROL
Receiver Site
The receiver has one buffer (queue) and three variables.
The queue holds the received data chunks that have not yet been read by the process.
The first variable, cumTSN, holds the last TSN received.
The second variable, winSize, holds the available buffer size.
The third variable, lastACK, holds the last cumulative acknowledgment.
The following figure shows the queue and variables at the receiver site.
When the site receives a data chunk, it stores it at the end of the buffer (queue) and
subtracts the size of the chunk from winSize.
The TSN number of the chunk is stored in the cumTSN variable.
When the process reads a chunk, it removes it from the queue and adds the size of the
removed chunk to winSize (recycling).
When the receiver decides to send a SACK, it checks the value of lastACK; if it is less
than cumTSN, it sends a SACK with a cumulative TSN number equal to cumTSN.
It also includes the value of winSize as the advertised window size.
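The receiver-side bookkeeping can be modelled as follows. The sketch assumes chunks arrive in order (so cumTSN simply tracks the last TSN stored) and uses a made-up buffer size:

```python
# Receiver-site sketch: winSize shrinks when a chunk is stored, grows when the
# process reads a chunk, and a SACK advertises cumTSN plus the current winSize.
class SctpReceiver:
    def __init__(self, buffer_size: int) -> None:
        self.queue = []                  # data chunks not yet read by the process
        self.cum_tsn = 0                 # last TSN received
        self.win_size = buffer_size      # available buffer space, in bytes
        self.last_ack = 0                # last cumulative acknowledgment sent

    def receive_chunk(self, tsn: int, data: bytes) -> None:
        self.queue.append(data)          # store at the end of the buffer
        self.win_size -= len(data)
        self.cum_tsn = tsn

    def process_read(self) -> None:
        chunk = self.queue.pop(0)
        self.win_size += len(chunk)      # recycle the buffer space

    def maybe_send_sack(self):
        if self.last_ack < self.cum_tsn:
            self.last_ack = self.cum_tsn
            return {"cumTSN": self.cum_tsn, "advertised_window": self.win_size}
        return None

rx = SctpReceiver(buffer_size=1000)
rx.receive_chunk(tsn=1, data=b"x" * 100)
print(rx.maybe_send_sack())              # {'cumTSN': 1, 'advertised_window': 900}
```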
Sender Site
The sender has one buffer (queue) and three variables: curTSN, rwnd, and inTransit.
We assume each chunk is 100 bytes long. The buffer holds the chunks produced by
the process that either have been sent or are ready to be sent.
The first variable, curTSN, refers to the next chunk to be sent.
All chunks in the queue with a TSN less than this value have been sent, but not
acknowledged; they are outstanding.
The second variable, rwnd, holds the last value advertised by the receiver (in bytes).
The third variable, inTransit, holds the number of bytes in transit, bytes sent but not
yet acknowledged.
The following figure shows the queue and variables at the sender site.
A chunk pointed to by curTSN can be sent if the size of the data is less than or equal
to the quantity rwnd - inTransit.
After sending the chunk, the value of curTSN is incremented by 1 and now points to
the next chunk to be sent.
The value of inTransit is incremented by the size of the data in the transmitted chunk.
When a SACK is received, the chunks with a TSN less than or equal to the
cumulative TSN in the SACK are removed from the queue and discarded. The sender
does not have to worry about them anymore.
The value of inTransit is reduced by the total size of the discarded chunks.
The value of rwnd is updated with the value of the advertised window in the SACK.
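The sending rule and the effect of a SACK can be sketched in the same style. The chunk sizes and window values below follow the 100-byte assumption used above; the class itself is only an illustration:

```python
# Sender-site sketch: the chunk at curTSN is sent only if its size fits within
# rwnd - inTransit; a SACK discards acknowledged chunks and refreshes rwnd.
class SctpSender:
    def __init__(self, chunks, rwnd: int) -> None:
        self.chunks = dict(chunks)        # TSN -> chunk data waiting in the queue
        self.cur_tsn = min(self.chunks)   # next chunk to be sent
        self.rwnd = rwnd                  # last window advertised by the receiver
        self.in_transit = 0               # bytes sent but not yet acknowledged

    def try_send(self) -> bool:
        chunk = self.chunks.get(self.cur_tsn)
        if chunk is None or len(chunk) > self.rwnd - self.in_transit:
            return False
        self.in_transit += len(chunk)
        self.cur_tsn += 1                 # point to the next chunk to be sent
        return True

    def on_sack(self, cum_tsn: int, advertised_window: int) -> None:
        for tsn in [t for t in self.chunks if t <= cum_tsn]:
            self.in_transit -= len(self.chunks.pop(tsn))   # acked: discard
        self.rwnd = advertised_window

tx = SctpSender({1: b"a" * 100, 2: b"b" * 100, 3: b"c" * 100}, rwnd=250)
while tx.try_send():
    pass                                  # TSN 1 and 2 are sent; TSN 3 must wait
tx.on_sack(cum_tsn=2, advertised_window=250)
print(tx.try_send())                      # now TSN 3 fits and can be sent
```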
SCTP ERROR CONTROL
Receiver Site
The receiver stores all chunks that have arrived in its queue including the out-of-
order ones. However, it leaves spaces for any missing chunks.
It discards duplicate messages, but keeps track of them for reports to the sender.
The following figure shows a typical design for the receiver site and the state of the
receiving queue at a particular point in time.
An array of variables keeps track of the beginning and the end of each block that is
out of order.
An array of variables holds the duplicate chunks received.
There is no need to store duplicate chunks in the queue; they are discarded.
Sender Site
The sender site needs two buffers (queues): a sending queue and a
retransmission queue.
Three variables, rwnd, inTransit, and curTSN, are used as described in the previous
section.
The following figure shows a typical design.