CN-Unit 4 - Notes

The transport layer, the fourth layer of the OSI model, provides communication services between application processes on different hosts, utilizing protocols like TCP and UDP. It offers services such as end-to-end delivery, addressing, reliable delivery, flow control, and multiplexing, ensuring that messages are delivered accurately and efficiently. TCP is a connection-oriented protocol that guarantees reliable transmission, while UDP is a connectionless protocol that prioritizes speed over reliability.


Transport Layer

o The transport layer is the 4th layer from the top of the OSI model.


o The main role of the transport layer is to provide the communication services
directly to the application processes running on different hosts.
o The transport layer provides a logical communication between application
processes running on different hosts. Although the application processes on
different hosts are not physically connected, application processes use the
logical communication provided by the transport layer to send the messages to
each other.
o The transport layer protocols are implemented in the end systems but not in
the network routers.
o A computer network provides more than one protocol to the network
applications. For example, TCP and UDP are two transport layer protocols that
provide different sets of services to network applications.
o All transport layer protocols provide a multiplexing/demultiplexing service. Some
also provide other services such as reliable data transfer, bandwidth guarantees,
and delay guarantees.
o Each of the applications in the application layer has the ability to send a
message by using TCP or UDP. The application communicates by using either
of these two protocols. Both TCP and UDP will then communicate with the
internet protocol in the internet layer. The applications can read and write to
the transport layer. Therefore, we can say that communication is a two-way
process.
Services provided by the Transport Layer
The services provided by the transport layer are similar to those of the data link layer.
The data link layer provides the services within a single network while the transport
layer provides the services across an internetwork made up of many networks. The
data link layer controls the physical layer while the transport layer controls all the lower
layers.

The services provided by the transport layer protocols can be divided into five
categories:

o End-to-end delivery
o Addressing
o Reliable delivery
o Flow control
o Multiplexing

End-to-end delivery:
The first duty of a transport-layer protocol is to provide process-to-process
communication. A process is an application-layer entity (running program) that uses
the services of the transport layer. Before we discuss how process-to-process
communication can be accomplished, we need to understand the difference between
host-to-host communication and process-to-process communication. The network
layer is responsible for communication at the computer level (host-to-host
communication). A network-layer protocol can deliver the message only to the
destination computer. However, this is an incomplete delivery. The message still needs
to be handed to the correct process. This is where a transport-layer protocol takes
over. A transport-layer protocol is responsible for delivery of the message to the
appropriate process.

Addressing: Port Numbers


Although there are a few ways to achieve process-to-process communication, the
most common is through the client-server paradigm. A process on the local host,
called a client, needs services from a process usually on the remote host, called a
server.

However, operating systems today support both multiuser and multiprogramming
environments. A remote computer can run several server programs at the same time,
just as several local computers can run one or more client programs at the same time.
For communication, we must define the local host, local process, remote host, and
remote process. The local host and the remote host are defined using IP addresses. To
define the processes, we need second identifiers, called port numbers. In the TCP/IP
protocol suite, the port numbers are integers between 0 and 65,535 (16 bits).

It should be clear by now that the IP addresses and port numbers play different roles
in selecting the final destination of data. The destination IP address defines the host
among the different hosts in the world. After the host has been selected, the port
number defines one of the processes on this particular host (see Figure 23.4).
Socket Addresses:
A transport-layer protocol in the TCP suite needs both the IP address and the port
number, at each end, to make a connection. The combination of an IP address and a
port number is called a socket address. The client socket address defines the client
process uniquely just as the server socket address defines the server process uniquely.
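In the Berkeley sockets API, a socket address is literally this (IP address, port number) pair. A minimal loopback sketch in Python (the port numbers are chosen by the OS here, not fixed values from the text):

```python
import socket

# Server side: bind a UDP socket; port 0 lets the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server_addr = server.getsockname()      # the server's socket address: (IP, port)

# Client side: the OS assigns an ephemeral port, completing the
# client's own socket address automatically on the first send.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", server_addr)

data, client_addr = server.recvfrom(1024)
print(data)             # b'hello'
print(client_addr[0])   # the client's IP address: '127.0.0.1'
client.close()
server.close()
```

Together, the two (IP, port) pairs uniquely identify this client/server exchange.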

Reliable delivery:
The transport layer provides reliability services by retransmitting the lost and damaged
packets.

The reliable delivery service has four aspects:


o Error control
o Sequence control
o Loss control
o Duplication control

Error Control

o The primary role of reliability is Error Control. In reality, no transmission is
100 percent error-free. Therefore, transport layer protocols are
designed to provide error-free transmission.
o The data link layer also provides the error handling mechanism, but it ensures
only node-to-node error-free delivery. However, node-to-node reliability does
not ensure the end-to-end reliability.
o The data link layer checks for the error between each network. If an error is
introduced inside one of the routers, then this error will not be caught by the
data link layer. It only detects those errors that have been introduced between
the beginning and end of the link. Therefore, the transport layer performs the
checking for the errors end-to-end to ensure that the packet has arrived
correctly.
Sequence Control

o The second aspect of reliability is sequence control, which is implemented
at the transport layer.
o On the sending end, the transport layer is responsible for ensuring that the
data units received from the upper layers are usable by the lower layers. On the
receiving end, it ensures that the various segments of a transmission can be
correctly reassembled.

Loss Control

Loss Control is the third aspect of reliability. The transport layer ensures that all the
fragments of a transmission arrive at the destination, not just some of them. On the
sending end, all the fragments of a transmission are given sequence numbers by the
transport layer. These sequence numbers allow the receiver's transport layer to
identify any missing segment.

Duplication Control

Duplication Control is the fourth aspect of reliability. The transport layer guarantees
that no duplicate data arrive at the destination. Just as sequence numbers are used to
identify lost packets, they also allow the receiver to identify and discard duplicate
segments.

Flow Control
Flow control is used to prevent the sender from overwhelming the receiver. If the
receiver is overloaded with too much data, it discards packets and
asks for their retransmission. This increases network congestion and thus
reduces system performance. The transport layer is responsible for flow control.
It uses the sliding window protocol, which makes data transmission more efficient
and controls the flow of data so that the receiver does not become overwhelmed.
The sliding window protocol is byte-oriented rather than frame-oriented.

Multiplexing and Demultiplexing


Whenever an entity accepts items from more than one source, this is referred to as
multiplexing (many to one); whenever an entity delivers items to more than one
destination, this is referred to as demultiplexing (one to many). The transport layer at
the source performs multiplexing; the transport layer at the destination performs
demultiplexing (Figure 23.8). Figure 23.8 shows communication between a client and two servers.
Three client processes are running at the client site, P1, P2, and P3. The processes P1
and P3 need to send requests to the corresponding server process running in a server.
The client process P2 needs to send a request to the corresponding server process
running at another server. The transport layer at the client site accepts three messages
from the three processes and creates three packets. It acts as a multiplexer. The packets
1 and 3 use the same logical channel to reach the transport layer of the first server.
When they arrive at the server, the transport layer does the job of a demultiplexer and
distributes the messages to two different processes. The transport layer at the second
server receives packet 2 and delivers it to the corresponding process. Note that we still
have demultiplexing although there is only one message.
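Demultiplexing can be pictured as a lookup from destination port to process. A toy sketch (the ports 80 and 53 and the packet payloads are invented for illustration):

```python
# Each server process registers an inbox under its well-known port.
inboxes = {80: [], 53: []}      # hypothetical web and DNS server processes

# Arriving packets carry a destination port in their header.
packets = [(80, b"GET /"), (53, b"query"), (80, b"GET /img")]
for dst_port, payload in packets:
    inboxes[dst_port].append(payload)   # one entity -> many destinations

print(len(inboxes[80]), len(inboxes[53]))   # 2 1
```

The sending side does the mirror image: it gathers messages from many processes and hands them all to the single network layer below.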
Transport Layer protocols
o The transport layer is represented by two protocols: TCP and UDP.
o The IP protocol in the network layer delivers a datagram from a source host to
the destination host.
o Nowadays, operating systems support multiuser and multiprocessing
environments; an executing program is called a process. When a host sends a
message to another host, it means that a source process is sending a message to a
destination process. The transport layer protocols define connections to
individual ports known as protocol ports.
o IP is a host-to-host protocol used to deliver a packet from a source
host to a destination host, while transport layer protocols are port-to-port
protocols that work on top of IP to deliver the packet from
the originating port to the destination port.
o Each port is defined by a positive integer address, which is 16 bits in size.

UDP
o UDP stands for User Datagram Protocol.
o UDP is a simple protocol and it provides nonsequenced transport functionality.
o UDP is a connectionless protocol.
o This type of protocol is used when reliability and security are less important
than speed and size.
o UDP is an end-to-end transport level protocol that adds transport-level
addresses, checksum error control, and length information to the data from the
upper layer.
o The packet produced by the UDP protocol is known as a user datagram.

User Datagram Format


The user datagram has an 8-byte header, which is shown below:
Where,

o Source port address: It defines the address of the application process that has
delivered the message. It is a 16-bit field.
o Destination port address: It defines the address of the application process that
will receive the message. It is a 16-bit field.
o Total length: It defines the total length of the user datagram (header plus data)
in bytes. It is a 16-bit field.
o Checksum: The checksum is a 16-bit field used in error detection.
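The four 16-bit fields can be packed with Python's struct module; the port numbers below are arbitrary examples, not values from the text:

```python
import struct

def build_udp_header(src_port, dst_port, payload_len, checksum=0):
    # Four 16-bit fields in network (big-endian) byte order:
    # source port, destination port, total length (header + data), checksum.
    total_length = 8 + payload_len
    return struct.pack("!HHHH", src_port, dst_port, total_length, checksum)

header = build_udp_header(5000, 53, payload_len=4)
print(len(header))                      # 8: the UDP header is 8 bytes
print(struct.unpack("!HHHH", header))   # (5000, 53, 12, 0)
```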

Disadvantages of UDP protocol

o UDP provides only the basic functions needed for end-to-end delivery of a
transmission.
o It does not provide any sequencing or reordering functions and cannot
specify the damaged packet when reporting an error.
o UDP can discover that an error has occurred, but it cannot specify which
packet has been lost, as it does not contain an ID or sequence number for a
particular data segment.

TCP
o TCP stands for Transmission Control Protocol.
o It provides full transport layer services to applications.
o It is a connection-oriented protocol, meaning a connection is established between
both ends of the transmission. To create the connection, TCP generates a
virtual circuit between sender and receiver for the duration of the transmission.

Features Of TCP protocol


o Stream data transfer: The TCP protocol transfers data as a
contiguous stream of bytes. TCP groups the bytes into TCP segments
and then passes them to the IP layer for transmission to the destination. TCP itself
segments the data and forwards it to IP.
o Reliability: TCP assigns a sequence number to each byte transmitted and
expects a positive acknowledgement from the receiving TCP. If ACK is not
received within a timeout interval, then the data is retransmitted to the
destination.
The receiving TCP uses the sequence number to reassemble the segments if
they arrive out of order or to eliminate the duplicate segments.
o Flow Control: The receiving TCP sends an acknowledgement back to the
sender indicating the number of bytes it can receive without overflowing its
internal buffer. The number of bytes is sent in the ACK in the form of the highest
sequence number that it can receive without any problem. This mechanism is
also referred to as a window mechanism.
o Multiplexing: Multiplexing is the process of accepting data from different
applications and forwarding it to different applications on different
computers. At the receiving end, the data is forwarded to the correct
application; this reverse process is known as demultiplexing. TCP delivers a packet
to the correct application by using logical channels known as ports.
o Logical Connections: The combination of sockets, sequence numbers, and
window sizes, is called a logical connection. Each connection is identified by the
pair of sockets used by sending and receiving processes.
o Full Duplex: TCP provides full-duplex service, i.e., data flows in both
directions at the same time. To achieve full-duplex service, each TCP endpoint must
have sending and receiving buffers so that segments can flow in both
directions. TCP is a connection-oriented protocol. Suppose process A wants
to send and receive data from process B. The following steps occur:
o Establish a connection between two TCPs.
o Data is exchanged in both the directions.
o The Connection is terminated.
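The three steps map directly onto the sockets API. A loopback sketch (the message contents and the use of port 0 are illustrative choices):

```python
import socket
import threading

def serve(listener):
    conn, addr = listener.accept()      # handshake completed: connection established
    data = conn.recv(1024)              # data exchanged in both directions
    conn.sendall(data.upper())
    conn.close()                        # connection terminated

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
listener.listen(1)
t = threading.Thread(target=serve, args=(listener,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(listener.getsockname())  # step 1: establish a connection
client.sendall(b"hello")                # step 2: exchange data
reply = client.recv(1024)
client.close()                          # step 3: terminate the connection
t.join()
listener.close()
print(reply)                            # b'HELLO'
```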

TCP Segment Format


Where,

o Source port address: It is used to define the address of the application
program in the source computer. It is a 16-bit field.
o Destination port address: It is used to define the address of the application
program in the destination computer. It is a 16-bit field.
o Sequence number: A stream of data is divided into two or more TCP segments.
The 32-bit sequence number field represents the position of the segment's data in
the original data stream.
o Acknowledgement number: The 32-bit acknowledgement number
acknowledges data from the other communicating device. If the ACK flag is set to
1, then this field specifies the sequence number that the receiver is expecting to
receive.
o Header Length (HLEN): It specifies the size of the TCP header in 32-bit words.
The minimum size of the header is 5 words, and the maximum size of the header
is 15 words. Therefore, the maximum size of the TCP header is 60 bytes, and the
minimum size of the TCP header is 20 bytes.
o Reserved: It is a six-bit field which is reserved for future use.
o Control bits: Each bit of a control field functions individually and
independently. A control bit defines the use of a segment or serves as a validity
check for other fields.

There are a total of six flags in the control field:

o URG: The URG field indicates that the data in a segment is urgent.
o ACK: When ACK field is set, then it validates the acknowledgement number.
o PSH: The PSH field is used to inform the sender that higher throughput is
needed so if possible, data must be pushed with higher throughput.
o RST: The reset bit is used to reset the TCP connection when any
confusion occurs in the sequence numbers.
o SYN: The SYN field is used to synchronize the sequence numbers in three types
of segments: connection request, connection confirmation ( with the ACK bit
set ), and confirmation acknowledgement.
o FIN: The FIN field is used to inform the receiving TCP module that the sender
has finished sending data. It is used in connection termination in three types of
segments: termination request, termination confirmation, and
acknowledgement of termination confirmation.
o Window Size: The window is a 16-bit field that defines the size of the
window.
o Checksum: The checksum is a 16-bit field used in error detection.
o Urgent pointer: If the URG flag is set to 1, then this 16-bit field is an offset
from the sequence number indicating the last urgent data byte.
o Options and padding: It defines the optional fields that convey the
additional information to the receiver.
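Both TCP and UDP compute their checksum field as a 16-bit one's-complement sum. A sketch of the computation (the sample bytes are arbitrary):

```python
def internet_checksum(data: bytes) -> int:
    # One's-complement sum of 16-bit words, with carries folded back in.
    if len(data) % 2:
        data += b"\x00"                 # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return ~total & 0xFFFF

sample = bytes.fromhex("45000030")
print(hex(internet_checksum(sample)))   # 0xbacf
```

A useful property: appending the computed checksum to the data makes the checksum of the whole come out to zero, which is how the receiver verifies it.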

Differences b/w TCP & UDP


Definition
  TCP: TCP establishes a virtual circuit before transmitting the data.
  UDP: UDP transmits the data directly to the destination computer without
  verifying whether the receiver is ready to receive or not.

Connection Type
  TCP: It is a connection-oriented protocol.
  UDP: It is a connectionless protocol.

Speed
  TCP: Slow.
  UDP: High.

Reliability
  TCP: It is a reliable protocol.
  UDP: It is an unreliable protocol.

Header size
  TCP: 20 bytes.
  UDP: 8 bytes.

Acknowledgement
  TCP: It waits for the acknowledgement of data and has the ability to
  resend lost packets.
  UDP: It neither takes acknowledgement nor retransmits the damaged frame.

TCP Connection Management


The steps required to establish and release connections can be represented in a finite
state machine with the 11 states listed in Fig. 6-38. In each state, certain events are
legal. When a legal event happens, some action may be taken. If some other event
happens, an error is reported. Each connection starts in the CLOSED state. It leaves that
state when it does either a passive open (LISTEN) or an active open (CONNECT). If the
other side does the opposite one, a connection is established, and the state becomes
ESTABLISHED. Connection release can be initiated by either side. When it is complete,
the state returns to CLOSED.
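A fragment of this finite state machine can be written as a transition table. The subset of states and events below is a hypothetical simplification of the full 11-state machine, not a complete rendering of Fig. 6-38:

```python
# (current state, event) -> next state; any other pair is an error.
TRANSITIONS = {
    ("CLOSED", "LISTEN"):     "LISTEN",        # passive open
    ("CLOSED", "CONNECT"):    "SYN_SENT",      # active open
    ("SYN_SENT", "SYN+ACK"):  "ESTABLISHED",
    ("LISTEN", "SYN"):        "SYN_RCVD",
    ("SYN_RCVD", "ACK"):      "ESTABLISHED",
    ("ESTABLISHED", "CLOSE"): "FIN_WAIT_1",    # this side initiates release
}

state = "CLOSED"
for event in ("CONNECT", "SYN+ACK", "CLOSE"):
    state = TRANSITIONS[state, event]   # an illegal event raises KeyError
print(state)                            # FIN_WAIT_1
```

The KeyError on an unknown (state, event) pair plays the role of "an error is reported" above.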

TCP Flow Control


Flow control in TCP is handled using a variable-sized sliding window. The Window size
field tells how many bytes may be sent starting at the byte acknowledged. A Window
size field of 0 is legal and says that the bytes up to and including Acknowledgement
number − 1 have been received, but that the receiver has not had a chance to consume
the data and would like no more data for the moment, thank you. The receiver can
later grant permission to send by transmitting a segment with the same
Acknowledgement number and a nonzero Window size field. In the protocols of Chap.
3, acknowledgements of frames received and permission to send new frames were tied
together. This was a consequence of a fixed window size for each protocol. In TCP,
acknowledgements and permission to send additional data are completely decoupled.
In effect, a receiver can say: ‘‘I have received bytes up through k but I do not want any
more just now, thank you.’’ This decoupling (in fact, a variable-sized window) gives
additional flexibility.
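The sender's side of this reduces to simple arithmetic: the advertised Window size caps the bytes in flight beyond the last acknowledged byte. A sketch (the byte counts are made up):

```python
def sender_allowance(last_byte_acked, last_byte_sent, window_size):
    # Bytes the sender may still inject: the advertised window
    # minus the bytes already sent but not yet acknowledged.
    in_flight = last_byte_sent - last_byte_acked
    return max(0, window_size - in_flight)

print(sender_allowance(1000, 1400, 500))  # 100: room for 100 more bytes
print(sender_allowance(1000, 1500, 500))  # 0: window full, sender must wait
print(sender_allowance(1500, 1500, 0))    # 0: zero window, wait for an update
```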

TCP Timer Management


TCP uses multiple timers (at least conceptually) to do its work. The following are the
timers used by TCP:

 Retransmission TimeOut Timer


 Persistence Timer
 Keepalive Timer
 Time Wait Timer

Retransmission TimeOut Timer :

The most important of these is the RTO (Retransmission TimeOut). When a segment is
sent, a retransmission timer is started. If the segment is acknowledged before the timer
expires, the timer is stopped. If, on the other hand, the timer goes off before the
acknowledgement comes in, the segment is retransmitted (and the timer is started
again).
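One common policy, assumed here for illustration (the text above does not specify it), doubles the RTO on every expiry, a strategy known as exponential backoff:

```python
def retransmit_schedule(rto, max_tries):
    # Times (in seconds from the first send) at which the segment is
    # (re)transmitted if no ACK ever arrives, doubling the RTO each time.
    times, t = [], 0.0
    for _ in range(max_tries):
        times.append(t)     # (re)transmit now and restart the timer
        t += rto
        rto *= 2            # back off: double the timeout for the next try
    return times

print(retransmit_schedule(1.0, 4))  # [0.0, 1.0, 3.0, 7.0]
```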

Persistence Timer

A second timer is the persistence timer. It is designed to prevent the following
deadlock. The receiver sends an acknowledgement with a window size of 0, telling the
sender to wait. Later, the receiver updates the window, but the packet with the update
is lost. Now the sender and the receiver are each waiting for the other to do something.
sender to wait. Later, the receiver updates the window, but the packet with the update
is lost. Now the sender and the receiver are each waiting for the other to do something.
When the persistence timer goes off, the sender transmits a probe to the receiver. The
response to the probe gives the window size. If it is still 0, the persistence timer is set
again and the cycle repeats. If it is nonzero, data can now be sent.
Keepalive Timer

A third timer that some implementations use is the keepalive timer. When a connection
has been idle for a long time, the keepalive timer may go off to cause one side to check
whether the other side is still there. If it fails to respond, the connection is terminated.
This feature is controversial because it adds overhead and may terminate an otherwise
healthy connection due to a transient network partition.

Time Wait Timer

The last timer used on each TCP connection is the one used in the TIME WAIT state
while closing. It runs for twice the maximum packet lifetime to make sure that when a
connection is closed, all packets created by it have died off.

Metrics are the network variables used to determine the best route to the destination.
Some routing protocols use static metrics, meaning that their values cannot be
changed, while other routing protocols use dynamic metrics, meaning that their values
can be assigned by the system administrator.

TCP Congestion Control


When the load offered to any network is more than it can handle, congestion builds
up. The Internet is no exception. The network layer detects congestion when queues
grow large at routers and tries to manage it, if only by dropping packets. It is up to the
transport layer to receive congestion feedback from the network layer and slow down
the rate of traffic that it is sending into the network. In the Internet, TCP plays the main
role in controlling congestion, as well as the main role in reliable transport. That is why
it is such a special protocol. One key takeaway was that a transport protocol using an
AIMD (Additive Increase Multiplicative Decrease) control law in response to binary
congestion signals from the network would converge to a fair and efficient bandwidth
allocation. TCP congestion control is based on implementing this approach using a
window and with packet loss as the binary signal. To do so, TCP maintains a congestion
window whose size is the number of bytes the sender may have in the network at any
time. The corresponding rate is the window size divided by the round-trip time of the
connection. TCP adjusts the size of the window according to the AIMD rule. Recall that
the congestion window is maintained in addition to the flow control window, which
specifies the number of bytes that the receiver can buffer. Both windows are tracked
in parallel, and the number of bytes that may be sent is the smaller of the two windows.
Thus, the effective window is the smaller of what the sender thinks is all right and what
the receiver thinks is all right. It takes two to tango. TCP will stop sending data if either
the congestion or the flow control window is temporarily full.
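The AIMD rule and the min(cwnd, rwnd) effective window can be sketched in a few lines. The MSS, the receiver window, and the loss pattern below are invented for illustration:

```python
def aimd_step(cwnd, mss, loss):
    # Additive increase: grow by one MSS per round trip.
    # Multiplicative decrease: halve the window on a loss signal.
    return max(mss, cwnd // 2) if loss else cwnd + mss

mss, rwnd = 1000, 16000
cwnd, trace = mss, []
for loss in (False, False, False, True, False):
    cwnd = aimd_step(cwnd, mss, loss)
    trace.append(min(cwnd, rwnd))   # effective window = min(cwnd, rwnd)
print(trace)                        # [2000, 3000, 4000, 2000, 3000]
```

The halving on loss and the slow linear regrowth afterwards produce the familiar sawtooth pattern of TCP throughput.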
SCTP
SCTP stands for Stream Control Transmission Protocol. It is a newer reliable,
message-oriented transport layer protocol. SCTP is mostly designed
for Internet applications that have recently been introduced, such as
IUA (ISDN over IP), M2UA and M3UA (telephony
signaling), H.248 (media gateway control), H.323 (IP telephony), and SIP (IP
telephony).

SCTP combines the best features of UDP and TCP. SCTP is a reliable, message-
oriented protocol. It preserves message boundaries and, at the same time,
detects lost data, duplicate data, and out-of-order data. It also has congestion
control and flow control mechanisms.

Features of SCTP
There are various features of SCTP, which are as follows:

Transmission Sequence Number

The unit of data in TCP is a byte. Data transfer in TCP is controlled by
numbering bytes using a sequence number. On the other hand, the unit of
data in SCTP is a DATA chunk, which may or may not have a one-to-one
relationship with the message coming from the process because of
fragmentation.

Stream Identifier

In TCP, there is only one stream in each connection. In SCTP, there may be
several streams in each association. Each stream in SCTP needs to be identified
by using a stream identifier (SI). Each data chunk must carry the SI in its
header so that when it arrives at the destination, it can be properly placed in
its stream. The SI is a 16-bit number starting from 0.

Stream Sequence Number

When a data chunk arrives at the destination SCTP, it is delivered to the
appropriate stream and in the proper order. This means that, in addition to an
SI, SCTP defines each data chunk in each stream with a stream sequence
number (SSN).
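The (SI, SSN) pair is enough to reassemble each stream independently of the others. A toy sketch with two invented streams:

```python
from collections import defaultdict

# Chunks arrive in arbitrary order, each tagged (SI, SSN, data).
chunks = [(1, 0, b"a"), (0, 1, b"x"), (1, 1, b"b"), (0, 0, b"w")]

streams = defaultdict(dict)
for si, ssn, data in chunks:
    streams[si][ssn] = data             # place each chunk in its own stream

# Deliver every stream in SSN order, independently of the others.
ordered = {si: [q[ssn] for ssn in sorted(q)] for si, q in streams.items()}
print(ordered[0], ordered[1])           # [b'w', b'x'] [b'a', b'b']
```

Because ordering is per stream, a chunk lost on one stream does not hold up delivery on another, unlike TCP's single byte stream.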
Packets

In TCP, a segment carries data and control information. Data is carried as a
collection of bytes; control information is defined by six control flags in the
header. The design of SCTP is totally different: data is carried as data chunks;
control information is carried as control chunks.

Flow Control

Like TCP, SCTP implements flow control to avoid overwhelming the receiver.

Error Control

Like TCP, SCTP implements error control to provide reliability. TSN numbers
and acknowledgement numbers are used for error control.

Congestion Control

Like TCP, SCTP implements congestion control to determine how many data
chunks can be injected into the network.
