UNIT 4 TRANSPORT LAYER

Various responsibilities of a Transport Layer –


• Process-to-process delivery – While the Data Link Layer requires the MAC address (the 48-bit address contained in the Network Interface Card of every host machine) of the source and destination hosts to correctly deliver a frame, and the Network Layer requires the IP address for appropriate routing of packets, the Transport Layer requires a port number to deliver the segments of data to the correct process among the multiple processes running on a particular host. A port number is a 16-bit address used to identify any client-server program uniquely.
• End-to-end connection between hosts – The transport layer is also responsible for creating the end-to-end connection between hosts, for which it mainly uses TCP and UDP. TCP is a reliable, connection-oriented protocol which uses a handshake to establish a robust connection between two end hosts. TCP ensures reliable delivery of messages and is used in a wide range of applications. UDP, on the other hand, is a stateless and unreliable protocol which provides best-effort delivery. It is suitable for applications which have little concern with flow or error control and need to send bulk data, such as video conferencing. It is often used in multicasting protocols.
• Multiplexing and demultiplexing – Multiplexing allows the simultaneous use of different applications running on a host over a network. The transport layer provides this mechanism, which enables us to send packet streams from various applications simultaneously over a network. The transport layer accepts these packets from different processes, differentiated by their port numbers, and passes them to the network layer after adding proper headers. Similarly, demultiplexing is required at the receiver side to deliver the data coming from various processes. The transport layer receives segments of data from the network layer and delivers each to the appropriate process running on the receiver's machine.
• Congestion control – Congestion is a situation in which too many sources on a network attempt to send data and the router buffers start overflowing, causing packets to be lost. The resulting retransmissions from the sources increase the congestion further. The transport layer provides congestion control in different ways: open-loop congestion control to prevent congestion, and closed-loop congestion control to remove congestion once it has occurred. TCP uses AIMD (additive increase, multiplicative decrease), and traffic-shaping techniques such as the leaky bucket can also be applied for congestion control.
• Data integrity and error correction – The transport layer checks for errors in the messages coming from the application layer by using error-detection codes and computing checksums. It verifies that the received data is not corrupted, and uses ACK and NACK services to inform the sender whether the data has arrived, thereby checking the integrity of the data.
• Flow control – The transport layer provides a flow-control mechanism between the sending and receiving hosts. TCP prevents data loss due to a fast sender and a slow receiver by imposing flow-control techniques. It uses the sliding-window protocol, in which the receiver sends a window advertisement back to the sender informing it of the amount of data it can receive.
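The flow-control idea above can be illustrated with a toy model (a sketch, not real TCP): the sender never transmits more bytes per round than the receiver's advertised window.

```python
# Toy sliding-window flow control: the receiver advertises a window, and the
# sender transmits at most `window` bytes per round, continuing only after
# those bytes are acknowledged.

def send_all(data: bytes, window: int) -> list:
    """Split `data` into rounds that each respect the advertised window."""
    rounds = []
    sent = 0
    while sent < len(data):
        chunk = data[sent:sent + window]   # never exceed the advertised window
        rounds.append(chunk)               # transmit this round
        sent += len(chunk)                 # receiver ACKs; window slides forward
    return rounds

print(send_all(b"abcdefghij", window=4))   # [b'abcd', b'efgh', b'ij']
```

Real TCP windows also shrink and grow as the receiver's buffer drains; this sketch keeps the window fixed for clarity.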
User Datagram Protocol (UDP)
User Datagram Protocol (UDP) is a Transport Layer protocol. UDP is a part of the Internet Protocol suite, referred to as the UDP/IP suite. Unlike TCP, it is an unreliable and connectionless protocol, so there is no need to establish a connection prior to data transfer.

Though Transmission Control Protocol (TCP) is the dominant transport-layer protocol used by most Internet services and provides assured delivery, reliability and much more, all these services cost us additional overhead and latency. Here UDP comes into the picture. For real-time services like computer gaming, voice or video communication and live conferences, we need UDP. Since high performance is needed, UDP permits packets to be dropped instead of processing delayed packets. There is minimal error checking in UDP, so it also saves bandwidth. UDP is therefore more efficient in terms of both latency and bandwidth.

UDP Header –
The UDP header is a fixed, simple 8-byte header, while the TCP header may vary from 20 bytes to 60 bytes. The first 8 bytes contain all the necessary header information and the remaining part consists of data. The UDP port-number fields are each 16 bits long, so the range of port numbers is defined from 0 to 65535; port number 0 is reserved. Port numbers help to distinguish different user requests or processes.

• Source Port: a 2-byte field used to identify the port number of the source.
• Destination Port: a 2-byte field used to identify the port of the destined packet.
• Length: a 16-bit field giving the length of the UDP segment, including header and data.
• Checksum: a 2-byte field. It is the 16-bit one's complement of the one's-complement sum of the UDP header, a pseudo-header of information from the IP header, and the data, padded with zero octets at the end (if necessary) to make a multiple of two octets.
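The 8-byte header layout and the checksum rule can be sketched in Python. The ports and IPv4 addresses below are made-up illustrative values; the RFC 768 special case (a computed checksum of zero is transmitted as 0xFFFF) is left out for brevity.

```python
import struct

def ones_complement_sum(data: bytes) -> int:
    """16-bit one's-complement sum, padding with a zero octet if needed."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return total

def udp_segment(src_port, dst_port, payload, src_ip, dst_ip):
    length = 8 + len(payload)
    # Header with the checksum field set to zero for the computation.
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    # Pseudo-header: source IP, destination IP, zero byte, protocol 17, length.
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, length)
    checksum = ~ones_complement_sum(pseudo + header + payload) & 0xFFFF
    return struct.pack("!HHHH", src_port, dst_port, length, checksum) + payload

seg = udp_segment(12345, 53, b"hi", bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
print(len(seg))  # 8-byte header + 2-byte payload = 10
```

A receiver verifies by summing the pseudo-header plus the whole segment; a valid segment yields all ones (0xFFFF).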

Note – Unlike in TCP, checksum calculation is not mandatory in UDP over IPv4. No error control or flow control is provided by UDP; hence UDP depends on IP and ICMP for error reporting.

Applications of UDP:
• Used for simple request-response communication when the size of data is small, so there is less concern about flow and error control.
• It is a suitable protocol for multicasting, as UDP is connectionless.
• UDP is used by some routing-update protocols like RIP (Routing Information Protocol).
• Normally used for real-time applications which cannot tolerate uneven delays between sections of a received message.
• The following implementations use UDP as a transport-layer protocol:
1. NTP (Network Time Protocol)
2. DNS (Domain Name System)
3. BOOTP, DHCP
4. NNP (Network News Protocol)
5. Quote of the Day protocol
6. TFTP, RTSP, RIP
• The application layer can perform some tasks through UDP:
1. Trace route
2. Record route
3. Timestamp
• On receipt, UDP takes a datagram from the network layer, strips the UDP header and delivers the data to the user process, so it works fast.
• In fact, UDP is almost a null protocol if you remove the checksum field.

When to use UDP?


1. When you need to reduce the demand on computer resources.
2. When using multicast or broadcast transfers.
3. For the transmission of real-time packets, mainly in multimedia applications.
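The "no connection setup" property is easy to see in code. The following sketch exchanges one datagram over the loopback interface; binding to port 0 lets the OS pick a free port (a real service would use a fixed, well-known port).

```python
import socket

# Minimal UDP exchange over loopback: no handshake, no connection state.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", ("127.0.0.1", port))   # just send; no prior setup

data, addr = server.recvfrom(1024)            # best-effort delivery
print(data)                                   # b'ping'
client.close()
server.close()
```

Note there is no `connect()`/`accept()` pair: either side can send a datagram at any time, which is exactly why UDP suits request-response and multicast traffic.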

TCP/IP Model
The TCP/IP model contains four layers, unlike the seven layers of the OSI model. The layers are:
1. Process/Application Layer
2. Host-to-Host/Transport Layer
3. Internet Layer
4. Network Access/Link Layer

1. Network Access Layer –

This layer corresponds to the combination of the Data Link Layer and the Physical Layer of the OSI model. It looks after hardware addressing, and the protocols present in this layer allow for the physical transmission of data.
ARP is discussed below as an Internet-layer protocol, but there is disagreement about whether it belongs to the Internet layer or the Network Access layer: it is described as residing in layer 3 while being encapsulated by layer 2 protocols.
2. Internet Layer –
This layer parallels the functions of OSI’s Network layer. It defines the protocols which are
responsible for logical transmission of data over the entire network. The main protocols residing
at this layer are :
• IP – stands for Internet Protocol and is responsible for delivering packets from the source host to the destination host by looking at the IP addresses in the packet headers. IP has two versions: IPv4 and IPv6. IPv4 is the version most websites currently use, but IPv6 adoption is growing because IPv4 addresses are limited in number compared to the number of users.
• ICMP – stands for Internet Control Message Protocol. It is encapsulated within IP
datagrams and is responsible for providing hosts with information about network
problems.
• ARP – stands for Address Resolution Protocol. Its job is to find the hardware address of
a host from a known IP address. ARP has several types: Reverse ARP, Proxy ARP,
Gratuitous ARP and Inverse ARP.

3. Host-to-Host Layer –
This layer is analogous to the transport layer of the OSI model. It is responsible for end-to-end communication and error-free delivery of data, and it shields the upper-layer applications from the complexities of the network. The two main protocols present in this layer are:
• Transmission Control Protocol (TCP) – provides reliable and error-free communication between end systems. It performs sequencing and segmentation of data, acknowledges received data, and controls the flow of data through a flow-control mechanism. It is a very effective protocol but carries a lot of overhead due to these features, and increased overhead leads to increased cost.
• User Datagram Protocol (UDP) – on the other hand, does not provide any such features. It is the go-to protocol if your application does not require reliable transport, as it is very cost-effective. Unlike TCP, which is a connection-oriented protocol, UDP is connectionless.

4. Process/Application Layer –
This layer performs the functions of the top three layers of the OSI model: the Application, Presentation and Session layers. It is responsible for process-to-process communication and controls user-interface specifications. Some of the protocols present in this layer are HTTP, HTTPS, FTP, TFTP, Telnet, SSH, SMTP, SNMP, NTP, DNS, DHCP, NFS, X Window and LPD. A few of them are described below:
• HTTP and HTTPS – HTTP stands for Hypertext Transfer Protocol. It is used by the World Wide Web to manage communication between web browsers and servers. HTTPS stands for HTTP Secure; it is a combination of HTTP with SSL (Secure Sockets Layer). It is used in cases where the browser needs to fill out forms, sign in, authenticate or carry out bank transactions.
• SSH – SSH stands for Secure Shell. It is a terminal-emulation protocol similar to Telnet. SSH is preferred because of its ability to maintain an encrypted connection; it sets up a secure session over a TCP/IP connection.
• NTP – NTP stands for Network Time Protocol. It is used to synchronize the clocks on our computers to one standard time source. It is very useful in situations like bank transactions. Without NTP, suppose you carry out a transaction where your computer reads the time as 2:30 PM while the server records it as 2:28 PM; such clock skew can corrupt the ordering and consistency of records on the server.

Features
• TCP is connection-oriented. It requires that a connection between two remote end points be established before actual data is sent.
• TCP provides error-checking and recovery mechanism.
• TCP provides end-to-end communication.
• TCP provides flow control and quality of service.
• TCP operates in Client/Server point-to-point mode.
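The client/server, connection-oriented mode can be sketched with loopback sockets. Port 0 lets the OS pick a free port (a real service would listen on a fixed, well-known port), and a thread stands in for the remote server.

```python
import socket
import threading

# Sketch of TCP's connection-oriented client/server mode over loopback.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def echo_once():
    conn, _ = server.accept()        # completes the three-way handshake
    conn.sendall(conn.recv(1024))    # reliable byte stream: echo it back
    conn.close()

t = threading.Thread(target=echo_once)
t.start()

client = socket.create_connection(("127.0.0.1", port))  # handshake initiated
client.sendall(b"hello")
reply = client.recv(1024)
print(reply)                         # b'hello'
client.close()
t.join()
server.close()
```

Contrast this with the UDP sketch earlier: here a connection must be accepted before any data flows, and the kernel handles retransmission and ordering underneath `sendall`/`recv`.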

RPC
• A remote procedure call (RPC) is an interprocess-communication technique used for client-server based applications. It is also known as a subroutine call or a function call.
• A client has a request message that the RPC runtime translates and sends to the server. This request may be a procedure or a function call to a remote server.
• When the server receives the request, it sends the required response back to the client. The client is blocked while the server is processing the call and resumes execution only after the server has finished.

The sequence of events in a remote procedure call is as follows:


• The client stub is called by the client.
• The client stub makes a system call to send the message to the server and puts the
parameters in the message.
• The message is sent from the client to the server by the client’s operating system.
• The message is passed to the server stub by the server operating system.
• The parameters are removed from the message by the server stub.
• Then, the server procedure is called by the server stub.
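The stub sequence above can be sketched in miniature without a real network: here `pickle` stands in for parameter marshaling, the function call between stubs stands in for the operating-system send, and `add`/`server_procedure` are invented names for illustration.

```python
import pickle

def server_procedure(a, b):                   # the remote procedure itself
    return a + b

PROCEDURES = {"add": server_procedure}        # what the server exposes

def server_stub(message: bytes) -> bytes:
    name, args = pickle.loads(message)        # remove parameters from message
    result = PROCEDURES[name](*args)          # call the server procedure
    return pickle.dumps(result)               # marshal the reply

def client_stub(name, *args):
    message = pickle.dumps((name, args))      # put parameters in the message
    reply = server_stub(message)              # "send"; the client blocks here
    return pickle.loads(reply)

print(client_stub("add", 2, 3))               # 5
```

The point of the stubs is that the caller writes `client_stub("add", 2, 3)` as if it were a local call; all marshaling and transport are hidden behind it.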


Advantages:
• Remote procedure calls support process-oriented and thread-oriented models.
• The internal message passing mechanism of RPC is hidden from the user.
• The effort to re-write and re-develop the code is minimum in remote procedure calls.
• Remote procedure calls can be used in distributed environment as well as the local
environment.
• Many of the protocol layers are omitted by RPC to improve performance.

Disadvantages:
• The remote procedure call is a concept that can be implemented in different ways; it is not a standard.
• RPC offers no flexibility for hardware architecture, as it is only interaction-based.
• There is an increase in cost because of the remote procedure call mechanism.

Congestion Control
• Congestion is a state occurring in the network when the message traffic is so heavy that it slows down network response time.

Effects of Congestion
• As delay increases, performance decreases.
• If delay increases, retransmission occurs, making the situation worse.

Congestion control algorithms


1. Leaky Bucket Algorithm
Imagine a bucket with a small hole in the bottom. No matter the rate at which water enters the bucket, the outflow is at a constant rate. When the bucket is full, additional water entering it spills over the sides and is lost.

Similarly, each network interface contains a leaky bucket, and the following steps are involved in the leaky bucket algorithm:
• When a host wants to send a packet, the packet is thrown into the bucket.
• The bucket leaks at a constant rate, meaning the network interface transmits packets at a constant rate.
• Bursty traffic is converted to uniform traffic by the leaky bucket.
• In practice the bucket is a finite queue that outputs at a finite rate.
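The steps above can be simulated with a finite queue. In this sketch, `arrivals` lists how many packets arrive at each tick, the queue holds at most `capacity` packets (overflow is dropped), and `leak_rate` packets drain per tick.

```python
from collections import deque

def leaky_bucket(arrivals, capacity, leak_rate):
    """Simulate a leaky bucket; returns (sent, dropped, still_queued)."""
    queue, sent, dropped = deque(), 0, 0
    for burst in arrivals:                 # one entry per tick
        for _ in range(burst):
            if len(queue) < capacity:
                queue.append(1)            # packet enters the bucket
            else:
                dropped += 1               # bucket overflow: packet is lost
        for _ in range(min(leak_rate, len(queue))):
            queue.popleft()                # constant-rate output
            sent += 1
    return sent, dropped, len(queue)

# A burst of 5 then a burst of 4, with a 3-packet bucket leaking 1 per tick.
print(leaky_bucket([5, 0, 0, 4, 0], capacity=3, leak_rate=1))  # (5, 3, 1)
```

Note how the bursty input leaves as a smooth one-packet-per-tick stream, at the price of the dropped overflow packets.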

2. Token bucket Algorithm


Need for the token bucket algorithm:
The leaky bucket algorithm enforces an output pattern at the average rate, no matter how bursty the traffic is. So, in order to deal with bursty traffic, we need a more flexible algorithm so that data is not lost. One such algorithm is the token bucket algorithm.
The steps of this algorithm can be described as follows:
1. At regular intervals, tokens are thrown into the bucket.
2. The bucket has a maximum capacity.
3. If there is a ready packet, a token is removed from the bucket and the packet is sent.
4. If there is no token in the bucket, the packet cannot be sent.

Let’s understand with an example,


In figure (A) we see a bucket holding three tokens, with five packets waiting to be transmitted. For a packet to be transmitted, it must capture and destroy one token. In figure (B) we see that three of the five packets have gotten through, but the other two are stuck waiting for more tokens to be generated.
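The figure's scenario can be reproduced with a small simulation. Packets are sent only while tokens remain; tokens refill at `rate` per tick, up to `capacity`.

```python
def token_bucket(packets, capacity, rate, ticks):
    """Simulate a token bucket; returns (sent, still_waiting)."""
    tokens, sent = capacity, 0             # bucket starts full
    waiting = packets                      # packets queued at the start
    for _ in range(ticks):
        while waiting and tokens:
            tokens -= 1                    # capture and destroy one token
            waiting -= 1
            sent += 1                      # packet goes out immediately
        tokens = min(capacity, tokens + rate)   # refill at each tick
    return sent, waiting

# Mirrors the figure: 3 tokens, 5 packets -> 3 get through, 2 are stuck.
print(token_bucket(5, capacity=3, rate=0, ticks=1))  # (3, 2)
```

With a nonzero `rate` the two stuck packets would eventually capture fresh tokens and be sent, which is exactly the burst-then-trickle behaviour described next.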

Ways in which token bucket is superior to leaky bucket:


The leaky bucket algorithm controls the rate at which packets are introduced into the network, but it is very conservative in nature. Some flexibility is introduced in the token bucket algorithm. In the token bucket algorithm, tokens are generated at each tick (up to a certain limit), and for an incoming packet to be transmitted it must capture a token. Hence bursts of packets can be transmitted back-to-back as long as tokens are available, which introduces some flexibility into the system.

Formula: M × s = C + ρ × s, so s = C / (M − ρ)
where
s – length of the maximum-rate burst (in seconds)
M – maximum output rate
ρ – token arrival rate
C – capacity of the token bucket in bytes
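Plugging illustrative numbers into the formula (these values are assumed for the example, not taken from the text):

```python
# Burst length for a token bucket: M*s = C + rho*s  =>  s = C / (M - rho)
C = 250_000        # bucket capacity, bytes
M = 25_000_000     # maximum output rate, bytes/second
rho = 2_000_000    # token arrival rate, bytes/second

s = C / (M - rho)  # maximum burst length in seconds
print(round(s * 1000, 2), "ms")  # 10.87 ms
```

Intuitively, the bucket's 250 KB head start is spent at the net rate M − ρ, after which output falls back to the token arrival rate ρ.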

Choke Packet
• In this method of congestion control, a congested router or node sends a special type of packet, called a choke packet, to the source to inform it about the congestion.
• Here, the congested node does not inform its upstream node about the congestion, as is done in the backpressure method.
• In the choke packet method, the congested node sends a warning directly to the source station; the intermediate nodes through which the packet has traveled are not warned.
Implicit Signaling
• In implicit signaling, there is no communication between the congested node or nodes
and the source.
• The source guesses that there is congestion somewhere in the network when it does not
receive any acknowledgment. Therefore the delay in receiving an acknowledgment is
interpreted as congestion in the network.
• On sensing this congestion, the source slows down.
• This type of congestion control policy is used by TCP.

Explicit Signaling
• In this method, the congested nodes explicitly send a signal to the source or destination
to inform about the congestion.
• Explicit signaling is different from the choke packet method. In the choke packet method, a separate packet is used for this purpose, whereas in explicit signaling the signal is included in the packets that carry data.
• Explicit signaling can occur in either the forward or the backward direction.
• In backward signaling, a bit is set in a packet moving in the direction opposite to the
congestion. This bit warns the source about the congestion and informs the source to slow
down.
• In forward signaling, a bit is set in a packet moving in the direction of congestion. This
bit warns the destination about the congestion. The receiver in this case uses policies such
as slowing down the acknowledgements to remove the congestion.

Quality of Service (QoS)


Quality of Service (QoS) refers to traffic-control mechanisms that seek either to differentiate performance based on application or network-operator requirements, or to provide predictable or guaranteed performance to applications, sessions or traffic aggregates. QoS is basically characterized in terms of packet delay and losses of various kinds.

Need for QoS –


• Video and audio conferencing require bounded delay and loss rate.
• Video and audio streaming require a bounded packet-loss rate but may not be so sensitive to delay.
• Time-critical applications (real-time control) in which bounded delay is considered to
be an important factor.
• Valuable applications should be provided better services than less valuable applications.

QoS Specification –
QoS requirements can be specified as:
• Delay
• Delay Variation(Jitter)
• Throughput
• Error Rate
There are two types of QoS solutions:
• Stateless solutions – Routers maintain no fine-grained state about traffic. A positive factor is that this approach is scalable and robust, but the services are weak, since there is no guarantee about the kind of delay or performance a particular application will encounter.
• Stateful solutions – Routers maintain per-flow state, as flow identification is very important in providing Quality of Service. This enables powerful services such as guaranteed service and high resource utilization, and provides protection, but it is much less scalable and robust.
