CN-Unit 4
Transport Layer
• The transport layer builds on the network layer to provide data transport from a process on a
source machine to a process on a destination machine with a desired level of reliability that is
independent of the physical networks currently in use, whereas the network layer provides end-
to-end packet delivery using datagrams or virtual circuits.
• It provides the abstractions that applications need to use the network. Without the transport
layer, the whole concept of layered protocols would make little sense.
• Transport layer implementations are contained in both the TCP/IP model (RFC 1122), which is
the foundation of the Internet, and the Open Systems Interconnection (OSI) model of general
networking.
• In the Open Systems Interconnection model the transport layer is most often referred to as
Layer 4 or L4.
• The best-known transport protocol is the Transmission Control Protocol (TCP). It lent its name to
the title of the entire Internet Protocol Suite, TCP/IP, and is used for connection-oriented
transmissions, whereas the connectionless User Datagram Protocol (UDP) is used for simpler
messaging transmissions.
• The software and/or hardware within the transport layer that does the work is called the
transport entity. The transport entity can be located in the operating system kernel, in a library
package bound into network applications, in a separate user process, or even on the network
interface card.
• The (logical) relationship of the network, transport, and application layers is illustrated
in the accompanying figure.
• Just as there are two types of network service, connection-oriented and connectionless,
there are also two types of transport service.
• The connection-oriented transport service is similar to the connection-oriented network
service; in both cases, connections have three phases: establishment, data transfer, and release.
Services
Transport layer services are conveyed to an application via a programming interface to the
transport layer protocols. The services may include the following features:
• End-to-end delivery:
– The transport layer transmits the entire message to the destination. Therefore, it ensures the end-to-
end delivery of an entire message from a source to the destination.
• Reliable delivery:
– The transport layer provides reliability services by retransmitting the lost and damaged packets.
– The reliable delivery has four aspects:
• Error control: The primary role of reliability is Error Control. Transport layer protocols are designed to
provide error-free transmission.
• Sequence control: On the sending end, the transport layer is responsible for ensuring that the packets
received from the upper layers can be used by the lower layers. On the receiving end, it ensures that the
various segments of a transmission can be correctly reassembled.
• Loss control: The transport layer ensures that all the fragments of a transmission arrive at the destination,
not just some of them.
• Duplication control: The transport layer guarantees that no duplicate data arrive at the destination.
Sequencing allows the receiver to identify and discard duplicate segments.
• Flow Control:
– Flow control is used to prevent the sender from overwhelming the receiver. If the receiver is
overloaded with too much data, it discards the packets and asks for their retransmission.
• Multiplexing:
– The transport layer uses multiplexing to improve transmission efficiency.
– Multiplexing can occur in two ways:
• Upward multiplexing: Upward multiplexing means that multiple transport layer connections use the same
network connection. To make transmission more cost-effective, the transport layer sends several transmissions
bound for the same destination along the same path; this is achieved through upward multiplexing.
• Downward multiplexing: Downward multiplexing means that one transport layer connection uses multiple
network connections. Downward multiplexing allows the transport layer to split a connection among several
paths to improve the throughput. This type of multiplexing is used when the available network connections
have low capacity.
Elements of Transport Protocols
• Addressing
• Connection Establishment/Release
• Error Control
• Flow Control
• Multiplexing/Demultiplexing
• Fragmentation and reassembly
• Crash Recovery
Addressing
• The transport layer deals with addressing, or labelling, each segment. It also differentiates between
a connection and a transaction. Connection identifiers are ports or sockets that label
each segment, so the receiving device knows which process it has been sent from. This
helps in keeping track of multiple-message conversations. Ports act as endpoints in the
Internet.
• We will use the generic term TSAP (Transport Service Access Point) to mean a specific
endpoint in the transport layer. The analogous endpoints in the network layer (i.e.,
network layer addresses) are, not surprisingly, called NSAPs (Network Service Access
Points). IP addresses are examples of NSAPs.
• The purpose of having TSAPs is that in some networks, each computer has a single
NSAP, so some way is needed to distinguish multiple transport endpoints that share that
NSAP.
• The following Figure illustrates the relationship between the NSAPs, the TSAPs, and a transport connection.
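To make the idea of a TSAP concrete, the sketch below uses Python's standard socket API, where an NSAP corresponds to an IP address and a TSAP to an (IP address, port) pair. The loopback address and port 5000 are arbitrary choices for illustration.

```python
import socket

# In the Internet, an NSAP is an IP address and a TSAP is an (IP address, port) pair.
# Binding a socket to a port attaches a process to a specific transport endpoint,
# so the receiving host knows which process each arriving segment belongs to.

NSAP = "127.0.0.1"   # network-layer address of this host (loopback, for illustration)
TSAP_PORT = 5000     # transport-layer port identifying the process on that host

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((NSAP, TSAP_PORT))          # the pair (IP, port) is the transport endpoint
server.listen(1)
print("Listening on TSAP", server.getsockname())
server.close()
```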
Connection Establishment/Release
• The transport layer creates and releases the connection across the network. This
includes a naming mechanism so that a process on one machine can indicate with
whom it wishes to communicate. The transport layer enables us to establish and delete
connections across the network to multiplex several message streams onto one
communication channel.
• At first glance, it would seem sufficient for one transport entity to just send a
CONNECTION REQUEST segment to the destination and wait for a
CONNECTION ACCEPTED reply. The problem occurs when the network can lose, delay,
corrupt, and duplicate packets. This behavior causes serious complications.
Figure: Three protocol scenarios for establishing a connection using a three-way handshake.
CR denotes CONNECTION REQUEST.
(a) Normal operation.
(b) Old duplicate CONNECTION REQUEST appearing out of nowhere.
(c) Duplicate CONNECTION REQUEST and duplicate ACK.
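The toy model below (a sketch, not real TCP) illustrates why the three-way handshake protects against old duplicate CONNECTION REQUESTs: the responder echoes the sequence number it received, and the initiator confirms the connection only if that echo matches its current sequence number. All function and variable names are illustrative.

```python
import random

# Toy model of the third handshake step: host 1 accepts the connection only if
# the echoed sequence number is the one it is currently using. An old duplicate
# CR carrying a stale sequence number therefore cannot be confirmed.

def initiator_confirms(echoed_seq: int, current_seq: int) -> bool:
    return echoed_seq == current_seq

current_x = random.randint(0, 2**32 - 1)   # host 1's live CONNECTION REQUEST
stale_x = random.randint(0, 2**32 - 1)     # an old duplicate CR from the past

print(initiator_confirms(echoed_seq=current_x, current_seq=current_x))  # True
print(initiator_confirms(echoed_seq=stale_x, current_seq=current_x))    # False (almost surely)
```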
• Releasing a connection is easier than establishing one. Nevertheless, there are more
pitfalls than one might expect here. As we mentioned earlier, there are two styles of
terminating a connection: asymmetric release and symmetric release.
• Asymmetric release is the way the telephone system works: when one party hangs up,
the connection is broken. Symmetric release treats the connection as two separate
unidirectional connections and requires each one to be released separately.
– Asymmetric release is abrupt and may result in data loss. Consider the
following scenario: after the connection is established, host 1 sends a
segment that arrives properly at host 2. Then host 1 sends another segment.
Unfortunately, host 2 issues a DISCONNECT before the second segment
arrives. The result is that the connection is released and data are lost.
– Symmetric release does the job when each process has a fixed amount of
data to send and clearly knows when it has sent it.
Error Control
• Error detection and error recovery are an integral part of reliable service, and therefore
it is necessary to perform error control mechanisms on an end-to-end basis. To
control errors from lost or duplicate segments, the transport layer assigns unique
sequence numbers to the different segments of a message and creates virtual
circuits, allowing only one virtual circuit per session.
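As a concrete example of transport-layer error detection, the sketch below computes the 16-bit ones'-complement Internet checksum that TCP and UDP carry in their headers; the receiver recomputes it over the received bytes and discards the segment on a mismatch. The sample data is arbitrary.

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement sum, as used by TCP and UDP for error detection."""
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

segment = b"hello transport layer"
print(hex(internet_checksum(segment)))   # receiver recomputes and compares
```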
Flow Control
• The underlying rule of flow control is to match the pace of a fast process to that of a
slow one; the transport layer keeps a fast sender from overrunning a slow receiver.
Acknowledgements are sent back to manage end-to-end flow control. A Go-Back-N sender
retransmits all outstanding segments starting from the oldest unacknowledged one,
whereas Selective Repeat requests retransmission of only the specific segments that were lost.
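The minimal sketch below captures the Go-Back-N retransmission rule described above: on a timeout, everything from the oldest unacknowledged segment onward is resent, whereas Selective Repeat would resend only the missing segment. The function name and segment numbers are illustrative.

```python
def go_back_n_on_timeout(base: int, next_seq: int) -> list:
    """Segments to retransmit after a timeout: everything from 'base' (the oldest
    unacknowledged segment) up to, but not including, 'next_seq'."""
    return list(range(base, next_seq))

# Segments 3 and 4 are in flight (base = 3, next_seq = 5); a timeout causes both
# to be retransmitted even if only segment 3 was actually lost.
print(go_back_n_on_timeout(base=3, next_seq=5))   # [3, 4]
```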
Multiplexing/Demultiplexing
• The transport layer establishes a separate network connection for each transport
connection required by the session layer. To improve throughput, the transport layer
establishes multiple network connections. When the issue of throughput is not
important, it multiplexes several transport connections onto the same network
connection, thus reducing the cost of establishing and maintaining the network
connections.
• When several connections are multiplexed, they call for demultiplexing at the receiving
end. In the case of the transport layer, the communication takes place only between
two processes and not between two machines. Hence, communication at the transport
layer is also known as peer-to-peer or process-to-process communication.
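A minimal sketch of demultiplexing using Python's socket API: two UDP sockets on the same host are bound to different ports, so the transport layer delivers each arriving datagram to the right process. The loopback address and ports 6000/6001 are arbitrary.

```python
import socket

# Each socket is bound to its own port; the port number in the arriving datagram
# determines which socket (and hence which process) receives it.
sock_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock_a.bind(("127.0.0.1", 6000))        # datagrams for port 6000 go here

sock_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock_b.bind(("127.0.0.1", 6001))        # datagrams for port 6001 go here

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"to 6000", ("127.0.0.1", 6000))
sender.sendto(b"to 6001", ("127.0.0.1", 6001))

print(sock_a.recvfrom(1024)[0])   # b'to 6000'
print(sock_b.recvfrom(1024)[0])   # b'to 6001'

for s in (sock_a, sock_b, sender):
    s.close()
```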
Fragmentation and Reassembly
• When the transport layer receives a large message from the session layer, it breaks the
message into smaller units depending upon the requirement. This process is called
fragmentation. Thereafter, it is passed to the network layer. Conversely, when the
transport layer acts as the receiving process, it reorders the pieces of a message before
reassembling them into a message.
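A minimal sketch of fragmentation and reassembly, assuming a fixed segment size and a per-segment sequence number (the function names and the 16-byte size are illustrative): the sender splits the message, and the receiver sorts by sequence number before reassembling, so segments may arrive in any order.

```python
# Split a large message into numbered segments and reassemble them by sequence number.
def fragment(message: bytes, mss: int):
    return [(seq, message[i:i + mss])
            for seq, i in enumerate(range(0, len(message), mss))]

def reassemble(segments):
    return b"".join(data for _, data in sorted(segments))

msg = b"A large application message handed down by the upper layer"
segs = fragment(msg, mss=16)
segs.reverse()                      # simulate out-of-order arrival
assert reassemble(segs) == msg
print(len(segs), "segments reassembled correctly")
```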
Crash Recovery
• If hosts and routers are subject to crashes or connections are long-lived (e.g., large
software or media downloads), recovery from these crashes becomes an issue. If the
transport entity is entirely within the hosts, recovery from network and router crashes is
straightforward. The transport entities expect lost segments all the time and know how
to cope with them by using retransmissions.
• A more troublesome problem is how to recover from host crashes. In particular, it may
be desirable for clients to be able to continue working when servers crash and quickly
reboot.
CONGESTION CONTROL
• If the transport entities on many machines send too many packets into the network too
quickly, the network will become congested, with performance degraded as packets are
delayed and lost. Controlling congestion to avoid this problem is the combined
responsibility of the network and transport layers. Congestion occurs at routers, so it is
detected at the network layer.
• However, congestion is ultimately caused by traffic sent into the network by the
transport layer. The only effective way to control congestion is for the transport
protocols to send packets into the network more slowly.
• Congestion-avoidance algorithms (CAA) are implemented at the TCP layer as the
mechanism to avoid congestive collapse in a network.
• There are two congestion control (traffic-shaping) algorithms, which are as follows:
Leaky Bucket
• The leaky bucket algorithm finds its use in network traffic shaping or rate-limiting. The
algorithm makes it possible to control the rate at which data is injected into a network
and to manage burstiness in the data rate.
• In this algorithm, a bucket with a capacity of b bytes and a hole in the bottom is considered. If
the bucket is empty, b bytes of storage are available. When a packet arrives, it is added to the
bucket if it fits; if adding the packet would exceed the capacity of b bytes, the packet is either
discarded or queued. The bucket leaks through the hole in its bottom at a constant
rate of r bytes per second.
• The outflow is constant whenever there is any packet in the bucket and zero when it is empty.
This means that if data flows into the bucket faster than it flows out through the hole,
the bucket overflows.
• The disadvantage of the leaky-bucket algorithm is the inefficient use of available network
resources. Because the leak rate is a fixed parameter, when the traffic volume is low a large
portion of network resources, such as bandwidth, is not used effectively.
The leaky-bucket algorithm also does not allow individual flows to burst up to port speed to
consume network resources effectively when there is no resource contention in the network.
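The sketch below models the leaky bucket described above, assuming a discrete time step ("tick") and byte-counted contents; the capacity b, rate r, and packet sizes are arbitrary illustrative values.

```python
class LeakyBucket:
    def __init__(self, b: int, r: int):
        self.capacity = b      # bucket size in bytes
        self.rate = r          # constant leak rate in bytes per tick
        self.level = 0         # bytes currently held in the bucket

    def arrive(self, size: int) -> bool:
        """Accept the packet if it fits in the bucket, otherwise drop it."""
        if self.level + size > self.capacity:
            return False
        self.level += size
        return True

    def tick(self):
        """One time unit passes: the bucket leaks at the constant rate r."""
        self.level = max(0, self.level - self.rate)

bucket = LeakyBucket(b=1500, r=500)
print([bucket.arrive(600) for _ in range(3)])   # [True, True, False]: third packet overflows
bucket.tick()                                   # 500 bytes leak out
print(bucket.arrive(600))                       # True: room again after the leak
```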
Token Bucket Algorithm
• The leaky bucket algorithm produces a rigid output at the average rate, independent of how
bursty the traffic is. In some applications, when large bursts arrive, the output should be allowed
to speed up. This calls for a more flexible algorithm, preferably one that never loses information.
The token bucket algorithm therefore finds its use in network traffic shaping or rate-limiting.
• It is a control algorithm that indicates when traffic may be sent, based on the presence of
tokens in the bucket. The bucket contains tokens, each of which represents permission to send a
predetermined amount of data. Tokens are removed from the bucket when a packet is sent.
• When tokens are present, a flow is allowed to transmit traffic; when no tokens remain, it is not.
Hence, a flow can transmit at up to its peak burst rate as long as sufficient tokens are in the
bucket.
• The token bucket algorithm adds a token to the bucket every 1/r seconds. The bucket can hold
at most b tokens. If a token arrives when the bucket is full, it is discarded. When a packet of
n bytes arrives and at least n tokens are available, n tokens are removed from the bucket and
the packet is forwarded to the network.
• When a packet of n bytes arrives but fewer than n tokens are available, no tokens are removed
from the bucket, and the packet is considered non-conformant. Non-conformant packets can be
dropped, queued for subsequent transmission when sufficient tokens have accumulated in the
bucket, or transmitted but marked as non-conformant, in which case they may be dropped later
if the network is overloaded.
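The sketch below models the token bucket rules just described, again assuming a discrete tick and treating one token as permission to send one byte; the values of b, r, and the packet sizes are arbitrary.

```python
class TokenBucket:
    def __init__(self, b: int, r: int):
        self.capacity = b      # maximum number of tokens the bucket can hold
        self.rate = r          # tokens added per tick
        self.tokens = b        # start with a full bucket

    def tick(self):
        """Tokens accumulate at rate r, but never beyond the bucket capacity b."""
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def send(self, n: int) -> bool:
        """Transmit a packet of n bytes only if n tokens have accumulated."""
        if n <= self.tokens:
            self.tokens -= n
            return True
        return False           # non-conformant: drop, queue, or mark

tb = TokenBucket(b=1000, r=200)
print(tb.send(800), tb.send(800))   # True False: the burst exhausts the tokens
tb.tick(); tb.tick(); tb.tick()     # three ticks add 600 tokens (200 each)
print(tb.send(600))                 # True: a saved-up burst can now be sent
```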
TCP and UDP Protocols
The transport layer is represented by two protocols: TCP and UDP.
TCP
• TCP stands for Transmission Control Protocol.
• It provides full transport layer services to applications.
• It is a connection-oriented protocol, meaning that a connection is established between both
ends of the transmission. To create the connection, TCP sets up a virtual circuit
between sender and receiver for the duration of the transmission.
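A minimal sketch of TCP's connection-oriented service using Python's standard socket API: the connection is established inside connect()/accept(), and the two processes then exchange a reliable, ordered byte stream. The port number 5001, the reply format, and the short sleep are arbitrary illustrative choices.

```python
import socket
import threading
import time

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 5001))
        srv.listen(1)
        conn, _ = srv.accept()             # connection established here
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"got: " + data)  # reliable, in-order reply

t = threading.Thread(target=server, daemon=True)
t.start()
time.sleep(0.2)                            # give the server a moment to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 5001))       # establish the connection
    cli.sendall(b"hello over TCP")
    print(cli.recv(1024))                  # b'got: hello over TCP'
t.join()
```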
UDP
• UDP stands for User Datagram Protocol.
• UDP is a simple protocol and it provides non-sequenced transport functionality.
• UDP is a connectionless protocol.
• This type of protocol is used when reliability and security are less important than speed
and size.
• UDP is an end-to-end transport level protocol that adds transport-level addresses,
checksum error control, and length information to the data from the upper layer.
• The packet produced by the UDP protocol is known as a user datagram.
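For contrast with the TCP example above, the sketch below shows UDP's connectionless service with Python's socket API: there is no handshake and no connection state; a datagram is simply addressed and sent. Port 5002 is an arbitrary choice.

```python
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 5002))

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"a user datagram", ("127.0.0.1", 5002))   # no connection needed

data, addr = receiver.recvfrom(1024)
print(data, "from", addr)

receiver.close()
sender.close()
```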
User Datagram Format
• Source port address: It defines the address of the application process that has delivered the message. The
source port address is a 16-bit field.
• Destination port address: It defines the address of the application process that will receive the message.
The destination port address is a 16-bit field.
• Total length: It defines the total length of the user datagram in bytes. It is a 16-bit field.
• Checksum: The checksum is a 16-bit field which is used in error detection.
The size of the UDP header is 8 bytes (16 bits each for the source port, destination port,
length, and checksum); it is significantly smaller than the TCP header.
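The sketch below packs the 8-byte UDP header described above with Python's struct module: four 16-bit fields in network byte order. The ports and payload are arbitrary, and the checksum is left as zero, which UDP over IPv4 interprets as "no checksum computed".

```python
import struct

payload = b"hello"
src_port, dst_port = 5002, 53
length = 8 + len(payload)                    # header (8 bytes) + data

# !HHHH = four unsigned 16-bit fields in network (big-endian) byte order
header = struct.pack("!HHHH", src_port, dst_port, length, 0)
datagram = header + payload

print(len(header), "byte header,", len(datagram), "byte datagram")  # 8, 13
```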
Quality of Service Model
• Quality-of-Service (QoS) refers to traffic control mechanisms that seek either to differentiate
performance based on application or network-operator requirements or to provide predictable or
guaranteed performance to applications, sessions, or traffic aggregates. QoS is basically
described in terms of packet delay and losses of various kinds.
• QoS is primarily used to control resources like bandwidth, equipment, wide-area facilities,
etc. It enables more efficient use of network resources.
QoS requirements can be specified as below:
• Congestion Management: QoS allows a router to put packets into different queues.
• Queue Management: The queues in a buffer can fill and overflow. If a queue is full, an arriving
packet is dropped, and the router cannot prevent it from being dropped.
• Elimination of overhead bits: QoS can also increase efficiency by removing unnecessary overhead
bits.
• Traffic shaping and policing: Shaping can prevent buffer overflow by limiting the full bandwidth
potential of an application's packets. In many network topologies, a high-bandwidth link
connected to a low-bandwidth link at a remote site can overflow the low-bandwidth connection.
Shaping is therefore used to smooth the traffic flowing from the high-bandwidth link towards the
low-bandwidth link, so that the low-bandwidth link does not overflow.
PERFORMANCE ISSUES
• Performance issues are very important in computer networks. When hundreds or
thousands of computers are interconnected, complex interactions, with unforeseen
consequences, are common.
We will look at six aspects of network performance:
1. Performance problems.
2. Measuring network performance.
3. Host design for fast networks.
4. Fast segment processing.
5. Header compression.
6. Protocols for ‘‘long fat’’ networks.
1. Performance problems
• Some performance problems, such as congestion, are caused by temporary resource
overloads. If more traffic suddenly arrives at a router than the router can handle,
congestion will build up and performance will suffer.
• Performance also degrades when there is a structural resource imbalance. For example,
if a gigabit communication line is attached to a low-end PC, the poor host will not be
able to process the incoming packets fast enough and some will be lost. These packets
will eventually be retransmitted, adding delay, wasting bandwidth, and generally
reducing performance.
2. Measuring network performance
Measurements can be made in different ways and at many locations. A few basic rules are:
• Make Sure That the Sample Size Is Large Enough
• Be Sure That Nothing Unexpected Is Going On during Your Tests
• Be Careful When Using a Coarse-Grained Clock
• Be Careful about Extrapolating (i.e., predicting) the Results
3. Host design for fast networks
There are some rules of thumb for software implementation of network protocols on
hosts.
• Host Speed Is More Important Than Network Speed
• Reduce Packet Count to Reduce Overhead
• Minimize Data Touching
• Minimize Context Switches (e.g., from kernel mode to user mode)
4. Fast segment processing
• The key to fast segment processing is to separate out the normal, successful case (one-
way data transfer) and handle it specially.
• Many protocols tend to emphasize what to do when something goes wrong (e.g., a
packet getting lost), but to make the protocols run fast, the designer should aim to
minimize processing time when everything goes right. Minimizing processing time when
an error occurs is secondary.
5. Header compression.
• Header compression matters when we consider performance on wireless and other networks in
which bandwidth is limited. Reducing software overhead can help mobile computers run more
efficiently, but it does nothing to improve performance when the network links are the bottleneck.
• Header compression is used to reduce the bandwidth taken over links by higher-layer protocol
headers. Specially designed schemes are used instead of general-purpose methods, because
headers are short, so they do not compress well individually, and decompression requires all
prior data to be received.
6. Protocols for ‘‘long fat’’ networks.
• Since the 1990s, there have been gigabit networks that transmit data over large distances.
Because of the combination of a fast network, or ‘‘fat pipe,’’ and long delay, these networks are
called long fat networks.
– The first problem is that many protocols use 32-bit sequence numbers. When the Internet began, the lines
between routers were mostly 56-kbps leased lines, so a host blasting away at full speed took over 1 week
to cycle through the sequence numbers.
– A second problem is that the size of the flow control window must be greatly increased.
– The conclusion that can be drawn here is that for good performance, the receiver’s window must be at
least as large as the bandwidth-delay product, and preferably somewhat larger, since the receiver may not
respond instantly.
• Bandwidth delay product is a measurement of how many bits can fill up a network link.
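A worked example of the bandwidth-delay product, using illustrative numbers (a 1 Gbit/s path with a 100 ms round-trip time): the product tells us how much data must be in flight, and hence how large the receiver's window must be, to keep such a long fat pipe full.

```python
bandwidth_bps = 1_000_000_000      # 1 Gbit/s link
rtt_s = 0.100                      # 100 ms round-trip time

bdp_bits = bandwidth_bps * rtt_s   # bits that fit "in the pipe" at once
print(bdp_bits / 8 / 1e6, "MB must be in flight to fill the link")   # 12.5 MB
```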