
TRANSPORT LAYER

MODULE 3
Transport Layer
• The transport layer in the TCP/IP suite sits between the
application layer and the network layer.
• It provides services to the application layer and receives
services from the network layer.
• It provides logical communication between application processes
running on different hosts.
• It is the end-to-end logical vehicle for transferring data from
one point to another in the Internet.
Network vs Transport Layer
• The network layer is responsible for communication at the
computer level (host-to-host communication).
• A network-layer protocol can deliver the message only to the
destination computer.
• The message still needs to be handed to the correct process; this
is done by a transport-layer protocol.
• A transport-layer protocol is responsible for delivery of the
message to the appropriate process.
• The destination IP address defines the host among the different
hosts in the world.
• After the host has been selected, the port number defines
one of the processes on this particular host.
Q: differentiate TCP and UDP
Transport layer services
• Process-to-Process Communication
• Addressing: Port Numbers
• Encapsulation and Decapsulation
• Multiplexing and Demultiplexing
• Flow Control
• Error Control
• Congestion Control
Addressing port numbers
• For communication, we must define the local host, local process,
remote host, and remote process.
• The local host and the remote host are identified using IP addresses.
• To identify the processes, we need a second identifier, called a port
number.
• In the TCP/IP protocol suite, port numbers are integers between
0 and 65,535 (16 bits).
• The client program identifies itself with a port number, called an
ephemeral port number.
• The word ephemeral means short-lived; it is used because the
life of a client is normally short.
• An ephemeral port number is recommended to be greater than
1,023.
• The server process must also define itself with a port number.
• This port number cannot be chosen randomly.
• TCP/IP has decided to use universal port numbers for servers; these
are called well-known port numbers.

• ICANN has divided the port numbers into three ranges: well-known,
registered, and dynamic (or private).
• The ports ranging from 0 to 1,023 are assigned and controlled by
ICANN. These are the well-known ports.
• The ports ranging from 1,024 to 49,151 are not assigned or controlled
by ICANN. They can only be registered with ICANN to prevent
duplication. These are known as registered ports.
• The ports ranging from 49,152 to 65,535 are neither controlled nor
registered. They can be used as temporary or private port numbers.
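The three ICANN ranges can be sketched as a small classifier; `port_range` is a hypothetical helper name for illustration, not a standard API.

```python
def port_range(port):
    """Classify a TCP/UDP port number into its ICANN range."""
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16-bit: 0..65535")
    if port <= 1023:
        return "well-known"    # assigned and controlled by ICANN
    if port <= 49151:
        return "registered"    # may be registered to avoid duplication
    return "dynamic"           # temporary/private (ephemeral) ports

print(port_range(80))     # well-known (HTTP)
print(port_range(8080))   # registered
print(port_range(51000))  # dynamic
```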
Encapsulation and Decapsulation
• To send a message from one process to another, the transport-
layer protocol encapsulates and decapsulates messages.
• Encapsulation happens at the sender site.
• When a process has a message to send, it passes the message to
the transport layer along with a pair of socket addresses and
some other pieces of information, which depend on the
transport-layer protocol.
• The transport layer receives the data and adds the transport-
layer header.
• The packets at the transport layers in the Internet are called
user datagrams, segments, or packets, depending on what
transport-layer protocol we use.
• Decapsulation happens at the receiver site.
• When the message arrives at the destination transport layer, the
header is dropped and the transport layer delivers the message to the
process running at the application layer.
• The sender socket address is passed to the process in case it needs to
respond to the message received.
Multiplexing and Demultiplexing
• Whenever an entity accepts items from more than one source,
this is referred to as multiplexing (many-to-one);
• whenever an entity delivers items to more than one destination,
this is referred to as demultiplexing (one-to-many).
• The transport layer at the source performs multiplexing; the
transport layer at the destination performs demultiplexing.
• Even when only one message is in transit, the receiving side still
uses a demultiplexer to hand it to the right process.
Flow Control
• If the items are produced faster than they can be consumed, the
consumer can be overwhelmed and may need to discard some
items.
• Flow control is related to this issue.
• We need to prevent losing the data items at the consumer site.
• If the sender delivers items whenever they are produced, without
a prior request from the consumer, the delivery is referred to as
pushing.
• If the producer delivers the items only after the consumer has
requested them, the delivery is referred to as pulling.
• One of the solutions is normally to use two buffers: one at the
sending transport layer and the other at the receiving transport layer.
• A buffer is a set of memory locations that can hold packets at the
sender and receiver.
• The flow control communication can occur by sending signals
from the consumer to the producer.
• When the buffer of the sending transport layer is full, it informs the
application layer to stop passing chunks of messages; when there are
some vacancies, it informs the application layer that it can pass
message chunks again.
• When the buffer of the receiving transport layer is full, it informs the
sending transport layer to stop sending packets. When there are some
vacancies, it informs the sending transport layer that it can send
packets again.
Error Control
• In the Internet, since the underlying network layer (IP) is
unreliable, we need to make the transport layer reliable if the
application requires reliability.
• Reliability can be achieved by adding error control services to
the transport layer.
• Error control at the transport layer is responsible for:
• 1. Detecting and discarding corrupted packets.
• 2. Keeping track of lost and discarded packets and
resending them.
• 3. Recognizing duplicate packets and discarding them.
• 4. Buffering out-of-order packets until the missing packets arrive.
Congestion Control
• Congestion in a network may occur if the load on the network (the
number of packets sent to the network) is greater than the capacity of
the network (the number of packets the network can handle).
• Congestion control refers to the mechanisms and techniques that
control congestion and keep the load below the capacity.
• Congestion in a network or internetwork occurs because routers and
switches have queues: buffers that hold packets before and
after processing.
• A router, for example, has an input queue and an output queue for
each interface.
• If a router cannot process the packets at the same rate at which
they arrive, the queues become overloaded and congestion
occurs.
• Congestion at the transport layer is actually the result of congestion
at the network layer.
Multiplexing and Demultiplexing
• The transport layer is responsible for delivering data segments to
the appropriate applications running in a host.
• There can be more than one socket at the receiving host, and
each socket has a unique identifier.
• The identifier format depends on whether the socket is UDP or TCP.
• Each transport-layer segment has multiple fields.
• Gathering data chunks at the source host, encapsulating the chunks
with header information to create segments, and passing the segments
to the network layer is called multiplexing.
• At the receiving end, the transport layer examines these fields to
identify the receiving socket and directs the segment to that socket.
The job of delivering a segment to the correct socket is known as
demultiplexing.
• For multiplexing and demultiplexing, the transport layer requires that:
1. Each socket have a unique identifier.
2. Each segment have fields that indicate the socket to which the
segment is to be delivered, namely the source and destination port
numbers. Port numbers range from 0 to 65,535.
• A host receives IP datagrams; each datagram has a source IP
address and a destination IP address.
• Each datagram carries one transport-layer segment, and each
segment has a source and a destination port number.
Connectionless Multiplexing and Demultiplexing
• DatagramSocket mySocket1 = new DatagramSocket(12534);
• recall: when creating datagram to send into UDP socket, must
specify destination IP address and destination port number
• When a host receives a UDP segment, it checks the destination port
number in the segment and directs the UDP segment to the socket with
that port number.
• IP datagrams with the same destination port number but different
source IP addresses and/or source port numbers will be directed to the
same socket at the destination.
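A minimal sketch of connectionless demultiplexing, assuming a plain dictionary in place of real sockets: a UDP socket is selected by destination port alone, so segments from different sources land in the same socket. The names `udp_bind` and `udp_demux` are made up for illustration.

```python
udp_sockets = {}  # dest port -> list of delivered (src_ip, src_port, data)

def udp_bind(port):
    """Create a pretend UDP socket bound to `port`."""
    udp_sockets[port] = []

def udp_demux(src_ip, src_port, dst_port, data):
    """Deliver a segment using only its destination port."""
    if dst_port in udp_sockets:
        udp_sockets[dst_port].append((src_ip, src_port, data))

udp_bind(12534)
udp_demux("10.0.0.1", 40000, 12534, b"hello")
udp_demux("10.0.0.2", 50000, 12534, b"world")
# both segments reach the same socket despite different sources
print(len(udp_sockets[12534]))  # 2
```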
Connection Oriented Multiplexing and Demultiplexing
• A TCP socket is identified by a 4-tuple (source IP address, source
port number, destination IP address, destination port number).
• The client first establishes a connection through the server's
welcoming socket.
• Once the connection is established, TCP segments that match all
four values will be demultiplexed to the resulting connection socket.
UDP
• UDP takes a message from the application layer, adds source and
destination port numbers along with two other fields, and passes the
resulting segment to the network layer.
• There is no three-way handshake.
• DNS uses UDP to avoid TCP connection-establishment delays.
• Q: Why prefer UDP over TCP? (4 points)
UDP segment structure
• Source Port: a 2-byte field used to identify the port number of the
source.
• Destination Port: a 2-byte field used to identify the destination
port of the packet.
• Length: the length of the UDP segment, including the header and
the data. It is a 16-bit field.
• Checksum: a 2-byte field. It is the 16-bit one's complement of the
one's complement sum of the UDP header, the pseudo-header of
information from the IP header, and the data, padded with zero octets
at the end (if necessary) to make a multiple of two octets.
CHECKSUM
• It is used to check whether bits within the UDP segment have been
altered as the segment moved from source to destination.
Sender side:
• Treat the segment contents as a sequence of 16-bit integers.
• Add all the 16-bit words, wrapping any carry out of the top bit back
into the sum. Call the result sum.
• Checksum: the 1's complement of sum. (In 1's complement, all 0s are
converted into 1s and all 1s into 0s.)
• The sender puts this checksum value in the UDP checksum field.

Receiver side:
• Add all the received 16-bit words, then add the sender's checksum to
the sum.
• If the result contains any 0 bit, an error is detected and the packet is
discarded by the receiver; if all bits are 1, no error is detected.
As an example, suppose that we have the 48-bit stream
011001100110011001010101010101010000111100001111.
This bit stream is divided into 16-bit integer words:
0110011001100110
0101010101010101
0000111100001111
The sum of the first two of these 16-bit words is:
0110011001100110
0101010101010101
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
1011101110111011
Adding the third word to the above sum gives
1011101110111011
0000111100001111
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
1100101011001010 (sum of all three words)
• Now, to calculate the checksum, the 1's complement of the sum is
taken: all 1s become 0s and all 0s become 1s. So, the checksum at the
sender side is 0011010100110101.

• Now at the receiver side, all the words are added again, and the
sender's checksum is added to the sum.

• If there is no error, the receiver's result will be 1111111111111111.

• If any 0 bit is present in the result, there is a checksum error, and
the packet is discarded.
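The sender-side computation can be sketched and checked against the worked example above; `ones_complement_sum16` and `udp_checksum` are hypothetical helper names, not a library API.

```python
def ones_complement_sum16(words):
    """Add 16-bit words with end-around carry (carries fold back in)."""
    s = 0
    for w in words:
        s += w
        s = (s & 0xFFFF) + (s >> 16)  # wrap any carry out of bit 15
    return s

def udp_checksum(words):
    """Checksum = 1's complement of the 1's-complement sum."""
    return ~ones_complement_sum16(words) & 0xFFFF

words = [0b0110011001100110, 0b0101010101010101, 0b0000111100001111]
print(format(udp_checksum(words), "016b"))  # 0011010100110101
```

The receiver's check follows the same path: adding the words and the checksum together yields all 1s when no error occurred.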
Principles of reliable data transfer

• Underlying network layer provides unreliable transfer of data.


• A reliable transfer protocol is added at the transport layer to
support reliable transfer.
• Transfer speed is reduced when we make a channel reliable.
• For simplicity, we consider unidirectional data transfer.
Reliable data transfer over a perfectly reliable channel
rdt1.0
• Underlying channel is completely reliable.
• it assumes that the underlying channel has:
1. No bit errors and
2. No loss of packets
• The transfer of data is shown using FSM (finite state machine)
• Only one state for sender and receiver.
• Sender Side: When the application layer has data to send, rdt
simply accepts the data from the upper layer via the
rdt_send(data) event. It then puts the data into a packet (via
the make_pkt(packet, data) event) and sends the packet into
the channel using the udt_send(packet) event.
• Receiving Side: On receiving a packet from the channel via the
rdt_rcv(packet) event, rdt extracts the data from the packet (via
extract(packet, data)) and passes the data to the application layer
using the deliver_data(data) event.
Reliable data transfer over a channel with bit errors: rdt 2.0
• We assume that all transmitted packets are received in the order
in which they were sent.
• Here we use positive and negative acknowledgments.
• In network settings, reliable data transfer protocols based on
retransmission are known as ARQ (Automatic Repeat reQuest)
protocols.
• ARQ requires 3 capabilities:
1. Error detection: a mechanism that allows the receiver to detect
when bit errors have occurred. This requires extra bits to be sent
from sender to receiver.
2. Receiver feedback: ACK and NAK packets are sent from receiver to
sender to report whether an error occurred. These packets can in
principle be only 1 bit long: a 0 value could indicate NAK and a 1
value could indicate ACK.
3. Retransmission: a packet that is received in error at the
receiver will be retransmitted by the sender.
• It is important to note that when the sender is in the wait-for-
ACK-or-NAK state, it cannot get more data from the upper layer;
that will only happen after the sender receives an ACK and leaves
this state. Thus, the sender will not send a new piece of data until
it is sure that the receiver has correctly received the current
packet. Because of this behavior, the protocol is also known as a
Stop-and-Wait protocol.
rdt 2.1
What if ACK or NAK packets get corrupted? Methods to handle it:
1. The addition of extra checksum bits to detect the corrupted
packet and also to recover the packet. This is good but needs
extra data and processing of these packet headers.
2. Having the sender resend the data packet whenever it receives a
corrupted acknowledgment. But this method introduces duplicate
packets at the receiver end, which is a flaw because the receiver
cannot tell a retransmission from new data.
3. Adding a field containing a sequence bit, so the receiver can
check whether a packet is a duplicate or a new transmission.
• The logic of the sequence number: the sender sends packets with
sequence numbers '0' and '1' alternately, so a retransmission of the
same packet can be recognized.
• State-1: In the figure, the top-left state, called 'Wait for 0', is
the start state. It waits until it receives a message from the upper
application layer. When a message is received, the transport layer
adds a header with sequence number '0' and sends the packet into the
network.
• State-2: After the packet is sent into the network, the sender moves
to this state. If the received acknowledgment is corrupt, or it is a
negative acknowledgment, the sender resends the packet. Otherwise it
moves to the next state.
• State-3 and State-4 are the same as States 1 and 2, but the packet
is sent with sequence number '1'.
• State-1 (Left): If the receiver gets a corrupted packet, it sends a
negative acknowledgment to request a resend. If it receives a
non-corrupt packet with sequence number '1' (a duplicate), it sends a
positive acknowledgment. If it receives a non-corrupt packet with
sequence number '0', it delivers the data and moves to the next state.

• State-2 (Right): If the receiver gets a corrupted packet, it sends a
negative acknowledgment to request a resend. If it receives a
non-corrupt packet with sequence number '0' (a duplicate), it sends a
positive acknowledgment. If it receives a non-corrupt packet with
sequence number '1', it delivers the data and moves to the next state.
rdt 2.2
• Shortcomings of rdt 2.1:
1. No duplicate-packet management.
2. Does not work over a channel with packet loss.

• In rdt 2.2 we do not use NAKs.
• Instead of sending a NAK, the receiver sends an ACK for the last
correctly received packet. A sender that receives two ACKs for the
same packet knows that the receiver did not correctly receive the
packet following the packet being ACKed twice.
• State-1 (Top-Left): The top-left state, called 'Wait for 0', is the
start state. It waits until it receives a message from the upper
application layer. When a message is received, the transport layer
adds a header with sequence number '0' and sends the packet into the
network.

• State-2 (Top-Right): In this state, the protocol checks whether the
acknowledgment is corrupt or carries sequence number '1'. In either
case the sender resends the packet, since the acknowledged sequence
number does not match the one transmitted. If the sender receives a
non-corrupt acknowledgment with the correct sequence number, it moves
to the next state.

• State-3 (Bottom-Right): The state called 'Wait for 1' waits until it
receives a message from the upper application layer. When a message is
received, the transport layer adds a header with sequence number '1'
and sends the packet into the network.

• State-4 (Bottom-Left): In this state, the protocol checks whether
the acknowledgment is corrupt or carries sequence number '0'. In
either case the sender resends the packet, since the acknowledged
sequence number does not match the one transmitted. If the sender
receives a non-corrupt acknowledgment with the correct sequence
number, it moves to the next state.
• State-1 (Left): If the received packet is corrupt or has sequence
number '1', the receiver sends an acknowledgment with sequence number
'1', which tells the sender that the packet was not received in order.
If the packet is not corrupt and has sequence number '0', the receiver
extracts the data, sends an acknowledgment with sequence number '0',
and moves to the next state.
• State-2 (Right): If the received packet is corrupt or has sequence
number '0', the receiver sends an acknowledgment with sequence number
'0', which tells the sender that the packet was not received in order.
If the packet is not corrupt and has sequence number '1', the receiver
extracts the data, sends an acknowledgment with sequence number '1',
and moves to the next state.
rdt 3.0
Shortcoming of rdt 2.2:
rdt 2.2 does not address packet loss.

• rdt 3.0 introduces a timer at the sender side: if an acknowledgment
is not received within a particular time, the sender resends the
packet. This solves the issue of packet loss.
• State-1 (Top-Left): This is the start state of the sender's FSM,
called 'Wait for call 0 from above'. It waits until it receives a
message from the application layer. The packet is then created with
sequence number '0' in its header and the message as its payload.
Finally, the packet is pushed into the network, the timer is started,
and execution moves to the next state.
• State-2 (Top-Right): This state confirms whether the receiver has
received the packet. It checks that the received acknowledgment is
not corrupt, carries sequence number '0', and arrives before the
timer expires. If these criteria are satisfied, execution moves to
the next state; otherwise the packet is resent.
• State-3 (Bottom-Right): This state of the sender's FSM is called
'Wait for call 1 from above'. It waits until it receives a message
from the application layer. The packet is created with sequence
number '1' in its header and the message as its payload. Finally, the
packet is pushed into the network, the timer is started, and
execution moves to the next state.
• State-4 (Bottom-Left): This state confirms whether the receiver has
received the packet. It checks that the received acknowledgment is
not corrupt, carries sequence number '1', and arrives before the
timer expires. If these criteria are satisfied, execution moves to
the next state; otherwise the packet is resent.
• State-1 (Left): This is the first state of the receiver's FSM,
called 'Wait for 0 from below'. It checks whether the received packet
has sequence number '0' and is not corrupted. If these conditions are
satisfied, it creates an acknowledgment packet with sequence number
'0' and pushes it into the network, signifying that the correct
packet was received, and execution moves to the next state. Otherwise
it creates an acknowledgment packet with sequence number '1' and
pushes it into the network, signifying that the correct packet was
not received.
• State-2 (Right): This is the second state of the receiver's FSM,
called 'Wait for 1 from below'. It checks whether the received packet
has sequence number '1' and is not corrupted. If these conditions are
satisfied, it creates an acknowledgment packet with sequence number
'1' and pushes it into the network, signifying that the correct
packet was received, and execution moves to the next state. Otherwise
it creates an acknowledgment packet with sequence number '0' and
pushes it into the network, signifying that the correct packet was
not received.
Pipelined reliable data transfer protocols
• Pipelining: The sender is allowed to send multiple packets
without waiting for acknowledgments.
• Consequences of pipelining in RDT:
1. The sender and receiver sides of the protocol need to buffer more
than one packet.
2. The range of sequence numbers has to be increased, since there may
be multiple unacknowledged packets in flight.
3. How much buffering and how large a sequence-number range are
needed depends on the manner in which the protocol responds to lost,
corrupted, and overly delayed packets. The two basic approaches are
Go-Back-N and Selective Repeat.
Go Back N Protocol
• The sender is allowed to transmit multiple packets without waiting
for acknowledgments, but is constrained to have no more than some
maximum allowable number N of unacknowledged packets in the pipeline.
• 'Go-Back-3' means up to 3 frames can be sent at a time without
receiving an acknowledgment.

• base: sequence number of the oldest unacknowledged packet.
• nextseqnum: smallest unused sequence number.
• [0, base-1]: packets that have already been transmitted and
acknowledged.
• [base, nextseqnum-1]: packets that have been sent but not yet
acknowledged.
• [nextseqnum, base+N-1]: packets that can be sent immediately,
should data arrive from the upper layer.
• [base+N and above]: sequence numbers that cannot be used until an
unacknowledged packet currently in the pipeline has been
acknowledged.
• The range of permissible sequence numbers for transmitted but not
yet acknowledged packets can be viewed as a window of size N over
the range of sequence numbers.
• As the protocol operates, the window slides over the
sequence-number space.
• N is referred to as the window size, and GBN is called a
sliding-window protocol.
• A packet's sequence number is carried in a fixed-length field in
the packet header.
• If k is the number of bits in the packet sequence-number field, the
range of sequence numbers is [0, 2^k - 1].
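The intervals above can be sketched directly, using the `base`, `nextseqnum`, and `N` names from the text; `gbn_intervals` is a made-up helper for illustration.

```python
def gbn_intervals(base, nextseqnum, N):
    """Partition of sequence-number space as seen by a GBN sender."""
    return {
        "acked":        list(range(0, base)),              # [0, base-1]
        "sent_unacked": list(range(base, nextseqnum)),     # [base, nextseqnum-1]
        "usable":       list(range(nextseqnum, base + N)), # [nextseqnum, base+N-1]
    }

# Example: window size 4, packets 0-2 acked, 3-4 in flight.
w = gbn_intervals(base=3, nextseqnum=5, N=4)
print(w["sent_unacked"])  # [3, 4]
print(w["usable"])        # [5, 6]
```

Sequence numbers base+N and above (7 here) stay unusable until the window slides.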
• The key to Go-Back-N is that the sender can send several packets
before receiving acknowledgments, but the receiver's window size is
1: it does not buffer out-of-order packets.
• We keep a copy of the sent packets until the acknowledgments arrive.
• several data packets and acknowledgments can be in the channel at
the same time.
• if the acknowledgment number (ackNo) is 7, it means all packets with
sequence number up to 6 have arrived, safe and sound, and the
receiver is expecting the packet with sequence number 7.
• An advantage is the simplicity of receiver buffering: the receiver
need not buffer any out-of-order packets.
• But the sender must maintain the upper and lower bounds of its
window and the position of nextseqnum.
• This protocol is inefficient if the underlying network protocol loses a
lot of packets.
• In Go-Back-N, N determines the sender's window size, and the size
of the receiver's window is always 1.
• Each time a single packet is lost or corrupted, the sender resends all
outstanding packets, even though some of these packets may have
been received safe and sound but out of order.
• If the network layer is losing many packets because of congestion in
the network, the resending of all of these outstanding packets makes
the congestion worse, and eventually more packets are lost.
• This has an avalanche effect that may result in the total collapse of
the network.
• Refer to the FSM states of GBN and SR in the textbook.
Selective Repeat

• It resends only selected packets: those that are actually lost.
• The Selective-Repeat protocol uses two windows: a send window and
a receive window.
• The receive window is the same size as the send window.
• The receive window is the same size as the send window.
• ackNo defines the sequence number of one single packet that is
received safe and sound.
• It avoids unnecessary retransmissions by resending only those
packets that it suspects were lost or received in error.
• Out-of-order packets are buffered until any missing packets are
received, at which point a batch of packets can be delivered in
order to the upper layer.
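The receiver-side buffering just described can be sketched with in-memory dictionaries rather than real packets; `SRReceiver` is a hypothetical class name for this sketch.

```python
class SRReceiver:
    """Selective-Repeat receiver: buffer out-of-order packets, deliver in order."""
    def __init__(self):
        self.expected = 0     # next in-order sequence number
        self.buffer = {}      # seq -> data for out-of-order packets
        self.delivered = []   # data handed up to the application, in order

    def receive(self, seq, data):
        self.buffer[seq] = data
        # deliver any now-contiguous run starting at `expected`
        while self.expected in self.buffer:
            self.delivered.append(self.buffer.pop(self.expected))
            self.expected += 1

r = SRReceiver()
r.receive(1, "b")   # out of order: buffered, nothing delivered yet
r.receive(2, "c")
r.receive(0, "a")   # missing packet arrives: 0, 1, 2 delivered together
print(r.delivered)  # ['a', 'b', 'c']
```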
TCP: RTT Estimation and Timeout
• SampleRTT: the amount of time between when a segment is sent and
when its acknowledgment is received.
• Only one SampleRTT measurement is taken at a time, for one
transmitted segment.
• TCP never computes a SampleRTT for a retransmitted segment.
• EstimatedRTT is a weighted average of the SampleRTT values:
• EstimatedRTT = (1 - α) * EstimatedRTT + α * SampleRTT
• α = 0.125
• DevRTT measures how much SampleRTT varies from EstimatedRTT:
• DevRTT = (1 - β) * DevRTT + β * |SampleRTT - EstimatedRTT|
• β = 0.25
• The timeout interval should be greater than or equal to
EstimatedRTT, or unnecessary retransmissions would take place:
• TimeoutInterval = EstimatedRTT + 4 * DevRTT
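The estimator defined by these formulas can be sketched as follows; the starting values for EstimatedRTT and DevRTT are assumptions for the example, not mandated by TCP.

```python
ALPHA, BETA = 0.125, 0.25  # recommended weights from the text

def update(est_rtt, dev_rtt, sample):
    """One EWMA update per new SampleRTT, returning the new timeout too."""
    est_rtt = (1 - ALPHA) * est_rtt + ALPHA * sample
    dev_rtt = (1 - BETA) * dev_rtt + BETA * abs(sample - est_rtt)
    return est_rtt, dev_rtt, est_rtt + 4 * dev_rtt  # TimeoutInterval

est, dev = 100.0, 0.0  # assumed starting values, in ms
est, dev, timeout = update(est, dev, sample=108.0)
print(est, dev, timeout)  # 101.0 1.75 108.0
```

A single 108 ms sample nudges the 100 ms estimate up by only 1 ms, showing how the EWMA smooths out jitter.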
Reliable Data Transfer
• There are 3 major events for reliable data transmission in TCP:
1. Data received from application: TCP receives data from the
application, encapsulates it in a segment, and passes the segment to
IP. Each segment has a sequence number. TCP starts the timer when
the segment is passed to IP.
2. Timeout: TCP retransmits the segment when a timeout occurs and
restarts the timer.
3. ACK receipt: TCP uses cumulative ACKs. TCP compares the ACK value
y with SendBase. SendBase is the sequence number of the oldest
unacknowledged byte; an ACK of y acknowledges receipt of all bytes
before byte number y.
• If y > SendBase, the ACK is acknowledging one or more previously
unacknowledged segments, and TCP updates SendBase.

• 3 scenarios to understand the working of TCP:

1. Host A sends a segment with sequence number 92 and waits for a
segment from host B with ACK number 100. The segment is received
at B, but the ACK from B to A is lost and a timeout occurs, so
host A resends it. Host B identifies the resent segment as a
retransmission and discards it.
2. Host A sends two segments back to back with sequence numbers 92
and 100 and waits for acknowledgments from B with ACK numbers 100
and 120, respectively. Both segments are received at B, but both
ACKs are lost; a timeout occurs and the first segment is
retransmitted. As long as the ACK for the second segment arrives
before the new timeout, the second segment will not be
retransmitted.
3. The same scenario as above, but only the ACK of the first segment
is lost and the ACK of the second (ACK 120) is received. Host A
then knows everything up to byte 119 has been received and will
not retransmit, even though the first ACK was lost.
TCP modifications

1. Doubling the timeout interval: if the timeout interval associated
with the oldest not-yet-acknowledged segment is 0.75 sec when the
timer first expires, TCP retransmits the segment and sets the new
expiration time to 1.5 sec, then 3.0 sec, and so on. This
modification provides a limited form of congestion control.
2. Fast retransmit: using duplicate ACKs, the sender can detect
packet loss well before a timeout. When a TCP receiver receives a
segment with a sequence number larger than the next expected
in-order one, it understands that a segment is missing. Since TCP
does not use NAKs, it re-sends an ACK for the last in-order
segment.
• When the sender receives three duplicate ACKs for the same segment
before the timeout, it takes this as an indication that the segment
following the segment ACKed has been lost, and retransmits it.
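The triple-duplicate-ACK trigger can be sketched as a simple counter over a stream of ACK numbers; `fast_retransmit` is a hypothetical helper, and real TCP tracks considerably more state than this.

```python
def fast_retransmit(acks):
    """Return the ack number to retransmit from after 3 duplicate ACKs,
    or None if the trigger never fires."""
    dup_count = 0
    last_ack = None
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == 3:   # third duplicate of the same ACK
                return ack       # retransmit the segment starting here
        else:
            last_ack, dup_count = ack, 0
    return None

print(fast_retransmit([100, 120, 120, 120, 120]))  # 120
print(fast_retransmit([100, 120, 140]))            # None
```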
Flow Control
• TCP provides a flow-control service to its applications to
eliminate the possibility of the sender overflowing the receiver's
buffer.
• Flow control is a speed-matching service.
• TCP provides flow control by having the sender maintain a variable
called the receive window. It gives the sender an idea of how much
free buffer space is available at the receiver.
• Consider host A sending a large file to host B.
• RcvBuffer: the receive buffer of host B. The application at host B
reads from this buffer from time to time.
• LastByteRead: the sequence number of the last byte read from the
buffer by the application at host B.
• LastByteRcvd: the sequence number of the last byte that has arrived
from the network and been placed in host B's buffer.
• TCP does not permit overflow, so
• LastByteRcvd - LastByteRead <= RcvBuffer
• RcvWindow: the amount of spare room in the buffer:
• RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]
• Host A keeps track of 2 variables: LastByteSent and LastByteAcked.
• The difference between these two, LastByteSent - LastByteAcked, is
the amount of unacknowledged data that A has sent into the
connection.
• By keeping the amount of unacknowledged data less than RcvWindow,
host A ensures that it is not overflowing host B's buffer:
• LastByteSent - LastByteAcked <= RcvWindow
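The two inequalities can be sketched directly; the byte counts below are made-up example values.

```python
def rcv_window(rcv_buffer, last_byte_rcvd, last_byte_read):
    """Receiver's spare room: RcvBuffer - (LastByteRcvd - LastByteRead)."""
    return rcv_buffer - (last_byte_rcvd - last_byte_read)

def sender_may_send(last_byte_sent, last_byte_acked, rwnd):
    """Sender constraint: unacked data must fit within the receive window."""
    return (last_byte_sent - last_byte_acked) <= rwnd

# Example: 4096-byte buffer, 2000 bytes buffered but not yet read.
rwnd = rcv_window(rcv_buffer=4096, last_byte_rcvd=3000, last_byte_read=1000)
print(rwnd)                               # 2096
print(sender_may_send(5000, 3000, rwnd))  # True: 2000 <= 2096
print(sender_may_send(5200, 3000, rwnd))  # False: 2200 > 2096
```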
TCP Connection Management
• In order for a client application to establish a connection with a
server application, the client TCP needs to establish a TCP
connection with the TCP in the server.
• The following steps are involved:
1. The client-side TCP sends a special TCP segment to the server-side
TCP. It contains no application-layer data. The SYN flag bit in its
header is set to 1, so it is known as the SYN segment. The client
randomly chooses an initial sequence number, known as client_isn,
and puts this number in the sequence-number field. This segment is
encapsulated within an IP datagram and sent to the server.
2. When the IP datagram containing the TCP SYN segment arrives at the
server host, the server extracts the TCP SYN segment from the
datagram, allocates the TCP buffers and variables for the
connection, and sends a connection-granted segment to the client
TCP. This segment also contains no application-layer data, but it
carries 3 important values in its header: a) the SYN bit set to 1;
b) the acknowledgment field of the TCP segment header set to
client_isn + 1; c) server_isn, the server's own randomly chosen
initial sequence number. It is referred to as the SYNACK segment.
3. Upon receiving the SYNACK segment, the client allocates buffers
and variables for the connection. The client host then sends the
server another segment.
This last segment acknowledges the server's connection-granted
segment by putting server_isn + 1 in the acknowledgment field and
seq = client_isn + 1 in the sequence-number field, with the SYN bit
set to 0, since the connection is established.

• Once the connection is established, data segments can be exchanged.
Since three segments are sent to establish the connection, this
procedure is known as the 3-way handshake.
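The sequence/acknowledgment bookkeeping of the three segments can be sketched with arbitrary example ISNs (client_isn = 100 and server_isn = 500 are made-up values):

```python
client_isn, server_isn = 100, 500  # example initial sequence numbers

# Step 1: client -> server, SYN segment carrying client_isn
syn = {"SYN": 1, "seq": client_isn}

# Step 2: server -> client, SYNACK acknowledging client_isn + 1
synack = {"SYN": 1, "seq": server_isn, "ack": client_isn + 1}

# Step 3: client -> server, final ACK; SYN is now 0
ack = {"SYN": 0, "seq": client_isn + 1, "ack": server_isn + 1}

print(synack["ack"], ack["seq"], ack["ack"])  # 101 101 501
```

Each side acknowledges the other's ISN plus one, which is why the final segment carries seq = client_isn + 1 and ack = server_isn + 1.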
• To close the connection, the TCP client sends a TCP segment with
the FIN flag bit set to 1 in the segment header. The server first
sends an acknowledgment segment and then sends its own shutdown
segment, which has the FIN bit set to 1. The client acknowledges the
server's segment, and all the connection's resources are deallocated.
[Figure: TCP segment structure]
[Figure: TCP connection-management state diagram]
Principles of Congestion Control : Causes and cost of
congestion
• We consider 3 scenarios where congestion occurs:
1. Two senders, a router with infinite buffers: Hosts A and B each
send data into the connection at an average rate of λin bytes/sec.
Packets from hosts A and B pass through a router and over a shared
outgoing link of capacity R. The router has buffers that allow it to
store incoming packets when the packet-arrival rate exceeds the
outgoing link's capacity. In this first scenario, we assume that the
router has an infinite amount of buffer space.
Even in this (extremely) idealized scenario, we have already found
one cost of a congested network: large queuing delays are experienced
as the packet-arrival rate nears the link capacity.
2. Two senders and a router with finite buffers: Let us now slightly
modify scenario 1 in two ways. First, the amount of router buffering
is assumed to be finite. A consequence of this real-world assumption
is that packets will be dropped when they arrive at an already-full
buffer. Second, we assume that each connection is reliable: if a
packet containing a transport-level segment is dropped at the router,
the sender will eventually retransmit it.
Here we see another cost of a congested network: the sender must
perform retransmissions in order to compensate for packets dropped
(lost) due to buffer overflow.
• Let us also consider the case where the sender times out
prematurely and retransmits a packet that has been delayed in
the queue but not yet lost. In this case, both the original data
packet and the retransmission may reach the receiver. Of
course, the receiver needs only one copy of this packet and will
discard the retransmission. Here, then, is yet another cost of a
congested network: unneeded retransmissions by the sender
in the face of large delays cause routers to use their link
bandwidth to forward unneeded copies of a packet.
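The retransmission costs above can be summed up in one number: bandwidth spent on retransmitted copies, whether needed (buffer drops) or unneeded (premature timeouts), delivers no new data. The fraction used below is an illustrative assumption.

```python
# Rough numeric sketch of the retransmission cost: on a fully loaded link,
# every retransmitted copy displaces new data, shrinking useful throughput
# (goodput) accordingly.

def goodput(link_rate_bps, retransmit_fraction):
    """Bits of *new* data delivered per second on a saturated link."""
    assert 0.0 <= retransmit_fraction < 1.0
    return link_rate_bps * (1.0 - retransmit_fraction)

# If 1 in 4 transmitted segments is a retransmission, a 1 Mbps link
# carries only 750 kbps of new data.
print(goodput(1_000_000, 0.25))
```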
3. Four senders, routers with finite buffers, and multihop
paths: In our final congestion scenario, four hosts transmit
packets, each over overlapping two-hop paths, as shown in
Figure 3.47. We again assume that each host uses a
timeout/retransmission mechanism to implement a reliable
data transfer service, that all hosts have the same value of
λin, and that all router links have capacity R bytes/sec. So
here we see yet another cost of dropping a packet due to
congestion: when a packet is dropped along a path, the
transmission capacity that was used at each of the upstream
links to forward that packet to the point at which it is dropped
ends up having been wasted.
Approaches to congestion control
1. End-to-end congestion control: In an end-to-end approach to
congestion control, the network layer provides no explicit
support to the transport layer for congestion-control
purposes. Even the presence of network congestion must be
inferred by the end systems based only on observed network
behavior (for example, packet loss and delay).
2. Network-assisted congestion control: With network-assisted
congestion control, routers provide explicit feedback to the
sender and/or receiver regarding the congestion state of the
network. This feedback may be as simple as a single bit
indicating congestion at a link.
• More sophisticated feedback is also possible. For example, in ATM
Available Bit Rate (ABR) congestion control, a router informs the
sender of the maximum host sending rate it (the router) can support on
an outgoing link.
For network-assisted congestion control, congestion information is
typically fed back from the network to the sender in one of two ways.
1. Direct feedback may be sent from a network router to the sender. This
form of notification typically takes the form of a choke packet.
2. The second and more common form of notification occurs when a
router marks/updates a field in a packet flowing from sender to
receiver to indicate congestion. Upon receipt of a marked packet, the
receiver then notifies the sender of the congestion indication. This
latter form of notification takes a full round-trip time.
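The second, packet-marking style of feedback (ECN works roughly this way) can be sketched as follows. The field names and queue threshold are illustrative assumptions, not a real router implementation.

```python
# Hedged sketch of "mark a field in the packet" network-assisted feedback:
# the router sets a congestion bit instead of dropping, and the receiver
# echoes the indication back to the sender in its acknowledgment.

QUEUE_THRESHOLD = 10  # packets; router marks when its queue exceeds this

def router_forward(packet, queue_len):
    """Router marks the packet when its outgoing queue is congested."""
    if queue_len > QUEUE_THRESHOLD:
        packet["CE"] = 1             # "congestion experienced" bit
    return packet

def receiver_ack(packet):
    """Receiver echoes any congestion mark back toward the sender."""
    return {"ECE": packet.get("CE", 0)}   # this echo takes a full RTT

ack = receiver_ack(router_forward({"seq": 42}, queue_len=15))
print(ack)
```

Because the mark travels sender → router → receiver and the echo travels receiver → sender, the sender learns of congestion only after a full round-trip time, as noted above.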
TCP Classic Congestion Control
• TCP uses end-to-end congestion control.
• When a TCP sender perceives that there is congestion in the
network, it should slow down its sending rate.
1. How does a TCP sender limit the rate at which it sends traffic
into its connection?
2. How does a TCP sender perceive that there is congestion on
the path between itself and the destination?
3. What algorithm should the sender use to change its send rate
as a function of perceived end-to-end congestion?
1. First, we examine how the sender controls its send rate.
• The TCP congestion-control mechanism operating at the
sender keeps track of an additional variable, the congestion
window. The congestion window, denoted cwnd, imposes a
constraint on the rate at which a TCP sender can send traffic
into the network. Specifically, the amount of unacknowledged
data at a sender may not exceed the minimum of cwnd and
rwnd.
• LastByteSent – LastByteAcked <= min{cwnd, rwnd}
• In scenarios where the TCP receive buffer is sufficiently large,
allowing the receive-window constraint to be ignored, the
sender's unacknowledged data is solely limited by cwnd.
• In the absence of significant packet loss and transmission
delays,at the beginning of every RTT, the constraint permits
the sender to send cwnd bytes of data into the connection; at
the end of the RTT the sender receives acknowledgments for
the data.
• The sender's send rate is approximately cwnd/RTT bytes per
second. Adjusting the value of cwnd enables the sender to
dynamically control its data transmission rate into the
connection.
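The two formulas above, the unacknowledged-data bound min(cwnd, rwnd) and the approximate send rate cwnd/RTT, can be checked with small numbers. The window and RTT values below are assumed for illustration.

```python
# Numeric sketch of the sending constraint and the approximate send rate.

def allowed_unacked(cwnd, rwnd):
    # LastByteSent - LastByteAcked must stay at or below this bound.
    return min(cwnd, rwnd)

def send_rate_bps(cwnd_bytes, rtt_sec):
    # Roughly cwnd bytes sent per RTT, expressed in bits per second.
    return cwnd_bytes * 8 / rtt_sec

print(allowed_unacked(16_000, 64_000))   # cwnd is the binding constraint
print(send_rate_bps(16_000, 0.1))        # 16 kB per 100 ms RTT -> 1.28 Mbps
```

With a large receive buffer (rwnd ≫ cwnd), adjusting cwnd alone controls the rate, which is exactly the lever the congestion-control algorithm uses.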
2. TCP detects congestion through a loss event: a packet is lost,
which the sender learns of either through a timeout or through
three duplicate ACKs for a previously received segment.
3. Now let us consider the scenario in which the transmission
path is congestion-free, so the sender receives an ACK for
each of its segments correctly.
• TCP takes the arrival of these acknowledgments as an
indication that all is well: segments being transmitted into
the network are being successfully delivered to the destination,
and it uses these acknowledgments to increase its congestion
window size.
• If acknowledgments arrive at a relatively slow rate, then the
congestion window will be increased at a relatively slow rate.
On the other hand, if acknowledgments arrive at a high rate,
then the congestion window will be increased more quickly.
Because TCP uses acknowledgments to trigger (or clock) its
increase in congestion window size, TCP is said to be self-
clocking.
• Some of the principles used by TCP to handle the sending rate:
1. A lost segment implies congestion, and hence, the TCP sender’s rate
should be decreased when a segment is lost.
2. An acknowledged segment indicates that the network is delivering the
sender’s segments to the receiver, and hence, the sender’s rate can
be increased when an ACK arrives for a previously unacknowledged
segment.
3. Bandwidth probing: Given ACKs indicating a congestion-free source-
to-destination path and loss events indicating a congested path, TCP's
strategy for adjusting its transmission rate is to increase its rate in
response to arriving ACKs until a loss event occurs, at which point the
transmission rate is decreased. The TCP sender thus increases its
transmission rate to probe for the rate at which congestion onset
begins, backs off from that rate, and then begins probing again to
see whether the congestion onset rate has changed.
TCP congestion control algorithm
• The algorithm has three major components: (1) slow start, (2) congestion
avoidance, and (3) fast recovery.
1. Slow Start: When a TCP connection begins, the value of cwnd is typically
initialized to a small value of 1 MSS (maximum segment size), resulting in
an initial sending rate of roughly MSS/RTT. For example, if MSS = 500
bytes and RTT = 200 msec, the resulting initial sending rate is only about 20
kbps.
Since the bandwidth available to the TCP sender may be much larger than
MSS/RTT, the TCP sender would like to find the available bandwidth quickly.
Thus, in the slow-start state, the value of cwnd begins at 1 MSS and doubles
every RTT, an aggressive, exponential growth despite the name.
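The doubling and the 20 kbps example above can be reproduced directly. The per-RTT doubling here is the idealized model from the text, not a packet-level simulation.

```python
# Slow start's exponential growth: cwnd starts at 1 MSS and doubles every
# RTT. With MSS = 500 bytes and RTT = 200 ms (the example values above),
# the first round's rate is 20 kbps.

MSS = 500   # bytes
RTT = 0.2   # seconds

cwnd = 1 * MSS
rates_kbps = []
for _ in range(5):                        # five RTTs of slow start
    rates_kbps.append(cwnd * 8 / RTT / 1000)
    cwnd *= 2                             # cwnd doubles every RTT
print(rates_kbps)                         # [20.0, 40.0, 80.0, 160.0, 320.0]
```

After only five round trips the rate has grown 16-fold, which is why unchecked slow start must eventually be stopped by one of the conditions below.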
• But when should this exponential growth end? Slow start provides
several answers to this question. First, if there is a loss event
indicated by a timeout, the TCP sender sets the value of cwnd to 1 MSS
and begins the slow-start process anew. It also sets the value of a
second state variable, ssthresh (shorthand for "slow start threshold"),
to cwnd/2: half of the value of the congestion window when
congestion was detected.
• The second way in which slow start may end is directly tied to the
value of ssthresh. Since ssthresh is half the value of cwnd when
congestion was last detected, it might be a bit reckless to keep
doubling cwnd when it reaches or surpasses the value of ssthresh.
Thus, when the value of cwnd equals ssthresh, slow start ends and
TCP transitions into congestion avoidance mode.
• The final way in which slow start can end is if three duplicate ACKs
are detected, in which case TCP performs a fast retransmit and enters
the fast recovery state.
2. Congestion Avoidance : On entry to the congestion-avoidance state,
the value of cwnd is approximately half its value when congestion
was last encountered.Congestion avoidance takes a more
conservative approach, increasing cwnd linearly by just 1 MSS per
RTT, compared to the exponential growth of slow start.
When congestion is detected via a timeout, congestion avoidance
behaves like slow start: cwnd is reset to 1 MSS and ssthresh is
updated to half the value of cwnd at the time the congestion was
detected.
• However, when congestion is indicated by triple duplicate ACKs,
TCP reduces cwnd less drastically by halving it (adding 3 MSS for
the triple duplicates) and sets ssthresh accordingly, entering the fast-
recovery state.
• Fast Recovery: In fast recovery, the value of cwnd is increased by 1
MSS for every duplicate ACK received for the missing segment that
caused TCP to enter the fast-recovery state. Eventually, when an
ACK arrives for the missing segment, TCP enters the congestion-
avoidance state after deflating cwnd. If a timeout event occurs, fast
recovery transitions to the slow-start state after performing the same
actions as in slow start and congestion avoidance: The value of
cwnd is set to 1 MSS, and the value of ssthresh is set to half the
value of cwnd when the loss event occurred.
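The three states and their transitions can be collected into one simplified sketch. This is a hedged, per-RTT model of the cwnd/ssthresh rules described above, not a real TCP stack; real implementations track more state (e.g. per-ACK growth, the inflation phase of fast recovery).

```python
# Simplified model of TCP's congestion-control states: slow start,
# congestion avoidance, and fast recovery, driven by three events.

MSS = 1  # count cwnd in MSS units for clarity

class TcpCongestion:
    def __init__(self):
        self.cwnd = 1 * MSS
        self.ssthresh = 64 * MSS        # assumed initial threshold
        self.state = "slow_start"

    def on_new_ack(self):               # modeled as one call per RTT
        if self.state == "fast_recovery":
            self.cwnd = self.ssthresh   # deflate cwnd, resume linear growth
            self.state = "congestion_avoidance"
        elif self.state == "slow_start":
            self.cwnd *= 2              # exponential growth
            if self.cwnd >= self.ssthresh:
                self.state = "congestion_avoidance"
        else:
            self.cwnd += 1 * MSS        # linear growth: +1 MSS per RTT

    def on_triple_dup_ack(self):        # fast retransmit / fast recovery
        self.ssthresh = self.cwnd // 2
        self.cwnd = self.ssthresh + 3 * MSS
        self.state = "fast_recovery"

    def on_timeout(self):               # severe congestion: start over
        self.ssthresh = self.cwnd // 2
        self.cwnd = 1 * MSS
        self.state = "slow_start"

tcp = TcpCongestion()
for _ in range(4):
    tcp.on_new_ack()                    # slow start: cwnd 1 -> 2 -> 4 -> 8 -> 16
tcp.on_triple_dup_ack()                 # ssthresh = 8, cwnd = 11
print(tcp.state, tcp.cwnd, tcp.ssthresh)   # fast_recovery 11 8
```

The sawtooth behavior of TCP follows directly from these rules: probe upward (exponentially, then linearly) until loss, halve on duplicate ACKs, collapse to 1 MSS on a timeout.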