
Ch 6 - The Transport Layer

Introduction to the transport layer

Goal: effective, reliable, and cost-effective data transmission services.
Topics:
Upper-layer services
Transport service primitives
Berkeley sockets
Example of socket programming: an Internet file server
Packetizing
Connection control
Transport addressing
Flow control

Why two distinct layers - isn't one layer adequate?

The transport layer runs on the user's machines, whereas the network layer runs on routers, which (at least in a WAN) are operated by the carrier.
If packets are lost, service is inadequate, or a router crashes, the user's equipment still holds the data, so the transport layer can ensure successful end-to-end packet delivery.
Users have no control over the network's services and protocols, so there should be a layer above the network layer to raise the overall QoS.
In case of failures in the network, the transport entity can set up a new network connection to ensure end-to-end transmission.
Thus the transport layer provides a reliable service on top of an unreliable network layer.

Congestion control in Layer 4

Congestion control is done by both the data link layer and the network layer.
The transport layer, however, provides congestion control with better QoS than the network layer can.
Typical QoS parameters are:
Connection establishment delay
Throughput
Protection
Transit delay
Residual error rate
Resilience
Priority
The transport layer takes care of process-to-process packet delivery, unlike the network layer, which handles router-to-router delivery.

Addressing in the Transport layer

Just as the data link layer needs a MAC address and the network layer an IP address, the transport layer needs a port number to choose among the multiple processes running on the destination host.
Port numbers are 16-bit integers between 0 and 65535.
The client program identifies itself with a randomly chosen port number, called an ephemeral port number.
Servers also use port numbers, but these are not chosen randomly.
A universally agreed server port number is called a well-known port number, so every client knows the well-known port corresponding to each server.

Addressing in the Transport layer

The network layer's IP address singles out a particular host among millions; once that host (which may support many processes) is reached, the port number selects one of the processes on it.
Port address ranges (assigned by IANA, the Internet Assigned Numbers Authority):
Well-known ports: 0 to 1023, controlled by IANA.
Registered ports: 1024 to 49151, not controlled by IANA but registered with it to avoid duplication.
Dynamic ports: 49152 to 65535, neither controlled nor registered by IANA; they can be used by any process.
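The three IANA ranges above can be checked mechanically. A minimal sketch in Python (the function name is my own, not from the slides):

```python
# Hypothetical helper: classify a 16-bit port number into the
# three IANA ranges described above.
def classify_port(port: int) -> str:
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16-bit: 0-65535")
    if port <= 1023:
        return "well-known"   # controlled by IANA
    if port <= 49151:
        return "registered"   # registered with IANA to avoid duplication
    return "dynamic"          # ephemeral/private, usable by any process
```

For instance, HTTP's port 80 falls in the well-known range, while a client's randomly chosen ephemeral port typically falls in the dynamic range.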

Berkeley Sockets

A socket interface is a set of system calls or procedures for communication.
A socket acts as an end point: two processes on two end systems can communicate if each has a socket.
Socket addressing:
A socket address is the combination of an IP address and a port number.
Two types of socket: client sockets and server sockets.
A connection is identified by four pieces of information: the source and destination IP addresses (carried in the IP header) and the source and destination port numbers (carried in the TCP/UDP header).

Transport Service Primitives

Primitives are the codes/procedures stored in the OS kernel, in library packages, or in network interface cards.
For example, the server executes the LISTEN primitive, and when a client wants to talk to the server it executes the CONNECT primitive.
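On a Unix-like system these primitives surface as Berkeley socket calls. A minimal loopback sketch (the helper names are my own, and the single-connection echo server is deliberately simplified):

```python
import socket
import threading

# Sketch only: LISTEN, ACCEPT, and CONNECT primitives mapped onto
# Berkeley socket calls, using the loopback interface.
def run_server(state):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))        # OS assigns an ephemeral port
    srv.listen(1)                     # LISTEN primitive
    state["addr"] = srv.getsockname()
    state["event"].set()              # tell the client the address is ready
    conn, _ = srv.accept()            # ACCEPT primitive: block for a client
    conn.sendall(conn.recv(1024))     # echo one message back
    conn.close()
    srv.close()

def run_client(addr):
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(addr)                 # CONNECT primitive
    cli.sendall(b"hello")
    reply = cli.recv(1024)
    cli.close()
    return reply
```

Running the server in a background thread and then calling `run_client` returns the echoed bytes.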

Segments - Packets - Frames nesting

The message exchanged between two transport entities is called a segment (TPDU, Transport Protocol Data Unit, in some older protocols).
Segments are encapsulated in packets (at the network layer), which are in turn encapsulated in frames.

Elements of transport protocols

The transport layer's responsibilities resemble those of the data link layer: error control, sequencing, and flow control.
So what is the difference between these two layers?
At layer 2, two routers communicate over a single link, so flow control, error control, etc. are handled per link; at the transport layer, the physical channel is replaced by the entire subnet.
Flow control, error control, etc. are therefore handled at the subnet level.

(a) Environment of the data link layer. (b) Environment of the transport layer.

Difference between the Data link layer and the Transport layer

Data link layer:
1. Communication is through a physical channel
2. The destination router need not be specified explicitly
3. Simple connection establishment
4. No storage capacity
5. No additional delays

Transport layer:
1. Communication through the subnet
2. Explicit destination addressing
3. Complicated initial connection establishment
4. Storage on the user's device and some in the subnet
5. Delays introduced due to storage

Connection-oriented Services

To connect two hosts, Host 1 and Host 2, a connection request and a reply acknowledgement are first exchanged between them.
Each connection has a unique sequence number to avoid duplication; the sequence number ensures the sender cannot create more than one connection, since a repeated number would result in a duplicate.
Similarly, each acknowledgement carries an acknowledgement number.
The receiver has to keep records of the sequence and acknowledgement numbers.

Connection-oriented Services

In the transport layer, connection establishment is a comparatively complex process.
Problems:
1. If ACKs are not received on time, retransmission of each packet results.
2. In subnets, datagrams take different routes, which results in loss of datagrams if a path suffers congestion or failure.
3. The same connection may be re-established due to packet duplication.

Remedy: techniques for restricting packet lifetime

1. Restricted network design.
2. Putting a hop counter in each packet.
3. Timestamping each packet.

Three-way Handshake: Normal Operation

Host 1 chooses sequence number x and sends a connection request (CR) segment containing it to Host 2.
Host 2 replies with a connection accepted segment to acknowledge x and to announce its own sequence number y.
Host 1 sends an ACK to Host 2 along with the first data segment.
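The exchange above can be sketched as a toy simulation (the dict-based messages are my own simplification; real TCP carries these fields in the segment header):

```python
import random

# Toy simulation of the three-way handshake message flow:
# CR carries x, ACCEPT acknowledges x and announces y,
# the first DATA segment acknowledges y.
def three_way_handshake():
    x = random.randrange(2**32)                 # Host 1 picks seq no. x
    cr = {"type": "CR", "seq": x}               # connection request
    y = random.randrange(2**32)                 # Host 2 picks its own seq no. y
    accept = {"type": "ACCEPT", "seq": y, "ack": x + 1}
    data = {"type": "DATA", "seq": x + 1, "ack": y + 1}  # Host 1's ACK + first data
    return cr, accept, data
```

Each acknowledgement number is the peer's sequence number plus one, which is what lets a host detect a delayed duplicate from an old connection.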

Three-way Handshake: Abnormal Operation

The first segment is a delayed duplicate CR from an old connection, and Host 1 does not know about it.
Host 2 receives this segment and sends an ACK to Host 1 as if the connection were accepted.
But Host 1 is not trying to establish any connection, so it sends REJECT along with ACK = y.
Host 2 realizes it has been fooled by a delayed duplicate and aborts the connection.

Three-way Handshake: Abnormal Operation - Duplicate CR and Duplicate ACK

Worst case: both a delayed duplicate CR and a delayed duplicate ACK are floating in the subnet.
Host 2 receives the duplicate CR and sends an ACK with its own sequence number y.
But Host 1 is not trying to establish any connection, so it sends REJECT along with ACK = y.
Host 2 realizes it has been fooled by the delayed duplicates and aborts the connection.

Three-way Handshake: Connection Release

If one host releases the connection, the other host can keep sending segments.
Host 1 sends a connection release message to Host 2, which Host 2 confirms with an ACK.
Host 2 can still continue sending data; once its sending is over, it sends its own connection release message to Host 1.

Two-Army Problem

An example motivating the handshake.
Blue and white are two armies fighting each other.
The white army has more soldiers than either blue army alone.
Blue armies 1 and 2 individually have too few soldiers to win against white, but combined they outnumber white and can possibly win.
So blue armies 1 and 2 need synchronization for a combined attack.

The two-army problem

Formulating an algorithm so that the blue army wins

Protocol 1: Two-way handshake
The chief of blue army 1 sends a message to the chief of blue army 2 proposing an attack on 1st January: "is it OK?"
The messenger reaches the chief of blue army 2, who agrees and sends the reply "Yes" (ACK).
This process is called a two-way handshake.
Will the attack take place?
Probably not, because the chief of blue army 2 does not know whether his reply has reached blue army 1 successfully.

Formulating an algorithm so that the blue army wins

Protocol 2: Three-way handshake
Improve the two-way algorithm into a three-way one: assuming none of the messages is lost, blue army 2 will also get an acknowledgement from blue army 1.
But now the chief of blue army 1 will hesitate, because he does not know whether the last message he sent got through.
So we could make it a four-way handshake, but that does not help either: in every protocol, uncertainty about the last handshake message always remains.
In fact, no protocol exists that works.

Flow Control and Buffering

The sliding window protocol is meant for flow control, but sending an ACK after each small packet is impractical because it increases congestion.
Including a buffer is better: the receiver buffers segments and sends acknowledgements as soon as they are taken in for processing.
Rather than an individual buffer per connection, a pool of buffers is shared across all connections (except for high-bandwidth applications).
A general way to manage dynamic buffer allocation is to decouple the buffering from the acknowledgements.

Flow Control and Buffering

Dynamic buffer management with a variable-sized window:
Initially, the sender requests a certain number of buffers, based on its perceived needs.
The receiver grants as many buffers as it can afford.
Every time the sender transmits a segment, it decrements its allocation, stopping altogether when the allocation reaches zero.
The receiver separately piggybacks both ACKs and buffer allocations onto the reverse traffic.
The sender's window size can be dynamically adjusted not only by buffer availability at the receiver but also by the capacity and traffic of the subnet.
The bigger the subnet capacity and the lighter the traffic, the larger the sender's window and the larger the buffer allocation.
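A minimal sketch of this credit scheme (the class name and API are my own invention, not a real TCP implementation):

```python
# Buffer-credit flow control: the receiver grants buffers, the sender
# decrements its allocation per segment and stalls at zero until an
# ACK piggybacks fresh credit on the reverse traffic.
class CreditSender:
    def __init__(self, credit_granted):
        self.credit = credit_granted   # buffers granted by the receiver

    def can_send(self):
        return self.credit > 0

    def send_segment(self):
        if not self.can_send():
            raise RuntimeError("no buffer credit: sender must wait")
        self.credit -= 1               # each segment consumes one granted buffer

    def receive_ack(self, extra_credit):
        self.credit += extra_credit    # new credit arrives with the ACK
```

Decoupling the credit from the ACK itself is what lets the receiver acknowledge data promptly while still throttling the sender.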

Flow Control and Buffering

(a) Chained fixed-size buffers. (b) Chained variable-sized buffers. (c) One large circular buffer per connection.

Error Control and Crash Recovery

A crash results in loss of data.
After a router crash, the two transport entities must exchange information about which segments were received and which were lost, and then resend the lost ones.
If the network layer provides connection-oriented service, only the lost virtual-circuit connection needs to be rebuilt.
The receiver may first send an ACK and then perform the write to the output stream (for the transmitted segment).
But a crash can occur in the middle of the transmission, so it may seem better to reverse the order of sending ACKs and performing the write.
Yet if ACKs are lost, duplicate segments will be transmitted. So no matter how the sender and receiver are programmed, there are always situations where the protocol fails to recover properly.

Error Control and Crash Recovery

Recovery from IMP and host crashes:
The transport entities should exchange information after the crash about which segments were received and which were lost.
The receiver must broadcast information about its crash to all neighbouring nodes.
The receiver can be programmed in one of two ways: ACK first or write first.
The sender can be programmed in one of four ways:
Always retransmit the last segment
Never retransmit the last segment
Retransmit only in state S0
Retransmit only in state S1

Error Control and Crash recovery

Different combinations of client and server strategy

The Internet Transport Protocols

The Internet has two main protocols in the transport layer.
One of them is connection-oriented and the other is connectionless.
TCP (Transmission Control Protocol) is the connection-oriented protocol.
UDP (User Datagram Protocol) is the connectionless protocol.
UDP is similar to IP but with an additional short header.

Transmission Control Protocol

Introduction to TCP
The TCP service model
The TCP protocol
The TCP segment header
TCP connection establishment
TCP connection release
TCP connection management
TCP window management
TCP congestion control
TCP timer management

TCP- Introduction
TCP (Transmission Control Protocol) was specifically
designed to provide a reliable end-to-end byte stream
over an unreliable internetwork.
An internetwork differs from a single network because
different parts may have quite different topologies,
bandwidth, delays, packet sizes, and other parameters.
TCP was designed to dynamically adapt to properties of
the internetwork and to be robust in the face of many
kinds of failures.

TCP - Introduction

Since TCP is a connection-oriented service, it is reliable.
Services: stream data transmission, reliability, flow control, full-duplex operation, and multiplexing.
Data is not chopped arbitrarily; segments are simply formed and passed to IP for delivery.
Only blocks not ACKed within a specified time period (timer mechanism) are retransmitted.
Reliability in TCP deals with lost, delayed, duplicated, or corrupted packets.
Full-duplex operation: TCP can send and receive at the same time.
Multiplexing: numerous simultaneous upper-layer conversations can be multiplexed over a single connection.

TCP - Service Model

To obtain TCP service, both sender and receiver must create end points called sockets; each socket has a socket number (address) consisting of the host's IP address and a 16-bit port number.
A port is the TCP name for a TSAP; to obtain TCP service, a connection must be explicitly established between a socket on the sending machine and a socket on the receiving machine.

Socket calls

TCP-Service Model
A TCP connection is a byte stream, not a message stream.
Message boundaries are not preserved end to end.
For example, if the sending process does four 512-byte
writes to a TCP stream, these data may be delivered to
the receiving process as four 512-byte chunks, or two
1024-byte chunks, or one 2048-byte chunk, or some
other way.
When an application passes data to TCP, TCP may send
it immediately or buffer it (in order to collect a larger
amount to send at once), at its discretion.

TCP - Service Model

Urgent Data
URGENT is a flag in TCP.
If there is data to send together with some control information, the sending entity marks it with the URGENT flag.
TCP then stops accumulating data and transmits everything it has for that connection immediately.
When the data reaches the receiver, the receiving application is interrupted and it processes the urgent data stream.

TCP - Service Model

Well-known ports:
Port numbers below 1024 are well-known ports reserved for servers.
All TCP connections are point-to-point (two endpoints) and full duplex (bidirectional).
No multicasting or broadcasting.
Push flag and buffering:
Since message boundaries are not preserved, TCP may send data immediately or may collect it for some time and send it at once (buffering).
For immediate transmission, an application can use the PUSH flag, which forces TCP to send the data without delay.

TCP - Service Model

Some assigned ports:

Port   Protocol   Use
21     FTP        File transfer
23     Telnet     Remote login
25     SMTP       E-mail
69     TFTP       Trivial File Transfer Protocol
79     Finger     Lookup info about a user
80     HTTP       World Wide Web
110    POP-3      Remote e-mail access
119    NNTP       USENET news

The TCP Protocol

Every byte on a TCP connection has its own 32-bit sequence number.
Segments:
Data is exchanged in segments, each consisting of a fixed 20-byte header followed by zero or more data bytes.
Segment size:
Each segment, including the TCP header, must fit in the 65,535-byte IP payload.
Each network has an MTU (Maximum Transfer Unit) of a few thousand bytes; each segment must fit the MTU, which sets the practical upper limit on segment size.
Fragmentation:
If a segment is too large, it is divided into fragments.
Each new fragment gets its own IP header, so fragmentation increases the overhead.
Timer:
TCP's basic protocol is a sliding window protocol in which retransmissions occur only when the timer expires before the ACK arrives.

The TCP Protocol

Possible problems:
Segments may reach the destination in whole or only in part.
Segments may arrive out of order.
Delays in segment transmission are not fixed, so unnecessary retransmissions may occur.
The sequence numbers must be used to restore the proper order on reception.
Congestion and broken links are possible along the path.

The TCP Segment Header

The fixed header format is 20 bytes; after it, up to 65,535 - 20 - 20 = 65,495 data bytes may follow.
In this expression, one 20 corresponds to the IP header and the other to the TCP header.
For ACKs and control messages, only the TCP header is sent.

The TCP Segment Header

Source port: a 16-bit port address, drawn from three ranges: 1. well-known (0-1023), 2. registered (1024-49151), 3. private/dynamic (49152-65535).
Destination port: the 16-bit port address the segment is destined for.
Sequence number: a 32-bit field carrying each segment's unique sequence number; 2^32 - 1 is the largest sequence number that can be allotted.

The TCP Segment Header

Acknowledgement number: a 32-bit ACK number carried in each acknowledgement.
Header Length/Offset: gives the length of the header (excluding the variable data payload) in 32-bit words; multiplying this 4-bit value by 4 gives the header length in bytes.
Reserved: currently unused, kept for the future (all 0s).
Control bits/flag bits:
CWR (Congestion Window Reduced): when set, the sender has reduced its congestion window in response to congestion.
ECE: when ECE = 1, it is an ECN (Explicit Congestion Notification) echo telling the sender to slow down because congestion has occurred.
URG: 1 = the segment carries urgent data (the Urgent pointer is valid).
ACK: 1 = the acknowledgement number is valid.
PSH (Push function): 1 = the receiver should deliver this data with the least possible delay.
RST: resets the connection; it aborts the connection and empties the buffers.
SYN (Synchronize): when set, the sender is trying to synchronize sequence numbers.

The TCP Segment Header

Window size: used for flow control; this field tells how much data the receiver is willing to accept, up to a maximum of 65,535 bytes.
Checksum: covers the TCP header, the data, and a pseudoheader, for error detection.
Urgent pointer: tells the receiver where the urgent data in the segment ends.
Options: for additional information, if any, supplied by the sender.
Padding: the header is padded with extra zeros so that it ends on a 32-bit boundary.
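The fixed 20-byte layout above can be packed and unpacked with Python's struct module. A sketch (the field values below are made-up examples; reserved bits are left at zero):

```python
import struct

# Fixed 20-byte TCP header in network byte order:
# src port, dst port, seq, ack, offset/flags, window, checksum, urgent ptr.
TCP_HDR = struct.Struct("!HHIIHHHH")

def pack_header(src, dst, seq, ack, flags, window, checksum=0, urg=0):
    offset = 5                           # header length in 32-bit words (5*4 = 20 bytes)
    off_flags = (offset << 12) | flags   # 4-bit offset in the top nibble, flags in the low bits
    return TCP_HDR.pack(src, dst, seq, ack, off_flags, window, checksum, urg)

def parse_header(raw):
    src, dst, seq, ack, off_flags, window, checksum, urg = TCP_HDR.unpack(raw[:20])
    return {
        "src_port": src, "dst_port": dst, "seq": seq, "ack_no": ack,
        "header_len": (off_flags >> 12) * 4,  # words -> bytes
        "flags": off_flags & 0x1FF,
        "window": window, "checksum": checksum, "urgent": urg,
    }
```

A SYN segment, for example, would set `flags = 0x02` and leave the acknowledgement number unused.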

TCP Connection Management

To establish a connection, one side, say the server, passively waits for an incoming connection by executing the LISTEN and ACCEPT primitives.
The other side, say the client, executes a CONNECT primitive, specifying the IP address and port to which it wants to connect and the maximum TCP segment size it is willing to accept.
The CONNECT primitive sends a TCP segment with the SYN bit on and the ACK bit off, and waits for a response.

TCP Connection Management


When this segment arrives at the
destination, the TCP entity there checks
to see if there is a process that has
done a LISTEN on the port given in the
Destination port field. If not, it sends a
reply with the RST bit on to reject the
connection.
If some process is listening on the
port, that process is given the incoming
TCP segment. It can either accept or
reject the connection. If it accepts, an
acknowledgment segment is sent back.

TCP- Congestion Control


When the load offered to any network is more than it can handle,
congestion builds up. The Internet is no exception.
Algorithms have been developed over the past decade to deal with
congestion.
Although the network layer also tries to manage congestion, most
of the heavy lifting is done by TCP because the real solution to
congestion is to slow down the data rate.
In theory congestion can be dealt with by employing a principle
borrowed from physics: the law of conservation of packets. The idea
is not to inject a new packet into the network until an old one
leaves (i.e. is delivered). TCP attempts to achieve this goal by
dynamically manipulating the Window size.

TCP - Congestion Control

(a) A fast network feeding a low-capacity receiver. (b) A slow network feeding a high-capacity receiver.

TCP Termination Protocol

The states used in the TCP connection management finite state machine.

TCP Sliding Window

Window management in TCP.

TCP- Timer Management


TCP uses multiple timers (at least conceptually) to do its work.
The most important of these is the retransmission timer. When a segment is
sent, a retransmission timer is started. If the segment is acknowledged
before the timer expires, the timer is stopped. If, on the other hand, the
timer goes off before the acknowledgement comes in, the segment is
retransmitted (and the timer started again).
The question that arises is: How long should the timeout interval be?
This problem is much more difficult in the Internet transport layer than in the
generic data link protocols, where the delay is very predictable.
The solution is to use a highly dynamic statistical algorithm that constantly
adjusts the timeout interval based on continuous measurements of network
performance. This algorithm was proposed by Jacobson in 1988.

TCP - Timer Management

Jacobson's Algorithm
For each TCP connection there is a variable RTT (round-trip time) estimate, maintained per connection.
When a segment is sent, the timer starts.
If the ACK fails to reach the source before the timer expires, a retransmission occurs.
If the ACK reaches the source before the timer expires, TCP measures the time M taken by the successful ACK and adjusts RTT to a new value:
RTT = α RTT + (1 - α) M
where α is the smoothing factor (typically α = 7/8) and M is the time taken by the successful ACK to reach the source.
Even given a good value of RTT, it is not easy to choose the timeout.
Jacobson proposed a second smoothed estimator D (deviation), given by:
D = α D + (1 - α) |RTT - M|
The timeout is then calculated as: Timeout = RTT + 4D
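The update rules above translate directly into code. A minimal sketch (the class name and the initial deviation of half the first sample are my own conventions; Jacobson's actual implementation uses scaled integer arithmetic):

```python
ALPHA = 7 / 8  # smoothing factor, as stated above

class RTTEstimator:
    def __init__(self, first_sample):
        self.rtt = first_sample        # smoothed round-trip estimate
        self.dev = first_sample / 2    # smoothed deviation estimate

    def update(self, m):
        # D uses the OLD RTT estimate, so update it first:
        # D = alpha*D + (1-alpha)*|RTT - M|
        self.dev = ALPHA * self.dev + (1 - ALPHA) * abs(self.rtt - m)
        # RTT = alpha*RTT + (1-alpha)*M
        self.rtt = ALPHA * self.rtt + (1 - ALPHA) * m

    def timeout(self):
        return self.rtt + 4 * self.dev  # Timeout = RTT + 4D
```

With a steady RTT of 100 ms, the deviation decays geometrically toward zero and the timeout converges toward the RTT itself.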

UDP-User Datagram Protocol


UDP is a connectionless, unreliable Transport level service protocol. It is
primarily used for protocols that require a broadcast capability.
Many client-server applications that have 1 request and 1 response use
UDP rather than go to the trouble of establishing and later releasing a
connection.
It provides no packet sequencing, may lose packets, and does not check
for duplicates.
It is used by applications that do not need a reliable transport service.
Application data is encapsulated in a UDP header, which in turn is
encapsulated in an IP header.
UDP distinguishes different applications by port number, which allows
multiple applications running on a given computer to send/receive
datagrams independently of one another.

UDP - User Datagram Protocol

Connectionless:
no handshaking between UDP sender and receiver
each UDP segment is handled independently of the others
Why is there a UDP?
no connection establishment (which can add delay)
simple: no connection state at sender or receiver
small segment header
no congestion control: UDP can blast away as fast as desired
Often used for streaming multimedia apps:
loss tolerant
rate sensitive
Other UDP uses:
DNS
SNMP
Reliable transfer over UDP: add reliability at the application layer
application-specific error recovery (e.g. a file transfer built on UDP with its own recovery)
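The no-handshake behaviour is visible directly in the socket API. A minimal loopback sketch (single-threaded; this works because the OS buffers the datagram in flight):

```python
import socket

# One request, one uppercased response over UDP on the loopback
# interface: no connect, no handshake, each datagram independent.
def udp_once(payload):
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", 0))               # OS picks an ephemeral port
    cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    cli.sendto(payload, srv.getsockname())   # send without any connection setup
    data, addr = srv.recvfrom(1024)          # addr is the client's ephemeral endpoint
    srv.sendto(data.upper(), addr)           # reply datagram
    reply, _ = cli.recvfrom(1024)
    cli.close()
    srv.close()
    return reply
```

Note the contrast with TCP: there is no listen/accept step, and nothing guarantees that either datagram actually arrives.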

UDP-Header
A UDP segment consists of an 8-byte header followed by the data.
UDP only provides TSAPs (ports) for applications to bind to. UDP does
not provide reliable or ordered service. The checksum is optional.

The UDP header.

UDP - Header

The two ports serve the same function as they do in TCP: to identify the end points within the source and destination machines.
The UDP length field covers the 8-byte header plus the data.
The UDP checksum is used to verify the integrity of the header and data.

Sender:
treat segment contents as a sequence of 16-bit integers
checksum: addition (1's complement sum) of the segment contents
the sender puts the checksum value into the UDP checksum field

Receiver:
compute the checksum of the received segment
check whether the computed checksum equals the checksum field value:
NO - error detected
YES - no error detected (though undetected errors are still possible)
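The sender/receiver steps above can be sketched in pure Python (covering only the data words; the pseudoheader contribution is omitted here for brevity):

```python
# 16-bit one's-complement Internet checksum: sum the data as
# big-endian 16-bit words, fold carries back in, then complement.
def internet_checksum(data):
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # fold any carry back in
    return (~total) & 0xFFFF                      # one's complement of the sum
```

A handy property: appending the computed checksum to the data and recomputing yields 0, which is essentially the receiver's verification step.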

UDP pseudoheader

Bit:  0               8               16                            31
      +---------------------------------------------------------------+
      |                       Source IP Address                       |
      |                     Destination IP Address                    |
      |   00000000    |    Protocol   |          UDP Length           |
      +---------------------------------------------------------------+

1. The pseudoheader ensures that the datagram has indeed reached the correct destination host and port.
2. The zero padding and the pseudoheader are used only for the computation of the checksum and are not transmitted.

UDP- Well known ports

UDP Operations

Connectionless service:
each datagram sent by UDP is an independent datagram, even when several come from the same source
these datagrams are not numbered; no connection establishment or release is necessary, and each datagram can follow a different path

Flow control and error control:
UDP is a simple, unreliable protocol. There is no flow control, so the receiver can be overflowed by incoming messages.
There is no mechanism for error control except the checksum in the header.
If the checksum detects an error, the segment is discarded.

Encapsulation and decapsulation
Queuing

UDP - Encapsulation & Decapsulation

Queues in UDP

Queues in UDP

The client requests a port number from the OS. Incoming and outgoing queues are created for each process.
One port per process, so there is only one pair of queues per process.
The client sends its messages through the outgoing queue using the source port address.
UDP removes the queued messages one by one, adds the UDP header, and delivers them to IP.
If the outgoing queue overflows, the OS tells the client to wait before sending the next message.
When messages arrive for the client, UDP checks whether an incoming queue exists for the destination port; if so, UDP appends the received datagram to the end of that queue.
If the incoming queue overflows, UDP simply discards the datagram and prepares to notify the sender of port unavailability.
In server queuing, the port address is a well-known port address; all other queuing steps are the same.

Applications of UDP

UDP is suitable for applications with the following requirements:
a simple response to a simple request
flow control and error control are not essential
bulk data is not to be sent

UDP is suitable for multicasting applications.
UDP is used by management processes such as SNMP (Simple Network Management Protocol).
UDP is used by RIP (Routing Information Protocol).

Comparison of TCP and UDP

   TCP                                  UDP
1. Full-featured protocol            1. Less-featured protocol
2. Connection-oriented protocol      2. Connectionless protocol
3. Data transmitted in streams       3. Message-based transmission
4. Reliable transmission             4. Unreliable transmission
5. High overhead                     5. Low overhead
6. Low transmission speed            6. High transmission speed
7. Retransmissions occur             7. No retransmissions occur
8. Flow & error control              8. No flow & error control

END
Chapter 6
