Unit-7 Transport Layer

Transport Layer:

• The transport layer builds on the network layer to provide data transport from a process
on a source machine to a process on a destination machine with a desired level of
reliability that is independent of the physical networks currently in use.
The Transport Service:
1) Services provided to the upper layers:
• The ultimate goal of the transport layer is to provide efficient, reliable, and cost-effective
data transmission service to its users, normally processes in the application layer.
• To achieve this, the transport layer makes use of the services provided by the network
layer.
• The software and/or hardware within the transport layer that does the work is called the
transport entity.
• The transport entity can be located in the operating system kernel, in a library package
bound into network applications, in a separate user process, or even on the network
interface card.
• Because there are two types of network service, connection-oriented and connectionless,
there are also two types of transport service.
• The transport code runs entirely on the users’ machines, but the network layer
mostly runs on the routers, which are operated by the carrier (at least for a wide
area network).
• What happens if the network layer offers inadequate service? What if it
frequently loses packets? What happens if routers crash from time to time?
• The users have no real control over the network layer, so they cannot solve the
problem of poor service by using better routers or putting more error handling in
the data link layer because they don’t own the routers.
• The only possibility is to put on top of the network layer another layer that
improves the quality of the service.
• If, in a connectionless network, packets are lost or mangled, the transport entity
can detect the problem and compensate for it by using retransmissions.
• The bottom four layers can be seen as the transport service provider, whereas the
upper layer(s) are the transport service user.
Transport Service Primitives:
• To allow users to access the transport service, the transport layer must provide
some operations to application programs, that is, a transport service interface.
Each transport service has its own interface.
• The transport service is similar to the network service, but there are also some
important differences.
• The main difference is that the network service is intended to model the service
offered by real networks, warts and all.
• Real networks can lose packets, so the network service is generally unreliable.
• The connection-oriented transport service, in contrast, is reliable.
• Of course, real networks are not error-free, but that is precisely the purpose of the
transport layer—to provide a reliable service on top of an unreliable network.
• To get an idea of what a transport service might be like, consider the five
primitives listed in Fig. 6-2.
• These primitives allow application programs to establish, use, and then release
connections, which is sufficient for many applications.
• To see how these primitives might be used, consider an application
with a server and a number of remote clients.
• To start with, the server executes a LISTEN primitive, typically by
calling a library procedure that makes a system call that blocks the
server until a client turns up.
• When a client wants to talk to the server, it executes a CONNECT
primitive. The transport entity carries out this primitive by blocking
the caller and sending a packet to the server.
• The client’s CONNECT call causes a CONNECTION REQUEST
segment to be sent to the server.
• When it arrives, the transport entity checks to see that the server is blocked on a LISTEN
(i.e., is interested in handling requests).
• If so, it then unblocks the server and sends a CONNECTION ACCEPTED segment back
to the client.
• When this segment arrives, the client is unblocked and the connection is established.
• Data can now be exchanged using the SEND and RECEIVE primitives.
• When a connection is no longer needed, it must be released to free up table space within
the two transport entities.
• Disconnection has two variants: asymmetric and symmetric.
• In the asymmetric variant, either transport user can issue a DISCONNECT primitive,
which results in a DISCONNECT segment being sent to the remote transport entity.
Upon its arrival, the connection is released.
• In the symmetric variant, each direction is closed separately, independently of the other
one. When one side does a DISCONNECT, that means it has no more data to send but it
is still willing to accept data from its partner. In this model, a connection is released when
both sides have done a DISCONNECT.
Sockets:
• Sockets in computer networks allow the transmission of information between two
processes, whether on the same machine or on different machines in the network.
• A socket is the combination of an IP address and a software port number, used for
communication between processes.
• A socket identifies the address of the application to which data is to be sent,
using the IP address and port number.
Berkeley Sockets:
• Berkeley sockets are another set of transport primitives, the socket primitives as they
are used for TCP.
• Sockets were first released as part of the Berkeley UNIX 4.2BSD software
distribution in 1983.
• The primitives are now widely used for Internet programming on many operating
systems, especially UNIX-based systems, and there is a socket-style API for
Windows called ‘‘winsock.’’
• The primitives are: SOCKET, BIND, LISTEN, ACCEPT, CONNECT, SEND, RECEIVE, and CLOSE.
• The first four primitives in the list are executed in that order by servers.
• The SOCKET primitive creates a new endpoint and allocates table space for it
within the transport entity.
• The parameters of the call specify the addressing format to be used, the type of
service desired (e.g., reliable byte stream), and the protocol.
• A successful SOCKET call returns an ordinary file descriptor for use in
succeeding calls, the same way an OPEN call on a file does.
• Newly created sockets do not have network addresses.
• These are assigned using the BIND primitive. Once a server has bound an address
to a socket, remote clients can connect to it.
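As a minimal sketch, the SOCKET and BIND primitives look like this in Python, whose socket module wraps the Berkeley socket API; binding to port 0 is a common idiom that asks the operating system to assign a free ephemeral port (the loopback address here is just an illustrative choice):

```python
import socket

# SOCKET: create a new endpoint; the parameters specify the addressing
# format (IPv4) and the type of service (reliable byte stream, i.e., TCP).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# A newly created socket has no network address. BIND assigns one;
# port 0 lets the operating system pick a free ephemeral port.
s.bind(("127.0.0.1", 0))

host, port = s.getsockname()
print(host, port)
s.close()
```

Once an address is bound, a server would go on to call listen() and accept(), at which point remote clients can connect to it.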
Elements of Transport protocols:
• The transport service is implemented by a transport protocol used between the two
transport entities.
• In some ways, transport protocols resemble the data link protocols.
• Both have to deal with error control, sequencing, and flow control, among other issues.
• There are, however, major dissimilarities between the environments in which the two
protocols operate, as shown in Fig. 6-7.
• At the data link layer, two routers communicate directly via a physical channel, whether
wired or wireless, whereas at the transport layer, this physical channel is replaced by the
entire network.
Addressing:
• When an application (e.g., a user) process wishes to set up a connection to a
remote application process, it must specify which one to connect to.
• The method normally used is to define transport addresses to which processes can
listen for connection requests.
• In the Internet, these endpoints are called ports. We will use the generic term
TSAP (Transport Service Access Point) to mean a specific endpoint in the
transport layer.
• The analogous endpoints in the network layer (i.e., network layer addresses) are,
not surprisingly, called NSAPs (Network Service Access Points). IP addresses
are examples of NSAPs.
• Figure 6-8 illustrates the relationship between the NSAPs, the TSAPs, and a
transport connection.
• Application processes, both clients and servers, can attach themselves to a local
TSAP to establish a connection to a remote TSAP.
• These connections run through NSAPs on each host, as shown.
• The purpose of having TSAPs is that in some networks, each computer
has a single NSAP, so some way is needed to distinguish multiple
transport endpoints that share that NSAP.
Connection Establishment:
• It would seem sufficient for one transport entity to just send a CONNECTION REQUEST
segment to the destination and wait for a CONNECTION ACCEPTED reply.
• The problem occurs when the network can lose, delay, corrupt, and duplicate packets.
This behavior causes serious complications.
• The crux of the problem is that the delayed duplicates are thought to be new packets.
• One way is to use throwaway transport addresses. In this approach, each time a transport
address is needed, a new one is generated. When a connection is released, the address is
discarded and never used again.
• Another possibility is to give each connection a unique identifier (i.e., a sequence number
incremented for each connection established) chosen by the initiating party and put in
each segment, including the one requesting the connection.
• After each connection is released, each transport entity can update a table listing obsolete
connections as (peer transport entity, connection identifier) pairs.
• Whenever a connection request comes in, it can be checked against the table to see if it
belongs to a previously released connection.
• Rather than allowing packets to live forever within the network, we devise a mechanism
to kill off aged packets that are still hobbling about.
• With this restriction, the problem becomes somewhat more manageable.
• Packet lifetime can be restricted to a known maximum using one (or more) of the
following techniques:
1. Restricted network design.
2. Putting a hop counter in each packet.
3. Timestamping each packet.
• Tomlinson (1975) introduced the three-way handshake.
• This establishment protocol involves one peer checking with the other that the connection
request is indeed current. The normal setup procedure when host 1 initiates is shown in
Fig. 6-11(a).
• Host 1 chooses a sequence number, x, and sends a CONNECTION REQUEST segment
containing it to host 2.
• Host 2 replies with an ACK segment acknowledging x and announcing its own initial
sequence number, y.
• Finally, host 1 acknowledges host 2’s choice of an initial sequence number in the first data
segment that it sends.
• In Fig. 6-11(b), the first segment is a delayed duplicate CONNECTION REQUEST from
an old connection.
• This segment arrives at host 2 without host 1’s knowledge. Host 2 reacts to this segment
by sending host 1 an ACK segment, in effect asking for verification that host 1 was indeed
trying to set up a new connection.
• When host 1 rejects host 2’s attempt to establish a connection, host 2 realizes that it was
tricked by a delayed duplicate and abandons the connection.
• In this way, a delayed duplicate does no damage.
• The worst case is when both a delayed CONNECTION REQUEST and an ACK are
floating around in the subnet.
• This case is shown in Fig. 6-11(c). As in the previous example, host 2 gets a delayed
CONNECTION REQUEST and replies to it.
• At this point, it is crucial to realize that host 2 has proposed using y as the initial sequence
number for host 2 to host 1 traffic, knowing full well that no segments containing
sequence number y or acknowledgements to y are still in existence.
• When the second delayed segment (an old ACK acknowledging some old initial sequence
number, z) arrives at host 2, the fact that z has been acknowledged rather than y tells
host 2 that this, too, is an old duplicate.
Connection Release:
• There are two styles of terminating a connection: asymmetric release and
symmetric release.
• Asymmetric release is the way the telephone system works: when one party
hangs up, the connection is broken.
• Symmetric release treats the connection as two separate unidirectional
connections and requires each one to be released separately.
• Asymmetric release is abrupt and may result in data loss.
• Consider the scenario of Fig. 6-12. After the connection is established, host 1 sends a
segment that arrives properly at host 2.
• Then host 1 sends another segment. Unfortunately, host 2 issues a DISCONNECT before
the second segment arrives.
• The result is that the connection is released and data are lost.
• Symmetric release does the job when each process has a fixed amount of data to send and
clearly knows when it has sent it.
• In other situations, determining that all the work has been done and the connection should
be terminated is not so obvious.
• One can envision a protocol in which host 1 says ‘‘I am done. Are you done too?’’ If host
2 responds ‘‘I am done too. Goodbye,’’ the connection can be safely released.
• Figure 6-14 illustrates four scenarios of releasing using a three-way handshake.
• In Fig. 6-14(a), we see the normal case in which one of the users sends a DR
(DISCONNECTION REQUEST) segment to initiate the connection release.
• When it arrives, the recipient sends back a DR segment and starts a timer, just in case its
DR is lost.
• When this DR arrives, the original sender sends back an ACK segment and releases the
connection.
• Finally, when the ACK segment arrives, the receiver also releases the connection.
• If the final ACK segment is lost, as shown in Fig. 6-14(b), the situation is saved by the
timer.
• When the timer expires, the connection is released anyway.
• Now consider the case of the second DR being lost.
• The user initiating the disconnection will not receive the expected response, will time out,
and will start all over again.
• In Fig. 6-14(c), we see how this works, assuming that the second time no segments are
lost and all segments are delivered correctly and on time.
• Our last scenario, Fig. 6-14(d), is the same as Fig. 6-14(c) except that now we assume all
the repeated attempts to retransmit the DR also fail due to lost segments.
• After N retries, the sender just gives up and releases the connection. Meanwhile, the
receiver times out and also exits.
Error Control and Flow Control:
• This section considers how connections are managed while they are in use.
• The key issues are error control and flow control.
• Error control is ensuring that the data is delivered with the desired level of reliability,
usually that all of the data is delivered without any errors.
• Flow control is keeping a fast transmitter from overrunning a slow receiver.
• There is little duplication between the link and transport layers in practice.
• Even though the same mechanisms are used, there are differences in function and degree.
For a difference in function, consider error detection.
• The link layer checksum protects a frame while it crosses a single link.
• The transport layer checksum protects a segment while it crosses an entire network path.
It is an end-to-end check, which is not the same as having a check on every link.
Multiplexing:
• In the transport layer, the need for multiplexing can arise in a number of ways.
• For example, if only one network address is available on a host, all transport connections
on that machine have to use it.
• When a segment comes in, some way is needed to tell which process to give it to. This
situation, called multiplexing, is shown in Fig. 6-17(a).
• In this figure, four distinct transport connections all use the same network connection
(e.g., IP address) to the remote host.
• Suppose, for example, that a host has multiple network paths that it can use.
• If a user needs more bandwidth or more reliability than one of the network paths can
provide, a way out is to have a connection that distributes the traffic among multiple
network paths on a round-robin basis, as indicated in Fig. 6-17(b).
• This modus operandi is called inverse multiplexing.
TRANSPORT LAYER PROTOCOLS:
• The Internet has two main protocols in the transport layer, a connectionless
protocol and a connection-oriented one. The protocols complement each other.
• The connectionless protocol is UDP. It does almost nothing beyond sending
packets between applications, letting applications build their own protocols on top
as needed.
• The connection-oriented protocol is TCP. It does almost everything.
• It makes connections and adds reliability with retransmissions, along with flow
control and congestion control, all on behalf of the applications that use it.
• Since UDP is a transport layer protocol that typically runs in the operating system,
and protocols that use UDP typically run in user space, these uses might be
considered applications.
USER DATAGRAM PROTOCOL (UDP):
• The Internet protocol suite supports a connectionless transport protocol called
UDP (User Datagram Protocol).
• UDP provides a way for applications to send encapsulated IP datagrams without
having to establish a connection.
• UDP transmits segments consisting of an 8-byte header followed by the payload.
• The header is shown in Fig. 6-27. The two ports serve to identify the endpoints
within the source and destination machines.
• When a UDP packet arrives, its payload is handed to the process attached to the
destination port.
• This attachment occurs when the BIND primitive is used. Without the port fields, the
transport layer would not know what to do with each incoming packet. With them,
it delivers the embedded segment to the correct application.
• The UDP header consists of four fields:- (i) Source Port (ii) Destination Port (iii)
UDP length (iv) UDP checksum.
• The source port is primarily needed when a reply must be sent back to the source.
By copying the Source port field from the incoming segment into the Destination
port field of the outgoing segment, the process sending the reply can specify
which process on the sending machine is to get it.
• UDP length: includes the 8-byte header and the data.
• An optional Checksum is also provided for extra reliability. It checksums the
header, the data, and a conceptual IP pseudoheader.
• The pseudo-header for the case of IPv4 is shown in Fig. 6-28.
• It contains the 32-bit IPv4 addresses of the source and destination machines, the
protocol number for UDP (17), and the byte count for the UDP segment (including
the header).
• The purpose of using a pseudo-header is to verify that the UDP datagram has
reached its correct destination.
• The correct destination consists of a specific machine and a specific protocol port
number within that machine.
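The checksum computation itself can be sketched from scratch: the standard Internet one's-complement sum is taken over the pseudo-header concatenated with the whole UDP segment (the addresses, ports, and payload below are illustrative, not from the text):

```python
import socket
import struct

def ones_complement_sum(data: bytes) -> int:
    """16-bit one's-complement sum with end-around carry."""
    if len(data) % 2:
        data += b"\x00"                              # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # fold the carry back in
    return total

def udp_checksum(src_ip: str, dst_ip: str, segment: bytes) -> int:
    """Checksum over the IPv4 pseudo-header plus the whole UDP segment."""
    pseudo = struct.pack("!4s4sBBH",
                         socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
                         0, 17, len(segment))        # zero byte, protocol 17, UDP length
    return (~ones_complement_sum(pseudo + segment)) & 0xFFFF

# Build a UDP segment with the checksum field (bytes 6-7) set to zero first.
header = struct.pack("!HHHH", 5353, 53, 8 + 4, 0)    # src port, dst port, length, cksum=0
segment = header + b"ping"
cksum = udp_checksum("10.0.0.1", "10.0.0.2", segment)
```

A receiver verifies the segment by summing the pseudo-header and the segment including the transmitted checksum; the one's-complement result should be 0xFFFF.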
• An application that uses UDP this way is DNS (the Domain Name System).
• A program that needs to look up the IP address of some host name, for example,
www.cs.berkeley.edu, can send a UDP packet containing the host name to a DNS
server.
• The server replies with a UDP packet containing the host’s IP address.
• No setup is needed in advance and no release is needed afterward. Just two
messages go over the network.
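The two-message request-reply pattern can be sketched on the loopback interface; this uses a toy echo service rather than a real DNS server (no DNS message format is implemented here), but the UDP mechanics are the same: no setup, one datagram each way, and the reply is addressed using the source port of the request:

```python
import socket

# "Server": bind a datagram socket; port 0 lets the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server.settimeout(5)

# "Client": no connection setup; just send a datagram to the server's address.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(5)
client.sendto(b"www.cs.berkeley.edu", server.getsockname())

# The server reads one datagram; addr carries the client's IP and source
# port, which is all it needs to address the reply.
query, addr = server.recvfrom(512)
server.sendto(b"reply:" + query, addr)

reply, _ = client.recvfrom(512)
client.close()
server.close()
```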
TCP (Transmission Control Protocol):
• TCP (Transmission Control Protocol) was specifically designed to provide a
reliable end-to-end byte stream over an unreliable internetwork.
• TCP was designed to dynamically adapt to properties of the internetwork and to be
robust in the face of many kinds of failures.
• Each machine supporting TCP has a TCP transport entity, which accepts user data
streams from local processes, breaks them up into pieces not exceeding 64 KB,
and sends each piece as a separate IP datagram.
• When these datagrams arrive at a machine, they are given to the TCP entity, which
reconstructs the original byte streams.
• It is up to TCP to time out and retransmit segments as needed, and to reassemble
datagrams into messages in the proper sequence.
• The transmission control protocol operates in connection-oriented mode.
• Data transmissions between end systems require a connection setup step.
• Once the connection is established, TCP provides a stream abstraction that provides
reliable, in-order delivery of data.
• To implement this type of stream data transfer, TCP uses reliability, flow control, and
congestion control.
• TCP is widely used in the Internet, as reliable data transfers are imperative for many
applications.
• The different issues to be considered are:
(i). The TCP Service Model
(ii). The TCP Protocol
(iii). The TCP Segment Header
(iv). TCP Sliding Window
(v). TCP Timer Management
(vi). TCP Congestion Control
TCP Service Model
• TCP service is obtained by having both the sender and receiver create endpoints called
sockets.
• Each socket has a socket number (address) consisting of the IP address of the host plus
a 16-bit number local to that host, called a port (a port is the TCP name for a TSAP).
• To obtain TCP service a connection must be explicitly established between a socket on the
sending machine and a socket on the receiving machine.
• All TCP connections are full duplex and point-to-point, i.e., multicasting and
broadcasting are not supported.
• A TCP connection is a byte stream, not a message stream, i.e., the data is delivered
as a stream of bytes.
• Message boundaries are not preserved end to end.
• For example, if the sending process does four 512-byte writes to a TCP stream, these data
may be delivered to the receiving process as four 512-byte chunks, two 1024-byte chunks,
one 2048-byte chunk, or some other way.
• There is no way for the receiver to detect the unit(s) in which the data were written, no
matter how hard it tries.
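Because TCP does not preserve write boundaries, applications that need messages must add their own framing on top of the byte stream. A common sketch is a length prefix; the 4-byte length header below is an illustrative convention, not part of TCP:

```python
import struct

def frame(msg: bytes) -> bytes:
    """Prefix each message with its length so the receiver can re-split the stream."""
    return struct.pack("!I", len(msg)) + msg

def deframe(buf: bytes):
    """Extract complete messages from a buffer; return (messages, leftover bytes)."""
    msgs = []
    while len(buf) >= 4:
        (n,) = struct.unpack("!I", buf[:4])
        if len(buf) < 4 + n:
            break                       # this message has not fully arrived yet
        msgs.append(buf[4:4 + n])
        buf = buf[4 + n:]
    return msgs, buf

# Four 512-byte writes may arrive as one big chunk; framing recovers them.
stream = b"".join(frame(bytes([i]) * 512) for i in range(4))
messages, leftover = deframe(stream)
```

The same deframe() works no matter how TCP splits the stream: partial data simply stays in the leftover buffer until more bytes arrive.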
TCP Protocol:
• A key feature of TCP, and one which dominates the protocol design, is that every
byte on a TCP connection has its own 32-bit sequence number.
• When the Internet began, the lines between routers were mostly 56-kbps leased
lines, so a host blasting away at full speed took over 1 week to cycle through the
sequence numbers.
• The basic protocol used by TCP entities is the sliding window protocol.
• When a sender transmits a segment, it also starts a timer.
• When the segment arrives at the destination, the receiving TCP entity sends back a
segment (with data if any exist, otherwise without data) bearing an
acknowledgement number equal to the next sequence number it expects to receive.
• If the sender's timer goes off before the acknowledgement is received, the sender
transmits the segment again.
TCP Segment Header:
• Every segment begins with a fixed-format, 20-byte header.
• The fixed header may be followed by header options. After the options, if any, up to
65,535 - 20 - 20 = 65,495 data bytes may follow, where the first 20 refer to the IP header
and the second to the TCP header.
• Segments without any data are legal and are commonly used for acknowledgements and
control messages.
• The Source port and Destination port fields identify the local end points of the
connection.
• Sequence number: Specifies the sequence number of the first data byte in the segment.
• Acknowledgement Number: Specifies the next byte expected.
• The TCP header length tells how many 32-bit words are contained in the TCP
header.
• Now come eight 1-bit flags. CWR and ECE are used to signal congestion.
• URG is set to 1 if the Urgent pointer is in use.
• The Urgent pointer is used to indicate a byte offset from the current sequence
number at which urgent data are to be found.
• The ACK bit is set to 1 to indicate that the Acknowledgement number is valid.
This is the case for nearly all packets.
• If ACK is 0, the segment does not contain an acknowledgement, so the
Acknowledgement number field is ignored.
• The PSH bit indicates PUSHed data. The receiver is hereby kindly requested to
deliver the data to the application upon arrival and not buffer it until a full buffer
has been received (which it might otherwise do for efficiency).
• The RST bit is used to abruptly reset a connection that has become confused due
to a host crash or some other reason.
• The SYN bit is used to establish connections.
• The FIN bit is used to release a connection. It specifies that the sender has no
more data to transmit.
• Flow control in TCP is handled using a variable-sized sliding window.
• The Window size field tells how many bytes may be sent starting at the byte
acknowledged.
• A Checksum is also provided for extra reliability.
• It checksums the header, the data, and a conceptual pseudoheader in exactly the
same way as UDP, except that the pseudoheader has the protocol number for TCP
(6) and the checksum is mandatory.
• The Options field provides a way to add extra facilities not covered by the regular
header. Many options have been defined and several are commonly used.
• A widely used option is the one that allows each host to specify the MSS
(Maximum Segment Size) it is willing to accept.
• The timestamp option carries a timestamp sent by the sender and echoed by the
receiver.
• Finally, the SACK (Selective ACKnowledgement) option lets the receiver report exactly
which ranges of data it has received. With SACK, the sender is explicitly aware of what
data the receiver has and hence can determine what data should be retransmitted.
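The fixed 20-byte header described above can be unpacked field by field. The sketch below follows that layout: the data offset occupies the top 4 bits of byte 12 (counted in 32-bit words), and the eight flag bits sit in byte 13, with FIN in the lowest bit and CWR in the highest:

```python
import struct

FLAG_BITS = {"FIN": 0x01, "SYN": 0x02, "RST": 0x04, "PSH": 0x08,
             "ACK": 0x10, "URG": 0x20, "ECE": 0x40, "CWR": 0x80}

def parse_tcp_header(hdr: bytes) -> dict:
    (src, dst, seq, ack,
     off_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", hdr[:20])
    return {
        "src_port": src, "dst_port": dst,
        "seq": seq, "ack": ack,
        "header_len": (off_flags >> 12) * 4,   # data offset is in 32-bit words
        "flags": {name: bool(off_flags & bit) for name, bit in FLAG_BITS.items()},
        "window": window, "checksum": checksum, "urgent_ptr": urgent,
    }

# A SYN segment: header length 5 words (20 bytes), only the SYN bit set.
syn = struct.pack("!HHIIHHHH", 49152, 80, 1000, 0, (5 << 12) | 0x02, 65535, 0, 0)
info = parse_tcp_header(syn)
```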
TCP Connection Establishment/Management:
• To establish a connection, one side, say, the server, passively waits for an
incoming connection by executing the LISTEN and ACCEPT primitives in that
order, either specifying a specific source or nobody in particular.
• The other side, say, the client, executes a CONNECT primitive, specifying the IP
address and port to which it wants to connect, the maximum TCP segment size it
is willing to accept, and optionally some user data (e.g., a password).
• The CONNECT primitive sends a TCP segment with the SYN bit on and ACK bit
off and waits for a response.
• When this segment arrives at the destination, the TCP entity there checks to see if
there is a process that has done a LISTEN on the port given in the Destination port
field. If not, it sends a reply with the RST bit on to reject the connection.
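The two sides can be sketched together on the loopback interface; the kernel performs the SYN / SYN+ACK / ACK exchange inside connect() and accept(), so application code only sees the primitives (running the client in a thread here is just a way to exercise both ends in one process):

```python
import socket
import threading

# Server side: SOCKET, BIND, LISTEN, then ACCEPT (blocks until a client turns up).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
addr = server.getsockname()

def client() -> None:
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect(addr)            # sends SYN, waits for SYN+ACK, sends ACK
    c.sendall(b"hello")
    c.close()

t = threading.Thread(target=client)
t.start()
conn, peer = server.accept()   # returns once the three-way handshake completes
data = conn.recv(16)
t.join()
conn.close()
server.close()
```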
TCP Connection Release:
➢ Although TCP connections are full duplex, to understand how connections are
released it is best to think of them as a pair of simplex connections.
➢ Each simplex connection is released independently of its sibling. To release a
connection, either party can send a TCP segment with the FIN bit set, which means
that it has no more data to transmit.
➢ When the FIN is acknowledged, that direction is shut down for new data. Data
may continue to flow indefinitely in the other direction, however.
➢ When both directions have been shut down, the connection is released.
➢ Normally, four TCP segments are needed to release a connection, one FIN and
one ACK for each direction. However, it is possible for the first ACK and the second
FIN to be contained in the same segment, reducing the total count to three.
TCP Sliding Window:
• The sliding window is a technique that allows TCP to adjust the amount of data
that can be sent or received at any given time.
• The sliding window is a variable-sized buffer that represents the available space in
the sender's or the receiver's end of the connection.
• The sender can only send data that fits within the window, and the receiver can
only accept data that fits within the window.
• The window size can change depending on the network congestion, the available
bandwidth, and the feedback from the other end of the connection.
• Window management in TCP decouples the issues of acknowledgement of the
correct receipt of segments and receiver buffer allocation.
• For example, suppose the receiver has a 4096-byte buffer, as shown in Fig. 6-40.
• If the sender transmits a 2048-byte segment that is correctly received, the receiver
will acknowledge the segment.
• However, since it now has only 2048 bytes of buffer space (until the application
removes some data from the buffer), it will advertise a window of 2048 starting at
the next byte expected.
• Now the sender transmits another 2048 bytes, which are acknowledged, but the
advertised window is of size 0.
• The sender must stop until the application process on the receiving host has
removed some data from the buffer, at which time TCP can advertise a larger
window and more data can be sent.
• When the window is 0, the sender may not normally send segments, with two
exceptions.
• First, urgent data may be sent, for example, to allow the user to kill the process
running on the remote machine.
• Second, the sender may send a 1-byte segment to force the receiver to reannounce
the next byte expected and the window size. This packet is called a window probe.
Silly Window Syndrome
• Silly Window Syndrome is a problem that arises due to a poor implementation
of TCP.
• It degrades the TCP performance and makes the data transmission extremely
inefficient.
• The window size shrinks to the point where the data being transmitted is
smaller than the TCP header.
• The problem arises due to the following causes-
1. The sender repeatedly transmitting data in small segments
2. The receiver repeatedly accepting only a few bytes at a time
• Sender Transmitting Data In Small Segments Repeatedly-
• Consider an application that generates one byte of data to send at a time.
• A poor implementation of TCP causes the sender to send each byte of data in an
individual TCP segment.
• This problem is solved using Nagle’s Algorithm.
Nagle’s algorithm suggests-
• The sender should send only the first byte immediately on receiving one byte of data
from the application.
• The sender should buffer all subsequent bytes until the outstanding byte gets
acknowledged.
• In other words, the sender should wait for about 1 RTT.
• After receiving the acknowledgement, the sender should send all the buffered data in
one TCP segment.
• Then, the sender should buffer the data again until the previously sent data gets
acknowledged.
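In the sockets API, Nagle's algorithm is enabled by default and can be disabled per socket with the TCP_NODELAY option, which is useful for interactive applications that cannot afford to wait an RTT (the default-off value of the option is the behavior on Linux and most common platforms):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Nagle's algorithm is on by default, i.e., TCP_NODELAY reads as 0.
default = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)

# Disabling Nagle makes each write go out immediately in its own segment.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
s.close()
```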
Receiver Accepting Only Few Bytes Repeatedly-
• Consider a receiver that continues to be unable to process all the incoming data.
• In such a case, its window size becomes smaller and smaller.
• A stage arrives when it repeatedly advertises a window size of 1 byte to the sender.
• This problem is solved using Clark’s Solution.
Clark’s solution suggests-
• The receiver should not send a window update for 1 byte.
• The receiver should wait until it has a decent amount of buffer space available.
• The receiver should then advertise that window size to the sender.
TCP Timer Management:
• Timers used by TCP to avoid excessive delays during communication are called TCP timers.
• The 4 important timers used by a TCP implementation are-
1. Time Out Timer
2. Time Wait Timer
3. Keep Alive Timer
4. Persistent Timer
Time Out Timer-
• TCP uses a time out timer for retransmission of lost segments.
• The sender starts a time out timer after transmitting a TCP segment to the receiver.
• If the sender receives an acknowledgement before the timer goes off, it stops the timer.
• If the sender does not receive any acknowledgement and the timer goes off, then TCP
retransmission occurs.
• The sender retransmits the same segment and resets the timer.
• The value of the time out timer is dynamic and changes with the amount of traffic in the network.
• The time out timer is also called the retransmission timer.
Time Wait Timer-
• TCP uses a time wait timer during connection termination.
• The sender starts the time wait timer after sending the ACK for the second FIN segment.
• It allows the final acknowledgement to be resent if it gets lost.
• The value of the time wait timer is usually set to twice the maximum lifetime of a TCP segment.
Keep Alive Timer-
• TCP uses a keep alive timer to prevent long idle TCP connections.
• Each time the server hears from the client, it resets the keep alive timer to 2 hours.
• If the server does not hear from the client for 2 hours, it sends 10 probe segments to the client.
• These probe segments are sent at a gap of 75 seconds.
• If the server receives no response after sending 10 probe segments, it assumes that the client is down.
• Then, the server terminates the connection automatically.
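In the sockets API, keep-alive is off by default and must be enabled per socket. The idle time, probe interval, and probe count above map onto Linux-specific socket options; the constants may not exist on every platform, hence the hasattr guard in this sketch:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Turn the keep alive timer on for this socket.
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Linux-specific tuning: 2 hours idle, probes 75 s apart, give up after 10.
if hasattr(socket, "TCP_KEEPIDLE"):
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 7200)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 75)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 10)

enabled = s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)
s.close()
```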
Persistent Timer
• TCP uses a persistent timer to deal with a zero-window-size deadlock situation.
• It keeps the window size information flowing even if the other end advertises a
zero-size receive window.
• The sender starts the persistent timer on receiving an ACK from the receiver with a
zero window size.
• When the persistent timer goes off, the sender sends a special segment to the receiver.
• This special segment is called a probe segment and contains only 1 byte of new
data.
• The response sent by the receiver to the probe segment gives the updated window
size.
• If the updated window size is non-zero, it means data can be sent now.
• If the updated window size is still zero, the persistent timer is set again and the
cycle repeats.
