Module-4-TransportLayer - 6.1 - 6.2

The Transport Layer:

The Transport Service, Elements of transport layer


(Module 4)

Prepared by
Mrs. Deeksha
Assistant Professor,
Acharya Institute of Technology, Bangalore
Introduction
• The network layer provides end-to-end packet delivery using datagrams or virtual
circuits.
• The transport layer builds on the network layer to provide data transport from a process
on a source machine to a process on a destination machine with a desired level of
reliability that is independent of the physical networks currently in use.
6.1 THE TRANSPORT SERVICE
Services Provided to the Upper Layers
• The ultimate goal of the transport layer is to provide efficient, reliable, and cost-effective
data transmission service to its users, that is processes in the application layer.
• To achieve this, the transport layer makes use of the services provided by the network layer.
The software and/or hardware within the transport layer that does the work is called the
transport entity.
• The transport entity can be located in the operating system kernel, in a library package
bound into network applications, in a separate user process, or even on the network
interface card.
• The first two options are most common on the Internet. The (logical) relationship of the
network, transport, and application layers is illustrated in Fig. 6-1.
• There are two types of transport service. The connection-oriented transport service is
similar to the connection-oriented network service in many ways.
• In both cases, connections have three phases: establishment, data transfer, and release.
Addressing and flow control are also similar in both layers.
• The connectionless transport service is also very similar to the connectionless network
service.
• Although the transport layer services are similar to the network layer services, two separate
layers are needed because the transport code runs entirely on the users’ machines, whereas the
network layer mostly runs on the routers, which are operated by the carrier.
• The existence of the transport layer makes it possible for the transport service to be more
reliable than the underlying network.
Transport Service Primitives
• To allow users to access the transport service, the transport layer must provide some
operations to application programs, that is, a transport service interface. Each transport
service has its own interface.
• The transport service is similar to the network service, but there are also some important
differences.
• The main difference is that the network service is intended to model the service offered by
real networks.
• Real networks can lose packets, so the network service is generally unreliable.
• The connection-oriented transport service, in contrast, is reliable. Real networks are not
error-free, but that is precisely the purpose of the transport layer: to provide a reliable
service on top of an unreliable network.
• The transport layer can also provide unreliable (datagram) service.
• A second difference between the network service and transport service is whom the services
are intended for.
• To get an idea of what a transport service might be like, consider the five primitives listed in
Fig. 6-2.
• To see how these primitives might be used, consider an application with a server and a
number of remote clients. To start with, the server executes a LISTEN primitive, typically by
calling a library procedure that makes a system call that blocks the server until a client turns
up.
• When a client wants to talk to the server, it executes a CONNECT primitive. The transport
entity carries out this primitive by blocking the caller and sending a packet to the server.
Encapsulated in the payload of this packet is a transport layer message for the server’s
transport entity.
• The client’s CONNECT call causes a CONNECTION REQUEST segment to be sent to the server.
When it arrives, the transport entity checks to see that the server is blocked on a LISTEN. If
so, it then unblocks the server and sends a CONNECTION ACCEPTED segment back to the
client. When this segment arrives, the client is unblocked and the connection is established.
• Data can now be exchanged using the SEND and RECEIVE primitives. In the simplest form,
either party can do a (blocking) RECEIVE to wait for the other party to do a SEND. When the
segment arrives, the receiver is unblocked. It can then process the segment and send a
reply.
• When a connection is no longer needed, it must be released to free up table space within
the two transport entities. Disconnection has two variants: asymmetric and symmetric. In
the asymmetric variant, either transport user can issue a DISCONNECT primitive, which
results in a DISCONNECT segment being sent to the remote transport entity. Upon its arrival,
the connection is released.
• In the symmetric variant, each direction is closed separately, independently of the other
one. When one side does a DISCONNECT, that means it has no more data to send but it is
still willing to accept data from its partner. In this model, a connection is released when both
sides have done a DISCONNECT.
• The term segment is used for messages sent from transport entity to transport entity. TCP,
UDP, and other Internet protocols use this term.
• Thus, segments (exchanged by the transport layer) are contained in packets (exchanged by
the network layer). In turn, these packets are contained in frames (exchanged by the data
link layer).
• When a frame arrives, the data link layer processes the frame header and, if the
destination address matches for local delivery, passes the contents of the frame payload
field up to the network entity. The network entity similarly processes the packet header
and then passes the contents of the packet payload up to the transport entity.
• A state diagram for connection establishment and release for these simple primitives is
given in Fig. 6-4. Each transition is triggered by some event, either a primitive executed by
the local transport user or an incoming packet.
• For simplicity, it is assumed that each segment is separately acknowledged. It is also
assumed that a symmetric disconnection model is used, with the client going first.
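The state diagram of Fig. 6-4 can be sketched as a small transition table. This is a hedged illustration only: the state and event names below are assumptions chosen to match the primitives in the text, not the book's exact labels.

```python
# Sketch of the connection state machine of Fig. 6-4. State and event
# names are invented for illustration; lowercase events are incoming
# segments, uppercase events are primitives issued by the local user.
TRANSITIONS = {
    ("IDLE", "LISTEN"): "PASSIVE_ESTABLISHMENT_PENDING",
    ("IDLE", "CONNECT"): "ACTIVE_ESTABLISHMENT_PENDING",
    ("PASSIVE_ESTABLISHMENT_PENDING", "connection_request_received"): "ESTABLISHED",
    ("ACTIVE_ESTABLISHMENT_PENDING", "connection_accepted_received"): "ESTABLISHED",
    ("ESTABLISHED", "DISCONNECT"): "ACTIVE_DISCONNECT_PENDING",
    ("ESTABLISHED", "disconnect_request_received"): "PASSIVE_DISCONNECT_PENDING",
    ("ACTIVE_DISCONNECT_PENDING", "disconnect_request_received"): "IDLE",
    ("PASSIVE_DISCONNECT_PENDING", "DISCONNECT"): "IDLE",
}

def run(events, state="IDLE"):
    """Drive the machine through a list of events; an event with no
    transition from the current state raises KeyError."""
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

# A client's full life cycle: connect, then symmetric release, client first.
client = run(["CONNECT", "connection_accepted_received",
              "DISCONNECT", "disconnect_request_received"])
print(client)  # IDLE
```

Each transition is triggered either by a local primitive or by an incoming segment, exactly as the text describes; the symmetric release shows up as the pair of DISCONNECT paths back to IDLE.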
Berkeley Sockets
These socket primitives are now widely used for Internet programming on many operating
systems, especially UNIX-based systems.
• The first four primitives in the list are executed in that order by servers. The SOCKET primitive
creates a new endpoint and allocates table space for it within the transport entity.
• The parameters of the call specify the addressing format to be used, the type of service
desired (e.g., reliable byte stream), and the protocol.
• A successful SOCKET call returns an ordinary file descriptor for use in succeeding calls, the
same way an OPEN call on a file does.
• Newly created sockets do not have network addresses. These are assigned using the BIND
primitive. Once a server has bound an address to a socket, remote clients can connect to it.
• The LISTEN call allocates space to queue incoming calls for the case that several clients try to
connect at the same time.
• To block waiting for an incoming connection, the server executes an ACCEPT primitive. When
a segment asking for a connection arrives, the transport entity creates a new socket with the
same properties as the original one and returns a file descriptor for it.
• The server can then fork off a process or thread to handle the connection on the new socket
and go back to waiting for the next connection on the original socket. ACCEPT returns a file
descriptor, which can be used for reading and writing in the standard way, the same as for
files.
• At the client side, a socket must first be created using the SOCKET primitive, but BIND is not
required since the address used does not matter to the server.
• The CONNECT primitive blocks the caller and actively starts the connection process. When it
completes (i.e., when the appropriate segment is received from the server), the client
process is unblocked and the connection is established.
• Both sides can now use SEND and RECEIVE to transmit and receive data over the full-duplex
connection. The standard UNIX READ and WRITE system calls can also be used if none of the
special options of SEND and RECEIVE are required.
• Connection release with sockets is symmetric. When both sides have executed a CLOSE
primitive, the connection is released.
• The socket API is often used with the TCP protocol to provide a connection-oriented service
called a reliable byte stream, which is simply the reliable bit pipe.
• Sockets can also be used with transport protocols that provide a message stream rather
than a byte stream and that do or do not have congestion control. DCCP (Datagram
Congestion Controlled Protocol) is a version of UDP with congestion control.
• Newer protocols and interfaces have been devised that support groups of related streams
more effectively and simply for the application.
• Two examples are SCTP (Stream Control Transmission Protocol) and SST (Structured Stream
Transport).
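The sequence of primitives described above (SOCKET, BIND, LISTEN, ACCEPT on the server; SOCKET and CONNECT on the client; SEND, RECEIVE, and a symmetric CLOSE on both) can be sketched in Python, whose socket module exposes the same Berkeley API. This is a minimal illustrative sketch, not the book's example; binding to port 0 (any free port) is a convenience assumption for the demo.

```python
# Minimal sketch of the Berkeley socket primitives from this section.
import socket
import threading

def server(ready):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # SOCKET
    srv.bind(("127.0.0.1", 0))                               # BIND (port 0 = any free port)
    srv.listen(5)                                            # LISTEN: queue up to 5 calls
    ready["port"] = srv.getsockname()[1]
    ready["event"].set()
    conn, _ = srv.accept()            # ACCEPT blocks until a client turns up,
    data = conn.recv(1024)            # then returns a new socket; RECEIVE
    conn.sendall(data.upper())        # SEND a reply on the same connection
    conn.close()                      # CLOSE: one side of the symmetric release
    srv.close()

ready = {"event": threading.Event()}
t = threading.Thread(target=server, args=(ready,))
t.start()
ready["event"].wait()                 # wait until the server has a port bound

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # SOCKET (no BIND needed)
cli.connect(("127.0.0.1", ready["port"]))                # CONNECT blocks until accepted
cli.sendall(b"hello")
reply = cli.recv(1024)
cli.close()                           # CLOSE: the other side of the release
t.join()
print(reply)  # b'HELLO'
```

Note that ACCEPT hands back a fresh socket with the same properties as the listening one, so the server could fork a thread per connection and keep listening, just as the text describes.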
The server program (refer slide-15)
• It includes some standard headers. The SERVER PORT is defined as 12345. This number was
chosen arbitrarily. Any number between 1024 and 65535 will work just as well, as long as it is
not in use by some other process; ports below 1023 are reserved for privileged users.
• The next two lines in the server define two constants needed. The first one determines the
chunk size in bytes used for the file transfer. The second one determines how many pending
connections can be held before additional ones are discarded upon arrival.
• After the declarations of local variables, the server code begins. It starts out by initializing a
data structure that will hold the server’s IP address. This data structure will soon be bound to
the server’s socket. The call to memset sets the data structure to all 0s.
• The three assignments following it fill in three of its fields. The last of these contains the
server’s port. The functions htonl and htons have to do with converting values to a standard
format so the code runs correctly on both little-endian machines and big-endian machines.
• The server creates a socket and checks for errors (indicated by s < 0).
• The call to setsockopt is needed to allow the port to be reused so the server can run
indefinitely, fielding request after request. Now the IP address is bound to the socket and a
check is made to see if the call to bind succeeded.
• The final step in the initialization is the call to listen to announce the server’s willingness to
accept incoming calls and tell the system to hold up to QUEUE SIZE of them in case new
requests arrive while the server is still processing the current one. If the queue is full and
additional requests arrive, they are quietly discarded.
• The call to accept blocks the server until some client tries to establish a connection with it. If
the accept call succeeds, it returns a socket descriptor that can be used for reading and
writing.
• Sockets are bidirectional, so sa (the accepted socket) can be used for reading from the
connection and also for writing to it.
• After the connection is established, the server reads the file name from it. If the name is not
yet available, the server blocks waiting for it. After getting the file name, the server opens
the file and enters a loop that alternately reads blocks from the file and writes them to the
socket until the entire file has been copied.
• Then the server closes the file and the connection and waits for the next connection to
show up. It repeats this loop forever.
• To understand how the client program works, it is necessary to understand how it is invoked.
• Assuming it is called client, a typical call is client flits.cs.vu.nl /usr/tom/filename >f
• This call only works if the server is already running on flits.cs.vu.nl and the file
/usr/tom/filename exists and the server has read access to it.
• If the call is successful, the file is transferred over the Internet and written to f, after which
the client program exits. Since the server continues after a transfer, the client can be started
again and again to get other files.
• The client code starts with some includes and declarations. Execution begins by checking to
see if it has been called with the right number of arguments (argc = 3 means the program
name plus two arguments). Note that argv[1] contains the name of the server (e.g.,
flits.cs.vu.nl) and is converted to an IP address by gethostbyname.
• After that, the client attempts to establish a TCP connection to the server, using connect. If
the server is up and running on the named machine and attached to SERVER PORT and is
either idle or has room in its listen queue, the connection will (eventually) be established.
Using the connection, the client sends the name of the file by writing on the socket.
• The procedure fatal prints an error message and exits. The server needs the same
procedure, but it was omitted due to lack of space on the page.
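The server and client described above are C programs; the same logic can be sketched in Python, where the socket module hides the memset/htons/setsockopt plumbing. This is an illustrative analogue, not the original code: BUF_SIZE and QUEUE_SIZE mirror the constants in the text, the port is chosen automatically rather than fixed at 12345, and the single recv for the file name is a simplification.

```python
# Hedged Python analogue of the file-transfer server and client.
import os, socket, tempfile, threading

BUF_SIZE = 4096    # chunk size in bytes used for the file transfer
QUEUE_SIZE = 10    # pending connections held before new ones are discarded

def serve_one(holder):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # allow port reuse
    srv.bind(("127.0.0.1", 0))
    srv.listen(QUEUE_SIZE)
    holder["port"] = srv.getsockname()[1]
    holder["ready"].set()
    sa, _ = srv.accept()                   # blocks until some client connects
    name = sa.recv(BUF_SIZE).decode()      # first message is the file name
    with open(name, "rb") as f:            # open the file, then copy it
        while True:
            chunk = f.read(BUF_SIZE)
            if not chunk:
                break
            sa.sendall(chunk)              # block-by-block, file to socket
    sa.close()                             # closing signals end of file
    srv.close()

def fetch(port, name):
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", port))
    cli.sendall(name.encode())             # send the file name on the socket
    data = b""
    while True:
        chunk = cli.recv(BUF_SIZE)
        if not chunk:                      # server closed: transfer complete
            break
        data += chunk
    cli.close()
    return data

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"transport layer demo")
holder = {"ready": threading.Event()}
t = threading.Thread(target=serve_one, args=(holder,))
t.start()
holder["ready"].wait()
content = fetch(holder["port"], f.name)
t.join()
os.unlink(f.name)
print(content)  # b'transport layer demo'
```

As in the C version, the sockets are bidirectional: the accepted socket sa is read to obtain the file name and then written to carry the file contents back.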
6.2 ELEMENTS OF TRANSPORT PROTOCOLS
• The transport service is implemented by a transport protocol used between the two
transport entities.
• A transport protocol resembles a data link protocol in some respects, but there are
important differences between the two, due to major dissimilarities between the
environments in which the two protocols operate.
• At the data link layer, two routers communicate directly via a physical channel, whether
wired or wireless, whereas at the transport layer, this physical channel is replaced by the
entire network.
• Another difference between the data link layer and the transport layer is the potential
existence of storage capacity in the network. When a router sends a packet over a link, it may
arrive or be lost, but it cannot bounce around for a while, go into hiding in a far corner of the
world, and suddenly emerge after other packets that were sent much later.
• If the network uses datagrams, which are independently routed inside, there is a non-
negligible probability that a packet may take the scenic route and arrive late and out of the
expected order, or even that duplicates of the packet will arrive. The consequences of the
network’s ability to delay and duplicate packets can sometimes be disastrous and can require
the use of special protocols to correctly transport information
• A final difference between the data link and transport layers is one of degree rather than of
kind. Buffering and flow control are needed in both layers, but the presence in the transport
layer of a large and varying number of connections with bandwidth that fluctuates as the
connections compete with each other may require a different approach than we used in the
data link layer
Addressing
• When an application process wishes to set up a connection to a remote application
process, it must specify which one to connect to.
• The method normally used is to define transport addresses to which processes can listen
for connection requests. In the Internet, these endpoints are called ports.
• The generic term TSAP (Transport Service Access Point) is used to mean a specific endpoint
in the transport layer. The analogous endpoints in the network layer (i.e., network layer
addresses) are called NSAPs (Network Service Access Points). IP addresses are examples of
NSAPs.
• Figure 6-8 illustrates the relationship between the NSAPs, the TSAPs, and a transport
connection. Application processes, both clients and servers, can attach themselves to a
local TSAP to establish a connection to a remote TSAP. These connections run through
NSAPs on each host, as shown. The purpose of having TSAPs is that in some networks, each
computer has a single NSAP, so some way is needed to distinguish the multiple transport
endpoints that share that NSAP.
A possible scenario for a transport connection is as follows:
1. A mail server process attaches itself to TSAP 1522 on host 2 to wait for an incoming call.
How a process attaches itself to a TSAP is outside the networking model and depends
entirely on the local operating system. A call such as our LISTEN might be used, for
example.
2. An application process on host 1 wants to send an email message, so it attaches itself to
TSAP 1208 and issues a CONNECT request. The request specifies TSAP 1208 on host 1 as
the source and TSAP 1522 on host 2 as the destination. This action ultimately results in a
transport connection being established between the application process and the server.
3. The application process sends over the mail message.
4. The mail server responds to say that it will deliver the message.
5. The transport connection is released.
• Well-known services have stable TSAP addresses. For example, the /etc/services file on UNIX
systems lists which servers are permanently attached to which ports, including the fact that
the mail server is found on TCP port 25.
• User processes, in general, often want to talk to other user processes that do not have
TSAP addresses that are known in advance, or that may exist for only a short time. To
handle this situation, an alternative scheme can be used. In this scheme, there exists a
special process called a portmapper.
• To find the TSAP address corresponding to a given service name, such as ‘‘BitTorrent,’’ a
user sets up a connection to the portmapper (which listens to a well-known TSAP). The
user then sends a message specifying the service name, and the portmapper sends back
the TSAP address.
• Then the user releases the connection with the portmapper and establishes a new one
with the desired service
• When a new service is created, it must register itself with the portmapper, giving both its
service name (typically, an ASCII string) and its TSAP address.
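The register-then-lookup protocol just described can be sketched as a toy in-memory portmapper. This is a hedged illustration: the class name, the "BitTorrent" service name, and the TSAP number 6881 are invented for the example, and a real portmapper would of course be reached over its own well-known TSAP rather than by a method call.

```python
# Toy portmapper: services register a (name, TSAP) pair; users look the
# TSAP up by service name before connecting to the service itself.
class Portmapper:
    def __init__(self):
        self.registry = {}

    def register(self, service_name, tsap):
        """Called by a newly created service to advertise its TSAP."""
        self.registry[service_name] = tsap

    def lookup(self, service_name):
        """Called by a user; returns the TSAP address, or None if unknown."""
        return self.registry.get(service_name)

pm = Portmapper()
pm.register("BitTorrent", 6881)   # service registers its name and TSAP
print(pm.lookup("BitTorrent"))    # 6881
print(pm.lookup("unknown"))       # None
```

After the lookup, the user would release the connection with the portmapper and open a new one to the TSAP it was given, as the text describes.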
• Each machine that wishes to offer services to remote users has a special process server that
acts as a proxy for less heavily used servers.
• This server is called inetd on UNIX systems. It listens to a set of ports at the same time,
waiting for a connection request.
• Potential users of a service begin by doing a CONNECT request, specifying the TSAP address
of the service they want. If no server is waiting for them, they get a connection to the
process server, as shown in Fig. 6-9(a)
• After it gets the incoming request, the process server spawns the requested server, allowing
it to inherit the existing connection with the user. The new server does the requested work,
while the process server goes back to listening for new requests, as shown in Fig. 6-9(b). This
method is only applicable when servers can be created on demand
Connection Establishment
• A transport entity sends a CONNECTION REQUEST segment to the destination and waits for a
CONNECTION ACCEPTED reply. Problems arise when the network can lose, delay, corrupt,
and duplicate packets; this behaviour causes serious complications.
• Imagine a network that is so congested that acknowledgements hardly ever get back in time and
each packet times out and is retransmitted two or three times.
• Suppose that the network uses datagrams inside and that every packet follows a different route.
Some of the packets might get stuck in a traffic jam inside the network and take a long time to
arrive. That is, they may be delayed in the network and pop out much later, when the sender
thought that they had been lost.
• The problem is that the delayed duplicates are thought to be new packets. We cannot prevent
packets from being duplicated and delayed. But if and when this happens, the packets must be
rejected as duplicates and not processed as fresh packets. Solution is to give each connection a
unique identifier (i.e., a sequence number incremented for each connection established) chosen
by the initiating party and put in each segment, including the one requesting the connection.
• After each connection is released, each transport entity can update a table listing obsolete
connections as (peer transport entity, connection identifier) pairs. Whenever a connection
request comes in, it can be checked against the table to see if it belongs to a previously
released connection.
• Unfortunately, this scheme has a basic flaw: it requires each transport entity to maintain a
certain amount of history information indefinitely. This history must persist at both the source
and destination machines. Otherwise, if a machine crashes and loses its memory, it will no
longer know which connection identifiers have already been used by its peers.
• To simplify the problem, rather than allowing packets to live forever within the
network, we devise a mechanism to kill off aged packets that are still hobbling about. With this
restriction, the problem becomes somewhat more manageable.
• Packet lifetime can be restricted to a known maximum using one (or more) of the following
techniques:
• 1. Restricted network design.
• 2. Putting a hop counter in each packet.
• 3. Time-stamping each packet.
• The first technique includes any method that prevents packets from looping, combined with
some way of bounding delay including congestion over the longest possible path.
• The second method consists of having the hop count initialized to some appropriate value and
decremented each time the packet is forwarded. The network protocol simply discards any
packet whose hop counter becomes zero.
• The third method requires each packet to bear the time it was created, with the routers
agreeing to discard any packet older than some agreed-upon time. This latter method requires
the router clocks to be synchronized, which itself is a nontrivial task, and in practice a hop
counter is a close enough approximation to age.
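Technique 2 above, the hop counter, can be sketched in a few lines. The field name "hops" and the dictionary representation of a packet are assumptions for the illustration; the mechanism is the same one IP implements with its TTL field.

```python
# Sketch of bounding packet lifetime with a hop counter: each router
# decrements the counter and discards the packet when it reaches zero.
def forward(packet):
    """Return the packet to send onward, or None if its lifetime expired."""
    packet["hops"] = packet["hops"] - 1
    if packet["hops"] <= 0:
        return None        # aged packet is killed off by the network
    return packet

pkt = {"payload": "data", "hops": 3}
for _ in range(2):         # two hops consume two units of lifetime
    pkt = forward(pkt)
print(pkt)                 # {'payload': 'data', 'hops': 1}
print(forward(pkt))        # None: the third hop exhausts the counter
```

Because the counter can only go down, no packet can circulate longer than its initial hop count, which is what bounds the lifetime of delayed duplicates.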
• The solution is the three-way handshake. This establishment protocol involves one peer
checking with the other that the connection request is indeed current. The normal setup
procedure when host 1 initiates is shown in Fig. 6-11(a).
• Host 1 chooses a sequence number, x, and sends a CONNECTION REQUEST segment containing
it to host 2. Host 2 replies with an ACK segment acknowledging x and announcing its own initial
sequence number, y. Finally, host 1 acknowledges host 2’s choice of an initial sequence number
in the first data segment that it sends.
• Fig. 6-11(b) shows how the three-way handshake works in the presence of delayed duplicate
control segments. Here, the first segment is a delayed duplicate CONNECTION REQUEST
from an old connection. This segment arrives at host 2 without host 1’s knowledge.
• Host 2 reacts to this segment by sending host 1 an ACK segment, in effect asking for
verification that host 1 was indeed trying to set up a new connection. When host 1 rejects
host 2’s attempt to establish a connection, host 2 realizes that it was tricked by a delayed
duplicate and abandons the connection. In this way, a delayed duplicate does no damage.
• The worst case is when both a delayed CONNECTION REQUEST and an ACK are floating
around in the subnet. This case is shown in Fig. 6-11(c). As in the previous example, host 2
gets a delayed CONNECTION REQUEST and replies to it.
• At this point, it is crucial to realize that host 2 has proposed using y as the initial sequence
number for host 2 to host 1 traffic, knowing full well that no segments containing sequence
number y or acknowledgements to y are still in existence.
• When the second delayed segment (the old ACK) arrives at host 2, the fact that z has been
acknowledged rather than y tells host 2 that this, too, is an old duplicate.
• The important thing to realize here is that there is no combination of old segments that can
cause the protocol to fail and have a connection set up by accident when no one wants it.
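The key check in the Fig. 6-11(c) scenario can be distilled into one line: host 2 completes the connection only if host 1's follow-up acknowledges the initial sequence number y that host 2 actually proposed. The concrete numbers below are invented for the example.

```python
# Simplified duplicate-rejection check from the three-way handshake:
# the follow-up from host 1 must acknowledge host 2's fresh choice y.
def host2_accepts(proposed_y, ack_from_host1):
    """An old duplicate carries a stale acknowledgement (z), not y."""
    return ack_from_host1 == proposed_y

y = 77   # host 2's fresh initial sequence number for this connection
z = 12   # acknowledgement floating around from an old connection
print(host2_accepts(y, y))  # True: a genuine handshake completes
print(host2_accepts(y, z))  # False: the delayed duplicate is rejected
```

Since host 2 knows that no segments acknowledging y can still exist from an old connection, a matching acknowledgement proves the request is current, which is exactly why no combination of old segments can set up a connection by accident.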
Connection Release
• There are two approaches to terminating a connection: asymmetric release and symmetric
release.
• Asymmetric release is abrupt and may result in data loss. Consider the scenario of Fig. 6-12.
After the connection is established, host 1 sends a segment that arrives properly at host 2. Then
host 1 sends another segment. Unfortunately, host 2 issues a DISCONNECT before the second
segment arrives. The result is that the connection is released and data are lost.
• To avoid data loss, one way is to use symmetric release, in which each direction is released
independently of the other one.
• Here, a host can continue to receive data even after it has sent a DISCONNECT segment.
• Symmetric release does the job when each process has a fixed amount of data to send and
clearly knows when it has sent it.
• Figure 6-14 illustrates four scenarios of releasing using a three-way handshake. While this
protocol is not infallible, it is usually adequate.
• In Fig. 6-14(a), we see the normal case in which one of the users sends a DR
(DISCONNECTION REQUEST) segment to initiate the connection release. When it arrives, the
recipient sends back a DR segment and starts a timer, just in case its DR is lost.
• When this DR arrives, the original sender sends back an ACK segment and releases the
connection. Finally, when the ACK segment arrives, the receiver also releases the connection.
• Releasing a connection means that the transport entity removes the information about the
connection from its table of currently open connections and signals the connection’s owner
(the transport user) somehow.
• If the final ACK segment is lost, as shown in Fig. 6-14(b), the situation is saved by the timer.
When the timer expires, the connection is released.
• Now consider the case of the second DR being lost. The user initiating the disconnection
will not receive the expected response, will time out, and will start all over again. In Fig. 6-
14(c), we see how this works, assuming that the second time no segments are lost and all
segments are delivered correctly and on time.
• The last scenario, Fig. 6-14(d), is the same as Fig. 6-14(c) except that now we assume all
the repeated attempts to retransmit the DR also fail due to lost segments. After N retries,
the sender just gives up and releases the connection. Meanwhile, the receiver times out
and also exits.
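The retry logic of Fig. 6-14(c) and (d) can be sketched as a loop with a bounded number of attempts. The function and its callback are invented names for the illustration; a real implementation would use an actual retransmission timer rather than a return value.

```python
# Sketch of the initiator's side of symmetric release: retransmit the DR
# until the peer's DR reply arrives, or give up after N tries (Fig. 6-14(d)).
def release(send_dr, max_retries=3):
    """send_dr() returns True if the peer's DR reply arrived before timeout."""
    for attempt in range(max_retries):
        if send_dr():
            return "released after handshake"   # Fig. 6-14(c): a retry succeeded
    return "released after giving up"           # Fig. 6-14(d): all N retries failed

losses = iter([False, False, True])      # first two DRs are lost, third gets through
print(release(lambda: next(losses)))     # released after handshake
print(release(lambda: False))            # released after giving up
```

Either way the initiator ends up releasing the connection, matching the text: in the worst case the sender gives up after N retries while the receiver times out and also exits.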
Error Control and Flow Control
• Error and flow control ensures reliable communication between source and destination hosts
over a network.
• The transport layer is responsible for end-to-end communication and provides error detection,
correction, and flow control to ensure data integrity and efficient transmission.
• To manage connections, the key issues are error control and flow control. Error control is
ensuring that the data is delivered with the desired level of reliability, usually that all of the data
is delivered without any errors. Flow control is keeping a fast transmitter from overrunning a
slow receiver.
• The solutions used at the transport layer are the same mechanisms as those used at the
data link layer.
• A frame carries an error-detecting code (e.g., a CRC or checksum) that is used to check if
the information was correctly received.
• A frame carries a sequence number to identify itself and is retransmitted by the sender
until it receives an acknowledgement of successful receipt from the receiver. This is
called ARQ (Automatic Repeat reQuest).
• There is a maximum number of frames that the sender will allow to be outstanding at
any time, pausing if the receiver is not acknowledging frames quickly enough. If this
maximum is one packet, the protocol is called stop-and-wait. Larger windows enable
pipelining and improve performance on long, fast links.
• The sliding window protocol combines these features and is also used to support
bidirectional data transfer.
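The stop-and-wait case (window of one) can be sketched directly: one frame is outstanding at a time, and it is retransmitted until acknowledged. The lossy channel below is simulated by a list of booleans; real ARQ would of course use a retransmission timer and acknowledgement frames.

```python
# Minimal stop-and-wait ARQ sketch: one outstanding frame, resent on loss.
def stop_and_wait(frames, delivered_ok):
    """Send frames one at a time; delivered_ok yields False for each loss."""
    log = []
    ok = iter(delivered_ok)
    for seq, frame in enumerate(frames):
        while True:
            log.append(("send", seq))   # transmit frame seq
            if next(ok):                # ACK came back: advance to next frame
                break                   # otherwise the timer fires and we resend
    return log

# Frame 0 is lost once and retransmitted; then both frames get through.
print(stop_and_wait(["a", "b"], [False, True, True]))
# [('send', 0), ('send', 0), ('send', 1)]
```

The sequence number on each frame is what lets the receiver discard the duplicate when a retransmission follows a lost acknowledgement rather than a lost frame.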
• To differentiate the functions between data-link and transport layer, consider error
detection. The link layer checksum protects a frame while it crosses a single link. The
transport layer checksum protects a segment while it crosses an entire network path. It is
an end-to-end check, which is not the same as having a check on every link.
• The transport layer check that runs end-to-end is essential for correctness, and the link
layer checks are not essential but nonetheless valuable for improving performance.
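As a concrete example of an end-to-end check, the 16-bit ones'-complement Internet checksum can be computed over a whole segment by the sender and verified by the receiver regardless of how many links the segment crossed. The sample payload below is invented; the algorithm itself is the standard one used by TCP and UDP.

```python
# The 16-bit ones'-complement Internet checksum, an end-to-end check.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                 # pad to a whole number of 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF              # ones' complement of the folded sum

payload = b"segment!"                   # even-length payload for simplicity
cksum = internet_checksum(payload)
# Receiver's verification: checksumming the data together with the
# transmitted checksum field yields 0 when nothing was corrupted.
print(internet_checksum(payload + cksum.to_bytes(2, "big")))  # 0
```

A corrupted word changes the folded sum, so the receiver's recomputation no longer yields zero; this catches errors introduced anywhere on the path, including inside routers, which per-link checks cannot do.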
• Given that transport protocols generally use larger sliding windows, consider the issue of
buffering data more carefully. Since a host may have many connections, each of which is
treated separately, it may need a substantial amount of buffering for the sliding windows.
• The buffers are needed at both the sender and the receiver. Certainly they are needed at
the sender to hold all transmitted but as yet unacknowledged segments. They are needed
there because these segments may be lost and need to be retransmitted.
• However, since the sender is buffering, the receiver may or may not dedicate specific buffers
to specific connections, as it sees fit. The receiver may, for example, maintain a single buffer
pool shared by all connections.
• When a segment comes in, an attempt is made to dynamically acquire a new buffer. If one is
available, the segment is accepted; otherwise, it is discarded. Since the sender is prepared to
retransmit segments lost by the network, no permanent harm is done by having the receiver
drop segments, although some resources are wasted. The sender just keeps trying until it
gets an acknowledgement.
• The best trade-off between source buffering and destination buffering depends on the type
of traffic carried by the connection.
• For low-bandwidth bursty traffic, it is reasonable not to dedicate any buffers, but rather to
acquire them dynamically at both ends.
• For file transfer and other high-bandwidth traffic, it is better if the receiver does dedicate a
full window of buffers, to allow the data to flow at maximum speed. This is the strategy that
TCP uses.
• Consider how to organize the buffer pool. If most segments are nearly the same size, it is
natural to organize the buffers as a pool of identically sized buffers, with one segment per
buffer, as in Fig. 6-15(a).
• However, if there is wide variation in segment size, a pool of fixed-sized buffers presents
problems. If the buffer size is chosen to be equal to the largest possible segment, space will
be wasted whenever a short segment arrives.
• If the buffer size is chosen to be less than the maximum segment size, multiple buffers will be
needed for long segments, with the attendant complexity.
• Another approach to the buffer size problem is to use variable-sized buffers, as in Fig. 6-15(b).
The advantage here is better memory utilization, at the price of more complicated buffer
management.
• A third possibility is to dedicate a single large circular buffer per connection, as in Fig. 6-15(c).
This system is simple and elegant and does not depend on segment sizes, but makes good
use of memory only when the connections are heavily loaded.
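The trade-off between the fixed-size pool of Fig. 6-15(a) and the variable-sized buffers of Fig. 6-15(b) can be quantified with a back-of-envelope calculation. The traffic mix below is an assumed example, not taken from the text.

```python
# Assumed mix of segment sizes in bytes: a few small control segments
# interleaved with full-sized data segments.
segments = [40, 1500, 40, 536, 1500, 40]
fixed_buf = 1500                 # fixed pool: each buffer fits the largest segment

fixed_total = fixed_buf * len(segments)   # memory consumed by the fixed pool
variable_total = sum(segments)            # memory consumed by exact-fit buffers

print(fixed_total, variable_total)
```

For this mix the fixed pool uses 9000 bytes where variable-sized buffers use 3656, which is the "better memory utilization, at the price of more complicated buffer management" trade-off in a nutshell.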
• As connections are opened and closed and as the traffic pattern changes, the sender and
receiver need to dynamically adjust their buffer allocations.
• Dynamic buffer management means, in effect, a variable-sized window. Initially, the sender
requests a certain number of buffers, based on its expected needs. The receiver then grants
as many of these as it can afford.
• Every time the sender transmits a segment, it must decrement its allocation, stopping
altogether when the allocation reaches zero.
• The receiver separately piggybacks both acknowledgements and buffer allocations onto the
reverse traffic.
• TCP uses this scheme, carrying buffer allocations in a header field called Window size.
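The sender's side of this credit scheme is simple bookkeeping: each transmitted segment consumes one granted buffer, and the sender blocks when its allocation reaches zero. A minimal sketch, with invented names:

```python
class CreditSender:
    """Sender-side view of dynamic buffer allocation (credit) flow control."""

    def __init__(self):
        self.credit = 0                 # buffers currently granted by the receiver

    def grant(self, buffers):
        """An allocation piggybacked on reverse traffic arrived."""
        self.credit += buffers

    def send(self, seg):
        """Transmit a segment if credit remains; otherwise block."""
        if self.credit == 0:
            return False                # allocation exhausted: must wait
        self.credit -= 1
        return True

s = CreditSender()
s.grant(2)
print(s.send("m0"), s.send("m1"), s.send("m2"))
```

The third send fails because both granted buffers are spent; the sender stays blocked until the receiver's next allocation arrives.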
• Figure 6-16 shows an example of how dynamic window management might work in a
datagram network with 4-bit sequence numbers.
• In this example, data flows in segments from host A to host B and acknowledgements and
buffer allocations flow in segments in the reverse direction.
• Initially, A wants eight buffers, but it is granted only four of these. It then sends three
segments, of which the third is lost. Segment 6 acknowledges receipt of all segments up to
and including sequence number 1, thus allowing A to release those buffers, and
furthermore informs A that it has permission to send three more segments starting beyond
1 (i.e., segments 2, 3, and 4).
• A knows that it has already sent number 2, so it thinks that it may send segments 3 and 4,
which it proceeds to do. At this point it is blocked and must wait for more buffer allocation.
• Timeout-induced retransmissions (line 9), however, may still occur while A is blocked, since
they reuse buffers that have already been allocated.
• In line 10, B acknowledges receipt of all segments up to and including 4 but refuses to let A
continue.
• The next segment from B to A allocates another buffer and allows A to continue. This will
happen when B has buffer space, likely because the transport user has accepted more
segment data.
• Problems with buffer allocation schemes of this kind can arise in datagram networks if
control segments can get lost—which they most certainly can.
• At line 16, B has allocated more buffers to A, but the allocation segment was lost.
• Since control segments are not sequenced or timed out, A is now deadlocked. To prevent
this situation, each host should periodically send control segments giving the
acknowledgement and buffer status on each connection. That way, the deadlock will be
broken, sooner or later.
• If the network can handle c segments/sec and the round-trip time is r, the sender’s window
should be cr. With a window of this size, the sender normally operates with the pipeline full.
• Any small decrease in network performance will cause it to block. Since the network capacity
available to any given flow varies over time, the window size should be adjusted frequently and
dynamically, to track changes in the carrying capacity.
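The window-sizing rule above is just the bandwidth-delay product in segments. A quick worked example, with assumed figures for c and r:

```python
import math

c = 1000     # network capacity in segments/sec (assumed for illustration)
r = 0.040    # round-trip time in seconds (assumed for illustration)

# The sender's window should be c * r segments to keep the pipeline full.
window = math.ceil(c * r)
print(window)   # 40 segments
```

With fewer than 40 segments outstanding the sender idles part of each round trip; with a window of exactly c·r, an acknowledgement for the oldest segment arrives just as the sender finishes its allotment.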
Multiplexing
• Multiplexing, or sharing several conversations over connections, virtual circuits, and
physical links, plays a role in several layers of the network architecture.
• In the transport layer, the need for multiplexing can arise in a number of ways.
• For example, if only one network address is available on a host, all transport connections
on that machine have to use it. When a segment comes in, some way is needed to tell
which process to give it to. This situation, called multiplexing, is shown in Fig. 6-17(a).
• In this figure, four distinct transport connections all use the same network connection
(e.g., IP address) to the remote host.
• Multiplexing can also be useful in the transport layer for another reason.
• Suppose, for example, that a host has multiple network paths that it can use. If a user
needs more bandwidth or more reliability than one of the network paths can provide, a
way out is to have a connection that distributes the traffic among multiple network paths
on a round-robin basis, as indicated in Fig. 6-17(b). This mode of operation is called
inverse multiplexing.
• With k network connections open, the effective bandwidth might be increased by a
factor of k. An example of inverse multiplexing is SCTP (Stream Control Transmission
Protocol), which can run a connection using multiple network interfaces. In contrast, TCP
uses a single network endpoint.
• Inverse multiplexing is also found at the link layer, when several low-rate links are used in
parallel as one high-rate link
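The round-robin distribution of Fig. 6-17(b) can be sketched in a few lines. The path names are placeholders for illustration:

```python
from itertools import cycle

def inverse_mux(segments, paths):
    """Assign each segment to the next network path in round-robin order."""
    return [(seg, path) for seg, path in zip(segments, cycle(paths))]

print(inverse_mux(["s0", "s1", "s2", "s3"], ["path-A", "path-B"]))
```

With k paths of equal capacity, each path carries every k-th segment, which is how the effective bandwidth might approach k times that of a single path (segments may, of course, arrive out of order and need resequencing).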
Crash Recovery
• If hosts and routers are subject to crashes, recovery from these crashes becomes an issue. If
the transport entity is entirely within the hosts, recovery from network and router crashes is
straightforward. The transport entities expect lost segments all the time and know how to
cope with them by using retransmissions.
• Consider two hosts, a client and a server. If the server crashes and comes back up, it sends a
broadcast segment (TPDU) to all hosts, announcing that it has crashed and requesting that its
clients report the status of all open connections.
• Each client can be in one of two states: one segment outstanding, S1, or no segments
outstanding, S0. Based on only this state information, the client must decide whether to
retransmit the most recent segment.
• No matter how the client and server are programmed, there are always situations where the
protocol fails to recover properly.
• The server can be programmed in one of two ways: acknowledge first or write first.
• The client can be programmed in one of four ways:
1. always retransmit the last segment
2. never retransmit the last segment
3. retransmit only in state S0
4. retransmit only in state S1.
• This gives eight combinations, but as we shall see, for each combination there is some set
of events that makes the protocol fail.
• Three events are possible at the server:
1. sending an acknowledgement (A)
2. writing to the output process (W)
3. crashing (C).
The three events can occur in six different orderings: AC(W), AWC, C(AW), C(WA), WAC, and
WC(A),
where the parentheses are used to indicate that neither A nor W can follow C (i.e., once it
has crashed, it has crashed).
• Figure 6-18 shows all eight combinations of client and server strategies and the valid
event sequences for each one. Notice that for each strategy there is some sequence of
events that causes the protocol to fail. For example, if the client always retransmits, the
AWC event will generate an undetected duplicate, even though the other two events
work properly.
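The exhaustive argument behind Fig. 6-18 can be checked mechanically. The sketch below enumerates the client strategies against the pre-crash event sets of the six orderings and reports, for each strategy, the orderings on which it fails (a lost or duplicated segment). This is an illustrative reconstruction, not code from the text.

```python
# Events: A = server acknowledges, W = server writes, C = server crashes.
# Each ordering is reduced to the set of events completed before the crash.
ORDERINGS = {
    "AC(W)": {"A"},
    "AWC":   {"A", "W"},
    "C(AW)": set(),
    "C(WA)": set(),
    "WAC":   {"A", "W"},
    "WC(A)": {"W"},
}

# The client is in S0 if the ack arrived before the crash, else S1.
STRATEGIES = {
    "always retransmit":     lambda state: True,
    "never retransmit":      lambda state: False,
    "retransmit only in S0": lambda state: state == "S0",
    "retransmit only in S1": lambda state: state == "S1",
}

def outcome(done, retransmit):
    # Total writes = the original (if W happened) + the retransmitted copy.
    writes = ("W" in done) + retransmit
    return {0: "LOST", 1: "ok", 2: "DUPLICATE"}[writes]

for name, decide in STRATEGIES.items():
    failures = [(order, outcome(done, decide("S0" if "A" in done else "S1")))
                for order, done in ORDERINGS.items()
                if outcome(done, decide("S0" if "A" in done else "S1")) != "ok"]
    print(f"{name:24s} fails on: {failures}")
```

Every strategy's failure list is non-empty: for example, "always retransmit" duplicates the segment under AWC, while "never retransmit" loses it under C(AW). No client policy based only on S0/S1 can distinguish WC(A) from C(AW), which is the heart of the impossibility result.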
• In more general terms, this result can be restated as ‘‘recovery from a layer N crash can
only be done by layer N + 1,’’ and then only if the higher layer retains enough status
information to reconstruct where it was before the problem occurred.
• This is consistent with the case mentioned above that the transport layer can recover
from failures in the network layer, provided that each end of a connection keeps track of
where it is.