CN Unit-4 Updated
To achieve its goal, the transport layer makes use of the services provided by the
network layer. The hardware/software within the transport layer that does this
work is called the transport entity. This can be located in the operating system
kernel, in a separate user process, or on the network interface card.
The (logical) relationship of the network, transport and application layers is given
in the following figure.
3. Transport layer
The core protocols of the Transport layer are TCP and UDP.
• Why have a transport layer at all when the network layer offers a similar
service? The transport layer code runs entirely on the users' machines,
whereas the network layer code runs mostly on the routers, which are operated
by the carrier.
• Thus, the users have no control over the network layer, so they cannot
solve the problem of poor service by using better routers or by putting more
error handling in the data link layer.
• The only possibility is to put another layer above the network layer
that improves the quality of service.
Transport Service Primitives:
To allow the users to access the transport service, the transport layer must
provide some operations to application programs, that is, a transport service
interface. Each transport service has its own interface.
The transport service is similar to the network service, but there are some
differences:
The main difference is that the network service is intended to model the service
offered by real networks. Real networks can lose packets, so the network service
is generally unreliable. The transport service, in contrast, is reliable.
Another difference concerns whom the services are intended for: the network
service is used only by the transport entities, whereas the transport service
is used by application programs.
To get an idea of a transport service, let us consider the
primitives listed in the following figure :
The network entity processes the packet header and passes the
contents of the packet payload up to the transport entity. This nesting
is shown in the following figure :
3. Berkeley Sockets
• The socket primitives, as they are used for TCP, quickly became popular. They are now
widely used for Internet programming on many operating systems, especially UNIX-based
systems.
• The primitives are listed in Fig. Roughly speaking, they follow the model of our first
example but offer more features and flexibility.
• The first four primitives in the list are executed in that order by servers. The SOCKET
primitive creates a new endpoint and allocates table space for it within the transport entity.
• Next comes the LISTEN call, which allocates space to queue incoming calls for the case
that several clients try to connect at the same time. In contrast to LISTEN in our first example,
in the socket model LISTEN is not a blocking call. To block waiting for an incoming
connection, the server executes an ACCEPT primitive.
• Now let us look at the client side. Here, too, a socket must first be created using the SOCKET
primitive, but BIND is not required since the address used does not matter to the server. The
CONNECT primitive blocks the caller and actively starts the connection process.
When it completes, the client process is unblocked and the connection is established. Both
sides can now use SEND and RECEIVE to transmit and receive data over the full-duplex
connection. Connection release with sockets is symmetric. When both sides have executed a
CLOSE primitive, the connection is released.
• Newly created sockets do not have network addresses. These are assigned using
the BIND primitive.
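The primitive sequence described above maps directly onto Python's socket API. Below is a minimal sketch of a server and client running on one machine; the port number 15522, the loopback address, and the echo payload are illustrative assumptions, not part of the original example.

```python
# Minimal sketch of the Berkeley socket primitives in Python.
# Server order: SOCKET, BIND, LISTEN, ACCEPT. Client: SOCKET, CONNECT.
import socket
import threading

def server(port, ready):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # SOCKET
    s.bind(("127.0.0.1", port))                             # BIND
    s.listen(1)                                             # LISTEN (non-blocking)
    ready.set()
    conn, addr = s.accept()                                 # ACCEPT (blocks for a connection)
    data = conn.recv(1024)                                  # RECEIVE
    conn.sendall(b"echo:" + data)                           # SEND
    conn.close()                                            # CLOSE
    s.close()

def client(port):
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # SOCKET (no BIND needed)
    c.connect(("127.0.0.1", port))                          # CONNECT (blocks until established)
    c.sendall(b"hello")                                     # SEND
    reply = c.recv(1024)                                    # RECEIVE
    c.close()                                               # CLOSE
    return reply

ready = threading.Event()
t = threading.Thread(target=server, args=(15522, ready))
t.start()
ready.wait()
reply = client(15522)
t.join()
print(reply)   # b'echo:hello'
```

Note that the client never calls bind: the operating system picks an ephemeral source port for it, which is exactly why the text says BIND is not required on the client side.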
Now the sender transmits another 2048 bytes, which are acknowledged,
but the advertised window is 0. The sender must stop until the application
process on the receiving host has removed some data from the buffer, at
which time TCP can advertise a larger window.
Note :- When the window is 0, the sender may not send segments normally,
but there are two exceptions.
First, urgent data may still be sent, for example, to allow the user to kill
the process running on the remote machine.
Second, the sender may send a 1-byte segment to make the receiver re-announce
the next byte expected and the window size. TCP explicitly provides this
option to prevent deadlock if a window announcement is ever lost.
Window management in TCP.
Senders are not required to transmit data as soon as it arrives. Nor are
receivers required to send acknowledgements as soon as possible.
For example, in the above figure, when the first 2 KB of data came in,
TCP could buffer it until another 2 KB arrived (because it had a
window of 4 KB), so as to transmit a single segment with a 4-KB payload at
once.
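The buffer-and-window arithmetic in this example can be sketched as a small simulation; the Receiver class and its method names are illustrative, with only the 4-KB buffer and 2-KB segments taken from the figure.

```python
# Sketch of receiver-side window accounting for the 4-KB buffer example.
BUF = 4096

class Receiver:
    def __init__(self):
        self.used = 0               # bytes buffered, not yet read by the application
    def segment_arrives(self, n):
        self.used += n
        return BUF - self.used      # window advertised in the acknowledgement
    def app_reads(self, n):
        self.used -= n

r = Receiver()
w1 = r.segment_arrives(2048)        # first 2 KB arrives  -> window shrinks to 2048
w2 = r.segment_arrives(2048)        # next 2 KB arrives   -> window 0, sender must stop
r.app_reads(2048)                   # application removes data from the buffer
w3 = BUF - r.used                   # receiver can now advertise 2048 again
print(w1, w2, w3)                   # 2048 0 2048
```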
The transport service is implemented by a transport protocol used
between the 2 transport entities.
These transport protocols resemble the data link layer protocols. Both have to
deal with error control, sequencing and flow control etc.
Significant differences also exist due to dissimilarities between the
environments in which the two protocols operate.
At the data link layer, 2 routers communicate directly via a physical channel ,
whereas at the transport layer this physical channel is replaced by the entire
subnet.
In the data link layer, it is not necessary for a router to specify which router it
wants to talk to – each outgoing line uniquely specifies a particular router. In the
transport layer, explicit addressing of destinations is required.
(a)Environment of the data link layer.
(b)Environment of the transport layer.
When an application ( user ) process wishes to set up a connection to a remote
application process, it must specify which one to connect to.
The method normally used is to define transport addresses to which processes
can listen for connection requests. In the Internet, these end points are called
ports. (In ATM networks they are called AAL-SAPs.)
We use the generic term TSAP (Transport Service Access Point); the analogous
end points in the network layer are called NSAPs (Network Service Access Points).
Following figure shows the relationship between NSAP, TSAP and transport
connection. Application processes, both servers and clients, can attach
themselves to a TSAP to establish a connection to a remote TSAP. These
connections run through NSAPs on each host, as shown in the figure.
1)A time of day server process on host 2 attaches itself to
TSAP 1522 to wait for an incoming call. A call such as our
LISTEN can be used.
2)An application process on host 1 wants to find out the time-
of-day, so it issues a CONNECT request specifying TSAP 1208 as
the source and TSAP 1522 as the destination. This action results
in a transport connection being established between the
application process on host 1 and the time-of-day server on host 2.
3) The application process then sends a request for the time.
4) The time server process responds with the current time.
5) The transport connection is then released.
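The five steps above can be sketched with Python sockets. TSAP 1522 is taken from the example; localhost, SO_REUSEADDR, and the "TIME?" request format are assumptions made for this sketch.

```python
import socket
import threading
import time

def time_server(ready):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", 1522))     # step 1: attach to TSAP 1522 and wait for a call
    s.listen(1)
    ready.set()
    conn, _ = s.accept()
    if conn.recv(64) == b"TIME?":   # step 3: the request for the time arrives
        conn.sendall(str(int(time.time())).encode())   # step 4: respond with the time
    conn.close()                    # step 5: the connection is released
    s.close()

ready = threading.Event()
t = threading.Thread(target=time_server, args=(ready,))
t.start()
ready.wait()

c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c.connect(("127.0.0.1", 1522))      # step 2: CONNECT to the destination TSAP
c.sendall(b"TIME?")
answer = c.recv(64)
c.close()
t.join()
print(answer.decode())              # the server's clock reading, as digits
```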
The first method (restricted network design) includes any technique that
prevents packets from looping.
The main difference is that a router usually has relatively few lines,
whereas a host may have numerous connections. Because of this, it is
impractical to implement the data link buffering methods in the transport layer.
In the data link layer, the sending side must buffer outgoing frames because
they might have to be retransmitted. If the subnet uses a datagram service, the
sending transport entity must also buffer. If the receiver knows that the sender
buffers all TPDUs until they are acknowledged, the receiver may or may not
dedicate buffers to specific connections.
The receiver may, for example, maintain a single buffer pool shared by all connections.
When a TPDU comes in, an attempt is made to dynamically acquire a new buffer. If one is
available, the TPDU is accepted; otherwise it is rejected (no harm is done, since the
sender is prepared to retransmit TPDUs lost by the subnet).
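The shared-pool policy just described can be sketched as follows; the class and method names are illustrative, not from the source.

```python
# Sketch of a receiver sharing one buffer pool across all connections:
# an arriving TPDU is accepted only if a free buffer can be acquired.
class BufferPool:
    def __init__(self, nbuffers):
        self.free = nbuffers
    def tpdu_arrives(self):
        if self.free > 0:
            self.free -= 1
            return "accepted"
        return "rejected"          # the sender will retransmit it later
    def tpdu_consumed(self):
        self.free += 1             # application read the data; buffer freed

pool = BufferPool(2)
results = [pool.tpdu_arrives() for _ in range(3)]
print(results)                     # ['accepted', 'accepted', 'rejected']
pool.tpdu_consumed()               # one buffer is returned to the pool
after_consume = pool.tpdu_arrives()
print(after_consume)               # 'accepted'
```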
Even if the receiver has agreed to do the buffering, still there remains the question of buffer size.
If most TPDUs are nearly of the same size, it is easy to organize the buffers as a pool of
identically sized buffers, with one TPDU per buffer, as shown in fig. (a).
However, if there is a wide variation in TPDU size, a pool of fixed-size buffers is a problem.
If the buffer size is chosen equal to the largest possible TPDU, space will be wasted whenever a
short TPDU arrives. If the buffer size is chosen less than the maximum TPDU size, multiple
buffers will be needed for long TPDUs, with the attendant complexity.
(a) Chained fixed-size buffers. (b) Chained
variable-sized buffers.
(c) One large circular buffer per connection.
Another approach to the buffer size problem is to use variable-sized buffers,
as in fig. (b). This gives better memory utilization, but is more complicated in
terms of buffer management.
A third possibility is to dedicate a single large circular buffer per
connection, as in fig. (c). However, as connections are opened and
closed and as the traffic pattern changes, the buffer allocations have to be
adjusted dynamically.
Problems with this method arise if control TPDUs can get lost.
Consider line 16: B has allocated more buffers to A, but the allocation
TPDU was lost. Since control TPDUs are not sequenced or timed
out, A is now deadlocked. To prevent this, each host should periodically send
control TPDUs giving the acknowledgement and buffer status on each connection.
The UDP
header.
The 2 ports serve to identify the end points within the source and destination
machines. When a UDP packet arrives, its payload is handed to the process
attached to the destination port. This attachment occurs when the BIND primitive
(or something similar) is used. Without the port fields, the transport layer would
not know what to do with the packet. With them, it delivers segments correctly.
The source port is mainly needed when a reply must be sent back to the source.
By copying the source port field from the incoming segment into the
destination port field of the outgoing segment, the process sending the reply
can specify which process on the sending machine is to get it.
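This source-port copying is what `recvfrom`/`sendto` do implicitly in Python's UDP API: the address returned by `recvfrom` already carries the sender's source port, and replying to that address puts it in the destination-port field. Port 16000 and the payloads below are assumptions made for this sketch.

```python
import socket
import threading

def udp_server(port, ready):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", port))
    ready.set()
    data, addr = s.recvfrom(1024)      # addr = (source IP, source port) of the sender
    s.sendto(b"reply:" + data, addr)   # source port becomes the destination port
    s.close()

ready = threading.Event()
t = threading.Thread(target=udp_server, args=(16000, ready))
t.start()
ready.wait()

c = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
c.sendto(b"ping", ("127.0.0.1", 16000))
reply, _ = c.recvfrom(1024)
c.close()
t.join()
print(reply)   # b'reply:ping'
```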
The UDP length field includes the 8-byte header and the data. The UDP checksum is
optional and is stored as 0 if not computed.
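The UDP checksum is the standard Internet checksum: the 16-bit ones' complement of the ones' complement sum of the 16-bit words. A real UDP checksum also covers a pseudo-header of IP addresses; that part is omitted from this minimal sketch.

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones' complement sum of 16-bit words, complemented (RFC 1071 style)."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the end-around carry back in
    return ~total & 0xFFFF

# Data words taken from RFC 1071's worked example.
csum = internet_checksum(b"\x00\x01\xf2\x03\xf4\xf5\xf6\xf7")
print(hex(csum))                                   # 0x220d
# Verification property: data followed by its checksum sums to zero.
print(internet_checksum(b"\x00\x01\xf2\x03\xf4\xf5\xf6\xf7" + csum.to_bytes(2, "big")))  # 0
```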
A port is the TCP name for a TSAP. For TCP service to be obtained, a
connection must be explicitly established between a socket on the
sending machine and a socket on the receiving machine.
The sending and receiving TCP entities exchange data in the form of segments.
A TCP Segment consists of a fixed 20-byte header (plus an optional part)
followed by 0 or more data bytes.
Two limits restrict the segment size. First, each segment, including the TCP
header, must fit in the 65,515-byte IP payload. Second, each network has a
maximum transfer unit (MTU), and each segment must fit in the MTU. The MTU is
generally 1500 bytes (the Ethernet payload size).
The basic protocol used by the TCP entities is the sliding window protocol.
When a sender transmits a segment, it also starts a timer.
When the segment arrives at the destination, the receiving TCP entity sends back a
segment (with data, if any exists) bearing an acknowledgement number equal to the next
sequence number it expects to receive.
If the sender’s timer goes off before the acknowledgement is received, the sender transmits the
segment again.
Segments can arrive out of order and segments can also be delayed.
TCP must be prepared to deal with these problems and solve them in an efficient way.
The following figure shows the layout of the TCP
header :-
Every segment begins with a 20-byte fixed-format header. This may be
followed by header options. After the options, data bytes may appear.
Segments without any data are legal and are commonly used for
acknowledgements and control messages.
The Source port and Destination port fields identify the local end points
of the connection.
The TCP header length field tells how many 32-bit words are contained in the TCP
header. This is needed because the Options field is of variable length.
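The header-length computation can be checked against a hand-built header. The struct layout below covers only the 20-byte fixed part (no options), and the port numbers simply reuse the TSAPs from the earlier example.

```python
import struct

def tcp_header_length(header: bytes) -> int:
    """Extract the data-offset field: header length in 32-bit words."""
    data_offset = header[12] >> 4     # high 4 bits of the 13th header byte
    return data_offset * 4            # convert 32-bit words to bytes

# Minimal fixed header: src port, dst port, seq, ack,
# offset/reserved byte (offset = 5 words), flags, window, checksum, urgent.
hdr = struct.pack("!HHIIBBHHH", 1208, 1522, 0, 0, 5 << 4, 0, 65535, 0, 0)
print(len(hdr), tcp_header_length(hdr))   # 20 20
```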
URG is set to 1 if the Urgent pointer is in use. The Urgent pointer is
used to indicate a byte offset from the current sequence number at
which urgent data are to be found.
The RST bit is used to reset a connection due to crash or some other
reason. It is also used to reject an invalid segment or refuse an
attempt to open a connection.
The FIN bit is used to release a connection. It specifies that the sender
has no more data to transmit. However, it may receive data.
Note :- In theory, the initial DR and all N retransmissions may be lost. In that
case the sender will give up and release the connection, while the other side knows
nothing at all about the attempts to disconnect and is still active. This results in
a half-open connection.
When the load offered to any network is more than it can handle,
congestion builds up.
The idea is to refrain from injecting a new packet into the network until an old
one leaves (i.e., is delivered). TCP attempts to achieve this by dynamically
manipulating the window size.
Thus, two potential problems exist: network capacity and receiver capacity.
Each needs to be dealt with separately.
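TCP handles the two capacities with two separate windows: the window granted by the receiver and a congestion window, transmitting at most the minimum of the two. The sketch below additionally assumes slow-start doubling of the congestion window and 1-KB units, which are illustrative choices rather than details from the text.

```python
# Sketch: effective send window = min(congestion window, receiver window).
def effective_window(cwnd, rwnd):
    return min(cwnd, rwnd)

cwnd, rwnd = 1024, 8192        # start small; receiver has granted 8 KB
history = []
for _ in range(5):             # five round trips with no loss
    history.append(effective_window(cwnd, rwnd))
    cwnd *= 2                  # exponential growth while acknowledgements arrive
print(history)                 # [1024, 2048, 4096, 8192, 8192]
```

Once the congestion window exceeds the receiver's grant, the receiver capacity becomes the binding limit, which is exactly why the two problems must be tracked separately.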