CN Unit-4: The Transport Layer

* The Transport Service
* Elements of Transport Protocols
* The Internet Transport Protocols: UDP
* The Internet Transport Protocols: TCP
* Congestion Control
* Best Effort Model
* Quality of Service
* Network Performance Issues
Services Provided to the Upper Layers:
The main goal of the Transport layer is to provide efficient, reliable, and cost-effective service to its users, normally processes in the application layer.

To achieve its goal, the transport layer makes use of the services provided by the
network layer. The hardware/software within the transport layer that does this
work is called the transport entity. This can be located in the Operating System
Kernel , in a separate user process or on the network interface card.

The (logical) relationship of the network, transport and application layers is given
in the following figure.
Transport Layer

The core protocols of the Transport layer are TCP and UDP.

TCP provides a one-to-one, connection-oriented, reliable communications service. TCP establishes connections, sequences and acknowledges packets sent, and recovers packets lost during transmission.

UDP (User Datagram Protocol) provides a one-to-one or one-to-many, connectionless, unreliable communications service. UDP is used when the amount of data to be transferred is small (such as the data that would fit into a single packet), when an application developer does not want the overhead associated with TCP connections, or when the applications or upper-layer protocols themselves provide reliable delivery.
The network, transport, and application layers.

A transport service may be connection-oriented or connectionless; in both cases the service involves establishment, data transfer, and release.

• Why have a transport layer at all, when the network layer offers a similar service? The transport layer code runs on the users' machines, whereas the network layer code runs on routers, which are operated by the carrier.

• Thus, the users have no control over the network layer , so they cannot
solve the problem of poor service by using better routers or putting more
error handling in the data link layer.

• The only possibility is to put another layer above the network layer
that improves the quality of service.
Transport Service Primitives:
To allow the users to access the transport service, the transport layer must
provide some operations to application programs, that is, a transport service
interface. Each transport service has its own interface.

The transport service is similar to the network service, but there are some differences:

The main difference is that the network service is intended to model the service offered by real networks. Real networks can lose packets, so the network service is generally unreliable. The transport service, in contrast, is reliable.

Another difference concerns whom the services are intended for: the network service is used only by the transport entities, whereas the transport service is visible to application programs.
To get an idea of a transport service, let us consider the
primitives listed in the following figure :

The primitives for a simple transport service.
When a frame arrives, the data link layer processes the frame header
and passes the contents of the frame payload field up to the network
entity.

The network entity processes the packet header and passes the
contents of the packet payload up to the transport entity. This nesting
is shown in the following figure :
Berkeley Sockets

• These are the socket primitives as they are used for TCP. They quickly became popular and are now widely used for Internet programming on many operating systems, especially UNIX-based systems.

• The primitives are listed in Fig. Roughly speaking, they follow the model of our first
example but offer more features and flexibility.

• The first four primitives in the list are executed in that order by servers. The SOCKET
primitive creates a new endpoint and allocates table space for it within the transport entity.

• Next comes the LISTEN call, which allocates space to queue incoming calls for the case
that several clients try to connect at the same time. In contrast to LISTEN in our first example,
in the socket model LISTEN is not a blocking call. To block waiting for an incoming
connection, the server executes an ACCEPT primitive.

• Now let us look at the client side. Here, too, a socket must first be created using the SOCKET
primitive, but BIND is not required since the address used does not matter to the server. The
CONNECT primitive blocks the caller and actively starts the connection process.

• When it completes the client process is unblocked and the connection is established. Both
sides can now use SEND and RECEIVE to transmit and receive data over the full-duplex
connection. Connection release with sockets is symmetric. When both sides have executed a
CLOSE primitive, the connection is released.
• Newly created sockets do not have network
addresses. These are assigned using the BIND
primitive.

The socket primitives for TCP.
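As a concrete illustration, here is a minimal sketch of these primitives using Python's socket module, which mirrors the Berkeley interface directly (the port number 6000 and the echo behavior are arbitrary choices for the example, not part of the text):

```python
# A minimal TCP echo pair built from the primitives described above.
import socket

def server():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # SOCKET
    s.bind(("", 6000))                                     # BIND: assign an address
    s.listen(5)              # LISTEN: queue up to 5 pending connection requests
    conn, addr = s.accept()  # ACCEPT: block until a client connects
    data = conn.recv(1024)   # RECEIVE
    conn.send(data)          # SEND: echo the data back
    conn.close()             # CLOSE: release is symmetric

def client():
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # SOCKET (no BIND needed)
    c.connect(("localhost", 6000))  # CONNECT: blocks until the connection is set up
    c.send(b"hello")
    print(c.recv(1024))             # b'hello'
    c.close()
```

Note how the client skips BIND, exactly as described above: the client's address does not matter to the server.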
A state diagram for a simple connection management scheme. Transitions labeled in italics are caused by packet arrivals. The solid lines show the client's state sequence. The dashed lines show the server's state sequence.
Window management in TCP is not directly tied to acknowledgements. Suppose the receiver has a 4096-byte buffer, as shown in the figure.

If the sender transmits a 2048-byte segment that is correctly received, the receiver will acknowledge it. Since it now has only 2048 bytes of buffer space (until the application removes some data from the buffer), it will advertise a window of 2048, starting at the next byte expected.

Now the sender transmits another 2048 bytes, which are acknowledged,
but the advertised window is 0. The sender must stop until the application
process on the receiving host has removed some data from the buffer, at
which time TCP can advertise a larger window.

Note: When the window is 0, the sender may not send segments normally, but there are two exceptions. First, urgent data may be sent, for example, to let the user kill the process running on the remote machine.

Window management in TCP.

Second, the sender may send a 1-byte segment to make the receiver re-announce the next byte expected and the window size. TCP explicitly provides this option to prevent deadlock if a window announcement is ever lost.
Senders are not required to transmit data as soon as it arrives from the application, nor are receivers required to send acknowledgements as soon as possible.

For example, in the above figure, when the first 2 KB of data comes in, TCP can buffer it until another 2 KB arrives (because it has a window of 4 KB), so as to transmit a single segment with a 4-KB payload at once.
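A small sketch of the 4096-byte scenario above (the class and method names are invented for the illustration; real TCP advertises the window in the header of each acknowledgement segment):

```python
# Receiver-side window accounting for the scenario in the figure.
class Receiver:
    def __init__(self, bufsize=4096):
        self.free = bufsize                 # empty 4096-byte buffer

    def segment_arrives(self, nbytes):
        self.free -= nbytes
        return self.free                    # window advertised in the ACK

    def application_reads(self, nbytes):
        self.free += nbytes
        return self.free                    # window re-opened by a window update

r = Receiver()
print(r.segment_arrives(2048))    # 2048: half the buffer is now full
print(r.segment_arrives(2048))    # 0: the sender must stop
print(r.application_reads(2048))  # 2048: the sender may resume
```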
The transport service is implemented by a transport protocol used between the two transport entities. These transport protocols resemble the data link layer protocols: both have to deal with error control, sequencing, and flow control, among other issues.
Significant differences also exist due to dissimilarities between the
environments in which the two protocols operate.
At the data link layer, 2 routers communicate directly via a physical channel ,
whereas at the transport layer this physical channel is replaced by the entire
subnet.
In the data link layer, it is not necessary for a router to specify which router it
wants to talk to – each outgoing line specifies a particular router, whereas in
transport layer , explicit addressing of destinations is required.
(a) Environment of the data link layer. (b) Environment of the transport layer.
When an application ( user ) process wishes to set up a connection to a remote
application process, it must specify which one to connect to.
The method normally used is to define transport addresses to which processes can listen for connection requests. In the Internet, these end points are called ports (in ATM networks they are called AAL-SAPs). We will use the generic term TSAP (Transport Service Access Point).

The end points at the network layer are called NSAPs (Network Service Access Points). IP addresses are examples of NSAPs.

Following figure shows the relationship between NSAP, TSAP and transport
connection. Application processes, both servers and clients, can attach
themselves to a TSAP to establish a connection to a remote TSAP. These
connections run through NSAPs on each host, as shown in the figure.
1) A time-of-day server process on host 2 attaches itself to TSAP 1522 to wait for an incoming call. A call such as our LISTEN can be used.
2) An application process on host 1 wants to find out the time of day, so it issues a CONNECT request specifying TSAP 1208 as the source and TSAP 1522 as the destination. This action results in a transport connection being established between the application process on host 1 and server 1 on host 2.
3) The application process then sends a request for the time.
4) The time server process responds with the current time.
5) The transport connection is then released.

There may be other servers on host 2 that are attached to other TSAPs, waiting for incoming connections that arrive over the same NSAP.
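Steps 2-5 above map directly onto socket calls; a sketch (the host name "host2" and TSAPs 1208/1522 are the figure's example values, not real addresses):

```python
import socket

c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c.bind(("", 1208))             # attach to source TSAP 1208
c.connect(("host2", 1522))     # CONNECT to the time-of-day server's TSAP 1522
c.send(b"what time is it?")    # step 3: send a request for the time
print(c.recv(1024))            # step 4: the server responds with the current time
c.close()                      # step 5: the transport connection is released
```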
How does the user process on host 1 know that the time-of-day server
is attached to TSAP 1522 ?
One possibility is that the time-of-day server has been
attaching itself to TSAP 1522 for years and gradually all the
network users have learnt this. In this model, services have
stable TSAP addresses.

Stable TSAP addresses work well with a small number of key services that never change. But there may be user processes that want to talk to other processes that exist only for a short time and do not have a TSAP address that is known in advance. Thus, a better scheme is needed.
One such scheme is known as the initial connection protocol. Instead of every server listening at a well-known TSAP, each machine that wishes to offer services to remote users has a special process server that acts as a proxy.

It listens to a set of ports at the same time, waiting for a connection request. Potential users of a service begin by doing a CONNECT request, specifying the TSAP address of the service they want. If no server is waiting for them, they get a connection to the process server, as shown in fig(a).
How a user process in host 1 establishes a connection with a
time-of-day server in host 2.
After it gets the incoming request,
the process server creates/forks the requested server, allowing it to
inherit the existing connection with the user.
The new server then does the requested work, while the process
server goes back to listening for new requests, as shown in fig(b).
While the initial connection protocol works fine for those servers that can be created when required, there are situations in which services exist independently of the process server. (Ex: a file server needs to run on special hardware and cannot just be created when someone wants it.)
To handle this, an alternative scheme is used. In this model, there exists a special process called a name server or directory server. To find the TSAP address corresponding to a given service name, such as time-of-day, a user sets up a connection to the name server. The user then sends a message specifying the service name, and the name server sends back the TSAP address. Then the user releases the connection with the name server and establishes a new one with the desired service.
In this scheme, when a new service is created, it must register itself with the name server, giving both its service name (an ASCII string) and its TSAP address.

The name server records this information in its internal database so that when queries come in later, it can answer them.
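On the Internet, DNS plays exactly this name-server role. A sketch of the "service name in, transport address out" query using the standard library (the host name here is hypothetical, and the "daytime" service name must be known to the local resolver):

```python
import socket

# Ask the resolver for the transport address of a named service on a host.
results = socket.getaddrinfo("time.example.org", "daytime",
                             proto=socket.IPPROTO_TCP)
family, type_, proto, canonname, sockaddr = results[0]
print(sockaddr)   # e.g. ('192.0.2.7', 13): the host's NSAP plus the TSAP (port)
```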
For establishing a connection, we assume that it is sufficient for one
transport entity to just send a CONNECTION REQUEST TPDU to the
destination and wait for a CONNECTION ACCEPTED reply. The
problem occurs when the network can lose, store and duplicate packets.

Imagine a subnet that is so congested that acknowledgements hardly ever get back in time; each packet times out and is retransmitted. Suppose that the subnet uses datagrams internally and that every packet follows a different route. Some of the packets might get stuck in traffic inside the subnet and take a long time to arrive.
Consider a worst-case possibility:
A user establishes a connection with a bank, sends messages telling
the bank to transfer a large amount of money to the account of a
person, and then releases the connection.
After the connection has been released, all the packets pop out of the subnet and arrive at the destination in order, asking the bank to establish a new connection, transfer the money again, and release the connection.

The bank has no way of telling that these are delayed duplicates. It assumes that this is a second, independent transaction and transfers the money again. Thus, the problem is the existence of delayed duplicates.
Another possibility is to give each connection a connection identifier (i.e., a sequence number incremented for each connection established) chosen by the initiating party and put in each TPDU, including the one requesting the connection. After each connection is released, each transport entity could update a table listing obsolete connections as (peer transport entity, connection identifier) pairs. Whenever a connection identifier comes in, it could be checked against the table, to see if it belonged to a previously released connection.
This method has a problem: it requires each transport entity to maintain a certain amount of history information indefinitely.

Instead, we use a different technique. Rather than allowing packets to live forever within the subnet, we devise a mechanism to kill off aged packets.
Packet lifetime can be restricted to a known maximum using one (or more) of the following techniques.

The first is a restricted network design: any method that prevents packets from looping, combined with some way of bounding the worst-case delay.

The second consists of having a hop count in each packet, initialized to some appropriate value and decremented each time the packet is forwarded. The network protocol simply discards any packet whose hop counter becomes zero.

The third requires each packet to bear the time it was created, with the routers agreeing to discard any packet older than some agreed-upon time. This latter method requires the router clocks to be synchronized, which is itself a nontrivial task.

To solve this problem, a technique called the three-way handshake is used.

The normal setup procedure when host 1 initiates is shown in fig(a). Host 1 chooses a sequence number, x, and sends a CONNECTION REQUEST TPDU containing it to host 2.

Host 2 replies with an ACK TPDU acknowledging x and announcing its own initial sequence number, y.

Finally, host 1 acknowledges host 2's choice of an initial sequence number in the first data TPDU that it sends.


Three protocol scenarios for establishing a connection using a three-way handshake. CR denotes CONNECTION REQUEST. (a) Normal operation. (b) Old CONNECTION REQUEST appearing out of nowhere.
Now let us consider the presence of delayed duplicate control TPDUs.
In the fig(b), the first TPDU is a delayed duplicate CONNECTION
REQUEST from an old connection. This TPDU arrives at host 2 without
host 1’s knowledge.
Host 2 reacts to this TPDU by sending host 1 an ACK TPDU, in effect
asking for verification that host 1 was indeed trying to set up a new
connection. When host 1 rejects host 2’s attempt to establish a connection,
host 2 realizes that it was a delayed duplicate and abandons the
connection.
In this way, a delayed duplicate does no damage.
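The essence of figs (a) and (b) is an acceptance rule at host 1: proceed only if the ACK echoes a sequence number host 1 actually sent in this incarnation. A toy sketch (the function and its return strings are invented for the illustration):

```python
def host1_reaction(acked_seq, my_outstanding_cr):
    # acked_seq: the x that host 2's ACK acknowledges.
    # my_outstanding_cr: the sequence number of the CR host 1 sent,
    # or None if host 1 sent no CONNECTION REQUEST at all.
    if acked_seq == my_outstanding_cr:
        return "send ACK(y); connection established"   # fig (a): normal case
    return "send REJECT"                               # fig (b): delayed duplicate

print(host1_reaction(17, 17))     # normal three-way handshake
print(host1_reaction(9, None))    # old CR popped out of the subnet: rejected
```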
There are two ways of terminating a connection :
a) Asymmetric release b) Symmetric release.

Asymmetric release is the way the telephone system works: when one party hangs up, the connection is broken. Symmetric release treats the connection as two separate unidirectional connections and requires each one to be released separately.

Consider the scenario in the figure. After the connection is established, host 1 sends a TPDU that arrives properly at host 2. Then host 1 sends another TPDU. Suppose host 2 issues a DISCONNECT before the second TPDU arrives. The result is that the connection is released and data is lost.
The flow control problem in the transport layer is similar to that in the data link layer, but with some differences. The basic similarity is that in both layers a sliding window or other scheme is needed on each connection to keep a fast sender from overrunning a slow receiver.

The main difference is that a router usually has relatively few lines, whereas a host may have numerous connections. Because of this, it is impractical to implement the data link buffering strategy in the transport layer.

In the data link layer, the sending side must buffer outgoing frames because
they might have to be retransmitted. If the subnet uses datagram service , the
sending transport entity must also buffer. If the receiver knows that the sender
buffers all TPDUs until they are acknowledged, the receiver may or may not
dedicate buffers to specific connections.
The receiver may, for example, maintain a single buffer pool shared by all connections. When a TPDU comes in, an attempt is made to dynamically acquire a new buffer. If one is available, the TPDU is accepted; otherwise, it is rejected (since the sender is prepared to retransmit TPDUs lost by the subnet).
Even if the receiver has agreed to do the buffering, there still remains the question of buffer size. If most TPDUs are nearly the same size, it is easy to organize the buffers as a pool of identically sized buffers, with one TPDU per buffer, as shown in fig(a).

However, if there is wide variation in TPDU size, a pool of fixed-size buffers is a problem. If the buffer size is chosen equal to the largest possible TPDU, space will be wasted whenever a short TPDU arrives. If the buffer size is chosen smaller than the maximum TPDU size, multiple buffers will be needed for long TPDUs, with attendant complexity.
(a) Chained fixed-size buffers. (b) Chained
variable-sized buffers.
(c) One large circular buffer per connection.
Another approach to the buffer size problem is to use variable-sized buffers, as in fig(b). This gives better memory utilization but is more complicated in terms of buffer management.

A third possibility is to dedicate a single large circular buffer per connection, as in fig(c). As connections are opened and closed and as the traffic pattern changes, the sender and receiver need to dynamically adjust their buffer allocations.

The following figure shows an example of how dynamic window management might work in a datagram subnet with 4-bit sequence numbers. Assume that buffer allocation information travels in separate TPDUs, as shown, and is not piggybacked onto the reverse traffic.

Dynamic buffer allocation. The arrows show the direction of transmission. An ellipsis (…) indicates a lost TPDU.
Initially, A wants 8 buffers, but is granted only 4 of them. It then sends 3 TPDUs, of which the third is lost. TPDU 6 acknowledges receipt of all TPDUs up to and including sequence number 1, thus allowing A to release those buffers, and furthermore informs A that it has permission to send 3 more TPDUs starting beyond 1 (i.e., TPDUs 2, 3, and 4).

A knows that it has already sent number 2, so it assumes that it may send TPDUs 3 and 4, which it proceeds to do. At this point, it is blocked and must wait for more buffer allocation.
Timeout retransmissions (line 9) may occur while A is blocked, since they use buffers that have already been allocated. In line 10, B acknowledges receipt of all TPDUs up to and including 4 but refuses to let A continue. The next TPDU from B to A allocates another buffer and allows A to continue.

Problems with this method arise if control TPDUs can get lost. Consider line 16: B has allocated more buffers to A, but the allocation TPDU was lost. Since control TPDUs are not sequenced or timed out, A is now deadlocked. To prevent this, each host should periodically send control TPDUs giving the acknowledgement and buffer status on each connection. In this way, the deadlock will be broken, sooner or later.
Multiplexing several conversations onto connections, virtual circuits, and physical links plays a vital role in several layers of the network architecture. In the transport layer, the need for multiplexing can arise in a number of ways.

For example, if only one network address is available on a host, all transport connections on that machine have to use it.
When a TPDU arrives, there must be some way to tell which process to give it to. This situation, called upward multiplexing, is shown in fig(a).
In this figure, all 4 distinct transport connections use the same network connection (e.g., IP address) to the remote host.

Multiplexing is also useful for another reason. Suppose that a subnet uses virtual circuits internally and imposes a maximum data rate on each one. If a user needs more bandwidth than one virtual circuit can provide, a way out is to open multiple network connections and distribute the traffic among them on a round-robin basis, as shown in fig(b). This is called downward multiplexing.
(a) Upward multiplexing. (b) Downward multiplexing.
If hosts and routers are subject to crashes, recovery becomes
an issue.
Main problem is how to recover from host crashes. It may be
desirable for some clients to be able to continue working when
servers crash and then quickly reboot.
Let us assume that one host, the client, is sending a long file to another host, the file server, using a stop-and-wait protocol.
The transport layer on the server simply passes the
incoming TPDUs to the transport user, one by one. During
the transmission, the server crashes. When it comes back
up, its tables are reinitialized, so it no longer knows where it
was.
Thus, to recover its previous status, the server might send a broadcast TPDU to all other hosts, announcing that it had just crashed and requesting that its clients inform it of the status of all open connections. Each client can be in one of two states: one TPDU outstanding (S1) or no TPDUs outstanding (S0).
We assume that a client should retransmit if and only if it has an unacknowledged TPDU outstanding (i.e., it is in state S1) when it learns of the crash.

Consider, for example, the situation in which the server's transport entity first sends an acknowledgement and then, when the acknowledgement has been sent, writes to the application process. Writing a TPDU onto the output stream and sending an acknowledgement are two distinct events that cannot be done simultaneously. If a crash occurs after the ack has been sent but before the write has been done, the client will receive the ack and thus be in state S0 when the recovery announcement arrives. The client will therefore not retransmit, thinking that the TPDU has arrived. This leads to a missing TPDU.
Now let us reprogram the transport entity to first do the write and then send the ack. Imagine that the write has been done but the crash occurs before the ack can be sent. The client will be in state S1 and thus retransmit, leading to an undetected duplicate TPDU in the output stream to the server application.

Different combinations of client and server strategy.
The Internet Transport Protocols: UDP

• Introduction to UDP
• Remote Procedure Call
• The Real-Time Transport Protocol

The Internet has two main protocols in the transport layer: a connectionless protocol and a connection-oriented protocol. The connectionless protocol is UDP (User Datagram Protocol); the connection-oriented protocol is TCP (Transmission Control Protocol).
UDP provides a way for applications (users) to send encapsulated IP datagrams without having to establish a connection.

UDP transmits segments consisting of an 8-byte header followed by the payload. The header is shown below:

The UDP header.
The two ports serve to identify the end points within the source and destination machines. When a UDP packet arrives, its payload is handed to the process attached to the destination port. This attachment occurs when the BIND primitive (or something similar) is used. Without the port fields, the transport layer would not know what to do with the packet. With them, it delivers segments correctly.

The source port is mainly needed when a reply must be sent back to the source. By copying the source port field from the incoming segment into the destination port field of the outgoing segment, the process sending the reply can specify which process on the sending machine is to get it.

The UDP length field includes the 8-byte header and the data. The UDP checksum is optional and stored as 0 if not computed.
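The 8-byte header is simple enough to build by hand; a sketch using Python's struct module (the port numbers 1024 and 53 are illustrative):

```python
import struct

payload = b"hello"
length = 8 + len(payload)   # the UDP length covers the header plus the data

# Source port, destination port, length, checksum: four 16-bit fields.
header = struct.pack("!HHHH", 1024, 53, length, 0)   # checksum 0 = not computed

src, dst, ln, csum = struct.unpack("!HHHH", header)
print(src, dst, ln, csum)   # 1024 53 13 0
```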

UDP does not do flow control, error control, or retransmission upon receipt of a bad segment. All of that is up to the user processes. UDP simply provides an interface to the IP protocol with the added feature of demultiplexing multiple processes using the ports.
UDP is mainly useful in client-server applications. Often, the client sends a short request to the server and expects a short reply back. If either the request or the reply is lost, the client can just time out and retransmit. This requires fewer messages than a protocol needing an initial setup.

An application that uses UDP in this way is DNS (the Domain Name System). A program that needs the IP address of some host name can send a UDP packet containing the host name to a DNS server. The server replies with a UDP packet containing the IP address. No setup is needed in advance and no release is needed afterwards.
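The time-out-and-retransmit pattern is easy to express on top of a UDP socket; a sketch (the server address, 1-second timeout, and retry count are invented for the example):

```python
import socket

def query(request: bytes, server=("192.0.2.1", 5300), retries=3):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # no connection setup
    s.settimeout(1.0)
    for _ in range(retries):
        s.sendto(request, server)           # short request
        try:
            reply, _ = s.recvfrom(512)      # short reply
            return reply
        except socket.timeout:
            pass                            # request or reply lost: retransmit
    raise TimeoutError("no reply after %d attempts" % retries)
```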
Remote Procedure Call

When a process on machine 1 calls a procedure on machine 2, the calling process on 1 is suspended and execution of the called procedure takes place on 2. Information can be transported from the caller to the callee in the parameters and can come back in the procedure result. Message passing is not visible to the programmer. This technique is known as Remote Procedure Call (RPC), and it is the basis for many networking applications.

The calling procedure is known as the client and the called procedure is known as the server.

To call a remote procedure, the client program is bound with a small library procedure, called the client stub, that represents the server procedure in the client's address space. Similarly, the server is bound with a procedure called the server stub.

Step 1: The client calls the client stub. This is a local procedure call, with the parameters pushed onto the stack in the normal way.
Step 2: The client stub packs the parameters into a message and makes a system call to send the message. Packing the parameters is called marshaling.
Step 3: The kernel sends the message from the client machine to the server machine.
Step 4: The kernel passes the incoming packet to the server stub.
Step 5: The server stub calls the server procedure with the unmarshaled parameters. The reply traces the same path in the other direction.

The main problem relates to the use of global variables. Normally, the calling and called procedures can communicate by using global variables, in addition to communicating via parameters. If the called procedure is moved to a remote machine, the code will fail because the global variables are no longer shared. Despite such problems, RPC is widely used, albeit with some restrictions.

UDP is commonly used for RPC. However, when the parameters or results may be larger than the maximum UDP packet, it may be necessary to set up a TCP connection and send the request over it rather than using UDP.
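A minimal RPC sketch using Python's standard xmlrpc package, which hides the stubs and the marshaling just as described (it runs over HTTP/TCP rather than raw UDP; the port number and the add() procedure are illustrative):

```python
# --- Server process: registers the remote procedure with its stub ---
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    return a + b                        # the procedure being made remote

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(add, "add")    # server stub unmarshals and calls add()
# server.serve_forever()                # run this in one process...

# --- Client process: the call looks local; the stub marshals it ---
from xmlrpc.client import ServerProxy

proxy = ServerProxy("http://localhost:8000/")
# print(proxy.add(2, 3))                # ...and this in another; prints 5
```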
The Real-Time Transport Protocol

Client-server RPC is one area in which UDP is widely used. Another is real-time multimedia applications. As Internet telephony, video conferencing, video-on-demand, and other multimedia applications became more common, a generic real-time transport protocol for multimedia applications was needed. Thus RTP (the Real-time Transport Protocol) came into existence.

RTP sits in user space and runs over UDP. It operates as follows: the multimedia application consists of multiple audio, video, text, and possibly other streams. These are fed into the RTP library, which is in user space along with the application. This library then multiplexes the streams and encodes them in RTP packets, which it then stuffs into a socket. At the other end of the socket (in the operating system kernel), UDP packets are generated and embedded in IP packets. If the computer is on an Ethernet, the IP packets are then put in Ethernet frames for transmission.
The basic function of RTP is to multiplex several real time data
streams onto a single stream of UDP packets. The UDP stream can
be sent to a single destination (unicasting) or to multiple destinations
(multicasting).

Each packet sent in an RTP stream is given a number one higher than its predecessor. This numbering allows the destination to determine whether any packets are missing. If a packet is missing, the destination can approximate the missing value by interpolation. Retransmission is not useful, since the retransmitted packet would probably arrive too late to be useful. As a consequence, RTP has no flow control, no error control, no acknowledgements, and no mechanism to request retransmissions.

The RTP header is shown in the following figure. It consists of three 32-bit words and potentially some extensions.

The RTP header.
The first word contains the Version field (currently version 2). The P bit indicates that the packet has been padded to a multiple of 4 bytes. The X bit indicates that an extension header is present. The CC field tells how many contributing sources are present, from 0 to 15. The M bit is an application-specific marker bit. It can be used to mark the start of a video frame, the start of a word in an audio channel, and so on.
The Payload type field tells which encoding algorithm has
been used ( ex:- uncompressed 8-bit audio, MP3 etc. ). Since
every packet carries this field , the encoding can change
during transmission.
The sequence number is a counter that is incremented on
each RTP packet sent. It is used to detect lost packets.

The timestamp is produced by the stream's source to note when the first sample in the packet was made. This value can be used to reduce jitter at the receiver. The synchronization source identifier tells which stream the packet belongs to; it is the mechanism used to multiplex and demultiplex multiple data streams onto a single stream of UDP packets.
The contributing source identifiers, if any, are used when mixers are present in the studio. In that case, the mixer is the synchronizing source, and the streams being mixed are listed as contributing sources.
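Packing the first three words of an RTP header follows directly from the field layout above; a sketch (payload type 14 corresponds to MPEG audio in the standard payload-type table; the other values are arbitrary):

```python
import struct

def rtp_header(seq, timestamp, ssrc, pt=14, marker=0, cc=0, p=0, x=0):
    byte0 = (2 << 6) | (p << 5) | (x << 4) | cc   # Version=2, P, X, CC
    byte1 = (marker << 7) | pt                    # M bit plus 7-bit payload type
    return struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF, timestamp, ssrc)

print(rtp_header(seq=1, timestamp=160, ssrc=0x1234).hex())
```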
The Internet Transport Protocols: TCP

For most Internet applications, reliable, sequenced delivery is needed, and UDP cannot provide this. TCP (Transmission Control Protocol) was specifically designed to provide a reliable end-to-end byte stream over an unreliable internetwork. An internetwork differs from a single network in that different parts may have different topologies, bandwidths, delays, packet sizes, etc. TCP was designed to dynamically adapt to these properties.
Each machine supporting TCP has a TCP transport entity, either a library procedure, a user process, or part of the kernel. A TCP entity accepts user data streams from local processes, breaks them up into pieces not exceeding 64 KB (in practice, often about 1460 data bytes, so that a piece fits in a single Ethernet frame with the IP and TCP headers), and sends each piece as a separate IP datagram. When datagrams containing TCP data arrive at a machine, they are given to the TCP entity, which reconstructs the original byte streams.
Note:
 The IP layer gives no guarantee that datagrams will be delivered properly, so it is up to TCP to time out and retransmit as need be.
 Datagrams may arrive in the wrong order; it is up to TCP to reassemble them into messages in the proper sequence.
TCP service is obtained by both the sender and receiver creating end
points, called sockets. Each socket has a socket number (address)
consisting of the IP address of the host and a 16-bit number local to
that host, called a port.

A port is the TCP name for a TSAP. For TCP service to be obtained, a
connection must be explicitly established between a socket on the
sending machine and a socket on the receiving machine.

A socket may be used for multiple connections at the same time.


Connections are identified by the socket identifiers at both ends ie.,
(socket1, socket2).

Note :- No virtual circuit numbers are used


Port numbers below 1024 are well-known ports reserved for standard services. A few of them are:

Port   Protocol   Use
21     FTP        File transfer
23     Telnet     Remote login
25     SMTP       E-mail
80     HTTP       World Wide Web
119    NNTP       USENET news
Message boundaries are not preserved end to end. For example, if the sending process does four 512-byte writes to a TCP stream, these data may be delivered to the receiving process as four 512-byte chunks, two 1024-byte chunks, one 2048-byte chunk (as shown in the figure), or some other way. There is no way for the receiver to detect the units in which the data were written. (Files in UNIX have this same property.)
When an application passes data to TCP, TCP may send it immediately or buffer it (in order to collect a larger amount of data to send at once). However, sometimes the application really wants the data to be sent immediately.

Example: suppose a user is logged in to a remote machine. After a command line has been finished and the Enter key is typed, it is essential that the line be shipped off to the remote machine immediately and not buffered until the next line comes in.

To force data out, applications can use the PUSH flag, which tells TCP not to delay the transmission.
One more important feature of TCP is urgent data. When an
interactive user hits DEL or CTRL-C key to break off a remote
computation that has already begun, the sending application
puts some control information in the data stream and gives it
to the TCP along with the URGENT flag. This event causes TCP
to stop accumulating data and transmit everything it has for
that connection immediately.
The TCP Protocol

The sending and receiving TCP entities exchange data in the form of segments. A TCP segment consists of a fixed 20-byte header (plus an optional part) followed by zero or more data bytes.

Two limits restrict the segment size. First, each segment, including the TCP header, must fit in the 65,515-byte IP payload. Second, each network has a maximum transfer unit (MTU), and each segment must fit in the MTU. The MTU is generally 1500 bytes (the Ethernet default).
The basic protocol used by the TCP entities is the sliding window protocol. When a sender transmits a segment, it also starts a timer. When the segment arrives at the destination, the receiving TCP entity sends back a segment (with data if any exist) bearing an acknowledgement number equal to the next sequence number it expects to receive.

If the sender's timer goes off before the acknowledgement is received, the sender transmits the segment again.

Segments can arrive out of order, and segments can also be delayed. TCP must be prepared to deal with these problems and solve them in an efficient way.
The following figure shows the layout of the TCP header:
Every segment begins with a 20-byte fixed-format header. This may be
followed by header options. After the options, data bytes may appear.

Segments without any data are legal and are commonly used for
acknowledgements and control messages.

The Source port and Destination port fields identify the local end points
of the connection.

The Sequence number and Acknowledgement number fields perform their usual functions. Note that the acknowledgement number specifies the next byte expected, not the last byte received.

The TCP header length tells how many 32-bit words are present in TCP
header. This is needed because the Options field is of variable length.
URG is set to 1 if the Urgent pointer is in use. The Urgent pointer is
used to indicate a byte offset from the current sequence number at
which urgent data are to be found.

The ACK bit is set to 1 to indicate that the Acknowledgement number is valid. If ACK is 0, the segment does not contain an acknowledgement.

The PSH bit indicates PUSHed data. The receiver is requested to


deliver the data to the application upon arrival and not buffer it until a
full buffer has been received.

The RST bit is used to reset a connection due to crash or some other
reason. It is also used to reject an invalid segment or refuse an
attempt to open a connection.

The SYN bit is used to establish connections. The connection request has SYN=1 and ACK=0 to indicate that the piggyback acknowledgement field is not in use. The connection reply does bear an acknowledgement, so it has SYN=1 and ACK=1.
SYN bit is used to denote CONNECTION REQUEST and
CONNECTION ACCEPTED, with the ACK bit used to distinguish between
them.

The FIN bit is used to release a connection. It specifies that the sender
has no more data to transmit. However, it may receive data.

Flow control in TCP is handled using a variable-sized sliding window. The Window size field tells how many bytes may be sent, starting at the byte acknowledged. A Window size of 0 is valid and says that the bytes up to and including Acknowledgement number − 1 have been received, but the receiver is not ready to receive more data at that instant. The receiver can later grant permission to send by transmitting a segment with the same acknowledgement number and a nonzero Window size field.

A Checksum is provided for reliability. It checksums the header, the data, and the conceptual pseudoheader shown in the following figure:

The pseudoheader included in the TCP checksum.

The pseudoheader contains the 32-bit IP addresses of the source and destination machines, the protocol number for TCP (6), and the byte count for the TCP segment (including the header).
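A sketch of that computation (the segment's own checksum field is assumed to be zero while computing, and IPv4 addresses are passed as 4-byte strings; this is the classic folded one's-complement Internet checksum):

```python
import struct

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                            # pad to a 16-bit boundary
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF                         # one's complement of the sum

def tcp_checksum(src_ip: bytes, dst_ip: bytes, segment: bytes) -> int:
    # Pseudoheader: source IP, destination IP, zero byte, protocol 6 (TCP),
    # and the byte count of the TCP segment including its header.
    pseudo = struct.pack("!4s4sBBH", src_ip, dst_ip, 0, 6, len(segment))
    return internet_checksum(pseudo + segment)
```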

The Options field provides a way to add facilities that were not included in the regular header. The main option is the one that allows each host to specify the maximum TCP payload it is willing to accept.
Connections are established in TCP by means of the three-way handshake.

To establish a connection, one side, say, the server, passively waits for an incoming connection by executing the LISTEN and ACCEPT primitives.

The other side, say, the client, executes a CONNECT primitive, specifying the IP address and port to which it wants to connect, the maximum TCP segment size it is willing to accept, and optionally some user data. The CONNECT primitive sends a TCP segment with the SYN bit on and the ACK bit off and waits for a response.

When this segment arrives at the destination, the TCP entity there checks to see if there is a process that has done a LISTEN on the port given in the Destination port field. If not, it sends a reply with the RST bit on to reject the connection.
If two hosts simultaneously attempt to establish a connection
between the same two sockets, the sequence of events is given in
fig(b). The result of these events is that only one connection is
established , not two because connections are identified by their end
points.

If the first setup results in a connection identified by (x, y) and the


second one does same, only one entry is made in the table, namely
(x, y).
TCP Connection Release

To release a connection, either party can send a TCP segment with the FIN bit set, which means that it has no more data to transmit.
When the FIN is acknowledged that direction is shut down.
Data may continue to flow in the other direction.
When both directions have been shut down, the connection
is released.
Note :- Both ends of a TCP connection may send FIN
segments at the same time also.
Steps required to establish and release the connections can
be represented in a finite state machine with the 11 states
listed in the figure :
Each connection starts in the CLOSED state. It leaves that state when it does either a passive open (LISTEN) or an active open (CONNECT). If the other side does the opposite one, a connection is established and the state becomes ESTABLISHED. Connection release can be initiated by either side. When it is complete, the state returns to CLOSED.

The finite state machine is shown in the next figure. The common case of a client actively connecting to a passive server is shown with heavy lines: solid for the client, dashed for the server. The light lines are unusual event sequences.

Each transition is marked by an event/action pair. The event can either be a user-initiated system call (CONNECT, LISTEN, SEND, or CLOSE), a segment arrival (SYN, FIN, ACK, or RST), or a timeout of twice the maximum packet lifetime. The action is the sending of a control segment (SYN, FIN, or RST) or nothing, indicated by a dash. Comments are shown in parentheses.

Note: the diagram is best understood by first following the path of a client (the heavy solid line), then later following the path of a server (the heavy dashed line).
Thus, a sophisticated release protocol is needed to avoid data loss. One way is to use symmetric release, in which each direction is released independently of the other. Here, a host can continue to receive data even after it has sent a DISCONNECT TPDU.

Symmetric release works well when each process has a fixed amount of data to send and knows when it has sent it. But determining when the connection should be terminated is a problem.

Consider a protocol in which host 1 asks for termination and gets an acknowledgement from the other side. Unfortunately, this protocol does not always work. There is a famous problem that illustrates the issue, known as the two-army problem.

Imagine that a white army is encamped in a valley, as shown in the figure. On both sides are blue armies. The white army is larger than either of the blue armies alone, but together the blue armies are larger than the white army. If either blue army attacks by itself, it will be defeated, but if the two blue armies attack simultaneously, they will be victorious.
The two-army problem.

Suppose that the commander of blue army #1 sends a message: "I propose to attack at dawn on Oct 1. How about it?" Now suppose that the message arrives, the commander of blue army #2 agrees, and his reply gets back safely to blue army #1. Will the attack happen?

Probably not, because commander #2 does not know whether his reply got through. If it did not, blue army #1 will not attack, so it would be foolish for blue army #2 to attack alone.
Let us improve the protocol by making it a three-way handshake: the initiator of the original proposal must acknowledge the response. Assuming no messages are lost, blue army #2 will get the acknowledgement, but now it is the commander of blue army #1 who will hesitate. Now let us consider four scenarios for releasing a connection using a three-way handshake.
In figure(a), we see the normal procedure, in which one of the users sends a DR (DISCONNECTION REQUEST) TPDU to initiate the connection release. When it arrives, the recipient sends back a DR TPDU too, and starts a timer, just in case its DR is lost. When this DR arrives, the original sender sends back an ACK TPDU and releases the connection. Finally, when the ACK arrives, the receiver also releases the connection.

If the final ACK TPDU is lost, as shown in figure(b), the situation is saved by the timer. When the timer expires, the connection is released anyway.
Four protocol scenarios for releasing a connection. (a) Normal
case of a three-way handshake. (b) final ACK lost.
Our last scenario, figure(d), is similar to figure(c) except that now we assume all the repeated attempts to retransmit the DR also fail due to lost TPDUs. After N retries, the sender just gives up and releases the connection. Meanwhile, the receiver times out and also exits.

(c) Response lost. (d) Response lost and subsequent DRs lost.

Note: In theory, the initial DR and all N retransmissions may be lost. The sender will give up and release the connection, while the other side knows nothing at all about the attempts to disconnect and is still active. This results in a half-open connection.
Congestion Control

When the load offered to any network is more than it can handle, congestion builds up. The idea is to refrain from injecting a new packet into the network until an old one leaves (i.e., is delivered). TCP attempts to achieve this goal by dynamically manipulating the window size.

Let us first try to prevent congestion from occurring. When a connection is established, a suitable window size has to be chosen. The receiver can specify a window based on its buffer size.

If the sender sticks to this window, problems will not occur due to buffer overflow at the receiving side, but they may still occur due to internal congestion within the network.

In figure(a), we see a thick pipe leading to a small-capacity receiver. As long as the sender does not send more water than the bucket can contain, no water will be lost.

(a) A fast network feeding a low-capacity receiver. (b) A slow network feeding a high-capacity receiver.
In figure(b), the limiting factor is not the bucket capacity, but the internal carrying capacity of the network. If too much water comes in too fast, it will back up and some will be lost.

Thus, two problems exist, network capacity and receiver capacity, and they need to be dealt with separately. To do so, each sender maintains two windows: the window the receiver has granted and a second window, the congestion window. Each reflects the number of bytes the sender may transmit. The number of bytes that may be sent is the minimum of the two windows.

When a connection is established, the sender initializes the congestion window to the size of the maximum segment in use on the connection. It then sends one maximum segment. If this segment is acknowledged before the timer goes off, it adds one segment's worth of bytes to the congestion window, making it two maximum-size segments, and sends two segments. As each of these segments is acknowledged, the congestion window is increased by one maximum segment size.
When the congestion window is n segments, if all n are
acknowledged on time, the congestion window is increased by
the byte count corresponding to n segments.

The congestion window keeps growing exponentially until either a timeout occurs or the receiver's window is reached. This algorithm is called slow start; despite its name, the growth is exponential. All TCP implementations are required to support it.

Let us now consider the Internet congestion control algorithm. It uses a third parameter, the threshold, initially 64 KB, in addition to the receiver and congestion windows. When a timeout occurs, the threshold is set to half of the current congestion window, and the congestion window is reset to one maximum segment.

Slow start is then used to determine what the network can handle, except that exponential growth stops when the threshold is hit; from that point on, the congestion window grows linearly.
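A sketch of the algorithm just described (window sizes in bytes; the event list is invented for the demo, and one "ack" event stands for a whole successfully acknowledged burst):

```python
MSS = 1024   # maximum segment size for this connection

def congestion_window(events, threshold=64 * 1024):
    cwnd = MSS                          # start with one maximum segment
    for ev in events:
        if ev == "timeout":
            threshold = cwnd // 2       # threshold = half the current window
            cwnd = MSS                  # back to one segment: slow start again
        elif ev == "ack":
            if cwnd < threshold:
                cwnd *= 2               # slow start: exponential growth
            else:
                cwnd += MSS             # past the threshold: linear growth
        print(f"{ev:8s} cwnd={cwnd:6d} threshold={threshold}")

congestion_window(["ack", "ack", "ack", "ack", "timeout", "ack", "ack"])
```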
Quality of Service

• Data transmitted over the network gets divided into packets containing control information such as network addresses (source and destination), protocol, sequence number, etc.
• QoS works by classifying and marking the packets based on
their service type to determine which packets need priority over
bandwidth within a network.
• Classification analyzes the packet header, which contains
important instructions about the data within the packet.
• Traffic flow then gets marked to determine the packets with priority
access. After traffic marking, networking devices (routers, switches,
etc.) get configured to create queues for different packets per their
priority. It ensures bandwidth is available for critical applications with
high priority.
• Queuing and bandwidth management are two common QoS
mechanisms used to handle packets per their classification.
• With the growth of multimedia networking, ad-hoc techniques for reducing congestion and improving network performance are no longer sufficient. A serious attempt at guaranteeing Quality of Service (QoS) must be made through network and protocol design.
• Improved QoS works by identifying the traffic flow within
the network and prioritizing it accordingly. It ensures critical applications run
at their best and are available with fast response times for users.

• With QoS, network admins can better manage traffic flow by setting different bandwidths for different types of packets. This prioritization helps better direct traffic and avoid potential network congestion.
• QoS enables better management of network resources. It
reduces the need for organizations to upgrade network bandwidth and
purchase additional network infrastructure.

• Improved QoS can detect abnormalities in the network. Network admins can block unwanted traffic and ensure application reliability by setting specific QoS security policies.

• Network congestion can lead to packet loss and hamper the performance of critical applications. QoS prioritization policies ensure packets get queued appropriately to avoid traffic jams within the network.

• Listed below are the parameters organizations can use to measure QoS:
• Packet loss: occurs when network devices drop incoming data packets due to heavy network overload. As packets fail to reach their destination, the result is packet loss.
• Latency: the total time it takes for a packet to traverse the network from its source to its destination. The lower the latency, the better; high latency can lead to unwanted bottlenecks in communication.
• Jitter: Jitter occurs due to network congestion or variation in
routing.
It’s technically referred to as packet delay variation
(PDV) as packets are delayed and arrive out of
sequence.
• Bandwidth: The maximum data transmitted across a
network path at one time. QoS helps analyze what
applications need more bandwidth than others.
Network Performance

Performance of a network pertains to the measure of service quality of a network as perceived by the user. There are different ways to measure the performance of a network, depending upon the nature and design of the network. The characteristics that measure the performance of a network are:
• Bandwidth
• Throughput
• Latency (Delay)
• Bandwidth-Delay Product
• Jitter
BANDWIDTH

One of the most essential conditions of a website's performance is the amount of bandwidth allocated to the network. Bandwidth determines how rapidly the web server is able to upload the requested information. While there are different factors to consider with respect to a site's performance, bandwidth is every now and again the restricting element.

• Bandwidth is characterized as the measure of data or information that can be transmitted in a fixed measure of time. The term can be used in two different contexts with two distinctive estimating values. In the case of digital devices, the bandwidth is measured in bits per second (bps) or bytes per second. In the case of analogue devices, the bandwidth is measured in cycles per second, or Hertz (Hz).
Bandwidth is only one component of what an individual
sees as the speed of a network. People frequently
mistake bandwidth with internet speed in light of the
fact that internet service providers (ISPs) tend to claim
that they have a fast “40Mbps connection” in their
advertising campaigns. True internet speed is actually
the amount of data you receive every second and that
has a lot to do with latency too.
THROUGHPUT

Throughput is the number of messages successfully transmitted per unit time. It is controlled by the available bandwidth, the available signal-to-noise ratio, and hardware limitations. The maximum throughput of a network may consequently be higher than the actual throughput achieved in everyday use. The terms 'throughput' and 'bandwidth' are often thought of as the same, yet they are different: bandwidth is the potential measurement of a link, whereas throughput is an actual measurement of how fast we can send data.
Throughput is measured by tabulating the amount of data transferred between multiple locations during a specific period of time, usually expressed in bits per second (bps); related units are bytes per second (Bps), kilobytes per second (KBps), megabytes per second (MBps), and gigabytes per second (GBps).
Throughput may be affected by numerous factors, such as limitations of the underlying analogue physical medium, the available processing power of the system components, and end-user behaviour. When the various protocol overheads are taken into account, the useful rate of transferred data can be significantly lower than the maximum achievable throughput.
• Consider a highway which has a capacity of moving, say, 200 vehicles at a time. At some random time, someone notices only, say, 150 vehicles moving through it, due to congestion on the road. In this case, the capacity is 200 vehicles per unit time, while the throughput is 150 vehicles per unit time.
• Example: A network with a bandwidth of 10 Mbps can pass only an average of 12,000 frames per minute, where each frame carries an average of 10,000 bits. What is the throughput of this network?
• Solution: Throughput = (12,000 × 10,000) / 60 = 2 Mbps. The throughput here is nearly one-fifth of the bandwidth.
LATENCY

In a network, during the process of data communication, latency (also known as delay) is defined as the total time taken for a complete message to arrive at the destination, starting with the time when the first bit of the message is sent out from the source and ending with the time when the last bit of the message is delivered at the destination. Network connections where small delays occur are called "low-latency networks", and network connections which suffer from long delays are known as "high-latency networks".
High latency leads to the creation of bottlenecks in network communication. It stops the data from taking full advantage of the network pipe and effectively decreases the bandwidth of the communicating network. The effect of latency on a network's bandwidth can be temporary or persistent, depending on the source of the delays.
• Latency is also known as ping rate and is measured in milliseconds (ms). In simpler terms, latency may be defined as the time required to successfully send a packet across a network.
• It is measured in many ways: round trip, one way, etc.
• It might be affected by any component in the chain used to carry the data, such as workstations, WAN links, routers, LANs, and servers, and, for large networks, it may ultimately be limited by the speed of light.
• Latency = Propagation Time + Transmission Time + Queuing Time +
Processing Delay
• Propagation Time: the time required for a bit to travel from the source to the destination. Propagation time can be calculated as the ratio between the link length (distance) and the propagation speed over the communicating medium. For example, for an electric signal, propagation time is the time taken for the signal to travel through a wire.
• Propagation time = Distance / Propagation speed
• Example: What will be the propagation time when the distance between two points is 12,000 km, assuming the propagation speed in cable to be 2.4 × 10^8 m/s?
• Solution: Propagation time = (12,000 × 1,000 m) / (2.4 × 10^8 m/s) = 0.05 s = 50 ms. (Note that the 12,000 km distance must be converted to metres, i.e., multiplied by 1,000.)
• Transmission Time: the time it takes to send the signal down the transmission line. It includes costs like the training signals that are usually put on the front of a packet by the sender, which help the receiver synchronize clocks. The transmission time of a message depends on the size of the message and the bandwidth of the channel.
• Transmission time = Message size / Bandwidth
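Putting the two components together with the example numbers used in this section (queuing time and processing delay are taken as zero for the sketch):

```python
distance_m  = 12_000 * 1_000    # 12,000 km
prop_speed  = 2.4e8             # propagation speed in cable, m/s
msg_bits    = 10_000            # message size from the throughput example
bandwidth   = 10e6              # 10 Mbps

propagation  = distance_m / prop_speed   # 0.05 s  = 50 ms
transmission = msg_bits / bandwidth      # 0.001 s =  1 ms
latency = propagation + transmission
print(f"latency = {latency * 1000:.1f} ms")   # 51.0 ms
```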
BANDWIDTH – DELAY PRODUCT

• Bandwidth and delay are two performance measurements of a link. However, what is significant in data communications is the product of the two, the bandwidth-delay product.
• Let us take two hypothetical cases as examples.
• Case 1: Assume a link has a bandwidth of 1 bps and the delay of the link is 5 s. Let us find the bandwidth-delay product. From the image, we can say that this product, 1 × 5, is the maximum number of bits that can fill the link: there can be close to 5 bits in transit on the link at any time.
• Case 2: Assume a link has a bandwidth of 3 bps and the same 5 s delay. From the image, we can say that there can be a maximum of 3 × 5 = 15 bits on the line, because at each second there are 3 bits on the line and the duration of each bit is 0.33 s.
Questions

1. a. Give a brief account of the transport service.
   b. Discuss the elements of transport protocols.
2. Discuss UDP in detail.
3. Discuss TCP in detail.
4. Explain QoS.
5. Discuss network performance issues in detail.
