Unit IV
23.1 PROCESS-TO-PROCESS DELIVERY
The data link layer is responsible for delivery of frames between two
neighboring nodes over a link. This is called node-to-node delivery. The
network layer is responsible for delivery of datagrams between two hosts.
This is called host-to-host delivery. Communication on the Internet is not
defined as the exchange of data between two nodes or between two
hosts. Real communication takes place between two processes
(application programs). We need process-to-process delivery. However, at
any moment, several processes may be running on the source host and
several on the destination host. To complete the delivery, we need a
mechanism to deliver data from one of these processes running on the
source host to the corresponding process running on the destination host.
The transport layer is responsible for process-to-process delivery: the
delivery of a packet, part of a message, from one process to another. Two
processes communicate in a client/server relationship, as we will see later.
Figure 23.1 shows these three types of deliveries and their domains.
Socket Addresses
Process-to-process delivery needs two identifiers, the IP address and the
port number, at each end to make a connection. The combination of an IP
address and a port number is called a socket address. The client socket
address defines the client process uniquely just as the server socket
address defines the server process uniquely (see Figure 23.5).
A transport layer protocol needs a pair of socket addresses: the client
socket address and the server socket address. These four pieces of
information are part of the IP header and the transport layer protocol
header. The IP header contains the IP addresses; the UDP or TCP header
contains the port numbers.
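The (IP address, port number) pair described above is exactly how socket addresses appear in practice. The sketch below uses hypothetical addresses for illustration; the Python standard library represents a socket address with the same pair.

```python
import socket

# A socket address is simply the (IP address, port number) pair.
# The addresses below are hypothetical, for illustration only.
client_addr = ("192.168.1.10", 52000)   # ephemeral client port
server_addr = ("203.0.113.5", 80)       # well-known HTTP port

# A connection is identified by the pair of socket addresses.
connection = (client_addr, server_addr)

# The standard library uses the same representation:
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# sock.connect(server_addr) would use exactly this (IP, port) pair.
sock.close()

print(connection)
```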
Multiplexing and Demultiplexing
The addressing mechanism allows multiplexing and demultiplexing by the
transport layer, as shown in Figure 23.6.
Multiplexing
At the sender site, there may be several processes that need to send packets.
However, there is only one transport layer protocol at any time. This is a
many-to-one relationship and requires multiplexing. The protocol accepts
messages from different processes, differentiated by their assigned port
numbers. After adding the header, the transport layer passes the packet to the
network layer.
Demultiplexing
At the receiver site, the relationship is one-to-many and requires
demultiplexing. The transport layer receives datagrams from the network
layer. After error checking and dropping of the header, the transport layer
delivers each message to the appropriate process based on the port number.
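Demultiplexing by port number can be sketched as a table lookup. The handler names below are hypothetical; the point is only that the destination port selects the receiving process.

```python
# Minimal demultiplexing sketch: the transport layer delivers each
# message to a process based on the destination port number.
# The handler names are hypothetical.

def dns_handler(msg):
    return "DNS got " + msg

def http_handler(msg):
    return "HTTP got " + msg

# Port table: destination port -> receiving process (handler).
port_table = {53: dns_handler, 80: http_handler}

def demultiplex(dest_port, message):
    handler = port_table.get(dest_port)
    if handler is None:
        raise KeyError("no process bound to port %d" % dest_port)
    return handler(message)

print(demultiplex(53, "query"))   # routed to the DNS process
```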
Connectionless Service
In a connectionless service, the packets are sent from one party to another
with no need for connection establishment or connection release. The packets
are not numbered; they may be delayed or lost or may arrive out of
sequence. There is no acknowledgment either. We will see shortly that one of
the transport layer protocols in the Internet model, UDP, is connectionless.
Connection-Oriented Service
In a connection-oriented service, a connection is first established between the
sender and the receiver. Data are transferred. At the end, the connection is
released. We will see shortly that TCP and SCTP are connection-oriented
protocols.
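The connectionless service described above is visible directly in the socket API: a UDP sender transmits a datagram with no connection establishment or release. A minimal sketch on the loopback interface:

```python
import socket

# Connectionless exchange over UDP on the loopback interface:
# no connection establishment, just sendto/recvfrom.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))          # let the OS pick a free port
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello", addr)          # no connect() needed

data, src = recv_sock.recvfrom(1024)
print(data)

send_sock.close()
recv_sock.close()
```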
UDP
o Source port address: It defines the address of the application process that has delivered the
message. It is a 16-bit field.
o Destination port address: It defines the address of the application process that will receive
the message. It is also a 16-bit field.
o Total length: It defines the total length of the user datagram in bytes. It is a 16-bit field.
o Checksum: The checksum is a 16-bit field which is used in error detection.
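The four 16-bit fields above make up the fixed 8-byte UDP header. A minimal sketch that packs such a header (port values are illustrative; checksum 0 means "not computed" in IPv4):

```python
import struct

# Building the 8-byte UDP header described above: source port,
# destination port, total length, and checksum, each 16 bits.

def udp_header(src_port, dst_port, payload, checksum=0):
    total_length = 8 + len(payload)       # header (8 bytes) + data
    return struct.pack("!HHHH", src_port, dst_port, total_length, checksum)

hdr = udp_header(52000, 53, b"query")
print(len(hdr), struct.unpack("!HHHH", hdr))
```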
TCP
o Stream data transfer: TCP transfers data as a contiguous stream of
bytes. TCP itself groups the bytes into TCP segments and then passes them to the IP layer for
transmission to the destination.
o Reliability: TCP assigns a sequence number to each byte transmitted and expects a positive
acknowledgement from the receiving TCP. If an ACK is not received within a timeout interval,
the data is retransmitted to the destination.
The receiving TCP uses the sequence number to reassemble the segments if they arrive out of
order or to eliminate the duplicate segments.
o Flow Control: The receiving TCP sends an acknowledgement back to the sender indicating
the number of bytes it can receive without overflowing its internal buffer. The number of
bytes is sent in the ACK as the highest sequence number that it can receive without
any problem. This mechanism is also referred to as a window mechanism.
o Multiplexing: Multiplexing is the process of accepting data from different applications and
forwarding it to applications on different computers. At the receiving end, the data
is delivered to the correct application; this process is known as demultiplexing. TCP
delivers each packet to the correct application by using logical channels known as ports.
o Logical Connections: The combination of sockets, sequence numbers, and window sizes is
called a logical connection. Each connection is identified by the pair of sockets used by the
sending and receiving processes.
o Full Duplex: TCP provides full-duplex service, i.e., data can flow in both directions at
the same time. To achieve this, each TCP endpoint has sending and receiving
buffers so that segments can flow in both directions. TCP is a connection-oriented
protocol. Suppose process A wants to send data to and receive data from process B. The
following steps occur:
o A connection is established between the two TCPs.
o Data is exchanged in both directions.
o The connection is terminated.
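The three steps above can be sketched on the loopback interface: establish a connection, exchange data in both directions, and terminate. This is a minimal illustration, not a full server design.

```python
import socket
import threading

# Establish, exchange, terminate: the TCP connection lifecycle.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))             # OS picks a free port
server.listen(1)
addr = server.getsockname()

def serve():
    conn, _ = server.accept()             # step 1: connection established
    data = conn.recv(1024)                # step 2: receive ...
    conn.sendall(data.upper())            #         ... and send back
    conn.close()                          # step 3: connection terminated

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(addr)                      # step 1 (client side)
client.sendall(b"hello")                  # step 2
reply = client.recv(1024)
client.close()                            # step 3
t.join()
server.close()

print(reply)
```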
Where,
o Source port address: It is used to define the address of the application program in a source
computer. It is a 16-bit field.
o Destination port address: It is used to define the address of the application program in a
destination computer. It is a 16-bit field.
o Sequence number: A stream of data is divided into two or more TCP segments. The 32-bit
sequence number field represents the position of the segment's data in the original data stream.
o Acknowledgement number: The 32-bit acknowledgement number acknowledges the data
received from the other communicating device. If the ACK flag is set to 1, this field specifies the
sequence number that the receiver is expecting to receive next.
o Header Length (HLEN): It specifies the size of the TCP header in 32-bit words. The
minimum size of the header is 5 words, and the maximum size of the header is 15 words.
Therefore, the maximum size of the TCP header is 60 bytes, and the minimum size of the
TCP header is 20 bytes.
o Reserved: It is a six-bit field which is reserved for future use.
o Control bits: Each bit of a control field functions individually and independently. A control
bit defines the use of a segment or serves as a validity check for other fields.
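The header layout above can be sketched in code. The example below packs and parses the first 14 bytes of a hypothetical TCP header and extracts HLEN (the upper 4 bits of the offset field, counted in 32-bit words); the port and sequence values are illustrative.

```python
import struct

# Parsing the fixed part of a TCP header. The header length (HLEN)
# is the upper 4 bits of byte 12-13, counted in 32-bit words.

def parse_tcp_fixed_header(hdr):
    src, dst, seq, ack, offset_flags = struct.unpack("!HHIIH", hdr[:14])
    hlen_words = offset_flags >> 12        # upper 4 bits
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,
        "ack": ack,
        "header_bytes": hlen_words * 4,    # 5 words -> 20 bytes minimum
    }

# Illustrative header: ports 52000 -> 80, HLEN = 5 words, SYN-like flag bit.
raw = struct.pack("!HHIIH", 52000, 80, 1000, 2000, (5 << 12) | 0x002)
fields = parse_tcp_fixed_header(raw + b"\x00" * 6)   # pad to 20 bytes
print(fields)
```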
Characteristics of SCTP:
1. Unicast with Multiple properties –
It is a point-to-point protocol which can use different paths to reach the end host.
2. Reliable Transmission –
It uses SACKs and checksums to detect damaged, corrupted, discarded, duplicate, and reordered
data. It is similar to TCP, but SCTP is more efficient at reordering data.
3. Message oriented –
Each message can be framed, so the order of the data stream and its structure can be
preserved. In TCP, a separate abstraction layer is needed for this.
4. Multi-homing –
It can establish multiple connection paths between two end points and does not need to rely on
IP layer for resilience.
Advantages of SCTP:
1. It is a full-duplex connection, i.e., users can send and receive data simultaneously.
2. It allows half-closed connections.
3. Message boundaries are maintained, so the application doesn't have to split messages.
4. It has properties of both TCP and UDP protocol.
5. It doesn’t rely on IP layer for resilience of paths.
Disadvantages of SCTP:
1. One of the key challenges is that it requires changes in the transport stack on the node.
2. Applications need to be modified to use SCTP instead of TCP/UDP.
3. Applications need to be modified to handle multiple simultaneous streams.
CONGESTION
Congestion in a network or internetwork occurs because routers and switches have queues:
buffers that hold the packets before and after processing.
CONGESTION CONTROL
Congestion control refers to techniques and mechanisms that can either prevent congestion, before it
happens, or remove congestion, after it has happened. In general, we can divide congestion control
mechanisms into two broad categories: open-loop congestion control (prevention) and closed-loop
congestion control (removal), as shown in Figure 24.5.
Figure 24.5 Congestion control categories: open-loop (retransmission policy, window policy,
acknowledgment policy, discarding policy, admission policy) and closed-loop (back pressure,
choke packet, implicit signaling, explicit signaling).
Open-Loop Congestion Control
In open-loop congestion control, policies are applied to prevent congestion before it happens. In
these mechanisms, congestion control is handled by either the source or the destination. We give a
brief list of policies that can prevent congestion.
Retransmission Policy
Retransmission is sometimes unavoidable. If the sender feels that a sent packet is lost or
corrupted, the packet needs to be retransmitted. Retransmission in general may increase
congestion in the network. However, a good retransmission policy can prevent congestion.
Window Policy
The type of window at the sender may also affect congestion. The Selective Repeat window is
better than the Go-Back-N window for congestion control. In the Go-Back-N window, when the
timer for a packet times out, several packets may be resent, although some may have arrived safe
and sound at the receiver.
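The difference between the two window types can be counted directly. A minimal sketch, assuming a window of four packets in which only packet 1 is lost:

```python
# Counting retransmissions after one timeout: window of 4 packets
# 0..3 sent, only packet 1 was lost (illustrative scenario).
window = [0, 1, 2, 3]
lost = {1}

# Go-Back-N: everything from the lost packet onward is resent,
# even packets that arrived safe and sound.
gbn_resent = [p for p in window if p >= min(lost)]

# Selective Repeat: only the lost packet is resent.
sr_resent = sorted(lost)

print(gbn_resent, sr_resent)
```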
Acknowledgment Policy
The acknowledgment policy imposed by the receiver may also affect congestion. If the receiver
does not acknowledge every packet it receives, it may slow down the sender and help prevent
congestion.
Discarding Policy
A good discarding policy by the routers may prevent congestion and at the same time may not
harm the integrity of the transmission. For example, in audio transmission, if the policy is to
discard less sensitive packets when congestion is likely to happen, the quality of sound is still
preserved and congestion is prevented or alleviated.
Admission Policy
An admission policy, which is a quality-of-service mechanism, can also prevent congestion in
virtual-circuit networks. Switches in a flow first check the resource requirement of a flow before
admitting it to the network.
Closed-Loop Congestion Control
Closed-loop congestion control mechanisms try to alleviate congestion after it happens. Several
mechanisms have been used by different protocols. We describe a few of them here.
Backpressure
The technique of backpressure refers to a congestion control mechanism in which a congested node
stops receiving data from the immediate upstream node or nodes. This may cause the upstream node
or nodes to become congested, and they, in turn, reject data from their upstream node or nodes. And
so on. Backpressure is a node-to-node congestion control that starts with a node and propagates, in
the opposite direction of data flow, to the source. Figure 24.6 shows the idea of backpressure.
Node III in the figure has more input data than it can handle. It drops some packets in its input buffer
and informs node II to slow down. Node II, in turn, may be congested because it is slowing down the
output flow of data. If node II is congested, it informs node I to slow down, which in turn may create
congestion. If so, node I informs the source of data to slow down. This, in time, alleviates the
congestion. Note that the pressure on node III is moved backward to the source to remove the
congestion.
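The propagation of pressure toward the source can be sketched as a toy simulation: each node has a bounded queue, a full node refuses data from its upstream neighbor, and the refusal eventually reaches the source. The capacities and packet count below are illustrative assumptions.

```python
# Toy backpressure sketch: nodes I -> II -> III with bounded queues.
# When a node's queue is full it stops accepting from upstream, and
# the pressure propagates back toward the source.

capacity = {"I": 2, "II": 2, "III": 1}
queue = {"I": [], "II": [], "III": []}
chain = ["I", "II", "III"]
blocked_at_source = 0

def offer(node, pkt):
    """Accept pkt at node if there is room; otherwise refuse (backpressure)."""
    if len(queue[node]) < capacity[node]:
        queue[node].append(pkt)
        return True
    return False

for pkt in range(6):                 # the source tries to send 6 packets
    # move queued packets one hop downstream where possible
    for i in range(len(chain) - 2, -1, -1):
        node, nxt = chain[i], chain[i + 1]
        if queue[node] and offer(nxt, queue[node][0]):
            queue[node].pop(0)
    if not offer("I", pkt):          # node I full: pressure reaches the source
        blocked_at_source += 1

print(blocked_at_source, {n: len(q) for n, q in queue.items()})
```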
Choke Packet
A choke packet is a packet sent by a node to the source to inform it of congestion. Note the difference
between the backpressure and choke packet methods. In backpressure, the warning is from one node
to its upstream node, although the warning may eventually reach the source station. In the choke
packet method, the warning is from the router, which has encountered congestion, to the source
station directly.
Figure 24.7 Choke packet
Implicit Signaling
In implicit signaling, there is no communication between the congested node or nodes and the
source. The source guesses that there is a congestion somewhere in the network from other
symptoms. For example, when a source sends several packets and there is no acknowledgment for
a while, one assumption is that the network is congested. The delay in receiving an
acknowledgment is interpreted as congestion in the network; the source should slow down. We
will see this type of signaling when we discuss TCP congestion control later in the chapter.
Explicit Signaling
The node that experiences congestion can explicitly send a signal to the source or destination.
The explicit signaling method, however, is different from the choke packet method. In the choke
packet method, a separate packet is used for this purpose; in the explicit signaling method, the
signal is included in the packets that carry data. Explicit signaling, as we will see in Frame Relay
congestion control, can occur in either the forward or the backward direction.
Backward Signaling A bit can be set in a packet moving in the direction opposite to the
congestion. This bit can warn the source that there is congestion and that it needs to slow down to
avoid the discarding of packets.
Forward Signaling A bit can be set in a packet moving in the direction of the congestion. This
bit can warn the destination that there is congestion.
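These forward and backward warning bits can be sketched as flag bits inside a data packet, in the style of Frame Relay's FECN and BECN bits. The flag values below are illustrative, not an actual frame format.

```python
# Sketch of explicit signaling bits carried inside data packets,
# in the style of Frame Relay's FECN (forward) and BECN (backward) bits.
FECN = 0x1   # forward: warns the destination of congestion
BECN = 0x2   # backward: warns the source to slow down

def set_bit(flags, bit):
    return flags | bit

def has_bit(flags, bit):
    return bool(flags & bit)

flags = 0
flags = set_bit(flags, BECN)       # congested node marks a backward packet
print(has_bit(flags, BECN), has_bit(flags, FECN))
```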
Quality of Service
QoS is an overall performance measure of the computer network.
Quality-of-Service (QoS) refers to traffic control mechanisms that seek to either differentiate
performance based on application or network-operator requirements or provide predictable or
guaranteed performance to applications, sessions or traffic aggregates. Basic phenomenon for QoS
means in terms of packet delay and losses of various kinds.
Need for QoS –
Video and audio conferencing require bounded delay and loss rate.
Video and audio streaming require a bounded packet loss rate; they may not be so sensitive to delay.
Time-critical applications (real-time control) consider bounded delay to be an important factor.
Valuable applications should be provided better services than less valuable applications.
Important flow characteristics of the QoS are given below:
1. Reliability
If a packet gets lost or an acknowledgement is not received (at the sender), re-transmission of
the data will be needed. This decreases the reliability.
The importance of the reliability can differ according to the application.
For example:
E-mail and file transfer need to have reliable transmission as compared to audio
conferencing.
2. Delay
Delay of a message from source to destination is a very important characteristic. However, delay can
be tolerated differently by the different applications.
For example:
Audio conferencing cannot tolerate time delay (it needs a minimal delay), while the
time delay in e-mail or file transfer has less importance.
3. Jitter
The jitter is the variation in the packet delay. If the difference between delays is large, it is
called high jitter. On the contrary, if the difference between delays is small, it is known as low
jitter.
Example:
Case 1: If 3 packets are sent at times 0, 1, 2 and received at 10, 11, 12, the delay is the same (10
units) for each packet, which is acceptable for a telephone conversation.
Case 2: If 3 packets are sent at times 0, 1, 2 and received at 31, 34, 39, the delay is different (31,
33, 37 units) for each packet. In this case, the variation in delay is not acceptable for a telephone
conversation.
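The two cases can be checked by computing the per-packet delays and the variation between consecutive delays:

```python
# Computing per-packet delays and jitter for the two cases above.
def delays(sent, received):
    return [r - s for s, r in zip(sent, received)]

def jitter(d):
    return [b - a for a, b in zip(d, d[1:])]   # variation between delays

case1 = delays([0, 1, 2], [10, 11, 12])   # same delay -> no jitter
case2 = delays([0, 1, 2], [31, 34, 39])   # varying delay -> high jitter

print(case1, jitter(case1))
print(case2, jitter(case2))
```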
4. Bandwidth
Different applications need different bandwidth. For example, video conferencing needs more
bandwidth than sending an e-mail.
Two models, Integrated Services (IntServ) and Differentiated Services (DiffServ), are designed to provide Quality of Service (QoS) in the network.
Classification of services
i) Scalability
In Integrated Services, each router must keep state information for every flow. This is
not always possible as the network grows.