Module 5
Department of Computer Science & Engineering, Amity School of Engineering and Technology, Amity University, Gwalior, Madhya Pradesh, India
Process to process delivery
• The data link layer is responsible for the delivery of frames between two neighboring nodes over a link.
This is called node-to-node delivery. The network layer is responsible for the delivery of datagrams
between two hosts. This is called host-to-host delivery. Real communication, however, takes place between two
processes (application programs), so we need process-to-process delivery. The transport layer is
responsible for process-to-process delivery: the delivery of a packet, part of a message, from one
process to another. The figure shows these three types of deliveries and their domains.
Process to process delivery
• Although there are several ways to achieve process-to-process communication, the most common one
is through the client/server paradigm. A process on the local host, called a client, needs services from
a process usually on the remote host, called a server. Both processes (client and server) have the same
name; for example, a Daytime client talks to a Daytime server. For communication, we must define:
• 1. Local host
• 2. Local process
• 3. Remote host
• 4. Remote process
Addressing
Fig: Port numbers and socket address (IP address + port number)
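To make the idea of port numbers and socket addresses concrete, here is a minimal sketch (not part of the original slides) using Python's socket module; the loopback address and the well-known Daytime port 13 are illustrative assumptions, and the connect attempt simply fails gracefully if no Daytime server is running.

import socket

# A socket address is the pair (IP address, port number); each end of the
# communication is identified by one.
remote_host = "127.0.0.1"   # remote host (IP address of the server)
remote_port = 13            # remote process (well-known Daytime port)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    client.connect((remote_host, remote_port))   # remote socket address
    # the OS picks an ephemeral port for the local process (local socket address)
    print("local socket address :", client.getsockname())
    print("remote socket address:", client.getpeername())
except OSError as err:
    print("no Daytime server is listening locally:", err)
finally:
    client.close()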
TCP & UDP
• Transmission Control Protocol is a connection-oriented protocol for communications that helps in
the exchange of messages between different devices over a network.
• The position of TCP is at the transport layer of the OSI model. TCP also helps in ensuring that
information is transmitted accurately by establishing a virtual connection between the sender and
receiver.
Features of TCP
• Some of the most prominent features of Transmission control protocol are mentioned below.
• Segment Numbering System: TCP keeps track of the segments being transmitted or received by
assigning numbers to each and every single one of them. A specific Byte Number is assigned to data
bytes that are to be transferred while segments are assigned sequence numbers. Acknowledgment
Numbers are assigned to received segments.
• Connection Oriented: The sender and receiver remain connected to each other until the data transfer
completes. The order of the data is maintained, i.e. the order remains the same before and after
transmission.
TCP
• Full Duplex: In TCP, data can be transmitted from receiver to sender and vice versa at the same
time. This increases the efficiency of data flow between sender and receiver.
• Flow Control: Flow control limits the rate at which a sender transfers data. This is done to ensure
reliable delivery. The receiver continually hints to the sender on how much data can be received
(using a sliding window).
• Error Control: TCP implements an error control mechanism for reliable data transfer. Error control is
byte-oriented. Segments are checked for error detection. Error Control includes – Corrupted Segment
& Lost Segment Management, Out-of-order segments, Duplicate segments, etc.
• Congestion Control: TCP takes into account the level of congestion in the network. The amount of data
the sender may transmit is adjusted according to that congestion level.
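To illustrate the sliding-window idea behind flow control, the short simulation below (a simplified sketch, not the real TCP state machine) caps the number of unacknowledged bytes at the receiver-advertised window and assumes every ACK is cumulative and arrives immediately.

# Simplified flow-control simulation: at most 'advertised_window' unacknowledged
# bytes may be in flight at any time.
def send_with_flow_control(data: bytes, advertised_window: int, segment_size: int = 4):
    base = 0                      # oldest unacknowledged byte
    next_seq = 0                  # next byte to send
    while base < len(data):
        # send while the window allows more unacknowledged bytes
        while next_seq < len(data) and next_seq - base < advertised_window:
            segment = data[next_seq:next_seq + segment_size]
            print(f"send bytes {next_seq}-{next_seq + len(segment) - 1}")
            next_seq += len(segment)
        # simulate a cumulative ACK covering everything sent so far
        print(f"  ACK received up to byte {next_seq - 1}")
        base = next_seq

send_with_flow_control(b"hello, transport layer!", advertised_window=8)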
TCP
Advantages of TCP
• It is a reliable protocol.
• It provides an error-checking mechanism as well as one for recovery.
• It provides flow control.
• It makes sure that the data reaches the proper destination in the exact order that it was sent.
• Open Protocol, not owned by any organization or individual.
• In the TCP/IP suite, each computer on the network is assigned an IP address and each site a domain
name, making every device and site distinguishable over the network.
Disadvantages of TCP
• TCP is made for Wide Area Networks, thus its size can become an issue for small networks with low
resources.
• TCP involves processing at several layers, which can slow down the speed of the network.
• It is not generic in nature, meaning it cannot represent any protocol stack other than the TCP/IP suite.
For example, it cannot work with a Bluetooth connection.
• The core protocols have seen little modification since their development around 30 years ago.
Segment
Source Port (16 bits): This field identifies the port number of the sending application or process on the source host.
Destination Port (16 bits): This field specifies the port number of the receiving application or process on the
destination host.
Sequence Number (32 bits): The sequence number field represents the sequence number of the first data byte in
the current TCP segment. It enables the receiver to reconstruct the data stream in the correct order.
Acknowledgment Number (32 bits): The acknowledgment number field indicates the sequence number of the next
expected byte by the receiver. It acknowledges the receipt of all bytes up to the acknowledged number.
Data Offset (4 bits): This field represents the size of the TCP header in 32-bit words. It specifies the length of the TCP
header and helps locate the start of the data in the segment.
Reserved (6 bits): These bits are reserved for future use and are currently unused. They are typically set to zero.
Control Flags (6 bits): The control flags field contains several individual flags that control specific aspects of the TCP
communication. The flags include:
URG (Urgent Pointer field used)
ACK (Acknowledgment number field used)
PSH (Push function)
RST (Reset the connection)
SYN (Synchronize sequence numbers for connection establishment)
FIN (Finish the connection)
TCP
Window Size (16 bits): The window size field indicates the number of bytes the sender is willing to
receive before expecting an acknowledgment. It facilitates flow control and congestion control.
Checksum (16 bits): The checksum field is used to verify the integrity of the TCP segment. It ensures
that the received segment is not corrupted during transmission.
Urgent Pointer (16 bits): This field is only significant when the URG flag is set. It points to the last byte
of urgent data in the segment.
Options (variable length): The options field allows for additional parameters and configurations in the
TCP segment. It includes options such as maximum segment size (MSS), selective acknowledgments
(SACK), and timestamp.
Data: The TCP segment can also carry a payload of data from the sending application. The length of the
data may vary depending on the Maximum Segment Size (MSS) negotiated during the TCP connection
establishment.
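As a hedged illustration of this layout, the sketch below parses the fixed 20-byte TCP header with Python's struct module; the sample segment and all its field values are fabricated for demonstration.

import struct

def parse_tcp_header(segment: bytes):
    (src_port, dst_port, seq, ack,
     offset_reserved_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    data_offset = (offset_reserved_flags >> 12) & 0xF   # header length in 32-bit words
    flags = offset_reserved_flags & 0x3F                # URG|ACK|PSH|RST|SYN|FIN bits
    return {
        "source_port": src_port,
        "destination_port": dst_port,
        "sequence_number": seq,
        "acknowledgment_number": ack,
        "data_offset_words": data_offset,
        "flags": {
            "URG": bool(flags & 0x20), "ACK": bool(flags & 0x10), "PSH": bool(flags & 0x08),
            "RST": bool(flags & 0x04), "SYN": bool(flags & 0x02), "FIN": bool(flags & 0x01),
        },
        "window_size": window,
        "checksum": checksum,
        "urgent_pointer": urgent,
        "payload": segment[data_offset * 4:],
    }

# Example: a SYN segment from port 50000 to port 80 (fabricated values).
sample = struct.pack("!HHIIHHHH", 50000, 80, 1000, 0, (5 << 12) | 0x002, 65535, 0, 0)
print(parse_tcp_header(sample))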
A TCP Connection
Connection Establishment
Step 1: SYN (Synchronize)
The client initiates the connection by sending a TCP segment with the SYN (synchronize) flag set to the server.
The client selects an initial sequence number (ISN) for the connection, which is a randomly chosen value to ensure
uniqueness.
The SYN segment also includes the client's initial TCP window size, which indicates the number of bytes the client is
willing to receive.
Step 2: SYN-ACK (Synchronize-Acknowledge)
Upon receiving the SYN segment from the client, the server responds with a TCP segment containing the SYN and ACK
(acknowledge) flags set.
The server selects its own initial sequence number (ISN) for the connection.
The SYN-ACK segment also includes the server's initial TCP window size, acknowledging the client's window size from
the previous step.
Additionally, the SYN-ACK segment may include other optional parameters negotiated between the client and server,
such as maximum segment size (MSS) or TCP options.
Step 3: ACK (Acknowledge)
Finally, the client acknowledges the server's SYN-ACK segment by sending an ACK segment.
The ACK segment has the ACK flag set and contains the acknowledgment number, which is the server's initial sequence
number incremented by one.
The client also confirms the server's TCP window size, indicating the number of bytes it is willing to receive from the
server.
At this point, the connection is established, and both the client and server can begin transmitting data .
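In practice the operating system performs the SYN, SYN-ACK, ACK exchange; application code merely triggers it by calling connect() and accept(). The loopback sketch below (port 54321 is an arbitrary assumption) shows that trigger.

import socket, threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 54321))
srv.listen(1)                         # server is ready to receive a SYN

def accept_one():
    conn, addr = srv.accept()         # handshake completes on the server side
    print("server: connection established with", addr)
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 54321))     # kernel sends SYN, receives SYN-ACK, sends ACK
print("client: connection established with", cli.getpeername())
cli.close()
t.join()
srv.close()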
A TCP Connection
Connection Termination
Initiating the Connection Termination: The device that wishes to terminate the connection
(referred to as the active closer) sends a TCP segment with the FIN (Finish) flag set to the other
device, indicating its intention to close the connection.
Acknowledgment of the Termination Request: Upon receiving the FIN segment, the receiving
device (passive closer) acknowledges the termination request by sending an acknowledgment
(ACK) TCP segment back to the active closer. The ACK acknowledges the receipt of the FIN
segment.
Finalizing the Connection Termination: After sending the ACK, the passive closer also sends its
own FIN segment to the active closer, indicating its agreement to close the connection. This
segment has the FIN flag set.
Acknowledgment of the Final Termination: Upon receiving the FIN segment from the passive
closer, the active closer acknowledges it by sending an ACK segment back. This ACK serves as
confirmation that the passive closer's FIN has been received.
UDP
• The operations of UDP can be summarized as follows:
• Connectionless Communication: UDP does not establish a connection before sending data. It simply
sends datagrams (packets) to the destination without any handshake process.
• Unreliable Delivery: UDP does not guarantee delivery of packets. It does not perform retransmission of
lost packets or ensure that packets arrive in the correct order.
• Low Overhead: UDP has minimal overhead compared to TCP since it does not need to maintain
connection state information or perform complex error recovery mechanisms.
• No Congestion Control: UDP does not implement congestion control mechanisms, which means it can
potentially congest a network if used improperly.
• Port Numbers: UDP uses port numbers to distinguish between different applications running on the
same host.
• Checksum: UDP includes a checksum field in its header to detect errors in the data during transmission.
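A minimal loopback sketch of this connectionless behaviour (port 9999 is an arbitrary assumption): a datagram is sent without any handshake, and delivery is not guaranteed.

import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))            # the UDP port distinguishes this application

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello via UDP", ("127.0.0.1", 9999))   # no connection establishment

data, addr = receiver.recvfrom(2048)
print("received", data, "from", addr)

sender.close()
receiver.close()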
UDP Header
UDP header is an 8-byte fixed and simple header, while for TCP it may
vary from 20 bytes to 60 bytes. The first 8 Bytes contain all necessary
header information and the remaining part consists of data. UDP port
number fields are each 16 bits long, therefore the range for port
numbers is defined from 0 to 65535; port number 0 is reserved. Port
numbers help to distinguish different user requests or processes.
• Source Port: Source Port is a 2 Byte long field used to identify the
port number of the source.
• Destination Port: It is a 2 Byte long field, used to identify the port of
the destined packet.
• Length: Length is the length of UDP including the header and the data. It is a 16-bit field.
• Checksum: Checksum is a 2 Byte long field. It is the 16-bit one's complement of the one's complement
sum of the UDP header, the pseudo-header of information from the IP header, and the data, padded
with zero octets at the end (if necessary) to make a multiple of two octets.
Notes – Unlike TCP, the checksum calculation is not mandatory in UDP. No error control or flow control is
provided by UDP; hence UDP depends on IP and ICMP for error reporting. UDP also provides port
numbers so that it can differentiate between user requests.
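A hedged sketch of the checksum computation just described: the 16-bit one's complement of the one's complement sum over the pseudo-header, the UDP header (with its checksum field set to zero), and the data. The addresses and ports below are illustrative assumptions.

import struct, socket

def ones_complement_sum(data: bytes) -> int:
    if len(data) % 2:                       # pad with a zero octet to a multiple of two octets
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # wrap the carry back in (one's complement add)
    return total

def udp_checksum(src_ip, dst_ip, src_port, dst_port, payload: bytes) -> int:
    length = 8 + len(payload)               # UDP header (8 bytes) + data
    # pseudo-header: source IP, destination IP, zero, protocol 17 (UDP), UDP length
    pseudo = socket.inet_aton(src_ip) + socket.inet_aton(dst_ip) + struct.pack("!BBH", 0, 17, length)
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)   # checksum field set to 0
    return (~ones_complement_sum(pseudo + header + payload)) & 0xFFFF

print(hex(udp_checksum("192.0.2.1", "192.0.2.2", 12345, 53, b"hello")))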
UDP
Advantages of UDP
•Speed: UDP is faster than TCP because it does not have the overhead of establishing a connection and
ensuring reliable data delivery.
•Lower latency: Since there is no connection establishment, there is lower latency and faster response
time.
•Simplicity: UDP has a simpler protocol design than TCP, making it easier to implement and manage.
•Broadcast support: UDP supports broadcasting to multiple recipients, making it useful for applications
such as video streaming and online gaming.
•Smaller overhead: UDP's 8-byte header is much smaller than TCP's, which reduces per-packet overhead,
can lessen network congestion, and improves overall network performance.
•User Datagram Protocol (UDP) is more efficient in terms of both latency and bandwidth.
UDP
Disadvantages of UDP
•No reliability: UDP does not guarantee delivery of packets or order of delivery, which can lead to
missing or duplicate data.
•No congestion control: UDP does not have congestion control, which means that it can send packets at
a rate that can cause network congestion.
•No flow control: UDP does not have flow control, which means that it can overwhelm the receiver with
packets that it cannot handle.
•Vulnerable to attacks: UDP is vulnerable to denial-of-service attacks, where an attacker can flood a
network with UDP packets, overwhelming the network and causing it to crash.
•Limited use cases: UDP is not suitable for applications that require reliable data delivery, such as email
or file transfers, and is better suited for applications that can tolerate some data loss, such as video
streaming or online gaming.
Differences between TCP and UDP
Congestion control algorithm: Leaky bucket
algorithm, Token bucket algorithm, choke packets
Leaky Bucket Algorithm
• The leaky bucket algorithm finds its use in the context of network traffic shaping or rate-limiting.
• A leaky bucket implementation and a token bucket implementation are the predominantly used
traffic shaping algorithms.
• This algorithm is used to control the rate at which traffic is sent to the network.
• A disadvantage of the leaky-bucket algorithm is its inefficient use of available network resources:
because the output rate is fixed, large amounts of network resources such as bandwidth can go unused.
Congestion control algorithm: Leaky bucket algorithm
• Each network interface contains a leaky bucket, and the
following steps are involved in the leaky bucket algorithm:
1. When the host wants to send a packet, the packet is
thrown into the bucket.
2. The bucket leaks at a constant rate, meaning the
network interface transmits packets at a constant
rate.
3. Bursty traffic is converted into uniform traffic by the
leaky bucket.
4. In practice, the bucket is a finite queue that outputs
at a finite rate.
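A minimal sketch of these steps in Python (the bucket capacity and leak rate are illustrative assumptions): packets enter a finite queue and leave at a constant rate per tick, so a burst of arrivals is smoothed into uniform output.

from collections import deque

class LeakyBucket:
    def __init__(self, capacity_packets: int, leak_rate_pps: int):
        self.queue = deque()                     # the bucket: a finite queue
        self.capacity = capacity_packets
        self.leak_rate = leak_rate_pps           # packets transmitted per tick

    def arrive(self, packet):
        if len(self.queue) < self.capacity:
            self.queue.append(packet)            # step 1: packet thrown into the bucket
        else:
            print("bucket full, packet dropped:", packet)

    def tick(self):
        # step 2: the bucket leaks at a constant rate each tick
        for _ in range(min(self.leak_rate, len(self.queue))):
            print("transmit", self.queue.popleft())

bucket = LeakyBucket(capacity_packets=5, leak_rate_pps=2)
for i in range(7):                               # bursty arrival of 7 packets
    bucket.arrive(f"pkt{i}")
for t in range(4):                               # steps 3-4: output smoothed to 2 packets/tick
    print(f"tick {t}:")
    bucket.tick()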
Congestion control algorithm: Token bucket algorithm
Token bucket Algorithm
• The leaky bucket algorithm has a rigid output design at an average rate independent of the
bursty traffic.
• In some applications, when large bursts arrive, the output is allowed to speed up. This calls
for a more flexible algorithm, preferably one that never loses information. Therefore, a token
bucket algorithm finds its uses in network traffic shaping or rate-limiting.
• It is a control algorithm that indicates when traffic can be sent, based on the presence of tokens in
the bucket.
• The bucket holds tokens. Each token permits the sending of one packet of a predetermined size.
A token is removed from the bucket for every packet that is transmitted.
• If tokens are present, the flow is allowed to transmit traffic; no token means no flow can send its
packets. Hence, a flow can transmit traffic up to its peak burst rate only as long as there are enough
tokens in the bucket.
Congestion control algorithm: Token bucket algorithm
• In figure (a) the bucket holds two tokens, and
three packets are waiting to be sent out of the
interface.
• In Figure (b) two packets have been sent out by
consuming two tokens, and 1 packet is still left.
• When compared to the leaky bucket, the token
bucket algorithm is less restrictive, which means it
allows more traffic. The limit of burstiness is
restricted by the number of tokens available in
the bucket at a particular instant of time.
• The implementation of the token bucket
algorithm is easy: a variable is used to count
the tokens. For every t seconds the counter is
incremented, and it is decremented
whenever a packet is sent. When the counter
reaches zero, no further packets are sent out.
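A minimal sketch of this counter-based implementation (the capacity and rate values are illustrative assumptions): the token counter is incremented once per tick up to the bucket's capacity, and a packet is sent only if a token can be consumed.

class TokenBucket:
    def __init__(self, capacity_tokens: int, tokens_per_tick: int):
        self.capacity = capacity_tokens
        self.rate = tokens_per_tick
        self.tokens = 0

    def tick(self):
        # tokens are added at regular intervals, never beyond the bucket's capacity
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def try_send(self, packet) -> bool:
        if self.tokens > 0:
            self.tokens -= 1            # a token is consumed for every packet sent
            print("sent", packet)
            return True
        print("no token, holding", packet)
        return False

bucket = TokenBucket(capacity_tokens=3, tokens_per_tick=1)
for t in range(5):
    bucket.tick()
    # a burst of two packets per tick: bursts are allowed while saved-up tokens last
    bucket.try_send(f"pkt{t}a")
    bucket.try_send(f"pkt{t}b")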
Congestion control algorithm: Token bucket algorithm
• The leaky bucket algorithm enforces output pattern at the average rate, no matter how bursty the traffic is. So in
order to deal with the bursty traffic we need a flexible algorithm so that the data is not lost. One such algorithm is
token bucket algorithm.
Steps of this algorithm can be described as follows:
• At regular intervals, tokens are thrown into the bucket.
• The bucket has a maximum capacity.
• If there is a ready packet, a token is removed from the bucket, and the packet is sent.
• If there is no token in the bucket, the packet cannot be sent.
Congestion control algorithm
Leaky Bucket:
• When the host has to send a packet, the packet is thrown into the bucket.
• Bursty traffic is converted into uniform traffic by the leaky bucket.
• In practice, the bucket is a finite queue that outputs at a finite rate.
Token Bucket:
• The bucket holds tokens generated at regular intervals of time.
• If there is a ready packet, a token is removed from the bucket and the packet is sent.
• If there is no token in the bucket, the packet cannot be sent.
Choke Packet Technique
• The choke packet technique is applicable to both virtual circuit networks and datagram subnets.
• A choke packet is a packet sent by a node to the source to inform it of congestion.
• Each router monitors its resources and the utilization at each of its output lines. Whenever the resource
utilization exceeds the threshold value which is set by the administrator, the router directly sends a
choke packet to the source giving it a feedback to reduce the traffic.
• The intermediate nodes through which the packets have traveled are not warned about the congestion.
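An illustrative sketch of the idea, not a real router implementation: an output line's utilization is compared against an administrator-set threshold, and a choke packet is sent straight back to the source when the threshold is exceeded. The threshold value and addresses are assumptions.

THRESHOLD = 0.8    # utilization threshold set by the administrator (assumed value)

def send_choke_packet(to: str, line: str):
    # in a real network this would be a control packet addressed to the source host
    print(f"CHOKE -> {to}: reduce traffic, line {line} is congested")

def monitor_output_line(line_id: str, utilization: float, last_packet_source: str):
    if utilization > THRESHOLD:
        send_choke_packet(to=last_packet_source, line=line_id)

monitor_output_line("eth0", utilization=0.93, last_packet_source="10.0.0.7")
monitor_output_line("eth1", utilization=0.40, last_packet_source="10.0.0.9")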
Quality of service
How Does QoS Work?
• QoS networking technology works by marking packets to identify service
types, then configuring routers to create separate virtual queues for each
application, based on their priority.
Packet loss: This occurs when network connections get congested, and routers
and switches begin losing packets.
Jitter: This is the result of network congestion, time drift, and routing changes.
Too much jitter can reduce the quality of voice and video communication.
Latency: This is how long it takes a packet to travel from its source to its
destination. The latency should be as near to zero as possible.
QoS Implementation Models
1. Best Effort. A QoS model in which all packets receive the same priority and there is no
guaranteed delivery of packets.
2. Integrated Services (IntServ). A QoS model that reserves bandwidth along a specific path
on the network. Applications ask the network for resource reservation and network devices
monitor the flow of packets to make sure network resources can accept the packets.
3. Differentiated Services (DiffServ). A QoS model where network elements, such as
routers and switches, are configured to service multiple classes of traffic with different
priorities.
QoS Techniques
1. Scheduling:
• FIFO Queuing: In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node
(router or switch) is prepared to process them. If the average arrival rate exceeds the average
processing rate, the queue fills up and newly arriving packets are discarded.
• Priority Queuing: Packets are first assigned a priority class. Each priority class has its own queue.
The packets in the highest-priority queue are processed first, and the packets in the lowest-priority
queue are processed last.
• Weighted Fair Queuing: The packets are still assigned to different queues and classes in this method.
The queues are, however, weighted according to their priority; a higher priority corresponds to a
higher weight. The system processes packets in each queue in a round-robin fashion, and the number
of packets taken from each queue is determined by the associated weight.
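A simplified sketch of weighted fair queuing as described above (the weights and packet names are illustrative assumptions): each class has its own queue and weight, and the scheduler takes up to 'weight' packets from each queue per round-robin pass.

from collections import deque

queues = {
    "high":   {"weight": 3, "q": deque(f"H{i}" for i in range(5))},
    "medium": {"weight": 2, "q": deque(f"M{i}" for i in range(5))},
    "low":    {"weight": 1, "q": deque(f"L{i}" for i in range(5))},
}

def wfq_round():
    served = []
    for name, entry in queues.items():                 # round-robin over the classes
        for _ in range(min(entry["weight"], len(entry["q"]))):
            served.append(entry["q"].popleft())        # higher weight => more packets per round
    return served

for r in range(3):
    print(f"round {r}: {wfq_round()}")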
2. Traffic Shaping:
• Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the network.
Two techniques can shape traffic: Leaky Bucket and Token Bucket.
• Leaky Bucket
Token Bucket
3.Resource Reservation
• A flow of data needs resources such as a buffer, bandwidth, CPU time, and so on. The
quality of service is improved if these resources are reserved beforehand.
4.Admission Control
• Admission control refers to the mechanism used by a router, or a switch, to accept or
reject a flow based on predefined parameters called flow specifications. Before a
router accepts a flow for processing, it checks the flow specifications to see if its
capacity (in terms of bandwidth, buffer size, CPU speed, etc.) and its previous
commitments to other flows can handle the new flow.
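A small sketch of admission control as described above (the capacity figures and flow specifications are illustrative assumptions): a new flow is accepted only if the router's remaining capacity, after its previous commitments, can handle the flow's specification.

router_capacity = {"bandwidth_mbps": 100, "buffer_kb": 512}
admitted_flows = [{"bandwidth_mbps": 40, "buffer_kb": 128}]   # previous commitments

def admit(flow_spec: dict) -> bool:
    used_bw = sum(f["bandwidth_mbps"] for f in admitted_flows)
    used_buf = sum(f["buffer_kb"] for f in admitted_flows)
    if (used_bw + flow_spec["bandwidth_mbps"] <= router_capacity["bandwidth_mbps"]
            and used_buf + flow_spec["buffer_kb"] <= router_capacity["buffer_kb"]):
        admitted_flows.append(flow_spec)
        return True
    return False

print(admit({"bandwidth_mbps": 30, "buffer_kb": 128}))   # True: fits remaining capacity
print(admit({"bandwidth_mbps": 50, "buffer_kb": 300}))   # False: would exceed capacity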
Advantages and Disadvantage of QoS
• Advantages
a) Unlimited application prioritization
b) Better resource management
c) Enhanced user experience
d) Point-to-point traffic management
e) Packet loss prevention
f) Latency reduction
Disadvantage
Frequent needs for expansion of network resources, generated by unsatisfactory user experiences, can
often be avoided through the application of control mechanisms, which come at a cost in terms of
protection and availability.