CN Unit 3

The Point-to-Point Protocol (PPP) suite is a collection of protocols used for establishing and maintaining point-to-point connections, including Link Control Protocol (LCP) for managing connections, Network Control Protocol (NCP) for configuring communication protocols, and various authentication protocols. Physical addresses (MAC) uniquely identify devices within a local network, while logical addresses (IP) enable communication across different networks. The document also discusses the functions and protocols of the network layer, including IP, ARP, RARP, ICMP, and IGMP, each serving distinct roles in data transmission and network management.


Point-to-Point Protocol (PPP) Suite

Point-to-Point Protocol (PPP) is an asymmetrical protocol suite for various connections or links that do not provide any framing of their own. PPP relies on other protocols to establish connections, authenticate users, and carry the network-layer data. PPP is not a single protocol but a protocol suite whose members address different aspects of point-to-point communication. There are two routers in a PPP session: the initiator (client) and the responder (usually the server).

PPP operation generally involves three categories of protocols, as given below:

1. Link Control Protocol (LCP):


LCP is responsible for establishing, configuring, maintaining, testing, and terminating the link. It also negotiates other Wide Area Network (WAN) options that are controlled by NCPs. All LCP packets are carried in the data field of the PPP frame.

Different LCP protocols are given below:


• Bandwidth Allocation Protocol (BAP) – BAP is a mechanism by which a device communicating over a Multilink PPP (MP) bundle can request that an individual link be added to or removed from the bundle.
• Bandwidth Allocation Control Protocol (BACP) – BACP allows these devices to configure and agree on how they want to use BAP.
• Link Quality Monitoring (LQM) – LQM is the process of determining the loss of data on a link and is generally used for monitoring link quality.
• Link Quality Reporting (LQR) – LQR allows two connected computers to report on link quality. It specifies the quality-reporting mechanism, but not a particular standard for connection quality, as such thresholds are implementation-dependent.

2. Network Control Protocol (NCP): NCP protocols are required to configure the various network-layer communication protocols. Each NCP is specific to a network-layer protocol such as IP, IPX/SPX, or AppleTalk. IP is the most commonly negotiated layer-3 protocol. At least one NCP is present for each network-layer protocol supported by PPP.
Different NCP protocols are given below:
• Compression Control Protocol (CCP) – CCP is responsible for configuring, enabling, disabling, negotiating, and maintaining data compression algorithms on both ends of the PPP connection.
• Bridging Control Protocol (BCP) – BCP is responsible for configuring, enabling, disabling, negotiating, and maintaining bridge control modules on both ends of the PPP connection. It is similar to IPCP, but rather than routing, it initializes bridging.
• Internet Protocol Control Protocol (IPCP) – This protocol is required to configure, enable, and disable the IP protocol modules at each end of the connection. Routers also exchange IPCP messages to negotiate options that are specific to IP.
• Encryption Control Protocol (ECP) – This protocol is required to configure, enable, disable, negotiate, and maintain data encryption algorithms on both ends of the PPP connection.

3. Authentication Protocol: Authentication protocols are required to validate, i.e., check, the identity of the user who wants to access the resources. These protocols also authenticate endpoints on behalf of users of services. Different authentication protocols are given below:
• Extensible Authentication Protocol (EAP) – Several authentication protocols are initiated by the client (the peer), but EAP authentication is generally initiated by the server (the authenticator). EAP is a framework that supports a wide range of authentication methods.
• Password Authentication Protocol (PAP) – This protocol is required to verify the identity and password of the peer or client, and the exchange results in either success or failure. It is symmetric, with no distinct asymmetric roles enforced between authenticator and peer.

Physical Address and Logical Address in Networking:


A physical address is a unique identifier given to network interfaces for communication over a
physical network segment. A logical address is a unique address assigned to each networked
device to identify its location and enable routing.
The physical address is also known as the MAC (Media Access Control) address or link
address. It is the address of a node which is defined by its LAN or WAN. It is used by the data
link layer and is the lowest level of addresses. MAC address is the unique address of a device.
The size of a physical address is 48 bits (6 bytes). Below is the format for representing a physical address:
XX : XX : XX : YY : YY : YY, where 1 octet = 8 bits.
Example:
16 : 1A : BB : 6F : 90 : E5
The first 24 bits of a MAC address (XX : XX : XX) are the OUI (Organizationally Unique Identifier), which represents the identity of the manufacturer. The next 24 bits (YY : YY : YY) represent the unique identity of the device; they are assigned by the manufacturer and identify the NIC (Network Interface Card).
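As a small illustration of this structure, the following Python sketch (a toy helper, not part of any networking standard or library) splits a MAC address string into its OUI and device-specific (NIC) portions:

def split_mac(mac: str):
    """Split a MAC address like '16:1A:BB:6F:90:E5' into OUI and NIC parts."""
    octets = mac.replace("-", ":").split(":")
    if len(octets) != 6:
        raise ValueError("a MAC address must contain exactly 6 octets (48 bits)")
    oui = ":".join(octets[:3])   # first 24 bits: manufacturer (OUI)
    nic = ":".join(octets[3:])   # last 24 bits: device-specific identifier
    return oui, nic

print(split_mac("16:1A:BB:6F:90:E5"))   # ('16:1A:BB', '6F:90:E5')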
Below is a diagram representing the working mechanism of a physical address:

Mechanism of Physical Address

In the above diagram, we can see that there are two networks - Network 1 and Network 2. A1 is the sender and there are two receivers - D1 and D2. In the case of a physical address, receiver D1 receives the data but receiver D2 is unable to receive it. This is because receiver D2 does not belong to the same network as the sender A1. A physical address can only be used within the same network and not across different networks; its purpose is to identify devices in the same network.
Advantages
• Physical address can uniquely identify devices and deliver data packets accurately.
• We can restrict access to any network by allowing only those devices which have
the authorized MAC addresses to connect. Thus, it can also be used for network
security.
Disadvantages
• MAC addresses can be easily spoofed. Thus, the devices can easily gain
unauthorized access to a network.
• As physical addresses cannot traverse through the routers therefore they can only
be used in local networks and not between different networks.
Physical and Logical Address

A logical address, also referred to as an IP (Internet Protocol) address, is a universal addressing system. It is used at the network layer. This address facilitates universal communication that is not dependent on the underlying physical networks. There are two types of IP addresses - IPv4 and IPv6.
The size of an IPv4 address is 32 bits. For example:
192.180.210.5, where each of the four fields (octets) is 8 bits.
The size of an IPv6 address is 128 bits. For example:
1C18 : 1B32 : C450 : 62A5 : 34DC : AE24 : 15BC : 6A5D, where each of the eight fields is 16 bits.
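For a quick, concrete check of these sizes, Python's standard ipaddress module can parse both address families; this is purely illustrative, using the example addresses above:

import ipaddress

v4 = ipaddress.ip_address("192.180.210.5")
v6 = ipaddress.ip_address("1C18:1B32:C450:62A5:34DC:AE24:15BC:6A5D")

print(v4.version, v4.max_prefixlen)   # 4 32  -> IPv4 addresses are 32 bits
print(v6.version, v6.max_prefixlen)   # 6 128 -> IPv6 addresses are 128 bits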
Below is a diagram representing the working mechanism of Logical address:
Mechanism of Logical Address

In the above diagram, we can see that there are two networks - Network 1 and Network 2. A1 is the sender and there are two receivers - D1 and D2. In the case of a logical address, both receiver D1 and receiver D2 receive the data. This is because a logical address can be passed across different networks; its purpose is to send data across networks.
Advantages
• Logical address can be used in different networks because they can traverse
through routers.
• They can handle a large number of devices and networks. Even if the number of devices and networks increases, logical addressing is able to handle all of them easily. Thus, it is highly scalable.
Disadvantages
• Internet Protocol is vulnerable to attacks such as hacking, phishing etc. and there
can be data loss.
• It lacks privacy. The data which is moving through the packets can be intercepted,
traced and monitored by unauthorized entities.
Differences between Physical Address and Logical Address

• A physical address is the address of a node defined by its LAN or WAN; a logical address, also referred to as an IP (Internet Protocol) address, is a universal addressing system.
• A physical address is computed by the MMU; a logical address is generated by the CPU.
• A physical address is found at the Data Link Layer; a logical address is found at the Network Layer.
• The physical address format is a 48-bit address in hexadecimal; the logical address format is 32-bit for IPv4 and 128-bit for IPv6.
• A physical address is not visible to users; a logical address is visible to users.

Network Layer Protocols


The network layer is responsible for the transmission of data, or communication, from one host to another host connected in a network. Rather than merely describing how data is transferred, it implements the techniques for efficient transmission. To provide efficient communication, protocols are used at the network layer. The data is grouped into packets, or in the case of extremely large data, divided into smaller sub-packets. Each protocol used has specific features and advantages. The following sections cover in detail the protocols used at the network layer.
Functions of Network Layer
The network layer is responsible for providing the below-given tasks:
• Logical Addressing: Each device on the network needs to be identified uniquely, so the network layer provides an addressing scheme to identify each device. It places the IP addresses of the sender and the receiver in the header; each address consists of a network ID and a host ID.
• Host-to-host Delivery of Data: The network layer ensures that the packet is delivered successfully from the sender to the receiver and that it reaches the intended recipient only.
• Fragmentation: To transmit larger data from sender to receiver, the network layer fragments it into smaller packets. Fragmentation is required because every node has its own fixed capacity for receiving data.
• Congestion Control: Congestion is a situation where the router is not able to route packets properly, which results in an aggregation of packets in the network. Congestion occurs when a large number of packets are flooded into the network, so the network layer controls the congestion of data packets.
• Routing and Forwarding: Routing is the process that decides the route for
transmission of packets from sender to receiver. It mostly chooses the shortest path
between the sender and the receiver. Routing protocols that are mostly used are path
vector, distance vector routing, link state routing, etc.

Network Layer Protocols


There are various protocols used in the network layer. Each protocol is used for a different task.
Below are the protocols used in the network layer:

Protocols at each Layer


1. IP (Internet Protocol)
IP stands for Internet Protocol. The Internet Protocol helps to uniquely identify each device on the network and is responsible for transferring data from one node to another in the network. The Internet Protocol is a connectionless protocol, so it does not guarantee the delivery of data; for guaranteed delivery, higher-level protocols such as TCP are used. The Internet Protocol comes in two versions:
• IPv4: IPv4 provides a 32-bit addressing scheme. An IPv4 address has four numeric fields separated by dots. IPv4 can be configured either using DHCP or manually. IPv4 provides few security features, as it does not itself support authentication or encryption. IPv4 addresses are further divided into five classes: Class A, Class B, Class C, Class D, and Class E.
• IPv6: IPv6 is the most recent version of IP. It provides a 128-bit addressing scheme. An IPv6 address has eight fields separated by colons, and these fields are alphanumeric because the address is represented in hexadecimal. IPv6 provides more security features such as authentication and encryption, supports end-to-end connection integrity, and provides a far larger range of IP addresses than IPv4.
2. ARP (Address Resolution Protocol)
ARP stands for Address Resolution Protocol. ARP is used to convert a logical address (IP address) into a physical address (MAC address). While communicating with other nodes, a host needs the MAC (physical) address of the destination node. If any node in a network wants to know the physical address of another node in the same network, the host sends an ARP query packet. This ARP query packet contains the IP address and MAC address of the source host and only the IP address of the destination host. The ARP packet is received by every node present in the network; the node whose IP address matches recognizes the query and sends its MAC address to the requesting node. However, sending and receiving such packets for every destination increases the traffic load, so to reduce this traffic and improve performance, systems that use ARP maintain a cache of recently acquired IP-to-MAC address bindings.
ARP Work
• The host broadcasts an ARP inquiry packet containing the IP address over the
network to find out the physical address of another computer on its network.
• The ARP packet is received and processed by all hosts on the network; however,
only the intended recipient can identify the IP address and reply with the physical
address.
• After receiving the physical address, the requesting host adds it to its cache memory and to the datagram header, and then transmits the datagram (a minimal sketch of this process is given after this list).
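The following Python sketch mimics this query/reply flow with an in-memory cache. The host table and function names are invented for illustration only; no real ARP frames are sent.

# Hypothetical view of one LAN: IP address -> MAC address of each attached node.
lan_hosts = {
    "192.168.1.10": "16:1A:BB:6F:90:E5",
    "192.168.1.20": "A4:5E:60:12:34:56",
}

arp_cache = {}  # recently resolved IP -> MAC bindings

def resolve(ip: str) -> str:
    """Return the MAC address for `ip`, using the cache before 'broadcasting'."""
    if ip in arp_cache:                      # cache hit: no query needed
        return arp_cache[ip]
    mac = lan_hosts.get(ip)                  # simulate the broadcast query;
    if mac is None:                          # only the matching host replies
        raise LookupError(f"no host with IP {ip} on this network")
    arp_cache[ip] = mac                      # remember the binding (dynamic entry)
    return mac

print(resolve("192.168.1.10"))   # resolved by 'broadcast' the first time
print(resolve("192.168.1.10"))   # served from the cache the second time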

ARP

Types of ARP Entries


• Static Entry: This type of entry is created when a user uses the ARP command
utility to manually enter the IP to MAC address association.
• Dynamic Entry: A dynamic entry is one that is automatically formed when a
sender broadcasts their message to the whole network. Dynamic entries are
periodically removed and are not permanent.
3. RARP
RARP stands for Reverse Address Resolution Protocol. RARP works the opposite of ARP: it is used to convert a MAC address (physical address) into an IP address (logical address). RARP provides a way for systems and applications to obtain their IP address from a server or router on the network. This type of resolution is needed for various tasks, such as performing a reverse lookup of an address. As the Reverse Address Resolution Protocol works at a low level, it requires direct network addresses. The reply from the server mostly carries only a small amount of information (the 32-bit Internet address), so it does not exploit the full potential of a network such as Ethernet.
RARP Work
• Data is sent between two places in a network using the RARP, which is on the
Network Access Layer.
• Every user on the network has two distinct addresses: their MAC (physical)
address and their IP (logical) address.
• The IP address is assigned by software, whereas the MAC address is built into the device hardware.
• Any regular computer connected to the network can function as the RARP server,
answering to RARP queries. It must, however, store all of the MAC addresses'
associated IP addresses. Only these RARP servers are able to respond to RARP
requests that are received by the network. The information package must be
transmitted over the network's lowest tiers.
• Using both its physical address and Ethernet broadcast address, the client
transmits a RARP request. In response, the server gives the client its IP address.

4. ICMP
ICMP stands for Internet Control Message Protocol. ICMP is a part of the IP protocol suite and is an error-reporting and network-diagnostic protocol. Feedback about the network is reported back to the designated host; whenever any kind of error occurs during packet processing, it is reported using ICMP. The ICMP protocol consists of many error-reporting and diagnostic messages and handles various kinds of errors such as time exceeded, redirection, source quench, destination unreachable, parameter problems, etc. The messages in ICMP are divided into two types, given below:
• Error Message: Error message states the issues or problems that are faced by the
host or routers during the processing of IP packets.
• Query Message: Query messages are used by the host in order to get information
from a router or another host.
ICMP Work
• The main and most significant protocol in the IP suite is called ICMP. However, unlike TCP, ICMP is a connectionless protocol, meaning it doesn't require a connection to be established with the target device before transmitting a message.
• TCP and ICMP operate differently from one another: TCP is a connection-oriented protocol, while ICMP operates without a connection. With TCP, a handshake between both devices is required before a message is sent.
• ICMP messages are transmitted as datagrams consisting of an IP header that encapsulates the ICMP data. An ICMP datagram is an independent data item, comparable to a packet.

5. IGMP
IGMP stands for Internet Group Management Protocol. IGMP is a multicast communication protocol that uses network resources efficiently when sending messages and data packets to many receivers. IGMP is part of the TCP/IP suite: hosts and routers on IP networks use it to manage multicast communication. In many networks, multicast routers are used to transmit messages to all the interested nodes. These routers may receive a large number of packets that need to be sent, and simply broadcasting them would increase the overall network load. IGMP helps multicast routers by telling them which hosts belong to which multicast groups, so traffic is forwarded only where it is needed. Because multicast communication involves one or more senders and many receivers, the Internet Group Management Protocol is widely used in applications such as streaming media, web-conferencing tools, games, etc.
IGMP Work
• Devices that can support dynamic multicasting and multicast groups can use IGMP.
• The host can join or exit the multicast group using these devices. It is also possible to add and remove clients (group members) using these devices.
• The host and local multicast router use this communication protocol. Upon
creation of a multicast group, the packet's destination IP address is changed to the
multicast group address, which falls inside the class D IP address range.

Network Layer Services


The network layer is a part of the communication process in computer networks. Its main
job is to move data packets between different networks. It helps route these packets from the
sender to the receiver across multiple paths and networks. Network-to-network connections
enable the Internet to function. These connections happen at the network layer, which sends
data packets between different networks. In the 7-layer OSI model, the network layer is layer
3. The Internet Protocol (IP) is a key protocol used at this layer, along with other protocols
for routing, testing, and encryption.
1. Assigning Logical Address
Logical addressing is the process of assigning unique IP addresses to devices within a network.
Unlike physical addresses (MAC addresses), logical addresses can change based on network
configurations. These addresses are hierarchical and help identify both the network and the
device within that network. Logical addressing is important for:
• Enabling communication between devices on different networks.
• Facilitating routing by providing location-based information.
2. Packetizing
The process of encapsulating the data received from the upper layers of the network (also called
payload) in a network layer packet at the source and decapsulating the payload from the
network layer packet at the destination is known as packetizing.
The source host adds a header that contains the source and destination address and some other
relevant information required by the network layer protocol to the payload received from the
upper layer protocol and delivers the packet to the data link layer.

The destination host receives the network layer packet from its data link layer, decapsulates
the packet, and delivers the payload to the corresponding upper layer protocol. The routers in
the path are not allowed to change either the source or the destination address. The routers in
the path are not allowed to decapsulate the packets they receive unless they need to be
fragmented.
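A very rough way to picture packetizing is to prepend a header carrying the source and destination addresses to the payload, and to strip it off again at the destination. The byte layout below is invented purely for illustration and is not the real IP header format:

import struct, ipaddress

def packetize(src: str, dst: str, payload: bytes) -> bytes:
    """Prepend a toy 8-byte header (source IP, destination IP) to the payload."""
    header = struct.pack("!4s4s",
                         ipaddress.IPv4Address(src).packed,
                         ipaddress.IPv4Address(dst).packed)
    return header + payload

def depacketize(packet: bytes):
    """Strip the toy header at the destination and recover addresses and payload."""
    src, dst = struct.unpack("!4s4s", packet[:8])
    return (str(ipaddress.IPv4Address(src)),
            str(ipaddress.IPv4Address(dst)),
            packet[8:])

pkt = packetize("10.0.0.1", "10.0.0.2", b"hello")
print(depacketize(pkt))   # ('10.0.0.1', '10.0.0.2', b'hello')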
3. Host-to-Host Delivery
The network layer ensures data is transferred from the source device (host) to the destination
device (host) across one or multiple networks. This involves:
• Determining the destination address.
• Ensuring that data is transmitted without duplication or corruption.
Host-to-host delivery is a foundational aspect of communication in large-scale, interconnected
systems like the internet.
4. Forwarding
Forwarding is the process of transferring packets between network devices such as routers,
which are responsible for directing the packets toward their destination. When a router receives
a packet from one of its attached networks, it needs to forward the packet to another attached
network (unicast routing) or to some attached networks (in the case of multicast routing). The router uses:
• Routing tables: These tables store information about possible paths to different
networks.
• Forwarding decisions: Based on the destination IP address in the packet header.
Forwarding ensures that packets move closer to their destination efficiently.
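Forwarding decisions of this kind are commonly implemented as a longest-prefix match against the routing table. The table entries below are hypothetical and only sketch the idea:

import ipaddress

# Hypothetical routing table: destination prefix -> outgoing interface.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",
    ipaddress.ip_network("0.0.0.0/0"):   "eth0",   # default route
}

def forward(dst: str) -> str:
    """Pick the matching entry with the longest prefix (most specific route)."""
    dst_addr = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if dst_addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(forward("10.1.2.3"))    # eth2 (the /16 is more specific than the /8)
print(forward("8.8.8.8"))     # eth0 (only the default route matches)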
5. Fragmentation and Reassembly of Packets
Some networks have a maximum transmission unit (MTU) that defines the largest packet
size they can handle. If a packet exceeds the MTU, the network layer:
• Fragments the packet into smaller pieces.
• Adds headers to each fragment for identification and sequencing.
At the destination, the fragments are reassembled into the original packet. This ensures compatibility with networks of varying capabilities without data loss.
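As a simplified illustration of fragmentation and reassembly (ignoring real IP header fields such as offsets, flags, and identification), a payload can be cut into MTU-sized pieces and stitched back together in order:

def fragment(payload: bytes, mtu: int):
    """Split a payload into numbered fragments no larger than the MTU."""
    return [(seq, payload[i:i + mtu])
            for seq, i in enumerate(range(0, len(payload), mtu))]

def reassemble(fragments):
    """Rebuild the original payload from fragments, regardless of arrival order."""
    return b"".join(data for _, data in sorted(fragments))

frags = fragment(b"A" * 2000 + b"B" * 1500, mtu=1500)
print([len(d) for _, d in frags])                              # [1500, 1500, 500]
print(reassemble(reversed(frags)) == b"A" * 2000 + b"B" * 1500)  # True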
6. Logical Subnetting
Logical subnetting involves dividing a large IP network into smaller, more manageable sub-
networks (subnets). Subnetting helps:
• Improve network performance by reducing congestion.
• Enhance security by isolating parts of a network.
• Simplify network management and troubleshooting.
Subnetting uses subnet masks to define the range of IP addresses within each subnet, enabling efficient address allocation and routing.
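Python's ipaddress module can illustrate how a larger block is divided into subnets; the prefix lengths here are arbitrary examples:

import ipaddress

network = ipaddress.ip_network("192.168.0.0/24")

# Divide the /24 into four /26 subnets of 64 addresses each.
for subnet in network.subnets(new_prefix=26):
    print(subnet, "-", subnet.num_addresses, "addresses")
# 192.168.0.0/26 - 64 addresses
# 192.168.0.64/26 - 64 addresses
# 192.168.0.128/26 - 64 addresses
# 192.168.0.192/26 - 64 addresses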
7. Network Address Translation (NAT)
NAT allows multiple devices in a private network to share a single public IP address for
internet access. This is achieved by:
• Translating private IP addresses to a public IP address for outbound traffic.
• Reversing the process for inbound traffic.
Benefits of NAT include:
• Conserving IPv4 addresses by reducing the need for unique public IPs for each
device.
• Enhancing security by masking internal IP addresses from external networks.
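Conceptually, a NAT device keeps a translation table that maps each private (IP, port) pair to a port on its single public address. The sketch below uses made-up addresses and ignores timeouts, protocols, and many real-world details:

PUBLIC_IP = "203.0.113.5"           # the one public address shared by the LAN
nat_table = {}                       # (private_ip, private_port) -> public_port
next_port = 40000

def translate_outbound(private_ip: str, private_port: int):
    """Rewrite an outgoing packet's source to the shared public address."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port   # allocate a fresh public port
        next_port += 1
    return PUBLIC_IP, nat_table[key]

def translate_inbound(public_port: int):
    """Look up which internal host an incoming packet belongs to."""
    for (ip, port), pub in nat_table.items():
        if pub == public_port:
            return ip, port
    return None                      # unknown mapping: drop the packet

print(translate_outbound("192.168.1.10", 5555))   # ('203.0.113.5', 40000)
print(translate_inbound(40000))                    # ('192.168.1.10', 5555)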
8. Routing
Routing is the process of moving data from one device to another. In a network, there are usually several routes available from the source to the destination, and the network layer applies strategies to find the best possible route; this process is referred to as routing. Several routing protocols are used in this process, and they must be run so that routers can coordinate with each other and establish communication throughout the network.
Advantages of Network Layer Services
• Packetization service in the network layer provides ease of transportation of the
data packets.
• Packetization also eliminates single points of failure in data communication
systems.
• Routers present in the network layer reduce network traffic by creating collision
and broadcast domains.
• With the help of Forwarding, data packets are transferred from one place to
another in the network.
Disadvantages of Network Layer Services
• There is a lack of flow control in the design of the network layer.
• Congestion occurs sometimes due to the presence of too many datagrams in a
network that is beyond the capacity of the network or the routers. Due to this, some
routers may drop some of the datagrams, and some important pieces of information
may be lost.
• Although indirect error control is present in the network layer, there is a lack of
proper error control mechanisms as due to the presence of fragmented data packets,
error control becomes difficult to implement.

Types of Routing
Routing is the process of determining paths through a network for sending data packets. It
ensures that data moves effectively from source to destination, making the best use of network
resources and ensuring consistent communication. Routing is performed by layer 3 (network layer) devices to deliver packets by choosing an optimal path from one network to another. It is an autonomous process handled by the network devices to direct a data packet to its intended destination. The node here refers to a network device called a router. Routing is classified into Static Routing, Default Routing, and Dynamic Routing, each covered in detail below.
Routing is essential for determining the optimal path for data packets across networks.
Different routing techniques, such as static, dynamic, and hybrid, are used depending on the
network.
Types of Routing
Routing is typically of 3 types, each serving its purpose and offering different functionalities.

Types of Routing

1. Static Routing
Static routing is also called "non-adaptive routing". In this approach, routing configuration is done
manually by the network administrator. Let’s say for example, we have 5 different routes to
transmit data from one node to another, so the network administrator will have to manually
enter the routing information by assessing all the routes.
Advantages of Static Routing
• No routing overhead for the router CPU which means a cheaper router can be used
to do routing.
• It adds security, because only the administrator can allow routing to particular networks.
• No bandwidth usage between routers.
Disadvantage of Static Routing
• For a large network, it is a hectic task for administrators to manually add each
route for the network in the routing table on each router.
• The administrator must have good knowledge of the topology; if a new administrator takes over, they have to learn the topology of the routes before they can manually add each one.

2. Default Routing
This is the method where the router is configured to send all packets toward a single router
(next hop). It doesn’t matter to which network the packet belongs, it is forwarded out to the
router which is configured for default routing. It is generally used with stub routers. A stub
router is a router that has only one route to reach all other networks.
Advantages of Default Routing
• Default routing provides a “last resort” route for packets that don’t match any
specific route in the routing table. It ensures that packets are not dropped and can
reach their intended destination.
• It simplifies network configuration by reducing the need for complex routing tables.
• Default routing improves network reliability and reduces packet loss.
Disadvantages of Default Routing
• Relying solely on default routes can lead to inefficient routing, as it doesn’t
consider specific paths.
• Using default routes may introduce additional network latency.

3. Dynamic Routing
Dynamic routing makes automatic adjustments of the routes according to the current state of
the route in the routing table. Dynamic routing uses protocols to discover network destinations
and the routes to reach them. RIP and OSPF are the best examples of dynamic routing
protocols. Automatic adjustments will be made to reach the network destination if one route
goes down. A dynamic protocol has the following features:
• The routers should have the same dynamic protocol running in order to exchange
routes.
• When a router finds a change in the topology then the router advertises it to all
other routers.
Advantages of Dynamic Routing
• Easy to configure.
• More effective at selecting the best route to a destination remote network and also
for discovering remote networks.
Disadvantage of Dynamic Routing
• Consumes more bandwidth for communicating with other neighbors.
• Less secure than static routing.

Difference between Static and Dynamic Routing


Static routing and dynamic routing are two fundamental concepts in network
communication. Static routing uses preconfigured paths, while dynamic routing
automatically adjusts paths based on current network conditions.

• In static routing, routes are user-defined; in dynamic routing, routes are updated according to the topology.
• Static routing does not use complex routing algorithms; dynamic routing uses complex routing algorithms.
• Static routing provides more security; dynamic routing provides less security.
• Static routing is manual; dynamic routing is automated.
• Static routing is implemented in small networks; dynamic routing is implemented in large networks.
• In static routing, additional resources are not required; in dynamic routing, additional resources are required.
• In static routing, failure of a link disrupts rerouting; in dynamic routing, failure of a link does not interrupt rerouting.
• Less bandwidth is required in static routing; more bandwidth is required in dynamic routing.
• Static routing is difficult to configure; dynamic routing is easy to configure.
• Another name for static routing is non-adaptive routing; another name for dynamic routing is adaptive routing.

Classification of Routing Algorithms


Routing is the process of establishing the routes that data packets must follow to reach the
destination. In this process, a routing table is created which contains information regarding
routes that data packets follow. Various routing algorithms are used for the purpose of deciding
which route an incoming data packet needs to be transmitted on to reach the destination
efficiently.
Classification of Routing Algorithms
The routing algorithms can be classified as follows:
1. Adaptive Algorithms
2. Non-Adaptive Algorithms
3. Hybrid Algorithms

Types of Routing Algorithm

Routing algorithms can be classified into various types such as distance vector, link state, and
hybrid routing algorithms. Each has its own strengths and weaknesses depending on the
network structure.
1. Adaptive Algorithms
These are the algorithms that change their routing decisions whenever network topology or
traffic load changes. The changes in routing decisions are reflected in the topology as well as
the traffic of the network. Also known as dynamic routing, these make use of dynamic
information such as current topology, load, delay, etc. to select routes. Optimization parameters
are distance, number of hops, and estimated transit time.
Further, these are classified as follows:
• Isolated: In this method, each node makes its routing decisions using only the information it has, without seeking information from other nodes. The sending nodes don't have information about the status of a particular link. The disadvantage is that packets may be sent through a congested part of the network, which may result in delay. Examples: hot potato routing and backward learning.
• Centralized: In this method, a centralized node has complete information about the network and makes all the routing decisions. The advantage is that only one node is required to keep the information of the entire network; the disadvantage is that if the central node goes down, the entire network fails. The link-state algorithm is referred to as a centralized algorithm since it is aware of the cost of each link in the network.
• Distributed: In this method, a node receives information from its neighbours and then makes its own decision about routing the packets. A disadvantage is that a packet may be delayed if the topology changes between the intervals at which the node receives information and sends packets. It is also known as a decentralized algorithm, and it computes the least-cost path between source and destination.
2. Non-Adaptive Algorithms
These are the algorithms that do not change their routing decisions once they have been
selected. This is also known as static routing as a route to be taken is computed in advance and
downloaded to routers when a router is booted.
Further, these are classified as follows:
• Flooding: This uses the technique in which every incoming packet is sent out on every outgoing line except the one it arrived on. One problem with this is that packets may travel in a loop, as a result of which a node may receive duplicate packets. These problems can be overcome with the help of sequence numbers, hop counts, and spanning trees.
• Random walk: In this method, packets are sent host by host or node by node to
one of its neighbors randomly. This is a highly robust method that is usually
implemented by sending packets onto the link which is least queued.

Random Walk
3. Hybrid Algorithms
As the name suggests, these algorithms are a combination of both adaptive and non-adaptive
algorithms. In this approach, the network is divided into several regions, and each region uses
a different algorithm. Further, these are classified as follows:
• Link-state: In this method, each router creates a detailed and complete map of the network, which is then shared with all other routers. This allows more accurate and efficient routing decisions to be made (a minimal sketch of the underlying shortest-path computation follows this list).
• Distance vector: In this method, each router maintains a table that contains
information about the distance and direction to every other node in the network.
This table is then shared with other routers in the network. The disadvantage of this
method is that it may lead to routing loops.
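To make the link-state idea above concrete, here is a minimal Dijkstra shortest-path sketch over an invented topology; real protocols such as OSPF add flooding of link-state advertisements, areas, and timers on top of this core computation.

import heapq

# Hypothetical link-state database: router -> {neighbour: link cost}.
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}

def shortest_paths(source: str):
    """Dijkstra's algorithm: least-cost distance from `source` to every router."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                                  # stale queue entry
        for neighbour, cost in topology[node].items():
            new_d = d + cost
            if new_d < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_d
                heapq.heappush(heap, (new_d, neighbour))
    return dist

print(shortest_paths("A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 4}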

Difference between Adaptive and Non-Adaptive Routing Algorithms


The main difference between Adaptive and Non-Adaptive Algorithms is:
Adaptive algorithms are algorithms that change their routing decisions whenever network topology or traffic load changes; this is called dynamic routing. Adaptive algorithms are used for large amounts of data, highly complex networks, and situations where rerouting of data is needed.
Non-adaptive algorithms are algorithms that do not change their routing decisions once they have been selected; this is also called static routing. Non-adaptive algorithms are used for small amounts of data and less complex networks.

Types of Routing Protocol in Computer Networks


1. Routing information protocol (RIP)
One of the earliest routing protocols developed is RIP, an interior gateway protocol. We can use it with local area networks (LANs), which are linked computers over a short range, or wide area networks (WANs), which are telecom networks covering a larger range. Hop counts are used by the Routing Information Protocol (RIP) to calculate the shortest path between networks.

2. Interior gateway protocol (IGRP)


IGRP was developed by the multinational technology corporation Cisco. It has many of the
core features of RIP but raises the maximum number of supported hops to 100, so it may work better on larger networks. IGRP is a distance-vector interior gateway protocol. To choose routes, IGRP compares indicators such as load, reliability, and network capacity (bandwidth). Additionally, it updates automatically when conditions change, such as when a route changes.
3. Exterior Gateway Protocol (EGP)
Exterior gateway protocols, such as EGP, help transfer data or information between several
gateway hosts in autonomous systems. In particular, it aids in giving routers the room they need
to exchange data between domains, such as the Internet.

4. Enhanced interior gateway routing protocol (EIGRP)


EIGRP is categorized as a classless, interior gateway, distance-vector routing protocol. To maximize efficiency, it makes use of the Diffusing Update Algorithm (DUAL) and a reliable transport
protocol. A router can use the tables of other routers to obtain information and store it for later
use. Every router communicates with its neighbor when something changes so that everyone
is aware of which data paths are active. It stops routers from miscommunicating with one
another. The only external gateway protocol is called Border Gateway Protocol (BGP).

5. Open shortest path first (OSPF)


OSPF is an interior gateway, link-state, classless protocol that makes use of the shortest path first (SPF) algorithm to guarantee effective data transfer. It maintains multiple databases containing topology tables and details about the network as a whole. Its advertisements (link-state advertisements), which resemble reports, describe the path's length and potential resource requirements. When the topology changes, OSPF recalculates paths using the Dijkstra algorithm.
To guarantee that its data is safe from modifications or network intrusions, it also employs
authentication procedures.

6. Border gateway protocol (BGP)


BGP is another kind of exterior gateway protocol, first created to replace EGP. It is a path-vector protocol: it transfers data packets using a best-path-selection technique. BGP defines communication over the Internet. The Internet is a vast
network of interconnected autonomous systems. Every autonomous system has an autonomous
system number (ASN) that it receives by registering with the Internet Assigned Numbers
Authority.

Difference between Routing and Flooding


The difference between Routing and Flooding is listed below:
• In routing, a routing table is required; in flooding, no routing table is required.
• Routing may give the shortest path; flooding always gives the shortest path.
• Routing is less reliable; flooding is more reliable.
• Routing generates less traffic; flooding generates high traffic.
• Routing produces no duplicate packets; in flooding, duplicate packets are present.

Congestion Control in Computer Networks


Congestion in a computer network happens when there is too much data being sent at the same
time, causing the network to slow down. Just like traffic congestion on a busy road, network
congestion leads to delays and sometimes data loss. When the network can’t handle all the
incoming data, it gets “clogged,” making it difficult for information to travel smoothly from
one place to another.
Congestion control is a crucial concept in computer networks. It refers to the methods used to
prevent network overload and ensure smooth data flow. Congestion control techniques help
manage the traffic, so all users can enjoy a stable and efficient network connection. These
techniques are essential for maintaining the performance and reliability of modern networks.
Effects of Congestion Control
• Improved Network Stability: Congestion control helps keep the network stable
by preventing it from getting overloaded. It manages the flow of data so the network
doesn’t crash or fail due to too much traffic.
• Reduced Latency and Packet Loss: Without congestion control, data
transmission can slow down, causing delays and data loss. Congestion
control helps manage traffic better, reducing these delays and ensuring fewer data
packets are lost, making data transfer faster and the network more responsive.
• Enhanced Throughput: By avoiding congestion, the network can use its
resources more effectively. This means more data can be sent in a shorter time,
which is important for handling large amounts of data and supporting high-speed
applications.
• Fairness in Resource Allocation: Congestion control ensures that network
resources are shared fairly among users. No single user or application can take up
all the bandwidth, allowing everyone to have a fair share.
• Better User Experience: When data flows smoothly and quickly, users have a
better experience. Websites, online services, and applications work more reliably
and without annoying delays.
• Mitigation of Network Congestion Collapse: Without congestion control, a
sudden spike in data traffic can overwhelm the network, causing severe congestion
and making it almost unusable. Congestion control helps prevent this by managing
traffic efficiently and avoiding such critical breakdowns.

Congestion Control Algorithm


Congestion Control is a mechanism that controls the entry of data packets into the network,
enabling a better use of a shared network infrastructure and avoiding congestive
collapse. Congestive-avoidance algorithms (CAA) are implemented at the TCP layer as the
mechanism to avoid congestive collapse in a network. There are two congestion control
algorithms which are as follows:
Leaky Bucket Algorithm
• The leaky bucket algorithm finds its use in the context of network traffic shaping or rate-limiting.
• Leaky bucket and token bucket implementations are the predominant traffic-shaping algorithms.
• This algorithm is used to control the rate at which traffic is sent to the network and to shape bursty traffic into a steady traffic stream.
• A disadvantage of the leaky bucket algorithm is the inefficient use of available network resources.
• Large amounts of network resources such as bandwidth may not be used effectively.
Let us consider an example to understand. Imagine a bucket with a small hole in the bottom.
No matter at what rate water enters the bucket, the outflow is at constant rate. When the bucket
is full with water additional water entering spills over the sides and is lost.

Leaky Bucket

Similarly, each network interface contains a leaky bucket and the following steps are involved
in the leaky bucket algorithm:
• When a host wants to send a packet, the packet is thrown into the bucket.
• The bucket leaks at a constant rate, meaning the network interface transmits
packets at a constant rate.
• Bursty traffic is converted to uniform traffic by the leaky bucket.
• In practice the bucket is a finite queue that outputs at a finite rate.

Leaky bucket algorithm


In computer networks, congestion occurs when data traffic exceeds the available bandwidth
and leads to packet loss, delays, and reduced performance. Traffic shaping can prevent and
reduce congestion in a network. It is a technique used to regulate data flow by controlling the
rate at which packets are sent into the network. There are 2 types of traffic-shaping algorithms:
1. Leaky Bucket
2. Token Bucket
Leaky bucket algorithm
Suppose we have a bucket in which we are pouring water at random points in time but we have
to get water at a fixed rate to achieve this we will make a hole at the bottom of the bucket. This
will ensure that the water coming out is at some fixed rate. If the bucket gets full, then we will
stop pouring water into it.
The input rate can vary but the output rate remains constant. Similarly, in networking, a
technique called leaky bucket can smooth out bursty traffic. Bursty chunks are stored in the
bucket and sent out at an average rate.

In the above figure, we assume that the network has committed a bandwidth of 3 Mbps for a
host. The use of the leaky bucket shapes the input traffic to make it conform to this
commitment. In the above figure, the host sends a burst of data at a rate of 12 Mbps for 2s, for
a total of 24 Mbits of data. The host is silent for 5 s and then sends data at a rate of 2 Mbps for
3 s, for a total of 6 Mbits of data. In all, the host has sent 30 Mbits of data in 10 s. The leaky
bucket smooths out the traffic by sending out data at a rate of 3 Mbps during the same 10 s.
Without the leaky bucket, the beginning burst may have hurt the network by consuming more
bandwidth than is set aside for this host. We can also see that the leaky bucket may prevent
congestion.
Leaky Bucket Algorithm Work
A simple leaky bucket algorithm can be implemented using FIFO queue. A FIFO queue holds
the packets. If the traffic consists of fixed-size packets, the process removes a fixed number of
packets from the queue at each tick of the clock. If the traffic consists of variable-length
packets, the fixed output rate must be based on the number of bytes or bits.
The following is an algorithm for variable-length packets:
1. Initialize a counter to n at the tick of the clock.
2. Repeat until n is smaller than the packet size of the packet at the head of the queue.
1. Pop a packet out of the head of the queue, say P.
2. Send the packet P, into the network
3. Decrement the counter by the size of packet P.
3. Reset the counter and go to step 1.

In the below examples, the head of the queue is the rightmost position and the tail of the queue
is the leftmost position.
Example: Let n=1000
Packets in the queue (from tail to head): 450, 400, 200

Since n > size of the packet at the head of the Queue, i.e. n > 200
Therefore, n = 1000-200 = 800
Packet size of 200 is sent into the network.

Now, again n > size of the packet at the head of the Queue, i.e. n > 400
Therefore, n = 800-400 = 400
Packet size of 400 is sent into the network.
Since, n < size of the packet at the head of the Queue, i.e. n < 450
Therefore, the procedure is stopped.
Initialize n = 1000 on another tick of the clock.
This procedure is repeated until all the packets are sent into the network.
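The counter-based procedure above translates directly into a few lines of Python; the queue contents and the value of n are simply those of the worked example:

from collections import deque

def leaky_bucket_tick(queue: deque, n: int) -> None:
    """One clock tick: send packets from the head while the counter allows it."""
    while queue and n >= queue[0]:       # head of the queue is queue[0]
        packet_size = queue.popleft()    # pop the packet at the head
        n -= packet_size                 # decrement the counter by its size
        print(f"sent packet of {packet_size} bytes, counter now {n}")
    # the counter is reset to its initial value at the next tick

bucket = deque([200, 400, 450])          # packets waiting, head first
leaky_bucket_tick(bucket, n=1000)
# sent packet of 200 bytes, counter now 800
# sent packet of 400 bytes, counter now 400
# the 450-byte packet waits for the next tick (400 < 450)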

Token Bucket Algorithm


• The leaky bucket algorithm has a rigid output design at an average rate
independent of the bursty traffic.
• In some applications, when large bursts arrive, the output is allowed to speed up.
This calls for a more flexible algorithm, preferably one that never loses
information. Therefore, a token bucket algorithm finds its uses in network traffic
shaping or rate-limiting.
• It is a control algorithm that indicates when traffic may be sent, based on the presence of tokens in the bucket.
• The bucket contains tokens. Each token corresponds to a unit of data of predetermined size. Tokens are removed from the bucket when a packet is sent.
• When tokens are present, a flow is allowed to transmit traffic.
• No tokens means no flow may send its packets. Hence, a flow can transmit traffic up to its peak burst rate only if there are sufficient tokens in the bucket.

Token Bucket Algorithm


The Token Bucket algorithm is a popular and simple method used in computer networking and
telecommunications for traffic shaping and rate limiting. It is designed to control the amount of data that a system can send or receive in a given period of time, ensuring that the traffic conforms to a specified rate.
Quality of Service (QoS) refers to traffic control mechanisms that seek to either differentiate performance based on application or network-operator requirements or provide predictable or guaranteed performance to applications, sessions, or traffic aggregates. It is something that a data flow seeks to attain.
Need for Token Bucket Algorithm
• Video and audio conferencing require a bounded delay and loss rate.
• Video and audio streaming requires a bounded packet loss rate, it may not be so
sensitive to delay.
• Time-critical applications (such as real-time control) in which bounded delay is considered to be an important factor.
• Valuable applications should provide better services than less valuable applications.
Flow Characteristics of Token Bucket Algorithm
Four types of characteristics are attributed to a flow: reliability, delay, jitter, and bandwidth.

Types of Characteristics for Quality of Service

Reliability
It indicates whether a packet reached its destination and whether information was lost. Lack of reliability means losing a
packet or acknowledgement, which entails re-transmission. Reliability requirements may differ
from program to program. For example, it is more important that electronic mail, file transfer
and internet access have reliable transmissions than telephony or audio conferencing.
Delay
It denotes source-to-destination delay. Different applications can tolerate delay in different
degrees. Telephony, audio conferencing, video conferencing, and remote log-in need minimum
delay, while delay in file transfer or e-mail is less important.
Jitter
Jitter is the variation in delay for packets belonging to the same flow. High jitter means the difference between delays is large; low jitter means the variation is small. For example, if packets sent at 0, 1, 2, and 3 s arrive at 6, 7, 8, and 9 s, they all have the same delay (no jitter). If packets that departed at 0, 1, 2, and 3 s reach the destination at 4, 6, 10, and 15 s, the delays differ, which is jitter. Audio and video applications do not tolerate high jitter.
Bandwidth
Different applications need different bandwidths. In video conferencing we need to send
millions of bits per second to refresh a color screen while the total number of bits in an e-mail
may not reach even a million.
Techniques to Improve QoS
There are several ways to improve QoS like Scheduling and Traffic shaping, We will see each
and every part of this in brief.
Scheduling
Packets from different flows arrive at a switch or router for processing. A good scheduling
technique treats the different flows fairly and appropriately. Three scheduling techniques are:
1. FIFO Queuing
2. Priority Queuing
3. Weighted Fair Queuing

Traffic Shaping
It is a mechanism to control the amount and the rate of the traffic sent to the network. The
techniques used to shape traffic are: leaky bucket and token bucket.

Difference Between Token Bucket Algorithm and Leaky Bucket Algorithm


The differences between leaky and token bucket algorithm are:

• The token bucket depends on tokens; the leaky bucket does not depend on tokens.
• In the token bucket, if the bucket is full the token is discarded, not the packet; in the leaky bucket, if the bucket is full, packets are discarded.
• With the token bucket, packets can only be transmitted when there are enough tokens; with the leaky bucket, packets are transmitted continuously.
• The token bucket allows large bursts to be sent at a faster rate, up to the bucket's maximum capacity; the leaky bucket sends packets at a constant rate.
• The token bucket holds tokens generated at regular intervals of time; in the leaky bucket, when the host has to send a packet, the packet is thrown into the bucket.
• In the token bucket, if there is a ready packet, a token is removed from the bucket and the packet is sent; the leaky bucket converts bursty traffic into uniform traffic.
• In the token bucket, a packet cannot be sent if there is no token in the bucket; in practice, the leaky bucket is a finite queue that outputs at a finite rate.

Working of Token Bucket Algorithm


It allows bursty traffic at a regulated maximum rate and allows idle hosts to accumulate credit for the future in the form of tokens. The system removes one token for every cell of data sent. For each tick of the clock, the system adds n tokens to the bucket. If n is 100 and the host is idle for 100 ticks, the bucket collects 10,000 tokens. The host can then consume these tokens all at once in a single large burst, or spread them out, for example at 10 cells per tick.
The token bucket can be easily implemented with a counter. The counter is initialized to zero. Each time a token is added, the counter is incremented by 1; each time a unit of data is sent, the counter is decremented by 1. When the counter is zero, the host cannot send data.

Process depicting how token bucket algorithm works


Steps Involved in Token Bucket Algorithm
Step 1: Creation of Bucket: An imaginary bucket is assigned a fixed capacity, known as the "rate limit"; it can hold up to a certain number of tokens.
Step 2: Refill the Bucket: The bucket is dynamic; it gets periodically filled with tokens.
Tokens are added to the bucket at a fixed rate.
Step 3: Incoming Requests: Upon receiving a request, we verify the presence of tokens in
the bucket.
Step 4: Consume Tokens: If there are tokens in the bucket, we pick one token from it. This
means the request is allowed to proceed. The time of token consumption is also recorded.
Step 5: Empty Bucket: If the bucket is depleted, meaning there are no tokens remaining, the
request is denied. This precautionary measure prevents server or system overload, ensuring
operation stays within predefined limits.
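A counter-style token bucket, following the steps above, can be sketched in a few lines of Python. The rate and capacity values are arbitrary, and a real implementation would typically use wall-clock time rather than explicit ticks:

class TokenBucket:
    def __init__(self, rate: int, capacity: int):
        self.rate = rate            # tokens added on every clock tick
        self.capacity = capacity    # maximum number of tokens the bucket holds
        self.tokens = 0             # counter starts at zero

    def tick(self):
        """Refill step: add `rate` tokens, never exceeding the capacity."""
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def allow(self, cells: int = 1) -> bool:
        """Consume tokens for `cells` units of data if enough are available."""
        if self.tokens >= cells:
            self.tokens -= cells
            return True
        return False                # empty bucket: the request must wait

bucket = TokenBucket(rate=100, capacity=10_000)
for _ in range(100):                # host stays idle for 100 ticks
    bucket.tick()
print(bucket.allow(10_000))         # True: the accumulated burst can be sent at once
print(bucket.allow(1))              # False: no tokens remain until the next tick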
Advantage of Token Bucket over Leaky Bucket
• If a bucket is full in tokens, then tokens are discarded and not the packets. While
in leaky bucket algorithm, packets are discarded.
• Token bucket can send large bursts at a faster rate while leaky bucket always
sends packets at constant rate.
• Token bucket ensures predictable traffic shaping as it allows for setting token
arrival rate and maximum token count. In leaky bucket, such control may not be
present.
• Premium Quality of Service(QoS) is provided by prioritizing different traffic
types through distinct token arrival rates. Such flexibility in prioritization is
not provided by leaky bucket.
• Token bucket is suitable for high-speed data transfer or streaming video content
as it allows transmission of large bursts of data. As leaky bucket operates at a
constant rate, it can lead to less efficient bandwidth utilization.
• Token Bucket provides more granular control as administrators can adjust
token arrival rate and maximum token count based on network requirements.
Leaky Bucket has limited granularity in controlling traffic compared to Token
Bucket.
Disadvantages of Token Bucket Algorithm
• The token bucket tends to generate tokens at a fixed rate, even when no network traffic is present. This leads to an accumulation of unused tokens during times when there is no traffic, hence leading to wastage.
• Delays can be introduced in packet delivery: if the token bucket happens to be empty, packets will have to wait for new tokens, leading to increased latency and potential packet loss.
• Token Bucket happens to be less flexible than leaky bucket when it comes to
network traffic shaping. The fixed token generation rate cannot be easily altered
to meet changing network requirements, unlike the adaptable nature of leaky
bucket.
• The implementation involved in token bucket can be more complex, especially
due to the fact that different token generation rates are used for different traffic
types. Configuration and management might be more difficult due to this.
• Usage of large bursts of data may lead to inefficient use of bandwidth, and may
cause congestion. Leaky bucket algorithm, on the other hand helps prevent
congestion by limiting the amount of data sent at any given time, promoting
more efficient bandwidth utilization.

IPv6
The next-generation Internet Protocol (IP) address standard, known as IPv6, is designed to
work in conjunction with IPv4. To communicate with other devices, a computer, smartphone,
home automation component, Internet of Things sensor, or any other Internet-connected
device needs a numerical IP address. Because so many connected devices are being used, the
original IP address scheme, known as IPv4, is running out of addresses. This new IP address
version is being deployed to fulfil the need for more Internet addresses. With a 128-bit
address space, it allows for 340 undecillion unique addresses.
Difference between IPv6 and IPv4

• IPv6 has a 128-bit address length; IPv4 has a 32-bit address length.
• IPv6 supports auto-configuration and renumbering of addresses; IPv4 supports manual and DHCP address configuration.
• The address space of IPv6 is quite large: it can produce 3.4 × 10^38 addresses; IPv4 can generate an address space of about 4.29 × 10^9.
• Address representation of IPv6 is in hexadecimal; address representation of IPv4 is in decimal.
• In IPv6 the checksum field is not available; in IPv4 the checksum field is available.
• IPv6 has a fixed header of 40 bytes; IPv4 has a header of 20-60 bytes.
• IPv6 does not support VLSM; IPv4 supports VLSM (Variable Length Subnet Mask).

Representation of IPv6
An IPv6 address consists of eight groups of four hexadecimal digits separated by ':', with each hex digit representing four bits, so the total length of an IPv6 address is 128 bits. The structure is given below.

IPV6-Representation
The first 48 bits represent the Global Routing Prefix, the next 16 bits represent the subnet ID, and the last 64 bits represent the host ID. In other words, the first 64 bits represent the network portion and the last 64 bits represent the interface ID.
• Global Routing Prefix: The Global Routing Prefix is the portion of an IPv6
address that is used to identify a specific network or subnet within the larger
IPv6 internet. It is assigned by an ISP or a regional internet registry (RIR).
• Subnet ID: The portion of the address used within an organization to identify subnets. This usually follows the Global Routing Prefix.
• Host Id: The last part of the address, is used to identify a specific host on a
network.
Example: 3001:0da8:75a3:0000:0000:8a2e:0370:7334
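Python's ipaddress module can be used to explore such an address, for instance to see its compressed and fully expanded forms (the address is the example given above):

import ipaddress

addr = ipaddress.IPv6Address("3001:0da8:75a3:0000:0000:8a2e:0370:7334")

print(addr.compressed)        # 3001:da8:75a3::8a2e:370:7334 (zero groups collapsed)
print(addr.exploded)          # 3001:0da8:75a3:0000:0000:8a2e:0370:7334
print(addr.max_prefixlen)     # 128 bits in total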
Types of IPv6 Address
Now that we know about what is IPv6 address let’s take a look at its different types.
• Unicast Addresses : Only one interface is specified by the unicast address. A
packet moves from one host to the destination host when it is sent to a unicast
address destination.
• Multicast Addresses: It represents a group of IP devices and can only be used
as the destination of a datagram.
• Anycast Addresses: An anycast address has the same format as a unicast address. What makes the anycast address different is that the same IP address is assigned to several servers or devices; stated differently, multiple interfaces or a collection of interfaces are assigned one anycast address, and a packet sent to it is delivered to only one member of that group.
Advantages
• Faster Speeds: IPv6 supports multicast rather than the broadcast used in IPv4. This feature allows bandwidth-intensive packet flows (like multimedia streams) to be sent to multiple destinations at once.
• Stronger Security: IPsec, which provides confidentiality and data integrity, is embedded into IPv6.
• Routing efficiency
• Reliability
• Most importantly it’s the final solution for growing nodes in Global-network.
• The device allocates addresses on its own.
• Internet protocol security is used to support security.
• Enable simple aggregation of prefixes allocated to IP networks; this saves
bandwidth by enabling the simultaneous transmission of large data packages.
Disadvantages
• Conversion: Due to widespread present usage of IPv4 it will take a long period
to completely shift to IPv6.
• Communication: IPv4 and IPv6 machines cannot communicate directly with
each other.
• Not Going Backward Compatibility: IPv6 cannot be executed on IPv4-capable
computers because it is not available on IPv4 systems.
• Conversion Time: A significant practical drawback is that converting existing IPv4 infrastructure and devices to IPv6 is extremely time-consuming.
• Cross-protocol communication is forbidden since there is no way for IPv4 and
IPv6 to communicate with each other.
