Module 1 DCN
2024
AAI/ANS/CNS/CATC/2024/CBTA-Non PLI/Data Communication Networking/Cyber-Security/Linux/Mod-1/Ver.1.0
CATC, PRAYAGRAJ
Module-1
Data Communication Networking, Cyber security and Linux
With pleasure, I authenticate this handout and make it available for imparting the NPLI training
course on “Data Communication Networking, Cyber Security and Linux” for ATSEPs in AAI.
The course content has been approved by CHQ of AAI. It is hoped that the trainee ATSEPs
will find it informative, interesting and well presented.
I am sure that the trainees will carry a sense of pride in undergoing this CBTA based NPLI
Training course of ICAO standard.
For the development and presentation of this module as per ICAO Doc 10057, I would like
to appreciate the meticulous and excellent work done by the course developers.
CHAPTER -01
INTRODUCTION TO DATA COMMUNICATION
For communication to take place, a source is required from which the message
intended to be exchanged is generated. The message so generated is sent to
the destination through a medium. The figure shown below depicts a
generalized block diagram of a communication model.
Source system Destination system
⮚ Receiver: The receiver accepts the signal from the transmission system
and converts it into a form that can be handled by the destination device.
For example, a modem will accept an analog signal coming from a network or
transmission line and convert it into a digital bit stream.
⮚ Destination: The destination takes the incoming data from the receiver.
1.2 Data Representation
Different sets of bit patterns have been designed to represent text symbols.
Each set is a code, and the process of representing the symbols is called
coding.
⮚ Extended ASCII: To make the size of each pattern 1 byte (8 bits), the
ASCII pattern is augmented with an extra 0 at the left. Now each pattern
occupies exactly 1 byte of memory. In other words, in extended ASCII, the first
pattern is 00000000 and the last one is 01111111.
1.4 Numbers
Numbers are also represented using bit patterns. However, a code such as
ASCII is not used to represent numbers; the number is directly converted to a
binary number.
1.5 Images
Images today are represented by bit patterns. In its simplest form, an image is
divided into a matrix of pixels (picture elements), where each pixel is a small
dot. The size of the pixel depends on the resolution. For example, an image can
be divided into 1,000 pixels or 10,000 pixels. In the second case, there is a
better representation of the image, but more memory is needed to store the bit
pattern of the image.
After the image is divided into pixels, each pixel is assigned a bit pattern. The
size and the value of the pattern depend on the image. For an image made of
only black & white dots (e.g., a chessboard), a 1-bit pattern is enough to
represent a pixel: either 0 or 1. If the image consists of 4 levels of gray shades,
a 2-bit pattern is required: 00 represents a black pixel, 01 a dark
gray shade, 10 a light gray shade and 11 a white shade.
To represent color images, each color pixel is decomposed into three primary
colors: red, green and blue. Three bit patterns, each consisting of 8 bits, are
used to represent the intensity of each color.
1.6 Audio
Audio is the representation of sound. Audio is by nature different from text,
numbers, or images: it is continuous, not discrete. To store or process audio
digitally, this continuous form must be converted into a discrete (digital) form.
1.7 Video
Video is the representation of images in motion. It can be produced either as a
continuous entity (e.g., by a TV camera) or as a combination of still images
arranged to convey the idea of motion.
1.8 Networks
In its simplest form, data communication takes place between two devices that
are directly connected by some form of point-to-point transmission medium. A
network is two or more devices connected together through links. A link is a
communications pathway that transfers data from one device to another. It is
simplest to imagine a link as a line drawn between two points. For
communication to occur, two devices must be connected in some way to the
same link at the same time. There are two possible types of connections: point-
to-point and multi-point.
Computer Computer
Fig: Point-to-point connection
The links discussed above may be short, confined within a building, or several
kilometers long. If the devices are far apart, it is impractical to connect them
directly through a point-to-point link, and it is not always possible to run a
dedicated line between the devices. In such cases the devices are connected
through networks. There exist different types of networks, which will be covered
in the networking module.
1.9 Network Topology
Bus Topology: It uses a single backbone cable that is terminated at both ends.
All the hosts connect directly to this backbone.
Ring Topology: A ring topology connects the nodes in a continuous loop. Data
flows around the ring in one direction.
Mesh Topology: In a mesh topology, every device has at least two network
connections. In a full mesh, each host has its own connection to every other host.
Hybrid Topology: It is a combination of above-mentioned topologies, connected
by a suitable networking device.
In this section, we define two widely used terms: protocols and standards. A
protocol is synonymous with rules; standards are agreed-upon rules.
Protocols
In computer networks, communication occurs between entities in different
systems. An entity is anything capable of sending and receiving information.
Examples are user application programs, file transfer packages, e-mail facilities,
database management systems, etc. However, two entities cannot simply send
bit streams to each other and expect to be understood. For communication to
occur, the entities must agree on a protocol. A protocol is a set of rules that
governs data communications. A protocol defines what is communicated, how
it is communicated, and when it is communicated.
Standards
Network standards are agreed-upon specifications that ensure compatibility and
interoperability among different devices, vendors, and applications on a network.
They define the physical, electrical, and functional characteristics of network
components, such as cables, connectors, signals, frequencies, and protocols. For
example, Ethernet is a network standard that defines how data is transmitted
over a wired network using frames, MAC addresses, and switches.
Network protocols and standards are closely related, but not the same. Network
protocols are the logical rules that govern how data is communicated, while
network standards are the physical and technical specifications that enable the
implementation of network protocols. Network protocols and standards often
work together in layers, forming a network architecture or model that describes
the functions and interactions of each layer. For example, the OSI model is a
network architecture that consists of seven layers, each with its own protocols
and standards.
2 PROTOCOL LAYERING
In data communication and networking, a protocol defines the rules that both
the sender and receiver and all intermediate devices need to follow to be able to
communicate effectively.
Fig 1.1
When the communication is complex, we may need to divide the task between
different layers, in which case we need a protocol at each layer, or protocol
layering.
Fig 1.2
Let us assume that A sends the first letter to B. The third layer machine listens
to what A says and creates the plaintext (a letter in English), which is passed to
the second layer machine. The second layer machine takes the plaintext,
encrypts it, and creates the ciphertext, which is passed to the first layer machine.
The first layer machine takes the ciphertext, puts it in an envelope, adds the
sender and receiver addresses, and mails it.
At B’s side, the first layer machine picks up the letter from B’s mail box,
recognizing the letter from A by the sender address. The machine takes out the
ciphertext from the envelope and delivers it to the second layer machine. The
second layer machine decrypts the message, creates the plaintext, and passes
the plaintext to the third-layer machine. The third layer machine takes the
plaintext and reads it.
Protocol layering enables us to divide a complex task into several smaller
and simpler tasks. We could have used only one machine to do the job of all
three machines. However, if A and B decide that the encryption/decryption done
by the machine is not enough to protect their secrecy, they would have to change
the whole machine. In the present situation, they need to change only the second
layer machine; the other two can remain the same. This is referred to as
modularity. Modularity in this case means independent layers. A layer (module)
can be defined as a black box with inputs and outputs, without concern about
how inputs are changed to outputs. If two machines provide the same outputs
when given the same inputs, they can replace each other. For example, A and B
can buy the second layer machine from two different manufacturers. As long as
the two machines create the same ciphertext from the same plaintext and vice
versa, they do the job.
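The three-machine exchange described above can be sketched in code. The shift cipher below is only an illustrative stand-in for whatever the second-layer machine actually does, and the function names and addresses are invented for this example:

```python
# A minimal sketch of the three-layer example above. The shift cipher
# and the address strings are illustrative stand-ins, not a real protocol.

def layer2_encrypt(plaintext, shift=3):
    """Second-layer machine: turn plaintext into ciphertext."""
    return "".join(chr((ord(c) + shift) % 256) for c in plaintext)

def layer2_decrypt(ciphertext, shift=3):
    """Second-layer machine on the receiving side."""
    return "".join(chr((ord(c) - shift) % 256) for c in ciphertext)

def layer1_envelope(ciphertext, sender, receiver):
    """First-layer machine: put the ciphertext in an addressed envelope."""
    return {"from": sender, "to": receiver, "body": ciphertext}

def layer1_open(envelope):
    """First-layer machine on the receiving side: take out the ciphertext."""
    return envelope["body"]

# A's side: layer 3 produces the letter; layers 2 and 1 process it in turn.
letter = "Meet me at noon"
mailed = layer1_envelope(layer2_encrypt(letter), "A", "B")

# B's side: the layers are traversed in the reverse order.
received = layer2_decrypt(layer1_open(mailed))
assert received == letter   # the plaintext survives the round trip
```

Modularity shows up directly here: replacing `layer2_encrypt`/`layer2_decrypt` with a stronger cipher pair requires no change to the first- or third-layer code, as long as the two functions remain inverses of each other.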
One of the advantages of protocol layering is that it allows us to separate
the services from the implementation. A layer needs to be able to receive a set of
services from the lower layer and to give the services to the upper layer; we don’t
care about how the layer is implemented.
Another advantage of protocol layering, which cannot be seen in our
simple examples but reveals itself when we discuss protocol layering in the
Internet, is that communication does not always use only two end systems; there
are intermediate systems that need only some layers, but not all layers. If we did
not use protocol layering, we would have to make each intermediate system as
complex as the end systems, which makes the whole system more expensive.
Logical Connections
In protocol layering, there is a logical connection between each pair of
corresponding layers. This means that we have layer-to-layer communication.
Fig 1.3
CHAPTER-2
INTRODUCTION TO TCP/IP
The Transmission Control Protocol/Internet Protocol (TCP/IP) suite was
created by the Department of Defense (DoD) to ensure and preserve data
integrity, as well as to maintain communications in the event of a catastrophic
war. It is a hierarchical protocol made up of interactive modules, each of which
provides a specific functionality. The term hierarchical means that each upper-
level protocol is supported by the services provided by one or more lower-level
protocols. The DoD model consists of four layers:
1) Process/Application layer
2) Host-to-Host layer
3) Internet layer
4) Network Access layer
The figure below shows a comparison of the DoD model and the OSI reference model.
As you can see, the two are similar in concept, but each has a different number
of layers with different names.
Fig 2.2
2.1. Network Access Layer
Physical Layer
Physical layer is the lowest level in the TCP/IP protocol suite. It is
responsible for transmitting raw data bits over a physical medium, such as
copper wires, fiber optic cables, or wireless communication channels. The
Physical Layer deals with the physical characteristics of the transmission
medium and the physical signaling mechanisms used to transmit data. It defines
how binary 0s and 1s are converted into signals that can be transmitted over the
chosen medium. This process involves encoding the data into electrical, optical,
or radio signals, depending on the transmission medium. It determines the rate
at which data is transmitted over the network and the bandwidth available for
the transmission. The physical layer may also employ techniques such as parity
checking or cyclic redundancy check (CRC) for error detection.
Data-link Layer
Delivery of the packets between two systems on the same network is the
responsibility of the Data Link layer. Its major role is to ensure error-free
transmission of information. The data link layer receives data from the network
layer above it. It breaks this data into smaller, manageable units called frames
and attaches the source and destination device addresses (MAC addresses) as a header.
❖ Transport Layer: The Transport Layer, the layer above the Network Layer,
is responsible for providing end-to-end communication services for
applications. It ensures that data is transmitted reliably, efficiently, and
accurately between devices on a network. The logical connection at the
transport layer is also end-to-end. The Transport Layer breaks down data
from the Application Layer into smaller units called segments or datagrams
before transmission. It also reassembles these segments at the receiving end.
TCP UDP
Sequenced Un-sequenced
Reliable Unreliable
Connection-oriented Connectionless
Virtual circuit Low overhead
Acknowledgments No acknowledgment
Windowing flow control No windowing or flow control
The figure below shows the simple representation of a TCP or UDP segment.
(There are many different fields available in the TCP and UDP header)
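As a concrete illustration of the port addressing mentioned above, the sketch below packs and unpacks the four 16-bit fields of a UDP header with Python's `struct` module. The port numbers and payload are arbitrary examples:

```python
import struct

# A minimal sketch of a UDP header (8 bytes): source port, destination
# port, length, checksum -- four 16-bit fields in network byte order.
src_port, dst_port = 5000, 53        # e.g. a client querying a DNS server
payload = b"hello"
length = 8 + len(payload)            # header (8 bytes) + payload
checksum = 0                         # 0 means "no checksum" in IPv4 UDP

header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
datagram = header + payload

# Unpacking on the receiving side recovers the port addresses.
s, d, l, c = struct.unpack("!HHHH", datagram[:8])
print(s, d, l)        # 5000 53 13
```

The `!` in the format string selects network (big-endian) byte order, which is how multi-byte header fields are transmitted on the wire.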
The application layer is the highest abstraction layer of the TCP/IP model.
It encompasses various protocols and services that serve as the bridge
between user applications and the network. It enables users to use the
services of the network, develop network-based applications, transfer files to
other systems, etc. The application layer shields application programs from the
complexities of the lower layers of the TCP/IP model.
Examples of Application Layer Protocols:
⮚ File Transfer Protocol (FTP): FTP is a protocol used for transferring files
between hosts over a TCP/IP network. It allows users to upload and
download files to and from remote servers.
In the top three layers, the data unit (packet) should not be changed by
any router or link-layer switch. In the bottom two layers, the packet created by
the host is changed only by the routers, not by the link-layer switches.
CHAPTER-3
CLASSIFICATION OF NETWORK & NETWORK DEVICES
The Ethernet frame format is the structure used for data transmission over
Ethernet networks. It consists of several fields, each serving a specific purpose
in the communication process.
Here's a breakdown of the Ethernet frame format:
Preamble: The preamble is a sequence of alternating 1s and 0s (101010...) used
to signal the start of the Ethernet frame. It helps the receiving device synchronize
its clock with the incoming data stream.
Start Frame Delimiter (SFD): The SFD is a unique bit pattern (10101011)
immediately following the preamble. It indicates the end of the preamble and the
start of the Ethernet frame's header.
Destination MAC Address: This field specifies the MAC (Media Access Control)
address of the intended recipient of the Ethernet frame. It is 6 bytes (48 bits) in
length and identifies the network interface card (NIC) or device that should
receive the frame.
Source MAC Address: This field specifies the MAC address of the sender of the
Ethernet frame. Like the destination MAC address, it is also 6 bytes (48 bits) in
length and identifies the NIC or device that originated the frame.
EtherType or Length: The EtherType field indicates the type of payload carried
in the Ethernet frame. It can either specify the length of the payload (in bytes) or
indicate the protocol type being used (e.g., IPv4, IPv6, ARP, etc.).
Payload: The payload contains the actual data being transmitted in the Ethernet
frame. It can vary in size depending on the EtherType or length field.
Frame Check Sequence (FCS): The FCS is a 4-byte (32-bit) field used for error
detection. It contains a checksum or CRC (Cyclic Redundancy Check) value
calculated over the entire Ethernet frame, including the header and payload. The
receiving device uses the FCS to check for transmission errors and verify the
integrity of the received data.
The figure below shows a simple representation of an Ethernet frame with IP and
TCP/UDP.
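The FCS idea can be illustrated with Python's `zlib.crc32`, which uses the same CRC-32 generator polynomial as Ethernet (real frames differ in bit ordering and other wire-level details). The MAC address bytes below are made up:

```python
import zlib

# Sketch of FCS-style error detection: the sender appends a CRC-32
# computed over the frame contents; the receiver recomputes and compares.
frame = (b"\x00\x1b\x44\x11\x3a\xb7"    # illustrative destination MAC
         + b"\x00\x0a\x95\x9d\x68\x16"  # illustrative source MAC
         + b"payload")
fcs = zlib.crc32(frame)                 # computed by the sender

# Receiver side: recompute the CRC and verify.
assert zlib.crc32(frame) == fcs         # frame intact

corrupted = frame[:-1] + b"X"           # last byte flipped in transit
assert zlib.crc32(corrupted) != fcs     # error detected, frame discarded
```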
The Ethernet standard also defines the number of conductors that are required
for a connection, the performance thresholds that can be expected, and provides
the framework for data transmission. A standard Ethernet network can transmit
data at a rate up to 10 Megabits per second (10 Mbps).
The Fast Ethernet standard (IEEE 802.3u) has been established for
Ethernet networks that need higher transmission speeds. This standard raises
the Ethernet speed limit from 10 Mbps to 100 Mbps. Types of Fast Ethernet:
● 100BASE-TX for use with Cat 5 UTP cable
● 100BASE-FX for use with fiber-optic cable
The Gigabit Ethernet standards (IEEE 802.3ab for twisted-pair copper and
IEEE 802.3z for fiber) raise the Ethernet speed limit to 1 Gbps.
● 1000BASE-T for use with Cat 5 UTP cable
● 1000BASE-X is the collective term used to describe various options of 1
Gbps transmission over fiber-optic cable such as 1000BASE-SX,
1000BASE-LX and 1000BASE-LX10 etc.
Ethernet cables are the primary means of wired network connectivity. They
come in various types, but the most common is the UTP (Unshielded Twisted
Pair) cable. UTP cables consist of four pairs of insulated copper wires twisted
together. The twisting helps to cancel out electrical interference (crosstalk) that
can corrupt data signals. UTP cables come in different categories, each with
different maximum speeds and cable lengths.
CAT5e cables can support data transmission speeds of up to 1 gigabit per second
(Gbps) and can reliably transmit data over distances of up to 100 meters (or
approximately 328 feet).
CAT6 cables can support data transmission speeds of up to 1 gigabit per second
(Gbps) over distances of up to 100 meters and 10 Gbps over shorter distances,
typically up to 55 meters.
RJ45 connectors are commonly used in Cat5 and Cat6 cables. These
connectors are standardized connectors used primarily for Ethernet networking.
Fig: T-568A and T-568B RJ45 wiring standards
A Wide Area Network (WAN) is a type of computer network that spans a large
geographical area, connecting multiple Local Area Networks (LANs) and other
types of networks over long distances. WANs facilitate communication and data
exchange between geographically dispersed locations, such as different cities,
countries, or even continents. WANs utilize various transmission mediums for
data transfer. This includes fiber optic cables, leased lines, satellite links,
microwave links etc.
3.4. Repeaters
The repeater passes the digital signal bit-by-bit in both directions between
the two segments. As the signal passes through a repeater, it is amplified and
regenerated at the other end. The repeater does not isolate one segment from the
other; if there is a collision on one segment, it is regenerated on the other
segment. Hence the two segments form a single collision domain. Repeaters work at the physical
layer. The main aim of using a repeater is to increase the networking distance
by increasing the strength and quality of signals.
3.5. Hubs
Hubs are networking devices operating at the physical layer of the OSI model
that are used to connect multiple devices in a network. They are generally used
to connect computers in a LAN. A hub is a multiport repeater. A computer that
is to be connected to the network is plugged into one of these ports. When
a data frame arrives at a port, it is broadcast to every other port, without
considering whether it is destined for a particular destination device or not.
3.6. Bridges
These are network devices that connect two or more LAN segments. They
work by examining the destination MAC address of a frame and forwarding it
only to the segment where the destination device resides. This reduces collisions
on the network by limiting the traffic flow, but it does not segment the broadcast
domain.
3.8. Switches
A switch is a networking device that operates at the data link layer. Its
primary function is to connect multiple devices within a local area network (LAN)
and facilitate communication between them. Unlike hubs or repeaters, switches
are intelligent devices that can inspect data packets and make forwarding
decisions based on the destination MAC (Media Access Control) address.
Forwarding: When a switch receives a data packet destined for a specific MAC
address, it looks up the MAC address in its table to determine the appropriate
outgoing port. The switch then forwards the packet only to that port, rather than
flooding it out to all ports as hubs do.
Broadcast Handling: Switches handle broadcast traffic differently. Broadcast
traffic is typically forwarded out to all ports except the one it was received on.
Segmentation: Switches can segment a network into multiple collision domains.
Each port on a switch is its own collision domain.
Symbol of switch
3.9. Address Resolution Protocol (ARP)
The IP address is used for logical addressing at the network layer (Layer 3),
while the MAC address is used for physical addressing at the data link layer
(Layer 2). ARP maps an IP address to its corresponding MAC address.
When a device needs to communicate with another device on the same
network, it checks its ARP cache to see if it already knows the MAC address
corresponding to the IP address it wants to reach. If the MAC address is not
found in the ARP cache, the device broadcasts an ARP request. The destination
MAC address in an ARP request is the layer 2 broadcast MAC address
(FF:FF:FF:FF:FF:FF). All devices on the same network receive the broadcast.
However, only the device with the corresponding IP address specified in the ARP
request will respond with its MAC address. Once the reply is received, the
mapping is added to the ARP cache for future reference, speeding up subsequent
communication. The ARP cache entries are typically aged out after a certain
period of time to accommodate changes in the network topology.
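The cache-then-broadcast behaviour described above can be sketched as a small dictionary with aging. The timeout value and the addresses are illustrative, not taken from any real stack:

```python
import time

# A toy ARP cache with aging. Real stacks age entries out similarly,
# though the timeout value varies by operating system.
ARP_TIMEOUT = 60.0     # seconds (illustrative)

cache = {}             # ip -> (mac, time_added)

def arp_learn(ip, mac):
    """Called when an ARP reply reveals an IP-to-MAC mapping."""
    cache[ip] = (mac, time.time())

def arp_lookup(ip):
    """Return the cached MAC for ip, or None if absent or expired."""
    entry = cache.get(ip)
    if entry is None:
        return None            # cache miss: a broadcast ARP request follows
    mac, added = entry
    if time.time() - added > ARP_TIMEOUT:
        del cache[ip]          # aged out, to accommodate topology changes
        return None
    return mac

arp_learn("192.168.1.20", "00:0a:95:9d:68:16")
print(arp_lookup("192.168.1.20"))   # 00:0a:95:9d:68:16
print(arp_lookup("192.168.1.99"))   # None -> would trigger a broadcast
```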
How do data packets move from one system to another in a LAN?
● All the systems connected to the switch receive the broadcast, and only
System B will respond to the ARP request.
● System B sends an ARP reply to System A using the MAC address of System
A as the destination address.
● Once the switch receives the ARP reply packet from System B, it updates its
MAC table with the MAC address of System B.
● Since the MAC address of System A is known to the switch, the reply will be
sent only to the port where System A is connected.
● System A receives the ARP reply from B and updates its ARP cache.
● The actual data will be encapsulated in an Ethernet frame using the MAC
address of System B as the destination address and sent to the switch.
● The switch checks the MAC table to find the port where System B is
connected and switches the frame to the corresponding port.
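The steps above can be sketched as a toy switch that learns source MACs per port. The port numbers and MAC addresses are invented for the example:

```python
# A toy learning switch, mirroring the ARP walkthrough above: it learns
# the source MAC on each incoming frame, floods broadcasts and unknown
# unicasts, and forwards known unicasts out of a single port.
class Switch:
    def __init__(self, num_ports):
        self.ports = range(num_ports)
        self.mac_table = {}                 # mac -> port

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame is sent out of."""
        self.mac_table[src_mac] = in_port   # learn the sender's port
        if dst_mac == "FF:FF:FF:FF:FF:FF" or dst_mac not in self.mac_table:
            # broadcast, or unknown unicast: flood all other ports
            return [p for p in self.ports if p != in_port]
        return [self.mac_table[dst_mac]]    # known unicast: one port

sw = Switch(4)
A, B = "00:00:00:00:00:0A", "00:00:00:00:00:0B"

# System A (port 0) broadcasts an ARP request: flooded to ports 1-3.
print(sw.receive(0, A, "FF:FF:FF:FF:FF:FF"))   # [1, 2, 3]

# System B (port 2) replies to A: A's port is already in the table.
print(sw.receive(2, B, A))                      # [0]

# Actual data from A to B now goes only to port 2.
print(sw.receive(0, A, B))                      # [2]
```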
Layer 3 Switch
A Layer 3 switch, also known as a multilayer switch, is a networking device
that combines the functionalities of a Layer 2 managed switch and a router. Like
a Layer 2 switch, a Layer 3 switch learns the MAC addresses of devices connected
to its ports and forwards frames (data packets at Layer 2) based on that
information. A Layer 3 switch can also inspect incoming packets and route them
based on their IP addresses. It maintains a routing table that contains
information about how to reach different networks. This routing table can be
statically configured or learned dynamically using routing protocols. Layer 3
switches can route traffic between different VLANs without the need for an
external router.
Symbol of Router
Security: Routers can be configured to filter incoming and outgoing traffic based
on security rules, protecting the network from unauthorized access and
malicious attacks.
3.10. Gateway:
A gateway in networking serves as an entry or exit point between two different
networks, facilitating communication between them. Its functions vary
depending on its specific role and the type of networks it connects.
CHAPTER-4
LOOP AVOIDANCE IN LAN
● SW1 will forward this broadcast frame on all its interfaces, except the
interface on which the frame was received.
● SW2 will receive both broadcast frames.
● SW2 will forward each frame out of every interface except the one on which
it was received.
● The frame that was received on interface Fa0/0 of SW2 will be forwarded
on its interface Fa0/1.
● The frame that was received on interface Fa0/1 of SW2 will be forwarded
on interface Fa0/0.
● The same thing will happen in SW1 also.
● Both switches will keep forwarding packets over and over again, creating
an infinite loop.
Since spanning tree is enabled, all our switches will send a special frame
to each other called a BPDU (Bridge Protocol Data Unit). In this BPDU there are
two pieces of information that spanning-tree requires:
● MAC address
● Priority
The MAC address and the priority together make up the bridge ID. The
BPDU is sent between all the switches.
● First of all spanning tree will elect a root bridge; this root-bridge will be the
one that has the best “bridge ID”.
● The switch with the lowest bridge ID is the best one.
● By default the priority is 32768 but we can change this value if we want.
In this example SW1 will become the root bridge. Since the priority is the
same on all switches it will be the MAC address that is the tiebreaker. SW1 has
the lowest MAC address thus the best bridge ID and will become the root bridge.
All other switches will become non-root bridges.
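The election described above is simply a comparison of (priority, MAC) pairs, with the lowest value winning. It can be sketched as follows (the MAC addresses are made up for the example):

```python
# Root-bridge election as a comparison of bridge IDs: (priority, MAC),
# lowest wins. The MAC addresses here are invented for illustration.
switches = {
    "SW1": (32768, "00:00:00:00:00:01"),
    "SW2": (32768, "00:00:00:00:00:02"),
    "SW3": (32768, "00:00:00:00:00:03"),
}

# Python compares the (priority, mac) tuples element by element, which
# matches STP: priority first, MAC address as the tiebreaker.
root = min(switches, key=lambda name: switches[name])
print(root)     # SW1 -- lowest MAC wins when priorities are equal

# Lowering SW3's priority makes it the root regardless of its MAC.
switches["SW3"] = (4096, "00:00:00:00:00:03")
print(min(switches, key=lambda name: switches[name]))   # SW3
```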
The ports on the root bridge are always designated which means they are in a
forwarding state.
Non-root bridges will have to find the shortest path to the root bridge. The
shortest path to the root bridge is called the “root port”.
To break the loop, one of the ports between SW2 and SW3 must be
blocked. Both switches have the same priority but the MAC address of SW2 is
lower. Hence, SW3 will block its port, effectively breaking the loop.
Take a look at the picture above. SW1 is the root bridge and SW2 is non-
root. We have two links between these switches for redundancy. Redundancy
means loops, so spanning-tree is going to block one of the interfaces on SW2.
SW2 will receive BPDUs on both interfaces, but the root path cost field will
be the same. When the cost is equal, spanning-tree will look at the port priority.
By default the port priority is the same for all interfaces, which means that the
interface number will be the tie-breaker. The lowest interface number (Fa0/1)
will be chosen as the forwarding port and port Fa0/2 will be blocked. Of course,
port priority is a value that we can change, so we can choose which interface
will be blocked.
CHAPTER-5
IP ADDRESSING & SUBNETTING
5.1. IP Addressing
So if you calculate this from binary to decimal you’ll get the following:
● Class A starts at 0.0.0.0
● Class B starts at 128.0.0.0
● Class C starts at 192.0.0.0
● Class D starts at 224.0.0.0
● Class E starts at 240.0.0.0
Class A: Subnet mask is 255.0.0.0 (or /8 in CIDR notation). This means the first
8 bits are for the network portion, and the remaining 24 bits are for hosts.
Class B: Subnet mask is 255.255.0.0 (or /16 in CIDR notation). This allows for
16 bits for the network portion and 16 bits for hosts.
Class C: Subnet mask is 255.255.255.0 (or /24 in CIDR notation). This allows
for 24 bits for the network portion and 8 bits for hosts.
Subnet masks are not defined for Class D and Class E addresses because
these address ranges were reserved for special purposes and were not intended
for conventional host-to-host communication.
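The classful rules above can be sketched as a small function that maps the first octet of an address to its class and default mask:

```python
# Sketch: derive the class and default subnet mask of an IPv4 address
# from its first octet, per the classful ranges listed above.
def classify(ip):
    first = int(ip.split(".")[0])
    if first < 128:
        return "A", "255.0.0.0"           # 0.0.0.0   - 127.255.255.255
    if first < 192:
        return "B", "255.255.0.0"         # 128.0.0.0 - 191.255.255.255
    if first < 224:
        return "C", "255.255.255.0"       # 192.0.0.0 - 223.255.255.255
    if first < 240:
        return "D (multicast)", None      # no subnet mask defined
    return "E (reserved)", None           # no subnet mask defined

print(classify("10.1.2.3"))       # ('A', '255.0.0.0')
print(classify("190.10.0.1"))     # ('B', '255.255.0.0')
print(classify("224.0.0.5"))      # ('D (multicast)', None)
```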
Difference between “Private” and “Public” IP addresses
● Public IP addresses are used on the Internet.
● Private IP addresses (10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16) are
used on your local area network and should not be used on the Internet.
In each IP subnet, there are two special addresses that cannot be assigned to
individual devices.
● Network address.
● Broadcast address.
When we set all the bits in the 'host' part of the IP address 192.168.1.1
(with a /24 mask) to 0, we obtain the network address, 192.168.1.0.
When we set all the bits in the 'host' part to 1, we obtain the broadcast
address, 192.168.1.255.
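Both special addresses can be verified with Python's standard `ipaddress` module:

```python
import ipaddress

# The two special addresses of a subnet, computed with the standard
# library: host bits all 0 -> network address, all 1 -> broadcast.
iface = ipaddress.ip_interface("192.168.1.1/24")
net = iface.network

print(net.network_address)    # 192.168.1.0
print(net.broadcast_address)  # 192.168.1.255
print(net.num_addresses - 2)  # 254 usable host addresses
```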
5.2. CIDR and VLSM
Unlike classful addressing, which divides the IP address space into fixed classes
(A, B, C, etc.) with predefined subnet masks, CIDR (Classless Inter-Domain
Routing) allows for the allocation of IP addresses using Variable-Length Subnet
Masks (VLSM). CIDR allows for the subdivision of IP address blocks into smaller
subnets, enabling more efficient utilization of available IP addresses.
With VLSM (Variable-Length Subnet Masks), network administrators can
subnet a network into smaller subnets, each with its own subnet mask length
based on the number of required hosts in that subnet. This flexibility enables
more precise allocation of IP addresses, reducing wastage and optimizing
address space utilization.
For example, in a network with the IP address range 192.168.1.0/24, a subnet
mask of /24 (255.255.255.0) provides 256 addresses. However, if one subnet
requires only 30 hosts, while another requires 100 hosts, VLSM allows using
subnet masks of /27 (255.255.255.224) for the smaller subnet (30 hosts) and
/25 (255.255.255.128) for the larger subnet (100 hosts) (Variable subnetting will
be discussed later).
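The subnet sizes in this example follow from a simple rule: find the smallest number of host bits that covers the required hosts plus the network and broadcast addresses. A sketch:

```python
import math

# Sketch: the smallest prefix length that leaves room for n hosts plus
# the network and broadcast addresses (hence the n + 2 below).
def prefix_for_hosts(n):
    host_bits = math.ceil(math.log2(n + 2))
    return 32 - host_bits

print(prefix_for_hosts(30))    # 27  -> 255.255.255.224
print(prefix_for_hosts(100))   # 25  -> 255.255.255.128
```

These match the /27 and /25 masks chosen in the example above.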
5.3. Subnetting
A subnet mask is a 32-bit number that identifies the network portion and
the host portion of an IP address. It's represented similarly to an IP address,
often with dotted decimal notation. The subnet mask contains a sequence of
contiguous ones (1s) followed by a sequence of contiguous zeros (0s). The ones
represent the network portion, and the zeros represent the host portion.
Subnetting is the process of dividing a large network into smaller, more
manageable sub-networks called subnets. It's a technique used in IP networking
to efficiently utilize IP address space and improve network performance,
security, and management. Subnetting is facilitated by the use of subnet masks.
To subnet a network, borrow bits from the host portion of the IP address and
allocate them to create subnets. Each subnet is identified by its own unique
subnet address and subnet mask.
Let's subnet the network 192.168.1.0/24 into 4 networks.
Identify the original network: The given network is 192.168.1.0 with a subnet
mask of /24, which means the first 24 bits are assigned for the network portion.
It is a class C network.
Calculate the new subnet mask: To create 4 subnets, 2 bits are borrowed from
the host portion; the new subnet mask becomes 255.255.255.192 in decimal
(or /26 in CIDR notation), as the first 26 bits are set to 1.
Determine the subnet range: Each subnet will have its own range of addresses.
1. 192.168.1.0/26
a. Network Address - 192.168.1.0
b. Broadcast Address - 192.168.1.63
c. Usable Address Range - 192.168.1.1 to 192.168.1.62 (62 Usable
IPs)
2. 192.168.1.64/26
3. 192.168.1.128/26
a. Network Address - 192.168.1.128
b. Broadcast Address - 192.168.1.191
c. Usable Address Range - 192.168.1.129 to 192.168.1.190 (62 Usable
IPs)
4. 192.168.1.192/26
a. Network Address - 192.168.1.192
b. Broadcast Address - 192.168.1.255
c. Usable Address Range - 192.168.1.193 to 192.168.1.254 (62 Usable
IPs)
Another example
How many networks will be available in 190.10.0.0/22? Also find the number of
hosts per network.
1. First octet is 190. So it belongs to the Class B network.
2. Usually Class B networks have a 16 bit subnet mask.
3. In this example the subnet mask is 22 bits long. So 6 bits were borrowed
from the host side to the network side.
4. 2^n = Number of subnets. 2^6 = 64. So 64 subnets can be formed.
5. Block size = 2^(32 − number of 1s in the new subnet mask) = 2^(32−22) = 2^10 = 1024
6. Number of hosts per subnet = 1024 − 2 = 1022
7. To find the network address of any subnet, perform a bitwise AND of the IP
address and the subnet mask. For example, for the IP address 195.170.1.45/27:
11000011.10101010.00000001.00101101
11111111.11111111.11111111.11100000
__________________________________
11000011.10101010.00000001.00100000
● Result in decimal - 195.170.1.32
● Therefore, the network address for the IP address 195.170.1.45/27 is
195.170.1.32
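The same AND operation can be reproduced on 32-bit integers; this sketch repeats the 195.170.1.45/27 calculation above:

```python
import ipaddress

# The bitwise AND shown above, done on 32-bit integers: address AND
# subnet mask gives the network address.
addr = int(ipaddress.ip_address("195.170.1.45"))
mask = (0xFFFFFFFF << (32 - 27)) & 0xFFFFFFFF   # /27 -> 255.255.255.224

network = addr & mask
print(ipaddress.ip_address(network))   # 195.170.1.32
```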
To provide 50 hosts to the Main Office, 6 bits are required in the host part
(2^6 = 64; 64 − 2 = 62 usable). Hence allocate a subnet of /26 (which allows
for 62 hosts) to this office.
Subnet: 192.168.10.0/26 Broadcast: 192.168.10.63
IP addresses from 192.168.10.1 to 192.168.10.62 are reserved for the Main
Office.
To provide 20 hosts to the Sales Department, 5 bits are required in the host
part (2^5 = 32; 32 − 2 = 30 usable). Hence allocate a subnet of /27 (which
allows for 30 hosts) to this department.
Subnet: 192.168.10.64/27 Broadcast: 192.168.10.95
IP addresses from 192.168.10.65 to 192.168.10.94 are reserved for the Sales
Department.
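The two allocations above can be checked with the `ipaddress` module; the `subnets` method carves fixed-size subnets out of the /24 block, and the /27 for Sales is taken at the next free boundary (.64):

```python
import ipaddress

# Verifying the VLSM allocations above by carving subnets out of
# 192.168.10.0/24 with the standard library.
block = ipaddress.ip_network("192.168.10.0/24")

main_office = next(block.subnets(new_prefix=26))     # first /26 in the block
print(main_office)                                   # 192.168.10.0/26
print(main_office.broadcast_address)                 # 192.168.10.63

# The next free space after the /26 starts at .64; take a /27 there.
sales = ipaddress.ip_network("192.168.10.64/27")
print(sales.broadcast_address)                       # 192.168.10.95
```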
The main reason for the development and implementation of IPv6 (Internet
Protocol version 6) is the exhaustion of IPv4 addresses. IPv4, the previous version
of the Internet Protocol, uses 32-bit addresses, which allows for approximately
4.3 billion unique addresses. With the rapid growth of the internet and the
proliferation of connected devices, IPv4 addresses were being depleted.
IPv6 addresses are 128 bits long, providing a vastly larger address space
than IPv4's 32-bit addresses. This allows for approximately 3.4 × 10^38
(2^128) unique addresses, ensuring that the internet can continue to grow and
accommodate the increasing number of devices and users.
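The gap between the two address spaces is easy to compute directly:

```python
# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128
print(ipv4_space)           # 4294967296 (~4.3 billion)
print(f"{ipv6_space:.2e}")  # 3.40e+38
```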
CHAPTER-6
IP ROUTING
6.1. IP Routing
● Then the data is encapsulated into TCP segments or UDP datagrams (adding
port addresses).
● The bitwise AND of the subnet mask with both the source and destination
addresses yields the same network address, so the destination is within the
same network.
● System A checks its ARP cache for the MAC address of System D. If the entry
is not found, System A broadcasts an ARP request.
● The switch receives the broadcast, reads the source MAC address (MAC of
System A), updates its own MAC table and forwards the frame to all other ports
except the port on which it was received.
● All systems connected to the switch receive the broadcast, but only System D
replies (ARP Reply) with its own MAC address.
● Switch receives the ARP reply from System D, reads the source MAC
address (MAC of System D), updates its own MAC table, checks its MAC
table to find which port is connected to System A and forwards the packet
only to System A.
● System A updates its ARP cache with the MAC address of System D.
● Then System A builds an Ethernet frame, as shown in the figure below, and
sends it to the switch.
● The switch receives the frame, reads the destination MAC address and
forwards it to the port to which System D is connected.
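The subnet comparison in the steps above can be sketched as a small function; the host addresses below are assumed for illustration:

```python
import ipaddress

def same_subnet(src: str, dst: str, netmask: str) -> bool:
    """AND both addresses with the mask and compare the network parts."""
    m = int(ipaddress.ip_address(netmask))
    return int(ipaddress.ip_address(src)) & m == int(ipaddress.ip_address(dst)) & m

# Two hosts on the same /24 versus hosts on different /24s:
print(same_subnet("192.168.1.10", "192.168.1.40", "255.255.255.0"))  # True
print(same_subnet("192.168.1.10", "192.168.2.40", "255.255.255.0"))  # False
```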
● Then the data is encapsulated into TCP segments or UDP datagrams (adding
port addresses).
● The bitwise AND of the subnet mask with the source and destination
addresses yields different network addresses. Hence, the destination is not
within the same network.
● System A checks its ARP cache for the MAC address of the router (gateway
interface). If the entry is not found, System A broadcasts an ARP request.
● The switch receives the broadcast, reads the source MAC address (MAC of
System A), updates its own MAC table if the entry is not already present, and
forwards the frame to all other ports except the port on which it was received.
● All systems connected to the switch receive the broadcast, but only the
router's gateway interface replies (ARP Reply) with its own MAC address.
● The switch receives the ARP reply from the router gateway interface, reads
the source MAC address (MAC of the router gateway interface), updates its own
MAC table if the entry is not already present, checks its MAC table to find
which port is connected to System A, and forwards the packet only to System A.
● System A updates its ARP cache with the MAC address of the router
gateway interface.
● Then System A builds an Ethernet frame, as shown in the figure below, and
sends it to the switch.
● The switch receives the frame, reads the destination MAC address and
forwards it to the port to which the router gateway interface is connected.
● Router receives the packet, reads the source MAC address, updates its
ARP cache with the MAC address of System A.
● Then it reads the destination IP, checks its routing table to find the
interface through which this packet needs to be sent.
● So, the router needs to send out the packet to the interface Fa0/1.
● Router checks its ARP table to find the MAC address of System E.
● If not available, using ARP request and reply, the router gets the MAC of
system E.
Compare the source and destination MAC addresses of the router's incoming and
outgoing Ethernet frames: the MAC addresses changed at each hop, but the IP
addresses did not.
A routing decision is made based on the routing table stored in the router's
memory. This routing table contains information about various networks and
the next-hop router or interface through which data should be forwarded to
reach each network. IP routing ensures that data packets are efficiently routed
through multiple network segments and routers to reach their intended
destinations.
If System A wants to send data to System B, the Ethernet frame will
be sent to the gateway (the Fa0/0 interface of R1). Router 1 will consult its routing
table, which contains information about the available paths to various
destinations. Based on metrics like hop count, bandwidth, latency, and
administrative cost, Router 1 will select the best path to forward the Ethernet
frame toward System B. Once the best path is determined, Router 1 will then
forward the frame accordingly.
Routing tables are built through various mechanisms.
● Directly Connected Networks: When a router is configured with an IP
address and subnet mask on an interface, it automatically knows about
the network directly connected to that interface. These networks and their
associated interfaces are typically added to the routing table as directly
connected routes.
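A routing-table lookup uses the longest-prefix match: when several entries cover the destination, the most specific one wins. A minimal sketch with assumed networks and next hops:

```python
import ipaddress

# A hypothetical routing table: destination network -> next hop / interface.
routing_table = {
    ipaddress.ip_network("192.168.1.0/24"): "Fa0/0 (directly connected)",
    ipaddress.ip_network("10.0.0.0/8"):     "next hop 192.168.1.254",
    ipaddress.ip_network("0.0.0.0/0"):      "default route",
}

def lookup(dst: str) -> str:
    """Longest-prefix match: the most specific matching network wins."""
    dst_ip = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if dst_ip in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(lookup("192.168.1.7"))  # Fa0/0 (directly connected)
print(lookup("10.1.2.3"))     # next hop 192.168.1.254
print(lookup("8.8.8.8"))      # default route
```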
Vector: The vector component refers to the direction or next-hop router that
should be used to reach each destination network.
Routers periodically exchange routing information with their neighboring
routers to keep their routing tables up to date. When a router receives a routing
table update from a neighbor, it compares the received information with its own
routing table. If the received information contains routes that are not present in
its own table or if the received information offers a better path to a destination
network, the router updates its routing table accordingly. Distance-vector
routing protocols only exchange routing information with directly connected
neighbors. Routers make routing decisions based on the information received
from their neighbors. They don't have complete knowledge of the entire network
topology. Distance-vector protocols may take some time to converge, especially
in larger networks, due to the iterative nature of updating routing tables. In
addition to exchanging routing information when changes occur in the network,
routers using distance-vector protocols also send periodic updates to ensure
that neighboring routers have the most up-to-date routing information. The
frequency of these updates varies depending on the specific routing protocol and
configuration settings.
When a distance-vector routing protocol starts up, each router begins with only
its directly connected networks in its routing table.
As routing updates are received from neighboring routers, the routing table is
updated to reflect the learned routes.
Examples of distance-vector routing protocols:
1. RIP (Routing Information Protocol)
2. IGRP (Interior Gateway Routing Protocol)
3. EIGRP (Enhanced Interior Gateway Routing Protocol)
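The distance-vector update rule described above, where each router improves its table using only its neighbours' tables, can be sketched with a tiny simulation on an assumed three-router topology:

```python
# Minimal distance-vector sketch on a hypothetical 3-router topology.
# Each router starts knowing only the cost of its directly connected links
# and repeatedly applies the Bellman-Ford rule using its neighbours' tables.
INF = float("inf")
links = {("A", "B"): 1, ("B", "C"): 1, ("A", "C"): 5}  # assumed link costs
routers = ["A", "B", "C"]

def cost(u, v):
    return links.get((u, v)) or links.get((v, u)) or INF

# distance[r][d] = best known cost from router r to router d
distance = {r: {d: (0 if r == d else cost(r, d)) for d in routers} for r in routers}

changed = True
while changed:  # iterate until the tables converge
    changed = False
    for r in routers:
        for n in routers:          # each neighbour n advertises its table to r
            if cost(r, n) == INF:
                continue
            for d in routers:
                via_n = cost(r, n) + distance[n][d]
                if via_n < distance[r][d]:
                    distance[r][d] = via_n
                    changed = True

print(distance["A"]["C"])  # 2 (A -> B -> C beats the direct cost-5 link)
```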
RIP uses hop count as its metric, with a maximum of 15 hops, which limits the
size of the networks it can effectively support. RIP routers periodically
broadcast their entire routing table
to neighboring routers. By default, updates are sent every 30 seconds, although
this interval can be adjusted. RIP employs the split horizon technique to prevent
routing loops. With split horizon, a router does not advertise routes back out the
interface from which they were learned. RIP uses route poisoning to inform other
routers that a route has become unreachable. When a route is no longer
available, the router advertises the route with an infinite metric (16 hops) to
indicate its unreachability.
There are two versions of RIP: RIP version 1 (RIPv1) and RIP version 2
(RIPv2). RIPv2 includes enhancements such as support for Variable Length
Subnet Masking (VLSM), authentication, and the use of multicast for routing
updates.
RIP has several limitations:
● RIP's periodic update mechanism and limited metric (hop count) can lead
to slow convergence, especially in larger networks.
● The maximum hop count limit restricts the size of networks that RIP can
support. RIP is not suitable for large or complex networks.
● In some scenarios, RIP may encounter routing loops.
● RIP's sole metric, hop count, does not consider factors such as bandwidth
or delay, which can lead to suboptimal routing decisions.
Assume that each link costs 1. All the routers start with an empty LSDB (Link
State Database).
The following steps illustrate how the Link State Routing algorithm would
operate in this network:
Discovery phase: Each router sends Hello packets to discover its neighbors.
Based on the topology, each router learns the following information:
LSA flooding: Each router floods its own LSA (Link State Advertisement) to all other
routers in the network. The LSA contains information about the router's own
links and the state of its neighboring routers.
After flooding is complete, the LSDB for each router will look like this:
P: P, Q, R, S, T, U
Q: P, Q, R, S, T, U
R: P, Q, R, T, U
S: Q, S
T: Q, T, U
U: R, T, U
Destination   Next hop
R             R
S             Q
T             Q
U             R
Updating LSAs: Suppose that Link P-R fails. Router R would detect the failure
and send a new LSA to all other routers in the network.
After flooding is complete, the LSDB for each router will look like this:
P: P, Q, S, T, U
Q: P, Q, S, T, U
R: P, Q, S, T, U
S: Q, S
T: Q, T, U
U: R, T, U
Destination   Next hop
Q             Q
R             -
S             Q
T             Q
U             R
In this example, the Link State Routing algorithm maintains an up-to-date view
of the network topology and determines the best path to each destination. The
algorithm is designed to adapt quickly to changes in the network, such as link
failures, and to provide a reliable and efficient way to route packets.
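Once the LSDBs are synchronized, each router runs a shortest-path-first computation on the topology. A minimal Dijkstra sketch, using one possible reading of the six-router topology above with unit link costs:

```python
import heapq

# One possible reading of the six-router topology above (unit link costs).
graph = {
    "P": {"Q": 1, "R": 1},
    "Q": {"P": 1, "S": 1, "T": 1},
    "R": {"P": 1, "U": 1},
    "S": {"Q": 1},
    "T": {"Q": 1, "U": 1},
    "U": {"R": 1, "T": 1},
}

def dijkstra(src):
    """Shortest-path first: what each router runs on its synchronized LSDB."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

print(dijkstra("P"))  # every router is at most 2 hops from P
```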
Here's a comparison of some key points between link-state and distance vector
routing protocols:
The TTL (Time To Live) field limits the lifetime of a packet in the event
of routing loops or other issues. If the TTL value of a packet reaches zero (0)
before it reaches its destination, the packet is discarded by the router that
decremented it to zero. Additionally, the router may send an ICMP (Internet
Control Message Protocol) Time Exceeded message back to the source indicating
that the packet was discarded due to TTL expiration.
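A toy illustration of the TTL safeguard; the initial TTL value is arbitrary:

```python
# Hypothetical packet caught in a forwarding loop: the TTL guarantees it is
# eventually discarded instead of circulating forever.
ttl = 8
hops = 0
while ttl > 0:
    ttl -= 1   # each router decrements the TTL before forwarding
    hops += 1
# TTL reached 0: the last router drops the packet and may send an
# ICMP Time Exceeded message back to the source.
print(hops)  # 8
```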
MPLS
Components of MPLS
● Ingress Router: The router at the edge of the MPLS network where packets
enter from external networks. The ingress router assigns MPLS labels to
incoming packets and forwards them into the MPLS network.
● Egress Router: The router at the edge of the MPLS network where labeled
packets exit the MPLS domain and are forwarded to their final destination.
The egress router removes MPLS labels from outgoing packets before
forwarding them to the next hop or destination.
● Label Switch Router (LSR): Routers within the MPLS network that
perform label switching based on incoming labels. LSRs make forwarding
decisions based on labels and swap labels as packets traverse the MPLS
network.
● Provider Edge (PE) Router: Routers within the service provider's MPLS
network that connect directly to customer networks via CE (Customer
Edge) routers. PE routers establish MPLS connectivity with CE routers. A
PE router can function as both an ingress and an egress router, depending
on the context and the flow of traffic within an MPLS (Multiprotocol Label
Switching) network.
VRF (Virtual Routing and Forwarding) allows a single router to maintain
multiple independent routing tables and policies. Multiple VRFs can coexist
on the same router. Each VRF
corresponds to a virtual packet-forwarding table. VRF configurations are
typically done on Provider Edge (PE) routers. Each VRF instance is bound to one
or more physical or logical interfaces on the PE router. These interface bindings
determine which interfaces belong to each VRF instance and where traffic
belonging to that VRF is received or forwarded. VRF instances are typically
denoted by assigning a unique name or identifier to each VRF. These identifiers
use a mix of letters and numbers. For example, VRF names like VRF123,
CustomerA, or Site42.
CHAPTER-7
VLAN
7.1. Introduction
Switched network
When VLANs were not in the picture, we were using the type of network depicted
in the figure below.
Here you can see that each network is attached to the router through its own hub.
Notice that each department has its own LAN, so if you needed to add new users
to, let’s say, Sales, you would just plug them into the Sales LAN, and they would
automatically become part of the Sales collision and broadcast domain. This
design really worked well for many years. But there was one major flaw. What
happens if the hub for Sales is full, and we need to add another user to the Sales
LAN? Or, what do we do if there’s no more physical space available where the
Sales team is located for this new employee?
Well, let’s say there just happens to be plenty of room in the Finance section of
the building. The new Sales team member will just have to sit with the Finance
people, and we'll plug that team member's system into the Finance hub. Doing
this obviously makes the new user part
of the Finance LAN, which is very bad for many reasons. First and foremost, we
now have a major security issue. Because the new Sales employee is a member
of the Finance broadcast domain, the newbie can see all the same servers and
access all network services that the Finance folks can. Second, for this user to
access the Sales network services, they would have to go through the router to
log in to the Sales server—not exactly efficient!
But if you create a virtual LAN (VLAN), you can solve many of these problems.
● VLANs enable you to group devices together logically, even though they are
physically connected to the same switch.
● All the devices connected to the same switch are in the same broadcast
domain.
● But VLANs allow you to create separate broadcast domains within a single
physical switch.
● Devices within the same VLAN can communicate with each other as if they
were on the same physical network.
● Devices in different VLANs typically cannot communicate with each other
without routing.
● VLANs enhance network security by isolating traffic.
When a switch port is configured as an access port, it will only carry traffic for
the specified VLAN. Access ports are primarily used to connect end-user devices
such as computers, printers, IP phones, cameras, and other network peripherals
to the local network. By connecting these devices to access ports, they can
communicate with other devices within the same VLAN. In the fig: ___ Port nos
2,3,6 are the access ports of VLAN 3, Port nos 1,7 are the access ports of VLAN
4 and Port nos 4,5,8 are the access ports of VLAN 5.
Trunk ports are switch ports configured to carry traffic for multiple VLANs
simultaneously. They are primarily used to interconnect switches.
This eliminates the need for separate cables for each VLAN when connecting
switches, promoting network efficiency. This allows VLAN traffic to traverse
multiple switches while maintaining VLAN segregation and ensuring that frames
reach their intended destinations. Trunk ports are also used to connect switches
to routers, servers, or other networking devices that support VLAN tagging. This
enables these devices to communicate with multiple VLANs on the network. Each
frame transmitted over a trunk port includes a VLAN tag that identifies the VLAN to
which the frame belongs.
determine the VLAN to which the frame belongs. This allows switches on the
receiving end to correctly forward the frames to the appropriate VLANs.
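The 4-byte 802.1Q tag carries the VLAN ID in the low 12 bits of the Tag Control Information field. A minimal sketch of building and parsing such a tag; the VLAN ID and priority values are assumed:

```python
import struct

# Build and parse an 802.1Q tag (assumed values for illustration).
TPID = 0x8100          # Tag Protocol Identifier for 802.1Q
vlan_id = 30           # 12-bit VLAN ID
priority = 5           # 3-bit PCP (priority code point)

tci = (priority << 13) | vlan_id     # Tag Control Information (DEI bit = 0)
tag = struct.pack("!HH", TPID, tci)  # the 4-byte tag inserted into the frame

# A receiving switch reads the tag back to learn the frame's VLAN:
rx_tpid, rx_tci = struct.unpack("!HH", tag)
assert rx_tpid == TPID
print(rx_tci & 0x0FFF)   # 30  (VLAN ID)
print(rx_tci >> 13)      # 5   (priority)
```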
What we see in the above figure is that each router interface is plugged into an
access link. This means that each of the routers’ interface IP addresses would
then become the default gateway address for each host in each respective VLAN.
Instead of using a router interface for each VLAN, you can use one Fast Ethernet
interface and run 802.1Q trunking (also known as VLAN trunking). The figure above
shows how a Fast Ethernet interface on a router looks when configured for 802.1Q
trunking. This allows all VLANs to communicate through one interface. Cisco calls
this a "router on a stick."
How Router on a Stick works.
● First, you configure the switch to support VLANs. Then create and assign VLANs
to specific switch ports where devices are connected.
● On the switch, configure a port as a trunk port. It carries traffic from multiple
VLANs across a single physical link.
● This trunk link is then connected to one of the router's physical interfaces.
● Configure logical subinterfaces on that physical interface of the router.
Subinterfaces are commonly used in routers to perform inter-VLAN routing.
● Each subinterface is configured with its own unique network settings, including IP
address, subnet mask, VLAN tagging. These settings allow the subinterface to
operate as if it were a distinct physical interface.
CHAPTER-8
IP MULTICAST
There are three types of traffic that we can choose from for our networks:
● Unicast
● Broadcast
● Multicast
If you want to send a message from one source to one destination, we use
unicast. If you want to send a message from one source to everyone, we use
broadcast. And if you want to send a message from one source to a group of
interested receivers, we use multicast.
Why do you want to use multicast instead of unicast or broadcast? That’s best
explained with an example. Let’s imagine that we want to stream a high
definition video on the network using unicast, broadcast or multicast. You will
see the advantages and disadvantages of each traffic type. Let’s start with
unicast:
LAN, the other two hosts are on another site that is connected through a 30
Mbps WAN link.
Each additional host that wants to receive this video stream will put more
burden on the video server and require more bandwidth from the WAN link.
Hence it is not scalable.
If our video server broadcasts its traffic, the load on the video server
is reduced, since it sends the packets only once. The problem, however, is that
everyone in the broadcast domain will receive it, whether they want it or not.
Another issue is that routers do not forward broadcast traffic; it will be
dropped.
Multicast traffic is very efficient. This time we only have two hosts that are
interested in receiving the video stream. The video server still sends the
packets only once. Switches forward multicast packets selectively, only to
interested receivers, while routers replicate and distribute multicast packets
across different network segments. This reduces the load on the video server
and network traffic in general.
When using unicast, each additional host will increase the load and traffic rate.
With multicast it will remain the same.
Multicast Components
Multicast is efficient but it doesn’t work “out of the box”. There are a
number of components that we require.
Above you can see the router is receiving the multicast traffic from the
video server. It doesn't know whether, or where, it should forward this multicast
traffic.
We need some mechanism on our hosts that tells the router when they want to
receive multicast traffic. We use the IGMP (Internet Group Management Protocol)
for this. Hosts that want to receive multicast traffic will use the IGMP protocol
to tell the router which multicast traffic they want to receive.
IGMP helps the router to figure out on what interfaces it should forward
multicast traffic but what about switches? Take a look at the following image:
To help the switch figure out where to forward multicast traffic, we can
use IGMP snooping. The switch will “listen” to IGMP messages between the
host(s) and router to figure out where it should forward multicast traffic to.
Above we have our video server that is forwarding multicast traffic to R1.
On the bottom there’s H1 who is interested in receiving it.
Multicast IP Addresses
Some of the addresses are reserved and we can't use them for our own
applications. The 224.0.0.0 – 224.0.0.255 range has been reserved by IANA for
use by network protocols. Multicast packets in this range are link-local and
are not forwarded by routers between subnets.
Address Usage
224.0.0.18 VRRP
For all multicast MAC addresses, the first 25 bits are the same, because
IANA has reserved the block from 01:00:5e:00:00:00 to 01:00:5e:7f:ff:ff for
encapsulating IP multicast datagrams.
From the above example, you can find that the MAC addresses for 224.11.1.2 and
225.11.1.2 are the same, because the top 5 bits of the IP address (shown in the
above figure) are not used in the IP-to-MAC mapping.
All the multicast IPs shown in the below table are mapped to the same multicast
MAC address - 01:00:5E:0B:01:02
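The IP-to-MAC mapping described above can be sketched as follows:

```python
import ipaddress

def multicast_mac(ip: str) -> str:
    """Map a multicast IPv4 address to its Ethernet MAC address:
    fixed 25-bit prefix (01:00:5e + a 0 bit) plus the low 23 bits of the IP."""
    low23 = int(ipaddress.ip_address(ip)) & 0x7FFFFF   # keep low 23 bits
    mac = 0x01005E000000 | low23                       # OR into the IANA prefix
    return ":".join(f"{(mac >> s) & 0xFF:02x}" for s in range(40, -8, -8))

print(multicast_mac("224.11.1.2"))  # 01:00:5e:0b:01:02
print(multicast_mac("225.11.1.2"))  # 01:00:5e:0b:01:02 (same MAC: 5 IP bits lost)
```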
IGMP (Internet Group Management Protocol): IGMP is a communication protocol
used by IPv4 hosts and multicast routers to manage multicast group memberships
within a network. IGMP operates at the network layer (Layer 3) of the OSI model.
IGMP version 2 (IGMPv2) is widely used in IPv4 networks for managing multicast
group memberships.
- Membership query
- Membership report
IGMP Leave Group message: IGMP leave group message is sent by an IPv4 host
to notify the local multicast router that it is no longer interested in receiving
traffic for a specific multicast group.
IGMP messages are sent with the TTL field in the IP header set to one.
Therefore, IGMP messages are never forwarded by routers.
IGMP Snooping
Layer two switches are simple devices. They learn source MAC addresses
and insert these in their MAC address tables. When a frame arrives, they check
for the destination MAC address, perform a lookup in the MAC address table
and then forward the frame. This works very well for unicast traffic but it’s a
problem for multicast traffic. Take a look at the example below:
Multicast Routing
Above figure: R1 receives a multicast packet from some video server, the
destination address is 239.1.1.1. But the routing table is a unicast routing table.
There’s no information about any multicast addresses in there. Router 1 will
have no idea where to forward this multicast traffic to.
Dense Mode: Dense mode multicast routing protocols are used for networks
where most subnets in your network should receive the multicast traffic. When
a router receives the multicast traffic, it will flood it on all of its interfaces except
the interface where it received the multicast traffic.
In the example above both the hosts H1 and H2 are interested in multicast traffic
but what if there are hosts that don’t want to receive it?
A multicast router can tell its neighbor that it doesn’t want to receive the
multicast traffic anymore. This happens when:
● The router doesn’t have any downstream neighbors that require the
multicast traffic.
● The router doesn’t have any hosts on its directly connected interface that
require the multicast traffic.
Above we see R1 that receives the multicast traffic from our video server.
It floods this multicast traffic to R2 and R3. But these two routers don’t have
any interest in multicast traffic. They will send a prune message to signal R1
that it should no longer forward the multicast traffic.
There is one additional check also to prevent loops. It is called RPF (Reverse
Path Forwarding).
Even though routers do not forward multicast packets on the interface
where they received them, a multicast routing loop can still exist. This
can be prevented by implementing the RPF check:
● Is there an entry that matches the source address in the unicast routing
table?
When a multicast packet is received on the interface that matches the
information in the unicast routing table, it passes the RPF check and is
accepted. When it fails the RPF check, the packet is dropped.
Above we see R1 floods the multicast traffic to R2 and R3. R2 also floods
it to R3. R3 will now perform a RPF check. It sees the source address of the
multicast data is 192.168.12.2 and checks the unicast routing table. It finds a
route for 192.168.12.2 that points to R1.
The packet that R3 receives from R1 passes the RPF check, since it arrives
on the Fa0/0 interface; the one it receives from R2 fails the RPF check.
So, the multicast packet from R2 will be dropped.
R3 will then flood the multicast packet towards R2 who will also do a RPF
check. It will drop this packet since R2 uses its interface towards R1 to reach
192.168.12.2.
Another way to look at this is that the RPF check ensures that only
multicast packets from the shortest path are accepted. Multicast packets that
travel longer paths are dropped.
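The RPF check can be sketched as a small function; the route and interface names mirror the example above but are simplified assumptions:

```python
import ipaddress

# RPF check sketch: accept a multicast packet only if it arrived on the
# interface that the unicast routing table uses to reach its source.
# (The route and interface names here are assumed for illustration.)
unicast_routes = {"192.168.12.0/24": "Fa0/0"}  # source network -> best interface

def rpf_check(source_ip: str, in_interface: str) -> bool:
    src = ipaddress.ip_address(source_ip)
    for net, iface in unicast_routes.items():
        if src in ipaddress.ip_network(net):
            return iface == in_interface  # pass only if the interfaces match
    return False  # no unicast route to the source: fail the check

# R3's view from the example: the route to 192.168.12.2 points out Fa0/0.
print(rpf_check("192.168.12.2", "Fa0/0"))  # True  -> accept the copy from R1
print(rpf_check("192.168.12.2", "Fa0/1"))  # False -> drop the copy from R2
```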
Upstream router - The router from which we receive multicast traffic (the
source side)
RPF neighbor
In PIM (Protocol Independent Multicast), the term "RPF neighbor" stands for
"Reverse Path Forwarding neighbor."
When a router receives multicast traffic, it performs an RPF check. It looks at its
unicast routing table to determine the upstream interface from which it expects
to receive unicast traffic for the source of the multicast stream.
The router compares the incoming interface of the multicast packet with the
expected upstream interface determined by the RPF check.
If the incoming interface matches the expected upstream interface, the router
forwards the multicast packet. If they don't match, the router might discard the
packet to prevent loops.
In this context, the upstream neighbor from which the router expects to receive
multicast traffic is referred to as the "RPF neighbor" for that source.
PIM Dense Mode is a push method in which source-based trees are used.
Above we see a video server sending a multicast packet towards R1. H1 wants to
receive the same multicast. So H1 will send an IGMP join request to R6. As
soon as R1 receives this multicast packet, it will create an entry in its multicast
routing table where it stores the source address and multicast group address. It
will then flood the traffic on all of its interfaces except the interface where it
received the multicast packets.
Other routers that receive this multicast packet will also create an entry
in their multicast routing tables and flood it on all of their interfaces except
the interface where they received it. This does cause some issues,
one problem is that we will have multicast routing loops. You can see that the
packet that R1 receives is forwarded to R2 > R4 > R5 and back to R1 (and the
other way around).
Each router that is not interested in the multicast traffic will send a prune
to its upstream router, requesting it to stop forwarding it. Pruning of multicast
traffic helps to prevent looping.
The interfaces of routers R2 and R6 marked with arrows will not send a prune
message back, because H1 wants to receive the multicast packets.
Now multicast traffic is flooded from R1 to R2 > R6 > H1. This flood and
prune behavior will occur every three minutes. This topology is called the source-
based distribution tree or SPT (Shortest Path Tree). The source is the root of
our tree. The routers in between that are forwarding traffic are the nodes. The
subnets with receivers are the branches and leaves of the tree. Depending on the
source and/or multicast groups that we use, you might have more than one
source tree in your network. We use the (S,G) notation to refer to a particular
source tree.
● Dense mode floods multicast traffic until a router asks you to stop.
● Sparse mode sends multicast traffic only when a router requests it.
To fix this issue, sparse mode uses a special router called the RP
(Rendezvous Point). All multicast traffic is forwarded to the RP and when other
routers want to receive it, they’ll have to find their way towards the RP.
Above we see R1 which is the RP for the network. It’s receiving multicast
traffic from the video server but at the moment nobody is interested in it. R1 will
not send any multicast traffic on the network at this moment.
● R3 receives the IGMP join and will request R1 (Using PIM Join message)
to start sending the multicast traffic.
When using sparse mode, all routers need to know the IP address of the RP.
(This will be discussed later.)
● Once the RP receives the PIM register message there are two options:
o When nobody is interested in the multicast traffic, the RP will reply
with a PIM register stop message.
o When there is at least one receiver, the RP accepts the PIM register
message.
● When the suppression timer is almost expired, R1 will send a PIM register
null packet to the RP.
● PIM register null packet doesn’t carry the encapsulated multicast packet.
It’s a simple request to ask the RP if it is interested now.
● If there are still no receivers, the RP will send another PIM register stop
message.
● When there are receivers, the RP will not send a PIM register stop message
to R1
● The host (H1) that is connected to R6 would like to receive multicast traffic.
● R6 now has to figure out how to get to the RP and request it to start
forwarding the multicast traffic.
● R6 will check its unicast routing table for the IP address of the RP and
send a PIM join message on the interface that is used to reach the RP.
● When R4 receives the PIM join, it has to request the RP to start forwarding
multicast traffic
● So R4 will check its unicast routing table, find the interface that is used
to reach the RP and send a PIM join message towards the RP.
● When the RP receives the PIM join, it will start forwarding the multicast
traffic.
● Multicast traffic is now flowing from R1 towards the RP, down to R4, R6
and to our receiver (H1).
This concept of joining the RP is called the RPT (Rendezvous Point Tree) or
Shared Tree. The RP is the root of our tree which decides where to forward
multicast traffic to. Each multicast group might have different sources and
receivers so we might have different RPTs in our network.
If you look closely at the picture above then you might have noticed that R6 has
multiple paths towards the source. Right now multicast traffic is flowing like
this:
This is not the most optimal path. The path from R1 > R2 > R6 has one less
router than the current path. So if all interfaces are equal, this path is probably
better.
Once H1 starts receiving multicast traffic through the RP, it’s possible to switch
to the SPT (Shortest Path Tree) - R1 > R2 > R6.
● When R6 received the multicast traffic through the RP, it also learned the
source address of this multicast.
● R6 checks its unicast routing table to find a better path to reach the
source. It finds that R1 > R2 > R6 is the better path.
● Now R6 decided to use the SPT (R1 > R2 > R6) instead of the RPT (R1 > R5
> R4 > R6) to receive this traffic.
● For this, R6 will send PIM join messages to R2. R2 will forward the PIM
join to R1
● R1 will start forwarding multicast traffic towards R6, using the best path:
R1 > R2 > R6 (the SPT, Shortest Path Tree)
● PIM sparse mode uses a RP (Rendezvous Point) as a “central point” for our
multicast traffic.
● Routers will use PIM register packets to register sources with the RP. The
first multicast packet is encapsulated and forwarded to the RP.
● When the RP is not interested in traffic from a certain group then it will
send a PIM register stop packet.
● The router that sent the PIM register will start a suppression timer (60
seconds) and will send a PIM register null packet a few seconds before the
suppression timer expires.
● Routers with receivers will join the RPT (Rendezvous Point Tree) for each
group that they want to receive.
● Once routers with receivers get a multicast packet from the RP, they will
switch from the RPT to the SPT when traffic exceeds 0 kbps (in other
words: immediately).
Candidate BSR: This is the router that collects information from all available
RPs in the network and advertises it throughout the network.
Candidate RP: These are the routers that advertise themselves that they want
to be the RP for the network.
The TTL of these BSR messages is 1, so they are not routed. When a multicast router receives the
message, it will do an RPF check on the source address of the BSR and will
resend the message on all other PIM-enabled interfaces.
The BSR messages will contain information about the BSR and the RP-to-
group mappings.
Above we have a small network with six routers. R3 is the BSR and sends
BSR messages on its interfaces. All other routers will re-send these messages.
There can be more than one candidate BSR in a network, but only one active
BSR. The candidate BSR with the highest priority becomes the active BSR.
When a network is initiated, all the routers, including candidate RPs, will
receive a BSR message. Since it is the first BSR message, it will not carry any
RP-to-group mapping data. Candidate RPs also learn the source address of the
BSR from it. Now candidate RPs will start sending their RP announcement packets
to the unicast IP address of the BSR (3.3.3.3).
Once the BSR receives the RP announcements, it will build a list of all RPs and
the multicast groups they want to serve. This is called the group-to-RP mapping
set. The BSR will then include this list in its next BSR messages, so that all
multicast routers in the network receive it.
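As a rough illustration of the group-to-RP mapping set, the following Python sketch merges invented RP announcements into such a mapping:

```python
# Illustrative sketch (not vendor code) of how a BSR might merge the
# RP announcements it receives into a group-to-RP mapping set, which
# is then advertised in subsequent BSR messages.

def build_mapping_set(announcements):
    """announcements: list of (rp_address, [multicast groups served]).
    Returns a dict mapping each group to the RPs willing to serve it."""
    mapping = {}
    for rp, groups in announcements:
        for group in groups:
            mapping.setdefault(group, []).append(rp)
    return mapping

# Two hypothetical candidate RPs announce the groups they want to serve.
mappings = build_mapping_set([
    ("1.1.1.1", ["239.1.0.0/16"]),
    ("2.2.2.2", ["239.1.0.0/16", "239.2.0.0/16"]),
])
```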
Multicast PIM Designated Router: The DR is the router that will forward the
PIM join message from the receiver to the RP (rendezvous point).
If both R2 and R3 were to receive the receiver's join and forward it to R1, we
would have two multicast streams, which results in duplicate packets and wasted
bandwidth.
To mitigate this issue, PIM-SM elects only one Designated Router (DR) between
R2 and R3. Here, R3 has been elected as the Designated Router on this segment
because, by default, the highest IP address determines who becomes the PIM DR.
The DR priority can be changed using suitable configuration.
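The DR election rule just described (highest DR priority wins, highest IP address breaks a tie) can be sketched as follows; the router names and addresses are illustrative:

```python
# A minimal sketch of PIM DR election on a shared segment:
# highest DR priority wins; on a tie, the highest IP address wins.
from ipaddress import ip_address

def elect_dr(routers):
    """routers: list of (name, ip, dr_priority). Returns winner's name."""
    return max(routers, key=lambda r: (r[2], ip_address(r[1])))[0]

# With equal priorities, R3's higher address makes it the DR.
winner = elect_dr([("R2", "192.168.1.2", 1), ("R3", "192.168.1.3", 1)])
```

Raising R2's priority above R3's would override the address tiebreaker, which is exactly what the configuration knob mentioned above does.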
CHAPTER-9
PROTOCOLS
9.1. VRRP
VRRP (Virtual Router Redundancy Protocol) allows a group of routers to present
a single virtual router, with one virtual IP address, as the default gateway for
local hosts. The virtual router answers ARP requests for the virtual IP address,
ensuring that hosts on the LAN can communicate directly with the virtual IP
address.
Master router: One of the VRRP group members is elected as master router.
Routers in the group participate in an election process to determine the master
router. The master router is elected on the basis of priority. If the priority is the
same (by default 100) then the router having the highest IP address will become
the master router. Administrators can manually configure the priority value for
each router within the VRRP group. Once the master router is elected, it assumes
the responsibility of forwarding traffic destined for the virtual IP address
associated with the VRRP group. Hosts on the LAN send packets to the virtual
IP address, assuming it to be their default gateway. The master router receives
these packets and forwards them to their destinations as required.
Backup routers: Only one of the VRRP group members becomes the master router;
the others act as backup routers. Backup routers monitor the status
of the master router. If the master router fails or becomes unreachable, the
backup router with the next highest priority value takes over as the new master
router.
VRRP also derives a virtual MAC address for the group, of the form
00-00-5E-00-01-XX, where XX is the VRID in hexadecimal. For example, if the VRID
assigned to a VRRP group is ‘10’, then the virtual MAC address is
00-00-5E-00-01-0A.
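The VRID-to-virtual-MAC mapping can be expressed as a small helper function (an illustrative sketch, not vendor code):

```python
# The VRRP virtual MAC address has the fixed prefix 00-00-5E-00-01 for
# IPv4, with the VRID as the final octet (per the VRRP specification).

def vrrp_virtual_mac(vrid: int) -> str:
    """Return the IPv4 VRRP virtual MAC address for a given VRID."""
    if not 1 <= vrid <= 255:
        raise ValueError("VRID must be 1-255")
    return "00-00-5E-00-01-{:02X}".format(vrid)
```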
Master advertisement timer: This governs the keep-alive messages from the
master router. The Master Advertisement Timer (MA Timer) is used by the master
router in a VRRP group to periodically send VRRP advertisement messages
announcing its status as the master. These advertisements are multicast packets
sent to the well-known IP multicast address 224.0.0.18 (for IPv4), by default
every 1 second.
Master dead timer: The Master Dead Timer (MD Timer) determines how long a
backup router waits for a master advertisement from the current master router.
If the backup router does not receive an advertisement within this time, it
assumes that the master router is no longer operational. The interval is 3 × the
advertisement interval plus a skew time of (256 − priority)/256 seconds; with
the default priority of 100 and a 1-second advertisement interval this works out
to roughly 3.6 seconds. If the backup router does not hear from the master for
that long, it takes over the responsibilities of the master. The Master Dead
Timer thus plays a crucial role in triggering the failover process.
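The computation of this interval, following the formula in the VRRP specification (RFC 3768/5798), can be sketched as:

```python
# Master_Down_Interval = 3 * Advertisement_Interval + Skew_Time
# Skew_Time = (256 - Priority) / 256 seconds
# Higher-priority backups have a smaller skew time, so they time out
# (and take over) sooner than lower-priority ones.

def skew_time(priority: int) -> float:
    return (256 - priority) / 256

def master_down_interval(priority: int, adv_interval: float = 1.0) -> float:
    return 3 * adv_interval + skew_time(priority)
```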
For example, let's say we have two routers configured with VRRP. Router A is the
primary router, and Router B is the backup. You can configure VRRP object
tracking on Router A to monitor the status of a specific interface (like an uplink
to an ISP) or a remote IP address (like the IP address of a critical server). If Router
A detects that the monitored object becomes unreachable, it can decrease its
VRRP priority, allowing Router B to take over as the active router. Once the object
becomes reachable again, Router A can increase its priority and potentially
reclaim its role as the active router.
By using VRRP object tracking, network administrators can ensure that failover
decisions are not based on local router state only, but also on the availability of
critical network resources. As a result, VRRP object tracking contributes to
improved network reliability and resilience, reducing the risk of downtime and
enhancing overall network performance.
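The interplay of object tracking, priority decrement, and master election can be sketched as follows; the priorities and the decrement value of 30 are illustrative configuration choices, not defaults:

```python
# Hypothetical sketch of VRRP object tracking: when a tracked object
# (e.g. an uplink interface) goes down, the router decrements its VRRP
# priority, which can let a backup router win the master election.

def effective_priority(base_priority, tracked_up, decrement=30):
    """Priority after applying the tracking decrement, if any."""
    return base_priority if tracked_up else base_priority - decrement

def elect_master(routers):
    """routers: dict of name -> effective priority. Highest wins."""
    return max(routers, key=routers.get)

# Router A (priority 120) normally beats Router B (priority 110) ...
normal = elect_master({"A": effective_priority(120, True), "B": 110})
# ... but when A's tracked uplink fails, its priority drops to 90.
failover = elect_master({"A": effective_priority(120, False), "B": 110})
```

Once the tracked object recovers, A's effective priority returns to 120 and (with preemption enabled) it can reclaim the master role.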
Components of SNMP
Managed Devices: These are the network devices that are being monitored or managed
using SNMP. Examples include routers, switches, servers, printers, etc.
Agents: SNMP agents are software modules running on managed devices. They collect
and store management information and make it available to SNMP managers.
Network Management System (NMS): The NMS is the central management station
responsible for monitoring and managing the managed devices. It communicates with
SNMP agents on managed devices to gather information and issue commands.
Get: The NMS requests specific data from a managed device, such as device configuration
or performance metrics.
Get Response: This is the response sent by the SNMP agent to the SNMP manager in
response to a GET request.
Set: The NMS sends instructions to a managed device to modify its configuration or
settings.
Trap/Inform: The managed device sends unsolicited messages (traps or informs) to the
NMS to notify it of specific events or conditions, such as system reboots, interface status
changes, or critical errors.
Get Next: Similar to the Get operation, but retrieves the next available data object in the
MIB (Management Information Base), which is a hierarchical database containing
managed objects representing various aspects of the device's configuration and status.
GetBulk: Retrieves a large amount of data from the MIB in a single operation, reducing
network overhead for large data requests.
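The semantics of Get and GetNext can be illustrated with a toy MIB, modelled here as a table of OID-to-value pairs. The OIDs are ifDescr/ifOperStatus-style examples; note that real agents compare OID components numerically, while this sketch relies on the single-digit components sorting correctly as strings:

```python
# A toy MIB: lexicographically ordered OID -> value entries.
TOY_MIB = {
    "1.3.6.1.2.1.2.2.1.2.1": "eth0",   # ifDescr.1
    "1.3.6.1.2.1.2.2.1.2.2": "eth1",   # ifDescr.2
    "1.3.6.1.2.1.2.2.1.8.1": "up",     # ifOperStatus.1
}

def snmp_get(oid):
    """Get: return the value bound to an exact OID, if present."""
    return TOY_MIB.get(oid)

def snmp_get_next(oid):
    """GetNext: return the (oid, value) pair that follows the given
    OID in lexicographic order -- the basis of an SNMP walk."""
    for candidate in sorted(TOY_MIB):
        if candidate > oid:
            return candidate, TOY_MIB[candidate]
    return None  # end of MIB
```

Repeatedly calling GetNext starting from a subtree root is exactly how an `snmpwalk` traverses an agent's MIB.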
The data available in a Management Information Base (MIB) can vary depending
on the specific MIB module being used, the device being managed, and the configuration
of the SNMP agent.
● Network Interfaces: Details about the network interfaces on the device, such as
their status, speed, traffic statistics (e.g., packets transmitted and received), and
configuration parameters.
● Performance Metrics: Metrics related to the performance of the device and its
components, including bandwidth utilization, packet loss, error rates, and latency.
● Event Logs: Log entries and event notifications generated by the device,
including system events, error messages, security alerts, and administrative
actions.
Versions: SNMP has evolved through different versions, including SNMPv1,
SNMPv2c, and SNMPv3, each offering improvements in security, performance, and
functionality. While SNMPv2c remains prevalent due to its simplicity and
backward compatibility, SNMPv3 adds authentication and encryption and is
preferred where security matters.
The Internet Control Message Protocol (ICMP) is a network layer protocol that
provides a means for devices to send error messages and operational information, such
as diagnostics or route-change information, between hosts on an IP network. ICMP is an
integral part of the Internet Protocol Suite (TCP/IP).
Error Reporting: ICMP is primarily used for reporting errors in packet processing. For
example, if a router encounters a problem forwarding a packet, it may send an ICMP
message back to the source indicating the nature of the problem.
Ping: "ping" utility sends ICMP Echo Request messages to a destination host and waits
for ICMP Echo Reply messages. This is used to test whether a host is reachable and to
measure round-trip time (RTT) between hosts.
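ICMP messages such as Echo Request carry an Internet checksum (RFC 1071); a minimal sketch of that computation, over an invented echo-request header:

```python
# Internet checksum: sum the data as 16-bit words with end-around
# carry, then take the one's complement of the result.

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                            # pad to 16 bits
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return ~total & 0xFFFF

# An ICMP Echo Request header with the checksum field zeroed:
# type 8, code 0, checksum 0, identifier 1, sequence 1.
header = bytes([8, 0, 0, 0, 0, 1, 0, 1])
checksum = internet_checksum(header)
```

A receiver verifies a packet by checksumming it with the checksum field included; a correct packet yields 0.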
Traceroute: The traceroute utility sends a series of probe packets towards the
destination host, typically ICMP echo requests (as in Windows tracert) or UDP
datagrams (as in classic Unix traceroute).
The first packet sent has a TTL value of 1. When this packet reaches the first router along
the path, the TTL is decremented to 0, and the router discards the packet. The router then
sends an ICMP Time Exceeded message back to the source indicating that the packet's
TTL expired.
The traceroute utility receives the ICMP Time Exceeded message and records the IP
address of the router that sent it. This IP address represents the first hop on the path to
the destination.
The traceroute utility then sends another packet, this time with a TTL value of 2. This
packet reaches the first router, which decrements the TTL to 1 and forwards the packet to
the next router along the path. When the TTL reaches 0, the second router sends an ICMP
Time Exceeded message back to the source, and its IP address is recorded by the
traceroute utility.
This process is repeated with increasing TTL values until a probe reaches the
destination host. The destination host then answers directly: with an ICMP Echo
Reply if the probes were ICMP echo requests, or with an ICMP Port Unreachable
message if they were UDP datagrams, indicating that the packet has reached its
destination.
By analyzing the sequence of IP addresses received in response to the packets sent with
increasing TTL values, the traceroute utility can determine the path taken by packets from
the source to the destination. This information is useful for diagnosing network
connectivity issues, identifying routing problems etc.
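The TTL-probing logic described above can be simulated without sending any packets; the hop addresses below are invented for the example:

```python
# A simulated walk through the traceroute algorithm. `path` is the
# ordered list of router addresses between us and `destination`.

def simulated_traceroute(path, destination, max_ttl=30):
    """Return the hop addresses discovered by probing with TTL 1, 2, ..."""
    discovered = []
    for ttl in range(1, max_ttl + 1):
        if ttl <= len(path):
            # The TTL expires at router path[ttl-1], which reports
            # itself via an ICMP Time Exceeded message.
            discovered.append(path[ttl - 1])
        else:
            # The probe reaches the destination, which answers directly
            # (Echo Reply for ICMP probes, Port Unreachable for UDP).
            discovered.append(destination)
            break
    return discovered

hops = simulated_traceroute(["10.0.0.1", "10.0.1.1"], "192.0.2.10")
```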
While ICMP is essential for troubleshooting and managing IP networks, it can also
be misused for various attacks, such as ICMP flooding attacks or ICMP redirect attacks.
Therefore, network administrators often configure firewalls and routers to filter ICMP
messages to mitigate potential security risks.
● The DHCP client now has an IP address and other configuration settings for a
limited time period, known as the lease duration. During this time, the client can
use the assigned IP address and communicate on the network.
● Before the lease expires, the DHCP client can request to renew its lease with the
DHCP server. If the lease is not renewed, the IP address is released back to the
DHCP server for reuse by other clients.
● DHCP servers maintain a pool of available IP addresses and manage address
allocation to clients, ensuring efficient utilization of available addresses and
preventing address conflicts. Administrators can configure various settings on the
DHCP server, such as the size of the address pool, lease duration, and network
configuration parameters to be provided to clients.
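The lease bookkeeping described in these points can be sketched as a toy address pool; timestamps are plain numbers (seconds) to keep the example deterministic, and the addresses are illustrative:

```python
# Illustrative sketch of a DHCP server's address pool: leases are
# granted for a fixed duration, can be renewed by the same client,
# and expired addresses return to the free pool for reuse.

class DhcpPool:
    def __init__(self, addresses, lease_duration=3600):
        self.free = list(addresses)
        self.leases = {}          # ip -> (client_id, expiry time)
        self.lease_duration = lease_duration

    def allocate(self, client_id, now):
        """Hand out a free address, reclaiming expired leases first."""
        self.expire(now)
        if not self.free:
            return None           # pool exhausted
        ip = self.free.pop(0)
        self.leases[ip] = (client_id, now + self.lease_duration)
        return ip

    def renew(self, ip, client_id, now):
        """Extend a lease, but only for the client that holds it."""
        if ip in self.leases and self.leases[ip][0] == client_id:
            self.leases[ip] = (client_id, now + self.lease_duration)
            return True
        return False

    def expire(self, now):
        """Return addresses with lapsed leases to the free pool."""
        for ip, (_, expiry) in list(self.leases.items()):
            if expiry <= now:
                del self.leases[ip]
                self.free.append(ip)
```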