HCIA Domain 3

7. Ethernet Switching Basics

1. Overview of Ethernet Protocols


Ethernet is a set of standards governing the transmission of data across
local area networks (LANs).

It defines the structure of data frames, cable types, signal processing methods, and communication protocols to ensure devices in a LAN can communicate effectively.

Ethernet operates primarily at the Data Link Layer (Layer 2) of the OSI
model, but it also defines aspects of physical cabling, connectors, and
signal types at the Physical Layer (Layer 1).

1.1 Key Ethernet Standards


IEEE 802.3: The primary Ethernet standard specifying how data is
transmitted over LANs.

Ethernet Cable Types: Standards specify cable types like twisted-pair (e.g., Cat5, Cat6) and fiber optic cables for different data rates and distances.

Data Rates: Common data rates include Fast Ethernet (100 Mbps), Gigabit
Ethernet (1 Gbps), and newer standards for 10 Gbps, 40 Gbps, and 100
Gbps Ethernet.

1.2 CSMA/CD (Carrier Sense Multiple Access with Collision Detection)

Ethernet is a broadcast network that initially used Carrier Sense Multiple Access with Collision Detection (CSMA/CD) to manage multiple devices attempting to send data on a shared network medium.

Carrier Sense (CS):

Before sending data, a device listens to detect if another device is currently transmitting.

Multiple Access (MA):

Multiple devices share the same network medium.

Collision Detection (CD):

If two devices send data simultaneously, a collision is detected.

1.3 CSMA/CD Process


1. Listen Before Send: A device checks if the network line is idle.

2. Send If Idle: If the line is idle, the device sends its data.

3. Collision Handling: If two devices send data simultaneously, a collision occurs, making the signal unstable.

4. Collision Detection and Jam Signal: Devices stop transmission upon detecting a collision, and each device sends a jam signal (a "disturbing pulse") to notify other devices of the collision.

5. Backoff and Retry: After a random delay, each device attempts to resend
the data, repeating the process if another collision occurs.

This approach works well in small networks but can create performance
bottlenecks in larger or heavily trafficked networks.
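The backoff-and-retry step can be sketched in Python. This is a minimal illustration of the truncated binary exponential backoff used by classic CSMA/CD; the 51.2 µs slot time is the value for 10 Mbps Ethernet, and the 16-attempt limit mirrors the standard's retry cap.

```python
import random

SLOT_TIME_US = 51.2  # slot time for 10 Mbps Ethernet, in microseconds

def backoff_delay(collision_count: int, rng=random) -> float:
    """Truncated binary exponential backoff used by classic CSMA/CD.

    After the n-th collision a station picks k uniformly from
    [0, 2^min(n, 10) - 1] and waits k slot times. After 16 failed
    attempts the frame is dropped (signalled here by an exception).
    """
    if collision_count > 16:
        raise RuntimeError("excessive collisions: frame discarded")
    exponent = min(collision_count, 10)
    k = rng.randint(0, 2 ** exponent - 1)
    return k * SLOT_TIME_US
```

Because the delay is random, two colliding stations are unlikely to pick the same slot again, which is what makes the retry converge.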

1.4 Early Ethernet: Hubs and Limitations


Initially, hubs were commonly used to connect devices in Ethernet
networks. Hubs operate at the Physical Layer (Layer 1) and act as
repeaters, forwarding incoming signals to all connected devices.

Collision Domain: All devices connected to a hub share a single collision domain, meaning collisions can happen frequently.

Broadcast Traffic: Hubs do not restrict broadcast traffic, so congestion and collisions increase as the network scales.

1.5 Switch Networking and Collision Domains


Switches operate at the Data Link Layer (Layer 2) and use MAC addresses
to forward data frames only to the intended device, unlike hubs that forward
to all connected devices.

Switches break the network into multiple collision domains, which reduces
collisions and improves performance.

Switches and Collision Domains: Each switch port creates a separate collision domain, isolating traffic and reducing collision frequency.

Limitation on Broadcast Traffic: While switches limit collision domains,
they do not contain broadcast domains. Therefore, broadcast traffic is
still sent to all devices in the same broadcast domain.

1.6 Broadcast Domain and MAC Addresses


A broadcast domain is a logical network segment where broadcast frames
(like ARP requests) are received by all devices.

Broadcasts are essential for functions like address resolution but can
congest the network as the number of devices grows.

Broadcast MAC Address: Ethernet uses a broadcast MAC address (FF-FF-FF-FF-FF-FF) to address all devices within a broadcast domain.

Unique MAC Addresses: Each device has a unique MAC address for
identification and communication within a LAN. The MAC address is
assigned by the NIC manufacturer and is globally unique.

1.7 Network Interface Card (NIC)


The Network Interface Card (NIC) is the hardware component enabling a
device to connect to a network.

Each NIC has a unique MAC address used to identify the device on the
Ethernet network.

Port and NIC Association: Each network port on a switch or device corresponds to a NIC, and traffic is sent and received via this interface.

Types of NICs: Ethernet NICs are common for most devices, but other
network types like Wi-Fi or optical fiber require different types of NICs.

2. Ethernet Frames
An Ethernet frame is the basic data unit used in Ethernet networks,
encapsulating the data for transmission across the network.

Ethernet frames have two main formats:

1. Ethernet II Frame

2. IEEE 802.3 Frame

Each format is used to transmit data within an Ethernet network, but they differ
in structure and usage.

1. Ethernet II Frame Format

The Ethernet II frame is one of the most common Ethernet frame formats, widely used in modern IP-based networks.

Here’s a breakdown of its fields:

1. Destination MAC Address (DMAC): 6 bytes

Identifies the MAC address of the device intended to receive the frame.

2. Source MAC Address (SMAC): 6 bytes

Identifies the MAC address of the device sending the frame.

3. Type: 2 bytes

Indicates the protocol type used in the data payload, such as:

0x0800 for IPv4

0x0806 for ARP (Address Resolution Protocol)

4. Payload (Data): Variable length

Contains the actual data being transmitted, such as an IP packet.

5. Frame Check Sequence (FCS): 4 bytes

Ensures data integrity by using a Cyclic Redundancy Check (CRC) to detect errors in the transmitted frame.

2. IEEE 802.3 Frame Format

The IEEE 802.3 frame includes a Logical Link Control (LLC) sub-layer,
which adds fields for additional protocol control and data management:

1. Destination Service Access Point (DSAP): 1 byte

Indicates the protocol type; for example, if the frame uses IP,
the value is set to 0x06 .

2. Source Service Access Point (SSAP): 1 byte

Identifies the source protocol type, also typically 0x06 for IP.

3. Control (Ctrl): 1 byte

Usually set to 0x03 for connectionless services.

This LLC layer makes the IEEE 802.3 format compatible with multiple upper-
layer protocols by adding these fields, though Ethernet II is more commonly
used in modern networks.
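The Ethernet II field layout above can be made concrete with a short parser. This is an illustrative sketch, not a production decoder: it assumes the raw bytes include the trailing 4-byte FCS and does not verify the CRC.

```python
import struct

def parse_ethernet_ii(frame: bytes) -> dict:
    """Split a raw Ethernet II frame into its fields.

    Layout: 6-byte destination MAC, 6-byte source MAC, 2-byte Type,
    variable payload, 4-byte FCS at the end (FCS is not checked here).
    """
    if len(frame) < 18:  # 14-byte header + 4-byte FCS minimum
        raise ValueError("frame too short")
    dmac, smac, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda b: "-".join(f"{x:02X}" for x in b)
    return {
        "dmac": fmt(dmac),
        "smac": fmt(smac),
        "type": ethertype,        # e.g. 0x0800 = IPv4, 0x0806 = ARP
        "payload": frame[14:-4],
        "fcs": frame[-4:],
    }
```

Feeding it a broadcast ARP frame, for instance, yields `type == 0x0806` and an all-FF destination MAC.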

2.1 MAC Address: Unique Identification in Ethernet


A MAC address is a 48-bit identifier assigned to each NIC, giving each
device on an Ethernet network a unique address.

This address is burned into the NIC by the manufacturer, ensuring uniqueness across the globe.

MAC Address Format:

Length: 6 bytes (48 bits), typically presented in hexadecimal format, e.g., 00-1E-10-DD-DD-02.

Organizationally Unique Identifier (OUI): The first 24 bits identify the manufacturer.

Device ID: The last 24 bits uniquely identify the device within the
manufacturer’s production.
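As a small illustration of this OUI/device-ID split, the following hypothetical helper separates the two 24-bit halves of a written MAC address:

```python
def split_mac(mac: str) -> tuple:
    """Split a MAC address like '00-1E-10-DD-DD-02' into its
    24-bit OUI (manufacturer) part and 24-bit device ID part."""
    octets = mac.replace(":", "-").upper().split("-")
    if len(octets) != 6:
        raise ValueError("expected 6 octets")
    return "-".join(octets[:3]), "-".join(octets[3:])
```

It accepts either hyphen- or colon-separated notation, since both are common in practice.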

2.2 IP Address vs. MAC Address


Both IP and MAC addresses are used in Ethernet networks but serve different
purposes:

IP Address:

Layer: Network Layer (Layer 3).

Unique, hierarchical, and assigned based on network topology.

Changeable and used for routing across different networks.

MAC Address:

Layer: Data Link Layer (Layer 2).

Globally unique, fixed at manufacturing, and used for local identification within a network segment.

Not changeable by users, providing a consistent identifier for each device.

2.3 Ethernet Frame Transmission Modes

Ethernet frames can be transmitted in three different modes, depending on
how many devices are targeted:

i. Unicast Ethernet Frame


In unicast mode, an Ethernet frame is sent from one device directly to another using the destination device's unique MAC address.

This is the most common form of communication in Ethernet networks, ensuring data is only sent to the intended recipient.

Frame Structure: The frame’s destination MAC address field contains the unicast MAC address of the receiving device.

MAC Address Characteristics: The least significant bit of the first byte (the I/G bit) in a unicast MAC address is set to 0 .

ii. Broadcast Ethernet Frame


In broadcast mode, the frame is sent to all devices in the same
broadcast domain.

Broadcast frames are used for essential network functions like Address
Resolution Protocol (ARP), which maps IP addresses to MAC
addresses.

Broadcast MAC Address: FF-FF-FF-FF-FF-FF.

Network Impact: Broadcast frames consume bandwidth since all devices process them, potentially reducing network performance in large or busy networks.

Broadcast frames are useful for situations where information needs to reach all devices on the network, but overuse of broadcasts can impact network efficiency.

iii. Multicast Ethernet Frame


Multicast mode is a selective broadcast, allowing the frame to be received only by devices that have joined a specific multicast group.

Multicast MAC Address: The least significant bit of the first byte (the I/G bit) in a multicast MAC address is set to 1 .

Efficiency: Multicast frames are more efficient than broadcasts, as only subscribed devices process the frame.

Multicast frames are often used in applications like video conferencing,
where only a subset of devices need to receive the data.
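The three transmission modes can be distinguished from the destination MAC alone. A minimal sketch, using the convention above that the I/G bit is the least significant bit of the first byte:

```python
def frame_kind(dmac: str) -> str:
    """Classify a destination MAC as unicast, multicast, or broadcast.

    The I/G bit (least significant bit of the first byte) is
    0 for unicast and 1 for group (multicast) addresses; the
    all-ones address is the broadcast address.
    """
    octets = [int(o, 16) for o in dmac.replace(":", "-").split("-")]
    if all(o == 0xFF for o in octets):
        return "broadcast"
    return "multicast" if octets[0] & 0x01 else "unicast"
```

Note that the broadcast address also has its I/G bit set, so the all-ones check must come first.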

3. Ethernet Switches
Switches are network devices operating at the Data Link Layer (Layer 2),
connecting devices within a local area network (LAN).

They are often categorized by their layer functionality:

1. Layer 2 Switch: Forwards frames based on MAC addresses and operates entirely within a single broadcast domain.

2. Layer 3 Switch: Combines routing and switching, capable of forwarding packets between different networks based on IP addresses.

3.1 Working Principles of Layer 2 Switches


Layer 2 switches learn and maintain a mapping of MAC addresses to
interfaces in a MAC address table.

This learning process enables switches to direct traffic within a LAN efficiently.

1. Initial State:

When powered on, the switch’s MAC address table is empty.

2. Frame Reception and Learning:

When a switch receives a frame, it reads the source MAC address and incoming interface.

The switch records this information in the MAC address table, associating the MAC address with the interface.

3. Forwarding Decision:

The switch then examines the destination MAC address in the frame:

If the destination MAC address is already in the MAC address table, the switch forwards the frame to the associated interface.

If the destination MAC address is not in the table, the switch floods the frame to all interfaces except the one that received it.

3.2 MAC Address Table
The MAC address table (also called a forwarding table) is essential for
efficient traffic forwarding within the switch.

Here’s how it works:

1. Mapping MAC Addresses to Interfaces:

Each entry in the table maps a specific MAC address to an interface, allowing the switch to know where each device resides.

2. Entry Aging and Updates:

Dynamically learned MAC address entries have a set lifespan, called aging time. After this period, if the MAC address is not updated, the entry is removed to keep the table current.

Example: On Huawei switches, the default aging time is 300 seconds (5 minutes).

3. Unknown MAC Addresses:

If a frame arrives with a destination MAC address not in the table, it is treated as an unknown unicast and flooded to all interfaces except the one it arrived on.

3.3 MAC Address Learning Process Example


1. Frame Reception:

Host 1 sends a frame to Host 2. Host 1’s MAC address is recorded in the
frame’s source MAC address field.

2. Table Entry Creation:

When the switch receives this frame, it adds an entry in the MAC
address table, mapping Host 1’s MAC address to the interface it arrived
on.

3. Flooding on Unknown MAC:

If Host 2’s MAC address is not yet in the MAC address table, the switch
floods the frame to all interfaces except the one it arrived on.

4. MAC Learning Update:

Host 2 receives the frame and replies to Host 1 with a unicast frame.
This allows the switch to learn Host 2’s MAC address and update its
MAC address table accordingly.
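The learning, flooding, forwarding, and discarding behaviours described above can be captured in a toy model. This sketch ignores aging, VLANs, and frame contents; the port names used in the examples (such as G0/0/1) are arbitrary labels.

```python
class Layer2Switch:
    """Toy model of MAC learning and forwarding (no aging, no VLANs)."""

    BROADCAST = "FF-FF-FF-FF-FF-FF"

    def __init__(self, ports):
        self.ports = list(ports)
        self.mac_table = {}  # MAC address -> port it was learned on

    def receive(self, in_port, smac, dmac):
        """Process one frame; return the list of ports it is sent out of."""
        self.mac_table[smac] = in_port                       # learn the source
        if dmac == self.BROADCAST:
            return [p for p in self.ports if p != in_port]   # broadcast: flood
        out = self.mac_table.get(dmac)
        if out is None:                                      # unknown unicast: flood
            return [p for p in self.ports if p != in_port]
        if out == in_port:                                   # destination on the arrival port: discard
            return []
        return [out]                                         # known unicast: forward
```

Replaying the Host 1 / Host 2 exchange against this model reproduces the flood-then-forward pattern from the example above.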

3.4 Frame Processing Behaviours on a Switch
Switches process frames in three main ways: flooding, forwarding, and
discarding.

i. Flooding
Flooding occurs when a switch forwards a frame out of all interfaces
except the one it arrived on.

Unknown Unicast: If the switch does not find the destination MAC
address in its MAC address table, it floods the frame to all other ports.

Broadcast Frames: Frames with a destination MAC address of FF-FF-FF-FF-FF-FF (broadcast address) are always flooded, as they are intended for all devices in the broadcast domain.

Examples of Flooding:

Unknown Unicast Flooding: Host 1 sends a frame to Host 2, but Host 2’s MAC address is not in the table. The switch floods the frame, allowing Host 2 to respond and update the MAC address table.

ARP Request: Host 1 needs Host 2’s MAC address, so it sends an ARP request (broadcast) with the MAC address FF-FF-FF-FF-FF-FF . The switch floods this frame, and Host 2 responds with its MAC address, which the switch learns and records.

ii. Forwarding
When a destination MAC address is in the switch’s MAC address table,
the switch forwards the frame only to the interface associated with that
MAC address.

This is known as point-to-point forwarding.

Unicast Frame Forwarding: If Host 1 sends a frame to Host 2, and Host 2’s MAC address is in the table, the switch directly forwards the frame to Host 2’s interface.

Example of Forwarding:

Host 1 sends a frame to Host 2. The switch finds Host 2’s MAC
address in its MAC address table and forwards the frame
specifically to Host 2’s interface.

iii. Discarding
The switch discards frames under certain conditions to prevent loops and redundant traffic.

Same Port Condition: If a switch receives a frame whose destination MAC address maps to the same interface the frame arrived on, it discards the frame.

This behavior prevents unnecessary forwarding.

Example of Discarding:

Host 1 sends a frame to Host 2, but a second switch connected to both Host 1 and Host 2 already knows Host 2 is on the receiving interface. Therefore, it discards the frame to avoid redundant traffic.

4. Data Communication Process Within a Network Segment
4.1 Scenario Overview
Task: Host 1 wants to communicate with Host 2 within the same network
segment.

Hosts: Host 1 has its own IP and MAC addresses but doesn’t yet know Host
2's MAC address. It knows only Host 2's IP address.

Switch: The switch has just powered on, so its MAC address table is
empty.

4.2 Step-by-Step Process of Data Communication


1. Data Encapsulation Process

Before Host 1 can send data to Host 2, it must encapsulate the necessary information into the Ethernet frame, including the source IP and MAC addresses and the destination IP and MAC addresses.

However, Host 1 only knows Host 2’s IP address at this point, not its
MAC address. Therefore, it must perform an ARP (Address Resolution
Protocol) request to learn Host 2's MAC address.

2. Initialization

In the initial state:

Host 1: The ARP cache (where IP-to-MAC mappings are stored) on
Host 1 is empty, as it doesn’t yet know Host 2's MAC address.

Switch: The MAC address table on the switch is also empty because it has just been powered on and hasn’t yet learned any MAC addresses.

3. Sending an ARP Request

Since Host 1 needs to find out Host 2’s MAC address, it sends an ARP
Request. This ARP Request packet includes:

Source MAC Address: Host 1's MAC address.

Source IP Address: Host 1's IP address.

Destination IP Address: Host 2's IP address.

The destination MAC address in this ARP packet is a broadcast address ( FF-FF-FF-FF-FF-FF ), meaning that the ARP Request will be sent to all devices on the local network segment.

4. Flooding the ARP Request

When the ARP Request frame from Host 1 reaches the switch:

The switch checks its MAC address table for an entry matching
Host 2’s MAC address.

Since the switch’s MAC address table is still empty, it does not find
a match for the destination MAC address.

Because the destination is unknown, the switch floods the ARP Request to all other interfaces except the one it was received on.

This ensures that all devices, including Host 2, receive the ARP
Request.

5. MAC Address Learning on the Switch

The switch also learns the MAC address of Host 1 as part of this
process:

When the switch receives the ARP Request from Host 1, it extracts
the source MAC address and incoming interface from the frame.

The switch creates an entry in its MAC address table associating Host 1’s MAC address with the interface on which it arrived.

This process, called MAC address learning, enables the switch to
remember where Host 1 is located in the network.

6. ARP Reply from Host 2

When Host 2 receives the ARP Request, it processes the packet:

Host 2 recognizes that the ARP Request is intended for its own IP
address.

Host 2 then responds with an ARP Reply, which includes:

Source MAC Address: Host 2’s MAC address.

Source IP Address: Host 2’s IP address.

Destination MAC Address: Host 1’s MAC address (since Host 1 initiated the request).

Destination IP Address: Host 1’s IP address.

This ARP Reply is sent as a unicast frame directed specifically to Host 1, meaning only Host 1 will receive it.

7. Forwarding the ARP Reply

When the ARP Reply frame from Host 2 arrives at the switch:

The switch examines the destination MAC address (Host 1’s MAC
address) and looks it up in its MAC address table.

Since the switch previously learned Host 1’s MAC address during
the ARP Request flooding, it finds a match in the MAC address table.

The switch forwards the ARP Reply directly to the interface associated with Host 1’s MAC address, ensuring efficient delivery.

The switch also learns Host 2’s MAC address in this step by recording
Host 2’s MAC address and the interface on which it received the ARP
Reply.

8. Host 1 Updates Its ARP Cache

After receiving the ARP Reply from Host 2:

Host 1 records Host 2’s IP-to-MAC address mapping in its ARP cache, allowing it to communicate directly with Host 2 without needing to perform ARP again unless the entry expires.

With Host 2’s MAC address now known, Host 1 can encapsulate its IP
packet with the correct source and destination MAC addresses and
continue with data transmission.
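Host 2's side of this exchange (step 6 above) can be sketched as a small function. The dictionary fields are illustrative stand-ins for the real ARP packet fields, not an actual packet format:

```python
def process_arp_request(host_ip, host_mac, request):
    """Host-side handling of a broadcast ARP Request (a sketch).

    Only the host whose IP matches the request's target answers: it
    caches the requester's IP-to-MAC mapping and returns a unicast
    ARP Reply addressed back to the requester. Other hosts ignore it.
    """
    if request["target_ip"] != host_ip:
        return None, None  # not for us: drop silently
    cache_update = {request["sender_ip"]: request["sender_mac"]}
    reply = {
        "sender_ip": host_ip,
        "sender_mac": host_mac,
        "target_ip": request["sender_ip"],
        "target_mac": request["sender_mac"],  # unicast back to the requester
    }
    return cache_update, reply
```

The returned reply has the requester's MAC as its destination, which is why the switch can forward it point-to-point instead of flooding.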

8. VLAN Principles and Configuration

1. Issues in Traditional Ethernet Networks


In a traditional Ethernet network setup, all devices are part of a single
broadcast domain, meaning:

Broadcast Domain Issues: When one device sends a broadcast, all devices in that network segment receive it, leading to:

Broadcast Storms: Excessive broadcast traffic can congest the network.

Security Concerns: All devices can receive broadcasts, increasing exposure to potential data leaks.

Unicast Flooding:

If a switch doesn’t know a device’s MAC address, it floods the frame, sending it to all devices.

This may result in junk traffic, leading to unnecessary load on the network.

2. Introduction to VLANs
VLAN technology was developed to counter these issues by:

Segmenting Broadcast Domains:

Each VLAN forms its own broadcast domain, isolating broadcast traffic within that VLAN.

Improving Security and Reducing Traffic:

Devices in separate VLANs can only communicate via a Layer 3 device (like a router).

Geographical Independence:

Devices don’t need to be in the same physical location to be in the same VLAN.

3. VLAN Implementation and Characteristics
IEEE 802.1Q VLAN Tagging: VLANs use a 4-byte tag added to Ethernet
frames to specify the VLAN ID (VID). This tag allows switches to identify
which VLAN each frame belongs to.

Tagged Frames: Frames with a VLAN tag; they specify the VLAN ID.

Untagged Frames: Frames without a VLAN tag.

3.1 Main Fields in a VLAN Frame


TPID (Tag Protocol Identifier): Indicates if a frame is a VLAN frame.

A value of 0x8100 specifies an IEEE 802.1Q frame.

PRI (Priority): A 3-bit field for frame priority, used for Quality of Service
(QoS) to prioritize frames in congested networks.
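The tag layout above can be demonstrated by inserting an 802.1Q header into a raw frame. A minimal sketch: it sets the CFI bit to 0 and does not recompute the FCS.

```python
import struct

def add_vlan_tag(frame: bytes, vid: int, pri: int = 0) -> bytes:
    """Insert an IEEE 802.1Q tag after the two MAC addresses.

    The 4-byte tag is the TPID 0x8100 followed by 16 bits of tag
    control information: 3-bit PRI, 1-bit CFI (0 here), 12-bit VID.
    """
    if not 0 <= vid <= 4095:
        raise ValueError("VID must fit in 12 bits")
    if not 0 <= pri <= 7:
        raise ValueError("PRI must fit in 3 bits")
    tci = (pri << 13) | vid
    tag = struct.pack("!HH", 0x8100, tci)
    return frame[:12] + tag + frame[12:]   # tag sits between SMAC and Type
```

The tagged frame is exactly 4 bytes longer, and a receiver recognises it as 802.1Q by the 0x8100 at offset 12.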

4. VLAN Assignment Methods


Since end devices only send untagged frames, switches assign VLANs to
these frames based on assignment methods:

Interface-Based Assignment:

VLANs are assigned based on the switch’s physical interface.

Each interface has a Port VLAN ID (PVID), which assigns untagged frames arriving at that port to the PVID's VLAN.

Pros: Simple and intuitive.

Cons: The VLAN assignment changes if the device moves to a different port.

MAC Address-Based Assignment:

VLANs are assigned based on the device’s MAC address, using a table that maps MAC addresses to VLAN IDs.

Pros: VLANs remain consistent regardless of which port the device connects to.

Cons: Less secure, as malicious devices can spoof MAC addresses.

5. Layer 2 Ethernet Interface Types
Switch interfaces are categorized based on their role in VLAN
communication:

Access Interface: Connects to end devices and typically doesn’t carry VLAN tags. Used for devices in a single VLAN.

Trunk Interface: Connects to other switches and carries traffic for multiple VLANs using VLAN tags.

Hybrid Interface: Can carry both tagged and untagged traffic from
multiple VLANs, often found in complex network environments.

6. Frame Processing in Different Interface Types


Each interface type handles incoming and outgoing frames differently
based on whether they’re tagged or untagged:

6.1 Access Interface


Receiving Frames:

Untagged Frame: Adds a VLAN tag with the VID set to the PVID of the
interface.

Tagged Frame: Only forwards if the frame’s VID matches the interface’s
PVID; otherwise, it’s discarded.

Sending Frames:

Removes VLAN tags, sending frames as untagged.

6.2 Trunk Interface


Receiving Frames:

Untagged Frame: Adds a VLAN tag with the VID of the PVID and checks
if the VID is in the permitted VLAN list; forwards if allowed.

Tagged Frame: Forwards only if the VID is in the permitted VLAN list.

Sending Frames:

Removes the VLAN tag only if the VID matches the PVID of the
interface; otherwise, frames are sent with tags.

6.3 Hybrid Interface
Receiving Frames:

Untagged Frame: Adds a VLAN tag with the VID of the PVID and checks
if the VID is in the permitted list; forwards if allowed.

Tagged Frame: Forwards only if the VID is in the permitted VLAN list.

Sending Frames:

Forwards frames based on the VID list configuration.

Frames with a VID in the untagged list are sent without tags, while those
in the tagged list retain tags.

6.4 VLAN Interface Operations Comparison


| Interface Type | Receiving (Untagged Frame) | Receiving (Tagged Frame) | Sending (Frame Tagging) |
| --- | --- | --- | --- |
| Access | Adds tag with PVID and permits | Forwards only if VID = PVID; discards others | Removes tags and sends out as untagged |
| Trunk | Adds tag with PVID, checks permitted VLANs | Forwards if VID is in permitted list | Removes tag if VID = PVID; otherwise, sends tagged |
| Hybrid | Adds tag with PVID, checks permitted VLANs | Forwards if VID is in permitted list | Sends based on VID list; untagged if VID is in untagged list |

7. VLAN Tagging Workflow


Receiving Frames:

Access, trunk, and hybrid interfaces tag untagged frames with their
PVID.

Access interfaces permit these frames by default, while trunk and hybrid interfaces check against their permitted VLAN lists.

Sending Frames:

Access interfaces remove VLAN tags from all frames.

Trunk interfaces remove tags only if the frame's VID matches the PVID.

Hybrid interfaces can be configured to selectively send tagged or
untagged frames.
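The receiving and sending rules above can be condensed into two small decision functions. This is a simplified model: real switches also consult the MAC table, and the sketch assumes an untagged frame on a trunk or hybrid port is accepted only if its PVID is in the permitted list.

```python
def ingress(iface_type, pvid, permitted, vid=None):
    """VLAN decision for a frame arriving on a port.

    vid=None models an untagged frame. Returns the VLAN the frame
    is assigned to, or None if the frame is discarded.
    """
    if vid is None:                       # untagged: tag with the port's PVID
        if iface_type == "access":
            return pvid                   # access ports accept untagged frames
        return pvid if pvid in permitted else None
    if iface_type == "access":            # tagged on access: VID must equal PVID
        return vid if vid == pvid else None
    return vid if vid in permitted else None   # trunk/hybrid: permitted-list check

def egress(iface_type, pvid, untagged_list, vid):
    """How a frame in VLAN `vid` leaves the port: 'tagged' or 'untagged'."""
    if iface_type == "access":
        return "untagged"                 # access always strips the tag
    if iface_type == "trunk":
        return "untagged" if vid == pvid else "tagged"
    return "untagged" if vid in untagged_list else "tagged"   # hybrid
```

Tracing a frame through `ingress` on one switch and `egress` on the next reproduces the tag add/strip behaviour of an access-trunk-access path.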

8. VLAN Applications and Planning


VLANs are designed to segment a large network into smaller, isolated sub-
networks, which makes them ideal for specific applications within an
organization.

When planning VLANs, assignments are generally based on:

Service: Grouping VLANs by specific services (e.g., VoIP, Data, or Video services) to control network traffic and optimize quality.

Department: Assigning VLANs to departments (e.g., HR, Sales, Finance) to maintain security and manage broadcast domains.

Application: Separating VLANs for specific applications (e.g., an IoT VLAN for connected devices and a separate VLAN for guest Wi-Fi).

9. VLAN Assignment Methods


There are two ways to assign VLANs:

9.1 Interface-Based VLAN Assignment


In interface-based VLAN assignment, VLANs are assigned to specific
switch interfaces (or ports).

This approach is intuitive and widely used, especially in scenarios where networks need to be segmented by physical ports.

Applicable Scenario:

Imagine multiple enterprises operate within a single building.

To cut costs, they share network infrastructure but need to keep their networks isolated from each other.

Here’s how it would work:

Each enterprise’s network connects to different interfaces on the same Layer 2 switch.

All enterprises share the same internet gateway (egress device).

Solution:

Assign a separate VLAN ID to each enterprise’s interface.

For instance, Enterprise A could use VLAN 10, Enterprise B could use VLAN 20, and so on.

This setup isolates traffic for each enterprise, ensuring that devices
within Enterprise A cannot access Enterprise B’s network and vice
versa.

Benefits of Interface-Based VLAN Assignment

Simplicity: Easy to configure and manage.

Security: Isolates traffic based on physical ports, preventing unintended communication between devices on different VLANs.

Application: Ideal for situations where departments or enterprises share physical locations and need independent access.

9.2 MAC Address-Based VLAN Assignment


In MAC address-based VLAN assignment, VLANs are assigned based
on the device’s MAC address, allowing more dynamic control.

This method is beneficial in scenarios where network mobility and device-level security are priorities.

Applicable Scenario

Consider an enterprise that assigns VLANs by department.

For example, only employees in Finance should access specific financial applications and data, while other departments are restricted.

Here’s how it works:

The network administrator configures the switch to assign devices with specific MAC addresses to a particular VLAN.

For example, all devices in Finance could have their MAC addresses mapped to VLAN 30.

This ensures that only designated devices (those with specific MAC addresses) can join the Finance VLAN, even if new devices connect to the network.

Solution:

Configure MAC-based VLAN assignment on the switch (e.g., SW1).

The switch checks incoming frames against a pre-configured MAC-to-VLAN table.

For instance, any frame from a device with a Finance MAC address is automatically assigned to VLAN 30.

Benefits of MAC Address-Based VLAN Assignment

Mobility: Devices maintain their VLAN assignment regardless of which switch port they connect to.

Security: Ensures only approved devices can access specific VLANs, enhancing network security.

Application: Useful in environments where employees or devices move frequently and access to certain resources is tightly controlled.

9.3 Comparison of Interface-Based and MAC Address-Based VLAN Assignment

| Aspect | Interface-Based VLAN Assignment | MAC Address-Based VLAN Assignment |
| --- | --- | --- |
| Assignment Method | Based on the physical port (interface) on the switch | Based on the MAC address of the connected device |
| Setup Complexity | Simple and widely used | More complex, requires MAC address mapping |
| Flexibility | Less flexible, tied to physical ports | High flexibility, VLANs remain with devices even when moved |
| Security | Provides isolation based on physical ports | Stronger device-level security, controls network access |
| Ideal Scenario | Shared infrastructure for isolated groups | Specific resource access by approved devices or departments |

9. Spanning Tree Protocol
1. STP Overview
1.1 Need for STP in LANs
Increasing Redundancy and Reliability:

In large LANs, network devices (switches, routers) are interconnected to provide redundancy.

For example, if a link fails, another link can still maintain the connection.

Challenges from Redundant Links:

Redundant links can form loops, leading to network issues, such as broadcast storms and
MAC address flapping, which degrade performance and cause network disruptions.

1.2 Problems Caused by Layer 2 Loops


Broadcast Storms:

If a broadcast frame enters a loop, switches flood this frame endlessly.

This results in overwhelming traffic (broadcast storm), making network devices struggle and
services unavailable.

MAC Address Flapping:

Switches update their MAC address table based on frame sources.

When a frame repeatedly enters the network through different ports, the switch continually
updates the MAC entry’s location, causing instability in frame delivery and network
interruptions.

1.3 Introduction to STP and its Functionality


Purpose of STP:

Spanning Tree Protocol (STP) is used to detect and eliminate Layer 2 loops, which helps
maintain a stable and loop-free network topology.

How STP Works:

STP calculates a loop-free path across the network by constructing a spanning tree,
identifying a root switch and blocking unnecessary redundant paths.

STP leverages Bridge Protocol Data Units (BPDUs) to communicate network topology
information, enabling each switch to detect loops and make decisions on blocking redundant
links.

1.4 STP Process and Components


Root Bridge Election:

STP begins by selecting a Root Bridge based on the lowest Bridge ID (a combination of
priority and MAC address).

The Root Bridge acts as the anchor for determining the best path.

Port Roles:

Root Port: The path with the shortest distance to the Root Bridge is designated as the Root
Port.

Designated Port: One port on each segment is marked as the Designated Port to handle
traffic for that segment.

Blocked Port: Any redundant path that doesn’t contribute to the shortest path becomes a
Blocked Port, preventing loops.

Blocking Redundant Links:

By setting specific ports to a blocked state, STP prevents looping paths from affecting the
network.

1.5 STP Adaptation to Network Changes


Dynamic Topology Adjustment:

When a topology change occurs (like link or switch failure), STP recalculates the network
structure, unblocking and reblocking ports as needed to restore loop-free connectivity.

Timers and Convergence:

STP uses timers to monitor and adjust changes in the network to help with timely
convergence, such as:

Hello Time (BPDU exchange interval)

Forward Delay (transition time)

Max Age (duration before removing stale BPDU info)

1.6 Layer 2 vs. Layer 3 Loops


Layer 2 Loops: Occur due to redundancy or misconfigurations at the Data Link layer. STP
specifically addresses Layer 2 loops by blocking redundant paths.

Layer 3 Loops: Typically caused by routing issues; managed with routing protocols (e.g., OSPF,
RIP) and TTL fields in packet headers to prevent infinite forwarding.

1.7 STP Applications in Campus Networks


Campus Network Resilience:

In large campuses, STP is essential for maintaining stable Layer 2 networks by providing
redundant link support while eliminating potential loops.

Network Monitoring and Adaptation:

STP constantly monitors network status, automatically responding to topology changes to
maintain reliable network operation.

1.8 Benefits of STP


Loop-Free Network: Prevents issues like broadcast storms and MAC address flapping.

Automatic Adjustment: Dynamically responds to changes in network topology, providing
efficient redundancy management.



Ensures Network Reliability: With STP in place, Layer 2 networks in campus settings achieve
stable, adaptable, and reliable connectivity.

2. STP Basic Concepts


2.1 Bridge ID (BID) in STP
Each STP-enabled switch has a Bridge ID (BID) that uniquely identifies it on the network.

This BID is crucial for selecting the Root Bridge.

BID Structure:

16-bit Bridge Priority: Configurable from 0 to 65535, defaulting to 32768 on many
devices; a lower value means a higher priority.

48-bit MAC Address: This ensures uniqueness; if priorities are the same, switches compare
MAC addresses, with the lower one becoming the Root Bridge.

Root Bridge Selection: The device with the smallest BID becomes the Root Bridge, which is the
logical center of the STP topology.

2.2 Root Bridge


Purpose: The Root Bridge is the cornerstone of the loop-free STP tree. All STP calculations are
centered around this device.

Selection Process: Switches first compare bridge priorities, and if those are the same, they
compare MAC addresses. The one with the smallest BID wins and becomes the Root Bridge.

Configuration BPDUs: After convergence, the Root Bridge sends Configuration BPDUs at
regular intervals to help other devices keep up with topology changes.
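The BID comparison above can be sketched in a few lines of Python; the switch names and MAC addresses are invented for illustration:

```python
# Elect the STP Root Bridge: the lowest (priority, MAC) pair wins.
# Switch names and MAC addresses are hypothetical.
switches = {
    "SW1": (32768, "00:e0:fc:00:00:01"),
    "SW2": (32768, "00:e0:fc:00:00:02"),
    "SW3": (4096,  "00:e0:fc:00:00:03"),  # lower priority value = higher priority
}

def elect_root(bids):
    # Python compares tuples element by element, matching STP's rule:
    # priority first, then MAC address as the tie-breaker.
    return min(bids, key=lambda name: bids[name])

print(elect_root(switches))  # SW3 wins on priority despite the highest MAC
```

With equal priorities, the comparison falls through to the MAC address, so the switch with the numerically smallest address becomes the Root Bridge.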

2.3 Cost in STP


Port Cost: Each STP-enabled port has a cost that reflects the path’s quality or distance to the
Root Bridge. Ports with higher bandwidths generally have lower costs, resulting in more efficient
routes.

Root Path Cost (RPC): The RPC of a port is the sum of the costs of all inbound ports
along the path from that port to the Root Bridge. Because cost is inversely related to
bandwidth, a path through a 10 Mbps port accumulates a far higher RPC than one through a
1 Gbps port.

Adjusting Port Costs: Port costs can be manually configured to influence the preferred path,
which helps in complex or multi-vendor networks.

2.4 Root Path Cost (RPC)


Definition: Root Path Cost is the total cost of a path from a non-root bridge to the Root Bridge,
calculated by summing up all port costs along the path.

Path Selection: If there are multiple paths to the Root Bridge, the switch selects the one with the
lowest RPC, ensuring optimal paths are used. The Root Bridge itself has an RPC of 0 since it’s
the topology’s center.

2.5 Port ID (PID)


PID Structure: The Port ID consists of:



4-bit Port Priority: Default is 128; configurable from 0 to 240 in multiples of 16 (only the upper 4 bits of the priority value are carried in the Port ID).

12-bit Port Number: This value represents the specific port on the device.

Purpose: PID helps determine port roles within the STP topology, influencing whether a port
becomes a Root Port, Designated Port, or Alternate Port.

2.6 Bridge Protocol Data Units (BPDUs)


Function: BPDUs are critical to STP operations. These packets carry information needed for
topology calculation and loop prevention.

Types of BPDUs:

Configuration BPDU: Contains information like BID, RPC, and PID, essential for STP topology
calculations.

Topology Change Notification (TCN) BPDU: Used to signal network changes, prompting
switches to update their topologies.

BPDU Comparison: STP-enabled devices compare BPDUs using specific fields (Root ID, RPC,
Bridge ID, Port ID) to determine the optimal BPDU, helping elect the Root Bridge, Root Port, and
Designated Port.

2.7. BPDU Comparison Rules and Process


Comparison Order:

1. Smallest Root BID: Used for Root Bridge election.

2. Smallest RPC: Used for optimal path selection to the Root Bridge.

3. Smallest BID: Used when the Root Bridge has been elected, but Designated Ports and Root
Ports need to be selected.

4. Smallest PID: If all other values are identical, PID is the final determining factor.
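Because each rule only applies when the previous fields tie, the four rules behave exactly like an ordered tuple comparison. A minimal sketch, with hypothetical field values:

```python
# Compare two Configuration BPDUs as STP does: field by field,
# the smaller value winning at the first field that differs.
# Each BPDU is modeled as (root_bid, rpc, sender_bid, port_id).
def better_bpdu(a, b):
    # Tuple comparison applies the four rules in order:
    # 1. smallest Root BID   2. smallest RPC
    # 3. smallest sender BID 4. smallest PID
    return a if a <= b else b

# Same root and RPC, so the sender BID decides.
bpdu_a = (4096, 20, 32768, 128)
bpdu_b = (4096, 20, 28672, 129)
print(better_bpdu(bpdu_a, bpdu_b))  # bpdu_b wins on the smaller sender BID
```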

2.8 STP Operations


STP Process:

1. Select the Root Bridge: Based on the smallest BID.

2. Elect Root Ports: Each non-root switch elects one Root Port, which provides the optimal path
to the Root Bridge.

3. Select Designated Ports: For each segment, one port is chosen as the Designated Port to
handle traffic for that segment.

4. Block Alternate Ports: Any port that is neither a Root nor a Designated Port is set to an
Alternate (Blocked) Port to prevent loops.

Port Roles:

Root Port: The best path to the Root Bridge, only one per switch (except the Root Bridge
itself).

Designated Port: Forwarding port on each segment that communicates directly with the
network segment.

Alternate Port: A blocked port, preventing any loops in the topology.



2.9 Configuration BPDU Forwarding Process
Initial BPDU Transmission: When switches start, they assume they are the Root Bridge and
begin sending Configuration BPDUs.

Root Bridge Election: After comparing BPDUs, only the switch with the smallest BID continues as
the Root Bridge. Other switches update their roles and begin forwarding Configuration BPDUs
based on the new network topology.

3. STP Calculation
3.1 Selecting the Root Bridge
Broadcasting BPDUs: When STP starts, every switch on the network considers itself the Root
Bridge and begins broadcasting Configuration BPDUs (Bridge Protocol Data Units) with its own
Bridge ID (BID).

BID Comparison: Each BID contains two parts:

Bridge Priority: A 16-bit field where a lower value has higher priority.

MAC Address: A 48-bit field, unique to each switch. If priorities match, the switch with the
smallest MAC address becomes the Root Bridge.

Role of the Root Bridge:

The Root Bridge acts as the logical center of the network.

It continuously sends BPDUs, providing a reference for topology calculations.

As the network changes, STP may elect a new Root Bridge, preempting the current one if a
switch with a lower BID joins the network.

3.2 Selecting the Root Port on Each Non-root Bridge


Definition: Each non-root bridge selects a Root Port, which is the port with the shortest path to
the Root Bridge.

Port Selection Criteria:

1. Shortest Root Path Cost (RPC): The port with the lowest RPC to the Root Bridge is chosen.

2. If there’s a tie in RPC, the BID of the connecting switch is considered.

3. If BIDs match, the Port ID (PID) is compared to choose the port.

Root Port Role:

A non-root bridge uses its Root Port to receive BPDUs from the Root Bridge.

The Root Port ensures a single, optimal path to the Root Bridge, preventing loops from non-
root bridges.
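The three selection criteria above can be applied as ordered tie-breakers with a single `min()` call; the port names and values below are assumptions for illustration:

```python
# Pick the Root Port on a non-root bridge: lowest RPC first, then the
# neighbor switch's BID, then the Port ID.
candidates = [
    # (port_name, rpc_via_this_port, neighbor_bid, port_id)
    ("GE0/0/1", 200, 32768, 128),
    ("GE0/0/2", 200, 32768, 129),
    ("GE0/0/3", 20000, 4096, 128),
]

def select_root_port(ports):
    # min() over (RPC, neighbor BID, PID) applies the tie-breakers in order.
    return min(ports, key=lambda p: (p[1], p[2], p[3]))[0]

print(select_root_port(candidates))  # GE0/0/1 — ties on RPC and BID, wins on PID
```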

3.3 Selecting a Designated Port on Each Link


Definition: The Designated Port is responsible for forwarding BPDUs to each network segment
or link.

Selection Process:



1. The port with the smallest Root Path Cost (RPC) to the Root Bridge becomes the Designated
Port.

2. If RPCs are equal, the switch with the smallest BID is chosen.

3. If BIDs are identical, the port with the smallest PID becomes the Designated Port.

Role:

Designated Ports manage network segments to ensure that each has a unique path to the
Root Bridge.

In most cases, all ports on the Root Bridge become Designated Ports.

3.4 Blocking Non-designated Ports


Non-designated Ports (Alternate Ports): Any port that is neither a Root Port nor a Designated
Port becomes a Non-designated Port, also called an Alternate Port.

Blocking the Port: STP blocks Alternate Ports to prevent data frames from circulating in loops
across the network.

Function of Blocked Ports:

Blocked ports only listen to and process BPDUs.

They do not forward data frames, but they continue to receive and process BPDUs so
that network changes can still be detected.

3.5 STP Port States and Transitions


STP-enabled devices operate through a sequence of port states to reach a stable, loop-free
topology. These states are:

Disabled:

A port in the Disabled state doesn’t participate in STP.

It cannot send or receive any data frames, BPDUs, or user traffic.

Blocking:

Initial State: When a switch port is enabled, it enters the Blocking state.

Function: The port listens for BPDUs but doesn’t forward BPDUs or user traffic. It also
doesn’t learn MAC addresses.

Role: Primarily used for Alternate Ports to prevent loops.

Listening:

Transition: If a port is elected as the Root or Designated Port, it moves to the Listening state.

Function: The port can forward BPDUs and listen to BPDUs but doesn’t forward user traffic
or learn MAC addresses.

Purpose: Allows the network to determine the topology without forwarding data, preventing
loops during the setup phase.

Learning:

Transition: After a Forward Delay timer expires, the port moves to Learning.



Function: The port learns MAC addresses to populate the MAC address table but does not
yet forward user traffic.

Purpose: This state reduces temporary loops by learning MAC addresses before data
forwarding begins.

Forwarding:

Final State: Only Root and Designated Ports enter the Forwarding state.

Function: The port sends and receives both user traffic and BPDUs.

Purpose: Enables normal data transmission, completing the STP convergence.

3.6 STP Port State Transition Process


Here’s how a port transitions through these states to reach Forwarding:

1. Initialization: Port starts in Blocking.

2. Elect Role: If elected as Root or Designated, it moves to Listening.

3. Timer Expiry: After the Forward Delay timer expires, the port transitions to Learning.

4. Final Check: If it still retains the role after the next timer, it moves to Forwarding.
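The sequence above can be sketched as a tiny state machine; no real timing is performed, and the role names are simplified:

```python
# Minimal sketch of the STP port state sequence.
FORWARD_DELAY = 15  # seconds spent in each intermediate stage by default

def transition_sequence(elected_role):
    """Return the states a port passes through after initialization."""
    states = ["Blocking"]
    if elected_role in ("root", "designated"):
        states.append("Listening")   # elected as Root/Designated Port
        states.append("Learning")    # after one Forward Delay interval
        states.append("Forwarding")  # after a second Forward Delay interval
    return states

print(transition_sequence("root"))       # full path to Forwarding
print(transition_sequence("alternate"))  # stays blocked
```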

3.7 Example of STP in Action


Consider a network with three switches (SW1, SW2, SW3):

1. Root Bridge Election:

All switches declare themselves Root and exchange BPDUs.

SW1 has the lowest BID, so it becomes the Root Bridge.

2. Root Port Selection:

SW2 and SW3 compare their RPCs and select the port with the shortest path to SW1 as their
Root Port.

3. Designated Port Selection:

For each segment, the port with the lowest RPC to the Root Bridge becomes the Designated
Port.

4. Blocking Non-designated Ports:

Any remaining ports are blocked, creating a loop-free path from each segment to the Root
Bridge.

4. Topology Change
4.1 Root Bridge Fault
What Happens: If the Root Bridge fails, non-root switches stop receiving its BPDUs.

Rectification Process:

1. Detection:

Non-root bridges detect a fault when they no longer receive BPDUs from the Root
Bridge. Each non-root switch has a Max Age Timer set to 20 seconds. When this timer



expires, the BPDU records are invalidated.

2. Re-election of Root Bridge:

Each non-root bridge initiates a re-election process by sending Configuration BPDUs to
one another to elect a new Root Bridge based on the smallest BID.

3. Port Transition:

For example, SW3’s port A transitions to Forwarding state after two intervals of the
Forward Delay timer (15 seconds each by default).

4. Convergence Time:

The total convergence time from a root bridge failure is approximately 50 seconds (Max
Age timer of 20 seconds + two Forward Delay intervals of 15 seconds each).
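The convergence figures quoted here follow directly from the default timers:

```python
# Default STP timers (seconds) and the convergence times derived from them.
MAX_AGE = 20
FORWARD_DELAY = 15

# Root bridge failure: stored BPDU info must age out first, then the
# replacement port passes through Listening and Learning.
root_failure = MAX_AGE + 2 * FORWARD_DELAY

# Direct link failure: the fault is detected immediately on the port,
# so only the two Forward Delay stages apply.
direct_link_failure = 2 * FORWARD_DELAY

print(root_failure)         # 50
print(direct_link_failure)  # 30
```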

4.2 Direct Link Fault


What Happens: If the link connecting the Root Port of a switch fails, an Alternate Port takes
over.

Rectification Process:

1. Detection:

The affected switch detects the fault on its Root Port.

2. Alternate Port Transition:

The Alternate Port transitions from Blocking to Listening, Learning, and finally
Forwarding.

3. New Root Port:

The Alternate Port is designated as the new Root Port.

4. Convergence Time:

This process typically takes around 30 seconds (two intervals of the Forward Delay
timer).

4.3 Indirect Link Fault


What Happens: An indirect link failure happens when a link indirectly connected to a switch (not
directly) fails.

Rectification Process:

1. Detection:

When an indirect link fails, switches may stop receiving BPDUs. For instance, if SW2 and
SW1 are indirectly linked through a fault, SW2 stops receiving BPDUs from SW1 after the
Max Age timer expires (20 seconds).

2. Assumption of Root Bridge:

SW2, assuming the Root Bridge has failed, considers itself the Root and starts sending its
own BPDUs.

3. BPDU Exchanges:



SW3’s Alternate Port enters the Listening state after 20 seconds. Both SW2 and SW3
exchange BPDUs and re-evaluate their roles.

4. Re-election and Convergence:

SW2 and SW3 determine that the BPDU from SW3 is superior, stopping SW2 from
declaring itself as the Root. They recalculate the STP, leading to stable convergence.

5. Convergence Time:

The convergence time is around 50 seconds (Max Age timer of 20 seconds + two
Forward Delay intervals of 15 seconds each).

4.4 Impact of Topology Changes on MAC Address Table


What Happens: When topology changes, the forwarding path for traffic changes, potentially
leading to outdated or incorrect entries in the MAC address table.

Solution:

1. Topology Change Notification (TCN):

When SW3 detects a topology change, it sends Topology Change Notification (TCN)
BPDUs to its upstream device (SW2).

2. Propagation:

SW2 acknowledges by setting the Topology Change Acknowledgement (TCA) bit and
forwards the TCN BPDU toward the Root Bridge.

3. Triggering MAC Aging Updates:

Upon receiving TCN BPDUs, the Root Bridge sets the Topology Change (TC) bit in the
BPDU’s Flags field, instructing downstream devices to adjust MAC address aging times.

4. Reduced Aging Time:

The MAC address aging time is shortened from 300 seconds to the Forward Delay time
(typically 15 seconds), allowing outdated entries to be purged quickly.

5. MAC Table Refresh:

Once the old entries expire, the switches learn the new MAC addresses based on the
new topology, ensuring accurate data forwarding paths.
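The aging-time change in steps 3–5 amounts to switching between two timer values; a sketch, not a real implementation:

```python
# MAC address aging time before and after a Topology Change notification.
DEFAULT_AGING = 300   # seconds, normal operation
FORWARD_DELAY = 15    # seconds, used while a TC is being propagated

def aging_time(tc_received):
    # On receiving a BPDU with the TC bit set, switches temporarily
    # shorten the aging time so stale entries are flushed quickly.
    return FORWARD_DELAY if tc_received else DEFAULT_AGING

print(aging_time(False))  # 300
print(aging_time(True))   # 15
```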

4.5 STP Port State Transitions During Failures


STP employs a series of port states to manage traffic flow and prevent loops.

Here’s a summary of each state and the transition process during failure scenarios:

1. Blocking: Initial state to prevent loops. Listens to BPDUs only.

2. Listening: Transition state where BPDUs are sent and received, but no data forwarding
occurs.

3. Learning: The port starts learning MAC addresses but still doesn’t forward traffic.

4. Forwarding: The final state where the port can send and receive both data and BPDUs.

5. RSTP



5.1 Disadvantages of STP
Slow Convergence: STP is slow to converge, taking up to 50 seconds to respond to topology
changes (Max Age Timer + 2 × Forward Delay). This delay can lead to significant service
disruptions, especially in networks with frequent topology changes.

Complex Port Roles: STP doesn’t clearly differentiate between port roles and states. Ports in
Listening, Learning, and Blocking states appear the same to users because none of these ports
forward traffic. This complexity can be confusing for network administrators.

Dependency on Timers: STP relies on timers (like the Max Age timer) for convergence, resulting
in additional delays in topology change detection and reconfiguration.

5.2 Overview of RSTP (IEEE 802.1w)


What is RSTP? RSTP, defined in IEEE 802.1w, is an enhancement of STP that provides faster
convergence and simplifies network topology management.

It’s backward compatible with STP but offers additional port roles and optimizations for quicker
response to network changes.

Convergence Improvements:

RSTP enables faster convergence by utilizing a Proposal/Agreement handshake mechanism.

RSTP processes configuration BPDUs with optimized settings and uses a shorter timeout
interval for BPDUs, reducing convergence times.

Edge Ports:

RSTP introduces the Edge Port concept for ports that connect directly to end devices (like
computers).

Edge ports bypass the STP state transition process and immediately enter the Forwarding
state, improving performance for user connections.

5.3 Improvements in RSTP


1. Configuration BPDU Optimization:

RSTP sends configuration BPDUs more efficiently, using a shorter timeout interval for faster
convergence.

Unlike STP, which requires the root bridge to send BPDUs, all RSTP-enabled switches can
send BPDUs, expediting the convergence process.

2. Simplified BPDU Processing:

RSTP uses a Flags field within the BPDU format to define port roles, enabling faster and
more accurate BPDU processing.

Inferior BPDU Handling: RSTP optimizes handling of inferior BPDUs (those received from
switches with a higher BID), further accelerating convergence in response to topology
changes.

5.4 RSTP Port Roles


RSTP introduces new port roles to manage redundancy and improve response times during topology
changes:

Root Port:



Similar to STP, the Root Port is the port with the best path to the Root Bridge.

Designated Port:

Designated Ports connect directly to network segments, handling BPDU forwarding to those
segments.

Alternate Port:

Functions as a backup for the Root Port, providing an alternate path to the Root Bridge.

Alternate Ports transition from Blocking to Forwarding if the Root Port fails.

Backup Port:

Backup Ports are redundant links for Designated Ports within the same switch, offering a
secondary path in case of failure.

Edge Port:

Edge Ports connect directly to end devices and are immediately set to Forwarding state.

They don’t participate in the RSTP topology calculation but convert to regular ports if they
receive BPDUs, triggering a spanning tree recalculation to prevent loops.

5.5 RSTP Port States


RSTP reduces the number of port states from five (in STP) to three, simplifying the transition
process and improving convergence.

STP Port State                | RSTP Port State | Port Role
------------------------------|-----------------|------------------------------
Forwarding                    | Forwarding      | Root port or designated port
Learning                      | Learning        | Root port or designated port
Listening, Blocking, Disabled | Discarding      | Alternate port or backup port

Discarding:

Combines the Blocking, Listening, and Disabled states of STP.

A Discarding port neither forwards traffic nor learns MAC addresses.

Used for Alternate and Backup Ports that are waiting in case a Root or Designated Port fails.

Learning:

The Learning state allows the port to learn MAC addresses but not to forward user traffic.

RSTP uses the Learning state to prepare the network without sending frames, thus avoiding
temporary loops.

Forwarding:

In the Forwarding state, a port can both send and receive user traffic and learn MAC
addresses.

Only Root Ports, Designated Ports, and Edge Ports enter this state.

5.6 Convergence Process in RSTP


1. Proposal/Agreement Mechanism:



Instead of waiting for timer expirations, RSTP uses a Proposal/Agreement mechanism
between switches for fast state transitions.

When a switch detects a new topology, it sends a Proposal BPDU to its neighbor, suggesting
a role change.

If the neighboring switch agrees, it responds with an Agreement BPDU, and both switches
adjust their port roles quickly.

2. Direct Link and Indirect Link Failures:

Direct Link Failure: When the link connected to a Root Port fails, the Alternate Port can take
over as the Root Port immediately.

Indirect Link Failure: RSTP handles indirect link failures faster by optimizing BPDU
processing, which reduces the convergence time compared to STP’s 50-second wait.
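The Proposal/Agreement exchange can be sketched as a simplified event trace; switch and port names are hypothetical, and the sync step is reduced to a single line:

```python
# Simplified trace of the RSTP Proposal/Agreement handshake on a
# point-to-point link between two switches.
def handshake(upstream_port, downstream_port):
    events = []
    events.append(f"{upstream_port}: send Proposal")
    # The downstream switch blocks its other non-edge ports (sync),
    # then agrees, letting the upstream port forward immediately
    # instead of waiting for Forward Delay timers.
    events.append(f"{downstream_port}: sync, send Agreement")
    events.append(f"{upstream_port}: enter Forwarding")
    return events

for event in handshake("SW1-GE0/0/1", "SW2-GE0/0/1"):
    print(event)
```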

5.7 Summary of Key Differences between STP and RSTP


Feature                      | STP                                                     | RSTP
-----------------------------|---------------------------------------------------------|----------------------------------------------
Convergence Time             | Up to 50 seconds                                        | Typically under 10 seconds
Port States                  | 5 (Blocking, Listening, Learning, Forwarding, Disabled) | 3 (Discarding, Learning, Forwarding)
Edge Port Concept            | Not supported                                           | Edge ports immediately enter Forwarding state
Backup and Alternate Ports   | Not explicitly defined                                  | Backup and Alternate ports improve redundancy
Proposal/Agreement Mechanism | Not supported                                           | Supported, for faster convergence

5.8 Example Scenario: Convergence with RSTP


Consider three switches: SW1 (Root Bridge), SW2, and SW3.

Direct Link Failure:

If the Root Port of SW2 fails, the Alternate Port on SW2 can quickly take over as the new
Root Port, avoiding the lengthy STP convergence time.

Indirect Link Failure:

If an indirect link between SW1 and SW2 fails, SW2 quickly sends BPDUs to SW3, initiating
the Proposal/Agreement process to re-establish an optimal path.

6. RSTP Advancements
6.1 Defects of STP/RSTP: All VLANs Share One Spanning Tree
Single Spanning Tree Limitation: In STP and RSTP, all VLANs share a single spanning tree. This
setup has limitations:

Inefficient Link Utilization: Only one active path is used, and redundant links are blocked,
underutilizing bandwidth.

Processor Overload in Large VLAN Networks: If multiple VLANs are configured, computing
a single spanning tree for each VLAN places a heavy load on switch processors.



Advancements to Address These Issues:

VLAN-Based Spanning Tree (VBST)

Multiple Spanning Tree Protocol (MSTP)

iStack and Smart Link

6.2 VLAN-Based Spanning Tree (VBST)


Overview: VBST creates separate spanning trees for each VLAN, allowing different paths for
different VLANs.

Benefits of VBST:

Loop Elimination: Just like STP, VBST prevents Layer 2 loops across VLANs.

Efficient Link Utilization and Load Balancing: Each VLAN can use a different path, balancing
traffic and optimizing bandwidth use.

Reduced Management Complexity: VBST reduces the need for frequent configuration
changes and minimizes maintenance costs.

6.3 Multiple Spanning Tree Protocol (MSTP)


Overview: MSTP (defined in IEEE 802.1s) is an evolution of STP/RSTP designed to address
multiple VLANs without needing an individual spanning tree for each VLAN.

How MSTP Works:

MST Regions: MSTP divides a network into regions where each region runs its own set of
spanning trees, called Multiple Spanning Tree Instances (MSTIs).

Mapping VLANs to MSTIs: VLANs are mapped to MSTIs based on their traffic patterns and
required redundancy. For instance:

Even-numbered VLANs may be mapped to MSTI 1.

Odd-numbered VLANs may be mapped to MSTI 2.

Independent Topology Calculations: Each MSTI calculates its topology independently,
allowing traffic to balance across different paths.

Benefits of MSTP:

Resource Efficiency: Multiple VLANs with similar traffic paths are bound to a single MSTI,
reducing CPU and memory usage.

Load Balancing: MSTP provides more granular control over traffic distribution, balancing it
across different paths.

Simplified Configuration: Only a few spanning trees need to be configured, reducing
administrative overhead and minimizing network congestion.
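The even/odd VLAN-to-MSTI mapping described above is easy to express directly; the VLAN IDs here are hypothetical:

```python
# Map even-numbered VLANs to MSTI 1 and odd-numbered VLANs to MSTI 2.
vlans = range(10, 21)  # hypothetical VLAN IDs 10..20
mapping = {vlan: 1 if vlan % 2 == 0 else 2 for vlan in vlans}

# Each MSTI computes its topology independently, so the two groups of
# VLANs can be forwarded along different physical paths.
msti1 = sorted(v for v, inst in mapping.items() if inst == 1)
msti2 = sorted(v for v, inst in mapping.items() if inst == 2)
print(msti1)  # [10, 12, 14, 16, 18, 20]
print(msti2)  # [11, 13, 15, 17, 19]
```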

6.4 iStack (Intelligent Stack)


Overview: iStack enables aggregation (core) switches to function as a single logical device,
enhancing network stability and reducing administrative complexity.

iStack Functionality:



Logical Device Management: Multiple iStack-capable switches are stacked to form a single
logical entity, managed through a single IP address. This simplifies network management and
troubleshooting.

Improved Bandwidth Utilization: Link aggregation between stacked switches and access
switches eliminates Layer 2 loops and increases link bandwidth usage.

Benefits of iStack:

Simplified Network Topology: The network becomes a simplified tree topology, improving
organization and manageability.

Enhanced Network Reliability: iStack improves network resiliency by providing fault
tolerance within the logical device.

Scalability: New switches can be added to the stack without major configuration changes,
supporting network growth.

6.5 Smart Link


Overview: Smart Link is designed for dual-uplink networking. It ensures continuity by enabling
fast switchover from an active link to a standby link if the active link fails.

How Smart Link Works:

Active and Standby Links: In a dual-uplink configuration, Smart Link designates one link as
active (used for data traffic) and the other as standby (blocked from forwarding data).

Fast Switchover: If the active link fails, Smart Link rapidly switches traffic to the standby link,
restoring connectivity in milliseconds.

Benefits of Smart Link:

Loop Prevention: By blocking one of the dual uplinks, Smart Link eliminates Layer 2 loops.

Minimal Switchover Delay: Since Smart Link doesn’t involve protocol packet exchanges, it
can switch traffic with low latency, ensuring network reliability.

Simplicity and Speed: Smart Link is easy to configure and provides near-instantaneous
failover.

6.6 Comparison of Advanced STP Techniques


Feature                   | STP/RSTP                           | VBST                            | MSTP                        | iStack                         | Smart Link
--------------------------|------------------------------------|---------------------------------|-----------------------------|--------------------------------|----------------------------
Primary Use               | Single spanning tree for all VLANs | Separate trees for each VLAN    | Grouped VLANs share an MSTI | Aggregation switch stacking    | Dual-uplink redundancy
Loop Prevention           | Yes                                | Yes                             | Yes                         | Yes                            | Yes
Load Balancing            | Limited                            | Improved through per-VLAN trees | Optimized through MSTIs     | Enhanced with link aggregation | Fast link switchover
Convergence Time          | Slow                               | Faster                          | Faster than STP/RSTP        | Immediate                      | Milliseconds
Administrative Simplicity | Complex in large networks          | Simplified for VLANs            | Reduced with MST regions    | Single IP management           | Easy to configure
Failover Mechanism        | Slow (timer-based)                 | Moderate (per VLAN)             | Faster (Proposal/Agreement) | n/a                            | Fast (active/standby links)

6.7 Practical Use of Advanced STP Mechanisms in Campus Networks


1. VBST and MSTP for Load Balancing:

Use VBST or MSTP in networks with multiple VLANs to efficiently distribute traffic across
different paths, preventing congestion on a single link.

By mapping VLANs to different instances (MSTIs), MSTP simplifies large networks and
maximizes link utilization.

2. iStack in Aggregation Networks:

iStack is beneficial in campus networks where multiple aggregation switches need to
function as a single unit.

Stacked switches eliminate the need for complex spanning tree setups, reduce port blocking,
and improve network resiliency.

3. Smart Link for Dual-Uplink Redundancy:

For devices with dual uplinks (e.g., servers, firewalls), configure Smart Link on the access
switch to prevent loops while ensuring a backup path is available.

In case of a link failure, Smart Link automatically switches traffic to the standby link, reducing
downtime and maintaining service continuity.



10. Implements Communication
Between VLANs

1. Inter-VLAN Communication Overview


Layer 2 Communication:

Devices within the same VLAN can communicate directly using Layer 2
switching without needing Layer 3 devices (such as routers or Layer 3
switches).

Layer 3 Communication (Inter-VLAN):

For communication between different VLANs, a Layer 3 device is required to route the
traffic, as each VLAN belongs to a separate IP segment.

Common Layer 3 devices include:

Routers

Layer 3 switches

Firewalls

2. Using Routers for Inter-VLAN Communication


There are two main methods to use routers for inter-VLAN communication:

Physical interfaces

Sub-interfaces

i. Using Router Physical Interfaces


Each VLAN is configured on a Layer 2 switch, and each VLAN connects to
the router through a dedicated physical interface.

Each physical interface serves as the default gateway for PCs within its
respective VLAN (e.g., VLAN 10 has IP 192.168.10.254, VLAN 20 has IP
192.168.20.254).

10. Implements Communication Between VLANs 1


ii. Using Router Sub-Interfaces
Sub-interfaces are virtual interfaces created on a single physical interface
to support multiple VLANs.

Each sub-interface is assigned a VLAN ID and configured as the default gateway for its
VLAN.

VLAN Tag Termination: Sub-interfaces remove VLAN tags from incoming packets and add
VLAN tags to outgoing packets for VLAN separation.

3. Using Layer 3 Switches and VLANIF Interfaces


Layer 3 switches use VLANIF interfaces to provide inter-VLAN
communication.

Each VLANIF interface serves as a gateway for a VLAN, allowing the switch
to route packets between VLANs internally.

4. VLANIF Forwarding Process


When a device in VLAN 10 wants to communicate with a device in VLAN 20:

1. The device sends its packet to VLANIF 10.

2. The Layer 3 switch checks the destination IP and routes the packet to
VLANIF 20.

3. The packet is then forwarded to the destination in VLAN 20.
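The routing decision in step 2 is essentially a connected-route lookup: find the VLANIF whose subnet contains the destination IP. The subnets and interface names below are assumptions for illustration:

```python
import ipaddress

# Which VLANIF's subnet contains the destination IP?
vlanif_table = {
    "VLANIF10": ipaddress.ip_network("192.168.10.0/24"),
    "VLANIF20": ipaddress.ip_network("192.168.20.0/24"),
}

def route_lookup(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    for vlanif, subnet in vlanif_table.items():
        if dst in subnet:
            return vlanif
    return None  # no connected route; hand off to a default route

print(route_lookup("192.168.20.15"))  # VLANIF20
```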

5. Layer 3 Communication Process and Example


Network Topology: Suppose we have a network with VLANs, an access
switch (SW1), an aggregation switch (SW2), and a router with NAT (R1)
connecting to the ISP.

Packet Flow:

PC Processing: The PC checks the destination IP; if it’s on a different network, it
forwards the packet to its gateway.

SW1: Searches its MAC address table, forwards the frame to SW2.

SW2: Looks up the destination IP address in its routing table, forwards the packet to
the appropriate VLANIF, and routes it accordingly.



R1 with NAT: Performs NAT for packets going to the internet, translating
private IPs to a public IP.

6. Network Address and Port Translation (NAPT)


Purpose: NAPT translates private IP addresses and ports to a single public
IP, allowing multiple devices on a private network to access the internet.

Example in Process:

R1 NAT: When R1 receives packets from VLANs heading out, it applies NAPT, translating
private IPs to a single public IP for internet access. The response is directed back
based on port numbers.
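A toy NAPT table makes the port-based reverse mapping concrete. The public IP is a documentation address, and the port allocation scheme is an assumption for illustration:

```python
import itertools

# Each outbound (private IP, port) flow gets a unique port on the single
# public IP; inbound replies are mapped back by that public port.
PUBLIC_IP = "203.0.113.1"          # hypothetical public address
_next_port = itertools.count(10000)
nat_table = {}   # (private_ip, private_port) -> public_port
reverse = {}     # public_port -> (private_ip, private_port)

def translate_out(private_ip, private_port):
    key = (private_ip, private_port)
    if key not in nat_table:
        port = next(_next_port)
        nat_table[key] = port
        reverse[port] = key
    return PUBLIC_IP, nat_table[key]

def translate_in(public_port):
    return reverse.get(public_port)

src = translate_out("192.168.10.5", 51000)
print(src)                   # the packet now carries the public IP and port
print(translate_in(src[1]))  # the reply maps back to the original host
```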



11. Ethernet Link Aggregation
and Switch Stacking

1. Network Reliability Requirements


Network reliability is about ensuring continuous network service despite
failures.

It’s especially crucial because diverse applications and value-added services (VASs)
demand high availability.

Reliability is essential for avoiding service disruptions, which can lead to severe
operational and economic losses.

Network reliability can be managed at three levels:

1. Card Reliability

2. Device Reliability

3. Link Reliability

1.1 Card Reliability


In modular switches, card reliability focuses on the components within a
switch chassis, which typically include:

Power Modules: Provide power to the switch. Redundant power modules ensure the switch
remains powered if one fails.

Fan Modules: Maintain proper cooling; redundancy helps in case of a fan failure.

Main Processing Units (MPUs): The brain of the switch. Multiple MPUs
can operate in master-backup mode for failover.

Switch Fabric Units (SFUs): Handle the data forwarding between ports.
Multiple SFUs provide redundancy, so data forwarding continues even if
one SFU fails.

Line Processing Units (LPUs): These cards contain the interfaces for
data transmission. In case an LPU fails, only the interfaces on that LPU
stop forwarding data; other LPUs continue to operate.

11. Ethernet Link Aggregation and Switch Stacking 1


For instance, modular switches like Huawei’s S12700E-8 have several LPU, SFU, and MPU
slots. This setup ensures that if one MPU or SFU fails, the device continues to operate
normally.

1.2 Device Reliability


Device reliability focuses on providing redundancy for critical devices like
switches and routers in the network.

This reliability can be established through:

No Backup:

This configuration lacks redundancy.

For example, if an aggregation switch fails, all traffic from connected downstream
switches will be disrupted.

Master/Backup Mode:

Also known as active/passive mode, where one device is the master while another acts as
a backup.

If the primary device or root port fails, the backup device or alternative port takes
over without interrupting service.

1.3 Link Reliability

Link reliability is achieved by creating backup links for critical connections.

This typically involves:

Backup Links:

Additional links between devices serve as standbys.

For instance, STP blocks the backup link under normal conditions to
prevent loops.

However, if the main link fails, STP unblocks the backup link,
enabling data flow.

2. Principle and Configuration of Link Aggregation


2.1 Principle of Link Aggregation



Link aggregation, often called Ethernet Trunk (Eth-Trunk), combines
multiple physical links into a single logical link to boost bandwidth and
provide redundancy without hardware upgrades.

By bundling multiple links, traffic can be spread across multiple paths, increasing both bandwidth and reliability.

2.2 Increasing Link Bandwidth


Without link aggregation, protocols like the Spanning Tree Protocol (STP)
allow only one link to forward traffic between devices, leaving any
additional links inactive.

With link aggregation, all links in the group can actively participate in
forwarding traffic, thereby increasing the overall link bandwidth.

2.3 Key Concepts in Eth-Trunk


1. Link Aggregation Group (LAG): The group created by aggregating multiple
physical links into one logical link.

2. Member Interfaces and Links: Physical links that are part of the LAG.

3. Active and Inactive Interfaces: Links actively forwarding traffic are active,
while others remain as backups, becoming active if the primary links fail.

4. Link Aggregation Modes:

Manual Mode: Configurations are manually managed without a protocol to confirm or negotiate link status.

LACP Mode: Uses the Link Aggregation Control Protocol (LACP) to dynamically negotiate and manage links, selecting active interfaces based on priorities.

2.4. Configuration of Eth-Trunk


i. Manual Mode

In manual mode, administrators manually create and configure the Eth-Trunk and its member interfaces.

All links are active by default, sharing traffic evenly.

Fault Tolerance: If a link fails, traffic redistributes across the remaining active links.



Drawbacks: Manual mode does not exchange configuration packets,
requiring manual checks to confirm configurations.
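The manual-mode behavior above can be sketched in a few lines. This is a conceptual model only (the `ManualEthTrunk` class, interface names, and CRC32 hash are stand-ins for real forwarding logic, not vendor code): all member links are active and share traffic, and when a link fails the device simply rehashes flows over the survivors.

```python
# Minimal sketch (not vendor code) of manual-mode Eth-Trunk behavior.
import zlib

class ManualEthTrunk:
    def __init__(self, members):
        # In manual mode all configured member links are active by default.
        self.active = list(members)

    def pick_link(self, flow_id):
        # Even sharing: a stable hash maps each flow onto one active link.
        return self.active[zlib.crc32(flow_id.encode()) % len(self.active)]

    def link_down(self, member):
        # No LACP negotiation: the failed link is just dropped from use
        # and traffic redistributes across the remaining links.
        self.active.remove(member)

trunk = ManualEthTrunk(["GE0/0/1", "GE0/0/2", "GE0/0/3"])
trunk.link_down("GE0/0/2")         # simulate a member-link failure
print(trunk.pick_link("flow-42"))  # the flow lands on a surviving link
```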

ii. LACP Mode (Link Aggregation Control Protocol)

LACP mode leverages the Link Aggregation Control Protocol (LACP) for
more dynamic and reliable link aggregation.

LACPDUs: Devices exchange LACP Data Units (LACPDUs) containing the device priority, MAC address, and interface priority, ensuring that the correct member interfaces are paired.

System Priority: The device with the higher priority (smaller priority value) is chosen as the Actor, which determines which interfaces should be active.

Interface Priority: After the Actor is determined, both devices select active links based on interface priority. Lower priority values indicate higher priority.

Maximum Active Interfaces: In LACP mode, you can specify a maximum number of active interfaces. Interfaces beyond this limit function as backups and become active if an active link fails, ensuring uninterrupted bandwidth.
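The selection rules above can be sketched as follows. This is a hedged illustration of the logic only (the tuple shapes, MAC strings, and function names are invented for clarity and do not reflect the real LACPDU format): the Actor is the system with the smaller (system priority, MAC) pair, and active links are chosen by interface priority up to the configured maximum.

```python
# Hedged sketch of LACP Actor election and active-link selection.

def elect_actor(system_a, system_b):
    # Each system is (system_priority, mac); the smaller priority value
    # wins, with the MAC address as the tiebreaker.
    return min(system_a, system_b)

def select_links(interfaces, max_active):
    # interfaces: list of (interface_priority, name); lower values mean
    # higher priority. Links past max_active stay as backups.
    ranked = sorted(interfaces)
    return ([name for _, name in ranked[:max_active]],
            [name for _, name in ranked[max_active:]])

actor = elect_actor((100, "00e0-fc12-3456"), (32768, "00e0-fc65-4321"))
active, backup = select_links(
    [(32768, "GE0/0/1"), (100, "GE0/0/2"), (32768, "GE0/0/3")],
    max_active=2)
print(actor)            # (100, '00e0-fc12-3456')
print(active, backup)   # ['GE0/0/2', 'GE0/0/1'] ['GE0/0/3']
```

Note how GE0/0/3 becomes a backup only because the two-interface limit is reached; if an active link failed, it would be promoted.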

2.5 Load Balancing in Eth-Trunk


Link aggregation enables load balancing, which distributes traffic across
member links based on criteria like IP or MAC addresses:

1. Per-Packet Load Balancing: Packets are distributed across multiple links in a round-robin manner, but this method may result in out-of-order packets for some protocols.

2. Per-Flow Load Balancing: Packets from the same flow (identified by IP or MAC address) are sent over the same link, ensuring packet order within a flow.

Load Balancing Modes:

Based on IP Addresses: For traffic where IP addresses change frequently, load balancing based on the source IP address, destination IP address, or both is ideal.

Based on MAC Addresses: For networks with fixed IP addresses and changing MAC addresses, balancing based on MAC addresses is more suitable.
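The two distribution strategies can be contrasted in a short simulation. This is a sketch under stated assumptions (zlib's CRC32 stands in for whatever hash a real forwarding chip applies, and the link and address values are made up):

```python
# Sketch contrasting per-packet (round-robin) and per-flow (hash-based)
# distribution across an Eth-Trunk's member links.
import itertools
import zlib

LINKS = ["GE0/0/1", "GE0/0/2"]

# Per-packet: packets cycle round-robin across the links, so a single
# flow's packets can take different paths and arrive out of order.
rr = itertools.cycle(LINKS)
per_packet = [next(rr) for _ in range(4)]

def per_flow(src, dst):
    # Per-flow: hashing the (source, destination) pair pins every packet
    # of the flow to one link, preserving order within the flow.
    key = f"{src}->{dst}".encode()
    return LINKS[zlib.crc32(key) % len(LINKS)]

# Four packets of the same flow always pick the same member link.
chosen = {per_flow("10.0.0.1", "10.0.0.2") for _ in range(4)}
print(per_packet)  # ['GE0/0/1', 'GE0/0/2', 'GE0/0/1', 'GE0/0/2']
print(chosen)      # a single link, whichever the hash selects
```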



2.6 Typical Application Scenarios for Link Aggregation

1. Between Switches: Aggregates bandwidth and provides redundancy in connections between network switches.

2. Switch-to-Server: Enhances bandwidth and reliability between switches and high-performance servers, reducing bottlenecks.

3. Switch-to-Stack Connections: Useful in switch stacks where multiple switches operate as a single logical unit, sharing aggregated links.

4. Firewall Heartbeat Links: In hot standby configurations, aggregated links ensure high availability and fast failover between firewalls.

3. Overview of iStack and CSS


1. iStack:

Definition: iStack allows multiple iStack-capable switches to connect using stacking cables, forming a single logical switch that participates in data forwarding.

Applications: Mainly used in the access and aggregation layers to increase port quantity, bandwidth, and redundancy.

2. Cluster Switch System (CSS):

Definition: CSS bundles two CSS-capable switches into a single logical switch.

Applications: Primarily used in the core layer to simplify the network structure and provide redundancy.

3.1 Advantages of iStack and CSS


Single Logical Device: Both iStack and CSS function as a single logical
switch, making Operations and Maintenance (O&M) easier and more
centralized.

Redundancy: If one physical switch fails, the other can seamlessly take
over its forwarding and control functions, preventing single points of failure.

Loop-Free Network: By using inter-device link aggregation, iStack and CSS create loop-free networks, removing the need for the Spanning Tree Protocol (STP).



Increased Link Usage: All links in an Eth-Trunk can be fully utilized,
maximizing link bandwidth and efficiency.

3.2 Applications of iStack and CSS


Port Expansion: Stacking or clustering switches increases the number of
available interfaces, which is particularly useful for access and aggregation
switches.

Increased Bandwidth and Redundancy: Aggregating links from multiple devices into an Eth-Trunk boosts network bandwidth and reliability. In case of a link failure, traffic is seamlessly rerouted through the remaining active links, ensuring uninterrupted service.

Simplified Network Configuration: Networks built with CSS or iStack eliminate the need for complex protocols like MSTP or VRRP, making configuration straightforward and minimizing potential errors.

Rapid Network Convergence: With link aggregation, convergence happens quickly, maintaining network stability and reliability.

3.3 Recommended Architecture for iStack and CSS

i. Core Layer

In the core layer:

Core switches are configured as a CSS, providing high reliability and redundancy.

Eth-Trunks connect the core layer to uplink and downlink devices, establishing a loop-free network with no need for STP.

ii. Aggregation Layer

In the aggregation layer:

Aggregation switches are grouped using iStack, enabling a large logical switch with high interface density.

Eth-Trunks connect to both the core and access layers, offering a robust, high-bandwidth connection that supports seamless failover without requiring STP.

iii. Access Layer


In the access layer:



Access switches close to each other (e.g., in a building) are virtualized as a single device using iStack.

An Eth-Trunk connects this iStacked device to the aggregation layer, providing an architecture that is both simple and highly reliable.

With iStack in place, protocols like STP or VRRP are unnecessary, allowing high-speed convergence, which is crucial for bandwidth-intensive applications.
