Computer Network Module 1

The document explains Local Area Network (LAN) design, focusing on the OSI model's layers, particularly the Physical and Data Link Layers, and the IEEE 802 standards that define various transmission mediums. It describes different LAN topologies, including bus, tree, ring, and star, along with their advantages and disadvantages. Additionally, it covers Medium Access Control (MAC) protocols, such as ALOHA and CSMA, detailing their operations and applications in managing network access.

LAN Design and IEEE 802 Standards (Simplified Explanation)

A Local Area Network (LAN) is a system that connects multiple devices to communicate and share
resources efficiently. The design of a LAN follows a structured model with different layers that handle
data transmission and network access.

1. LAN Layers and Data Transmission

LANs follow the OSI model, which includes seven layers, but the most critical ones for LAN design are:

 Physical Layer: Defines the hardware and transmission medium (cables, wireless signals, etc.).

 Data Link Layer: Ensures reliable data transfer between devices and is divided into:

o Logical Link Control (LLC): Manages communication, flow, and error control.

o Medium Access Control (MAC): Governs how multiple devices share the network.

The MAC layer is separated because different networks use different methods to control access, such as
Ethernet (CSMA/CD) or Token Ring.

When sending data, the process follows these steps:

1. User data passes from the Application Layer down through the TCP/IP layers and then to the LLC Layer.

2. The LLC Layer adds control information, forming an LLC Protocol Data Unit (PDU).

3. The MAC Layer adds more details to create a MAC Frame, which is transmitted over the
network.

4. On reception, the process is reversed to reconstruct the original data.
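The four steps above can be sketched in Python. This is a toy illustration of encapsulation only; the header sizes and byte values are invented for clarity, not taken from the IEEE 802.2 specification:

```python
def llc_encapsulate(user_data: bytes) -> bytes:
    # LLC adds control information (e.g., DSAP, SSAP, control) -> LLC PDU
    llc_header = bytes([0xAA, 0xAA, 0x03])      # illustrative 3-byte header
    return llc_header + user_data

def mac_encapsulate(llc_pdu: bytes, dst: bytes, src: bytes) -> bytes:
    # MAC adds addressing (real hardware also appends a CRC trailer) -> MAC frame
    return dst + src + llc_pdu

data = b"hello"
frame = mac_encapsulate(llc_encapsulate(data), dst=b"\x00\x02", src=b"\x00\x01")

# On reception, the process is reversed layer by layer:
llc_pdu = frame[4:]       # strip 2-byte dst + 2-byte src (MAC layer)
original = llc_pdu[3:]    # strip 3-byte LLC header (LLC layer)
assert original == data
```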

2. IEEE 802 Standards

The IEEE 802 committee developed widely used LAN standards, later adopted by ISO (International
Organization for Standardization). These standards define different transmission mediums, such as:

 Coaxial cables

 Twisted pair cables (shielded/unshielded)

 Optical fiber

 Wireless networks (Wi-Fi, spread spectrum, etc.)

Each network type has its own access method:

 Ethernet (CSMA/CD) – Devices compete to send data.

 Token Ring – Devices take turns using a token.

 Wireless LANs (Wi-Fi) and related standards (e.g., DQDB for metropolitan networks) – Use their own access protocols.

Conclusion
LAN design ensures efficient and reliable communication by organizing data transmission through
structured layers. The IEEE 802 standards provide various networking options, allowing flexibility in how
data is transmitted and accessed across different mediums.

LAN Topologies (Simple Explanation)

A LAN topology is the way computers and devices are connected in a network. There are four common
types: bus, tree, ring, and star.

1. Bus Topology

 All devices are connected to a single main cable (called the bus).

 Devices send data to everyone, but only the intended recipient reads it.

 Terminators at both ends of the bus absorb signals to prevent interference.

 Example: Like a public announcement (PA) system where one person speaks, and everyone
listens.

🔹 Advantages:
✅ Simple and cheap to set up
✅ Uses less cable
✅ Works well for small networks

🔹 Disadvantages:
❌ If the main cable breaks, the entire network stops working
❌ Only one device can send data at a time (or else signals will collide)

2. Tree Topology
 Similar to a bus topology but with branches.

 A central point (headend) connects multiple cables that branch out.

 Devices follow the same rules as the bus topology.

🔹 Advantages:
✅ Can cover a larger area
✅ Easier to expand than a bus network

🔹 Disadvantages:
❌ If the main cable fails, the entire network is affected
❌ Requires more cable than a simple bus

How Data Flows in Bus and Tree Networks

1. A device sends a message (frame) to another device.

2. The message travels along the cable.

3. Each device checks if the message is for them (by looking at the address).

4. The correct device receives the message; others ignore it.

5. The signal disappears when it reaches the terminator.

To prevent collisions, devices take turns sending messages. This is like raising your hand in class before
speaking.

Ring Topology (Simple Explanation)


In a Ring Topology, all devices are connected in a closed loop (a ring). Each device is linked to exactly
two neighbors.

How It Works

🔄 One-way Data Flow:

 Data moves in one direction (clockwise or counterclockwise).

 A device sends data as a frame.

 The frame circulates through all devices until it reaches the correct one.

 The destination device copies the data and lets it continue moving.

 When the frame returns to the sender, it is removed from the ring.

🔹 Repeaters Help Signal Strength

 Each device has a repeater, which boosts the signal before passing it to the next device.

 No buffering → Data is forwarded immediately, bit by bit.

Medium Access Control

 Since all devices share the ring, they must take turns sending data.

 A system controls who can send at any given time.

Advantages of Ring Topology

✅ No data collisions → Only one device sends data at a time.


✅ Better performance than bus topology in large networks.
✅ No need for a central hub (unlike star topology).

Disadvantages of Ring Topology

❌ Single point of failure → If one device or link fails, the whole network stops.
❌ Slower than Star Topology → Data must pass through multiple devices.
❌ Difficult to expand → Adding or removing a device disrupts the network.

Where is Ring Topology Used?

🔹 Used in older network designs, especially in token-based networks like Token Ring.
🔹 Some fiber-optic networks use a dual-ring setup for redundancy.

Star Topology Summary

In a star topology, all devices connect to a central node (hub or switch) via individual links for
transmission and reception.

🔹 Two operational modes:

1. Broadcast Mode: The central node forwards incoming data to all connected devices, making it
logically similar to a bus topology. Only one device can transmit at a time.

2. Switching Mode: The central node buffers and directs data only to the intended recipient,
improving efficiency and reducing collisions.

✅ Advantages: Easy troubleshooting, better performance, and reduced collisions.


❌ Disadvantage: Central node failure disrupts the entire network.

Medium Access Control

Medium Access Control (MAC) protocols manage how devices share the network’s transmission capacity.
There are two main approaches to control:

1. Centralized Control:

o A single controller manages access.

o Pros: Greater control, simpler access logic, no coordination issues.

o Cons: Single point of failure, potential bottleneck.

2. Decentralized Control:

o Stations collectively determine the transmission order.

o Pros: No single point of failure, better scalability.

o Cons: Complex coordination, increased overhead.

Regarding synchronous vs asynchronous access:

 Synchronous: Fixed, dedicated capacity (e.g., TDM, circuit switching), not ideal for LANs due to
unpredictable traffic.

 Asynchronous: Dynamic allocation of capacity based on demand, more efficient for LANs, and
further divided into:

1. Round Robin: Equal time slices for stations.

2. Reservation: Stations reserve slots.

3. Contention: Stations compete for access.

In general, asynchronous and decentralized methods offer better scalability and adaptability for modern
networks.

Round Robin

In Round Robin access control, each station takes turns to transmit data. Each station can either transmit
data during its turn or skip it, but once it's done (or skips), the next station in the sequence gets its turn.
The sequence can be controlled either by a central controller or by the stations themselves (distributed).

 Efficient for many stations: If many stations have data to send, Round Robin works well because
it ensures each gets a fair chance to transmit.

 Not efficient for few stations: If only a few stations have data, it becomes inefficient because
many stations will just pass their turn without transmitting, causing unnecessary overhead.

Round Robin works better for different types of data traffic:

 Stream traffic (e.g., voice calls, file transfers) works well with Round Robin because it requires
continuous transmission.

 Bursty traffic (e.g., terminal-host interactions) might not be efficient with Round Robin, as
transmissions are short and sporadic.

Reservation

For stream traffic, reservation techniques are well suited. In general, for these techniques, time on the
medium is divided into slots, much as with synchronous TDM. A station wishing to transmit reserves
future slots for an extended or even an indefinite period. Again, reservations may be made in a
centralized or distributed fashion.

Contention techniques are used for bursty traffic, where devices need to send small, irregular bursts of
data. In this method, there is no control over which station gets to send data first. Instead, all devices try
to send data at the same time, leading to potential conflicts or collisions, which can cause problems if
many devices try to transmit at once.

 Advantages: It's simple and works efficiently when there are only a few devices trying to
transmit at the same time (light to moderate network traffic).

 Disadvantages: Performance can become poor if many devices compete to send data
simultaneously, leading to delays or congestion.

The MAC (Medium Access Control) layer handles data transmission and medium access in networking,
using a MAC frame as the Protocol Data Unit (PDU). Here's a summarized breakdown:

1. Role of MAC Layer:

o The MAC layer receives data from the LLC (Logical Link Control) layer and is responsible
for managing access to the network medium, as well as transmitting the data.

o It encapsulates the data into a MAC frame.


2. MAC Frame Format:

o MAC Control: Contains protocol control information, such as priority levels.

o Destination MAC Address: Specifies where the frame is to be delivered on the LAN.

o Source MAC Address: Indicates where the frame originated.

o LLC PDU: Carries data from the LLC layer.

o CRC (Cyclic Redundancy Check): Used to detect errors in the transmitted frame. If errors
are found, the MAC layer discards the frame.

3. Error Handling:

o The MAC layer is responsible for detecting errors using the CRC and discarding
erroneous frames.

o The LLC layer is optionally responsible for retransmitting any frames that were not
successfully received.

4. Topologies and MAC Protocols:

o Ring Topology: Used in Token Ring (IEEE 802.5) and FDDI.

o Switched Topology: Includes protocols like Request/Priority (IEEE 802.12) and
CSMA/CD (IEEE 802.3).

In summary, the MAC layer formats data into a frame, manages access to the transmission medium,
handles error detection, and works with the LLC layer for error recovery if necessary.
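A minimal sketch of the frame build/check cycle described above, using Python's zlib.crc32 as a stand-in FCS (the real 802.3 CRC-32 differs in details such as bit ordering, and the MAC Control field is omitted here):

```python
import zlib

def build_mac_frame(dst: bytes, src: bytes, llc_pdu: bytes) -> bytes:
    body = dst + src + llc_pdu
    fcs = zlib.crc32(body).to_bytes(4, "big")   # CRC computed over the frame body
    return body + fcs

def mac_receive(frame: bytes):
    body, fcs = frame[:-4], frame[-4:]
    if zlib.crc32(body).to_bytes(4, "big") != fcs:
        return None          # error detected -> MAC layer discards the frame
    return body[12:]         # strip 6-byte dst + 6-byte src, pass LLC PDU upward

frame = build_mac_frame(b"\x02" * 6, b"\x01" * 6, b"payload")
assert mac_receive(frame) == b"payload"

# Flip one bit in transit: the CRC check fails and the frame is dropped.
corrupted = frame[:-1] + bytes([frame[-1] ^ 0x01])
assert mac_receive(corrupted) is None
```

Retransmission of the discarded frame is then the (optional) job of the LLC layer, as noted above.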

Logical Link Control (LLC) - Simplified Explanation

The Logical Link Control (LLC) layer is a part of the data link layer in LANs. It helps in transmitting data
between two devices without needing an intermediate switch.

Key Features of LLC:


1. Supports shared-medium networks (multiple devices sharing the same network).

2. Works with the MAC layer, which handles physical network access.

LLC Services

LLC helps in addressing devices and managing data exchange. It provides three types of services:

1. Unacknowledged Connectionless Service:

o Works like sending a simple message (datagram).

o No confirmation or error checking.

o Used in cases where occasional data loss is acceptable (e.g., sensor readings, monitoring
systems).

2. Connection-Mode Service:

o Like a phone call – establishes a connection before sending data.

o Ensures reliable delivery using error control and flow control.

o Useful for simple devices with no advanced software.

3. Acknowledged Connectionless Service:

o A mix of the above two.

o Sends data without establishing a connection but requires an acknowledgment.

o Used for important messages (e.g., emergency alarms) that must be confirmed quickly.

LLC Protocol

LLC is based on the HDLC (High-Level Data Link Control) protocol and follows a similar structure. It has
three types of operations:

1. Type 1: Supports unacknowledged connectionless service. No acknowledgment or error control.

2. Type 2: Supports connection-mode service. Ensures error-free data delivery using acknowledgments.

3. Type 3: Supports acknowledged connectionless service using special messages.

How LLC Works

 Uses Service Access Points (SAPs) to address devices.

 Uses different message types (PDUs) for sending, receiving, and managing connections.

 In connection-mode service (Type 2), a connection is requested, accepted, used, and then
closed when done.
 In acknowledged connectionless service (Type 3), each message is confirmed using small
sequence numbers (0 and 1) to ensure delivery.
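The Type 3 confirmation scheme above (1-bit sequence numbers) behaves like a stop-and-wait exchange. A toy model for illustration, not the real LLC PDU exchange:

```python
def sender_next_seq(seq: int, acked: bool) -> int:
    # Keep the same number until the ACK arrives, then flip between 0 and 1.
    return 1 - seq if acked else seq

def receiver_accept(expected: int, frame_seq: int):
    if frame_seq == expected:
        return "deliver", 1 - expected   # new frame: deliver, expect the other bit
    return "duplicate", expected         # retransmission: acknowledge but discard

seq = 0
seq = sender_next_seq(seq, acked=True)    # first frame confirmed
assert seq == 1
seq = sender_next_seq(seq, acked=False)   # ACK lost -> resend with the same number
assert seq == 1
assert receiver_accept(0, 0) == ("deliver", 1)
assert receiver_accept(1, 0) == ("duplicate", 1)   # duplicate detected and discarded
```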

Conclusion

LLC helps LAN devices communicate efficiently, providing options based on reliability and speed needs.
It ensures flexibility and efficiency while working with the MAC layer to manage network access. 🚀

IEEE 802

Physical:

• Encoding/decoding

• Preamble generation/removal

• Bit transmission/reception

• Transmission medium and topology

ALOHA Protocol (MAC) - Summary

ALOHA is a random access protocol used for wireless and satellite communication. It allows multiple
devices to share a common channel without coordination.

Types of ALOHA

1. Pure ALOHA

o Devices transmit anytime, leading to high collision chances.

o Maximum efficiency: 18.4%.

o High delay due to frequent retransmissions.

2. Slotted ALOHA

o Transmission allowed only at the start of time slots, reducing collisions.

o Maximum efficiency: 36.8% (double that of Pure ALOHA).

o Lower delay and better performance.
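These efficiency figures come from the standard throughput formulas S = G·e^(−2G) for Pure ALOHA and S = G·e^(−G) for Slotted ALOHA, where G is the offered load in frames per frame time; a quick numeric check:

```python
import math

def pure_aloha_throughput(G: float) -> float:
    # Vulnerable period is two frame times -> e^(-2G) success probability
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G: float) -> float:
    # Aligning transmissions to slot boundaries halves the vulnerable period
    return G * math.exp(-G)

# The maxima occur at G = 0.5 and G = 1.0 respectively.
assert round(pure_aloha_throughput(0.5), 3) == 0.184      # 18.4%
assert round(slotted_aloha_throughput(1.0), 3) == 0.368   # 36.8%
```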

Applications

 Satellite communication, RFID, and wireless sensor networks.

Limitations

 High collision probability.

 Low efficiency compared to modern MAC protocols like CSMA/CD (Ethernet) and CSMA/CA (Wi-
Fi).

Steps in ALOHA Protocol


1. Pure ALOHA Steps

1. A device transmits data whenever it wants, without checking the channel.

2. If no collision occurs, the receiver sends an ACK (Acknowledgment).

3. If a collision occurs, the sender waits for a random time and retransmits.

4. Steps repeat until the data is successfully transmitted.

2. Slotted ALOHA Steps

1. Time is divided into fixed slots equal to the frame transmission time.

2. A device can only transmit at the beginning of a time slot.

3. If no collision occurs, the receiver sends an ACK.

4. If a collision occurs, the sender waits for a random time and retries in the next slot.

5. Steps repeat until successful transmission.

✅ Slotted ALOHA reduces collisions by allowing transmission only at specific time slots. 🚀

Carrier Sense Multiple Access (CSMA) – Explanation

CSMA is a network access method used in shared communication channels where multiple devices
compete to transmit data. The key idea is "listen before talk" to reduce collisions.

How CSMA Works?

1. A device checks if the channel is free before transmitting.

2. If the channel is busy, the device waits until it becomes free.

3. If the channel is idle, the device transmits data.

4. If a collision occurs (in some CSMA variants), the device takes corrective action.

Why is CSMA Used?

 Prevents multiple devices from sending data simultaneously.

 Reduces collisions and improves network efficiency.

 Used in wired and wireless networks (Ethernet, Wi-Fi).

Non-Persistent CSMA

Non-Persistent CSMA is a collision avoidance technique where a device checks the channel before
transmitting and waits a random time if the channel is busy, instead of continuously sensing it.

Working Steps of Non-Persistent CSMA


1. Sense the channel: The device checks if the channel is free.

2. If idle: The device transmits the data immediately.

3. If busy: The device waits a random time before rechecking the channel.

4. Repeat the process until successful transmission.
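The steps above can be sketched as a small loop. This is a toy model: the channel_busy callback, the wait-time range, and the attempt limit are assumptions for illustration only.

```python
import random

def non_persistent_send(channel_busy, rng, max_attempts=8):
    """Return the (simulated) time at which transmission starts, or None."""
    t = 0.0
    for _ in range(max_attempts):
        if not channel_busy(t):
            return t                       # idle -> transmit immediately
        t += rng.uniform(0.1, 1.0)         # busy -> wait a RANDOM time, then re-sense
    return None

# Channel is busy until t = 1.5 in this scenario.
start = non_persistent_send(lambda t: t < 1.5, rng=random.Random(1))
assert start is not None and start >= 1.5   # transmission begins after the channel frees
```

The random wait (rather than continuous sensing) is exactly what distinguishes this from the persistent variants below.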

Advantages

✅ Reduces network congestion by avoiding continuous sensing.


✅ Fewer collisions compared to Persistent CSMA.

1-Persistent CSMA – Detailed Explanation

Overview

1-Persistent CSMA is a medium access control (MAC) protocol where a device continuously senses the
communication channel and transmits immediately when the channel becomes idle. It is called
"1-Persistent" because the device has a 100% (i.e., probability 1.0) chance of transmitting as soon as the
channel is free.

This protocol is aggressive and works well in low-traffic networks, but in high-traffic conditions, it
increases the risk of collisions due to multiple devices trying to send data at the same time.

Working Steps of 1-Persistent CSMA

1. Sense the channel:

o The device listens to the communication medium to check if it is idle or busy.

2. If the channel is idle:

o The device immediately transmits its data without waiting.

3. If the channel is busy:

o The device keeps sensing continuously (i.e., it does not wait randomly).

o As soon as the channel becomes idle, it immediately starts transmitting.

4. If a collision occurs:

o Since multiple devices may try to transmit at the same time (as they all sense the idle
channel), collisions can happen.

o The device then waits for a random backoff time before reattempting transmission.

Advantages of 1-Persistent CSMA

✅ Lower waiting time: Since devices transmit immediately when the channel is idle, there is minimal
delay.
✅ Efficient in low-traffic conditions: When there are few devices, collisions are rare, and the protocol
works efficiently.

p-Persistent CSMA – Detailed Explanation

Overview

p-Persistent CSMA is a probabilistic channel access method used in slotted time networks to reduce
collisions while maintaining efficiency. It is a hybrid of 1-Persistent and Non-Persistent CSMA, offering a
balance between collision probability and channel utilization.

It is mainly used in slotted-time systems like Wi-Fi (IEEE 802.11), where time is divided into slots, and
transmission decisions are based on probability.

Working Steps of p-Persistent CSMA

1. Sense the channel: The device checks if the channel is idle.

2. If idle, the device transmits with a probability p.

o With probability (1 - p), the device waits for the next time slot and checks again.

3. If busy, the device waits until the next time slot and repeats the process.

4. If a collision occurs, the device applies a random backoff time and retries after some time.
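One slot's decision under p-persistent CSMA can be sketched as follows (a toy model; the rng argument and the slot abstraction are assumptions for illustration):

```python
import random

def p_persistent_decision(p: float, idle: bool, rng) -> str:
    if not idle:
        return "wait"                                # busy -> wait for the next slot
    return "transmit" if rng.random() < p else "defer"   # idle -> send with probability p

# Busy slot: the station always waits, regardless of p.
assert p_persistent_decision(0.3, idle=False, rng=random.Random(0)) == "wait"

# Idle slots: over many slots, roughly a fraction p of decisions are transmissions.
rng = random.Random(42)
outcomes = [p_persistent_decision(0.3, idle=True, rng=rng) for _ in range(10000)]
frac = outcomes.count("transmit") / len(outcomes)
assert abs(frac - 0.3) < 0.05
```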

Why is p-Persistent CSMA Called a Hybrid?

 Similar to 1-Persistent CSMA → It listens continuously and transmits if the channel is idle.

 Similar to Non-Persistent CSMA → It does not always transmit immediately, reducing collision
chances.

 Unique Feature → Uses a probability factor (p) to decide when to transmit, balancing speed and
efficiency.

Advantages

✅ Reduces collisions compared to 1-Persistent CSMA.


✅ Higher efficiency than Non-Persistent CSMA.
✅ Works well in slotted networks like Wi-Fi.

Disadvantages

❌ Increased delay due to probability-based waiting.


❌ Not ideal for unslotted channels.

🔹 Best suited for slotted-time networks, such as IEEE 802.11 (Wi-Fi). 🚀

p Selection in p-Persistent CSMA

 To avoid collisions, the probability of transmission p should be chosen so that n·p < 1, where n is
the number of stations ready to send.


 Under heavy load:
✅ Use small p to reduce collisions.
❌ Too small → Longer delays for transmission.

 Under light load:
✅ Use larger p for faster transmission.
❌ Too large → Higher risk of collisions if traffic increases.

 Dynamic p adjustment is used in some networks (e.g., Wi-Fi) to optimize performance based on
network load. 🚀
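The n·p < 1 rule can be checked numerically: a slot is used successfully only when exactly one of the n ready stations transmits, which happens with probability n·p·(1−p)^(n−1):

```python
def prob_exactly_one_transmits(n: int, p: float) -> float:
    # Success requires exactly one of the n ready stations to transmit in a slot
    return n * p * (1 - p) ** (n - 1)

n = 10
assert n * 0.05 < 1          # p = 0.05 satisfies n*p < 1 ...
assert n * 0.2 > 1           # ... while p = 0.2 violates it,
# and the per-slot success probability is visibly worse with the larger p:
assert prob_exactly_one_transmits(n, 0.05) > prob_exactly_one_transmits(n, 0.2)
```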

CSMA/CD

 CSMA/CD (Carrier Sense Multiple Access with Collision Detection) is used in wired Ethernet
(IEEE 802.3) to detect and manage collisions.

 Devices listen before transmitting and stop if a collision occurs.

 Steps:

1. Carrier Sensing (CS) – Check if the channel is free.

2. Transmission (MA) – If idle, transmit data.

3. Collision Detection (CD) – If a collision occurs, stop and send a jamming signal.

4. Backoff & Retry – Wait for a random time (Binary Exponential Backoff) before
retransmitting.

✅ Used in wired Ethernet (hubs & shared networks).


❌ Not used in Wi-Fi (uses CSMA/CA) or full-duplex Ethernet (switches eliminate collisions). 🚀

Summary of Binary Exponential Backoff (BEB)


 After a collision, devices wait for a random time before retrying.

 Wait time doubles after each collision (exponential increase).

 After the 10th collision, the wait time stops increasing and remains at a maximum limit.

 After the 16th collision, the device gives up and drops the packet (no retransmission).

✅ Used in Ethernet (CSMA/CD) and Wi-Fi (CSMA/CA) to reduce repeated collisions and improve
network efficiency. 🚀
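The BEB rules above can be sketched directly. The 51.2 µs slot time is the classic 10 Mbps Ethernet value; the helper name and rng parameter are ours:

```python
import random

SLOT_TIME = 51.2e-6   # seconds; the classic 10 Mbps Ethernet slot time

def beb_wait(collisions, rng):
    """Backoff delay after `collisions` consecutive collisions (None = give up)."""
    if collisions >= 16:
        return None                      # 16th collision: drop the packet
    k = min(collisions, 10)              # wait range stops growing after the 10th
    slots = rng.randrange(0, 2 ** k)     # pick 0 .. 2^k - 1 slots uniformly
    return slots * SLOT_TIME

rng = random.Random(0)
assert beb_wait(1, rng) <= 1 * SLOT_TIME        # range {0, 1} slots
assert beb_wait(12, rng) <= 1023 * SLOT_TIME    # capped at 2^10 - 1 slots
assert beb_wait(16, rng) is None                # give up: no retransmission
```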

Ethernet

 10Base5 (Thick Ethernet)

 Medium: Coaxial cable (Thick Ethernet).

 Signaling: Baseband, Manchester encoding.

 Topology: Bus.

 Nodes: Supports up to 100 nodes per segment.

 10Base2 (Thin Ethernet)

 Medium: Coaxial cable (Thin Ethernet).

 Signaling: Baseband, Manchester encoding.

 Topology: Bus.

 Nodes: Supports up to 30 nodes per segment.

 10Base-T (Twisted-Pair Ethernet)

 Medium: Unshielded Twisted Pair (UTP).

 Signaling: Baseband, Manchester encoding.

 Topology: Star (uses hubs or switches).

 Nodes: Not specified (depends on hub/switch capacity).

 10Base-F (Fiber-Optic Ethernet)

 Medium: 850nm fiber-optic cable.

 Signaling: Manchester encoding with On/Off keying.

 Topology: Star (typically used with fiber switches).

 Nodes: Supports 33 nodes per segment.

Fast Ethernet (100BASE-T) Explained


Fast Ethernet refers to 100 Mbps Ethernet (IEEE 802.3u) and is an upgrade from 10 Mbps Ethernet. It
retains the same CSMA/CD (Carrier Sense Multiple Access with Collision Detection) access method but
operates at 100 Mbps.

The 100BASE-T family includes multiple standards based on different cabling types and signaling
methods, as shown in the diagram.

100BASE-T Categories:

1. 100BASE-X (Uses Two Pairs)

 100BASE-TX

o Uses Category 5 UTP or Shielded Twisted Pair (STP) cables.

o The most widely used Fast Ethernet standard.

o Requires two twisted-pair cables (one for sending, one for receiving).

o Uses 4B/5B encoding for efficient data transmission.

 100BASE-FX

o Uses fiber optic cables (typically multi-mode fiber).

o Provides better performance over longer distances.

o Mostly used in backbone connections due to reduced signal interference.

o Uses 4B/5B encoding like 100BASE-TX.

2. 100BASE-T4 (Uses Four Pairs)

 Uses Category 3, 4, or 5 UTP cables.

 Requires four twisted-pair wires instead of two.

 Used in networks where only Category 3 cables are available, as it doesn't require all pairs to
support high-speed signaling.

 Less common today since 100BASE-TX (with Cat 5) became the standard.

Key Differences Between These Standards:

Standard     Cable Type           Wires Used  Encoding  Max Distance

100BASE-TX   Cat 5 UTP/STP        2 pairs     4B/5B     100 m

100BASE-FX   Multimode fiber      2 pairs     4B/5B     400 m – 2 km

100BASE-T4   Cat 3, 4, or 5 UTP   4 pairs     8B/6T     100 m

Conclusion:

 100BASE-TX is the most commonly used Fast Ethernet standard.


 100BASE-FX is used for long-distance, fiber-based networking.

 100BASE-T4 was an alternative for older Cat 3 cabling but is now obsolete.

Full Duplex Ethernet - Summary

 Half-duplex Ethernet: Can either transmit or receive, using CSMA/CD to handle collisions.

 Full-duplex Ethernet: Allows simultaneous transmission and reception, effectively doubling the
data rate.

 100-Mbps Ethernet in full-duplex achieves 200 Mbps theoretical throughput.

 Requires full-duplex adapter cards and a switching hub (Ethernet switch).

 Eliminates collisions, making CSMA/CD unnecessary.

 Uses the IEEE 802.3 MAC frame format, ensuring compatibility.

 Benefits: Higher efficiency, no collisions, lower latency, and better performance in modern
networks. 🚀

Simple Explanation of Mixed Configurations

 Fast Ethernet allows both old (10 Mbps) and new (100 Mbps) networks to work together.
 Example Setup:

o Old computers (10 Mbps) connect using 10BASE-T.

o Hubs link to switching hubs that support both 10 Mbps and 100 Mbps.

o Powerful computers and servers connect directly to fast switches (10/100 Mbps).

o Switches link to 100 Mbps hubs for faster communication.

o 100 Mbps hubs act as the main network backbone and connect to a router for
internet/WAN access.

🔹 Benefit: Allows older and newer devices to work together smoothly while upgrading to faster speeds.
🚀

Gigabit Ethernet (1Gbps) - Explanation & Types

Gigabit Ethernet (GbE) is a high-speed networking standard that provides 1 Gbps (1000 Mbps) data
transfer rates. It is widely used in enterprise networks, data centers, and home networking for fast and
reliable communication.

Types of Gigabit Ethernet:

1️⃣ 1000BASE-SX (Short-range)

 Uses 850 nm wavelength on multimode fiber

 Maximum distance: 220m to 550m

2️⃣ 1000BASE-LX (Long-range)

 Uses 1310 nm wavelength on single-mode fiber

 Maximum distance: 5km to 10km

3️⃣ 1000BASE-CX (Short copper cable)

 Uses shielded twisted-pair (STP) copper cables

 Maximum distance: 25m

4️⃣ 1000BASE-T (Twisted Pair - Ethernet Cable)

 Uses Category 5e, 6, or 7 twisted-pair cables (RJ-45 connectors)

 Maximum distance: 100m

🔹 Key Takeaway: Gigabit Ethernet supports both fiber optic and copper cables, offering flexibility for
short- and long-distance networking needs. 🚀

IEEE 802.5 Medium Access Control (MAC) Protocol - Token Ring

Overview
IEEE 802.5 Token Ring is a MAC protocol that uses a token-passing technique for network access. It
operates in a round-robin fashion, ensuring fair access to all stations.

How It Works:

1️⃣ Token Circulation:

 A small frame (token) continuously circulates the network when no station is transmitting.

2️⃣ Token Seizing & Transmission:

 A station that wants to transmit waits for the token.

 It modifies the token into a start-of-frame sequence.

 It appends data and sends the frame.

3️⃣ Frame Circulation & Absorption:

 The frame travels around the ring and is received by the intended station.

 The sender absorbs the frame after confirming its circulation.

4️⃣ Token Re-insertion:

 A new token is generated after transmission is complete and the original frame returns.

 This ensures only one frame at a time is on the ring.
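The token-passing cycle above amounts to a round-robin visit of the ring. A toy model of one token rotation (station names and the one-frame-per-turn rule are illustrative, not from the 802.5 spec):

```python
def token_rotation(ring_order, queued_frames):
    """One full token rotation: each station may send one frame while holding
    the token, then re-inserts a new token for its downstream neighbour."""
    transmissions = []
    for station in ring_order:                       # token visits stations in ring order
        frame = queued_frames.pop(station, None)
        if frame is not None:
            transmissions.append((station, frame))   # seize token, send, release token
    return transmissions

sent = token_rotation(["A", "B", "C", "D"], {"D": "f2", "B": "f1"})
assert sent == [("B", "f1"), ("D", "f2")]   # fair order; one frame on the ring at a time
```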

Advantages:

✅ Fair & Efficient Under Heavy Load – Uses a round-robin mechanism.


✅ Controlled Access – Ensures smooth data transmission.
✅ Supports Priority & Guaranteed Bandwidth – Can be regulated.

Disadvantages:

❌ Token Maintenance Required – Loss or duplication of the token disrupts the network.
❌ Delays Under Light Load – Stations must wait for the token.
❌ Requires a Monitor Station – To prevent token loss and duplication.

🔹 Conclusion: The token ring protocol ensures fair access and efficiency under heavy loads, but it
requires strict token management to function properly. 🚀

IEEE 802.5 MAC Frame Format (Token Ring Protocol)

The IEEE 802.5 MAC frame consists of several fields used for controlling data transmission in a Token
Ring network.

📌 Frame Structure

1️⃣ Starting Delimiter (SD)


 Indicates the start of a frame.

 Uses a unique pattern JK0JK000 (J and K are special non-data symbols).

2️⃣ Access Control (AC)

 Controls access to the token ring.

 Format: PPPTMRRR

o PPP (Priority Bits): Defines the priority of the frame.

o T (Token Bit):

 0 → Indicates a token frame (available for use).

 1 → Indicates a data frame (token seized).

o M (Monitor Bit): Ensures only one active token exists.

o RRR (Reservation Bits): Allows stations to reserve a token for later use.

3️⃣ Frame Control (FC)

 Defines the type of frame (LLC data frame or MAC control frame).

4️⃣ Destination Address (DA)

 Specifies the intended recipient.

 Can be 2 or 6 bytes long (like in IEEE 802.3 Ethernet).

5️⃣ Source Address (SA)

 Specifies the sender's MAC address.

 Can be 2 or 6 bytes long.

6️⃣ Data Unit

 Contains the actual data being transmitted.

7️⃣ Frame Check Sequence (FCS)

 Error-detection field (uses CRC for integrity checking).

8️⃣ End Delimiter (ED)

 Marks the end of a frame.

 Contains:

o E (Error Bit): Set to 1 if an error is detected.

o I (Intermediate Bit): Indicates that the frame is part of a multi-frame transmission.

9️⃣ Frame Status (FS)


 Indicates whether the frame was received properly.

 Contains:

o A (Address Recognized Bit): Set to 1 if the destination exists.

o C (Copied Bit): Set to 1 if the frame is copied by the recipient.

o Redundancy Check Bits: Ensures frame integrity.

Overview of Token Ring Priority Mechanism

The 802.5 Token Ring network includes an optional priority mechanism to control which stations get
access to the token first. This is done using priority fields and reservation fields in the data frame and
token.

Key Terms:

 Priority of Frame (Pf): The priority level of a frame a station wants to transmit.

 Service Priority (Ps): The priority of the current token.

 Reservation Value (R): The highest priority reserved by a station.

 Stacks (Snew, Sold): Used to track priority changes.

How the Priority Mechanism Works

1. Waiting for Token:

o A station can only transmit if the token priority (Ps) is less than or equal to its frame
priority (Pf).
2. Reserving a Future Token:

o If a station cannot transmit immediately, it can reserve a future token.

o It checks passing frames/tokens and, if the reservation field is lower than its priority (R
< Pf), it updates it to R = Pf.

3. Seizing the Token:

o When a station gets a token with priority ≤ Pf, it seizes the token, sets the reservation
field to 0, and transmits its frames.

4. Issuing a New Token:

o After transmitting, the station issues a new token with priority and reservation fields
based on rules in Table 13.3.

o The highest reserved priority (Rr) is maintained in the new token.

5. Preventing Priority Lock:

o If a station raises the token priority, it must later lower it back when no higher-priority
traffic exists.

o It does this using two stacks:

 Snew (New Priorities): Stores raised priority levels.

 Sold (Old Priorities): Tracks previous lower priority levels.

o When no high-priority traffic remains, the station downgrades the token.
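The two core rules above, reserving a future token (step 2) and seizing the token (step 3), can be sketched as follows (function names are ours; token issuing and the Snew/Sold stack handling are omitted):

```python
def update_reservation(R: int, Pf: int) -> int:
    # A waiting station raises the reservation field only if its frame
    # priority exceeds the value already reserved (R < Pf -> R = Pf).
    return max(R, Pf)

def may_seize_token(Ps: int, Pf: int) -> bool:
    # A station may seize the token only when token priority <= frame priority.
    return Ps <= Pf

R = 0
R = update_reservation(R, 3)    # station D reserves priority 3
assert R == 3
R = update_reservation(R, 1)    # a lower-priority request leaves R unchanged
assert R == 3
assert may_seize_token(Ps=3, Pf=3)          # equal priority: allowed
assert not may_seize_token(Ps=3, Pf=0)      # low-priority frame must wait
```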

Example Scenario

Let’s consider four stations: A, B, C, and D on a Token Ring network.

1. A transmits at priority 0. While the frame is passing D, D reserves priority 3.

2. A issues a token with priority 3.

3. D seizes the token and transmits at priority 3.

4. After D finishes, it issues a priority 3 token.

5. A sees the priority 3 token and seizes it to downgrade it to priority 0.

Thus, the priority mechanism ensures higher-priority frames get transmitted first, while avoiding
priority "lock" by returning to lower priority levels when no high-priority frames remain.

Notation used in the condition rules below:

 Ps (Service Priority): Priority level of the current token.


 Pf (Frame Priority): Priority of the frame a station wants to send.

 Pr (Priority of the Reservation Field): Highest priority value reserved by a station.

 R (Reservation Value in Token): The highest reservation value set in the last token rotation.

 Rr (Received Reservation Value): The highest reservation value observed by the station from frames in
the last token rotation.

 Sx (Priority Stack Top): Stores previous priority values to allow priority restoration.

 THT (Token Holding Time): Maximum time a station can hold the token before releasing it.

Condition 1: Frame available AND Ps ≤ Pf → Send frame

 If the station has data to send and the token priority is low enough (at or below the frame’s priority), it sends the frame.

 ✅ Example: If the token has priority 2 and the frame has priority 3, the station can send the
frame because Pf = 3 ≥ Ps = 2.

Condition 2: No frame to send OR token time expired, AND Pr ≥ max(Rr, Pf) → Send token with new
priority

 If the station has no data or its time with the token is up, and its reservation is the highest so
far, it updates the token priority and passes it on.

 ✅ Example: If the station reserved priority 5 earlier and no one else has a higher request, the
token is set to priority 5 and sent.

Condition 3: No frame OR time expired, AND Pr < max(Rr, Pf), AND Pr > Sx

 If the station did not reserve the highest priority, but its reservation is still higher than a stored
priority, it:

1. Sets the token to the highest reservation seen

2. Resets the reservation field

3. Saves the old priority to memory (stack)

 ✅ Example: If a station previously reserved priority 4 but another station requested priority 6, it
updates the token to priority 6 and saves priority 4 for later.

Condition 4: No frame OR time expired, AND Pr < max(Rr, Pf), AND Pr = Sx

This is similar to Condition 3, but if the reservation matches the last stored priority, it:

1. Updates the token


2. Removes the old priority from memory

3. Saves the new priority

 ✅ Example: If the station had priority 4 in memory and the new token also has priority 4, it
removes the stored value and updates the token.

Condition 5: No frame OR (frame available but its priority is lower than the stored

Sx), AND Ps = Sx, AND Rr > Sx

If the station still holds a high priority from memory, but another station requested an even higher
priority, it:

1. Sets token priority to the new highest reservation (Rr)

2. Resets reservation field

3. Updates stored priority

 ✅ Example: If a station was waiting with priority 4, but another station requested priority 5, it
updates the token to priority 5 and saves the change.

Condition 6: No frame OR (frame available but its priority is lower than the stored

Sx), AND Ps = Sx, AND Rr ≤ Sx

 If the station’s priority matches what was stored, but no higher priority requests exist, it:

1. Sets token priority to the latest reservation

2. Resets reservation field

3. Restores the stored priority

 ✅ Example: If a station stored priority 3 and no new high-priority requests exist, it restores
priority 3 and passes the token.

Key Takeaways

1. Higher priority traffic gets priority over lower-priority data.

2. If no data is sent, the highest reserved priority gets the token.

3. A stack (Sx) stores past priorities to ensure fairness.

4. If a station requests priority but sees a higher one, it updates the token accordingly.
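The takeaways above can be compressed into a short sketch, assuming `stack` models the Sx priority stack. This ignores the THT and several edge cases of the real IEEE 802.5 rule table, and the function name is invented for illustration:

```python
# Hedged sketch of the token-release decision: raise the token when a
# higher reservation or frame priority exists (push the old priority),
# restore the stacked priority when no higher demand remains (pop).

def release_token(has_frame, ps, pf, rr, stack):
    """Return (new_token_priority, stack) when a station releases the token."""
    target = max(rr, pf if has_frame else 0)
    if target > ps:
        # Raise the token; remember the old priority so it can be
        # restored later (Conditions 3/4: push onto the stack).
        stack.append(ps)
        return target, stack
    if stack and rr <= stack[-1]:
        # No higher-priority demand remains: restore the stacked
        # priority (Condition 6: pop and downgrade).
        return stack.pop(), stack
    return ps, stack

# A station raises the token from 0 to 3 for a reservation, then
# restores priority 0 once no reservations remain.
prio, st = release_token(False, ps=0, pf=0, rr=3, stack=[])
print(prio, st)   # 3 [0]
prio, st = release_token(False, ps=prio, pf=0, rr=0, stack=st)
print(prio, st)   # 0 []
```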

Summary of Early Token Release (ETR)


In a Token Ring network, data is transmitted as frames, and only the station holding the token can
send data.

 Standard Token Release: A station waits for its transmitted frame to completely circulate the
ring and return before sending the next token.

 Early Token Release (ETR): A station releases the token immediately after finishing frame
transmission, without waiting for the frame header to return.

ETR is an optional feature of IEEE 802.5 Token Ring networks.

Key Benefits:

✅ Increases network efficiency by reducing idle time.


✅ Improves data transmission speed, especially for short frames.
✅ Reduces delays in high-traffic networks.

Potential Drawbacks:

⚠ Priority handling may be affected, as reservations are ignored until the next token cycle.
⚠ Can increase access delay for high-priority traffic in busy networks.

Compatibility:

ETR stations can coexist with non-ETR stations, making it a flexible upgrade for improving network
performance.
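The efficiency gain can be seen with a back-of-the-envelope comparison of how long the sender effectively holds up the ring with and without ETR. The 4 µs frame time and 50 µs ring latency below are invented illustrative numbers:

```python
# With standard release the sender keeps the ring busy until the frame
# header returns (one full ring latency); with ETR the token follows
# the frame immediately, so short frames waste far less time.

frame_time = 4e-6       # 4 us to clock out a short frame (assumed)
ring_latency = 50e-6    # 50 us for a bit to circle the ring (assumed)

standard_hold = frame_time + ring_latency  # wait for header to return
etr_hold = frame_time                      # release right after sending

print(f"standard: {standard_hold*1e6:.0f} us, ETR: {etr_hold*1e6:.0f} us")
# standard: 54 us, ETR: 4 us
```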


FDDI (Fiber Distributed Data Interface) Medium Access Control (MAC) Summary

FDDI is a token ring protocol similar to IEEE 802.5, but designed for higher-speed networks (100 Mbps)
and supports both Local Area Networks (LANs) and Metropolitan Area Networks (MANs). It provides
high reliability and efficient data transfer over fiber optic cables.

Key Differences from IEEE 802.5

1. Higher Speed: Operates at 100 Mbps, whereas IEEE 802.5 runs at 4 or 16 Mbps.

2. Frame Format: Uses symbols (4-bit chunks) instead of individual bits for efficiency.

3. Addressing: Supports both 16-bit and 48-bit addresses in the same network.

4. Clock Synchronization: Includes a preamble to aid in timing synchronization at high speeds.

5. No Priority & Reservation Bits: Unlike 802.5, FDDI manages capacity differently.

FDDI Frame Format


FDDI frames are divided into two types:

1. MAC Frame (Used for data transmission)

2. Token Frame (Used to control access to the network)

Field descriptions:

 Preamble: Synchronizes the frame with the receiving station’s clock.

 Starting Delimiter (SD): Marks the beginning of a frame (coded as JK non-data symbols).

 Frame Control (FC): Defines the frame type (synchronous/asynchronous, control, reserved).

 Destination Address (DA): Specifies where the frame is going (unicast, multicast, broadcast).

 Source Address (SA): Identifies the sender of the frame.

 Information: Contains LLC data or control-related info.

 Frame Check Sequence (FCS): Error-checking using a 32-bit cyclic redundancy check (CRC).

 Ending Delimiter (ED): Contains non-data symbols (T) to indicate the frame end.

 Frame Status (FS): Error detection, address recognition, and frame-copied indicators.
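As an illustration of the FCS field above, a 32-bit CRC can be computed and checked with Python's `zlib.crc32`, which uses the common CRC-32 polynomial; FDDI's exact bit ordering and the precise set of covered fields are not modeled here:

```python
# Sender computes a CRC-32 over the protected fields and appends it as
# the FCS; the receiver recomputes and compares to detect corruption.
import zlib

payload = b"FC+DA+SA+INFO bytes covered by the FCS"  # placeholder bytes
fcs = zlib.crc32(payload) & 0xFFFFFFFF               # 32-bit FCS value

# Receiver side: recompute and compare.
ok = (zlib.crc32(payload) & 0xFFFFFFFF) == fcs
corrupted = (zlib.crc32(payload + b"!") & 0xFFFFFFFF) == fcs
print(ok, corrupted)  # True False
```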

Token Frame Format

A token frame is used to control network access and contains:

 Preamble

 Starting Delimiter (SD)

 Frame Control (FC) (Indicates the token type)

 Ending Delimiter (ED)

👉 The token must be captured by a station before it can send data, ensuring fair access to the network.
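A toy representation of the token frame's four fields can make the layout concrete. The field encodings below are simplified placeholders, not FDDI's actual 4B/5B symbol coding on the wire:

```python
# Minimal stand-in for the token frame: Preamble, SD, FC, ED.
from collections import namedtuple

TokenFrame = namedtuple("TokenFrame", "preamble sd fc ed")

def make_token():
    # "JK" and "T" stand in for the non-data delimiter symbols.
    return TokenFrame(preamble="IDLE" * 4, sd="JK", fc="TOKEN", ed="T")

def is_token(frame):
    # The Frame Control field indicates the token type.
    return frame.fc == "TOKEN"

tok = make_token()
print(is_token(tok))  # True
```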

Comparison with IEEE 802.5

 Speed: IEEE 802.5 runs at 4 or 16 Mbps; FDDI runs at 100 Mbps.

 Medium: IEEE 802.5 uses copper wires; FDDI uses fiber optics.

 Addressing: IEEE 802.5 uses 16-bit or 48-bit addresses (in separate networks); FDDI supports both 16- and 48-bit addresses in the same network.

 Priority handling: IEEE 802.5 uses priority & reservation bits; FDDI has no priority bits (capacity is managed differently).

 Frame structure: IEEE 802.5 has no preamble; FDDI adds a preamble for better synchronization.



Conclusion

 FDDI improves upon IEEE 802.5 by offering higher speeds (100 Mbps), better synchronization,
and more flexible addressing.

 It uses a token-passing mechanism but with enhancements like preamble and dual addressing
formats.

 Ideal for high-speed and long-distance networking, primarily used in backbone networks.

MAC Protocol Differences from IEEE 802.5

1. Token Seizure: A station captures the token by absorbing it (ceasing to repeat it) instead of
flipping a bit, due to FDDI's high speed.

2. Early Token Release: A station releases a new token right after transmission without waiting for
its own frame to return.

Operation & Frame Handling

 A station transmits a frame after seizing the token.

 Multiple frames can circulate the ring at once.

 Each station absorbs its own frames.

 The Frame Status (FS) field helps determine errors, whether the destination recognized its address, and whether the frame was copied.

Capacity Allocation in FDDI: An Easy Explanation

In a network, managing how data is sent efficiently is crucial, especially when multiple devices (or
stations) are connected. FDDI (Fiber Distributed Data Interface) is a high-speed network that uses a
special method to allocate network capacity so that all devices can communicate smoothly.

Why is Capacity Allocation Important?

In a network, some devices need to send continuous data (like video streaming or voice calls), while
others send data occasionally (like emails or web browsing). To balance both needs, FDDI divides
network traffic into two types:

1. Synchronous Traffic: Important, time-sensitive data that must be delivered on time. Each
station is given a fixed amount of capacity (SAi) to send this data.

2. Asynchronous Traffic: Data that can wait, such as file downloads or web browsing. It uses any
leftover capacity not used by synchronous traffic.

How Does It Work?

To keep track of time and ensure fair usage, FDDI uses a Token-Passing Mechanism:
 A token (a special signal) circulates in the network. A station can only send data when it
receives the token.

 A Target Token Rotation Time (TTRT) is set, which is the time within which the token should
ideally complete one full cycle around all stations.

Each station has three timers to manage network capacity:

1. Token-Rotation Timer (TRT): Keeps track of how long it takes for the token to return. If it takes
too long, it means the network is getting congested.

2. Token-Holding Timer (THT): Decides how long a station can send asynchronous frames before
releasing the token.

3. Late Counter (LC): Counts how many times a station has waited too long for the token. If the
count goes too high, the network takes corrective action.

What Happens When a Station Gets the Token?

 If the token arrives on time, the station:

1. Sends synchronous frames for the time allocated (SAi).

2. If there's time left, it sends asynchronous frames.

 If the token arrives late, the station:

o Can only send synchronous frames and must wait for another turn to send
asynchronous data.

This method ensures that the network remains efficient and fair, preventing any one station from
overloading it.
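The timed-token rule above can be sketched as follows. This is a simplification (real FDDI timer handling is more involved), and `on_token_arrival` is an invented helper; TTRT and the argument values are illustrative:

```python
# On token arrival: if the token is early, send synchronous traffic
# (SAi) and spend the remaining slack (THT) on asynchronous frames;
# if it is late, synchronous traffic only.

TTRT = 100.0  # target token rotation time, in frame times (assumed)

def on_token_arrival(trt_elapsed, sa_i, async_queue_time):
    """Return (sync_time, async_time) a station may use this turn."""
    if trt_elapsed <= TTRT:
        tht = TTRT - trt_elapsed          # token-holding time (slack)
        return sa_i, min(async_queue_time, tht)
    return sa_i, 0.0                      # token late: no async this turn

print(on_token_arrival(trt_elapsed=60.0, sa_i=20.0, async_queue_time=50.0))
# (20.0, 40.0) -- early token: 40 frame times of slack for async data
print(on_token_arrival(trt_elapsed=120.0, sa_i=20.0, async_queue_time=50.0))
# (20.0, 0.0) -- late token: synchronous frames only
```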

Example of Token Circulation

Imagine a ring network with 4 stations:

 Each station has a synchronous allocation of 20 frame times.

 The total token circulation time (TTRT) is set to 100 frame times.

 Initially, the token moves quickly, but as stations send data, the time increases.

 Eventually, the network stabilizes, ensuring a balance between synchronous and asynchronous
traffic.

Why is This Method Better?

1. Guarantees Fairness: Ensures every station gets a fair share of the network.

2. Handles Mixed Traffic: Supports both time-sensitive and regular data efficiently.

3. Prevents Congestion: Keeps the network from getting too slow or overloaded.
In summary, FDDI’s capacity allocation system is designed to ensure smooth and efficient
communication by giving priority to essential data while still allowing other data to be transmitted
when possible.
