Computer Network Module 1
A Local Area Network (LAN) is a system that connects multiple devices to communicate and share
resources efficiently. The design of a LAN follows a structured model with different layers that handle
data transmission and network access.
LANs follow the OSI model, which includes seven layers, but the most critical ones for LAN design are:
Physical Layer: Defines the hardware and transmission medium (cables, wireless signals, etc.).
Data Link Layer: Ensures reliable data transfer between devices and is divided into:
o Logical Link Control (LLC): Manages communication, flow, and error control.
o Medium Access Control (MAC): Governs how multiple devices share the network.
The MAC layer is separated because different networks use different methods to control access, such as
Ethernet (CSMA/CD) or Token Ring.
1. User data moves from the Application Layer down through the TCP/IP layers and then to the LLC layer.
2. The LLC Layer adds control information, forming an LLC Protocol Data Unit (PDU).
3. The MAC Layer adds more details to create a MAC Frame, which is transmitted over the
network.
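The three encapsulation steps above can be sketched in Python. The header layouts and field values here are purely illustrative, not the real LLC/MAC encodings:

```python
# Illustrative sketch of LAN encapsulation: user data is wrapped into an
# LLC PDU, which is then wrapped into a MAC frame. Header contents are
# simplified stand-ins, not real protocol formats.

def llc_encapsulate(user_data: bytes, dsap: int = 0xAA, ssap: int = 0xAA) -> bytes:
    """Prepend a minimal LLC header (DSAP, SSAP, control byte)."""
    control = 0x03  # unnumbered information (UI) frame
    return bytes([dsap, ssap, control]) + user_data

def mac_encapsulate(llc_pdu: bytes, dst: bytes, src: bytes) -> bytes:
    """Prepend MAC addresses and append a placeholder 4-byte CRC field."""
    crc = (0).to_bytes(4, "big")  # real MAC layers compute a CRC-32 here
    return dst + src + llc_pdu + crc

frame = mac_encapsulate(llc_encapsulate(b"hello"), b"\x01" * 6, b"\x02" * 6)
print(len(frame))  # 6 + 6 + 3 + 5 + 4 = 24 bytes
```

Each layer only prepends (or appends) its own control information and treats everything from the layer above as opaque payload.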
The IEEE 802 committee developed widely used LAN standards, later adopted by ISO (International
Organization for Standardization). These standards define different transmission mediums, such as:
Coaxial cables
Optical fiber
Wireless LANs (Wi-Fi, DQDB, etc.) – Use different protocols for wireless communication.
Conclusion
LAN design ensures efficient and reliable communication by organizing data transmission through
structured layers. The IEEE 802 standards provide various networking options, allowing flexibility in how
data is transmitted and accessed across different mediums.
LAN Topologies (Simple Explanation)
A LAN topology is the way computers and devices are connected in a network. There are four common
types: bus, tree, ring, and star.
1. Bus Topology
All devices are connected to a single main cable (called the bus).
Devices send data to everyone, but only the intended recipient reads it.
Example: Like a public announcement (PA) system where one person speaks, and everyone
listens.
🔹 Advantages:
✅ Simple and cheap to set up
✅ Uses less cable
✅ Works well for small networks
🔹 Disadvantages:
❌ If the main cable breaks, the entire network stops working
❌ Only one device can send data at a time (or else signals will collide)
2. Tree Topology
Similar to a bus topology but with branches.
🔹 Advantages:
✅ Can cover a larger area
✅ Easier to expand than a bus network
🔹 Disadvantages:
❌ If the main cable fails, the entire network is affected
❌ Requires more cable than a simple bus
3. Ring Topology
Devices are connected in a closed loop, and data travels around the ring from one device to the next.
Each device checks if the message is for them (by looking at the address).
To prevent collisions, devices take turns sending messages. This is like raising your hand in class before
speaking.
How It Works
The frame circulates through all devices until it reaches the correct one.
The destination device copies the data and lets it continue moving.
When the frame returns to the sender, it is removed from the ring.
Each device has a repeater, which boosts the signal before passing it to the next device.
Since all devices share the ring, they must take turns sending data.
❌ Single point of failure → If one device or link fails, the whole network stops.
❌ Slower than Star Topology → Data must pass through multiple devices.
❌ Difficult to expand → Adding or removing a device disrupts the network.
🔹 Used in older network designs, especially in token-based networks like Token Ring.
🔹 Some fiber-optic networks use a dual-ring setup for redundancy.
Star Topology Summary
In a star topology, all devices connect to a central node (hub or switch) via individual links for
transmission and reception.
1. Broadcast Mode: The central node forwards incoming data to all connected devices, making it
logically similar to a bus topology. Only one device can transmit at a time.
2. Switching Mode: The central node buffers and directs data only to the intended recipient,
improving efficiency and reducing collisions.
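The two central-node modes can be sketched as a toy forwarding function (class and method names are made up for illustration):

```python
# Sketch of the two central-node modes: broadcast (hub-like) floods a
# frame to every other port; switching learns source addresses and
# delivers only to the intended recipient once its port is known.

class CentralNode:
    def __init__(self, switching: bool):
        self.switching = switching
        self.table = {}  # MAC address -> port, learned from source addresses

    def forward(self, in_port, src, dst, ports):
        self.table[src] = in_port  # learn which port src lives on
        if self.switching and dst in self.table:
            return [self.table[dst]]               # deliver only to the recipient
        return [p for p in ports if p != in_port]  # flood like a hub

ports = [1, 2, 3, 4]
hub = CentralNode(switching=False)
print(hub.forward(1, "A", "B", ports))  # [2, 3, 4] — broadcast to all others

sw = CentralNode(switching=True)
sw.forward(2, "B", "A", ports)          # switch learns B is on port 2
print(sw.forward(1, "A", "B", ports))   # [2] — directed delivery, no flooding
```

A real switch also buffers frames, which is why several devices can transmit at once without collisions.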
Medium Access Control (MAC) protocols manage how devices share the network’s transmission capacity.
There are two main approaches to control:
1. Centralized Control: A designated controller grants stations the right to access the medium.
2. Decentralized Control: The stations collectively perform the access-control function, with no central authority.
Capacity can be allocated in two ways:
Synchronous: Fixed, dedicated capacity (e.g., TDM, circuit switching), not ideal for LANs due to
unpredictable traffic.
Asynchronous: Dynamic allocation of capacity based on demand, more efficient for LANs, and
further divided into round robin, reservation, and contention techniques.
In general, asynchronous and decentralized methods offer better scalability and adaptability for modern
networks.
In Round Robin access control, each station takes turns to transmit data. Each station can either transmit
data during its turn or skip it, but once it's done (or skips), the next station in the sequence gets its turn.
The sequence can be controlled either by a central controller or by the stations themselves (distributed).
Efficient for many stations: If many stations have data to send, Round Robin works well because
it ensures each gets a fair chance to transmit.
Not efficient for few stations: If only a few stations have data, it becomes inefficient because
many stations will just pass their turn without transmitting, causing unnecessary overhead.
Stream traffic (e.g., voice calls, file transfers) works well with Round Robin because it requires
continuous transmission.
Bursty traffic (e.g., terminal-host interactions) might not be efficient with Round Robin, as
transmissions are short and sporadic.
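A minimal sketch of the turn-taking, including the wasted turns that make Round Robin inefficient under light load (all names are illustrative):

```python
# Sketch of round-robin access: each station in turn may transmit one
# frame or pass. Stations with nothing to send still consume a turn,
# which is the overhead the notes mention for lightly loaded networks.

from collections import deque

def round_robin(queues):
    """queues: list of per-station frame lists. Yields (station, frame-or-None)."""
    qs = [deque(q) for q in queues]
    while any(qs):
        for i, q in enumerate(qs):
            if q:
                yield i, q.popleft()   # station i uses its turn
            else:
                yield i, None          # station i passes (wasted turn)

events = list(round_robin([["a1", "a2"], [], ["c1"]]))
print(events)  # [(0, 'a1'), (1, None), (2, 'c1'), (0, 'a2'), (1, None), (2, None)]
```

Half of the turns in this example are passes, illustrating why contention schemes suit bursty, lightly loaded traffic better.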
Reservation
For stream traffic, reservation techniques are well suited. In general, for these techniques, time on the
medium is divided into slots, much as with synchronous TDM. A station wishing to transmit reserves
future slots for an extended or even an indefinite period. Again, reservations may be made in a
centralized or distributed fashion.
Contention techniques are used for bursty traffic, where devices need to send small, irregular bursts of
data. In this method, there is no control over which station gets to send data first. Instead, all devices try
to send data at the same time, leading to potential conflicts or collisions, which can cause problems if
many devices try to transmit at once.
Advantages: It's simple and works efficiently when there are only a few devices trying to
transmit at the same time (light to moderate network traffic).
Disadvantages: Performance can become poor if many devices compete to send data
simultaneously, leading to delays or congestion.
The MAC (Medium Access Control) layer handles data transmission and medium access in networking,
using a MAC frame as the Protocol Data Unit (PDU). Here's a summarized breakdown:
o The MAC layer receives data from the LLC (Logical Link Control) layer and is responsible
for managing access to the network medium, as well as transmitting the data.
o Destination MAC Address: Specifies where the frame is to be delivered on the LAN.
o CRC (Cyclic Redundancy Check): Used to detect errors in the transmitted frame. If errors
are found, the MAC layer discards the frame.
3. Error Handling:
o The MAC layer is responsible for detecting errors using the CRC and discarding
erroneous frames.
o The LLC layer is optionally responsible for retransmitting any frames that were not
successfully received.
In summary, the MAC layer formats data into a frame, manages access to the transmission medium,
handles error detection, and works with the LLC layer for error recovery if necessary.
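The CRC behavior described above can be sketched with Python's `zlib.crc32` standing in for the hardware CRC-32:

```python
# Sketch of MAC-layer error detection: append a CRC-32 on transmit,
# recompute it on receive, and silently discard the frame on mismatch.
# Retransmission, if any, is left to the LLC layer.

import zlib

def mac_transmit(payload: bytes) -> bytes:
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def mac_receive(frame: bytes):
    payload, fcs = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != fcs:
        return None  # error detected: MAC discards the frame
    return payload   # frame accepted, passed up to LLC

good = mac_transmit(b"data")
corrupt = bytes([good[0] ^ 0xFF]) + good[1:]
print(mac_receive(good))     # b'data'
print(mac_receive(corrupt))  # None
```

Note the division of labor: the MAC layer only detects and drops bad frames; recovering them is the (optional) job of LLC.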
The Logical Link Control (LLC) layer is a part of the data link layer in LANs. It helps in transmitting data
between two devices without needing an intermediate switch.
2. Works with the MAC layer, which handles physical network access.
LLC Services
LLC helps in addressing devices and managing data exchange. It provides three types of services:
1. Unacknowledged Connectionless Service:
o Used in cases where occasional data loss is acceptable (e.g., sensor readings, monitoring
systems).
2. Connection-Mode Service:
o Sets up a logical connection between two devices before data is exchanged, with flow and error control.
3. Acknowledged Connectionless Service:
o Used for important messages (e.g., emergency alarms) that must be confirmed quickly.
LLC Protocol
LLC is based on the HDLC (High-Level Data Link Control) protocol and follows a similar structure. It has
three types of operations:
Uses different message types (PDUs) for sending, receiving, and managing connections.
In connection-mode service (Type 2), a connection is requested, accepted, used, and then
closed when done.
In acknowledged connectionless service (Type 3), each message is confirmed using small
sequence numbers (0 and 1) to ensure delivery.
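The 1-bit sequence numbering of Type 3 service can be sketched as a toy acknowledged exchange (names and the simulated receiver are illustrative):

```python
# Sketch of acknowledged connectionless (Type 3) delivery: each frame
# carries a 1-bit sequence number (0 or 1) and must be acknowledged
# before the next frame is sent.

def send_type3(frames, deliver):
    """deliver(seq, frame) -> the acknowledged sequence number."""
    seq = 0
    for frame in frames:
        while deliver(seq, frame) != seq:
            pass   # wrong/missing ack: retransmit with the same seq bit
        seq ^= 1   # flip the 1-bit sequence number after a confirmed send

received = []
def receiver(seq, frame):
    received.append(frame)
    return seq  # this sketch always acknowledges correctly

send_type3(["m1", "m2", "m3"], receiver)
print(received)  # ['m1', 'm2', 'm3']
```

One bit suffices because only a single frame is outstanding at a time, so the receiver only needs to tell a new frame from a duplicate.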
Conclusion
LLC helps LAN devices communicate efficiently, providing options based on reliability and speed needs.
It ensures flexibility and efficiency while working with the MAC layer to manage network access. 🚀
IEEE 802
Physical:
• Encoding/decoding
• Preamble generation/removal
• Bit transmission/reception
ALOHA is a random access protocol used for wireless and satellite communication. It allows multiple
devices to share a common channel without coordination.
Types of ALOHA
1. Pure ALOHA
2. Slotted ALOHA
Applications: Early packet radio (the original ALOHAnet) and satellite communication links.
Limitations
Low efficiency compared to modern MAC protocols like CSMA/CD (Ethernet) and CSMA/CA (Wi-Fi).
Pure ALOHA:
1. A station transmits whenever it has a frame to send.
2. If a collision occurs, the sender waits for a random time and retransmits.
Slotted ALOHA:
1. Time is divided into fixed slots equal to the frame transmission time.
2. Stations may begin transmitting only at the start of a slot.
3. If a collision occurs, the sender waits for a random time and retries in the next slot.
✅ Slotted ALOHA reduces collisions by allowing transmission only at specific time slots. 🚀
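The classic throughput formulas make the difference concrete: with offered load G (frame attempts per frame time), pure ALOHA achieves S = G·e^(−2G) while slotted ALOHA achieves S = G·e^(−G).

```python
# Theoretical ALOHA throughput. Pure ALOHA peaks at G = 0.5 with about
# 18.4% channel utilization; slotted ALOHA peaks at G = 1.0 with about
# 36.8%, because slotting halves the vulnerable period.

import math

def pure_aloha(G):
    return G * math.exp(-2 * G)

def slotted_aloha(G):
    return G * math.exp(-G)

print(round(pure_aloha(0.5), 3))     # 0.184
print(round(slotted_aloha(1.0), 3))  # 0.368
```

Both peaks are far below 100%, which is exactly the "low efficiency" limitation noted above and the motivation for carrier sensing (CSMA).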
CSMA is a network access method used in shared communication channels where multiple devices
compete to transmit data. The key idea is "listen before talk" to reduce collisions.
4. If a collision occurs (in some CSMA variants), the device takes corrective action.
Non-Persistent CSMA
Non-Persistent CSMA is a collision avoidance technique where a device checks the channel before
transmitting and waits a random time if the channel is busy, instead of continuously sensing it.
3. If busy: The device waits a random time before rechecking the channel.
Advantages
Overview
1-Persistent CSMA is a medium access control (MAC) protocol where a device continuously senses the
communication channel and transmits immediately when the channel becomes idle. It is called "1-
Persistent" because the device has a 100% (i.e., 1.0 probability) chance of transmitting as soon as the
channel is free.
This protocol is aggressive and works well in low-traffic networks, but in high-traffic conditions, it
increases the risk of collisions due to multiple devices trying to send data at the same time.
o The device keeps sensing continuously (i.e., it does not wait randomly).
4. If a collision occurs:
o Since multiple devices may try to transmit at the same time (as they all sense the idle
channel), collisions can happen.
o The device then waits for a random backoff time before reattempting transmission.
✅ Lower waiting time: Since devices transmit immediately when the channel is idle, there is minimal
delay.
✅ Efficient in low-traffic conditions: When there are few devices, collisions are rare, and the protocol
works efficiently.
p-Persistent CSMA – Detailed Explanation
Overview
p-Persistent CSMA is a probabilistic channel access method used in slotted time networks to reduce
collisions while maintaining efficiency. It is a hybrid of 1-Persistent and Non-Persistent CSMA, offering a
balance between collision probability and channel utilization.
It is mainly used in slotted-time systems like Wi-Fi (IEEE 802.11), where time is divided into slots, and
transmission decisions are based on probability.
1. When a station has data to send, it senses the channel.
2. If idle, the device transmits with probability p.
o With probability (1 - p), the device waits for the next time slot and checks again.
3. If busy, the device waits until the next time slot and repeats the process.
4. If a collision occurs, the device applies a random backoff time and retries after some time.
Similar to 1-Persistent CSMA → It listens continuously and transmits if the channel is idle.
Similar to Non-Persistent CSMA → It does not always transmit immediately, reducing collision
chances.
Unique Feature → Uses a probability factor (p) to decide when to transmit, balancing speed and
efficiency.
Advantages
Disadvantages
For stability, the expected number of waiting stations n and the probability p should satisfy np < 1, so that on average fewer than one station attempts to transmit in each slot.
Dynamic p adjustment is used in some networks (e.g., Wi-Fi) to optimize performance based on
network load. 🚀
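The per-slot decision can be sketched as follows (a simplified model; real 802.11 access adds backoff windows and inter-frame spacing on top of this idea):

```python
# Sketch of p-persistent CSMA's per-slot decision: when the channel is
# idle, transmit with probability p; otherwise defer to the next slot.

import random

def p_persistent_decision(channel_idle: bool, p: float) -> str:
    if not channel_idle:
        return "wait"      # busy: wait for the next slot and sense again
    return "transmit" if random.random() < p else "defer"

random.seed(0)  # fixed seed so the sketch is reproducible
decisions = [p_persistent_decision(True, 0.3) for _ in range(1000)]
share = decisions.count("transmit") / 1000
print(round(share, 2))  # close to p = 0.3
```

Lowering p trades some idle time for a smaller chance that several waiting stations fire in the same slot, which is how the np < 1 condition keeps the channel stable.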
CSMA/CD
CSMA/CD (Carrier Sense Multiple Access with Collision Detection) is used in wired Ethernet
(IEEE 802.3) to detect and manage collisions.
Steps:
3. Collision Detection (CD) – If a collision occurs, stop and send a jamming signal.
4. Backoff & Retry – Wait for a random time (Binary Exponential Backoff) before
retransmitting.
After the 10th collision, the wait time stops increasing and remains at a maximum limit.
After the 16th collision, the device gives up and drops the packet (no retransmission).
✅ Used in Ethernet (CSMA/CD) and Wi-Fi (CSMA/CA) to reduce repeated collisions and improve
network efficiency. 🚀
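The backoff rule above can be sketched directly (slot counts only; frame timing is omitted):

```python
# Sketch of truncated binary exponential backoff: after the n-th
# collision, wait a random number of slot times in [0, 2^k - 1] with
# k = min(n, 10); after 16 collisions, give up and drop the frame.

import random

def backoff_slots(n_collisions: int):
    if n_collisions > 16:
        return None               # give up: the packet is discarded
    k = min(n_collisions, 10)     # cap the window after the 10th collision
    return random.randint(0, 2**k - 1)

print(backoff_slots(1) in range(0, 2))      # True: wait 0 or 1 slots
print(backoff_slots(12) in range(0, 1024))  # True: window capped at 2^10
print(backoff_slots(17))                    # None: frame dropped
```

Doubling the window on each collision spreads retries out quickly, so repeated collisions between the same stations become increasingly unlikely.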
Ethernet
Topology: Bus.
The 100BASE-T family includes multiple standards based on different cabling types and signaling
methods, as shown in the diagram.
100BASE-T Categories:
100BASE-TX
o Requires two twisted-pair cables (one for sending, one for receiving).
100BASE-FX
100BASE-T4
Used in networks where only Category 3 cables are available, as it doesn't require all pairs to
support high-speed signaling.
Less common today since 100BASE-TX (with Cat 5) became the standard.
Conclusion:
100BASE-T4 was an alternative for older Cat 3 cabling but is now obsolete.
Half-duplex Ethernet: Can either transmit or receive, using CSMA/CD to handle collisions.
Full-duplex Ethernet: Allows simultaneous transmission and reception, effectively doubling the
data rate.
Benefits: Higher efficiency, no collisions, lower latency, and better performance in modern
networks. 🚀
Fast Ethernet allows both old (10 Mbps) and new (100 Mbps) networks to work together.
Example Setup:
o Hubs link to switching hubs that support both 10 Mbps and 100 Mbps.
o Powerful computers and servers connect directly to fast switches (10/100 Mbps).
o 100 Mbps hubs act as the main network backbone and connect to a router for
internet/WAN access.
🔹 Benefit: Allows older and newer devices to work together smoothly while upgrading to faster speeds.
🚀
Gigabit Ethernet (GbE) is a high-speed networking standard that provides 1 Gbps (1000 Mbps) data
transfer rates. It is widely used in enterprise networks, data centers, and home networking for fast and
reliable communication.
🔹 Key Takeaway: Gigabit Ethernet supports both fiber optic and copper cables, offering flexibility for
short- and long-distance networking needs. 🚀
Overview
IEEE 802.5 Token Ring is a MAC protocol that uses a token-passing technique for network access. It
operates in a round-robin fashion, ensuring fair access to all stations.
How It Works:
A small frame (token) continuously circulates the network when no station is transmitting.
A station wishing to transmit waits for the token, seizes it, and converts it into the start of a data frame before appending its data.
The frame travels around the ring and is received by the intended station.
A new token is generated after transmission is complete and the original frame returns.
Advantages:
Disadvantages:
❌ Token Maintenance Required – Loss or duplication of the token disrupts the network.
❌ Delays Under Light Load – Stations must wait for the token.
❌ Requires a Monitor Station – To prevent token loss and duplication.
🔹 Conclusion: The token ring protocol ensures fair access and efficiency under heavy loads, but it
requires strict token management to function properly. 🚀
The IEEE 802.5 MAC frame consists of several fields used for controlling data transmission in a Token
Ring network.
📌 Frame Structure
Format: PPPTMRRR
o PPP (Priority Bits): The current priority level of the token.
o T (Token Bit): Distinguishes a token (0) from a data frame (1).
o M (Monitor Bit): Set by the monitor station to detect endlessly circulating frames.
o RRR (Reservation Bits): Allows stations to reserve a token for later use.
Frame Control (FC): Defines the type of frame (LLC data frame or MAC control frame).
Destination Address (DA) and Source Address (SA): Contain the MAC addresses of the receiving and sending stations.
The 802.5 Token Ring network includes an optional priority mechanism to control which stations get
access to the token first. This is done using priority fields and reservation fields in the data frame and
token.
Key Terms:
Priority of Frame (Pf): The priority level of a frame a station wants to transmit.
o A station can only transmit if the token priority (Ps) is less than or equal to its frame
priority (Pf).
2. Reserving a Future Token:
o It checks passing frames/tokens and, if the reservation field is lower than its priority (R
< Pf), it updates it to R = Pf.
o When a station gets a token with priority ≤ Pf, it seizes the token, sets the reservation
field to 0, and transmits its frames.
o After transmitting, the station issues a new token with priority and reservation fields
based on rules in Table 13.3.
o If a station raises the token priority, it must later lower it back when no higher-priority
traffic exists.
Example Scenario
Thus, the priority mechanism ensures higher-priority frames get transmitted first, while avoiding
priority "lock" by returning to lower priority levels when no high-priority frames remain.
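The core seize-or-reserve decision can be sketched as a single per-station function (a simplification; the full rules for issuing a new token are those of Table 13.3):

```python
# Sketch of the 802.5 priority mechanism at token arrival: seize the
# token only if its priority is no higher than the waiting frame's
# priority (Pf >= Ps); otherwise bump the reservation field when it is
# lower than Pf, reserving a future token.

def on_token(token_priority: int, reservation: int, frame_priority: int):
    """Returns (action, new_reservation_field)."""
    if frame_priority >= token_priority:
        return "seize", 0                  # transmit; reservation reset to 0
    if reservation < frame_priority:
        return "pass", frame_priority      # reserve a future token (R = Pf)
    return "pass", reservation             # someone else holds a higher bid

print(on_token(token_priority=2, reservation=0, frame_priority=3))  # ('seize', 0)
print(on_token(token_priority=6, reservation=1, frame_priority=4))  # ('pass', 4)
```

The stack-based restoration rules (Sx) described below exist precisely so that a station that raised the token priority later lowers it again.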
R (Reservation Value in Token): The highest reservation value set in the last token rotation.
Rr (Received Reservation Value): The highest reservation value observed by the station from frames in
the last token rotation.
Sx (Priority Stack Top): Stores previous priority values to allow priority restoration.
THT (Token Holding Time): Maximum time a station can hold the token before releasing it.
If the station has data to send and the token priority is low enough for the frame’s priority, it
sends the frame.
✅ Example: If the token has priority 2 and the frame needs priority 3, the station can send the
frame because 3 ≥ 2.
Condition 2: No frame to send OR token time expired, AND Pr ≥ max(Rr, Pf) → Send token with new
priority
If the station has no data or its time with the token is up, and its reservation is the highest so
far, it updates the token priority and passes it on.
✅ Example: If the station reserved priority 5 earlier and no one else has a higher request, the
token is set to priority 5 and sent.
Condition 3: No frame OR time expired, AND Pr is lower than the highest reservation max(Rr, Pf),
AND Pr > Sx
If the station did not reserve the highest priority, but its reservation is still higher than a stored
priority, it:
✅ Example: If a station previously reserved priority 4 but another station requested priority 6, it
updates the token to priority 6 and saves priority 4 for later.
Condition 4: No frame OR time expired, AND Pr is lower than the highest reservation max(Rr, Pf),
AND Pr = Sx
This is similar to Condition 3, but if the reservation matches the last stored priority, it:
✅ Example: If the station had priority 4 in memory and the new token also has priority 4, it
removes the stored value and updates the token.
If the station still holds a high priority from memory, but another station requested an even higher
priority, it:
✅ Example: If a station was waiting with priority 4, but another station requested priority 5, it
updates the token to priority 5 and saves the change.
If the station’s priority matches what was stored, but no higher priority requests exist, it:
✅ Example: If a station stored priority 3 and no new high-priority requests exist, it restores
priority 3 and passes the token.
Key Takeaways
4. If a station requests priority but sees a higher one, it updates the token accordingly.
Standard Token Release: A station waits for its transmitted frame to completely circulate the
ring and return before sending the next token.
Early Token Release (ETR): A station releases the token immediately after finishing frame
transmission, without waiting for the frame header to return.
Early Token Release (ETR) is a feature in Token Ring (IEEE 802.5) networks that allows a station to
release the token immediately after transmitting a frame, instead of waiting for the frame header to
return.
Key Benefits:
Potential Drawbacks:
⚠ Priority handling may be affected, as reservations are ignored until the next token cycle.
⚠ Can increase access delay for high-priority traffic in busy networks.
Compatibility:
ETR stations can coexist with non-ETR stations, making it a flexible upgrade for improving network
performance.
FDDI (Fiber Distributed Data Interface) Medium Access Control (MAC) Summary
FDDI is a token ring protocol similar to IEEE 802.5, but designed for higher-speed networks (100 Mbps)
and supports both Local Area Networks (LANs) and Metropolitan Area Networks (MANs). It provides
high reliability and efficient data transfer over fiber optic cables.
1. Higher Speed: Operates at 100 Mbps, whereas IEEE 802.5 runs at 4 or 16 Mbps.
2. Frame Format: Uses symbols (4-bit chunks) instead of individual bits for efficiency.
3. Addressing: Supports both 16-bit and 48-bit addresses in the same network.
5. No Priority & Reservation Bits: Unlike 802.5, FDDI manages capacity differently.
Field Descriptions:
Preamble: Synchronizes the receiving station's clock before the frame arrives.
Starting Delimiter (SD): Marks the beginning of a frame (coded as JK non-data symbols).
Frame Control (FC): Defines the frame type (synchronous/asynchronous, control, reserved).
Destination Address (DA): Specifies where the frame is going (unicast, multicast, broadcast).
Frame Check Sequence (FCS): Error-checking using a 32-bit cyclic redundancy check (CRC).
Ending Delimiter (ED): Contains non-data symbols (T) to indicate the frame end.
Frame Status (FS): Error detection, address recognition, and frame copied indicators.
👉 The token must be captured by a station before it can send data, ensuring fair access to the network.
Comparison with IEEE 802.5:
Addressing: IEEE 802.5 uses 16-bit or 48-bit addresses (in separate networks); FDDI supports both 16- and 48-bit addresses in the same network.
Priority Handling: IEEE 802.5 uses priority & reservation bits; FDDI has no priority bits (capacity is managed differently).
Conclusion
FDDI improves upon IEEE 802.5 by offering higher speeds (100 Mbps), better synchronization,
and more flexible addressing.
It uses a token-passing mechanism but with enhancements like preamble and dual addressing
formats.
Ideal for high-speed and long-distance networking, primarily used in backbone networks.
1. Token Seizure: A station captures the token by stopping its transmission instead of flipping a bit,
due to FDDI's high speed.
2. Early Token Release: A station releases a new token right after transmission without waiting for
its own frame to return.
The Frame Status (FS) field helps determine whether errors occurred, whether the frame's address was
recognized, and whether the frame was copied by the destination station.
Capacity Allocation in FDDI: An Easy Explanation
In a network, managing how data is sent efficiently is crucial, especially when multiple devices (or
stations) are connected. FDDI (Fiber Distributed Data Interface) is a high-speed network that uses a
special method to allocate network capacity so that all devices can communicate smoothly.
In a network, some devices need to send continuous data (like video streaming or voice calls), while
others send data occasionally (like emails or web browsing). To balance both needs, FDDI divides
network traffic into two types:
1. Synchronous Traffic: Important, time-sensitive data that must be delivered on time. Each
station is given a fixed amount of capacity (SAi) to send this data.
2. Asynchronous Traffic: Data that can wait, such as file downloads or web browsing. It uses any
leftover capacity not used by synchronous traffic.
To keep track of time and ensure fair usage, FDDI uses a Token-Passing Mechanism:
A token (a special signal) circulates in the network. A station can only send data when it
receives the token.
A Target Token Rotation Time (TTRT) is set, which is the time within which the token should
ideally complete one full cycle around all stations.
1. Token-Rotation Timer (TRT): Keeps track of how long it takes for the token to return. If it takes
too long, it means the network is getting congested.
2. Token-Holding Timer (THT): Decides how long a station can send asynchronous frames before
releasing the token.
3. Late Counter (LC): Counts how many times a station has waited too long for the token. If the
count goes too high, the network takes corrective action.
o Can only send synchronous frames and must wait for another turn to send
asynchronous data.
This method ensures that the network remains efficient and fair, preventing any one station from
overloading it.
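The timer logic can be sketched as the decision a station makes when the token arrives (a simplified model of the TRT/THT rules; names are illustrative):

```python
# Sketch of FDDI capacity allocation: a station may always send its
# synchronous allocation (SAi), but asynchronous frames are allowed
# only with the time left over when the token arrives early, i.e. when
# the measured rotation time is below the target TTRT.

def on_token_arrival(ttrt: float, trt: float, sa_i: float):
    """trt: measured time since the token last visited this station.
    Returns (synchronous_budget, asynchronous_budget)."""
    if trt >= ttrt:
        return sa_i, 0.0        # token is late: synchronous traffic only
    tht = ttrt - trt            # Token-Holding Time: leftover capacity
    return sa_i, tht            # sync allocation plus async up to THT

print(on_token_arrival(ttrt=100.0, trt=60.0, sa_i=10.0))   # (10.0, 40.0)
print(on_token_arrival(ttrt=100.0, trt=120.0, sa_i=10.0))  # (10.0, 0.0)
```

Because a congested ring makes the token late everywhere, asynchronous budgets shrink automatically, which is how FDDI protects time-sensitive synchronous traffic.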
The total token circulation time (TTRT) is set to 100 frame times.
Initially, the token moves quickly, but as stations send data, the time increases.
Eventually, the network stabilizes, ensuring a balance between synchronous and asynchronous
traffic.
1. Guarantees Fairness: Ensures every station gets a fair share of the network.
2. Handles Mixed Traffic: Supports both time-sensitive and regular data efficiently.
3. Prevents Congestion: Keeps the network from getting too slow or overloaded.
In summary, FDDI’s capacity allocation system is designed to ensure smooth and efficient
communication by giving priority to essential data while still allowing other data to be transmitted
when possible.