Unit 3
COMPUTER NETWORKS
Switching in Computer Networks refers to the process of directing data from one device to
another within a network. This process involves using devices called switches, which are
responsible for determining the path that data takes to reach its destination. The primary goal
of switching is to efficiently transmit data across the network while minimizing delays and
maximizing the use of available bandwidth.
There are two primary types of switching used in computer networks: Circuit Switching and
Packet Switching.
Circuit Switching:
Circuit switching is a method where a dedicated communication path is established between the sender and the receiver before any data is transferred, and that path is reserved for the entire duration of the session (as in a traditional telephone call).
Disadvantages:
Inefficient use of resources: The dedicated path remains idle if there is no data being sent.
Setup time: The establishment of a connection takes time.
Packet Switching:
Packet switching is a method where data is broken into smaller units called packets. Each
packet is transmitted independently across the network and may take different paths to reach
the destination. Once all packets arrive at the destination, they are reassembled to form the
original data.
Advantages:
Efficient use of bandwidth, since multiple communication sessions can share the same network resources.
More robust and flexible: if one path is congested or fails, packets can be rerouted dynamically.
Disadvantages:
Variable delay: since packets can take different routes, the time it takes for each packet to reach the destination may vary.
Requires additional overhead for reassembling packets and managing data flow.
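As a rough illustration of the idea (not any particular protocol), the Python sketch below breaks a message into fixed-size packets, lets them arrive out of order, and reassembles them using sequence numbers. The packet size and field layout are assumptions chosen only for this example.

```python
import random

PACKET_SIZE = 4  # bytes of payload per packet (assumed value, for illustration)

def packetize(message: bytes):
    """Split a message into (sequence_number, payload) packets."""
    return [(seq, message[i:i + PACKET_SIZE])
            for seq, i in enumerate(range(0, len(message), PACKET_SIZE))]

def reassemble(packets):
    """Reorder packets by sequence number and rebuild the original message."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"packet switching splits data into packets"
packets = packetize(message)
random.shuffle(packets)          # packets may arrive out of order via different paths
assert reassemble(packets) == message
print(f"{len(packets)} packets reassembled correctly")
```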
Summary:
Pure ALOHA and Slotted ALOHA were early random access protocols.
In Pure ALOHA, a station transmits at any time and waits for an acknowledgment. If
no acknowledgment is received, it retransmits after a random period.
In Slotted ALOHA, time is divided into slots, and stations must transmit at the
beginning of a slot. This reduces the chances of collision.
Performance of ALOHA: Pure ALOHA achieves a maximum channel utilization of only 1/(2e) (about 18.4%), while Slotted ALOHA doubles this to 1/e (about 36.8%). In both cases the system often experiences collisions, leading to inefficiency.
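These figures come from the classic throughput formulas S = G·e^(−2G) for Pure ALOHA and S = G·e^(−G) for Slotted ALOHA, where G is the offered load in frames per frame time. A small Python check of the maxima:

```python
import math

def pure_aloha_throughput(G: float) -> float:
    """Pure ALOHA: S = G * e^(-2G); the vulnerable period is two frame times."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G: float) -> float:
    """Slotted ALOHA: S = G * e^(-G); the vulnerable period is one slot."""
    return G * math.exp(-G)

# The maxima occur at G = 0.5 and G = 1.0 respectively.
print(f"Pure ALOHA max    ~ {pure_aloha_throughput(0.5):.3f}  (1/(2e) ~ 18.4%)")
print(f"Slotted ALOHA max ~ {slotted_aloha_throughput(1.0):.3f}  (1/e ~ 36.8%)")
```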
CSMA improves upon ALOHA by having stations listen to the channel before transmitting.
If the channel is idle, the station transmits its data. If it detects that the channel is busy, it
waits until the channel becomes idle.
a. 1-persistent CSMA:
How it works:
1. A station listens to the channel.
2. If the channel is idle, the station transmits immediately.
3. If the channel is busy, the station waits until it becomes idle, then transmits
immediately.
Problem: Collisions can still occur, especially if two stations wait for the channel to
become idle at the same time and start transmitting simultaneously.
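A minimal sketch of the 1-persistent rule; the channel model and function names are invented for illustration, and a real network card senses the physical medium rather than calling a function:

```python
import random
import time

def channel_is_idle() -> bool:
    """Placeholder for carrier sensing (assumed probability, for illustration only)."""
    return random.random() < 0.7

def transmit(frame: bytes) -> None:
    print(f"transmitting {len(frame)} bytes")

def send_1_persistent(frame: bytes) -> None:
    """1-persistent CSMA: keep sensing and transmit the moment the channel is idle."""
    while not channel_is_idle():
        time.sleep(0.001)    # keep listening while the channel is busy
    transmit(frame)           # transmit immediately -> two waiting stations will collide

send_1_persistent(b"hello")
```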
b. Nonpersistent CSMA:
How it works:
1. A station listens to the channel.
2. If the channel is idle, the station transmits.
3. If the channel is busy, instead of transmitting immediately, the station waits
for a random amount of time before sensing the channel again.
Advantages: Fewer collisions and better channel utilization than 1-persistent CSMA.
Disadvantages: Higher delay because of the random waiting times.
c. p-persistent CSMA:
Key points: When the channel is idle, the station transmits with probability p and defers to the next slot with probability 1 − p. This suits slotted channels and balances collision probability against delay.
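A sketch of the p-persistent decision rule for a slotted channel; the probability value and helper names are assumptions made for this example:

```python
import random

P = 0.3  # transmission probability per idle slot (assumed value)

def p_persistent_decision(channel_idle: bool) -> str:
    """p-persistent CSMA: on an idle slot, transmit with probability p,
    otherwise defer to the next slot; if the channel is busy, keep waiting."""
    if not channel_idle:
        return "wait for next slot"
    return "transmit" if random.random() < P else "defer to next slot"

for slot in range(5):
    print(slot, p_persistent_decision(channel_idle=True))
```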
d. CSMA with Collision Detection (CSMA/CD):
Collision Detection: During transmission, the station listens to the channel. If the signal it hears differs from the signal it is transmitting, it detects a collision and aborts the transmission.
Ethernet: CSMA/CD is the protocol behind classic Ethernet networks.
Imagine two stations, A and B, transmitting data at the same time. If A starts sending and its signal needs τ (the one-way propagation delay across the cable) to reach B, then B may begin its own transmission just before A's signal arrives. A will not learn of the collision until B's signal travels back, which can take up to 2τ after A started.
This 2τ delay means that for CSMA/CD to function correctly, a station must keep transmitting (and listening) for at least 2τ to be sure that the channel is fully seized by its transmission.
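The 2τ requirement translates into a minimum frame length of roughly bandwidth × 2τ, so the sender is still transmitting when the collision news returns. A worked calculation with illustrative numbers (the cable span and propagation speed below are assumptions, though close to classic 10 Mbps Ethernet):

```python
# Minimum frame size needed so a sender is still transmitting after 2*tau.
bandwidth_bps = 10_000_000        # 10 Mbps (classic Ethernet)
cable_length_m = 2_500            # assumed maximum end-to-end span
propagation_speed = 2e8           # roughly 2/3 the speed of light in copper (m/s)

tau = cable_length_m / propagation_speed      # one-way propagation delay (s)
round_trip = 2 * tau                          # worst-case time to learn of a collision
min_frame_bits = bandwidth_bps * round_trip   # bits that must still be "in flight"

print(f"tau = {tau * 1e6:.1f} us, 2*tau = {round_trip * 1e6:.1f} us")
print(f"minimum frame size ~ {min_frame_bits:.0f} bits")
```

Classic Ethernet's actual slot time of 51.2 µs (512 bit times, i.e. a 64-byte minimum frame) is larger than this idealized figure because it also allows for repeater delays.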
1-persistent CSMA: Better than ALOHA but still prone to collisions when two
stations begin transmitting simultaneously.
Nonpersistent CSMA: Reduces collisions but introduces more delay due to random
waiting times.
p-persistent CSMA: Good for slotted systems, balancing between collision
probability and delay.
| Protocol | Transmission on Idle | Collisions | Advantages | Disadvantages |
| --- | --- | --- | --- | --- |
| 1-persistent CSMA | Transmits immediately | Collisions happen if stations transmit simultaneously | Simple and quick | High collision probability |
| Nonpersistent CSMA | Waits a random time | Fewer collisions | Better channel utilization | Higher delay |
| p-persistent CSMA | Transmits with probability p | Reduces collision probability | Good for slotted systems | Slightly more complex |
| CSMA/CD | Detects collision during transmission | Detects and stops if collision happens | Efficient use of bandwidth | Delay in detecting collisions |
Conclusion:
Carrier Sense Multiple Access (CSMA) protocols help manage how multiple stations share a
communication channel. By listening before transmitting, CSMA reduces collisions
compared to ALOHA systems. The different variations (1-persistent, nonpersistent, and p-
persistent) offer trade-offs between collision avoidance and transmission delays. CSMA/CD
further improves performance by detecting collisions during transmission and aborting the
transmission, which is crucial for networks like Ethernet.
This diagram is illustrating how the CSMA/CD (Carrier Sense Multiple Access with
Collision Detection) protocol works in terms of time slots and its states:
Key components of the diagram:
1. Transmission Period (Frame)
o This is when a device successfully transmits a frame (packet of data) over the
medium (e.g., an Ethernet cable).
o During this time, the medium is busy, and other devices should wait.
2. Contention Period
o Once the transmission ends, multiple devices might try to access the channel,
leading to a contention period.
o In this period, devices sense the channel and compete to transmit.
o The slots during contention are called contention slots. If two devices choose
the same slot, a collision might occur, which CSMA/CD will detect.
3. Idle Period
o This is the gap when no frame is being transmitted and no contention is
occurring. The medium is idle here.
Time (t₀)
This point marks the start of a new contention period right after a frame transmission
has finished.
CSMA/CD Process Flow:
1. Carrier Sense: Devices "listen" to check if the medium is free.
2. Collision Detection: If two devices transmit at the same time, a collision is detected.
3. Backoff Algorithm: Devices wait for a random period (backoff) before retrying in the
contention slots.
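Ethernet's backoff step is binary exponential backoff; the sketch below follows that scheme (the slot time constant is the classic 10 Mbps value, and the random choice is per retry attempt):

```python
import random

SLOT_TIME_US = 51.2   # classic 10 Mbps Ethernet slot time, in microseconds

def backoff_delay(collision_count: int) -> float:
    """Binary exponential backoff: after the n-th collision, wait a random
    number of slot times chosen from 0 .. 2^min(n, 10) - 1."""
    k = min(collision_count, 10)
    slots = random.randint(0, 2 ** k - 1)
    return slots * SLOT_TIME_US

for n in range(1, 6):
    print(f"after collision {n}: wait {backoff_delay(n):.1f} us")
```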
How to read the diagram:
The timeline flows left to right.
The network alternates between:
o Transmission periods (when frames are sent)
o Contention periods (when devices compete to send frames)
o Idle periods (when no one is transmitting)
Summary:
CSMA/CD works by coordinating who gets to "talk" on the medium.
Preamble (8 bytes):
o Pattern: 10101010 (except last byte 10101011).
o Purpose: Synchronizes the receiver’s clock.
o Last byte = Start of Frame Delimiter.
Destination Address (6 bytes):
o First bit = 0 for individual, 1 for group (multicast) address.
o All 1s (FF:FF:FF:FF:FF:FF) = Broadcast (all stations receive).
Source Address (6 bytes):
o Globally unique.
o First 3 bytes = OUI (Organizationally Unique Identifier by IEEE).
o Last 3 bytes = Manufacturer-defined.
Type Field (2 bytes) [Ethernet]:
o Indicates network-layer protocol (e.g., IPv4 = 0x0800).
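As a small illustration of the address and type fields just listed, the sketch below builds a 14-byte header and parses it; the source address and payload are made-up values, and the preamble is omitted since it is handled in hardware:

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Split the first 14 bytes into destination MAC, source MAC, and EtherType."""
    dst, src, eth_type = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join(f"{b:02X}" for b in mac)
    return fmt(dst), fmt(src), eth_type

# Example header: broadcast destination, made-up source address, IPv4 (0x0800).
header = bytes.fromhex("FFFFFFFFFFFF" "001122334455") + struct.pack("!H", 0x0800)
dst, src, eth_type = parse_ethernet_header(header + b"payload...")

print("destination:", dst)               # FF:FF:FF:FF:FF:FF -> broadcast
print("source:     ", src)               # first 3 bytes = OUI, last 3 = vendor-assigned
print("type:        0x%04X" % eth_type)  # 0x0800 -> IPv4
```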
Collision Scenario: If two stations sense the channel idle and start transmitting at almost the same time, their signals overlap on the medium and both frames are corrupted.
Jamming Signal: After detecting a collision, the station sends a short jam signal so that every other station also recognizes the collision and discards the damaged frame.
Slot Time: The basic unit of time for collision handling, equal to 2τ (the worst-case round-trip propagation delay); in classic 10 Mbps Ethernet it is 51.2 µs, or 512 bit times.
Backoff Strategy: Binary exponential backoff: after the n-th collision, a station waits a random number of slot times chosen from 0 to 2^min(n, 10) − 1 before retrying.
Purpose:
Reduces delay when few stations are contending.
Avoids indefinite collisions when many stations are competing.
6. Multicasting vs Broadcasting
Broadcasting: All stations accept the frame (no group management needed).
Multicasting: Only a group of stations (requires group management).
What it shows:
This is the real transmission path data follows.
Actual Data Path:
The data is passed down from the Data Link Layer of Host 1 to the Physical Layer,
transmitted over the physical medium (like a wire or wireless channel), and then up to
the Data Link Layer of Host 2.
Illustrated Path:
o Host 1: Layer 2 → Layer 1 (Physical)
o Transmission over medium
o Host 2: Layer 1 → Layer 2
Why it’s important:
This illustrates how actual data movement happens and emphasizes that the Data
Link Layer depends on the Physical Layer to carry bits over the medium.
(Section 3.1.2)
To ensure reliable communication, the Data Link Layer must transform the raw bit stream
provided by the Physical Layer into structured and manageable units called frames.
The Physical Layer simply moves bits from one place to another, but it doesn’t guarantee
error-free delivery. Bits may be flipped, lost, or inserted due to noise in the channel,
especially in wireless or long-distance wired connections.
To manage this:
The Data Link Layer breaks the stream into frames.
Each frame is appended with a checksum (or similar error-detection code).
Upon receiving, the destination computes the checksum again and compares it with
the one received.
If they differ, it indicates an error, and the frame can be discarded or a retransmission
requested.
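A minimal sketch of the checksum idea just described, using CRC-32 as the error-detection code; real link layers use a specific CRC polynomial and frame layout, so the 4-byte trailer here is an assumption for illustration:

```python
import zlib

def add_checksum(payload: bytes) -> bytes:
    """Append a 4-byte CRC-32 over the payload as the frame trailer."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def verify(frame: bytes) -> bool:
    """Recompute the CRC at the receiver and compare with the received trailer."""
    payload, received = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == received

frame = add_checksum(b"hello, link layer")
print(verify(frame))                                  # True: frame accepted

corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]      # flip one bit in transit
print(verify(corrupted))                              # False: discard / request retransmission
```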
However, framing is more complex than it seems because the receiver must be able to:
Clearly identify where each frame starts and ends.
Avoid excessive bandwidth usage for framing information.
🔧 Framing Methods:
1. Byte Count:
o Each frame starts with a byte specifying the total number of bytes in the
frame.
o Receiver reads this byte and counts to determine where the frame ends.
o Problem: If the byte count field is corrupted, synchronization is lost and
recovery becomes difficult.
2. Flag Bytes with Byte Stuffing:
o Special byte (e.g., 01111110) indicates frame boundaries.
o If this byte appears in the data, an escape byte (ESC) is added before it.
3. Flag Bits with Bit Stuffing:
o Uses bit-level flags and inserts a 0 after five consecutive 1s to avoid
misinterpretation.
4. Physical Layer Coding Violations:
o Uses signal patterns not valid for regular data as delimiters.
These framing methods ensure that even in noisy environments, the receiver can correctly
identify and process each frame.
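As an illustration of the flag-byte method, here is a sketch of byte stuffing and unstuffing; the FLAG and ESC values are the conventional ones (01111110 and an escape byte), used only as example constants:

```python
FLAG, ESC = 0x7E, 0x7D   # example values; 0x7E is the 01111110 flag byte from the text

def byte_stuff(payload: bytes) -> bytes:
    """Frame a payload: escape any FLAG/ESC bytes inside the data, add boundary flags."""
    body = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            body.append(ESC)      # insert escape byte before the problematic byte
        body.append(b)
    return bytes([FLAG]) + bytes(body) + bytes([FLAG])

def byte_unstuff(frame: bytes) -> bytes:
    """Remove boundary flags and escape bytes to recover the original payload."""
    body, out, skip = frame[1:-1], bytearray(), False
    for b in body:
        if not skip and b == ESC:
            skip = True           # next byte is data, even if it looks like FLAG/ESC
            continue
        out.append(b)
        skip = False
    return bytes(out)

data = bytes([0x41, FLAG, 0x42, ESC, 0x43])   # data that happens to contain FLAG and ESC
assert byte_unstuff(byte_stuff(data)) == data
```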
📦 3. Framing: Data into Frames
A Frame is a structured unit of data used for transmission. It is made up of a header (addresses and control information), a payload carrying the data handed down by the Network Layer, and a trailer containing the error-detection checksum.
🗈 Diagram:
This shows how byte count-based framing works under normal conditions.
Key Points:
Each frame starts with a byte count, indicating how many bytes follow in that frame
(including the count byte itself).
Example:
o Frame 1 starts with 5 → Total 5 bytes (including count)
o Frame 2 starts with 5
o Frame 3 starts with 8
o Frame 4 starts with 8
Visual Interpretation:
This shows the problem when an error occurs in the count field.
What Happens:
Error occurs in Frame 2’s byte count (was 5, now read as 7).
Receiver assumes Frame 2 has 7 bytes, reads too far.
Frame boundaries become misaligned.
Subsequent frames are misread or lost, leading to resynchronization failure.
Consequences:
| Pros | Cons |
| --- | --- |
| Simple and space-efficient | Highly vulnerable to single-bit errors in the count field |
| Easy to implement | Difficult to recover from synchronization loss |
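A sketch of byte-count framing that reproduces both situations shown above: parsing works when the counts are intact and loses synchronization when a single count byte is corrupted. The frame contents are made-up values for illustration.

```python
def split_frames(stream: bytes):
    """Byte-count framing: the first byte of each frame is its total length
    (count byte included). Returns the frames found in the stream."""
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]
        frames.append(stream[i:i + count])
        i += count
    return frames

# Two well-formed 5-byte frames (count byte + 4 data bytes each).
good = bytes([5, 1, 2, 3, 4]) + bytes([5, 6, 7, 8, 9])
print(split_frames(good))   # two frames, boundaries recovered correctly

# Corrupt the first count byte (5 -> 7): the receiver reads too far and every
# later boundary is misaligned, so resynchronization is lost.
bad = bytes([7]) + good[1:]
print(split_frames(bad))
```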
❓ Q: What is a frame?
A: A frame is a structured unit of data used by the Data Link Layer, which includes headers and trailers to ensure that the data is properly addressed and error-checked before delivery to the Network Layer.
🔗 4. Services Provided to the Network Layer
| Type | Features |
| --- | --- |
| Unacknowledged Connectionless | Simple, no reliability, fast; suitable for low-error environments (e.g., Ethernet). |
| Acknowledged Connectionless | Each frame is acknowledged individually; adds reliability (e.g., Wi-Fi). |
| Acknowledged Connection-Oriented | A logical connection is established; ensures ordered, reliable delivery (e.g., HDLC). |
❓ Q: When is the acknowledged connection-oriented service needed?
A: It ensures reliable, ordered delivery even in environments with high latency and potential for data loss, such as satellite communication.
This abstraction helps in conceptualizing end-to-end communication as though the layers are
directly interacting.
📀 6. Framing Techniques
Framing is essential to distinguish where one frame ends and the next begins in a stream of
bits.
❓ Q: What is the purpose of byte stuffing?
A: It prevents data bytes from being misinterpreted as control flags by inserting escape characters before them.
7. Error Control
Reliable delivery is ensured through the following mechanisms: acknowledgements (ACKs) returned by the receiver, timers that trigger retransmission when an ACK does not arrive, and sequence numbers that let the receiver detect and discard duplicate frames.
❓ Q4: Why are sequence numbers used in the Data Link Layer?
A: They help identify and discard duplicate frames, ensuring each frame is processed only
once.
🧠 Example Scenario:
🔄 8. Flow Control
🌊 Purpose:
To prevent the sender from overwhelming the receiver by controlling the rate of data
transmission.
| Type | Description |
| --- | --- |
| Feedback-based | The receiver sends feedback to control the data flow (e.g., sliding window). |
| Rate-based | A predefined transmission rate is used, without feedback. |
❓ Q5: What happens if there is no flow control and the receiver is slow?
A: The receiver may be flooded with data, leading to buffer overflow and frame loss.
Stop-and-Wait
Sliding Window
❓ Q: How do Stop-and-Wait and Sliding Window differ?
A:
Stop-and-Wait: Only one frame can be sent at a time; sender waits for ACK before
sending next.
Sliding Window: Multiple frames can be sent before requiring ACKs; improves
efficiency and throughput.
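A rough comparison of the two, using the textbook link-utilization formulas (the ratio a of propagation delay to frame transmission time is a made-up value for this example):

```python
def stop_and_wait_utilization(a: float) -> float:
    """Stop-and-Wait efficiency: U = 1 / (1 + 2a), where
    a = propagation delay / frame transmission time."""
    return 1 / (1 + 2 * a)

def sliding_window_utilization(window: int, a: float) -> float:
    """Sliding window: U = min(1, W / (1 + 2a)); the pipe stays full once W >= 1 + 2a."""
    return min(1.0, window / (1 + 2 * a))

a = 10.0   # assumed ratio: propagation delay is 10x the frame transmission time
print(f"stop-and-wait : {stop_and_wait_utilization(a):.1%}")
print(f"window W = 7  : {sliding_window_utilization(7, a):.1%}")
print(f"window W = 30 : {sliding_window_utilization(30, a):.1%}")
```

With a large window the sender keeps the channel busy while waiting for acknowledgements, which is exactly the throughput gain described above.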
✅ Conclusion
The Data Link Layer ensures that the bits provided by the Physical Layer are organized
into frames and transmitted reliably to the Network Layer. It handles crucial aspects like
framing, addressing, error detection, flow control, and access regulation. It plays a
foundational role in enabling seamless and error-free data communication between nodes in a
network.