
UNIT 3

COMPUTER NETWORKS

Switching in Computer Networks refers to the process of directing data from one device to
another within a network. This process involves using devices called switches, which are
responsible for determining the path that data takes to reach its destination. The primary goal
of switching is to efficiently transmit data across the network while minimizing delays and
maximizing the use of available bandwidth.

There are two primary types of switching used in computer networks: Circuit Switching and
Packet Switching.

Circuit Switching:

Circuit switching is a communication method where a dedicated communication path is established between the sender and receiver for the duration of the communication session. This path is reserved exclusively for the two parties, and no other data can use it during the session. It is typically used in traditional telephone networks.

Key Characteristics of Circuit Switching:

 Dedicated Path: A dedicated communication channel is established for the entire duration of the communication session.
 Constant Bandwidth: Once the connection is established, the bandwidth is fixed,
and the data can flow continuously without interruption.
 Reliable: The data transmission is consistent and predictable.
 Example: Traditional telephone networks.

Advantages of Circuit Switching:

 Consistent quality of service (QoS) and real-time communication.


 Low delay during data transmission once the circuit is established.

Disadvantages of Circuit Switching:

 Inefficient use of resources: The dedicated path remains idle if there is no data being
sent.
 Setup time: The establishment of a connection takes time.

Packet Switching:

Packet switching is a method where data is broken into smaller units called packets. Each
packet is transmitted independently across the network and may take different paths to reach
the destination. Once all packets arrive at the destination, they are reassembled to form the
original data.

Key Characteristics of Packet Switching:


 No Dedicated Path: Data is divided into packets and sent independently. The
network decides the best route for each packet.
 Dynamic Routing: The path that each packet takes can vary depending on the current
network conditions (e.g., congestion, failures).
 Efficiency: The network resources are used more efficiently, as packets from
different sources can share the same network links.
 Example: The Internet (TCP/IP-based networks).

Advantages of Packet Switching:

 Efficient use of bandwidth since multiple communication sessions can share the same
network resources.
 More robust and flexible: If one path is congested or fails, packets can be rerouted
dynamically.

Disadvantages of Packet Switching:

 Variable delay: Since packets can take different routes, the time it takes for each
packet to reach the destination may vary.
 Requires additional overhead for reassembling packets and managing data flow.

Comparison between Circuit Switching and Packet Switching:

Feature | Circuit Switching | Packet Switching
Connection Setup | Requires a dedicated connection before transmission. | No connection setup; data is transmitted in packets.
Data Transmission | Continuous, dedicated communication path. | Data is divided into packets that may take different paths.
Bandwidth Allocation | Fixed bandwidth for the entire session. | Dynamic bandwidth allocation, shared among multiple sessions.
Efficiency | Less efficient; resources sit idle during periods of silence. | More efficient; resources shared among multiple users.
Latency | Lower latency once the circuit is established. | Higher latency due to packet routing and queuing delays.
Example | Traditional telephone systems. | The Internet, email, web browsing.
Reliability | Reliable, as the path is dedicated. | Can be less predictable due to varying packet routes.
Scalability | Poor scalability due to fixed paths. | Highly scalable, as multiple users share the network.

Summary:

 Circuit Switching is suited for real-time, continuous communication where consistent bandwidth is essential (e.g., voice calls).
 Packet Switching is ideal for bursty, flexible communication where resources can be
dynamically allocated, as seen in modern data networks like the Internet.

4.2.2 Carrier Sense Multiple Access (CSMA) Protocols


Carrier Sense Multiple Access (CSMA) is a family of protocols used to manage how stations
on a shared channel (e.g., in a Local Area Network) access the communication medium. The
idea is to listen (sense) the channel for any ongoing transmission and transmit data only when
the channel is idle. Several versions of CSMA were developed to improve performance, each
with a different way of handling when and how stations transmit. Let's break down the
various CSMA protocols and their behavior.

1. ALOHA (Basic Concept)

 Pure ALOHA and Slotted ALOHA were early random access protocols.
 In Pure ALOHA, a station transmits at any time and waits for an acknowledgment. If
no acknowledgment is received, it retransmits after a random period.
 In Slotted ALOHA, time is divided into slots, and stations must transmit at the
beginning of a slot. This reduces the chances of collision.

Performance of ALOHA: The maximum channel utilization of Pure ALOHA is 1/(2e) (about 18.4%), while Slotted ALOHA doubles this to 1/e (about 36.8%). In both cases the system spends a large share of its capacity on collisions, which is inefficient.
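A quick way to check these figures is to evaluate the standard throughput formulas S = G·e^(-2G) for Pure ALOHA and S = G·e^(-G) for Slotted ALOHA, where G is the offered load. The short Python sketch below (not part of the original notes) evaluates both at their optimal loads:

import math

def pure_aloha_throughput(G):
    # S = G * e^(-2G); peaks at G = 0.5, giving 1/(2e) ≈ 0.184
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    # S = G * e^(-G); peaks at G = 1, giving 1/e ≈ 0.368
    return G * math.exp(-G)

print(pure_aloha_throughput(0.5))    # ≈ 0.184 (18.4%)
print(slotted_aloha_throughput(1.0)) # ≈ 0.368 (36.8%)
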

2. CSMA (Carrier Sense Multiple Access)

CSMA improves upon ALOHA by having stations listen to the channel before transmitting.
If the channel is idle, the station transmits its data. If it detects that the channel is busy, it
waits until the channel becomes idle.

Types of CSMA Protocols:

a. 1-persistent CSMA:

 How it works:
1. A station listens to the channel.
2. If the channel is idle, the station transmits immediately.
3. If the channel is busy, the station waits until it becomes idle, then transmits
immediately.
 Problem: Collisions can still occur, especially if two stations wait for the channel to
become idle at the same time and start transmitting simultaneously.

Key points:

 High chance of collision due to simultaneous transmission after idle time.


 Works well in small networks with minimal traffic.

b. Nonpersistent CSMA:

 How it works:
1. A station listens to the channel.
2. If the channel is idle, the station transmits.
3. If the channel is busy, instead of transmitting immediately, the station waits
for a random amount of time before sensing the channel again.
 Advantages:

o Reduces the likelihood of collisions compared to 1-persistent CSMA.


o Better channel utilization because stations don't immediately transmit after the
channel is idle.

 Disadvantages:

o Increased delay because of the random waiting time.

c. p-persistent CSMA:

 How it works (for slotted channels):


1. When a station wants to transmit, it senses the channel.
2. If the channel is idle, the station transmits with probability p.
3. If it doesn't transmit (with probability q = 1 - p), it waits until the next time
slot and repeats the process.
4. If the station senses the channel to be busy, it waits until the next slot and
follows the same algorithm.
 Use: This protocol is particularly useful in networks with slotted time and helps
prevent too many simultaneous transmissions.
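
A minimal sketch of the p-persistent decision loop is shown below. The callables channel_is_idle, transmit, and wait_for_next_slot are hypothetical placeholders for the real MAC/physical-layer operations, not functions from any actual library:

import random

def p_persistent_send(p, channel_is_idle, transmit, wait_for_next_slot):
    # Minimal sketch of the p-persistent CSMA decision loop.
    while True:
        if channel_is_idle():
            if random.random() < p:
                transmit()           # transmit with probability p
                return
            # with probability q = 1 - p, defer to the next slot and retry
            wait_for_next_slot()
        else:
            wait_for_next_slot()     # channel busy: wait for the next slot
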
3. CSMA with Collision Detection (CSMA/CD)

CSMA/CD is an improvement over the basic CSMA protocols. In CSMA/CD, stations can detect collisions while they are transmitting and stop immediately. After detecting a collision, the stations wait a random time and attempt transmission again.

Key points:

 Collision Detection: During transmission, the station listens to the channel. If the
signal it hears is different from its transmitted signal, it detects a collision.
 Ethernet: This is the protocol behind classic Ethernet networks.

4. CSMA/CD Collision Detection Example:

Imagine two stations, A and B, at opposite ends of the cable, with a one-way propagation delay of τ between them:

1. Station A starts transmitting at time t = 0.
2. Just before A's signal arrives at t = τ, station B senses the channel as idle and starts transmitting; the moment A's frame reaches it, B detects the collision and stops.
3. Station A will not detect the collision until nearly t = 2τ, when B's signal propagates back to it.

This 2τ window means that, for CSMA/CD to work, a station must keep transmitting (and listening) for at least 2τ before it can be sure it has seized the channel without a collision.

Performance Comparison of CSMA Protocols:

 1-persistent CSMA: Better than ALOHA but still prone to collisions when two
stations begin transmitting simultaneously.
 Nonpersistent CSMA: Reduces collisions but introduces more delay due to random
waiting times.
 p-persistent CSMA: Good for slotted systems, balancing between collision
probability and delay.

Throughput Comparison (Graph shown in the original text):

 Throughput increases as we move from Pure ALOHA to Slotted ALOHA, and then to the CSMA protocols.
 Nonpersistent CSMA achieves better channel utilization, although it may have slightly higher delays.

Summary of Key Differences:

Protocol | Transmission on Idle | Collisions | Advantages | Disadvantages
1-persistent CSMA | Transmits immediately | Collisions happen if stations transmit simultaneously | Simple and quick | High collision probability
Nonpersistent CSMA | Waits a random time | Fewer collisions | Better channel utilization | Higher delay
p-persistent CSMA | Transmits with probability p | Reduces collision probability | Good for slotted systems | Slightly more complex
CSMA/CD | Detects collisions during transmission | Detects and stops if a collision happens | Efficient use of bandwidth | Delay in detecting collisions

Conclusion:

Carrier Sense Multiple Access (CSMA) protocols help manage how multiple stations share a
communication channel. By listening before transmitting, CSMA reduces collisions
compared to ALOHA systems. The different variations (1-persistent, nonpersistent, and p-
persistent) offer trade-offs between collision avoidance and transmission delays. CSMA/CD
further improves performance by detecting collisions during transmission and aborting the
transmission, which is crucial for networks like Ethernet.

This diagram illustrates how the CSMA/CD (Carrier Sense Multiple Access with Collision Detection) protocol behaves over time, alternating between transmission, contention, and idle states:
Key components of the diagram:
1. Transmission Period (Frame)
o This is when a device successfully transmits a frame (packet of data) over the
medium (e.g., an Ethernet cable).
o During this time, the medium is busy, and other devices should wait.
2. Contention Period
o Once the transmission ends, multiple devices might try to access the channel,
leading to a contention period.
o In this period, devices sense the channel and compete to transmit.
o The slots during contention are called contention slots. If two devices choose
the same slot, a collision might occur, which CSMA/CD will detect.
3. Idle Period
o This is the gap when no frame is being transmitted and no contention is
occurring. The medium is idle here.
Time (t₀)
 This point marks the start of a new contention period right after a frame transmission
has finished.
CSMA/CD Process Flow:
1. Carrier Sense: Devices "listen" to check if the medium is free.
2. Collision Detection: If two devices transmit at the same time, a collision is detected.
3. Backoff Algorithm: Devices wait for a random period (backoff) before retrying in the
contention slots.
How to read the diagram:
 The timeline flows left to right.
 The network alternates between:
o Transmission periods (when frames are sent)
o Contention periods (when devices compete to send frames)
o Idle periods (when no one is transmitting)
Summary:
 CSMA/CD works by coordinating who gets to "talk" on the medium.

4.3.2 Classic Ethernet MAC Sublayer Protocol


Ethernet Frame Format & CSMA/CD Overview
1. Frame Format (Ethernet & IEEE 802.3)

Ethernet Frame (DIX) Format:

 Preamble (8 bytes):
o Pattern: 10101010 (except last byte 10101011).
o Purpose: Synchronizes the receiver’s clock.
o Last byte = Start of Frame Delimiter.
 Destination Address (6 bytes):
o First bit = 0 for individual, 1 for group (multicast) address.
o All 1s (FF:FF:FF:FF:FF:FF) = Broadcast (all stations receive).
 Source Address (6 bytes):
o Globally unique.
o First 3 bytes = OUI (Organizationally Unique Identifier by IEEE).
o Last 3 bytes = Manufacturer-defined.
 Type Field (2 bytes) [Ethernet]:
o Indicates network-layer protocol (e.g., IPv4 = 0x0800).

IEEE 802.3 Frame Format:

 Length Field (2 bytes):


o Specifies data length instead of type.
o Uses LLC (Logical Link Control) header inside the data for protocol
identification.
 Type vs Length Conflict:
o Values below 0x600 (1536) are interpreted as a Length.
o Values of 0x600 or above are interpreted as a Type.
o IEEE allowed both interpretations after 1997 due to existing DIX implementations.
 Data (0-1500 bytes):
o Upper limit = 1500 bytes (due to memory constraints in early transceivers).
 Pad (0-46 bytes):
o Ensures minimum frame size (64 bytes) if data is too short.
 Checksum (4 bytes):
o 32-bit CRC for error detection.
o If CRC fails = frame dropped.
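
The sketch below assembles a DIX-style frame in Python to make the field layout concrete. It is illustrative only: the preamble/SFD are added by the hardware, and zlib.crc32 merely stands in for the 32-bit FCS (real Ethernet applies a specific bit ordering that is glossed over here):

import struct
import zlib

def build_ethernet_frame(dst_mac, src_mac, ethertype, payload):
    # Minimal DIX frame sketch: destination (6) + source (6) + type (2) + data.
    if len(payload) > 1500:
        raise ValueError("payload exceeds the 1500-byte maximum")
    header = dst_mac + src_mac + struct.pack("!H", ethertype)
    frame = header + payload
    # Pad so that header + data + 4-byte CRC reaches the 64-byte minimum.
    if len(frame) < 60:
        frame += b"\x00" * (60 - len(frame))
    fcs = struct.pack("<I", zlib.crc32(frame) & 0xFFFFFFFF)  # stand-in FCS
    return frame + fcs

frame = build_ethernet_frame(b"\xff" * 6,                  # broadcast destination
                             b"\x02\x00\x00\x00\x00\x01",  # example source address
                             0x0800,                       # IPv4
                             b"hello")
print(len(frame))   # 64 -> minimum frame size after padding and CRC
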

2. Minimum and Maximum Frame Sizes

 Maximum: 1518 bytes (including all headers + CRC).


 Minimum: 64 bytes (from destination address to CRC).
o Prevents completion of short frames before detecting collisions.
o Ensures transmission lasts at least 2τ (round-trip time).
o 64 bytes = 512 bits = 51.2 μs at 10 Mbps.
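
Under these assumptions (10 Mbps, 2τ = 51.2 μs), the minimum frame length can be checked with a one-line calculation:

def min_frame_bits(bit_rate_bps, round_trip_seconds):
    # A frame must still be in transmission when news of a collision
    # can return, i.e. it must last at least 2*tau on the wire.
    return bit_rate_bps * round_trip_seconds

bits = min_frame_bits(10e6, 51.2e-6)  # classic Ethernet: 10 Mbps, 2τ = 51.2 μs
print(bits, bits / 8)                 # 512.0 bits -> 64.0 bytes
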

3. Collision Detection (CSMA/CD)

Collision Scenario:

 Frame starts at station A at time 0.


 Propagation delay (τ): Time to reach the farthest station B.
 B may start transmitting just before receiving A's frame, causing a collision.
 Both A and B will detect collision and abort.

Jamming Signal:

 Upon collision, stations send a 48-bit jam signal to inform others.


 A retransmits after a random interval.

4. Binary Exponential Backoff Algorithm

Slot Time:

 Set to 512 bits = 51.2 μs (2τ).

Backoff Strategy:

 After 1st collision: wait random 0 or 1 slot.


 After 2nd collision: wait random 0 to 3 slots.
 After i collisions: wait a random number of slots between 0 and 2^i - 1.
 After 10 collisions, the randomization interval is frozen at a maximum of 1023 slots.
 After 16 collisions: give up and report failure (see the sketch below).

Purpose:
 Reduces delay when few stations are contending.
 Avoids indefinite collisions when many stations are competing.
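
A minimal sketch of the slot-selection rule, assuming the 10 Mbps slot time of 51.2 μs described above:

import random

SLOT_TIME_US = 51.2  # 512 bit times at 10 Mbps

def backoff_slots(collision_count):
    # Binary exponential backoff: after the i-th collision, pick a random
    # slot count in [0, 2^i - 1], with the interval frozen at 1023 slots
    # after 10 collisions; give up entirely after 16.
    if collision_count >= 16:
        raise RuntimeError("16 collisions: give up and report failure")
    k = min(collision_count, 10)
    return random.randint(0, (1 << k) - 1)

wait_us = backoff_slots(3) * SLOT_TIME_US  # e.g. 0-7 slots after the 3rd collision
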

5. Notes on CSMA/CD Operation

 1-persistent CSMA: Transmit as soon as medium is idle.


 No ACKs used in Ethernet (assumes wired networks have low errors).
 CRC handles error detection; retransmission is handled by upper layers if needed.

6. Multicasting vs Broadcasting

 Broadcasting: All stations accept the frame (no group management needed).
 Multicasting: Only a group of stations (requires group management).

3.1 DATA LINK LAYER DESIGN ISSUES

🧹 1. Introduction to Data Link Layer


The Data Link Layer is the second layer in the OSI model. It builds on the raw bit-transmission service provided by the Physical Layer, organizes those bits into structured frames, and delivers them reliably to the Network Layer. It acts as a bridge between the physical transmission of data and its logical organization.

Figure 3-2(a): Virtual Communication


 What it shows:
This diagram presents a simplified, conceptual view where the Data Link Layers of
Host 1 and Host 2 appear to communicate directly with each other.
 Virtual Data Path:
The solid black line between the layers shows a logical (virtual) path of
communication between Data Link Layer processes.
 Layer Stack:
Each host has a stack of layers numbered 1 (Physical) to 4 (Transport or higher), but
here we focus on the Data Link Layer (Layer 2). The communication is visualized as
though Layer 2 on Host 1 is directly interacting with Layer 2 on Host 2.
 Why it’s useful:
This abstraction helps developers and network engineers design and think in terms of
peer-to-peer communication at each OSI layer, even though the underlying
implementation is more complex.

Figure 3-2(b): Actual Communication

 What it shows:
This is the real transmission path data follows.
 Actual Data Path:
The data is passed down from the Data Link Layer of Host 1 to the Physical Layer,
transmitted over the physical medium (like a wire or wireless channel), and then up to
the Data Link Layer of Host 2.
 Illustrated Path:
o Host 1: Layer 2 → Layer 1 (Physical)
o Transmission over medium
o Host 2: Layer 1 → Layer 2
 Why it’s important:
This illustrates how actual data movement happens and emphasizes that the Data
Link Layer depends on the Physical Layer to carry bits over the medium.

⚖️Virtual vs Actual Communication Summary

Aspect | Virtual Communication | Actual Communication
Definition | Conceptual peer-to-peer interaction | Physical transmission over the medium
Path | Layer 2 ↔ Layer 2 | Layer 2 ↓ Layer 1 → Layer 1 ↑ Layer 2
Use | Simplifies understanding | Reflects real-world behavior
Type | Logical abstraction | Physical reality

🎯 2. Major Functions of Data Link Layer


Function | Description
Framing | Encapsulates packets into frames for transmission.
Physical Addressing | Adds MAC addresses to identify source and destination devices.
Error Control | Ensures error-free delivery using detection and correction mechanisms.
Flow Control | Prevents the sender from overwhelming the receiver with data.
Access Control | Determines which device can transmit in shared communication environments.

It ensures that the transmission is structured, synchronized, and efficient.

(Section 3.1.2)
To ensure reliable communication, the Data Link Layer must transform the raw bit stream
provided by the Physical Layer into structured and manageable units called frames.
The Physical Layer simply moves bits from one place to another, but it doesn’t guarantee
error-free delivery. Bits may be flipped, lost, or inserted due to noise in the channel,
especially in wireless or long-distance wired connections.
To manage this:
 The Data Link Layer breaks the stream into frames.
 Each frame is appended with a checksum (or similar error-detection code).
 Upon receiving, the destination computes the checksum again and compares it with
the one received.
 If they differ, it indicates an error, and the frame can be discarded or a retransmission
requested.
However, framing is more complex than it seems because the receiver must be able to:
 Clearly identify where each frame starts and ends.
 Avoid excessive bandwidth usage for framing information.
🔧 Framing Methods:
1. Byte Count:
o Each frame starts with a byte specifying the total number of bytes in the
frame.
o Receiver reads this byte and counts to determine where the frame ends.
o Problem: If the byte count field is corrupted, synchronization is lost and
recovery becomes difficult (illustrated in the sketch after this list).
2. Flag Bytes with Byte Stuffing:
o Special byte (e.g., 01111110) indicates frame boundaries.
o If this byte appears in the data, an escape byte (ESC) is added before it.
3. Flag Bits with Bit Stuffing:
o Uses bit-level flags and inserts a 0 after five consecutive 1s to avoid
misinterpretation.
4. Physical Layer Coding Violations:
o Uses signal patterns not valid for regular data as delimiters.
These framing methods ensure that even in noisy environments, the receiver can correctly
identify and process each frame.
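The byte-count method (and its weakness) can be illustrated with the short sketch below; the function names are invented for this example:

def frame_with_byte_count(messages):
    # Each frame = a 1-byte count (which includes the count byte itself)
    # followed by the payload bytes.
    stream = bytearray()
    for payload in messages:
        stream.append(len(payload) + 1)
        stream += payload
    return bytes(stream)

def deframe(stream):
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]
        if count == 0:
            break                              # malformed count; stop parsing
        frames.append(stream[i + 1:i + count])
        i += count                             # a corrupted count desynchronizes
    return frames                              # every frame that follows

stream = frame_with_byte_count([b"abcd", b"wxyz"])
print(deframe(stream))                   # [b'abcd', b'wxyz']
print(deframe(bytes([7]) + stream[1:]))  # corrupted count: later frames misread
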
📦 3. Framing: Data into Frames
A Frame is a structured unit of data used for transmission. It is made up of:

 Header – Control info like source/destination addresses, sequence numbers.


 Payload – Actual data from the Network Layer.
 Trailer – Error detection code (e.g., checksum, CRC).


Explanation of Figure 3-3 – Byte Stream Framing Using a Byte Count

🔹 Figure 3-3(a): Byte Stream Without Errors

This shows how byte count-based framing works under normal conditions.

Key Points:

 Each frame starts with a byte count giving the total number of bytes in that frame
(including the count byte itself).
 Example:
o Frame 1 starts with 5 → Total 5 bytes (including count)
o Frame 2 starts with 5
o Frame 3 starts with 8
o Frame 4 starts with 8

🔹 Figure 3-3(b): Byte Stream With One Error

This shows the problem when an error occurs in the count field.

What Happens:

 Error occurs in Frame 2’s byte count (was 5, now read as 7).
 Receiver assumes Frame 2 has 7 bytes, reads too far.
 Frame boundaries become misaligned.
 Subsequent frames are misread or lost, leading to resynchronization failure.

Consequences:

 Even if the data is fine, a corrupted count leads to a cascade of errors.


 The receiver can't tell where the next frame starts.
 Retransmission is complicated because the receiver doesn't know how many bytes to
skip.

⚠️Conclusion and Drawback of Byte Count Framing

Pros | Cons
Simple and space-efficient | Highly vulnerable to single-bit errors in the count field
Easy to implement | Difficult to recover from synchronization loss

Q1: What is a frame in the Data Link Layer?

A: A frame is a structured unit of data used by the Data Link Layer, which includes headers
and trailers to ensure that the data is properly addressed and error-checked before delivery to
the Network Layer.
🔗 4. Services Provided to the Network Layer
Type | Features
Unacknowledged Connectionless | Simple, no reliability, fast; suitable for low-error environments (e.g., Ethernet).
Acknowledged Connectionless | Each frame is acknowledged individually; adds reliability (e.g., Wi-Fi).
Acknowledged Connection-Oriented | A logical connection is established; ensures ordered, reliable delivery (e.g., HDLC).

❓ Q2: Why is Acknowledged Connection-Oriented Service used over long


links like satellites?

A: It ensures reliable, ordered delivery even in environments with high latency and potential
for data loss, such as satellite communication.

🌋 5. Virtual vs Actual Communication


💭 Virtual Communication Path:
Network Layer (Host A) → Data Link Layer (virtual channel) → Network Layer (Host B)

🔧 Actual Communication Path:


Host A: Data Link → Physical → Medium → Physical → Data Link: Host B

This abstraction helps in conceptualizing end-to-end communication as though the layers are
directly interacting.

📀 6. Framing Techniques
Framing is essential to distinguish where one frame ends and the next begins in a stream of
bits.

Method | Key Concept
Byte Count | The first byte indicates the number of bytes in the frame.
Byte Stuffing | A special FLAG byte marks frame boundaries; ESC is added before any FLAG found in the data.
Bit Stuffing | A 0 is inserted after five consecutive 1s to prevent false flags.
Physical Layer Violations | Uses unused physical signal patterns to denote frame boundaries.
❓ Q3: What problem does Byte Stuffing solve?

A: It prevents data bytes from being misinterpreted as control flags by inserting escape
characters before them.

📁 Example of Byte Stuffing:


Original: A FLAG B
Stuffed: A ESC FLAG B

📁 Example of Bit Stuffing:


Original: 01111110
Stuffed: 011111010
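
The two transformations above can be sketched in Python as follows. The FLAG/ESC byte values are illustrative (HDLC-style 0x7E/0x7D); the notes simply use FLAG = 01111110 and a generic ESC:

FLAG, ESC = 0x7E, 0x7D   # example flag/escape values chosen for this sketch

def byte_stuff(payload):
    # Insert ESC before any FLAG or ESC byte in the data, then wrap the
    # result in framing FLAG bytes.
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def bit_stuff(bits):
    # Insert a 0 after every run of five consecutive 1s.
    out, run = [], 0
    for bit in bits:
        out.append(bit)
        run = run + 1 if bit == "1" else 0
        if run == 5:
            out.append("0")
            run = 0
    return "".join(out)

print(byte_stuff(bytes([0x41, FLAG, 0x42])).hex())  # A ESC FLAG B, wrapped in FLAGs
print(bit_stuff("01111110"))                        # 011111010
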

7. Error Control
Reliable delivery is ensured through the following mechanisms:

 ACK/NAK – Receiver sends acknowledgment for received frames.


 Timers – If ACK is not received within time, frame is retransmitted.
 Sequence Numbers – Ensure that duplicate frames are detected.

❓ Q4: Why are sequence numbers used in the Data Link Layer?

A: They help identify and discard duplicate frames, ensuring each frame is processed only
once.

🧠 Example Scenario:

 Frame with seq = 3 sent


 Timer expires (no ACK)
 Frame re-sent with seq = 3
 Receiver identifies it as duplicate using sequence number
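
A minimal stop-and-wait sketch tying these three mechanisms together is shown below; send and recv_ack are hypothetical placeholders for the underlying channel (recv_ack returns the acknowledged sequence number, or None if nothing has arrived):

import time

def stop_and_wait_send(frame, seq, send, recv_ack, timeout=0.5, max_tries=5):
    # ACK/timer/sequence-number sketch: retransmit the same frame (same seq)
    # whenever the timer expires, so the receiver can spot duplicates.
    for _ in range(max_tries):
        send(seq, frame)                       # transmit frame carrying seq
        deadline = time.monotonic() + timeout  # start the retransmission timer
        while time.monotonic() < deadline:
            ack = recv_ack()
            if ack == seq:                     # expected ACK received
                return True
            time.sleep(0.01)                   # avoid busy-waiting
        # timer expired: fall through and retransmit with the same seq
    return False                               # give up after max_tries
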

🔄 8. Flow Control
🌊 Purpose:

To prevent the sender from overwhelming the receiver by controlling the rate of data
transmission.

Type | Description
Feedback-based | The receiver sends feedback to control the data flow (e.g., sliding window).
Rate-based | A predefined transmission rate is used, without feedback.
❓ Q5: What happens if there is no flow control and the receiver is slow?

A: The receiver may be flooded with data, leading to buffer overflow and frame loss.

🌟 Real-World Flow Control Protocols:

 Stop-and-Wait
 Sliding Window
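
For contrast with Stop-and-Wait, here is a very small sliding-window sender sketch; send and recv_ack are hypothetical channel placeholders, with recv_ack returning the highest cumulatively acknowledged index or None:

def sliding_window_send(frames, window_size, send, recv_ack):
    # Up to window_size unacknowledged frames may be outstanding at once.
    base, next_seq = 0, 0
    while base < len(frames):
        while next_seq < len(frames) and next_seq < base + window_size:
            send(next_seq, frames[next_seq])   # fill the window
            next_seq += 1
        acked = recv_ack()                     # cumulative ACK from receiver
        if acked is not None and acked >= base:
            base = acked + 1                   # slide the window forward
        # (retransmission on timeout omitted for brevity)
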

🔍 9. Common Protocols Using Data Link Concepts


Protocol | Features
Ethernet | Unacknowledged, fast, uses CRC for error detection.
Wi-Fi (802.11) | Acknowledged connectionless; uses ARQ for reliability.
PPP | Point-to-Point Protocol; uses byte stuffing.
HDLC | High-level Data Link Control; bit stuffing, connection-oriented, synchronous.

🧠 10. Quick Review Table


Feature | Description
Frame | Structured unit containing header, payload, and trailer.
Stuffing | Byte/bit insertion to avoid confusion with control patterns.
Error Control | Uses ACKs, timers, and sequence numbers for reliability.
Flow Control | Prevents buffer overflow by regulating the sending rate.
Service Types | Different delivery guarantees based on reliability and order.

📋 Sample Practice Questions


❓ Q6: Explain the difference between Stop-and-Wait and Sliding Window
protocols.

A:

 Stop-and-Wait: Only one frame can be sent at a time; sender waits for ACK before
sending next.
 Sliding Window: Multiple frames can be sent before requiring ACKs; improves
efficiency and throughput.

❓ Q7: What is the role of the trailer in a frame?


A: The trailer contains error detection codes such as CRC that help in identifying errors
during transmission.
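
As a small illustration, the check can be simulated with Python's zlib.crc32 standing in for a generic 32-bit CRC (real link-layer protocols define their own polynomial and bit ordering):

import zlib

payload = b"data link layer frame payload"
crc = zlib.crc32(payload) & 0xFFFFFFFF       # sender computes the 32-bit CRC
frame = payload + crc.to_bytes(4, "big")     # CRC travels in the frame trailer

received_payload = frame[:-4]
received_crc = int.from_bytes(frame[-4:], "big")
print(zlib.crc32(received_payload) & 0xFFFFFFFF == received_crc)  # True if undamaged
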

✅ Conclusion
The Data Link Layer ensures that the bits provided by the Physical Layer are organized
into frames and transmitted reliably to the Network Layer. It handles crucial aspects like
framing, addressing, error detection, flow control, and access regulation. It plays a
foundational role in enabling seamless and error-free data communication between nodes in a
network.
