Unit-2 CN

CSMA/CD (Carrier Sense Multiple Access with Collision Detection) is a protocol used in early Ethernet networks to manage data transmission and detect collisions when multiple devices attempt to send data simultaneously. It involves checking if the transmission medium is idle, transmitting data while monitoring for collisions, and employing a backoff algorithm to retry transmission after a collision is detected. Although CSMA/CD is largely obsolete due to advancements in Ethernet technology, it remains supported and is characterized by features like fairness, efficiency, and vulnerability to collisions.


Collision Detection in CSMA/CD



CSMA/CD (Carrier Sense Multiple Access with Collision Detection) is a media access control method that was widely used in early Ethernet LANs, where stations shared a bus topology and each node (computer) was connected by coaxial cable. Modern Ethernet is full duplex, with either a star topology (connected via a switch or router) or point-to-point links (direct connection). Hence CSMA/CD is no longer used in practice, although it is still supported by the standard.
Consider a scenario where there are 'n' stations on a link, all waiting to transfer data through that channel. All 'n' stations want access to the link/channel to transfer their own data. The problem arises when more than one station transmits at the same moment: the data from the different stations collide.
CSMA/CD is a technique in which the stations that follow the protocol agree on terms of access and on collision detection measures for effective transmission. The protocol decides which station transmits when, so that data reaches the destination without corruption.
How Does CSMA/CD Work?
 Step 1: Check if the sender is ready to transmit data packets.
 Step 2: Check if the transmission link is idle.
The sender keeps checking whether the transmission link/medium is idle by continuously sensing the carrier for transmissions from other nodes. If it senses that the carrier is free and no collision is in progress, it sends the data; otherwise, it refrains from sending.
 Step 3: Transmit the data & check for collisions.
The sender transmits its data on the link. CSMA/CD does not use an
‘acknowledgment’ system. It checks for successful and unsuccessful
transmissions through collision signals. During transmission, if a
collision signal is received by the node, transmission is stopped. The
station then transmits a jam signal onto the link and waits for random
time intervals before it resends the frame. After some random time, it
again attempts to transfer the data and repeats the above process.
 Step 4: If no collision was detected in propagation, the sender
completes its frame transmission and resets the counters.
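The steps above can be sketched in code. This is an illustrative simulation, not a real driver: the `channel` object with `is_idle()`, `transmit()`, and `wait()` methods is a hypothetical stand-in for the shared medium.

```python
import random

MAX_ATTEMPTS = 16  # classic Ethernet gives up after 16 attempts

def csma_cd_send(channel, frame):
    """Sense the carrier, transmit, and back off on collision."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while not channel.is_idle():        # Step 2: carrier sense
            channel.wait()
        collided = channel.transmit(frame)  # Step 3: transmit while monitoring
        if not collided:
            return True                     # Step 4: success, reset counters
        # Collision detected: wait a random number of slot times
        # (binary exponential backoff), then retry.
        k = min(attempt, 10)
        channel.wait(random.randrange(2 ** k))
    return False                            # too many collisions: give up
```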
How Does a Station Know if Its Data Collide?
Consider two stations, A and B, at opposite ends of a link, with
Propagation Time: Tp = 1 hour (a signal takes 1 hour to travel from A to B).
At time t = 0, A transmits its data.
At t = 30 min, a collision occurs.
After the collision, a collision signal is generated and propagates back to both A and B to inform the stations of the collision. Since the collision happened midway, the collision signal also takes 30 minutes to reach each station.
Therefore, at t = 1 hour, both A and B receive the collision signal. This collision signal is received by all stations on the link. Then,
How to Ensure that it is our Station’s Data that Collided?
For this, Transmission Time (Tt) > Propagation Time (Tp) [rough bound]
The idea is that a station can only detect a collision involving its own frame while it is still transmitting. So before we transmit the last bit of our data, we want at least some of the bits to have already reached their destination; otherwise the collision signal could arrive after we have finished sending.
But this is a loose bound: it does not account for the time the collision signal takes to travel back to the sender. For that, consider the worst case in the same system.
At time t = 0, A transmits its data.
A collision occurs just before the data reaches B, that is, just before t = 1 hour. The collision signal then takes almost another full hour to travel back to A. Hence A learns of the collision only after approximately 2 hours, that is, after 2 × Tp.
Hence, for a tighter bound that guarantees complete collision detection:
Tt ≥ 2 × Tp
This is the maximum time a sender may need in order to detect that a collision involved its own data.
What should be the Minimum Length of the Packet to be Transmitted?
Transmission time Tt = Length of the packet / Bandwidth of the link
(bandwidth being the number of bits transmitted by the sender per second).
Substituting into the bound above:
Length of the packet / Bandwidth of the link ≥ 2 × Tp
Length of the packet ≥ 2 × Tp × Bandwidth of the link
Padding helps in cases where we do not have such long packets. We can
pad extra characters to the end of our data to satisfy the above condition.
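The bound can be checked numerically. The 25.6 µs propagation delay below is an illustrative figure for classic 10 Mbps Ethernet; with it, the formula yields the familiar 512-bit (64-byte) minimum frame size.

```python
def min_frame_bits(propagation_delay_s, bandwidth_bps):
    # Length of the packet >= 2 * Tp * Bandwidth
    return 2 * propagation_delay_s * bandwidth_bps

# 10 Mbps Ethernet with a ~25.6 microsecond one-way propagation bound:
print(min_frame_bits(25.6e-6, 10e6))  # 512.0 bits, i.e. 64 bytes
```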
Features of Collision Detection in CSMA/CD
 Carrier Sense: Before transmitting data, a device listens to the network
to check if the transmission medium is free. If the medium is busy, the
device waits until it becomes free before transmitting data.
 Multiple Access: In a CSMA/CD network, multiple devices share the
same transmission medium. Each device has equal access to the
medium, and any device can transmit data when the medium is free.
 Collision Detection: If two or more devices transmit data
simultaneously, a collision occurs. When a device detects a collision, it
immediately stops transmitting and sends a jam signal to inform all other
devices on the network of the collision. The devices then wait for a
random time before attempting to transmit again, to reduce the chances
of another collision.
 Backoff Algorithm: In CSMA/CD, a backoff algorithm is used to
determine when a device can retransmit data after a collision. The
algorithm uses a random delay before a device retransmits data, to
reduce the likelihood of another collision occurring.
 Minimum Frame Size: CSMA/CD requires a minimum frame size to
ensure that all devices have enough time to detect a collision before the
transmission ends. If a frame is too short, a device may not detect a
collision and continue transmitting, leading to data corruption on the
network.
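As a concrete illustration of the backoff rule, classic binary exponential backoff picks a random wait of 0 to 2^k − 1 slot times after the k-th successive collision, with k capped at 10; the 51.2 µs slot time used below is the classic 10 Mbps Ethernet value.

```python
import random

SLOT_TIME_S = 51.2e-6  # classic 10 Mbps Ethernet slot time

def backoff_delay(collision_count):
    """Random delay after the n-th successive collision (exponent capped at 10)."""
    k = min(collision_count, 10)
    slots = random.randrange(2 ** k)  # uniform in 0 .. 2^k - 1
    return slots * SLOT_TIME_S
```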
Advantages of CSMA/CD
 Simple and widely used: CSMA/CD is a widely used protocol for
Ethernet networks, and its simplicity makes it easy to implement and
use.
 Fairness: In a CSMA/CD network, all devices have equal access to the
transmission medium, which ensures fairness in data transmission.
 Efficiency: CSMA/CD allows for efficient use of the transmission
medium by preventing unnecessary collisions and reducing network
congestion.
Disadvantages of CSMA/CD
 Limited Scalability: CSMA/CD has limitations in terms of scalability,
and it may not be suitable for large networks with a high number of
devices.
 Vulnerability to Collisions: While CSMA/CD can detect collisions, it
cannot prevent them from occurring. Collisions can lead to data
corruption, retransmission delays, and reduced network performance.
 Inefficient Use of Bandwidth: CSMA/CD uses a random backoff
algorithm that can result in inefficient use of network bandwidth if a
device continually experiences collisions.
 Susceptibility to Security Attacks: CSMA/CD does not provide any
security features, and the protocol is vulnerable to security attacks such
as packet sniffing and spoofing.
Conclusion
In conclusion, Collision Detection in CSMA/CD (Carrier Sense Multiple
Access with Collision Detection) is a method used in Ethernet networks to
handle collisions that occur when two or more devices transmit data
simultaneously. When a collision is detected, devices involved pause and
wait for a random amount of time before attempting to retransmit, reducing
the likelihood of another collision. This approach helps maintain efficient
data transmission and minimize network congestion in Ethernet networks.
📚 Key IEEE 802 Standards for LAN and MAN

| Standard | Focus Area | Description |
|---|---|---|
| IEEE 802.1 | Bridging and Network Management | Defines standards for network bridging (e.g., Spanning Tree Protocol), VLANs (802.1Q), and network management protocols. |
| IEEE 802.2 | Logical Link Control (LLC) | Specifies the LLC sublayer, providing a uniform interface for the MAC sublayer across different physical media. |
| IEEE 802.3 | Ethernet | Covers wired LAN technologies using Carrier Sense Multiple Access with Collision Detection (CSMA/CD), including Ethernet standards like 10Base-T, 100Base-TX, and Gigabit Ethernet. |
| IEEE 802.5 | Token Ring | Defines a LAN protocol where devices are connected in a ring topology and pass a token for network access. |
| IEEE 802.11 | Wireless LAN (Wi-Fi) | Specifies wireless networking standards for LANs, including various amendments like 802.11a/b/g/n/ac/ax. |
| IEEE 802.15 | Wireless Personal Area Networks (WPAN) | Focuses on short-range communication standards, including Bluetooth (802.15.1) and Zigbee (802.15.4). |
| IEEE 802.16 | Broadband Wireless Access (WiMAX) | Defines standards for wireless MANs, providing high-speed internet access over large areas. |
| IEEE 802.17 | Resilient Packet Ring | Specifies protocols for ring-based MANs, aiming for efficient data transport and resilience. |
| IEEE 802.20 | Mobile Broadband Wireless Access | Targets high-speed mobile broadband services over MANs. |
| IEEE 802.22 | Wireless Regional Area Networks (WRAN) | Utilizes unused television broadcast bands to provide broadband access in rural areas. |

For a comprehensive overview and the latest developments in IEEE 802 standards, you can visit the official IEEE 802 LAN/MAN Standards Committee website.

X.25 is a standardized protocol suite developed by the International Telecommunication


Union (ITU-T) in 1976 for packet-switched data communication over wide area networks
(WANs). It was widely adopted by telecommunications companies and financial institutions
during the late 20th century.

🧱 Architecture and OSI Model Alignment


X.25 aligns with the lower three layers of the OSI model:
1. Physical Layer: Defines the physical and electrical characteristics of the interface between Data Terminal Equipment (DTE) and Data Circuit-terminating Equipment (DCE). Common interfaces include X.21 and EIA-232.
2. Data Link Layer: Utilizes the Link Access Procedure, Balanced (LAPB) protocol to ensure reliable data transfer between DTE and DCE. LAPB manages framing, error detection, and retransmission.
3. Packet Layer: Handles the establishment, maintenance, and termination of virtual circuits. It manages packet sequencing, flow control, and error handling at the network level.

🔄 Virtual Circuits
X.25 supports two types of virtual circuits:
 Switched Virtual Circuits (SVCs): Established dynamically for each session and
terminated after the session ends.
 Permanent Virtual Circuits (PVCs): Pre-established paths that remain active,
suitable for consistent and long-term connections.
Each virtual circuit is identified by a Logical Channel Identifier (LCI), allowing multiple
simultaneous connections over a single physical link.

📦 Packet Types
X.25 defines various packet types for control and data transmission:
 Call Setup and Clearing: Includes Call Request, Call Accepted, Clear Request, and
Clear Confirmation packets.
 Data Transfer: Data packets carry user information, while Interrupt packets handle
urgent data.
 Flow Control and Error Handling: Packets like Receive Ready (RR), Receive Not
Ready (RNR), Reject (REJ), Reset, and Restart manage flow control and error
recovery.
 Diagnostic and Registration: Provide network status and manage terminal
registration.

📡 Addressing
X.25 uses the X.121 addressing scheme, comprising:
 Data Network Identification Code (DNIC): A 4-digit code identifying the country
and network.
 National Terminal Number (NTN): Up to 10 digits identifying the specific terminal
within the network.
Later revisions introduced support for Network Service Access Point (NSAP) addresses,
enhancing compatibility with OSI protocols.
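The X.121 structure can be illustrated with a small address-splitting sketch. This helper is purely illustrative, not part of any standard library; it assumes a plain digit string of 5 to 14 digits (4-digit DNIC plus up to 10-digit NTN).

```python
def parse_x121(address: str):
    """Split an X.121 address: 4-digit DNIC, then up to 10-digit NTN."""
    if not address.isdigit() or not 5 <= len(address) <= 14:
        raise ValueError("expected 5-14 decimal digits")
    return address[:4], address[4:]  # (DNIC, NTN)

print(parse_x121("31023421234567"))  # ('3102', '3421234567')
```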

🧰 Applications and Legacy


X.25 was instrumental in the development of early data networks and was widely used in:
 Financial Services: Connecting ATMs and facilitating credit card transactions.
 Public Data Networks: Providing packet-switched services over leased lines and
dial-up connections.
 Government and Military Communications: Ensuring reliable data transfer in
critical applications.
Despite being largely supplanted by newer technologies like Frame Relay and TCP/IP, X.25
remains in use in certain legacy systems, particularly where reliability over less-than-perfect
transmission mediums is required.
Frame Relay is a packet-switching network protocol that is designed to
work at the data link layer of the network. It is used to connect Local Area
Networks (LANs) and transmit data across Wide Area Networks (WANs).
It is a better alternative to point-to-point networking, which would require a separate dedicated link between each pair of nodes. It allows transmission of different-sized packets and dynamic bandwidth allocation, and it provides a congestion control mechanism to reduce the network overhead caused by congestion. However, it does not itself provide error control or flow management.

Frame Relay Network

Working:
Frame relay switches set up virtual circuits to connect multiple LANs to
build a WAN. Frame relay transfers data between LANs across WAN by
dividing the data in packets known as frames and transmitting these
packets across the network. It supports communication with multiple LANs
over the shared physical links or private lines.
Frame relay network is established between Local Area Networks (LANs)
border devices such as routers and service provider network that
connects all the LAN networks. Each LAN has an access link that
connects routers of LAN to the service provider network terminated by the
frame relay switch. The access link is the private physical link used for
communication with other LAN networks over WAN. The frame relay
switch is responsible for terminating the access link and providing frame
relay services.
For data transmission, the LAN's router (or other border device on the access link) sends data packets over the access link. A frame relay switch examines each packet to read the Data Link Connection Identifier (DLCI), which indicates the packet's destination. Since the frame relay switch already knows the addresses of the LANs connected to the network, it identifies the destination LAN from the DLCI. The DLCI identifies the virtual circuit (a logical path between nodes rather than a dedicated physical one) between the source and destination networks. The switch then forwards the packet toward the frame relay switch of the destination LAN, which in turn delivers it to the destination LAN over its access link. In this way, a LAN is connected to multiple other LANs while sharing a single physical link for data transmission.
Frame relay also deals with congestion within a network. Following
methods are used to identify congestion within a network:
1. Forward Explicit Congestion Notification (FECN) –
FECN is a bit in the frame header used to notify the destination about congestion in the network. When a frame passes through a congested part of the network, a frame relay switch sets the FECN bit, allowing the destination to see that the frame experienced congestion in transit.
2. Backward Explicit Congestion Notification (BECN) –
BECN is a bit in the frame header used to notify the source about congestion in the network. When frames experience congestion, the destination sends a frame back toward the source with the BECN bit set, telling the source that its traffic encountered congestion on the way to the destination. Once the source learns of congestion on the virtual circuit, it slows its transmission to avoid network overload.
3. Discard Eligibility (DE) –
DE is a bit in the frame header that indicates a frame's priority for discarding. If a source generates heavy traffic on a virtual circuit, it can set the DE bit on its less significant frames, marking them as preferred candidates for discard. In case of congestion, frames with the DE bit set are discarded before frames without it.
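The three congestion bits live in the two-byte Frame Relay address field (the standard Q.922 layout: a 10-bit DLCI split across both bytes, then FECN, BECN, DE, and the extended-address bits). A minimal decoding sketch, assuming that standard layout:

```python
def parse_fr_address(b0: int, b1: int):
    """Decode the 2-byte Frame Relay address field (Q.922 layout)."""
    dlci = ((b0 >> 2) << 4) | (b1 >> 4)  # 10-bit virtual-circuit identifier
    return {
        "dlci": dlci,
        "fecn": bool(b1 & 0x08),  # congestion seen in the forward direction
        "becn": bool(b1 & 0x04),  # congestion reported back toward the source
        "de":   bool(b1 & 0x02),  # frame is eligible for discard first
    }

# DLCI 100 with the BECN bit set:
print(parse_fr_address(0x18, 0x44))
# {'dlci': 100, 'fecn': False, 'becn': True, 'de': False}
```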

Types:

1. Permanent Virtual Circuit (PVC) –


These are the permanent connections between frame relay nodes that
exist for long durations. They are always available for communication
even if they are not in use. These connections are static and do not
change with time.
2. Switched Virtual Circuit (SVC) –
These are the temporary connections between frame relay nodes that
exist for the duration for which nodes are communicating with each
other and are closed/ discarded after the communication. These
connections are dynamically established as per the requirements.

Advantages:

1. High speed
2. Scalable
3. Reduced network congestion
4. Cost-efficient
5. Secured connection

Disadvantages:

1. Lacks error control mechanism


2. Delay in packet transfer
3. Less reliable

Integrated Services Digital Network (ISDN) is a suite of communication standards that


enables the digital transmission of voice, video, and data over traditional telephone networks.
ISDN is categorized into two main types: Narrowband ISDN (N-ISDN) and Broadband ISDN (B-ISDN), each designed to meet different communication needs.

🔹 Narrowband ISDN (N-ISDN)


Narrowband ISDN was developed to digitize the analog telephone system, facilitating the
transmission of voice and limited data services over existing copper lines.
Key Features:
 Channel Structure: Utilizes 64 kbps channels known as B-channels for data and
voice transmission, and 16 kbps D-channels for signaling.
 Service Interfaces:
o Basic Rate Interface (BRI): Comprises 2 B-channels and 1 D-channel
(2B+D), offering a total bandwidth of 144 kbps.
o Primary Rate Interface (PRI): Typically includes 23 B-channels and 1 D-
channel (23B+D) in North America, or 30 B-channels and 1 D-channel
(30B+D) in Europe, providing higher bandwidth suitable for larger
organizations.
 Applications: Primarily used for voice calls, fax transmissions, and early internet
access, offering more reliable and faster connections compared to analog modems.
 Limitations: The bandwidth was insufficient for emerging high-bandwidth
applications like video conferencing and multimedia streaming
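The interface capacities follow directly from the channel structure; note that in PRI the D-channel runs at 64 kbps, unlike BRI's 16 kbps D-channel. A quick check of the arithmetic:

```python
def isdn_total_kbps(b_channels, d_channel_kbps):
    # Each B-channel is 64 kbps; add the signaling D-channel on top.
    return b_channels * 64 + d_channel_kbps

print(isdn_total_kbps(2, 16))   # BRI, 2B+D  -> 144 kbps
print(isdn_total_kbps(23, 64))  # PRI (North America), 23B+D -> 1536 kbps
print(isdn_total_kbps(30, 64))  # PRI (Europe), 30B+D -> 1984 kbps
```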

🔸 Broadband ISDN (B-ISDN)


Broadband ISDN was conceptualized to overcome the limitations of N-ISDN by supporting
higher data rates suitable for advanced services.
Key Features:
 High Data Rates: Designed to support transmission rates exceeding 1.5 Mbps,
accommodating services like video conferencing, high-speed internet, and multimedia
applications.
 Underlying Technology: Employs Asynchronous Transfer Mode (ATM), a cell-
based switching technique that efficiently handles various types of traffic (voice,
video, data) simultaneously.
 Service Capabilities: Enables integration of diverse services such as high-definition
television (HDTV), video telephony, and broadband internet over a single network
infrastructure.
 Challenges: Despite its advanced capabilities, B-ISDN faced challenges including
high implementation costs, complexity, and competition from emerging technologies
like DSL and cable modems, leading to limited adoption.

🔄 Comparative Overview

| Feature | Narrowband ISDN (N-ISDN) | Broadband ISDN (B-ISDN) |
|---|---|---|
| Channel Bandwidth | 64 kbps (B-channel) | 1.5 Mbps and above |
| Switching Technique | Circuit-switched | Packet-switched (ATM) |
| Primary Use Cases | Voice, fax, basic data | Video, multimedia, high-speed data |
| Infrastructure | Existing copper lines | Requires new infrastructure |
| Adoption | Widely adopted initially | Limited due to competition from other technologies |
While both N-ISDN and B-ISDN played significant roles in the evolution of digital
communication, advancements in networking technologies have led to the adoption of more
efficient and higher-capacity systems, rendering traditional ISDN services largely obsolete in
modern networks.
Asynchronous Transfer Mode (ATM):
ATM is an ITU-T (International Telecommunication Union – Telecommunication Standardization Sector) standard for cell relay. It transmits all information, including multiple service types such as data, video, and voice, in small fixed-size packets called cells. Cells are transmitted asynchronously, and the network is connection-oriented.
ATM grew out of the development of broadband ISDN in the 1970s and 1980s and can be considered an evolution of packet switching. Each cell is 53 bytes long: a 5-byte header and a 48-byte payload. Making an ATM call requires first sending a message to set up a connection.
Subsequently, all cells follow the same path to the destination. It can
handle both constant rate traffic and variable rate traffic. Thus it can carry
multiple types of traffic with end-to-end quality of service. ATM is
independent of a transmission medium, they may be sent on a wire or
fiber by themselves or they may also be packaged inside the payload of
other carrier systems. ATM networks use “Packet” or “cell” Switching with
virtual circuits. Its design helps in the implementation of high-performance
multimedia networking.
ATM Cell Format –
Information in ATM is transmitted in fixed-size units called cells. Each cell is 53 bytes long, consisting of a 5-byte header and a 48-byte payload.

The ATM cell header comes in two formats:
1. UNI Header: Used within private ATM networks for communication between ATM endpoints and ATM switches. It includes the Generic Flow Control (GFC) field.
2. NNI Header: Used for communication between ATM switches. It does not include the Generic Flow Control (GFC) field; instead, the Virtual Path Identifier (VPI) occupies the first 12 bits.
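A sketch of decoding the 5-byte UNI header, assuming the standard field layout (GFC 4 bits, VPI 8 bits, VCI 16 bits, payload type 3 bits, CLP 1 bit, HEC 8 bits):

```python
def parse_uni_header(hdr: bytes):
    """Unpack the five header bytes of a UNI-format ATM cell."""
    assert len(hdr) == 5
    gfc = hdr[0] >> 4
    vpi = ((hdr[0] & 0x0F) << 4) | (hdr[1] >> 4)
    vci = ((hdr[1] & 0x0F) << 12) | (hdr[2] << 4) | (hdr[3] >> 4)
    pt  = (hdr[3] >> 1) & 0x07        # payload type
    clp = hdr[3] & 0x01               # cell loss priority
    hec = hdr[4]                      # header error control (CRC over bytes 0-3)
    return {"gfc": gfc, "vpi": vpi, "vci": vci, "pt": pt, "clp": clp, "hec": hec}

# VPI 1, VCI 32, everything else zero, HEC 0x55:
print(parse_uni_header(bytes([0x00, 0x10, 0x02, 0x00, 0x55])))
# {'gfc': 0, 'vpi': 1, 'vci': 32, 'pt': 0, 'clp': 0, 'hec': 85}
```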

Working of ATM:
The ATM standard uses two types of connections: Virtual Path Connections (VPCs), which consist of Virtual Channel Connections (VCCs) bundled together. A VCC is the basic unit, carrying a single stream of cells from user to user. A virtual path can be created end-to-end across an ATM network without routing cells to a particular virtual circuit. In case of a major failure, all cells belonging to a particular virtual path are rerouted the same way through the ATM network, which helps in faster recovery.
Switches connected to subscribers use both VPIs and VCIs to switch cells. Virtual Path and Virtual Channel switches can have different virtual channel connections between them, creating a virtual trunk between the switches that can be handled as a single entity. The basic operation is straightforward: look up the connection value in the local translation table, then determine the outgoing port of the connection and the new VPI/VCI value of the connection on that link.

Switching is a technique for transferring information from one computer network to another.
Let us discuss switching step by step:
Step 1 − In a computer network, switching is achieved by using switches.
Step 2 − A switch is a small hardware device used to join multiple computers together within one local area network (LAN).
Step 3 − Switches create temporary connections between two or more devices that are linked to the switch.
Step 4 − Switches forward packets based on MAC addresses.
Step 5 − By checking the destination address, a switch transfers data only to the device that has been addressed.
Step 6 − A switch operates in full-duplex mode.
Step 7 − A switch uses bandwidth efficiently because it does not broadcast messages to all ports.

Advantages
The advantages of switching are as follows:
• The bandwidth of the network increases with the help of a switch.
• It reduces the workload on individual PCs because it sends information only to the specifically addressed devices.
• It increases the overall performance of the network by reducing traffic on the network.
• There are fewer frame collisions because a switch creates a separate collision domain for each connection.
Disadvantages
The disadvantages of switching are as follows −
 A Switch is more expensive than network bridges.
 It cannot determine the network connectivity issues easily.
 The proper designing and configuration of the switch are required to
handle multicast packets.


Types of Switching Techniques


The main switching techniques are circuit switching, packet switching, virtual circuit packet switching, and message switching.

Circuit Switching
In circuit switching, a dedicated path is set up before data transmission begins, and the data then follows that specified path. Telephone lines are a classic example.

Packet Switching
The data packets will contain the source and destination addresses. Every
router in between will check the destination address, select the next router to
which the packet should be forwarded and send it via an appropriate path. As
there is no path specified, different packets may follow different paths.

Virtual circuit packet switching
This is a mix of packet and circuit switching. A path is set up logically, i.e., no dedicated physical path is established, and packets always follow this logical path. It therefore combines the advantages of both packet and circuit switching.
Message Switching
A message is transferred as a complete unit and that is routed through
intermediate nodes at which it is stored and forwarded.
Routing in Computer Networks
Routing is the process of determining the optimal path for data packets to travel from a
source to a destination across interconnected networks. Routers, the specialized devices in
networks, make these decisions based on routing algorithms and protocols.
🔹 Types of Routing
1. Static Routing: Routes are manually configured and remain unchanged unless
manually updated. Suitable for small, simple networks.
2. Dynamic Routing: Routers automatically adjust paths based on current network
conditions using protocols like:
o Distance Vector Routing: Each router shares its routing table with immediate
neighbors. Examples include RIP (Routing Information Protocol).
o Link-State Routing: Routers have a complete view of the network topology
and compute the shortest path using algorithms like Dijkstra's. OSPF (Open
Shortest Path First) is a common protocol.
3. Default Routing: Used when a router doesn't have a specific route for a destination; it forwards such packets along a default route.
🔹 Routing Algorithms
• Distance Vector: Routers calculate the best path based on distance metrics, sharing this information with neighbors.
• Link-State: Routers have knowledge of the entire network topology and independently calculate the best paths.
• Path Vector: Used in protocols like BGP (Border Gateway Protocol), where routers maintain path information that is updated as it traverses the network.
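The link-state computation can be sketched with Dijkstra's algorithm over an illustrative topology (the graph below is made up for the example; link costs map node to neighbor-cost pairs):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances as computed by link-state protocols (e.g., OSPF)."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry: a shorter path was already found
        for v, cost in graph.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

net = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 5}, "C": {"D": 1}, "D": {}}
print(dijkstra(net, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```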

🚦 Congestion Control in Computer Networks


Congestion Control refers to techniques and mechanisms used to prevent network
congestion, ensuring efficient data transmission and maintaining quality of service.
🔹 Causes of Congestion
 High Traffic Load: Excessive data packets in the network can overwhelm resources.
 Insufficient Bandwidth: Limited capacity can lead to bottlenecks.
 Network Topology Changes: Failures or changes can reroute traffic, causing
congestion elsewhere.
🔹 Congestion Control Techniques
1. Open-Loop Control: Prevention techniques that do not rely on feedback from the
network. Examples include:
o Traffic Shaping: Regulating data transmission using algorithms like:
 Leaky Bucket: Controls the rate at which packets are sent into the
network, smoothing out bursts.
 Token Bucket: Allows for burstiness but controls the average rate.
2. Closed-Loop Control: Reactive techniques that adjust based on network feedback.
Examples include:
o TCP Congestion Control: Mechanisms like:
 • Slow Start: Gradually increases the transmission rate to avoid congestion.
 • Congestion Avoidance: Adjusts the rate based on network feedback to prevent congestion.
 • Fast Retransmit and Fast Recovery: Quickly recover from packet loss without waiting for timeouts.
3. Active Queue Management (AQM): Routers proactively manage packet queues to
prevent congestion. Techniques include:
o Random Early Detection (RED): Randomly drops packets before the queue
becomes full to signal congestion.
o Explicit Congestion Notification (ECN): Marks packets instead of dropping
them to indicate impending congestion.
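As an illustration of the traffic-shaping idea above, here is a minimal token-bucket sketch (the class name, rate, and capacity are invented for the example): tokens accumulate at a fixed rate up to a burst capacity, so short bursts pass while the long-term average rate stays capped.

```python
class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size in tokens
        self.tokens = capacity    # start with a full bucket
        self.last = 0.0           # timestamp of the last update

    def allow(self, packet_size, now):
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_size <= self.tokens:
            self.tokens -= packet_size
            return True   # packet conforms: transmit it
        return False      # budget exceeded: queue or drop it

bucket = TokenBucket(rate=100, capacity=300)  # 100 tokens/s, 300-token burst
print(bucket.allow(250, now=0.0))  # True  - burst fits the full bucket
print(bucket.allow(100, now=0.0))  # False - only 50 tokens remain
print(bucket.allow(100, now=1.0))  # True  - 1 s refill added 100 tokens
```

A leaky bucket differs only in that it drains at a constant output rate and therefore smooths bursts out entirely instead of permitting them.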
Internetworking with TCP/IP
TCP/IP (Transmission Control Protocol/Internet Protocol) is the foundational suite of
communication protocols used for the internet and similar networks. It enables diverse
computer systems to communicate over interconnected networks by defining how data should
be packetized, addressed, transmitted, routed, and received.
🔹 TCP/IP Model Layers
The TCP/IP model comprises four layers, each with specific functions:
1. Application Layer: Provides protocols for specific data communications services on
a process-to-process level, such as HTTP for web browsing, SMTP for email, and
FTP for file transfers.
2. Transport Layer: Ensures reliable data transfer between host systems. Key protocols
include:
o TCP (Transmission Control Protocol): Provides reliable, ordered, and error-
checked delivery of data.
o UDP (User Datagram Protocol): Offers a connectionless datagram service
that emphasizes reduced latency over reliability.
3. Internet Layer: Handles the movement of packets around the network. The primary
protocol is:
o IP (Internet Protocol): Responsible for addressing and routing packets
between hosts.
4. Link Layer: Manages the physical transmission of data over network hardware and
includes protocols like Ethernet and Wi-Fi.
Together, these layers facilitate end-to-end communication across diverse and complex
networks.

📦 IP Packet Structure
An IP packet is the fundamental unit of data transmitted across IP networks. It consists of
two main components:
1. Header: Contains control information required for routing and delivery.
2. Payload: Carries the actual data being transmitted, such as a segment from a TCP
connection or a datagram from a UDP service.
🔹 IPv4 Header Fields
The IPv4 header includes several fields, each serving a specific purpose:
 Version (4 bits): Indicates the IP version; for IPv4, this is set to 4.
 Header Length (4 bits): Specifies the length of the header in 32-bit words.
 Type of Service (8 bits): Indicates the quality of service desired.
 Total Length (16 bits): Specifies the entire packet size, including header and data, in
bytes.
 Identification (16 bits): Used for uniquely identifying the group of fragments of a
single IP datagram.
 Flags (3 bits): Control or identify fragments.
 Fragment Offset (13 bits): Indicates the position of the fragment in the original
datagram.
 Time to Live (8 bits): Limits the packet's lifetime to prevent it from circulating
indefinitely.
 Protocol (8 bits): Indicates the protocol used in the data portion of the IP datagram
(e.g., TCP, UDP).
 Header Checksum (16 bits): Provides error-checking for the header.
 Source Address (32 bits): The IP address of the sender.
 Destination Address (32 bits): The IP address of the receiver.
 Options (variable): Allows for additional options; this field is optional and not
commonly used.
The payload follows the header and contains the actual data being transmitted.
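The fixed 20-byte portion of the header above can be packed and unpacked with Python's standard struct module, following the field order listed. This is a simplified sketch: the helper name is made up, and the checksum is left at zero rather than computed.

```python
import struct

def build_ipv4_header(src, dst, payload_len, proto=6, ttl=64):
    """Return a 20-byte IPv4 header (checksum left at 0 for brevity)."""
    version_ihl = (4 << 4) | 5          # Version 4, Header Length 5 words
    total_length = 20 + payload_len     # header + data, in bytes
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl,
        0,                 # Type of Service
        total_length,
        0,                 # Identification
        0,                 # Flags + Fragment Offset
        ttl,               # Time to Live
        proto,             # Protocol: 6 = TCP, 17 = UDP
        0,                 # Header Checksum (normally computed over header)
        bytes(map(int, src.split("."))),   # Source Address
        bytes(map(int, dst.split("."))),   # Destination Address
    )

hdr = build_ipv4_header("192.168.1.1", "10.0.0.1", payload_len=100)
version_ihl, _, total_length = struct.unpack("!BBH", hdr[:4])
print(len(hdr), version_ihl >> 4, total_length)  # 20 4 120
```

The `!` in the format string forces network (big-endian) byte order, which is how all IP header fields appear on the wire.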
🔹 IPv6 Header Differences
IPv6, the successor to IPv4, was developed to address the exhaustion of IPv4 addresses and
includes several enhancements:
 Larger Address Space: IPv6 uses 128-bit addresses, significantly increasing the
number of available IP addresses.
 Simplified Header: The IPv6 header has a fixed length and a simplified structure,
improving processing efficiency.
 Extension Headers: Optional extension headers are used for additional features,
allowing for more flexibility.
These improvements make IPv6 more suitable for the modern internet's needs.

IPv4 (Internet Protocol Version 4)


IPv4 is the fourth version of the Internet Protocol and is widely used to identify devices on a
network through an addressing system.
🔹 IPv4 Address Format
 Structure: IPv4 addresses are 32-bit numbers divided into four octets (8 bits each),
represented in decimal format and separated by periods.
 Range: Each octet can range from 0 to 255.
 Example: 192.168.1.1
🔹 Example IPv4 Addresses
 192.0.2.126
 10.0.0.1
 172.16.254.1
These addresses are typically used in private networks or for local communication.
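Since a dotted-decimal IPv4 address is just a 32-bit number printed one octet at a time, the conversion can be sketched in both directions (function names are illustrative):

```python
def ipv4_to_int(addr):
    """'192.168.1.1' -> 32-bit integer."""
    value = 0
    for octet in addr.split("."):
        n = int(octet)
        assert 0 <= n <= 255, "each octet must fit in 8 bits"
        value = (value << 8) | n
    return value

def int_to_ipv4(value):
    """32-bit integer -> dotted-decimal string."""
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

n = ipv4_to_int("192.168.1.1")
print(n, int_to_ipv4(n))  # 3232235777 192.168.1.1
```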

🌐 IPv6 (Internet Protocol Version 6)


IPv6 is the most recent version of the Internet Protocol, designed to address the limitations of
IPv4, particularly the exhaustion of available addresses.
🔹 IPv6 Address Format
 Structure: IPv6 addresses are 128-bit numbers represented in hexadecimal format
and separated by colons.
 Range: Each segment can range from 0000 to FFFF.
 Example: 2001:0db8:85a3:0000:0000:8a2e:0370:7334
🔹 Example IPv6 Addresses
 2001:0db8:85a3:0000:0000:8a2e:0370:7334
 fe80::1ff:fe23:4567:890a
 ::1 (loopback address)
 IPv6 addresses can be abbreviated by omitting leading zeros and using a double
colon (::) to represent consecutive zero segments.
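These abbreviation rules can be checked with Python's standard-library ipaddress module, which accepts any valid textual form and can print both the compressed and fully expanded representations:

```python
import ipaddress

full = ipaddress.IPv6Address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")
print(full.compressed)  # 2001:db8:85a3::8a2e:370:7334  (zeros elided)
print(full.exploded)    # 2001:0db8:85a3:0000:0000:8a2e:0370:7334

# '::1' and the fully written-out loopback address are the same address.
print(ipaddress.IPv6Address("::1") ==
      ipaddress.IPv6Address("0:0:0:0:0:0:0:1"))  # True
```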

Transport Layer
The Transport Layer (Layer 4 in the OSI model) is responsible for end-to-end communication
between hosts. It ensures complete data transfer with mechanisms for error checking, flow control,
and congestion control.
🔹 Design Issues
1. Multiplexing and Demultiplexing: Allows multiple applications to use the network
simultaneously by assigning unique port numbers.
2. Reliable Data Transfer: Ensures data is delivered accurately and in order, using
acknowledgments and retransmissions.
3. Flow Control: Prevents the sender from overwhelming the receiver by controlling the data
transmission rate.
4. Congestion Control: Manages network traffic to prevent congestion by adjusting the rate of
data transmission.
🔹 Connection Management
 TCP (Transmission Control Protocol): A connection-oriented protocol that establishes a
connection using a three-way handshake before data transfer
 UDP (User Datagram Protocol): A connectionless protocol that sends data without
establishing a connection, suitable for applications where speed is crucial and occasional
data loss is acceptable.
🔹 TCP Segment Format
A TCP segment consists of:
 Source Port: 16 bits
 Destination Port: 16 bits
 Sequence Number: 32 bits
 Acknowledgment Number: 32 bits
 Data Offset: 4 bits
 Reserved: 3 bits
 Flags: 9 bits (e.g., SYN, ACK, FIN)
 Window Size: 16 bits
 Checksum: 16 bits
 Urgent Pointer: 16 bits
 Options: Variable length
🔹 UDP Segment Format
A UDP segment consists of:
 Source Port: 16 bits
 Destination Port: 16 bits
 Length: 16 bits (header + data)
 Checksum: 16 bits
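The 8-byte UDP header above maps directly onto four 16-bit fields, which can be sketched with struct (the helper name is illustrative; the checksum is left at 0, which IPv4 permits for UDP):

```python
import struct

def build_udp_header(src_port, dst_port, payload):
    """Return a UDP segment: 8-byte header followed by the payload."""
    length = 8 + len(payload)   # Length field covers header + data
    return struct.pack("!HHHH", src_port, dst_port, length, 0) + payload

segment = build_udp_header(12345, 53, b"example-query")
src, dst, length, checksum = struct.unpack("!HHHH", segment[:8])
print(src, dst, length)  # 12345 53 21
```

Compare this with the TCP segment format above: UDP's whole header is smaller than TCP's sequence and acknowledgment numbers alone, which is where its lower overhead comes from.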

📁 Application Layer
The Application Layer (Layer 7 in the OSI model) provides network services directly to user
applications. It facilitates various functionalities like file transfers, email, and remote access.
🔹 File Transfer, Access, and Management (FTAM)
 FTAM: A protocol that allows users to access and manage files on remote systems,
supporting operations like reading, writing, and deleting files.
 FTP (File Transfer Protocol): Enables the transfer of files between a client and server over a
TCP-based network.
🔹 E-Mail
 SMTP (Simple Mail Transfer Protocol): Used for sending emails from a client to a server or
between servers.
 POP3 (Post Office Protocol 3): Allows clients to retrieve emails from a server, downloading
them for offline access.
 IMAP (Internet Message Access Protocol): Enables clients to access and manage emails
directly on the mail server, supporting multiple devices.
🔹 Virtual Terminal
A Network Virtual Terminal allows users to interact with remote systems as if they were directly
connected. It provides a standardized interface for remote login and command execution.
🔹 Public Network
Public networks refer to networks accessible by the general public, such as the Internet. They
provide various services like web browsing, email, and file sharing, often using standardized
protocols to ensure interoperability.

Framing in Data Link Layer


Frames are the units of digital transmission, particularly in computer
networks and telecommunications; they are comparable to the packets of
energy called photons in the case of light energy. Frames are also the
repeating units used in the Time-Division Multiplexing process.
A point-to-point connection between two computers or devices consists of
a wire in which data is transmitted as a stream of bits. However, these bits
must be framed into discernible blocks of information. Framing is a
function of the data link layer. It provides a way for a sender to transmit a
set of bits that are meaningful to the receiver. Ethernet, Token Ring, Frame
Relay, and other data link layer technologies have their own frame
structures. Frames have headers that contain information such as
error-checking codes.
The data link layer takes the message from the sender and delivers it to
the receiver by adding the sender’s and receiver’s addresses. The
advantage of using frames is that data is broken up into recoverable
chunks that can easily be checked for corruption.
The process of dividing the data into frames and reassembling it is
transparent to the user and is handled by the data link layer.
Framing is an important aspect of data link layer protocol design because it
allows the transmission of data to be organized and controlled, ensuring
that the data is delivered accurately and efficiently.
Problems in Framing
 Detecting the start of a frame: Every station must be able to detect a
transmitted frame. A station detects a frame by watching for a special
sequence of bits that marks its beginning, the SFD (Start Frame
Delimiter).
 How a station detects a frame: Every station listens to the link for the
SFD pattern using a sequential circuit. When the SFD is detected, the
circuit alerts the station, which then checks the destination address to
accept or reject the frame.
 Detecting end of frame: When to stop reading the frame.
 Handling errors: Framing errors may occur due to noise or other
transmission errors, which can cause a station to misinterpret the frame.
Therefore, error detection and correction mechanisms, such as cyclic
redundancy check (CRC), are used to ensure the integrity of the frame.
 Framing overhead: Every frame has a header and a trailer that
contains control information such as source and destination address,
error detection code, and other protocol-related information. This
overhead reduces the available bandwidth for data transmission,
especially for small-sized frames.
 Framing incompatibility: Different networking devices and protocols
may use different framing methods, which can lead to framing
incompatibility issues. For example, if a device using one framing
method sends data to a device using a different framing method, the
receiving device may not be able to correctly interpret the frame.
 Framing synchronization: Stations must be synchronized with each
other to avoid collisions and ensure reliable communication.
Synchronization requires that all stations agree on the frame boundaries
and timing, which can be challenging in complex networks with many
devices and varying traffic loads.
 Framing efficiency: Framing should be designed to minimize the
amount of data overhead while maximizing the available bandwidth for
data transmission. Inefficient framing methods can lead to lower network
performance and higher latency.
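The cyclic redundancy check mentioned above works by bitwise polynomial division. Here is a small sketch using the textbook generator x⁴ + x + 1 (binary 10011); the data bits are an arbitrary example, not from any real frame.

```python
def crc_remainder(data_bits, generator="10011"):
    """Append len(generator)-1 zero bits and return the division remainder."""
    pad = len(generator) - 1
    bits = list(data_bits + "0" * pad)
    for i in range(len(data_bits)):
        if bits[i] == "1":                      # only divide where a 1 leads
            for j, g in enumerate(generator):
                bits[i + j] = str(int(bits[i + j] != g))  # bitwise XOR
    return "".join(bits[-pad:])                 # the last pad bits remain

data = "1101011011"
crc = crc_remainder(data)
print(crc)  # 1110 - appended to the data as the frame trailer

# Receiver check: dividing the received data+CRC leaves an all-zero
# remainder when the frame arrived undamaged.
print(crc_remainder(data + crc).count("1") == 0)  # True
```

Flipping any single bit of `data + crc` before the receiver check makes the remainder non-zero, which is how the corruption is detected.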
Types of framing
There are two types of framing:
1. Fixed-size: The frame is of fixed size and there is no need to provide
boundaries to the frame, the length of the frame itself acts as a delimiter.
 Drawback: It suffers from internal fragmentation if the data size is less
than the frame size
 Solution: Padding
2. Variable size: In this, there is a need to define the end of the frame as
well as the beginning of the next frame to distinguish. This can be done in
two ways:
1. Length field – We can introduce a length field in the frame to indicate
the length of the frame. Used in Ethernet(802.3). The problem with this
is that sometimes the length field might get corrupted.
2. End Delimiter (ED) – We can introduce an ED(pattern) to indicate the
end of the frame. Used in Token Ring. The problem with this is that ED
can occur in the data. This can be solved by:
o Character/Byte Stuffing: Used when frames consist of characters. If the
data contains the ED pattern, an extra byte is stuffed into the data to
differentiate it from the ED.
o Bit Stuffing: Let ED = 01111 and data = 01111.
–> The sender stuffs a bit to break the pattern, i.e. inserts a 0 to get
011101.
–> The receiver receives the frame.
–> On seeing 011101, the receiver removes the stuffed 0 and reads the
data as 01111.
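Bit stuffing as standardized in HDLC (whose flag byte is 01111110) uses a slightly different rule from the toy example above: the sender inserts a 0 after every run of five consecutive 1s, so the flag pattern can never appear inside the data. A sketch of both directions:

```python
def bit_stuff(bits):
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:          # five 1s in a row: stuff a 0
            out.append("0")
            ones = 0
    return "".join(out)

def bit_unstuff(bits):
    out, ones, skip = [], 0, False
    for b in bits:
        if skip:               # this 0 was stuffed by the sender: drop it
            skip = False
            ones = 0
            continue
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:
            skip = True        # the next bit must be the stuffed 0
    return "".join(out)

data = "0111111111100"
stuffed = bit_stuff(data)
print(stuffed)                           # 011111011111000
print(bit_unstuff(stuffed) == data)      # True - round trip is lossless
```

Because the stuffed stream never contains six 1s in a row, the 01111110 flag remains unambiguous as a frame delimiter.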
Error Control in Data Link Layer



The data link layer uses error control techniques to ensure that all data
frames or packets, i.e. bit streams of data, are transferred from sender to
receiver with accuracy. Providing error control at the data link layer is an
optimization; it was never a requirement. Error control is the process, in
the data link layer, of detecting and re-transmitting data frames that might
be lost or corrupted during transmission. In both of these cases, the
receiver does not receive the correct data frame and the sender does not
even know about the loss. Therefore, sender and receiver are provided
with protocols that detect such errors as the loss of data frames. The data
link layer follows a technique known as re-transmission of frames to
recover from transit errors: each time an error is detected during
transmission, the affected data frames are retransmitted. This process is
known as ARQ (Automatic Repeat Request).

Ways of doing Error Control: There are basically two ways of doing error
control, error detection and error correction, described in the Ways of
Error Control section below.

Flow Control in Data Link Layer


Flow control is a design issue at the Data Link Layer. It is a technique that
observes the proper flow of data from sender to receiver. It is essential
because a sender may transmit data at so fast a rate that the receiver
cannot receive and process it in time. This can happen if the receiver has a
very high traffic load compared to the sender, or if the receiver has less
processing power than the sender. Flow control is a technique that permits
two stations working and processing at different speeds to communicate
with one another. Flow control in the Data Link Layer restricts and
coordinates the number of frames or the amount of data the sender can
send before it must wait for an acknowledgement from the receiver. Flow
control is a set of procedures that tells the sender how much data or how
many frames it can transmit before the data overwhelms the receiver. The
receiving device has only a limited speed and a limited amount of memory
to store data. This is why the receiving device should be able to tell the
sender to stop the transmission temporarily before the limit is reached. It
also needs a buffer, a large block of memory, for storing data or frames
until they are processed.
Flow control can also be understood as a speed-matching mechanism for
two stations.

Approaches to Flow Control : Flow Control is classified into two


categories:
 Feedback-based Flow Control: In this technique, the sender transmits
data or frames to the receiver, and the receiver then sends information
back that permits the sender to transmit more data, or tells the sender
how the receiver is doing. In short, the sender transmits further data or
frames only after it has received acknowledgements from the receiver.
 Rate-based Flow Control: In this technique, when a sender transmits
data faster than the receiver can receive it, a built-in mechanism in the
protocol limits the overall rate at which the sender transmits data,
without any feedback or acknowledgement from the receiver.
Techniques of Flow Control in Data Link Layer: There are basically two
techniques developed to control the flow of data:
1. Stop-and-Wait Flow Control: This method is the simplest form of flow
control. The message is broken down into multiple frames, and the
receiver indicates its readiness to receive each frame of data. Only when
an acknowledgement is received does the sender transmit the next frame.
This process continues until the sender transmits an EOT (End of
Transmission) frame. In this method, only one frame can be in
transmission at a time, which leads to inefficiency, i.e. low productivity, if
the propagation delay is much longer than the transmission delay.
Ultimately, in this method the sender sends a single frame, and the
receiver takes one frame at a time and sends an acknowledgement (which
carries the next expected frame number) for each new frame.
Advantages –
 This method is very easiest and simple and each of the frames is
checked and acknowledged well.
 This method is also very accurate.
Disadvantages –
 This method is fairly slow.
 In this, only one packet or frame can be sent at a time.
 It is very inefficient and makes the transmission process very slow.
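The stop-and-wait behaviour can be sketched as a small simulation over a lossy in-memory "link" (the loss rate, seed, and frame names are invented): the sender retransmits each frame until its acknowledgement arrives, so exactly one frame is ever outstanding.

```python
import random

def send_stop_and_wait(frames, loss_rate=0.3, seed=42):
    """Deliver frames one at a time, retransmitting on simulated loss."""
    random.seed(seed)
    delivered, transmissions = [], 0
    for seq, frame in enumerate(frames):
        while True:
            transmissions += 1
            if random.random() < loss_rate:  # frame (or its ACK) was lost
                continue                      # timeout -> retransmit
            delivered.append(frame)           # receiver got it and ACKs seq
            break                             # sender moves to the next frame
    return delivered, transmissions

data = ["f0", "f1", "f2", "f3"]
received, tx = send_stop_and_wait(data)
print(received == data, tx >= len(data))  # True True
```

Every frame eventually arrives in order, but the transmission count grows with the loss rate, and the link sits idle for a full round trip per frame, which is exactly the inefficiency noted above.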
2. Sliding Window Flow Control: This method is required where reliable,
in-order delivery of packets or frames is needed, as in the data link layer.
It is a point-to-point protocol that assumes no other entity tries to
communicate until the current data or frame transfer is complete. In this
method, the sender transmits several frames or packets before receiving
any acknowledgement. Both the sender and receiver agree upon the total
number of data frames after which an acknowledgement must be
transmitted. The data link layer uses this method to allow the sender to
have more than one unacknowledged packet “in flight” at a time, which
increases and improves network throughput. Ultimately, in this method the
sender sends multiple frames, while the receiver takes them one by one
and, after completing each frame, sends an acknowledgement (which
carries the next expected frame number) for the new frame.
Advantages –
 It performs much better than stop-and-wait flow control.
 This method increases efficiency.
 Multiples frames can be sent one after another.
Disadvantages –
 The main issue is complexity at the sender and receiver due to the
transferring of multiple frames.
 The receiver might receive data frames or packets out of sequence.
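A minimal Go-Back-N-style sketch of the sliding window (assuming an ideal lossless link, so every acknowledgement is cumulative and arrives in order) shows how up to `window` frames can be in flight at once:

```python
def sliding_window_send(frames, window=3):
    base = 0        # oldest unacknowledged frame
    next_seq = 0    # next frame to transmit
    log = []
    while base < len(frames):
        # Fill the window: transmit until `window` frames are outstanding.
        while next_seq < len(frames) and next_seq < base + window:
            log.append(("send", next_seq))
            next_seq += 1
        # Lossless link: the receiver ACKs the oldest outstanding frame,
        # sliding the window forward by one.
        log.append(("ack", base))
        base += 1
    return log

log = sliding_window_send(["f0", "f1", "f2", "f3", "f4"], window=3)
print(log[:4])  # [('send', 0), ('send', 1), ('send', 2), ('ack', 0)]
```

Unlike stop-and-wait, three frames are already on the wire before the first acknowledgement is processed, which is where the throughput gain comes from.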

Ways of Error Control

1. Error Detection: Error detection, as the name suggests, simply means
the detection or identification of errors. These errors may occur due to
noise or other impairments during transmission from transmitter to
receiver in a communication system. It is a class of techniques for
detecting garbled, i.e. unclear and distorted, data or messages.
2. Error Correction: Error correction, as the name suggests, simply
means the correction or fixing of errors. It means reconstruction of the
original, error-free data. Error correction methods, however, are very
costly and hard to implement.

Data Link Layer in OSI Model
The data link layer is the second layer from the bottom in the OSI (Open
System Interconnection) network architecture model. It is responsible for
the node-to-node delivery of data within the same local network. Its major
role is to ensure error-free transmission of information. DLL is also
responsible for encoding, decoding, and organizing the outgoing and
incoming data.
This is considered the most complex layer of the OSI model as it hides all
the underlying complexities of the hardware from the other above layers. In
this article, we will discuss Data Link Layer in Detail along with its functions,
and sub-layers.

Sub-Layers of The Data Link Layer


The data link layer is further divided into two sub-layers, which are as
follows:
Logical Link Control (LLC)
This sublayer of the data link layer deals with multiplexing and the flow of
data among applications and other services; LLC is also responsible for
providing error messages and acknowledgments.
Media Access Control (MAC)
The MAC sublayer manages the device’s interaction with the medium, is
responsible for addressing frames, and controls physical media access.
The data link layer receives information in the form of packets from the
Network layer, divides the packets into frames, and sends those frames
bit by bit to the underlying physical layer.