
MOD-II

COMPUTER NETWORKS
AIDS
PREPARED BY: MR NILACHAKRA DASH
UNIT - II

• Data link layer: Design issues, framing, error detection and correction. Elementary data link protocols: simplex protocol, a simplex stop-and-wait protocol for an error-free channel, a simplex stop-and-wait protocol for a noisy channel. Sliding window protocols: a one-bit sliding window protocol, a protocol using Go-Back-N, a protocol using Selective Repeat, example data link protocols. Medium access sublayer: the channel allocation problem, multiple access protocols: ALOHA, carrier sense multiple access protocols, collision-free protocols. Wireless LANs, data link layer switching.

ELEMENTARY DATA LINK PROTOCOLS:
UTOPIAN SIMPLEX PROTOCOL
• The utopian simplex protocol is the simplest protocol: it does not worry about whether anything is going wrong on the channel.
• Data is transmitted in only one direction, so the protocol is unidirectional.
• No matter what is happening in the network, the sender and receiver are assumed to be always ready to communicate, so processing delay is also ignored.
• The protocol assumes that infinite buffer space is available at both the sender and the receiver.
• It is an unrealistic protocol, or you can say an unrestricted protocol.
• The channel between layer 2 of the sender and the receiver never discards or damages a frame during communication.
• In this protocol, the channel used between layer-2 of the sender and
receiver never discards or damages the frame during communication.
Working of Utopian Simplex Protocol
• In this protocol, two entities, a sender and a receiver, communicate with each other over a channel.
• The sender process and the receiver process run at the data link layer of the sender's machine and the receiver's machine, respectively.
• Sequence numbers and acknowledgment numbers are not used.
• Only undamaged frames ever arrive.
• The sender sends data over the line as fast as possible: it fetches packets from the network layer, builds frames, and sends the frames over the line.
• The receiver, on the other hand, simply waits for a frame to arrive.
• When a frame arrives from the sender, the receiver takes it from the hardware buffer and passes it up to the network layer.
• After handing the frame to the network layer, the receiver's data link layer sits back and waits for the next frame.
• No frame is ever lost in transmission, so no field of the frame is needed for flow control or error detection.
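The sender and receiver loops described above can be sketched in a few lines of Python. The `Queue` standing in for the error-free channel and the frame dictionary are illustrative assumptions, not part of any real networking API.

```python
from queue import Queue

channel = Queue()  # stands in for the channel: never loses or damages a frame

def sender(packets):
    for payload in packets:        # fetch a packet from the network layer
        frame = {"info": payload}  # build a frame: no sequence or ack fields needed
        channel.put(frame)         # send as fast as possible; never waits

def receiver(expected_count):
    delivered = []
    for _ in range(expected_count):
        frame = channel.get()            # wait for the next (always undamaged) frame
        delivered.append(frame["info"])  # pass the payload up to the network layer
    return delivered

sender(["p1", "p2", "p3"])
print(receiver(3))  # ['p1', 'p2', 'p3']
```

Because the channel never fails, the sender needs no feedback of any kind; it would run correctly even if the receiver were arbitrarily slow, which is exactly the unrealistic assumption the protocol makes.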
Simplex Stop-and-Wait Protocol for Noiseless
Channel
• In a stop-and-wait protocol, the sender stops after sending a frame to
the receiver and waits for an acknowledgment before sending
another frame.
• The sender is sending the frame to the receiver. After sending the
frame, the sender stops the transmission and waits for the
acknowledgment from the receiver.
• We assume here a noiseless, error-free channel on which frames are never damaged or corrupted. The channel itself, however, does nothing to control the flow of data.
• As soon as the receiver receives the frame, it opens it and sends it to
the network layer for further processing. Now, the receiver will create
an acknowledgment, which allows the sender to send the next frame.
• Using the simplex stop-and-wait protocol, we can prevent the sender
from flooding the receiver with frames faster than the receiver can
process them.
• To prevent flooding at the receiver, one solution is to give the receiver a buffer large enough to let it process frames back-to-back.
• We could also enhance the receiver's processing capability so that it can quickly pass received frames to the network layer, but this is still not a general solution.
• The common solution to the flooding problem is feedback: the receiver tells the sender when it is ready, reducing the flow rate at the source.
• In the simplex stop-and-wait protocol, therefore, the receiver sends a dummy frame back to the sender after the packet has been passed to the network layer, giving the sender permission to send the next frame.
• Frames now travel in both directions between sender and receiver, so the simplex stop-and-wait protocol uses the channel bidirectionally.
• The communication is bidirectional, but it operates in half-duplex mode: only one side transmits at a time.
Working of Simplex Stop-and-Wait Protocol
(Noiseless Channel)
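The strict alternation described above can be sketched as two threads: the sender transmits one frame and blocks until the receiver's dummy acknowledgment frame arrives. The queues standing in for the two directions of the channel are illustrative assumptions.

```python
import threading
from queue import Queue

data_ch, ack_ch = Queue(), Queue()   # forward and reverse directions of the channel

def stop_and_wait_sender(payloads):
    for p in payloads:
        data_ch.put({"info": p})     # send one frame ...
        ack_ch.get()                 # ... then stop and wait for the dummy ack frame

def stop_and_wait_receiver(n, out):
    for _ in range(n):
        frame = data_ch.get()        # frame always arrives undamaged
        out.append(frame["info"])    # pass it to the network layer first ...
        ack_ch.put({})               # ... then send the dummy frame: "send the next one"

received = []
tx = threading.Thread(target=stop_and_wait_sender, args=(["f1", "f2", "f3"],))
tx.start()
stop_and_wait_receiver(3, received)
tx.join()
print(received)  # ['f1', 'f2', 'f3']
```

Note that the dummy frame carries no data at all; its arrival is the entire message, which is why no sequence numbers are needed on a noiseless channel.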
Simplex Stop-and-Wait Protocol for a Noisy
Channel
• Here we assume the general situation in which the communication channel can make errors during transmission: frames can be damaged or lost completely.
• On a noisy channel the receiver still has only a limited buffer capacity and a limited processing speed, so the protocol must also prevent the sender from flooding the receiver with data faster than it can be handled.
• In rare cases a frame may be damaged in such a way that its checksum nonetheless appears correct; this protocol, like all others, fails against such undetected errors. To handle frames and acknowledgments that are lost outright, a timer is added.
• Suppose the receiver's acknowledgment is lost during transmission: the sender waits for the acknowledgment for some time and, after the timeout, sends the frame again. This process repeats until the frame arrives and its acknowledgment is received from the receiver.
• The data link layer is responsible for flow and error control. When the sender's network layer hands a series of packets to the data link layer, the data link layer must deliver them, via the receiver's data link layer, to the receiver's network layer.
• The network layer has no way to check whether a packet is in error, so the data link layer must guarantee to the network layer that no transmission error reaches it in a packet. Duplicate packets could otherwise arrive at the network layer; this protocol prevents that.
Working of Simplex Stop-and-Wait Protocol for
a Noisy Channel
• As you can see in the above diagram, the sender sends the packet in the
form of a frame to the receiver. When the receiver receives the frame, it
sends the frame in a packet format to the network layer.
• After frame-1 successfully reaches the receiver, the receiver will send an
acknowledgment to the sender. The sender will send the frame-2 after
receiving the acknowledgment from the receiver. But as shown in the figure,
frame-2 is lost during transmission. Therefore, the sender will retransmit
frame-2 after the timeout.
• Further, the receiver is sending an acknowledgment to the sender after
receiving frame-2. But the acknowledgment is completely lost during
transmission.
• The sender is waiting for the acknowledgment, but the timeout has elapsed,
and the acknowledgment has not been received. So the sender will assume
that the frame is lost or damaged, and it will send the same frame again to
the receiver.
• The receiver receives the same frame again. But how does it recognize whether the frame carries a duplicate packet or a new one? It uses the sequence number to tell duplicate packets from new ones.
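The timeout-and-retransmit behaviour just described can be modelled deterministically. The `frame_lost` and `ack_lost` attempt numbers below are a scripted stand-in for channel noise, and the 1-bit sequence number is what lets the receiver discard duplicates.

```python
def stop_and_wait_arq(payloads, frame_lost=(), ack_lost=()):
    """Model stop-and-wait ARQ; frame_lost/ack_lost are attempt numbers
    at which the channel drops the frame or its acknowledgment."""
    delivered, expected, attempt = [], 0, 0
    for i, p in enumerate(payloads):
        seq = i % 2                      # 1-bit sequence number, alternates 0,1,0,...
        while True:
            attempt += 1
            if attempt in frame_lost:    # frame lost -> sender times out, resends
                continue
            if seq == expected:          # new frame: deliver and flip expectation
                delivered.append(p)
                expected ^= 1
            # duplicate (seq != expected): discard, but still (re)send the ack
            if attempt in ack_lost:      # ack lost -> sender times out, resends
                continue
            break                        # ack received: move to the next payload
    return delivered, attempt

# frame 2 is lost once; the ack of frame 3 is lost once, forcing a duplicate
data, attempts = stop_and_wait_arq(["f1", "f2", "f3"],
                                   frame_lost={2}, ack_lost={4})
print(data)      # ['f1', 'f2', 'f3'] -- nothing lost or duplicated upward
print(attempts)  # 5 transmissions were needed for 3 frames
```

The duplicate caused by the lost acknowledgment is silently discarded by the sequence-number check, so the network layer sees each packet exactly once, as the text requires.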
Sliding Window Protocol

• The Sliding Window protocol handles efficiency issues by sending more than one packet at a time, with a larger sequence-number space.
• The Sliding Window protocol is a key computer networking technique for controlling the flow of data between two devices.
• It guarantees that data is sent consistently and effectively, allowing many packets to be sent before an acknowledgment is required for the first, maximizing the use of available bandwidth.
Terminologies Related to Sliding Window Protocol

• Transmission Delay (Tt) – Time to push the packet from the host onto the outgoing link. If B is the bandwidth of the link and D is the data size to transmit,
Tt = D/B
• Propagation Delay (Tp) – The time taken by the first bit placed on the outgoing link to reach the destination.
• It depends on the distance d and the wave propagation speed s (which depends on the characteristics of the medium).
Tp = d/s
• Efficiency – Defined as the ratio of useful time to the total cycle time of a packet. For the stop-and-wait protocol,
Total time (TT) = Tt(data) + Tp(data) + Tt(acknowledgement) + Tp(acknowledgement)
≈ Tt(data) + Tp(data) + Tp(acknowledgement)
= Tt + 2*Tp
• Since acknowledgements are very small, their transmission delay can be neglected.
• Efficiency = Useful Time / Total Cycle Time
= Tt/(Tt + 2*Tp) (for stop-and-wait)
= 1/(1 + 2a) [using a = Tp/Tt]
• Effective Bandwidth (EB) or Throughput – Number of bits sent per second.
• EB = Data Size (D) / Total Cycle Time (Tt + 2*Tp)
Multiplying and dividing by the bandwidth B,
= (1/(1 + 2a)) * B [using a = Tp/Tt]
= Efficiency * Bandwidth
• Capacity of link – If a channel is full duplex, bits can be transferred in both directions without collisions. The maximum number of bits a channel/link can hold is its capacity.
• Capacity = Bandwidth (B) * Propagation delay (Tp)
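The formulas above can be checked with a worked example. The link parameters below (frame size, bandwidth, distance, propagation speed) are made-up illustrative values, not taken from the text.

```python
D = 8_000        # data size per frame, bits (illustrative)
B = 1_000_000    # link bandwidth, bits/s (illustrative)
d = 2_000_000    # link distance, metres (illustrative)
s = 2e8          # propagation speed in the medium, m/s (illustrative)

Tt = D / B                     # transmission delay = 0.008 s
Tp = d / s                     # propagation delay  = 0.01 s
a = Tp / Tt                    # = 1.25
efficiency = 1 / (1 + 2 * a)   # stop-and-wait efficiency, 1/(1+2a)
throughput = efficiency * B    # effective bandwidth, bits/s
capacity = B * Tp              # bits the link can hold in flight

print(round(Tt, 4), round(Tp, 4), round(a, 2))   # 0.008 0.01 1.25
print(round(efficiency, 4))                      # 0.2857
print(round(throughput))                         # 285714
```

With a = 1.25 the sender is idle more than 70% of each cycle, which is exactly the inefficiency the sliding window protocols below are designed to remove.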
Stop-and-Wait Protocol vs Sliding Window Protocol

• Mechanism: In the Stop and Wait protocol, the sender sends a single frame and waits for an acknowledgment from the receiver. In the Sliding Window protocol, the sender sends multiple frames at a time and retransmits the damaged frames.
• Efficiency: The Stop and Wait protocol is less efficient. The Sliding Window protocol is more efficient than Stop and Wait.
• Window Size: The sender's window size in Stop and Wait is 1. In Sliding Window it varies from 1 to n.
• Sorting: In Stop and Wait the frames do not need to be sorted. In Sliding Window, sorting of frames helps improve the efficiency of the protocol.
• Efficiency formula: Stop and Wait efficiency is 1/(1+2a), where a is the ratio of propagation delay to transmission delay. Sliding Window efficiency is N/(1+2a), where N is the number of window frames.
• Duplex: Stop and Wait is half duplex in nature. Sliding Window is full duplex in nature.
A One-Bit Sliding Window Protocol
• In the one-bit sliding window protocol, the size of the window is 1. The sender transmits a frame, waits for its acknowledgment, and only then transmits the next frame.
• It thus works like the stop-and-wait protocol, but it provides full-duplex communication: the acknowledgment is attached to the next outgoing data frame by piggybacking.
• Each data frame additionally carries an acknowledgment field (ack field) a few bits long. The ack field contains the sequence number of the last frame received without error. If this sequence number matches the sequence number of the frame about to be sent, the sender infers there was no error and transmits the next frame; otherwise it infers an error and retransmits the previous frame.
• The diagram describes a scenario with
sequence numbers 0, 1, 2, 3, 0, 1, 2 and so on.
It depicts the sliding windows in the sending
and the receiving stations during frame
transmission.
Types of Sliding Window Protocol

• 1. Go-Back-N ARQ   2. Selective Repeat ARQ

• 1. Go-Back-N ARQ (Automatic Repeat reQuest)
• Go-Back-N ARQ allows sending more than one frame before the first frame's acknowledgment is received.
• It is also known as a sliding window protocol since it makes use of the sliding window notion.
• There is a limit to the number of frames that can be outstanding, and they are numbered consecutively.
• If an acknowledgment is not received in a timely manner, that frame and all frames after it are retransmitted.
• 2. Selective Repeat ARQ
• This protocol also allows additional frames to be sent before the first frame's acknowledgment is received.
• But in this case, the good frames that arrive are accepted and buffered, and only the damaged or lost frames are retransmitted.
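The difference between the two retransmission strategies can be shown in a tiny sketch: on a lost frame, Go-Back-N resends everything from the loss onward, while Selective Repeat resends only the lost frame (the later frames were buffered). The frame list and loss index are illustrative.

```python
def retransmitted(frames, lost_index, protocol):
    """Return which frames must be sent again after frames[lost_index] is lost."""
    if protocol == "go-back-n":
        # receiver discards everything after the gap, so resend the whole tail
        return frames[lost_index:]
    if protocol == "selective-repeat":
        # out-of-order frames were accepted and buffered: resend only the lost one
        return [frames[lost_index]]
    raise ValueError("unknown protocol")

frames = [0, 1, 2, 3, 4]
print(retransmitted(frames, 2, "go-back-n"))         # [2, 3, 4]
print(retransmitted(frames, 2, "selective-repeat"))  # [2]
```

The trade-off is visible even at this scale: Selective Repeat saves bandwidth at the cost of receiver buffering and reordering logic, which is why the comparison below calls it the more complex protocol.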
• Advantages of Sliding Window Protocol
• Efficiency: The sliding window protocol is an efficient method of
transmitting data across a network because it allows multiple
packets to be transmitted at the same time. This increases the
overall throughput of the network.
• Reliable: The protocol ensures reliable delivery of data, by
requiring the receiver to acknowledge receipt of each packet
before the next packet can be transmitted. This helps to avoid
data loss or corruption during transmission.
• Flexibility: The sliding window protocol is a flexible technique
that can be used with different types of network protocols and
topologies, including wireless networks, Ethernet and IP
networks.
• Congestion Control: The sliding window protocol can also help
control network congestion by adjusting the size of the window
based on the network conditions.
• Disadvantages of Sliding Window Protocol
• Complexity: The sliding window protocol can be complex to
implement and can require a lot of memory and
processing power to operate efficiently.
• Delay: The protocol can introduce a delay in the transmission of
data, as each packet must be acknowledged before the next packet
can be transmitted. This delay can increase the overall latency of
the network.
• Limited Bandwidth Utilization: The sliding window protocol may
not be able to utilize the full available bandwidth of the network,
particularly in high-speed networks, due to the overhead of the
protocol.
• Window Size Limitations: The maximum size of the sliding window can be limited by the size of the receiver's buffer or the available network resources, which can affect the overall performance of the protocol.
Go-Back-N ARQ vs Selective Repeat ARQ

• Retransmission: In the Go-Back-N protocol, if a sent frame is found suspect, all frames from the lost frame to the last frame transmitted are retransmitted. In the Selective Repeat protocol, only the frames found suspect are retransmitted.
• Sender window size: N in Go-Back-N; also N in Selective Repeat.
• Receiver window size: 1 in Go-Back-N; N in Selective Repeat.
• Complexity: Go-Back-N is less complex; Selective Repeat is more complex.
• Sorting: In Go-Back-N, neither the sender nor the receiver needs sorting. In Selective Repeat, the receiver side needs sorting to reorder the frames.
• Acknowledgement type: cumulative in Go-Back-N; individual in Selective Repeat.
• Out-of-order packets: In Go-Back-N they are NOT accepted (they are discarded) and the entire window is retransmitted. In Selective Repeat they are accepted.
• Corrupt packets: In Go-Back-N, receiving a corrupt packet also causes the entire window to be retransmitted. In Selective Repeat, the receiver immediately sends a negative acknowledgement, so only the selected packet is retransmitted.
• Efficiency: the efficiency of Go-Back-N is N/(1+2*a), and the efficiency of Selective Repeat is also N/(1+2*a).
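The shared efficiency formula N/(1+2a) from the comparison above is easy to evaluate. Note that it saturates at 1 once the window is large enough to keep the link busy, i.e. when N ≥ 1 + 2a; the sample values of N and a below are illustrative.

```python
def sliding_window_efficiency(N, a):
    # N = window size in frames, a = Tp/Tt; efficiency cannot exceed 100%
    return min(1.0, N / (1 + 2 * a))

a = 1.25  # example propagation-to-transmission delay ratio
for N in (1, 2, 4):
    print(N, round(sliding_window_efficiency(N, a), 4))
```

With N = 1 the formula reduces to the stop-and-wait value 1/(1+2a); with N = 4 and a = 1.25 the pipe stays full and efficiency reaches 100%.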
Examples of Data Link Layer Protocols

• Data link layer protocols are generally responsible for ensuring and confirming that the bits and bytes received are identical to the bits and bytes that were transferred.
Synchronous Data Link Control (SDLC)
• SDLC is a computer communication protocol.
• It supports multipoint links as well as error recovery and error correction.
• It is usually used to carry SNA (Systems Network Architecture) traffic and is the precursor to HDLC. It was designed and developed by IBM in 1975.
• It is used to connect remote devices to mainframe computers at central locations, in point-to-point (one-to-one) or point-to-multipoint (one-to-many) connections.
• It ensures that data units arrive correctly and flow properly from one network point to the next.
High-Level Data Link Control (HDLC)
• HDLC is a protocol now regarded as an umbrella under which many wide-area protocols sit.
• It was adopted as part of the X.25 network.
• It was originally developed by ISO in 1979.
• This protocol is based on SDLC.
• It provides both best-effort unreliable service and reliable service.
• HDLC is a bit-oriented protocol applicable to both point-to-point and multipoint communications.
Serial Line Internet Protocol (SLIP)
• SLIP is an older protocol that simply adds a framing byte at the end of each IP packet.
• It is a data link control facility for transferring IP packets, usually between an Internet Service Provider (ISP) and a home user over a dial-up link.
• It is an encapsulation of TCP/IP designed to work over serial ports and router connections.
• It has some limitations: it provides no mechanism for error detection or error correction.
Point-to-Point Protocol (PPP)
• PPP is a protocol that provides the same basic functionality as SLIP.
• It is a more robust protocol that can transport other types of packets along with IP packets.
• It can also be used on dial-up and leased router-to-router lines.
• It provides a framing method to delimit frames.
• It is a character-oriented protocol that also performs error detection.
• It provides two protocols: NCP and LCP. LCP is used for bringing lines up, negotiating options, and bringing them down, whereas NCP is used for negotiating network-layer protocols.
• It runs over the same serial interfaces as HDLC.
Link Control Protocol (LCP)
• LCP is a PPP protocol used for establishing, configuring, testing, maintaining, and terminating links for the transmission of data frames.
• (It should not be confused with Logical Link Control, LLC, defined by IEEE 802.2, which provides HDLC-style services on a LAN.)
Link Access Procedure (LAP)
• LAP protocols are data link layer protocols used for framing and transferring data across point-to-point links.
• They also include some reliability service features.
• There are three main types of LAP:
• LAPB (Link Access Procedure, Balanced),
• LAPD (Link Access Procedure, D-Channel),
• and LAPF (Link Access Procedure for Frame-Mode Bearer Services).
• LAP originated from IBM SDLC, which IBM submitted to the ISO for standardization.
Network Control Protocol (NCP)
• NCP was an older protocol, originally implemented on the ARPANET.
• It allowed users to access computers and devices at remote locations and to transfer files between two or more computers. As the ARPANET's transport protocol, NCP was replaced by TCP/IP in the 1980s.
• The name NCP is also used for the set of network control protocols that form part of PPP: an NCP is available for each higher-layer protocol supported by PPP.
Medium Access Control Sublayer (MAC sublayer)

• The medium access control (MAC) sublayer is a sublayer of the data link layer in the Open Systems Interconnection (OSI) reference model for data transmission.
• It is responsible for flow control and multiplexing over the transmission medium.
• It controls the transmission of data packets over remotely shared channels.
• It sends data over the network interface card.
Functions of MAC Layer

• It provides an abstraction of the physical layer to the LLC and upper layers of the OSI stack.
• It is responsible for encapsulating frames so that they are suitable for transmission over the physical medium.
• It resolves the addressing of the source station as well as the destination station, or groups of destination stations.
• It performs multiple-access resolution when more than one data frame is to be transmitted, determining the channel access method for transmission.
• It also performs collision resolution and initiates retransmission in case of collisions.
• It generates the frame check sequence and thus contributes to protection against transmission errors.
MAC Addresses
• MAC address or media access control address is a unique
identifier allotted to a network interface controller (NIC)
of a device.
• It is used as a network address for data transmission within
a network segment like Ethernet, Wi-Fi, and Bluetooth.
• MAC address is assigned to a network adapter at the time
of manufacturing.
• It is hardwired or hard-coded in the network interface card
(NIC).
• A MAC address comprises six groups of two hexadecimal digits, separated by hyphens, colons, or no separator at all.
• An example of a MAC address is 00:0A:89:5B:F0:11.
• A MAC Address is a 12-digit hexadecimal number (48-
bit binary number), which is mostly represented by
Colon-Hexadecimal notation.
• The First 6 digits (say 00:40:96) of the MAC Address
identify the manufacturer, called the OUI
(Organizational Unique Identifier).
• IEEE Registration Authority Committee assigns these
MAC prefixes to its registered vendors.
• Here are some OUIs of well-known manufacturers:
• CC:46:D6 - Cisco
3C:5A:B4 - Google, Inc.
3C:D9:2B - Hewlett Packard
00:9A:CD - HUAWEI TECHNOLOGIES CO.,LTD
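Extracting the OUI from a MAC address is a simple string exercise. The small vendor table below holds only the sample entries listed above; it is an illustrative stand-in, not the full IEEE registry.

```python
OUI_VENDORS = {            # sample entries from the list above, not a full registry
    "CC:46:D6": "Cisco",
    "3C:5A:B4": "Google, Inc.",
    "3C:D9:2B": "Hewlett Packard",
    "00:9A:CD": "HUAWEI TECHNOLOGIES CO.,LTD",
}

def oui(mac):
    """Return the first three octets in colon-hexadecimal notation,
    accepting hyphen, colon, or no-separator input formats."""
    digits = mac.replace(":", "").replace("-", "").upper()
    return ":".join(digits[i:i + 2] for i in (0, 2, 4))

print(oui("00-0A-89-5B-F0-11"))                   # 00:0A:89
print(OUI_VENDORS.get(oui("cc:46:d6:01:02:03")))  # Cisco
```

Normalising the separators first means the same lookup works for all three notations the text mentions.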
Types of MAC Address

1. Unicast:
• A Unicast-addressed frame is only sent out to
the interface leading to a specific NIC.
• If the LSB (least significant bit) of the first octet
of an address is set to zero, the frame is meant
to reach only one receiving NIC.
• The MAC Address of the source machine is
always Unicast.
2. Multicast:
• The multicast address allows the source to send a
frame to a group of devices.
• In Layer-2 (Ethernet) Multicast address, the LSB
(least significant bit) of the first octet of an
address is set to one.
• IEEE has allocated the address block 01-80-C2-xx-
xx-xx (01-80-C2-00-00-00 to 01-80-C2-FF-FF-FF) for
group addresses for use by standard protocols.
3. Broadcast:
• Similar to the network layer, broadcast is also possible on the underlying layer (the data link layer).
• Ethernet frames with ones in all bits of the
destination address (FF-FF-FF-FF-FF-FF) are
referred to as the broadcast addresses.
Frames that are destined with MAC address
FF-FF-FF-FF-FF-FF will reach every computer
belonging to that LAN segment.
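The three address types above can be distinguished programmatically using exactly the rules stated: the all-ones address is broadcast, and otherwise the LSB of the first octet separates multicast (1) from unicast (0).

```python
def mac_type(mac):
    octets = mac.replace("-", ":").upper().split(":")
    if octets == ["FF"] * 6:         # all bits set: broadcast address
        return "broadcast"
    first_octet = int(octets[0], 16)
    # LSB of the first octet: 0 -> one receiving NIC, 1 -> a group of devices
    return "multicast" if first_octet & 1 else "unicast"

print(mac_type("00:0A:89:5B:F0:11"))  # unicast
print(mac_type("01:80:C2:00:00:00"))  # multicast
print(mac_type("FF-FF-FF-FF-FF-FF"))  # broadcast
```

Note that broadcast is technically a special case of multicast (its first octet also has LSB 1), which is why the all-ones check must come first.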
The channel allocation problem
• Channel allocation is the process by which a single channel is divided and allotted to multiple users in order to carry user-specific tasks.
• The number of users may vary every time the process takes place.
• If there are N users and the channel is divided into N equal-sized sub-channels, each user is assigned one portion.
• If the number of users is small and does not vary over time, Frequency Division Multiplexing (FDM) can be used, as it is a simple and efficient channel-bandwidth allocation technique.
• The channel allocation problem can be solved by two schemes: static channel allocation in LANs and MANs, and dynamic channel allocation.
• 1. Static Channel Allocation in LANs and MANs:
It is the classical or traditional approach of allocating
a single channel among multiple competing users
using Frequency Division Multiplexing (FDM).
• If there are N users, the frequency channel is divided into N equal-sized portions (bandwidth), each user being assigned one portion. Since each user has a private frequency band, there is no interference between users.
• However, it is not suitable when there is a large number of users with variable bandwidth requirements.
• It is not efficient to divide the channel into a fixed number of chunks.
• T = 1/(U*C − L)
• T(FDM) = 1/(U*(C/N) − L/N) = N*T
• Where:
• T = mean time delay,
• C = capacity of the channel,
• L = arrival rate of frames,
• 1/U = bits per frame,
• N = number of sub-channels,
• T(FDM) = mean time delay under Frequency Division Multiplexing.
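The delay formulas above can be evaluated with concrete numbers; the capacity, frame size, and arrival rate below are illustrative values chosen only to make the arithmetic visible.

```python
C = 100_000_000      # channel capacity, bits/s (illustrative)
frame_bits = 10_000  # bits per frame, i.e. 1/U (illustrative)
U = 1 / frame_bits
L = 5_000            # arrival rate, frames/s (illustrative)

T = 1 / (U * C - L)              # mean delay on one shared channel
N = 10                           # number of FDM sub-channels
T_fdm = 1 / (U * (C / N) - L / N)  # each sub-channel gets C/N and L/N

print(round(T * 1e6), "microseconds")      # 200 microseconds
print(round(T_fdm * 1e6), "microseconds")  # 2000 microseconds -- N times worse
```

The result illustrates the classical argument against static FDM: splitting both the capacity and the load by N multiplies the mean delay by N.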
• Dynamic Channel Allocation:
• In dynamic channel allocation scheme, frequency
bands are not permanently assigned to the users.
Instead channels are allotted to users dynamically as
needed, from a central pool.
• The allocation is done considering a number of
parameters so that transmission interference is
minimized.
• This allocation scheme optimises bandwidth usage and results in faster transmission.
• Dynamic channel allocation is further divided into:
• Centralised Allocation
• Distributed Allocation
Multiple access protocols: ALOHA
• The lower sub-layer is used to handle and reduce the
collision or multiple access on a channel. Hence it is termed
as media access control or the multiple access resolutions.
• When a sender and receiver have a dedicated link to
transmit data packets, the data link control is enough to
handle the channel. Suppose there is no dedicated path to
communicate or transfer the data between two devices.
• In that case, multiple stations access the channel and transmit data over it simultaneously, which may create collisions and crosstalk.
• Hence, a multiple access protocol is required to reduce collisions and avoid crosstalk between the channels.
• ALOHA:
• It was designed for wireless LANs (Local Area Networks) but can also be used on any shared medium to transmit data. Using this method, any station can transmit data across the network whenever a data frame is available for transmission.
• Aloha Rules
• Any station can transmit data to a channel at any time.
• It does not require any carrier sensing.
• Collision and data frames may be lost during the
transmission of data through multiple stations.
• Aloha relies on acknowledgment of the frames; there is no collision detection.
• It requires retransmission of data after some random
amount of time.
• Pure Aloha: Pure Aloha is used when data is available for sending at the stations. In pure Aloha, each station transmits data on the channel without checking whether the channel is idle, so collisions may occur and data frames can be lost.
• The station then expects an acknowledgment from the receiver; if the acknowledgment of the frame is received within the specified time, all is well.
• Otherwise, the station assumes that the frame was destroyed. It waits for a random amount of time and then retransmits the frame, repeating until all the data is successfully delivered to the receiver.
• Slotted Aloha: There is a high possibility of frame collision in pure Aloha, so slotted Aloha was designed to overcome it. Unlike pure Aloha, slotted Aloha does not allow a station to transmit whenever it wants.
• In slotted Aloha, the shared channel is divided into fixed time intervals called slots. If a station wants to send a frame on the shared channel, the frame can only be sent at the beginning of a slot, and only one frame may be sent in each slot. If the station fails to send the data, it has to wait until the next slot.
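The slot rule above is easy to simulate: in each slot, every station independently decides whether to transmit, and the slot succeeds only when exactly one station does. The station count and transmit probability are illustrative parameters.

```python
import random

def slotted_aloha_success_rate(n_stations, p_send, n_slots, seed=1):
    """Fraction of slots carrying exactly one transmission (no collision)."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_slots):
        # count how many stations chose to transmit at this slot boundary
        transmitters = sum(rng.random() < p_send for _ in range(n_stations))
        if transmitters == 1:   # exactly one transmitter: the frame gets through
            successes += 1
    return successes / n_slots

rate = slotted_aloha_success_rate(n_stations=10, p_send=0.1, n_slots=10_000)
print(rate)  # roughly 0.39: most slots are empty or collide
```

Slots with zero transmitters are wasted and slots with two or more are collisions; the simulation shows why only a minority of slots carry useful traffic even at a moderate load.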
Carrier Sense Multiple Access Protocols
• Carrier Sense Multiple Access (CSMA) is a method used in computer
networks to manage how devices share a communication channel
to transfer the data between two devices.
• In this protocol, each device first senses the channel before sending data. If the channel is busy, the device waits until it is free.
• This helps reduce collisions, where two devices send data at the
same time, ensuring smoother communication across the network.
CSMA is commonly used in technologies like Ethernet and Wi-Fi.
• This method was developed to decrease the chances of collisions
when two or more stations start sending their signals over the data
link layer.
• Carrier Sense multiple access requires that each station first check
the state of the medium before sending.
• Types of CSMA Protocol
• There are two main types of Carrier Sense Multiple
Access (CSMA) protocols, each designed to handle how
devices manage potential data collisions on a shared
communication channel. These types differ based on how
they respond to the detection of a busy network:
• CSMA/CD
• CSMA/CA
• Carrier Sense Multiple Access with Collision Detection
(CSMA/CD)
• In this method, a station monitors the medium after it
sends a frame to see if the transmission was successful. If
successful, the transmission is finished, if not, the frame
is sent again.
Process for CSMA/CD
Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA)

• The basic idea behind CSMA/CA is that the station should be able to receive while
transmitting to detect a collision from different stations. In wired networks, if a
collision has occurred then the energy of the received signal almost doubles, and the
station can sense the possibility of collision. In the case of wireless networks, most of
the energy is used for transmission, and the energy of the received signal increases
by only 5-10% if a collision occurs. It can’t be used by the station to sense collision.
Therefore CSMA/CA has been specially designed for wireless networks.
• These are three types of strategies:
• InterFrame Space (IFS): When a station finds the channel busy, it keeps sensing the channel; when it finds the channel idle, it still waits for a period of time called the IFS before transmitting. IFS can also be used to define the priority of a station or a frame: the higher the IFS, the lower the priority.
• Contention Window: It is an amount of time divided into slots. A station that is ready to send frames chooses a random number of slots as its wait time.
• Acknowledgments: The positive acknowledgments and time-out timer can help
guarantee a successful transmission of the frame.
• Characteristics of CSMA/CA
• Carrier Sense: The device listens to the channel before transmitting,
to ensure that it is not currently in use by another device.
• Multiple Access: Multiple devices share the same channel and can
transmit simultaneously.
• Collision Avoidance: If two or more devices attempt to transmit at
the same time, a collision occurs. CSMA/CA uses random backoff
time intervals to avoid collisions.
• Acknowledgment (ACK): After successful transmission, the
receiving device sends an ACK to confirm receipt.
• Fairness: The protocol ensures that all devices have equal access to
the channel and no single device monopolizes it.
• Binary Exponential Backoff: If a collision occurs, the device waits
for a random period of time before attempting to retransmit. The
backoff time increases exponentially with each retransmission
attempt.
• Interframe Spacing: The protocol requires a minimum amount of
time between transmissions to allow the channel to be clear and
reduce the likelihood of collisions.
• RTS/CTS Handshake: In some implementations, a Request-To-
Send (RTS) and Clear-To-Send (CTS) handshake is used to reserve
the channel before transmission. This reduces the chance of
collisions and increases efficiency.
• Wireless Network Quality: The performance of CSMA/CA is
greatly influenced by the quality of the wireless network, such as
the strength of the signal, interference, and network congestion.
• Adaptive Behavior: CSMA/CA can dynamically adjust its behavior
in response to changes in network conditions, ensuring the
efficient use of the channel and avoiding congestion.
CSMA/CA balances the need for efficient use of the shared
channel with the need to avoid collisions, leading to reliable and
fair communication in a wireless network.
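The binary exponential backoff described above can be sketched directly: after the k-th consecutive collision, the station draws a random slot count from a contention window that doubles each time. The cap of 10 doublings and the slot duration are assumptions for this sketch, modelled on Ethernet-style backoff rather than taken from the text.

```python
import random

SLOT_TIME = 51.2e-6   # illustrative slot duration, seconds (an assumption)

def backoff_slots(collisions, rng=random):
    """Pick a random wait after the given number of consecutive collisions."""
    k = min(collisions, 10)        # contention window grows to at most 2**10 slots
    return rng.randrange(2 ** k)   # uniform in 0 .. 2**k - 1

random.seed(0)
for c in (1, 2, 3, 4):
    slots = backoff_slots(c)
    print(c, slots, round(slots * SLOT_TIME * 1e6, 1), "microseconds")
```

Doubling the window keeps collisions rare as load grows: the more often a station collides, the thinner it spreads its retransmission attempts in time.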
PROCESS FOR CSMA/CA
• Advantages of CSMA
• Increased Efficiency: By listening before transmitting, CSMA usually
lets only one device communicate on the network at a time,
reducing collisions and improving network efficiency.
• Simplicity: CSMA is a simple protocol that is easy to
implement and does not require complex hardware or
software.
• Flexibility: CSMA is a flexible protocol that can be used
in a wide range of network environments, including
wired and wireless networks.
• Low cost: CSMA does not require expensive hardware
or software, making it a cost-effective solution for
network communication.
• Disadvantages of CSMA
• Limited Scalability: CSMA is not a scalable protocol and
can become inefficient as the number of devices on the
network increases.
• Delay: In busy networks, the requirement to sense the
medium and wait for an available channel can result in
delays and increased latency.
• Limited Reliability: CSMA can be affected by
interference, noise, and other factors, resulting in
unreliable communication.
• Vulnerability to Attacks: CSMA can be vulnerable to
certain types of attacks, such as jamming and denial-of-
service attacks, which can disrupt network
communication.
COLLISION-FREE PROTOCOLS
• Almost all collisions can be avoided in CSMA/CD, but they can still
occur during the contention period. Collisions during the
contention period adversely affect system performance; this
happens when the cable is long and the packets are short.
The problem became serious as fiber-optic networks came into
use. Here we shall discuss some protocols that resolve
collisions during the contention period.
• Bit-map Protocol
• Binary Countdown
• Limited Contention Protocols
• The Adaptive Tree Walk Protocol
• Pure and Slotted ALOHA (for comparison)
Bit-map Protocol:
• The bit-map protocol is a collision-free protocol.
• In the bit-map protocol, each contention period consists of
exactly N slots.
• If a station has a frame to send, it transmits a 1 bit in its
corresponding slot.
• For example, if station 2 has a frame to send, it transmits a 1 bit
in the 2nd slot.
• In general, station j announces that it has a frame to send by
inserting a 1 bit into slot j. In this way, each station
has complete knowledge of which stations wish to transmit.
• There will never be any collisions because everyone agrees on
who goes next.
• Protocols like this, in which the desire to transmit is broadcast
before the actual transmission, are called Reservation Protocols.
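As a rough illustration, one reservation round of the bit-map protocol can be modelled in Python. The helper name `bitmap_round` is hypothetical; the sketch simply shows how the N reservation bits fix a collision-free transmission order that every station can compute for itself.

```python
def bitmap_round(wants_to_send):
    """One contention period of the bit-map protocol for N stations.

    `wants_to_send` is a list of booleans; station i sets a 1 bit in
    reservation slot i if it has a frame queued.  Every station sees
    the same bit map, so frames are then sent in station-number order
    with no collisions."""
    reservation_slots = [1 if w else 0 for w in wants_to_send]
    order = [i for i, bit in enumerate(reservation_slots) if bit == 1]
    return reservation_slots, order

# Stations 1, 3 and 4 (of 5) have frames queued:
slots, order = bitmap_round([False, True, False, True, True])
# slots == [0, 1, 0, 1, 1] and order == [1, 3, 4]
```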
Binary Countdown:
The binary countdown protocol is used to overcome the overhead of 1 bit
per station in the bit-map protocol.
• In binary countdown, binary station addresses are used.
• A station wanting to use the channel broadcasts its address as a binary bit
string, starting with the high-order bit.
• All addresses are assumed to be of the same length. Here, we will see an example
to illustrate the working of binary countdown.
• In this method, the address bits broadcast by different stations are ORed
together, and the result decides the priority of transmission. Suppose stations
0001, 1001, 1100, 1011 are all trying
to seize the channel for transmission. All stations first broadcast their
most significant address bit, that is 0, 1, 1, 1 respectively.
• The most significant bits are ORed together. Station 0001 sees a 1 in
another station's address and knows that a higher-numbered station is
competing for the channel, so it gives up for the current round.
• The other three stations 1001, 1100, 1011 continue. The only station whose
next bit is 1 is station 1100, so stations 1011 and 1001 give up because
their 2nd bit is 0. Station 1100 then transmits its frame, after which
another bidding cycle starts.
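The bidding cycle above can be sketched in Python. The wired-OR of the simultaneously broadcast bits is modelled with the bitwise OR operator; `binary_countdown` is an illustrative name, not a standard API.

```python
def binary_countdown(addresses, width=4):
    """Arbitrate the channel among contending stations.

    Each station broadcasts its address bit by bit, MSB first; the bits
    are ORed on the channel.  A station that sent a 0 but sees a 1 on
    the channel drops out.  The highest-numbered station always wins."""
    contenders = list(addresses)
    for bit in range(width - 1, -1, -1):          # MSB first
        channel = 0
        for a in contenders:
            channel |= (a >> bit) & 1             # wired-OR of the bits
        if channel == 1:
            # stations whose current bit is 0 give up this round
            contenders = [a for a in contenders if (a >> bit) & 1]
    return contenders[0]

winner = binary_countdown([0b0001, 0b1001, 0b1100, 0b1011])
# winner == 0b1100, matching the worked example above.
```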
• Limited Contention Protocols:
• Collision based protocols (pure and slotted
ALOHA, CSMA/CD) are good when the
network load is low.
• Collision free protocols (bitmap, binary
Countdown) are good when load is high.
• How about combining their advantages:
• Behave like the ALOHA scheme under light load.
• Behave like the bit-map scheme under heavy load.
Adaptive Tree Walk Protocol:
• Partition the group of stations and limit the contention for
each slot.
• Under light load, every station can try for each slot, as in ALOHA.
• Under heavy load, only one group can try for each slot.
• How do we do it:
• Treat every station as a leaf of a binary tree.
• In the first slot (after a successful transmission), all stations
can try to get the slot (under the root node).
• If there is no conflict, fine. Otherwise, in case of conflict, only the
nodes under one subtree get to try for the next slot
(depth-first search).
Slot-0 : C*, E*, F*, H* (all ready nodes under node 0 can try), conflict
Slot-1 : C* (all nodes under node 1 can try), C sends
Slot-2 : E*, F*, H* (all nodes under node 2 can try), conflict
Slot-3 : E*, F* (all nodes under node 5 can try), conflict
Slot-4 : E* (all nodes under E can try), E sends
Slot-5 : F* (all nodes under F can try), F sends
Slot-6 : H* (all nodes under node 6 can try), H sends
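The slot sequence above can be reproduced with a small depth-first walk over the station tree. This Python sketch uses hypothetical names and models stations C, E, F, H as leaf indices 2, 4, 5, 7 of an 8-leaf tree; it probes one node per slot and splits the node in two on conflict.

```python
def tree_walk(node, ready, slots):
    """Probe one tree node per slot.  `node` is (lo, hi), a range of
    station indices; `ready` is the set of stations with a frame queued.
    One ready station in the range means a successful send, more than
    one means a conflict, so the node is split in two (depth first)."""
    lo, hi = node
    members = sorted(s for s in ready if lo <= s < hi)
    slots.append(members)                  # record what this slot saw
    if len(members) <= 1:
        return                             # idle slot or a clean send
    mid = (lo + hi) // 2
    tree_walk((lo, mid), ready, slots)     # left subtree first
    tree_walk((mid, hi), ready, slots)     # then the right subtree

# Stations C, E, F, H of an 8-station tree (leaf indices 2, 4, 5, 7):
slots = []
tree_walk((0, 8), {2, 4, 5, 7}, slots)
# slots now holds 7 entries, matching Slot-0 .. Slot-6 above:
# conflict, C sends, conflict, conflict, E sends, F sends, H sends.
```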
DATA LINK LAYER SWITCHING

• In computer networking, switching is the process of
transferring data packets from one device to another in a
network, or from one network to another, using specific
devices called switches.
• A computer user experiences switching all the time. For example,
when accessing the Internet from your computer,
whenever a user requests a webpage, the request
is carried through the network by switching of data packets.
• Switching takes place at the Data Link layer of the OSI
Model.
• This means that once the bits of a frame arrive over the
Physical Layer, switching is the immediate next process in
data communication.
SWITCHING:

• A switch decides the port through which a data packet shall pass with the
help of its destination MAC (Media Access Control) Address.
• A switch does this effectively by maintaining a switching table, (also
known as forwarding table).
• A network switch is more efficient than a network Hub or repeater
because it maintains a switching table, which simplifies its task and
reduces congestion on a network, which effectively improves the
performance of the network.
• A switch is a dedicated piece of computer hardware that facilitates the
process of switching, i.e., accepting incoming data packets and transferring
them to their destination.
• A switch works at the Data Link layer of the OSI Model. A switch
primarily handles the incoming data packets from a source computer or
network and decides the appropriate port through which the data
packets will reach their target computer or network.
Process of Switching:
The switching process involves the following steps:

• Frame Reception: The switch receives a data frame or packet from a computer
connected to its ports.
• MAC Address Extraction: The switch reads the header of the data frame and
collects the destination MAC Address from it.
• MAC Address Table Lookup: Once the switch has retrieved the MAC Address, it
performs a lookup in its Switching table to find a port that leads to the MAC
Address of the data frame.
• Forwarding Decision and Switching Table Update: If the switch matches the
destination MAC Address of the frame to a MAC address in its switching table,
it forwards the data frame to the respective port. However, if the destination
MAC Address does not exist in its forwarding table, it follows the flooding
process, in which it sends the data frame to all its ports except the one it
came from. When the destination replies, the switch learns which port leads
to that MAC Address and updates its forwarding table.
• Frame Transition: Once the destination port is found, the switch sends the data
frame to that port and forwards it to its target computer/network.
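The lookup-or-flood decision, together with source-address learning, can be sketched as follows. This is a simplified Python model (the function name and port numbering are illustrative), not a real switch implementation.

```python
def switch_frame(table, in_port, src_mac, dst_mac, all_ports):
    """Forward one frame through a learning switch.

    The switch first learns: the source MAC must be reachable via the
    port the frame arrived on.  It then looks up the destination MAC in
    its forwarding table; on a miss it floods the frame out of every
    port except the incoming one."""
    table[src_mac] = in_port                      # learn / refresh entry
    if dst_mac in table:
        return [table[dst_mac]]                   # known: one output port
    return [p for p in all_ports if p != in_port] # unknown: flood

ports = [1, 2, 3, 4]
table = {}
# A (port 1) sends to B: B is unknown, so the frame is flooded.
assert switch_frame(table, 1, "AA", "BB", ports) == [2, 3, 4]
# B (port 3) replies to A: A was already learned, so only port 1 is used.
assert switch_frame(table, 3, "BB", "AA", ports) == [1]
```

Note how the second frame is not flooded: the reply itself taught the switch where "BB" lives, so traffic converges to single-port forwarding after one round trip.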
Types of Switching
• Message Switching: This is an older switching
technique that has become obsolete. In message
switching, the entire data block/message is stored and
forwarded as one unit at each node across the network,
making it highly inefficient.
• Packet Switching: This technique requires the data to be broken down
into smaller components, data frames, or packets.
• These data frames are then transferred to their destinations according to
the available resources in the network at a particular time. This switching
type is used in modern computers and even the Internet. Here, each data
frame contains additional information about the destination and other
information required for proper transfer through network components.
• Datagram Packet Switching: In Datagram Packet
switching, each data frame is taken as an individual entity
and thus, they are processed separately. Here, no
connection is established before data transmission
occurs. Although this approach provides flexibility in data
transfer, it may cause a loss of data frames or late
delivery of the data frames.
• Virtual-Circuit Packet Switching: In Virtual-Circuit Packet
switching, a logical connection between the source and
destination is made before transmitting any data. These
logical connections are called virtual circuits. Each data
frame follows these logical paths and provides a reliable
way of transmitting data with less chance of data loss.
• Circuit Switching: In this type of switching, a connection is
established between the source and destination beforehand.
This connection receives the complete bandwidth of the
network until the data is transferred completely.
This approach is better than message switching as it does not
involve sending data to the entire network, but only to its
destination.
• In circuit switching, network resources (bandwidth) are divided
into pieces, and the bit delay is constant during a connection.
• The dedicated path/circuit established between the sender and
receiver provides a guaranteed data rate. Data can be
transmitted without any delays once the circuit is
established.
• Phases of Circuit Switching
• Circuit Establishment: A dedicated circuit between the
source and destination is constructed via a number of
intermediary switching centers. The sender and receiver
can then exchange signals over the established circuit.
• Data Transfer: Data can be transferred between the source
and destination once the circuit has been established. The
link between the two parties remains as long as they
communicate.
• Circuit Disconnection: Disconnection in the circuit occurs
when one of the users initiates the disconnect. When the
disconnection occurs, all intermediary linkages between
the sender and receiver are terminated.
• What is Circuit Switching Used for?
• Continuous connections: Circuit switching is used for
connections that must be maintained for long periods,
such as long-distance communication. Circuit switching
technology is used in traditional telephone systems.
• Dial-up network connections: When a computer connects
to the internet through a dial-up service, it uses the public
switched network.
• Dial-up transmits Internet Protocol (IP) data packets via a
circuit-switched telephone network.
• Optical circuit switching: Data centre networks also make
use of circuit switching. Optical circuit switching is used to
expand traditional data centres and fulfil increasing
bandwidth demands.
• Advantages of Circuit Switching
• The main advantage of circuit switching is that a dedicated transmission channel is
established between the computers, which gives a guaranteed data rate.
• In circuit switching, there is no delay in data flow because of the dedicated
transmission path.
• Reliability: Circuit switching provides a high level of reliability since the dedicated
communication path is reserved for the entire duration of the communication. This
ensures that the data will be transmitted without any loss or corruption.
• Quality of service: Circuit switching provides a guaranteed quality of service, which
means that the network can prioritize certain types of traffic, such as voice and
video, over other types of traffic, such as email and web browsing.
• Security: Circuit switching provides a higher level of security compared to packet
switching since the dedicated communication path is only accessible to the two
communicating parties. This can help prevent unauthorized access and data
breaches.
• Ease of management: Circuit switching is relatively easy to manage since the
communication path is pre-established and dedicated to a specific communication.
This can help simplify network management and reduce the risk of errors.
• Compatibility: Circuit switching is compatible with a wide range of devices and
protocols, which means that it can be used with different types of networks and
applications.
• Disadvantages of Circuit Switching
• Limited scalability: Circuit switching is not well-suited for large-scale
networks with many nodes, as it requires a dedicated communication path
between each pair of nodes. This can result in a high degree of complexity
and difficulty in managing the network.
• Vulnerability to failures: Circuit switching relies on a dedicated
communication path, which can make the network vulnerable to failures,
such as cable cuts or switch failures. In the event of a failure, the
communication path must be re-established, which can result in delays or
loss of data.
• Limited Flexibility: Circuit switching is not flexible, as it requires a
dedicated circuit between the communicating devices. The circuit cannot
be used for any other purpose until the
communication is complete, which limits the flexibility of the network.
• Waste of Resources: Circuit switching reserves the bandwidth and
network resources for the duration of the communication, even if there is
no data being transmitted. This results in the wastage of resources and
inefficient use of the network.
• Expensive: Circuit switching is an expensive technology as it requires dedicated
communication paths, which can be costly to set up and maintain. This makes it
less feasible for small-scale networks and applications.
• Susceptible to Failure: Circuit switching is susceptible to failure as it relies on a
dedicated communication path. If the path fails, the entire communication is
disrupted. This makes it less reliable than other networking technologies, such
as packet switching.
• Not suitable for bursty traffic: Circuit switching is not suitable for traffic in
which data is transmitted intermittently at irregular intervals. This is because a
dedicated circuit needs to be established for each communication, which can
result in delays and inefficient use of resources.
• Delay and latency: Circuit switching requires the establishment of a dedicated
communication path, which can result in delay and latency in establishing the
path and transmitting data. This can impact the real-time performance of
applications, such as voice and video.
• High cost: Circuit switching requires the reservation of resources, which can
result in a high cost, particularly in large-scale networks. This can make circuit
switching less practical for some applications.
• No prioritization: Circuit switching does not provide any mechanism for
prioritizing certain types of traffic over others.
Circuit Switching vs Packet Switching:
• In circuit switching, each data unit knows the entire path address, which is
provided by the source. In packet switching, each data unit knows only the
final destination address; the intermediate path is decided by the routers.
• In circuit switching, data is processed at the source system only. In packet
switching, data is processed at all intermediate nodes, including the source system.
• The delay between data units is uniform in circuit switching but not uniform
in packet switching.
• Circuit switching is more reliable; packet switching is less reliable.
• Wastage of resources is more in circuit switching; packet switching wastes
fewer resources.
• Circuit switching is not convenient for handling bilateral traffic; packet
switching is suitable for handling bilateral traffic.
• In circuit switching there is a physical path between the source and the
destination; in packet switching there is no dedicated physical path.
Pure ALOHA vs Slotted ALOHA:
• In pure ALOHA, data can be transmitted at any time by any station. In slotted
ALOHA, data can be transmitted only at the beginning of a time slot.
• Pure ALOHA was introduced under the leadership of Norman Abramson in 1970 at
the University of Hawaii. Slotted ALOHA was introduced by Roberts in 1972 to
improve pure ALOHA's capacity.
• Time is not synchronized in pure ALOHA; time is globally synchronized in
slotted ALOHA.
• Time is continuous in pure ALOHA; time is discrete in slotted ALOHA.
• Pure ALOHA does not decrease the number of collisions to half; slotted ALOHA
decreases the number of collisions to half.
• In pure ALOHA, the vulnerable time is 2 × Tt; in slotted ALOHA, the
vulnerable time is Tt.
• Probability of successful transmission of a frame: pure ALOHA S = G · e^(−2G);
slotted ALOHA S = G · e^(−G).
• The maximum throughput in pure ALOHA is about 18%; in slotted ALOHA it is
about 37%.
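The two throughput formulas can be checked numerically. A short Python sketch evaluating S = G·e^(−2G) and S = G·e^(−G) at their respective maxima (G = 0.5 and G = 1) reproduces the 18% and 37% figures:

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G); maximised at G = 0.5."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """S = G * e^(-G); maximised at G = 1."""
    return G * math.exp(-G)

# Maxima match the figures in the comparison above:
assert round(pure_aloha_throughput(0.5), 3) == 0.184    # about 18%
assert round(slotted_aloha_throughput(1.0), 3) == 0.368 # about 37%
```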
• 1. Explain the OSI model with a neat diagram.
• 2. Describe the characteristics and features of a computer network.
• 3. Describe different types of computer networks.
• 4. Describe different topologies in computer networks.
• 5. Describe different guided transmission media, with examples.
