CN Notes - Unit2

The Data Link Layer, the second layer in the OSI model, is responsible for transferring datagrams across individual links and provides services such as framing, error detection, flow control, and reliable delivery. It uses various protocols like Ethernet and PPP, and handles issues like error correction and coordination between devices to prevent data collisions. Key functions include providing services to the network layer, managing flow control, and ensuring error-free transmission through mechanisms like acknowledgment and retransmission.


Data Link Layer

o In the OSI model, the data link layer is the 6th layer from the top and
the 2nd layer from the bottom.
o The communication channel that connects two adjacent nodes is known
as a link, and in order to move a datagram from the source to the
destination, the datagram must be moved across each individual link
along the path.
o The main responsibility of the Data Link Layer is to transfer the
datagram across an individual link.
o A data link layer protocol defines the format of the packets
exchanged between the nodes as well as actions such as error
detection, retransmission, flow control, and random access.
o Common Data Link Layer protocols are Ethernet, Token Ring, FDDI, and PPP.
o An important characteristic of the Data Link Layer is that a datagram can
be handled by different link layer protocols on different links in a path.
For example, a datagram may be handled by Ethernet on the first link and by
PPP on the second link.

The following services are provided by the Data Link Layer:


o Framing & Link access: Data Link Layer protocols encapsulate each
network layer datagram within a link layer frame before transmission across
the link. A frame consists of a data field, in which the network layer
datagram is inserted, and a number of header fields. The protocol specifies
the structure of the frame as well as a channel access protocol by which the
frame is to be transmitted over the link.
o Reliable delivery: The Data Link Layer can provide a reliable delivery
service, i.e., transmit the network layer datagram without error. A
reliable delivery service is accomplished with retransmissions and
acknowledgements. A data link layer mainly provides reliable delivery
over links with high error rates, so that errors can be corrected
locally, on the link at which they occur, rather than forcing an
end-to-end retransmission of the data.
o Flow control: A receiving node can receive the frames at a faster rate
than it can process the frame. Without flow control, the receiver's
buffer can overflow, and frames can get lost. To overcome this
problem, the data link layer uses the flow control to prevent the
sending node on one side of the link from overwhelming the receiving
node on another side of the link.
o Error detection: Errors can be introduced by signal attenuation and
noise. Data Link Layer protocol provides a mechanism to detect one or
more errors. This is achieved by adding error detection bits in the
frame and then receiving node can perform an error check.
o Error correction: Error correction is similar to error detection,
except that the receiving node not only detects the errors but also
determines where in the frame the errors have occurred.
o Half-Duplex & Full-Duplex: In a Full-Duplex mode, both the nodes
can transmit the data at the same time. In a Half-Duplex mode, only
one node can transmit the data at the same time.

Data Link Layer Design Issues


The data link layer in the OSI (Open System Interconnections) Model, is in between the
physical layer and the network layer. This layer converts the raw transmission facility
provided by the physical layer to a reliable and error-free link.
The main functions and the design issues of this layer are

 Providing services to the network layer


 Framing
 Error Control
 Flow Control

Services to the Network Layer


In the OSI Model, each layer uses the services of the layer below it and provides
services to the layer above it. The data link layer uses the services offered by the
physical layer. The primary function of this layer is to provide a well-defined service
interface to the network layer above it.

The services provided can be of three types −

 Unacknowledged connectionless service


 Acknowledged connectionless service
 Acknowledged connection-oriented service

Framing
The data link layer encapsulates each data packet from the network layer into frames
that are then transmitted.
A frame has three parts, namely −

 Frame Header
 Payload field that contains the data packet from network layer
 Trailer

Data Link Controls


Data Link Control is the service provided by the Data Link Layer to provide
reliable data transfer over the physical medium. For example, in the half-
duplex transmission mode, only one device can transmit data at a time.
If both devices at the ends of the link transmit simultaneously, the frames
will collide, leading to loss of information. The Data Link Layer provides
coordination among the devices so that no collision occurs.

The Data Link Control provides three functions:-

o Line discipline
o Flow Control
o Error Control
Line Discipline
o Line Discipline is a functionality of the Data link layer that provides the
coordination among the link systems. It determines which device can
send, and when it can send the data.

Line Discipline can be achieved in two ways:

o ENQ/ACK
o Poll/select

ENQ/ACK

ENQ/ACK, which stands for Enquiry/Acknowledgement, is used when there is no
ambiguity about the intended receiver: the link is a dedicated path between
the two devices, so the device capable of receiving the transmission is
the intended one.

ENQ/ACK coordinates which device will start the transmission and whether
the recipient is ready or not.

Working of ENQ/ACK

The transmitter transmits the frame called an Enquiry (ENQ) asking whether
the receiver is available to receive the data or not.

The receiver responds either with a positive acknowledgement (ACK) or
with a negative acknowledgement (NACK), where a positive
acknowledgement means that the receiver is ready to receive the
transmission and a negative acknowledgement means that the receiver is
unable to accept the transmission.
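The exchange can be sketched as a tiny Python simulation (the function name, the log format, and the EOT end-of-transmission marker are illustrative, not part of any standard):

```python
def enq_ack_session(receiver_ready, frames):
    """One ENQ/ACK session: enquire first, transmit only after an ACK."""
    log = ["ENQ"]                      # sender asks: ready to receive?
    if not receiver_ready:
        log.append("NACK")             # receiver cannot accept; stop here
        return log
    log.append("ACK")                  # receiver is ready
    log.extend("DATA:" + f for f in frames)
    log.append("EOT")                  # sender signals end of transmission
    return log
```

If the receiver answers NACK, no data frames are put on the line at all; the sender may retry the enquiry later.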

Error Control
The data link layer ensures error free link for data transmission. The issues it caters to
with respect to error control are −

 Dealing with transmission errors


 Sending acknowledgement frames in reliable connections
 Retransmitting lost frames
 Identifying duplicate frames and deleting them
 Controlling access to shared channels in case of broadcasting

Flow Control
The data link layer regulates flow control so that a fast sender does not drown a slow
receiver. When the sender sends frames at very high speeds, a slow receiver may not
be able to handle it. There will be frame losses even if the transmission is error-free.
The two common approaches for flow control are −

 Feedback based flow control


 Rate based flow control

Error Detection & Correction


There are many causes, such as noise and cross-talk, that may corrupt data
during transmission. The upper layers work on a generalized view of the
network architecture and are not aware of actual hardware data processing. Hence, the
upper layers expect error-free transmission between the systems. Most
applications would not function as expected if they received erroneous data. Applications
such as voice and video may not be as badly affected and may still function
well with some errors.
The data-link layer uses error control mechanisms to ensure that frames (data bit
streams) are transmitted with a certain level of accuracy. But to understand how errors are
controlled, it is essential to know what types of errors may occur.

Types of Errors
There may be three types of errors:
Single bit error

Only one bit of the frame, at any position, is corrupted.


Multiple bits error

The frame is received with more than one bit in a corrupted state.


Burst error

The frame contains more than 1 consecutive corrupted bits.


Error control mechanism may involve two possible ways:
 Error detection
 Error correction

Error Detection
Errors in the received frames are detected by means of Parity Check and Cyclic
Redundancy Check (CRC). In both cases, a few extra bits are sent along with the actual
data to confirm that the bits received at the other end are the same as those sent. If the
counter-check at the receiver's end fails, the bits are considered corrupted.
Parity Check
One extra bit is sent along with the original bits to make number of 1s either even in
case of even parity, or odd in case of odd parity.
While creating a frame, the sender counts the number of 1s in it. For example, if even
parity is used and the number of 1s is even, then one bit with value 0 is added. This way
the number of 1s remains even. If the number of 1s is odd, a bit with value 1 is added
to make it even.
The receiver simply counts the number of 1s in a frame. If the count of 1s is even and
even parity is used, the frame is considered not corrupted and is accepted. If the
count of 1s is odd and odd parity is used, the frame is likewise considered not corrupted.
If a single bit flips in transit, the receiver can detect it by counting the number of 1s. But
when more than one bit is erroneous, it is very hard for the receiver to detect
the error.
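The sender and receiver behaviour described above can be sketched in a few lines of Python (the function names are illustrative):

```python
def add_parity_bit(bits, even=True):
    """Sender side: append one bit so the total number of 1s is even (or odd)."""
    ones = bits.count("1")
    needs_one = (ones % 2 == 1) if even else (ones % 2 == 0)
    return bits + ("1" if needs_one else "0")

def parity_ok(frame, even=True):
    """Receiver side: count the 1s and check the agreed parity."""
    ones = frame.count("1")
    return ones % 2 == (0 if even else 1)
```

A single flipped bit makes `parity_ok` fail, but two flipped bits restore the parity and go undetected, which is exactly the limitation noted above.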
Cyclic Redundancy Check (CRC)
CRC is a different approach to detect if the received frame contains valid data. This
technique involves binary division of the data bits being sent. The divisor is generated
using polynomials. The sender performs a division operation on the bits being sent and
calculates the remainder. Before sending the actual bits, the sender adds the remainder
at the end of the actual bits. Actual data bits plus the remainder is called a codeword.
The sender transmits data bits as codewords.
At the other end, the receiver performs the division operation on the codeword using the
same CRC divisor. If the remainder is all zeros, the data bits are accepted; otherwise, it
is considered that some data corruption occurred in transit.
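The modulo-2 (XOR) division that both sides perform can be sketched as follows (a Python illustration; the function names are invented):

```python
def mod2_div(dividend, divisor):
    """Binary (XOR) long division; returns the remainder as a bit string."""
    rem = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if rem[i] == "1":                      # only divide where a 1 leads
            for j, d in enumerate(divisor):
                rem[i + j] = "0" if rem[i + j] == d else "1"   # XOR in place
    return "".join(rem[-(len(divisor) - 1):])

def make_codeword(data, divisor):
    """Sender: append zeros, divide, and attach the remainder to the data."""
    rem = mod2_div(data + "0" * (len(divisor) - 1), divisor)
    return data + rem

def codeword_ok(codeword, divisor):
    """Receiver: divide the whole codeword; an all-zero remainder = accept."""
    return set(mod2_div(codeword, divisor)) == {"0"}
```

For example, with data bits 100100 and divisor 1101 the remainder is 001, so the transmitted codeword is 100100001; any single-bit corruption of that codeword yields a non-zero remainder at the receiver.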

Error Correction
In the digital world, error correction can be done in two ways:
 Backward Error Correction When the receiver detects an error in the data
received, it requests back the sender to retransmit the data unit.
 Forward Error Correction When the receiver detects some error in the data
received, it executes error-correcting code, which helps it to auto-recover and to
correct some kinds of errors.
The first one, Backward Error Correction, is simple and can only be efficiently used
where retransmitting is not expensive. For example, fiber optics. But in case of wireless
transmission retransmitting may cost too much. In the latter case, Forward Error
Correction is used.
To correct an error in a data frame, the receiver must know exactly which bit in the frame
is corrupted. To locate the erroneous bit, redundant bits are used as parity bits for error
detection. For example, if we take 7-bit ASCII data, there are 8 kinds of information we
need to convey: seven to tell us which data bit is in error and one more to tell
that there is no error.
For m data bits, r redundant bits are used. r bits can provide 2^r combinations of
information. In an (m+r)-bit codeword, there is the possibility that the r bits themselves
may get corrupted. So the r bits used must be able to indicate all m+r bit locations plus
the no-error case, i.e. m+r+1 states in total:

2^r >= m + r + 1
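The smallest r satisfying this inequality can be found by direct search (a small sketch):

```python
def redundant_bits(m):
    """Smallest r such that 2**r >= m + r + 1 (the bound above)."""
    r = 0
    while 2 ** r < m + r + 1:   # too few states to name m+r positions + "no error"
        r += 1
    return r
```

For 7-bit ASCII data this gives r = 4, i.e., an 11-bit codeword.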
Data-link Control & Protocols
Data-link layer is responsible for implementation of point-to-point flow and error control
mechanism.

Flow Control
When a data frame (Layer-2 data) is sent from one host to another over a single
medium, it is required that the sender and receiver should work at the same speed. That
is, sender sends at a speed on which the receiver can process and accept the data.
What if the speeds (hardware/software) of the sender and receiver differ? If the sender
sends too fast, the receiver may be overloaded (swamped) and data may be lost.
Two types of mechanisms can be deployed to control the flow:
 Stop and Wait
This flow control mechanism forces the sender after transmitting a data frame to
stop and wait until the acknowledgement of the data-frame sent is received.

 Sliding Window
In this flow control mechanism, both sender and receiver agree on the number of
data-frames after which the acknowledgement should be sent. As we learnt, stop
and wait flow control mechanism wastes resources, this protocol tries to make
use of underlying resources as much as possible.

Error Control
When a data-frame is transmitted, there is a probability that it may be lost in transit
or received corrupted. In both cases, the receiver does not receive the correct
data-frame and the sender does not know anything about the loss. In such cases,
both sender and receiver are equipped with protocols that help them detect transit
errors such as the loss of a data-frame. Then either the sender retransmits the data-
frame or the receiver requests that the previous data-frame be resent.
Requirements for error control mechanism:
 Error detection - The sender and receiver, either both or any, must ascertain
that there is some error in the transit.
 Positive ACK - When the receiver receives a correct frame, it should
acknowledge it.
 Negative ACK - When the receiver receives a damaged frame or a duplicate
frame, it sends a NACK back to the sender and the sender must retransmit the
correct frame.
 Retransmission: The sender maintains a clock and sets a timeout period. If an
acknowledgement of a previously transmitted data-frame does not arrive before
the timeout, the sender retransmits the frame, assuming that the frame or its
acknowledgement was lost in transit.
There are three types of techniques available which Data-link layer may deploy to
control the errors by Automatic Repeat Requests (ARQ):
Stop-and-wait ARQ
 The following transitions may occur in Stop-and-Wait ARQ:
o The sender maintains a timeout counter.
o When a frame is sent, the sender starts the timeout counter.
o If acknowledgement of frame comes in time, the sender transmits the next
frame in queue.
o If acknowledgement does not come in time, the sender assumes that either
the frame or its acknowledgement is lost in transit. Sender retransmits the
frame and starts the timeout counter.
o If a negative acknowledgement is received, the sender retransmits the
frame.
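The timeout-and-retransmit loop above can be simulated against a lossy channel (a sketch; the loss model, the seed, and the function name are invented for illustration):

```python
import random

def stop_and_wait(frames, loss_prob=0.3, seed=1):
    """Simulate Stop-and-Wait ARQ over a channel that drops a frame (or its
    ACK) with probability `loss_prob`. Returns (delivered, transmissions)."""
    rng = random.Random(seed)
    delivered, transmissions = [], 0
    for frame in frames:
        while True:                          # keep the frame until it is ACKed
            transmissions += 1
            if rng.random() >= loss_prob:    # frame and its ACK got through
                delivered.append(frame)
                break
            # otherwise the timeout expires and the sender retransmits
    return delivered, transmissions
```

Every frame is eventually delivered in order, but each loss costs one extra transmission plus one idle timeout period, which is why the next two schemes try to keep the channel busier.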
 Go-Back-N ARQ
The Stop-and-Wait ARQ mechanism does not utilize the resources at their best: while
waiting for the acknowledgement, the sender sits idle and does nothing. In the Go-
Back-N ARQ method, both sender and receiver maintain a window.
 The sending-window size enables the sender to send multiple frames without
receiving the acknowledgement of the previous ones. The receiving-window
enables the receiver to receive multiple frames and acknowledge them. The
receiver keeps track of incoming frame’s sequence number.
When the sender has sent all the frames in the window, it checks up to what sequence
number it has received positive acknowledgements. If all frames are positively
acknowledged, the sender sends the next set of frames. If the sender finds that it has
received a NACK, or has not received any ACK for a particular frame, it retransmits
all the frames from the first unacknowledged one onwards.
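This go-back behaviour can be traced with a minimal sketch (illustrative only; it assumes a frame listed in `lost` is dropped on its first transmission and every retransmission succeeds):

```python
def go_back_n(num_frames, window=4, lost=()):
    """Trace Go-Back-N ARQ. Returns every sequence number put on the wire,
    in order."""
    pending = set(lost)            # frames still fated to be lost once
    base, wire = 0, []
    while base < num_frames:
        end = min(base + window, num_frames)
        wire.extend(range(base, end))          # send the whole window
        acked = base
        while acked < end and acked not in pending:
            acked += 1                         # receiver ACKs in-order frames
        if acked < end:
            pending.discard(acked)             # lost this time; next copy arrives
        base = acked                           # go back to first unACKed frame
    return wire
```

With 6 frames, a window of 3, and frame 2 lost once, the wire carries 0 1 2 2 3 4 5: after the loss the sender goes back to frame 2 and continues from there.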
 Selective Repeat ARQ
In Go-Back-N ARQ, it is assumed that the receiver does not have any buffer
space beyond its window size and has to process each frame as it comes. This
forces the sender to retransmit all the frames which are not acknowledged.
In Selective-Repeat ARQ, the receiver, while keeping track of sequence numbers,
buffers the frames in memory and sends a NACK for only the frame which is missing or
damaged.
In this case, the sender resends only the frame for which a NACK is received.
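The difference from Go-Back-N can be shown with a similar sketch (illustrative; frames in `nacked` are assumed lost on the first pass and successfully retransmitted once):

```python
def selective_repeat(num_frames, nacked=()):
    """Trace Selective-Repeat ARQ: the receiver buffers out-of-order frames
    and NACKs only the missing ones, so the sender resends just those.
    Returns every sequence number put on the wire, in order."""
    wire = list(range(num_frames))      # first pass: every frame once
    wire.extend(sorted(nacked))         # second pass: only the NACKed frames
    return wire
```

With 5 frames and frames 1 and 3 NACKed, only those two extra transmissions occur, whereas Go-Back-N would also resend the correctly received frames that followed each loss.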

High-level Data Link Control (HDLC)


High-level Data Link Control (HDLC) is a group of communication protocols of the data
link layer for transmitting data between network points or nodes. Since it is a data link
protocol, data is organized into frames. A frame is transmitted via the network to the
destination, which verifies its successful arrival. It is a bit-oriented protocol that is
applicable for both point-to-point and multipoint communications.

Transfer Modes
HDLC supports two types of transfer modes, normal response mode and asynchronous
balanced mode.
 Normal Response Mode (NRM) − Here, there are two types of stations: a
primary station that sends commands and secondary stations that respond to
received commands. It is used for both point-to-point and multipoint
communications.

 Asynchronous Balanced Mode (ABM) − Here, the configuration is balanced,
i.e. each station can both send commands and respond to commands. It is used
only for point-to-point communications.
HDLC Frame
HDLC is a bit-oriented protocol where each frame contains up to six fields. The
structure varies according to the type of frame. The fields of an HDLC frame are −
 Flag − It is an 8-bit sequence that marks the beginning and the end of the frame.
The bit pattern of the flag is 01111110.
 Address − It contains the address of the receiver. If the frame is sent by the
primary station, it contains the address(es) of the secondary station(s). If it is sent
by the secondary station, it contains the address of the primary station. The
address field may be from 1 byte to several bytes.
 Control − It is 1 or 2 bytes containing flow and error control information.
 Payload − This carries the data from the network layer. Its length may vary from
one network to another.
 FCS − It is a 2-byte or 4-byte frame check sequence for error detection. The
standard code used is CRC (cyclic redundancy code).

Types of HDLC Frames


There are three types of HDLC frames. The type of frame is determined by the control
field of the frame −
 I-frame − I-frames or Information frames carry user data from the network layer.
They also include flow and error control information that is piggybacked on user
data. The first bit of the control field of an I-frame is 0.
 S-frame − S-frames or Supervisory frames do not contain an information field. They
are used for flow and error control when piggybacking is not required. The first
two bits of the control field of an S-frame are 10.
 U-frame − U-frames or Unnumbered frames are used for myriad miscellaneous
functions, such as link management. A U-frame may contain an information field if
required. The first two bits of the control field of a U-frame are 11.
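These control-field rules translate directly into a small classifier (a sketch; it assumes the "first bit" is the most significant bit of the control byte, as the frames are usually drawn):

```python
def hdlc_frame_type(control):
    """Classify an HDLC frame from its control byte:
    0xxxxxxx = I-frame, 10xxxxxx = S-frame, 11xxxxxx = U-frame."""
    if control & 0x80 == 0:      # first (most significant) bit is 0
        return "I-frame"
    return "S-frame" if control & 0x40 == 0 else "U-frame"
```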

PPP Protocol
PPP stands for Point-to-Point Protocol. It is the most commonly used
protocol for point-to-point access. For example, if a user wants to access
the internet from home, the PPP protocol will be used.
It is a data link layer protocol that resides in layer 2 of the OSI model. It
is used to encapsulate layer 3 protocols and all the information available
in the payload so that they can be transmitted across serial links. The PPP
protocol can be used on synchronous links like ISDN as well as asynchronous
links like dial-up. It is mainly used for communication between two
devices.

It can be used over many types of physical networks such as serial cable,
phone line, trunk line, cellular telephone, and fiber optic links such as SONET. As
a data link layer protocol is used to identify where the transmission
starts and ends, ISPs (Internet Service Providers) use the PPP protocol to
provide dial-up access to the internet.

Services provided by PPP

o It defines the format of frames through which the transmission occurs.


o It defines the link establishment process. If a user establishes a link with
a server, then how this link is established is handled by the PPP protocol.
o It defines the data exchange process, i.e., how data will be exchanged and
the rate of the exchange.
o The main feature of the PPP protocol is the encapsulation. It defines
how network layer data and information in the payload are
encapsulated in the data link frame.
o It defines the authentication process between the two devices. The
authentication between the two devices, handshaking and how the
password will be exchanged between two devices are decided by the
PPP protocol.

Services Not provided by the PPP protocol

o It does not support a flow control mechanism.


o It has a very simple error control mechanism.
o As PPP provides point-to-point communication, it lacks an addressing
mechanism to handle frames in multipoint configurations.

It is a byte-oriented protocol as it transmits frames as a collection of bytes
or characters. It is a WAN (Wide Area Network) protocol as it runs over
internet links, i.e., between two routers, rather than on a simple LAN
(Ethernet) link.
PPP has two main uses which are given below:

o It is widely used in broadband communications involving heavy loads and
high speed. For example, the internet operates under heavy load and at
high speed.
o It is used to transmit the multiprotocol data between the two
connected (point-to-point) computers. It is mainly used in point-to-
point devices, for example, routers are point-to-point devices where
PPP protocol is widely used as it is a WAN protocol not a simple LAN
ethernet protocol.

Frame format of PPP protocol


The frame format of PPP protocol contains the following fields:

o Flag: The flag field is used to indicate the start and end of the frame.
The flag field is a 1-byte field that appears at the beginning and the
ending of the frame. The pattern of the flag is similar to the bit pattern
in HDLC, i.e., 01111110.
o Address: It is a 1-byte field that contains the constant value which is
11111111. These 8 ones represent a broadcast message.
o Control: It is a 1-byte field which is set to the constant value
11000000. It is not strictly needed, as PPP does not support flow
control and has only a very limited error control mechanism. The control
field is a mandatory field in protocols that support flow and error
control mechanisms.
o Protocol: It is a 1 or 2 bytes field that defines what is to be carried in
the data field. The data can be a user data or other information.
o Payload: The payload field carries either user data or other
information. The maximum length of the payload field is 1500 bytes.
o Checksum: It is a 16-bit field which is generally used for error
detection.
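The layout above can be exercised with a small parser (a sketch following the field order in the text; it assumes a 1-byte protocol field, although the text allows 1 or 2, and a 2-byte checksum):

```python
FLAG, ADDRESS = 0x7E, 0xFF      # flag 01111110 and all-ones address

def parse_ppp_frame(frame):
    """Split a raw PPP frame (bytes) into its fields:
    flag, address, control, protocol, payload, checksum, flag."""
    if frame[0] != FLAG or frame[-1] != FLAG:
        raise ValueError("frame must start and end with the flag byte")
    if frame[1] != ADDRESS:
        raise ValueError("address field must be all ones")
    return {
        "control": frame[2],
        "protocol": frame[3],
        "payload": frame[4:-3],      # everything between protocol and checksum
        "checksum": frame[-3:-1],    # the 2 bytes before the closing flag
    }
```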

Transition phases of PPP protocol


The following are the transition phases of a PPP protocol:

o Dead: Dead is a transition phase which means that the link is not used
or there is no active carrier at the physical layer.
o Establish: If one of the nodes starts working then the phase goes to
the establish phase. In short, we can say that when the node starts
communication or carrier is detected then it moves from the dead to
the establish phase.
o Authenticate: It is an optional phase, which means that the
communication may also move to the authenticate phase. The phase
moves from establish to authenticate only when both the
communicating nodes agree to make the communication
authenticated.
o Network: Once the authentication is successful, the network is
established or phase is network. In this phase, the negotiation of
network layer protocols take place.
o Open: After the network phase is established, the link moves to the
open phase. The open phase means that the exchange of data takes
place. In other words, the link reaches the open phase after the
configuration of the network layer.
o Terminate: When all the work is done then the connection gets
terminated, and it moves to the terminate phase.

On reaching the terminate phase, the link moves to the dead phase which
indicates that the carrier is dropped which was earlier created.

There are two more possibilities that can exist in the transition
phase:

o The link moves from the authenticate to the terminate phase when
authentication fails.
o The link can also move from the establish to the dead state when the
carrier fails.

PPP Stack
In the PPP stack, there are three sets of protocols:

o Link Control Protocol (LCP)

The role of LCP is to establish, maintain, configure, and terminate the links. It
also provides negotiation mechanism.

o Authentication protocols

There are two authentication protocols: PAP (Password
Authentication Protocol) and CHAP (Challenge Handshake Authentication
Protocol).

1. PAP (Password Authentication Protocol)


PAP is less secure as compared to CHAP because, with PAP, the password is
sent in the form of clear text. It is a two-step process. Suppose there are
two routers, router 1 and router 2. In the first step, router 1 wants to be
authenticated, so it sends the username and password.
In the second step, if the username and password match, then router 2
authenticates router 1; otherwise the authentication fails.

2. CHAP (Challenge Handshake Authentication Protocol)

CHAP is a three-step process. Let's understand the three steps of CHAP.


Step 1: Suppose there are two routers, i.e., router 1 and router 2. In this step,
router 1 sends the username but not the password to the router 2.

Step 2: Router 2 maintains a database that contains a list of allowed
hosts with their login credentials. If no entry is found, it means that
router 1 is not a valid host to connect with, and the connection is
terminated. If a match is found, router 2 generates a random key (the
challenge) and sends it to router 1.
Step 3: Router 1 passes the received random key together with its locally
stored password through the MD5 hashing function, which generates a hashed
value from the combination (password + random key), and sends this hashed
value back to router 2. Router 2 computes the same hash from the random key
and the password stored in its database. If the two hashed values do not
match, the connection is terminated; if they match, the connection is
granted. The password itself never travels over the link.
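The hashing and verification steps can be sketched with Python's hashlib (a simplification of the exchange described above; real CHAP, defined in RFC 1994, also mixes a packet identifier into the hash):

```python
import hashlib

def chap_hash(password, random_key):
    """Peer side: MD5 over the password concatenated with the challenge."""
    return hashlib.md5(password + random_key).hexdigest()

def chap_check(stored_password, random_key, received_hash):
    """Verifier side: recompute the hash locally and compare."""
    return chap_hash(stored_password, random_key) == received_hash
```

Because only the hash crosses the link, an eavesdropper who captures it cannot reuse it: the next session uses a fresh random key.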

Medium Access Control Sublayer (MAC


sublayer)
The medium access control (MAC) sublayer is a sublayer of the data link layer of the
open system interconnections (OSI) reference model for data transmission. It is
responsible for flow control and multiplexing of the transmission medium. It controls the
transmission of data packets via remotely shared channels and sends data over the
network interface card.

MAC Layer in the OSI Model


The Open System Interconnections (OSI) model is a layered networking framework that
conceptualizes how communications should be done between heterogeneous systems.
The data link layer is the second lowest layer. It is divided into two sublayers −
 The logical link control (LLC) sublayer
 The medium access control (MAC) sublayer
Functions of MAC Layer
 It provides an abstraction of the physical layer to the LLC and upper layers of the
OSI network.
 It is responsible for encapsulating frames so that they are suitable for
transmission via the physical medium.
 It resolves the addressing of source station as well as the destination station, or
groups of destination stations.
 It performs multiple access resolutions when more than one data frame is to be
transmitted. It determines the channel access methods for transmission.
 It also performs collision resolution and initiates retransmission in case of
collisions.
 It generates the frame check sequences and thus contributes to protection
against transmission errors.

MAC Addresses
MAC address or media access control address is a unique identifier allotted to a
network interface controller (NIC) of a device. It is used as a network address for data
transmission within a network segment like Ethernet, Wi-Fi, and Bluetooth.
A MAC address is assigned to a network adapter at the time of manufacturing. It is
hardwired or hard-coded in the network interface card (NIC). A MAC address
comprises six groups of two hexadecimal digits, separated by hyphens, colons, or no
separator. An example of a MAC address is 00:0A:89:5B:F0:11.
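Since all three separator conventions are in use, a small normalizer is handy (a sketch; the function name is illustrative):

```python
import re

def normalize_mac(mac):
    """Accept a MAC written with hyphens, colons, or no separators and
    return the canonical colon-separated uppercase form."""
    digits = re.sub(r"[-:]", "", mac).upper()          # strip separators
    if not re.fullmatch(r"[0-9A-F]{12}", digits):      # must be 12 hex digits
        raise ValueError("not a valid MAC address: %r" % mac)
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2))
```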
What is channel allocation in computer
network?
When more than one user desires to access a shared network channel, an
algorithm is deployed to allocate the channel among the competing users. The
network channel may be a single cable or optical fiber connecting multiple nodes, or a
portion of the wireless spectrum. Channel allocation algorithms allocate the channels
and bandwidths to the users, who may be base stations, access points or
terminal equipment.

Channel Allocation Schemes


Channel Allocation may be done using two schemes −

 Static Channel Allocation


 Dynamic Channel Allocation

Static Channel Allocation


In the static channel allocation scheme, a fixed portion of the frequency channel is
allotted to each user. For N competing users, the bandwidth is divided into N channels
using frequency division multiplexing (FDM), and each portion is assigned to one user.
This scheme is also referred to as fixed channel allocation or fixed channel assignment.
In this allocation scheme, there is no interference between the users, since each user is
assigned a fixed channel. However, it is not suitable for a large number of users
with variable bandwidth requirements.
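The fixed FDM division can be sketched directly (an illustrative function; bandwidth in Hz, equal-width channels as described above):

```python
def static_fdm(total_bandwidth_hz, users):
    """Fixed allocation: split the band into len(users) equal channels and
    permanently assign one (low, high) sub-band to each user."""
    width = total_bandwidth_hz / len(users)
    return {user: (i * width, (i + 1) * width)
            for i, user in enumerate(users)}
```

The assignment never changes afterwards, which is exactly why idle users waste their sub-bands and a new (N+1)-th user cannot be served.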

Dynamic Channel Allocation


In the dynamic channel allocation scheme, frequency bands are not permanently
assigned to the users. Instead, channels are allotted to users dynamically as needed,
from a central pool. The allocation is done considering a number of parameters so that
transmission interference is minimized.
This allocation scheme optimises bandwidth usage and results in faster transmissions.
Dynamic channel allocation is further divided into centralised and distributed allocation.

Static Channel Allocation in computer


network
When more than one user desires to access a shared network channel, an
algorithm is deployed to allocate the channel among the competing users. Static channel
allocation is a traditional method of channel allocation in which a fixed portion of the
frequency channel is allotted to each user, who may be a base station, access point or
terminal equipment. This scheme is also referred to as fixed channel allocation or fixed
channel assignment.

Working Principle
Suppose that there are N competing users. Here, the total bandwidth is divided into N
discrete channels using frequency division multiplexing (FDM). In most cases, the size
of the channels is equal. Each of these channels is assigned to one user.

Advantages
Static channel allocation scheme is particularly suitable for situations where there are a
small number of fixed users having a steady flow of uniform network traffic. The
allocation technique is simple and so the additional overhead of a complex algorithm
need not be incurred. Besides, there is no interference between the users since each
user is assigned a fixed channel which is not shared with others.

Disadvantages
Most real-life network situations have a variable number of users, usually large in
number with bursty traffic. If the value of N is very large, the bandwidth available to
each user will be very small. This reduces throughput when a user occasionally needs
to send a large volume of data.
It is very unlikely that all the users will be communicating all the time. However, since
all of them are allocated fixed bandwidths, the bandwidth allocated to non-
communicating users lies wasted.
If the number of users is more than N, then some of them will be denied service, even if
there are unused frequencies.

Dynamic Channel Allocation in Computer Networks
When more than one user desires to access a shared network channel,
an algorithm is deployed for channel allocation among the competing users. Dynamic
channel allocation encompasses the channel allocation schemes where channels are
allotted to users dynamically as per their requirements, from a central pool.

Working Principle
In dynamic channel allocation schemes, frequency channels are not permanently
allotted to any user. Channels are assigned to the user as needed depending upon the
network environment. The available channels are kept in a queue or a central pool. The
allocation of the channels is temporary. Distribution of the channels to the contending
users is based upon distribution of the users in the network and offered traffic load. The
allocation is done so that transmission interference is minimized.
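The central-pool idea can be sketched as a toy Python class (illustrative only; the names and the simple first-come-first-served queue policy are our assumptions, while real schemes also weigh interference and traffic load):

```python
from collections import deque

class ChannelPool:
    """Minimal sketch of dynamic channel allocation from a central pool."""

    def __init__(self, channels):
        self.free = deque(channels)   # central pool of idle channels
        self.in_use = {}              # user -> channel currently held

    def request(self, user):
        """Temporarily allot a channel, or None if all are busy."""
        if not self.free:
            return None               # caller must wait and retry
        ch = self.free.popleft()
        self.in_use[user] = ch
        return ch

    def release(self, user):
        """Return the user's channel to the pool for reuse."""
        self.free.append(self.in_use.pop(user))

pool = ChannelPool(["ch1", "ch2"])
print(pool.request("A"))  # ch1
print(pool.request("B"))  # ch2
print(pool.request("C"))  # None: pool exhausted, C must wait
pool.release("A")
print(pool.request("C"))  # ch1, reused dynamically
```

Unlike the static scheme, a channel left idle by one user is immediately available to another.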

Dynamic Channel Allocation Schemes


The dynamic channel allocation schemes can be divided into three categories −

 Interference Adaptive Dynamic Channel Allocation (IA-DCA)


 Location Adaptive Dynamic Channel Allocation (LA-DCA)
 Traffic Adaptive Dynamic Channel Allocation (TA-DCA)
All these schemes evaluate the cost of using each available channel and allocate the
channel with the optimum cost.

Advantages
Dynamic channel allocation schemes allot channels as needed. This results in
optimum utilization of network resources. There are fewer chances of denial of service
and call blocking in case of voice transmission. These schemes adjust bandwidth
allotment according to traffic volume, and so are particularly suitable for bursty traffic.

Disadvantages
Dynamic channel allocation schemes increase the computational as well as storage
load on the system.

Multiple Access Protocols: ALOHA, CSMA, CSMA/CA and CSMA/CD
Data Link Layer
The data link layer is used in a computer network to transmit data between two devices or
nodes. It is divided into two sublayers: data link control and multiple
access resolution/protocol. The upper sublayer is responsible for flow control and
error control, and hence it is termed logical link control (LLC). The lower sublayer
is used to handle and reduce collisions from multiple access on a channel, and hence it
is termed media access control (MAC) or multiple access resolution.

Data Link Control


A data link control is a reliable channel for transmitting data over a dedicated link using various
techniques such as framing, error control and flow control of data packets in the computer
network.

What is a multiple access protocol?


When a sender and receiver have a dedicated link to transmit data packets, the data link control is
enough to handle the channel. Suppose there is no dedicated path to communicate or transfer the
data between two devices. In that case, multiple stations access the channel and
simultaneously transmit data over it, which may create collisions and crosstalk. Hence, a
multiple access protocol is required to reduce collisions and avoid crosstalk between the channels.

For example, suppose that there is a classroom full of students. When a teacher asks a question,
all the students (small channels) in the class start answering the question at the same time
(transferring the data simultaneously). Because all the students respond at once, the
answers overlap or are lost. Therefore it is the responsibility of the teacher (multiple access
protocol) to manage the students and have them answer one at a time.

Multiple access protocols are subdivided into the following categories and
processes:

Random Access Protocol


In this protocol, all stations have equal priority to send data over the
channel. In random access protocol, no station depends on
another station, nor does any station control another. Depending on the
channel's state (idle or busy), each station transmits its data frame.
However, if more than one station sends data over the channel, there may
be a collision or data conflict. Due to the collision, data frame packets
may be lost or changed, and hence not received correctly by the receiver.

Following are the different methods of random-access protocols for broadcasting frames on the channel.

o Aloha
o CSMA
o CSMA/CD
o CSMA/CA

ALOHA Random Access Protocol

It is designed for wireless LANs (Local Area Networks) but can also be used in any
shared medium to transmit data. Using this method, any station can transmit
data across the network whenever a data frame is available for
transmission.

Aloha Rules

1. Any station can transmit data to a channel at any time.


2. It does not require any carrier sensing.
3. Collisions may occur, and data frames may be lost, when multiple
stations transmit data at the same time.
4. There is no collision detection in Aloha; instead, acknowledgment of
frames is used to confirm delivery.
5. It requires retransmission of data after some random amount of time.

Pure Aloha
Pure Aloha is used whenever data is available for sending over a channel at a station.
In pure Aloha, each station transmits data to the channel
without checking whether the channel is idle, so collisions
may occur and data frames can be lost. After transmitting a
data frame, the station waits for the receiver's
acknowledgment. If the acknowledgment does not arrive within the
specified time, the station assumes the frame has been lost or destroyed,
and waits for a random amount of time, called the
backoff time (Tb). It then retransmits the frame until all the data is
successfully delivered to the receiver.

1. The total vulnerable time of pure Aloha is 2 × Tfr.
2. Maximum throughput occurs at G = 1/2, that is, 18.4%.
3. The probability of successful transmission of a data frame is S = G × e^(−2G).
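For pure Aloha, the throughput S = G × e^(−2G) can be checked numerically with a short Python sketch (illustrative; the function name is ours):

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G): fraction of frame times carrying a successful
    frame, where G is the average number of frames attempted per
    frame time. The factor 2G reflects the 2*Tfr vulnerable period."""
    return G * math.exp(-2 * G)

# Throughput peaks at G = 1/2, giving S ~ 0.184 (18.4%).
print(round(pure_aloha_throughput(0.5), 3))  # 0.184
```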

Consider four stations accessing a shared channel and transmitting data frames.
Some frames collide because several stations send their frames at the same
time. Only two frames, frame 1.1 and frame 2.2, are successfully transmitted to
the receiver; the other frames are lost or destroyed. Whenever two frames
occupy the shared channel simultaneously, a collision occurs and both suffer
damage: if the first bit of a new frame enters the channel before the last bit
of a frame already on the channel has finished, both frames are destroyed
completely, and both stations must retransmit their data frames.
Slotted Aloha

Slotted Aloha was designed to overcome pure Aloha's poor efficiency,
since pure Aloha has a very high probability of frame collision. In slotted
Aloha, the shared channel is divided into fixed time intervals called slots.
If a station wants to send a frame on the shared channel, the frame
can only be sent at the beginning of a slot, and only one frame is allowed
per slot. If a station cannot send its data at the beginning of a slot,
it must wait until the beginning of the next slot. However, the
possibility of a collision remains when two or more stations try to send
a frame at the beginning of the same time slot.

1. Maximum throughput occurs in slotted Aloha at G = 1, that is, 36.8%.
2. The probability of successfully transmitting a data frame in
slotted Aloha is S = G × e^(−G).
3. The total vulnerable time required in slotted Aloha is Tfr.
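The slotted Aloha throughput S = G × e^(−G) can likewise be checked numerically (illustrative Python sketch):

```python
import math

def slotted_aloha_throughput(G):
    """S = G * e^(-G): with slots, a frame collides only with frames
    sent in the same slot, so the vulnerable time drops to Tfr."""
    return G * math.exp(-G)

# Peak throughput is at G = 1: S = 1/e ~ 0.368, double pure Aloha's 0.184.
print(round(slotted_aloha_throughput(1.0), 3))  # 0.368
```

Halving the vulnerable period (Tfr instead of 2·Tfr) is what doubles the peak throughput relative to pure Aloha.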

CSMA (Carrier Sense Multiple Access)

Carrier Sense Multiple Access (CSMA) is a media access protocol in which a
station senses the traffic on the channel (idle or busy) before transmitting
data. If the channel is idle, the station can send data on it;
otherwise, it must wait until the channel becomes idle. This reduces the
chance of a collision on the transmission medium.

CSMA Access Modes

1-Persistent: In the 1-Persistent mode of CSMA, each node first senses the
shared channel and, if the channel is idle, immediately sends the data.
Otherwise, it keeps sensing the channel continuously and transmits the
frame unconditionally as soon as the channel becomes idle.

Non-Persistent: In this access mode of CSMA, before transmitting data,
each node senses the channel, and if the channel is idle, it immediately
sends the data. Otherwise, the station waits for a random time (rather
than sensing continuously), and when the channel is then found to be
idle, it transmits the frame.

P-Persistent: This is a combination of the 1-Persistent and Non-Persistent
modes. In P-Persistent mode, each node senses the channel, and if the
channel is idle, it sends a frame with probability p. With probability
q = 1 − p, it instead defers to the next time slot and repeats the
process.

O-Persistent: In the O-Persistent method, a transmission order (priority)
is assigned to each station before transmission on the shared channel.
When the channel is found to be idle, each station waits for its turn in
that order to transmit its data.
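The first three persistence rules can be summarized as a toy decision function (a Python sketch; the mode names, parameter defaults and return labels are our own, and O-Persistent is omitted since it depends on an external ordering):

```python
import random

def csma_decide(channel_idle, mode, p=0.5):
    """One sensing step for the common CSMA persistence modes.

    Returns "send", "sense_again" (keep sensing the channel
    continuously), "wait_random" (back off for a random time before
    sensing again), or "wait_next_slot" (defer to the next slot).
    """
    if mode == "1-persistent":
        return "send" if channel_idle else "sense_again"
    if mode == "non-persistent":
        return "send" if channel_idle else "wait_random"
    if mode == "p-persistent":          # used with slotted channels
        if not channel_idle:
            return "sense_again"
        # Idle: transmit with probability p, else defer to the next slot.
        return "send" if random.random() < p else "wait_next_slot"
    raise ValueError(mode)

print(csma_decide(True, "1-persistent"))     # send
print(csma_decide(False, "non-persistent"))  # wait_random
```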
CSMA/CD

Carrier Sense Multiple Access/Collision Detection (CSMA/CD) is a network
protocol for transmitting data frames that works within the medium access
control layer. A station first senses the shared channel before
broadcasting a frame, and if the channel is idle, it transmits the frame
while monitoring the channel to check whether the transmission succeeds.
If the frame is successfully received, the station can send the next
frame. If a collision is detected, the station sends a jam/stop signal on
the shared channel to terminate the data transmission. After that, it
waits for a random time before resending the frame on the channel.
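The "random time" after a collision is classically chosen by binary exponential backoff; a minimal Python sketch (the 10-collision cap on the exponent follows classic Ethernet practice):

```python
import random

def backoff_slots(collisions, max_exp=10):
    """Binary exponential backoff as used by classic Ethernet CSMA/CD:
    after the n-th consecutive collision, wait a random number of slot
    times chosen uniformly from 0 .. 2^min(n, 10) - 1. Doubling the
    window on each collision spreads retransmissions apart as the
    channel gets busier."""
    k = min(collisions, max_exp)
    return random.randint(0, 2 ** k - 1)

# After the 3rd collision the wait is one of 0..7 slot times.
print(backoff_slots(3) in range(8))  # True
```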

CSMA/CA

Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) is a network
protocol for carrier transmission of data frames. It works with the
medium access control layer. When a data frame is sent on a channel, the
sender relies on an acknowledgment to check whether the transmission was
clear. If the station receives only a single (its own) signal back, the
data frame has been successfully transmitted to the receiver. But if it
gets two signals (its own and one more, meaning the frames collided), a
collision of frames has occurred on the shared channel. Thus the sender
detects a collision only when the expected acknowledgment signal fails to
arrive.

Following are the methods used in CSMA/CA to avoid collisions:

Interframe space: In this method, the station waits for the channel to
become idle, and when it finds the channel idle, it does not immediately
send the data. Instead, it waits for some time; this time period is
called the interframe space, or IFS. The IFS time is often used to define
the priority of a station.

Contention window: In the contention window method, the total time is
divided into slots. When the station/sender is ready to transmit a data
frame, it chooses a random number of slots as its wait time. If the
channel becomes busy during the countdown, it does not restart the entire
process; it only pauses the timer and resumes the countdown, sending the
data packet when the channel is idle again.

Acknowledgment: In the acknowledgment method, the sender station
retransmits the data frame on the shared channel if an acknowledgment is
not received before its timer expires.
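The IFS-plus-contention-window wait can be sketched as a toy Python model (illustrative only; the slot counts are arbitrary example values, and freezing of the countdown when the channel turns busy is only noted, not simulated):

```python
import random

def csma_ca_wait(ifs_slots, cw_size):
    """Sketch of the CSMA/CA wait before transmitting: once the channel
    goes idle, the station first waits the interframe space (IFS), then
    counts down a random number of contention-window slots. (If the
    channel turns busy mid-countdown, the remaining count would be
    frozen and resumed later, not restarted.)"""
    backoff = random.randrange(cw_size)   # 0 .. cw_size-1 slots
    return ifs_slots + backoff

total = csma_ca_wait(ifs_slots=2, cw_size=8)
print(2 <= total <= 9)  # True: IFS plus 0..7 backoff slots
```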

B. Controlled Access Protocol


It is a method of reducing data frame collisions on a shared channel. In
the controlled access method, the stations interact, and a data frame is
sent by a particular station only when approved by all the other
stations. That is, a single station cannot send a data frame unless all
other stations have approved it. There are three types of controlled
access: Reservation, Polling, and Token Passing.

C. Channelization Protocols
Channelization protocols allow the total usable bandwidth of a shared
channel to be shared across multiple stations based on time, frequency
and codes. All the stations can access the channel at the same time to
send their data frames.

Following are the various methods to access the channel based on
frequency, time and codes:
1. FDMA (Frequency Division Multiple Access)
2. TDMA (Time Division Multiple Access)
3. CDMA (Code Division Multiple Access)

FDMA

Frequency Division Multiple Access (FDMA) is a method that divides the
available bandwidth into equal bands so that multiple users can send data
through different frequency subchannels. Each station is reserved a
particular band to prevent crosstalk between the channels and
interference between stations.

TDMA

Time Division Multiple Access (TDMA) is a channel access method. It
allows the same frequency bandwidth to be shared across multiple
stations. To avoid collisions in the shared channel, it divides the
channel into time slots and allocates a slot to each station for
transmitting its data frames: the stations share the same frequency
bandwidth by dividing the signal into various time slots. However, TDMA
has a synchronization overhead, since each station's time slot must be
specified by adding synchronization bits to each slot.
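The round-robin slot assignment of TDMA can be sketched in Python (illustrative only; the function and station names are ours):

```python
def tdma_schedule(stations, n_frames):
    """Sketch of TDMA: each frame is divided into one time slot per
    station, and the slot pattern repeats identically every frame, so
    a station transmits only in its own recurring slot."""
    return [stations[slot % len(stations)]
            for slot in range(n_frames * len(stations))]

# Three stations share one frequency; the owner of each successive slot:
print(tdma_schedule(["A", "B", "C"], 2))  # ['A','B','C','A','B','C']
```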

CDMA

Code Division Multiple Access (CDMA) is a channel access method. In CDMA,
all stations can simultaneously send data over the same channel. It
allows each station to transmit data frames over the full frequency of
the shared channel at all times. It does not require dividing the
bandwidth of the shared channel into time slots. If multiple stations
send data on the channel simultaneously, their data frames are separated
by unique code sequences: each station is assigned a different code for
transmitting its data over the shared channel. For example, imagine
multiple pairs of people in a room, all speaking at the same time. Each
pair can still communicate if it converses in a language the other pairs
do not share. Similarly, in the network, different stations can
communicate simultaneously, each using its own code language.
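The code-separation idea can be demonstrated with short orthogonal (Walsh) chip codes (a toy Python sketch; real CDMA uses much longer codes and also handles noise, power control and synchronization, all ignored here):

```python
# Hypothetical 4-chip Walsh codes, one per station; any two distinct
# codes are orthogonal (their dot product is zero).
codes = {
    "A": [+1, +1, +1, +1],
    "B": [+1, -1, +1, -1],
    "C": [+1, +1, -1, -1],
}

def transmit(bits):
    """Each station multiplies its data bit (+1 or -1) by its chip code;
    the shared channel carries the chip-by-chip sum of all signals."""
    n = len(next(iter(codes.values())))
    return [sum(bits[s] * codes[s][i] for s in bits) for i in range(n)]

def receive(channel, station):
    """Despreading: the dot product with the station's own code recovers
    that station's bit, because the other codes contribute zero."""
    code = codes[station]
    return sum(c * x for c, x in zip(code, channel)) // len(code)

# A sends +1, B sends -1, C sends +1, all at the same time.
channel = transmit({"A": +1, "B": -1, "C": +1})
print(receive(channel, "A"), receive(channel, "B"), receive(channel, "C"))
# 1 -1 1
```

Each receiver extracts its own station's bit from the summed signal, which is exactly the "same room, different languages" analogy above.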

Wireless LAN and IEEE 802.11


Wireless LANs are those Local Area Networks that use high frequency radio waves
instead of cables for connecting the devices in LAN. Users connected by WLANs can
move around within the area of network coverage. Most WLANs are based upon the
standard IEEE 802.11 or WiFi.

IEEE 802.11 Architecture


The components of an IEEE 802.11 architecture are as follows
1) Stations (STA) − Stations comprise all devices and equipment that are connected
to the wireless LAN. A station can be of two types:

 Wireless Access Point (WAP) − WAPs, or simply access points (AP), are
generally wireless routers that form the base stations or access points.
 Client − Clients are workstations, computers, laptops, printers, smartphones,
etc.
Each station has a wireless network interface controller.
2) Basic Service Set (BSS) −A basic service set is a group of stations communicating
at physical layer level. BSS can be of two categories depending upon mode of
operation:

 Infrastructure BSS − Here, the devices communicate with other devices through
access points.
 Independent BSS − Here, the devices communicate in peer-to-peer basis in an
ad hoc manner.
3) Extended Service Set (ESS) − It is a set of all connected BSS.
4) Distribution System (DS) − It connects access points in ESS.

Advantages of WLANs
 They provide clutter free homes, offices and other networked places.
 The LANs are scalable in nature, i.e. devices may be added or removed from the
network at a greater ease than wired LANs.
 The system is portable within the network coverage and access to the network is
not bounded by the length of the cables.
 Installation and setup is much easier than wired counterparts.
 The equipment and setup costs are reduced.

Disadvantages of WLANs
 Since radio waves are used for communications, the signals are noisier with more
interference from nearby systems.
 Greater care is needed for encrypting information. Also, they are more prone to
errors. So, they require greater bandwidth than the wired LANs.
 WLANs are slower than wired LANs.

IEEE 802.3 and Ethernet


Ethernet is a set of technologies and protocols that are used primarily in LANs. It was
first standardized in the 1980s by the IEEE 802.3 standard. IEEE 802.3 defines the physical
layer and the medium access control (MAC) sub-layer of the data link layer for wired
Ethernet networks. Ethernet is classified into two categories: classic Ethernet and
switched Ethernet.
Classic Ethernet is the original form of Ethernet that provides data rates between 3 and
10 Mbps. The varieties are commonly referred to as 10BASE-X. Here, 10 is the maximum
throughput, i.e. 10 Mbps, BASE denotes the use of baseband transmission, and X is the
type of medium used. Most varieties of classic Ethernet have become obsolete in
present communication scenarios.
A switched Ethernet uses switches to connect to the stations in the LAN. It replaces the
repeaters used in classic Ethernet and allows full bandwidth utilization.

IEEE 802.3 Popular Versions


There are a number of versions of IEEE 802.3 protocol. The most popular ones are -
 IEEE 802.3: This was the original standard given for 10BASE-5. It used a thick
single coaxial cable into which a connection can be tapped by drilling into the
cable to the core. Here, 10 is the maximum throughput, i.e. 10 Mbps, BASE
denotes the use of baseband transmission, and 5 refers to the maximum segment
length of 500m.
 IEEE 802.3a: This gave the standard for thin coax (10BASE-2), which is a thinner
variety where the segments of coaxial cables are connected by BNC connectors.
The 2 refers to the maximum segment length of about 200m (185m to be
precise).
 IEEE 802.3i: This gave the standard for twisted pair (10BASE-T) that uses
unshielded twisted pair (UTP) copper wires as physical layer medium. The further
variations were given by IEEE 802.3u for 100BASE-TX, 100BASE-T4 and
100BASE-FX.
 IEEE 802.3j: This gave the standard for Ethernet over Fiber (10BASE-F) that
uses fiber optic cables as the medium of transmission.
Frame Format of Classic Ethernet and IEEE 802.3
The main fields of a frame of classic Ethernet are -
 Preamble: It is the starting field that provides alert and timing pulse for
transmission. In case of classic Ethernet it is an 8 byte field and in case of IEEE
802.3 it is of 7 bytes.
 Start of Frame Delimiter: It is a 1 byte field in a IEEE 802.3 frame that contains
an alternating pattern of ones and zeros ending with two ones.
 Destination Address: It is a 6 byte field containing physical address of
destination stations.
 Source Address: It is a 6 byte field containing the physical address of the
sending station.
 Length: It is a 2-byte field that stores the number of bytes in the data field.
 Data: This is a variable-sized field that carries data from the upper layers. The
maximum size of the data field is 1500 bytes.
 Padding: This is added to the data to bring its length to the minimum requirement
of 46 bytes.
 CRC: CRC stands for cyclic redundancy check. It contains the error detection
information.
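The field layout can be sketched by building a minimal frame in Python (illustrative: the preamble and SFD are omitted, `zlib.crc32` stands in for Ethernet's CRC-32 computation, and for simplicity the length field here holds the padded data length):

```python
import struct
import zlib

def build_frame(dst, src, payload):
    """Sketch of an IEEE 802.3 frame body: 6-byte destination and
    source addresses, a 2-byte length, the data padded to the 46-byte
    minimum, and a 4-byte CRC over the preceding fields."""
    if len(payload) < 46:
        payload = payload + b"\x00" * (46 - len(payload))  # padding
    body = dst + src + struct.pack("!H", len(payload)) + payload
    return body + struct.pack("!I", zlib.crc32(body))

frame = build_frame(b"\xaa" * 6, b"\xbb" * 6, b"hello")
print(len(frame))  # 6 + 6 + 2 + 46 + 4 = 64 bytes, the Ethernet minimum
```

The receiver recomputes the CRC over everything but the last four bytes and compares it with the transmitted value to detect corruption.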
Token Ring in Computer Networks
What is a Token Ring?
A token ring is a computer network topology and access method that connects
devices in a physical ring or loop. In a token ring network, data travels
unidirectionally around the ring, and devices are connected to the network
in a sequential fashion. This topology contrasts with other network
topologies, such as Ethernet, which use a bus or star configuration.

What is a Token Ring Network?


A Token Ring network is a type of local area network (LAN) technology that
uses a ring topology to connect devices. In a Token Ring network, devices
are connected in a physical or logical ring, and data travels around the
ring in one direction. The term "token" refers to a particular control
packet used to manage network access.

History of Token Ring


Token Ring technology has a rich history that dates back to the 1970s and
has seen several developments and changes. Here's a brief history of the
Token Ring:

1. Early Development (1970s): The concept of Token Ring technology


was initially developed by IBM in the early 1970s. IBM introduced its
first Token Ring network under the name "IBM Token Ring
Architecture" in the late 1970s. This technology was intended for use
in IBM's larger mainframe computer systems.
2. IEEE Standardization (1980s): In the early 1980s, Token Ring
technology began to gain broader acceptance. The Institute of
Electrical and Electronics Engineers (IEEE) developed a standard for
Token Ring LANs called IEEE 802.5. This standardization effort helped
Token Ring become a more widely adopted LAN technology.
3. Commercialization (1980s): Throughout the 1980s, various
technology companies, including IBM and others, commercialized
Token Ring hardware and software. This led to the widespread
deployment of Token Ring networks in corporate and enterprise
environments. The technology offered deterministic access, which was
attractive for mission-critical applications.
4. Speed Improvements (1990s): In the early 1990s, Token Ring
networks primarily operated at 4 Mbps (megabits per second).
However, technological advancements led to the development of
Token Ring networks running at 16 Mbps, providing faster data
transmission.
5. Challenges from Ethernet (1990s): Despite its reliability and
determinism, Token Ring faced competition from Ethernet LANs, which
were becoming more popular and cost-effective. Ethernet's shared bus
topology and CSMA/CD (Carrier Sense Multiple Access with Collision
Detection) access method made it easier to implement and scale,
while Token Ring required specialized hardware.
6. Token Ring 100 and 1000 (2000s): To remain competitive, Token
Ring technology evolved to offer higher speeds. Token Ring 100 (100
Mbps) and Token Ring 1000 (1 Gbps) were introduced in the early
2000s. However, these developments came relatively late in the
history of LAN technologies, and Ethernet had already established its
dominance.
7. Decline and Legacy (2000s-Present): Despite efforts to improve
and evolve the Token Ring, it gradually declined in popularity and
market share. Ethernet became the dominant LAN technology for most
networking applications. Many organizations migrated from token rings
to Ethernet networks, rendering token rings largely legacy.

What is Token Ring Star Topology?


A Token Ring star topology is a traditional Token Ring network topology in
which the physical layout of the network uses a star configuration, even
though the logical structure of the network remains a ring. This design
combines elements of both star and ring topologies to provide certain
advantages. The Token Ring star topology works in the following manner.

1. Physical Star Layout: In a Token Ring star topology, all the devices
are connected to a central hub or Multistation Access Unit (MAU). This
central hub is also known as the focal point of the star. Each device,
such as computers or network printers, has a dedicated connection to
the hub, and these connections radiate out from the hub like the
spokes of a wheel.
2. Logical Ring Structure: Despite the physical star layout, the entire
network maintains a logical ring structure, created internally within
the central hub. This means that data packets and the token continue
circling around a ring within the hub, just as they would in a
traditional Token Ring network with a physical ring topology.
3. Token Passing: Token passing still controls access to the network.
When a device connected to the hub needs to transmit data, it waits for
the token to arrive at the hub. Once it has the token, it can transmit
data onto the logical ring within the hub. The token then continues to
circulate until another device needs to transmit.
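The token-passing discipline can be sketched with a tiny simulation (illustrative Python; the station order, frame names and one-frame-per-token-visit policy are our assumptions):

```python
def token_ring(stations, queued, rounds=1):
    """Sketch of token passing: the token circulates in a fixed order,
    and a station transmits one queued frame only while it holds the
    token, so no two stations ever transmit at the same time."""
    log = []
    for _ in range(rounds):
        for st in stations:            # token passes station to station
            if queued.get(st):         # holder with pending data sends
                log.append((st, queued[st].pop(0)))
    return log

sent = token_ring(["A", "B", "C"],
                  {"A": ["f1"], "C": ["f2", "f3"]}, rounds=2)
print(sent)  # [('A', 'f1'), ('C', 'f2'), ('C', 'f3')]
```

Because transmission is serialized by token possession, collisions cannot occur, which is the deterministic-access property that made Token Ring attractive for mission-critical traffic.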

What are Type 1 and Type 3 Token Ring Networks?


In Token Ring networks, Type 1 and Type 3 are two different standards or
categories of Token Ring networks. These standards are part of the IEEE
802.5 series for Token Ring LANs. Each type specifies different physical
characteristics and requirements for Token Ring networks:

Type 1 Token Ring (IEEE 802.5):

o Speed: The Type 1 Token Ring operates at a data rate of 4 Mbps
(megabits per second).
o Cabling: Type 1 Token Ring networks use shielded twisted-pair cables.
It is also referred to as STP (Shielded Twisted Pair) cabling. These
cables are designed to reduce electromagnetic interference, and these
also enhance the reliability of the network.
o Topology: Type 1 Token Ring networks typically use a physical ring
topology, where devices are connected sequentially in a closed-loop
configuration. The token-passing protocol maintains the logical ring
structure.
o Connectors: Type 1 Token Ring networks often use IBM Data
Connectors (IDCs) as the standard connectors for connecting devices
to the network.

Type 3 Token Ring (IEEE 802.5):

o Speed: Type 3 Token Ring networks operate at a data rate of 16 Mbps
(megabits per second). This represents a significant speed increase
compared to Type 1.
o Cabling: Type 3 Token Ring networks also use shielded twisted-pair
cables (STP) similar to Type 1. However, these cables may have
different specifications to accommodate the higher data rate.
o Topology: Type 3 Token Ring networks can maintain the physical ring
topology, but they can also be implemented with a physical star
topology where devices are connected to a central hub (Multistation
Access Unit or MAU). Despite the physical star layout, the logical ring
structure is maintained internally within the hub.
o Connectors: Type 3 Token Ring networks may use the same IBM Data
Connectors (IDCs) as Type 1 networks or other connectors compatible
with the higher data rate.

Token Bus (IEEE 802.4)


Token Bus (IEEE 802.4) is a popular standard for token-passing LANs. In a
token bus LAN, the physical medium is a bus or a tree built from coaxial
cable, and the stations form a logical ring. The token is passed from one
station to another in a sequence (clockwise or anticlockwise). Each station
knows the address of the station to its "left" and "right" in the logical
ring. A station can only transmit data when it has the token. The working
of a token bus is similar to that of a Token Ring.
The token bus frame has the following fields:
1. Preamble – It is used for bit synchronization. It is a 1-byte field.
2. Start Delimiter – These bits mark the beginning of the frame. It is a 1-byte
field.
3. Frame Control – This field specifies the type of frame – data frame and
control frames. It is a 1-byte field.
4. Destination Address – This field contains the destination address. It is a 2
to 6 bytes field.
5. Source Address – This field contains the source address. It is a 2 to 6
bytes field.
6. Data – If 2-byte addresses are used then the field may be up to 8182
bytes and 8174 bytes in the case of 6-byte addresses.
7. Checksum – This field contains the checksum bits which are used to
detect errors in the transmitted data. It is a 4-byte field.
8. End Delimiter – This field marks the end of a frame. It is a 1-byte field.
Ring topology has the following advantages:

1. Data collisions are less likely because each node sends out a data packet
after receiving the token.
2. Under heavy traffic, token passing makes ring topology perform better
than bus topology.
Characteristics:
1. Bus Topology: Token Bus uses a bus topology, where all devices on the
network are connected to a single cable or “bus”.
2. Token Passing: A “token” is passed around the network, which gives
permission for a device to transmit data.
3. Priority Levels: Token Bus uses three priority levels to prioritize data
transmission. The highest priority level is reserved for control messages
and the lowest for data transmission.
4. Collision Detection: Token Bus employs a collision detection mechanism
to ensure that two devices do not transmit data at the same time.
5. Maximum Cable Length: The maximum cable length for Token Bus is
limited to 1000 meters.
6. Data Transmission Rates: Token Bus can transmit data at speeds of up to
10 Mbps.
7. Limited Network Size: Token Bus is typically used for small to medium-
sized networks with up to 72 devices.
8. No Centralized Control: Token Bus does not require a central controller to
manage network access, which can make it more flexible and easier to
implement.
9. Vulnerable to Network Failure: If the token is lost or a device fails, the
network can become congested or fail altogether.
10. Security: Token Bus has limited security features, and unauthorized
devices can potentially gain access to the network.
Fiber Distributed Data Interface (FDDI)
Fiber Distributed Data Interface (FDDI) is a set of ANSI and ISO standards for
transmission of data in local area network (LAN) over fiber optic cables. It is
applicable in large LANs that can extend up to 200 kilometers in diameter.

Features
 FDDI uses optical fiber as its physical medium.
 It operates in the physical and medium access control (MAC layer) of
the Open Systems Interconnection (OSI) network model.
 It provides high data rate of 100 Mbps and can support thousands of
users.
 It is used in LANs up to 200 kilometers for long distance voice and
multimedia communication.
 It uses ring based token passing mechanism and is derived from IEEE
802.4 token bus standard.
 It contains two token rings, a primary ring for data and token transmission
and a secondary ring that provides backup if the primary ring fails.
 FDDI technology can also be used as a backbone for a wide area
network (WAN).

The FDDI frame format contains the following fields:
 Preamble: 1 byte for synchronization.
 Start Delimiter: 1 byte that marks the beginning of the frame.
 Frame Control: 1 byte that specifies whether this is a data frame or
control frame.
 Destination Address: 2-6 bytes that specifies address of destination
station.
 Source Address: 2-6 bytes that specifies address of source station.
 Payload: A variable length field that carries the data from the network
layer.
 Checksum: 4 bytes frame check sequence for error detection.
 End Delimiter: 1 byte that marks the end of the frame.

Network Devices (Hub, Repeater, Bridge, Switch, Router, Gateways and Brouter)
Network Devices: Network devices, also known as networking hardware, are
physical devices that allow hardware on a computer network to communicate
and interact with one another. Examples include repeaters, hubs, bridges,
switches, routers, gateways, brouters, and NICs.
1. Repeater – A repeater operates at the physical layer. Its job is to regenerate
the signal over the same network before the signal becomes too weak or
corrupted, thereby extending the length over which the signal can be transmitted.
An important point to note about repeaters is that they do not amplify the
signal: when the signal becomes weak, they copy it bit by bit and regenerate it
at its original strength. It is a 2-port device.

2. Hub – A hub is basically a multi-port repeater. A hub connects multiple wires
coming from different branches, for example, the connector in a star topology
that connects different stations. Hubs cannot filter data, so data packets are
sent to all connected devices. In other words, the collision domain of all hosts
connected through a hub remains one. Hubs also lack the intelligence to find
the best path for data packets, which leads to inefficiency and wastage.
Types of Hub
 Active Hub:- These are hubs that have their own power supply and can
clean, boost, and relay the signal along the network. An active hub serves
both as a repeater and as a wiring center. These are used to extend the
maximum distance between nodes.
 Passive Hub:- These are the hubs that collect wiring from nodes and power
supply from the active hub. These hubs relay signals onto the network
without cleaning and boosting them and can’t be used to extend the distance
between nodes.
 Intelligent Hub:- It works like an active hub and includes remote
management capabilities. They also provide flexible data rates to network
devices. It also enables an administrator to monitor the traffic passing
through the hub and to configure each port in the hub.
3. Bridge – A bridge operates at the data link layer. It is a repeater with the
added functionality of filtering content by reading the MAC addresses of the
source and destination. It is also used for interconnecting two LANs working on
the same protocol. It has a single input and a single output port, making it a
2-port device.
Types of Bridges
 Transparent Bridges:- These are bridges in which the stations are
completely unaware of the bridge’s existence, i.e., whether a bridge is
added to or removed from the network, reconfiguration of the stations is
unnecessary. These bridges make use of two processes: bridge
forwarding and bridge learning.
 Source Routing Bridges:- In these bridges, the routing operation is performed
by the source station and the frame specifies which route to follow. The host
can discover the route by sending a special frame called the discovery
frame, which spreads through the entire network using all possible paths to
the destination.
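The bridge learning and bridge forwarding processes mentioned above can be sketched in a few lines of Python. This is an illustrative model (the class, port numbers, and MAC strings are made up), not a real bridge implementation:

```python
# Minimal sketch of a transparent bridge's two processes:
# learning (record which port each source MAC arrived on) and
# forwarding (send to the learned port, or flood if unknown).
class TransparentBridge:
    def __init__(self, ports):
        self.ports = ports        # e.g. [1, 2, 3]
        self.table = {}           # learned MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        # Bridge learning: the source must live on the arrival port.
        self.table[src_mac] = in_port
        # Bridge forwarding: known destination goes to one port,
        # unknown destination is flooded to all other ports.
        if dst_mac in self.table:
            out = [self.table[dst_mac]]
        else:
            out = [p for p in self.ports if p != in_port]
        return [p for p in out if p != in_port]   # never echo back
```

For example, the first frame from an unknown destination is flooded, while the reply is forwarded only to the port the bridge has already learned.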
4. Switch – A switch is a multiport bridge with a buffer and a design that boosts
its efficiency (a large number of ports implies less traffic per port) and
performance. A switch is a data link layer device. The switch can perform error
checking before forwarding data, which makes it very efficient: it does not
forward packets that have errors, and it forwards good packets selectively to
the correct port only. In other words, the switch divides the collision domain of
hosts, but the broadcast domain remains the same.
Types of Switch
1. Unmanaged switches: These switches have a simple plug-and-play design
and do not offer advanced configuration options. They are suitable for small
networks or for use as an expansion to a larger network.
2. Managed switches: These switches offer advanced configuration options
such as VLANs, QoS, and link aggregation. They are suitable for larger,
more complex networks and allow for centralized management.
3. Smart switches: These switches have features similar to managed switches
but are typically easier to set up and manage. They are suitable for small- to
medium-sized networks.
4. Layer 2 switches: These switches operate at the Data Link layer of the OSI
model and are responsible for forwarding data between devices on the same
network segment.
5. Layer 3 switches: These switches operate at the Network layer of the OSI
model and can route data between different network segments. They are
more advanced than Layer 2 switches and are often used in larger, more
complex networks.
6. PoE switches: These switches have Power over Ethernet capabilities, which
allows them to supply power to network devices over the same cable that
carries data.
7. Gigabit switches: These switches support Gigabit Ethernet speeds, which
are faster than traditional Ethernet speeds.
8. Rack-mounted switches: These switches are designed to be mounted in a
server rack and are suitable for use in data centers or other large networks.
9. Desktop switches: These switches are designed for use on a desktop or in a
small office environment and are typically smaller in size than rack-mounted
switches.
10. Modular switches: These switches have modular design, which allows for
easy expansion or customization. They are suitable for large networks and
data centers.
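The error checking described above, a store-and-forward switch verifying a frame's check sequence before forwarding it, can be sketched as follows. The trailing 4-byte CRC-32 is an assumption for illustration; real Ethernet switches use the 802.3 FCS in hardware:

```python
import zlib

def store_and_forward(frame):
    """Sketch: verify a trailing 4-byte CRC-32 before forwarding.

    Returns the payload if the frame is intact, or None if the
    frame is damaged and must be dropped rather than forwarded.
    """
    payload, fcs = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != fcs:
        return None          # damaged frame: discard, do not forward
    return payload
```

This is why a store-and-forward switch never propagates corrupted frames, unlike a hub, which repeats every bit it receives.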

5. Router – A router is a device like a switch that routes data packets based
on their IP addresses. The router is mainly a network layer device. Routers
normally connect LANs and WANs and maintain a dynamically updated routing
table based on which they make decisions on routing the data packets. The
router divides the broadcast domains of hosts connected through it.
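The routing-table decision described above comes down to a longest-prefix match on the destination IP address: the most specific matching network entry wins. A minimal sketch using Python's ipaddress module (the prefixes and next-hop addresses are made-up examples):

```python
import ipaddress

# Illustrative routing table: network prefix -> next-hop address.
routes = {
    ipaddress.ip_network("10.0.0.0/8"):  "192.0.2.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.0.2.2",
    ipaddress.ip_network("0.0.0.0/0"):   "192.0.2.254",  # default route
}

def next_hop(dst):
    addr = ipaddress.ip_address(dst)
    # Longest-prefix match: among all networks containing the
    # destination, pick the one with the longest prefix length.
    best = max((net for net in routes if addr in net),
               key=lambda net: net.prefixlen)
    return routes[best]
```

For instance, a destination inside 10.1.0.0/16 is sent to that entry's next hop rather than the broader 10.0.0.0/8 route, and anything with no specific match falls through to the default route.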

6. Gateway – A gateway, as the name suggests, is a passage that connects two
networks which may use different networking models. It works as a messenger
agent that takes data from one system, interprets it, and transfers it to another
system. Gateways are also called protocol converters and can operate at any
layer of the network model. They are generally more complex than switches or
routers.
7. Brouter – Also known as a bridging router, this is a device that combines
the features of both a bridge and a router. It can work either at the data link
layer or at the network layer. Working as a router, it is capable of routing
packets across networks; working as a bridge, it is capable of filtering local
area network traffic.
8. NIC – A NIC, or network interface card, is a network adapter used to
connect a computer to a network. It is installed in the computer to establish
a LAN. It has a unique id written on its chip, and it has a connector for the
cable, which acts as the interface between the computer and the router or
modem. The NIC works on both the physical and data link layers of the
network model.
