CN Notes - Unit2
o In the OSI model, the data link layer is the 6th layer from the top and
the 2nd layer from the bottom.
o The communication channels that connect adjacent nodes are known
as links, and in order to move a datagram from source to destination,
the datagram must be moved across each individual link.
o The main responsibility of the Data Link Layer is to transfer the
datagram across an individual link.
o The Data link layer protocol defines the format of the packet
exchanged across the nodes as well as the actions such as Error
detection, retransmission, flow control, and random access.
o The Data Link Layer protocols are Ethernet, token ring, FDDI and PPP.
o An important characteristic of the Data Link Layer is that a datagram
can be handled by different link layer protocols on different links in a
path. For example, a datagram may be handled by Ethernet on the first
link and by PPP on the second link.
Framing
The data link layer encapsulates each data packet from the network layer into frames
that are then transmitted.
A frame has three parts, namely −
Frame Header
Payload field that contains the data packet from network layer
Trailer
The Data Link Layer provides three main functions −
o Line Discipline
o Flow Control
o Error Control
Line Discipline
o Line Discipline is a functionality of the Data Link Layer that provides
coordination among the linked systems. It determines which device can
send, and when it can send, the data.
o ENQ/ACK
o Poll/select
ENQ/ACK
ENQ/ACK (Enquiry/Acknowledgement) coordinates which device will start the
transmission and checks whether the recipient is ready.
Working of ENQ/ACK
The transmitter sends a frame called an Enquiry (ENQ), asking whether the
receiver is available to receive the data.
Error Control
The data link layer ensures an error-free link for data transmission. The issues it
handles with respect to error control are error detection and error correction.
Flow Control
The data link layer regulates flow control so that a fast sender does not drown a slow
receiver. When the sender sends frames at very high speed, a slow receiver may not
be able to handle them; there will be frame losses even if the transmission is error-free.
The two common approaches for flow control are −
Stop and Wait
Sliding Window
Types of Errors
There may be three types of errors:
Single bit error
Multiple bits error
Burst error
Error Detection
Errors in the received frames are detected by means of Parity Check and Cyclic
Redundancy Check (CRC). In both cases, a few extra bits are sent along with the actual
data to confirm that the bits received at the other end are the same as those sent. If the
counter-check at the receiver's end fails, the bits are considered corrupted.
Parity Check
One extra bit is sent along with the original bits to make number of 1s either even in
case of even parity, or odd in case of odd parity.
While creating a frame, the sender counts the number of 1s in it. For example, if even
parity is used and the number of 1s is even, a bit with value 0 is added; this way the
number of 1s remains even. If the number of 1s is odd, a bit with value 1 is added to
make it even.
The receiver simply counts the number of 1s in the frame. If the count of 1s is even and
even parity is used, the frame is considered not corrupted and is accepted; similarly, if
the count of 1s is odd and odd parity is used, the frame is accepted.
If a single bit flips in transit, the receiver can detect it by counting the number of 1s. But
when more than one bit is erroneous, it is very hard for the receiver to detect the error.
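The parity mechanism described above can be sketched in a few lines of Python. This is a minimal illustration using bit strings for clarity; real hardware operates directly on the frame's bits:

```python
def add_parity_bit(bits, even=True):
    """Append one parity bit so the total count of 1s is even (or odd)."""
    ones = bits.count('1')
    if even:
        parity = '0' if ones % 2 == 0 else '1'
    else:
        parity = '1' if ones % 2 == 0 else '0'
    return bits + parity

def check_parity(frame, even=True):
    """Return True if the frame's 1-count matches the agreed parity."""
    ones = frame.count('1')
    return (ones % 2 == 0) if even else (ones % 2 == 1)
```

For example, `add_parity_bit('1011')` yields `'10111'` (three 1s, so a 1 is appended to make the count even), and flipping any single bit of the result makes `check_parity` fail.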
Cyclic Redundancy Check (CRC)
CRC is a different approach to detect if the received frame contains valid data. This
technique involves binary division of the data bits being sent. The divisor is generated
using polynomials. The sender performs a division operation on the bits being sent and
calculates the remainder. Before sending the actual bits, the sender adds the remainder
at the end of the actual bits. Actual data bits plus the remainder is called a codeword.
The sender transmits data bits as codewords.
At the other end, the receiver performs the division operation on the codeword using the
same CRC divisor. If the remainder contains all zeros, the data bits are accepted;
otherwise, it is assumed that some data corruption occurred in transit.
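The CRC procedure just described (append r zero bits, divide modulo 2, append the remainder, and check for an all-zero remainder at the receiver) can be sketched as follows. Bit strings are used for illustration; production implementations work on machine words with table-driven XOR:

```python
def crc_remainder(data_bits, divisor):
    """Binary (mod-2) division: returns the CRC remainder as a bit string."""
    r = len(divisor) - 1
    # append r zero bits to the data before dividing
    dividend = list(data_bits + '0' * r)
    for i in range(len(data_bits)):
        if dividend[i] == '1':                # XOR the divisor in at this position
            for j in range(len(divisor)):
                dividend[i + j] = str(int(dividend[i + j]) ^ int(divisor[j]))
    return ''.join(dividend[-r:])

def make_codeword(data_bits, divisor):
    """Sender side: codeword = data bits followed by the CRC remainder."""
    return data_bits + crc_remainder(data_bits, divisor)

def verify_codeword(codeword, divisor):
    """Receiver side: remainder of the whole codeword must be all zeros."""
    r = len(divisor) - 1
    dividend = list(codeword)
    for i in range(len(codeword) - r):
        if dividend[i] == '1':
            for j in range(len(divisor)):
                dividend[i + j] = str(int(dividend[i + j]) ^ int(divisor[j]))
    return set(dividend[-r:]) == {'0'}
```

With data `100100` and divisor `1101` (the polynomial x^3 + x^2 + 1), the remainder is `001`, the codeword is `100100001`, and any single flipped bit in transit makes the receiver's remainder nonzero.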
Error Correction
In the digital world, error correction can be done in two ways:
Backward Error Correction When the receiver detects an error in the data
received, it requests back the sender to retransmit the data unit.
Forward Error Correction When the receiver detects some error in the data
received, it executes error-correcting code, which helps it to auto-recover and to
correct some kinds of errors.
The first one, Backward Error Correction, is simple and can only be efficiently used
where retransmitting is not expensive. For example, fiber optics. But in case of wireless
transmission retransmitting may cost too much. In the latter case, Forward Error
Correction is used.
To correct an error in the data frame, the receiver must know exactly which bit in the
frame is corrupted. To locate the bit in error, redundant bits are used as parity bits. For
example, if we take ASCII words (7 data bits), there are 8 kinds of information we need:
seven states to tell us which bit is in error and one more to tell us that there is no error.
For m data bits, r redundant bits are used; r bits can provide 2^r combinations of
information. In an (m + r)-bit codeword, there is a possibility that the r bits themselves
may get corrupted. So the r bits must be able to indicate all m + r bit locations plus the
no-error state, i.e. m + r + 1 states:
2^r >= m + r + 1
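The inequality can be solved for the smallest r by a simple search; for 7-bit ASCII data it gives r = 4:

```python
def redundant_bits(m):
    """Smallest r satisfying 2^r >= m + r + 1 (the Hamming condition)."""
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r
```

`redundant_bits(7)` returns 4, so a 7-bit ASCII word needs 4 redundant bits, giving an 11-bit codeword.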
Data-link Control & Protocols
Data-link layer is responsible for implementation of point-to-point flow and error control
mechanism.
Flow Control
When a data frame (Layer-2 data) is sent from one host to another over a single
medium, the sender and receiver should work at the same speed; that is, the sender
sends at a speed at which the receiver can process and accept the data. What if the
speeds (hardware/software) of the sender and receiver differ? If the sender sends too
fast, the receiver may be overloaded (swamped) and data may be lost.
Two types of mechanisms can be deployed to control the flow:
Stop and Wait
This flow control mechanism forces the sender after transmitting a data frame to
stop and wait until the acknowledgement of the data-frame sent is received.
Sliding Window
In this flow control mechanism, both sender and receiver agree on the number of
data-frames after which the acknowledgement should be sent. As we learnt, stop
and wait flow control mechanism wastes resources, this protocol tries to make
use of underlying resources as much as possible.
Error Control
When a data-frame is transmitted, there is a probability that it may be lost in transit or
received corrupted. In both cases, the receiver does not receive the correct data-frame
and the sender knows nothing about the loss. In such cases, both sender and receiver
are equipped with protocols that help them detect transit errors such as loss of a
data-frame. Then either the sender retransmits the data-frame or the receiver requests
that the previous data-frame be resent.
Requirements for error control mechanism:
Error detection - The sender and receiver, either or both, must ascertain
that there is some error in transit.
Positive ACK - When the receiver receives a correct frame, it should
acknowledge it.
Negative ACK - When the receiver receives a damaged frame or a duplicate
frame, it sends a NACK back to the sender and the sender must retransmit the
correct frame.
Retransmission: The sender maintains a clock and sets a timeout period. If an
acknowledgement of a previously transmitted data-frame does not arrive before
the timeout, the sender retransmits the frame, assuming that the frame or its
acknowledgement was lost in transit.
There are three types of techniques available which Data-link layer may deploy to
control the errors by Automatic Repeat Requests (ARQ):
Stop-and-wait ARQ
The following transitions may occur in Stop-and-Wait ARQ:
o The sender maintains a timeout counter.
o When a frame is sent, the sender starts the timeout counter.
o If acknowledgement of frame comes in time, the sender transmits the next
frame in queue.
o If acknowledgement does not come in time, the sender assumes that either
the frame or its acknowledgement is lost in transit. Sender retransmits the
frame and starts the timeout counter.
o If a negative acknowledgement is received, the sender retransmits the
frame.
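The transitions above can be sketched as a toy simulation. The loss probability and seed below are arbitrary illustration values, and a lost frame and a lost ACK are treated identically: both simply trigger a timeout and a retransmission.

```python
import random

def stop_and_wait(frames, loss_prob=0.3, seed=42):
    """Simulate Stop-and-Wait ARQ over a lossy link: each frame is
    retransmitted until its acknowledgement arrives in time.
    Returns (delivered frames, total transmissions)."""
    rng = random.Random(seed)
    transmissions = 0
    delivered = []
    for frame in frames:
        while True:
            transmissions += 1
            if rng.random() < loss_prob:   # frame or its ACK lost: timeout fires
                continue                   # retransmit the same frame
            delivered.append(frame)        # ACK arrived in time: next frame
            break
    return delivered, transmissions
```

Every frame is eventually delivered in order, at the cost of extra transmissions whenever the (simulated) timeout fires.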
Go-Back-N ARQ
The Stop-and-Wait ARQ mechanism does not utilize resources to their best. Until
the acknowledgement is received, the sender sits idle and does nothing. In the
Go-Back-N ARQ method, both sender and receiver maintain a window.
The sending-window size enables the sender to send multiple frames without
receiving the acknowledgement of the previous ones. The receiving-window
enables the receiver to receive multiple frames and acknowledge them. The
receiver keeps track of incoming frame’s sequence number.
When the sender has sent all the frames in the window, it checks up to what
sequence number it has received positive acknowledgements. If all frames are
positively acknowledged, the sender sends the next set of frames. If the sender
receives a NACK, or receives no ACK for a particular frame, it goes back and
retransmits that frame and all the frames sent after it.
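The go-back behaviour can be traced with a small sketch. It is deliberately simplified: sending pauses at the first lost frame in a window, the retransmission is assumed to succeed, and plain frame numbers stand in for real sequence numbers.

```python
def go_back_n(num_frames, window, lost):
    """Trace Go-Back-N: when frame f is lost (once), the sender goes back
    to f and resends it and everything after it. Returns the send order."""
    sent, base, lost = [], 0, set(lost)
    while base < num_frames:
        end = min(base + window, num_frames)   # send up to `window` frames
        error_at = None
        for f in range(base, end):
            sent.append(f)
            if f in lost:          # receiver NACKs / sender times out on f
                error_at = f
                lost.discard(f)    # assume the retransmission will succeed
                break
        base = error_at if error_at is not None else end
    return sent
```

For example, with 6 frames, a window of 3, and frame 2 lost once, the send order is 0, 1, 2, then 2 again followed by 3, 4, 5.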
Selective Repeat ARQ
In Go-Back-N ARQ, it is assumed that the receiver has no buffer space beyond
its window size and has to process each frame as it comes. This forces the
sender to retransmit all the frames which are not acknowledged.
In Selective-Repeat ARQ, the receiver, while keeping track of sequence numbers,
buffers the frames in memory and sends a NACK only for the frame which is
missing or damaged.
The sender, in this case, retransmits only the frame for which the NACK is
received.
Transfer Modes
HDLC supports two types of transfer modes, normal response mode and asynchronous
balanced mode.
Normal Response Mode (NRM) − Here there are two types of stations: a
primary station that sends commands and secondary stations that respond to
received commands. It is used for both point-to-point and multipoint
communications.
Asynchronous Balanced Mode (ABM) − Here the configuration is balanced:
each station is a combined station that can both send commands and respond
to them. It is used only for point-to-point communication.
PPP Protocol
The PPP stands for Point-to-Point protocol. It is the most commonly used
protocol for point-to-point access. Suppose the user wants to access the
internet from the home, the PPP protocol will be used.
It is a data link layer protocol that resides at layer 2 of the OSI model. It
is used to encapsulate layer-3 protocols and all the information in the
payload so that it can be transmitted across serial links. The PPP protocol
can be used on synchronous links like ISDN as well as asynchronous links
like dial-up. It is mainly used for communication between two devices.
It can be used over many types of physical networks such as serial cable,
phone line, trunk line, cellular telephone, and fiber optic links such as
SONET. As a data link layer protocol it identifies where a transmission
starts and ends, so ISPs (Internet Service Providers) use the PPP protocol
to provide dial-up access to the internet.
o Flag: The flag field is used to indicate the start and end of the frame.
The flag field is a 1-byte field that appears at the beginning and the
ending of the frame. The pattern of the flag is similar to the bit pattern
in HDLC, i.e., 01111110.
o Address: It is a 1-byte field that contains the constant value which is
11111111. These 8 ones represent a broadcast message.
o Control: It is a 1-byte field which is set to the constant value, i.e.,
11000000. It is not a required field, as PPP does not support flow
control and has only a very limited error control mechanism. The
control field is mandatory in protocols that support flow and error
control mechanisms.
o Protocol: It is a 1- or 2-byte field that defines what is to be carried in
the data field. The data can be user data or other information.
o Payload: The payload field carries either user data or other
information. The maximum length of the payload field is 1500 bytes.
o Checksum: It is a 16-bit field which is generally used for error
detection.
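The frame layout above can be sketched as a toy frame assembler in Python. The field values (flag 01111110, address 11111111, control 11000000) are taken from the notes above, and the 16-bit checksum here is a placeholder sum, not PPP's real CRC-based frame check sequence:

```python
FLAG = 0b01111110       # frame delimiter, as given above (0x7E)
ADDRESS = 0b11111111    # broadcast address
CONTROL = 0b11000000    # constant control value from these notes

def build_ppp_frame(protocol, payload):
    """Assemble the byte layout described above.
    The checksum is an illustrative 16-bit sum, not the real PPP FCS."""
    assert len(payload) <= 1500, "payload field is limited to 1500 bytes"
    body = bytes([ADDRESS, CONTROL]) + protocol.to_bytes(2, 'big') + payload
    checksum = sum(body) & 0xFFFF              # placeholder only
    return bytes([FLAG]) + body + checksum.to_bytes(2, 'big') + bytes([FLAG])
```

For example, `build_ppp_frame(0x0021, b'hi')` produces a 10-byte frame that begins and ends with the flag byte 0x7E.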
o Dead: Dead is a transition phase which means that the link is not used
or there is no active carrier at the physical layer.
o Establish: If one of the nodes starts working, the connection goes to
the establish phase. In short, when a node starts communication or a
carrier is detected, the link moves from the dead phase to the
establish phase.
o Authenticate: It is an optional phase, which means that the
communication can also move to the authenticate phase. The link
moves from the establish phase to the authenticate phase only when
both communicating nodes agree to make the communication
authenticated.
o Network: Once the authentication is successful, the network is
established, i.e., the phase is network. In this phase, the negotiation
of network layer protocols takes place.
o Open: After the establishment of the network phase, the link moves
to the open phase. The open phase means that the exchange of data
takes place; in other words, the link reaches the open phase after the
configuration of the network layer.
o Terminate: When all the work is done then the connection gets
terminated, and it moves to the terminate phase.
On reaching the terminate phase, the link moves to the dead phase, which
indicates that the carrier that was earlier established has been dropped.
There are two more possibilities that can exist in the transition
phase:
o The link moves from the authenticate phase to the terminate phase
when the authentication fails.
o The link can also move from the establish phase to the dead state
when the carrier fails.
PPP Stack
In the PPP stack, there are three sets of protocols:
o Link Control Protocol (LCP)
o Authentication protocols
o Network Control Protocol (NCP)
The role of LCP is to establish, maintain, configure, and terminate the link. It
also provides a negotiation mechanism.
MAC Addresses
MAC address or media access control address is a unique identifier allotted to a
network interface controller (NIC) of a device. It is used as a network address for data
transmission within a network segment like Ethernet, Wi-Fi, and Bluetooth.
A MAC address is assigned to a network adapter at the time of manufacturing; it is
hardwired or hard-coded into the network interface card (NIC). A MAC address
consists of six groups of two hexadecimal digits, separated by hyphens, colons, or no
separator. An example of a MAC address is 00:0A:89:5B:F0:11.
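Since the same MAC address can appear with hyphens, colons, or no separator at all, a small normalizer is a handy illustration of the format. This is an illustrative helper, not part of any standard API:

```python
import re

def normalize_mac(mac):
    """Accept colon-, hyphen-, or unseparated MAC forms;
    return the canonical AA:BB:CC:DD:EE:FF form."""
    digits = re.sub(r'[^0-9A-Fa-f]', '', mac)   # keep only hex digits
    if len(digits) != 12:
        raise ValueError("a MAC address has 12 hexadecimal digits")
    return ':'.join(digits[i:i + 2] for i in range(0, 12, 2)).upper()
```

Both `'00-0A-89-5B-F0-11'` and `'000a895bf011'` normalize to `'00:0A:89:5B:F0:11'`.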
What is channel allocation in computer
network?
When more than one user desires to access a shared network channel, an algorithm
is deployed to allocate the channel among the competing users. The network channel
may be a single cable or optical fiber connecting multiple nodes, or a portion of the
wireless spectrum. Channel allocation algorithms allocate channels and bandwidth to
the users, who may be base stations, access points or terminal equipment.
Working Principle
Suppose that there are N competing users. Here, the total bandwidth is divided into N
discrete channels using frequency division multiplexing (FDM). In most cases, the size
of the channels is equal. Each of these channels is assigned to one user.
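The static split into N equal channels can be sketched numerically; the bandwidth figure in the example is arbitrary:

```python
def static_fdm(total_bandwidth_hz, n_users):
    """Static FDM: split the band into n equal channels, one per user.
    Returns a list of (low, high) frequency offsets for each channel."""
    channel = total_bandwidth_hz / n_users
    return [(i * channel, (i + 1) * channel) for i in range(n_users)]
```

For instance, dividing a 1 MHz band among 4 users gives each a fixed 250 kHz channel, regardless of whether that user has anything to send.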
Advantages
Static channel allocation scheme is particularly suitable for situations where there are a
small number of fixed users having a steady flow of uniform network traffic. The
allocation technique is simple and so the additional overhead of a complex algorithm
need not be incurred. Besides, there is no interference between the users since each
user is assigned a fixed channel which is not shared with others.
Disadvantages
Most real-life network situations have a variable number of users, usually large in
number, with bursty traffic. If the value of N is very large, the bandwidth available to
each user will be very small. This reduces throughput if a user needs to send a large
volume of data once in a while.
It is very unlikely that all the users will be communicating all the time. However, since
all of them are allocated fixed bandwidths, the bandwidth allocated to non-
communicating users lies wasted.
If the number of users is more than N, then some of them will be denied service, even if
there are unused frequencies.
Working Principle
In dynamic channel allocation schemes, frequency channels are not permanently
allotted to any user. Channels are assigned to the user as needed depending upon the
network environment. The available channels are kept in a queue or a spool. The
allocation of the channels is temporary. Distribution of the channels to the contending
users is based upon distribution of the users in the network and offered traffic load. The
allocation is done so that transmission interference is minimized.
Advantages
Dynamic channel allocation schemes allot channels as needed. This results in
optimum utilization of network resources. There are fewer chances of denial of service
and call blocking in the case of voice transmission. These schemes adjust bandwidth
allotment according to traffic volume, and so are particularly suitable for bursty traffic.
Disadvantages
Dynamic channel allocation schemes increase the computational as well as storage
load on the system.
For example, suppose there is a classroom full of students. When a teacher asks a question,
all the students (small channels) in the class start answering at the same time (transferring
the data simultaneously). Because all the students respond at once, the answers overlap or
are lost. It is therefore the responsibility of the teacher (the multiple access protocol) to
manage the students and make them answer one at a time.
Following are the types of multiple access protocols, which are subdivided into
different processes as follows:
o Aloha
o CSMA
o CSMA/CD
o CSMA/CA
Aloha is designed for wireless LANs (Local Area Networks) but can also be used
on any shared medium to transmit data. Using this method, any station can
transmit data across the network whenever a data frame is available for
transmission.
Aloha Rules
Pure Aloha
Pure Aloha is used whenever data is available for sending over a channel at a
station. In pure Aloha, each station transmits data on the channel without
checking whether the channel is idle or not, so collisions may occur and the
data frame can be lost. When a station transmits a data frame on the channel,
it waits for the receiver's acknowledgment. If the acknowledgment does not
arrive within a specified time, the station assumes the frame has been lost or
destroyed, waits for a random amount of time, called the backoff time (Tb),
and then retransmits the frame. It keeps retransmitting until the data is
successfully delivered to the receiver.
As we can see in the figure above, there are four stations accessing a shared
channel and transmitting data frames. Some frames collide because most stations
send their frames at the same time; only two frames, frame 1.1 and frame 2.2, are
successfully delivered to the receiver, while the other frames are lost or destroyed.
Whenever two frames occupy the shared channel simultaneously, a collision
occurs and both suffer damage: if the first bit of a new frame enters the channel
before the last bit of another frame has left it, both frames are destroyed and both
stations must retransmit their data frames.
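The efficiency of pure Aloha (and of slotted Aloha, discussed next) can be checked numerically using the standard textbook throughput formulas, which are not stated in these notes but follow from the collision analysis above: S = G·e^(−2G) for pure Aloha and S = G·e^(−G) for slotted Aloha, where G is the average number of frames attempted per frame time.

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G); maximum ~18.4% at G = 0.5."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """S = G * e^(-G); maximum ~36.8% at G = 1."""
    return G * math.exp(-G)
```

Evaluating at the optimal loads confirms why slotted Aloha roughly doubles the channel utilization of pure Aloha.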
Slotted Aloha
Slotted Aloha improves on pure Aloha by dividing time into fixed-length slots. A
station may begin transmitting only at the beginning of a slot, which halves the
vulnerable period and roughly doubles the maximum throughput compared with
pure Aloha.
1-Persistent: In the 1-Persistent mode of CSMA, each node first senses the
shared channel; if the channel is idle, it immediately sends the data.
Otherwise, it keeps monitoring the channel and transmits the frame
unconditionally (with probability 1) as soon as the channel becomes idle.
CSMA/ CA
Following are the methods used in the CSMA/ CA to avoid the collision:
Interframe space: In this method, the station waits for the channel to
become idle, and when it finds the channel idle, it does not send the data
immediately. Instead, it waits for a period of time called the Interframe
Space (IFS). The IFS duration is also used to define the priority of the
station.
C. Channelization Protocols
Channelization is a protocol family that allows the total usable bandwidth of a
shared channel to be shared across multiple stations based on time, frequency
and codes. All stations can access the channel at the same time to send their
data frames.
Following are the various methods of dividing the channel based on time,
frequency and codes:
1. FDMA (Frequency Division Multiple Access)
2. TDMA (Time Division Multiple Access)
3. CDMA (Code Division Multiple Access)
FDMA
TDMA
CDMA
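CDMA's idea, that every station transmits at the same time over the full band and stations are separated only by their codes, can be illustrated with orthogonal chip sequences. This is an idealized sketch (no noise, perfect chip synchronization); the four codes below are standard 4-chip Walsh codes, not anything specified in these notes:

```python
# Standard 4-chip Walsh codes (mutually orthogonal; chips are +1/-1)
WALSH = [
    [+1, +1, +1, +1],
    [+1, -1, +1, -1],
    [+1, +1, -1, -1],
    [+1, -1, -1, +1],
]

def cdma_channel(bits, codes):
    """Every station spreads its data bit (+1/-1) with its own code;
    the shared channel carries the chip-wise sum of all stations."""
    return [sum(bit * code[i] for bit, code in zip(bits, codes))
            for i in range(len(codes[0]))]

def cdma_decode(channel, code):
    """Correlate the channel signal with one station's code;
    the sign of the inner product recovers that station's bit."""
    dot = sum(s * c for s, c in zip(channel, code))
    return 1 if dot > 0 else -1
```

With bits [1, -1, 1, -1] sent by four stations simultaneously, every receiver recovers its own station's bit exactly, even though all four signals were superposed on the channel.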
Wireless Access Points (WAP) − WAPs, or simply access points (AP), are
generally wireless routers that form the base stations of the network.
Clients − Clients are workstations, computers, laptops, printers, smartphones,
etc.
Each station has a wireless network interface controller.
2) Basic Service Set (BSS) −A basic service set is a group of stations communicating
at physical layer level. BSS can be of two categories depending upon mode of
operation:
Infrastructure BSS − Here, the devices communicate with other devices through
access points.
Independent BSS − Here, the devices communicate in peer-to-peer basis in an
ad hoc manner.
3) Extended Service Set (ESS) − It is a set of all connected BSS.
4) Distribution System (DS) − It connects access points in ESS.
Advantages of WLANs
They provide clutter free homes, offices and other networked places.
The LANs are scalable in nature, i.e. devices may be added or removed from the
network with greater ease than in wired LANs.
The system is portable within the network coverage and access to the network is
not bounded by the length of the cables.
Installation and setup is much easier than wired counterparts.
The equipment and setup costs are reduced.
Disadvantages of WLANs
Since radio waves are used for communications, the signals are noisier with more
interference from nearby systems.
Greater care is needed for encrypting information. Also, they are more prone to
errors. So, they require greater bandwidth than the wired LANs.
WLANs are slower than wired LANs.
1. Physical Star Layout: In a Token Ring star topology, all the devices
are connected to a central hub or Multistation Access Unit (MAU). This
central hub is also known as the focal point of the star. Each device,
such as computers or network printers, has a dedicated connection to
the hub, and these connections radiate out from the hub like the
spokes of a wheel.
2. Logical Ring Structure: Despite the physical star layout, the network
maintains a logical ring structure created internally within the central
hub. This means that data packets and the token circulate within a ring
inside the hub, just as they would in a traditional Token Ring network
with a physical ring topology.
3. Token Passing: Token passing still controls access to the network.
When a device connected to the hub needs to transmit data, it waits
for the token to arrive at the hub. Once it has the token, it can transmit
data onto the logical ring within the hub. The token then continues to
circulate until another device needs to transmit.
1. Data collisions are less likely because each node sends out a data packet
after receiving the token.
2. Under heavy traffic, token passing makes ring topology perform better
than bus topology.
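The token-passing rule above (only the current token holder transmits, and the token circulates around the logical ring) can be sketched as:

```python
from collections import deque

def token_ring(stations, rounds=2):
    """Circulate the token around the ring; record the order in which
    stations hold the token (and therefore may transmit)."""
    order = []
    ring = deque(stations)
    for _ in range(rounds * len(stations)):
        order.append(ring[0])   # current token holder may transmit
        ring.rotate(-1)         # token moves on to the next station
    return order
```

Because exactly one station holds the token at any instant, no two stations ever transmit at once, which is why collisions are avoided by construction.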
Characteristics:-
1. Bus Topology: Token Bus uses a bus topology, where all devices on the
network are connected to a single cable or “bus”.
2. Token Passing: A “token” is passed around the network, which gives
permission for a device to transmit data.
3. Priority Levels: Token Bus uses three priority levels to prioritize data
transmission. The highest priority level is reserved for control messages
and the lowest for data transmission.
4. Collision Detection: Token Bus employs a collision detection mechanism
to ensure that two devices do not transmit data at the same time.
5. Maximum Cable Length: The maximum cable length for Token Bus is
limited to 1000 meters.
6. Data Transmission Rates: Token Bus can transmit data at speeds of up to
10 Mbps.
7. Limited Network Size: Token Bus is typically used for small to medium-
sized networks with up to 72 devices.
8. No Centralized Control: Token Bus does not require a central controller to
manage network access, which can make it more flexible and easier to
implement.
9. Vulnerable to Network Failure: If the token is lost or a device fails, the
network can become congested or fail altogether.
10. Security: Token Bus has limited security features, and unauthorized
devices can potentially gain access to the network.
Fiber Distributed Data Interface
(FDDI)
Fiber Distributed Data Interface (FDDI) is a set of ANSI and ISO standards for
transmission of data in local area network (LAN) over fiber optic cables. It is
applicable in large LANs that can extend up to 200 kilometers in diameter.
Features
FDDI uses optical fiber as its physical medium.
It operates in the physical layer and the medium access control (MAC)
sublayer of the Open Systems Interconnection (OSI) network model.
It provides high data rate of 100 Mbps and can support thousands of
users.
It is used in LANs up to 200 kilometers for long distance voice and
multimedia communication.
It uses ring based token passing mechanism and is derived from IEEE
802.4 token bus standard.
It contains two token rings, a primary ring for data and token transmission
and a secondary ring that provides backup if the primary ring fails.
FDDI technology can also be used as a backbone for a wide area
network (WAN).
5. Routers – A router is a device like a switch that routes data packets based
on their IP addresses. The router is mainly a Network Layer device. Routers
normally connect LANs and WANs and have a dynamically updating routing
table, based on which they make decisions on routing the data packets. The
router divides the broadcast domains of the hosts connected through it.