Unit No. 3

The data link layer provides three main types of services: (1) unacknowledged connectionless service, without error or flow control; (2) acknowledged connectionless service, using stop-and-wait protocols for acknowledgments; and (3) acknowledged connection-oriented service, which establishes a connection before transmitting frames in order and sequence. The layer handles framing by encapsulating network layer packets into frames with a header, payload, and trailer. Error control involves retransmitting lost frames and detecting duplicate frames, while flow control uses feedback-based mechanisms such as stop-and-wait or sliding windows.


Services provided by Data Link Layer

The Data Link Layer is the protocol layer that handles and controls the transmission of data between source and destination machines. It is responsible for the exchange of frames among nodes or machines over the physical network medium. This layer is the one closest to the Physical Layer (hardware).

The Data Link Layer is the second layer of the seven-layer Open Systems Interconnection (OSI) reference model of computer networking and lies just above the Physical Layer.
This layer provides data reliability and supplies tools to establish, maintain, and release data link connections between network nodes. It is responsible for receiving bits from the Physical Layer and grouping them into data link frames so that they can be transmitted further, and for handling errors that may arise during the transmission of bits.

Service Provided to Network Layer :

The essential function of the Data Link Layer is to provide an interface to the Network Layer, the third layer of the seven-layer OSI reference model, which is present just above the Data Link Layer.
The main aim of the Data Link Layer is to carry the data frames it receives to the destination machine, so that these frames can be handed over to the network layer of the destination machine, where they are addressed and routed.
This process takes two forms:

1. Actual Communication :
In actual communication, a physical medium is present through which the Data Link Layer transmits data frames. The actual path is Network Layer -> Data Link Layer -> Physical Layer on the sending machine, then across the physical medium, and then Physical Layer -> Data Link Layer -> Network Layer on the receiving machine.
2. Virtual Communication :
In virtual communication, no physical medium is present between the two Data Link Layers. It can only be visualized that the two Data Link Layers are communicating with each other using a data link protocol.
Types of Services provided by Data Link Layer :
The Data Link Layer generally provides three types of services, as given below :
1. Unacknowledged Connectionless Service
2. Acknowledged Connectionless Service
3. Acknowledged Connection-Oriented Service
1. Unacknowledged Connectionless Service :
Unacknowledged connectionless service provides datagram-style delivery without any error or flow control. In this service, the source machine transmits independent frames to the destination machine without requiring the destination machine to acknowledge them.
This service is called connectionless because no connection is established between the source and destination machines before the data transfer, nor released after it.
If a frame is lost due to noise, no attempt is made to detect the loss or to recover from it; there is simply no error or flow control. Ethernet is an example.
2. Acknowledged Connectionless Service :
This service provides acknowledged connectionless delivery: each packet delivery is acknowledged, with the help of a stop-and-wait protocol.
Each frame transmitted by the Data Link Layer is acknowledged individually, so the sender knows whether each transmitted data frame was received safely. No logical connection is established.
A timer is used: if the acknowledgment for a frame does not arrive within a particular time period, the sender retransmits the data frame.
This service is more reliable than unacknowledged connectionless service, and it is useful over unreliable channels, such as wireless systems, Wi-Fi services, etc.
3. Acknowledged Connection-Oriented Service :
In this type of service, a connection is first established between the sender and receiver (source and destination) before any data is transferred.
Data is then transmitted over this established connection. Each transmitted frame is given an individual sequence number, to confirm and guarantee that each frame is received exactly once and in the correct order.

Data Link Layer Design Issues


The data link layer in the OSI (Open Systems Interconnection) model lies between the physical layer and the network layer. This layer converts the raw transmission facility provided by the physical layer into a reliable, error-free link.
The main functions and design issues of this layer are
 Providing services to the network layer
 Framing
 Error Control
 Flow Control

Services to the Network Layer


In the OSI model, each layer uses the services of the layer below it and provides services to the layer above it. The data link layer uses the services offered by the physical layer. The primary function of this layer is to provide a well-defined service interface to the network layer above it.

The types of services provided can be of three types −


 Unacknowledged connectionless service
 Acknowledged connectionless service
 Acknowledged connection - oriented service
Framing
The data link layer encapsulates each data packet from the network layer into frames
that are then transmitted.
A frame has three parts, namely −
 Frame Header
 Payload field that contains the data packet from network layer
 Trailer

Error Control
The data link layer ensures error free link for data transmission. The issues it caters to
with respect to error control are −
 Dealing with transmission errors
 Sending acknowledgement frames in reliable connections
 Retransmitting lost frames
 Identifying duplicate frames and deleting them
 Controlling access to shared channels in case of broadcasting

Flow Control
The data link layer regulates flow control so that a fast sender does not overwhelm a slow receiver. When the sender transmits frames at a very high rate, a slow receiver may not be able to handle them, and frames will be lost even if the transmission itself is error-free.
The two common approaches for flow control are −

 Feedback based flow control


 Rate based flow control

Flow control in Data Link Layer


Flow control is a technique that allows two stations working at different speeds to communicate with each other. It is a set of measures that regulates the amount of data a sender transmits, so that a fast sender does not overwhelm a slow receiver. In the data link layer, flow control restricts the number of frames the sender can send before it must wait for an acknowledgment from the receiver.
Approaches of Flow Control
Flow control can be broadly classified into two categories −
- Feedback-based Flow Control: the sender sends frames only after it has received acknowledgments from the receiver. This is used in the data link layer.
- Rate-based Flow Control: these protocols have built-in mechanisms to restrict the rate of data transmission without requiring acknowledgments from the receiver. This is used in the network layer and the transport layer.

Flow Control Techniques in Data Link Layer


The data link layer uses feedback-based flow control mechanisms. There are two main
techniques −

Stop and Wait


This protocol involves the following transitions −
- The sender sends a frame and waits for an acknowledgment.
- Once the receiver receives the frame, it sends an acknowledgment frame back to the sender.
- On receiving the acknowledgment frame, the sender knows that the receiver is ready to accept the next frame, so it sends the next frame in the queue.
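The stop-and-wait exchange above can be sketched as a toy simulation. The loss probability, the seed, and the 1-bit sequence numbering are illustrative assumptions, not part of any particular real protocol stack:

```python
import random

def stop_and_wait(frames, loss_prob=0.3, seed=7):
    """Toy stop-and-wait sender/receiver over a lossy channel.

    The sender transmits one frame and blocks until its ACK arrives;
    a lost frame or lost ACK acts as a timeout and forces a resend.
    A 1-bit sequence number lets the receiver discard duplicates."""
    rng = random.Random(seed)
    delivered = []
    expected = 0                       # receiver's expected sequence bit
    attempts = 0
    for i, frame in enumerate(frames):
        seq = i % 2                    # 1-bit sequence number
        while True:                    # resend until the ACK gets through
            attempts += 1
            if rng.random() < loss_prob:
                continue               # frame lost -> timeout -> resend
            if seq == expected:        # new frame: accept it, flip the bit
                delivered.append(frame)
                expected ^= 1
            # (a duplicate, seq != expected, is silently discarded)
            if rng.random() >= loss_prob:
                break                  # ACK arrived; move to the next frame
            # ACK lost -> timeout -> resend the same frame
    return delivered, attempts
```

Every frame is eventually delivered exactly once, at the cost of at least one channel round trip per frame; this is what makes stop-and-wait simple but slow.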

Sliding Window
This protocol improves on the efficiency of the stop and wait protocol by allowing multiple frames to be transmitted before an acknowledgment is received.
The working principle of this protocol can be described as follows −
- Both the sender and the receiver have finite-sized buffers called windows. The sender and the receiver agree upon the number of frames to be sent based on the buffer size.
- The sender sends multiple frames in a sequence without waiting for acknowledgment. When its sending window is full, it waits for acknowledgments. On receiving acknowledgments, it advances the window and transmits the next frames, according to the number of acknowledgments received.
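A minimal sender-side sketch of this windowing logic follows; the cumulative, in-order acknowledgment behaviour is a simplifying assumption:

```python
def sliding_window_send(frames, window_size):
    """Sketch of the sender side of a sliding-window protocol:
    up to window_size frames may be outstanding (sent but not yet
    acknowledged) at any moment."""
    base = 0                 # oldest unacknowledged frame
    next_seq = 0             # next frame to transmit
    events = []
    while base < len(frames):
        # transmit until the window is full or all frames are sent
        while next_seq < len(frames) and next_seq < base + window_size:
            events.append(f"send {next_seq}")
            next_seq += 1
        # an ACK for the oldest outstanding frame slides the window
        events.append(f"ack {base}")
        base += 1
    return events
```

With a window of 2 and three frames this yields send 0, send 1, ack 0, send 2, ack 1, ack 2: two frames are in flight before the first acknowledgment arrives, which stop-and-wait cannot do.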

Framing in Data Link Layer


Frames are the units of digital transmission, particularly in computer networks and telecommunications. Frames are comparable to the packets of energy called photons in the case of light energy. Frames are also used continuously in the Time Division Multiplexing process.
In a point-to-point connection between two computers or devices, data is transmitted over a wire as a stream of bits. These bits, however, must be framed into discernible blocks of information. Framing is a function of the data link layer: it provides a way for a sender to transmit a set of bits that are meaningful to the receiver. Ethernet, token ring, frame relay, and other data link layer technologies each have their own frame structure. Frames have headers that contain information such as error-checking codes.
The data link layer extracts the message from the sender and delivers it to the receiver by adding the sender's and receiver's addresses. The advantage of using frames is that data is broken up into recoverable chunks that can easily be checked for corruption.

The process of dividing the data into frames and reassembling it is transparent to the user and is handled by the data link layer.
Framing is an important aspect of data link layer protocol design because it allows the transmission of data to be organized and controlled, ensuring that the data is delivered accurately and efficiently.
Problems in Framing
- Detecting the start of a frame: When a frame is transmitted, every station must be able to detect it. A station detects a frame by watching for a special sequence of bits that marks the beginning of the frame, the SFD (Starting Frame Delimiter).
- How a station detects a frame: Every station listens on the link for the SFD pattern using a sequential circuit. If the SFD is detected, the sequential circuit alerts the station, and the station checks the destination address to accept or reject the frame.
- Detecting the end of a frame: knowing when to stop reading the frame.
- Handling errors: Framing errors may occur due to noise or other transmission errors, which can cause a station to misinterpret the frame. Therefore, error detection and correction mechanisms, such as the cyclic redundancy check (CRC), are used to ensure the integrity of the frame.
- Framing overhead: Every frame has a header and a trailer containing control information such as source and destination addresses, an error detection code, and other protocol-related information. This overhead reduces the bandwidth available for data transmission, especially for small frames.
- Framing incompatibility: Different networking devices and protocols may use different framing methods, which can lead to framing incompatibility issues. For example, if a device using one framing method sends data to a device using a different framing method, the receiving device may not be able to correctly interpret the frame.
- Framing synchronization: Stations must be synchronized with each other to avoid collisions and ensure reliable communication. Synchronization requires that all stations agree on the frame boundaries and timing, which can be challenging in complex networks with many devices and varying traffic loads.
- Framing efficiency: Framing should be designed to minimize the data overhead while maximizing the bandwidth available for data transmission. Inefficient framing methods lead to lower network performance and higher latency.
Types of framing
There are two types of framing: 
1. Fixed-size: The frame is of fixed size, so there is no need to mark frame boundaries; the length of the frame itself acts as the delimiter.
- Drawback: it suffers from internal fragmentation if the data size is less than the frame size.
- Solution: padding.
2. Variable size: Here the end of one frame and the beginning of the next must be marked to distinguish them. This can be done in two ways:
1. Length field – A length field in the frame indicates the length of the frame. Used in Ethernet (802.3). The problem is that the length field itself can sometimes get corrupted.
2. End Delimiter (ED) – A special pattern (the ED) indicates the end of the frame. Used in Token Ring. The problem is that the ED can also occur in the data. This can be solved by:
1. Character/Byte Stuffing: Used when frames consist of characters. If the data contains the ED, an extra byte is stuffed into the data to differentiate it from the ED.
Let ED = “$” –> if the data contains ‘$’ anywhere, it is escaped using the ‘\O’ sequence.
–> if the data contains ‘\O$’, it is sent as ‘\O\O\O$’ ($ is escaped using \O and \O is escaped using \O).

Disadvantage – It is a costly and largely obsolete method.
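The '$' / '\O' example above can be sketched in code; the two-character escape sequence is taken directly from the example, everything else (function name, defaults) is illustrative:

```python
def byte_stuff(data: str, ed: str = "$", esc: str = "\\O") -> str:
    """Escape every occurrence of the end delimiter, and of the
    escape sequence itself, so neither can be mistaken for a real ED."""
    out, i = [], 0
    while i < len(data):
        if data.startswith(esc, i):    # escape the escape sequence
            out.append(esc + esc)
            i += len(esc)
        elif data.startswith(ed, i):   # escape the delimiter
            out.append(esc + ed)
            i += len(ed)
        else:
            out.append(data[i])
            i += 1
    return "".join(out)

# '\O$' becomes '\O\O\O$', matching the example above
assert byte_stuff("\\O$") == "\\O\\O\\O$"
```

The receiver reverses the process: each escape sequence is stripped and the following token is taken literally, so only an unescaped ED ends the frame.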


2. Bit Stuffing: Let ED = 01111 and data = 01111.
–> The sender stuffs a bit to break the pattern, here appending a 0: data = 011101.
–> The receiver receives the frame.
–> If the received data contains 011101, the receiver removes the stuffed 0 and reads the data.

Examples: 
 If Data –> 011100011110 and ED –> 0111 then, find data after bit stuffing. 
--> 011010001101100 
 If Data –> 110001001 and ED –> 1000 then, find data after bit stuffing? 
--> 11001010011 
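These examples can be reproduced with a short routine. The simple one-bit fallback in the matcher below is sufficient for short delimiter prefixes such as 011 and 100 (an arbitrary pattern would need a full KMP-style matcher):

```python
def bit_stuff(data: str, ed: str) -> str:
    """Stuff a bit whenever the first len(ed)-1 bits of the end
    delimiter appear in the outgoing stream, breaking the pattern."""
    prefix = ed[:-1]                       # e.g. "011" for ED = "0111"
    stuff = "1" if ed[-1] == "0" else "0"  # the bit that breaks ED
    out, matched = [], 0                   # matched = prefix bits seen so far
    for bit in data:
        out.append(bit)
        if bit == prefix[matched]:
            matched += 1
        else:                              # simple one-bit fallback
            matched = 1 if bit == prefix[0] else 0
        if matched == len(prefix):         # ED is about to form: stuff
            out.append(stuff)
            matched = 1 if stuff == prefix[0] else 0
    return "".join(out)

assert bit_stuff("011100011110", "0111") == "011010001101100"
assert bit_stuff("110001001", "1000") == "11001010011"
```

The receiver runs the same matcher and simply deletes the bit that follows each occurrence of the stuffed prefix.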
Framing in the Data Link Layer also presents some challenges, which include:
Variable frame length: The length of frames can vary depending on the data
being transmitted, which can lead to inefficiencies in transmission. To address
this issue, protocols such as HDLC and PPP use a flag sequence to mark the
start and end of each frame.
Bit stuffing: Bit stuffing is a technique used to prevent data from being
interpreted as control characters by inserting extra bits into the data stream.
However, bit stuffing can lead to issues with synchronization and increase the
overhead of the transmission.
Synchronization: Synchronization is critical for ensuring that data frames are
transmitted and received correctly. However, synchronization can be
challenging, particularly in high-speed networks where frames are transmitted
rapidly.
Error detection: Data Link Layer protocols use various techniques to detect
errors in the transmitted data, such as checksums and CRCs. However, these
techniques are not foolproof and can miss some types of errors.
Efficiency: Efficient use of available bandwidth is critical for ensuring that data
is transmitted quickly and reliably. However, the overhead associated with
framing and error detection can reduce the overall efficiency of the
transmission.
What Is Cyclic Redundancy Check (CRC), and Its Role in Checking Errors
With the increase in data transactions over network channels, data errors have become common. Due to external or internal interference, data in transit can become corrupted or damaged, which can lead to the loss of sensitive information.

To determine whether data has been damaged, error detection methods are used; one of them, the Cyclic Redundancy Check (CRC), is discussed here.

What Is a Cyclic Redundancy Check (CRC)?

The CRC is a network method designed to detect errors in the data transmitted over the network. It works by performing a binary (modulo-2) division on the data at the sender's side and verifying the result of the same division at the receiver's side.

The name describes the method: Check refers to the data verification, Redundancy to the extra check bits appended to the data, and Cyclic to the cyclic codes on which the calculation is based.

Now that we know what CRC is, let's look at some terms related to the CRC method.

CRC Terms and Attributes

As discussed in the previous section, CRC is performed at both the sender and the receiver side: a CRC generator is applied at the sender and a CRC checker at the receiver.

CRC is an error-detection algorithm in the same family as checksums, using modulo-2 (XOR-based) division as its basic operation. The binary divisor used in the division is derived from the coefficients of a generator polynomial.

For example:

- x^2 + x + 1 (polynomial equation)
- Converting to binary format: the equation has a coefficient of 1 for x^0, a coefficient of 1 for x^1, and a coefficient of 1 for x^2.
- So the binary value will be [111].
- Similarly, for the equation x^2 + 1, the binary value will be [101]: there is no x^1 term, so that position is 0.

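The coefficient-to-binary conversion just described can be sketched in a few lines; the set-of-exponents input format is an illustrative choice:

```python
def poly_to_divisor(powers, degree):
    """Binary divisor string from the set of exponents whose
    coefficient is 1, e.g. x^2 + x + 1 -> {2, 1, 0} -> "111"."""
    return "".join("1" if p in powers else "0"
                   for p in range(degree, -1, -1))

assert poly_to_divisor({2, 1, 0}, 2) == "111"   # x^2 + x + 1
assert poly_to_divisor({2, 0}, 2) == "101"      # x^2 + 1
```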
Moving on, let’s look at the working steps of the CRC method.
Working of CRC Method

To understand the working of the CRC method, we will divide the steps into two parts:

Sender Side (CRC generator and modulo-2 division):

1. The first step is to append zeros to the data to be sent; the number of zeros is k − 1, where k is the number of bits of the divisor obtained from the polynomial equation.
2. Apply modulo-2 binary division to the extended data, using XOR, and obtain the remainder of the division.
3. The last step is to append the remainder to the end of the original data and share the result with the receiver.

Receiver Side (checking for errors in the received data):

To check for errors, perform the modulo-2 division again on the received codeword and inspect the remainder:

1. If the remainder is 0, the data received is correct, without any errors.
2. If the remainder is not 0, the data was corrupted during transmission.

Example – The data to be sent is [100100], and the polynomial equation is [x^3 + x^2 + 1].

Data bits – 100100

Divisor (k = 4 bits) – 1101 (using the given polynomial)

Appended zeros – k − 1 = 4 − 1 = 3

Dividend – 100100000

Sender side: dividing 100100000 by 1101 (modulo-2) leaves the remainder [001]. Appending the remainder to the data bits gives the new data to share with the receiver.

New data bits – [100100001]

Receiver side: dividing 100100001 by 1101 (modulo-2) gives the remainder [000], i.e., zero, which according to the CRC method means the data is error-free.
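The worked example can be verified with a few lines of modulo-2 (XOR) division; the function name and the string-based representation are illustrative choices:

```python
def mod2_div(dividend: str, divisor: str) -> str:
    """XOR-based (modulo-2) long division; returns the remainder,
    which is len(divisor) - 1 bits long."""
    buf = list(dividend)
    k = len(divisor)
    for i in range(len(buf) - k + 1):
        if buf[i] == "1":                       # divisor 'goes into' here
            for j in range(k):
                buf[i + j] = "0" if buf[i + j] == divisor[j] else "1"
    return "".join(buf[-(k - 1):])

# Sender: append k-1 zeros, divide, keep the remainder
rem = mod2_div("100100" + "000", "1101")        # -> "001"
codeword = "100100" + rem                        # -> "100100001"
# Receiver: a remainder of 000 means the codeword arrived intact
assert mod2_div(codeword, "1101") == "000"
```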

Conclusion

The Cyclic Redundancy Check is thus a data redundancy method for detecting errors in data at the receiving side and preventing corrupted data from being used.


Hamming Code in Computer Network


Hamming code is a set of error-correction codes that can be used to detect and correct errors that occur when data is moved or stored from the sender to the receiver. It is a technique developed by R. W. Hamming for error correction.
Redundant bits – Redundant bits are extra binary bits that are generated and added to the information-carrying bits of a data transfer to ensure that no bits were lost during the transfer. The number of redundant bits r for m data bits is the smallest r satisfying:
2^r ≥ m + r + 1
where r = number of redundant bits and m = number of data bits.
Suppose the number of data bits is 7. Then r = 4, since 2^4 = 16 ≥ 7 + 4 + 1 = 12, while 2^3 = 8 < 12. Thus, the number of redundant bits is 4.
Parity bits – A parity bit is a bit appended to a block of binary data to make the total number of 1's in the data either even or odd. Parity bits are used for error detection. There are two types of parity bits:
1. Even parity bit: For a given set of bits, the number of 1's is counted. If that count is odd, the parity bit is set to 1, making the total count of 1's even. If the total number of 1's is already even, the parity bit's value is 0.
2. Odd parity bit: For a given set of bits, the number of 1's is counted. If that count is even, the parity bit is set to 1, making the total count of 1's odd. If the total number of 1's is already odd, the parity bit's value is 0.
General Algorithm of Hamming code: Hamming code is simply the use of extra parity bits to allow the identification of an error.
1. Write the bit positions starting from 1 in binary form (1, 10, 11, 100, etc.).
2. All bit positions that are powers of 2 are marked as parity bits (1, 2, 4, 8, etc.).
3. All other bit positions are marked as data bits.
4. Each data bit is included in a unique set of parity bits, as determined by its bit position in binary form:
a. Parity bit 1 covers all bit positions whose binary representation includes a 1 in the least significant position (1, 3, 5, 7, 9, 11, etc.).
b. Parity bit 2 covers all bit positions whose binary representation includes a 1 in the second position from the least significant bit (2, 3, 6, 7, 10, 11, etc.).
c. Parity bit 4 covers all bit positions whose binary representation includes a 1 in the third position from the least significant bit (4–7, 12–15, 20–23, etc.).
d. Parity bit 8 covers all bit positions whose binary representation includes a 1 in the fourth position from the least significant bit (8–15, 24–31, 40–47, etc.).
e. In general, each parity bit covers all bits where the bitwise AND of the parity position and the bit position is non-zero.
5. Since we check for even parity, set a parity bit to 1 if the total number of ones in the positions it checks is odd.
6. Set a parity bit to 0 if the total number of ones in the positions it checks is even.
Determining the position of redundant bits – The redundant bits are placed at positions that correspond to powers of 2.

In the example above:
- The number of data bits = 7
- The number of redundant bits = 4
- The total number of bits = 11
- The redundant bits are placed at positions corresponding to powers of 2: 1, 2, 4, and 8

Suppose the data to be transmitted is 1011001. The data bits fill the remaining positions 3, 5, 6, 7, 9, 10 and 11, with the leftmost (most significant) data bit at the highest position.

Determining the parity bits:
- R1 is calculated using a parity check at all the bit positions whose binary representation includes a 1 in the least significant position. R1 covers bits 1, 3, 5, 7, 9, 11.
To find R1, we check for even parity: the total number of 1's in the bit positions covered by R1 is even, so R1 = 0.
- R2 is calculated at all the bit positions whose binary representation includes a 1 in the second position from the least significant bit. R2 covers bits 2, 3, 6, 7, 10, 11.
Checking for even parity: the total number of 1's in the bit positions covered by R2 is odd, so R2 = 1.
- R4 is calculated at all the bit positions whose binary representation includes a 1 in the third position from the least significant bit. R4 covers bits 4, 5, 6, 7.
Checking for even parity: the total number of 1's in the bit positions covered by R4 is odd, so R4 = 1.
- R8 is calculated at all the bit positions whose binary representation includes a 1 in the fourth position from the least significant bit. R8 covers bits 8, 9, 10, 11.
Checking for even parity: the total number of 1's in the bit positions covered by R8 is even, so R8 = 0.
Thus, the data transferred (reading from position 11 down to position 1) is 10101001110.

Error detection and correction: Suppose the 6th bit is changed from 0 to 1 during data transmission. Recomputing the four parity checks then gives the binary number 0110, whose decimal value is 6. Thus, bit 6 contains the error, and to correct it the 6th bit is changed back from 1 to 0.
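The placement, parity computation, and syndrome calculation above can be sketched as follows. The MSB-first string layout (leftmost data bit at the highest position) mirrors the worked example; the function names are illustrative:

```python
def hamming_encode(data: str) -> str:
    """Even-parity Hamming encoder; the leftmost data bit goes to the
    highest bit position, parity bits sit at the powers of 2."""
    m = len(data)
    r = 0
    while 2 ** r < m + r + 1:          # smallest r with 2^r >= m + r + 1
        r += 1
    n = m + r
    code = [0] * (n + 1)               # 1-indexed bit positions
    data_pos = [p for p in range(1, n + 1) if p & (p - 1) != 0]
    for bit, pos in zip(data, reversed(data_pos)):
        code[pos] = int(bit)
    for i in range(r):                 # fill in R1, R2, R4, R8, ...
        p = 2 ** i
        code[p] = sum(code[q] for q in range(1, n + 1)
                      if q & p and q != p) % 2
    return "".join(str(code[p]) for p in range(n, 0, -1))

def hamming_syndrome(codeword: str) -> int:
    """0 if all parity checks pass; otherwise the 1-based position
    of the single flipped bit."""
    n = len(codeword)
    bits = [0] + [int(b) for b in reversed(codeword)]  # bits[p] = position p
    syndrome, p = 0, 1
    while p <= n:
        if sum(bits[q] for q in range(1, n + 1) if q & p) % 2:
            syndrome += p
        p <<= 1
    return syndrome

code = hamming_encode("1011001")          # R1=0, R2=1, R4=1, R8=0 as above
corrupted = code[:-6] + "1" + code[-5:]   # flip the bit at position 6
assert hamming_syndrome(code) == 0
assert hamming_syndrome(corrupted) == 6   # syndrome 0110 -> bit 6
```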

 Here are some of the features of Hamming code:

Error Detection and Correction: Hamming code is designed to detect and correct single-bit errors that may occur during the transmission of data. This ensures that the recipient receives the same data that was transmitted by the sender.
Redundancy: Hamming code uses redundant bits to add additional information
to the data being transmitted. This redundancy allows the recipient to detect
and correct errors that may have occurred during transmission.
Efficiency: Hamming code is a relatively simple and efficient error-correction
technique that does not require a lot of computational resources. This makes it
ideal for use in low-power and low-bandwidth communication networks.
Widely Used: Hamming code is a widely used error-correction technique and is
used in a variety of applications, including telecommunications, computer
networks, and data storage systems.
Single Error Correction: Hamming code is capable of correcting a single-bit
error, which makes it ideal for use in applications where errors are likely to
occur due to external factors such as electromagnetic interference.
Limited Multiple Error Correction: Hamming code can only correct a limited
number of multiple errors. In applications where multiple errors are likely to
occur, more advanced error-correction techniques may be required.

What is a Parity Bit?


A parity bit is a check bit, which is added to a block of data for error detection purposes. It is used to validate the integrity of the data. The value of the parity bit is assigned either 0 or 1 so that the number of 1s in the message block becomes either even or odd, depending on the type of parity. A parity check is suitable for single-bit error detection only.
The two types of parity checking are
- Even Parity − Here the total number of 1s in the message (including the parity bit) is made even.
- Odd Parity − Here the total number of 1s in the message (including the parity bit) is made odd.

Error Detection by Adding Parity Bit


Sender’s End − While creating a frame, the sender counts the number of 1s in it and adds the parity bit in the following way
 In case of even parity − If number of 1s is even, parity bit value is 0. If number of
1s is odd, parity bit value is 1.
 In case of odd parity − If number of 1s is odd, parity bit value is 0. If number of
1s is even, parity bit value is 1.

Receiver’s End − On receiving a frame, the receiver counts the number of 1s in it. In
case of even parity check, if the count of 1s is even, the frame is accepted, otherwise it
is rejected. In case of odd parity check, if the count of 1s is odd, the frame is accepted,
otherwise it is rejected.

Example
Suppose a sender wants to send the data 1001101 using the even parity check method. The data contains four 1s (an even count), so the parity bit is 0 and the transmitted frame is 10011010.
The receiver decides whether an error has occurred by counting whether the total number of 1s is even. When the frame is received, three cases may occur: no error, detection of a single-bit error, or failure to detect a multiple-bit error (for example, two flipped bits still leave the count of 1s even).
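A minimal sketch of this even-parity framing and checking (the function names are illustrative):

```python
def add_even_parity(data: str) -> str:
    """Append a parity bit so the total number of 1s is even."""
    return data + str(data.count("1") % 2)

def even_parity_ok(frame: str) -> bool:
    """Accept the frame only if its count of 1s is even."""
    return frame.count("1") % 2 == 0

frame = add_even_parity("1001101")      # four 1s -> parity bit 0
assert frame == "10011010"
assert even_parity_ok(frame)            # no error: accepted
assert not even_parity_ok("10011011")   # single flipped bit: caught
assert even_parity_ok("10011001")       # two flipped bits: missed!
```

The last line shows the limitation mentioned above: any even number of bit flips leaves the parity unchanged and goes undetected.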

Difference between Synchronous and Asynchronous Transmission

Synchronous Transmission: In synchronous transmission, data is sent in the form of blocks or frames. This transmission is full-duplex, and synchronization between sender and receiver is compulsory. There is no gap between units of data. It is more efficient and more reliable than asynchronous transmission for transferring large amounts of data.
Examples:
- Chat rooms
- Telephonic conversations
- Video conferencing

Asynchronous Transmission: In asynchronous transmission, data is sent in the form of bytes or characters. This transmission is half-duplex; start bits and stop bits are added to the data. It does not require clock synchronization between sender and receiver.
Examples:
- Email
- Forums
- Letters

Now, let's see the differences between synchronous and asynchronous transmission:

1. In synchronous transmission, data is sent in the form of blocks or frames; in asynchronous transmission, it is sent in the form of bytes or characters.
2. Synchronous transmission is fast; asynchronous transmission is slow.
3. Synchronous transmission is costly; asynchronous transmission is economical.
4. In synchronous transmission, the time interval of transmission is constant; in asynchronous transmission, it is not constant but random.
5. In synchronous transmission, users have to wait until the transmission is complete before getting a response from the server; in asynchronous transmission, users do not have to wait for the transmission to complete to get a response.
6. In synchronous transmission, there is no gap between units of data; in asynchronous transmission, there is a gap between them.
7. Synchronous transmission makes efficient use of transmission lines; in asynchronous transmission, the line remains empty during the gaps between characters.
8. Synchronous transmission does not use start and stop bits; asynchronous transmission adds start and stop bits to the data, which imposes extra overhead.
9. Synchronous transmission needs precisely synchronized clocks so that the receiver knows where each new byte begins; asynchronous transmission does not need synchronized clocks, since each byte carries its own framing bits.
10. In synchronous transmission, errors are detected and corrected in real time; in asynchronous transmission, errors are detected and corrected when the data is received.
11. Synchronous transmission has low latency due to real-time communication; asynchronous transmission has high latency due to processing time and waiting for data to become available.
12. Examples of synchronous transmission: telephonic conversations, video conferencing, online gaming. Examples of asynchronous transmission: email, file transfer, online forms.
CSMA with Collision Detection (CSMA/CD)
Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a network
protocol for carrier transmission that operates in the Medium Access Control (MAC)
layer. It senses or listens whether the shared channel for transmission is busy or not,
and defers transmissions until the channel is free. The collision detection technology
detects collisions by sensing transmissions from other stations. On detection of a
collision, the station stops transmitting, sends a jam signal, and then waits for a random
time interval before retransmission.

Algorithms
The algorithm of CSMA/CD is:
 When a frame is ready, the transmitting station checks whether the channel is
idle or busy.
 If the channel is busy, the station waits until the channel becomes idle.
 If the channel is idle, the station starts transmitting and continually monitors the
channel to detect a collision.
 If a collision is detected, the station runs the collision resolution algorithm.
 If the frame is transmitted without a collision, the station resets its
retransmission counter and completes the transmission.
The algorithm of Collision Resolution is:
 The station continues transmitting for a short specified time along with a jam
signal, to ensure that all the other stations detect the collision.
 The station increments its retransmission counter.
 If the maximum number of retransmission attempts is reached, the station
aborts transmission.
 Otherwise, the station waits for a backoff period, which is generally a function of
the number of collisions, and restarts the main algorithm.
The following flowchart summarizes the algorithms:
 Though this algorithm detects collisions, it does not reduce the number of
collisions.
 It is not appropriate for large networks; performance degrades sharply as more
stations are added.
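The main loop and the backoff step described above can be sketched in Python. This is a simplified simulation, not a driver: the channel is stubbed out with a callable, since real carrier sensing and collision detection happen in NIC hardware. The constants mirror classic Ethernet (abort after 16 attempts, backoff exponent capped at 10):

```python
import random

SLOT_TIME = 1        # one contention slot (512 bit times on classic Ethernet)
MAX_ATTEMPTS = 16    # the station aborts after this many collisions
BACKOFF_LIMIT = 10   # the backoff exponent is capped at 10 (truncated backoff)

def backoff_slots(collisions: int, rng: random.Random) -> int:
    """Truncated binary exponential backoff: after the n-th collision,
    wait a random number of slots drawn from [0, 2**min(n, 10) - 1]."""
    return rng.randrange(2 ** min(collisions, BACKOFF_LIMIT))

def send_frame(collides, rng=None):
    """CSMA/CD main loop against a stubbed channel. `collides(n)` reports
    whether transmission attempt n suffered a collision.
    Returns the attempt number that succeeded, or raises after 16 tries."""
    rng = rng or random.Random()
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if not collides(attempt):
            return attempt            # frame transmitted without collision
        # A jam signal would be sent here, then the station backs off.
        delay = backoff_slots(attempt, rng) * SLOT_TIME
        # (In a real NIC this delay is actual idle time on the wire.)
    raise RuntimeError("excessive collisions: frame aborted")

# Example: the channel collides on the first two attempts, then is clear.
print(send_frame(lambda n: n <= 2, random.Random(42)))  # -> 3
```

Because the backoff window doubles after each collision, stations that keep colliding spread their retries over an ever-larger range of slots, which is what resolves contention without central coordination.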

WAN Connectivity Protocols

Your company is connected to the Internet, right? (Everyone nod your head yes.) So
what WAN protocol do you use to connect to the Internet? Chances are, if you
have a T1 leased line to the Internet or a private network between locations, you use
one of these three WAN protocols: HDLC, PPP, or Frame Relay. Let’s explore the
differences and similarities of these protocols.

What is HDLC?
HDLC stands for High-Level Data Link Control protocol. Like the two other WAN
protocols mentioned in this article, HDLC is a Layer 2 protocol (see OSI Model for
more information on Layers). HDLC is a simple protocol used to connect point to
point serial devices. For example, you might have a point-to-point leased line connecting
two locations in two different cities. HDLC would be the protocol with the least amount
of configuration required to connect these two locations. HDLC would run
over the WAN, between the two locations, and each router would de-encapsulate the
HDLC frames and drop the traffic off on the LAN.

HDLC performs error detection, just like Ethernet. Cisco’s version of HDLC is
actually proprietary because they added a protocol type field. Thus, Cisco HDLC can
only work with other Cisco devices.

HDLC is actually the default protocol on all Cisco serial interfaces. If you do a show
running-config on a Cisco router, your serial interfaces (by default) won’t have any
encapsulation. This is because they are configured to the default of HDLC. If you do
a show interface serial 0/0, you’ll see that you are running HDLC. Here is an
example:
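The original screenshot is not reproduced here; the abbreviated output below illustrates what `show interface serial 0/0` typically reports (the hardware, address, and bandwidth lines are placeholders and will differ on your router):

```
Router# show interface serial 0/0
Serial0/0 is up, line protocol is up
  Hardware is PowerQUICC Serial
  MTU 1500 bytes, BW 1544 Kbit/sec,
  Encapsulation HDLC, loopback not set
```

The line to look for is "Encapsulation HDLC", which confirms the interface is running the default.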

What is PPP?
You may have heard of the Point-to-Point Protocol (PPP) because it is used for almost
every dial-up connection to the Internet. PPP is documented in RFC 1661. PPP is
based on HDLC and is very similar. Both work well to connect point to point leased
lines.

The differences between PPP and HDLC are:

 PPP is not proprietary when used on a Cisco router

 PPP has several sub-protocols that make it function.

 PPP is feature-rich with dial up networking features

Because PPP has so many dial-up networking features, it has become the most
popular dial up networking protocol in use today. Here are some of the dial-up
networking features it offers:
 Link quality management monitors the quality of the dial-up link and how many
errors have occurred. It can bring the link down if the link is receiving too
many errors.

 Multilink can bring up multiple PPP dialup links and bond them together to
function as one.

 Authentication is supported with PAP and CHAP. These protocols take your
username and password to ensure that you are allowed access to the network
you are dialing in to.

To change from HDLC to PPP on a Cisco router, use the encapsulation
ppp command. After changing the encapsulation to ppp, you can type ppp ? to list the
PPP options available. There are many PPP options compared to HDLC; a single
screen shows only a partial list of what is available.
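A sketch of those commands on the IOS CLI (the interface number is just an example; use your own serial interface):

```
Router(config)# interface serial 0/0
Router(config-if)# encapsulation ppp
Router(config-if)# ppp ?
```

The `ppp ?` context help then lists the available sub-options, including entries for authentication, multilink, and link quality.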

What is Frame-Relay?

Frame Relay is a Layer 2 protocol and commonly known as a service from carriers.
For example, people will say “I ordered a frame-relay circuit”. Frame relay creates a
private network through a carrier’s network. This is done with permanent virtual
circuits (PVC). A PVC is a connection from one site, to another site, through the
carrier’s network. This is really just a configuration entry that a carrier makes on their
frame relay switches.

Obtaining a frame-relay circuit is done by ordering a T1 or fractional T1 from the
carrier. On top of that, you order a frame-relay port matching the size of the circuit
you ordered. Finally, you order a PVC that connects your frame relay port to another
of your ports inside the network.

The benefits to frame-relay are:

 Ability to have a single circuit that connects to the “frame relay cloud” and gain
access to all other sites (as long as you have PVCs). As the number of locations
grows, you save more and more money because you don’t need as many
circuits as you would if you were trying to fully mesh your network with point-to-
point leased lines.

 Improved disaster recovery, because all you have to do is order a single circuit
to the cloud and PVCs to gain access to all remote sites.

 By using the PVCs, you can design your WAN however you want. Meaning, you
define what sites have direct connections to other sites and you only pay the
small monthly PVC fee for each connection.

Some other terms you should know concerning frame relay are:

 LMI = local management interface. LMI is the management protocol of frame
relay. LMI messages are exchanged between the frame relay switches and routers
to communicate which DLCIs are available and whether there is congestion in the
network.
 DLCI = data link connection identifier. This is a number used to identify each
PVC in the frame relay network.
 CIR = committed information rate. This is the amount of bandwidth you pay for
and are guaranteed to receive on each PVC. Generally you have much less CIR
than you have port speed. You can, of course, burst above your CIR up to your
port speed, but that excess traffic is marked DE.
 DE = discard eligible. Traffic marked DE (traffic that was above your CIR) CAN be
discarded by the frame-relay network if there is congestion.
 FECN & BECN = forward explicit congestion notification & backward explicit
congestion notification. These are bits set in the frame relay frame header to alert
frame-relay devices that there is congestion in the network.
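Putting the terms together, a minimal frame-relay interface configuration on a Cisco router might look like the sketch below. The DLCI (100) and IP addressing are placeholder values your carrier and network design would supply; the LMI type is usually auto-sensed on modern IOS:

```
Router(config)# interface serial 0/0
Router(config-if)# encapsulation frame-relay
Router(config-if)# interface serial 0/0.1 point-to-point
Router(config-subif)# ip address 10.1.1.1 255.255.255.252
Router(config-subif)# frame-relay interface-dlci 100
```

Using a point-to-point subinterface per PVC, each identified by its DLCI, is a common way to map one physical frame-relay port onto several logical site-to-site links.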
