Data Communication (3rd Sem)
By : Akshith Shetty
Keshav Nayak
Unit-1
Data communication: Transfer of data from one device to another via some form of
transmission medium.
Message: Information (data) to be communicated.
Transmission medium: The physical path by which a message travels from sender to receiver.
->Communication model:-
● LAN: Covers an area within 1 km to 10 km.
● MAN: Covers an area within 100 km.
● WAN: Covers a larger area that goes beyond 100 km.
->Circuit Switching:-
1. In circuit switching a dedicated communications path is established between two stations through
nodes.
2. That path is a connected sequence of physical links between nodes.
3. Data is transmitted along a dedicated path.
4. At each node, data is switched to the appropriate channel without delay.
Eg: Telephone network
->Packet Switching:-
1. In packet switching, data is transmitted in small blocks called packets.
2. Each packet is passed from node to node along some path from source to destination; at each
node the packet is received, stored briefly, and passed on to the next node.
3. No dedicated path is reserved; link capacity is shared among many connections.
Eg: The Internet
->Frame Relay:-
1. Frame Relay is a packet-switched data communication service that operates at the data link
layer (Layer 2) of the OSI model.
2. It is designed for efficient and cost-effective data transmission over Wide Area Networks
(WANs).
3. Frame Relay uses frames to encapsulate data and relies on a network of switches (Frame
Relay switches or routers) to forward these frames to their destination.
->Asynchronous Transfer Mode (ATM):-
1. It operates at both the data link layer (Layer 2) and the network layer (Layer 3) of the OSI
model.
2. It was designed to handle a wide range of multimedia data types and provide a high level of
service quality for various applications.
3. Asynchronous Transfer Mode (ATM) is a high-speed, cell-switched networking technology,
sometimes known as cell relay.
1. Syntax:
a. Definition: Syntax refers to the structure and format of the data
b. Importance: Syntax ensures that devices understand the structure of the data being
exchanged
2. Semantics:
a. Definition: Semantics defines the meaning of each section of data.
b. Importance: Semantics ensure that the information being exchanged is understood
in the correct context.
3. Timing:
a. Definition: Timing refers to when data is sent and how long devices should wait for a
response.
b. Importance: Timing is crucial for coordinating communication between devices
->Explain protocol architecture in computer network
7. Application layer
● Application layer serves as a window for users and application processes to access network
services.
● Provides access to the TCP/IP environment to users.
6. Presentation layer
● Data is manipulated as per required format to transmit over the network.
● It performs encryption/decryption and compression of data.
5. Session layer
● It establishes a session for communication between two devices.
● It also terminates the session once the session is completed.
4. Transport layer
● In the transport layer, data is called a segment.
● Source and destination port addresses are added to its header and it is forwarded to the
network layer.
3. Network layer
● In the network layer, data is called a packet.
● In the network layer, the sender's and receiver's IP addresses are added to the segments
received from the transport layer.
OSI vs TCP/IP:
● OSI has 7 layers; TCP/IP has 5 layers.
Central office(CO):
● Refers to the physical location where telecommunication equipment is housed and
interconnected to provide various communication services.
Customer premise equipment(CPE):
● Refers to devices located at customers destinations used to connect and interact with
the service provider
Internet service provider(ISP):
● Is a company or organisation that offers individuals and businesses access to
internet services
Network service provider(NSP):
● A company or organisation that operates and manages a large scale network
infrastructure
Network access point(NAP):
● Physical location where multiple ISPs and networks connect to exchange internet
traffic.
Point of presence(POP):
● Refers to a physical location where an ISP has a presence within a larger network
infrastructure
Definitions:
● Data communication: Data communication is the transfer of data from one device to
another through some form of transmission medium
● Computer network: Collection of interconnected computers and devices that can
share data among themselves.
● Internet: It is a global network of computers that is accessed via the World Wide Web.
● Protocol: Set of rules that must be followed for efficient exchange of data over a
network
● Frequency:Rate at which signal repeats OR number of cycles in 1 sec.
● Phase: Phase/phase shift, describes position of the waveform relative to time 0.
● Wavelength: Distance occupied by a single cycle.
● Bandwidth: Refers to the maximum amount of data that can be transmitted over a
network in a given amount of time.
● Amplitude: Refers to range between highest and lowest voltage levels of a signal
Attenuation:
● Refers to gradual loss in strength of signal, resulting in reduction in its amplitude.
● This can lead to weaker and distorted signals at the receiving end.
● Techniques such as signal boosting and error correction are employed to mitigate
the effects of attenuation.
Delay distortion:
● Different components of signal experiences varying delays as they traverse through a
medium
● Delay distortion may lead to intersymbol interference (transmitted symbols overlap
with each other)
● Techniques such as equalisation can be used to mitigate effects of delay distortion
Noise:
● Refers to unwanted interference while transmitting signal
● Noise can corrupt the original data being sent
● Techniques such as shielding and signal processing algorithms are used to mitigate
the effect of noise
Types of noise:
● Thermal noise:
○ Thermal noise is due to thermal agitation of electrons
○ Thermal noise is a function of temperature.
○ Thermal noise is uniformly distributed across the bandwidth used in
communication channels and hence is referred to as white noise.
Some definitions:
● Analog data: Refers to the information that is continuous.
● Digital data: Refers to the information that is discrete.
● Analog signal: Has many levels of intensity over a period of time.
● Digital signal: Can have limited number of defined values
Non-return to zero-level (NRZ-L):
In NRZ-L:
● High voltage is used to represent ‘0’.
● Low voltage is used to represent ‘1’.
Ex:
Non-return to zero-inverted(NRZ-I):
NRZ-I is a digital signal encoding format, that represents binary data using two
different voltage levels, typically high and low
In NRZ-I:
● When ‘0’ is encountered there is no transition at the beginning of the interval
● When ‘1’ is encountered there is transition at the beginning of the interval
Ex:
0 1 0 0 1 1 0 0 0 1 0
Advantages:
● Makes efficient use of bandwidth
● Easy to identify noise.
Disadvantages:
● Presence of DC components
● Lack of synchronisation capability
Multi-level binary:
This scheme uses more than two signal levels to represent digital data.
Bipolar AMI:
Bipolar AMI is a digital signal encoding format that represents binary data using three
different voltage levels: positive, negative, and zero.
In bipolar AMI:
● When ‘0’ is encountered there is no line signal
● When ‘1’ is encountered, successive 1’s alternate between positive
and negative voltage.
Ex: 0 1 0 0 1 1 0 0 0 1 0
Pseudoternary:
Pseudoternary is a digital signal encoding format that represents binary data using three
different voltage levels; here the representation is opposite to AMI.
In pseudoternary:
● When ‘1’ is encountered there is no line signal
● When ‘0’ is encountered, successive 0’s alternate between positive
and negative voltage.
Ex: 0 1 0 0 1 1 0 0 0 1 0
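The AMI and pseudoternary rules above can be sketched as a toy Python encoder (the polarity chosen for the first pulse is an assumption, not fixed by the scheme):

```python
def ami_encode(bits):
    """Bipolar AMI: '0' -> no line signal; successive '1's alternate polarity."""
    out, last = [], -1          # assume the first '1' becomes a positive pulse
    for b in bits:
        if b == 0:
            out.append(0)
        else:
            last = -last        # alternate between +V and -V
            out.append(last)
    return out

def pseudoternary_encode(bits):
    """Pseudoternary is the opposite convention: swap the roles of '0' and '1'."""
    return ami_encode([1 - b for b in bits])

print(ami_encode([0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0]))
# -> [0, 1, 0, 0, -1, 1, 0, 0, 0, -1, 0]
```

Note how the four 1s come out as +V, -V, +V, -V: the alternation is what removes the DC component.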
Biphase:
Biphase is a digital signal encoding scheme where each bit is represented by a transition at
the middle of the bit period.
Manchester:
● When ‘0’ is encountered, signal transition is from high to low at the middle of the
interval
● When ‘1’ is encountered, signal transitions from low to high at the middle of interval.
Ex: 0 1 0 0 1 1 0 0 0 1 0
Differential manchester:
● When ‘0’ is encountered, there is a transition at the beginning of the interval
● When ‘1’ is encountered, there is no transition at the beginning of the interval
● In addition to these there is always transition at the middle of the interval
Ex:
0 1 0 0 1 1 0 0 0 1 0
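Both biphase schemes can be sketched by emitting two half-bit levels per bit (the initial line level in the differential version is an assumption):

```python
def manchester(bits):
    """'0': high->low at mid-interval; '1': low->high at mid-interval."""
    out = []
    for b in bits:
        out += [-1, 1] if b == 1 else [1, -1]   # two half-bit levels per bit
    return out

def diff_manchester(bits):
    """'0': extra transition at the start of the interval; always one at the middle."""
    out, level = [], 1                          # assumed initial line level
    for b in bits:
        if b == 0:
            level = -level                      # transition at interval start
        out += [level, -level]                  # guaranteed mid-interval transition
        level = -level                          # line ends the interval inverted
    return out
```

The guaranteed mid-interval transition in both encoders is exactly what gives the receiver its synchronisation, as the advantages list below notes.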
Manchester and differential manchester:
Advantages:
● Synchronisation: Because there is predictable transition during each
bit time, the receiver can synchronise on that transition
● No DC component: DC components are completely eliminated in
biphase
Disadvantages:
● One notable disadvantage is susceptibility to noise and distortion
● Biphase encoding may require more complex circuitry and processing
Modulation rate:
Modulation rate is the rate at which signal elements are generated.
Scrambling technique:
Idea behind this approach: The sequence that would result in a constant voltage level is
replaced by filling sequences that will provide sufficient transition for receivers clock to
maintain synchronisation.
Design goals
● No long sequences of zero-level line signals
● No DC component
● Error-detection capability
● No reduction in data rate
Bipolar with 8-zeros substitution:
Drawback of bipolar-AMI is that long strings of zeros may result in loss of synchronisation.
To overcome this, encoding is embedded with following rules:
● If an octet of zeros occurs and the last voltage pulse preceding this octet was positive,
then the eight zeros of the octet are encoded as 000 + - 0 - +
● If an octet of zeros occurs and the last voltage pulse preceding this octet was negative,
then the eight zeros of the octet are encoded as 000 - + 0 + -
Ex:
High-density bipolar-3 zeros (HDB3):
In this scheme strings of 4 zeros are replaced by sequences containing one or two pulses.
Ex: 1 1 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0
B8ZS and HDB3:
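The B8ZS substitution rules above can be sketched on an already AMI-encoded level stream (a simplified model; the polarity of the pulse preceding the stream is assumed positive by default):

```python
def b8zs(levels, last_pulse=1):
    """Substitute each run of 8 zeros per the B8ZS rules: with a preceding
    positive pulse the octet becomes 000 + - 0 - +, with a negative one
    000 - + 0 + -.  levels is a list of +1/-1/0 from bipolar-AMI encoding."""
    out = list(levels)
    i = 0
    while i <= len(out) - 8:
        if out[i] != 0:
            last_pulse = out[i]
            i += 1
        elif all(v == 0 for v in out[i:i + 8]):
            p = last_pulse
            out[i:i + 8] = [0, 0, 0, p, -p, 0, -p, p]
            last_pulse = p          # the substituted octet ends with polarity p
            i += 8
        else:
            i += 1
    return out

print(b8zs([1, 0, 0, 0, 0, 0, 0, 0, 0]))
# -> [1, 0, 0, 0, 1, -1, 0, -1, 1]
```

The two deliberate code violations in each substituted octet are what let the receiver recognise the pattern and restore the original eight zeros.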
❖ Types of errors
⮚ In data communication, an error occurs when bits are altered between transmission and
reception
⮚ Two general types of errors are:
▪ Single bit error: A single bit error is a type of data communication error that occurs
when a single bit (0 or 1) in a data unit, (byte or a packet) is altered
▪ Burst error: A burst error is a type of data communication error that occurs when
there is a consecutive alteration of multiple bits within a short sequence of data.
⮚ Some terminologies:
▪ Dataword: A dataword, also known as a "message," is the original data that needs
to be transmitted or processed.
▪ Code word: A codeword is the result of applying an encoding scheme to the
original dataword.
▪ Redundant bit: A redundant bit, also known as a "parity bit" or "check bit," is an
additional bit that is added to the original dataword during encoding
❖ Error detection:
⮚ The sender creates codewords out of datawords by using a generator that applies
some rules and procedures
⮚ Each codeword sent to the receiver may change during transmission.
⮚ If the received codeword is the same as one of the valid codewords, the word is accepted
⮚ If the received codeword is not valid, it is discarded.
❖ Parity check
⮚ Parity check is a basic error detection technique which involves adding a single "parity bit" to
the original dataword. Here's how parity check works:
⮚ Even parity:
▪ In even parity, the total number of 1s (binary 'on' bits) in the dataword and the parity bit
combined should be even.
▪ The parity bit is set to 0 or 1 so that the total number of 1s in the dataword and parity
bit combined becomes even.
▪ If an odd number of bits are inverted due to error, an undetected error occurs.
⮚ Odd parity:
▪ In odd parity, the total number of 1s (binary 'on' bits) in the dataword and the parity bit
combined should be odd.
▪ The parity bit is set to 0 or 1 so that the total number of 1s in the dataword and parity
bit becomes odd.
▪ If an even number of bits are inverted due to error, an undetected error occurs.
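Both parity rules reduce to counting 1s modulo 2; a small sketch (function names are ours):

```python
def add_parity(dataword, even=True):
    """Append one parity bit so the total count of 1s is even (or odd)."""
    ones = sum(dataword) % 2
    parity = ones if even else 1 - ones
    return dataword + [parity]

def parity_ok(codeword, even=True):
    """Receiver-side check: recount the 1s, including the parity bit."""
    return sum(codeword) % 2 == (0 if even else 1)

cw = add_parity([1, 0, 1, 1])   # three 1s -> parity bit 1, total becomes even
print(cw, parity_ok(cw))        # -> [1, 0, 1, 1, 1] True
```

Flipping any one bit of `cw` makes `parity_ok` fail, while flipping two bits leaves the count even, which is exactly the undetected-error case described above.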
Note:
To perform the one's complement operation on a set of binary digits, replace 0 digits with 1 digits
and 1 digits with 0 digits.
The one's-complement addition of two binary integers of equal bit length is performed as follows:
1. The two numbers are treated as unsigned binary integers and added.
2. If there is a carry out of the leftmost bit, add 1 to the sum. This is called an end-around carry.
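The two steps above can be checked with a short sketch (function name is ours):

```python
def ones_complement_add(a, b, bits=8):
    """Add as unsigned integers; fold any carry out of the leftmost bit
    back into the sum (the end-around carry)."""
    mask = (1 << bits) - 1
    s = a + b
    while s >> bits:                    # carry out of the leftmost bit?
        s = (s & mask) + (s >> bits)    # end-around carry
    return s

# 1110 + 0110 in 4 bits: raw sum 10100; end-around carry -> 0100 + 1 = 0101
print(bin(ones_complement_add(0b1110, 0b0110, bits=4)))  # -> 0b101
```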
Hamming distance:
● Definition: The Hamming distance d(v1, v2) between two n-bit binary sequences v1 and v2 is
the number of bits in which v1 and v2 disagree. For example,
if v1 = 011011 and v2 = 110001, then d(v1, v2) = 3.
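The definition is a one-liner in Python:

```python
def hamming_distance(v1, v2):
    """Count the bit positions in which two equal-length sequences disagree."""
    return sum(a != b for a, b in zip(v1, v2))

print(hamming_distance("011011", "110001"))  # -> 3
```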
Now let us consider the block code technique for error correction. Suppose we wish to transmit
blocks of data of length k bits. Instead of transmitting each block as k bits, we map each k-bit
sequence into a unique n-bit codeword.
Dataword   Codeword
00         00000
01         00111
10         11001
11         11110
Note:
● To detect s errors, the minimum Hamming distance should be dmin = s + 1.
● It can be shown that to correct t errors, we need to have dmin = 2t + 1.
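For the four-codeword block code above, the minimum Hamming distance can be computed directly, and the two rules then give its detection and correction power:

```python
from itertools import combinations

def hd(v1, v2):
    return sum(a != b for a, b in zip(v1, v2))

codewords = ["00000", "00111", "11001", "11110"]
dmin = min(hd(a, b) for a, b in combinations(codewords, 2))
print(dmin)                         # -> 3
print("detects:", dmin - 1)         # s = dmin - 1 = 2 errors
print("corrects:", (dmin - 1) // 2) # t = 1 error, since dmin >= 2t + 1
```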
Cyclic redundancy check(CRC)
● For a given k-bit block of data, the transmitter generates an (n - k)-bit
sequence, known as a frame check sequence (FCS)
● The resulting frame, consisting of n bits, is exactly divisible by some predetermined
number.
● The receiver then divides the incoming frame by that number and, if there is no
remainder, assumes there was no error.
Note:In a cyclic code, those e(x) errors that are divisible by g(x) are not caught.
Points to be noted:
1. If the generator has more than one term and the coefficient of x^0 is 1, all single-bit
errors can be caught.
2. If a generator cannot divide x^t + 1 (t between 2 and n - 1), then all isolated double
errors can be detected.
3. A generator that contains a factor of x + 1 can detect all odd-numbered errors.
4. All burst errors with L ≤ r will be detected.
5. All burst errors with L = r + 1 will be detected with probability 1 - (1/2)^(r-1).
6. All burst errors with L > r + 1 will be detected with probability 1 - (1/2)^r.
a. Here r -> highest power of the generator polynomial
b. L -> length of the burst error
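The generate-then-check cycle described above can be sketched with modulo-2 long division (the generator x^3 + x^2 + 1 is an assumed example, not taken from the notes):

```python
def mod2_remainder(bits, gen):
    """Long division in GF(2): XOR the generator in wherever the leading bit is 1."""
    reg = list(bits)
    for i in range(len(reg) - len(gen) + 1):
        if reg[i]:
            for j, g in enumerate(gen):
                reg[i + j] ^= g
    return reg[-(len(gen) - 1):]

def crc_fcs(data, gen):
    """Append n - k zeros and keep the remainder as the frame check sequence."""
    return mod2_remainder(data + [0] * (len(gen) - 1), gen)

gen   = [1, 1, 0, 1]                 # x^3 + x^2 + 1 (assumed example generator)
data  = [1, 0, 1, 1, 0, 0, 1]
frame = data + crc_fcs(data, gen)
print(mod2_remainder(frame, gen))    # -> [0, 0, 0]: receiver sees no remainder
```

Flipping any single bit of `frame` leaves a nonzero remainder, consistent with rule 1 above (this generator has more than one term and its x^0 coefficient is 1).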
DIGITAL-TO-ANALOG CONVERSION:
Process of changing one of the characteristics of an analog signal based on the digital data.
Carrier signal:In analog transmission, the sending device produces a high-frequency signal
that acts as a base for the information signal. This signal is called the carrier signal or carrier
frequency.
Modulation:Modulation refers to the process of encoding digital or analog data onto a carrier
signal for transmission over a communication channel.
Implementation of BASK:
● If digital data are presented as a unipolar NRZ digital signal with a high voltage of 1 V
and a low voltage of 0 V, the implementation can be achieved by multiplying the NRZ
digital signal by the carrier signal coming from an oscillator.
● When the amplitude of the NRZ signal is 1, the amplitude of the carrier frequency
remains the same.
● When the amplitude of the NRZ signal is 0, the amplitude of the carrier frequency is
zero.
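The multiply-by-the-carrier implementation above can be sketched numerically (carrier frequency and sample counts are assumed illustrative values):

```python
import math

def bask(bits, fc=4, spb=16):
    """Multiply a unipolar NRZ signal (1 V / 0 V) by a sinusoidal carrier.
    fc: carrier cycles per bit period; spb: samples per bit (both assumed)."""
    wave = []
    for i, b in enumerate(bits):
        for s in range(spb):
            t = (i * spb + s) / spb          # time measured in bit periods
            wave.append(b * math.sin(2 * math.pi * fc * t))
    return wave

sig = bask([1, 0, 1])
# during the '0' bit the output is silent; during each '1' the carrier passes through
```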
Multilevel ASK
● Multilevel ASKs have more than two levels.
● We can use 4, 8, 16, or more different amplitudes for the signal and modulate the
data using 2, 3, 4, or more bits at a time.
● Although this is not implemented with pure ASK, it is implemented with QAM
Frequency Shift Keying (FSK):
● In frequency shift keying, the frequency of the carrier signal is varied to represent
data.
● In FSK both peak amplitude and phase remain constant.
Binary Frequency Shift keying:
● In BFSK we consider two carrier frequencies f1 and f2.
● We use the first carrier if the data element is 0; we use the second if the data
element is 1.
Implementation of BFSK:
Phase Shift Keying (PSK):
● In phase shift keying, the phase of the carrier is varied to represent the data.
● In PSK both peak amplitude and frequency remain constant.
Implementation:
Constellation diagram:
A constellation diagram is a graphical representation used to display and visualise the various
symbols or signal states that represent digital data.
For each point on the diagram, four pieces of information can be deduced
1. The projection of the point on the X axis defines the peak amplitude of the in-phase
component
2. The projection of the point on the Y axis defines the peak amplitude of the quadrature
component.
3. The length of the line (vector) that connects the point to the origin is the peak amplitude of
the signal element
4. The angle the line makes with the X axis is the phase of the signal element.
● For ASK, we are using only an in-phase carrier. Therefore, the two points should be on
the X axis. Binary 0 has an amplitude of 0 V; binary 1 has an amplitude of 1 V (for
example). The points are located at the origin and at 1 unit.
● BPSK also uses only an in-phase carrier. However, we use a polar
NRZ signal for modulation. It creates two types of signal elements,
one with amplitude 1 and the other with amplitude −1. This can be
stated in other words: BPSK creates two different signal elements,
one with amplitude 1 V and in phase and the other with amplitude 1
V and 180° out of phase.
● All signal elements in QPSK have an amplitude of 1/√2, but their phases are
different (45°, 135°, −135°, and −45°).
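The four properties of a constellation point can be checked numerically; here the point (0.5, 0.5) is an assumed QPSK point with in-phase and quadrature components of 1/2 each:

```python
import math

def point_info(i, q):
    """Peak amplitude and phase (degrees) of the signal element at point (I, Q)."""
    return math.hypot(i, q), math.degrees(math.atan2(q, i))

amp, phase = point_info(0.5, 0.5)
print(round(amp, 4), round(phase, 1))   # amplitude 1/sqrt(2) ~ 0.7071, phase 45 deg
```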
Quadrature Amplitude Modulation (QAM) is a modulation scheme used to transmit data over
a carrier signal by varying both the amplitude and phase of the carrier wave.
Here's how QAM works:
● Carrier Signal: The carrier signal is typically a sinusoidal wave, and its frequency is much
higher than the data signal to be transmitted.
● Amplitude Variation: The amplitude of the carrier signal is varied to represent different
combinations of digital bits. Each amplitude represents a unique symbol or constellation
point.
● Phase Variation: In addition to amplitude variation, QAM also varies the phase of the carrier
signal to represent different symbols. The phase changes can be in multiples of 90 degrees
(π/2 radians), which allows multiple phase states.
● Pulse Code Modulation (PCM) is a technique used to convert analog signals, such as
audio or video, into a digital format.
● PCM is widely used in applications like audio recording, voice communication, and
data transmission.
● Sampling:
○ The first step in PCM is sampling, where the continuous analog signal is
sampled at regular intervals.
○ These samples represent the amplitude of the analog signal at specific
points in time.
○ The Nyquist-Shannon sampling theorem dictates that the sampling rate must
be at least twice the highest frequency component of the analog signal to
accurately reconstruct it later.
● Quantization:
○ After sampling, each sampled value is quantized, which means it is
mapped to a discrete set of values.
○ This process involves assigning a digital code (typically binary) to each
sampled value.
● Encoding:
○ The quantized values are then encoded into a digital bitstream.
○ Each binary code represents one sample and is transmitted or stored
as a sequence of bits.
Ex:
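The sample-quantize-encode pipeline above can be sketched end to end (the bit depth, voltage range, and sampling rate are assumed illustrative values):

```python
import math

def pcm_encode(samples, n_bits=3, vmin=-1.0, vmax=1.0):
    """Uniform quantization: map each sample to one of 2**n_bits levels,
    then emit the level index as an n_bits binary code."""
    levels = 2 ** n_bits
    step = (vmax - vmin) / levels
    return [format(min(int((v - vmin) / step), levels - 1), f"0{n_bits}b")
            for v in samples]

# sample a 1 Hz sine at 8 Hz, comfortably above its 2 Hz Nyquist rate
samples = [math.sin(2 * math.pi * t / 8) for t in range(8)]
print(pcm_encode(samples))
# -> ['100', '110', '111', '110', '100', '001', '000', '001']
```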
Delta modulation:
● Delta modulation (DM) is a technique used for converting analog signals into digital
format.
● It operates by quantizing the difference (delta) between the current sample of an
analog signal and the previous quantized sample.
● Sampling:
○ Like in Pulse Code Modulation (PCM), delta modulation begins with the
sampling of the analog signal at regular intervals.
○ These samples represent the amplitude of the analog signal at specific points
in time.
● Delta Calculation:
○ In delta modulation, instead of quantizing the sampled value directly as in
PCM, the system quantizes the difference (delta) between the current sample
and the previous quantized sample.
○ This means that, at each sampling instance, the system only considers
whether the signal has increased or decreased since the last quantized value.
● Comparison:
○ The delta (difference) is compared to a predefined step size
parameter (∆).
○ This step size determines the granularity of the quantization and essentially
sets the resolution of the delta modulation system.
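The three steps above amount to tracking the signal with a staircase; a minimal sketch (the step size is an assumed parameter):

```python
def delta_modulate(samples, step=0.2):
    """Output 1 and step the staircase up when the sample is above the current
    approximation, otherwise output 0 and step down."""
    bits, approx = [], 0.0
    for s in samples:
        if s > approx:
            bits.append(1)
            approx += step
        else:
            bits.append(0)
            approx -= step
    return bits

print(delta_modulate([0.1, 0.3, 0.5, 0.4, 0.2]))  # -> [1, 1, 1, 0, 0]
```

A step size that is too small cannot keep up with fast signal changes (slope overload), while one that is too large adds noise on flat stretches, which is why ∆ sets the resolution of the system.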
Unit-2
Multiplexing:
Multiplexing in data communication is a technique used to combine multiple signals into a single
transmission medium.
● The lines on the left direct their transmission streams to a multiplexer (MUX)
● MUX combines them into a single stream (many-to-one).
● At the receiving end, that stream is fed into a demultiplexer (DEMUX),
● DEMUX separates the stream back into its component transmissions (one-to-many)
and directs them to their corresponding lines.
Note: In the figure, the word link refers to the physical path. The word channel refers to the
portion of a link that carries a transmission between a given pair of lines. One link can have
many (n) channels.
Multiplexing process:
Demultiplexing process:
● The demultiplexer uses a series of filters to decompose the multiplexed signal
into its constituent component signals.
● The individual signals are then passed to a demodulator that separates them
from their carriers and passes them to the output lines.
Q)Assume that a voice channel occupies a bandwidth of 4 kHz. We need to combine three
voice channels into a link with a bandwidth of 12 kHz, from 20 to 32 kHz. Show the
configuration, using the frequency domain. Assume there are no guard bands.
Solution:
We shift (modulate) each of the three voice channels to a different bandwidth, as shown in
Figure 6.6. We use the 20- to 24-kHz bandwidth for the first channel, the 24- to 28-kHz
bandwidth for the second channel, and the 28- to 32-kHz bandwidth for the third one. Then
we combine them as shown in Figure 6.6. At the receiver, each channel receives the entire
signal, using a filter to separate out its own signal. The first channel uses a filter that passes
frequencies between 20 and 24 kHz and filters out (discards) any other frequencies. The
second channel uses a filter that passes frequencies between 24 and 28 kHz, and the third
channel uses a filter that passes frequencies between 28 and 32 kHz. Each channel then
shifts the frequency to start from zero.
Q)Five channels, each with a 100-kHz bandwidth, are to be multiplexed together. What is
the minimum bandwidth of the link if there is a need for a guard band of 10 kHz between
the channels to prevent interference?
Solution:
For five channels, we need at least four guard bands. This means that the required
bandwidth is at least 5 × 100 + 4 × 10 = 540 kHz, as shown in Figure 6.7.
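Both FDM problems above reduce to one formula: n channels packed side by side need n − 1 guard bands. A quick check (function name is ours):

```python
def fdm_min_bandwidth(n_channels, channel_bw, guard_bw):
    """Minimum link bandwidth: n channels plus n - 1 guard bands between them."""
    return n_channels * channel_bw + (n_channels - 1) * guard_bw

print(fdm_min_bandwidth(5, 100, 10), "kHz")  # -> 540 kHz
print(fdm_min_bandwidth(3, 4, 0), "kHz")     # -> 12 kHz (no guard bands)
```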
Wavelength-division multiplexing (WDM):
● Very narrow bands of light from different sources are combined to make a
wider band of light.
● At the receiver, the signals are separated by the demultiplexer.
Although WDM technology is very complex, the basic idea is very simple.
a. We want to combine multiple light sources into one single light at the
multiplexer and do the reverse at the demultiplexer.
b. The combining and splitting of light sources are easily handled by a
prism.
Synchronous TDM vs Statistical TDM:
● In synchronous TDM, the number of slots in each frame is equal to the number of input
lines; in statistical TDM, the number of slots in each frame is less than the number of
input lines.
● In synchronous TDM, slots carry data only and there is no need of addressing; in
statistical TDM, slots contain both data and the address of the destination.
● In synchronous TDM, synchronisation bits are added to each frame; in statistical TDM,
they are not.
● In synchronous TDM, buffering is not done and the frame is sent after a specific interval
of time whether it has data to send or not; in statistical TDM, buffering is done and only
those inputs whose buffer contains data to send are given slots in the output frame.
Buffer : In Time Division Multiplexing (TDM), a buffer refers to a temporary storage area or
memory space that is used to hold data temporarily as it is being transmitted or received
within the TDM system.
Q) In Figure 6.13, the data rate for each input connection is 1 kbps. If 1 bit at a
time is multiplexed (a unit is 1 bit), what is the duration of
1. each input slot,
2. each output slot, and
3. each frame?
Solution
We can answer the questions as follows:
1. The data rate of each input connection is 1 kbps. This means that the bit
duration is 1/1000 s or 1 ms. The duration of the input time slot is 1 ms (same
as bit duration).
2. The duration of each output time slot is one-third of the input time slot. This
means that the duration of the output time slot is 1/3 ms.
3. Each frame carries three output time slots. So the duration of a frame is 3 ×
1/3 ms, or 1 ms. The duration of a frame is the same as the duration of an
input unit.
Q)Figure 6.14 shows synchronous TDM with a data stream for each input and
one data stream for the output. The unit of data is 1 bit. Find,
(1) the input bit duration,
(2) the output bit duration,
(3) the output bit rate, and
(4) the output frame rate.
Solution We can answer the questions as follows:
1. The input bit duration is the inverse of the bit rate: 1/1 Mbps = 1 μs.
2. The output bit duration is one-fourth of the input bit duration, or 1/4 μs.
3. The output bit rate is the inverse of the output bit duration: 1/(1/4 μs) = 4 Mbps. This can also be
deduced from the fact that the output rate is 4 times as fast as any input rate; so the output rate = 4
× 1 Mbps = 4 Mbps.
4. The frame rate is always the same as any input rate. So the frame rate is 1,000,000 frames per
second. Because we are sending 4 bits in each frame, we can verify the result of the previous
question by multiplying the frame rate by the number of bits per frame: 1,000,000 × 4 = 4 Mbps.
Q) Four 10-kbps connections are multiplexed together. A unit is 1 bit. Find (i) the
duration of 1 bit before multiplexing, (ii) the transmission rate of the link, (iii) the
duration of a time slot, and (iv) the duration of a frame.
The duration of 1 bit before multiplexing is the inverse of the bit rate: 1/(10 kbps) = 0.1 ms.
The transmission rate of the link is the sum of the bit rates of all the multiplexed channels: 4 × 10
kbps = 40 kbps.
Duration of a time slot = Duration of 1 bit before multiplexing / Number of channels(or 1/ total
transmission rate) = 0.1 ms / 4 = 0.025 ms
The duration of a frame is the total amount of time required for all the channels to transmit data in
one cycle. In this case, the duration of a frame is equal to the sum of the durations of all the time
slots:
● (i) 0.1 ms
● (ii) 40 kbps
● (iii) 0.025 ms
● (iv) 0.1 ms
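The four answers can be reproduced with a small helper, under the bit-interleaved, one-bit-per-slot assumption of the problem (function name is ours):

```python
def tdm_one_bit_per_slot(n_channels, rate_kbps):
    """Bit-interleaved synchronous TDM, one bit per slot, no framing bits."""
    bit_ms    = 1.0 / rate_kbps            # (i) duration of 1 bit before muxing
    link_kbps = n_channels * rate_kbps     # (ii) link transmission rate
    slot_ms   = bit_ms / n_channels        # (iii) each output time slot
    frame_ms  = n_channels * slot_ms       # (iv) one slot per channel -> = bit_ms
    return bit_ms, link_kbps, slot_ms, frame_ms

print(tdm_one_bit_per_slot(4, 10))  # -> (0.1, 40, 0.025, 0.1)
```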
Multilevel multiplexing:
● Multilevel multiplexing is a technique used when the data rate of an input line is a multiple
of others.
● For example, in Figure 6.19, we have two inputs of 20 kbps and three inputs of 40 kbps.
● The first two input lines can be multiplexed together to provide a data rate equal to the last
three.
● A second level of multiplexing can create an output of 160 kbps.
Multiple-Slot Allocation:
● Sometimes it is more efficient to allot more than one slot in a frame to a single input line.
● In Figure 6.20, the input line with a 50-kbps data rate can be given two slots in the output.
● We insert a demultiplexer in the line to make two inputs out of one.
Pulse Stuffing:
● Sometimes the bit rates of sources are not integer multiples of each other.
● One solution is to make the highest input data rate the dominant data rate and then add
dummy bits to the input lines with lower rates. This will increase their rates.
● This technique is called pulse stuffing, bit padding, or bit stuffing.
Frame synchronisation:
● Synchronisation between the multiplexer and demultiplexer is a major issue.
● If the multiplexer and the demultiplexer are not synchronised, a bit belonging to one
channel may be received by the other channel.
● For this reason, one or more synchronisation bits are usually added to the beginning of each
frame.
● These bits, called framing bits, follow a particular pattern that allows the
demultiplexer to synchronise with the incoming stream.
Example 3: Consider four sources, each creating 250 8-bit characters per second. If the interleaved
unit is a character and 1 synchronizing bit is added to each frame, find
(a) the data rate of each source,
(b) the duration of each character in each source,
(c) the frame rate,
(d) the duration of each frame,
(e) the number of bits in each frame, and
(f) the data rate of the link
Solution :
We can answer the questions as follows:
A. The data rate of each source is 250 × 8 = 2000 bps = 2 kbps.
B. Each source sends 250 characters per second; therefore, the duration of a character is 1/250
s, or 4 ms.
C. Each frame has one character from each source, which means the link needs to send 250
frames per second to keep the transmission rate of each source.
D. The duration of each frame is 1/250 s, or 4 ms. Note that the duration of each frame is the
same as the duration of each character coming from each source.
E. Each frame carries 4 characters and 1 extra synchronizing bit. This means that each frame is
4 × 8 + 1 = 33 bits.
F. This means that the data rate of the link is 250 × 33, or 8250 bps.
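Answers (a), (c), (e), and (f) of the example follow directly from the frame structure; a quick check (function name is ours):

```python
def char_interleaved_tdm(n_sources, chars_per_sec, bits_per_char=8, sync_bits=1):
    source_bps = chars_per_sec * bits_per_char           # (a) per-source data rate
    frame_rate = chars_per_sec                           # (c) one char per source per frame
    frame_bits = n_sources * bits_per_char + sync_bits   # (e) bits per frame
    link_bps   = frame_rate * frame_bits                 # (f) link data rate
    return source_bps, frame_rate, frame_bits, link_bps

print(char_interleaved_tdm(4, 250))  # -> (2000, 250, 33, 8250)
```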
Spread spectrum:
In spread spectrum (SS), we combine signals from different sources to fit into a larger bandwidth,
but our goals are to prevent eavesdropping and jamming.
Bandwidth spreading:
Frequency hopping spread spectrum (FHSS):
● A pseudorandom code generator, called pseudorandom noise (PN), creates a k-bit pattern
for every hopping period Th.
● The frequency table uses the pattern to find the frequency to be used for this hopping
period and passes it to the frequency synthesizer.
● The frequency synthesizer creates a carrier signal of that frequency, and the source signal
modulates the carrier signal.
Direct sequence spread spectrum (DSSS):
● The direct sequence spread spectrum (DSSS) technique also expands the bandwidth of the
original signal, but the process is different.
● Each bit is assigned a code of n bits, called chips, where the chip rate is n times that of the
data bit.
DSSS example:
● As an example, let us consider the sequence used in a wireless LAN, the famous Barker
sequence, where n is 11.
● The figure shows the chips and the result of multiplying the original data by the chips to get
the spread signal.
● The spreading code is 11 chips having the pattern 10110111000 (in this case).
● If the original signal rate is N, the rate of the spread signal is 11N.
● This means that the required bandwidth for the spread signal is 11 times larger than the
bandwidth of the original signal.
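The spreading step can be sketched as an XOR of each data bit with every chip of the code (the 11-chip pattern is the one given in the notes; multiplying by the chips in a polar representation is equivalent to this XOR in a binary one):

```python
BARKER_11 = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0]   # the 11-chip code from the notes

def dsss_spread(bits, chips=BARKER_11):
    """XOR every data bit with each chip in turn: a 0 passes the code through,
    a 1 inverts it, and the chip rate is len(chips) times the data rate."""
    return [b ^ c for b in bits for c in chips]

spread = dsss_spread([0, 1])
print(len(spread))        # -> 22, i.e. 11N for an N-bit input
```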
Flow control:
● Flow control is a technique for assuring that a transmitting entity does not overwhelm a
receiving entity with data.
● The receiving entity typically allocates a data buffer of some maximum length for a transfer.
● In the absence of flow control, the receiver’s buffer may fill up and overflow while it is
processing old data.
Stop-and-wait flow control:
● The simplest form of flow control is known as stop-and-wait flow control.
● A source entity transmits a frame.
● After the destination entity receives the frame, it sends back an acknowledgment to the
frame just received.
● The source must wait until it receives the acknowledgment before sending the next frame.
Breaking the larger frames:
● Source will break up a large block of data into smaller blocks and transmit the data in many
frames.
● This is done for the following reasons:
○ The buffer size of the receiver may be limited.
○ With smaller frames, errors are detected sooner, and a smaller amount of data
needs to be retransmitted.
○ On a shared medium, such as a LAN, it is undesirable to permit one station to occupy
the medium for an extended period.
Error control:
Error control refers to mechanisms to detect and correct errors that occur in the transmission of
frames.
Stop-and-Wait ARQ:
● Stop-and-wait ARQ is based on the stop-and-wait flow control technique
● The source station transmits a single frame and then must await an acknowledgment (ACK).
No other data frames can be sent until the acknowledgement is received
● Two sorts of errors could occur.
○ First, the frame that arrives at the destination could be damaged.
■ After a frame is transmitted, the source station waits for an
acknowledgment.
■ If no acknowledgment is received by the time that the timer expires, then
the same frame is sent again.
○ The second sort of error is a damaged acknowledgment.
■ If the ACK is damaged, it will not be recognised by the source, which will
therefore time out and resend the same frame.
■ This duplicate frame arrives and is accepted by the destination.
■ To avoid this problem, frames are alternately labelled with 0 or 1, and
positive acknowledgments are of the form ACK0 and ACK1.
■ In keeping with the sliding-window convention, an ACK0 acknowledges
receipt of a frame numbered 1 and indicates that the receiver is ready for a
frame numbered 0.
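The alternating 0/1 numbering and retransmission on timeout can be illustrated with a small simulation. This is a hedged sketch: the `delivered` list of booleans is an invented stand-in for a lossy channel plus the retransmission timer.

```python
# Sketch of stop-and-wait ARQ with alternating sequence numbers (0 and 1).
# delivered[i] says whether the i-th transmission attempt arrived undamaged.

def stop_and_wait_arq(data, delivered):
    """Return the frames accepted by the receiver, in order."""
    attempts = iter(delivered)
    seq = 0                      # sender's current sequence number
    expected = 0                 # sequence number the receiver expects next
    accepted = []
    for payload in data:
        while True:
            ok = next(attempts)          # did this transmission arrive intact?
            if not ok:
                continue                 # timer expires -> retransmit the same frame
            if seq == expected:          # new frame: accept it
                accepted.append(payload)
                expected ^= 1            # now expect the other number
            # a duplicate (seq != expected) is discarded but still acknowledged
            break
        seq ^= 1                         # ACK received; alternate 0 <-> 1
    return accepted

# The first attempt of frame "B" is lost, so it is retransmitted:
print(stop_and_wait_arq(["A", "B"], [True, False, True]))  # ['A', 'B']
```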
Go-back-N ARQ:
Note: Because of the propagation delay on the line, by the time that an acknowledgment (positive
or negative) arrives back at the sending station, it has already sent at least one additional frame
beyond the one being acknowledged.
Selective-Reject ARQ:
● With selective-reject ARQ, the only frames retransmitted are those that receive a negative
acknowledgment, in this case called SREJ, or those that time out.
● Selective reject is more efficient than go-back-N, because it minimises the amount of
retransmission.
● On the other hand, the receiver must maintain a buffer large enough to store frames received out of order until the missing frame is retransmitted.
● The transmitter, too, requires more complex logic to be able to send a frame out of
sequence.
● Because of such complications, selective-reject ARQ is much less widely used than go-back-N
ARQ.
● Note: for a k-bit sequence number field, which provides a sequence number range of 2^k, the maximum window size is limited to 2^(k-1).
Difference between Go-back-N and selective repeat protocol:
HDLC protocol:
High-level Data Link Control (HDLC) is a bit-oriented protocol for communication over point-to-point
and multipoint links.
Note:
Point to point link: A point-to-point link is a communication link established between two devices or
nodes, creating a direct, dedicated connection between them
Multipoint link: A multipoint link, also known as a broadcast or shared medium, is a communication
link that connects multiple devices to a common communication medium or channel
Basic Characteristics:
Unbalanced configuration: Consists of one primary and one or more secondary stations and
supports both full-duplex and half-duplex transmission.
Note:
● Half-duplex:In half-duplex transmission, data can flow in only one direction at a time.
● Full-duplex:In full-duplex transmission, data can flow in both directions simultaneously at
the same time
Balanced configuration: Consists of two combined stations and supports both full-duplex and half-duplex transmission.
● NRM is a communication mode that defines how a primary station and secondary
station interact and exchange data.
Frames in HDLC:
Flag fields:
● Flag fields delimit the frame at both ends with the unique pattern 01111110.
● A single flag may be used as the closing flag for one frame and the opening flag for the next.
Address Field:
● The address field identifies the secondary station that is to receive the frame.
● This field is not needed for point-to-point links but is always included for the sake of
uniformity.
Control fields:
● HDLC defines three types of frames, each with a different control field format.
○ Information frame: Information frames are used to carry user data between the
communicating stations.
○ Supervisory frame: Supervisory frames are used to manage the flow of information
frames and control various aspects of communication.
○ Unnumbered frame: Unnumbered frames are used for various control purposes,
including link establishment, initialization, and termination.
Information Field
● The frame check sequence (FCS) is an error detecting code calculated from the remaining
bits of the frame, exclusive of flags.
HDLC operations:
Link setup and disconnect
● Set asynchronous balanced/ extended mode (SABM, SABME) : (Command) Set mode;
extended = 7-bit sequence numbers
● Unnumbered Acknowledgment (UA) (Response) Acknowledge acceptance of one of the set-
mode commands
● Disconnect (DISC) (Command) Terminate logical link connection
● The N(S) and N(R) fields of the I-frame are sequence numbers that support flow control and
error control
● N(S) is the sequence number of the frame being sent
● N(R) is the acknowledgment for I-frames received
● The receive ready (RR) frame acknowledges the last I-frame received by indicating the next I-
frame expected.
Busy condition:
Reject recovery:
● A transmits I-frame number 3 as the last in a sequence of I-frames. The frame suffers an
error.
● A, however, would have started a timer as the frame was transmitted. This timer has a
duration long enough to span the expected response time.
● Let us assume we have four stations, 1, 2, 3, and 4, connected to the same channel. The data
from station 1 are d1, from station 2 are d2, and so on.
● The code assigned to the first station is c1, to the second is c2, and so on.
● We assume that the assigned codes have two properties.
○ If we multiply each code by another, we get 0.
○ If we multiply each code by itself, we get 4 (the number of stations).
● Station 1 multiplies its data by its code to get d1 ⋅ c1.
● Station 2 multiplies its data by its code to get d2 ⋅ c2, and so on.
● The data that go on the channel are the sum of all these terms, as shown in the box.
● Suppose station 2 wants to receive the data of station 1,It multiplies the data on the channel
by c1, the code of station 1.
● Because (c1 ⋅ c1) is 4 while (c2 ⋅ c1), (c3 ⋅ c1), and (c4 ⋅ c1) are all 0, this multiplication leaves only 4 ⋅ d1; station 2 then divides the result by 4 to get the data from station 1.
(Explain: multiplication of the code with the data, addition of the individual terms, and how a station accesses the data of another station)
Walsh table:
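The two code properties and the recovery step above can be verified with a short sketch, using the rows of a 4×4 Walsh table as the four station codes (data bits encoded as +1/−1; the concrete data values are invented for illustration):

```python
# Illustrative CDMA sketch for the four-station example, with codes taken from
# the rows of a 4x4 Walsh table.

W4 = [
    [1,  1,  1,  1],   # c1
    [1, -1,  1, -1],   # c2
    [1,  1, -1, -1],   # c3
    [1, -1, -1,  1],   # c4
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Property 1: ci . cj = 0 for i != j.  Property 2: ci . ci = 4 (number of stations).
assert dot(W4[0], W4[1]) == 0 and dot(W4[2], W4[2]) == 4

def channel(data):
    """Each station multiplies its bit by its code; the channel carries the sum."""
    return [sum(d * code[k] for d, code in zip(data, W4)) for k in range(4)]

def recover(signal, station):
    """Multiply the channel signal by the wanted station's code and divide by 4."""
    return dot(signal, W4[station]) // 4

d = [1, -1, 1, 1]        # d1..d4, encoded as +1/-1
s = channel(d)
print(recover(s, 0))     # 1  -> d1 recovered by any listening station
```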
Circuit switching:
Illustration:
● When end system A needs to communicate with end system M, system A needs to request a
connection to M that must be accepted by all switches as well as by M itself. This is called
the setup phase.
● A circuit (channel) is reserved on each link, and the combination of circuits or channels
defines the dedicated path.
● After the dedicated path is established, the data-transfer phase can take place.
● After all data has been transferred, the circuits are torn down.
Advantages:
● Dedicated paths
● Fixed bandwidth
● Fixed data rate
● Suitable for long continuous communication
Disadvantage:
Setup Phase:
● Resource Reservation: During this setup phase, a channel is reserved on each
link
● Path Establishment: The combination of these reserved channels creates a
dedicated path between the source and the destination.
Data Transfer Phase:
● Uninterrupted Data Flow: With the established and dedicated path, the actual
data transfer takes place without interruptions
● Continuous Connection: Throughout the data transfer phase, the resources
are dedicated to this communication session hence ensuring continuous
transmission.
Teardown Phase:
● Circuit Release: The circuits that were reserved for this particular
communication session are released.
● Return of Resources: After teardown, the resources that were previously
dedicated to this communication session become available for other
potential communication sessions.
Key features :
1. Dedicated Communication Paths: Space division switching creates dedicated
communication paths between input and output ports.
2. Independence of Channels: Each communication path or channel operates independently.
3. Non-Blocking Architecture: Any input can be connected to any output without conflicts
4. Crossbar Switches: Common implementations of space division switching use crossbar
switches.
Disadvantages:
● The number of crosspoints grows with the square of the number of attached stations. This is
costly for a large switch.
● The loss of a crosspoint prevents connection between the two devices whose lines intersect
at that crosspoint.
● The crosspoints are inefficiently utilised; even when all of the attached devices are active,
only a small fraction of the crosspoints are engaged.
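The first disadvantage is easy to quantify: a full N-input, N-output crossbar needs N × N crosspoints, so doubling the number of stations quadruples the crosspoint count.

```python
# Crosspoint count for a full N x N crossbar grows with the square of N.
def crosspoints(n):
    """Crosspoints needed by a full n-input, n-output crossbar."""
    return n * n

for n in (8, 16, 100, 1000):
    print(n, crosspoints(n))   # 64, 256, 10000, 1000000
```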
● Let us assume the data from input lines I and J are a and b respectively
● This is fed to Time slot interchange system
● The routing logic can rearrange the time slots based on specific routing instructions.
● After the time slots have been rearranged, the TSI system sends them to the output as a new
TDM signal
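The TSI step above can be sketched as a simple permutation of the slots in a TDM frame (the two-slot frame and the routing table here are invented for illustration):

```python
# Rough sketch of a time-slot interchange (TSI) stage: the slots of an incoming
# TDM frame are written into memory and read out in a different order.

def tsi(frame, routing):
    """frame: list of slot contents. routing[k] = input slot emitted in output slot k."""
    return [frame[routing[k]] for k in range(len(frame))]

# Slots from lines I and J carry 'a' and 'b'; this TSI swaps them:
print(tsi(["a", "b"], routing=[1, 0]))   # ['b', 'a']
```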
Softswitch architecture:
● In any telephone network switch, the most complex element is the software that controls
call processing.
● Typically, this software runs on a proprietary processor(Custom designed)that is integrated
with the physical circuit-switching hardware.
● A more flexible approach is to physically separate the call-processing function from the
hardware-switching function.
● In softswitch terminology, the physical-switching function is performed by a media gateway
(MG) and the call processing logic resides in a media gateway controller (MGC).
● Efficient Bandwidth Utilisation: This approach allows for the more efficient use of available
bandwidth because packets can be transmitted as soon as they are ready
● Flexibility: Packet switching is highly adaptable and versatile. It can handle various data
types, including voice, video, and text, with differing bandwidth requirements.
● Scalability: As network traffic grows, additional network nodes and links can be added
without significant disruption.
● Cost-Effective: Packet switching is typically more cost-effective than circuit switching
because it makes efficient use of network resources.
Datagram networks:
● Packets in this approach are referred to as datagrams. Each packet is treated independently
of all others.
● In this example, all four packets (or datagrams) belong to the same message, but may travel
different paths to reach their destination.
● This is so because the links may be involved in carrying packets from other sources and do
not have the necessary bandwidth available to carry all the packets from A to X.
● This approach can cause the datagrams of a transmission to arrive at their destination out of
order with different delays between the packets. Packets may also be lost or dropped
because of a lack of resources.
Note:The datagram networks are sometimes referred to as connectionless networks. The term
connectionless here means that the switch (packet switch) does not keep information about the
connection state. There are no setup or teardown phases. Each packet is treated the same by a
switch regardless of its source or destination.
● Flexibility: Each packet can take its own route to reach the destination. This allows for better
load balancing and redundancy in case of network issues.
● Scalability: It easily scales to accommodate varying network conditions.
● Efficiency: Datagram communication allows for efficient use of network resources.
Note: In networking, overhead refers to the additional data or resources required to support the
communication process beyond the actual user data being transmitted
Virtual-circuit networks:
● As in a circuit-switched network, there are setup and teardown phases in addition to the
data transfer phase.
● As in a circuit-switched network, all packets follow the same path established during the
connection.
● Resources can be allocated during the setup phase, as in a circuit-switched network, or on
demand, as in a datagram network.
● As in a datagram network, data is packetized and each packet carries an address in the
header.
Note : A virtual-circuit network is normally implemented in the data-link layer, while a circuit-
switched network is implemented in the physical layer and a datagram network in the network layer.
But this may change in the future
● Resource Reservation: Resources are reserved during the call setup phase. This enables the
network to allocate bandwidth, ensuring a certain level of service
● Reliable Data Delivery: Similar to circuit-switched networks, VCNs guarantee in-sequence,
error-free packet delivery, which is particularly beneficial for applications such as voice and
video calls.
● Reduced Overhead: The establishment of a virtual circuit minimises the need for extensive
routing information in each packet.
● Circuit switching: the entire data follows the same route
● Datagram: packets can follow any route
● The total delay is due to the time needed to create the connection, transfer
data, and disconnect the circuit.
● The call request signal suffers both propagation delay and processing delay
● Call accept signal suffers only propagation delay as the connection is already
setup
● Data is transferred as an entire block without processing delay at each node
● The acknowledgement signal does not suffer a processing delay
● A virtual circuit is requested using a Call Request packet, which incurs a delay at each node.
● The virtual circuit is accepted with a Call Accept packet. In contrast to the circuit-switching call accept signal, the call accept packet suffers a processing delay at each node.
● The reason is that this packet is queued at each node and must wait its turn for
transmission.
● Once the virtual circuit is established, the message is transmitted in packets.
● In contrast to circuit switching, the acknowledgement packet also suffers a processing delay
at each node.
● Circuit switching: delay during the call setup phase but no delay during data transmission; a busy signal is returned if the called party is busy.
● Datagram: no call setup phase, but there is a delay during data transmission; the sender may be notified if the packet is not delivered.
● Virtual circuit: there is a delay during both call setup and data transmission; the sender is notified of connection denial.
Routing: Determining the best path to carry the packet from source to destination.
● Correctness: The routing mechanism must ensure that packets are delivered accurately to
the intended destination.
● Simplicity: The routing system should be easy to implement and manage.
● Robustness: It should be capable of finding alternative routes in case of failures without
losing data or breaking connections.
● Stability: While reacting to failures, the network should maintain stability, avoiding
oscillations or drastic fluctuations.
● Fairness: The routing strategy should be fair in its treatment of different data flows.
● Optimality: The routing approach aims to optimize (make the best use of) network performance.
● Efficiency: Routing should be performed with minimal overhead.
Routing Strategies :
Routing strategy refers to the set of rules used to determine how data packets are forwarded from
their source to their destination.
1. Fixed routing:
● In fixed routing, a predetermined, unchanging path is established for each source-
destination pair of nodes in a network.
● This configuration is set either statically or changes only when there's a modification in the
network topology(Network topology is the physical or logical arrangement of the nodes and
connections in a network).
● This approach employs a central routing matrix that stores, for every source-destination pair
of nodes, the identity of the next node on the route.
● Routing tables are derived from the central matrix and stored at each node. Each node
stores a single column of the central routing directory, which indicates the next node to take
for each destination.
2. Flooding:
● Key points in the flooding process:
○ Initial Transmission: The source node transmits the packet to all of its neighbours.
○ Propagation: Each receiving node further transmits the packet to its neighbours
except the one from which it received the packet.
○ Duplicates and Counters: To prevent infinite packet circulation, some mechanisms
are necessary. Two common approaches are used:
a. Duplicate Checking: Nodes keep track of the packets they have already
retransmitted. If a duplicate packet arrives, it's discarded.
b. Hop Count Field: Each packet contains a field representing the maximum number
of hops the packet can traverse. Each time a node forwards the packet, it
decrements the hop count by one. When the hop count reaches zero, the packet is
discarded.
1. The label on each packet in the figure indicates the current value of the hop count field in
that packet.
2. A packet is to be sent from node 1 to node 6 and is assigned a hop count of 3.
3. On the first hop, three copies of the packet are created, and the hop count is decremented
to 2.
4. For the second hop of all these copies, a total of nine copies are created.
5. One of these copies reaches node 6, which recognizes that it is the intended destination
and does not retransmit.
6. However, the other nodes generate a total of 22 new copies for their third and final hop.
Each packet now has a hop count of 1.
7. Note that if a node is not keeping track of packet identifiers, it may generate multiple copies
at this third stage.
8. All packets received from the third hop are discarded, because the hop count is exhausted.
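The hop-count mechanism (approach b above, with no duplicate checking) can be simulated on a small graph. The graph here is invented, not the one from the figure.

```python
# Rough simulation of flooding with a hop-count field. Each node forwards the
# packet to all neighbours except the one it arrived from, decrementing the
# hop count; packets whose hop count is exhausted are discarded.

GRAPH = {1: [2, 3, 4], 2: [1, 3, 4], 3: [1, 2, 5],
         4: [1, 2, 5, 6], 5: [3, 4, 6], 6: [4, 5]}

def flood(src, dst, hop_count):
    """Return how many copies of the packet reach dst."""
    delivered = 0
    pending = [(src, None, hop_count)]        # (node, arrived_from, hops left)
    while pending:
        node, arrived_from, hops = pending.pop()
        if node == dst:
            delivered += 1                    # destination does not retransmit
            continue
        if hops == 0:
            continue                          # hop count exhausted: discard
        for nbr in GRAPH[node]:
            if nbr != arrived_from:           # never send back where it came from
                pending.append((nbr, node, hops - 1))
    return delivered

print(flood(1, 6, 3))   # several copies arrive; the extras are duplicates
```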
3. Random routing:
● In random routing, a node selects only one outgoing path for retransmission of an incoming
packet.
● The outgoing link is chosen at random, excluding the link on which the packet arrived.
● If all links are equally likely to be chosen, then a node may simply utilise outgoing links in a
round-robin fashion.
● A refinement of this technique is to assign a probability to each outgoing link and to select
the link based on that probability.
Adaptive routing:
● In virtually all packet-switching networks, some sort of adaptive routing technique is used.
● That is, the routing decisions are changed as conditions on the network change.
● The principal conditions that influence routing decisions are:
○ Failure: When a node or link fails, it can no longer be used as part of a route.
○ Congestion: When a particular portion of the network is heavily congested, it is
desirable to take other routes.
● Advantages:
○ An adaptive routing strategy can improve performance
○ An adaptive routing strategy can aid in congestion control
● Disadvantages:
○ The routing decision is more complex
○ Consume more bandwidth
○ Dynamic routing requires more resources such as CPU, RAM etc
Dijkstra’s Algorithm :
Define:
The algorithm has three steps; steps 2 and 3 are repeated until T = N.
1. Initialization:
a. Start with only the source node in the set of incorporated nodes: T = {s}
b. Set initial path costs to neighbouring nodes as their direct link costs from the source node: L(n) = w(s,n)
c. If a direct link does not exist, then L(n) = ∞
2. Get Next Node:
a. Choose the neighbouring node not in T with the least-cost path from the source
node.
b. Add this node to T and include the edge contributing to the least-cost path from the
current set T to this newly added node.
3. Update Least-Cost Paths:
a. For each node in T, consider if the path cost to a particular node via the newly added
node provides a lower cost than the current known path from the source node.
b. If a lower-cost path is found, update the cost along with the path for that node to
this new minimum.
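The three steps can be sketched directly in Python (the graph and its costs are invented for illustration; a heap is used here to pick the least-cost node in step 2):

```python
import heapq

def dijkstra(graph, s):
    """graph: {node: {neighbour: link cost}}. Returns the least cost L(n) from s."""
    L = {s: 0}                               # step 1: T starts as {s}, L(s) = 0
    pq = [(0, s)]
    in_T = set()
    while pq:
        cost, node = heapq.heappop(pq)       # step 2: least-cost node not yet in T
        if node in in_T:
            continue
        in_T.add(node)
        for nbr, w in graph[node].items():   # step 3: update paths via the new node
            new_cost = cost + w
            if new_cost < L.get(nbr, float("inf")):
                L[nbr] = new_cost
                heapq.heappush(pq, (new_cost, nbr))
    return L

g = {"s": {"a": 2, "b": 5}, "a": {"b": 1, "t": 4}, "b": {"t": 1}, "t": {}}
print(dijkstra(g, "s"))   # {'s': 0, 'a': 2, 'b': 3, 't': 4}
```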
Example:
Bellman–Ford Algorithm:
Define:
● S : Source node.
● w(i,j) : Link cost from node i to node j in the graph
● h : Maximum number of links in a path at the current stage of the algorithm.
● L_h(n) : Cost of the least-cost path from node s to node n under the constraint of no more than h links
Initialization:
Update:
● L_(h+1)(n) is calculated by selecting the minimum of L_h(j) + w(j,n) over all nodes j (excluding the source node s): L_(h+1)(n) = min_j [ L_h(j) + w(j,n) ]
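The initialization and update rule can be sketched as follows (the link costs are invented; each pass of the outer loop corresponds to increasing h by one):

```python
# Sketch of the Bellman-Ford update L_(h+1)(n) = min_j [ L_h(j) + w(j,n) ].

def bellman_ford(nodes, w, s):
    """w: dict {(i, j): cost}. Returns the least cost L(n) from s to every node."""
    INF = float("inf")
    L = {n: (0 if n == s else INF) for n in nodes}   # initialization (h = 0)
    for _ in range(len(nodes) - 1):                  # h = 1 .. N-1 links
        nxt = dict(L)
        for (i, j), cost in w.items():
            if L[i] + cost < nxt[j]:
                nxt[j] = L[i] + cost                 # a better path via node i
        L = nxt
    return L

w = {("s", "a"): 4, ("s", "b"): 1, ("b", "a"): 2, ("a", "t"): 1, ("b", "t"): 6}
print(bellman_ford(["s", "a", "b", "t"], w, "s"))  # {'s': 0, 'a': 3, 'b': 1, 't': 4}
```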
Example:
Unit - 3
Congestion: congestion refers to a situation where the demand for network resources exceeds the
available capacity.
Effects of Congestion:
● The scenario describes a node with multiple I/O ports, each connected to other nodes or end
systems.
● Each port has two buffers: one for incoming packets and one for outgoing packets.
● The node makes routing decisions for incoming packets and moves them to the appropriate
output buffer.
● If packets arrive too quickly for the node to process or faster than they can be cleared from
output buffers, congestion can occur.
In the event of congestion, where memory is exhausted and no buffer space is available, two general
strategies can be adopted:
Packet Discard Strategy:
● The node discards incoming packets for which there is no available buffer space.
● The risk with this strategy is that it may result in the loss of data
Flow Control Strategy:
● This involves signaling neighboring nodes to regulate the rate at which they send
packets.
● However, if a node restrains the flow of packets from one neighbour, it may cause congestion in the output buffer of that neighbour.
Effect of Congestion:
● Packet Loss: Congestion can lead to packet loss, and this loss may impact the accuracy of data transmission.
● Increased Delays: Increased packet processing times affect real-time applications and user
experience.
● Reduced Network Throughput: Congestion can reduce the overall throughput of the
network as resources are consumed in managing and resolving congestion-related issues.
● Propagation of Congestion: Attempts to manage congestion locally may cause congestion in other parts of the network.
Congestion control:
Backpressure:
● When a node in the network becomes congested, it restricts the flow of packets from its
neighbouring nodes.
● This restriction propagates backward along logical connections, reaching the sources of the
data traffic(flow of digital information or data between devices).
● The sources then adapt by slowing down the rate at which they send new packets into the
network.
Choke packets:
➔ Definition:
◆ A choke packet is a special control packet generated by a congested node, such as a router or an end system.
◆ This packet is then sent back to the source node, requesting it to reduce the flow of data traffic to the congested destination.
➔ Example: ICMP Source Quench Packet:
◆ An instance of a choke packet is the ICMP (Internet Control Message Protocol)
Source Quench packet.
◆ This packet is used to request the source node to reduce the flow of data traffic to
the congested destination.
➔ Handling Source Quench Messages:
◆ When a source node receives a source quench message, it is expected to reduce the
rate at which it sends data to the specified destination.
◆ The source continues to adjust its sending rate until it no longer receives source
quench messages.
➔ Limitation:
◆ Choke packets are considered a relatively basic or crude technique for congestion
control.
◆ More sophisticated methods, known as explicit congestion signaling, provide more
efficient ways to manage congestion in modern networks.
● Instead of receiving direct signals that the network is congested, end systems or sources
infer(assume) congestion based on changes in the network's behaviour.
● There are two key indicators of congestion:
○ Increased Transmission Delay
○ Packet Discards
● How Implicit Congestion Signaling Works:
○ End System Detection:
■ End systems or sources in the network detect changes in
transmission characteristics such as delays and packet discards.
○ Autonomous Response:
■ Upon detecting increased delays or packet discards, end systems
autonomously interpret these signals as signs of network
congestion.
○ Flow Reduction:
■ In response to the congestion, end systems automatically reduce the
flow of data they send into the network.
● Unlike implicit congestion signaling, which relies on packet loss to signal congestion, explicit
congestion signaling uses explicit signals to inform end systems about congestion.
1. Backward signaling:
a. Signals are sent from the network back to the source.
b. Backward signaling informs the source of congestion, prompting it to reduce its transmission rate.
2. Forward signaling:
a. Signals are sent from the network to the destination.
b. Forward signaling informs the destination of congestion, prompting it to reduce its reception rate.
1. Header modification: Specific bits in the headers of data packets are modified to convey
congestion information.
2. Control packets: Dedicated control packets are sent solely for congestion signaling.
Traffic management:
Methods used to refine the application of congestion control techniques and discard policy are :
● Fairness
● Quality of service
● Reservation
Fairness:
Quality of service:
Reservations
● Reservations involve the establishment of a contract between the network and users when
forming a logical connection.
● This contract defines the terms for data flow, including parameters like data rate.
● The network commits to providing a certain level of service (QoS) as long as the traffic
adheres to the agreed-upon contract.
● Excess traffic may be handled or discarded, and new reservations may be denied if current
resources are insufficient.
Two important tools in managing networks are traffic shaping and traffic policing.
● Traffic Shaping: Traffic shaping is a network management technique aimed at smoothing out
the flow of network traffic.
● Traffic Policing: Traffic policing is another network management tool that involves
discriminating between incoming packets based on their adherence to a Quality of Service
(QoS) agreement.
Two important techniques that can be used for traffic shaping or traffic policing are :
1. Token bucket
2. Leaky bucket.
Token bucket:
Key Concepts:
Token Bucket:
● A virtual bucket where tokens accumulate over time.
● Tokens are units of permission for sending data.
Token Generation:
● Tokens are generated at a fixed rate, often denoted as 'r' tokens per second.
● Tokens are added to the bucket until it reaches its maximum capacity, 'c' tokens.
Sending Data:
● When a unit of data needs to be sent, the system checks the number of tokens
required (based on data size).
● If there are enough tokens in the bucket, the data is sent.
● The system removes the corresponding number of tokens from the bucket.
● Tokens are subtracted based on the size of the sent data.
● This means that during periods of token shortage, data will be delayed until the
necessary tokens accumulate in the bucket.
Idle Hosts:
Idle Accumulation:
● If a host is not actively sending data, tokens continue to accumulate in the bucket.
● This reflects idle periods where tokens are stored for future use.
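The behaviour described above can be sketched as a small class. This is a hedged sketch under the stated assumptions (tokens generated at rate r up to capacity c, one token per unit of data); the class and method names are invented.

```python
# Token-bucket sketch: tokens accrue at rate r up to capacity c, and a unit of
# data may be sent only if enough tokens are available; otherwise it must wait.

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate            # r: tokens generated per second
        self.capacity = capacity    # c: maximum tokens the bucket can hold
        self.tokens = capacity      # start with a full bucket

    def tick(self, seconds=1):
        """Accumulate tokens over time, including while the host is idle."""
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def send(self, size):
        """Try to send `size` units now; return False if the data must be delayed."""
        if size <= self.tokens:
            self.tokens -= size     # tokens are subtracted based on the data size
            return True
        return False

tb = TokenBucket(rate=2, capacity=10)
print(tb.send(8))    # True  (bucket starts full)
print(tb.send(5))    # False (only 2 tokens left: data must wait)
tb.tick(3)           # 6 more tokens accumulate while waiting
print(tb.send(5))    # True
```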
● Bucket:
○ In the leaky bucket algorithm, there is a virtual "bucket" that can hold a limited
amount of data.
○ This represents the buffer for storing incoming data.
● Input Rate:
○ Data is fed into the bucket at a variable rate.
○ This input rate can be bursty, meaning there may be periods of high data input
followed by periods of low or no input.
● Leak Rate (r):
○ The leaky bucket has a leak rate, denoted as 'r.'
○ It is the maximum rate at which the bucket can release data.
● Bucket Capacity (b):
○ The bucket has a maximum capacity, denoted as 'b.'
○ If the bucket is full and more data arrives, the excess data is discarded.
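The leak-rate behaviour can be sketched as a per-interval simulation (a minimal illustration; the burst pattern and the b and r values are invented):

```python
# Leaky-bucket sketch: bursty input fills a bucket of capacity b, which drains
# at a fixed leak rate r per interval; input that overflows the bucket is lost.

def leaky_bucket(arrivals, b, r):
    """arrivals[i] = data arriving in interval i. Returns (output, discarded)."""
    level, out, lost = 0, [], 0
    for a in arrivals:
        space = b - level
        if a > space:               # bucket full: the excess data is discarded
            lost += a - space
            a = space
        level += a
        sent = min(level, r)        # the bucket leaks at most r per interval
        level -= sent
        out.append(sent)
    return out, lost

# A burst of 9 units is smoothed down to the leak rate of 3 per interval:
print(leaky_bucket([9, 0, 0, 5], b=6, r=3))   # ([3, 3, 0, 3], 3)
```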
Topology : A topology refers to the way in which the components of a network are arranged and
connected
Basic topologies:
Mesh topology
Advantage:
● Robustness
● Privacy and security
Disadvantage
● Cabling complex
● High cost
Bus topology:
Physical Layout:
● Bus (Backbone): The central cable acts as the backbone and connects all the devices
in the network.
● Drop Lines: Devices are connected to the bus through drop lines.
● Taps: Taps are connectors that either splice into the main cable or puncture its
sheathing to establish contact with the metallic core.
Signal Propagation:
● As a signal travels along the backbone, it experiences signal degradation due to
energy transformation into heat.
● This imposes limits on the number of taps a bus can support and the distance
between those taps.
Advantage:
● Reduced cabling
● Cost effective
Disadvantage:
● Signal degradation
● Problems in scalability
Ring topology:
● Point-to-Point Connections:
○ Each device has a point-to-point connection with only the two devices on either side
of it in the ring.
● Signal Transmission:
○ A signal is passed along the ring in one direction.
● Repeaters:
○ Each device in the ring incorporates a repeater, which is responsible for regenerating
and passing along the signal.
Advantage:
● Ease of installation and ease of reconfiguration
● Simple routing
Disadvantage
● Signal degradation
● Problems in scalability
Star topology:
● Point-to-Point Links:
○ Each device has a point-to-point link connecting it to the central hub.
○ Devices do not have direct connections to each other.
● Centralised Controller (Hub or Switch):
○ The central hub or switch acts as an exchange for data.
○ When a device wants to communicate with another device, it sends the data to the
hub, which then transmits it to the intended device.
Advantage:
Disadvantage:
● Dependency on Central Hub: If the hub fails, the entire network is affected
● Cost:The cost of the central hub can be relatively high, especially if it needs to support a
large number of devices.
Functionalities:
● On transmission, assemble data into a frame with address and error-detection fields.
● On reception, disassemble frame and perform address recognition and error detection
● Govern access to the LAN transmission medium.
● Provide an interface to higher layers and perform flow and error control.
These are functions typically associated with OSI layer 2. The set of functions in the last bullet item
are grouped into a logical link control (LLC) layer. The functions in the first three bullet items are
treated as a separate layer, called medium access control (MAC). The separation is done for the
following reasons:
● The logic required to manage access to a shared-access medium is not found in traditional
layer 2 data link control.
● For the same LLC, several MAC options may be provided.
Three services are provided by LLC as alternatives for attached devices using LLC:
1. Unacknowledged Connectionless Service:
a. Operates like connectionless protocols by sending data in discrete units (datagrams).
b. Lacks flow- and error-control mechanisms.
2. Connection-Mode Service:
a. Involves establishing a logical connection before data exchange.
b. Implements flow control and error control mechanisms.
3. Acknowledged Connectionless Service:
a. Datagrams are acknowledged without the need for a pre-established logical
connection.
b. Provides acknowledgment without the overhead of establishing a logical connection
first.
Frame Format:
● The overall frame format includes the MAC frame components such as MAC control,
destination MAC address, source MAC address, LLC PDU and CRC
Address Fields (DSAP and SSAP):
● DSAP (Destination Service Access Point): A 7-bit address specifying the destination
user of the LLC. One bit indicates whether the DSAP is an individual or group
address.
● SSAP (Source Service Access Point): A 7-bit address specifying the source user of the
LLC. One bit indicates whether the PDU is a command or response PDU.
Control Field:
● The LLC control field format is identical to that of HDLC, utilising extended (7-bit)
sequence numbers. This field helps manage the flow and sequencing of data.
Information Field:
● The actual user data is carried in the information field of the LLC PDU.
The MAC layer receives a block of data from the LLC layer and is responsible for performing functions related to medium access and for transmitting the data. In this case, the PDU is referred to as a MAC frame. The fields of the MAC frame are explained as follows:
● MAC Control:
○ Contains protocol control information required for the proper functioning of the
MAC protocol.
● Destination MAC Address:
○ Identifies the intended recipient of the frame, facilitating the delivery of the data to
the correct device.
● Source MAC Address:
○ Identifies the sender of the frame, helping the receiving device to determine the
source of the incoming data.
● LLC (Logical Link Control):
○ Carries the LLC data from the next higher layer in the networking protocol stack.
● CRC (Cyclic Redundancy Check):
○ Also known as the Frame Check Sequence (FCS) field. It serves as an error-detecting code, allowing the receiving device to check for errors or corruption in the frame during transmission
MAC protocols
Many protocols have been devised to handle access to a shared link. All of these protocols belong to
a sublayer in the data-link layer called media access control (MAC). We categorize them into three
groups
● Random-access methods are those methods where no station is given priority, and each
station has the right to transmit data based on a predefined protocol
● This method is termed "random access" because transmission occurs without a scheduled
time, and it is also referred to as a "contention method" because stations compete for
access.
● Collision: A collision occurs when two or more nodes (such as computers or devices)
attempt to transmit data over a shared communication medium simultaneously.
Pure aloha:
● The idea is that each station sends a frame whenever one is available.
● The station checks if it has received an acknowledgment.
● If an acknowledgment is received, the transmission is successful, and the process ends for
this frame.
● If no acknowledgment is received within a specified timeout period, the station proceeds to
handle the collision.
● The station checks if it has exceeded the maximum number of retransmission attempts
(Kmax).
● If the maximum number of attempts is reached, the station gives up on retransmission.
● If the attempts are within the limit, the station proceeds to wait for a random backoff time
(TB) and then retransmits the frame.
Note: The time-out period is equal to the maximum possible round-trip propagation delay, which is
twice the time required for a signal to travel between the two most widely separated stations
(2 × Tp), where Tp is the maximum propagation time.
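The pure-ALOHA sender steps above can be sketched in Python. The names K_MAX, send_frame, wait_for_ack, and sleep are hypothetical hooks, and the binary-exponential choice of backoff is one common option, not mandated by the protocol:

```python
import random

K_MAX = 15        # assumed maximum number of retransmission attempts
T_P = 0.001       # assumed one-way propagation delay in seconds

def aloha_send(frame, send_frame, wait_for_ack, sleep):
    k = 0  # retransmission attempt counter
    while True:
        send_frame(frame)                  # send whenever a frame is ready
        if wait_for_ack(timeout=2 * T_P):  # time-out = max round-trip delay (2 x Tp)
            return True                    # ACK received: transmission successful
        k += 1
        if k > K_MAX:
            return False                   # give up after K_MAX attempts
        # wait a random backoff time TB before retransmitting
        t_b = random.randint(0, 2 ** min(k, 10) - 1) * 2 * T_P
        sleep(t_b)
```

With stub hooks, a frame acknowledged on the third attempt is sent exactly three times, and a frame that is never acknowledged is abandoned after K_MAX + 1 sends.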
Slotted aloha:
● Time is divided into fixed-size slots, each corresponding to the time required to transmit one
frame (Tfr seconds).
● A station is permitted to transmit its frame only at the start of a time slot.
● The vulnerable time, during which collisions can occur, is now equal to the duration of one
time slot (Tfr seconds), i.e. Slotted ALOHA vulnerable time = Tfr.
● Collisions can still occur if two or more stations attempt to transmit at the beginning of the
same time slot.
● If a collision occurs, the stations involved must wait until the next time slot to attempt
transmission again.
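The slotted-ALOHA rule that a station may transmit only at a slot boundary can be sketched as follows (T_FR is an assumed frame time, not a value from the notes):

```python
T_FR = 0.002  # assumed slot length = time to transmit one frame, in seconds

def next_slot_start(now: float) -> float:
    """Return the earliest slot boundary at or after time `now`."""
    slots_elapsed = int(now / T_FR)
    boundary = slots_elapsed * T_FR
    # if we are already exactly on a boundary, transmit now;
    # otherwise wait for the start of the next slot
    return boundary if abs(boundary - now) < 1e-12 else boundary + T_FR

assert next_slot_start(0.0) == 0.0       # on a boundary: send immediately
assert next_slot_start(0.0005) == 0.002  # mid-slot: defer to the next slot
```

Forcing all transmissions to start on slot boundaries is what halves the vulnerable time from 2 × Tfr (pure ALOHA) to Tfr.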
Vulnerable Time
● The vulnerable time for CSMA is equal to the propagation time (Tp).
● During this propagation time, if another station attempts to send a frame, a collision may
occur.
● In the above figure:
○ The grey area in the figure represents the vulnerable area in both time and space.
○ In this area, if any other station attempts to send a frame, there is a risk of collision
with the ongoing transmission from Station A.
Nonpersistent
● In the nonpersistent method, a station that has a frame to send senses the line.
● If the line is idle, it sends immediately.
● If the line is not idle, it waits a random amount of time and then senses the line again.
● The nonpersistent approach reduces the chance of collision because it is unlikely that two or
more stations will wait the same amount of time and retry simultaneously.
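The nonpersistent sensing loop above can be sketched like this; channel_idle and sleep are hypothetical hooks supplied by the caller:

```python
import random

def nonpersistent_wait(channel_idle, sleep, max_wait=0.01):
    """Sense the line; if busy, wait a random time and sense again."""
    busy_sensings = 0
    while not channel_idle():
        # line busy: do NOT keep sensing continuously --
        # wait a random amount of time, then sense again
        sleep(random.uniform(0, max_wait))
        busy_sensings += 1
    return busy_sensings  # how many times the line was found busy
```

The random wait between sensings is exactly what makes it unlikely that two waiting stations retry at the same instant.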
p-persistent
In this method, after the station finds the line idle it follows these steps:
1. With probability p, the station sends its frame.
2. With probability q = 1 − p, the station waits for the beginning of the next time slot and
checks the line again.
a. If the line is idle, it goes to step 1.
b. If the line is busy, it acts as though a collision has occurred and uses the backoff
procedure.
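The two p-persistent steps above can be sketched as a small decision loop; channel_idle and wait_slot are hypothetical hooks, and rng is injectable so the behaviour can be tested deterministically:

```python
import random

def p_persistent(p, channel_idle, wait_slot, rng=random.random):
    """Called once the station has found the line idle."""
    while True:
        if rng() < p:
            return "send"        # step 1: transmit with probability p
        wait_slot()              # step 2: with probability q = 1 - p, wait a slot
        if not channel_idle():   # line busy again: act as though a collision
            return "backoff"     # occurred and use the backoff procedure
        # line still idle: go back to step 1
```

With rng forced below p the station sends immediately; with rng forced above p and a busy line, it falls through to the backoff procedure.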
Carrier sense multiple access with collision detection (CSMA/CD)
● We sense the channel before we start sending the frame by using one of the persistence
methods.
● In CSMA/CD, transmission and collision detection are continuous processes. The station
transmits and receives continuously and simultaneously (using two different ports or a
bidirectional port). We use a loop to show that transmission is a continuous process.
● We constantly monitor in order to detect one of two conditions: either transmission is
finished or a collision is detected. Either event stops transmission.
● When we come out of the loop, if a collision has not been detected, it means that
transmission is complete.
● If a collision is detected, a jamming signal is sent to make sure that all stations are aware of
the collision.
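The transmit-while-listening loop described above can be sketched as follows; send_bit, collision_detected, and send_jam are hypothetical hooks standing in for the physical-layer operations:

```python
def csma_cd_transmit(bits, send_bit, collision_detected, send_jam):
    """Transmit and monitor simultaneously; stop on completion or collision."""
    for bit in bits:
        send_bit(bit)             # transmit and receive at the same time
        if collision_detected():
            send_jam()            # jam so every station learns of the collision
            return "collision"
    # we left the loop without detecting a collision: transmission is complete
    return "done"
```

Returning "collision" would normally be followed by the backoff-and-retry procedure, exactly as in the ALOHA retransmission logic.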
Carrier sense multiple access with collision avoidance (CSMA/CA)
● When an idle channel is found, the station does not send immediately. It waits for a period
of time called the interframe space or IFS.
● After waiting an IFS time, if the channel is still idle, the station can send, but it still needs to
wait a time equal to the contention window
● The contention window is an amount of time divided into slots.
● Next, the station sends a ready-to-send (RTS) frame and waits for a clear-to-send (CTS)
frame.
● After receiving the CTS, it again waits for an IFS time and then finally sends the frame.
● If an acknowledgement is received, the process stops; if no acknowledgement is received,
the station takes steps for retransmission.
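The CSMA/CA sequence above (IFS wait, contention window, RTS/CTS exchange, data, ACK) can be sketched in Python. The channel object `ch` and all of its methods are hypothetical hooks, not a real 802.11 API:

```python
import random

def csma_ca_send(frame, ch, cw_slots=8):
    ch.wait_idle()                    # 1. find the channel idle
    ch.sleep(ch.ifs)                  # 2. wait one interframe space (IFS)
    if not ch.idle():
        return "deferred"             # channel became busy during the IFS
    # 3. wait a random number of contention-window slots
    ch.sleep(random.randrange(cw_slots) * ch.slot_time)
    ch.send("RTS")                    # 4. ready-to-send
    if not ch.wait_for("CTS"):
        return "retry"                # no CTS: take retransmission steps
    ch.sleep(ch.ifs)                  # 5. wait another IFS
    ch.send(frame)                    # 6. finally send the data frame
    return "ok" if ch.wait_for("ACK") else "retry"
```

With a stub channel that always reports idle and always answers CTS/ACK, the station sends exactly the RTS followed by the data frame.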
IEEE 802.3 MAC Frame Format
Preamble:
● A 7-octet pattern of alternating 0s and 1s.
● Used by the receiver to establish bit synchronisation.
Start Frame Delimiter (SFD):
● The sequence is: 10101011.
● Marks the actual start of the frame and helps the receiver locate the first bit of the
frame.
Destination Address (DA):
● Specifies the station(s) for which the frame is intended.
● Can be a unique physical address, a multicast address, or a broadcast address.
Source Address (SA):
● Specifies the station that sent the frame.
Length/Type:
● If the value is less than or equal to 1500 decimal, it indicates the length of the MAC
Client Data field.
● If the value is greater than or equal to 1536 decimal, it indicates the nature of the
MAC client protocol.
● Length and Type interpretations are mutually exclusive.
MAC Client Data:
● Data unit supplied by LLC (Logical Link Control).
● Maximum size is 1500 octets for a basic frame, 1504 octets for a Q-tagged frame,
and 1982 octets for an envelope frame.
Pad:
● Octets added to ensure that the frame is long enough for proper collision detection
(CD) operation.
Frame Check Sequence (FCS):
● A 32-bit cyclic redundancy check (CRC) based on all fields except preamble, SFD, and
FCS.
● Used for error checking.
Extension:
● Added if required for 1-Gbps half-duplex operation.
● Necessary to enforce the minimum carrier event duration in half-duplex mode at an
operating speed of 1 Gbps.
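The mutually exclusive Length/Type interpretation above can be expressed directly in code (the function name and return strings are illustrative, not part of the standard):

```python
def interpret_length_type(value: int) -> str:
    """Apply the 802.3 Length/Type rule: <= 1500 is a length, >= 1536 a type."""
    if value <= 1500:
        return f"length: {value} octets of MAC client data"
    if value >= 1536:
        return f"type: protocol 0x{value:04X}"  # identifies the MAC client protocol
    return "invalid"  # values 1501-1535 match neither interpretation

assert interpret_length_type(100) == "length: 100 octets of MAC client data"
assert interpret_length_type(0x0800) == "type: protocol 0x0800"
```

Because 1536 (0x0600) is above the 1500-octet maximum data size, a receiver can always tell from the value alone which interpretation applies.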
18. With a neat IEEE 802.11 Protocol Architecture, explain DCF and PCF for medium access control
DCF (Distributed Coordination Function):
● Idea: Uses Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) for wireless
networks.
● Operation:
● Station listens, waits for an Interframe Space (IFS) if medium is idle, and transmits if
clear.
● If medium is busy, station defers transmission until it's free.
● After ongoing transmission, station waits, performs exponential backoff, and retries.
● Priority: Uses different IFS values (SIFS, PIFS, DIFS) to prioritize access.
PCF (Point Coordination Function):
● Idea: An optional method built on DCF that adds centralized control through a Point
Coordinator (e.g., an access point).
● Operation:
● Point Coordinator issues polls for controlled access.
● Superframe divides time into a contention-free period (PCF control) and a
contention period (DCF operates).
16. With a flowchart, explain IEEE 802.11 Medium Access Control Logic