Unit-2 Bandwidth Utilization, Transmission Media Switching, Introduction To Data Link Layer

Syllabus

https://e-next.in
Bandwidth Utilization: Multiplexing and Spreading
Bandwidth utilization is the wise use of available bandwidth to achieve specific goals.
There are two broad categories of bandwidth utilization: multiplexing and spreading.
In multiplexing, our goal is efficiency; we combine several channels into one.
In spreading, our goals are privacy and antijamming; we expand the bandwidth of a channel to
insert redundancy, which is necessary to achieve these goals.

MULTIPLEXING
Whenever the bandwidth of a medium linking two devices is greater than the bandwidth
needs of the devices, the bandwidth is wasted. An efficient system maximizes the utilization of all
resources by sharing the link. Multiplexing is the set of techniques that allows the simultaneous
transmission of multiple signals across a single data link.

In a multiplexed system, n lines share the bandwidth of one link. At the sending end, devices
direct their transmission streams to a multiplexer (MUX), which combines them into a single stream
(many-to-one). At the receiving end, that stream is fed into a demultiplexer (DEMUX), which
separates the stream back into its component transmissions (one-to-many) and directs them to their
corresponding lines.
There are three basic multiplexing techniques: frequency-division multiplexing,
wavelength-division multiplexing, and time-division multiplexing. The first two are designed
for analog signals; the third is designed for digital signals.

Frequency-Division Multiplexing
Frequency-division multiplexing (FDM) is an analog technique that can be applied when the
bandwidth of a link (in hertz) is greater than the combined bandwidths of the signals to be
transmitted. In FDM, signals generated by each sending device modulate different carrier
frequencies. These modulated signals are then combined into a single composite signal that can be
transported by the link. Channels are separated by strips of unused bandwidth, called guard bands,
to prevent signals from overlapping.
FDM can also be used to combine sources sending digital signals by converting a digital
signal into an analog signal before FDM is used to multiplex them.

The figure below shows the conceptual view of FDM


Multiplexing Process
Each source generates a signal in a similar frequency range. Inside the multiplexer, these similar
signals modulate different carrier frequencies (f1, f2, and f3). The resulting modulated signals are
then combined into a single composite signal that is sent out over a media link that has enough
bandwidth to accommodate it.

Demultiplexing Process
The demultiplexer uses a series of filters to decompose the multiplexed signal into its
constituent component signals. The individual signals are then passed to a demodulator that
separates them from their carriers and passes them to the output lines.

Applications of FDM
A very common application of FDM is AM and FM radio broadcasting. Radio uses the air as
the transmission medium. A special band from 530 to 1700 kHz is assigned to AM radio. All radio
stations need to share this band, and each AM station needs 10 kHz of bandwidth. Each station uses a
different carrier frequency, which means it is shifting its signal and multiplexing. The signal that
goes to the air is a combination of signals. A receiver receives all these signals but filters out (by
tuning) only the one that is desired. Without multiplexing, only one AM station could broadcast to the
common link, the air.
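As a quick check of the figures above, dividing the whole AM band by the per-station bandwidth gives the maximum number of stations that could share the air (a back-of-the-envelope sketch, not part of any standard):

```python
# Back-of-the-envelope check of the AM-band figures quoted above.
band_start = 530    # kHz, lower edge of the AM band
band_end = 1700     # kHz, upper edge of the AM band
per_station = 10    # kHz of bandwidth each AM station needs

total_bandwidth = band_end - band_start        # 1170 kHz in the AM band
max_stations = total_bandwidth // per_station  # at most 117 stations fit

print(total_bandwidth, max_stations)  # 1170 117
```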

Implementation
FDM can be implemented very easily. In many cases, such as radio and television
broadcasting, there is no need for a physical multiplexer or demultiplexer. As long as the stations
agree to send their broadcasts to the air using different carrier frequencies, multiplexing is achieved.
In a cellular telephone system, a base station needs to assign a carrier frequency to the
telephone user. There is not enough bandwidth in a cell to permanently assign a bandwidth range to
every telephone user. When a user hangs up, her or his bandwidth is assigned to another caller.

The Analog Carrier System


To maximize the efficiency of their infrastructure, telephone companies have traditionally
multiplexed signals from lower-bandwidth lines onto higher-bandwidth lines using FDM.
One of these hierarchical systems used by AT&T is made up of groups, supergroups,
master groups, and jumbo groups, as shown in the figure below.

In this analog hierarchy, 12 voice channels are multiplexed onto a higher-bandwidth line to
create a group. A group has 48 kHz of bandwidth and supports 12 voice channels.
At the next level, up to five groups can be multiplexed to create a composite signal called a
supergroup. A supergroup has a bandwidth of 240 kHz and supports up to 60 voice channels.
Supergroups can be made up of either five groups or 60 independent voice channels.
At the next level, 10 supergroups are multiplexed to create a master group. A master group
must have 2.40 MHz of bandwidth, but the need for guard bands between the supergroups increases
the necessary bandwidth to 2.52 MHz. Master groups support up to 600 voice channels.
Finally, six master groups can be combined into a jumbo group. A jumbo group
must have 15.12 MHz (6 x 2.52 MHz) but is augmented to 16.984 MHz to allow for
guard bands between the master groups.
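The bandwidth figures in this hierarchy can be reproduced from the standard 4-kHz voice-channel allocation (a sketch of the arithmetic only; all values are in kHz):

```python
# Reproducing the AT&T analog-hierarchy bandwidth figures from the text (all in kHz).
voice = 4                     # one voice channel: 4 kHz
group = 12 * voice            # 48 kHz, carries 12 voice channels
supergroup = 5 * group        # 240 kHz, carries up to 60 voice channels
master_min = 10 * supergroup  # 2400 kHz = 2.40 MHz before guard bands
master = 2520                 # kHz = 2.52 MHz after inter-supergroup guard bands
jumbo_min = 6 * master        # 15120 kHz = 15.12 MHz before guard bands

print(group, supergroup, master_min, jumbo_min)  # 48 240 2400 15120
```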

Wavelength-Division Multiplexing
Wavelength-division multiplexing (WDM) is designed to use the high-data-rate capability of
fiber-optic cable. The optical fiber data rate is higher than the data rate of metallic transmission
cable. Using a fiber-optic cable for one single line wastes the available bandwidth. Multiplexing
allows us to combine several lines into one.
WDM is conceptually the same as FDM, except that the multiplexing and demultiplexing
involve optical signals transmitted through fiber-optic channels. Here also we are combining
different signals of different frequencies. The difference is that the frequencies are very high.
Figure below shows WDM

Very narrow bands of light from different sources are combined to make a wider band of
light. At the receiver, the signals are separated by the demultiplexer.
WDM is an analog multiplexing technique to combine optical signals. The combining and
splitting of light sources are easily handled by a prism.
One application of WDM is the SONET network in which multiple optical fiber lines are
multiplexed and demultiplexed.

Time-Division Multiplexing
Time-division multiplexing (TDM) is a digital process that allows several devices to share
the high bandwidth of a link on a time basis. Each connection occupies a portion of time in the
link. Since we are concerned here only with multiplexing, not switching, all the data in a
message from source 1 always go to one specific destination, be it 1, 2, 3, or 4; the delivery is fixed.
Figure below gives a conceptual view of TDM.

There are two types of TDM : synchronous and statistical.


In synchronous TDM, each input connection has an allotment in the output even if it is not sending
data, whereas in statistical TDM, an input line is given a slot only when it has a slot's worth of data
to send.

Synchronous Time Division Multiplexing


In synchronous TDM, the data flow of each input connection is divided into units, where
each input occupies one input time slot. A unit can be 1 bit, one character, or one block of data. Each
input unit becomes one output unit and occupies one output time slot.
In this technique, a round of data units from each input connection is collected into a frame.
If we have n connections, a frame is divided into n time slots and one slot is allocated for each unit,
one for each input line. If the duration of the input unit is T, the duration of each slot is T/n and the
duration of each frame is T.

The data rate of the output link must be n times the data rate of a connection to guarantee the
flow of data.
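These timing relations can be sketched numerically. The values below (n = 3 connections, each sending 1-bit units at 1 kbps) are assumed for illustration:

```python
# Numerical sketch of the synchronous-TDM timing relations stated above,
# assuming (hypothetically) n = 3 connections at 1 kbps each.
import math

n = 3                         # number of input connections
input_rate = 1000             # bits per second on each line (assumed)
T = 1 / input_rate            # duration of one input unit: 1 ms

slot = T / n                  # each output slot lasts T/n
frame = n * slot              # one frame carries one unit per line, so it lasts T
output_rate = n * input_rate  # the output link must run n times faster

assert math.isclose(frame, T)
print(output_rate)  # 3000
```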

Interleaving
TDM can be visualized as two fast-rotating switches, one on the multiplexing side and the
other on the demultiplexing side. The switches are synchronized and rotate at the same speed, but in
opposite directions.
On the multiplexing side, as the switch opens in front of a connection, that connection has the
opportunity to send a unit onto the path. This process is called interleaving.
On the demultiplexing side, as the switch opens in front of a connection, that connection has
the opportunity to receive a unit from the path.
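The two synchronized rotating switches can be mimicked with a toy round-robin sketch; the line contents below are hypothetical:

```python
# Toy round-robin interleaver/deinterleaver mimicking the two synchronized
# rotating switches described above.
def interleave(lines):
    """Build frames by taking one unit from each input line in turn."""
    frames = list(zip(*lines))          # each frame holds one unit per line
    return [unit for frame in frames for unit in frame]

def deinterleave(stream, n):
    """Split the multiplexed stream back into its n component lines."""
    return [stream[i::n] for i in range(n)]

lines = [["A1", "A2"], ["B1", "B2"], ["C1", "C2"]]
stream = interleave(lines)
print(stream)                   # ['A1', 'B1', 'C1', 'A2', 'B2', 'C2']
print(deinterleave(stream, 3))  # [['A1', 'A2'], ['B1', 'B2'], ['C1', 'C2']]
```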

Empty Slots
If a source does not have data to send, the corresponding slot in the output frame is empty.

Data Rate Management
If the data rates of the connections are not the same, we can use one of three strategies:
multilevel multiplexing, multiple-slot allocation, or pulse stuffing.

Multilevel Multiplexing
Multilevel multiplexing is a technique used when the data rate of an input line is a multiple
of the others'. For example, suppose we have two inputs of 20 kbps and three inputs of 40 kbps. The
first two input lines can be multiplexed together to provide a data rate equal to that of each of the
last three. A second level of multiplexing can then create an output of 160 kbps.

Multiple-Slot Allocation
If an input line has a data rate that is a multiple of another input line's, it is more efficient to allot
more than one slot in a frame to that input line. For example, an input line with a 50-kbps data
rate can be given two slots in the output. We insert a serial-to-parallel converter in the line to make
two inputs out of one.

Pulse Stuffing
Pulse stuffing is used when the bit rates of the sources are not integer multiples of each other.
We make the highest input data rate the dominant data rate and then add dummy bits to the input
lines with lower rates, which increases their rates to match. This technique is called pulse stuffing,
bit padding, or bit stuffing. For example, an input with a data rate of 46 kbps is pulse-stuffed to
increase the rate to 50 kbps.
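The arithmetic behind the three strategies can be checked in a few lines (the rates are the ones quoted above; the 25-kbps per-slot figure is derived from splitting the 50-kbps line over two slots):

```python
# Checking the data-rate figures used in the three strategies above (all in kbps).

# Multilevel multiplexing: two 20-kbps lines are pre-multiplexed into one
# 40-kbps stream, then combined with the three 40-kbps lines.
level_one = 2 * 20
output = level_one + 3 * 40
assert output == 160                # final output: 160 kbps

# Multiple-slot allocation: a 50-kbps line given two slots per frame means
# each slot carries the equivalent of a 25-kbps stream.
per_slot = 50 / 2
assert per_slot == 25

# Pulse stuffing: a 46-kbps line is padded with dummy bits up to the
# dominant 50-kbps rate.
stuffing = 50 - 46
print(output, per_slot, stuffing)   # 160 25.0 4
```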

SPREAD SPECTRUM
Multiplexing combines signals from several sources to achieve bandwidth efficiency; the
available bandwidth of a link is divided between the sources.
Spread spectrum (SS) also combines signals from different sources to fit into a larger
bandwidth, but here the goal is secure transmission rather than efficiency. Spread spectrum is
designed to be used in wireless applications (LANs and WANs), where all stations use air (or a
vacuum) as the medium for communication. The goal is that stations must be able to share this
medium without interception by an eavesdropper and without being subject to jamming from a
malicious intruder.

For example: military operations.


To achieve these goals, spread spectrum techniques add redundancy; they spread the original
spectrum needed for each station. If the required bandwidth for each station is B, spread spectrum
expands it to Bss such that Bss » B. The expanded bandwidth allows the source to wrap its message
in a protective envelope for a more secure transmission. The spreading process occurs after the
signal is created by the source.
For example, if we are sending a delicate, expensive gift, we can insert the gift in a special
box to prevent it from being damaged during transportation, and we can use a superior delivery
service to guarantee the safety of the package.
There are two techniques to spread the bandwidth: frequency hopping spread spectrum
(FHSS) and direct sequence spread spectrum (DSSS).

Frequency Hopping Spread Spectrum (FHSS)


The frequency hopping spread spectrum (FHSS) technique uses N different carrier
frequencies that are modulated by the source signal. At one moment, the signal modulates one
carrier frequency; at the next moment, the signal modulates another carrier frequency. However, the
modulation is done using one carrier frequency at a time.

Implementation
A pseudorandom code generator, called pseudorandom noise (PN), creates a k-bit pattern for
every hopping period. The frequency table uses the pattern to find the frequency to be used for this
hopping period and passes it to the frequency synthesizer. The frequency synthesizer creates a
carrier signal of that frequency, and the source signal modulates the carrier signal.
If there are many k-bit patterns and the hopping period is short, a sender and receiver can
have privacy. If an intruder tries to intercept the transmitted signal, she can access only a small
piece of data because she does not know the spreading sequence and cannot quickly adapt to the
next hop. The scheme also has an antijamming effect. A malicious sender may be able to send noise
to jam the signal for one hopping period (randomly), but not for the whole period.
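A toy hop-sequence generator illustrates the mechanism: a pseudorandom k-bit pattern selects one of 2^k carriers each hopping period, and a shared seed keeps both ends synchronized. The frequency table, k = 3, and the use of Python's random module as the PN generator are all assumptions for illustration:

```python
# Toy FHSS hop-sequence sketch: a k-bit pseudorandom pattern picks the
# carrier for each hopping period. Frequencies and seed are hypothetical.
import random

k = 3
frequencies_mhz = [200 + 100 * i for i in range(2 ** k)]  # 8 assumed carriers

def hop_sequence(seed, hops):
    """Both ends seed the PN generator identically, so they hop in step."""
    pn = random.Random(seed)
    return [frequencies_mhz[pn.getrandbits(k)] for _ in range(hops)]

sender = hop_sequence(seed=42, hops=5)
receiver = hop_sequence(seed=42, hops=5)
assert sender == receiver   # shared seed -> synchronized hopping
print(sender)
```

An eavesdropper who does not know the seed (the spreading sequence) cannot predict which carrier comes next.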

The general layout for FHSS

Frequency selection in FHSS

Direct Sequence Spread Spectrum


The direct sequence spread spectrum (DSSS) technique replaces each data bit with n bits
using a spreading code. In other words, each bit is assigned a code of n bits, called chips, where the
chip rate is n times that of the data bit.

Implementation
The original signal is multiplied by the chips to get the spread signal.
Consider the spreading code to be the Barker sequence with 11 chips, having the pattern 10110111000.
We assume that the original signal and the chips in the chip generator use polar NRZ encoding.
If the original signal rate is N, the rate of the spread signal is 11N. This means that the
required bandwidth for the spread signal is 11 times larger than the bandwidth of the original signal.
The spread signal can provide privacy if the intruder does not know the code. It can also provide
immunity against interference if each station uses a different code.
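With polar NRZ, multiplying the signal by the chips is equivalent to XORing each data bit with the chip pattern, so the spreading and despreading can be sketched as follows (the majority-vote despreader is an illustrative choice, not the only possible receiver):

```python
# Spreading a data stream with the 11-chip Barker sequence quoted above.
BARKER = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0]

def spread(bits):
    """Replace each data bit with 11 chips (bit XOR chip, i.e. polar NRZ product)."""
    return [b ^ c for b in bits for c in BARKER]

def despread(chips):
    """Majority-vote each 11-chip group against the code to recover one bit."""
    bits = []
    for i in range(0, len(chips), 11):
        votes = sum(c ^ p for c, p in zip(chips[i:i + 11], BARKER))
        bits.append(1 if votes > 5 else 0)
    return bits

data = [1, 0, 1]
tx = spread(data)
assert len(tx) == 11 * len(data)   # chip rate is 11 times the bit rate
assert despread(tx) == data
```

The majority vote also means the receiver recovers the correct bit even if a few chips per group are corrupted, which hints at the interference immunity mentioned above.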

Switching:
A network is a set of connected devices. Multiple devices are connected by a point-to-point
connection between each pair of devices (a mesh topology) or between a central device and every
other device (a star topology).
These methods, however, are impractical and wasteful when applied to very large networks. The
number and length of the links require too much infrastructure to be cost-efficient, and the majority
of those links would be idle most of the time.
Other topologies employing multipoint connections, such as a bus, are ruled out because the
distances between devices and the total number of devices increase beyond the capacities of the
media and equipment.
A better solution is switching. A switched network consists of a series of interlinked nodes, called
switches. Switches are devices capable of creating temporary connections between two or more
devices linked to the switch. In a switched network, some of these nodes are connected to the end
systems (computers or telephones, for example). Others are used only for routing.

The end systems (communicating devices) are labeled A, B, C, D, and so on, and the
switches are labeled I, II, III, IV, and V. Each switch is connected to multiple links.

Switching can be implemented in three ways: circuit switching, packet switching, and
message switching. The first two are commonly used today. The third has been phased out in general
communications but still has networking applications. Packet-switched networks can further be
divided into two subcategories: virtual-circuit networks and datagram networks.

In message switching, each switch stores the whole message and forwards it to the next
switch. Although we do not see message switching at lower layers, it is still used in some
applications such as electronic mail (e-mail).

CIRCUIT-SWITCHED NETWORKS
A circuit-switched network consists of a set of switches connected by physical links. A
connection between two stations is a dedicated path made of one or more links. However, each
connection uses only one dedicated channel on each link. Each link is normally divided into n
channels by using FDM or TDM

The figure above shows a trivial circuit-switched network with four switches and four links. Each
link is divided into n channels (n is 3 in the figure) by using FDM or TDM.

When end system A needs to communicate with end system M, system A needs to request a
connection to M that must be accepted by all switches as well as by M itself. This is called the setup
phase; a circuit (channel) is reserved on each link, and the combination of circuits or channels
defines the dedicated path. After the dedicated path made of connected circuits (channels) is
established, data transfer can take place. After all data have been transferred, the circuits are torn
down.

Characteristics of Circuit Switching:


- Circuit switching takes place at the physical layer.
- Before starting communication, the stations must make a reservation for the resources to be
used during the communication. These resources, such as channels (bandwidth in FDM and
time slots in TDM), switch buffers, switch processing time, and switch input/output ports,
must remain dedicated during the entire duration of data transfer until the teardown phase.
- Data transferred between the two stations are not packetized (physical layer transfer of the
signal). The data are a continuous flow sent by the source station and received by the
destination station, although there may be periods of silence.
- There is no addressing involved during data transfer; end-to-end addressing is used only
during the setup phase. The switches route the data based on their occupied band (FDM) or
time slot (TDM).

Three Phases
The actual communication in a circuit-switched network requires three phases: connection
setup, data transfer, and connection teardown.

Setup Phase
Before the two parties (or multiple parties in a conference call) can communicate, a dedicated
circuit (combination of channels in links) needs to be established. The end systems are normally
connected through dedicated lines to the switches, so connection setup means creating dedicated
channels between the switches.

For example, in Figure above, when system A needs to connect to system M, it sends a setup
request that includes the address of system M, to switch I. Switch I finds a channel between itself
and switch IV that can be dedicated for this purpose. Switch I then sends the request to switch IV,
which finds a dedicated channel between itself and switch III. Switch III informs system M of
system A's intention at this time.
In the next step to making a connection, an acknowledgment from system M needs to be sent
in the opposite direction to system A. Only after system A receives this acknowledgment is the
connection established.

Data Transfer Phase


After the establishment of the dedicated circuit (channels), the two parties can transfer data.

Teardown Phase
When one of the parties needs to disconnect, a signal is sent to each switch to release the
resources.

Efficiency
It can be argued that circuit-switched networks are not as efficient as the other two types of
networks because resources are allocated during the entire duration of the connection. These
resources are unavailable to other connections.
In a telephone network, people normally terminate the communication when they have
finished their conversation. However, in computer networks, a computer can be connected to another
computer
even if there is no activity for a long time. In this case, allowing resources to be dedicated means that
other connections are deprived.

Delay
Although a circuit-switched network normally has low efficiency, the delay in this type of network
is minimal. During data transfer the data are not delayed at each switch; the resources are allocated
for the duration of the connection. The total delay is due to the time needed to create the connection,
transfer data, and disconnect the circuit.

Packet Switched Networks:


In a packet-switched network, data needs to be divided into packets of fixed or variable size.
The size of the packet is determined by the network and the governing protocol.
In packet switching, there is no resource allocation for a packet. This means that there is no
reserved bandwidth on the links, and there is no scheduled processing time for each packet.
Resources are allocated on demand. The allocation is done on a first-come, first-served basis.
When a switch receives a packet, no matter what the source or destination is, the packet must wait if
there are other packets being processed. This lack of reservation may create delay.

DATAGRAM NETWORKS
In a datagram network, each packet is treated independently of all others. Even if a packet is
part of a multipacket transmission, the network treats it as though it existed alone. Packets in this
approach are referred to as datagrams. Datagram switching is normally done at the network layer.
Figure below shows how the datagram approach is used to deliver four packets from
station A to station X.

In this example, all four packets (or datagrams) belong to the same message, but may travel different
paths to reach their destination. This is so because the links may be involved in carrying packets
from other sources and do not have the necessary bandwidth available to carry all the packets from
A to X. This approach can cause the datagrams of a transmission to arrive at their destination out of
order with different delays between the packets. Packets may also be lost or dropped because of a
lack of resources.
The datagram networks are sometimes referred to as connectionless networks. There are no
setup or teardown phases.

Routing Table
The routing tables are dynamic and are updated periodically. The destination addresses and the
corresponding forwarding output ports are recorded in the tables. This is different from the table of a
circuit-switched network, in which each entry is created when the setup phase is completed and deleted
when the teardown phase is over. The figure below shows the routing table for a switch.

Destination Address
Every packet in a datagram network carries a header that contains, among other information,
the destination address of the packet. When the switch receives the packet, this destination address is
examined; the routing table is consulted to find the corresponding port through which the packet should
be forwarded.

Efficiency
The efficiency of a datagram network is better than that of a circuit-switched network; resources
are allocated only when there are packets to be transferred. If a source sends a packet and there is a delay
of a few minutes before another packet can be sent, the resources can be reallocated during these minutes
for other packets from other sources.

Delay
There may be greater delay in a datagram network than in a virtual-circuit network. Although
there are no setup and teardown phases, each packet may experience a wait at a switch before it is
forwarded.

VIRTUAL-CIRCUIT NETWORKS
A virtual-circuit network is a cross between a circuit-switched network and a datagram network.
It has some characteristics of both.
1. As in a circuit-switched network, there are setup and teardown phases in addition to the data
transfer phase.
2. Resources can be allocated during the setup phase, as in a circuit-switched network, or on
demand, as in a datagram network.
3. As in a datagram network, data are packetized and each packet carries an address in the header.
4. As in a circuit-switched network, all packets follow the same path established during the
connection.
5. A virtual-circuit network is normally implemented in the data link layer, while a circuit-switched
network is implemented in the physical layer and a datagram network in the network layer.

Figure 8.10 is an example of a virtual-circuit network.

Addressing
In a virtual-circuit network, two types of addressing are involved: global and local (virtual-circuit
identifier).

Global Addressing
A source or a destination needs to have a global address, an address that is unique in the
scope of the network, or internationally if the network is part of an international network.

Virtual-Circuit Identifier
The identifier that is actually used for data transfer is called the virtual-circuit identifier
(VCI). A VCI, unlike a global address, is a small number that has only switch scope; it is used by a
frame between two switches. When a frame arrives at a switch, it has a VCI; when it leaves, it has a
different VCI.

STRUCTURE OF A SWITCH

Structure of Circuit Switches


Circuit switching today can use either of two technologies: the space-division switch or
the time-division switch.

Space-Division Switch
In space-division switching, the paths in the circuit are separated from one another spatially.

Crossbar Switch
A crossbar switch connects n inputs to m outputs in a grid, using electronic microswitches
(transistors) at each crosspoint. The major limitation of this design is the number of crosspoints
required: to connect n inputs to m outputs using a crossbar switch requires n x m crosspoints.
For example, to connect 1000 inputs to 1000 outputs requires a switch with 1,000,000
crosspoints. A crossbar with this number of crosspoints is impractical. Such a switch is also inefficient
because less than 25 percent of the crosspoints are in use at any given time. The rest are idle.

Multistage Switch
The solution to the limitations of the crossbar switch is the multistage switch, which
combines crossbar switches in several (normally three) stages.

To design a three-stage switch, we follow these steps:


1. We divide the N input lines into groups, each of n lines. For each group, we use one crossbar
of size n x k, where k is the number of crossbars in the middle stage. In other words, the first
stage has N/n crossbars of n x k crosspoints.
2. We use k crossbars, each of size (N/n) x (N/n) in the middle stage.
3. We use N/n crossbars, each of size k x n at the third stage.
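The three steps above can be checked numerically. The values N = 200, n = 20, k = 4 below are assumed for illustration; the point is how far the multistage total falls below the single-crossbar count:

```python
# Crosspoint count for the three-stage design described above,
# using assumed values N = 200, n = 20, k = 4.
def three_stage_crosspoints(N, n, k):
    stage1 = (N // n) * (n * k)       # N/n crossbars of n x k
    stage2 = k * (N // n) * (N // n)  # k crossbars of (N/n) x (N/n)
    stage3 = (N // n) * (k * n)       # N/n crossbars of k x n
    return stage1 + stage2 + stage3

N, n, k = 200, 20, 4
multistage = three_stage_crosspoints(N, n, k)
single_crossbar = N * N

print(multistage, single_crossbar)  # 2000 40000
```

With these values, the multistage switch needs only 2000 crosspoints against 40,000 for a single 200 x 200 crossbar.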

Time-Division Switch
Time-division switching uses time-division multiplexing (TDM) inside a switch. The most
popular technology is called the time-slot interchange (TSI).
The figure below shows a system connecting four input lines to four output lines. It combines
a TDM multiplexer, a TDM demultiplexer, and a TSI consisting of random access memory (RAM)
with several memory locations. The size of each location is the same as the size of a single time slot.
The number of locations is the same as the number of inputs (in most cases, the numbers of inputs
and outputs are equal).
The RAM fills up with incoming data from time slots in the order received. Slots are then
sent out in an order based on the decisions of a control unit.
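A TSI can be sketched as a buffer written in arrival order and read in the order the control unit dictates. The four-slot frame and the particular output mapping below are hypothetical:

```python
# Toy time-slot interchange: RAM fills in arrival order, and the control
# unit's mapping decides the order in which slots are read back out.
def tsi(frame, output_order):
    ram = list(frame)                      # slots stored as received
    return [ram[i] for i in output_order]  # read out under control-unit order

frame = ["A", "B", "C", "D"]   # units arriving from input lines 1..4
output_order = [2, 3, 0, 1]    # hypothetical mapping: send lines 3, 4 first

print(tsi(frame, output_order))  # ['C', 'D', 'A', 'B']
```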

Time- and Space-Division Switch Combinations


When we compare space-division and time-division switching, some interesting facts
emerge. The advantage of space-division switching is that it is instantaneous. Its disadvantage is the
number of crosspoints required to make space-division switching acceptable in terms of blocking.
The advantage of time-division switching is that it needs no crosspoints. Its disadvantage,
in the case of TSI, is that processing each connection creates delays. Each time slot must be stored
by the RAM, then retrieved and passed on.
In a third option, we combine space-division and time-division technologies to take
advantage of the best of both. Combining the two results in switches that are optimized both
physically (the number of crosspoints) and temporally (the amount of delay). Multistage switches of
this sort can be designed as time-space-time (TST) switches.

Figure below shows a simple TST switch that consists of two time stages and one space stage
and has 12 inputs and 12 outputs. Instead of one time-division switch, it divides the inputs into three
groups (of four inputs each) and directs them to three timeslot interchanges. The result is that the
average delay is one-third of what would result from using one time-slot interchange to handle all 12
inputs. The last stage is a mirror image of the first stage. The middle stage is a space-division switch
(crossbar) that connects the TSI groups to allow connectivity between all possible input and output
pairs.

Structure of Packet Switching:

Data Link Layer

Services:

Error Detection and Correction
Types of Errors
Whenever bits flow from one point to another, they are subject to unpredictable changes
because of interference. This interference can change the shape of the signal. There are two types of
errors: Single-Bit Error and Burst error.

Single-Bit Error
The term single-bit error means that only 1 bit of a given data unit (such as a byte, character,
or packet) is changed from 1 to 0 or from 0 to 1.

Burst Error
The term burst error means that 2 or more bits in the data unit have changed from 1 to 0
or from 0 to 1.
Figure below shows the effect of a burst error on a data unit. In this case, 0100010001000011
was sent, but 0101110101100011 was received. A burst error does not necessarily mean that the
errors occur in consecutive bits. The length of the burst is measured from the first corrupted bit to
the last corrupted bit. Some bits in between may not have been corrupted.

A burst error is more likely to occur than a single-bit error. The duration of noise is normally longer
than the duration of 1 bit, which means that when noise affects data, it affects a set of bits. The
number of bits affected depends on the data rate and duration of noise.
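Using the sent and received strings from the example above, the burst length can be measured directly as the distance from the first corrupted bit to the last:

```python
# Measuring the burst length in the example above: from the first corrupted
# bit to the last, even though some bits in between are intact.
sent     = "0100010001000011"
received = "0101110101100011"

errors = [i for i, (s, r) in enumerate(zip(sent, received)) if s != r]
burst_length = errors[-1] - errors[0] + 1

print(errors, burst_length)  # [3, 4, 7, 10] 8
```

Only four bits are corrupted, yet the burst length is 8, because the measurement runs from the first to the last corrupted position.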

Redundancy
The central concept in detecting or correcting errors is redundancy. To be able to detect or
correct errors, we need to send some extra bits with our data. These redundant bits are added by the
sender and removed by the receiver. Their presence allows the receiver to detect or correct corrupted
bits.

Detection Versus Correction


The correction of errors is more difficult than detection. Error detection is checking the
received data for errors, whereas error correction is fixing the errors.
In error correction, we need to know the exact number of bits that are corrupted and more
importantly, their location in the message. The number of the errors and the size of the message are
important factors. If we need to correct one single error in an 8-bit data unit, we need to consider
eight possible error locations.

Forward Error Correction Versus Retransmission


There are two main methods of error correction. Forward error correction is the process in
which the receiver tries to guess the message by using redundant bits.

Correction by retransmission is a technique in which the receiver detects the occurrence of an
error and asks the sender to resend the message. Resending is repeated until a message arrives that
the receiver believes is error-free.

BLOCK CODING
In block coding, we divide our message into blocks, each of k bits, called datawords. We
add r redundant bits to each block to make the length n = k + r. The resulting n-bit blocks are called
codewords.
With k bits, we can create a combination of 2^k datawords; with n bits, we can create a
combination of 2^n codewords. Since n > k, the number of possible codewords is larger than the
number of possible datawords. The block coding process is one-to-one; the same dataword is always
encoded as the same codeword. This means that we have 2^n - 2^k codewords that are not used. We call
these codewords invalid or illegal.
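The counting argument above can be sketched with illustrative values k = 4 and r = 3 (values assumed, not specified in the text):

```python
k, r = 4, 3                      # dataword bits and redundant bits (illustrative values)
n = k + r                        # codeword length
datawords = 2 ** k               # 2^k = 16 possible datawords
codewords = 2 ** n               # 2^n = 128 possible n-bit words
unused = codewords - datawords   # 2^n - 2^k = 112 invalid (illegal) codewords
print(datawords, codewords, unused)  # 16 128 112
```

Any received word falling in the 112 unused combinations is immediately known to be corrupted.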

Error Detection
The sender creates codewords out of datawords by using a generator that applies the rules
and procedures of encoding. Each codeword sent to the receiver may change during transmission. If
the received codeword is the same as one of the valid codewords, the word is accepted; the
corresponding dataword is extracted for use. If the received codeword is not valid, it is discarded.
However, if the codeword is corrupted during transmission but the received word still matches a
valid codeword, the error remains undetected. This type of coding can detect only single errors. Two
or more errors may remain undetected.
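The generator/checker behaviour described above can be sketched with a hypothetical 2-bit dataword, 3-bit codeword table (an even-parity code; the table values are illustrative, not from the text):

```python
# Hypothetical generator table: 2-bit datawords -> 3-bit codewords (even parity)
encoder = {"00": "000", "01": "011", "10": "101", "11": "110"}
decoder = {c: d for d, c in encoder.items()}

def receive(codeword: str):
    """Return the dataword if the codeword is valid; None signals a detected error."""
    return decoder.get(codeword)

assert receive("011") == "01"   # valid codeword: dataword extracted
assert receive("001") is None   # single-bit error: invalid word, discarded
```

Note that two errors can turn one valid codeword into another (e.g. "011" into "110"), which the checker accepts; this is why two or more errors may remain undetected.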

Error Correction
Error correction is much more difficult than error detection. In error detection, the receiver needs to
know only that the received codeword is invalid; in error correction the receiver needs to find (or
guess) the original codeword sent. Hence we need more redundant bits for error correction than for
error detection.

Cyclic Redundancy Check (CRC)
An error detection mechanism in which a special number is appended to a block of data in order to
detect any changes introduced during storage (or transmission). The CRC is recalculated on
retrieval (or reception) and compared to the value originally transmitted, which can reveal certain
types of error. For example, a single corrupted bit in the data results in a one-bit change in the
calculated CRC, but multiple corrupt bits may cancel each other out.

A CRC is derived using a more complex algorithm than a simple checksum: the data bits are
treated as the coefficients of a polynomial, which is divided by a generator polynomial using
modulo-2 arithmetic; the name comes from the cyclic-shift property of the underlying codes.
CRC is more powerful than VRC and LRC in detecting errors.
• It is not based on binary addition like VRC and LRC. Rather it is based on binary division.
• At the sender side, the data unit to be transmitted is divided by a predetermined divisor
(a binary number) in order to obtain the remainder. This remainder is called the CRC.
• The CRC has one bit less than the divisor. It means that if the CRC is n bits, the divisor is
n + 1 bits.
• The sender appends this CRC to the end of data unit such that the resulting data unit
becomes exactly divisible by predetermined divisor i.e. remainder becomes zero.
• At the destination, the incoming data unit i.e. data + CRC is divided by the same number
(predetermined binary divisor).
• If the remainder after division is zero then there is no error in the data unit & receiver
accepts it.
• If remainder after division is not zero, it indicates that the data unit has been damaged in
transit and therefore it is rejected.
• This technique is more powerful than the parity check and checksum error detection.
• CRC is based on binary division. A sequence of redundant bits, called the CRC or CRC
remainder, is appended to the end of a data unit such as a byte.

Requirements of CRC :
A CRC will be valid if and only if it satisfies the following requirements:
1. It should have exactly one bit less than the divisor.
2. Appending the CRC to the end of the data unit should result in the bit sequence which is
exactly divisible by the divisor.
• The various steps followed in the CRC method are:
1. A string of n 0s is appended to the data unit. The length of the predetermined divisor is n + 1.
2. The newly formed data unit (original data + string of n 0s) is divided by the divisor
using binary division, and the remainder is obtained. This remainder is called the CRC.

3. Now, the string of n 0s appended to the data unit is replaced by the CRC remainder (which is
also n bits).
4. The data unit + CRC is then transmitted to the receiver.
5. The receiver, on receiving it, divides the data unit + CRC by the same divisor and checks the
remainder.
6. If the remainder of the division is zero, the receiver assumes that there is no error in the data
and accepts it.
7. If the remainder is non-zero, there is an error in the data and the receiver rejects it.
• For example, if the data to be transmitted is 1001 and the predetermined divisor is 1011, the
procedure given below is used:
1. A string of 3 zeros is appended to 1001, as the divisor is 4 bits. The newly formed data unit is
1001000.

2. The data unit 1001000 is divided by 1011.

3. During this process of division, whenever the leftmost bit of the dividend or remainder is 0,
a string of 0s of the same length as the divisor is used. Thus in such a step the divisor 1011 is
replaced by 0000.
4. At the receiver side, the data received is 1001110.
5. This data is again divided by the divisor 1011.
6. The remainder obtained is 000; it means there is no error.
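The steps above can be sketched in Python; `mod2_div` is a hypothetical helper name for the binary (modulo-2, XOR-based) division the method describes:

```python
def mod2_div(dividend: str, divisor: str) -> str:
    """Binary (modulo-2) long division of bit strings; returns the remainder."""
    bits = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if bits[i] == "1":               # leading 1: XOR with the divisor
            for j in range(len(divisor)):
                bits[i + j] = str(int(bits[i + j]) ^ int(divisor[j]))
        # leading 0: equivalent to XOR with all zeros, so nothing to do
    return "".join(bits[-(len(divisor) - 1):])

data, divisor = "1001", "1011"
crc = mod2_div(data + "000", divisor)    # sender: append n = 3 zeros, then divide
assert crc == "110"                      # matches the CRC in the worked example
codeword = data + crc                    # transmitted unit: 1001110
assert mod2_div(codeword, divisor) == "000"  # receiver: zero remainder -> no error
```

A corrupted codeword (say, 1011110) leaves a non-zero remainder, so the receiver rejects it.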

• A CRC whose divisor contains the factor (x + 1) can detect all errors that affect an odd number of bits.
• The probability of error detection and the types of detectable errors depends on the choice
of divisor.
• Thus the two major requirements of CRC are:
(a) CRC should have exactly one bit less than divisor.
(b) Appending the CRC to the end of the data unit should result in the bit sequence which is
exactly divisible by the divisor.

Multiple Choice Questions:
The _______ technique uses M different carrier frequencies that are modulated by the source signal.
At one moment, the signal modulates one carrier frequency; at the next moment, the signal modulates
another carrier frequency.

 A) DSSS
 B) FHSS
 C) FDM
 D) TDM

Which multiplexing technique transmits digital signals?

 A) WDM
 B) FDM
 C) TDM
 D) None of the above

In __________, each packet is treated independently of all others.

 A) circuit switching
 B) datagram switching
 C) frame switching
 D) none of the above

In a three-stage space division switch, if N = 200, the number of crosspoints is ______.

 A) 40,000
 B) less than 40,000
 C) greater than 40,000
 D) greater than 100,000

In cyclic redundancy checking, the divisor is _______ the CRC.

 A) one bit less than
 B) one bit more than
 C) the same size as
 D) none of the above

Which error detection method consists of just one redundant bit per data unit?

 A) CRC
 B) Checksum
 C) Simple parity check
 D) Two-dimensional parity check

A simple parity-check code can detect __________ errors.

 A) an odd number of
 B) an even number of
 C) two
 D) no error

The _____ of errors is more difficult than the ______.

 A) detection; correction
 B) correction; detection
 C) creation; correction
 D) creation; detection

FDM is an _________technique.

 A) digital
 B) analog
 C) either (a) or (b)
 D) none of the above

Which multiplexing technique involves signals composed of light beams?

 A) WDM
 B) FDM
 C) TDM
 D) none of the above

GQ- Unit-2

1 State the need of multiplexing.


2 Define multiplexing.
3 List and explain in brief the types of multiplexing techniques.
4 Explain TDM and its types .
5 How is WDM different from FDM?
6 Explain frequency hopping spread spectrum.
7 Distinguish between link and channel in multiplexing.
8. Define DSSS and explain how it achieves bandwidth spreading.
9 What is the position of the transmission media in the OSI or the Internet model?
10 Name the two major categories of transmission media. How do guided media differ
from unguided media?
11 What is the function of the twisting in twisted-pair cable?
12 Name the advantages of optical fiber over twisted-pair and coaxial cable.
13 What is the difference between omnidirectional waves and unidirectional waves?
14 Write short note on Infrared transmission.
15 State the need of switching. State its categories.
16 Explain Circuit switched network.
17 Discuss the terms efficiency and delay in switching techniques.
18 Compare space-division and time-division switches.
19 List four major components of a packet switch and their functions.
20 Distinguish between communication at the network layer and communication at the
data-link layer.
21 Distinguish between a point-to-point link and a broadcast link.
22 Write a short note on ARP.
23 List and explain the types of addresses.
24 Explain the term error and its types.
25 Explain with an example block coding techniques.
26 Explain with an example CRC mechanism.
27 Explain checksum mechanism with an example.
28 In CRC, if the dataword is 5 bits and the code word is 8 bits, how many 0s need to be
added to the dataword to make the dividend? What is the size of the remainder? What
is the size of the divisor?
29 A category of error detecting (and correcting) code, called the Hamming code, is a code
in which dmin = 3. This code can detect up to two errors (orcorrect one single error). In
this code, the values of n, k, and r are related as:
n = 2r − 1 and k = n − r. Find the number of bits in the dataword and the codewords if r
is 3.
