CSC 422 Part 1
INFORMATION THEORY
COMPILED BY
UNIVERSITY OF ILORIN
2020/2021 SESSION
1. DATA COMMUNICATION AND INFORMATION THEORY
Shannon's diagram of a general communications system, which shows the process that produces a
message.
Shannon's 1948 article "A Mathematical Theory of Communication" was the founding work of the
field of information theory. It was later published in 1949
as a book titled The Mathematical Theory of Communication (ISBN 0-252-72546-8), which was
published as a paperback in 1963 (ISBN 0-252-72548-4). The book contains an additional article
by Warren Weaver, providing an overview of the theory for a more general audience. Shannon's
article laid out the basic elements of communication:
• An information source that produces a message
• A transmitter that operates on the message to create a signal which can be sent through a
channel
• A channel, which is the medium over which the signal, carrying the information that
composes the message, is sent
• A receiver, which transforms the signal back into the message intended for delivery
• A destination, which can be a person or a machine, for whom or which the message is
intended
It also developed the concepts of information entropy and redundancy, and introduced the term bit
as a unit of information.
1.2 Introduction to Data communication
Data: Data refers to information or a message present in a form agreed upon by the user and the
creator of the data (mostly digital data).
Data Communication: the exchange of data between two devices via some form of transmission
medium.
Message or Signal: an electrical or electromagnetic wave sent through a medium from one point to
another, carrying an encoded message. A message can be in the form of sound, text, numbers,
pictures, video or a combination of these.
Sender: a device that sends the message, e.g. a computer, workstation, video camera or
telephone.
Medium: the physical path over which data travels from sender to receiver.
Receiver: a device that receives the message, e.g. a computer, TV receiver, workstation,
telephone receiver or radio receiver.
Protocol: a protocol is defined as the set of rules that governs data communication. The
connection of two devices takes place via the communication medium, but the actual
communication between them takes place with the help of a protocol.
Information theory studies the transmission, processing, utilization, and extraction of information.
Abstractly, information can be thought of as the resolution of uncertainty. In the case of
communication of information over a noisy channel, this abstract concept was made concrete in
1948 by Claude Shannon in his paper "A Mathematical Theory of Communication", in which
"information" is thought of as a set of possible messages, where the goal is to send these messages
over a noisy channel, and then to have the receiver reconstruct the message with low probability
of error, in spite of the channel noise.
A key measure in information theory is "entropy". Entropy quantifies the amount of uncertainty
involved in the value of a random variable or the outcome of a random process. For example,
identifying the outcome of a fair coin flip (with two equally likely outcomes) provides less
information (lower entropy) than specifying the outcome from a roll of a die (with six equally
likely outcomes). Some other important measures in information theory are mutual information,
channel capacity, error exponents and relative entropy.
Information theory often concerns itself with measures of information of the distributions
associated with random variables. Important quantities of information are entropy, a measure of
information in a single random variable, and mutual information, a measure of information in
common between two random variables. The former quantity is a property of the probability
distribution of a random variable and gives a limit on the rate at which data generated by
independent samples with the given distribution can be reliably compressed. The latter is a
property of the joint distribution of two random variables, and is the maximum rate of reliable
communication across a noisy channel in the limit of long block lengths, when the channel
statistics are determined by the joint distribution.
1.3.1 Entropy
If 𝕏 is the set of all messages {x1, …, xn} that X could be, and p(x) is the probability of some
x ∈ 𝕏, then the entropy, H, of X is defined:

H(X) = 𝔼X[I(x)] = − Σx∈𝕏 p(x) log2 p(x)

(Here, I(x) = −log2 p(x) is the self-information, which is the entropy contribution of an individual
message, and 𝔼X is the expected value.) A property of entropy is that it is maximized when all the
messages in the message space are equiprobable, p(x) = 1/n (i.e., most unpredictable), in which
case H(X) = log2 n.
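A small numeric sketch of this definition (Python; the function name is illustrative, and logarithms are taken base 2 so entropy comes out in bits):

```python
import math

def entropy(probs):
    """Shannon entropy H(X) = -sum p(x) * log2 p(x), in bits.
    Terms with p(x) == 0 contribute nothing (the limit of p log p is 0)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin: two equiprobable outcomes -> H = log2(2) = 1 bit
print(entropy([0.5, 0.5]))    # 1.0
# A fair die: six equiprobable outcomes -> H = log2(6), about 2.585 bits
print(entropy([1/6] * 6))
```

This also illustrates the maximum-entropy property: for n equiprobable messages the function returns exactly log2 n.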
1.3.2 Measures of Information theory
Mutual Information
Mutual information (MI) of two random variables is a measure of the mutual dependence
between the two variables. More specifically, it quantifies the "amount of information" (in units
such as bits) obtained about one random variable, through the other random variable. The concept
of mutual information is intricately linked to that of entropy of a random variable, a fundamental
notion in information theory, that defines the "amount of information" held in a random variable
Venn diagram for various information measures associated with correlated variables X and Y. The
area contained by both circles is the joint entropy H(X,Y). The circle on the left (red and violet) is
the individual entropy H(X), with the red being the conditional entropy H(X|Y). The circle on the
right (blue and violet) is H(Y), with the blue being H(Y|X). The violet is the mutual information
I(X;Y).
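The identity suggested by the Venn diagram, I(X;Y) = H(X) + H(Y) − H(X,Y), can be checked numerically with a sketch (Python; function names are illustrative only):

```python
import math

def entropy(probs):
    """Shannon entropy in bits; zero-probability terms contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), computed from a joint distribution
    p(x,y) given as a 2-D list (rows = values of X, columns = values of Y)."""
    px = [sum(row) for row in joint]             # marginal of X
    py = [sum(col) for col in zip(*joint)]       # marginal of Y
    hxy = entropy([p for row in joint for p in row])
    return entropy(px) + entropy(py) - hxy

# Independent binary variables share no information: I(X;Y) = 0
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0
# Perfectly correlated bits: I(X;Y) = H(X) = 1 bit
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # 1.0
```

In Venn-diagram terms, the violet overlap is what remains after subtracting the joint entropy from the sum of the individual entropies.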
Channel Capacity
Channel capacity is the tight upper bound on the rate at which information can be reliably
transmitted over a communication channel.
By the noisy-channel coding theorem, the channel capacity of a given channel is the limiting
information rate (in units of information per unit time) that can be achieved with arbitrarily small
error probability.
Error Exponent
In information theory, the error exponent of a channel code or source code over the block length
of the code is the logarithm of the error probability. For example, if the probability of error of a
decoder drops as e−nα, where n is the block length, the error exponent is α
Relative entropy
Relative entropy is a measure of the difference between two probability distributions P and Q. It
is not symmetric in P and Q. In applications, P typically represents the "true" distribution of data,
observations, or a precisely calculated theoretical distribution, while Q typically represents a
theory, model, description, or approximation of P.
2. SIGNALS
A signal is an electrical transmission of alternating current (AC) on network cabling that is generated
by a networking component such as a network interface card (NIC). An electromagnetic signal can
also be transmitted through air or vacuum, for example from a ground station to a satellite or from
an antenna to a mobile device. Signals can be either analog or digital.
An example of analog data is the human voice. When somebody speaks, a continuous wave is
created in the air. Analog data --voice, video --continuously varying patterns of different intensity
(amplitude). Analog signal can be classified as simple or composite. A simple analog signal, a
sine wave, cannot be decomposed into simpler signals. A composite analog signal is composed of
multiple sine waves. Three characteristics, namely amplitude, frequency and phase, fully
describe a sine wave.
A digital signal is discrete. It has only a limited number of definite discrete values, such as 1 and 0. An
example of digital data is data stored in memory of a computer in the form of 0's and 1's. For
example, a 1 can be encoded as a positive voltage and a 0 as zero voltage. Digital data --text,
digitized images --takes discrete values, usually binary (0,1). Example of digitized text is the
ASCII code. Extended 8-bit ASCII gives 256 patterns, covering upper- and lower-case
characters, the digits 0-9, special characters and some "control" characters used in
communication. Bit interval and baud rate are used to describe digital signals.
Bit Interval
The bit interval is the time required to send one single bit. The bit rate is the number of bit intervals
per second. This means that the bit rate is number of bits sent in one second, usually expressed in
bits per second (bps).
Baud rate
Baud rate refers to the number of signal units (signal elements) per second used to represent the bits.
Baud rate is less than or equal to the bit rate.
Baud rate and bit rate differ because they describe related but distinct quantities. The baud rate
measures how many signal units are transmitted per second, while the bit rate measures the data
transmitted (which might include error-correcting codes, framing and frame/packet numbers,
etc.).
Example 1
A signal carries three bits in each signal element. If 1200 signal elements are sent per second, find
the baud rate and the bit rate.
Solution
Baud rate = Number of signal elements per second = 1200 baud
Bit rate = baud rate × Number of bits per signal element
= 1200 × 3
= 3600 bps
Example 2
The bit rate of a signal is 2000. If each signal element carries five bits, what is the baud rate?
Solution
Baud rate = Bit rate / Number of bits per signal element
= 2000 / 5
= 400 baud
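The two worked examples above reduce to one pair of formulas, sketched here (Python; the function names are illustrative):

```python
def bit_rate(baud, bits_per_element):
    """Bit rate = baud rate x bits carried per signal element."""
    return baud * bits_per_element

def baud_rate(bit_rate, bits_per_element):
    """Baud rate = bit rate / bits carried per signal element."""
    return bit_rate / bits_per_element

# Example 1: 1200 signal elements/s, 3 bits each -> 3600 bps
print(bit_rate(1200, 3))   # 3600
# Example 2: 2000 bps, 5 bits per element -> 400 baud
print(baud_rate(2000, 5))  # 400.0
```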
3. FOURIER ANALYSIS
Fourier analysis is a method of describing periodic waveforms in terms of trigonometric functions.
The method gets its name from the French mathematician and physicist Jean-Baptiste Joseph
Fourier, who lived during the 18th and 19th centuries. Fourier analysis is used in
electronics, acoustics, and communications.
Many waveforms consist of energy at a fundamental frequency and also at harmonic frequencies
(multiples of the fundamental). The relative proportions of energy in the fundamental and the
harmonics determines the shape of the wave. The wave function (usually amplitude, frequency,
or phase versus time) can be expressed as a sum of sine and cosine functions called a Fourier
series, uniquely defined by constants known as Fourier coefficients. If these coefficients are
represented by a0, a1, a2, a3, ..., an, ... and b1, b2, b3, ..., bn, ..., then the Fourier series F(x),
where x is an independent variable (usually time), has the following form:

F(x) = a0/2 + a1 cos x + b1 sin x + a2 cos 2x + b2 sin 2x + ... + an cos nx + bn sin nx + ...

In Fourier analysis, the objective is to calculate the coefficients a0, a1, a2, a3, ..., an and b1, b2,
b3, ..., bn up to the largest possible value of n. The greater the value of n (that is, the more terms
in the series whose coefficients can be determined), the more accurate is the Fourier-series
representation of the waveform.
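As an illustration of how adding terms improves the fit, the classic square wave of amplitude 1 has the odd-harmonic series f(x) = (4/π)(sin x + sin 3x/3 + sin 5x/5 + ...); a partial-sum sketch (Python):

```python
import math

def square_wave_partial_sum(x, n_terms):
    """Partial Fourier series of an odd square wave of amplitude 1:
    f(x) = (4/pi) * sum over odd k of sin(k x)/k.
    The more terms kept, the closer the sum is to the square wave."""
    return (4 / math.pi) * sum(
        math.sin(k * x) / k for k in range(1, 2 * n_terms, 2))

# At x = pi/2 the square wave equals 1; the partial sums approach it.
for n in (1, 10, 100):
    print(n, square_wave_partial_sum(math.pi / 2, n))
```

With one term the value overshoots (4/π ≈ 1.27); with more terms it converges toward 1, matching the statement that larger n gives a more accurate representation.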
4. DATA TRANSMISSION
4.1 Analog VERSUS Digital transmission
ANALOG TRANSMISSION --a means of transmitting ONLY analog signals.
• Data can be analog or digital; signal is always analog.
• Propagation can be over a guided medium (wire, coaxial cable, optical fiber) or an
unguided medium (space, atmosphere).
• Analog signal will become weaker in signal strength (attenuate) over distance and
will be impaired by noise.
• An AMPLIFIER will boost the energy of the signal but also the noise. Noise is an
undesirable random electrical transmission on network cabling that is generated by
networking components such as network interface cards (NICs) or induced in cabling
by proximity to electrical equipment that generates electromagnetic interference (EMI).
• No coding is possible and thus no self-error correction is possible.
DIGITAL TRANSMISSION --a means of transmitting both digital and analog signals. Usually
assume that the signal is carrying digital (or digitized) data.
• Digital transmission can propagate to a limited distance before attenuation distorts
the signal and compromises the data integrity.
• A REPEATER retrieves the (digital) signal; recovers the (digital) data, e.g., a pattern
of 1's and 0's; and retransmits a new signal. Digital transmission is the preferred method for
several reasons :
• Equipment used for digital transmission is cheaper as compared to analog transmission.
• Repeaters, which recover the data and retransmit, are preferred over amplifiers,
which boost both signal and noise.
• Errors are not cumulative and so it is possible to transmit over longer distances, using
lower quality guided medium with better data integrity.
• Multiplexing -transmission links have high bandwidth and must propagate multiple
signals simultaneously to utilize the bandwidth. In digital transmission, time-
division multiplexing is used. Signals share the same medium over different time slots.
This is easier than analog transmission where the analog signals occupy different frequency
spectrum (frequency-division).
• Encryption of signal is possible for security and privacy.
• Coding is possible and self-error correction is possible.
Two theoretical formulas were developed to calculate the data rate : one by Nyquist for a noiseless
channel, another by Shannon for a noisy channel.
For a noiseless channel, the Nyquist bit rate formula defines the theoretical maximum bit rate as:

BitRate = 2 × Bandwidth × log2 L

where L is the number of signal levels used to represent the data.
Example 3
Consider a noiseless channel with a bandwidth of 2000 Hz transmitting a signal with two signal
levels. Calculate the bit rate.
Solution
Bitrate= 2 × 2000 × log2 2 = 4000 bps
Example 4
Consider the same noiseless channel, transmitting a signal with four signal levels.
Solution
Bitrate = 2 × 2000 × log2 4 = 8000 bps.
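Both examples follow from the same formula; a minimal sketch (Python, assuming the Nyquist bit rate formula BitRate = 2 × Bandwidth × log2 L):

```python
import math

def nyquist_bit_rate(bandwidth_hz, levels):
    """Nyquist maximum bit rate for a noiseless channel:
    BitRate = 2 * Bandwidth * log2(L), where L is the number of signal levels."""
    return 2 * bandwidth_hz * math.log2(levels)

print(nyquist_bit_rate(2000, 2))  # 4000.0 bps (Example 3)
print(nyquist_bit_rate(2000, 4))  # 8000.0 bps (Example 4)
```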
In reality, there cannot be a noiseless channel; the channel is always noisy. Claude Shannon
introduced a formula, called the Shannon capacity, to determine the theoretical highest data rate
for a noisy channel :
Capacity = Bandwidth × log2 (1 + SNR)
Example 5
Calculate the channel capacity of telephone line using Shannon formula.
Solution
A telephone line has a bandwidth of 3000 Hz (300 Hz to 3300 Hz). The signal-to-noise ratio is
usually 3162. For this channel the capacity is:
Capacity = Bandwidth × log2 (1 + SNR)
= 3000 × log2 (1 + 3162)
= 3000 × log2 (3163)
= 3000 × 11.62
= 34,860 bps
That is, the highest bit rate for a telephone line is about 34.86 kbps. To send data faster than
this, either the bandwidth of the line must be increased or the signal-to-noise ratio improved.
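The telephone-line calculation can be checked with a short sketch (Python; the small difference from 34,860 bps comes from the worked example rounding log2 3163 to 11.62):

```python
import math

def shannon_capacity(bandwidth_hz, snr):
    """Shannon capacity C = Bandwidth * log2(1 + SNR) for a noisy channel.
    SNR here is a plain power ratio, not decibels."""
    return bandwidth_hz * math.log2(1 + snr)

# Telephone line: 3000 Hz bandwidth, SNR of 3162 (about 35 dB)
c = shannon_capacity(3000, 3162)
print(round(c))  # about 34,881 bps, i.e. roughly 34.9 kbps
```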
In serial transmission one bit follows another, so we need only one communication channel or wire
rather than n to transmit data between two communicating devices. Serial transmission is possible
in one of two ways: synchronous and asynchronous.
Synchronous Data
Synchronous data require a coherent clocking signal between transmitter and receiver,
called a data clock, to synchronize the interpretation of the data sent and received. The
data clock is extracted from the serial data stream at the receiver by special circuits called clock
recovery circuits.
Once the clock is recovered at the receiving end, bit and character synchronization can be
established. Bit synchronization requires that the high and low condition of the binary data sent
matches that received and is not in an inverted state. Character synchronization implies that the
beginning and end of a character word is established so that these characters can be decoded and
defined. Overall the clock recovered from the message stream itself maintains synchronization.
Figure 4.5(a) shows how a synchronous binary transmission would send the ASCII character E
(hex 45 or 1000101). The least significant bit (LSB) is transmitted first, followed by the remaining
bits of the character. There are no additional bits added to the transmission.
With synchronous transmission, a block of bits is transmitted in a steady stream without start and
stop codes. The block may be many bits in length. To prevent timing drift between transmitter and
receiver, their clocks must somehow be synchronized. One possibility is to provide a separate
clock line between transmitter and receiver. One side (transmitter or receiver) pulses the line
regularly with one short pulse per bit-time. The other side uses these regular pulses as a clock. This
technique works well over short distances, but over longer distances the clock pulses are subject
to the same impairments as the data signal, and timing errors can occur. The other alternative is to
embed the clocking information in the data signal; for digital signals, this can be accomplished
with Manchester or Differential Manchester encoding. For analog signals, a number of techniques
can be used; for example, the carrier frequency itself can be used to synchronize the receiver based
on the phase of the carrier.
With synchronous transmission, there is another level of synchronization required to allow the
receiver to determine the beginning and end of a block of data; to achieve this, each block begins
with a preamble bit pattern and generally ends with a postamble bit pattern.
Figure 4.6 shows, in general terms, a typical frame format for synchronous transmission.
Typically, the frame starts with a preamble called a flag, which is eight bits long. The same flag is
used as a postamble. The receiver looks for the occurrence of the flag pattern to signal the start of
a frame. This is followed by some number of control fields, then a data field (variable length for
most protocols), more control fields, and finally the flag is repeated.
For sizable blocks of data, synchronous transmission is far more efficient than asynchronous.
Asynchronous transmission requires 20 percent or more overhead. The control information,
preamble, and postamble in synchronous transmission are typically less than 100 bits. For
example, one of the more common schemes, HDLC, contains 48 bits of control, preamble, and
postamble. Thus, for a 1000-character block of data, each frame consists of 48 bits of overhead
and 1000 × 8 = 8,000 bits of data, for a percentage overhead of only 0.6%.
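The two overhead figures (0.6% for a synchronous HDLC-style frame versus 20% for asynchronous characters) can be reproduced with a sketch (Python; helper names are illustrative):

```python
def sync_overhead_pct(control_bits, data_chars, bits_per_char=8):
    """Overhead percentage for a synchronous frame: a fixed number of
    control/flag bits spread over the whole data payload."""
    data_bits = data_chars * bits_per_char
    return 100 * control_bits / (control_bits + data_bits)

def async_overhead_pct(framing_bits, bits_per_char=8):
    """Overhead percentage for asynchronous transmission: framing bits
    (start + stop) added to every single character."""
    return 100 * framing_bits / (framing_bits + bits_per_char)

# HDLC-style frame: 48 control bits over 1000 8-bit characters
print(sync_overhead_pct(48, 1000))   # about 0.6%
# Asynchronous: 1 start + 1 stop bit per 8-bit character
print(async_overhead_pct(2))         # 20.0%
```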
Asynchronous Data
Asynchronous data formats incorporate the use of framing bits to establish the beginning (start bit)
and ending (stop bit) of a data character word as shown in Figure 4.5 (b). A clocking signal is not
recovered from the data stream, although the internal clocks of the transmitter and receiver must
be the same frequency for data to be correctly received. To understand the format of an
asynchronous character, it is first necessary to be aware of the state of the transmission line when
it is idle and no data is being sent. The idle condition results from the transmission line being held
at a logic 1, high state, or mark condition. The receiver responds to a change in the state of the line
as an indication that data has been sent to it. This change of state is indicated by the line going low
or logic 0, caused by the transmission of a start bit at the beginning of the character transmission
as shown in Figure 4.5 (b). Data bits representing the code of the character being sent follow next
ending with one or two stop bits. The stop bits actually specify the minimum time the line must
return to logic 1 condition before the receiver can detect the next start bit of the next character.
Asynchronous transmission is simple and cheap but requires an overhead of two to three bits per
character. For example, for an 8-bit code, using a 1-bit-long stop bit, two out of every ten bits
convey no information but are there merely for synchronization; thus the overhead is 20%. Of
course, sending larger blocks of bits between the start and stop bits could reduce the percentage
overhead.
A more efficient stream of data takes less time to transmit simply because there are fewer bits
to be sent. However, the overall efficiency of a transmission relies on more than the efficiency of
individual characters within a message. For asynchronous data, the entire message retains the
per-character efficiency (roughly 70-80%, depending on the number of framing bits), because no
additional bits or overhead beyond the framing bits are required to send the data. Bit and
character synchronization are built into the framing bits.
Synchronous data, on the other hand, requires a preamble message, which is a set pattern of binary
ones and zeros used to facilitate clock recovery, so the data to be bit and character synchronized
before data can be correctly received. This adds additional bits to be sent and reduces the overall
efficiency of the transmission. Despite this added burden, synchronous transmissions remain more
efficient than asynchronous ones.
A signal that is received will be impaired or distorted during transmission. For analog signals,
signal quality is reduced. For digital signals, errors are introduced: a 1 may be recognized as a 0
and vice versa. The transmission medium is imperfect, and these impairments affect the capacity
of the channel. Three types of impairments can occur: attenuation, distortion, and noise.
i) ATTENUATION
Attenuation is the weakening in strength, of a signal as it passes through the medium. As the signal
travels through the transmission medium, some of its power is absorbed, the signal gets weaker,
and the receiving equipment has less and less chance of correctly interpreting the data. Thus a wire
carrying electrical signal gets warm, if not hot, after a while. Some of the electrical energy in the
signal is converted to heat. To compensate for this loss, amplifiers are used to amplify the signal.
Attenuation is caused by signal absorption, connector loss, and coupling loss. To minimize
attenuation, use high-grade cabling such as enhanced category 5 cabling. Also try to minimize the
number of connector devices or couplers, ensuring that these are high-grade components as well.
When a signal attenuates a large amount, the receiving device might not be able to detect it or
might misinterpret it, therefore causing errors.
ii) DISTORTION
Distortion means that the signal changes its form or shape. The distortion of electrical signals
occurs as they pass through metallic conductors. Attenuation Distortion occurs because high
frequencies lose power more rapidly than low frequencies during transmission. Thus the received
signal is distorted by unequal loss of its component
frequencies. Signals that start at the source as clean, rectangular pulses may be received as rounded
pulses with ringing at the rising and falling edges. These effects are properties of transmission
through metallic conductors, and become more pronounced as the conductor length increases. To
compensate for distortion, signal power must be increased or the transmission rate decreased.
Delay Distortion
Delay distortion occurs when the method of transmission involves sending signal components at
different frequencies. The bits transmitted at one frequency may travel slightly faster than the bits
transmitted at another frequency. Delay distortion occurs in guided media: because the various
frequency components of the signal may not "travel" at the same speed, the portion of the signal
carrying one bit may overlap with the portion of the signal carrying the neighbouring bit. The
various frequency components in a digital signal thus arrive at the receiver with varying delays,
resulting in delay distortion.
As the bit rate increases, some of the frequency components associated with each bit transition are
delayed and start to interfere with frequency components associated with a later bit, causing inter-
symbol interference, which is a major limitation on the maximum bit rate.
iii) NOISE
Noise refers to unintentional signals (voltages) introduced in a line by various phenomena such
as heat or electromagnetic induction created by other sources.
Noise is an undesirable random electrical transmission on network cabling that is generated by
networking components such as network interface cards (NICs) or induced in cabling by proximity
to electrical equipment that generates electromagnetic interference (EMI). Noise is generated by
all electrical and electronic devices, including motors, fluorescent lamps, power lines, and office
equipment, and it can interfere with the transmission of signals on a network. The better the signal-
to-noise ratio of an electrical transmission system, the greater the throughput of information on the
system.
The binary data being transmitted will be altered by noise and result in incorrect data received.
Noise and momentary electrical disturbances may cause data to be changed as it passes through a
communications channel.
Noise signals, which cause data loss or corruption, are classified into different types: white
(thermal) noise, induced noise, interference, crosstalk, impulse noise, and human error.
White noise is present in all electronic devices and cannot be eliminated by any circuits. It
increases with temperature, but it is independent of frequency. That means the white noise covers
the whole frequency spectrum and will be picked up by both low and high frequency devices. As
bandwidth increases, (thermal) white noise power increases. White noise is also called thermal
noise or additive noise.

Figure 4.9: Noise (sender, receiver)

The amount of noise is directly proportional to the temperature of the medium. White noise usually
is not a problem unless it becomes so strong that it obliterates the transmission. Thermal noise (or
additive noise) is the random motion of electrons in a wire that creates an extra signal not originally
sent by the transmitter.
Thermal noise is also called additive noise. Additive noise is generated internally by
components such as resistors and solid-state devices used to implement the communication system.
Thermal noise is the most common impairment in a wireless communication system. There are
three general sources:
1) The noise that enters the antenna with the signal, aptly called antenna noise,
2) the noise generated due to ohmic absorption in the various passive hardware components, and
3) noise produced in amplifiers through thermal action within semiconductors.
Induced noise comes from sources such as motors and appliances with coils. These devices act as
a sending antenna and the transmission medium acts as a receiving antenna.
What can we do to minimize the white noise?
The medium should be kept as cool as possible.
Impulse noise is a spike (a signal with high energy in a very short period of time) that comes from
power lines, lightning, and so on. Impulse noise consists of random occurrences of energy
spikes having random amplitude and spectral content. Impulse noise in a data channel can be a
definitive cause of data transmission errors.
Interference is caused by picking up the unwanted electromagnetic signals nearby such as
crosstalk due to adjacent cables transmitting electronic signals or lightning causing power surge.
Crosstalk is the undesired effect of one circuit (or channel) on another circuit (or channel). It
occurs when one line picks up some of the signal traveling down another line. Crosstalk effect can
be experienced during telephone conversations when one can hear other conversations in the
background. Crosstalk is a form of interference in which signals in one cable induce
electromagnetic interference (EMI) in an adjacent cable. The twisting in twisted-pair cabling
reduces the amount of crosstalk that occurs, and crosstalk can be further reduced by shielding
cables or physically separating them. Crosstalk is a feature of copper cables only—fiber-optic
cables do not experience crosstalk.
The ability of a cable to reject crosstalk in Ethernet networks is usually measured using a scale
called near-end crosstalk (NEXT). NEXT is expressed in decibels (dB), and the higher the NEXT
rating of a cable, the greater its ability to reject crosstalk. A more complex scale called Power Sum
NEXT (PS NEXT) is used to quantify crosstalk in high-speed Asynchronous Transfer Mode
(ATM) and Gigabit Ethernet networks.
Human Error
Noise sometimes is caused by human being such as plugging or unplugging the signal cables, or
power on/off the related communications equipment.
The effects of noise may be minimized by increasing the power in the transmitted signal. However,
equipment and other practical constraints limit the power level in the transmitted signal. Another
basic limitation is the available channel bandwidth. A bandwidth constraint is usually due to the
physical limitations of the medium and the electronic components used to implement the
transmitter and the receiver. These two limitations result in constraining the amount of data that
can be transmitted reliably over any communications channel. Shannon's basic results relate the
channel capacity to the available transmitted power and channel bandwidth.
Signal to noise ratio to quantify noise
Signal-to-noise ratio (S/N) is a parameter used to quantify how much noise there is in a signal. A
high SNR means a high-power signal relative to the noise level, resulting in a good-quality signal.
SNR is expressed in decibels (dB):

SNR (dB) = 10 log10 (S/N)

where S = average signal power
N = noise power
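A sketch of the dB conversion and its inverse (Python):

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels: SNR(dB) = 10 * log10(S/N)."""
    return 10 * math.log10(signal_power / noise_power)

def snr_ratio(db):
    """Inverse conversion: plain power ratio from a dB figure."""
    return 10 ** (db / 10)

print(snr_db(1000, 1))   # 30.0 -> a power ratio of 1000 is 30 dB
print(snr_ratio(30))     # 1000.0
```

This is the conversion used later when applying Shannon's Law to a line whose SNR is quoted in dB.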
Bit Error Rate
The BER (Bit Error Rate) is the probability of a signal bit being corrupted in a defined time interval.
A BER of 10⁻⁵ means that on average 1 bit in 10⁵ will be corrupted.
Note that a BER of 10⁻⁵ over a voice-grade line is typical, and a BER of less than 10⁻⁶ over digital
communication is common.
A Bit Error Rate (BER) is a significant measure of system performance in terms of noise. A BER
of 10⁻⁶, for example, means that one bit of every million may be destroyed during transmission.
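Since BER is a probability per bit, the expected number of corrupted bits over a transmission is simply BER × bits sent; a sketch (Python):

```python
def expected_bit_errors(ber, bits_sent):
    """Expected number of corrupted bits for a given bit error rate."""
    return ber * bits_sent

# BER of 1e-6: on average one bit per million is destroyed
print(expected_bit_errors(1e-6, 1_000_000))
# A voice-grade line at BER 1e-5, sending 8000 bits
print(expected_bit_errors(1e-5, 8000))
```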
Several factors affect the BER:
• Bandwidth
• S/N (Signal-to-noise ratio)
• Transmission medium
• Transmission distance
• Environment
• Performance of transmitter and receiver
The communication channel provides the connection between the transmitter and the receiver. The
physical channel may be a pair of wires that carry the electrical signal, or an optical fiber that
carries the information on a modulated light beam, or an underwater ocean channel in which the
information is transmitted acoustically, or free space over which the information-bearing signal is
radiated by use of an antenna. Other media that can be characterized as communication channels
are data storage media, such as magnetic tape, magnetic disks, and optical disks.
Information sent through a communications channel has a source from which the information
originates, and a destination to which the information is delivered. Although information originates
from a single source, there may be more than one destination, depending upon how many receive
stations are linked to the channel and how much energy the transmitted signal possesses. In a
digital communications channel, the information is represented by individual data bits, which may
be encapsulated into multibit message units. A byte, which consists of eight bits, is an example of
a message unit that may be conveyed through a digital communications channel. A collection of
bytes may itself be grouped into a frame or other higher-level message unit. Such multiple levels
of encapsulation facilitate the handling of messages in a complex data communications network
In some cases, the information may not be reproduced or the information may not reach the
receiver at all. Such phenomena can be understood from the following channel characteristics
issues :
Signal-to-Noise Ratio (S/N Ratio) is a very important parameter in assessing the channel capacity
or throughput of a data channel. From Shannon's Law, the maximum data rate (bit rate) that a
channel can possibly support is given by the product of the line bandwidth and log2(1 + S/N) for
the channel. Channel capacity, often shown as "C" in communication formulas, is the
amount of discrete information bits that a defined area or segment in a communications medium
can hold. Thus, a telephone wire may be considered a channel in this sense.
Shannon's Law
The maximum data rate of a noisy channel whose bandwidth is W Hz, and whose signal-to-noise
ratio is S/N, is given by
C = W log2(1 + S/N)
Where W = Bandwidth in Hz
S = Average signal power in watts
N = Random noise power in watts
C = Maximum data rate possible
Example
Calculate the maximum data rate for a telephone line having a 30 dB signal-to-noise ratio.
Solution
Bandwidth (W) of the telephone line = 3300 – 300 Hz = 3000 Hz.
S/N = 30 dB = a power ratio of 1000
C = 3000 × log2(1 + 1000) ≈ 3000 × 9.97 ≈ 29,900 bps ≈ 30 Kbps
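The calculation above can be sketched in a few lines of Python (the function name is my own choice); note the conversion from decibels to a linear power ratio before applying the formula:

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Maximum data rate C = W * log2(1 + S/N), with S/N given in dB."""
    snr_linear = 10 ** (snr_db / 10)   # 30 dB -> a power ratio of 1000
    return bandwidth_hz * math.log2(1 + snr_linear)

# Telephone line from the example: W = 3000 Hz, S/N = 30 dB
c = shannon_capacity(3000, 30)
print(round(c))  # about 29,902 bps, i.e. roughly 30 Kbps
```

Because C scales linearly with W, doubling the bandwidth at the same S/N doubles the maximum data rate.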
5. MODULATION/DEMODULATION
In analog transmission the sending device produces a high-frequency signal that acts as a basis for
the information signal. The base signal is called the carrier signal or carrier frequency. The
receiving device is tuned to the frequency of the carrier signal that it expects from the sender.
Digital information is then encoded onto the carrier signal by modifying one or more of its
characteristics (amplitude, frequency or phase). This kind of modification is called modulation (or
shift keying), and the information signal is called the modulating signal.
The process of changing some characteristic (e.g. amplitude, frequency or phase) of a carrier wave
in accordance with the intensity of the signal is known as modulation.
5.1 Types of Modulation
Signal modulation can be divided into two broad categories: analog and digital modulation. The
aim of digital modulation is to transfer a digital bit stream over an analog bandpass channel.
The aim of analog modulation is to transfer an analog baseband (or lowpass) signal, for example
an audio signal or TV signal, over an analog bandpass channel, for example a limited radio
frequency band or a cable TV network channel.
FM versus AM:
1. FM: The amplitude of the carrier remains constant with modulation. AM: The amplitude of the carrier changes with modulation.
2. FM: The carrier frequency changes with modulation. AM: The carrier frequency remains constant with modulation.
3. FM: The carrier frequency changes according to the strength of the modulating signal. AM: The carrier amplitude changes according to the strength of the modulating signal.
4. FM: The value of the modulation index (mf) can be more than 1. AM: The value of the modulation factor (m) cannot be more than 1 for a distortionless AM signal.
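The contrast in the comparison above can be made concrete with a small sketch (sample rates and modulation parameters are illustrative choices, not from the text): an AM waveform's amplitude follows the modulating signal, while an FM waveform keeps a constant amplitude and varies its instantaneous frequency instead.

```python
import math

def am_sample(t, fc, fm, m):
    """AM: carrier amplitude varies with the modulating signal (m <= 1 for no distortion)."""
    return (1 + m * math.cos(2 * math.pi * fm * t)) * math.cos(2 * math.pi * fc * t)

def fm_sample(t, fc, fm, mf):
    """FM: carrier frequency varies; amplitude stays constant (mf may exceed 1)."""
    return math.cos(2 * math.pi * fc * t + mf * math.sin(2 * math.pi * fm * t))

ts = [i / 8000 for i in range(8000)]            # one second sampled at 8 kHz
am = [am_sample(t, 1000, 50, 0.5) for t in ts]  # 1 kHz carrier, 50 Hz modulating tone
fm = [fm_sample(t, 1000, 50, 5.0) for t in ts]
# The FM signal never exceeds the carrier amplitude; the AM envelope does at its peaks
print(max(abs(x) for x in fm) <= 1.0, max(abs(x) for x in am) > 1.0)
```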
Two terms used frequently in data communication are bit rate and baud rate. Bit rate is the number
of bits transmitted in one second. Baud rate refers to the number of signal units per second that are
required to represent those bits. A signal unit is composed of one or more bits. Bit rate is the
number of bits per second; baud rate is the number of signal units per second. Baud rate is always
less than or equal to the bit rate.
An analogy can clarify the concept of baud and bits. In transportation, a baud is analogous to a car,
and a bit is analogous to a passenger. A car can carry one or more passengers. If 2000 cars go from
one location to another, each carrying only one passenger, then 2000 passengers are transported.
However, if each car carries two passengers, then 4000 passengers are transported. Note that the
number of cars, not the number of passengers, determines the traffic and, therefore, the need for
wider highways. Similarly, the number of bauds determines the required bandwidth, not the
number of bits.
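The car-and-passenger analogy reduces to a single multiplication, sketched below (the function name is my own):

```python
def bit_rate(baud_rate, bits_per_signal_unit):
    """Bit rate = baud rate x bits carried per signal unit (the 'passengers per car')."""
    return baud_rate * bits_per_signal_unit

# 2000 signal units per second carrying 1 bit each, then 2 bits each
print(bit_rate(2000, 1), bit_rate(2000, 2))  # 2000 4000
```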
In FSK, two fixed-amplitude carrier signals are used, one for a binary 0 and the other for a binary
1. The difference between the two carriers is known as the frequency shift. FSK avoids most of the
noise problems of ASK. Because the receiving device is looking for specific frequency changes
over a given number of periods, it can ignore voltage spikes. The limiting factors of FSK are the
physical capabilities of the carrier.
Advantages
FSK is insensitive to channel fluctuations and is not easily affected by noise.
Resilient to signal strength variations
Does not require linear amplifiers in the transmitter
Disadvantage
FSK is a low performance type of digital modulation.
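A minimal FSK sketch makes the idea concrete: each bit selects one of two fixed-amplitude carrier frequencies for one bit interval. The frequencies and rates below (1200/2200 Hz at 300 baud) are illustrative choices, not values from the text.

```python
import math

def fsk_samples(bits, f0, f1, baud, sample_rate):
    """FSK: each bit selects one of two fixed-amplitude carrier frequencies."""
    samples = []
    per_bit = sample_rate // baud                 # samples per bit interval
    for i, b in enumerate(bits):
        f = f1 if b else f0                       # the frequency shift encodes the bit
        for n in range(per_bit):
            t = (i * per_bit + n) / sample_rate
            samples.append(math.cos(2 * math.pi * f * t))
    return samples

wave = fsk_samples([0, 1, 1, 0], f0=1200, f1=2200, baud=300, sample_rate=9600)
print(len(wave))  # 4 bits x 32 samples per bit = 128 samples
```

Because the amplitude never changes, a receiver watching only for frequency changes can ignore voltage spikes, which is the noise advantage described above.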
In PSK, both the amplitude and the frequency remain constant as the phase changes. PSK is not
susceptible to the noise degradation that mostly affects ASK, nor to the bandwidth limitations of
FSK.
Advantages
PSK (phase shift keying) enables data to be carried on a radio communications signal in a more
efficient manner than frequency shift keying.
Disadvantage: Implementation is complex and expensive.
Figure 5.9: Phase Shift Keying
4. QAM
Possible variations of QAM are numerous. Theoretically, any measurable number of changes in
amplitude can be combined with any measurable number of changes in phase, for example 4-QAM
or 8-QAM. 4-QAM uses two amplitude levels and two phase shifts, as shown in the figure; 8-QAM
uses two amplitude levels and four phase shifts. In 8-QAM the number of amplitude shifts is less
than the number of phase shifts. Because amplitude changes are susceptible to noise and require
greater shift differences than phase changes do, the number of phase shifts used by a QAM system
is always larger than the number of amplitude shifts.
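The payoff of combining amplitude and phase is more bits per signal unit: an M-QAM symbol carries log2(M) bits, so the bit rate is the baud rate times log2(M). A short sketch (function name my own):

```python
import math

def qam_bits_per_symbol(m):
    """An M-QAM symbol (one amplitude/phase combination) carries log2(M) bits."""
    return int(math.log2(m))

for m in (4, 8, 16, 64):
    print(m, qam_bits_per_symbol(m))   # 4-QAM packs 2 bits, 8-QAM packs 3, ...

# A 2400-baud link using 16-QAM therefore carries 2400 * 4 = 9600 bps
print(2400 * qam_bits_per_symbol(16))
```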
5.4 MODEM
The devices (computers) that originate the digital data (DTE) usually produce a sequence of digital
pulses that is not suitable for transmission on a medium. Typically another device, a
modem (DCE), prepares this signal for the transmission medium.
Modem stands for modulator/demodulator. A modulator converts a digital signal into an analog
signal using Amplitude Shift Keying (ASK), Frequency Shift Keying (FSK), Phase Shift Keying
(PSK), or Quadrature Amplitude Modulation (QAM) appropriate for telephone lines. A
demodulator converts an analog signal into a digital signal.
The two PCs at the ends are the DTEs; the modems are the DCEs. The DTE creates a digital signal
and relays it to the modem via an interface (such as the EIA-232). The modulated signal is received
by the demodulation function of the second modem, which decodes it and then relays the resulting
digital signal to the receiving computer via an interface. Generally, a modem is any type of data
communications equipment (DCE) that enables digital data transmission over the analog Public
Switched Telephone Network (PSTN). The term “modem” (which actually stands for
“modulator/demodulator”) is usually reserved for analog modems, which interface, through a
serial transmission connection such as the RS-232 interface, with data terminal equipment (DTE)
such as computers. The modem converts the digital signal coming from the computer into an
analog signal that can be carried over a Plain Old Telephone Service (POTS) line. The term “digital
modem” is sometimes used for ISDN terminal adapters.
Modems were developed in the 1960s by Bell Labs, which defined a series of standards called
the Bell Standards. These standards covered modem technologies of up to a 9600-bps transmission
speed. But after the breakup of Bell Telephone, the task of developing modem standards was taken
over by the International Telegraph and Telephone Consultative Committee (CCITT), which is
now called the International Telecommunication Union (ITU). According to ITU specifications,
modem standards are classified by a series of specifications known as the V series; the ITU defines
standards up to V.90, which supports 56-Kbps downloads and 33.6-Kbps uploads.
Digital Modem
Any type of modem used for synchronous transmission of data over circuit-switched digital lines.
One example of a digital modem is an ISDN terminal adapter. Digital modems are not used for
changing analog signals into digital signals because they operate on end-to-end digital services.
Instead, they use advanced digital modulation techniques for changing data frames from a network
into a format suitable for transmission over a digital line such as an Integrated Services Digital
Network (ISDN) line. They are basically data framing devices, rather than signal modulators.
Analog Modem
A modem used for asynchronous transmission of data over Plain Old Telephone Service (POTS)
lines. Analog modems are still a popular component for remote communication between users and
remote networks. The word “modem” stands for “modulator/demodulator,” which refers to the
fact that modems convert digital transmission signals to analog signals and vice versa. For
example, in transmission, an analog modem converts the digital signals it receives from the local
computer into audible analog signals that can be carried as electrical impulses over POTS to a
destination computer or network. To transmit data over a telephone channel, the modem modulates
the incoming digital signal to a frequency within the carrying range of analog phone lines (between
300 Hz and 3.3 kHz). To accomplish this, the digital signal from the computer is modulated onto
a carrier signal. The resulting modulated signal is transmitted into the local loop
and carried to the remote station, where a similar modem demodulates it into a digital signal
suitable for the remote computer.
6. MULTIPLEXING
Multiplexing (or muxing) is a way of sending multiple signals or streams of information over a
communications link at the same time in the form of a single, complex signal; the receiver recovers
the separate signals, a process called demultiplexing (or demuxing).
The multiplexed signal is transmitted over a communication channel, such as a cable. The
multiplexing divides the capacity of the communication channel into several logical channels, one
for each message signal or data stream to be transferred. A reverse process, known as
demultiplexing, extracts the original channels on the receiver end. A device that performs the
multiplexing is called a multiplexer (MUX), and a device that performs the reverse process is
called a demultiplexer (DEMUX or DMX).
One of the most common applications for FDM is traditional radio and television broadcasting
from terrestrial, mobile or satellite stations, or cable television. Only one cable reaches a customer's
residential area, but the service provider can send multiple television channels or signals
simultaneously over that cable to all subscribers without interference. Receivers must tune to the
appropriate frequency (channel) to access the desired signal.
Figure 6.2: FDM
2. Time Division Multiplexing (TDM)
TDM is applied primarily to digital signals but can be applied to analog signals as well. In TDM
the shared channel is divided among its users by means of time slots. Each user can transmit data
within the provided time slot only. Digital signals are divided into frames, each equivalent to a
time slot, i.e. a frame of optimal size that can be transmitted in the given time slot.
TDM works in synchronized mode: both ends, i.e. the multiplexer and demultiplexer, are
synchronized in time and switch to the next channel simultaneously.
Statistical TDM (STDM) represents an improvement over standard TDM. In STDM, if a sender is
not ready to transmit in a cycle, the next sender that is ready can transmit. This reduces the number
of wasted slots and increases the utilization of the communication channel. STDM data blocks are
known as packets and must contain header information to identify the receiving destination.
Applications that use TDM include long-distance telephone service over a T-1 wire line and the
Global System for Mobile Communications (GSM) standard for cellular phones. STDM is used in
packet-switching networks for LAN and Internet communications.
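The difference between fixed-slot TDM and STDM can be sketched as a one-cycle simulation (the function names and toy data are my own): plain TDM wastes the slots of idle senders, while STDM skips them, at the cost of tagging each block with its sender, i.e. the packet header mentioned above.

```python
def tdm_cycle(queues):
    """Plain TDM: every sender owns one fixed slot per cycle; idle senders waste theirs."""
    return [q.pop(0) if q else None for q in queues]

def stdm_cycle(queues):
    """Statistical TDM: only ready senders get slots; each block carries a header
    (here, the sender index) so the demultiplexer can route it."""
    frame = []
    for sender, q in enumerate(queues):
        if q:
            frame.append((sender, q.pop(0)))
    return frame

queues = [["A1"], [], ["C1"], []]          # senders 1 and 3 have nothing to send
print(tdm_cycle([q[:] for q in queues]))   # ['A1', None, 'C1', None] - two wasted slots
print(stdm_cycle([q[:] for q in queues]))  # [(0, 'A1'), (2, 'C1')] - no wasted slots
```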
WDM is used in the Synchronous Optical Network (SONET). It multiplexes and demultiplexes
various optical fiber lines. Further, on each wavelength, time division multiplexing can
be incorporated to accommodate more data signals.
Each channel transmits its bits as a coded channel-specific sequence of pulses called chips. Each
station is assigned a unique code. Signals travel with these codes independently inside the
whole bandwidth. The receiver knows in advance the chip code it has to receive. The number
of chips per bit, or chips per symbol, is the spreading factor. This coded transmission typically is
accomplished by transmitting a unique time-dependent series of short pulses, which are placed
within chip times within the larger bit time. All channels, each with a different code, can be
transmitted on the same fiber or radio channel or other medium, and asynchronously
demultiplexed.
Advantages over conventional techniques are that variable bandwidth is possible (just as in
statistical multiplexing) and it is more secure. CDM is widely used in digital television and radio
broadcasting and in 3G mobile cellular networks. Where CDM allows multiple signals from
multiple sources, it is called Code-Division Multiple Access (CDMA). A significant application
of CDMA is the Global Positioning System (GPS).
6. Polarization-division multiplexing
Polarization-division multiplexing uses the polarization of electromagnetic radiation to separate
orthogonal channels. It is in practical use in both radio and optical communications, particularly
in 100 Gbit/s per channel fiber optic transmission systems.