Unit 4 - DC - 2023-2024
➢ Information theory is a branch of probability theory which may be applied to the study
of communication systems.
➢ WHAT IS INFORMATION?
➢ Suppose you are planning to tour a city located in an area where rainfall is very rare.
➢ To learn about the weather you call the weather bureau and may receive one of the following forecasts:
1. It will be hot and sunny.
2. There will be scattered rain.
3. A cyclonic storm is expected.
➢ The first message contains very little information, because the weather in a desert city in summer is expected to be hot and sunny most of the time.
➢ The second message, forecasting scattered rain, contains more information, because rain is not an event that occurs often.
➢ The forecast of a cyclonic storm contains even more information than the second message; again, this is because it is not an event that occurs often.
➢ Hence, on a conceptual basis, the amount of information received from the knowledge of occurrence of an event may be related to the likelihood (probability) of occurrence of that event.
➢ The message associated with the least likely event therefore contains the maximum information.
➢ The amount of information in a message depends only upon the uncertainty of the underlying event, rather than upon its actual content.
INFORMATION SOURCES
➢ An information source may be viewed as an object which produces an event, the outcome of which is selected at random according to a probability distribution.
➢ A practical source in a communication system is a device which produces messages; it can be either analog or discrete.
CLASSIFICATION OF INFORMATION SOURCES:
➢ Information sources can be classified as having memory or being memoryless.
➢ A source with memory is one for which the current symbol depends on previous symbols.
➢ A memoryless source is one for which each symbol produced is independent of the previous symbols.
What is a discrete memoryless source (DMS)?
A discrete memoryless source (DMS) can be characterized by the list of its symbols, the probability assignment to these symbols, and the specification of the rate at which the source generates these symbols.
➢ Messages conveying knowledge of events with a high probability of occurrence carry relatively little information.
➢ We note that if an event is certain (that is, the event occurs with probability 1), it conveys zero information.
➢ Thus a mathematical measure of information should be a function of the probability of the outcome and should satisfy the following axioms:
1. I(xi) ≥ 0 for 0 ≤ P(xi) ≤ 1: information is non-negative.
2. I(xi) → 0 as P(xi) → 1: a certain event conveys no information.
3. I(xi) > I(xj) if P(xi) < P(xj): the less probable an event, the more information its occurrence conveys.
4. I(xi, xj) = I(xi) + I(xj) if xi and xj are statistically independent.
Definition:
➢ Let us consider a discrete memoryless source (DMS) denoted by X and having alphabet {x1, x2, …, xm}. The information content of a symbol xi, occurring with probability P(xi), is defined as

I(x_i) = \log_b \frac{1}{P(x_i)} = -\log_b P(x_i) .....(1)

Units of I(xi):
The base b of the logarithm in equation (1) is quite arbitrary. It is standard practice to use a logarithm to base 2; the resulting unit is the bit. (With base e the unit is the nat, and with base 10 the hartley.)
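As a quick numerical check of equation (1), here is a minimal Python sketch; the probability values are illustrative, echoing the weather-forecast example, and are not figures from the text:

```python
import math

def self_information(p: float, base: float = 2) -> float:
    """Information content I(x) = log_base(1/p) of an event with probability p."""
    return math.log(1.0 / p, base)

# Rarer events carry more information:
print(self_information(0.9))    # ~0.15 bits  (hot and sunny: little information)
print(self_information(0.05))   # ~4.32 bits  (scattered rain: more information)
print(self_information(0.001))  # ~9.97 bits  (cyclonic storm: even more)
```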
ENTROPY:
➢ In practical communication systems we usually transmit long sequences of symbols from an information source.
➢ We are therefore more interested in the average information that a source produces than in the information content of a single symbol.
➢ In arriving at the average information content of a symbol, we take note of the fact that the flow of information in a system can fluctuate widely because of the randomness involved in the selection of the symbols.
➢ The amount of information I(Sk) produced by the source during an arbitrary signalling interval depends
on the symbol Sk emitted by the source at that time.
➢ I(Sk) is a discrete random variable that takes on the values I(s0), I(s1), …, I(s_{K−1}) with probabilities P0, P1, …, P_{K−1} respectively. The mean of I(Sk) over the source alphabet S is given by

H(S) = E[I(S_k)] = \sum_{k=0}^{K-1} P_k I(s_k) = \sum_{k=0}^{K-1} P_k \log_2\frac{1}{P_k} .....(2)

Here H(S) is called the entropy of a discrete memoryless source with source alphabet S.
➢ Entropy: It is a measure of the average information content per source symbol.
➢ Note: Entropy H(S) depends only on the probabilities of the symbols in the
alphabet ‘S’ of the source.
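Equation (2) translates directly into code; a minimal sketch, with illustrative distributions:

```python
import math

def entropy(probs, base: float = 2) -> float:
    """H(S) = sum over k of P_k * log_base(1/P_k); zero-probability symbols are skipped."""
    return sum(p * math.log(1.0 / p, base) for p in probs if p > 0)

print(entropy([0.5, 0.5]))            # 1.0 bit: two equiprobable symbols
print(entropy([0.25] * 4))            # 2.0 bits = log2(4): maximum for K = 4
print(entropy([1.0, 0.0, 0.0, 0.0]))  # 0.0 bits: a certain symbol conveys nothing
```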
PROPERTIES OF ENTROPY:
➢ Consider a discrete memoryless source whose output is modelled as a discrete random variable S which takes symbols from an alphabet S = {s0, s1, …, s_{K−1}} with probabilities P0, P1, …, P_{K−1} respectively, where the set of probabilities must satisfy the condition

\sum_{k=0}^{K-1} P_k = 1

The entropy H(S) is then bounded as 0 ≤ H(S) ≤ \log_2 K, where:
1. H(S) = 0 if and only if P_k = 1 for some k (and all the other probabilities are zero); this lower bound corresponds to no uncertainty.
2. H(S) = \log_2 K if and only if P_k = 1/K for all k (i.e., all the symbols in the alphabet are equiprobable); this upper bound corresponds to maximum uncertainty.
➢ To prove these properties of entropy we proceed as follows.
➢ Property 1 proof (lower bound):
✓ Since each probability Pk is less than or equal to unity, each term P_k \log_2(1/P_k) in equation (2) is always non-negative, so H(S) ≥ 0.
✓ The product term P_k \log_2(1/P_k) is zero if and only if P_k = 0 or P_k = 1.
✓ Therefore we deduce that H(S) = 0 if and only if P_k = 1 for some k and all the rest are zero.
➢ Property 2 proof (upper bound):
➢ To prove the upper bound in property 2, we use the following inequality satisfied by the natural logarithm:

\ln x \le x - 1, \quad x \ge 0 .....(4)
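The remaining steps of the proof are not reproduced here; a sketch of the standard argument, comparing the source distribution {p_k} with the uniform distribution q_k = 1/K and applying inequality (4):

```latex
\sum_{k=0}^{K-1} p_k \log_2\!\left(\frac{q_k}{p_k}\right)
  \le \frac{1}{\ln 2}\sum_{k=0}^{K-1} p_k\left(\frac{q_k}{p_k} - 1\right)
  = \frac{1}{\ln 2}\left(\sum_{k=0}^{K-1} q_k - \sum_{k=0}^{K-1} p_k\right) = 0
% Setting q_k = 1/K gives H(S) - \log_2 K \le 0, i.e. H(S) \le \log_2 K,
% with equality if and only if p_k = 1/K for all k.
```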
SOURCE CODING THEOREM
➢ An important problem in communication systems is efficient representation of data
generated by a discrete source.
➢ For the source encoder to be efficient we require knowledge of the statistics of the
source.
➢ In particular, if some source symbols are known to be more probable than others, we may exploit this feature in the generation of a source code by assigning short code words to frequent source symbols and long code words to rare source symbols. We refer to such a source code as a variable-length code.
➢ The Morse code is an example of a variable-length code.
➢ In the Morse code, the letters of the alphabet and numerals are encoded into streams of marks and spaces, denoted as dots "." and dashes "-" respectively.
➢ In the English language, the letter E occurs more frequently than the letter Q.
➢ Accordingly, the Morse code encodes E into a single ".", the shortest code word in the code, and it encodes Q into "- - . -", the longest code word in the code.
For the source code to be useful, two functional requirements should be satisfied:
1. The code words produced by the encoder are in binary form.
2. The source code is uniquely decodable, so that the original source sequence can be reconstructed perfectly from the encoded binary sequence.
➢ Consider the block diagram shown in Figure 9.3, which depicts a discrete memoryless source whose output is converted by the source encoder into a block of 0s and 1s denoted by bk.
➢ The source has an alphabet with K different symbols, and the kth symbol sk occurs with probability Pk, k = 0, 1, …, K−1.
➢ Let the binary code word assigned to the symbol sk by the encoder have length lk, measured in bits.
➢ The average code word length \bar{L} of the source encoder is

\bar{L} = \sum_{k=0}^{K-1} P_k l_k

➢ The parameter \bar{L} represents the average number of bits per source symbol used in the source encoding process.
➢ Let L_{min} denote the minimum possible value of \bar{L}; the coding efficiency of the source encoder is then

\eta = \frac{L_{min}}{\bar{L}}

➢ With \bar{L} \ge L_{min} we have \eta \le 1; the source encoder is said to be efficient when \eta approaches unity.
SOURCE CODING THEOREM (SHANNON'S FIRST THEOREM)
➢ The source coding theorem states that, for a discrete memoryless source (DMS) of entropy H(S), the average code word length \bar{L} per symbol is bounded as

\bar{L} \ge H(S)

➢ According to the source coding theorem, the entropy H(S) represents a fundamental limit on the average number of bits per source symbol necessary to represent a discrete memoryless source.
➢ That is, \bar{L} may be made as small as desired, but no smaller than the entropy H(S).
➢ We may therefore rewrite the efficiency of the source encoder in terms of the entropy H(S) as

\eta = \frac{L_{min}}{\bar{L}} = \frac{H(S)}{\bar{L}}
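A small numerical illustration of these definitions; the four-symbol source and its code-word lengths below are assumptions made for the sketch, not taken from the text:

```python
import math

# Hypothetical 4-symbol source with a matching prefix code
probs   = [0.5, 0.25, 0.125, 0.125]   # symbol probabilities P_k
lengths = [1, 2, 3, 3]                # code word lengths l_k in bits

H = sum(p * math.log2(1 / p) for p in probs)    # entropy H(S)
L = sum(p * l for p, l in zip(probs, lengths))  # average code word length L-bar

print(f"H(S) = {H:.3f} bits/symbol")  # 1.750
print(f"L    = {L:.3f} bits/symbol")  # 1.750
print(f"eta  = {H / L:.3f}")          # 1.000: this code meets the Shannon bound exactly
```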
Entropy of a Binary Memoryless Source
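The body of this topic is not reproduced here; as standard background, for a binary memoryless source emitting symbol 0 with probability p0 and symbol 1 with probability 1 − p0, equation (2) reduces to the entropy function

```latex
H(S) = p_0 \log_2\!\frac{1}{p_0} + (1 - p_0)\log_2\!\frac{1}{1 - p_0}
```

H(S) is zero when p0 = 0 or p0 = 1 (no uncertainty) and attains its maximum of 1 bit when p0 = 1/2 (both symbols equiprobable), consistent with the two properties proved above.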
PREFIX CODE:
A prefix code is a code in which no code word is the prefix of any other code word; such a code is always uniquely decodable.
Here, code II of the codes compared in the accompanying table is a prefix code.
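The prefix condition can be checked mechanically; a minimal Python sketch, with illustrative code words (the original table of codes is not reproduced here):

```python
def is_prefix_code(codewords) -> bool:
    """A code is a prefix code iff no code word is a prefix of any other code word."""
    for a in codewords:
        for b in codewords:
            if a != b and b.startswith(a):
                return False
    return True

# Illustrative codes (assumed values, not the ones from the original table):
print(is_prefix_code(["0", "10", "110", "111"]))  # True:  uniquely decodable
print(is_prefix_code(["0", "01", "011", "111"]))  # False: "0" is a prefix of "01"
```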
HUFFMAN CODING
➢ The Huffman code is an important class of prefix codes.
➢ The basic idea behind Huffman coding is to assign to each symbol of an alphabet a sequence of
bits roughly equal in length to the amount of information conveyed by the symbol in question.
➢ The end result is a source code whose average code word length approaches the fundamental
limit set by the entropy of a discrete memoryless source namely H(S).
➢ The essence of the algorithm used to synthesize the Huffman code is to replace the prescribed set of source statistics of a discrete memoryless source with a simpler one.
➢ This reduction process is continued in a step-by-step manner until we are left with a final set of only two source statistics (symbols), for which (0, 1) is an optimal code.
➢ Starting from this trivial code we then work backward and thereby construct the Huffman code for the given source.
The code for each (original) source symbol is found by working backward and tracing the sequence of 0s and 1s assigned to that symbol as well as its successors.
Example:
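The worked example itself is not reproduced here; the following is a minimal Python sketch of the algorithm described above, applied to an illustrative five-symbol source (the probabilities are assumptions, not those of the original example):

```python
import heapq

def huffman_code(probs: dict) -> dict:
    """Build a Huffman code by repeatedly merging the two least probable symbol groups."""
    # Heap entries: (probability, tie-breaker, {symbol: partial code word})
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p0, _, group0 = heapq.heappop(heap)   # least probable group: prepend '0'
        p1, _, group1 = heapq.heappop(heap)   # next group:           prepend '1'
        merged = {s: "0" + c for s, c in group0.items()}
        merged.update({s: "1" + c for s, c in group1.items()})
        heapq.heappush(heap, (p0 + p1, count, merged))
        count += 1
    return heap[0][2]

code = huffman_code({"s0": 0.4, "s1": 0.2, "s2": 0.2, "s3": 0.1, "s4": 0.1})
for sym, word in sorted(code.items()):
    print(sym, word)
# The average code word length sum(P_k * l_k) approaches the entropy H(S),
# as the source coding theorem predicts.
```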
SPREAD SPECTRUM TECHNIQUES: PN Sequences, Notion of
Spread Spectrum, DSSS: DSSS with CBPSK, Processing gain,
Probability of error.
➢ Notwithstanding the importance of the two primary communication resources (transmitted power and channel bandwidth), there are situations where it is necessary to sacrifice their efficient utilization in order to meet certain other design objectives.
➢ For example, the system may be required to provide a form of secure communication in a hostile environment, such that the transmitted signal is not easily detected or recognised by unwanted listeners.
1. Spread spectrum is a means of transmission in which the data sequence occupies a bandwidth in excess of the minimum bandwidth necessary to send it.
2. The spectrum spreading is accomplished before transmission through the use of a code that is independent of the data sequence. The same code is used in the receiver (operating in synchronism with the transmitter) to despread the received signal so that the original data sequence may be recovered.
1. Spread spectrum modulation was originally developed for military applications where resistance to jamming is of major concern; spread spectrum is used to combat intentional interference (jamming).
2. Civilian applications also benefit from the unique characteristics of spread spectrum modulation. For example, spread spectrum is used to avoid the self-interference due to multipath propagation in ground-based mobile environments.
i. A message can be hidden in the background noise by spreading its bandwidth using the code word and then transmitting the coded signal at a low power level.
ii. Due to these modifications the probability of being intercepted (detected) is greatly reduced; such a spread-encoded signal is therefore called a low-probability-of-intercept (LPI) signal.
5. Obtaining message privacy: message privacy can be obtained by superimposing a pseudo-random pattern on the transmitted message.
CLASSIFICATION OF SPREAD SPECTRUM MODULATION
TECHNIQUES
➢ Spread spectrum modulation techniques are broadly classified into two categories, namely:
1) Averaging-type systems: an averaging-type system reduces the interference by averaging it over a long time. Direct Sequence Spread Spectrum (DSSS) is an averaging-type system.
2) Avoidance-type systems: an avoidance-type system reduces the interference by making the signal avoid the interference for a large fraction of the time. Avoidance-type systems are further classified depending on the type of modulation used (for example, frequency hopping).
➢ A pseudo-noise (PN) sequence is a periodic binary sequence with a noise-like waveform that is usually generated by means of a feedback shift register; the general block diagram is shown in the figure.
➢ A feedback shift register consists of an ordinary shift register made up of m flip-flops and a logic circuit that are interconnected to form a multiloop feedback circuit.
➢ At each pulse of the clock, the state of each flip-flop is shifted to the next one down the line.
➢ With each clock pulse the logic circuit computes a Boolean function of the states of the flip-flops. The result is then fed back as the input to the first flip-flop, thereby preventing the shift register from emptying.
➢ The PN sequence so generated is determined by the length m of the shift register, its initial state, and the feedback logic.
➢ Let s_j(k) denote the state of the jth flip-flop after the kth clock pulse; this state may be represented by the symbol '0' or '1'.
➢ The state of the shift register after the kth clock pulse is then defined by the set {s_1(k), s_2(k), …, s_m(k)}, where k ≥ 0.
➢ For the initial state, k is zero.
➢ From the definition of a shift register we have

s_j(k+1) = s_{j-1}(k), \quad k \ge 0, \quad 1 \le j \le m

➢ where s_0(k) is the input applied to the first flip-flop after the kth clock pulse.
➢ According to the configuration described in the figure, s_0(k) is a Boolean function of the individual states s_1(k), s_2(k), …, s_m(k).
➢ For a specified length m, this Boolean function uniquely determines the subsequent sequence of states and therefore the PN sequence produced at the output of the final flip-flop in the shift register.
➢ With a total of m flip-flops, the number of possible states of the shift register is at most 2^m.
➢ Therefore the PN sequence generated by a feedback shift register must eventually become periodic, with a period of at most 2^m.
➢ A feedback shift register is said to be linear when the feedback logic consists entirely of modulo-2 adders (EX-OR gates).
➢ In the linear case, the all-zero state is not permitted. We say so because for the zero state the input s_0(k) produced by the feedback logic would be '0'; the shift register would then continue to remain in the zero state, and the output would therefore consist entirely of 0s.
➢ Consequently the period of a PN sequence produced by a linear feedback shift register with m flip-flops cannot exceed 2^m − 1.
➢ When the period is exactly 2^m − 1, the PN sequence is called a MAXIMUM-LENGTH SEQUENCE or simply m-SEQUENCE.
Example: For the PN-sequence generator shown in the figure, obtain and draw
the PN-Sequence.
SOLUTION: Let us assume that the initial state of the shift register is Q3Q2Q1 = 001. The outputs of Q3 and Q2 are connected to a modulo-2 adder (EX-OR gate) whose output is fed back to the input of Q1. The operation of this PN sequence generator is shown in the table below.
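The state table is not reproduced here; as a minimal sketch, the generator described above (feedback = Q3 ⊕ Q2, with the fed-back bit entering Q1 and the output taken from Q3) can be simulated in Python:

```python
def lfsr(taps, state, length):
    """Simulate a linear feedback shift register.

    state: flip-flop values [Q1, ..., Qm] (Q1 is the input end).
    taps:  1-based indices of the flip-flops XORed to form the feedback.
    Returns `length` output bits taken from the final flip-flop Qm.
    """
    out = []
    for _ in range(length):
        out.append(state[-1])              # output of the final flip-flop
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]       # modulo-2 sum of the tapped stages
        state = [feedback] + state[:-1]    # shift, feeding back into Q1
    return out

# Initial state Q3 Q2 Q1 = 0 0 1, feedback taps at Q3 and Q2:
print(lfsr(taps=[3, 2], state=[1, 0, 0], length=14))
# [0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1] -- the sequence repeats with
# period 2**3 - 1 = 7, i.e. it is a maximum-length sequence (m-sequence).
```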
Properties of Maximum Length Sequence
(m-sequence):
A maximum-length sequence has many of the properties possessed by a truly random binary sequence, namely:
1. Balanced property
2. Run property
3. Correlation property
1. Balance property: In each period of a maximum-length sequence, the number of 1s is always one more than the number of 0s.
2. Run property: Among the runs of 1s and of 0s in each period of a maximum-length sequence, one-half of the runs of each kind are of length one, one-fourth are of length two, one-eighth are of length three, and so on, as long as these fractions represent meaningful numbers of runs.
3. Correlation property: The autocorrelation function of a maximum-length sequence is periodic and binary valued.
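The balance and run properties can be verified directly on the length-7 m-sequence produced by the generator sketched earlier:

```python
from itertools import groupby

period = [0, 0, 1, 0, 1, 1, 1]  # one period of the m-sequence generated above

# Balance property: the number of 1s exceeds the number of 0s by exactly one.
print(period.count(1) - period.count(0))  # 1

# Run property: tally the lengths of runs of identical bits.
runs = [len(list(g)) for _, g in groupby(period)]
print(runs)  # [2, 1, 1, 3]: of the runs of each kind, half are of length one
```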
➢ Let {bk} denote a binary data sequence and {ck} denote a pseudo-noise (PN) sequence.
➢ Let the waveforms b(t) and c(t) denote their respective non-return-to-zero (NRZ) representations in terms of two levels, equal in amplitude and opposite in polarity, namely ±1.
➢ We will refer to b(t) as the information-bearing data signal and to c(t) as the PN signal.
➢ The desired modulation is achieved by applying the data signal and the PN signal to a product modulator or multiplier.
➢ From Fourier transform theory, the multiplication of two signals produces a signal whose spectrum equals the convolution of the spectra of the two component signals.
➢ Thus if the message signal b(t) is narrowband and the PN signal c(t) is wideband, the product (modulated) signal m(t) will have a spectrum that is nearly the same as that of the wideband PN signal.
➢ In our present application, the PN sequence performs the role of a spreading code.
➢ By multiplying the information-bearing signal by the PN signal, each information bit is "chopped up" into a number of small time increments, as illustrated in the waveforms of Figure 9.6.
➢ These small time increments are commonly referred to as chips.
➢ For baseband transmission, the product signal m(t) represents the transmitted signal. We may express the transmitted signal as

m(t) = c(t)b(t) .....(1)
➢ The received signal r(t) consists of the transmitted signal m(t) plus an additive interference i(t), as shown in the channel model of Figure 9.5b:

r(t) = m(t) + i(t) = c(t)b(t) + i(t) .....(2)
➢ To recover the original message signal b(t), the received signal r(t) is applied to a demodulator that consists of a multiplier followed by an integrator and a decision device.
➢ The multiplier is supplied with a locally generated PN sequence that is an exact replica of that used in the transmitter.
➢ Moreover, we assume that the receiver operates in perfect synchronism with the transmitter, which means that the PN sequence in the receiver is lined up exactly with that in the transmitter.
➢ The multiplier output in the receiver is therefore given by

z(t) = c(t)r(t) = c^2(t)b(t) + c(t)i(t) .....(3)

➢ Equation (3) shows that the data signal b(t) is multiplied twice by the PN signal c(t), whereas the unwanted signal i(t) is multiplied only once. The PN signal c(t) alternates between the levels −1 and +1, and this alternation is destroyed when the signal is squared:

c^2(t) = 1 \quad \text{for all } t .....(4)

➢ Hence

z(t) = b(t) + c(t)i(t) .....(5)

➢ The data signal b(t) is thus reproduced at the multiplier output in the receiver, except for the effect of the interference, represented by the additive term c(t)i(t).
➢ Multiplication of the interference i(t) by the locally generated PN signal c(t) means that the spreading code will affect the interference just as it did the original signal at the transmitter.
➢ We now observe that the data component b(t) is narrowband, whereas the spurious component c(t)i(t) is wideband.
➢ Hence, by applying the multiplier output to a baseband (low-pass) filter with a bandwidth just large enough to accommodate the recovery of the data signal b(t), most of the power in the spurious component c(t)i(t) is filtered out.
➢ The effect of the interference i(t) is thus significantly reduced at the receiver output.
➢ In the figure, the low-pass filtering action is actually performed by the integrator, which evaluates the area under the signal produced at the multiplier output.
➢ The integration is carried out over the bit interval 0 ≤ t ≤ Tb, providing the sample value v.
➢ Finally, a decision is made by the receiver: if v is greater than the threshold of 0, the receiver says that binary symbol 1 of the original data sequence was sent in the interval 0 ≤ t ≤ Tb;
➢ if v is less than 0, the receiver says that symbol 0 was sent; and if v is exactly 0, the receiver makes a random guess in favour of 1 or 0.
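A minimal baseband simulation of equations (1)–(5); the spreading factor, interference model, and data length are assumptions made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 7                                  # chips per bit (spreading factor, assumed)
bits = rng.integers(0, 2, size=8)      # data sequence {b_k}
b = np.repeat(2 * bits - 1, N)         # NRZ data signal b(t), levels +/- 1
pn = 2 * np.array([0, 0, 1, 0, 1, 1, 1]) - 1  # one period of an m-sequence
c = np.tile(pn, len(bits))             # PN signal c(t), one period per bit

m = c * b                              # (1) transmitted signal m(t) = c(t) b(t)
i = 0.5 * rng.standard_normal(m.size)  # additive interference i(t) (assumed model)
r = m + i                              # (2) received signal r(t)
z = c * r                              # (3)-(5) despreading: z(t) = b(t) + c(t) i(t)

# Integrate over each bit interval and compare against the threshold 0:
v = z.reshape(len(bits), N).sum(axis=1)
decisions = (v > 0).astype(int)
print(bits)
print(decisions)  # with this mild interference the decisions match the data bits
```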
➢ In summary, the use of a spreading code in the transmitter produces a wideband transmitted signal that appears noise-like to a receiver that has no knowledge of the spreading code.
➢ We note that the longer we make the period of the spreading code, the closer the transmitted signal will be to a truly random binary wave, and the harder it is to detect.
➢ Naturally, the price we pay for the improved protection against interference is increased transmission bandwidth, system complexity, and processing delay.
➢ However, when our primary concern is the security of transmission, these costs are not unreasonable.
Direct Sequence Spread Spectrum(DSSS)
➢ The spread spectrum technique described in the previous topic is referred to as Direct Sequence Spread Spectrum. We may incorporate coherent binary phase shift keying (BPSK) into the transmitter and receiver of the direct sequence spread spectrum system, as shown in Figure 9.7.
➢ The transmitter first converts the incoming binary data sequence {bk} into an NRZ waveform b(t), which is followed by two stages of modulation.
➢ The first stage consists of a product modulator or multiplier with the data signal b(t) (representing the data sequence) and the PN signal c(t) as inputs.
➢ The second stage consists of a binary PSK modulator that uses the product signal m(t) = b(t)c(t) to phase-modulate a sinusoidal carrier.
➢ The transmitted signal x(t) is thus a direct-sequence spread binary phase-shift-keyed signal (DS/BPSK).
➢ The phase modulation θ(t) of x(t) has one of two values, 0 and π, depending on the polarities of the message signal b(t) and the PN signal c(t) at time t, in accordance with the truth table of Table 9.3.
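Table 9.3 itself is not reproduced here; in effect it tabulates the following relationship, since x(t) is the carrier phase-modulated by the product b(t)c(t):

```python
import math

# Phase theta(t) of x(t) versus the polarities of b(t) and c(t):
# the product b*c = +1 maps to phase 0, and b*c = -1 maps to phase pi.
for b in (+1, -1):
    for c in (+1, -1):
        theta = 0.0 if b * c > 0 else math.pi
        print(f"b = {b:+d}, c = {c:+d}  ->  theta = {theta:.4f} rad")
```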
➢ Figure 9.8 illustrates the waveforms for the second stage of modulation.
➢ Part of the modulated waveform shown in Figure 9.6c is reproduced in Figure 9.8a; the waveform shown here corresponds to one period of the PN sequence.
➢ Figure 9.8b shows the waveform of a sinusoidal carrier, and Figure 9.8c shows the DS/BPSK waveform that results from the second stage of modulation.
➢ The receiver, shown in Figure 9.7b, consists of two stages of demodulation.
➢ In the first stage, the received signal y(t) and a locally generated carrier are applied to a product modulator followed by a low-pass filter whose bandwidth is equal to that of m(t).
➢ This stage of the demodulation process reverses the phase shift keying applied to the transmitted signal.
➢ The second stage of demodulation performs spectrum despreading by multiplying the low-pass filter output by a locally generated replica of the PN signal c(t), followed by integration over the bit interval 0 ≤ t ≤ Tb, and finally decision making is performed.
Mode of analysis
➢ In the normal form of the transmitter, shown in Figure 9.7a, the spectrum spreading is performed prior to phase modulation.
➢ For the purpose of analysis, however, we find it more convenient to interchange the order of these operations, as shown in Figure 9.9.
➢ We are permitted to do this because the spectrum spreading and the binary phase shift keying are both linear operations; likewise for the phase demodulation and the spectrum despreading.
➢ But for the interchange of operations to be feasible, it is important to synchronise the
incoming data sequence and the PN sequence.
➢ The model of Figure 9.9 also includes representations of the channel and the receiver.
➢ In this model it is assumed that the interference j(t) limits the performance, so that the effect of channel noise may be ignored. Accordingly, the channel output is given by

y(t) = x(t) + j(t) = c(t)s(t) + j(t) .....(6)
➢ In the channel model included in Figure 9.9, the interfering signal is denoted by j(t).
➢ The channel model of Figure 9.9 is passband in spectral content, whereas that of Figure 9.5b is in baseband form.
➢ In the receiver, the received signal y(t) is first multiplied by the PN signal c(t), yielding an output that equals the coherent detector input u(t). Thus,

u(t) = c(t)y(t) = c^2(t)s(t) + c(t)j(t) = s(t) + c(t)j(t) .....(7)

➢ Equation (7) shows that the coherent detector input u(t) consists of a binary PSK signal s(t) embedded in additive code-modulated interference denoted by c(t)j(t).
➢ The modulated nature of the latter component forces the interference (jammer) to spread its spectrum, such that the detection of information bits at the receiver output is afforded increased reliability.
Synchronization:
➢ For proper operation, a spread spectrum communication system requires that the locally
generated PN sequence used in the receiver to de-spread the received signal be
synchronized to the PN sequence used in the transmitter.
➢ A solution to the synchronization problem consists of two parts: acquisition and tracking.
➢ In acquisition, or coarse synchronization, the two PN codes are aligned to within a fraction of a chip in as short a time as possible.
➢ Once the incoming PN signal is acquired, tracking, or fine synchronization, takes place.
Frequency Hopping Spread Spectrum (FHSS)
Types of Frequency Hopping:
Slow Frequency Hopping (FH-MFSK)
Working
FH-MFSK RECEIVER
In the slow-frequency-hopping type, more than one symbol is transmitted in the time between two frequency hops.
In slow frequency hopping, the time duration between two hops (Th, or Tc as shown) is greater than the data bit duration (Tb). This is depicted in the figure.
In the fast-frequency-hopping type, one complete data symbol, or a fraction of one, is transmitted in the time between two frequency hops. As a result, the hop rate may exceed the data bit rate of the binary sequence.
In fast frequency hopping, the time duration between two hops (Th, or Tc as shown) is less than or equal to the data bit duration (Tb). This is depicted in the figure.
Fast Frequency Hopping
FDM VS FHSS