EC6402 Communication Theory PDF
Process, Transmission of a Random Process Through an LTI Filter.
UNIT IV NOISE CHARACTERIZATION 9
Noise sources and types – Noise figure and noise temperature – Noise in cascaded systems.
Narrow band noise – PSD of in-phase and quadrature noise –Noise performance in AM systems –
Noise performance in FM systems – Pre-emphasis and de-emphasis – Capture effect, threshold
effect.
UNIT V INFORMATION THEORY 9
Entropy – Discrete Memoryless channels – Channel Capacity – Hartley-Shannon law – Source coding theorem – Huffman & Shannon-Fano codes
TEXT BOOKS:
1. J.G. Proakis, M. Salehi, "Fundamentals of Communication Systems", Pearson Education, 2006.
2. S. Haykin, "Digital Communications", John Wiley, 2005.
REFERENCES:
1. B.P. Lathi, "Modern Digital and Analog Communication Systems", 3rd Edition, Oxford University Press, 2007.
2. B. Sklar, "Digital Communications: Fundamentals and Applications", 2nd Edition, Pearson Education, 2007.
3. H.P. Hsu, "Analog and Digital Communications", Schaum's Outline Series, TMH, 2006.
4. L. Couch, "Modern Communication Systems", Pearson, 2001.
COMMUNICATION THEORY
1. AMPLITUDE MODULATION 7
2. ANGLE MODULATION 27
3. RANDOM PROCESS 32
4. NOISE CHARACTERIZATION 64
CONTENT PG.NO
UNIT-I
AMPLITUDE MODULATION
1.0 Introduction 8
1.1 Amplitude modulation 8
Analysis of Amplitude Modulation Carrier Wave
Frequency Spectrum of AM Wave
Modulation Index (m)
Limitations of Amplitude Modulation
1.2 AM Transmitter 15
1.3 SSB Transmission 15
Filter method
Phase shift method
Advantages and Disadvantages
1.4 Vestigial Side Band (VSB) Modulation
Advantages and Disadvantages
1.5 DSB-SC 24
Spectrum signals
Generation of DSB-SC
Distortion & Attenuation
Demodulation process
1.6 Hilbert transforms 27
Properties of Hilbert transforms
Pre envelope
Complex envelope
UNIT-II
ANGLE MODULATION
2.0 Introduction 36
2.1 Frequency Modulation 39
Modulation index
Equation of PM Wave
2.4 Wide-Band FM 44
Generation of wideband FM signals
Indirect Method
System 1
System 2
2.6 Comparisons of Various Modulations 47
Comparison of NBFM & WBFM
2.7 Application and its uses 51
UNIT-III
RANDOM PROCESSES
3.2 Central Limit Theorem 56
3.3 Stationary process 57
3.4 Correlation 58
Definition
Wide Sense Stationary
Pearson's correlation coefficient
UNIT-IV
NOISE CHARACTERIZATION
4.0 Introduction 64
4.1 Analysis of Noise in Communication Systems 65
4.2 Classification of Noise 99
4.1.1 Explanation of External Noise
4.1.2 Explanation of Internal Noise in Communication
4.1.3 Signal to Noise Ratio
4.3 Noise in Cascade Systems 104
4.4 Narrow Band Noise 105
4.5 FM Capture Effect 109
4.6 Pre-Emphasis 110
Pre-emphasis circuit
4.7 De-Emphasis 112
UNIT-V
INFORMATION THEORY
5.0 Introduction 114
5.1 Entropy 116
Formula for entropy
Properties
Tree diagram
References
Glossary Terms
Tutorial Problems
Worked Out Problems
Question Bank
Question Paper
CHAPTER I
AMPLITUDE MODULATION
Antenna Height
Narrow Banding
Poor radiation and penetration
Diffraction angle
Multiplexing.
Functions of the Carrier Wave:
The main function of the carrier wave is to carry the audio or video signal from the transmitter to the receiver. The wave that results from the superimposition of the audio signal and the carrier wave is called the modulated wave.
Types of modulation:
The sinusoidal carrier wave can be given by the equation,
vc = Vc sin(wct + θ) = Vc sin(2πfct + θ)
Vc – Maximum value
fc – Frequency
θ – Phase relation
Since the three variables are the amplitude, frequency, and phase angle, the modulation can be
done by varying any one of them. Thus there are three modulation types namely:
Amplitude Modulation (AM)
Frequency Modulation (FM)
Phase Modulation (PM)
We have introduced linear modulation. In particular,
DSB-SC, Double sideband suppressed carrier
DSB-LC, Double sideband large carrier (AM)
SUPER HETERODYNE RECEIVER
COMPARISON OF VARIOUS AM TECHNIQUES
1.1 AMPLITUDE MODULATION:
"Modulation is the process of superimposing a low frequency signal on a high frequency carrier signal."
OR
"The process of modulation can be defined as varying the RF carrier wave in accordance with the intelligence or information in a low frequency signal."
OR
"Modulation is defined as the process by which some characteristic, usually amplitude, frequency or phase, of a carrier is varied in accordance with the instantaneous value of some other voltage, called the modulating voltage."
Need For Modulation
1. If two musical programs were played at the same time in the same area, it would be difficult for anyone to listen to one source and not hear the second, since all musical sounds have approximately the same frequency range, from about 50 Hz to 10 kHz. If the desired program is shifted up to a band of frequencies between 100 kHz and 110 kHz, and the second program is shifted up to the band between 120 kHz and 130 kHz, then both programs still have a 10 kHz bandwidth, and the listener can (by band selection) retrieve the program of his own choice. The receiver would down-shift only the selected band of frequencies to a suitable range of 50 Hz to 10 kHz.
2. A second, more technical reason to shift the message signal to a higher frequency is related to antenna size. It is to be noted that the antenna size is inversely proportional to the frequency to be radiated. This is 75 meters at 1 MHz, but at 15 kHz it has increased to 5000 meters (just over 16,000 feet); a vertical antenna of this size is impossible.
3. The third reason for modulating a high frequency carrier is that RF (radio frequency) energy will travel a much greater distance than the same amount of energy transmitted as sound power.
Types of Modulation
The carrier signal is a sine wave at the carrier frequency. Below equation shows that the sine wave
has three characteristics that can be altered.
Instantaneous voltage (E) =Ec(max)Sin(2πfct + θ)
The terms that may be varied are the carrier voltage Ec, the carrier frequency fc, and the carrier phase angle θ. So three forms of modulation are possible.
1. Amplitude Modulation
Amplitude modulation is an increase or decrease of the carrier voltage (Ec), with all other factors remaining constant.
2. Frequency Modulation
Frequency modulation is a change in the carrier frequency (fc) with all other factors remaining constant.
3. Phase Modulation
Phase modulation is a change in the carrier phase angle (θ). The phase angle cannot change without also effecting a change in frequency. Therefore, phase modulation is in reality a second form of frequency modulation.
EXPLANATION OF AM:
The method of varying amplitude of a high frequency carrier wave in accordance with the
information to be transmitted, keeping the frequency and phase of the carrier wave unchanged is
called Amplitude Modulation. The information is considered as the modulating signal and it is
superimposed on the carrier wave by applying both of them to the modulator. The detailed
diagram showing the amplitude modulation process is given below.
FIG 1.1 Amplitude Modulation
As shown above, the carrier wave has positive and negative half cycles. Both these cycles are varied according to the information to be sent. The carrier then consists of sine waves whose amplitudes follow the amplitude variations of the modulating wave. The carrier is kept in an envelope formed by the modulating wave. From the figure, you can also see that the amplitude variation of the high frequency carrier is at the signal frequency, and the frequency of the carrier wave is the same as the frequency of the resulting wave.
Analysis of Amplitude Modulation Carrier Wave:
Let vc = Vc sin wct
vm = Vm sin wmt
vc – Instantaneous value of the carrier
Vc – Peak value of the carrier
wc – Angular velocity of the carrier
vm – Instantaneous value of the modulating signal
Vm – Maximum value of the modulating signal
wm – Angular velocity of the modulating signal
fm – Modulating signal frequency
It must be noted that the phase angle remains constant in this process, so it can be ignored. The amplitude of the carrier wave varies at fm. The amplitude modulated wave is given by the equation
A = Vc + vm = Vc + Vm sin wmt = Vc [1 + (Vm/Vc) sin wmt] = Vc (1 + m sin wmt)
The instantaneous value of the amplitude modulated wave can then be expanded, using the product-to-sum identity, as
v = A sin wct = Vc (1 + m sin wmt) sin wct
= Vc sin wct + (mVc/2) cos (wc – wm)t – (mVc/2) cos (wc + wm)t
The carrier frequency is much greater than that of the modulating signal (wc >> wm). Thus, the second and third cosine terms lie close to the carrier frequency. The equation is represented graphically as shown below.
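The expansion into a carrier term and two side-frequency terms can be verified numerically; a small sketch with illustrative values (Vc = 1, m = 0.5, fc = 1 kHz, fm = 100 Hz):

```python
import math

Vc, m = 1.0, 0.5          # carrier amplitude and modulation index (illustrative)
fc, fm = 1000.0, 100.0    # carrier and modulating frequencies, Hz
wc, wm = 2 * math.pi * fc, 2 * math.pi * fm

def am_wave(t):
    # v = Vc (1 + m sin wmt) sin wct
    return Vc * (1 + m * math.sin(wm * t)) * math.sin(wc * t)

def three_terms(t):
    # carrier + lower side-frequency term - upper side-frequency term
    return (Vc * math.sin(wc * t)
            + (m * Vc / 2) * math.cos((wc - wm) * t)
            - (m * Vc / 2) * math.cos((wc + wm) * t))

for k in range(1000):
    t = k / 44100.0
    assert abs(am_wave(t) - three_terms(t)) < 1e-9
print("expansion verified")
```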
Frequency Spectrum of AM Wave:
Lower side frequency – (wc – wm)/2π
Upper side frequency – (wc + wm)/2π
The frequency components present in the AM wave are represented by vertical lines approximately located along the frequency axis. The height of each vertical line is drawn in proportion to its amplitude. Since the angular velocity of the carrier is greater than that of the modulating signal, there will not be any change in the original carrier frequency, but the side band frequencies (wc – wm)/2π and (wc + wm)/2π will change with the modulating signal. The former is called the lower side band (LSB) frequency and the latter is known as the upper side band (USB) frequency.
Since the signal frequency wm/2π is present in the side bands, it is clear that the carrier voltage component does not transmit any information.
Two side band frequencies will be produced when a carrier is amplitude modulated by a single frequency. That is, an AM wave has a bandwidth from (wc – wm)/2π to (wc + wm)/2π; that is, a bandwidth of 2wm/2π, or twice the signal frequency, is produced. When a modulating signal has more than one frequency, two side band frequencies are produced by every frequency. Thus for two frequencies of the modulating signal, two LSB and two USB frequencies will be produced.
The side bands of frequencies present above the carrier frequency will be the same as the ones present below. The side band frequencies present above the carrier frequency are known as the upper side band, and all those below the carrier frequency belong to the lower side band. The USB frequencies represent the sum of the individual modulating frequencies and the carrier frequency, and the LSB frequencies represent the difference between the carrier frequency and the modulating frequencies. The total bandwidth is expressed in terms of the highest modulating frequency and is equal to twice this frequency.
SCE 13 DEPT OF ECE
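For a single modulating tone, the side-frequency and bandwidth arithmetic above reduces to a few lines (the carrier and tone frequencies below are illustrative):

```python
def am_sidebands(fc_hz, fm_hz):
    """Return (LSB, USB, bandwidth) in Hz for single-tone AM."""
    lsb = fc_hz - fm_hz          # lower side frequency
    usb = fc_hz + fm_hz          # upper side frequency
    return lsb, usb, usb - lsb   # bandwidth = 2 * fm

print(am_sidebands(1000e3, 5e3))   # (995000.0, 1005000.0, 10000.0)
```

A 1000 kHz carrier modulated by a 5 kHz tone thus occupies 995 kHz to 1005 kHz, twice the signal frequency.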
Modulation Index (m):
The ratio of the amplitude change of the carrier wave to the amplitude of the normal carrier wave is called the modulation index. It is represented by the letter 'm'.
It can also be defined as the range in which the amplitude of the carrier wave is varied by the modulating signal: m = Vm/Vc.
Percentage modulation, %m = m × 100 = (Vm/Vc) × 100
The percentage modulation lies between 0 and 80%.
Another way of expressing the modulation index is in terms of the maximum and minimum values of the amplitude of the modulated carrier wave. This is shown in the figure below.
FIG 1.2 Amplitude Modulation Carrier Wave
2Vm = Vmax – Vmin
Vm = (Vmax – Vmin)/2
Vc = Vmax – Vm
= Vmax – (Vmax – Vmin)/2
= (Vmax + Vmin)/2
Substituting the values of Vm and Vc in the equation m = Vm/Vc, we get
m = (Vmax – Vmin)/(Vmax + Vmin)
As noted earlier, the value of 'm' lies between 0 and 0.8. The value of m determines the strength and the quality of the transmitted signal. In an AM wave, the signal is contained in the variations of the carrier amplitude. The audio signal transmitted will be weak if the carrier wave is only modulated to a small degree. But if the value of m exceeds unity, the transmitter output produces severe distortion.
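The Vmax/Vmin form of the modulation index can be sketched directly (the envelope voltages below are illustrative):

```python
def modulation_index(v_max, v_min):
    """m = (Vmax - Vmin) / (Vmax + Vmin), from the envelope extremes of the AM wave."""
    return (v_max - v_min) / (v_max + v_min)

m = modulation_index(15.0, 5.0)   # envelope swings between 15 V and 5 V
print(m)          # 0.5
print(m * 100)    # 50.0 (percentage modulation)
```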
Power Relations in an AM wave:
A modulated wave has more power than the carrier wave had before modulation. The total power components in amplitude modulation can be written as:
Ptotal = Pcarrier + PLSB + PUSB
Considering additional resistance like antenna resistance R,
Pcarrier = (Vc/√2)²/R = Vc²/2R
Each side band has a peak value of mVc/2 and an r.m.s value of mVc/2√2. Hence the power in the LSB and USB can be written as
PLSB = PUSB = (mVc/2√2)²/R = (m²/4)(Vc²/2R) = (m²/4) Pcarrier
Ptotal = Vc²/2R + (m²/4)(Vc²/2R) + (m²/4)(Vc²/2R) = (Vc²/2R)(1 + m²/2) = Pcarrier (1 + m²/2)
In some applications, the carrier is simultaneously modulated by several sinusoidal modulating signals. In such a case, the total modulation index is given as
Mt = √(m1² + m2² + m3² + m4² + …)
If Ic and It are the r.m.s values of the unmodulated current and the total modulated current, and R is the resistance through which these currents flow, then
Ptotal/Pcarrier = (It²R)/(Ic²R) = (It/Ic)²
Ptotal/Pcarrier = (1 + m²/2)
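The power relations above can be exercised numerically; a sketch assuming a 1 kW carrier and illustrative modulation indices:

```python
import math

def am_powers(p_carrier, m):
    """Total power and per-sideband power for single-tone AM (powers in watts)."""
    p_sideband = (m ** 2 / 4) * p_carrier     # PLSB = PUSB = (m^2/4) * Pcarrier
    p_total = p_carrier * (1 + m ** 2 / 2)    # Ptotal = Pcarrier * (1 + m^2/2)
    return p_total, p_sideband

def total_modulation_index(*indices):
    """Mt = sqrt(m1^2 + m2^2 + ...) for several simultaneous sinusoidal signals."""
    return math.sqrt(sum(m * m for m in indices))

print(am_powers(1000.0, 1.0))            # (1500.0, 250.0) for a 1 kW carrier at m = 1
print(total_modulation_index(0.3, 0.4))  # 0.5
```

Note that even at 100% modulation the two side bands together carry only one third of the total power, which is the inefficiency discussed later for standard AM.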
4. Poor Audio Quality – To obtain high-fidelity reception, all audio frequencies up to 15 kHz must be reproduced. To minimise interference from adjacent broadcasting stations, however, AM broadcast channels are restricted to a bandwidth of 10 kHz. Therefore the audio quality of AM broadcasting stations is known to be poor.
1.2 AM TRANSMITTERS:
Transmitters that transmit AM signals are known as AM transmitters. These transmitters are used in the medium wave (MW) and short wave (SW) frequency bands for AM broadcast. The MW band has frequencies between 550 kHz and 1650 kHz, and the SW band has frequencies ranging from 3 MHz to 30 MHz. The two types of AM transmitters, classified by their transmitting powers, are:
High Level
Low Level
High level transmitters use high level modulation, and low level transmitters use low level modulation. The choice between the two modulation schemes depends on the transmitting power of the AM transmitter. In broadcast transmitters, where the transmitting power may be of the order of kilowatts, high level modulation is employed. In low power transmitters, where only a few watts of transmitting power are required, low level modulation is used.
High-Level and Low-Level Transmitters
The figures below show the block diagrams of high-level and low-level transmitters. The basic difference between the two transmitters is the power amplification of the carrier and modulating signals.
Figure (a) shows the block diagram of a high-level AM transmitter.
Figure (a) is drawn for audio transmission. In high-level transmission, the powers of the carrier
and modulating signals are amplified before applying them to the modulator stage, as shown in
figure (a). In low-level modulation, the powers of the two input signals of the modulator stage are
not amplified. The required transmitting power is obtained from the last stage of the transmitter,
the class C power amplifier.
The various sections of figure (a) are:
Carrier oscillator
Buffer amplifier
Frequency multiplier
Power amplifier
Audio chain
Modulated class C power amplifier
Carrier oscillator
The carrier oscillator generates the carrier signal, which lies in the RF range. The frequency of the carrier is always very high. Because it is very difficult to generate high frequencies with good frequency stability, the carrier oscillator generates a sub-multiple of the required carrier frequency. This sub-multiple frequency is multiplied by the frequency multiplier stage to get the required carrier frequency. Further, a crystal oscillator can be used in this stage to generate a low frequency carrier with the best frequency stability. The frequency multiplier stage then increases the frequency of the carrier to the required value.
Buffer Amplifier
The purpose of the buffer amplifier is twofold. It first matches the output impedance of the carrier oscillator with the input impedance of the frequency multiplier, the next stage after the carrier oscillator. It then isolates the carrier oscillator from the frequency multiplier.
This is required so that the multiplier does not draw a large current from the carrier oscillator. If this occurs, the frequency of the carrier oscillator will not remain stable.
Frequency Multiplier
The sub-multiple frequency of the carrier signal, generated by the carrier oscillator, is now applied to the frequency multiplier through the buffer amplifier. This stage is also known as a harmonic generator. The frequency multiplier generates higher harmonics of the carrier oscillator frequency. The frequency multiplier is a tuned circuit that can be tuned to the requisite carrier frequency that is to be transmitted.
Power Amplifier
The power of the carrier signal is then amplified in the power amplifier stage. This is the
basic requirement of a high-level transmitter. A class C power amplifier gives high power current
pulses of the carrier signal at its output.
Audio Chain
The audio signal to be transmitted is obtained from the microphone, as shown in figure (a). The
audio driver amplifier amplifies the voltage of this signal. This amplification is necessary to drive
the audio power amplifier. Next, a class A or a class B power amplifier amplifies the power of the
audio signal.
Modulated Class C Amplifier
This is the output stage of the transmitter. The modulating audio signal and the carrier signal, after power amplification, are applied to this modulating stage. The modulation takes place at this stage. The class C amplifier also amplifies the power of the AM signal to the required transmitting power. This signal is finally passed to the antenna, which radiates the signal into space.
Figure (b) shows the block diagram of a low-level AM transmitter.
The low-level AM transmitter shown in the figure (b) is similar to a high-level transmitter, except
that the powers of the carrier and audio signals are not amplified. These two signals are directly
applied to the modulated class C power amplifier.
Modulation takes place at this stage, and the power of the modulated signal is amplified to the required transmitting power level. The transmitting antenna then transmits the signal.
Coupling of Output Stage and Antenna
The output stage of the modulated class C power amplifier feeds the signal to the transmitting
antenna. To transfer maximum power from the output stage to the antenna it is necessary that the
impedance of the two sections match. For this, a matching network is required. The matching between the two should be perfect at all transmitting frequencies. As the matching is required at all the transmitting frequencies, a broadband matching network is used.
The matching network used for coupling the output stage of the transmitter and the antenna is called a double π-network. This network is shown in figure (c). It consists of two inductors, L1 and L2, and two capacitors, C1 and C2. The values of these components are chosen such that the input impedance of the network between 1 and 1', shown in figure (c), is matched with the output impedance of the output stage of the transmitter. Further, the output impedance of the network is matched with the impedance of the antenna.
The double π matching network also filters unwanted frequency components appearing at the output of the last stage of the transmitter. The output of the modulated class C power amplifier may contain higher harmonics, such as second and third harmonics, that are highly undesirable. The frequency response of the matching network is set such that these unwanted higher harmonics are totally suppressed, and only the desired signal is coupled to the antenna.
Comparison of AM and FM Signals
Both AM and FM systems are used in commercial and non-commercial applications, such as radio broadcasting and television transmission. Each system has its own merits and demerits. In a particular application, an AM system can be more suitable than an FM system. Thus the two are equally important from the application point of view.
Advantages of FM systems over AM systems
The advantages of FM over AM systems are:
The amplitude of an FM wave remains constant. This provides the system designers an opportunity to remove the noise from the received signal. This is done in FM receivers by employing an amplitude limiter circuit so that the noise above the limiting amplitude is suppressed. Thus, the FM system is considered a noise immune system. This is not possible in AM systems because the baseband signal is carried by the amplitude variations itself and the envelope of the AM signal cannot be altered.
Most of the power in an FM signal is carried by the side bands. For higher values of the modulation index, mf, the major portion of the total power is contained in the side bands, and the carrier signal contains less power. In contrast, in an AM system, only one third of the total power is carried by the side bands and two thirds of the total power is lost in the form of carrier power.
In FM systems, the power of the transmitted signal depends on the amplitude of the unmodulated carrier signal, and hence it is constant. In contrast, in AM systems, the power depends on the modulation index ma. The maximum allowable power in AM systems is 100 percent when ma is unity. Such a restriction is not applicable in the case of FM systems. This is because the total power in an FM system is independent of the modulation index, mf, and frequency deviation fd. Therefore, the power usage is optimum in an FM system.
In an AM system, the only method of reducing noise is to increase the transmitted power of the signal. This operation increases the cost of the AM system. In an FM system, you can increase the frequency deviation in the carrier signal to reduce the noise. If the frequency deviation is high, then the corresponding variation in amplitude of the baseband signal can be easily retrieved. If the frequency deviation is small, noise can overshadow this variation and the frequency deviation cannot be translated into its corresponding amplitude variation. Thus, by increasing the frequency deviation in the FM signal, the noise effect can be reduced. There is no provision in an AM system to reduce the noise effect by any method other than increasing its transmitted power.
In an FM signal, the adjacent FM channels are separated by guard bands. In an FM system there is no signal transmission through the spectrum space of the guard band. Therefore, there is hardly any interference between adjacent FM channels. However, in an AM system, there is no guard band provided between two adjacent channels. Therefore, there is always interference between AM radio stations unless the received signal is strong enough to suppress the signal of the adjacent channel.
The disadvantages of FM systems over AM systems:
There are an infinite number of side bands in an FM signal and therefore the theoretical bandwidth of an FM system is infinite. The bandwidth of an FM system is limited by Carson's rule, but is still much higher, especially in WBFM. In AM systems, the bandwidth is only twice the modulating frequency, which is much less than that of WBFM. This makes FM systems costlier than AM systems.
The equipment of an FM system is more complex than that of an AM system because of the complex circuitry of FM systems; this is another reason that FM systems are costlier than AM systems.
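The bandwidth comparison above can be sketched with Carson's rule; the 75 kHz deviation and 15 kHz audio below are the usual broadcast-FM figures, used here only as an illustration:

```python
def carson_bandwidth(freq_deviation, fm):
    """Carson's rule: approximate FM bandwidth = 2 * (deviation + fm), in Hz."""
    return 2 * (freq_deviation + fm)

def am_bandwidth(fm):
    """AM (DSB) bandwidth is just twice the modulating frequency."""
    return 2 * fm

print(carson_bandwidth(75e3, 15e3))  # 180000.0 -- WBFM needs far more spectrum
print(am_bandwidth(15e3))            # 30000.0
```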
The receiving area of an FM system is smaller than that of an AM system; consequently FM channels are restricted to metropolitan areas, while AM radio stations can be received anywhere in the world. An FM system transmits signals through line-of-sight propagation, in which the distance between the transmitting and receiving antennas should not be much. In an AM system, signals of short wave band stations are transmitted through atmospheric layers that reflect the radio waves over a wider area.
1.3 SSB TRANSMISSION:
There are two methods used for SSB transmission:
1. Filter Method
2. Phase Shift Method
Filter Method:
Fig 1.3 shows the filter method of SSB generation for transmission.
FIG 1.3 Filter Method
1. A crystal controlled master oscillator produces a stable carrier frequency fc (say 100 kHz).
2. This carrier frequency is then fed to the balanced modulator through a buffer amplifier which isolates these two stages.
3. The audio signal from the modulating amplifier modulates the carrier in the balanced modulator. The audio frequency range is 300 to 2800 Hz. The carrier is suppressed in this stage, but both side bands (USB & LSB) are allowed to pass.
4. A band pass filter (BPF) allows only a single band, either USB or LSB, to pass through it, depending on our requirements.
5. This side band is then heterodyned in the balanced mixer stage with a 12 MHz frequency produced by a crystal oscillator or synthesizer, depending upon the requirements of our transmission. In the mixer stage, the frequency of the crystal oscillator or synthesizer is added to the SSB signal, the output frequency thus being raised to the value desired for transmission.
6. This band is then amplified in the driver and power amplifier stages and fed to the aerial for transmission.
Phase Shift Method:
The phasing method of SSB generation uses a phase shift technique that causes one of the side bands to be canceled out. A block diagram of a phasing type SSB generator is shown in fig 1.4.
FIG 1.4 Phase Shift Method
It uses two balanced modulators instead of one. The balanced modulators effectively eliminate the carrier. The carrier oscillator is applied directly to the upper balanced modulator along with the audio modulating signal. Then both the carrier and modulating signal are shifted in phase by 90° and applied to the second, lower, balanced modulator. The two balanced modulator outputs are then added together algebraically. The phase shifting action causes one side band to be canceled out when the two balanced modulator outputs are combined.
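For a single audio tone, the cancellation can be verified numerically. In this sketch the frequencies are illustrative, and the 90° shifted versions of the tone and carrier are simply the corresponding sines:

```python
import math

fc, fm = 1000.0, 100.0   # illustrative carrier and audio-tone frequencies, Hz
wc, wm = 2 * math.pi * fc, 2 * math.pi * fm

def phasing_ssb(t):
    upper = math.cos(wm * t) * math.cos(wc * t)   # audio x carrier
    lower = math.sin(wm * t) * math.sin(wc * t)   # both shifted by 90 degrees
    return upper - lower                          # combining cancels one side band

for k in range(1000):
    t = k / 44100.0
    # the output is a single tone at fc + fm: only the upper side band remains
    assert abs(phasing_ssb(t) - math.cos((wc + wm) * t)) < 1e-9
print("lower side band canceled; single tone at fc + fm")
```

Summing instead of subtracting the two modulator outputs would keep the lower side band instead.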
Ring modulation is a signal-processing function in electronics, an implementation of amplitude modulation or frequency mixing, performed by multiplying two signals, where one is typically a sine wave or another simple waveform. It is referred to as "ring" modulation because the analog circuit of diodes originally used to implement this technique took the shape of a ring. This circuit is similar to a bridge rectifier, except that instead of the diodes facing "left" or "right", they go "clockwise" or "anti-clockwise". A ring modulator is an effects unit working on this principle.
The carrier, which is AC, at a given time makes one pair of diodes conduct and reverse-biases the other pair. The conducting pair carries the signal from the left transformer secondary to the primary of the transformer at the right. If the left carrier terminal is positive, the top and bottom diodes conduct. If that terminal is negative, then the "side" diodes conduct, but create a polarity inversion between the transformers. This action is much like that of a DPDT switch wired for reversing connections.
Ring modulators frequency mix or heterodyne two waveforms, and output the sum and
difference of the frequencies present in each waveform. This process of ring modulation
produces a signal rich in partials. As well, neither the carrier nor the incoming signal is
prominent in the outputs, and ideally, not at all.
Two oscillators whose frequencies are harmonically related and ring modulated against each other produce sounds that still adhere to the harmonic partials of the notes, but contain a very different spectral make-up. When the oscillators' frequencies are not harmonically related, ring modulation creates inharmonic partials, often producing bell-like or otherwise metallic sounds.
If the same signal is sent to both inputs of a ring modulator, the resultant harmonic
spectrum is the original frequency domain doubled (if f1 = f2 = f, then f2 − f1 = 0 and f2 + f1
= 2f). Regarded as multiplication, this operation amounts to squaring. However, some
distortion occurs due to the forward voltage drop of the diodes.
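The same-signal case can be checked with a short sketch (the 220 Hz input is illustrative): squaring a sine leaves a DC term (f2 − f1 = 0) plus a component at 2f, exactly as stated above.

```python
import math

f = 220.0   # illustrative input frequency, Hz

def ring_mod_same_input(t):
    # same signal on both inputs: the multiplication amounts to squaring
    return math.sin(2 * math.pi * f * t) ** 2

for k in range(1000):
    t = k / 48000.0
    # sin^2 = a DC term (f2 - f1 = 0) plus a component at f2 + f1 = 2f
    expected = 0.5 - 0.5 * math.cos(2 * math.pi * 2 * f * t)
    assert abs(ring_mod_same_input(t) - expected) < 1e-9
print("spectrum: DC plus a component at 2f")
```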
Some modern ring modulators are implemented using digital signal processing techniques
by simply multiplying the time domain signals, producing a nearly-perfect signal output.
Before digital music synthesizers became common, at least some analog synthesizers (such as the ARP 2600) used analog multipliers for this purpose; they were closely related to those used in electronic analog computers.
The output waveform contains the sum and difference of the input frequencies. Thus, in the basic case where two sine waves of frequencies f1 and f2 (f1 < f2) are multiplied, two new sine waves are created, with one at f1 + f2 and the other at f2 − f1. The two new waves are unlikely to be harmonically related and (in a well designed ring modulator) the original signals are not present. It is this that gives the ring modulator its unique tones.
Intermodulation products can be generated by carefully selecting and changing the frequency of the two input waveforms. If the signals are processed digitally, the frequency-domain convolution becomes circular convolution. If the signals are wideband, this will cause aliasing distortion, so it is common to oversample the operation or low-pass filter the signals prior to ring modulation.
One application is spectral inversion, typically of speech; a carrier frequency is chosen to be above the highest speech frequencies (which are low-pass filtered at, say, 3 kHz, for a carrier of perhaps 3.3 kHz), and the sum frequencies from the modulator are removed by more low-pass filtering. The remaining difference frequencies have an inverted spectrum: high frequencies become low, and vice versa.
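The inversion arithmetic can be sketched in a few lines, assuming the sum frequencies are fully filtered out and using the 3.3 kHz carrier suggested above:

```python
def invert_spectrum(f_in, f_carrier=3300.0):
    """Difference frequency left after the modulator (sum term assumed filtered out); Hz."""
    return f_carrier - f_in

for f in (300.0, 1000.0, 3000.0):     # hypothetical speech components below the 3 kHz cutoff
    print(f, "->", invert_spectrum(f))  # high frequencies come out low, and vice versa
```

Applying the same operation a second time with the same carrier restores the original frequencies, which is why this was used as a simple speech scrambler.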
Advantages:
It allows better management of the frequency spectrum. More transmissions can fit into a given frequency range than would be possible with double side band (DSB) signals.
All of the transmitted power is message power; none is dissipated as carrier power.
Disadvantages:
1. The cost of a single side band (SSB) receiver is higher than its double side band (DSB) counterpart by a ratio of about 3:1.
2. The average radio user wants only to flip a power switch and dial a station. Single side band (SSB) receivers require several precise frequency control settings to minimize distortion and may require continual readjustment during the use of the system.
1.4 VESTIGIAL SIDE BAND (VSB) MODULATION:
• The following are the drawbacks of SSB signal generation:
1. Generation of an SSB signal is difficult.
2. Selective filtering is to be done to get the original signal back.
3. The phase shifter should be exactly tuned to 90°.
• To overcome these drawbacks, VSB modulation is used. It can be viewed as a compromise between SSB and DSB-SC. Figure 1.5 shows all the three modulation schemes.
FIG 1.6 Spectrum of VSB Signals
Vestigial sideband (VSB) transmission is a compromise between DSB and SSB.
In VSB modulation, one sideband is passed almost completely, whereas only a residual portion of the other sideband is retained in such a way that the demodulation process can still reproduce the original signal.
VSB signals are easier to generate because some roll-off in the filter edges is allowed. This results in system simplification, and their bandwidth is only slightly greater than that of SSB signals (about 25%).
The filtering operation can be represented by a filter H(f) that passes some of the lower (or upper) sideband and most of the upper (or lower) sideband.
2. Attenuate the image signal before heterodyning.
Advantages:
VSB is a form of amplitude modulation intended to save bandwidth over regular AM.
Portions of one of the redundant sidebands are removed to form a vestigial side band signal.
The actual information is transmitted in the sidebands, rather than the carrier; both sidebands carry the same information. Because the LSB and USB are essentially mirror images of each other, one can be discarded or used for a second channel or for diagnostic purposes.
Disadvantages:
VSB transmission is similar to SSB transmission, in which one of the sidebands is completely removed. In VSB transmission, however, the second sideband is not completely removed, but is filtered to remove all but the desired range of frequencies.

1.5 DSB-SC:
Double-sideband suppressed-carrier (DSB-SC) transmission suppresses the carrier, which carries no intelligence, and each sideband carries the same information. Single Side Band (SSB) Suppressed Carrier is 100% efficient.

FIG 1.7 Spectrum plot of a DSB-SC signal
Generation:
DSB-SC is generated by a mixer. This consists of a message signal multiplied by a carrier signal.
The mathematical representation of this process is shown below, where the product-to-sum
trigonometric identity is used.
FIG 1.8 Generation of DSB-SC signal
Demodulation:
Demodulation is done by multiplying the DSB-SC signal with the carrier signal, just like the modulation process. The resulting signal is then passed through a low-pass filter to produce a scaled version of the original message signal. Since the carrier is not transmitted, DSB-SC must be demodulated coherently, with a locally generated carrier of the correct frequency and phase.
Multiplying the modulated signal by the carrier gives a scaled version of the original message signal plus a second term. Since fc ≫ fm, this second term is much higher in frequency than the original message. Once this signal passes through a low-pass filter, the higher-frequency component is removed, leaving just the original message.
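This modulate–multiply–filter chain can be sketched numerically. A minimal NumPy sketch; the sample rate, tone frequencies, and the FFT "brick-wall" filter standing in for a real low-pass filter are all illustrative choices, not from the text:

```python
import numpy as np

fs = 100_000            # sample rate (Hz), assumed
t = np.arange(0, 0.02, 1 / fs)
fm, fc = 500, 10_000    # message and carrier frequencies, assumed

m = np.cos(2 * np.pi * fm * t)          # message signal
s = m * np.cos(2 * np.pi * fc * t)      # DSB-SC: product of message and carrier

# Coherent demodulation: multiply by the same carrier ...
v = s * np.cos(2 * np.pi * fc * t)      # = m/2 + (m/2)cos(4*pi*fc*t)

# ... then low-pass filter. An FFT brick-wall filter stands in for a real LPF.
V = np.fft.rfft(v)
f = np.fft.rfftfreq(len(v), 1 / fs)
V[f > 2 * fm] = 0                        # keep only baseband components
recovered = 2 * np.fft.irfft(V, len(v))  # scale by 2 to undo the 1/2 factor

print(np.max(np.abs(recovered - m)))     # residual error is tiny
```

The high-frequency term at 2fc is removed by the filter, leaving a copy of the message.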
Distortion and Attenuation:
For demodulation, the demodulation oscillator's frequency and phase must be exactly the same as the modulation oscillator's; otherwise, distortion and/or attenuation will occur.
To see this effect, take the following conditions:
Message signal to be transmitted: m(t)
Demodulation signal (with small frequency and phase deviations from the modulation signal): cos[(ωc + Δω)t + φ]
The resultant signal after low-pass filtering can then be given by
v(t) = (1/2) m(t) cos(Δω·t + φ)
The term cos(Δω·t + φ) results in distortion and attenuation of the original message signal. In particular, the frequency error Δω contributes to distortion, while the phase error φ adds to the attenuation.
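The attenuation effect can be checked numerically: with a local-oscillator phase error φ and no frequency error, the recovered message comes out scaled by cos φ. A sketch with assumed tone frequencies; the FFT brick-wall filter is again a stand-in for a real LPF:

```python
import numpy as np

fs = 100_000
t = np.arange(0, 0.02, 1 / fs)
fm, fc = 500, 10_000
phi = np.pi / 3                          # 60-degree LO phase error, assumed

m = np.cos(2 * np.pi * fm * t)
s = m * np.cos(2 * np.pi * fc * t)       # DSB-SC signal

# Demodulate with a phase-offset carrier, then low-pass filter.
v = s * np.cos(2 * np.pi * fc * t + phi)
V = np.fft.rfft(v)
V[np.fft.rfftfreq(len(v), 1 / fs) > 2 * fm] = 0
recovered = 2 * np.fft.irfft(V, len(v))

# The recovered message is attenuated by cos(phi) = 0.5
print(recovered.max())
```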
1.6 HILBERT TRANSFORM:
The Hilbert transform x̂(t) of a signal x(t) is defined by the equation
x̂(t) = (1/π) ∫ x(s)/(t − s) ds,
where the integral is a Cauchy principal value integral. The reconstruction formula is
x(t) = −(1/π) ∫ x̂(s)/(t − s) ds.
FIG 1.9 Block diagram of Hilbert Transform Pair
The pair x(t), x̂(t) is called a Hilbert transform pair. The Hilbert transformer is an LTI system whose transfer function is H(v) = −j·sgn(v), because x̂(t) = (1/(πt)) * x(t), which, on taking the Fourier transform, implies X̂(v) = −j (sgn v) X(v).
A Hilbert transformer produces a −90° phase shift for the positive frequency components of the input x(t); the amplitude does not change.
Properties of the Hilbert transform:
1. x(t) and x̂(t) have the same amplitude spectrum
2. x(t) and x̂(t) have the same autocorrelation function
3. x(t) and x̂(t) are orthogonal
4. The Hilbert transform of x̂(t) is −x(t)
Pre envelope:
The pre envelope of a real signal x(t) is the complex function
x+(t) = x(t) + j x̂(t).
The pre envelope is useful in treating band-pass signals and systems. This is due to the result
X+(v) = 2X(v) for v > 0; X(0) for v = 0; 0 for v < 0.
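The spectral description of X+(v) suggests a direct way to compute the pre-envelope: double the positive-frequency FFT bins and zero the negative ones. A sketch; the test signal and sample rate are assumed values:

```python
import numpy as np

def pre_envelope(x):
    """Pre-envelope x+(t) = x(t) + j*x_hat(t), built in the frequency
    domain: X+(v) = 2X(v) for v > 0, X(0) at v = 0, 0 for v < 0."""
    N = len(x)
    X = np.fft.fft(x)
    H = np.zeros(N)
    H[0] = 1.0                    # keep DC as-is
    H[1:(N + 1) // 2] = 2.0       # double positive frequencies
    if N % 2 == 0:
        H[N // 2] = 1.0           # Nyquist bin (even N) kept as-is
    return np.fft.ifft(X * H)     # negative-frequency bins are zeroed

# Check on a band-pass signal: the magnitude of the pre-envelope should be
# the slowly varying amplitude 1 + 0.5*cos(2*pi*fm*t).
fs, fm, fc = 10_000, 50, 1_000
t = np.arange(0, 0.1, 1 / fs)
x = (1 + 0.5 * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)
env = np.abs(pre_envelope(x))
print(np.max(np.abs(env - (1 + 0.5 * np.cos(2 * np.pi * fm * t)))))
```

This is the same construction used by common analytic-signal routines.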
Complex envelope:
The complex envelope of a band-pass signal x(t) is
x̃(t) = x+(t) e^(−j2πfct),
where fc is the center frequency of the band-pass signal.
1.7 SUPERHETERODYNE RECEIVER:
FIG 1.10 Block Diagram of a Basic Superheterodyne Radio Receiver
The way in which the receiver works can be seen by following the signal as it passes through the receiver.
Front end amplifier and tuning block: Signals enter the front-end circuitry from the antenna. This circuit block performs two main functions:
o Tuning: Broadband tuning is applied to the RF stage. The purpose of this is to reject the signals on the image frequency and accept those on the wanted frequency. It must also be able to track the local oscillator so that, as the receiver is tuned, the RF tuning remains on the required frequency. Typically the selectivity provided at this stage is not high. Its main purpose is to reject signals on the image frequency, which is at a frequency equal to twice the IF away from the wanted frequency. As the tuning within this block provides all the rejection for the image response, it must be sufficiently sharp to reduce the image to an acceptable level. However, the RF tuning may also help in preventing strong off-channel signals from entering the receiver and overloading elements of the receiver, in particular the mixer or possibly even the RF amplifier.
o Amplification: In terms of amplification, the level is carefully chosen so that it does not overload the mixer when strong signals are present, but enables the signals to be amplified sufficiently to ensure a good signal-to-noise ratio is achieved.
Intermediate frequency (IF) stages: these provide most of the receiver's gain as well as the filtering that enables signals on one frequency to be separated from those on the next. Filters may consist simply of LC tuned transformers providing inter-stage coupling, or they may be much higher-performance ceramic or even crystal filters, dependent upon what is required.
Detector / demodulator stage: Once the signals have passed through the IF stages of the superheterodyne receiver, they need to be demodulated. Different demodulators are required for different types of transmission, and as a result some receivers may have a variety of demodulators that can be switched in to accommodate the different types of transmission that are to be encountered. Different demodulators used may include:
o AM diode detector: This is the most basic form of detector and this circuit block would simply consist of a diode and possibly a small capacitor to remove any remaining RF. The detector is cheap and its performance is adequate, requiring a sufficient voltage to overcome the diode forward drop. It is not particularly linear, and it is subject to the effects of selective fading that can be apparent, especially on the HF bands.
o Synchronous AM detector: This form of AM detector block is used where
improved performance is needed. It mixes the incoming AM signal with another on
the same frequency as the carrier. This second signal can be developed by passing
the whole signal through a squaring amplifier. The advantages of the synchronous
AM detector are that it provides a far more linear demodulation performance and it
is far less subject to the problems of selective fading.
o SSB product detector: The SSB product detector block consists of a mixer and a
local oscillator, often termed a beat frequency oscillator, BFO or carrier insertion
oscillator, CIO. This form of detector is used for Morse code transmissions where
the BFO is used to create an audible tone in line with the on-off keying of the
transmitted carrier. Without this the carrier without modulation is difficult to
detect. For SSB, the CIO re-inserts the carrier to make the modulation
comprehensible.
o FM detector: since the information is carried in frequency variations, the detector must be insensitive to amplitude variations, as these could add extra noise. Simple FM detectors such as the Foster-Seeley or ratio detectors can be made from discrete components, although they do require the use of transformers.
o PLL FM detector: A phase-locked loop can be used to make a very good FM demodulator. The incoming FM signal can be fed into the reference input, and the VCO drive voltage used to provide the detected audio output.
o Quadrature FM detector: This form of FM detector block is widely used within ICs. It is simple to implement and provides a good linear output.
Audio amplifier: The output from the demodulator is the recovered audio. This is passed into the audio stages, where it is amplified and presented to the headphones or loudspeaker.
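The image-frequency relation used by the front-end tuning block (the image lies twice the IF away from the wanted frequency) can be captured in a small helper. The 100 MHz / 10.7 MHz figures below are assumed broadcast-receiver values, not from the text:

```python
def image_frequency(f_wanted_hz, f_if_hz, lo_above=True):
    """Superheterodyne image frequency: it lies 2*IF away from the wanted
    signal, on the far side of the local oscillator."""
    offset = 2 * f_if_hz
    return f_wanted_hz + offset if lo_above else f_wanted_hz - offset

# Assumed figures: wanted 100 MHz, IF 10.7 MHz, high-side local oscillator.
print(image_frequency(100e6, 10.7e6) / 1e6)   # 121.4 (MHz)
```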
1.8 COMPARISON OF VARIOUS AM:
Definition:
o VSB-SC: A vestigial sideband (in radio communication) is a sideband that has been only partly cut off or suppressed.
o SSB-SC: Single-sideband modulation (SSB) is a refinement of amplitude modulation that uses electrical power and bandwidth more efficiently.
o DSB-SC: In radio communications, a sideband is a band of frequencies higher than or lower than the carrier frequency, containing power as a result of the modulation process.
Uses:
o VSB-SC: Transmits TV signals.
o SSB-SC: Short-wave radio communications.
o DSB-SC: Two-way radio communications, keyless remotes.
REFERENCES:
1. B. P. Lathi, Communication Systems, John Wiley and Sons, 2005.
GLOSSARY:
1. Amplitude modulation (AM): a modulation technique used in electronic communication, especially as a means of broadcasting an audio signal by combining it with a radio carrier wave.
2. Modulation index (modulation depth): describes by how much the modulated variable of the carrier signal varies around its unmodulated level.
3. Narrowband FM: if the modulation index of FM is kept under 1, the FM produced is regarded as narrowband FM.
4. Frequency modulation (FM): the encoding of information in a carrier wave by varying the instantaneous frequency of the wave.
5. Amplification: the level is carefully chosen so that it does not overload the mixer when strong signals are present, but enables the signals to be amplified sufficiently to ensure a good signal-to-noise ratio is achieved.
6. Modulation: the process by which some characteristic of a carrier wave is varied in accordance with the message signal.
TUTORIAL PROBLEMS:
1. A 400 W carrier is modulated to a depth of 75%. Calculate the total power in a double-sideband full-carrier AM wave.
Solution:
Carrier power Pc = 400 W, m = 0.75
Total power in a DSB-FC AM wave: Pt = Pc(1 + m²/2)
= 400(1 + (0.75)²/2)
= 512.5 W.
2. For a maximum envelope voltage Vmax = 20 V and a minimum positive envelope voltage Vmin = 6 V, determine (a) the modulation index and (b) the carrier voltage.
Solution:
Vmax = 20 V; Vmin = 6 V
(a) Modulation index, m = (Vmax − Vmin)/(Vmax + Vmin)
= 14/26
= 0.538.
(b) Carrier voltage Vc:
Vmax = Vc + Vm, so 20 = Vc + Vm
Vmin = Vc − Vm, so 6 = Vc − Vm
Adding the two equations: 2Vc = 26, so Vc = 13 V.
WORKED OUT PROBLEMS:
1. Calculate the % power saving when the carrier and one of the sidebands are suppressed in an AM wave modulated to a depth of 60%.
(a) Total transmitted power: Pt = Pc(1 + m²/2)
(b) Power in one sideband: PSB = Pc(m²/4)
(c) % power saving = (Pt − PSB)/Pt × 100.  Ans: 92.37%.
2. For an AM DSB-FC envelope with Vmax = 40 V and Vmin = 10 V, determine:
(a) The unmodulated carrier voltage: Vmax = Vc + Vm; Vmin = Vc − Vm.  Ans: Vc = 25 V.
(b) % modulation index = (Vmax − Vmin)/(Vmax + Vmin) × 100 = 60%.
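The formulas used in the problems above translate directly into code, which is a convenient way to check the arithmetic (a small sketch; function names are my own):

```python
def total_power_dsbfc(pc, m):
    """Total power of a DSB full-carrier AM wave: Pt = Pc * (1 + m^2 / 2)."""
    return pc * (1 + m**2 / 2)

def sideband_power(pc, m):
    """Power in one sideband: PSB = Pc * m^2 / 4."""
    return pc * m**2 / 4

def percent_power_saving(m):
    """Saving when the carrier and one sideband are suppressed:
    (Pt - PSB) / Pt * 100, with Pt and PSB as above."""
    pt = total_power_dsbfc(1.0, m)
    return (pt - sideband_power(1.0, m)) / pt * 100

def modulation_index(vmax, vmin):
    """m = (Vmax - Vmin) / (Vmax + Vmin) from the envelope extremes."""
    return (vmax - vmin) / (vmax + vmin)

def carrier_voltage(vmax, vmin):
    """Vc from Vmax = Vc + Vm and Vmin = Vc - Vm."""
    return (vmax + vmin) / 2

print(total_power_dsbfc(400, 0.75))         # 512.5 W (tutorial problem 1)
print(modulation_index(20, 6))              # 0.538...  (tutorial problem 2a)
print(carrier_voltage(20, 6))               # 13.0 V (tutorial problem 2b)
print(round(percent_power_saving(0.6), 2))  # 92.37 % (worked problem 1)
print(carrier_voltage(40, 10))              # 25.0 V (worked problem 2a)
```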
CHAPTER 2
ANGLE MODULATION
In angle modulation, the frequency or the phase of the carrier wave is made to vary. These variations are controlled by both the frequency and the amplitude of the modulating wave. In phase modulation the phase of the carrier is controlled by the modulating waveform.
The two main types of angle modulation are:
Frequency modulation (FM), with its digital correspondence frequency-shift keying (FSK).
Phase modulation (PM), with its digital correspondence phase-shift keying (PSK).
CONTENT:
FREQUENCY & PHASE MODULATION
NARROW BAND FM
WIDE BAND FM
GENERATION OF WIDE BAND FM
TRANSMISSION BANDWIDTH
FM TRANSMITTER
2.1 FREQUENCY & PHASE MODULATION:
Besides using the amplitude of the carrier to carry information, one can also use the angle of the carrier to carry information. This approach is called angle modulation, and includes frequency modulation (FM) and phase modulation (PM). The amplitude of the carrier is maintained constant. The major advantage of this approach is that it allows a trade-off between bandwidth and noise performance.
An angle-modulated signal can be written as
s(t) = A cos θ(t)
where θ(t) is usually of the form θ(t) = 2πfct + φ(t) and fc is the carrier frequency. The signal φ(t) is derived from the message signal m(t). If φ(t) = kp m(t) for some constant kp, the resulting modulation is called phase modulation. The parameter kp is called the phase sensitivity.
If the information to be transmitted (i.e., the baseband signal) is xm(t) and the sinusoidal carrier is xc(t) = Ac cos(2πfct), the frequency-modulated signal is
y(t) = Ac cos(2πfct + 2πfΔ ∫ xm(τ) dτ).
In this equation, f(t) = fc + fΔ xm(t) is the instantaneous frequency of the oscillator and fΔ is the frequency deviation, which represents the maximum shift away from fc in one direction, assuming xm(t) is limited to the range ±1.
While most of the energy of the signal is contained within fc ± fΔ, it can be shown by Fourier analysis that a wider range of frequencies is required to precisely represent an FM signal. The frequency spectrum of an actual FM signal has components extending infinitely, although their amplitude decreases and higher-order components are often neglected in practical design problems.
Sinusoidal baseband signal:
Mathematically, a baseband modulated signal may be approximated by a sinusoidal continuous-wave signal with frequency fm, xm(t) = Am cos(2πfmt). Integrating this signal in the expression above gives
y(t) = Ac cos(2πfct + β sin(2πfmt)),
where β = Am fΔ / fm and the amplitude of the modulating sinusoid is represented by the peak deviation Δf = Am fΔ.
The harmonic distribution of a sine-wave carrier modulated by such a sinusoidal signal can be represented with Bessel functions; this provides the basis for a mathematical understanding of frequency modulation in the frequency domain.
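The Bessel-function view can be checked numerically. The sketch below evaluates J_n(β) from its integral form (so no SciPy is needed) and verifies that the squared sideband amplitudes of a tone-modulated FM wave sum to 1, as the constant envelope of FM requires; β = 2.4 is an assumed example value:

```python
import numpy as np

def bessel_j(n, beta, num=20_001):
    """J_n(beta) from the integral form (1/pi) * integral_0..pi of
    cos(n*t - beta*sin(t)) dt, using a simple trapezoidal sum.
    Accurate enough for modest n and beta."""
    t = np.linspace(0, np.pi, num)
    y = np.cos(n * t - beta * np.sin(t))
    h = np.pi / (num - 1)
    return h * (y.sum() - 0.5 * (y[0] + y[-1])) / np.pi

beta = 2.4   # modulation index, an assumed example value

# Sideband amplitudes of a tone-modulated FM wave are Ac*J_n(beta) at
# frequencies fc + n*fm. Since FM has a constant envelope, the total power
# is independent of beta: the sum of J_n(beta)^2 over n must equal 1.
total = sum(bessel_j(n, beta) ** 2 for n in range(-20, 21))
print(total)   # close to 1.0
```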
Modulation index:
As in other modulation systems, the value of the modulation index indicates by how much the modulated variable varies around its unmodulated level. It relates to variations in the carrier frequency:
h = Δf / fm
where fm is the highest frequency component present in the modulating signal xm(t), and Δf is the peak frequency deviation, i.e. the maximum deviation of the instantaneous frequency from the carrier frequency. For a sine-wave modulation, the modulation index is seen to be the ratio of the amplitude of the modulating sine wave to the amplitude of the carrier wave (here unity).
If h ≪ 1, the modulation is called narrowband FM, and its bandwidth is approximately 2fm.
For digital modulation systems, for example binary frequency-shift keying (BFSK), where a binary signal modulates the carrier, the modulation index is given by:
h = Δf / fm = Δf / (1/(2Ts)) = 2 Δf Ts
where Ts is the symbol period, and fm = 1/(2Ts) is used as the highest frequency of the modulating binary waveform by convention, even though it would be more accurate to say it is the highest fundamental of the modulating binary waveform. In the case of digital modulation, the carrier is never transmitted. Rather, one of two frequencies is transmitted, either fc + Δf or fc − Δf, depending on the binary state 0 or 1 of the modulation signal.
If h ≫ 1, the modulation is called wideband FM and its bandwidth is approximately 2fΔ. While wideband FM uses more bandwidth, it can improve the signal-to-noise ratio significantly; for example, doubling the value of Δf, while keeping fm constant, results in an eight-fold improvement in the signal-to-noise ratio. (Compare this with chirp spread spectrum, which uses extremely wide frequency deviations to achieve processing gains comparable to traditional, better-known spread-spectrum modes.)
With a tone-modulated FM wave, if the modulation frequency is held constant and the modulation
index is increased, the (non-negligible) bandwidth of the FM signal increases but the spacing
between spectra remains the same; some spectral components decrease in strength as others
increase. If the frequency deviation is held constant and the modulation frequency increased, the
spacing between spectra increases.
Frequency modulation can be classified as narrowband if the change in the carrier frequency is about the same as the signal frequency, or as wideband if the change in the carrier frequency is much higher (modulation index > 1) than the signal frequency. [6] For example, narrowband FM is used for two-way radio systems such as Family Radio Service, in which the carrier is allowed to deviate only 2.5 kHz above and below the center frequency, with speech signals of no more than 3.5 kHz bandwidth. Wideband FM is used for FM broadcasting, in which music and speech are transmitted with up to 75 kHz deviation from the center frequency and carry audio with up to a 20 kHz bandwidth.
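A quick numeric check of these deviation and bandwidth figures, using Carson's rule BT = 2(Δf + fm):

```python
def carson_bandwidth(delta_f_hz, fm_hz):
    """Carson's rule: transmission bandwidth BT = 2 * (delta_f + fm)."""
    return 2 * (delta_f_hz + fm_hz)

# Broadcast FM figures from the text: 75 kHz deviation, 20 kHz audio.
print(carson_bandwidth(75e3, 20e3))    # 190000.0 Hz
# Narrowband FM (the Family Radio Service figures above):
# 2.5 kHz deviation, 3.5 kHz speech bandwidth.
print(carson_bandwidth(2.5e3, 3.5e3))  # 12000.0 Hz
```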
Carson's rule:
Carson's bandwidth rule gives the transmission bandwidth of an FM signal as
BT = 2(Δf + fm).

2.2 PHASE MODULATION:
Phase modulation (PM) is another form of angle modulation. PM and FM are closely related to each other. In both cases, the total phase angle θ of the modulated signal varies. In an FM wave, the total phase changes due to the change in the frequency of the carrier corresponding to the changes in the modulating amplitude. In PM, the total phase of the modulated carrier changes due to the changes in the instantaneous phase of the carrier, keeping the frequency of the carrier signal constant. These two types of modulation schemes come under the category of angle modulation. However, PM is not as extensively used as FM.
At time t1, the amplitude of m(t) increases from zero to E1. Therefore, at t1, the phase of the modulated carrier also changes corresponding to E1, as shown in Figure (a). This phase remains at this attained value until time t2, as between t1 and t2 the amplitude of m(t) remains constant at E1. At t2, the amplitude of m(t) shoots up to E2, and therefore the phase of the carrier again increases corresponding to the increase in m(t). This new value of the phase attained at time t2 remains constant up to time t3. At time t3, m(t) goes negative and its amplitude becomes E3. Consequently, the phase of the carrier also changes and it decreases from the previous value attained at t2. The decrease in phase corresponds to the decrease in amplitude of m(t). The phase of the carrier remains constant during the time interval between t3 and t4. At t4, m(t) goes positive to reach the amplitude E1, resulting in a corresponding increase in the phase of the modulated carrier at time t4. Between t4 and t5, the phase remains constant. At t5 it decreases to the phase of the unmodulated carrier, as the amplitude of m(t) is zero beyond t5.
Equation of a PM Wave:
To derive the equation of a PM wave, it is convenient to consider the modulating signal as a pure sinusoidal wave. The carrier signal is always a high-frequency sinusoidal wave. Consider the modulating signal em and the carrier signal ec, as given by equations (1) and (2), respectively:
em = Em cos ωmt ------------ (1)
ec = Ec sin ωct --------------- (2)
The initial phases of the modulating signal and the carrier signal are ignored in Equations (1) and (2) because they do not contribute to the modulation process, due to their constant values. After PM, the phase of the carrier will not remain constant. It will vary according to the modulating signal em, maintaining the amplitude and frequency as constants. Suppose, after PM, the equation of the carrier is represented as:
e = Ec sin θ ------------------ (3)
where θ is the instantaneous phase of the modulated carrier, which varies sinusoidally in proportion to the modulating signal. Therefore, after PM, the instantaneous phase of the modulated carrier can be written as:
θ = ωct + kp em ------------------- (4)
where kp is the constant of proportionality for phase modulation.
Substituting Equation (1) in Equation (4), you get:
θ = ωct + kp Em cos ωmt --------------------- (5)
In Equation (5), the factor kpEm is defined as the modulation index, and is given as:
mp = kp Em ------------------------ (6)
where the subscript p signifies that mp is the modulation index of the PM wave. Therefore, Equation (5) becomes:
θ = ωct + mp cos ωmt --------------------- (7)
Substituting Equation (7) in Equation (3), you get:
e = Ec sin(ωct + mp cos ωmt) -------------------- (8)
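Equation (8) implies that the instantaneous frequency of a PM wave is fc − mp·fm·sin(ωmt), so its peak frequency deviation mp·fm grows with the modulating frequency — a key difference from FM. A sketch that checks this by differentiating the phase numerically (the sample rate and tone values are assumed):

```python
import numpy as np

def peak_freq_deviation(fc, fm, mp, fs=1_000_000, dur=0.05):
    """Peak instantaneous-frequency deviation of the PM wave
    e = Ec*sin(wc*t + mp*cos(wm*t)), found by numerically
    differentiating its total phase theta(t)."""
    t = np.arange(0, dur, 1 / fs)
    theta = 2 * np.pi * fc * t + mp * np.cos(2 * np.pi * fm * t)
    fi = np.diff(theta) / (2 * np.pi / fs)   # instantaneous frequency (Hz)
    return np.max(np.abs(fi - fc))

mp = 5.0                                     # modulation index, assumed
print(peak_freq_deviation(10e3, 100, mp))    # close to mp*fm = 500 Hz
print(peak_freq_deviation(10e3, 200, mp))    # close to mp*fm = 1000 Hz
```

Doubling the modulating frequency doubles the frequency deviation, unlike FM, where the deviation is set by the message amplitude alone.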
2.3 NARROW BAND FM MODULATION:
The case where |θm(t)| ≪ 1 for all t is called narrowband FM. Using the approximations cos x ≃ 1 and sin x ≃ x for |x| ≪ 1, the FM signal can be approximated as:
s(t) = Ac cos[ωct + θm(t)]
= Ac cos ωct cos θm(t) − Ac sin ωct sin θm(t)
≃ Ac cos ωct − Ac θm(t) sin ωct
or, in complex notation,
s(t) ≃ Ac Re{e^(jωct) (1 + jθm(t))}.
This is similar to the AM signal, except that the discrete carrier component Ac cos ωct is 90° out of phase with the sinusoid Ac sin ωct multiplying the phase angle θm(t). The spectrum of narrowband FM is similar to that of AM.
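The quality of the narrowband approximation can be checked directly: for a peak phase deviation β, the worst-case error between the exact signal and the approximation is on the order of β²/2, the first neglected term of the cos/sin expansions. All signal parameters below are assumed example values:

```python
import numpy as np

fs = 100_000
t = np.arange(0, 0.05, 1 / fs)
fc, fm, beta = 5_000, 100, 0.1          # beta = peak phase deviation (rad)

theta_m = beta * np.sin(2 * np.pi * fm * t)

# Exact angle-modulated wave vs. the narrowband approximation
# s(t) ~= Ac*cos(wc*t) - Ac*theta_m(t)*sin(wc*t), with Ac = 1.
exact = np.cos(2 * np.pi * fc * t + theta_m)
approx = np.cos(2 * np.pi * fc * t) - theta_m * np.sin(2 * np.pi * fc * t)

# Worst-case error is about beta^2/2 = 0.005 here.
print(np.max(np.abs(exact - approx)))
```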
• A frequency-selective filter converts the frequency variations of an FM signal into amplitude variations, which can then be detected with an envelope detector and, consequently, in a non-coherent manner.
• When the instantaneous frequency changes slowly relative to the time-constants of the filter, a quasi-static analysis can be used.
• In quasi-static operation the filter output has the same instantaneous frequency as the input but with an envelope that varies according to the amplitude response of the filter at the instantaneous frequency.
• The amplitude variations are then detected with an envelope detector like the ones used for AM demodulation.
An FM Discriminator Using the Pre-Envelope:
When θm(t) is small and band-limited so that cos θm(t) and sin θm(t) are essentially band-limited signals with cutoff frequencies less than fc, the pre-envelope of the FM signal is
s+(t) = s(t) + j ŝ(t) = Ac e^(j(ωct + θm(t))).
The angle of the pre-envelope is
φ(t) = arctan[ŝ(t)/s(t)] = ωct + θm(t).
The derivative of the phase is
dφ(t)/dt = [s(t) dŝ(t)/dt − ŝ(t) ds(t)/dt] / [s²(t) + ŝ²(t)] = ωc + dθm(t)/dt,
which is the instantaneous frequency; removing the constant ωc leaves a term proportional to the message. Since
ŝ(t) = Ac sin(ωct + θm(t)),
it follows that
s(t) dŝ(t)/dt − ŝ(t) ds(t)/dt = Ac² [ωc + θm′(t)] [cos²(ωct + θm(t)) + sin²(ωct + θm(t))] = Ac² [ωc + θm′(t)].
The bandwidth of an FM discriminator must be at least as great as that of the received FM
signal which is usually much greater than that of the baseband message. This limits the degree of
noise reduction that can be achieved by preceding the discriminator by a bandpass receive filter.
Using a Phase-Locked Loop for FM Demodulation:
A device called a phase-locked loop (PLL) can be used to demodulate an FM signal with better performance in a noisy environment than a frequency discriminator. The block diagram of a discrete-time version of a PLL is shown in the figure.

FIG 2.2 PLL Block diagram
The block diagram of a basic PLL is shown in the figure below. It is basically a feedback system consisting of a phase detector, a low-pass filter (LPF), and a voltage-controlled oscillator (VCO). The input signal Vi with an input frequency fi is passed through the phase detector, which is basically a comparator that compares the input frequency fi with the feedback frequency fo. The phase detector output, an error voltage Ver, contains sum and difference frequency components. This voltage is passed on to the LPF, which removes the high-frequency (sum) component and noise and produces a steady DC level Vf proportional to the frequency difference (fi − fo). Vf also represents the dynamic characteristics of the PLL.
The DC level is then passed on to the VCO. The output frequency of the VCO, fo, is directly proportional to this DC level. The output frequency is compared with the input frequency and adjusted through the feedback loop until the output frequency equals the input frequency. Thus the PLL works in three stages: free-running, capture, and phase lock.
As the name suggests, the free-running stage refers to the stage when there is no input voltage applied. As soon as an input frequency is applied, the VCO starts to change and begins producing an output frequency for comparison; this stage is called the capture stage. The frequency comparison stops as soon as the output frequency is adjusted to become equal to the input frequency. This stage is called the phase-locked state.
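The stages described above can be sketched as a toy discrete-time PLL with a proportional-integral loop. The gains and frequencies below are illustrative choices, not from the text; after lock, the loop's frequency-correction signal is the demodulated frequency offset:

```python
import numpy as np

w_in = 0.30                  # input frequency (rad/sample), assumed
w0 = 0.25                    # NCO free-running frequency (rad/sample), assumed
kp, ki = 0.2, 0.01           # proportional and integral loop gains, assumed

phi_in = 0.0                 # input phase accumulator
phi_nco = 0.0                # oscillator phase accumulator
freq_corr = 0.0              # integrator state: estimated frequency offset

for _ in range(5000):
    phi_in += w_in
    err = np.sin(phi_in - phi_nco)      # phase-detector output
    freq_corr += ki * err               # integral path drives the error to 0
    phi_nco += w0 + freq_corr + kp * err

# After lock, the integrator holds the frequency offset w_in - w0 = 0.05,
# i.e. the loop's control signal is the demodulated frequency.
print(freq_corr)             # close to 0.05
```

For FM demodulation the input frequency varies with the message, and the control signal tracks it, which is exactly why the loop bandwidth must cover the baseband message.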
Comments on PLL Performance:
• The frequency response of the linearized loop has the characteristics of a band-limited differentiator.
• The loop parameters must be chosen to provide a loop bandwidth that passes the desired baseband message signal but is as small as possible to suppress out-of-band noise.
• The PLL performs better than a frequency discriminator when the FM signal is corrupted by additive noise. The reason is that the bandwidth of the frequency discriminator must be large enough to pass the modulated FM signal, while the PLL bandwidth only has to be large enough to pass the baseband message. With wideband FM, the bandwidth of the modulated signal can be significantly larger than that of the baseband message.
Bandwidth of FM PLL vs. Costas Loop:
The PLL described in this experiment is very similar to the Costas loop presented for coherent demodulation of DSB-SC AM. However, the bandwidth of the PLL used for FM demodulation must be large enough to pass the baseband message signal, while the Costas loop is used to generate a stable carrier reference signal, so its bandwidth should be very small and just wide enough to track carrier drift and allow a reasonable acquisition time.
2.4 WIDE-BAND FM:
Consider single-tone FM:
s(t) = Ac cos(2πfct + β sin 2πfmt).
Finding its Fourier transform is not easy, because the phase term is inside the cosine; to analyze the spectrum, we use the complex envelope.
Wideband FM is defined as the situation where the modulation index is above 0.5. Under these circumstances the sidebands beyond the first two terms are not insignificant. Broadcast FM stations use wideband FM, and using this mode they are able to take advantage of the wide bandwidth available to transmit high-quality audio as well as other services, like a stereo channel, on a single carrier.
The bandwidth of the FM transmission is a means of categorising the basic attributes of the signal, and as a result these terms are often seen in the technical literature associated with frequency modulation and products using FM. This is one area where the figure for modulation index is used.
GENERATION OF WIDEBAND FM SIGNALS:
Indirect Method for Wideband FM Generation:
Consider the following block diagram:
m(t) → Narrowband FM Modulator → ( . )^P → gFM(WB)(t)
(Assume a BPF is included in this block to pass the signal with the highest carrier frequency and reject all others.)
A wideband FM signal can be generated starting from the narrowband FM modulator that was described in a previous lecture. The narrowband FM modulator generates a narrowband FM signal using simple components such as an integrator (an op-amp), oscillators, multipliers, and adders. The generated narrowband FM signal can be converted to a wideband FM signal by simply passing it through a non-linear device with power P. Both the carrier frequency and the frequency deviation Δf of the narrowband signal are increased by a factor P. Sometimes, the desired increase in the carrier frequency and the desired increase in Δf are different. In this case, we increase Δf to the desired value and use a frequency shifter (multiplication by a sinusoid followed by a BPF) to change the carrier frequency to the desired value.
System 1:
m(t) (BWm = 5 kHz) → Narrowband FM Modulator → gFM(NB)(t): Δf1 = 35 Hz, fc1 = 300 kHz, BW = 2×5 = 10 kHz
→ ( . )^2200 → gFM3(WB)(t): Δf3 = 77 kHz, fc3 = 660 MHz, BW3 = 2(Δf3 + BWm) = 164 kHz
→ frequency shifter: multiply by cos(2π(525 MHz)t), then BPF with CF = 135 MHz
→ gFM2(WB)(t): Δf2 = 77 kHz, fc2 = 135 MHz, BW2 = 2(Δf2 + BWm) = 164 kHz
The frequency of the shifting oscillator is the difference between fc3 and the desired carrier frequency. We could also have used an oscillator with a frequency that is the sum of the frequencies of the input signal and the desired carrier frequency. This system is characterized by having a frequency shifter with an oscillator frequency that is relatively large.
System 2:
m(t) (BWm = 5 kHz) → Narrowband FM Modulator → gFM(NB)(t): Δf1 = 35 Hz, fc1 = 300 kHz, BW = 2×5 = 10 kHz
→ ( . )^44 → gFM3(WB)(t): Δf3 = 1540 Hz, fc3 = 13.2 MHz, BW3 = 2(Δf3 + BWm) = 13080 Hz ≈ 13.08 kHz
→ frequency shifter: multiply by cos(2π(10.5 MHz)t), then BPF with CF = 2.7 MHz
→ gFM4(WB)(t): Δf4 = 1540 Hz, fc4 = 13.2 − 10.5 = 2.7 MHz
→ ( . )^50 → gFM2(WB)(t): Δf2 = 77 kHz, fc2 = 135 MHz, BW2 = 2(Δf2 + BWm) = 164 kHz
In this system, we are using two non-linear devices (or two sets of non-linear devices) with orders 44 and 50 (44 × 50 = 2200). There are other possible factorizations of 2200, such as 2 × 1100, 4 × 550, 8 × 275, 10 × 220, and so on. Depending on the available components, one of these factorizations may be better than the others. In fact, in this case, we could have used the same factorization but put 50 first followed by 44. We want the output signal of the overall system to be as shown in the block diagram above, so we have to ensure that the input to the non-linear device with order 50 has the correct carrier frequency such that its output has a carrier frequency of 135 MHz. This is done by dividing the desired output carrier frequency by the non-linearity order of 50, which gives 2.7 MHz. This allows us to figure out the frequency of the required oscillator, which in this case will be either 13.2 − 2.7 = 10.5 MHz or 13.2 + 2.7 = 15.9 MHz. We are generally free to choose whichever we like, unless the available components dictate the use of one of them and not the other. Comparing this system with System 1 shows that the frequency of the oscillator required here is significantly lower (10.5 MHz compared to 525 MHz), which is generally an advantage.
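The System 2 frequency plan is easy to verify with plain arithmetic — the multipliers scale both the carrier and the deviation, while the mixer shifts only the carrier:

```python
# Verifying the System 2 frequency plan.
f1, fc1 = 35.0, 300e3          # narrowband modulator: deviation, carrier (Hz)
bw_m = 5e3                     # message bandwidth (Hz)

# First multiplier chain: order 44 scales deviation and carrier alike.
f3, fc3 = f1 * 44, fc1 * 44            # 1540 Hz, 13.2 MHz
# The mixer with a 10.5 MHz oscillator shifts the carrier, not the deviation.
fc4 = fc3 - 10.5e6                     # 2.7 MHz
f4 = f3                                # deviation unchanged: 1540 Hz
# Second multiplier: order 50.
fc2, f2 = fc4 * 50, f4 * 50            # 135 MHz, 77 kHz
bw2 = 2 * (f2 + bw_m)                  # Carson: 164 kHz

print(fc2 / 1e6, f2 / 1e3, bw2 / 1e3)  # 135.0 77.0 164.0
```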
FIG 2.6 Spectrum of FM Bandwidth
2.6 FM TRANSMITTER
Indirect method (phase shift) of modulation:
The part of the Armstrong FM transmitter (Armstrong phase modulator) which is drawn in dotted lines describes the principle of operation of an Armstrong phase modulator. It should be noted, first, that the output signal from the carrier oscillator is supplied to circuits that perform the task of modulating the carrier signal. The oscillator does not change frequency, as is the case with direct FM. This points out the major advantage of phase modulation (PM), or indirect FM, over direct FM: the phase modulator is crystal-controlled for frequency.
The crystal-controlled carrier oscillator signal is directed to two circuits in parallel. This signal (usually a sine wave) is established as the reference phase carrier signal and is assigned a value of 0°. The balanced modulator is an amplitude modulator used to form an envelope of double sidebands and to suppress the carrier signal (DSB-SC). This requires two input signals, the carrier signal and the modulating message signal. The output of the modulator is connected to the adder circuit; here the 90° phase-delayed carrier signal will be added back to replace the suppressed carrier. The act of delaying the carrier phase by 90° does not change the carrier frequency or its wave-shape. This signal is identified as the 90° carrier signal.
FIG 2.8 Phasor diagram of Armstrong Modulator
The carrier frequency change at the adder output is a function of the output phase shift and is found by
fc = Δθ fs (in hertz)
where Δθ is the phase change in radians and fs is the lowest audio modulating frequency. In most FM radio bands, the lowest audio frequency is 50 Hz. Therefore, the carrier frequency change at the adder output is 0.6125 × 50 Hz ≈ ±30 Hz. Since 10% AM represents the upper limit of carrier voltage change, ±30 Hz is the maximum deviation from the modulator for PM.
The 90° phase-shift network does not change the signal frequency, because the components and resulting phase change are constant with time. However, the phase of the adder output voltage is in a continual state of change, brought about by the cyclical variations of the message signal, and during the time of a phase change there will also be a frequency change.
In figure (c), during time (a) the signal has a frequency f1 and is at the zero reference phase. During time (c) the signal has a frequency f1 but has changed phase to θ. During time (b), when the phase is in the process of changing from 0 to θ, the frequency is less than f1.
w.E
Using Reactance modulator direct method
asy
En
gin
ee rin
FIG 2.9 Reactance Modulator g.n
The FM transmitter has three basic sections.
et
1. The exciter section contains the carrier oscillator, reactance modulator and the buffer
amplifier.
2. The frequency multiplier section, which features several frequency multipliers.
3. The poweroutput ection, which includes a low-
level power amplifier, the final power amplifier, and the impedance matching network to
properly load the power section with the antenna impedance.
The essential function of each circuit in the FM transmitter may be described as follows.
The Exciter
1. The function of the carrier oscillator is to generate a stable sine wave signal at the
rest frequency, when no modulation is applied. It must be able to linearly change
frequency when fully modulated, with no measurable change in amplitude.
2. The buffer amplifier acts as a constant high-impedance load on the oscillator to
help stabilize the oscillator frequency. The buffer amplifier may have a small gain.
3. The modulator acts to change the carrier oscillator frequency by application of the
message signal. The positive peak of the message signal generally lowers the
oscillator's frequency to a point below the rest frequency, and the negative message
peak raises the oscillator frequency to a value above the rest frequency. The greater
the peak-to-peak message signal, the larger the oscillator deviation.
Frequency multipliers are tuned-input, tuned-output RF amplifiers in which the output resonant circuit is tuned to a multiple of the input frequency. Common frequency multipliers provide 2x, 3x and 4x multiplication. A 5x frequency multiplier is sometimes seen, but its very low efficiency prevents widespread use. Note that multiplication is by whole numbers only; there cannot be a 1.5x multiplier, for instance.
The final power section develops the carrier power to be transmitted and often has a low-level power amplifier driving the final power amplifier. The impedance matching network is the same as for the AM transmitter and matches the antenna impedance to the correct load on the final power amplifier.
Frequency Multiplier
A special form of class C amplifier is the frequency multiplier. Any class C amplifier is capable of performing frequency multiplication if the tuned circuit in the collector resonates at some integer multiple of the input frequency.
For example a frequency doubler can be constructed by simply connecting a parallel tuned circuit
in the collector of a class C amplifier that resonates at twice the input frequency. When the
collector current pulse occurs, it excites or rings the tuned circuit at twice the input frequency. A
current pulse flows for every other cycle of the input.
A tripler circuit is constructed in the same way except that the tuned circuit resonates at 3 times the input frequency. In this way, the tuned circuit receives one input pulse for every three cycles of oscillation it produces. Multipliers can be constructed to increase the input frequency by any integer factor up to approximately 10. As the multiplication factor gets higher, the power output of the multiplier decreases. For most practical applications, the best results are obtained with multipliers of 2 and 3.
Another way to look at the operation of class C multipliers is to remember that the non-sinusoidal
current pulse is rich in harmonics. Each time the pulse occurs, the second, third, fourth, fifth, and
higher harmonics are generated. The purpose of the tuned circuit in the collector is to act as a filter
to select the desired harmonics.
FIG 2.10 Block Diagram of Frequency Multiplier - 1
FIG 2.10 Block Diagram of Frequency Multiplier - 2
In many applications a multiplication factor greater than that achievable with a single multiplier stage is required. In such cases two or more multipliers are cascaded. In the first example above, two multipliers produce an overall multiplication of 6; in the second example, three multipliers provide an overall multiplication of 30. The total multiplication factor is simply the product of the individual stage multiplication factors.
Reactance Modulator
The reactance modulator takes its name from the fact that the impedance of the circuit acts as a
reactance (capacitive or inductive) that is connected in parallel with the resonant circuit of the
Oscillator. The varicap can only appear as a capacitance that becomes part of the frequency
determining branch of the oscillator circuit. However, other discrete devices can appear as a
capacitor or as an inductor to the oscillator, depending on how the circuit is arranged. A Colpitts oscillator uses a capacitive voltage divider as the phase-reversing feedback path and would most likely use a modulator that appears capacitive, while a Hartley oscillator uses a tapped coil as the phase-reversing element in the feedback loop and most commonly uses a modulator that appears inductive.
2.7 COMPARISON OF VARIOUS MODULATIONS:
Amplitude modulation:
1. Amplitude of the carrier wave is varied in accordance with the message signal.
2. Much affected by noise.
3. System fidelity is poor.
4. Linear modulation.
Frequency modulation:
1. Frequency of the carrier wave is varied in accordance with the message signal.
2. More immune to noise.
3. Improved system fidelity.
4. Non-linear modulation.
Phase modulation:
1. Phase of the carrier wave is varied in accordance with the message signal.
2. Noise voltage is constant.
3. Improved system fidelity.
4. Non-linear modulation.
asy
Comparison of Narrowband and Wideband FM:
Narrowband FM:
1. Modulation index < 1.
2. Bandwidth B = 2𝑓𝑚 Hz.
3. Occupies less bandwidth.
4. Used in FM mobile communication services.
Wideband FM:
1. Modulation index > 1.
2. Bandwidth B = 2∆𝑓 Hz.
3. Occupies more bandwidth.
4. Used in entertainment broadcasting.
2.8 APPLICATIONS AND USES:
Magnetic tape storage.
FM noise reduction in sound recording.
Frequency Modulation (FM) stereo decoders and FM demodulation networks.
Frequency synthesis, providing multiples of a reference signal frequency.
Motor speed controls and tracking filters.
REFERENCES:
1. B. P. Lathi, ―Communication Systems‖, John Wiley and Sons, 2005.
2. Simon Haykin, ―Communication Systems‖, John Wiley, 2005.
3. J. G. Proakis, M. Salehi, ―Fundamentals of Communication Systems‖, Pearson Education, 2006.
4. Muralibabu, ―Communication Theory‖.
GLOSSARY TERMS:
1. Frequency modulation (FM), with its digital counterpart frequency-shift keying (FSK).
2. Phase modulation (PM), with its digital counterpart phase-shift keying (PSK).
3. In PM, the total phase of the modulated carrier changes due to the changes in the instantaneous phase of the carrier, keeping the frequency of the carrier signal constant.
4. A device called a phase-locked loop (PLL) can be used to demodulate an FM signal with better performance in a noisy environment than a frequency discriminator.
5. As in other modulation systems, the value of the modulation index indicates by how much the modulated variable varies around its unmodulated level.
6. Amplitude limiters are used to keep the output constant despite changes in the input signal, to remove distortion.
TUTORIAL PROBLEMS:
1. If the modulating frequency is 1 kHz and the maximum deviation is 10 kHz, what is the bandwidth required for an FM signal?
Solution:
fm = 1 kHz, ∆f = 10 kHz
mf = ∆f / fm = 10
Bandwidth B = 2(mf + 1)fm = 2(10 + 1)(1 kHz) = 22 kHz.
2. Consider an angle modulated (PM) wave v = 10 sin(ωct + 5 sin ωmt). Let fm = 2 kHz; calculate the modulation index and find the bandwidth.
Solution:
The equation is of the form v = 10 sin(ωct + 5 sin ωmt), so A = 10 V, fm = 2 kHz, m = 5.
Bandwidth = 2(m + 1)fm = 2(5 + 1)(2 kHz) = 24 kHz.
3. Find the deviation ratio if the maximum frequency deviation is 60 kHz and fm = 10 kHz.
Deviation ratio = ∆f / fm = 60/10 = 6.
4. An angle modulated signal is given by xa(t) = 5 cos[2π × 10⁶ t + 0.2 cos(200πt)]. Find whether xa(t) is PM or FM.
Ans: xa(t) can be either FM or PM.
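The bandwidth rule B = 2(mf + 1)fm used in problems 1 and 2 (Carson's rule) can be sketched in Python as follows:

```python
def fm_bandwidth(delta_f, fm):
    """Carson's rule: B = 2(mf + 1)fm = 2(delta_f + fm),
    where mf = delta_f / fm is the modulation index."""
    mf = delta_f / fm
    return 2 * (mf + 1) * fm

# Problem 1: delta_f = 10 kHz, fm = 1 kHz  ->  B = 22 kHz
print(fm_bandwidth(10e3, 1e3))     # 22000.0
# Problem 2: m = 5, fm = 2 kHz, so delta_f = 10 kHz  ->  B = 24 kHz
print(fm_bandwidth(5 * 2e3, 2e3))  # 24000.0
```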
UNIT – III
RANDOM PROCESS
A random process is an ensemble of sample functions together with a probability rule which assigns a probability to any meaningful event associated with the observation of these functions. Suppose the sample function Xi(t) corresponds to the sample point si in the sample space S and occurs with probability Pi.
• The number of sample functions in the ensemble may be finite or infinite.
• Sample functions may be defined at discrete or continuous time instants.
Random processes associated with the Poisson model, and more generally renewal theory, include:
The sequence of inter-arrival times.
The sequence of arrival times.
The counting process.
CONTENT:
RANDOM VARIABLES
CENTRAL LIMIT THEOREM
STATIONARY PROCESS
CORRELATION
COVARIANCE FUNCTION
ERGODIC PROCESS
GAUSSIAN PROCESS
FILTERING THROUGH RANDOM PROCESS
3.1 RANDOM VARIABLES:
A random variable, usually written X, is a variable whose possible values are numerical outcomes of a random phenomenon. Random variables are of two types, discrete and continuous. Likewise, a random process may be defined at discrete or continuous time instants, which defines a discrete- or continuous-time random process, and its sample function values may take on discrete or continuous values, which defines a discrete- or continuous-parameter random process.
RANDOM PROCESSES VS. RANDOM VARIABLES:
• For a random variable, the outcome of a random experiment is mapped onto a variable, e.g., a number.
• For a random process, the outcome of a random experiment is mapped onto a waveform that is a function of time. Suppose that we observe a random process X(t) at some time t1 to generate the observation X(t1) and that the number of possible waveforms is finite. If Xi(t1) is observed with probability Pi, the collection of numbers {Xi(t1)}, i = 1, 2, . . . , n forms a random variable, denoted by X(t1), having the probability distribution Pi, i = 1, 2, . . . , n. E[·] denotes the ensemble average operator.
DISCRETE RANDOM VARIABLES:
A discrete random variable is one which may take on only a countable number of distinct values such as 0, 1, 2, 3, 4, ... Discrete random variables are usually (but not necessarily) counts. If a random variable can take only a finite number of distinct values, then it must be discrete. Examples of discrete random variables include the number of children in a family, the Friday night attendance at a cinema, the number of patients in a doctor's surgery, and the number of defective light bulbs in a box of ten.
PROBABILITY DISTRIBUTION:
The probability distribution of a discrete random variable is a list of probabilities associated with each of its possible values. It is also sometimes called the probability function or the probability mass function. Suppose a random variable X may take k different values, with the probability that X = xi defined to be P(X = xi) = pi. The probabilities pi must satisfy the following:
1: 0 ≤ pi ≤ 1 for each i
2: p1 + p2 + ... + pk = 1.
All random variables (discrete and continuous) have a cumulative distribution function. It is a
function giving the probability that the random variable X is less than or equal to x, for every
value x. For a discrete random variable, the cumulative distribution function is found by summing
up the probabilities.
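The two conditions on the pi, and the construction of the CDF by summing probabilities, can be illustrated with a small Python sketch; the pmf values are hypothetical.

```python
# Hypothetical pmf of a discrete random variable X
pmf = {0: 0.1, 1: 0.3, 2: 0.4, 3: 0.2}

# Condition 1: 0 <= pi <= 1 for each i; condition 2: the pi sum to 1
assert all(0 <= p <= 1 for p in pmf.values())
assert abs(sum(pmf.values()) - 1.0) < 1e-12

def cdf(x):
    """Cumulative distribution: F(x) = P(X <= x), found by
    summing up the probabilities."""
    return sum(p for xi, p in pmf.items() if xi <= x)

print(round(cdf(1), 10))  # 0.4  (= 0.1 + 0.3)
print(round(cdf(3), 10))  # 1.0
```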
3.2 CENTRAL LIMIT THEOREM:
In probability theory, the central limit theorem (CLT) states that, given certain conditions, the
arithmetic mean of a sufficiently large number of iterates of independent random variables, each
with a well-defined expected value and well-defined variance, will be approximately normally
distributed.
The Central Limit Theorem describes the characteristics of the "population of the means" which
has been created from the means of an infinite number of random population samples of size (N),
all of them drawn from a given "parent population". The Central Limit Theorem predicts
that regardless of the distribution of the parent population:
[1] The mean of the population of means is always equal to the mean of the parent population
from which the population samples were drawn.
[2] The standard deviation of the population of means is always equal to the standard deviation of the parent population divided by the square root of the sample size (N).
[3] The distribution of means will increasingly approximate a normal distribution as the size N of samples increases.
A consequence of the Central Limit Theorem is that if we average measurements of a particular quantity, the distribution of our average tends toward a normal one. In addition, if a measured variable is actually a combination of several other uncorrelated variables, all of them "contaminated" with a random error of any distribution, our measurements tend to be contaminated with a random error that is normally distributed as the number of these variables increases. Thus, the Central Limit Theorem explains the ubiquity of the famous bell-shaped "Normal distribution" (or "Gaussian distribution") in the measurements domain.
Examples:
Uniform distribution
Triangular distribution
1/X distribution
Parabolic distribution
CLT Summary
The uniform distribution on the left is obviously non-Normal. Call that the parent distribution. Samples of two are repeatedly drawn from the parent distribution and their averages computed; the distribution of averages of two is shown on the left.

FIG 3.2 Distributions of Xbar

Repeatedly taking three from the parent distribution, and computing the averages, produces the probability density on the left.
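The CLT predictions [1]–[3] above can be checked numerically. The sketch below draws averages of N uniform samples (a clearly non-Normal parent); the sample counts are arbitrary choices.

```python
import random
import statistics

random.seed(1)

def sample_means(n_per_sample, n_samples=20000):
    """Averages of n_per_sample draws from a uniform parent on [0, 1)."""
    return [statistics.fmean(random.random() for _ in range(n_per_sample))
            for _ in range(n_samples)]

parent_mean = 0.5
parent_sd = (1 / 12) ** 0.5   # stdev of the uniform parent on [0, 1)

for n in (2, 3, 30):
    means = sample_means(n)
    # [1] mean of means ~ parent mean
    # [2] stdev of means ~ parent_sd / sqrt(N)
    print(n, round(statistics.fmean(means), 3),
          round(statistics.stdev(means), 3), round(parent_sd / n ** 0.5, 3))
```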
3.3 STATIONARY PROCESS:
In a stationary process, the mean and variance, if they are present, also do not change over time and do not follow any trends.
Stationarity is used as a tool in time series analysis, where the raw data is often transformed to become stationary; for example, economic data are often seasonal and/or dependent on a non-stationary price level. An important type of non-stationary process that does not include a trend-like behaviour is the cyclostationary process.
Note that a "stationary process" is not the same thing as a "process with a stationary distribution". Indeed there are further possibilities for confusion with the use of "stationary" in the context of stochastic processes; for example a "time-homogeneous" Markov chain is sometimes said to have "stationary transition probabilities". Besides, all stationary Markov random processes are time-homogeneous.
Definition:
A random process X(t) observed at times t1, t2, ..., tn is said to be (strictly) stationary if, for all n, for all time shifts τ, and for all (t1, ..., tn), the joint distribution of X(t1 + τ), ..., X(tn + τ) is the same as that of X(t1), ..., X(tn). Since the shift τ does not affect this distribution, the mean E[X(t)] is not a function of time.
Wide Sense Stationary:
A weaker form of stationarity commonly employed in signal processing is known as weak-sense stationarity, wide-sense stationarity (WSS), covariance stationarity, or second-order stationarity. WSS random processes only require that the 1st moment and the autocovariance do not vary with respect to time. Any strictly stationary process which has a mean and a covariance is also WSS. So, a continuous-time random process x(t) which is WSS has a mean function mx(t) = mx that is constant in time, and an autocorrelation function Rx(t1, t2) = Rx(t1 − t2) that depends only on the time difference.
3.4 CORRELATION:
Correlation is any statistical relationship between two random variables, for example between the demand for a product and its price. Correlations are useful because they can indicate a predictive relationship that can be exploited in practice. For example, an electrical utility may produce less power on a mild day based on the correlation between electricity demand and weather. In this example there is a causal relationship, because extreme weather causes people to use more electricity for heating or cooling; however, statistical dependence is not sufficient to demonstrate the presence of such a causal relationship.
Formally, dependence refers to any situation in which random variables do not satisfy a mathematical condition of probabilistic independence. In loose usage, correlation can refer to any departure of two or more random variables from independence, but technically it refers to any of several more specialized types of relationship between mean values. There are several correlation coefficients, often denoted ρ or r, measuring the degree of correlation. The most common of these is the Pearson correlation coefficient, which is sensitive only to a linear relationship between two variables. Other correlation coefficients have been developed to be more robust than the Pearson correlation, that is, more sensitive to nonlinear relationships. Mutual information can also be applied to measure dependence between two variables.
Pearson's correlation coefficient:
The most familiar measure of dependence between two quantities is the Pearson product-moment correlation coefficient, or "Pearson's correlation coefficient", commonly called simply "the correlation coefficient". It is obtained by dividing the covariance of the two variables by the product of their standard deviations. Karl Pearson developed the coefficient from a similar but slightly different idea by Francis Galton.
The population correlation coefficient ρX,Y between two random variables X and Y with expected values μX and μY and standard deviations σX and σY is defined as:

ρX,Y = corr(X, Y) = cov(X, Y) / (σX σY) = E[(X − μX)(Y − μY)] / (σX σY)

where E is the expected value operator, cov means covariance, and corr is a widely used alternative notation for the correlation coefficient.
The Pearson correlation is defined only if both of the standard deviations are finite and nonzero. It
is a corollary of the Cauchy–Schwarz inequality that the correlation cannot exceed 1 in absolute
value. The correlation coefficient is symmetric: corr(X,Y) = corr(Y,X).
The Pearson correlation is +1 in the case of a perfect direct (increasing) linear relationship (correlation), −1 in the case of a perfect decreasing (inverse) linear relationship (anticorrelation), and some value between −1 and 1 in all other cases, indicating the degree of linear
dependence between the variables. As it approaches zero there is less of a relationship (closer to
uncorrelated). The closer the coefficient is to either −1 or 1, the stronger the correlation between
the variables.
If the variables are independent, Pearson's correlation coefficient is 0, but the converse is not true
because the correlation coefficient detects only linear dependencies between two variables. For
example, suppose the random variable X is symmetrically distributed about zero, and Y = X2.
Then Y is completely determined by X, so that X and Y are perfectly dependent, but their
correlation is zero; they are uncorrelated. However, in the special case when X and Y are jointly
normal, uncorrelatedness is equivalent to independence.
If we have a series of n measurements of X and Y written as xi and yi where i = 1, 2, ..., n, then the sample correlation coefficient can be used to estimate the population Pearson correlation r between X and Y:

r = Σi (xi − x̄)(yi − ȳ) / ((n − 1) sx sy)

where x̄ and ȳ are the sample means of X and Y, and sx and sy are the sample standard deviations of X and Y.
This can also be written as:

r = Σi (xi − x̄)(yi − ȳ) / √( Σi (xi − x̄)² Σi (yi − ȳ)² )

If x and y are results of measurements that contain measurement error, the realistic limits on the correlation coefficient are not −1 to +1 but a smaller range.
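A direct Python sketch of the sample correlation coefficient (covariance over the product of the deviation norms), including the Y = X² example discussed above:

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation: sum of products of deviations
    divided by the product of the deviation norms."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfect increasing linear relationship -> r = +1
print(round(pearson_r([1, 2, 3, 4], [10, 20, 30, 40]), 6))   # 1.0
# Y = X**2 with X symmetric about zero -> dependent but uncorrelated
print(pearson_r([-2, -1, 0, 1, 2], [4, 1, 0, 1, 4]))         # 0.0
```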
3.5 COVARIANCE FUNCTIONS:
In probability theory and statistics, covariance is a measure of how much two variables change together, and the covariance function, or kernel, describes the spatial covariance of a random variable process or field. For a random field or stochastic process Z(x) on a domain D, a covariance function C(x, y) gives the covariance of the values of the random field at the two locations x and y:

C(x, y) = cov(Z(x), Z(y)) = E[(Z(x) − E[Z(x)]) (Z(y) − E[Z(y)])]

The same C(x, y) is called the autocovariance function in two instances: in time series (to denote exactly the same concept except that x and y refer to locations in time rather than in space), and in multivariate random fields (to refer to the covariance of a variable with itself, as opposed to the cross covariance between two different variables at different locations, Cov(Z(x1), Y(x2))).
For locations x1, …, xN and weights w1, …, wN, the variance of the weighted sum Σi wi Z(xi) can be computed as

var(Σi wi Z(xi)) = Σi Σj wi wj C(xi, xj)

A function is a valid covariance function if and only if this variance is non-negative for all possible choices of N and weights w1, …, wN. A function with this property is called positive definite.
3.6 ERGODIC PROCESS:
In the event that the distributions and statistics are not available we can avail ourselves of the time averages from a particular sample function. The mean of the sample function Xλo(t) is referred to as the sample mean of the process X(t) and is defined as

μX(T) = (1/T) ∫[−T/2, T/2] Xλo(t) dt

This quantity is actually a random variable by itself, because its value depends on the sample function over which it was calculated. The sample variance of the random process is defined as

σ²X(T) = (1/T) ∫[−T/2, T/2] [Xλo(t) − μX(T)]² dt

and the sample (time-averaged) autocorrelation function is

RX(τ) = (1/T) ∫[−T/2, T/2] x(t) x(t − τ) dt

These quantities are in general not the same as the ensemble averages described before. A random process X(t) is said to be ergodic in the mean, i.e., first-order ergodic, if the sample mean asymptotically approaches the ensemble mean:

limT→∞ E[μX(T)] = μx(t)
limT→∞ var[μX(T)] = 0

In a similar sense a random process X(t) is said to be ergodic in the ACF, i.e., second-order ergodic, if

limT→∞ E[RX(τ)] = RXX(τ)
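Ergodicity in the mean can be illustrated numerically. The sketch below uses the standard random-phase sinusoid X(t) = cos(2πt + θ), whose ensemble mean is 0; the time average of a single sample function approaches that same value. The sampling step and averaging interval are arbitrary choices.

```python
import math
import random

random.seed(0)
theta = random.uniform(0, 2 * math.pi)   # picks one sample function

T, dt = 1000.0, 0.01                     # averaging interval and step
n = int(T / dt)
# Sample mean: (1/T) times the integral of the sample function over T
time_avg = sum(math.cos(2 * math.pi * k * dt + theta)
               for k in range(n)) * dt / T

print(abs(time_avg) < 0.01)   # True: sample mean ~ ensemble mean of 0
```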
3.7 GAUSSIAN PROCESS:
A random process X(t) is a Gaussian process if for all n and all (t1, t2, …, tn), the random variables X(t1), …, X(tn) have a jointly Gaussian density function. For Gaussian processes, knowledge of the mean and autocorrelation, i.e., mX(t) and RX(t1, t2), gives a complete statistical description of the process. If the Gaussian process X(t) is passed through an LTI system, then the output process Y(t) will also be a Gaussian process. For Gaussian processes, WSS and strict stationarity are equivalent.
A Gaussian process is a stochastic process Xt, t ∈ T, for which any finite linear combination of samples has a joint Gaussian distribution. More accurately, any linear functional applied to the sample function Xt will give a normally distributed result. Notation-wise, one can write X ~ GP(m, K), meaning the random function X is distributed as a GP with mean function m and covariance function K. When the input vector t is two- or multi-dimensional, a Gaussian process might also be known as a Gaussian random field.
A sufficient condition for the ergodicity of the stationary zero-mean Gaussian process X(t) is that

∫[−∞, ∞] |RX(τ)| dτ < ∞
3.8 TRANSMISSION OF A RANDOM PROCESS THROUGH A LTI FILTER:
Mean:
When a wide-sense stationary random process X(t) with mean mX is applied to an LTI filter with impulse response h(t), the mean of the output process Y(t) is

mY = E[Y(t)] = ∫[−∞, ∞] h(τ) E[X(t − τ)] dτ = mX ∫[−∞, ∞] h(τ) dτ = mX H(0)

where H(0) is the zero-frequency response of the system.
Autocorrelation:
The autocorrelation function of the output random process Y(t) is, by definition,

RY(t, u) = E[Y(t) Y(u)]

where t and u denote the time instants at which the process is observed. We may therefore use the convolution integral to write

RY(t, u) = E[ ∫[−∞, ∞] h(τ1) X(t − τ1) dτ1 ∫[−∞, ∞] h(τ2) X(u − τ2) dτ2 ]
         = ∫[−∞, ∞] h(τ1) dτ1 ∫[−∞, ∞] h(τ2) E[X(t − τ1) X(u − τ2)] dτ2

When the input X(t) is a wide-sense stationary random process, the autocorrelation function of X(t) is only a function of the difference between the observation times t − τ1 and u − τ2. Putting τ = t − u, we get

RY(τ) = ∫[−∞, ∞] ∫[−∞, ∞] h(τ1) h(τ2) RX(τ − τ1 + τ2) dτ1 dτ2
The mean square value of the output random process Y(t) is obtained by putting τ = 0 in the above equation:

E[Y²(t)] = RY(0) = ∫[−∞, ∞] ∫[−∞, ∞] h(τ1) h(τ2) RX(τ2 − τ1) dτ1 dτ2
The mean square value of the output of a stable linear time-invariant filter in response to a wide-sense stationary random process is equal to the integral over all frequencies of the power spectral density of the input random process multiplied by the squared magnitude of the transfer function of the filter.
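A discrete-time sketch of these input–output relations: white Gaussian noise with a nonzero mean is passed through a hypothetical FIR filter h, and the output mean is compared with mX·H(0). All numbers are illustrative.

```python
import random
import statistics

random.seed(42)
m_x = 2.0
x = [m_x + random.gauss(0, 1) for _ in range(200000)]  # WSS input
h = [0.5, 0.3, 0.2]          # hypothetical impulse response
H0 = sum(h)                  # zero-frequency response, here 1.0

# Discrete convolution y[n] = sum_k h[k] * x[n - k]
y = [sum(h[k] * x[n - k] for k in range(len(h)))
     for n in range(len(h) - 1, len(x))]

# Output mean should be close to m_x * H(0)
print(round(statistics.fmean(y), 2), m_x * H0)
```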
3.9 APPLICATION AND ITS USES:
A Gaussian process can be used as a prior probability
distribution over functions in Bayesian inference.
Wiener process (aka Brownian motion) is the integral of a white noise Gaussian process.
It is not stationary, but it has stationary increments.
REFERENCES:
1. Zhu, Weihong, School of Information Science and Engineering.
4. Muralibabu, ―Communication Theory‖.
GLOSSARY TERMS:
1. An experiment is called a random experiment if its outcomes cannot be predicted.
2. Sample space: the set of all possible outcomes of a random experiment.
3. SSS: a random process is strict-sense stationary if its statistics are invariant to a shift of origin.
4. Ergodic process: one in which time averages are the same for all sample functions and equal to the corresponding ensemble averages.
5. Noise: an unwanted interference of energy with the wanted signals.
6. The power spectral density of white noise is constant.
UNIT – IV
NOISE CHARACTERISATION
CONTENT:
NOISE IN COMMUNICATION SYSTEMS
CLASSIFICATION OF NOISE
NOISE FIGURE & TEMPERATURE
NOISE IN CASCADED SYSTEMS
REPRESENTATION OF NARROW BAND NOISE
NOISE PERFORMANCE
PRE-EMPHASIS & DE-EMPHASIS
CAPTURE AND THRESHOLD EFFECT
4.1 INTRODUCTION:
Noise is often described as the limiting factor in communication systems: indeed if there were no noise there would be virtually no problem in communications.
Noise is a general term which is used to describe an unwanted signal which affects a wanted signal. These unwanted signals arise from a variety of sources which may be considered in one of two main categories:
a) Interference, usually from a human source (man-made).
b) Naturally occurring random noise.
Interference arises, for example, from other communication systems (cross talk), 50 Hz supplies (hum) and harmonics, switched mode power supplies, thyristor circuits, ignition (car spark plugs), motors, etc. Interference can in principle be reduced or completely eliminated by careful engineering (i.e. good design, suppression, shielding etc). Interference is essentially deterministic (i.e. non-random, predictable).
When the interference is removed, there remains naturally occurring noise which is essentially random (non-deterministic). Naturally occurring noise is inherently present in electronic communication systems from either 'external' sources or 'internal' sources.
Naturally occurring external noise sources include atmospheric disturbance (e.g. electric storms, lightning, ionospheric effects etc), so-called 'sky noise' or cosmic noise which includes noise from the galaxy, solar noise and 'hot spots' due to oxygen and water vapour resonance in the earth's atmosphere. These sources can seriously affect all forms of radio transmission and the design of a radio system (i.e. radio, TV, satellite) must take these into account. The diagram below shows noise temperature (equivalent to noise power, as we shall discuss later) as a function of frequency for sky noise.
The upper curve represents an antenna at low elevation (~5° above the horizon); the lower curve represents an antenna pointing at the zenith (i.e. 90° elevation).
Contributions to the above diagram are from galactic noise and atmospheric noise as shown below. Note that sky noise is least over the band 1 GHz to 10 GHz. This is referred to as a low noise 'window' or region and is the main reason why satellite links operate at frequencies in this band (e.g. 4 GHz, 6 GHz, 8 GHz). Since signals received from satellites are so small it is important to keep the background noise to a minimum.
Naturally occurring internal noise or circuit noise is due to active and passive electronic devices (e.g. resistors, transistors etc.) found in communication systems. There are various mechanisms which produce noise in devices, some of which will be discussed in the following sections.
THERMAL NOISE (JOHNSON NOISE):
This type of noise is generated by all resistances (e.g. a resistor, semiconductor, the resistance of a resonant circuit, i.e. the real part of the impedance, cable etc).
Free electrons are in constant random motion for any temperature above absolute zero (0 K, ~ −273 °C). As the temperature increases, the random motion increases; hence thermal noise increases. Since moving electrons constitute a current, although there is no net current flow, the motion can be measured as a mean square noise voltage across the resistance.
FIGURE 4.1 Circuit Diagram of Thermal Noise Voltage
Experimental results (by Johnson) and theoretical studies (by Nyquist) give the mean square noise voltage as

V̄² = 4 k T B R (volt²)
Where k = Boltzmann's constant = 1.38 × 10⁻²³ joules per kelvin
T = absolute temperature
B = bandwidth noise measured in (Hz)
R = resistance (ohms)
The law relating noise power, N, to the temperature and bandwidth is
N = k TB watts
These equations will be discussed further in later section.
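The two relations V̄² = 4kTBR and N = kTB can be evaluated directly; the 50 Ω, 290 K, 1 MHz figures below are just a typical illustrative operating point.

```python
import math

k = 1.38e-23   # Boltzmann's constant, joules per kelvin

def thermal_noise_voltage(T, B, R):
    """RMS thermal (Johnson) noise voltage: sqrt(4kTBR) volts."""
    return math.sqrt(4 * k * T * B * R)

def thermal_noise_power(T, B):
    """Thermal noise power N = kTB watts."""
    return k * T * B

# Illustrative: 50-ohm resistor, room temperature 290 K, 1 MHz bandwidth
print(thermal_noise_voltage(290, 1e6, 50))  # ~9e-7 V (about 0.9 uV rms)
print(thermal_noise_power(290, 1e6))        # ~4e-15 W
```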
The equations above hold for frequencies up to ~10¹³ Hz (10,000 GHz) and for at least all practical temperatures, i.e. for all practical communication systems they may be assumed to be valid. Thermal noise is often referred to as 'white noise' because it has a uniform 'spectral density'.
Note – noise power spectral density is the noise power measured in a 1 Hz bandwidth i.e. watts
per Hz. A uniform spectral density means that if we measured the thermal noise in any 1 Hz
bandwidth from ~ 0Hz → 1 MHz → 1GHz …….. 10,000 GHz etc we would measure the same
amount of noise.
From the equation N = kTB, the noise power spectral density is p0 = kT watts per Hz.
This is shown graphically in figure 4.2.
SHOT NOISE:
Shot noise was originally used to describe noise due to random fluctuations in electron emission from cathodes in vacuum tubes (called shot noise by analogy with lead shot). Shot noise also occurs in semiconductors due to the liberation of charge carriers, which have a discrete amount of charge, as they cross a pn junction. The mean square shot noise current is

In² = 2(IDC + 2 I0) qe B (amp²)

Where
IDC is the direct current at the pn junction (amps)
I0 is the reverse saturation current (amps)
qe is the electron charge = 1.6 × 10⁻¹⁹ coulombs
B is the effective noise bandwidth (Hz)
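The shot-noise expression can likewise be evaluated numerically; the 1 mA junction current, negligible reverse saturation current, and 1 MHz bandwidth are illustrative values.

```python
import math

q_e = 1.6e-19   # electron charge, coulombs

def shot_noise_current(i_dc, i_0, bandwidth):
    """RMS shot noise current from In^2 = 2(I_DC + 2*I_0)*q_e*B."""
    return math.sqrt(2 * (i_dc + 2 * i_0) * q_e * bandwidth)

# Illustrative: 1 mA junction current, I_0 ~ 0, 1 MHz bandwidth
print(shot_noise_current(1e-3, 0.0, 1e6))   # ~1.8e-8 A (about 18 nA)
```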
Thermal noise in resistors does not vary with frequency, as previously noted, but many resistors also generate an additional frequency-dependent noise referred to as excess noise. This noise also exhibits a (1/f) characteristic, similar to flicker noise.
Carbon resistors generally generate the most excess noise, whereas wire wound resistors usually generate a negligible amount of excess noise. However, the inductance of wire wound resistors limits their frequency range, and metal film resistors are usually the best choice for high frequency communication circuits where low noise and constant resistance are required.
BURST NOISE OR POPCORN NOISE:
Some semiconductors also produce burst or popcorn noise with a spectral density which is proportional to 1/f².
GENERAL COMMENTS:
The diagram below illustrates the variation of noise with frequency.
For frequencies below a few kHz (low frequency systems), flicker and popcorn noise are the most significant, but these may be ignored at higher frequencies where 'white' noise predominates. Thermal noise is always present in electronic systems. Shot noise is more or less significant depending upon the specific devices used; for example a FET with an insulated gate avoids junction shot noise. As noted in the preceding discussion, all transistors generate other types of 'non-white' noise which may or may not be significant depending on the specific device and application. Of all these types of noise source, white noise is generally assumed to be the most significant, and system analysis is based on the assumption of thermal noise. This assumption is reasonably valid for radio systems which operate at frequencies where non-white noise is greatly reduced and which have low noise 'front ends' which, as shall be discussed, contribute most of the internal (circuit) noise in a receiver system. At radio frequencies the sky noise contribution is significant and is also (usually) taken into account.
Obviously, analysis and calculations only give an indication of system performance. Measurements of the noise or signal-to-noise ratio in a system include all the noise, from whatever source, present at the time of measurement and within the constraints of the measurement or system bandwidth.
Before discussing some of these aspects further, an overview of noise evaluation as applicable to communication systems will first be presented.
NOISE EVALUATION:
OVERVIEW:
It has been stated that noise is an unwanted signal that accompanies a wanted signal, and, as discussed, the most common form is random (non-deterministic) thermal noise.
The essence of calculations and measurements is to determine the signal power to noise power ratio, i.e. the (S/N) ratio or (S/N) expression in dB.
i.e. Let S = signal power (mW) and N = noise power (mW). Then

(S/N) ratio = S / N
(S/N) dB = 10 log10 (S / N)

Also recall that

S dBm = 10 log10 (S (mW) / 1 mW)
N dBm = 10 log10 (N (mW) / 1 mW)

i.e. (S/N) dB = 10 log10 S − 10 log10 N

so (S/N) dB = S dBm − N dBm
w.E
Powers are usually measured in dBm (or dBw) in communications systems. The equation
S
asy
S dBm N dBm is often the most useful.
N dB
The S/N at various stages in a communication system gives an indication of system quality and performance, in terms of error rate in digital data communication systems and 'fidelity' in the case of analogue communication systems. (Obviously, the larger the S/N, the better the system will be.)
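The relations above can be checked numerically. The sketch below, with arbitrary assumed example powers, confirms that 10 log10(S/N) equals S dBm - N dBm:

```python
import math

def dbm(p_mw):
    # power in mW expressed in dBm (relative to 1 mW)
    return 10 * math.log10(p_mw / 1.0)

S, N = 2.0, 0.002            # assumed example: S = 2 mW, N = 0.002 mW
snr_db = 10 * math.log10(S / N)
snr_db_alt = dbm(S) - dbm(N)
print(round(snr_db, 6), round(snr_db_alt, 6))   # both 30.0
```

Both routes give the same answer because subtracting logarithms is the same as taking the logarithm of the ratio.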
Noise which accompanies the signal is usually considered to be additive (in terms of powers), and it is often described as Additive White Gaussian Noise (AWGN). Noise and signals may also be multiplicative, and in some systems, at some levels of S/N, this may be more significant than AWGN. In order to evaluate noise, various mathematical models and techniques have to be used, particularly concepts from statistics and probability theory, the major starting point being that random noise is assumed to have a Gaussian or Normal distribution.
We may relate the concept of white noise with a Gaussian distribution as follows:
FIGURE 4.3 Probability of noise voltage vs voltage
Gaussian distribution: the graph shows the probability of noise voltage vs voltage, i.e. the most probable noise voltage is 0 volts (zero mean). There is a small probability of very large +ve or -ve noise voltages.
White noise – uniform noise power from ‗DC‘ to very high frequencies.
Although not strictly consistent, we may relate these two characteristics of thermal noise as follows:
FIGURE 4.4 Characteristics of Thermal Noise
The probability of the amplitude of noise at any frequency, or in any band of frequencies (e.g. 1 Hz, 10 Hz, ... 100 KHz etc.), is a Gaussian distribution. Noise may be quantified in terms of noise power spectral density, p0 watts per Hz, from which the noise power N may be expressed as
N = p0 Bn watts
where Bn is the equivalent noise bandwidth; the equation assumes p0 is constant across the band (i.e. white noise).
Note: Bn is not the 3 dB bandwidth; it is the bandwidth which, when multiplied by p0, gives the actual output noise power N. This is illustrated further below.
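The relation N = p0 Bn is a one-line calculation; a minimal sketch with assumed values for p0 and Bn:

```python
k = 1.38e-23        # Boltzmann's constant, J/K
p0 = k * 290        # assumed PSD: kT at 290 K, about 4e-21 W/Hz
Bn = 10e3           # assumed equivalent noise bandwidth, 10 kHz
N = p0 * Bn         # total noise power in watts
print(N)            # about 4e-17 W
```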
FIGURE 4.5 Basic Ideal Low Pass Filter
Ideal low pass filter:
Bandwidth B Hz = Bn
N = p0 Bn watts

Practical LPF:
The 3 dB bandwidth is shown, but the noise does not suddenly cease at B3dB.
N = p0 Bn
Therefore Bn > B3dB; Bn depends on the actual filter.
In general the equivalent noise bandwidth is > B3dB.
Alternatively, noise may be quantified in terms of the mean square noise voltage Vn², which is effectively a power. From this a root mean square (RMS) value for the noise voltage may be determined:
RMS = √(Vn²)
In order to ease analysis, models based on the above quantities are used. For example, if we imagine noise in a very narrow bandwidth Δf, as Δf → df, the noise approaches a sine wave (with frequency 'centred' in df). Since an RMS noise voltage can be determined, a 'peak' value of the noise may be invented, since for a sine wave
RMS = Peak/√2
Note: the peak value is entirely fictitious, since in theory noise with a Gaussian distribution could have an arbitrarily large peak value.
Hence we may relate:
mean square → RMS = √(mean square) → Peak = √2 × RMS (invented for convenience)
Problems arising from noise are manifested at the receiving end of a system and hence most of the
analysis relates to the receiver / demodulator with transmission path loss and external noise
sources (e.g. sky noise) if appropriate, taken into account.
The transmitter is generally assumed to transmit a signal with zero noise (i.e. (S/N) at the Tx = ∞).
General communication system block diagrams to illustrate these points are shown below.
FIGURE 4.6 Block Diagrams of Communication System
Transmission Line
R = repeater (Analogue) or Regenerator (digital)
These systems may facilitate analogue or digital data transfer. The diagram below characterizes
these typical systems in terms of the main parameters.
PT represents the output power at the transmitter.
GT represents the Tx aerial gain.
Path loss represents the signal attenuation due to the inverse square law and absorption, e.g. in the atmosphere.
G represents repeater gains.
PR represents the receiver input signal power.
(S/N)IN represents the S/N at the input to the demodulator.
(S/N)OUT represents the quality of the output signal for analogue systems.
DATA ERROR RATE represents the quality of the output (probability of error) for digital data systems.
4.2 ANALYSIS OF NOISE IN COMMUNICATION SYSTEMS:
Thermal Noise (Johnson noise)
It has been discussed that the thermal noise in a resistance R has a mean square value given by
Vn² = 4 k T B R (volt²)
where k = Boltzmann's constant (J/K), T = absolute temperature (K), B = bandwidth (Hz), and R = resistance (ohms).
This is found to hold for large bandwidths (> 10^13 Hz) and a large range in temperature.
This thermal noise may be represented by an equivalent circuit as shown below.
FIGURE 4.7 Equivalent Circuit of Thermal Noise Voltage
i.e. equivalent to the ideal noise free resistor (with same resistance R) in series with a voltage
source with voltage Vn.
Since
Vn² = 4 k T B R (volt²) (mean square value, a power)
then VRMS = √(4 k T B R)
The above equation indicates that the noise power is proportional to bandwidth.
For a given resistance R, at a fixed temperature T (Kelvin), we have
Vn² = (4 k T R) B, where (4 k T R) is a constant, with units of watts per Hz.
For a given system, with (4 k T R) constant, if we double the bandwidth from B Hz to 2B Hz the noise power will double (i.e. increase by 3 dB). If the bandwidth were increased by a factor of 10, the noise power would increase by a factor of 10. For this reason it is important that the system bandwidth is only just 'wide' enough to allow the signal to pass, to limit the noise bandwidth to a minimum.
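The bandwidth scaling described above can be sketched numerically. The resistance, temperature and bandwidths below are assumed example values:

```python
import math

k = 1.38e-23   # Boltzmann's constant, J/K

def mean_square_noise_voltage(T, R, B):
    # thermal noise of a resistor: Vn^2 = 4kTRB (volt^2)
    return 4 * k * T * R * B

v2_a = mean_square_noise_voltage(290, 50, 1e6)   # assumed: 50 ohm at 290 K, 1 MHz
v2_b = mean_square_noise_voltage(290, 50, 2e6)   # bandwidth doubled
print(v2_b / v2_a)                               # 2.0
print(round(10 * math.log10(v2_b / v2_a), 2))    # 3.01 dB increase
```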
i.e. for a signal spectrum with signal power = S:
A) System BW = B Hz: N = constant × B (watts) = KB
B) System BW = 2B Hz: N = constant × 2B (watts) = K2B
FIGURE 4.8 Spectrum Diagrams of Communication System
For A, S/N = S/KB; for B, S/N = S/K2B
i.e. the S/N for B is only ½ that for A.
Noise Voltage Spectral Density
Data sheets often specify the noise voltage spectral density, with units of volts per √Hz.
√(4 k T R) has units of volts per √Hz. If the bandwidth B is doubled, the noise voltage increases by a factor of √2.

Resistance in Series
For two resistors R1 (at temperature T1) and R2 (at temperature T2) in series, the mean square noise voltages add:
Vn² = 4 k B (T1 R1 + T2 R2) (mean square noise)
If T1 = T2 = T then Vn² = 4 k T B (R1 + R2)
Resistance in Parallel
FIGURE 4.10 Circuit Diagram of Resistance in Parallel
Since an ideal voltage source has zero impedance, we can find (by superposition) the noise output Vo1 due to Vn1, and the output voltage Vo2 due to Vn2:
Vo1 = Vn1 R2/(R1 + R2),  Vo2 = Vn2 R1/(R1 + R2)
Hence,
Vo1² = 4 k T1 B R1 [R2/(R1 + R2)]²
and
Vo2² = 4 k T2 B R2 [R1/(R1 + R2)]²
e.g. since Vo1 = Vn1 R2/(R1 + R2), where Vn1 is an RMS value and Vn1² a mean square value, then
Vo1² = Vn1² [R2/(R1 + R2)]²
Thus the total mean square noise voltage is
Vn² = Vo1² + Vo2²
= [4 k B/(R1 + R2)²] (T1 R1 R2² + T2 R2 R1²)
= 4 k B R1 R2 (T1 R2 + T2 R1)/(R1 + R2)²
Since T1 usually equals T2 (say T),
Vn² = 4 k T B R1 R2 (R1 + R2)/(R1 + R2)²
or
Vn² = 4 k T B [R1 R2/(R1 + R2)]
i.e. the two noisy resistors in parallel behave as a resistance R1 R2/(R1 + R2), which is the equivalent resistance of the parallel combination.
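This equivalence can be checked numerically. The temperatures, bandwidth and resistor values below are assumed examples:

```python
k, T, B = 1.38e-23, 290.0, 1.0e6    # assumed temperature and bandwidth
R1, R2 = 1000.0, 4000.0             # assumed resistor values

# formula derived above for two resistors at the same temperature
vn2 = 4 * k * T * B * R1 * R2 / (R1 + R2)

# noise of the single equivalent parallel resistance
Rp = R1 * R2 / (R1 + R2)            # 800 ohms
vn2_equiv = 4 * k * T * B * Rp

print(Rp)                                    # 800.0
print(abs(vn2 - vn2_equiv) / vn2 < 1e-12)    # True: the two agree
```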
a) A transmission line (e.g. coax cable).
FIGURE 4.12 Circuit Diagram of Coaxial Cable
Zo is the characteristics impedance of the transmission line, i.e. the source resistance Rs.
This is connected to the receiver with an input impedance RIN .
b) A radio path (e.g. satellite, TV, radio – using aerial)
FIGURE 4.13 Circuit Diagram of Radio Path.
Again Zo=Rs the source resistance, connected to the receiver with input resistance Rin.
Typically Zo is 600 ohm for audio/telephone systems, 50 ohm for radio/TV systems, or 75 ohm for radio frequency systems.
An equivalent circuit, when the line is connected to the receiver is shown below. (Note we omit
the noise due to Rin – this is considered in the analysis of the receiver section).
The RMS noise voltage output, Vo(noise), is
Vo(noise) = √(Vn²) × RIN/(RIN + RS)
Similarly, the signal voltage output due to Vsignal at the input is
Vs(signal) = Vsignal × RIN/(RIN + RS)
For maximum power transfer, the input RIN is matched to the source RS, i.e. RIN = RS = R (say). Then
Vo(noise) = √(Vn²)/2
Since average power = Vrms²/R, the average noise power transferred to the load is
N = Vn²/(4R) = k T Bn
i.e. Noise Power = k T Bn watts
For a matched system, N represents the average noise power transferred from the source to the
load. This may be written as
p0 = N/Bn = k T watts per Hz
where p0 is the noise power spectral density (watts per Hz) and T is the absolute temperature in K.
Note that p0 is independent of frequency, i.e. white noise.
These equations indicate the need to keep the system bandwidth to a minimum, i.e. to that required to pass only the band of wanted signals, in order to minimize the noise power N.
For example, for a resistance at a temperature of 290 K (17 deg C), p0 = kT is 4 x 10^-21 watts per Hz. For a noise bandwidth Bn = 1 KHz, N is 4 x 10^-18 watts (-174 dBW). If the system bandwidth is increased to 2 KHz, N will increase by a factor of 2 (i.e. 8 x 10^-18 watts or -171 dBW), which will degrade the (S/N) by 3 dB. Care must also be exercised when noise or (S/N) measurements are made, for example with a power meter or spectrum analyser, to be clear which bandwidth the noise is measured in, i.e. system or test equipment. For example, assume the system bandwidth is 1 MHz and the measurement instrument bandwidth is 250 KHz.
In this example, the noise measured is band limited by the test equipment rather than the system, making the system appear less noisy than it actually is. Clearly, if the relative bandwidths are known (they should be), the measured noise power may be corrected to give the actual noise power in the system bandwidth.
If the system bandwidth were 250 KHz and the test equipment 1 MHz, then the measured result would be -150 dBW (i.e. the same as the actual noise), because the noise power monitored by the test equipment has already been band limited to 250 KHz.
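The dBW figures quoted above follow directly from N = kTB; a minimal sketch reproducing them:

```python
import math

k = 1.38e-23   # Boltzmann's constant, J/K

def thermal_noise_dbw(T, Bn):
    # N = kTB expressed in dBW (dB relative to 1 watt)
    return 10 * math.log10(k * T * Bn)

print(round(thermal_noise_dbw(290, 1e3)))   # -174 dBW for 1 KHz, as in the text
print(round(thermal_noise_dbw(290, 2e3)))   # -171 dBW for 2 KHz (3 dB worse)
```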
SIGNAL – TO – NOISE :
The signal to noise ratio is given by
S/N = Signal Power/Noise Power
The signal to noise ratio in dB is expressed by
(S/N) dB = 10 log10 (S/N)
or (S/N) dB = 10 log10 S - 10 log10 N
since 10 log10 S = S dBm if S in mW
(S/N)OUT represents the S/N at the output of a network.
In general (S/N)OUT ≤ (S/N)IN, i.e. the network 'adds' noise (thermal noise etc. from the network devices), so that the output (S/N) is generally worse than the input (S/N).
The amount of noise added by the network is embodied in the Noise Factor F, which is defined by
F = (S/N)IN / (S/N)OUT
The lower the value of F, the better the network.
The network may consist of active elements, e.g. amplifiers, active mixers etc., i.e. elements with gain > 1, or passive elements, e.g. passive mixers, feeder cables, attenuators, i.e. elements with gain < 1.
NOISE FIGURE – NOISE FACTOR FOR ACTIVE ELEMENTS :
For active elements with power gain G > 1, we have

FIGURE 4.17 Circuit Diagram of Noise Factor - 1

F = (S/N)IN / (S/N)OUT = (SIN NOUT)/(NIN SOUT)
But SOUT = G SIN
Therefore F = (SIN NOUT)/(NIN G SIN)
i.e. F = NOUT/(G NIN)
If NOUT were due only to G times NIN, then F would be 1, i.e. the active element would be noise free. Since in general F > 1, NOUT is increased by noise due to the active element, i.e.
FIGURE 4.17 Circuit Diagram of Noise Factor -3
Ne is extra noise due to active elements referred to the input; the element is thus effectively
noiseless.
NOUT = G (NIN + Ne)
Hence F = NOUT/(G NIN) = G (NIN + Ne)/(G NIN) = 1 + Ne/NIN
Rearranging gives
Ne = (F - 1) NIN
i.e. k Te Bn = (F - 1) k TS Bn
or Te = (F - 1) TS
The noise factor F is usually measured under matched conditions with the noise source at ambient temperature TS, i.e. TS ≈ 290 K is usually assumed. This allows us to calculate the equivalent noise temperature of an element with noise factor F measured at 290 K.
For example, an amplifier with noise figure FdB = 6 dB (noise factor F ≈ 4) has an equivalent noise temperature Te ≈ 865 K.
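The conversion Te = (F - 1) TS can be sketched directly; with FdB = 6 dB the exact noise factor is 10^0.6 ≈ 3.98, which reproduces the 865 K quoted above:

```python
def noise_temperature(f_db, ts=290.0):
    # Te = (F - 1) Ts with the noise factor F = 10**(FdB/10)
    f = 10 ** (f_db / 10.0)
    return (f - 1.0) * ts

print(round(noise_temperature(6.0)))    # 865 K (F of 6 dB is about 3.98)
print((4 - 1) * 290)                    # 870 K for an exact noise factor of 4
```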
Comments:-
a) We have introduced the idea of referring the noise to the input of an element, this noise is
not actually present at the input, it is done for convenience in the analysis.
b) The noise power and the equivalent noise temperature are related by N = kTB. The temperature T is not necessarily the physical temperature; it is equivalent to the temperature of a resistance R (the system impedance) which gives the same noise power N when measured in the same bandwidth Bn.
c) Noise figure (or noise factor F) and equivalent noise temperature Te are related, and both indicate how much noise an element is producing.
Since Te = (F - 1) TS, for F = 1 we have Te = 0, i.e. an ideal noise free active element.
NOISE FIGURE – NOISE FACTOR FOR PASSIVE ELEMENTS :
The theoretical argument for passive networks (e.g. feeders, passive mixers, attenuators), that is, networks with a gain < 1, is fairly abstract; in essence it shows that the noise at the input, NIN, is attenuated by the network, but the added noise Na contributes to the noise at the output such that NOUT = NIN.
Thus, since
F = (SIN NOUT)/(NIN SOUT), NOUT = NIN and SOUT = G SIN,
F = SIN/(G SIN) = 1/G
If we let L denote the insertion loss (as a ratio) of the network, i.e. insertion loss LdB = 10 log L, then L = 1/G and hence, for a passive network,
F = L
Also, since Te = (F-1) T S
Then for passive network
Te = (L-1) T S
Where Te is the equivalent noise temperature of a passive device referred to its input.
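The passive-element results F = L and Te = (L - 1) TS can be sketched together; the 2 dB feeder loss below is an assumed example value:

```python
def passive_noise(loss_db, ts=290.0):
    # for a passive element: noise factor F = L (loss as a ratio)
    # and equivalent noise temperature Te = (L - 1) Ts at its input
    L = 10 ** (loss_db / 10.0)
    return L, (L - 1.0) * ts

F, Te = passive_noise(2.0)       # assumed 2 dB feeder loss
print(round(F, 3), round(Te))    # 1.585, 170
```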
REVIEW OF NOISE FACTOR – NOISE FIGURE – TEMPERATURE:
F, FdB and Te are related by FdB = 10 log10 F and Te = (F - 1) 290.

F      FdB (dB)   Te (K)
1      0          0
2      3          290
4      6          870
8      9          2030
16     12         4350
Typical values of noise temperature, noise figure and gain for various amplifiers and attenuators are given below:

Device               Frequency   Te (K)   FdB (dB)   Gain (dB)
Maser Amplifier      9 GHz       4        0.06       20
Ga As Fet amp        9 GHz       330      3.3        6
Ga As Fet amp        1 GHz       110      1.4        12
Silicon Transistor   400 MHz     420      3.9        13
L C Amp              10 MHz      1160     7.0        50
Type N cable         1 GHz                2.0        -2.0
CASCADED NETWORK:
A receiver system usually consists of a number of passive and active elements connected in series. Each element is defined separately in terms of its gain (greater than 1 or less than 1 as the case may be), noise figure or noise temperature, and bandwidth (usually the 3 dB bandwidth). These elements are assumed to be matched.
A typical receiver block diagram is shown below. The noise may be referred to any chosen point in the receiver, for example to A, the feeder input, or B, the input to the first amplifier.
The equations so far discussed refer the noise to the input of that specific element, i.e. Te or Ne is the noise referred to the input. To refer the noise to the output we must multiply the input noise by the gain G.
For example, for a lossy feeder with loss L we had
Ne = (L - 1) NIN, noise referred to the input.
Referred to the output (multiplying by the gain G = 1/L):
Ne (referred to output) = (1 - 1/L) NIN
Similarly, the equivalent noise temperature referred to the output is
Te (referred to output) = (1 - 1/L) TS
These points will be clarified later; first the system noise figure will be considered.
SYSTEM NOISE FIGURE:
Assume that a system comprises the elements shown below, each element defined and specified
separately.
FIGURE 4.18 Circuit Diagram of Cascade System -3
Note: NIN for each stage is equivalent to a source at a temperature of 290 K, since this is how each element is specified.
Now NOUT = G3 NIN3 + Ne3 = G3 NIN3 + (F3 - 1) NIN
Since NIN3 = G2 NIN2 + Ne2 = G2 NIN2 + (F2 - 1) NIN
and NIN2 = G1 Nae + (F1 - 1) NIN
where Nae is the noise from the aerial (source) applied to the first stage. If we assume Nae ≈ NIN, i.e. we measure and specify Fsys under similar conditions as the individual elements, then the overall noise factor is
Fsys = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1 G2)
The first stage therefore dominates: it should have a good noise factor and some gain to give an acceptable overall noise figure.
SYSTEM NOISE TEMPERATURE:
Since Te = (F - 1) TS, i.e. F = 1 + Te/TS,
then Fsys = 1 + Tesys/TS
where Tesys is the equivalent noise temperature of the system and TS is the noise temperature of the source.
From Fsys = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1 G2) + ... we have
1 + Tesys/TS = 1 + Te1/TS + (Te2/TS)/G1 + (Te3/TS)/(G1 G2) + ...
which gives
Tesys = Te1 + Te2/G1 + Te3/(G1 G2) + Te4/(G1 G2 G3) + ...
Tesys is the receiver system equivalent noise temperature. Again, this shows that the system noise temperature depends on the first stage to a large extent if the gain of the first stage is reasonably large. The equations for Tesys and Fsys refer the noise to the input of the first stage. This can best be clarified by examining the equation for Tesys in conjunction with the diagram below.
Te2 is referred to the input of the 2nd stage; to refer it to the input of the 1st stage we must divide Te2 by G1.
Te3 is referred to the input of the 3rd stage; to refer it to the input of the 1st stage we must divide by (G1 G2), etc.
It is often more convenient to work with noise temperature rather than noise factor.
Given a noise factor we find Te from Te = (F-1)290.
Note also that the gains (G1, G2, G3 etc.) may be gains > 1 or gains < 1, i.e. losses L where L = 1/G.
See examples and tutorials for further classifications.
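The cascade equation for Tesys can be sketched as a short routine. The two-stage receiver below is hypothetical (an amplifier with Te = 110 K and 12 dB gain, followed by a lossy mixer with Te = 870 K and gain ratio 0.5):

```python
def cascade_noise_temperature(stages):
    # stages: list of (Te in K, gain as a ratio), in order from the input.
    # Te_sys = Te1 + Te2/G1 + Te3/(G1 G2) + ...
    te_sys, g_prod = 0.0, 1.0
    for te, gain in stages:
        te_sys += te / g_prod
        g_prod *= gain
    return te_sys

stages = [(110.0, 10 ** 1.2), (870.0, 0.5)]   # hypothetical example values
print(round(cascade_noise_temperature(stages), 1))   # about 164.9 K
```

Note how the large second-stage noise temperature is divided down by the first-stage gain, so the first stage dominates the result.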
We refer the noise from the various stages in the receiver to the input, and thus contrive to make all the stages ideal, i.e. noise free.
In practice the noise gets worse as we proceed from the aerial to the output. In the technique outlined, all noise is referred to the input and all stages are assumed noise free.
To complete the analysis consider the system below
FIGURE 4.18 Circuit Diagram of Cascade System - 7
The overall noise temperature = Tsky + Tsys (referred to A)
NOUT = NR (1/L1) G2 (1/L3) G4
i.e. the noise referred to the input times the gain of each stage.
Now consider the signal: the signal is unaffected by referring the noise back from later stages.
The actual S/N = SR/(k Tsky BAe), where BAe is the aerial bandwidth and SR is the received signal power.
The signal at the output is
SOUT = SR (1/L1) G2 (1/L3) G4
and (S/N)OUT = SOUT/NOUT = SR/NR
Hence, by referring all the noise to the input and finding NR, we can find SR/NR, which is the same as SOUT/NOUT, i.e. we do not need to know all the system gain.
Consider now the diagram below
A filter of some form often follows the receiver, and we often need to know p0, the noise power spectral density.
Recall that N = p0 B = kTB, so
p0 = kT
p0 = k (Tsky + Tsys)
In order for the effects of noise to be considered in systems, for example in the analysis of the probability of error as a function of signal to noise for an FSK modulated data system, or the performance of an analogue FM system in the presence of noise, it is necessary to develop a model which allows noise to be treated algebraically.
Noise may be quantified in terms of noise power spectral density p0 watts per Hz, such that the average noise power in a noise bandwidth Bn Hz is
N = p0 Bn watts
Thus the actual noise power in a system depends on the system bandwidth, and since we are often interested in the S/N at the input to a demodulator, Bn is the smallest noise equivalent bandwidth before the demodulator.
Since average power is proportional to Vrms², we may relate N to a 'peak' noise voltage so that
N = Vn²/2 = p0 Bn
i.e. Vn = √(2 p0 Bn), the 'peak' value of the noise.
In practice noise is a random signal with (in theory) a Gaussian distribution, and hence arbitrarily large peak values (or as otherwise limited by the system dynamic range) are possible. Hence this 'peak' value for the noise is a fictitious value which gives rise to the same average noise power as the actual noise.
The phasor represents a signal with peak value Vc, rotating with angular frequency ωc rads per sec, at an angle ωc t to some reference axis at time t = 0.
If we now consider a carrier with a noise voltage of 'peak' value Vn superimposed, we may represent this as:
FIGURE 4.19 PHASOR Diagram -2
In this case Vn is the peak value of the noise and φn is the phase of the noise relative to the carrier. Both Vn and φn are random variables; the above phasor diagram represents a snapshot at some instant in time.
The resultant or received signal R, is the sum of carrier plus noise. If we consider several
snapshots overlaid as shown below we can see the effects of noise accompanying the signal and
how this affects the received signal R.
Thus the received signal has amplitude and frequency changes (which in practice occur randomly) due to noise. We may draw, for a single instant, the phasor with the noise resolved into two components:
a) x(t), in phase with the carrier: x(t) = Vn cos φn
b) y(t), in quadrature with the carrier: y(t) = Vn sin φn
FIGURE 4.19 PHASOR Diagram -4
The reason why this is done is that x(t) represents amplitude changes in Vc (amplitude changes affect the performance of AM systems) and y(t) represents phase, i.e. frequency, changes (phase/frequency changes affect the performance of FM/PM systems).
We note that the resultant of x(t) and y(t) is
√(x(t)² + y(t)²) = √(Vn² cos² φn + Vn² sin² φn) = Vn (since cos² + sin² = 1)
We can regard x(t) as a phasor which is in phase with Vc cos ωc t, i.e. a phasor rotating at ωc: x(t) cos ωc t,
and by similar reasoning, y(t) in quadrature: y(t) sin ωc t.
Hence we may write
Vn(t) = x(t) cos ωc t - y(t) sin ωc t
or, as an alternative approach,
Vn(t) = Vn cos(ωc t + φn)
This equation is an algebraic representation of the noise, and since
x(t) = Vn cos φn = √(2 p0 Bn) cos φn
the peak value of x(t) is √(2 p0 Bn),
and thus the mean square value of x(t) is
x(t)² = [√(2 p0 Bn)]²/2 = p0 Bn
Also the mean square value of y(t) is
y(t)² = p0 Bn
The total noise in the bandwidth Bn is
N = Vn²/2 = [x(t)² + y(t)²]/2 = p0 Bn
i.e. NOT x(t)² + y(t)² as might be expected.
The reason for this is the cos φn and sin φn relationship in the representation: e.g. when 'x(t)' contributes p0 Bn, the 'y(t)' contribution is zero, i.e. the sum is always equal to p0 Bn.
The algebraic representation of noise discussed above is quite adequate for the analysis of many
systems, particularly the performance of ASK, FSK and PSK modulated systems.
When considering AM and FM systems, assuming a large (S/N) ratio, i.e. Vc >> Vn, the following may be used.
Considering the general phasor representation below:-
FIGURE 4.19 PHASOR Diagram -5
For AM systems, the signal is of the form [Vc + m(t)] cos ωc t, where m(t) is the message or modulating signal, and the resultant for (S/N) >> 1 is
AM received ≈ [Vc + m(t) + x(t)] cos ωc t
Since AM is sensitive to amplitude changes, changes in the resultant length are predominantly due to x(t).
For FM systems the signal is of the form Vc cos ωc t. Noise will produce both amplitude changes (i.e. in Vc) and frequency variations; the amplitude variations are removed by a limiter in the FM receiver. Hence the phase deviation due to the noise, from the phasor diagram, is
φ(t) = tan⁻¹[ Vn sin ωn t / (Vc + Vn cos ωn t) ]
= tan⁻¹[ (Vn/Vc) sin ωn t / (1 + (Vn/Vc) cos ωn t) ]
Additive
Noise is usually additive in that it adds to the information bearing signal. A model of the received
signal with additive noise is shown below.
The signal (information bearing) is at its weakest (most vulnerable) at the receiver input. Noise at other points (e.g. in the receiver) can also be referred to the input. The noise is uncorrelated with the signal, i.e. independent of the signal, and we may state, for average powers,
Output Power = Signal Power + Noise Power = (S + N)
White
As we have stated, noise is assumed to have a uniform noise power spectral density, given that the noise is not band limited by some filter bandwidth. We denote the noise power spectral density by p0(f).
White noise: p0(f) = constant
Also, noise power = p0 Bn
GAUSSIAN
We generally assume that noise voltage amplitudes have a Gaussian or Normal distribution.
4.2 CLASSIFICATION OF NOISE:
Noise is random, undesirable electrical energy that enters the communications system via the
communicating medium and interferes with the transmitted message. However, some noise is also
produced in the receiver.
(OR)
With reference to an electrical system, noise may be defined as any unwanted form of energy
which tends to interfere with proper reception and reproduction of wanted signal.
Noise may be put into following two categories.
1. External noises, i.e. noise whose sources are external.
External noise may be classified into the following three types:
Atmospheric noises
Extraterrestrial noises
Man-made noises or industrial noises.
2. Internal noise, i.e. noise which is generated within the receiver or communication system. Internal noise may be put into the following four categories:
Thermal noise or white noise or Johnson noise
Shot noise
Transit time noise
Miscellaneous internal noise
External noise cannot be reduced except by changing the location of the receiver or the entire system. Internal noise, on the other hand, can be evaluated mathematically and can be reduced to a great extent by proper design. Because internal noise can be reduced in this way, the study of noise characteristics is a very important part of communication engineering.
Explanation of External Noise
Atmospheric Noise:
Atmospheric noise or static is caused by lightning discharges in thunderstorms and other natural electrical disturbances occurring in the atmosphere. These electrical impulses are random in nature. Hence the energy is spread over the complete frequency spectrum used for radio communication.
Atmospheric noise accordingly consists of spurious radio signals with components spread over a
wide frequency range. These spurious radio waves constituting the noise get propagated over the
earth in the same fashion as the desired radio waves of the same frequency. Accordingly at a given
receiving point, the receiving antenna picks up not only the signal but also the static from all the
thunderstorms, local or remote.
Extraterrestrial noise:
Solar noise
Cosmic noise
Solar noise:
This is the electrical noise emanating from the sun. Under quiet conditions, there is a steady radiation of noise from the sun. This results because the sun is a large body at a very high temperature (exceeding 6000°C on the surface), and radiates electrical energy in the form of noise over a very wide frequency spectrum, including the spectrum used for radio communication. The intensity produced by the sun varies with time; in fact, the sun has a repeating 11-year noise cycle. During the peak of the cycle, the sun produces noise that causes tremendous radio signal interference.
Cosmic noise:
Distant stars are also suns and have high temperatures. These stars therefore radiate noise in the same way as our sun. The noise received from these distant stars is thermal noise (or black body noise) and is distributed almost uniformly over the entire sky. We also receive noise from the center of our own galaxy (the Milky Way), from other distant galaxies, and from other virtual point sources such as quasars and pulsars.
Man-Made Noise (Industrial Noise):
By man-made noise or industrial noise is meant the electrical noise produced by such sources as automobile and aircraft ignition, electrical motors and switch gears, leakage from high voltage
lines, fluorescent lights, and numerous other heavy electrical machines. Such noises are produced
by the arc discharge taking place during operation of these machines. Such man-made noise is
most intensive in industrial and densely populated areas. Man-made noise in such areas far
exceeds all other sources of noise in the frequency range extending from about 1 MHz to 600
MHz.
SCE 103 DEPT OF ECE
The noise generated in a resistance due to the random motion of electrons is called thermal noise or white or Johnson noise. The analysis of thermal noise is based on the kinetic theory. It shows that the temperature of particles is a way of expressing their internal kinetic energy. Thus the "temperature" of a body can be said to be equivalent to the statistical rms value of the velocity of motion of the particles in the body. At -273°C (or zero degrees Kelvin) the kinetic energy of the particles of a body becomes zero. Thus we can relate the noise power generated by a resistor to be proportional to its absolute temperature. Noise power is also proportional to the bandwidth over which it is measured. From the above discussion we can write down
Pn ∝ TB
Pn = kTB ------ (1)
where Pn = maximum noise power output of a resistor
k = Boltzmann's constant = 1.38 x 10^-23 joules/Kelvin
T = absolute temperature
B = bandwidth over which the noise is measured.
Transit Time Noise:
Another kind of noise that occurs in transistors is called transit time noise.
Transit time is the duration of time that it takes for a current carrier such as a hole or electron to move from the input to the output.
The devices themselves are very tiny, so the distances involved are minimal. Yet the time it takes for the current carriers to move even a short distance is finite. At low frequencies this time is negligible. But when the frequency of operation is high and the period of the signal being processed is of the same order of magnitude as the transit time, problems can occur. The transit time shows up as a kind of random noise within the device, and this is directly proportional to the frequency of operation.
Miscellaneous Internal Noises
Flicker Noise:
Flicker noise or modulation noise is the noise appearing in transistors operating at low audio frequencies. Flicker noise is proportional to the emitter current and junction temperature. However, this noise is inversely proportional to the frequency. Hence it may be neglected at frequencies above about 500 Hz and therefore poses no serious problem.
Transistor Thermal Noise:
Within the transistor, thermal noise is caused by the emitter, base and collector internal
resistances. Out of these three regions, the base region contributes maximum thermal noise.
Partition Noise:
Partition noise occurs whenever current has to divide between two or more paths, and results from the random fluctuations in the division. It would be expected, therefore, that a diode would be less noisy than a transistor (all other factors being equal), since in a transistor the third electrode draws current (i.e., the base current). It is for this reason that the inputs of microwave receivers are often taken directly to diode mixers.
Shot Noise:
The most common type of noise is referred to as shot noise, which is produced by the random arrival of electrons or holes at the output element, at the plate in a tube, or at the collector or drain in a transistor. Shot noise is also produced by the random movement of electrons or holes across a PN junction. Even though current flow is established by external bias voltages, there will still be some random movement of electrons or holes due to discontinuities in the device. An example of such a discontinuity is the contact between the copper lead and the semiconductor material. The interface between the two creates a discontinuity that causes random movement of the current carriers.
Signal to Noise Ratio: En
Noise is usually expressed as a power, because the received signal is also expressed in terms of power. By knowing the signal and noise powers, the signal to noise ratio can be computed. Rather than expressing the signal to noise ratio as simply a number, you will usually see it expressed in decibels.
A receiver has an input signal power of 1.2 µW. The noise power is 0.80 µW. The signal to noise ratio is
Signal to Noise Ratio = 10 log (1.2/0.8) = 10 log 1.5 = 10 (0.176) = 1.76 dB
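The arithmetic of this example can be verified directly:

```python
import math

S = 1.2e-6   # signal power, watts (1.2 uW)
N = 0.8e-6   # noise power, watts (0.80 uW)
snr_db = 10 * math.log10(S / N)
print(round(snr_db, 2))   # 1.76 dB, matching the worked example
```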
4.3 NOISE IN CASCADED SYSTEMS:
Cascade noise figure calculation is carried out by dealing with gain and noise figure as ratios rather than decibels, and then converting back to decibels at the end. As the following equation shows, cascaded noise figure is affected most profoundly by the noise figure of the components closest to the input of the system, as long as some positive gain exists in the cascade. If only loss exists in the cascade, then the cascaded noise figure equals the magnitude of the total loss. The following (Friis) equation is used to calculate cascaded noise figure as a ratio, based on ratio values for gain and noise figure (do not use decibel values):
F = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1 G2) + ... + (Fn - 1)/(G1 G2 ... Gn-1)
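As a sketch, the ratio-domain calculation can be coded as follows. The two-stage line-up in the example is hypothetical, and the formula applied is the standard Friis cascade equation:

```python
import math

def db_to_ratio(db):
    return 10 ** (db / 10)

def cascaded_nf_db(stages):
    """Cascaded noise figure in dB.

    stages: list of (gain_db, nf_db) pairs, ordered from the input.
    Works on ratio values (Friis formula), converts back to dB at the end.
    """
    f_total = 0.0
    gain_product = 1.0
    for i, (gain_db, nf_db) in enumerate(stages):
        f = db_to_ratio(nf_db)
        # First stage contributes fully; later stages are divided by the gain ahead of them.
        f_total += f if i == 0 else (f - 1.0) / gain_product
        gain_product *= db_to_ratio(gain_db)
    return 10 * math.log10(f_total)

# Hypothetical line-up: LNA (20 dB gain, 1 dB NF) followed by a mixer (-6 dB gain, 8 dB NF)
print(round(cascaded_nf_db([(20.0, 1.0), (-6.0, 8.0)]), 2))  # about 1.18 dB
```

Note how the noisy mixer barely degrades the total: the 20 dB of gain ahead of it divides its contribution, which is exactly the "components closest to the input dominate" behaviour described above.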
4.4 NARROW BAND NOISE:
4.4.1 Definition: A random process X(t) is a bandpass or narrowband random process if its power spectral density SX(f) is nonzero only in a small neighborhood of some high frequency fc.
Deterministic signals are defined by their Fourier transform; random processes are defined by their power spectral density.
Notes:
1. Since X(t) is band pass, it has zero mean: E[X(t)] = 0.
2. fc need not be the center of the signal bandwidth, or in the signal bandwidth at all.
4.4.2 Narrowband Noise Representation:
In most communication systems, we are often dealing with band-pass filtering of signals. Wideband noise will be shaped into bandlimited noise. If the bandwidth of the bandlimited noise is relatively small compared to the carrier frequency, we refer to this as narrowband noise. We can derive the power spectral density Gn(f) and the auto-correlation function Rnn(τ) of the narrowband noise and use them to analyse the performance of linear systems. In practice, we often deal with mixing (multiplication), which is a non-linear operation, and the system analysis becomes difficult. In such a case, it is useful to express the narrowband noise as
n(t) = x(t) cos 2πfct − y(t) sin 2πfct
where fc is the carrier frequency within the band occupied by the noise. x(t) and y(t) are known as the quadrature components of the noise n(t). The Hilbert transform of n(t) is
n^(t) = H[n(t)] = x(t) sin 2πfct + y(t) cos 2πfct.
Generation of quadrature components of n(t).
x(t) and y(t) have the following properties:
SCE 108 DEPT OF ECE
1. E[x(t) y(t)] = 0: x(t) and y(t) are uncorrelated with each other.
2. x(t) and y(t) have the same means and variances as n(t).
3. If n(t) is Gaussian, then x(t) and y(t) are also Gaussian.
4. x(t) and y(t) have identical power spectral densities, related to the power spectral density of n(t) by Gx(f) = Gy(f) = Gn(f − fc) + Gn(f + fc) for |f| < 0.5B, where B is the bandwidth of n(t) (which occupies fc − 0.5B < |f| < fc + 0.5B).
4.4.3 Inphase and Quadrature Components:
In-Phase & Quadrature Sinusoidal Components
From the trig identity sin(ωt + φ) = sin ωt cos φ + cos ωt sin φ, we may conclude that every sinusoid can be expressed as the sum of a sine function (phase zero) and a cosine function (phase π/2). If the sine part is called the ``in-phase'' component, the cosine part can be called the ``phase-quadrature'' component. In general, ``phase quadrature'' means ``90 degrees out of phase,'' i.e., a relative phase shift of ±π/2. It is also the case that every sum of an in-phase and quadrature component can be expressed as a single sinusoid at some amplitude and phase. The proof is obtained by working the previous derivation backwards. The figure illustrates in-phase and quadrature components overlaid; note that they only differ by a relative 90-degree phase shift.
Noise in AM receivers using Envelope detection:
In standard AM both sidebands and the carrier are transmitted. The AM wave may be written as
s(t) = Ac[1 + ka m(t)] cos 2πfct
The received signal x(t) at the envelope detector input consists of the modulated message signal s(t) and narrow-band noise n(t). Then, from the phasor diagram for the AM wave plus narrow-band noise for the case of high carrier-to-noise ratio, the receiver output can be obtained as y(t) = envelope of x(t).
The average noise power is derived from both the in-phase component and the quadrature component.
4.5 FM NOISE REDUCTION:
4.6 FM CAPTURE EFFECT:
A phenomenon, associated with FM reception, in which only the stronger of two signals at or near the same frequency will be demodulated.
The complete suppression of the weaker signal occurs at the receiver limiter, where it is treated as noise and rejected.
When both signals are nearly equal in strength, or are fading independently, the receiver may switch from one to the other.
In frequency modulation, the signal can be affected by another frequency-modulated signal whose frequency content is close to the carrier frequency of the desired FM wave. The receiver may lock onto such an interference signal and suppress the desired FM wave when the interference signal is stronger than the desired signal. When the strengths of the desired signal and the interference signal are nearly equal, the receiver fluctuates back and forth between them, i.e., the receiver locks onto the interference signal for some time and the desired signal for some time, and this goes on randomly. This phenomenon is known as the capture effect.
4.7 PRE-EMPHASIS & DE-EMPHASIS:
Pre-emphasis refers to boosting the relative amplitudes of the modulating voltage for higher audio frequencies, from 2 to approximately 15 kHz.
DE-EMPHASIS:
De-emphasis means attenuating those frequencies by the amount by which they are boosted. However, pre-emphasis is done at the transmitter and the de-emphasis is done in the receiver. The purpose is to improve the signal-to-noise ratio for FM reception. A time constant of 75 µs is specified in the RC or L/R network for pre-emphasis and de-emphasis.
4.7.1 Pre-Emphasis Circuit:
gin
At the transmitter, the modulating signal is passed through a simple network which amplifies the
ee
high frequency, components more than the low-frequency components. The simplest form of such
rin
a circuit is a simple high pass filter of the type shown in fig (a). Specification dictate a time
constant of 75 microseconds (µs) where t = RC. Any combination of resistor and capacitor (or
g.n
resistor and inductor) giving this time constant will be satisfactory. Such a circuit has a cutoff
et
frequency fco of 2122 Hz. This means that frequencies higher than 2122 Hz will he linearly
enhanced. The output amplitude increases with frequency at a rate of 6 dB per octave. The pre-
emphasis curve is shown in Fig (b). This pre-emphasis circuit increases the energy content of the
higher-frequency signals so that they will tend to become stronger than the high frequency noise
components. This improves the signal to noise ratio and increases intelligibility and fidelity.
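The 2122 Hz figure follows directly from the 75 µs time constant, since fco = 1/(2πRC); a quick numeric check:

```python
import math

tau = 75e-6                     # pre-emphasis time constant t = RC, in seconds
f_co = 1 / (2 * math.pi * tau)  # cutoff frequency of the RC network
print(round(f_co))  # 2122 Hz
```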
4.7.2 De-Emphasis Circuit:
At the receiver, a simple low-pass RC network with the same 75 µs time constant attenuates the boosted high frequencies and restores the original amplitude relationships.
4.8 THRESHOLD EFFECT:
In an FM receiver, the threshold effect is produced when the desired-signal gain begins to limit the desired signal, and thus noise limiting (suppression). Note: the FM threshold effect occurs at (and above) the point at which the FM signal-to-noise improvement is measured. The output signal-to-noise ratio of an FM receiver is valid only if the carrier-to-noise ratio measured at the discriminator input is high compared to unity. It is observed that as the input noise is increased so that the carrier-to-noise ratio decreases, the FM receiver breaks. At first individual clicks are heard in the receiver output, and as the carrier-to-noise ratio decreases still further, the clicks rapidly merge into a crackling or sputtering sound. Near the break point, Eq. (8.50) begins to fail, predicting values of output SNR larger than the actual ones. This phenomenon is known as the threshold effect. The threshold effect is defined as the minimum carrier-to-noise ratio that gives the output SNR not less than the value predicted by the usual signal-to-noise formula assuming a small noise power.
For a qualitative discussion of the FM threshold effect, consider the case when there is no signal present, so that the carrier is unmodulated. Then the composite signal at the frequency discriminator input is
x(t) = [Ac + nI(t)] cos 2πfct − nQ(t) sin 2πfct   (1)
where nI(t) and nQ(t) are the in-phase and quadrature components of the narrow-band noise n(t) with respect to the carrier wave Ac cos 2πfct. The phasor diagram of Fig. 8.17 shows the phase relations between the various components of x(t) in Eq. (1). This effect is shown in the figure below; the calculation is based on the following two assumptions:
1. The output signal is taken as the receiver output measured in the absence of noise. The average output signal power is calculated for a sinusoidal modulation that produces a frequency deviation Δf equal to 1/2 of the IF filter bandwidth B; the carrier is thus enabled to swing back and forth across the entire IF band.
2. The average output noise power is calculated when there is no signal present, i.e., the carrier is unmodulated, with no restriction placed on the value of the carrier-to-noise ratio.
Assumptions:
Single-tone modulation, i.e. m(t) = Am cos(2πfmt);
The message bandwidth W = fm;
For the AM system, µ = 1;
For the FM system, β = 5 (which is what is used in commercial FM transmission, with ∆f = 75 kHz and W = 15 kHz).
4.9 APPLICATIONS & USES:
Tape noise reduction.
Pink noise or 1/f noise.
Noise masking and baby sleep.
REFERENCES:
1. Simon Haykin - ―Communication Systems‖, John Wiley, 2005.
2. J.G. Proakis, M. Salehi - ―Fundamentals of Communication Systems‖, Pearson Education, 2006.
3. Muralibabu - ―Communication Theory‖.
GLOSSARY TERMS:
1. Correlated noise: a form of internal noise which is correlated with the signal and is absent in the absence of the signal.
2. Noise: unwanted introduction of energy which interferes with the proper reception and reproduction of transmitted signals.
3. Coherent detection: the carrier wave at the receiver is phase locked with the carrier wave at the transmitter.
4. Non-coherent detection: the carrier wave at the receiver need not be phase locked with the carrier wave at the transmitter.
5. Demodulation: the process used to recover the original message signal.
6. Mixer: a non-linear device which combines the RF frequency and the local oscillator frequency.
UNIT – V
INFORMATION THEORY
1. Information is a non-negative quantity: I(p) ≥ 0.
2. If an event has probability 1, we get no information from the occurrence of the event: I(1) = 0.
3. If two independent events occur (whose joint probability is the product of their individual probabilities), then the information we get from observing the events is the sum of the two informations: I(p1 ∗ p2) = I(p1) + I(p2).
CONTENT:
INTRODUCTION ABOUT INFORMATION THEORY
ENTROPY
CHANNEL CODING THEOREM
DISCRETE MEMORYLESS CHANNELS
SHANNON FANO CODING
HUFFMAN CODING
SHANNON HARTLEY THEOREM
Information Theory
Information theory is a branch of science that deals with the analysis of a communications system. We will study digital communications, using a file (or network protocol) as the channel. Claude Shannon published a landmark paper in 1948 that was the beginning of the branch of information theory. We are interested in communicating information from a source to a destination; in our case, the messages will be a sequence of binary digits (bits). For each transmitted bit there are four cases: tx 0, rx 0 - good; tx 0, rx 1 - error; tx 1, rx 0 - error; tx 1, rx 1 - good. Two of the cases above have errors; this is where probability fits into the picture. In the case of steganography, the noise may be due to attacks on the hiding algorithm. Claude Shannon introduced the idea of self-information.
I(xj) = log(1/p(xj)) = log(1/pj) = − log pj
Suppose we have an event X, where Xj represents a particular outcome of the event. Consider flipping a fair coin; there are two equiprobable outcomes: say X0 = heads, P0 = 1/2, X1 = tails, P1 = 1/2. The amount of self-information for any single result is 1 bit. In other words, the number of bits required to communicate the result of the event is 1 bit. When outcomes are equally likely, there is a lot of information in the result. The higher the likelihood of a particular outcome, the less information that outcome conveys. For example, if the coin is biased such that it lands with heads up 99% of the time, there is not much information conveyed when we flip the coin and it lands on heads. Now consider flipping a coin with 3 possible outcomes: heads (P = 0.49), tails (P = 0.49), lands on its side (P = 0.02) - (likely much higher than in reality).
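These coin examples can be reproduced with a small sketch of Shannon's self-information measure:

```python
import math

def self_information(p):
    """I(x) = -log2 p(x), in bits."""
    return -math.log2(p)

print(self_information(0.5))             # fair coin outcome: 1.0 bit
print(round(self_information(0.99), 4))  # biased coin lands heads: ~0.0145 bits
print(round(self_information(0.02), 2))  # coin lands on its side: ~5.64 bits
```

The rare "lands on its side" outcome carries far more information than the near-certain "heads" outcome, matching the discussion above.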
Information
There is no exact definition; however: information carries new, specific knowledge, which is definitely new for its recipient; information is always carried by some specific carrier in different forms (letters, digits, different specific symbols, sequences of digits, letters, and symbols, etc.); information is meaningful only if the recipient is able to interpret it. According to the Oxford English Dictionary, the earliest historical meaning of the word information in English was the act of informing, or giving form or shape to the mind. The English word was apparently derived by adding the common "noun of action" ending "-ation". The information materialized is a message. Information is always about something (size of a parameter, occurrence of an event, etc.). Viewed in this manner, information does not have to be accurate; it may be a truth or a lie. Even a disruptive noise used to inhibit the flow of communication and create misunderstanding would in this view be a form of information. However, generally speaking, if the amount of information in the received message increases, the message is more accurate.
Information theory asks: How can we measure the amount of information? How can we ensure the correctness of information? What do we do if information gets corrupted by errors? How much memory does it require to store information? Basic answers to these questions, which formed a solid background of modern information theory, were given by the great American mathematician, electrical engineer, and computer scientist Claude E. Shannon in his paper ―A Mathematical Theory of Communication‖, published in The Bell System Technical Journal in October 1948.
A noiseless binary channel transmits bits without error. What do we do if we have a noisy channel and want to send information across it reliably?
Information Capacity Theorem (Shannon Limit): The information capacity (or channel capacity) C of a continuous channel with bandwidth B Hertz, perturbed by additive white Gaussian noise of power spectral density N0/2, is
C = B log2(1 + P/(N0 B)) bits/sec
where P is the average transmitted power, P = Eb Rb (for an ideal system, Rb = C), Eb is the transmitted energy per bit, and Rb is the transmission rate.
5.1 ENTROPY:
Entropy is the average amount of information contained in each message received.
Here, message stands for an event, sample or character drawn from a distribution or data stream.
Entropy thus characterizes our uncertainty about our source of information. (Entropy is best
understood as a measure of uncertainty rather than certainty as entropy is larger for more random
sources.) The source is also characterized by the probability distribution of the samples drawn
from it.
Formula for entropy:
We define information strictly in terms of the probabilities of events. Therefore, let us suppose that we have a set of probabilities (a probability distribution) P = {p1, p2, . . . , pn}. We define the entropy of the distribution P by
H(P) = Σi=1..n pi log(1/pi).
Shannon defined the entropy of a discrete random variable X with possible values {x1, ..., xn} and probability mass function P(X) as
H(X) = E[I(X)] = − Σi P(xi) log P(xi).
Here E is the expected value operator, and I is the information content of X; I(X) is itself a random variable. One may also define the conditional entropy of two events X and Y taking values xi and yj respectively, as
H(Y|X) = − Σi,j P(xi, yj) log P(yj|xi).
The entropy of two simultaneous events is no more than the sum of the entropies of each individual event, and they are equal if the two events are independent. More specifically, if X and Y are two random variables on the same probability space, and (X, Y) denotes their Cartesian product, then
H(X, Y) ≤ H(X) + H(Y).
The (i, j) entry of the matrix is P(Y = yj | X = xi), which is called the forward transition probability.
In a DMC the output of the channel depends only on the input of the channel at the same instant, and not on the input before or after.
The input of a DMC is a RV (random variable) X which selects its value from a discrete limited set X.
The cardinality of X is the number of points in the used constellation.
In an ideal channel, the output is equal to the input.
In a non-ideal channel, the output can be different from the input with a given probability.
Transmission rate:
H(X) is the amount of information per symbol at the input of the channel.
H(Y) is the amount of information per symbol at the output of the channel.
H(X|Y) is the amount of uncertainty remaining on X knowing Y.
The information transmission is given by: I(X; Y) = H(X) − H(X|Y) bits/channel use.
For an ideal channel X = Y, there is no uncertainty over X when we observe Y, so all the information is transmitted for each channel use: I(X; Y) = H(X).
If the channel is too noisy, X and Y are independent, so the uncertainty over X remains the same whether or not we know Y, i.e. no information passes through the channel: I(X; Y) = 0.
Hard and soft decision:
Normally the sizes of the constellations at the input and at the output are the same, i.e., |X| = |Y|. In this case the receiver employs hard-decision decoding: the decoder makes a decision about the transmitted symbol. It is also possible that |X| ≠ |Y|; in this case the receiver employs a soft decision.
Channel models and channel capacity:
[Block diagram: input data → channel encoder → modulator → waveform channel → demodulator/detector → channel decoder → output data]
If the detector output is quantized to Q levels, Q being greater than M, then we say the detector has made a soft decision.
Channel models:
1. Binary symmetric channel (BSC):
Suppose (a) the channel is an additive noise channel, and (b) the modulator and demodulator/detector are included as parts of the channel. Furthermore, if the modulator employs binary waveforms and the detector makes hard decisions, then the channel has a discrete-time binary input sequence and a discrete-time binary output sequence.
[Transition diagram: input 0 → output 0 with probability 1 − p; input 0 → output 1 with probability p; input 1 → output 0 with probability p; input 1 → output 1 with probability 1 − p.]
Note that if the channel noise and other interferences cause statistically independent errors in the transmitted binary sequence with average probability p, the channel is called a BSC. Besides, since each output bit from the channel depends only upon the corresponding input bit, the channel is also memoryless.
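A BSC is easy to simulate. The sketch below (names are ours, not from the text) flips each transmitted bit independently with crossover probability p and checks that the measured error rate is close to p:

```python
import random

def bsc(bits, p, rng):
    """Pass a bit sequence through a binary symmetric channel with crossover probability p."""
    # Each bit is flipped (XOR with 1) independently with probability p.
    return [b ^ (rng.random() < p) for b in bits]

rng = random.Random(0)
tx = [rng.randint(0, 1) for _ in range(100_000)]
rx = bsc(tx, 0.1, rng)
error_rate = sum(t != r for t, r in zip(tx, rx)) / len(tx)
print(round(error_rate, 3))  # close to the crossover probability 0.1
```

Because the flips are independent of past inputs, the simulated channel is memoryless, exactly as stated above.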
2. Discrete memoryless channels (DMC):
The channel is the same as above, but with q-ary symbols at the output of the channel encoder, and Q-ary symbols at the output of the detector, where Q ≥ q. If the channel and the modulator are memoryless, then the channel can be described by a set of qQ conditional probabilities
P(Y = yi | X = xj) = P(yi | xj),  i = 0, 1, ..., Q − 1;  j = 0, 1, ..., q − 1.
Such a channel is called a discrete memoryless channel (DMC).
[Transition diagram: each input x0, x1, ..., xq−1 connects to each output y0, y1, ..., yQ−1.]
If the input to the DMC is a sequence u1, u2, ..., un of symbols from the alphabet X and the corresponding output is the sequence v1, v2, ..., vn of symbols from the alphabet Y, the joint conditional probability is
P(Y1 = v1, Y2 = v2, ..., Yn = vn | X1 = u1, X2 = u2, ..., Xn = un) = Πk=1..n P(Yk = vk | Xk = uk).
3. Discrete-input, continuous-output channels:
Suppose the output of the channel encoder has q-ary symbols as above, but the output of the detector is unquantized (Q → ∞). The channel is described by the conditional probability density functions
p(y | X = xk),  k = 0, 1, 2, ..., q − 1.
AWGN is the most important channel of this type:
Y = X + G
where G ~ N(0, σ²). Accordingly,
p(y | X = xk) = (1/√(2πσ²)) e^(−(y − xk)²/(2σ²)),  k = 0, 1, ..., q − 1.
For a sequence of transmissions, Yi = Xi + Gi, i = 1, 2, ..., n. If, further, the channel is memoryless, then the joint conditional pdf of the detector's output is
p(y1, y2, ..., yn | X1 = u1, X2 = u2, ..., Xn = un) = Πi=1..n p(yi | Xi = ui).
4. Waveform channels:
If such a channel has bandwidth W with ideal frequency response C(f) = 1, the band-limited input signal to the channel is x(t), and the output signal y(t) of the channel is corrupted by AWGN, then
y(t) = x(t) + n(t).
The channel can be described by a complete set of orthonormal functions:
x(t) = Σi xi fi(t),  y(t) = Σi yi fi(t),  n(t) = Σi ni fi(t)
where
yi = ∫0..T y(t) fi(t) dt = ∫0..T [x(t) + n(t)] fi(t) dt = xi + ni.
Channel Capacity:
Channel model: DMC.
Input alphabet: X = {x0, x1, x2, ..., xq−1}.
The mutual information (MI) provided about the event {X = xj} by the occurrence of the event {Y = yi} is
log [P(yi | xj) / P(yi)],  with  P(yi) = P(Y = yi) = Σk=0..q−1 P(xk) P(yi | xk).
Hence, the average mutual information (AMI) provided by the output Y about the input X is
I(X; Y) = Σj=0..q−1 Σi=0..Q−1 P(xj) P(yi | xj) log [P(yi | xj) / P(yi)].
P(xj) represents the probabilities of the input symbols, and we may control them. Therefore, the channel capacity is defined by
C = max over P(xj) of Σj=0..q−1 Σi=0..Q−1 P(xj) P(yi | xj) log [P(yi | xj) / P(yi)]
with two constraints: P(xj) ≥ 0 and Σj=0..q−1 P(xj) = 1.
Unit of C: bits/channel use when log = log2, and nats/channel use when the natural logarithm is used.
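For the BSC the maximization over the input probabilities has a closed form: equiprobable inputs are optimal and give C = 1 − H2(p), with H2 the binary entropy function. A sketch:

```python
import math

def binary_entropy(p):
    """H2(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel, in bits per channel use."""
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.0))            # noiseless channel: 1.0
print(bsc_capacity(0.5))            # completely noisy channel: 0.0
print(round(bsc_capacity(0.1), 3))  # 0.531
```

The two extremes match the earlier discussion: an ideal channel transmits H(X) = 1 bit per use, while a channel whose output is independent of its input transmits nothing.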
The noisy-channel coding theorem has applications in both communications and data storage. This theorem is of foundational importance to the modern field of information theory. Shannon only gave an outline of the proof; the first rigorous proof for the discrete case is due to Amiel Feinstein in 1954.
The Shannon theorem states that, given a noisy channel with channel capacity C and information transmitted at a rate R, if R < C then there exist codes that allow the probability of error at the receiver to be made arbitrarily small. This means that, theoretically, it is possible to transmit information nearly without error at any rate below a limiting rate, C.
The converse is also important. If R > C, an arbitrarily small probability of error is not achievable. All codes will have a probability of error greater than a certain positive minimal level, and this level increases as the rate increases. So, information cannot be guaranteed to be transmitted reliably across a channel at rates beyond the channel capacity. The theorem does not address the rare situation in which rate and capacity are equal.
The channel capacity C can be calculated from the physical properties of a channel; for a band-limited channel with Gaussian noise, this is done using the Shannon–Hartley theorem.
For every discrete memoryless channel, the channel capacity has the following properties:
1. For any ε > 0 and R < C, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm such that the maximal probability of block error is ≤ ε.
2. If a probability of bit error pb is acceptable, rates up to R(pb) are achievable, where
R(pb) = C / (1 − H2(pb))
and H2(pb) is the binary entropy function.
5.4 SOURCE CODING:
A code is defined as an n-tuple of q elements, where q is any alphabet.
Ex. 1001: n = 4, q = {1, 0}
Ex. 2389047298738904: n = 16, q = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
Ex. (a, b, c, d, e): n = 5, q = {a, b, c, d, e, ..., y, z}
The most common code is when q = {1, 0}; this is known as a binary code. A message can become distorted through a wide range of unpredictable errors:
• Humans
• Equipment failure
• Lightning interference
• Scratches in a magnetic tape
Error-correcting code:
To add redundancy to a message so the original message can be recovered if it has been garbled. e.g. message = 10, code = 1010101010
Send a message:
The message is immersed in a sequence of code words; e.g., with the previous table, message 11 could be encoded as either ddddd or bbbbbb.
Measure of Information: Consider symbols si and the probability of occurrence of each symbol, p(si).
Example: Alphabet = {A, B}, p(A) = 0.4, p(B) = 0.6. Compute the entropy H:
H = −0.4 log2 0.4 − 0.6 log2 0.6 ≈ 0.97 bits
Maximum uncertainty (the largest H) occurs when all probabilities are equal.
Redundancy is the difference between the average codeword length (L) and the average information content (H); if H is constant, then we can just use L relative to the optimal value.
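The entropy computation above can be sketched as:

```python
import math

def entropy(probs):
    """H = sum of p * log2(1/p), in bits per symbol."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# The alphabet {A, B} example above
print(round(entropy([0.4, 0.6]), 2))  # 0.97 bits
# Maximum uncertainty occurs when all probabilities are equal
print(entropy([0.5, 0.5]))            # 1.0
```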
Shannon-Fano Algorithm:
• Arrange the character set in order of decreasing probability.
• While a probability class contains more than one symbol:
Divide the probability class in two so that the probabilities in the two halves are as nearly as possible equal.
Assign a '1' to the first probability class, and a '0' to the second.
TABLE 5.1
Character  Probability  Code
X6         0.25         11
X3         0.2          10
X4         0.15         011
X5         0.15         010
X1         0.1          001
X7         0.1          0001
X2         0.05         0000
Huffman Encoding:
Statistical encoding. To determine a Huffman code, it is useful to construct a binary tree. Leaves are the characters to be encoded; nodes carry the occurrence probabilities of the characters belonging to the subtree.
Example: What does a Huffman code look like for symbols with statistical symbol occurrence probabilities P(A) = 8/20, P(B) = 3/20, P(C) = 7/20, P(D) = 2/20?
Step 1: Sort all symbols according to their probabilities (left to right) from smallest to largest; these are the leaves of the Huffman tree.
Step 2: Build a binary tree from left to right. Policy: always connect the two smallest nodes together (e.g., P(CE) and P(DA) both had probabilities that were smaller than P(B), hence those two connected first).
Step 3: Label the left branches of the tree with 0 and the right branches with 1.
Step 4: Create the Huffman code: Symbol A = 011, Symbol B = 1, Symbol C = 000, Symbol D = 010, Symbol E = 001.
5.5 SHANNON-FANO CODING:
This is a basic information theoretic algorithm. A simple example will be used to illustrate the
algorithm:
Symbol A B C D E
----------------------------------
Count 15 7 6 6 5
A top-down approach:
1. Sort the symbols according to their frequency counts, with the most common symbols at the left and the least common at the right.
2. Recursively divide into two parts, each with approximately the same number of counts.
3. Divide the list into two parts, with the total frequency counts of the left part being as close to the total of the right as possible.
4. The left part of the list is assigned the binary digit 0, and the right part is assigned the digit 1. This means that the codes for the symbols in the first part will all start with 0, and the codes in the second part will all start with 1.
5. Recursively apply steps 3 and 4 to each of the two halves, subdividing groups and adding bits to the codes until each symbol has become a corresponding code leaf on the tree.
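A minimal recursive implementation of these steps (our own sketch; it follows the 0-left / 1-right assignment of step 4) reproduces the 89-bit total for the A–E example:

```python
def shannon_fano(freqs):
    """freqs: dict symbol -> count.  Returns dict symbol -> code string."""
    codes = {}

    def split(items, prefix):
        # items: list of (symbol, count) pairs, already sorted by decreasing count
        if len(items) == 1:
            codes[items[0][0]] = prefix or "0"
            return
        total = sum(c for _, c in items)
        # choose the split point where the left total is as close to half as possible
        best_i, best_diff, running = 1, None, 0
        for i in range(1, len(items)):
            running += items[i - 1][1]
            diff = abs(2 * running - total)
            if best_diff is None or diff < best_diff:
                best_i, best_diff = i, diff
        split(items[:best_i], prefix + "0")
        split(items[best_i:], prefix + "1")

    split(sorted(freqs.items(), key=lambda kv: -kv[1]), "")
    return codes

counts = {"A": 15, "B": 7, "C": 6, "D": 6, "E": 5}
codes = shannon_fano(counts)
total_bits = sum(counts[s] * len(codes[s]) for s in counts)
print(codes)       # A=00, B=01, C=10, D=110, E=111
print(total_bits)  # 89
```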
Tree diagram:
FIGURE 5.7 Tree Diagram of Shannon-Fano coding
Symbol  Count  -log2 p  Code  Total bits
A       15     1.38     00    30
B       7      2.48     01    14
C       6      2.70     10    12
D       6      2.70     110   18
E       5      2.96     111   15
Total number of bits: 89
5.6 HUFFMAN CODING:
The Shannon–Fano algorithm doesn't always generate an optimal code. In 1952, David A. Huffman gave a different algorithm that always produces an optimal tree for any given probabilities. While the Shannon–Fano tree is created from the root to the leaves, the Huffman algorithm works from the leaves to the root, in the opposite direction.
Procedure for the Huffman Algorithm:
1. Create a leaf node for each symbol, with its frequency of occurrence, and add it to a queue.
2. While there is more than one node in the queue:
Remove the two nodes of lowest probability or frequency from the queue.
Prepend 0 and 1 respectively to any code already assigned to these nodes.
Create a new internal node with these two nodes as children and with probability equal to the sum of the two nodes' probabilities.
Add the new node to the queue.
3. The remaining node is the root node and the tree is complete.
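The procedure maps naturally onto a min-heap. This sketch (our own, using Python's heapq) applies it to the probabilities of tutorial problem 1, giving an average length of 1.9 bits/symbol:

```python
import heapq
from itertools import count

def huffman(probs):
    """probs: dict symbol -> probability.  Returns dict symbol -> code string."""
    tick = count()  # tie-breaker so the heap never has to compare the dicts
    heap = [(p, next(tick), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)  # the two lowest-probability nodes
        p2, _, c2 = heapq.heappop(heap)
        # prepend 0/1 to every code already assigned under each node
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tick), merged))
    return heap[0][2]  # the remaining node is the root

probs = {"x1": 0.4, "x2": 0.3, "x3": 0.2, "x4": 0.1}
codes = huffman(probs)
avg_len = sum(probs[s] * len(codes[s]) for s in probs)
print(round(avg_len, 2))  # 1.9 bits/symbol
```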
Tree diagram:
FIGURE 5.8 Tree Diagram of Huffman coding
5.7 SHANNON–HARTLEY THEOREM:
In information theory, the Shannon–Hartley theorem tells the maximum rate at which information
can be transmitted over a communications channel of a specified bandwidth in the presence
of noise. It is an application of the noisy channel coding theorem to the archetypal case of
a continuous-time analog communications channel subject to Gaussian noise. The theorem
establishes Shannon's channel capacity for such a communication link, a bound on the maximum
amount of error-free digital data (that is, information) that can be transmitted with a
specified bandwidth in the presence of the noise interference, assuming that the signal power is
bounded, and that the Gaussian noise process is characterized by a known power or power spectral
density. The law is named after Claude Shannon and Ralph Hartley.
FIGURE 5.7 Characteristics of Channel Capacity and Power Efficiency
Considering all possible multi-level and multi-phase encoding techniques, the Shannon–Hartley theorem states that the channel capacity C, meaning the theoretical tightest upper bound on the information rate (excluding error-correcting codes) of clean (or arbitrarily low bit-error-rate) data that can be sent with a given average signal power S through an analog communication channel subject to additive white Gaussian noise of power N, is:
C = B log2(1 + S/N)
where C is the channel capacity in bits per second;
B is the bandwidth of the channel in hertz (passband bandwidth in case of a modulated
signal);
S is the average received signal power over the bandwidth (in case of a modulated signal,
often denoted C, i.e. modulated carrier), measured in watts (or volts squared);
N is the average noise or interference power over the bandwidth, measured in watts (or
volts squared); and
S/N is the signal-to-noise ratio (SNR) or the carrier-to-noise ratio (CNR) of the
communication signal to the Gaussian noise interference expressed as a linear power ratio
(not as logarithmic decibels).
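As a numeric illustration (the 3 kHz / 30 dB telephone-line numbers are our own example, not from the text), note that the dB SNR must first be converted to a linear power ratio:

```python
import math

def channel_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley: C = B log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

snr_linear = 10 ** (30 / 10)  # 30 dB SNR as a linear power ratio: 1000
print(round(channel_capacity(3000, snr_linear)))  # ~29902 bit/s
```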
4. J.G. Proakis, M. Salehi - ―Fundamentals of Communication Systems‖, Pearson Education, 2006.
GLOSSARY TERMS:
1. Entropy: the average amount of information per source symbol.
2. Information: a continuous function of probability.
3. Channel: the medium through which the information is transmitted from the source to the destination.
4. Channel capacity: the maximum mutual information that may be transmitted through the channel.
5. Binary symmetric channel: the channel is symmetric because the probability of receiving a '1' if a '0' is transmitted is the same as the probability of receiving a '0' if a '1' is transmitted.
6. SNR: the ratio of signal power to noise power.
TUTORIAL PROBLEMS:
1. Consider a DMS with 4 symbols x1, x2, x3, x4 with the corresponding probabilities p(x1) = 0.4, p(x2) = 0.3, p(x3) = 0.2, p(x4) = 0.1. Construct the Huffman code and find the code efficiency.
Solution:
a) Length L = Σi=1..4 Li p(xi) = 1.9 bits/symbol
b) H(X) = − Σi=1..4 p(xi) log2 p(xi) = 1.846 bits/symbol
c) Code efficiency = H(X)/L ≈ 97.18%
2. Consider a DMS with 5 symbols x1, x2, x3, x4, x5 with the corresponding probabilities p(x1) = 0.4, p(x2) = 0.19, p(x3) = 0.16, p(x4) = 0.15, p(x5) = 0.1. Construct the Shannon-Fano code and find the code efficiency.
Solution:
xi   p(xi)  Code word  Li
x1   0.4    00         2
x2   0.19   01         2
x3   0.16   10         2
x4   0.15   110        3
x5   0.1    111        3
a) Length L = Σi=1..5 Li p(xi) = 2.25 bits/symbol
b) H(X) = − Σi=1..5 p(xi) log2 p(xi) = 2.15 bits/symbol
c) Code efficiency = H(X)/L ≈ 95.54%
4. What are the degrees of modulation?
Under modulation: m < 1
Critical modulation: m = 1
Over modulation: m > 1
5. What is the need for modulation?
Needs for modulation:
Ease of transmission
Multiplexing
Reduced noise
Narrow bandwidth
Frequency assignment
Reduce equipment limitations
6. What are the types of AM modulators?
There are two types of AM modulators. They are:
Linear modulators
Non-linear modulators
Linear modulators are classified as follows:
Transistor modulators. There are three types of transistor modulator:
Collector modulator
Emitter modulator
Base modulator
Switching modulators
Non-linear modulators are classified as follows:
Square law modulator
Product modulator
Balanced modulator
7. What is the difference between high level and low level modulation?
In high level modulation, the modulator amplifier operates at high power levels and delivers power directly to the antenna. In low level modulation, the modulator amplifier performs modulation at relatively low power levels; the modulated signal is then amplified to a high power level by a class B power amplifier, which feeds power to the antenna.
8. Define Detection (or) Demodulation.
Detection (or demodulation) is the process of recovering the original message signal from the modulated carrier wave.
11. What is single tone and multi tone modulation?
If modulation is performed for a message signal with more than one frequency
component then the modulation is called multi tone modulation.
If modulation is performed for a message signal with one frequency component then the
modulation is called single tone modulation.
12. Compare AM with DSB-SC and SSB-SC.
19. Define Guard Band.
Guard bands are introduced in the spectrum of FDM in order to avoid interference between adjacent channels. The wider the guard bands, the smaller the interference.
20. Define SSB-SC.
(i) SSB-SC stands for Single Side Band Suppressed Carrier
(ii) When only one sideband is transmitted, the modulation is referred to as Single side
band modulation. It is also called as SSB or SSB-SC.
21. Define DSB-SC.
After modulation, the process of transmitting the sidebands (USB, LSB) alone and
suppressing the carrier is called as Double Side Band-Suppressed Carrier.
22. What are the disadvantages of DSB-FC?
(i) Power wastage takes place in DSB-FC
During demodulation, the locally generated carrier is exactly coherent, or synchronized in both frequency and phase, with the original carrier wave used to generate the DSB-SC wave. This method of detection is called coherent detection or synchronous detection.
24. What is Vestigial Side Band Modulation?
Vestigial sideband modulation is defined as a modulation in which one of the sidebands is partially suppressed and a vestige of the other sideband is transmitted to compensate for that suppression.
25. What are the advantages of single sideband transmission?
a) Reduced power consumption
b) Bandwidth conservation
c) Noise reduction
26. What are the disadvantages of single side band transmission?
a) Complex receivers: Single side band systems require more complex and expensive receivers than conventional AM transmission.
b) Tuning difficulties: Single side band receivers require more complex and precise tuning than conventional AM receivers.
2. These modulators are used in high level modulation. | These modulators are used in low level modulation.
3. The carrier voltage is very much greater than the modulating signal voltage. | The modulating signal voltage is very much greater than the carrier signal voltage.
Suppose that a signal is band limited to the frequency range extending from a frequency f1 to a frequency f2. Frequency translation is the process in which the original signal is replaced with a new signal whose spectral range extends from f1' to f2', and which bears, in recoverable form, the same information as the original signal.
29. What are the two situations identified in frequency translation?
a) Up conversion: the translated carrier frequency is greater than the incoming carrier frequency.
b) Down conversion: the translated carrier frequency is smaller than the incoming carrier frequency.
Thus, a narrowband FM signal requires essentially the same transmission bandwidth as
the AM signal.
30. What is the BW of an AM wave?
The difference between the two extreme frequencies is equal to the bandwidth of the AM wave.
Therefore, Bandwidth B = (fc + fm) - (fc - fm) = 2fm
31. What is the BW of a DSB-SC signal?
Bandwidth B = (fc + fm) - (fc - fm) = 2fm
It is obvious that the bandwidth of DSB-SC modulation is the same as that of general AM waves.
32. What are the demodulation methods for DSB-SC signals?
The DSB-SC signal may be demodulated by the following two methods:
(i) Synchronous detection method.
(ii) Using an envelope detector after carrier reinsertion.
33. Write the applications of Hilbert transform?
(i) For generation of SSB signals,
(ii) For designing of minimum phase type filters,
(iii) For representation of band pass signals.
34. What are the methods for generating SSB-SC signal?
SSB-SC signals may be generated by two methods as under:
(i)Frequency discrimination method or filter method. (ii)Phase discrimination method or
phase-shift method.
16 MARK QUESTIONS
6. Draw the circuit diagram of Ring Modulator and explain with its operation?
7. Discuss the coherent detection of DSB-SC modulated wave with a block diagram of detector
and explain.
8. Draw the block diagram for the generation and demodulation of a VSB signal and explain the
principle of operation.
9. Write short notes of frequency translation and FDM?
10. Explain the method of generating AM waves using linear time invariant circuits.
11. Explain the method of generating AM waves using Non-Linear circuits.
β = Δf / fm
Modulation done for a message signal with more than one frequency component is called multitone modulation.
accordance with the instantaneous amplitude of the message signal.
6. What are the types of Frequency Modulation?
Based on the modulation index, FM can be divided into two types: narrowband FM and wideband FM. If the modulation index is greater than one it is wideband FM; if the modulation index is less than one it is narrowband FM.
7. What is the basic difference between an AM signal and a narrowband FM signal?
In the case of sinusoidal modulation, the basic difference between an AM signal and a
narrowband FM signal is that the algebraic sign of the lower side frequency in the narrow band FM
is reversed.
12. Define frequency Deviation.
The maximum departure of the instantaneous frequency from the carrier frequency is called
frequency deviation.
The transmission bandwidth of an FM wave for a tone-modulating signal of frequency fm(max) is defined as
BW = 2[Δf + fm(max)]
14. Define the deviation ratio D for non-sinusoidal modulation.
The deviation ratio D is defined as the ratio of the frequency deviation Δf, which corresponds to the maximum possible amplitude of the modulating signal m(t), to the highest modulation frequency:
D = Δf / fm
b) Only two tuned circuits are necessary and they are tuned to the same frequency
c) Linearity is better
Disadvantages:
Phase locked loops are used for various purposes in AM and FM communication.
(i) Automatic frequency correction in an FM transmitter uses a PLL to keep the carrier frequency constant.
(ii) A direct FM transmitter uses a PLL to keep the carrier frequency constant.
(iii) A PLL is also used in FM demodulators.
3. Phase of the carrier varies as per the modulating signal: θ(t) = k em(t); modulation index = k Em. | Frequency of the carrier varies as per the modulating signal: Δf(t) = k em(t); modulation index = k Em / fm.
24. A 80 MHz carrier is frequency modulated by a sinusoidal signal of 1V amplitude and the
frequency sensitivity is 100 Hz/V. Find the approximate bandwidth of the FM waveform if the
modulating signal has a frequency of 10 kHz.
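A sketch of the solution using Carson's rule, BW = 2(Δf + fm); the deviation Δf = kf·Am follows from the 100 Hz/V sensitivity and 1 V amplitude stated in the problem:

```python
# Given: sensitivity kf = 100 Hz/V, Am = 1 V, fm = 10 kHz
kf, Am, fm = 100.0, 1.0, 10e3

delta_f = kf * Am              # peak frequency deviation, Hz
beta = delta_f / fm            # modulation index (<< 1, so narrowband FM)
bw = 2 * (delta_f + fm)        # Carson's rule: BW = 2(delta_f + fm)
print(delta_f, beta, bw)       # 100.0 0.01 20200.0
```

Since β is tiny, the bandwidth is essentially 2fm = 20 kHz, as expected for narrowband FM.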
FM stereo multiplexing is used for stereo transmission. It is basically frequency division multiplexing and is used for FM radio broadcasting. The left and right channel signals are used to generate sum and difference signals. The difference signal modulates a subcarrier. The sum signal, the modulated difference signal and the pilot carrier are combined together and sent. Such an FM multiplexed signal can be coherently received by stereo as well as mono receivers.
16 MARK QUESTIONS En
1. Explain the indirect method of generation of FM wave and any one method of demodulating an
FM wave.
2. Discuss the indirect methods of generating a wide-band FM signal.
3. Draw the circuit diagram of Foster-seeley discriminator and explain its working.
[1] The mean of the population of means is always equal to the mean of the parent population from
which the population samples were drawn.
[2] The standard deviation of the population of means is always equal to the standard deviation of
the parent population divided by the square root of the sample size (N).
[3] The distribution of means will increasingly approximate a normal distribution as the size N of
samples increases.
A stationary process is one whose statistical properties, such as the mean and variance if present, also do not change over time and do not follow any trends.
The population correlation coefficient ρX,Y between two random variables X and Y with expected values μX and μY and standard deviations σX and σY is defined as:
ρX,Y = E[(X - μX)(Y - μY)] / (σX σY)
6. What is meant by covariance?
Covariance is a measure of how much two variables change together, and the covariance function,
or kernel, describes the spatial covariance of a random variable process or field.
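The two definitions above can be sketched numerically; `covariance` and `correlation` are plain-Python helper names introduced here for illustration:

```python
from math import sqrt

def covariance(xs, ys):
    """Sample covariance: mean of (X - muX)(Y - muY)."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def correlation(xs, ys):
    """rho = cov(X, Y) / (sigmaX * sigmaY); always lies in [-1, 1]."""
    return covariance(xs, ys) / sqrt(covariance(xs, xs) * covariance(ys, ys))

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]        # y = 2x: the variables change together exactly
print(correlation(x, y))         # 1.0
```

A perfectly linear relationship gives ρ = 1, matching the intuition that covariance measures how much two variables change together.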
16 MARK QUESTIONS
1. Discuss about Central limit theorem in detail.
3. Explain in detail about Random process and its Random variables.
4. Write short notes on covariance function.
5. Write short notes on Auto correlation function.
6. With a neat diagram, explain linear filtering of a random process.
1. Define noise.
Noise is defined as any unwanted form of energy, which tends to interfere with proper
reception and reproduction of wanted signal.
3. What are the types of external noise?
External noise can be classified into
1. Atmospheric noise
2. Extraterrestrial noise
3. Man-made noise or industrial noise
4. What are the types of internal noise?
Internal noise can be classified into
1. Thermal noise
2. Shot noise
3. Transit time noise
4. Miscellaneous internal noise
5. What are the types of extraterrestrial noise and write their origin?
The two type of extraterrestrial noise are solar noise and cosmic noise Solar noise is the
electrical noise emanating from the sun. Cosmic noise is the noise received from the center part
of our galaxy, other distant galaxies and other virtual point sources.
Flicker noise is the noise appearing in transistors operating at low audio frequencies. It is proportional to the emitter current and junction temperature and inversely proportional to the frequency.
10. Define thermal noise. Give the expression for the thermal noise voltage across a resistor.
The electrons in a conductor possess varying amounts of energy. A small fluctuation in this
energy produces small noise voltages in the conductor. These random fluctuations produced by
thermal agitation of the electrons are called thermal noise.
When current flows in an electronic device, fluctuations in the number of electrons or holes generate noise. This is called shot noise. Shot noise also depends upon the operating conditions of the device.
asy
The Mean –Square value of thermal noise voltage is given by, Vn
K – Boltz man constant, R – Resistance
T – Obsolute temperature, B Bandwidth
2 = 4 k TBR
En
gin
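A quick numerical sketch of this formula; the example values (R = 700 Ω, B = 7 MHz, T ≈ 300 K, i.e. about 27 °C) are taken from a problem that appears later in this bank, and `thermal_noise_voltage` is a helper name introduced here:

```python
from math import sqrt

k = 1.38e-23                       # Boltzmann constant, J/K

def thermal_noise_voltage(R, T, B):
    """RMS thermal noise voltage: Vn = sqrt(4 k T B R)."""
    return sqrt(4 * k * T * B * R)

# R = 700 ohm, T = 300 K (27 C), B = 7 MHz
Vn = thermal_noise_voltage(R=700.0, T=300.0, B=7e6)
print(round(Vn * 1e6, 1), "uV")    # ≈ 9.0 uV
```

The result, about 9 μV, illustrates why thermal noise matters mainly for very weak received signals.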
14. What is White Noise?
Many types of noise sources are Gaussian and have a flat spectral density over a wide frequency range. Such a spectrum has all frequency components in equal proportion and is therefore called white noise, by analogy with white light. Its power spectral density is
S(f) = N0/2
19. What is FM threshold effect?
As the carrier to noise ratio is reduced, clicks are heard in the receiver output. As the carrier
to noise ratio reduces further, crackling, or sputtering sound appears at the receiver output. Near
the breaking point, the theoretically calculated output signal to noise ratio becomes large, but its
actual value is very small. This phenomenon is called threshold effect.
20. What is capture effect in FM?
When the noise interference is stronger than the FM signal, the FM receiver locks to the interference, suppressing the FM signal. When the noise interference and the FM signal are of equal strength, the FM receiver locking fluctuates between them. This phenomenon is called the capture effect.
21. What is meant by figure of merit of a receiver?
The ratio of the output signal to noise ratio to the channel signal to noise ratio is called the figure of merit.
but the PSD of the message signal falls off at higher frequencies. This means the message signal does not utilize the frequency band efficiently. More efficient use of the frequency band and improved noise performance can be obtained with the help of pre-emphasis and de-emphasis.
S/N = Pi / (N0 fM)
27. Define superheterodyne principle.
It can be defined as the process of operation of modulated waves to obtain similarly
modulated waves of different frequency. This process uses a locally generated carrier wave, which
determines the change of frequency.
28. Define signal to noise ratio.
Signal to noise ratio is the ratio of signal power to the noise power at the same point in a
system.
12. What is threshold effect in an envelope detector? Explain.
When the noise is large compared to the signal at the input of the envelope detector, the detected output has the message signal completely mingled with noise. This means that if the input SNR is below a certain level, called the threshold level, the noise dominates over the message signal. The threshold is defined as the value of the input signal to noise ratio (Si/Ni) below which the output signal to noise ratio (So/No) deteriorates much more rapidly than the input signal to noise ratio. The threshold effect occurs in an envelope detector whenever the carrier power-to-noise power ratio approaches unity or less.
16 MARK QUESTIONS
9. What is narrowband noise? Discuss the properties of the quadrature components of narrowband noise.
10. Derive the noise figure for cascade stages.
11. Define noise and explain the types of noise.
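The cascade noise figure derivation asked for in question 10 rests on Friis' formula; a minimal numerical sketch (the two-stage values here are illustrative assumptions, not from a specific problem in this bank):

```python
from math import log10

def db_to_ratio(db):
    """Convert decibels to a linear power ratio."""
    return 10 ** (db / 10.0)

def cascade_noise_figure(stages):
    """Friis formula: F = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ...
    stages: list of (noise_figure_dB, gain_dB) tuples, first stage first."""
    f_total, gain_product = 0.0, 1.0
    for i, (nf_db, gain_db) in enumerate(stages):
        f = db_to_ratio(nf_db)
        f_total += f if i == 0 else (f - 1.0) / gain_product
        gain_product *= db_to_ratio(gain_db)
    return f_total  # linear ratio; convert with 10*log10 for dB

# Illustrative two-stage cascade: NF 2 dB / gain 10 dB, then NF 3 dB / gain 20 dB
F = cascade_noise_figure([(2.0, 10.0), (3.0, 20.0)])
print(round(10 * log10(F), 2), "dB")
```

Note how the first stage dominates: later stages' excess noise is divided by the gain ahead of them, which is why receivers put a low-noise amplifier first.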
1. What is entropy?
Entropy is the average information per message, i.e. the ratio of total information to the number of messages:
H = -Σ pk log2 pk
3. Name the source coding techniques.
The source coding techniques are
a) Prefix coding
b) Shannon-Fano coding
c) Huffman coding
4. Write the expression for code efficiency in terms of entropy.
Code efficiency = Entropy (H) / Average code word length (N)
5. What is memory less source? Give an example.
The alphabets emitted by memory less source do not depend upon previous alphabets.
Every alphabet is independent. For example a character generated by keyboard
represents memory less source.
6. Explain the significance of the entropy H(X/Y) of a communication system where X is the transmitter and Y is the receiver.
H(X/Y) is called conditional entropy. It represents the uncertainty of X, on average, when Y is known.
9. Calculate the entropy of a source with a symbol set containing 64 symbols each with probability pi = 1/64.
Here there are M = 64 equally likely symbols. Hence the entropy of such a source is given as
H = log2 M = log2 64 = 6 bits/symbol
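The same calculation works for any probability set, not just equiprobable ones; a minimal helper (the name `entropy` is introduced here):

```python
from math import log2

def entropy(probs):
    """H = -sum p log2 p, in bits/symbol (terms with p = 0 contribute nothing)."""
    return -sum(p * log2(p) for p in probs if p > 0)

# 64 equally likely symbols -> H = log2 64 = 6 bits/symbol, as computed above
print(entropy([1 / 64] * 64))        # 6.0

# A non-uniform example from a later question: probabilities 0.6, 0.3, 0.1
print(round(entropy([0.6, 0.3, 0.1]), 3))
```

The non-uniform source carries less than log2(3) ≈ 1.585 bits per symbol, since unequal probabilities always reduce entropy below the equiprobable maximum.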
10. State the channel coding theorem for a discrete memory less channel.
Statement of the theorem:
Given a source of M equally likely messages, with M >> 1, generating information at a rate R, and given a channel with capacity C: if R ≤ C, there exists a coding technique such that the output of the source may be transmitted over the channel with a probability of error in the received message which may be made arbitrarily small.
Explanation: This theorem says that if R ≤ C ; it is possible to transmit information without any
error even if noise is present. Coding techniques are used to detect and correct the errors.
12. Explain Shannon-Fano coding.
An efficient code can be obtained by the following simple procedure, known as the Shannon-Fano algorithm.
Step 1: List the source symbols in order of decreasing probability.
Step 2: Partition the set into two sets that are as close to equiprobable as possible, and assign 0 to the upper set and 1 to the lower set.
Step 3: Continue this process, each time partitioning the sets with as nearly equal probabilities as possible, until further partitioning is not possible.
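The three steps can be sketched as a short recursive function (a minimal illustration, not a production coder); applied to the five-symbol tutorial problem earlier in this unit it reproduces the codes 00, 01, 10, 110, 111:

```python
def shannon_fano(symbols):
    """Recursive Shannon-Fano coder.
    symbols: list of (name, probability) pairs, already sorted by
    decreasing probability (Step 1 of the procedure)."""
    if len(symbols) == 1:
        return {symbols[0][0]: ""}
    total = sum(p for _, p in symbols)
    # Step 2: find the partition point closest to an equiprobable split
    best_gap, split = None, 1
    for i in range(1, len(symbols)):
        upper_sum = sum(p for _, p in symbols[:i])
        gap = abs(total - 2 * upper_sum)
        if best_gap is None or gap < best_gap:
            best_gap, split = gap, i
    # Step 3: recurse, prefixing 0 for the upper set and 1 for the lower set
    upper = shannon_fano(symbols[:split])
    lower = shannon_fano(symbols[split:])
    return {**{s: "0" + c for s, c in upper.items()},
            **{s: "1" + c for s, c in lower.items()}}

codes = shannon_fano([("x1", 0.4), ("x2", 0.19), ("x3", 0.16),
                      ("x4", 0.15), ("x5", 0.1)])
print(codes)   # {'x1': '00', 'x2': '01', 'x3': '10', 'x4': '110', 'x5': '111'}
```

The resulting code is prefix-free by construction, since every split appends a distinct bit to each half.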
13. Define bandwidth efficiency.
The ratio of channel capacity to bandwidth is called bandwidth efficiency, i.e.
Bandwidth efficiency = Channel capacity (C) / Bandwidth (B)
14. Define channel capacity of the discrete memory less channel. rin
The channel capacity of the discrete memory less channel is given as maximum average
mutual information. The maximization is taken with respect to input probabilities.
16 MARK QUESTIONS
1. Discuss source coding theorem, give the advantage and disadvantages of channel coding in
detail, and discuss the data compaction.
2. Explain in detail Huffman coding algorithm and compare this with the other types of coding.
3. Explain the properties of entropy and with suitable example, explain the entropy of binary
memory less source.
4. Define mutual information. Find the relation between the mutual information and the Joint
entropy of the channel input and channel output. Explain the important properties of mutual
information.
5. Encode the source symbols with following set of probabilities using Huffman coding.
9. Mention the applications of SSB.
10. Mention the advantages of VSB.
11. What are the advantages of superheterodyne receiver?
1. (a).Explain the generation of AM signals using square law modulator. (8)
2. Explain Balanced modulator to generate DSB-SC signal. (16)
3. Explain coherent detector to detect SSB-SC signal. (16)
4. Explain the generation of SSB using balanced modulator. (16)
5. Draw the circuit diagram of Ring modulator and explain with its operation. (16)
6. Discuss the coherent detection of DSB-SC modulated wave with a block diagram
of detector and Explain. (16)
7. Draw the block diagram for the generation and demodulation of a VSB signal
and explain the principle of operation. (16)
8.Explain the working of Super heterodyne receiver with its parameters. (16)
UNIT II
ANGLE MODULATION SYSTEMS
PART-A (2 Marks)
1. Define PM.
2. Define FM.
3. Define Frequency deviation (Δf).
4. Define Carson‘s rule.
5. Write the applications of FM
6. What do you mean by narrowband and wideband FM?
7. What is modulation index of PM?
8. Define Direct method and Indirect method FM.
9. Mention the advantages of FM.
10. What is the modulation index of FM?
1. Define internal noise.
2. Define shot noise.
3. Define thermal noise.
4. Define narrow band noise.
5. Define noise figure.
6. Define noise equivalent bandwidth.
7. Draw the Phasor representation of FM noise.
8. Define pre-emphasis and de-emphasis.
9. Define SNR.
10. Define ergodic process.
11. Write the equation for correlation and covariance functions?
12. Define Gaussian process.
PART-B (16 Marks)
1. Derive the effective noise temperature of a cascade amplifier and explain how various noises are generated and the method of representing them. (16)
2. Explain the following terms:
(i) Random variable
UNIT V
INFORMATION THEORY
PART-A (2 Marks)
3. What is the channel capacity of a binary symmetric channel with error probability of 0.2?
4. State channel coding theorem.
5. Define entropy for a discrete memoryless source.
6. What is code redundancy?
7. Write down the formula for the mutual information.
8. Name the source coding techniques.
9. What is Data compaction ?
10. Write the expression for code efficiency in terms of entropy.
1. Explain the significance of the entropy H(X/Y) of a communication system where X is the transmitter and Y is the receiver. (16)
3. Discuss Source coding theorem, give the advantage and disadvantage of channel coding in detail,
and discuss the data compaction. (16)
4. Explain the properties of entropy and with suitable example, explain the entropy of binary
memory less source. (16)
5. Five symbols of the alphabet of a discrete memoryless source and their probabilities are given below: S = [S0, S1, S2, S3, S4]; P[S] = [0.4, 0.2, 0.2, 0.1, 0.1]. Encode the symbols using Huffman coding. (16)
6. Write short notes on Differential entropy, derive the channel capacity theorem and discuss the
implications of the information capacity theorem. (16)
7. What do you mean by binary symmetric channel? Derive channel capacity formula for symmetric
channel. (16)
8. Construct binary optical code for the following probability symbols using Huffman
procedure and calculate entropy of the source, average code Length, efficiency, redundancy
and variance? 0.2, 0.18, 0.12, 0.1, 0.1, 0.08, 0.06, 0.06, 0.06, 0.04. (16)
9. State and prove continuous channel capacity theorem. (16)
11. Encode the following source using Shannon-Fano and Huffman coding procedures.
Compare the results. (16)
X X1 X2 X3 X4 X5
P(X) 0.3 0.1 0.4 0.08 0.12.
4. What is meant by detection? Name the methods for detecting FM signals.
5. Write the Rayleigh and Rician probability density functions.
6. What is white noise? State its power spectral density.
7. Compare the noise performance of DSBSC receiver using coherent detection with AM receiver
using envelope detection.
8. What are the methods to improve FM threshold reduction?
9. Define entropy function.
10. Differentiate between lossless and lossy coding.
Part B - (5 x 16 = 80 marks)
11. (a) With a help of a neat diagram, explain the operation of an envelope detector. Why
does negative peak clipping take place? (16)
OR
11. (b) (i) Compare the characteristics of DSBFC, DSBSC, SSBFC, SSBSC, VSB schemes. (10)
(ii) Explain the concept of FDM with a suitable block diagram. (6)
12. (a) (i) Derive the expression for the single tone frequency modulation and draw its frequency
spectrum. (8)
(ii) An angle modulated wave is described by the equation V(t) = 10 cos(2×10^6 πt + 10 cos(2000πt)).
Find (1) Power of the modulated signal (2) Maximum frequency deviation
(3) Bandwidth. (8)
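A sketch of part (ii), reading the parameters directly off V(t) = 10 cos(2×10^6 πt + 10 cos 2000πt): amplitude 10 V, modulation index 10 (the coefficient of the inner cosine) and fm = 1 kHz:

```python
# V(t) = 10 cos(2*10^6*pi*t + 10 cos(2000*pi*t))
A = 10.0         # amplitude, volts
beta = 10.0      # modulation index: coefficient of the cos term
fm = 1000.0      # 2000*pi*t -> fm = 1000 Hz

P = A ** 2 / 2              # power into a 1 ohm load
delta_f = beta * fm         # maximum frequency deviation
bw = 2 * (delta_f + fm)     # Carson's rule
print(P, delta_f, bw)       # 50.0 10000.0 22000.0
```

So the modulated signal carries 50 W, deviates by at most 10 kHz, and occupies about 22 kHz by Carson's rule.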
OR
12. (b) (i) A 100 kHz carrier is frequency modulated to produce a peak deviation of 800 Hz.
This FM signal is passed through a 3 by 3 by 4 frequency multiplier chain, the output of which
as 10 dB and 2 dB. Second stage has noise figure of 3 dB. Calculate total noise figure. (6)
14. (a) (i) Sketch the block diagram of DSB-SC/AM system and derive the figure of merit. (8)
(ii) Using superheterodyne principle, draw the block diagram of AM radio receiver
and briefly explain it. (8)
OR
14. (b) (i) Explain pre-emphasis and De-emphasis in detail. (10)
(ii) Compare the performances of AM and FM systems. (6)
15. (a) Using Huffman code I, encode the following symbols. (8)
(ii) Entropy of the source (3) (iii) Code efficiency (iv) Redundancy (1)
OR
15. (b) (i) State and prove the properties of mutual information. (10)
(ii) The channel transition matrix is given by [0.9 0.1; 0.2 0.8]. Draw the channel diagram and determine the probabilities associated with the outputs assuming equiprobable inputs. (6)
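A sketch of the output-probability part, reading the OCR-garbled matrix as [0.9 0.1; 0.2 0.8] with rows as inputs and columns as outputs:

```python
# Transition matrix P(y|x): rows are inputs, columns are outputs
T = [[0.9, 0.1],
     [0.2, 0.8]]
px = [0.5, 0.5]                # equiprobable inputs

# P(y_j) = sum_i P(x_i) * P(y_j | x_i)
py = [sum(px[i] * T[i][j] for i in range(2)) for j in range(2)]
print([round(v, 2) for v in py])   # [0.55, 0.45]
```

Each output probability is a probability-weighted column sum, which is the matrix-vector product P(Y) = P(X)·T.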
2. Define modulation index of AM signal.
3. Compare Frequency and Phase modulation.
4. How is NBFM converted to WBFM?
5. Define noise figure.
6. Define Thermal noise.
7. Define Figure of merit.
8. Define pre-emphasis and de-emphasis
9. What is channel redundancy?
10. State Shannon‘s theorem.
Part B - (5 x 16 = 80 marks)
11. (a) (i) Explain Frequency Translation. (8)
(ii) Derive the equation of an AM wave. Also draw the modulated AM wave for various modulation
index. (8)
14. (a) Explain the working of AM & FM Superheterodyne receivers with its parameters. (16)
OR
(b) Derive the expression for output signal to noise for a DSB-SC receiver using coherent detection.
(16)
15. (a) Discuss Source coding theorem, give the advantage and disadvantage of channel coding in
detail, and discuss the data compaction. (16)
OR
(b) Five symbols of the alphabet of discrete memory less source and their probabilities are given
below. (8)
S=[S0,S1,S2,S3,S4]
P[S]=[0.4,0.2,0.2,0.1,0.1] Code the symbols using Huffman coding.
(ii) Derive the channel capacity of a continuous band limited white Gaussian noise channel (8)
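The capacity this question asks to derive is the Shannon-Hartley law, C = B log2(1 + S/N). A quick numerical sketch; the 3 kHz bandwidth and 30 dB SNR example values are assumptions chosen for illustration:

```python
from math import log2

def channel_capacity(B, snr):
    """Shannon-Hartley: C = B * log2(1 + S/N), in bits per second.
    B in Hz; snr as a linear power ratio (30 dB -> 1000)."""
    return B * log2(1 + snr)

# Example: a 3 kHz channel with an SNR of 30 dB (ratio 1000)
print(round(channel_capacity(3000.0, 1000.0)))   # ≈ 29902 bits/s
```

Capacity grows only logarithmically with SNR but linearly with bandwidth, which is the key implication of the theorem.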
2. What is meant by coherent detection?
3. Define frequency deviation in FM?
4. Why is the Armstrong method superior to the reactance modulator?
5. What is a random process?
6. Find the thermal noise voltage developed across a resistor of 700ohm. The bandwidth of the
measuring instrument is 7 MHz and the ambient temperature is 27°C.
7. Draw the Phasor representation of FM noise.
8. Define SNR.
9. What is the channel capacity of a binary symmetric channel with error probability of 0.2?
10. Explain rate distortion theory.
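Assuming the intended channel is a binary symmetric channel with crossover probability p = 0.2, its capacity C = 1 − H(p) can be computed as:

```python
from math import log2

def bsc_capacity(p):
    """Capacity of a binary symmetric channel: C = 1 - H(p) bits/use,
    where H(p) is the binary entropy of the crossover probability."""
    if p in (0.0, 1.0):
        return 1.0   # a noiseless (or deterministically inverting) channel
    return 1.0 + p * log2(p) + (1 - p) * log2(1 - p)

print(round(bsc_capacity(0.2), 3))   # ≈ 0.278 bits per channel use
```

Even a 20% error rate still leaves roughly a quarter of a bit of reliable information per use, recoverable with suitable error-correcting codes.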
15. (a) Define mutual information. Find the relation between the mutual information and the joint
entropy of the channel input and channel output. Explain the important properties of mutual
information. (16)
OR
(b) (i)Five symbols of the alphabet of discrete memory less source and their probabilities are given
below.
S=[S0,S1,S2,S3,S4]
P[S]=[0.4,0.2,0.2,0.1,0.1]
Code the symbols using Shannon fano coding.
(ii) Explain the properties of entropy and, with a suitable example, explain the entropy of a binary memoryless source. (8)
6. Two resistors of 20 kΩ and 50 kΩ are at room temperature (290 K). For a bandwidth of 100 kHz, calculate the thermal noise voltage generated by the two resistors in series.
7. Define threshold effect in AM receiver.
8. What is FM Threshold effect?
9. Calculate the entropy of the source with symbol probabilities 0.6, 0.3, 0.1.
10. State Shannon‘s channel capacity theorem, for a power and band limited channel.
Part B - (5 x 16 = 80 marks)
11. (a) With necessary diagrams and expressions explain the generation and demodulation of AM.
(16)
OR
(b) (i) Discuss the generation and detection scheme for standard AM. (10)
(ii) Draw the filtering scheme for the generation of VSB modulated wave & explain. (6)
(b) (i) Explain how varactor diode can be used for frequency modulation. (6)
(ii) Explain ratio detector with merits and demerits. (10)
13. (a) Explain how the various noises are generated and the method of representing them. (16)
OR
(b) (i)What is narrowband noise? Discuss the properties of inphase and quadrature phase
components of a narrowband noise. (8)
(ii)What is meant by noise equivalent bandwidth? Illustrate it with a diagram (8)
14. (a) Define and Explain FM Threshold effect. With suitable diagram, explain threshold reduction
S=[S0,S1,S2,S3,S4]
P[S]=[0.4,0.2,0.2,0.1,0.1]
Code the symbols using Shannon fano coding.
UNIVERSITY QUESTION PAPER CODE - 11327
EC 2252 Communication Theory
(Common to PTEC 2252 Communication Theory for B.E.(Part -Time)
Third Semester ECE - Regulations 2009)
Time: Three Hours Maximum: 100 marks
Answer ALL Questions
Part A - (10 x 2 = 20 marks)
1. An amplitude modulation transmitter radiates 1000 watts of unmodulated power. If the
carrier is modulated simultaneously by two tones of 40% and 60% respectively ,
calculate the total power radiated.
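A sketch of the power calculation for question 1, using the standard multi-tone AM relation Pt = Pc(1 + mt²/2), where mt² = m1² + m2² for simultaneous modulation by two tones:

```python
# Total radiated power for two-tone AM: Pt = Pc * (1 + mt^2 / 2)
Pc = 1000.0               # unmodulated carrier power, W
m1, m2 = 0.4, 0.6         # 40% and 60% modulation depths

mt_sq = m1 ** 2 + m2 ** 2  # effective modulation index squared = 0.52
Pt = Pc * (1 + mt_sq / 2)
print(round(Pt, 1), "W")   # 1260.0 W
```

The sidebands add only 260 W on top of the 1000 W carrier, illustrating the power inefficiency of full-carrier AM.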
2. Calculate the local oscillator frequency if the incoming frequency is f1 and the translated carrier frequency is f2.
3. How is narrowband FM converted into wideband FM?
4. A Carrier is frequency modulated by a sinusoidal modulating frequency 2KHZ resulting
in a frequency deviation of 5KHZ. What is the bandwidth occupied by the modulated
waveforms?
5. Define a random variable. Specify the sample space and random variable for a coin
tossing experiment.
Part B - (5 x 16 = 80 marks)
11 a (i) Define Amplitude modulation. How can an amplitude modulated signal be generated using a
(ii) What is a DSB-SC signal? Explain the basic operation of the DSB-SC signal with a neat diagram? (8)
(OR)
b (i) Discuss in detail about frequency translation and FDM with neat diagram? (8)
(ii) Compare Amplitude modulation and Frequency modulation. (8)
12 (a)(i) Derive the expression for the frequency modulated signal. Explain wide band FM. (16)
OR
(b) (i) Explain two methods of FM Detection with neat diagram? (16)
13. (a) (i) Explain the expression for shot noise voltage. (10)
(ii) Give the properties of Auto correlation Functions? (6)
OR
(b) (i)What is narrowband noise? Discuss the properties of inphase and quadrature phase
components of a narrowband noise. (8)
(ii)What is meant by noise equivalent bandwidth? Illustrate it with a diagram (8)
14. (a) Derive signal to noise ratio at input and output of a Coherent Detector.(16).
OR
(b) (i) Derive the output signal to noise ratio of FM Reception. (8)
(ii) Explain the significance of Pre –emphasis and De- emphasis in FM Systems. (8)
15. (a) (i) Five symbols of the alphabet of discrete memory less source and their probabilities are
given below. (8)
X=[1,2,3,4,5,6,7]