Module 1: Analog Communication

Analog Communication / 23ECPC208
Syllabus
• Module 1:
Random Variables and Processes
Random Variables: Introduction, Probability, Conditional
probability, Random variables, Statistical Averages, Function of a
random variable, Moments
Random Process: Mean, Correlation and covariance function,
Properties of autocorrelation function, Cross–correlation functions,
Gaussian process, Gaussian distribution function
Laboratory Components: (Using Simulation Tools)
• Generate the probability density function of the Gaussian
distribution function
• To plot the correlation between waves
• Module 2:
Amplitude Modulation
Communication System: Introduction to communication System,
AM concepts, Modulation index and percentage of modulation,
Sidebands and the frequency domain, AM power, DSB-SC
modulation, Single sideband modulation, VSB modulation,
Balanced modulators
Laboratory Components: (Using Discrete Components)
• To design and verify amplitude modulation for a given modulation
index
• To design and verify DSB-SC modulation
Module 3:
Noise in Modulation Systems
Noise: Introduction, Shot noise, Thermal noise, White noise, Noise
equivalent bandwidth, Narrowband noise, Noise figure, Equivalent
noise temperature
Noise in continuous wave modulation systems: Receiver model,
Noise in DSB-SC receivers, Noise in SSB receivers, Noise in AM
receivers, Threshold effect, Noise in FM receivers
Laboratory Components: (Using Simulation Tools)
• To find the Signal to Noise Ratio of a given signal
Module 4:
• Angle Modulation
Fundamentals of Frequency Modulation: Basic principles of
frequency modulation, Principles of phase modulation, Modulation
index and sidebands, Noise suppression effects of FM, Frequency
modulation versus Amplitude modulation. Ratio discriminator, FM
threshold effect, Pre-emphasis and De-emphasis
Laboratory Components: (Using Discrete Components)
• To design and verify Frequency modulation
• To design and verify Pre-emphasis and De-emphasis
Module 5:
Radio Transmission Concepts
• Radio Transmitters: Transmitter fundamentals, Transmitter
configurations, Carrier generators, Crystal oscillators, Frequency
synthesizers, Phase locked loop synthesizers
• Communication Receivers: Basic principles of signal
reproduction, Superheterodyne receivers, Squelch, Frequency
conversion: Mixing principles, Mixer and converter designs, Local
oscillators and frequency synthesizers, Intermediate frequency and
images
Laboratory Components: (Using Simulation Tools / Discrete
Components)
• Design and test a mixer used for frequency translation
Text Books:
1. Herbert Taub, Donald L. Schilling, “Principles of Communication Systems”, 2nd Edition, McGraw-Hill, 2017
2. Rodger E. Ziemer, William H. Tranter, “Principles of Communication Systems, Modulation and Noise”, 7th Edition, Wiley Publications, 2015
3. George Kennedy, “Electronic Communication Systems”, 5th Edition, McGraw-Hill, 2012
4. Simon Haykin, Michael Moher, “Communication Systems”, 5th Edition, John Wiley India Pvt. Ltd, 2010
Reference Books:
1. B P Lathi, Zhi Ding, “Modern Digital and Analog Communication
Systems”, 4th Edition, Oxford University Press, 2010
2. Masoud Salehi, John G. Proakis, “Communication System
Engineering”, 2nd Edition, Prentice Hall, 2018
3. A. Michael Noll, “Introduction to Telephones and Telephone
Systems” 3rd Edition, Artech House Publishers, 2005
4. Louis E Frenzel, “Principles of Electronic Communication Systems”,
3rd Edition, McGraw-Hill (India), 2016
Introduction
• There are two types of signals:
• Deterministic signals: a class of signals that may be modeled as completely specified functions of time.
• Random signals: encountered in every practical communication system. A signal is random if it is not possible to predict its precise value in advance.
Example: In a radio communication system, the received signal consists of an information-bearing signal, a random interference component, and receiver noise.
Although the precise value of a random signal cannot be predicted in advance, it can be described in terms of its statistical properties, such as the average power in the random signal or the average spectral distribution of this power.
• The mathematical discipline that deals with the statistical characterization of random signals is probability theory.
• Random process: consists of an ensemble (family) of sample functions, each of which varies randomly with time.
• Random variable: obtained by observing a random process at a fixed instant of time.
Probability
• Probability theory is rooted in phenomena that can be modeled by an experiment with an outcome that is subject to chance.
• If the experiment is repeated, the outcome can differ because of the influence of an underlying random phenomenon or chance mechanism. Such an experiment is referred to as a random experiment.
Example: In the tossing of a fair coin, the possible outcomes of a trial are "heads" or "tails".
• There are two approaches to the definition of probability.
• The first approach is based on relative frequency of occurrence: if, in n trials of a random experiment, an event A occurs m times, then we assign the probability m/n to the event A.
• In the second approach, a definition based on set theory and a set of related mathematical axioms is used.
• In situations where experiments are repeatable, the set-theory approach agrees completely with the relative-frequency-of-occurrence approach.
• Consider an experiment and its possible outcomes as defining a space and its points.
• If an experiment has K possible outcomes, then for the kth possible outcome we have a point called the sample point, denoted by sk.
• With these ideas, we have the following definitions:
 The set of all possible outcomes of the experiment is called the sample space, denoted by S.
 An event corresponds to either a single sample point or a set of sample points in the space S.
 A single sample point is called an elementary event.
 The entire sample space S is called the sure event, and the null set ø is called the null or impossible event.
 Two events are mutually exclusive if the occurrence of one
event precludes the occurrence of the other event.
• The sample space S may be discrete with countable number of
outcomes such as outcome of tossing a die.
• The sample space S may be continuous, such as voltage
measured at the output of a noise source.
• A probability measure “P” is a function that assigns a non-negative
number to an event “A” in the sample space “S” and satisfies the
following 3 properties (axioms):
1. 0<=P[A]<=1
2. P[S]=1
3. If “A” and “B” are two mutually exclusive events, then,
P[AUB] = P[A]+P[B]
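The relative-frequency interpretation of probability can be connected to these axioms with a short simulation. The sketch below is an illustrative example (assuming Python with NumPy as the simulation tool): it tosses a fair coin n times and shows that the relative frequency m/n of the event "heads" approaches P[heads] = 0.5 as n grows.

import numpy as np

rng = np.random.default_rng(seed=1)

# Event A = "heads"; for a fair coin the assigned probability is P[A] = 0.5.
for n in [10, 100, 1000, 100_000]:
    tosses = rng.integers(0, 2, size=n)   # 1 = heads, 0 = tails
    m = int(np.sum(tosses))               # number of times event A occurred
    print(f"n = {n:6d}   relative frequency m/n = {m / n:.4f}")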
• Consider the probability system illustrated in Figure 5.1.
• The random experiment maps the sample space S into events.
• The probability function assigns a value between 0 and 1 to each of these events.
• The probability assigned is not unique to the event; for example, mutually exclusive events may be assigned the same probability.
• The probability of the union of all events, the sure event, is always unity.
• The three axioms and their relationship to the relative frequency may be illustrated by the Venn diagram of Figure 5.1.
• If we equate P to a measure of the area in the Venn diagram, with the total area of S equal to 1, then the axioms are simple statements of familiar geometric results regarding area.
• The following properties of the probability measure P may be derived from the above axioms:
1. P[A'] = 1 − P[A], where A' is the complement of the event A.
2. When events A and B are not mutually exclusive, the probability of the union event "A or B" satisfies
P[A ∪ B] = P[A] + P[B] − P[A ∩ B]
where P[A ∩ B] is the probability of the joint event "A and B".
3. If A1, A2, ..., Am are mutually exclusive events that include all the possible outcomes of the random experiment, then
P[A1] + P[A2] + ... + P[Am] = 1
Conditional Probability:
• Suppose that in an experiment, A and B are two events.
• Let P[B/A] denote the probability of event B given that event A has occurred; P[B/A] is called the conditional probability of B given A,
i.e., P[B/A] = P[A ∩ B] / P[A]
where P[A ∩ B] is the joint probability of A and B.
• Rearranging,
P[A ∩ B] = P[B/A] P[A]
P[A ∩ B] = P[A/B] P[B]
• The joint probability of two events may therefore be expressed as the product of the conditional probability of one event given the other and the elementary probability of the other.
• If P[A] is not equal to 0, then
P[B/A] = P[A/B] P[B] / P[A]
This is a special form of Bayes' rule.
• Suppose P[B/A] is equal to the elementary probability of occurrence of event B, i.e.,
P[B/A] = P[B]
• Then the joint probability P[A ∩ B] becomes
P[A ∩ B] = P[A] P[B]
and the events A and B are said to be statistically independent.
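These relations can be checked with a short simulation. The sketch below is a minimal, illustrative example (assuming Python with NumPy as the simulation tool; the die-rolling events A and B are hypothetical choices, not taken from the text): it estimates P[B/A] directly and compares it with P[A ∩ B] / P[A] and with Bayes' rule.

import numpy as np

rng = np.random.default_rng(seed=2)
rolls = rng.integers(1, 7, size=200_000)   # fair die: outcomes 1..6

A = (rolls % 2 == 0)        # event A: outcome is even  -> P[A] = 1/2
B = (rolls > 3)             # event B: outcome is > 3   -> P[B] = 1/2

P_A = np.mean(A)
P_B = np.mean(B)
P_AB = np.mean(A & B)       # joint probability P[A and B] (true value 1/3)

print("P[B/A] estimated directly :", np.mean(B[A]))
print("P[A and B] / P[A]         :", P_AB / P_A)
print("Bayes: P[A/B] P[B] / P[A] :", np.mean(A[B]) * P_B / P_A)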
Random Variables:
• A function whose domain is a sample space and whose range is a set of real numbers is called a random variable of the experiment.
• Example: In tossing a coin, "heads" corresponds to '1' and "tails" to '0'.
• These numbers are assigned to the outcomes of the experiment.
• If the outcome of the experiment is s, then the random variable is denoted X(s), or simply X.
• A particular outcome of the random experiment is denoted X(sk) = x.
• Note: there may be more than one random variable associated with the same random experiment.
• A random variable can take discrete values or continuous values.
• If a random variable takes only a finite number of discrete values, it is called a discrete random variable.
• If a random variable takes continuous values, it is called a continuous random variable.
• Let X be a random variable and consider the probability of the event X <= x, i.e., P[X <= x].
• The function
FX(x) = P[X <= x]
is called the cumulative distribution function (CDF), where
1) FX(x) is bounded between zero and one, and
2) FX(x) is a monotone non-decreasing function, i.e., FX(x1) <= FX(x2) if x1 < x2.
• The derivative of the CDF is the probability density function (PDF): fX(x) = d FX(x) / dx.
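The CDF–PDF relationship can be visualized numerically. The sketch below is an illustrative example (assuming Python with NumPy and Matplotlib as the simulation tools; the unit-variance Gaussian samples are just a convenient test case): it forms the empirical CDF of a set of samples and approximates the PDF by numerically differentiating it.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=3)
samples = rng.normal(loc=0.0, scale=1.0, size=50_000)   # example random variable X

x = np.linspace(-4, 4, 400)
# Empirical CDF: FX(x) = P[X <= x], estimated as the fraction of samples <= x
F = np.array([np.mean(samples <= xi) for xi in x])
# PDF as the derivative of the CDF: fX(x) = d FX(x) / dx
f = np.gradient(F, x)

plt.plot(x, F, label="empirical CDF  FX(x)")
plt.plot(x, f, label="numerical derivative  fX(x)")
plt.xlabel("x")
plt.legend()
plt.show()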
Several Random Variables
• Some experiments require several random variables for their description.
• Let X and Y be two random variables. Their joint distribution function is FX,Y(x,y), where x and y are specified values.
• The joint distribution function FX,Y(x,y) is defined as the probability that the random variable X is less than or equal to a specified value x and that the random variable Y is less than or equal to a specified value y.
• Equivalently, it is the probability that the outcome of the experiment is a sample point lying inside the corresponding region of the joint sample space,
• i.e., FX,Y(x,y) = P[X <= x, Y <= y]
• Suppose the joint distribution function FX,Y(x,y) is continuous everywhere and that its partial derivative
fX,Y(x,y) = ∂²FX,Y(x,y) / ∂x ∂y
exists and is continuous everywhere; fX,Y(x,y) is called the joint probability density function of X and Y.
• The joint distribution function FX,Y(x,y) is a monotone non-decreasing function of both x and y.
• The total volume under the graph of the joint probability density function must be unity, as shown by
∫∫ fX,Y(x,y) dx dy = 1, with both integrals taken from −∞ to ∞.
• The probability density function of a single random variable can be obtained from its joint probability density function with a second random variable by first setting y = ∞, which gives FX(x) = FX,Y(x, ∞).
• Differentiating both sides with respect to x then gives the marginal density
fX(x) = ∫ fX,Y(x,y) dy, integrated over −∞ < y < ∞.
• The conditional probability density function of Y given that X = x is defined by
fY(y/x) = fX,Y(x,y) / fX(x)
• It satisfies fY(y/x) >= 0
and
∫ fY(y/x) dy = 1, integrated over −∞ < y < ∞.
• If knowledge of X provides no information about Y, the conditional probability density function fY(y/x) reduces to the marginal density fY(y), as
fY(y/x) = fY(y)
• In that case we can express the joint probability density function of the random variables X and Y as the product of their respective marginal densities, as shown by
fX,Y(x,y) = fX(x) fY(y)
• Conversely, if the joint probability density function of the random variables X and Y equals the product of their marginal densities, then X and Y are statistically independent:
P[X ∈ A, Y ∈ B] = P[X ∈ A] P[Y ∈ B]
or, in shorthand, P[X, Y] = P[X] P[Y],
for statistically independent random variables X and Y.
Statistical Averages:
• The statistical average of a random variable X is its expected value or mean,
i.e., µX = E[X] = ∫ x fX(x) dx, integrated over −∞ < x < ∞
• where E denotes the statistical expectation operator. The mean µX locates the center of gravity of the area under the probability density function of X.
Functions of a Random Variable
• Let X denote a random variable, and let g(X) denote a real-valued function of X, so that
Y = g(X)
• The expected value of Y can then be obtained directly from the density of X as
E[Y] = E[g(X)] = ∫ g(x) fX(x) dx, integrated over −∞ < x < ∞.
Question:
The random variable Y is a function of another random variable X such that Y = cos(X), where X is uniformly distributed in the interval (−π, π), i.e., fX(x) = 1/(2π) for −π < x < π and zero elsewhere.
Find the mean of Y.
Solution:
E[Y] = ∫ cos(x) (1/(2π)) dx over (−π, π) = (1/(2π)) [sin(π) − sin(−π)] = 0
Hence the mean of Y is zero. (A simulation check is sketched below.)
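The following is a minimal Monte Carlo check of this result (an illustrative sketch, assuming Python with NumPy as the simulation tool): it draws samples of X uniformly from (−π, π) and averages cos(X); the sample mean should be close to the analytical value of zero.

import numpy as np

rng = np.random.default_rng(seed=4)
x = rng.uniform(-np.pi, np.pi, size=1_000_000)   # X uniform on (-pi, pi)
y = np.cos(x)                                    # Y = g(X) = cos(X)

print("Sample mean of Y = cos(X):", np.mean(y))  # close to the analytical value 0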
Moments:
• Let X be a random variable. For the real-valued function g(X) = X^n, the nth moment of the probability distribution of X is
E[X^n] = ∫ x^n fX(x) dx, integrated over −∞ < x < ∞.
• The first two moments are the most important.
• For n = 1, E[X] = µX is the mean of the random variable.
• For n = 2, E[X²] is the mean-square value of X.
Central Moments:
• Central moments are the moments of the difference between a random variable X and its mean µX.
• Therefore, the nth central moment is
E[(X − µX)^n] = ∫ (x − µX)^n fX(x) dx, integrated over −∞ < x < ∞.
• For n = 1, the central moment is zero, since E[X − µX] = 0.
• For n = 2, the second central moment is the variance of the random variable X:
var[X] = E[(X − µX)²]
• The variance is denoted by σX², and the square root of the variance, σX, is the standard deviation of the random variable X.
• The variance and the mean-square value are related by
σX² = E[X²] − µX²
• The Chebyshev inequality states that, for any positive number ε,
P[|X − µX| >= ε] <= σX² / ε²
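Both the variance relation and the Chebyshev bound can be verified numerically. The sketch below is an illustrative example (assuming Python with NumPy as the simulation tool; the exponentially distributed test variable is an arbitrary choice): it estimates E[X], E[X²], the variance, and P[|X − µX| >= ε] for a few values of ε.

import numpy as np

rng = np.random.default_rng(seed=5)
x = rng.exponential(scale=2.0, size=1_000_000)   # test random variable X

mu = np.mean(x)        # first moment E[X]
ms = np.mean(x**2)     # second moment E[X^2]
var = np.var(x)        # second central moment (variance)

print("E[X^2] - mu^2 =", ms - mu**2, "  variance =", var)   # the two should agree

for eps in [1.0, 2.0, 4.0]:
    prob = np.mean(np.abs(x - mu) >= eps)
    print(f"eps = {eps}:  P[|X - mu| >= eps] = {prob:.4f}  <=  bound {var / eps**2:.4f}")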


Joint Moments:
• Consider a pair of random variables X and Y.
• A set of statistical averages of importance in this case is the joint moments, namely the expected values E[X^i Y^k], where i and k may assume any positive integer values. The joint moment E[XY] is called the correlation of X and Y.
• The correlation of the centered random variables X − E[X] and Y − E[Y], that is, the joint moment
cov[XY] = E[(X − E[X])(Y − E[Y])]
is called the covariance of X and Y.
• Letting µX = E[X] and µY = E[Y], the covariance may be written as
cov[XY] = E[XY] − µX µY
• Two random variables X and Y are uncorrelated if and only if their covariance is zero, i.e.,
cov[XY] = 0
• Two random variables X and Y are orthogonal if and only if their correlation is zero, i.e.,
E[XY] = 0
• If one or both of the random variables X and Y have zero mean and they are orthogonal, then they are uncorrelated, and vice versa.
• If X and Y are statistically independent, then they are uncorrelated; the converse, however, is not necessarily true.
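These quantities are easy to estimate from samples, which also ties in with the laboratory component on plotting the correlation between waves. The sketch below is an illustrative example (assuming Python with NumPy as the simulation tool; the linearly related and independent test variables are arbitrary choices): it estimates the correlation E[XY] and the covariance for a dependent pair and for an independent pair.

import numpy as np

rng = np.random.default_rng(seed=6)
n = 1_000_000

x = rng.normal(0.0, 1.0, size=n)
y_dep = 0.8 * x + rng.normal(0.0, 0.5, size=n)   # Y linearly related to X
y_ind = rng.normal(0.0, 1.0, size=n)             # Y generated independently of X

def covariance(a, b):
    # cov[AB] = E[AB] - E[A] E[B]
    return np.mean(a * b) - np.mean(a) * np.mean(b)

print("Dependent case  : E[XY] =", np.mean(x * y_dep), "  cov =", covariance(x, y_dep))
print("Independent case: E[XY] =", np.mean(x * y_ind), "  cov =", covariance(x, y_ind))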
Random Processes
• A sample space or ensemble composed of functions of time is called a random or stochastic process.
• Consider a random experiment specified by the outcomes s from a sample space S, by the events defined on the sample space, and by the probabilities of these events. To each outcome s we assign a function of time
X(t,s), −T <= t <= T
• where 2T is the total observation interval.
• For a fixed sample point sj, the graph of the function X(t,sj) versus time t is called a realization or sample function of the random process. The sample function is denoted by
xj(t) = X(t,sj), −T <= t <= T
• Figure 5.7 illustrates a set of such sample functions.
• A random process X(t) is an ensemble of time functions together with a probability rule that assigns a probability to any meaningful event associated with an observation of one of the sample functions of the random process.
• Difference between random variable and a random process:
• For a random variable, the outcome of a random experiment is
mapped into a number.
• For a random process, the outcome of a random experiment is
mapped into a waveform that is a function of time.
Mean, Correlation and Covariance Functions:
• Consider a random process X(t). The mean of the process X(t) is the expectation of the random variable obtained by observing the process at some time t, as shown by
µX(t) = E[X(t)] = ∫ x fX(t)(x) dx, integrated over −∞ < x < ∞
• where fX(t)(x) is the probability density function of the process at time t.
• A random process is said to be stationary to first order if the distribution function (and hence the density function) of X(t) does not vary with time.
• That is, the density functions of the random variables X(t1) and X(t2) satisfy
fX(t1)(x) = fX(t2)(x)
for all t1 and t2.
• For a process that is stationary to first order, the mean of the random process is therefore a constant, as shown by
µX(t) = µX for all t.
• The autocorrelation function of the process X(t) is the expectation of the product of the two random variables X(t1) and X(t2) obtained by observing X(t) at times t1 and t2:
RX(t1,t2) = E[X(t1) X(t2)] = ∫∫ x1 x2 fX(t1),X(t2)(x1,x2) dx1 dx2
• where fX(t1),X(t2)(x1,x2) is the joint probability density function of the random variables X(t1) and X(t2).
• X(t) is stationary to second order if the joint distribution function fX(t1),X(t2)(x1,x2) depends only on the difference between the observation times t1 and t2.
• The autocorrelation function of such a stationary random process therefore depends only on the time difference t2 − t1, as shown by
RX(t1,t2) = RX(t2 − t1) for all t1 and t2.
Autocovariance:
• The autocovariance function of a stationary random process X(t) is
CX(t1,t2) = E[(X(t1) − µX)(X(t2) − µX)] = RX(t2 − t1) − µX²
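The autocorrelation of a stationary process can be estimated by averaging across an ensemble of sample functions, which relates to the laboratory component on plotting correlation. The sketch below is an illustrative example (assuming Python with NumPy as the simulation tool; the random-phase sinusoid is a standard test process, not one defined in the text): it generates many realizations of X(t) = A cos(2πft + Θ), with Θ uniform over (0, 2π), and estimates RX(τ) as the ensemble average of X(t1) X(t1 + τ); the result should follow (A²/2) cos(2πfτ).

import numpy as np

rng = np.random.default_rng(seed=7)
A, f, fs = 1.0, 5.0, 1000.0            # amplitude, frequency (Hz), sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)        # time grid
n_real = 2000                          # number of sample functions in the ensemble

theta = rng.uniform(0, 2 * np.pi, size=(n_real, 1))   # one random phase per realization
X = A * np.cos(2 * np.pi * f * t + theta)              # ensemble, shape (n_real, len(t))

t0 = 100                               # reference sample index (time t1)
for lag in [0, 25, 50, 100]:           # lag in samples; tau = lag / fs
    R_est = np.mean(X[:, t0] * X[:, t0 + lag])         # ensemble average of X(t1) X(t1 + tau)
    R_theory = 0.5 * A**2 * np.cos(2 * np.pi * f * lag / fs)
    print(f"tau = {lag / fs:.3f} s   R_est = {R_est:+.3f}   theory = {R_theory:+.3f}")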
Cross-Correlation Functions
• Consider two random processes X(t) and Y(t) with autocorrelation functions RX(t,u) and RY(t,u).
• The cross-correlation function of X(t) and Y(t) is defined by
RXY(t,u) = E[X(t) Y(u)]
• If the random processes X(t) and Y(t) are jointly wide-sense stationary, the cross-correlation function depends only on the time difference τ = t − u, so that
RXY(t,u) = RXY(τ)
• The cross-correlation function is not generally an even function of τ, as is the case for the autocorrelation function, nor does it necessarily have a maximum at the origin.
• However, it does obey the symmetry relationship
RXY(τ) = RYX(−τ)
Ergodic Processes:
• The expectations of a stochastic process X(t) are averages "across the process" and are referred to as ensemble averages.
• It is difficult or impossible to observe all the sample functions of a random process at a given time.
• It is often more convenient to observe a single sample function for a long period of time.
• For a single sample function x(t), we may compute the time average of a particular quantity. For example, the time average of the mean value over an observation period 2T is
µx(T) = (1/2T) ∫ x(t) dt, integrated over −T <= t <= T
• For many stochastic processes of interest in communication, the time averages and the ensemble averages are equal, a property known as ergodicity.
• This implies that whenever an ensemble average is required, we may estimate it by using a time average; in what follows, the random processes of interest are assumed to be ergodic. (A numerical comparison is sketched below.)
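The following sketch is an illustrative example (assuming Python with NumPy as the simulation tool; the random-phase sinusoid is again an arbitrary ergodic test process): it compares the time-averaged mean of one long sample function with the ensemble-averaged mean taken across many realizations at a fixed time instant; for an ergodic process the two agree.

import numpy as np

rng = np.random.default_rng(seed=8)
f, fs, T = 5.0, 1000.0, 10.0
t = np.arange(0, T, 1.0 / fs)

# Time average over one long sample function x(t)
theta0 = rng.uniform(0, 2 * np.pi)
x_single = np.cos(2 * np.pi * f * t + theta0)
time_avg = np.mean(x_single)                    # discrete form of (1/2T) * integral of x(t) dt

# Ensemble average at a fixed time instant, across many sample functions
theta = rng.uniform(0, 2 * np.pi, size=5000)
ensemble_avg = np.mean(np.cos(2 * np.pi * f * t[123] + theta))

print("time average    :", time_avg)       # both close to the true mean, which is 0
print("ensemble average:", ensemble_avg)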
Gaussian Process:
• Suppose we observe a random process X(t) for an interval that starts at time t = 0 and lasts until t = T.
• Suppose also that we weight the random process X(t) by some function g(t) and then integrate the product g(t) X(t) over this observation interval.
• A random variable Y is thereby defined by
Y = ∫ g(t) X(t) dt, integrated over 0 <= t <= T
• We refer to Y as a linear functional of X(t).
• The value of the random variable Y depends on the course of the argument function g(t) X(t) over the entire observation interval from 0 to T.
• If the weighting function g(t) is such that the mean-square value of the random variable Y is finite, and if the random variable Y is a Gaussian-distributed random variable for every g(t) in this class of functions, then the process X(t) is said to be a Gaussian process.
• In other words, the process X(t) is a Gaussian process if every linear functional of X(t) is a Gaussian random variable.
• The random variable Y has a Gaussian distribution if its probability density function has the form
fY(y) = (1 / (√(2π) σY)) exp(−(y − µY)² / (2σY²))
• where µY is the mean and σY² is the variance of the random variable Y.
• The plot of this probability density function is given in Figure 5.16 for the special case where the Gaussian random variable Y is normalized to have a mean µY of zero and a variance σY² of one, as shown by
fY(y) = (1 / √(2π)) exp(−y² / 2)
• Such a normalized Gaussian distribution is commonly written as N(0,1).
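This formula corresponds directly to the laboratory component on generating the probability density function of the Gaussian distribution. The sketch below is an illustrative example (assuming Python with NumPy and Matplotlib as the simulation tools): it evaluates the N(0,1) density from the expression above and overlays a normalized histogram of Gaussian samples.

import numpy as np
import matplotlib.pyplot as plt

mu, sigma = 0.0, 1.0                     # N(0, 1): zero mean, unit variance
y = np.linspace(-4, 4, 400)
f_y = np.exp(-(y - mu)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

rng = np.random.default_rng(seed=9)
samples = rng.normal(mu, sigma, size=100_000)

plt.hist(samples, bins=80, density=True, alpha=0.5, label="histogram of samples")
plt.plot(y, f_y, label="Gaussian PDF  fY(y)")
plt.xlabel("y")
plt.legend()
plt.show()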
• The Gaussian process has two main virtues.
• First, the Gaussian process has many properties that make
analytic results possible.
• Second, the random processes produced by physical phenomena are often such that a Gaussian model is appropriate.
Central Limit Theorem
The central limit theorem provides the mathematical justification for using a Gaussian process as a model for a large number of different physical phenomena in which the observed random variable, at a particular instant of time, is the result of a large number of individual random events.
• Let Xi, i = 1, 2, ..., N, be a set of random variables that satisfies the following requirements:
1. The Xi are statistically independent.
2. The Xi have the same probability distribution with mean µX and variance σX².
• The Xi so described are said to constitute a set of independent and identically distributed (i.i.d.) random variables.
• Let these random variables be normalized as follows:
Yi = (Xi − µX) / σX, so that E[Yi] = 0 and var[Yi] = 1,
and define the normalized sum
VN = (1/√N) (Y1 + Y2 + ... + YN)
• The central limit theorem states that the probability distribution of VN approaches the normalized Gaussian distribution N(0,1) in the limit as N approaches infinity.
• That is, regardless of the distribution of the Yi, the normalized sum VN approaches a Gaussian distribution.
• The central limit theorem gives only the "limiting" form of the probability distribution of the normalized random variable VN as N approaches infinity.
• When N is finite, the Gaussian limit is most accurate in the central portion of the density function and less accurate in the tails of the density function.
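This limiting behaviour can be demonstrated numerically. The sketch below is an illustrative example (assuming Python with NumPy as the simulation tool; the uniformly distributed Xi are an arbitrary non-Gaussian starting point): it forms VN for increasing N and compares its variance and tail probability with those of N(0,1), for which P[|V| > 1.96] ≈ 0.05.

import numpy as np

rng = np.random.default_rng(seed=10)
mu_x, var_x = 0.5, 1.0 / 12.0                    # mean and variance of a uniform(0,1) variable
n_trials = 200_000

for N in [1, 2, 10, 50]:
    X = rng.uniform(0.0, 1.0, size=(n_trials, N))    # i.i.d. Xi, non-Gaussian
    Y = (X - mu_x) / np.sqrt(var_x)                  # normalized variables Yi
    V = np.sum(Y, axis=1) / np.sqrt(N)               # VN = (1/sqrt(N)) * sum of Yi
    tail = np.mean(np.abs(V) > 1.96)                 # compare with 0.05 for N(0, 1)
    print(f"N = {N:3d}   var(VN) = {np.var(V):.3f}   P[|VN| > 1.96] = {tail:.4f}")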
Properties of a Gaussian process:
Property 1
If a Gaussian process X(t) is applied to a stable linear filter, then the random process Y(t) developed at the output of the filter is also Gaussian.
Proof:
• Consider a linear time-invariant filter of impulse response h(t), with the random process X(t) as input and the random process Y(t) as output. Assume that X(t) is a Gaussian process.
• The random processes Y(t) and X(t) are related by the convolution integral
Y(t) = ∫ h(t − τ) X(τ) dτ, integrated over 0 <= τ <= T, for 0 <= t < ∞
• Assume that the impulse response h(t) is such that the mean-square value of the output process Y(t) is finite for all t in the range 0 <= t < ∞ for which Y(t) is defined.
• To demonstrate that Y(t) is Gaussian, we must show that any linear functional of it is a Gaussian random variable.
• We therefore define the random variable
Z = ∫ gY(t) Y(t) dt, integrated over 0 <= t < ∞
• Z must be a Gaussian random variable for every function gY(t) such that the mean-square value of Z is finite.
• Substituting the convolution integral for Y(t) and interchanging the order of integration,
Z = ∫ g(τ) X(τ) dτ, integrated over 0 <= τ <= T
• where
g(τ) = ∫ gY(t) h(t − τ) dt, integrated over 0 <= t < ∞
• Since X(t) is a Gaussian process by hypothesis, Z, being a linear functional of X(t), is a Gaussian random variable. Hence, if the input X(t) to a stable linear filter is a Gaussian process, the output Y(t) is also a Gaussian process.
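As a numerical illustration of Property 1, the sketch below is an example under stated assumptions (Python with NumPy as the simulation tool; the moving-average filter is an arbitrary choice of stable linear filter, not one specified in the text): it passes sampled white Gaussian noise through the filter and checks that the output samples still have Gaussian statistics by comparing their standardized third and fourth moments with the Gaussian values (skewness ≈ 0, kurtosis ≈ 3).

import numpy as np

rng = np.random.default_rng(seed=11)
x = rng.normal(0.0, 1.0, size=500_000)       # input: sampled white Gaussian process X(t)

h = np.ones(20) / 20.0                       # impulse response of a stable LTI (moving-average) filter
y = np.convolve(x, h, mode="valid")          # output process Y(t) = h(t) convolved with X(t)

z = (y - np.mean(y)) / np.std(y)             # standardize the output samples
print("skewness  E[Z^3] =", np.mean(z**3))   # approximately 0 for a Gaussian
print("kurtosis  E[Z^4] =", np.mean(z**4))   # approximately 3 for a Gaussian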
• Property 2
Consider the set of random variables (samples) X(t1), X(t2), X(t3), . . . , X(tn), obtained by observing a random process X(t) at times t1, t2, t3, . . . , tn. If the process X(t) is Gaussian, then this set of random variables is jointly Gaussian for any n, with their n-fold joint probability density function being completely determined by specifying the set of means
µX(ti) = E[X(ti)], i = 1, 2, . . . , n
and the set of autocovariance functions
CX(ti, tk) = E[(X(ti) − µX(ti))(X(tk) − µX(tk))], i, k = 1, 2, . . . , n
• Consider the composite set of random variables X(t1), X(t2),
X(t3), . . . . . . . , X(tn), Y(u1), Y(u2), Y(u3), . . . . . . . , Y(um),
obtained by observing the random process X(t) at times
{ti, i=1,2,3,4,…….,n} and a second random process Y(t) at
times {uk, k=1,2,3,4,…….,m}.
• The processes X(t) and Y(t) are said to be jointly Gaussian if this composite set of random variables is jointly Gaussian for any n and m.
• The covariance function of the random processes X(t) and Y(t) is given by
CXY(ti, uk) = E[(X(ti) − µX(ti))(Y(uk) − µY(uk))]
for any given pair of observation instants (ti, uk).
• Property 3
If a Gaussian process is wide sense stationary, then the process
is also stationary in the strict sense.
• Property 4
If the random variables X(t1), X(t2), X(t3), . . . , X(tn), obtained by sampling a Gaussian process X(t) at times t1, t2, . . . , tn, are uncorrelated, that is,
E[(X(ti) − µX(ti))(X(tk) − µX(tk))] = 0 for i ≠ k,
then these random variables are statistically independent.
• The implication of this property is that the joint probability density function of the set of random variables X(t1), X(t2), X(t3), . . . , X(tn) can be expressed as the product of the probability density functions of the individual random variables in the set.
