
Communication and Optical Instrumentation
Dr. Jaiprakash Narain Dwivedi
Course Outcomes
• CO1: Understand and remember different modulation and demodulation schemes for analog and digital communications.
• CO2: Illustrate the basic knowledge of probability theory and understand the effect of noise in communication systems.
• CO3: Design, implement and compare various modulation and demodulation schemes.
• CO4: Understand the different components of optical networks.
• CO5: Understand the working of optical sensors.
Unit 2
• Probability Theory, Random Variables
• Gaussian Distribution, Transformation of Random Variables
• PDF, CDF
• Mean, Moments, Covariance Functions
• Power Spectral Density, Correlation Functions
• Sampling Theorem (Instantaneous Sampling and Flat-Top Sampling)
• TDM, Pulse Code Modulation
• Differential PCM (DPCM), Delta Modulation
Phenomena
• Deterministic
• Non-deterministic
Deterministic Phenomena
• There exists a mathematical model that allows "perfect" prediction of the phenomenon's outcome.
• Many examples exist in physics and chemistry (the exact sciences).
Non-deterministic Phenomena
• No mathematical model exists that allows "perfect" prediction of the phenomenon's outcome.
Non-deterministic Phenomena
• May be divided into two groups:

1. Random phenomena
– Unable to predict the outcomes, but in the long run the outcomes exhibit statistical regularity.

2. Haphazard phenomena
– Unpredictable outcomes, with no long-run exhibition of statistical regularity in the outcomes.
Phenomena
• Deterministic
• Non-deterministic
– Haphazard
– Random
Haphazard phenomena
– Unpredictable outcomes, with no long-run exhibition of statistical regularity in the outcomes.
– Do such phenomena exist?
– Will any non-deterministic phenomenon exhibit long-run statistical regularity eventually?
Random phenomena
– Unable to predict the outcomes, but in the long run the outcomes exhibit statistical regularity.

Examples
1. Tossing a coin – outcomes S = {Head, Tail}
Unable to predict on each toss whether it is Head or Tail.
In the long run we can predict that 50% of the time heads will occur and 50% of the time tails will occur.
2. Rolling a die – outcomes S = {1, 2, 3, 4, 5, 6}
Unable to predict the outcome, but in the long run one can determine that each outcome will occur 1/6 of the time.
Use symmetry: each side is the same, so one side should not occur more frequently than another in the long run. If the die is not balanced this may not be true.
Definitions
The Sample Space, S
The sample space, S, for a random phenomenon is the set of all possible outcomes.
Examples
1. Tossing a coin – outcomes S = {Head, Tail}
2. Rolling a die – outcomes S = {1, 2, 3, 4, 5, 6}
An Event, E
The event, E, is any subset of the sample space, S, i.e. any set of outcomes (not necessarily all outcomes) of the random phenomenon.
[Venn diagram: event E inside sample space S]
The event, E, is said to have occurred if, after the outcome has been observed, the outcome lies in E.
Examples

1. Rolling a die – outcomes S = {1, 2, 3, 4, 5, 6}

E = the event that an even number is rolled = {2, 4, 6}
Special Events
The Null Event (the empty event), ∅
∅ = { } = the event that contains no outcomes

The Entire Event (the sample space), S
S = the event that contains all outcomes

The empty event, ∅, never occurs.
The entire event, S, always occurs.
Set operations on Events
Union
Let A and B be two events; then the union of A and B is the event (denoted by A ∪ B) defined by:
A ∪ B = {e | e belongs to A or e belongs to B}
[Venn diagram: A ∪ B]
The event A ∪ B occurs if the event A occurs or the event B occurs.
Intersection
Let A and B be two events; then the intersection of A and B is the event (denoted by A ∩ B) defined by:
A ∩ B = {e | e belongs to A and e belongs to B}
[Venn diagram: A ∩ B]
The event A ∩ B occurs if the event A occurs and the event B occurs.
Complement
Let A be any event; then the complement of A (denoted by Ā) is defined by:
Ā = {e | e does not belong to A}
[Venn diagram: Ā]
The event Ā occurs if the event A does not occur.
In problems you will recognize that you are working with:
1. Union if you see the word or,
2. Intersection if you see the word and,
3. Complement if you see the word not.
Definition: Mutually Exclusive
Two events A and B are called mutually exclusive if:
A ∩ B = ∅
[Venn diagram: disjoint A and B]
If two events A and B are mutually exclusive then:
1. They have no outcomes in common.
2. They can't occur at the same time: the outcome of the random experiment cannot belong to both A and B.
Probability
Definition: probability of an event E.
Suppose that the sample space S = {o1, o2, o3, ..., oN} has a finite number, N, of outcomes.
Also, each of the outcomes is equally likely (because of symmetry).
Then for any event E:

P[E] = n(E)/n(S) = n(E)/N = (no. of outcomes in E)/(total no. of outcomes)

Note: the symbol n(A) = no. of elements of A.

This definition of P[E] applies only to the special case when:
1. The sample space has a finite no. of outcomes, and
2. Each outcome is equi-probable.
If this is not true, a more general definition of probability is required.
Rules of Probability
The additive rule (mutually exclusive events)

P[A ∪ B] = P[A] + P[B]
i.e.
P[A or B] = P[A] + P[B]
if A ∩ B = ∅ (A and B mutually exclusive)

Recall that if A and B are mutually exclusive they have no outcomes in common and cannot occur at the same time: the outcome of the random experiment cannot belong to both A and B.
[Venn diagram: disjoint A and B]
The additive rule (in general)

P[A ∪ B] = P[A] + P[B] - P[A ∩ B]
or
P[A or B] = P[A] + P[B] - P[A and B]

Logic:
[Venn diagram: overlapping A and B with A ∩ B shaded]
When P[A] is added to P[B], the outcomes in A ∩ B are counted twice; hence
P[A ∪ B] = P[A] + P[B] - P[A ∩ B]
Example:
Saskatoon and Moncton are two of the cities competing for the World University Games (there are also many others). The organizers are narrowing the competition to the final 5 cities.
There is a 20% chance that Saskatoon will be amongst the final 5, a 35% chance that Moncton will be amongst the final 5, and an 8% chance that both Saskatoon and Moncton will be amongst the final 5.
What is the probability that Saskatoon or Moncton will be amongst the final 5?
Solution:
Let A = the event that Saskatoon is amongst the final 5.
Let B = the event that Moncton is amongst the final 5.
Given P[A] = 0.20, P[B] = 0.35, and P[A ∩ B] = 0.08.
What is P[A ∪ B]?
Note: "and" ≡ ∩, "or" ≡ ∪.
P[A ∪ B] = P[A] + P[B] - P[A ∩ B] = 0.20 + 0.35 - 0.08 = 0.47
The rule for complements

P[Ā] = 1 - P[A]
or
P[not A] = 1 - P[A]

Recall: the complement of A (denoted by Ā) is Ā = {e | e does not belong to A}; the event Ā occurs if the event A does not occur.
[Venn diagram: A and its complement Ā within S]

Logic:
A and Ā are mutually exclusive, and S = A ∪ Ā.
Thus 1 = P[S] = P[A] + P[Ā], and so P[Ā] = 1 - P[A].
Conditional Probability
• Frequently, before observing the outcome of a random experiment, you are given information regarding the outcome.
• How should this information be used in prediction of the outcome?
• Namely, how should probabilities be adjusted to take this information into account?
• Usually the information is given in the following form: you are told that the outcome belongs to a given event (i.e. you are told that a certain event has occurred).
Definition
Suppose that we are interested in computing the probability of event A and we have been told event B has occurred.
Then the conditional probability of A given B is defined to be:

P[A|B] = P[A ∩ B] / P[B],  provided P[B] ≠ 0

Rationale:
If we're told that event B has occurred, then the sample space is restricted to B.
The probability within B has to be normalized; this is achieved by dividing by P[B].
The event A can now only occur if the outcome is in A ∩ B. Hence the new probability of A is:

P[A|B] = P[A ∩ B] / P[B]

[Venn diagram: B as the restricted sample space, with A ∩ B shaded]
An Example
The Academy Awards show is soon to be broadcast.
For a specific married couple, the probability that the husband watches the show is 80%, the probability that his wife watches the show is 65%, and the probability that they both watch the show is 60%.
If the husband is watching the show, what is the probability that his wife is also watching the show?

Solution:
Let B = the event that the husband watches the show: P[B] = 0.80.
Let A = the event that his wife watches the show: P[A] = 0.65, and P[A ∩ B] = 0.60.

P[A|B] = P[A ∩ B] / P[B] = 0.60 / 0.80 = 0.75
Independence
Definition
Two events A and B are called independent if

P[A ∩ B] = P[A] P[B]

Note: if P[B] ≠ 0 and P[A] ≠ 0, then

P[A|B] = P[A ∩ B] / P[B] = P[A] P[B] / P[B] = P[A]
and
P[B|A] = P[A ∩ B] / P[A] = P[A] P[B] / P[A] = P[B]

Thus, in the case of independence, the conditional probability of an event is not affected by knowledge of the other event.
Difference between independence and mutually exclusive

Mutually exclusive:
Two mutually exclusive events are independent only in the special case where P[A] = 0 or P[B] = 0 (in which case P[A ∩ B] = 0).
Mutually exclusive events are highly dependent otherwise: A and B cannot occur simultaneously, so if one event occurs the other does not.
[Venn diagram: disjoint A and B]

Independent events:

P[A ∩ B] = P[A] P[B]
or
P[A ∩ B] / P[B] = P[A] = P[A] / P[S]

[Venn diagram: A ∩ B inside B, within the sample space S]
The ratio of the probability of the set A within B is the same as the ratio of the probability of the set A within the entire sample space S.
The multiplicative rule of probability

P[A ∩ B] = P[A] P[B|A]  if P[A] ≠ 0
          = P[B] P[A|B]  if P[B] ≠ 0

and
P[A ∩ B] = P[A] P[B]  if A and B are independent.


Summary of the Rules of Probability

The additive rule:
P[A ∪ B] = P[A] + P[B] - P[A ∩ B]
and
P[A ∪ B] = P[A] + P[B]  if A ∩ B = ∅

The rule for complements:
for any event E,
P[Ē] = 1 - P[E]

Conditional probability:
P[A|B] = P[A ∩ B] / P[B]

The multiplicative rule of probability:
P[A ∩ B] = P[A] P[B|A]  if P[A] ≠ 0
          = P[B] P[A|B]  if P[B] ≠ 0
and
P[A ∩ B] = P[A] P[B]  if A and B are independent
(this is the definition of independence).

Counting techniques
Finite uniform probability space
Many examples fall into this category:
1. Finite number of outcomes.
2. All outcomes are equally likely.
3. P[E] = n(E)/n(S) = n(E)/N = (no. of outcomes in E)/(total no. of outcomes)
Note: n(A) = no. of elements of A.

To handle problems of this kind we have to be able to count: count n(E) and n(S).
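For small finite sample spaces this counting can be done by brute-force enumeration. A minimal added sketch (two fair dice, with an event chosen purely for illustration):

```python
from itertools import product

S = list(product(range(1, 7), repeat=2))   # sample space: 36 equally likely outcomes
E = [o for o in S if sum(o) == 7]          # event: the two dice sum to 7

# P[E] = n(E) / n(S)
print(len(E), "/", len(S), "=", len(E) / len(S))   # 6/36 = 1/6
```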
Random Variables

• In an experiment, a measurement is usually denoted by a variable such as X.
• In a random experiment, a variable whose measured value can change (from one replicate of the experiment to another) is referred to as a random variable.
Random Variables
Continuous Random Variables

Probability Density Function
• The probability distribution, or simply distribution, of a random variable X is a description of the set of probabilities associated with the possible values for X.
• For a continuous random variable X, the distribution is described by a probability density function f(x) such that:
1. f(x) ≥ 0
2. ∫_{-∞}^{∞} f(x) dx = 1
3. P(a ≤ X ≤ b) = ∫_a^b f(x) dx = area under f(x) from a to b
• Note that P(X = x) = 0 for a continuous random variable.

Cumulative Distribution Function
• The cumulative distribution function of a continuous random variable X is
F(x) = P(X ≤ x) = ∫_{-∞}^x f(u) du,  -∞ < x < ∞
• Given F(x), the density is recovered as f(x) = dF(x)/dx.

Mean and Variance
• The mean (expected value) of a continuous random variable X is
μ = E[X] = ∫_{-∞}^{∞} x f(x) dx
• The variance of X is
σ² = V[X] = ∫_{-∞}^{∞} (x - μ)² f(x) dx = E[X²] - μ²
Important Continuous Distributions

Normal Distribution
Undoubtedly, the most widely used model for the distribution of a random variable is the normal distribution.
• Central limit theorem: sums of many independent random effects tend toward a normal distribution.
• Also called the Gaussian distribution.
A normal random variable with mean μ and variance σ² has the pdf

f(x) = (1 / √(2πσ²)) exp(-(x - μ)² / (2σ²)),  -∞ < x < ∞
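As an added numerical illustration (not from the slides), the Gaussian pdf formula can be checked against a simple relative-frequency estimate from random samples; the parameters μ = 2, σ = 1.5 are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 2.0, 1.5
x = rng.normal(mu, sigma, 200_000)

def gaussian_pdf(t, mu, sigma):
    """Normal density f(t) = exp(-(t - mu)^2 / (2 sigma^2)) / sqrt(2 pi sigma^2)."""
    return np.exp(-(t - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

# Compare the formula with P(|X - t| < h) / (2h) at a few points
for t in (-1.0, 2.0, 4.0):
    h = 0.05  # half-width of the counting window
    empirical = np.mean(np.abs(x - t) < h) / (2 * h)
    print(f"f({t}): formula {gaussian_pdf(t, mu, sigma):.4f}, empirical {empirical:.4f}")
```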
Stationary Process
Stationary process:
The statistical characterization of a process is independent of the time at which observation of the process is initiated.
Non-stationary process:
Not a stationary process (unstable phenomenon).
Consider X(t), initiated at t = -∞, and let X(t1), X(t2), ..., X(tk) denote the random variables obtained at t1, t2, ..., tk.
For the random process to be stationary in the strict sense (strictly stationary), the joint distribution function must satisfy

F_{X(t1+τ),...,X(tk+τ)}(x1, ..., xk) = F_{X(t1),...,X(tk)}(x1, ..., xk)

for all time shifts τ, all k, and all possible choices of t1, t2, ..., tk.
Mean, Correlation and Covariance Functions
Let X(t) be a strictly stationary random process.
The mean of X(t) is

μ_X(t) = E[X(t)] = ∫_{-∞}^{∞} x f_{X(t)}(x) dx = μ_X  for all t

where f_{X(t)}(x) is the first-order pdf.
The autocorrelation function of X(t) is

R_X(t1, t2) = E[X(t1) X(t2)]
            = ∫_{-∞}^{∞} ∫_{-∞}^{∞} x1 x2 f_{X(t1),X(t2)}(x1, x2) dx1 dx2
            = ∫_{-∞}^{∞} ∫_{-∞}^{∞} x1 x2 f_{X(0),X(t2-t1)}(x1, x2) dx1 dx2
            = R_X(t2 - t1)  for all t1 and t2.
The auto-covariance function is

C_X(t1, t2) = E[(X(t1) - μ_X)(X(t2) - μ_X)] = R_X(t2 - t1) - μ_X²

which is a function of the time difference (t2 - t1).
We can determine C_X(t1, t2) if μ_X and R_X(t2 - t1) are known.
Note that:
1. μ_X and R_X(t2 - t1) only provide a partial description.
2. If μ_X(t) = μ_X and R_X(t1, t2) = R_X(t2 - t1), then X(t) is wide-sense stationary (WSS).
3. The class of strictly stationary processes with finite second-order moments is a subclass of the class of wide-sense stationary processes.
4. The first- and second-order moments may not exist.
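As an added sketch, the mean and autocorrelation of a stationary process can be estimated from a single sample path by time averaging; this assumes ergodicity, and white Gaussian noise is used purely as a convenient test process:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
x = rng.standard_normal(N)     # white-noise sample path: R_X(0) = 1, R_X(lag) ≈ 0 otherwise

mu_hat = x.mean()              # estimate of mu_X (should be near 0)
print(f"mu_X ≈ {mu_hat:+.4f}")

def autocorr(x, lag):
    """Time-average estimate of R_X(lag) = E[X(t) X(t + lag)]."""
    return np.mean(x[:len(x) - lag] * x[lag:])

for lag in range(4):
    print(f"R_X({lag}) ≈ {autocorr(x, lag):+.4f}")
# Auto-covariance follows as C_X(lag) = R_X(lag) - mu_X**2
```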
Cross-correlation Function

R_XY(t, u) = E[X(t) Y(u)]
and R_YX(t, u) = E[Y(t) X(u)]

Note that R_XY(t, u) and R_YX(t, u) are not in general even functions.
The correlation matrix is

R(t, u) = [ R_X(t, u)   R_XY(t, u) ]
          [ R_YX(t, u)  R_Y(t, u)  ]

If X(t) and Y(t) are jointly stationary,

R(τ) = [ R_X(τ)   R_XY(τ) ]
       [ R_YX(τ)  R_Y(τ)  ]

where τ = t - u.
Power Spectral Density (PSD)
Consider the Fourier transform pair for g(t):

G(f) = ∫_{-∞}^{∞} g(t) exp(-j2πft) dt
g(t) = ∫_{-∞}^{∞} G(f) exp(j2πft) df

Let H(f) denote the frequency response of a linear filter with impulse response h(τ):

h(τ1) = ∫_{-∞}^{∞} H(f) exp(j2πfτ1) df

Recall that when a stationary process X(t) drives the filter, the mean-square value of the output Y(t) is

E[Y²(t)] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} h(τ1) h(τ2) R_X(τ2 - τ1) dτ1 dτ2

Substituting for h(τ1):

E[Y²(t)] = ∫_{-∞}^{∞} df H(f) ∫_{-∞}^{∞} dτ2 h(τ2) ∫_{-∞}^{∞} R_X(τ2 - τ1) exp(j2πfτ1) dτ1

With the change of variable τ = τ2 - τ1:

E[Y²(t)] = ∫_{-∞}^{∞} df H(f) ∫_{-∞}^{∞} dτ2 h(τ2) exp(j2πfτ2) ∫_{-∞}^{∞} R_X(τ) exp(-j2πfτ) dτ

The integral over τ2 equals H*(f), the complex conjugate response of the filter, so

E[Y²(t)] = ∫_{-∞}^{∞} |H(f)|² [∫_{-∞}^{∞} R_X(τ) exp(-j2πfτ) dτ] df

where |H(f)| is the magnitude response.
Define the power spectral density as the Fourier transform of R_X(τ):

S_X(f) = ∫_{-∞}^{∞} R_X(τ) exp(-j2πfτ) dτ

Then

E[Y²(t)] = ∫_{-∞}^{∞} |H(f)|² S_X(f) df

Let |H(f)| be the magnitude response of an ideal narrowband filter:

|H(f)| = 1, |f ∓ f_c| ≤ Δf/2
|H(f)| = 0, |f ∓ f_c| > Δf/2

where Δf is the filter bandwidth. If Δf ≪ f_c and S_X(f) is continuous,

E[Y²(t)] ≈ 2Δf · S_X(f_c)

so S_X(f_c) represents power per unit bandwidth, measured in W/Hz.
Properties of the PSD

S_X(f) = ∫_{-∞}^{∞} R_X(τ) exp(-j2πfτ) dτ
R_X(τ) = ∫_{-∞}^{∞} S_X(f) exp(j2πfτ) df

Einstein-Wiener-Khinchine relations:
S_X(f) ⇌ R_X(τ)
S_X(f) is often more useful than R_X(τ)!

a. S_X(0) = ∫_{-∞}^{∞} R_X(τ) dτ

b. E[X²(t)] = ∫_{-∞}^{∞} S_X(f) df

c. If X(t) is stationary, E[Y²(t)] ≈ (2Δf) S_X(f) ≥ 0, hence S_X(f) ≥ 0 for all f.

d. S_X(-f) = ∫_{-∞}^{∞} R_X(τ) exp(j2πfτ) dτ
           = ∫_{-∞}^{∞} R_X(u) exp(-j2πfu) du  (with u = -τ and R_X even)
           = S_X(f)
i.e. the PSD is an even function of f.

e. The PSD can be associated with a pdf:
p_X(f) = S_X(f) / ∫_{-∞}^{∞} S_X(f) df
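Properties b and d are easy to check numerically. The following added sketch estimates the PSD of a filtered-noise sample path with a periodogram (one common estimator, chosen here for brevity, not the method on the slides) and verifies the power identity and the even symmetry:

```python
import numpy as np

rng = np.random.default_rng(1)
N, fs = 2**16, 1000.0
# A stationary test signal: white noise through a short moving-average filter
x = np.convolve(rng.standard_normal(N), np.ones(8) / 8, mode="same")

# Periodogram estimate of the PSD: S_X(f) ≈ |X(f)|^2 / (N fs)
X = np.fft.fft(x)
S = (np.abs(X) ** 2) / (N * fs)

# Property b: E[X^2(t)] = integral of S_X(f) df (sum over FFT bins times df)
power_time = np.mean(x ** 2)
power_freq = np.sum(S) * (fs / N)
print(f"time-domain power {power_time:.5f}, integral of PSD {power_freq:.5f}")

# Property d: the PSD is even, S_X(-f) = S_X(f)
print("max |S(f) - S(-f)|:", np.max(np.abs(S - S[(-np.arange(N)) % N])))
```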
Baseband Communication System
Sampling
• Sampling is taking samples from a continuous signal to convert it to a discrete signal.
• The analog signal is sampled every TS seconds.
• TS is referred to as the sampling interval.
• fS = 1/TS is called the sampling rate or sampling frequency.
Ideal Sampling (Impulse)
Natural Sampling
Flat-top Sampling
Comparison between Sampling Methods
Sampling Effect in the Frequency Domain

Case fS > 2W:
[Figure: sampled waveform m(t), TS = 1/fS; spectrum M(f) with replicas centred at 0, ±fS that do not overlap; LPF cutoff between W and fS - W]
By using an LPF at the receiver, it is possible to reconstruct the original signal from the received samples.

Case fS = 2W (Nyquist rate):
[Figure: spectrum replicas just touch at ±W; LPF cutoff exactly at W]
By using an LPF with fcutoff = W at the receiver, it is possible to reconstruct the original signal from the received samples.

Case fS < 2W:
[Figure: spectrum replicas overlap (aliasing)]
It is no longer possible to reconstruct the original signal from the received samples.
Impulse Sampling with Increasing Sampling Frequency
[Figure: four sampled waveforms of the same signal, sampled at progressively higher rates]
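The three cases above can be reproduced numerically. An added sketch, using a single W = 10 Hz tone as a stand-in for a bandlimited signal:

```python
import numpy as np

W = 10.0                                  # signal "bandwidth": a single 10 Hz tone
for fs in (50.0, 20.0, 15.0):             # fs > 2W, fs = 2W, fs < 2W
    n = np.arange(2048)
    x = np.cos(2 * np.pi * W * n / fs)    # samples taken every Ts = 1/fs
    X = np.abs(np.fft.rfft(x))
    f = np.fft.rfftfreq(len(n), d=1 / fs)
    print(f"fs = {fs:4.0f} Hz -> strongest component near {f[np.argmax(X)]:.2f} Hz")
# fs = 50 Hz shows the tone near 10 Hz; fs = 15 Hz aliases it to |fs - W| = 5 Hz
```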
Pulse Code Modulation (PCM) is a special form of A/D conversion. It consists of sampling, quantizing, and encoding steps. It is widely popular because:
- It has been used for a long time in telephone systems.
- Inexpensive electronics exist.
- Errors can be corrected during long-haul transmission.
- It can use time division multiplexing.
[Figure: PCM signal]
Signals in PCM Process
[Figure: waveforms at each stage of the PCM process]

Design Issues for PCM
- Analog-to-Digital Conversion:
  Aliasing
  Sample timing accuracy
  Quantization noise
  D/A accuracy
  Reconstruction filter
- Digital Communication Technique:
  Encoding and decoding
  Signal format
  Transmit and receive filters
  Channel effects
  Statistical decision-making error
Bandwidth of PCM

Assume w(t) is bandlimited to B hertz.
Minimum sampling rate = 2B samples/second.
A/D output = n bits per sample (quantization levels M = 2^n).
Assume a simple PCM without redundancy.
Minimum channel bandwidth = bit rate / 2.

→ Bandwidth of PCM signals:
B_PCM ≥ nB (with sinc functions as orthogonal basis)
B_PCM ≥ 2nB (with rectangular pulses as orthogonal basis)
→ For any reasonable quantization level M, PCM requires much higher bandwidth than the original w(t).
Example: PCM used in telephone systems

Voice is considered to be bandlimited at 4 kHz (= B), so f_s = 8,000 samples per second.
Each sample is converted into an 8-bit binary number, so n = 8 bits per sample.
Bit rate of the binary PCM signal:
R = f_s samples/second × n bits/sample = 64 kbits/second (this is known as the DS-0 signal).
Minimum bandwidth: B_min = R/2 = 32 kHz (when the sinc function is used).
Null bandwidth: B_PCM = R = 64 kHz (when the rectangular pulse is used).
Peak signal-to-quantization-noise ratio:
(S/N)_pk,out = 3 · (2^8)² ≈ 52.9 dB
Differential Pulse Code Modulation (DPCM)

- Often voice and video signals do not change much from one sample to the next.
- Such signals have their energy concentrated at lower frequencies.
- Sampling faster than necessary generates redundant information.
Bandwidth can be saved by not sending all samples:
* Send true samples occasionally.
* In between, send only the change from the previous value.
* Change values can be sent using fewer bits than true samples.
Examples (CCITT standards):
* 32 kbits/s (4-bit quantization and 8 k samples/s) for 3.2 kHz signals
* 64 kbits/s (4-bit quantization and 16 k samples/s) for 7 kHz signals
For slowly varying signals, a future sample can be predicted from past samples.
[Block diagram: at the transmitter the prediction error e(t) = s(t) minus the predictor output is formed and sent; at the receiver e(t) is added back to the local prediction to recover s(t); a predictor appears on both sides]
A transversal filter can perform the prediction process.

One Implementation of DPCM
Quantization error is accumulated.

Another Implementation of DPCM
Quantization error is not accumulated.
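As a hedged sketch of the second implementation (quantizer inside the prediction loop, so quantization error does not accumulate), here is a toy DPCM codec with a one-tap previous-sample predictor; the step size and bit width are illustrative choices, not values from the slides:

```python
import numpy as np

def quantize(e, step, bits=4):
    """Uniform quantizer applied to the prediction error."""
    q = np.round(e / step)
    q = np.clip(q, -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q * step

def dpcm_encode(x, step=0.05):
    """One-tap DPCM: predict each sample by the previous reconstructed sample."""
    pred = 0.0
    out = []
    for s in x:
        e = quantize(s - pred, step)   # quantized prediction error (transmitted)
        out.append(e)
        pred = pred + e                # track the receiver's reconstruction in the loop
    return np.array(out)

def dpcm_decode(errors):
    return np.cumsum(errors)           # receiver: same predictor, add the errors back

fs = 8000
t = np.arange(0, 0.02, 1 / fs)
x = 0.8 * np.sin(2 * np.pi * 200 * t)  # slowly varying relative to fs
xr = dpcm_decode(dpcm_encode(x))
print(f"max reconstruction error: {np.max(np.abs(x - xr)):.4f}")
```

Because the encoder predicts from its own reconstructed samples, encoder and decoder stay in lockstep and the residual error stays bounded by the quantizer step.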
Delta Modulation (DM)

- A special type of DPCM with M = 2 (a 1-bit quantizer).
- Inexpensive and simple to implement.

DM Waveform
[Figure: staircase DM output tracking the analog input]
Some notes about DM
Bit rate = sampling rate.
Reconstructed signal: z(nTs) = δ Σ_{i=1}^{n} y(iTs), where y(iTs) = +1 or -1 and δ is the step size.

Types of noise:
* Quantization noise: the step size δ takes the place of the smallest quantization level.
* Granular noise: z(nTs) is always different from z((n-1)Ts), even for a constant input.
* Slope overload noise: the maximum slope of the output signal is δ/Ts.
δ too small: slope overload noise.
δ too large: quantization noise and granular noise.

There is an optimum value for δ in terms of signal bandwidth, signal power, and sampling frequency.

Example: Let w(t) = A cos(2πf_o t) and let the sampling frequency be f_s = k f_o, where k is an integer, k ≥ 2. What is the minimum value of δ for no slope overload?

dw(t)/dt = -2πA f_o sin(2πf_o t), which has a maximum magnitude of 2πA f_o.

Ts = 1/f_s = 1/(k f_o)

For no slope overload we need δ/Ts ≥ 2πA f_o, so δ ≥ 2πA f_o Ts = 2πA/k.
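The slope-overload threshold can be confirmed by simulation. A minimal added sketch of a linear delta modulator tracking w(t) = A cos(2πf_o t); variable names are illustrative:

```python
import numpy as np

def delta_modulate(w, delta):
    """1-bit DM: the staircase moves up or down by delta at each sample."""
    z = np.zeros(len(w))
    acc = 0.0
    for i, s in enumerate(w):
        acc += delta if s >= acc else -delta   # y(iTs) = +1 or -1, scaled by delta
        z[i] = acc
    return z

A, f0, k = 1.0, 100.0, 16        # fs = k * f0
fs = k * f0
t = np.arange(0, 0.05, 1 / fs)
w = A * np.cos(2 * np.pi * f0 * t)

delta_min = 2 * np.pi * A / k    # no-slope-overload threshold from the example
for d in (0.5 * delta_min, delta_min, 2 * delta_min):
    err = np.max(np.abs(w - delta_modulate(w, d)))
    print(f"delta = {d:.3f}: max tracking error = {err:.3f}")
# Below delta_min the staircase cannot keep up (slope overload);
# well above it the error grows again (granular/quantization noise).
```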
Time Division Multiplexing
• Time interleaving of samples from different sources to be transmitted over a
single communication channel.
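A toy added sketch of sample-interleaved TDM: one sample from each source per frame, with the inverse operation at the receiver (function names are illustrative):

```python
import numpy as np

def tdm_multiplex(sources):
    """Interleave equal-length sample streams: one sample per source per frame."""
    return np.column_stack(sources).ravel()

def tdm_demultiplex(stream, n_sources):
    """Recover each source by taking every n_sources-th sample."""
    return [stream[i::n_sources] for i in range(n_sources)]

a = np.array([1, 2, 3, 4])        # samples from source A
b = np.array([10, 20, 30, 40])    # samples from source B
c = np.array([100, 200, 300, 400])

frame_stream = tdm_multiplex([a, b, c])
print(frame_stream)               # [1 10 100 2 20 200 3 30 300 4 40 400]
print(tdm_demultiplex(frame_stream, 3))
```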
References

• S. Haykin, Communication Systems, 4th Edition, John Wiley & Sons, Singapore, 2001.
• B. P. Lathi, Modern Digital & Analog Communication Systems, 3rd Edition, Oxford University Press, Chennai, 1998.
• Leon W. Couch II, Digital and Analog Communication Systems, 6th Edition, Pearson Education Inc., New Delhi, 2001.
• Gerd Keiser, Optical Fiber Communications, 5th Edition, McGraw Hill, 2013.
• J. Wilson & J. F. B. Hawkes, Optoelectronics: An Introduction, PHI/Pearson.
Thank you