
Signal Analysis

Objectives:
• To explain the most fundamental signal
• To describe the classification of the signals
• To describe the classification of the systems
• To introduce special functions
• To introduce Fourier series
• To explain the concept of negative frequency
• To show how the signal may be described in either the time domain or the frequency domain and
establish the relationship between these descriptions.
• To study Fourier transform
• To determine signal power
• To study convolution and autocorrelation

In a communication system, the received waveform is usually separated into a desired part containing the information and an undesired part. The desired part is the signal and the undesired part is the noise.
A signal is a function of time. In a commonly used electrical communication system, the most fundamental
signal we deal with is the sinusoidal signal.

Why is the sinusoidal signal the most fundamental?


The response of a linear time-invariant system to a sine wave is also sinusoidal. If a sinusoidal signal is
applied at the input of a linear system, the output waveform is also sinusoidal with the same frequency as
the input, differing only by a constant transmission delay and an attenuation (or gain).

However, communication systems involve waveforms that are complex in nature, and it is desirable to
resolve them in terms of sinusoidal functions. Signal analysis is a tool for achieving this aim.
The principle of signal analysis is to break up all signals into summations of sinusoidal components. This
provides a description of a given signal in terms of sinusoidal frequencies.

Classification of signals
The most useful method of signal representation for any given situation depends on the type of signal being
considered. Signals can be classified in various ways which are not mutually exclusive:

Continuous (analog) and discrete (digital) signals:


Continuous signals are those that do not have any discontinuity in the time domain.
Discrete signals are those that assume only specific values at a certain time (and thus have discontinuities).
Information-carrying signals can be either continuous or discrete.
e.g. signals associated with a computer are digital because they take on only two values (binary signals)

Periodic and nonperiodic signals


A periodic signal is one that repeats itself exactly after a fixed length of time. It is defined by
g(t) = g(t + T) for all t
where the smallest positive number T that satisfies this equation is called the period.
The equation simply says that shifting the signal by one period, or by an integer number of periods, to the left
or right leaves the waveform unchanged. Consequently, a periodic signal is fully described by specifying its
behavior over any one period.
Any signal for which there is no value of T satisfying the above equation is said to be nonperiodic, or
aperiodic.
Information-carrying signals are normally nonperiodic.

Important properties of periodic signal


A periodic signal must start at -∞ and continue forever.
A periodic signal can be generated by periodic extension of any segment of g(t) of duration T.

(Strictly speaking, no physical signal is periodic, since any real signal has a beginning and an end; however, if a signal
repeats itself over a long time, it can be treated as approximately periodic.)

Deterministic and random signals


A signal whose physical description is known completely, in either mathematical or graphical form, is a deterministic signal.
A signal that is known only in terms of a probabilistic description, such as its mean value, mean squared value and
so on, rather than by a complete mathematical or graphical description, is a random signal.
All signals encountered in telecommunications, i.e. all noise and information-carrying signals, are random
signals.
All message signals are random signals because a signal, to convey some information, must have some
uncertainty (randomness) about it.
Therefore, all signals we have to deal with in telecommunications are random nonperiodic signals in reality.
However, frequently we will use deterministic periodic signals to demonstrate a point because they are
much easier to work with mathematically.

Singularity functions
There is a particular class of functions that plays an important role in signal analysis: the singularity
functions. These functions are discontinuous or have discontinuous derivatives.
Singularity functions are mathematical idealizations and, strictly speaking, do not occur in physical systems.
They are useful in signal analysis because they serve as good approximations to certain limiting conditions
in physical systems.
Here, we are going to discuss two types of singularity functions: unit step function and Dirac delta function
(unit impulse function).

Unit step function


The unit step function is defined by
u(t) = 1,  t > 0
u(t) = 0,  t < 0
(u(t) has no definition at t = 0, or you may define u(0) = 1 or u(0) = ½)

Unit impulse function


The unit impulse function is defined as follows:
δ(t) = 0 for t ≠ 0

∫_{−∞}^{∞} δ(t) dt = 1

δ(t) = lim_{ε→0} [u(t + ε/2) − u(t − ε/2)] / ε

Relationship between δ(t) and u(t)


∫_{−∞}^{t} δ(τ) dτ = u(t)

du(t)/dt = δ(t)

Proof
(1) When t > 0, ∫_{−∞}^{t} δ(τ) dτ = ∫_{−∞}^{0⁻} δ(τ) dτ + ∫_{0⁻}^{t} δ(τ) dτ = 0 + 1 = u(t)
When t < 0, ∫_{−∞}^{t} δ(τ) dτ = 0 = u(t)
Multiplication of a function by δ(t)
f(t)δ(t) = f(0)δ(t), f(t) continuous at 0.
f(t)δ(t – t0) = f(t0)δ(t – t0), f(t) continuous at t0.
Proof
(1) f(t) is continuous at 0, then f(0) exists.
when t = 0, f(t)δ(t) = f(0)δ(t)
when t ≠ 0, f(t)δ(t) = 0 = f(0)δ(t)
so that f(t)δ(t) = f(0)δ(t),

Sampling property of δ(t)


∫_{−∞}^{∞} f(t) δ(t) dt = f(0) ∫_{−∞}^{∞} δ(t) dt = f(0)

∫_{−∞}^{∞} f(t) δ(t − T) dt = f(T)

Proof
(2) ∫_{−∞}^{∞} f(t) δ(t − T) dt = f(T) ∫_{−∞}^{∞} δ(t − T) d(t − T) = f(T)
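As a quick numerical illustration of this sifting property, the following sketch (an illustration added here, not part of the original notes) approximates δ(t) by a rectangular pulse of width ε and height 1/ε and checks that ∫ f(t)δ(t − T)dt approaches f(T) as ε → 0:

```python
import numpy as np

# Approximate delta(t) by a rectangular pulse of width eps and unit area,
# then check the sifting property: integral of f(t) * delta_eps(t - T0) dt ≈ f(T0).
def delta_approx(t, eps):
    return np.where(np.abs(t) < eps / 2, 1.0 / eps, 0.0)

t = np.linspace(-5, 5, 200001)
f = np.cos(t) * np.exp(-0.1 * t**2)          # any smooth test function (assumed)
T0 = 1.5                                     # sampling point (assumed)
f_T0 = np.cos(T0) * np.exp(-0.1 * T0**2)

for eps in (0.5, 0.1, 0.01):
    integral = np.trapz(f * delta_approx(t - T0, eps), t)
    print(f"eps={eps:5.2f}: integral = {integral:.6f},  f(T0) = {f_T0:.6f}")
# As eps -> 0 the integral converges to f(T0).
```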

Fourier analysis
A signal can be represented in either the time domain (where it is a waveform as a function of time) or in
the frequency domain (where the signal is defined in terms of its spectrum).
If the signal is specified in the time domain, we can determine its spectrum.
Conversely, if the spectrum is specified, we can determine the corresponding waveform.
Fourier analysis provides a link between the time domain and the frequency domain.

Fourier series
A periodic function g(t) of period T can be expressed as an infinite sum of sinusoidal waveforms. This
summation is called the Fourier series. The Fourier series can be written in several forms; one such form is the
trigonometric Fourier series:
g(t) = a0/2 + Σ_{n=1}^{∞} an cos(2nπt/T) + Σ_{n=1}^{∞} bn sin(2nπt/T)

where

an = (2/T) ∫_{−T/2}^{T/2} g(t) cos(2nπt/T) dt,   n = 0, 1, 2, ...

bn = (2/T) ∫_{−T/2}^{T/2} g(t) sin(2nπt/T) dt,   n = 1, 2, ...

Any repetitive waveform can be represented in terms of an infinite number of sine or cosine waves having
frequencies which are multiples of a fundamental frequency (“harmonics”) and a dc component.
The frequency domain representation of a sine or cosine time signal is an impulse having the amplitude of
the sine wave and being at the frequency of the sine wave.
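The coefficient formulas above can be checked numerically. The sketch below (illustrative only; the square-wave test signal is an assumption) evaluates an and bn by quadrature and compares them with the known values bn = 4/(nπ) for odd n:

```python
import numpy as np

# Numerically evaluate a_n = (2/T)∫ g(t)cos(2nπt/T)dt and b_n = (2/T)∫ g(t)sin(2nπt/T)dt
# for a square wave g(t) = sign(sin(2πt/T)), whose known series has a_n = 0 and
# b_n = 4/(nπ) for odd n, 0 for even n.
T = 2.0
t = np.linspace(-T / 2, T / 2, 100001)
g = np.sign(np.sin(2 * np.pi * t / T))

for n in range(1, 6):
    an = (2 / T) * np.trapz(g * np.cos(2 * n * np.pi * t / T), t)
    bn = (2 / T) * np.trapz(g * np.sin(2 * n * np.pi * t / T), t)
    expected_bn = 4 / (n * np.pi) if n % 2 == 1 else 0.0
    print(f"n={n}: a_n={an:+.4f}  b_n={bn:+.4f}  (expected b_n={expected_bn:.4f})")
```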

Exponential Fourier series


The trigonometric Fourier series can be further simplified by the use of Euler’s identities
cos ω0t = (e^{jω0t} + e^{−jω0t}) / 2
sin ω0t = (e^{jω0t} − e^{−jω0t}) / (2j)

Assuming ω0 = 2π/T, then

g(t) = a0/2 + Σ_{n=1}^{∞} an cos(2nπt/T) + Σ_{n=1}^{∞} bn sin(2nπt/T) = a0/2 + Σ_{n=1}^{∞} an cos(nω0t) + Σ_{n=1}^{∞} bn sin(nω0t)

= a0/2 + Σ_{n=1}^{∞} [ (an/2)(e^{jnω0t} + e^{−jnω0t}) + (bn/2j)(e^{jnω0t} − e^{−jnω0t}) ]

= a0/2 + Σ_{n=1}^{∞} [ ((an − jbn)/2) e^{jnω0t} + ((an + jbn)/2) e^{−jnω0t} ]

= c0 + Σ_{n=1}^{∞} (cn e^{jnω0t} + c−n e^{−jnω0t})

where c0 = a0 / 2, cn = (an – jbn) / 2 and c-n = (an + jbn) / 2, nω0 is the frequency of exponential Fourier series.
The formula can be written in the compact form

g(t) = Σ_{n=−∞}^{∞} cn e^{jnω0t}

where

cn = (1/T) ∫_{−T/2}^{T/2} g(t) e^{−jnω0t} dt,   n = 0, ±1, ±2, ...

The exponential Fourier series finds extensive application in communication theory. It is equivalent to
resolving the function into harmonically related frequency components of a fundamental frequency.
A weighting factor cn is assigned to each frequency component; cn is called the spectral amplitude and
represents the amplitude of the nth harmonic. The graphical representation of the spectral amplitudes together with the
spectral phases is called the complex frequency spectrum. If cn is purely real or purely imaginary, we can
disregard the phase spectrum.
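A minimal numerical sketch of the cn integral, using cos ω0t as an assumed test signal, shows the two equal spectral lines at n = ±1 (i.e. at ±ω0), which leads naturally to the negative-frequency concept discussed next:

```python
import numpy as np

# Numerical evaluation of c_n = (1/T) ∫_{-T/2}^{T/2} g(t) e^{-j n ω0 t} dt, ω0 = 2π/T,
# for the assumed test signal g(t) = cos(ω0 t), whose only nonzero coefficients
# are c_{-1} = c_{+1} = 1/2.
T = 1.0
w0 = 2 * np.pi / T
t = np.linspace(-T / 2, T / 2, 100001)
g = np.cos(w0 * t)

for n in range(-3, 4):
    cn = np.trapz(g * np.exp(-1j * n * w0 * t), t) / T
    print(f"c_{n:+d} = {cn.real:+.4f} {cn.imag:+.4f}j")
# Output: c_-1 = c_+1 = 0.5, all others ≈ 0, i.e. two spectral lines at ±ω0.
```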

Concept of negative frequency


The exponential Fourier series represents an arbitrary function g(t) by a linear combination of exponential
functions at frequencies 0, ±ω0, ±2ω0, ±3ω0, … . Negative frequencies appear simply because this
mathematical model of a signal requires them. Negative-frequency signals are not physical signals; they are
only a mathematical concept needed to give a compact mathematical description of a real signal.
In the conventional sense, frequencies can only be positive, and the spectrum of the trigonometric Fourier
series exists only for positive frequencies. However, it is more convenient to use the exponential representation
rather than the trigonometric one.

Example:
A periodic “sawtooth” waveform as shown below has the following Fourier Series representation:

V(t) = (2A/π) [ sin ω0t + (1/2) sin 2ω0t + (1/3) sin 3ω0t + ... ]
where ω0 = 2π/T0.

The spectrum of this signal is:

What will happen if we place a filter with various bandwidths following the sawtooth generator?
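One way to answer this question numerically is to keep only the first N harmonics of the series, which is what an ideal lowpass filter of growing bandwidth would do. The sketch below assumes the sawtooth that this series converges to is A(1 − 2t/T0) on 0 < t < T0 (the original figure is not reproduced here):

```python
import numpy as np

# Keeping only the first N harmonics of the sawtooth series,
#   V_N(t) = (2A/π) Σ_{n=1..N} sin(n ω0 t)/n,
# mimics an ideal lowpass filter. Larger N (wider bandwidth) -> smaller error.
A, T0 = 1.0, 1.0                      # assumed amplitude and period
w0 = 2 * np.pi / T0

def sawtooth_partial_sum(t, N):
    return (2 * A / np.pi) * sum(np.sin(n * w0 * t) / n for n in range(1, N + 1))

t = np.linspace(0.05 * T0, 0.95 * T0, 1000)   # stay away from the jumps
exact = A * (1 - 2 * t / T0)                   # assumed sawtooth convention
for N in (1, 3, 10, 50):
    err = np.sqrt(np.mean((sawtooth_partial_sum(t, N) - exact) ** 2))
    print(f"N={N:3d} harmonics kept: rms error = {err:.4f}")
```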

Time domain and frequency domain representation of a signal
A signal can be specified in two equivalent ways:
1. Time domain representation, where g(t) is represented as a function of time. The graphical time domain
representation is termed the waveform.
2. Frequency domain representation, where the signal is defined in terms of its spectrum. The signal is
considered as the superposition of many components at different frequencies, so its characteristics can be
investigated in the frequency domain.

Either of the two representations uniquely specifies the function: if the signal is specified in the time
domain, we can determine its spectrum, and conversely, if the spectrum is specified, we can determine the
corresponding time domain function.
A signal can be observed in the frequency domain by the use of a spectrum analyzer.

Example
For the periodic gate function shown in the figure,
(a) Find the exponential Fourier series of this waveform;
(b) Sketch the signal spectrum.

Solution
(a)

Fn = (1/T) ∫_{−T/2}^{T/2} f(t) e^{−jnω0t} dt = (1/T) ∫_{−τ/2}^{τ/2} A e^{−jnω0t} dt = [−A/(jnω0T)] e^{−jnω0t} |_{−τ/2}^{τ/2}

= (2A/(nω0T)) · (e^{jnω0τ/2} − e^{−jnω0τ/2}) / (2j) = (2A/(nω0T)) sin(nω0τ/2) = (Aτ/T) · sin(nω0τ/2) / (nω0τ/2)

and the Fourier series is given by

f(t) = (Aτ/T) Σ_{n=−∞}^{∞} [ sin(nω0τ/2) / (nω0τ/2) ] e^{jnω0t}

(b) The spectrum is the plot of Fn as a function of nω0. Since Fn is real, only the amplitude plot is required. If we
define a normalized, dimensionless variable x = nω0τ/2, then

Fn = (Aτ/T) · sin(nω0τ/2) / (nω0τ/2) = (Aτ/T) · sin x / x

The function sin(x)/x is known as the sampling function. It plays an important role in communication theory.
We may rewrite Fn in terms of the sampling function as follows:

Fn = (Aτ/T) · sin(nω0τ/2) / (nω0τ/2) = (Aτ/T) Sa(nω0τ/2)

The amplitude plot of Fn is a discrete spectrum existing at ω = 0, ±ω0, ±2ω0, ±3ω0, …, with amplitudes
Aτ/T, (Aτ/T)Sa(πτ/T), (Aτ/T)Sa(2πτ/T), …, respectively. The ratio τ/T is called the duty cycle of the periodic
waveform.
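The line spectrum Fn = (Aτ/T)Sa(nω0τ/2) is easy to tabulate numerically; the values of A, T and τ below are assumptions for illustration (duty cycle τ/T = 1/4), and the first spectral null falls at n = T/τ:

```python
import numpy as np

# Line spectrum of the periodic gate function: F_n = (Aτ/T) Sa(n ω0 τ/2), ω0 = 2π/T.
A, T, tau = 1.0, 1.0, 0.25          # assumed values; duty cycle τ/T = 0.25
w0 = 2 * np.pi / T

def Sa(x):
    return np.sinc(x / np.pi)        # numpy's sinc is sin(πx)/(πx), so this is sin(x)/x

for n in range(0, 9):
    Fn = (A * tau / T) * Sa(n * w0 * tau / 2)
    print(f"n={n}: F_n = {Fn:+.4f}")
# F_n first crosses zero where n ω0 τ/2 = π, i.e. n = T/τ = 4 for this duty cycle.
```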

Assignment 1
Consider the signal x(t) = cos ω0t + sin²(2ω0t). Find the exponential Fourier series expression for x(t) and
sketch its spectrum.

Interesting phenomena

There is an inverse relationship between the pulse width τ and the frequency spread of the spectrum. As
the period becomes larger, the fundamental frequency ω0 becomes smaller, generating more frequency
components in a given range of frequencies, and therefore the spectrum becomes denser. However, the
amplitudes of the frequency components become smaller.
In the limit as T goes to infinity, we are left with a single pulse of width τ in the time domain, since the next pulse
comes after an infinite interval. The fundamental frequency ω0 of this waveform approaches zero, i.e., no
spacing is left between two line components; the spectrum becomes continuous and exists at all frequencies rather
than only at discrete frequencies. However, there is no change in the shape of the envelope of the
spectrum.
The continuous spectrum described above corresponds to a single non-repetitive pulse; i.e., a non-periodic
function existing over the entire interval (-∞ < t < ∞) has a continuous spectrum. Thus, we have arrived at
the spectrum of a non-periodic function, taking it as a special case of periodic function with period T
approaching infinity.

The Fourier transform (Fourier integral)


The Fourier transform of a signal g(t) is defined by

G(ω) = ∫_{−∞}^{∞} g(t) e^{−jωt} dt

and g(t) is called the inverse Fourier transform of G(ω):

g(t) = (1/2π) ∫_{−∞}^{∞} G(ω) e^{jωt} dω

The functions g(t) and G(ω) constitute a Fourier transform pair:


g(t) ⇔ G(ω)
These transforms can also be written as
G(ω) = F[g(t)] and g(t) = F⁻¹[G(ω)]
Fourier transform is most useful for analyzing signals involved in communication systems.

A periodic signal spectrum has finite amplitudes and exists at discrete frequencies.
A non-periodic signal has a continuous spectrum and G(ω) is its spectral density.

What is the difference between Fourier transform (Fourier integral) and Fourier series?
Fourier integral is different from the Fourier series in that its frequency spectrum is continuous rather than
discrete. Fourier integral is obtained from Fourier series by letting T → ∞ (for a nonperiodic signal).

What is the key advantage of Fourier transform?


The original time function can be uniquely recovered from its Fourier transform.

Fourier transform of some useful functions


Rectangular function:
rect(t/τ) = 1 for −τ/2 < t < τ/2, and 0 elsewhere

rect(t/τ) ⇔ τ Sa(ωτ/2)

Proof

G(ω) = ∫_{−τ/2}^{τ/2} e^{−jωt} dt = [e^{−jωt} / (−jω)] |_{−τ/2}^{τ/2} = (e^{jωτ/2} − e^{−jωτ/2}) / (jω) = (2/ω) sin(ωτ/2) = τ Sa(ωτ/2)
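The pair rect(t/τ) ⇔ τ Sa(ωτ/2) can be verified by evaluating the defining integral numerically at a few frequencies; the sketch below is illustrative only, with τ = 2 assumed:

```python
import numpy as np

# Check rect(t/τ) ⇔ τ Sa(ωτ/2): evaluate G(ω) = ∫ rect(t/τ) e^{-jωt} dt by
# quadrature over the interval where rect is 1, and compare with the closed form.
tau = 2.0
t = np.linspace(-tau / 2, tau / 2, 100001)

for w in (0.0, 1.0, np.pi, 5.0):
    G_num = np.trapz(np.exp(-1j * w * t), t)
    G_closed = tau * np.sinc(w * tau / 2 / np.pi)   # τ Sa(ωτ/2), Sa(x) = sin(x)/x
    print(f"ω={w:5.2f}: numeric {G_num.real:+.5f}   closed-form {G_closed:+.5f}")
```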

Unit impulse function:


δ(t) ⇔ 1 and 1 ⇔ 2πδ(ω)

Proof

F[δ(t)] = ∫_{−∞}^{∞} δ(t) e^{−jωt} dt = ∫_{−∞}^{∞} δ(t) e^{0} dt = ∫_{−∞}^{∞} δ(t) dt = 1

F⁻¹[δ(ω)] = (1/2π) ∫_{−∞}^{∞} δ(ω) e^{jωt} dω = 1/2π

Sinusoidal function cos(ω0t)


cos(ω0t) ⇔ π[δ(ω + ω0) + δ(ω - ω0)]

Proof

F⁻¹[δ(ω − ω0)] = (1/2π) ∫_{−∞}^{∞} δ(ω − ω0) e^{jωt} dω = (1/2π) e^{jω0t}

∴ e^{jω0t} ⇔ 2πδ(ω − ω0),   e^{−jω0t} ⇔ 2πδ(ω + ω0)

cos ω0t = (e^{jω0t} + e^{−jω0t}) / 2 ⇔ π[δ(ω − ω0) + δ(ω + ω0)]

Properties of Fourier transform


The Fourier transform has many important properties. Apart from simplifying the evaluation of complicated
Fourier transforms, these properties also help in finding the effect of various time-domain operations on the
frequency domain.

Linearity property
If g1(t) ⇔ G1(ω) and g2(t) ⇔ G2(ω)
then a1g1(t) + a2g2(t) ⇔ a1G1(ω) + a2G2(ω)
where a1 and a2 are constants
This property follows directly from the linearity of the integral used in defining the Fourier transform.

Symmetry property
If g(t) ⇔ G(ω), then G(t) ⇔ 2πg(- ω)

Proof

g(t) = (1/2π) ∫_{−∞}^{∞} G(ω) e^{jωt} dω

2π g(−t) = ∫_{−∞}^{∞} G(ω) e^{−jωt} dω

We can interchange the variables t and ω, i.e. let t → ω, ω → t, giving

2π g(−ω) = ∫_{−∞}^{∞} G(t) e^{−jωt} dt

∴ G(t) ⇔ 2π g(−ω)

Time scaling property

g(at) ⇔ (1/|a|) G(ω/a)

Proof

F[g(at)] = ∫_{−∞}^{∞} g(at) e^{−jωt} dt

Let x = at, then dt = dx/a.

Case 1: when a > 0,

F[g(at)] = (1/a) ∫_{−∞}^{∞} g(x) e^{−jωx/a} dx = (1/a) G(ω/a)

Case 2: when a < 0, t → ∞ leads to x → −∞, so

F[g(at)] = (1/a) ∫_{∞}^{−∞} g(x) e^{−jωx/a} dx = −(1/a) ∫_{−∞}^{∞} g(x) e^{−jωx/a} dx = −(1/a) G(ω/a)

Combining the two cases,

g(at) ⇔ (1/|a|) G(ω/a)

Significance
Time domain compression of a signal results in spectral expansion
Time domain expansion of a signal results in spectral compression

It can be observed that there is an inverse time-bandwidth relationship between the pulse duration and the
bandwidth. If the pulse duration is τ, its bandwidth can be taken to be roughly 1/τ. This gives a very easy way
to estimate the bandwidth of a nondeterministic signal simply by looking at its waveform, as shown below:

That is, the time elapsed between an adjacent minimum and maximum of a waveform gives approximately
the inverse of the bandwidth of that signal (or noise).
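A rough numerical illustration of this inverse relationship, using the rectangular pulse as an assumed example and taking the first spectral null at ω = 2π/τ as a bandwidth measure (one common convention), is sketched below:

```python
import numpy as np

# Time-scaling illustration: compressing a rectangular pulse (smaller τ) pushes
# the first null of its spectrum τ Sa(ωτ/2) out to a higher frequency, ω = 2π/τ.
def first_null_of_rect_spectrum(tau, w_max=200.0, n_w=200001):
    w = np.linspace(1e-6, w_max, n_w)
    G = tau * np.sinc(w * tau / (2 * np.pi))          # τ Sa(ωτ/2)
    first_crossing = np.where(np.diff(np.sign(G)) != 0)[0][0]
    return w[first_crossing]

for tau in (2.0, 1.0, 0.5, 0.25):
    print(f"pulse width τ={tau:4.2f} -> first spectral null ≈ "
          f"{first_null_of_rect_spectrum(tau):6.2f} rad/s (2π/τ = {2*np.pi/tau:6.2f})")
```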

Time shifting property


g(t − t0) ⇔ G(ω) e^{−jωt0}

Proof

F[g(t − t0)] = ∫_{−∞}^{∞} g(t − t0) e^{−jωt} dt

Put t − t0 = x, so that dt = dx; then

F[g(t − t0)] = ∫_{−∞}^{∞} g(x) e^{−jω(x + t0)} dx = e^{−jωt0} ∫_{−∞}^{∞} g(x) e^{−jωx} dx = G(ω) e^{−jωt0}

Frequency shifting property


g(t) e^{jω0t} ⇔ G(ω − ω0)

Proof

F[g(t) e^{jω0t}] = ∫_{−∞}^{∞} g(t) e^{jω0t} e^{−jωt} dt = ∫_{−∞}^{∞} g(t) e^{−j(ω − ω0)t} dt = G(ω − ω0)
Significance
According to this property, multiplication of a function g(t) by exp(jω0t) is equivalent to shifting its Fourier
transform in the positive direction by an amount ω0, i.e., the spectrum G(ω) is translated by ω0.
Therefore, this theorem is also known as the frequency translation theorem.
Translation of a spectrum is a very important operation in communication systems; it is how the process
known as modulation is achieved. This process can be performed by multiplying the signal g(t) by a
sinusoidal signal, because a sinusoid can be expressed as a sum of exponentials as follows:
g(t) cos ω0t = (1/2) [ g(t) e^{jω0t} + g(t) e^{−jω0t} ]

Therefore,

g(t) cos ω0t ⇔ (1/2) [ G(ω − ω0) + G(ω + ω0) ]
The multiplication of a time function by a sinusoid translates the whole spectrum G(ω) to ±ω0. This result
is also known as the modulation theorem.
It should be noted that exp(jω0t) can also provide frequency translation, but it is not a real signal, whereas a
sinusoid is a real signal. Hence, sinusoidal functions are used in practical modulation systems.
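A minimal FFT sketch of the modulation theorem is given below; the Gaussian baseband pulse, the sample rate and the carrier frequency are all assumed values chosen only for illustration:

```python
import numpy as np

# Modulation-theorem check via the FFT: multiplying g(t) by cos(2π f_c t) splits the
# spectrum into two copies centred at ±f_c, each with half the original amplitude.
fs = 1000.0                            # sample rate in Hz (assumed)
t = np.arange(0, 4.0, 1 / fs)
g = np.exp(-((t - 2.0) ** 2) / 0.02)   # baseband Gaussian pulse (assumed)
fc = 100.0                             # carrier frequency in Hz (assumed)
modulated = g * np.cos(2 * np.pi * fc * t)

f = np.fft.rfftfreq(len(t), 1 / fs)
G = np.abs(np.fft.rfft(g))
M = np.abs(np.fft.rfft(modulated))

print("baseband peak at  f ≈", f[np.argmax(G)], "Hz")
print("modulated peak at f ≈", f[np.argmax(M)], "Hz   (expected near", fc, "Hz)")
print("peak amplitude ratio ≈", M.max() / G.max(), " (expected ≈ 0.5)")
```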

Assignment 2:
1. Evaluate the integrals
(a) ∫_{−∞}^{∞} e^{cos t} δ(t − π) dt    (b) ∫_{1}^{∞} e^{−2x} δ(x) dx    (c) ∫_{−∞}^{∞} e^{−t} δ(t + 3) dt

Assignment 3:
If f(t) has a spectrum F(ω), find the Fourier transform of the following functions:
(a) f(t/2 – 5); (b) f(3 – 3t).

Assignment 4:
If g1(t) ⇔ G1(ω) and g2(t) ⇔ G2(ω), then show that a1g1(t) + a2g2(t) ⇔ a1G1(ω) + a2G2(ω), where a1 and a2
are constants

System
A system is defined as a set of rules that associates an output time function to an input time function.

Here g(t) is the input signal (or source signal); y(t) is the output signal (or response signal); and h(t) is the
response of the system when the input is a unit impulse function, known as the unit impulse response of the system.

Symbolically, input and response are represented as g(t) → y(t) and read as input g(t) causes a response
y(t).

Linear systems
A system is said to obey superposition when the output obtained due to a sum of inputs is equal to the sum
of the outputs caused by individual inputs, i.e., if
g1(t) → y1(t) and g2(t) → y2(t)
then g1(t) + g2(t) → y1(t) + y2(t)

A system is said to be linear if the following relationship hold for all values of the constants a1 and a2,
a1g1(t) + a2g2(t) → a1y1(t) + a2y2(t)
otherwise, the system is nonlinear.

Convolution
Suppose that g1(t) ⇔ G1(ω) and g2(t) ⇔ G2(ω), what is the waveform of g(t) whose Fourier transform is the
product of G1(ω) and G2(ω)? This question arises frequently in spectral analysis, and is answered by the
convolution theorem.
The convolution of two time functions g1(t) and g2(t) is defined by the following integral:

g1(t) ∗ g2(t) = ∫_{−∞}^{∞} g1(τ) g2(t − τ) dτ

Time convolution theorem


If g1(t) ⇔ G1(ω) and g2(t) ⇔ G2(ω)
Then g1(t) * g2(t) ⇔ G1(ω)G2(ω)
F[g1(t) ∗ g2(t)] = ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} g1(τ) g2(t − τ) dτ ] e^{−jωt} dt = ∫_{−∞}^{∞} g1(τ) [ ∫_{−∞}^{∞} g2(t − τ) e^{−jω(t−τ)} dt ] e^{−jωτ} dτ

= ∫_{−∞}^{∞} g1(τ) G2(ω) e^{−jωτ} dτ = G1(ω) G2(ω)
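The discrete analogue of the time convolution theorem can be checked directly with the FFT: the DFT of a zero-padded linear convolution equals the product of the zero-padded DFTs. A minimal sketch using assumed random test sequences:

```python
import numpy as np

# Time-convolution theorem (discrete form): DFT{g1 * g2} = DFT{g1} · DFT{g2},
# provided the transforms are zero-padded to avoid circular wrap-around.
rng = np.random.default_rng(0)
g1 = rng.standard_normal(64)
g2 = rng.standard_normal(64)

conv_time = np.convolve(g1, g2)                  # linear convolution, length 127

N = len(g1) + len(g2) - 1                        # zero-pad to the full output length
conv_freq = np.fft.ifft(np.fft.fft(g1, N) * np.fft.fft(g2, N)).real

print("max difference:", np.max(np.abs(conv_time - conv_freq)))   # ~1e-13
```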

Frequency convolution theorem


If g1(t) ⇔ G1(ω) and g2(t) ⇔ G2(ω)
Then g1(t) g2(t) ⇔ (1/2π) G1(ω) ∗ G2(ω)

The proof is similar to time convolution theorem.

Signal transmission through a linear system

y(t) = g(t) * h(t)


When g(t) ⇔ G(ω), h(t) ⇔ H(ω) and y(t) ⇔ Y(ω), then by the convolution theorem
Y(ω) = G(ω)H(ω)
where H(ω) is the system transfer function.
If the input is a delta function at t = τ, i.e. it is δ(t − τ), then the output is h(t − τ) and
h(t − τ) = h(t) ∗ δ(t − τ)
This means that, convolving a pulse x(t) located near t = 0 with a delta function located at t = τ has the
effect of shifting x(t) to around t = τ. This also applies in the frequency domain, and is shown
schematically below.

Signal power
A primary goal of a communication system is to deliver as much signal power as possible relative to the noise power,
i.e., to achieve a high signal-to-noise ratio (S/N). The signal-to-noise ratio is an important parameter used to
evaluate system performance. Noise, being random in nature, cannot be expressed as a deterministic time function;
it is instead characterized by its power. Hence, to evaluate the S/N, it is necessary to
develop a method for calculating the power content of a signal.

For a time function g(t), its average power is given by


P = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |g(t)|² dt

For a periodic signal, each period contains a replica of the function, and the limiting operation can be
omitted as long as T is taken as the period.
For a real signal
P = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} g²(t) dt   (the time average of g²(t))

A signal with finite power is called a power signal.

Example
Find the power of a sinusoidal signal cosω0t.
Solution
P = (1/T) ∫_{−T/2}^{T/2} cos²(ω0t) dt = (1/T) ∫_{−T/2}^{T/2} (1 + cos 2ω0t)/2 dt = (1/2T) [ t + sin(2ω0t)/(2ω0) ] |_{−T/2}^{T/2} = 1/2
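The same result is easy to confirm numerically by integrating cos²(ω0t) over one period (the frequency chosen below is an arbitrary assumption):

```python
import numpy as np

# Average power of cos(ω0 t) over one period: P = (1/T) ∫ cos²(ω0 t) dt = 1/2.
w0 = 2 * np.pi * 5.0            # assumed frequency
T = 2 * np.pi / w0
t = np.linspace(-T / 2, T / 2, 100001)
P = np.trapz(np.cos(w0 * t) ** 2, t) / T
print(f"numerical average power = {P:.6f}   (theory: 0.5)")
```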

Frequency domain representation for signals of arbitrary waveshape


When dealing with deterministic signals, knowledge of the spectrum implies knowledge of the time domain
signal.
For an arbitrary (random) signal, Fourier analysis cannot be used because g(t) is not known analytically.
For such a nondeterministic signal (which includes information signals and noise waveforms), the concept of the
power spectrum Sg(ω) (or power spectral density) is used.
The average power of the signal is then
P = (1/2π) ∫_{−∞}^{∞} Sg(ω) dω = (1/π) ∫_{0}^{∞} Sg(ω) dω

Another way to evaluate the signal power!

Note that the power spectral density of a signal retains only the magnitude information; all phase
information is lost. It follows that
(1) a given signal has a unique power density spectrum;
(2) but the converse is not true, i.e., a given power density spectrum may correspond to a large number of
signals having the same magnitude spectrum but different phase functions.

If an input signal, described by a power spectrum Si(ω), passes through a filter with a transfer function of
H(ω), the power spectrum at the output of the filter is given by
So(ω) = Si(ω) |H(ω)|²
or So(f) = Si(f) |H(f)|²
The square of H(ω) is used because H(ω) corresponds to a ratio of signal amplitudes and power is
proportional to (amplitude)2. |H(ω)|2 is sometimes called the power transfer function.

Electrical noise which has not passed through any filtering is normally 'white', i.e. the noise power is
uniformly distributed over all frequencies of interest:

Then, the noise power in a band of frequencies from f1 to f2 is:
PN = ∫_{f1}^{f2} η df = ηB   (watts)
where B is the bandwidth of interest: B = f2 - f1

Example
For an arbitrary voltage signal v(t), whose average power is PS, the power spectrum is given as:

White noise is also present and has a power spectral density of η W/Hz for all frequencies. Both signal and
noise pass through a lowpass filter whose characteristics are:

Find the SNR at the output of the filter.

Solution
Since |H(f)|² is 1 for −Wo < f < Wo, the signal power at the output is still PS (from (15)), i.e. the
signal passes through the filter unaffected. The noise power spectrum at the output of the filter is:
SNo(f) = |H(f)|² SNi(f) = η |H(f)|²

and the noise power at the output is:

PNo = ∫_{−∞}^{∞} SNo(f) df = η ∫_{−2Wo}^{2Wo} |H(f)|² df

i.e. it is equal to the area under the |H(f)|² curve multiplied by η. Thus,

PNo = 3ηWo

∴ SNRo = PS / (3ηWo)
The SNR at the filter input was virtually zero because the noise power was infinite. By using a filter which
passes the signal totally but which filters out most of the noise power, we have achieved a much improved
SNR at the output. (The larger the SNR, the better.)
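The computation can be reproduced numerically. The |H(f)|² shape assumed below (flat out to Wo, falling linearly to zero at 2Wo) is only a guess consistent with the stated area of 3Wo, since the filter figure is not reproduced here; the values of η, Wo and PS are likewise assumptions:

```python
import numpy as np

# Output noise power = η × (area under |H(f)|²). For the assumed shape
# (|H|² = 1 for |f| < Wo, falling linearly to 0 at |f| = 2Wo) the area is 3Wo,
# so SNR_o = P_S / (3 η Wo) as in the example.
eta = 1e-6        # noise PSD in W/Hz (assumed)
Wo = 10e3         # filter parameter in Hz (assumed)
Ps = 1.0          # signal power in W (assumed)

f = np.linspace(-2 * Wo, 2 * Wo, 400001)
H2 = np.clip((2 * Wo - np.abs(f)) / Wo, 0.0, 1.0)   # assumed |H(f)|² shape

P_No = eta * np.trapz(H2, f)
print(f"output noise power = {P_No:.4e} W   (3ηWo = {3*eta*Wo:.4e} W)")
print(f"output SNR = {Ps / P_No:.2f}   (theory = {Ps / (3*eta*Wo):.2f})")
```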

Example
For a low pass filter as shown, determine (a) the transfer function; (b) the impulse response; (c) the power
transfer function; (d) 3-dB bandwidth.

[Figure: RC lowpass filter; input g(t) applied across the series combination of R and C with current i(t), output y(t) taken across C]

Solution
By circuit theory, we have g(t) = i(t)R + y(t), and i(t) = Cdy(t)/dt
Thus, RCdy(t)/dt + y(t) = g(t)
Taking the Fourier transform, we have jωRC Y(ω) + Y(ω) = G(ω)
(a) The transfer function is: H(ω) = Y(ω)/G(ω) = 1/(1+ jωRC)
(b) From the Fourier transform pair e^{−at} u(t) ⇔ 1/(a + jω), we have

h(t) = (1/RC) e^{−t/RC} for t ≥ 0, and h(t) = 0 for t < 0

(c) The power transfer function is: |H(ω)|² = 1 / (1 + (ωRC)²), or equivalently |H(f)|² = 1 / (1 + (2πfRC)²)

(d) Note that the value of |H(f)|² at f = fo = 1/(2πRC) is 0.5. That is, the frequency component in the output
waveform at this frequency is attenuated by 3 dB compared to that at f = 0.
Consequently, fo is said to be the 3-dB bandwidth of this filter.
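A short numerical sketch of this example is given below; the component values R and C are assumptions chosen only to give a concrete f0:

```python
import numpy as np

# RC lowpass filter: H(ω) = 1/(1 + jωRC), h(t) = (1/RC) e^{-t/RC} for t ≥ 0,
# and |H(f)|² falls to 0.5 at f0 = 1/(2πRC), the 3-dB bandwidth.
R, C = 1e3, 1e-6                        # assumed component values: 1 kΩ, 1 µF
f0 = 1 / (2 * np.pi * R * C)

def H(f):
    return 1 / (1 + 1j * 2 * np.pi * f * R * C)

print(f"3-dB bandwidth f0 = {f0:.1f} Hz")
print(f"|H(f0)|^2 = {abs(H(f0))**2:.3f}   (should be 0.5, i.e. -3 dB)")
print(f"|H(0)|^2  = {abs(H(0.0))**2:.3f}")

# Impulse response check: the area under h(t) should equal H(0) = 1.
t = np.linspace(0, 20 * R * C, 100001)
h = (1 / (R * C)) * np.exp(-t / (R * C))
print(f"∫ h(t) dt = {np.trapz(h, t):.4f}   (should be 1)")
```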

Correlation
The correlation between two waveforms is a measure of the similarity between one waveform and a time-delayed
version of the other. It expresses how much one waveform resembles the time-delayed version of the other as the
delay is scanned over the time axis.
The expression for correlation is very close to that for convolution. For two power signals g1(t) and g2(t), the
correlation between them is defined as follows:
R1,2(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} g1(t) g2*(t + τ) dt

where g*(t) represents the conjugation of g(t) and can be removed if g(t) is real.

Autocorrelation
Autocorrelation is a special case of correlation: it is a measure of the similarity of a function with its own
delayed replica.
R(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} g(t) g*(t + τ) dt
Important properties of autocorrelation


(1) The autocorrelation at τ = 0 is the average power of the signal:

R(0) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} g(t) g*(t) dt = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |g(t)|² dt = P

(2) The power spectral density Sg(ω) and the autocorrelation function of a power signal form a Fourier transform pair:

R(τ) ⇔ Sg(ω)
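Both properties have a simple discrete (sampled, circular) analogue that can be checked with the FFT; the sketch below uses an assumed random test sequence:

```python
import numpy as np

# Discrete illustration of properties (1) and (2): for a sampled signal,
# R(0) equals the average power, and the DFT of the circular autocorrelation
# equals the discrete power spectrum |G_k|²/N.
rng = np.random.default_rng(1)
N = 4096
g = rng.standard_normal(N)              # assumed random test sequence

G = np.fft.fft(g)
S = (np.abs(G) ** 2) / N                # discrete power spectrum
R = np.fft.ifft(S).real                 # circular autocorrelation via inverse DFT

print("R(0)             =", R[0])
print("mean |g|^2 (P)   =", np.mean(g ** 2))                       # property (1)
print("max |FFT(R) - S| =", np.max(np.abs(np.fft.fft(R) - S)))     # property (2), ~1e-12
```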

Assignment 5
Calculate the (a) average value, (b) ac power, and (c) average power of the periodic waveform v(t) = 1 +
cosω0t.

Assignment 6:
Determine the average power of the following signal: Acosω0t + B sinω0t.

Assignment 7:
A signal x(t) has an autocorrelation function given by R(τ) = 3e^{−a|τ|}. (a) Find the power spectrum of the
signal. (b) This signal passes through a linear system whose transfer function is shown below. Find the
average signal power at the output (hint: E e^{−a|t|} ⇔ 2aE / (a² + ω²)).

Assignment 8:
What equipment would you use in order to observe (a) a signal waveform; and (b) its spectrum?

Review Questions

1. What type of signal is the most fundamental?


2. How do you define a periodic signal?
3. Does a periodic signal exist within a limited time period?
4. Is the message signal a deterministic signal?
5. Does negative frequency physically exist?
6. What equipment are you going to use in order to observe the signal waveform and spectrum?
7. How does a pulsed signal differ from a sinusoidal signal?
8. What is the frequency domain description of a signal? Is it more or less useful than the time domain?
9. What is the significance of the time scaling property of Fourier transform?
10. What is the significance of the frequency shifting property of Fourier transform?
11. Suppose that g1(t) ⇔ G1(ω) and g2(t) ⇔ G2(ω), what is the waveform of g(t) whose Fourier transform is the
product of G1(ω) and G2(ω)?
12. Why is it important to understand the power spectrum of a signal? What does it show?
13. How do you measure the similarity between the signal and its delayed replica?
14. How many methods do you know for calculating signal power?

Exercise Problems (Signal Analysis)

1. Evaluate the integrals

∫_{−∞}^{∞} δ(2t − 4)(2t² + t − 8) dt

∫_{−∞}^{∞} cos(9t) δ(t − 2) dt

2. Simplify the following expressions:


(a) [sint/(t + 2)] δ(t);
(b) [1/(jω +2)] δ(ω + 3);
(c) [sin(kω)/ω] δ(ω);

3. Prove that δ(at) = (1/|a|) δ(t)
4. If g(t) ⇔ G(ω), then show that g*(t) ⇔ G*(-ω).

5. Find the Fourier transform of the signal


f(t) = [A + fm(t)]cosωct
if fm(t) has a spectrum Fm(ω).

6. If f(t) has a spectrum F(ω), find the Fourier transform of the following function:
f(2 + 5t);

7. Determine the average power of the following signal:


(A + sinω0t) cosω0t;

8. Find the autocorrelation function of the signal, g(t) = E cosω0t;

9. For a power signal, g(t) = Acos(200πt)cos(2000πt), determine the average power.

Trigonometric Identities

e^{±jθ} = cos θ ± j sin θ     cos θ = (e^{jθ} + e^{−jθ})/2     sin θ = (e^{jθ} − e^{−jθ})/(2j)

sin²θ + cos²θ = 1     cos 2θ = 2cos²θ − 1 = 1 − 2sin²θ = cos²θ − sin²θ     sin 2θ = 2 sin θ cos θ

cos²θ = (1 + cos 2θ)/2     sin²θ = (1 − cos 2θ)/2

sin(α ± β) = sin α cos β ± cos α sin β     cos(α ± β) = cos α cos β ∓ sin α sin β     tan(α ± β) = (tan α ± tan β)/(1 ∓ tan α tan β)

cos α cos β = [cos(α + β) + cos(α − β)]/2     sin α sin β = [cos(α − β) − cos(α + β)]/2     sin α cos β = [sin(α + β) + sin(α − β)]/2

sin α + sin β = 2 sin((α + β)/2) cos((α − β)/2)     sin α − sin β = 2 sin((α − β)/2) cos((α + β)/2)

cos α + cos β = 2 cos((α + β)/2) cos((α − β)/2)     cos α − cos β = −2 sin((α + β)/2) sin((α − β)/2)

Selected Fourier Transform Pairs

δ(t) ⇔ 1     1 ⇔ 2πδ(ω)
e^{jω0t} ⇔ 2πδ(ω − ω0)     e^{−jω0t} ⇔ 2πδ(ω + ω0)
cos ω0t ⇔ π[δ(ω − ω0) + δ(ω + ω0)]     sin ω0t ⇔ −jπ[δ(ω − ω0) − δ(ω + ω0)]
rect(t/τ) ⇔ τ Sa(ωτ/2)

Properties of Fourier Transform

Linearity: a1g1(t) + a2g2(t) ⇔ a1G1(ω) + a2G2(ω)

Symmetry: If g(t) ⇔ G(ω), then G(t) ⇔ 2πg(- ω)

Time scaling: g(at) ⇔ (1/|a|) G(ω/a)

Time shifting: g(t − t0) ⇔ G(ω) e^{−jωt0}

Frequency shifting: g(t) e^{jω0t} ⇔ G(ω − ω0)

Modulation theorem: g(t) cos ω0t ⇔ (1/2) [G(ω − ω0) + G(ω + ω0)]

Time convolution: g1(t) * g2(t) ⇔ G1(ω)G2(ω)

Frequency convolution: g1(t) g2(t) ⇔ (1/2π) G1(ω) ∗ G2(ω)

Conjugate functions: g*(t) ⇔ G*(-ω)

Time differentiation: dg(t)/dt ⇔ jω G(ω)

Time integration: ∫_{−∞}^{t} g(τ) dτ ⇔ (1/jω) G(ω) + π G(0) δ(ω)
