Unit 1 - Analog and Digital Communication
Subject Name: Analog and Digital Communication
Subject Code: IT-404
Semester: 4th
Unit-I: Signals and Systems: Block diagram of a communication system, signal definition, types of signals: continuous, discrete, deterministic, non-deterministic, periodic, non-periodic, energy, power, analog and digital signals. Electromagnetic spectra. Standard signals: DC, sinusoidal, unit step, ramp, signum, rectangular pulse, impulse (delta) signal. System definition, classification of systems: linear, nonlinear, time variant, time invariant, causal, non-causal, stable and unstable systems. Fourier transforms: time domain and frequency domain representation of signals, Fourier transform and its properties, conditions for existence, transform of gate, unit step, constant, impulse, sine and cosine wave. Shifting property of delta function, convolution, time and frequency convolution theorems.
1. Continuous Time Signal: If the independent variable (t) is continuous, then the corresponding signal is a continuous-time signal: a real-valued function x(t) of a variable t that usually represents time. Both the amplitude x and the time t in x(t) are continuous, so a continuous-time signal is defined at every instant of time. It is denoted by x(t). Figure 1.2 shows a continuous-time signal.
[Figure 1.2: a continuous-time signal x(t) plotted against time t]
• Discrete Time Signal: If the independent variable t takes only discrete values, for example t = ±1, ±2, ±3, ..., the signal is a discrete-time signal. A discrete-time signal is a bounded, continuous-valued sequence x[n]; alternately, it may be viewed as a continuous-valued function of a discrete index n. Discrete-time signals can be obtained by sampling a continuous-time signal. It is denoted as x(n). Figure 1.3 shows a discrete-time signal.
Digital Signal: Signals that are discrete in time and quantized in amplitude are called digital signals. The term "digital signal" applies to the transmission of a sequence of values of a discrete-time signal in encoded (digit) form.
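The chain from continuous time to digital can be sketched in a few lines of Python. The sine frequency, sampling rate, and two-decimal quantization below are illustrative choices, not values from the notes:

```python
import math

# Model a continuous-time 1 Hz sine, sample it at fs = 8 Hz to get the
# discrete-time signal x[n], then quantize the amplitudes -> digital signal.
fs = 8                                                        # sampling rate (assumed)
x = [math.sin(2 * math.pi * 1 * n / fs) for n in range(16)]   # discrete-time x[n]
x_digital = [round(v, 2) for v in x]                          # crude amplitude quantization
print(x_digital[:4])                                          # [0.0, 0.71, 1.0, 0.71]
```

Sampling makes time discrete; rounding makes amplitude discrete; together they yield a digital signal.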
2. Random (Non-deterministic) and Deterministic Signals:
A signal is said to be deterministic if there is no uncertainty with respect to its value at any instant of time; such a signal can be described by a mathematical equation. A signal is said to be non-deterministic if there is uncertainty with respect to its value at some instant of time; non-deterministic signals are random in nature, cannot be described by a mathematical equation, and are instead modeled in probabilistic terms. A common example of a random signal is noise. Random and deterministic signals are shown in Figures 1.4 and 1.5 respectively.
[Figure: an even rectangular pulse x(t) of unit amplitude centered at t = 0]
A simple way of visualizing even and odd signals is to imagine that the ordinate [x(t)] axis is a mirror. For an even signal, x(−t) = x(t): the part of x(t) for t > 0 and the part of x(t) for t < 0 are mirror images of each other. For an odd signal, x(−t) = −x(t): the same two parts of the signal are negative mirror images of each other.
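The mirror-image picture corresponds to the standard decomposition x(t) = x_e(t) + x_o(t), with even part x_e(t) = [x(t) + x(−t)]/2 and odd part x_o(t) = [x(t) − x(−t)]/2. A minimal sketch (the test signal t + t² is an arbitrary choice):

```python
def even_odd_parts(x, t):
    """Even part (mirror image) and odd part (negative mirror image) of x at t."""
    xe = 0.5 * (x(t) + x(-t))
    xo = 0.5 * (x(t) - x(-t))
    return xe, xo

x = lambda t: t + t ** 2        # t is odd, t**2 is even
xe, xo = even_odd_parts(x, 2.0)
print(xe, xo)                   # recovers the t**2 and t contributions: 4.0 2.0
```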
A signal is said to be periodic if it satisfies x(t) = x(t + T) for all t (continuous time) or x(n) = x(n + N) for all n (discrete time).
NOTE: A signal cannot be both an energy signal and a power signal simultaneously; it may also be neither. If 0 < E < ∞, then the signal x(t) is called an energy signal. However, there are signals where this condition is not satisfied; for such signals we consider the power. If 0 < P < ∞, then the signal is called a power signal. Note that the power of an energy signal is zero (P = 0) and the energy of a power signal is infinite (E = ∞). Some signals are neither energy nor power signals.
Energy and power for discrete-time signals: The definitions of signal energy and power for discrete signals are similar to those for continuous signals:
E = Σ (n = −∞ to ∞) |x(n)|²
P = lim (N → ∞) [1/(2N+1)] Σ (n = −N to N) |x(n)|²
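For a finite-length sequence the sums above truncate; a minimal sketch (the sample values are arbitrary):

```python
# Energy E = sum of |x[n]|^2; average power over the N samples taken is E / N.
x = [1, 2, 2, 1]
E = sum(abs(v) ** 2 for v in x)   # 1 + 4 + 4 + 1 = 10
P = E / len(x)                    # average power over these samples
print(E, P)                       # 10 2.5
```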
A causal signal is zero for t < 0 and an anti-causal signal is zero for t > 0, while a non-causal signal has nonzero values in both positive and negative time. Causal, non-causal and anti-causal signals are shown below in Figures 1.8(a), 1.8(b) and 1.8(c) respectively.
Standard Signal:
1. The DC Signal: The DC or constant signal simply takes a constant value. In continuous time it would be represented as s(t) = 1; in discrete time, s[n] = 1. The number 1 may be replaced by any constant. The DC signal typically represents a constant offset from 0 in real-world signals. The analog DC signal has bounded amplitude and power and is smooth.
2. Unit Step Function
The unit step, also often referred to as the Heaviside function, is literally a step: it has value 0 until time 0, at which point it abruptly switches from 0 to 1. The unit step represents events that change state, e.g. the switching on of a system, or of another signal. It is usually represented as u(t) in continuous time and u[n] in discrete time.
The unit step function is denoted by u(t). It is defined as
u(t) = { 1, t ≥ 0
       { 0, t < 0
Discrete time: u[n] = { 1, n ≥ 0
                      { 0, n < 0
Figure 1.9(a): continuous-time unit step signal. Figure 1.9(b): discrete-time unit step signal.
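The piecewise definition translates directly into code; a minimal sketch:

```python
def u(t):
    """Unit step: 1 for t >= 0, 0 for t < 0 (the same rule gives u[n])."""
    return 1 if t >= 0 else 0

print([u(t) for t in (-1.0, -0.5, 0.0, 0.5, 1.0)])   # [0, 0, 1, 1, 1]
```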
3. Impulse (Delta) Signal
Consider first a gate function g(t) of height 1/W and width W, shown in Figure 1.10.
[Figure 1.10: gate function g(t) of height 1/W on (0, W)]
One thing of note about g(t) is that
∫ (from 0−) g(t) dt = 1
The lower limit 0− is an infinitesimally small amount less than zero. Now, suppose that the width W gets very small, indeed as small as 0+, a number an infinitesimal amount bigger than zero. At that point, g(t) has become like the δ function, a very thin, very high spike at zero, such that
∫ (−∞ to ∞) δ(t) dt = ∫ (0− to 0+) δ(t) dt = 1
As W becomes very small the function g(t) turns into the function δ(t), indicated by an arrowed spike.
Figure 1.11: unit impulse signal (an arrowed spike at t = 0).
∫ (−∞ to t) δ(τ) dτ = u(t)
δ(t) = d u(t) / dt
4. Ramp Signal
The ramp signal is denoted by r(t), and it is defined as
r(t) = { t, t ≥ 0
       { 0, t < 0
Figure 1.12: ramp signal.
The ramp is the running integral of the unit step: ∫ u(t) dt = r(t), and conversely u(t) = dr(t)/dt.
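The relation r(t) = ∫ u(t) dt can be checked numerically with a Riemann sum; the step size and upper limit below are arbitrary choices:

```python
def u(t):
    """Unit step."""
    return 1.0 if t >= 0 else 0.0

def r(t):
    """Ramp: t for t >= 0, else 0."""
    return t if t >= 0 else 0.0

dt = 0.001
approx = sum(u(k * dt) * dt for k in range(2000))   # integrate u over [0, 2)
print(round(approx, 2), r(2.0))                     # both give 2.0
```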
5. Parabolic Signal
The parabolic signal is p(t) = t²/2 for t ≥ 0 and 0 otherwise; it is the running integral of the ramp, just as the ramp is the running integral of the step.
6. Signum Function
The signum function is denoted as sgn(t). It is defined as
sgn(t) = {  1, t > 0
         {  0, t = 0
         { −1, t < 0
In terms of the unit step, sgn(t) = 2u(t) − 1 (for t ≠ 0).
[Figure: the signum function, +1 for t > 0 and −1 for t < 0]
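The identity sgn(t) = 2u(t) − 1 can be spot-checked in code (it holds for t ≠ 0; at t = 0 the step-based form gives 1 while sgn(0) = 0):

```python
def sgn(t):
    """Signum: +1, 0, -1 for t > 0, t == 0, t < 0."""
    return (t > 0) - (t < 0)

u = lambda t: 1 if t >= 0 else 0
print([sgn(t) for t in (-2, 0, 3)])      # [-1, 0, 1]
print([2 * u(t) - 1 for t in (-2, 3)])   # matches sgn away from t = 0
```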
7. Exponential Signal
The exponential signal has the form x(t) = e^(αt).
Case (ii): if α < 0, i.e. negative, the shape is called a decaying exponential.
Case (iii): if α > 0, i.e. positive, the shape is called a rising exponential.
8. Rectangular Signal
Let it be denoted as x(t) and it is defined as
x(t) = A rect(t / T)
[Figure: rectangular pulse of unit amplitude centered at t = 0]
9. Triangular Signal
Figure 1.17: triangular function, rising linearly from −T to a peak at t = 0 and falling back to zero at T.
10. Sinusoidal Signal
A sinusoidal signal is of the form x(t) = A cos(w0 t ± ϕ) or A sin(w0 t ± ϕ),
where the period is T0 = 2π / w0.
[Figure: a sinusoidal signal x(t) plotted against time t]
[Figure: the sinc function Sinc(x), a main lobe at x = 0 with decaying oscillations and zero crossings at multiples of π]
1. Linear and Non-Linear Systems
A system is said to be linear when it satisfies the superposition and homogeneity principles. Consider two systems with inputs x1(t), x2(t) and outputs y1(t), y2(t) respectively. Then, according to the homogeneity and superposition principles:
Principle of homogeneity: T[a1 x1(t)] = a1 y1(t), T[a2 x2(t)] = a2 y2(t)
Principle of superposition: T[x1(t) + x2(t)] = y1(t) + y2(t)
Linearity: T[a1 x1(t) + a2 x2(t)] = a1 y1(t) + a2 y2(t)
From the above expression, it is clear that the response to a weighted sum of inputs is the same weighted sum of the individual responses.
Example:
y(t) = x²(t)
y1(t) = T[x1(t)] = x1²(t)
y2(t) = T[x2(t)] = x2²(t)
T[a1 x1(t) + a2 x2(t)] = [a1 x1(t) + a2 x2(t)]²
which is not equal to a1 y1(t) + a2 y2(t). Hence the system is non-linear.
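The failure of superposition for y(t) = x²(t) can be checked numerically at a single instant (the weights and sample values below are arbitrary):

```python
T = lambda x: x ** 2                  # memoryless squaring system
a1, a2, x1, x2 = 2.0, 3.0, 1.0, 1.0   # arbitrary weights and input samples
combined = T(a1 * x1 + a2 * x2)       # response to the weighted sum of inputs
separate = a1 * T(x1) + a2 * T(x2)    # weighted sum of individual responses
print(combined, separate)             # 25.0 vs 5.0 -> the system is non-linear
```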
2. Time Variant and Time Invariant Systems
A system is said to be time variant if its input and output characteristics vary with time. If the system response to an input signal does not change with time, the system is termed time invariant: its behavior and characteristics are fixed over time. The condition for a time invariant system is:
in a time invariant system, if the input is delayed by time t0, the output also gets delayed by t0. Mathematically it is specified as follows:
y(t − t0) = T[x(t − t0)]
For a discrete time invariant system, the condition for time invariance can be formulated mathematically by replacing t with nTs, giving
y(n − n0) = T[x(n − n0)]
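The delay condition can be checked by comparing "delay the input, then apply the system" with "apply the system, then delay the output". A sketch contrasting y(t) = x²(t) (time invariant) with y(t) = t·x(t) (time variant); the input signal, test instant, and delay are arbitrary choices:

```python
import math

x = lambda t: math.sin(t)        # arbitrary input signal
t, t0 = 2.0, 1.0                 # test instant and delay (assumed values)

# System A: y(t) = x(t)^2 -- delayed input and delayed output agree.
a_delayed_input = x(t - t0) ** 2         # T[x(t - t0)] evaluated at t
a_delayed_output = x(t - t0) ** 2        # y(t - t0)
# System B: y(t) = t * x(t) -- the explicit t makes it time variant.
b_delayed_input = t * x(t - t0)          # T[x(t - t0)] evaluated at t
b_delayed_output = (t - t0) * x(t - t0)  # y(t - t0)
print(a_delayed_input == a_delayed_output,
      b_delayed_input == b_delayed_output)   # True False
```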
3. Linear Time variant (LTV) and Linear Time Invariant (LTI) Systems
If a system is both linear and time variant, then it is called a linear time variant (LTV) system.
If a system is both linear and time invariant, then it is called a linear time invariant (LTI) system.
4. Static and Dynamic Systems
Static system is memory-less whereas dynamic system is a memory system.
Example 1: y (t) = 2 x(t)
For present value t=0, the system output is y(0) = 2x(0). Here, the output is only dependent upon
present input. Hence the system is memory less or static.
Example 2: y (t) = 2 x(t) + 3 x(t-3)
For present value t=0, the system output is y(0) = 2x(0) + 3x(-3).
Here x(-3) is past value for the present input for which the system requires memory to get this output.
Hence, the system is a dynamic system.
5. Causal and Non-Causal Systems
A system is said to be causal if its output depends upon present and past inputs, and does not depend
upon future input. For non causal system, the output depends upon future inputs also.
Example 1: y(t) = 2x(t) + 3x(t−3)
For present value t = 1, the system output is y(1) = 2x(1) + 3x(−2).
Here, the system output depends only upon present and past inputs. Hence, the system is causal.
Example 2: y(t) = 2x(t) + 3x(t−3) + 6x(t+3)
For present value t = 1, the system output is y(1) = 2x(1) + 3x(−2) + 6x(4). Here, the system output depends upon a future input, x(4). Hence the system is non-causal.
6. Stable and Unstable Systems
The system is said to be stable only when the output is bounded for bounded input. For a bounded
input, if the output is unbounded in the system then it is said to be unstable.
Note: For a bounded signal, amplitude is finite.
Example 1: y (t) = x2(t)
Let the input is u(t) (unit step bounded input) then the output y(t) = u2(t) = u(t) = bounded output.
Hence, the system is stable.
Example 2: y(t) = ∫ x(t) dt
Let the input be u(t) (a bounded unit step input); then the output y(t) = ∫ u(t) dt is a ramp signal, which is unbounded because the amplitude of the ramp is not finite: it goes to infinity as t → ∞.
Hence, the system is unstable.
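The integrator example can be seen numerically with a crude running sum: a bounded step input produces an output that keeps growing like a ramp (step size and duration below are arbitrary):

```python
dt = 0.01
y, samples = 0.0, []
for k in range(500):        # t from 0 to 5 in steps of dt
    y += 1.0 * dt           # input u(t) = 1 for t >= 0
    samples.append(y)       # running integral -> ramp
print(round(samples[99], 2), round(samples[-1], 2))   # 1.0 5.0 -- grows without bound
```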
1.5. Time domain and frequency domain representation of signal
An electrical signal, either a voltage signal or a current signal, can be represented in two forms:
i) Time domain representation: in the time domain a signal is a time-varying quantity v(t), as shown in Fig. 1.24.
ii) Frequency domain representation: in the frequency domain a signal is represented by its frequency spectrum V(w), as shown in Fig. 1.25.
The Fourier transform of a signal x(t) is X(w) = ∫ (−∞ to ∞) x(t) e^(−jwt) dt.
Example: Find the Fourier transform of x(t) = e^(−at) u(t), a > 0.
X(w) = ∫ (−∞ to ∞) x(t) e^(−jwt) dt = ∫ (0 to ∞) e^(−at) e^(−jwt) dt
     = ∫ (0 to ∞) e^(−(a+jw)t) dt
     = [e^(−(a+jw)t) / (−(a+jw))] from 0 to ∞ = [−1/(a+jw)] [0 − 1] = 1/(a+jw)
So X(w) = 1/(a+jw).
Obtaining the above expression in polar form:
X(w) = e^(−j tan⁻¹(w/a)) / √(a² + w²)
As we know that X(w) = |X(w)| e^(jθ(w)), on comparison the amplitude spectrum is
|X(w)| = 1 / √(a² + w²)
and the phase spectrum is
θ(w) = −tan⁻¹(w/a)
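The closed form X(w) = 1/(a + jw) can be verified by direct numerical integration; a = 2 and w = 3 below are arbitrary test values:

```python
import cmath

a, w = 2.0, 3.0
dt, T = 1e-4, 10.0                        # step and truncation point (e^{-aT} ~ 0)
X_num = sum(cmath.exp(-(a + 1j * w) * (k + 0.5) * dt) * dt
            for k in range(int(T / dt)))  # midpoint rule over [0, T]
X_ana = 1 / (a + 1j * w)
print(abs(X_num - X_ana) < 1e-4)          # True: numeric and analytic agree
```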
1. Time Scaling Property
The time scaling property states that compression of a signal in time results in expansion of its spectrum, and expansion of the signal in time results in compression of its spectrum. Mathematically,
if x(t) ↔ X(w)
then, for any real constant a,
x(at) ↔ (1/|a|) X(w/a)
Proof: The general expression for the Fourier transform is
X(w) = F[x(t)] = ∫ (−∞ to ∞) x(t) e^(−jwt) dt
Now F[x(at)] = ∫ (−∞ to ∞) x(at) e^(−jwt) dt
Putting at = y, we have dt = dy/a.
Case (i): when a is a positive real constant,
F[x(at)] = ∫ (−∞ to ∞) x(y) e^(−j(w/a)y) (dy/a) = (1/a) X(w/a)
Case (ii): when a is a negative real constant,
F[x(at)] = −(1/a) X(w/a)
Combining the two cases, x(at) ↔ (1/|a|) X(w/a).
2. Linearity Property
The linearity property states that the Fourier transform is linear. This means that
if x1(t) ↔ X1(w)
and x2(t) ↔ X2(w)
then a1 x1(t) + a2 x2(t) ↔ a1 X1(w) + a2 X2(w)
3. Duality Property
If x(t) ↔ X(w)
then X(t) ↔ 2π x(−w)
Proof: The inverse Fourier transform is
x(t) = F⁻¹[X(w)] = (1/2π) ∫ (−∞ to ∞) X(w) e^(jwt) dw
Therefore,
2π x(−t) = ∫ (−∞ to ∞) X(w) e^(−jwt) dw
Interchanging t and w,
2π x(−w) = ∫ (−∞ to ∞) X(t) e^(−jwt) dt
or X(t) ↔ 2π x(−w)
For an even function, x(−w) = x(w); therefore X(t) ↔ 2π x(w).
Example (1):
The Fourier transform F[e^(−t) u(t)] is equal to 1/(1 + j2πf). What is F[1/(1 + j2πt)]?
Solution: Using the duality property of the Fourier transform, we have:
if x(t) ↔ X(f)
then X(t) ↔ x(−f)
Therefore,
e^(−t) u(t) ↔ 1/(1 + j2πf)
gives
1/(1 + j2πt) ↔ e^(f) u(−f)
4. Time Shifting Property
If x(t) ↔ X(w)
then x(t − b) ↔ e^(−jwb) X(w)
Proof: X(w) = F[x(t)] = ∫ (−∞ to ∞) x(t) e^(−jwt) dt
and F[x(t − b)] = ∫ (−∞ to ∞) x(t − b) e^(−jwt) dt
Putting t − b = y, so that dt = dy,
F[x(t − b)] = ∫ (−∞ to ∞) x(y) e^(−jw(y+b)) dy = ∫ (−∞ to ∞) x(y) e^(−jwy) e^(−jwb) dy
or F[x(t − b)] = e^(−jwb) ∫ (−∞ to ∞) x(y) e^(−jwy) dy
Since y is a dummy variable, we have
F[x(t − b)] = e^(−jwb) X(w) = X(w) e^(−jwb)
or x(t − b) ↔ X(w) e^(−jwb)
5. Frequency Shifting Property
If x(t) ↔ X(w)
then e^(jw0 t) x(t) ↔ X(w − w0)
Proof: The general expression for the Fourier transform is
X(w) = F[x(t)] = ∫ (−∞ to ∞) x(t) e^(−jwt) dt
Now, F[e^(jw0 t) x(t)] = ∫ (−∞ to ∞) e^(jw0 t) x(t) e^(−jwt) dt
or F[e^(jw0 t) x(t)] = ∫ (−∞ to ∞) x(t) e^(−j(w − w0)t) dt
or F[e^(jw0 t) x(t)] = X(w − w0)
∴ e^(jw0 t) x(t) ↔ X(w − w0)
6. Time Differentiation Property
If x(t) ↔ X(w)
then dx(t)/dt ↔ jw X(w)
Proof: From the inverse Fourier transform,
x(t) = F⁻¹[X(w)] = (1/2π) ∫ (−∞ to ∞) X(w) e^(jwt) dw
Taking the differentiation, we have
dx(t)/dt = d/dt [(1/2π) ∫ (−∞ to ∞) X(w) e^(jwt) dw]
Interchanging the order of differentiation and integration, we have
dx(t)/dt = (1/2π) ∫ (−∞ to ∞) X(w) d/dt [e^(jwt)] dw
         = (1/2π) ∫ (−∞ to ∞) jw X(w) e^(jwt) dw
or dx(t)/dt = F⁻¹[jw X(w)]
or F[dx(t)/dt] = jw X(w). Hence proved.
[Figure: the unit-height gate function x(t) on (−τ/2, τ/2) and its transform, the function Sinc(x) or Sa(x)]
(i) Sa(x) or sinc(x) is an even function of x.
(ii) sinc(x) = 0 when sin x = 0, except at x = 0, where it is indeterminate (its limiting value is 1). This means that sinc(x) = 0 for x = ±nπ, where n = ±1, ±2, ….
(iii) sinc(x) is the product of an oscillating signal sin(x) of period 2π and a decreasing function 1/x. Therefore sinc(x) exhibits sinusoidal oscillations of period 2π with amplitude decreasing continuously as 1/x.
Example 2: Find the Fourier transform of the gate function shown in Figure 2.5.
[Fig. 2.5: gate function of unit height on (−τ/2, τ/2)]
Sol. x(t) = rect(t/τ) = { 1, −τ/2 < t < τ/2
                        { 0, otherwise
X(w) = F[x(t)] = ∫ (−∞ to ∞) x(t) e^(−jwt) dt = ∫ (−∞ to ∞) rect(t/τ) e^(−jwt) dt
= ∫ (−τ/2 to τ/2) 1 · e^(−jwt) dt = [e^(−jwt) / (−jw)] from −τ/2 to τ/2
= (1/jw) [e^(jwτ/2) − e^(−jwτ/2)]   --------(1)
Since 2j sin θ = e^(jθ) − e^(−jθ),
putting θ = wτ/2 we get
2j sin(wτ/2) = e^(jwτ/2) − e^(−jwτ/2)   --------(2)
From (1) and (2),
X(w) = (1/jw) [2j sin(wτ/2)] = (2/w) sin(wτ/2)
Multiplying and dividing by τ/2,
X(w) = τ [sin(wτ/2) / (wτ/2)]
     = τ sinc(wτ/2)
Now, since sinc(x) = 0 when x = ±nπ,
sinc(wτ/2) = 0 when wτ/2 = ±nπ,
or w = ±2nπ/τ
Figure 2.6 shows the plot of X(w).
[Fig. 2.6: X(w) = τ sinc(wτ/2), with peak value τ at w = 0 and zero crossings at w = ±2π/τ, ±4π/τ, ±6π/τ, …]
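The result X(w) = τ sinc(wτ/2) = τ sin(wτ/2)/(wτ/2) can be checked against a numerical integral over the gate; τ = 2 and w = 1.5 below are arbitrary test values, and only the real part is needed since the gate is even:

```python
import math

tau, w = 2.0, 1.5
analytic = tau * math.sin(w * tau / 2) / (w * tau / 2)

dt = 1e-4
n = int(tau / dt)
# Integrate cos(w t) over (-tau/2, tau/2): the real part of the FT integral.
numeric = sum(math.cos(w * (-tau / 2 + (k + 0.5) * dt)) * dt for k in range(n))
print(abs(analytic - numeric) < 1e-6)    # True
```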
Figure 2.7: the unit impulse function (an arrowed spike at t = 0).
i) The width of the pulse is zero. This means that the pulse exists only at t = 0.
ii) The height of the pulse goes to infinity.
iii) The area under the pulse curve is always unity.
Q.1 Find the Fourier transform of an impulse function x(t) = δ(t). Also draw the spectrum.
Sol. The expression for the Fourier transform is given by
X(w) = F[x(t)] = ∫ (−∞ to ∞) x(t) e^(−jwt) dt = ∫ (−∞ to ∞) δ(t) e^(−jwt) dt
Using the shifting property of the impulse function,
X(w) = [e^(−jwt)] at t = 0
X(w) = 1
Hence the Fourier transform of an impulse function is unity: δ(t) ↔ 1.
[Figure 2.8: the unit impulse δ(t) and its spectrum X(w) = 1, flat over all w]
Figure 2.8 shows a unit impulse function and its Fourier transform, or spectrum. From the figure it is clear that a unit impulse contains all frequency components with identical magnitude. This means that the bandwidth of the unit impulse function is infinite. Also, since the spectrum is real, only the magnitude spectrum is required. The phase spectrum θ(w) = 0, which means that all the frequency components are in the same phase.
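The shifting (sifting) property used above has a direct discrete analogue: summing a sequence against the unit sample δ[n] picks out its value at n = 0. A minimal sketch with an arbitrary test sequence:

```python
N = range(-5, 6)
delta = [1 if n == 0 else 0 for n in N]        # unit sample delta[n]
x = [n ** 2 + 1 for n in N]                    # arbitrary sequence x[n]
picked = sum(d * v for d, v in zip(delta, x))  # sifts out x at n = 0
print(picked)                                  # 1
```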
Now consider the inverse transform of an impulse in frequency, δ(w):
F⁻¹[δ(w)] = (1/2π) ∫ (−∞ to ∞) δ(w) e^(jwt) dw
Using the shifting property of the impulse function,
F⁻¹[δ(w)] = (1/2π) [e^(jwt)] at w = 0 = (1/2π) · 1 = 1/2π
Hence F[1/2π] = δ(w)
or 1 ↔ 2π δ(w)
[Fig. 2.9: the constant signal x(t) = 1 and its spectrum, an impulse of strength 2π at w = 0]
This shows that the spectrum of a constant signal x(t) = 1 is an impulse function 2π δ(w). This can also be interpreted as: x(t) = 1 is a d.c. signal which has a single frequency, w = 0 (dc).
Similarly,
F⁻¹[δ(w − w0)] = (1/2π) ∫ (−∞ to ∞) δ(w − w0) e^(jwt) dw
Using the shifting or sampling property of the impulse function, we get
F⁻¹[δ(w − w0)] = (1/2π) [e^(jwt)] at w = w0
F⁻¹[δ(w − w0)] = (1/2π) e^(jw0 t)
Hence F[(1/2π) e^(jw0 t)] = δ(w − w0)
or e^(jw0 t) ↔ 2π δ(w − w0)
The above expression shows that the spectrum of an everlasting exponential e^(jw0 t) is a single impulse at w = w0.
Similarly,
e^(−jw0 t) ↔ 2π δ(w + w0)
Since
cos θ = (e^(jθ) + e^(−jθ)) / 2
and
2j sin θ = e^(jθ) − e^(−jθ), or sin θ = (e^(jθ) − e^(−jθ)) / 2j,
we have cos(w0 t) = (e^(jw0 t) + e^(−jw0 t)) / 2.
We know that
e^(jw0 t) ↔ 2π δ(w − w0)
e^(−jw0 t) ↔ 2π δ(w + w0)
so that cos(w0 t) ↔ (1/2) [2π δ(w − w0) + 2π δ(w + w0)]
or cos(w0 t) ↔ π [δ(w − w0) + δ(w + w0)]
For a periodic signal x(t) with exponential Fourier series coefficients Cn and fundamental frequency w0,
F[x(t)] = Σ (n = −∞ to ∞) Cn · 2π δ(w − n w0) = 2π Σ (n = −∞ to ∞) Cn δ(w − n w0)
Hence, the Fourier transform of a periodic function consists of a train of equally spaced impulses. These impulses are located at the harmonic frequencies of the signal and the strength or area of each impulse is given by 2π Cn.
1.11. Convolution
Convolution is a mathematical operation used to express the relation between input and output of an LTI
system. It relates input, output and impulse response of an LTI system as
y(t) = x(t) ∗ h(t)
where y(t) = output of the LTI system
x(t) = input to the LTI system
h(t) = impulse response of the LTI system
There are two types of convolutions:
Continuous convolution
Discrete convolution
Continuous Convolution
y(t) = x(t) ∗ h(t)
     = ∫ (−∞ to ∞) x(τ) h(t − τ) dτ
     = ∫ (−∞ to ∞) x(t − τ) h(τ) dτ
A convolution is a mathematical operation that represents a signal passing through an LTI (linear and time-invariant) system or filter.
Discrete Convolution
y(n) = x(n) ∗ h(n)
     = Σ (k = −∞ to ∞) x(k) h(n − k)
     = Σ (k = −∞ to ∞) x(n − k) h(k)
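The discrete convolution sum can be implemented directly for finite-length sequences; a minimal sketch, which also shows that two equal-length rectangles convolve into a triangle:

```python
def convolve(x, h):
    """y[n] = sum over k of x[k] * h[n - k], for finite-length sequences."""
    y = [0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

# Two equal-length rectangles convolve into a triangle:
print(convolve([1, 1, 1], [1, 1, 1]))   # [1, 2, 3, 2, 1]
```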
The delta function, symbolized by the Greek letter delta, δ[n], is a normalized impulse: sample number zero has a value of one, while all other samples have a value of zero. For this reason, the delta function is called the unit impulse. The impulse response is the signal that exits a system when a delta function (unit impulse) is the input. If two systems are different in any way, they will have different impulse responses. The input and output signals are often called x[n] and y[n]; the impulse response is usually given the symbol h[n]. Any impulse can be represented as a shifted and scaled delta function.
Properties of Convolution
Note:
Convolution of two causal sequences is causal.
Convolution of two anti-causal sequences is anti-causal.
Convolution of two unequal-length rectangles results in a trapezium.
Convolution of two equal-length rectangles results in a triangle.
A function convolved with the unit step gives the running integral of that function (e.g. u(t) ∗ u(t) = r(t)).
Here, we have two rectangles of unequal length to convolve, which results in a trapezium. The range of the convolved signal is:
sum of lower limits < t < sum of upper limits
(−1) + (−2) < t < 2 + 2
−3 < t < 4
Hence the result is a trapezium of width 7.
Time convolution theorem: The time convolution theorem states that convolution in the time domain is equivalent to multiplication of the spectra in the frequency domain.
Mathematically, if
x1(t) ↔ X1(⍵)
and x2(t) ↔ X2(⍵)
then x1(t) ∗ x2(t) ↔ X1(⍵) X2(⍵)
Proof:
F[x1(t) ∗ x2(t)] = ∫ (−∞ to ∞) [x1(t) ∗ x2(t)] e^(−j⍵t) dt
We have x1(t) ∗ x2(t) = ∫ (−∞ to ∞) x1(τ) x2(t − τ) dτ
so F[x1(t) ∗ x2(t)] = ∫ (−∞ to ∞) { ∫ (−∞ to ∞) x1(τ) x2(t − τ) dτ } e^(−j⍵t) dt
Interchanging the order of integration, we have
F[x1(t) ∗ x2(t)] = ∫ (−∞ to ∞) x1(τ) [ ∫ (−∞ to ∞) x2(t − τ) e^(−j⍵t) dt ] dτ
Letting t − τ = p in the second integration, we have t = p + τ and dt = dp:
F[x1(t) ∗ x2(t)] = ∫ (−∞ to ∞) x1(τ) [ ∫ (−∞ to ∞) x2(p) e^(−j⍵(p+τ)) dp ] dτ
= ∫ (−∞ to ∞) x1(τ) [ ∫ (−∞ to ∞) x2(p) e^(−j⍵p) dp ] e^(−j⍵τ) dτ
= ∫ (−∞ to ∞) x1(τ) X2(⍵) e^(−j⍵τ) dτ
= [ ∫ (−∞ to ∞) x1(τ) e^(−j⍵τ) dτ ] X2(⍵) = X1(⍵) X2(⍵)
∴ x1(t) ∗ x2(t) ↔ X1(⍵) X2(⍵)
This is the time convolution theorem.
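A discrete check of the theorem: the DFT of a (zero-padded) linear convolution equals the product of the DFTs of the inputs. The sequences below are arbitrary, and a hand-rolled DFT keeps the sketch self-contained:

```python
import cmath

def dft(x, size):
    """Naive DFT of x, zero-padded to the given size."""
    x = x + [0] * (size - len(x))
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / size)
                for n in range(size)) for k in range(size)]

x1, x2 = [1, 2, 3], [1, 1]
conv = [1, 3, 5, 3]                  # x1 * x2 computed by hand
N = len(conv)                        # N >= len(conv) avoids wrap-around
X1, X2, Y = dft(x1, N), dft(x2, N), dft(conv, N)
ok = all(abs(X1[k] * X2[k] - Y[k]) < 1e-9 for k in range(N))
print(ok)                            # True
```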
Frequency convolution theorem:
The frequency convolution theorem states that multiplication of two functions in the time domain is equivalent to convolution of their spectra in the frequency domain (scaled by 1/2π).
Mathematically, if
x1(t) ↔ X1(⍵)
and x2(t) ↔ X2(⍵)
then x1(t) x2(t) ↔ (1/2π) [X1(⍵) ∗ X2(⍵)]
Proof:
F[x1(t) x2(t)] = ∫ (−∞ to ∞) x1(t) x2(t) e^(−j⍵t) dt
By the definition of the inverse Fourier transform,
= ∫ (−∞ to ∞) [ (1/2π) ∫ (−∞ to ∞) X1(λ) e^(jλt) dλ ] x2(t) e^(−j⍵t) dt
Interchanging the order of integration, we get
= (1/2π) ∫ (−∞ to ∞) X1(λ) [ ∫ (−∞ to ∞) x2(t) e^(−j(⍵−λ)t) dt ] dλ
= (1/2π) ∫ (−∞ to ∞) X1(λ) X2(⍵ − λ) dλ
= (1/2π) [X1(⍵) ∗ X2(⍵)]
This is the frequency convolution theorem in radian frequency. In terms of frequency f, we get
F[x1(t) x2(t)] = X1(f) ∗ X2(f)
As a consequence (Rayleigh's energy theorem):
∫ (−∞ to ∞) |x(t)|² dt = ∫ (−∞ to ∞) |X(f)|² df. Hence proved.