
digital data transmission — 2023/2024

signals and energy

1 course introduction
Modern communication systems are based on digital signals. A signal is a continuous-time function of a physical quantity X, such as the intensity in an antenna as a function of time t, denoted as X(t). The signal is digital when it can only take values from a finite set of possible waveforms representing bits of information. The main advantage is that, in contrast to analog signals, which can take any shape, the digital circuits and digital signal processing involved in the transmission and reception of digital signals are more robust to distortion and interference.

Figure 1.1: Digital communication system. Information bits B1, …, Bk enter the mapper, which outputs the symbols X1, …, Xn; the modulator produces the transmitted signal X(t); after the channel, the received signal Y(t) is processed by the detection block (demodulator & demapper), which delivers the bit estimates B̂1, …, B̂k to the destination.

Figure 1.1 above shows a simplification of the blocks in a typical digital communication system. Inspecting the notation, we already see that a digital communication system involves heterogeneous quantities: a sequence of information bits B1, …, Bk; a sequence of (complex-valued) discrete-time symbols X1, …, Xn; a (complex-valued) transmitted continuous-time signal X(t); and a (complex-valued) received continuous-time signal Y(t). Since information is carried by continuous-time signals and transmitted through physical channels such as wireless links or optical fibers, we are interested in the use of the physical resources (time, frequency, energy) for the transmission of information. We shall illustrate the concept of modulation through an example.
Example 1.1. Suppose we wish to transmit the sequence of information bits B1 , . . . , B k using the signal


ϕ(t) = \begin{cases} A & 0 ≤ t ≤ T \\ 0 & \text{elsewhere,} \end{cases}    (1.1)
a rectangular waveform of amplitude A and length T as in Figure 1.2. To do so, we can directly map the information

Figure 1.2: Rectangular waveform ϕ(t), of amplitude A on the interval [0, T].

bits to the signal using the linear combination
X(t) = ∑_{i=1}^{k} B_i ϕ_i(t),    (1.2)

where ϕ_i(t) = ϕ(t − (i − 1)T). We observe that while the first waveform is ϕ1(t) = ϕ(t), the others, ϕ2(t) = ϕ(t − T), ϕ3(t) = ϕ(t − 2T), etc., are shifted versions of ϕ(t). For a given sequence of bits, say 10110, the digital signal x(t) will have duration 5T and will be given by the linear combination x(t) = ϕ1(t) + ϕ3(t) + ϕ4(t), graphically represented in Figure 1.3. We observe that if we are given a signal x(t) constructed using (1.2), there is a unique sequence of bits that leads to this specific x(t); in other words, there is a one-to-one mapping between bits and signals. And vice versa: the fact that the waveforms ϕ_i(t) do not overlap in time is, in this example, a sufficient condition for recovering the information bits from x(t). We shall see later in the course that this is related to the concept of orthogonality between signals, a property that does not necessarily require non-overlapping signals!

Figure 1.3: Digital signal of Example 1.1: amplitude A on the intervals [0, T] and [2T, 4T], and zero elsewhere.
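To make the mapping concrete, the following MATLAB sketch (with the arbitrary choices A = 1 and T = 1; the variable names are ours) builds and plots the signal of Figure 1.3 by summing the shifted waveforms:

```matlab
% Sketch of the mapping in Example 1.1: build the signal of Figure 1.3
% from the bits 10110 (A = 1 and T = 1 are arbitrary choices).
A = 1; T = 1;
B = [1 0 1 1 0];                        % information bits B1, ..., B5
t = linspace(0, numel(B)*T, 5001);      % time grid covering 5T
phi = @(t) A*((t >= 0) & (t <= T));     % rectangular waveform (1.1)
x = zeros(size(t));
for i = 1:numel(B)
    x = x + B(i)*phi(t - (i-1)*T);      % i-th term of the sum (1.2)
end
plot(t, x); xlabel('t'); ylabel('x(t)');
```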

In this course, we build on top of the basic tools of calculus, algebra, complex numbers, probability, programming, signals and systems, and information theory; introduce new concepts such as signal energy, signal space, constellations, bandwidth, noise and detection; and leave for future interest other components involved in real systems, such as encryption or multiuser communication.

We start this topic by reviewing the main properties of continuous-time signals and their energy.

2 signals
As previously stated, a signal is a complex-valued, continuous-time function X(t). As a single-variable function, we can easily plot it using two axes if x(t) is real-valued, or using two figures (one for the real part and one for the imaginary part; or one for the amplitude and one for the phase) if x(t) is complex-valued.

Since we will be interested in the spectral resources used for the transmission of data using signals, we introduce the Fourier transform of a signal x(t), denoted as x̂(f), as the integral

x̂(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt.    (2.1)

Similarly, the inverse Fourier transform of x̂( f ) is given by



x(t) = ∫_{−∞}^{∞} x̂(f) e^{j2πft} df.    (2.2)

Here, we use x̂ instead of X because we will reserve uppercase letters for random variables later on. Also, param-
eterizing the complex exponential with 2π f instead of ω provides a convenient symmetry between the direct and
the inverse transformations, without any additional scaling factors in front of the integrals.

Example 2.1. Consider the rectangular waveform of duration T and amplitude A centered at the origin, that is


x(t) = \begin{cases} A & −T/2 ≤ t ≤ T/2 \\ 0 & \text{elsewhere.} \end{cases}    (2.3)
Using the definition of the Fourier transform (2.1), for f ≠ 0, we have
x̂(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt = A ∫_{−T/2}^{T/2} e^{−j2πft} dt = A \frac{sin(πTf)}{πf} = AT \frac{sin(πTf)}{πTf},    (2.4)

while for f = 0,

x̂(0) = ∫_{−∞}^{∞} x(t) dt = AT.    (2.5)
We can combine (2.4) and (2.5) into the single expression

x̂( f ) = AT sinc(T f ), (2.6)

where the function sinc(⋅) is defined as



sinc(x) = \begin{cases} 1 & x = 0 \\ \frac{sin(πx)}{πx} & x ≠ 0. \end{cases}    (2.7)
Graphically, as shown in Figure 2.1, the Fourier transform x̂(f) has zeroes at every nonzero multiple of 1/T.

Figure 2.1: Fourier transform x̂(f) of Example 2.1: a sinc shape with peak value AT and zeroes at every nonzero multiple of 1/T.
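As a sanity check, one can approximate the integral (2.1) numerically and compare it against (2.6). A minimal MATLAB sketch, assuming the arbitrary values A = 2 and T = 1, and defining sinc inline so that no toolbox is needed:

```matlab
% Numeric check of Example 2.1: approximate (2.1) by trapezoidal
% integration and compare with A*T*sinc(T*f) from (2.6).
A = 2; T = 1;
t = linspace(-T/2, T/2, 10001);            % support of x(t)
f = linspace(-4/T, 4/T, 401);              % frequency grid
xhat = zeros(size(f));
for k = 1:numel(f)
    xhat(k) = trapz(t, A*exp(-1j*2*pi*f(k)*t));   % approximate (2.1)
end
mysinc = @(u) sin(pi*u)./(pi*u + (u == 0)) + (u == 0);  % sinc with sinc(0) = 1
max(abs(xhat - A*T*mysinc(T*f)))           % ~ 0: matches (2.6)
```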

Problem 2.1. Calculate the Fourier transform of y(t) = 2W sinc(2Wt).


Solution. We have seen that x(t), a rectangular wave of length T and amplitude A centered at the origin, has Fourier transform x̂(f) = AT sinc(Tf). This implies that the inverse Fourier transform of x̂(f) satisfies, from (2.2),


∫_{−∞}^{∞} x̂(f) e^{j2πft} df = A · ∫_{−∞}^{∞} T sinc(Tf) e^{j2πft} df = A · \begin{cases} 1 & −T/2 ≤ t ≤ T/2 \\ 0 & \text{elsewhere.} \end{cases}    (2.8)

Since both x(t) and x̂( f ) are even functions (that is, x(−t) = x(t) and x̂(− f ) = x̂( f )) it holds that the Fourier
transform of y(t), given by

ŷ(f) = ∫_{−∞}^{∞} y(t) e^{−j2πft} dt,    (2.9)
is equivalent to

ŷ(f) = ∫_{−∞}^{∞} y(u) e^{j2πfu} du.    (2.10)

To check that (2.9) and (2.10) are equivalent, we may use the change of variable u = −t. Substituting the expression
for y(u), we obtain

ŷ(f) = ∫_{−∞}^{∞} 2W sinc(2Wu) e^{j2πfu} du.    (2.11)
Upon the identifications A = 1 and T = 2W, with u here playing the role of f in (2.8) and f playing the role of t, we finally obtain that


ŷ(f) = \begin{cases} 1 & −W ≤ f ≤ W \\ 0 & \text{elsewhere.} \end{cases}    (2.12)

We already saw in the toy Example 1.1 that the signals transmitted in digital communications are linear combinations of predetermined (deterministic, fixed) waveforms, which are known by both the transmitter and the receiver, and that the information is carried by the coefficients of such a linear combination (bits or symbols). Another example of a linear combination is the following.
Example 2.2. A signal x(t) is constructed as the linear combination x(t) = x1 ϕ1 (t) + x2 ϕ2 (t), where
ϕ1(t) = \begin{cases} \sqrt{2/T} & 0 ≤ t ≤ T/2 \\ 0 & \text{elsewhere,} \end{cases} \qquad ϕ2(t) = \begin{cases} −\sqrt{2/T} & T/2 ≤ t ≤ T \\ 0 & \text{elsewhere,} \end{cases}    (2.13)
and where the symbols x1 and x2 are given by

x1 = \frac{A}{2} \sqrt{\frac{T}{2}},    (2.14)

x2 = A \sqrt{\frac{T}{2}}.    (2.15)
To obtain x(t), we need to find x1 ϕ1 (t) + x2 ϕ2 (t) for every value of t, say from −T to 2T. The resulting signal is
represented in Figure 2.2. And since the Fourier transform is a linear operator, the Fourier transform of x(t) satisfies
x̂( f ) = x1 ϕ̂1 ( f ) + x2 ϕ̂2 ( f ), where ϕ̂1 ( f ) and ϕ̂2 ( f ) are respectively the Fourier transforms of ϕ1 (t) and ϕ2 (t).

Figure 2.2: The signal for Example 2.2: amplitude A/2 on [0, T/2] and −A on [T/2, T], zero elsewhere.
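The construction can be verified with a short MATLAB sketch (A = 1 and T = 1 are arbitrary choices) that evaluates the linear combination on a time grid and reproduces Figure 2.2:

```matlab
% Sketch reproducing Figure 2.2 (A = 1, T = 1 arbitrary).
A = 1; T = 1;
phi1 = @(t)  sqrt(2/T)*((t >= 0)   & (t <= T/2));  % waveforms (2.13)
phi2 = @(t) -sqrt(2/T)*((t >  T/2) & (t <= T));
x1 = (A/2)*sqrt(T/2);                              % symbol (2.14)
x2 = A*sqrt(T/2);                                  % symbol (2.15)
t = linspace(-T, 2*T, 3001);
x = x1*phi1(t) + x2*phi2(t);                       % A/2 on [0,T/2], -A on (T/2,T]
plot(t, x); xlabel('t'); ylabel('x(t)');
```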

Problem 2.2. A signal is constructed as x(t) = ∑_{i=−1}^{1} x_i ϕ_i(t), where the ϕ_i(t) are the Fourier functions given by


ϕ_i(t) = \begin{cases} \frac{1}{\sqrt{T}} e^{j2πit/T} & −T/2 ≤ t ≤ T/2 \\ 0 & \text{elsewhere,} \end{cases}    (2.16)
and where the symbols are given by
x_0 = 0,    (2.17)

x_1 = −j \frac{T\sqrt{T}}{2π},    (2.18)

x_{−1} = j \frac{T\sqrt{T}}{2π}.    (2.19)

Determine the signal x(t).
Solution. For −T/2 ≤ t ≤ T/2, we have

x(t) = x_{−1} ϕ_{−1}(t) + x_0 ϕ_0(t) + x_1 ϕ_1(t) = j \frac{T\sqrt{T}}{2π} · \frac{1}{\sqrt{T}} e^{−j2πt/T} − j \frac{T\sqrt{T}}{2π} · \frac{1}{\sqrt{T}} e^{j2πt/T} = \frac{T}{π} sin\left(\frac{2πt}{T}\right),    (2.20)

while x(t) = 0 outside the [−T/2, T/2] interval. The resulting signal is shown in Figure 2.3.

Figure 2.3: The signal of Problem 2.2: the sinusoid (T/π) sin(2πt/T) on the interval [−T/2, T/2].
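A quick numeric sketch (T = 1, with the symbol values from (2.18) and (2.19)) confirms that the complex linear combination is indeed the real sinusoid in (2.20):

```matlab
% Numeric check of (2.20) (a sketch; T = 1 is an arbitrary choice).
T = 1;
t = linspace(-T/2, T/2, 1000);
phi = @(i, t) (1/sqrt(T))*exp(1j*2*pi*i*t/T);  % Fourier functions (2.16)
x1  = -1j*T*sqrt(T)/(2*pi);                    % symbol (2.18)
xm1 =  1j*T*sqrt(T)/(2*pi);                    % symbol (2.19)
x = xm1*phi(-1, t) + x1*phi(1, t);             % x0 = 0 contributes nothing
max(abs(x - (T/pi)*sin(2*pi*t/T)))             % ~ 1e-16
```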

In a real communication system, the information bits (B1, …, Bk) can be modeled as a sequence of random variables taking values in {0, 1}^k with some joint probability distribution, implying that the symbols (X1, …, Xn) and the signal X(t) in Figure 1.1 are also random.
Problem 2.3. A signal is constructed as X(t) = B1 ϕ(t) + B2 ϕ(t − T), where ϕ(t) is the rectangular waveform given in Figure 1.2, and B1 and B2 are independent, equiprobable bits. Describe the randomness of X(t).
Solution. Since B1 and B2 are independent with distribution {1/2, 1/2}, with probability P_{B1 B2}(0, 0) = P_{B1}(0) P_{B2}(0) = 1/4 we have X(t) = 0. Similarly, with probability 1/4 we have X(t) = ϕ(t), with probability 1/4 we have X(t) = ϕ(t − T), and with probability 1/4 we have X(t) = ϕ(t) + ϕ(t − T). In other words, X(t) is a random signal that takes values in the alphabet {x1(t), x2(t), x3(t), x4(t)} with equal probability, graphically represented in Figure 2.4.

Figure 2.4: The alphabet of the signal X(t) in Problem 2.3: x1(t) = 0, x2(t) = ϕ(t), x3(t) = ϕ(t − T) and x4(t) = ϕ(t) + ϕ(t − T), shown on the interval [0, 2T].
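A short Monte Carlo sketch (hypothetical variable names) illustrates that the four bit pairs, and hence the four signals of Figure 2.4, occur with probability approximately 1/4 each:

```matlab
% Monte Carlo sketch for Problem 2.3: draw many independent bit pairs
% and check that each of the four signals occurs with probability ~1/4.
N = 1e5;
B1 = rand(1, N) < 0.5;                 % independent equiprobable bits
B2 = rand(1, N) < 0.5;
p = [mean(~B1 & ~B2), mean(B1 & ~B2), ...
     mean(~B1 & B2),  mean(B1 & B2)]   % each entry ~ 0.25
```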

3 energy
Signals in digital communications usually represent electric fields measured in an antenna, or voltages measured in
a resistor. The energy required to generate these signals is known to be proportional to the integral of their squared
magnitude, motivating the following definition for complex-valued signals. The energy of a signal x(t) is

E_x = ∫_{−∞}^{∞} |x(t)|² dt.    (3.1)

We will deal only with finite-energy signals, also called energy-limited signals or simply energy signals.

Example 3.1. The energy of the rectangular waveform in Figure 1.2 is

E_x = ∫_0^T |A|² dt = |A|² T.    (3.2)

We obtain the same result for the signal in Example 2.1 (please check!). The energy of the signal in Figure 1.3 is
E_x = ∫_0^T |A|² dt + ∫_{2T}^{4T} |A|² dt = |A|² T + 2|A|² T = 3|A|² T,    (3.3)

while the energy of the signal in Figure 2.2 is


E_x = ∫_0^{T/2} \frac{|A|²}{4} dt + ∫_{T/2}^{T} |A|² dt = \frac{5}{8} |A|² T.    (3.4)
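These energies can also be checked numerically; a MATLAB sketch, with the arbitrary choices A = 1 and T = 1:

```matlab
% Numeric check of the energies in Example 3.1 (A = 1, T = 1 arbitrary).
A = 1; T = 1;
t = linspace(-T, 6*T, 200001);
phi = @(t) A*((t >= 0) & (t <= T));           % waveform of Figure 1.2
x = phi(t) + phi(t - 2*T) + phi(t - 3*T);     % signal of Figure 1.3
Ephi = trapz(t, abs(phi(t)).^2)               % ~ |A|^2*T = 1
Ex   = trapz(t, abs(x).^2)                    % ~ 3*|A|^2*T = 3
```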

Problem 3.1. Check that the Fourier functions (2.16) have unit energy.
Solution. Using the definition of energy in (3.1), we have that the energy of ϕ_i(t) in (2.16) is given by

E_{ϕ_i} = ∫_{−T/2}^{T/2} \left| \frac{1}{\sqrt{T}} e^{j2πit/T} \right|² dt = ∫_{−T/2}^{T/2} \frac{1}{\sqrt{T}} e^{j2πit/T} · \frac{1}{\sqrt{T}} e^{−j2πit/T} dt = ∫_{−T/2}^{T/2} \frac{1}{T} dt = 1.    (3.5)

To obtain (3.5), we used that the squared modulus of a complex quantity z = a + jb can be calculated as |z|² = zz*, where z* = a − jb is the complex conjugate. Of course, |z|² = zz* = (a + jb)(a − jb) = a² + b².
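The same unit-energy property can be confirmed numerically; a sketch with the arbitrary choice T = 1:

```matlab
% Numeric check that the Fourier functions (2.16) have unit energy.
T = 1;
t = linspace(-T/2, T/2, 100001);
E = zeros(1, 3);
for i = -1:1
    phi_i = (1/sqrt(T))*exp(1j*2*pi*i*t/T);  % phi_i(t) on its support
    E(i+2) = trapz(t, abs(phi_i).^2);        % each entry ~ 1
end
E
```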

Problem 3.2. What values of A are such that the signal in Figure 1.2 has unit energy?
Solution. We need to solve the equation |A|² T = 1 for a complex variable A ∈ ℂ. From |A| = 1/√T we obtain that A is a complex number with magnitude 1/√T and any phase, that is, A = (1/√T) e^{jα} for any α ∈ ℝ. Indeed,

|A|² T = \left| \frac{1}{\sqrt{T}} e^{jα} \right|² T = \frac{1}{\sqrt{T}} e^{jα} · \frac{1}{\sqrt{T}} e^{−jα} · T = 1.    (3.6)

Sometimes it is preferable to calculate the energy in the frequency domain thanks to Parseval’s theorem for energy,
stating that for a finite-energy signal x(t) it holds that

E_x = ∫_{−∞}^{∞} |x̂(f)|² df.    (3.7)
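A numeric sketch of Parseval's theorem for the rectangular pulse of Example 2.1 (A = 1 and T = 1 are arbitrary; the frequency grid must be wide because sinc² decays slowly):

```matlab
% Numeric illustration of Parseval's theorem (3.7) for the centered
% rectangular pulse, using the inline sinc defined earlier.
A = 1; T = 1;
mysinc = @(u) sin(pi*u)./(pi*u + (u == 0)) + (u == 0);
f = linspace(-200/T, 200/T, 2000001);      % wide grid: sinc^2 decays slowly
Ef = trapz(f, abs(A*T*mysinc(T*f)).^2)     % ~ |A|^2*T = 1 (tails truncated)
```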

Problem 3.3. Compute the energy of the signal y(t) in Problem 2.1.
Solution. The energy of y(t) is given by
E_y = ∫_{−∞}^{∞} |2W sinc(2Wt)|² dt = 4W² ∫_{−∞}^{∞} sinc²(2Wt) dt.    (3.8)

Clearly, it is by far more convenient to compute this energy in the frequency domain. From Equation (2.12) we know
that the Fourier transform of y(t) is


ŷ(f) = \begin{cases} 1 & −W ≤ f ≤ W \\ 0 & \text{elsewhere.} \end{cases}    (3.9)
Then,
E_y = ∫_{−∞}^{∞} |ŷ(f)|² df = ∫_{−W}^{W} df = 2W.    (3.10)

Problem 3.4. In this problem, we consider the signal x(t) = e^{−t²}. To study it,

(a) Check that it has finite energy.

(b) Calculate its Fourier transform.

(c) Plot x(t) and x̂( f ).

Solution. We will use MATLAB to solve this problem. We start by defining the symbolic expression for the signal and
its Fourier transform, that is, syms x(t); and syms xhat(f);. This automatically creates the symbolic variables t
and f. We assign the expression to the signal with x(t) = exp(-t^2);.

(a) The energy of x(t) is given by (3.1), which can be implemented using int(x(t)^2,-inf,inf);, returning the finite value E_x = √(2π)/2.

(b) To calculate the Fourier transform x̂(f), we may use the function fourier(x,t,w), which by default returns the integral

∫_{−∞}^{∞} x(t) e^{−jωt} dt.    (3.11)
Hence, we must specify ω = 2π f to use the normalized Fourier transform (2.1), and store the resulting func-
tion into x̂( f ): xhat(f)=fourier(x,t,2*pi*f);. By displaying the expression, we obtain that

x̂(f) = √π e^{−π²f²}.    (3.12)

Since x(t) is real and even, its Fourier transform is also real and even.

(c) We now define two variables T and W to plot x(t) and x̂(f) in the [−T/2, T/2] and [−W, W] intervals using fplot(x,[-T/2,T/2]) and fplot(xhat,[-W,W]), respectively. Graphically, we observe that proper values for T and W are T = 5 seconds and W = 1 Hz, as shown in Figure 3.1.

Figure 3.1: Signal x(t) and its Fourier transform x̂(f) of Problem 3.4; the transform has peak value x̂(0) = √π.
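For reference, the commands described above can be collected into a single script (a sketch; it assumes the Symbolic Math Toolbox is available):

```matlab
% Consolidated MATLAB script for Problem 3.4, following the commands
% described in the solution above (Symbolic Math Toolbox).
syms x(t) xhat(f)
x(t) = exp(-t^2);                      % the Gaussian signal
Ex = int(x(t)^2, -inf, inf)            % (a) energy: sqrt(2*pi)/2, finite
xhat(f) = fourier(x(t), t, 2*pi*f)     % (b) normalized transform (2.1)
T = 5; W = 1;                          % (c) plotting ranges from the text
figure; fplot(x, [-T/2, T/2]); xlabel('t'); ylabel('x(t)');
figure; fplot(xhat, [-W, W]); xlabel('f'); ylabel('xhat(f)');
```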

The energy of a random signal, like its other properties, can itself be a random quantity.
Problem 3.5. Calculate the average (expected) energy of the signal X(t) in Problem 2.3.
Solution. There are two ways to look at this problem. The first one is to use the link between the random variables and describe the distribution of the energy of X(t). That is, with probability 1/4, X(t) = x1(t) and therefore E_X = 0; with probability 1/4, X(t) = x2(t) and therefore E_X = |A|² T, and similarly when X(t) = x3(t); while E_X = 2|A|² T for X(t) = x4(t), an event that happens with probability 1/4. In summary, the energy of X(t) is a random variable taking

values in {0, |A|² T, 2|A|² T} with distribution {1/4, 1/2, 1/4}. Note that P[E_X = |A|² T] = P[X(t) = x2(t)] + P[X(t) = x3(t)] = 1/4 + 1/4 = 1/2. From the distribution of E_X we can determine the expected value as

E[E_X] = \frac{1}{4} · 0 + \frac{1}{2} · |A|² T + \frac{1}{4} · 2|A|² T = |A|² T.    (3.13)
Alternatively, we can express the energy of X(t) as a deterministic function of the information bits and calculate the expectation using the distribution of (B1, B2), without the need to determine the distribution of E_X. Writing ϕ1(t) = ϕ(t) and ϕ2(t) = ϕ(t − T), we have

E_X = ∫_{−∞}^{∞} |X(t)|² dt    (3.14)
= ∫_{−∞}^{∞} X(t) X(t)* dt    (3.15)
= ∫_{−∞}^{∞} (B1 ϕ1(t) + B2 ϕ2(t)) (B1 ϕ1(t) + B2 ϕ2(t))* dt    (3.16)
= ∫_{−∞}^{∞} (B1 ϕ1(t) + B2 ϕ2(t)) (B1 ϕ1(t)* + B2 ϕ2(t)*) dt    (3.17)
= ∫_{−∞}^{∞} (B1² ϕ1(t) ϕ1(t)* + B1 B2 ϕ1(t) ϕ2(t)* + B2 B1 ϕ2(t) ϕ1(t)* + B2² ϕ2(t) ϕ2(t)*) dt    (3.18)
= B1² ∫_{−∞}^{∞} |ϕ1(t)|² dt + B1 B2 ∫_{−∞}^{∞} ϕ1(t) ϕ2(t)* dt + B2 B1 ∫_{−∞}^{∞} ϕ2(t) ϕ1(t)* dt + B2² ∫_{−∞}^{∞} |ϕ2(t)|² dt.    (3.19)

We identify in (3.19) the energy of ϕ1 (t) and the energy of ϕ2 (t), which we already found in Example 3.1 as
∫_{−∞}^{∞} |ϕ1(t)|² dt = ∫_{−∞}^{∞} |ϕ2(t)|² dt = |A|² T.    (3.20)

The energy of X(t) in (3.19) also depends on two integrals that involve the product between the two waveforms
ϕ1 (t) and ϕ2 (t). Since in this problem ϕ2 (t) is a shifted version of ϕ1 (t) such that they do not overlap in time, we
have that ϕ1 (t)ϕ2 (t)∗ = ϕ2 (t)ϕ1 (t)∗ = 0 for all t, and as a consequence
∫_{−∞}^{∞} ϕ1(t) ϕ2(t)* dt = ∫_{−∞}^{∞} ϕ2(t) ϕ1(t)* dt = 0.    (3.21)

We will discover in the next topic that (3.21) implies that the waveforms are orthogonal. Combining (3.20) and (3.21)
in (3.19), we obtain that the energy of X(t) is the following function of B1 and B2 :

E_X = B1² |A|² T + B2² |A|² T.    (3.22)

Taking the expectation,

E[E_X] = E[B1² |A|² T + B2² |A|² T] = E[B1² |A|² T] + E[B2² |A|² T] = E[B1²] |A|² T + E[B2²] |A|² T.    (3.23)

It only remains to calculate E[B1²] and E[B2²]. Since both B1 and B2 are equiprobable bits,

E[B1²] = \frac{1}{2} · 0² + \frac{1}{2} · 1² = \frac{1}{2},    (3.24)

and the same holds for B2. Using this result back in (3.23), we find again that

E[E_X] = \frac{1}{2} |A|² T + \frac{1}{2} |A|² T = |A|² T.    (3.25)
2 2
In conclusion, the average energy used to transmit two independent, equiprobable bits using the modulated signal (1.2) we saw at the beginning of this topic coincides with the energy of the waveform ϕ(t) itself in (1.1). What happens when the bits are not independent or not equiprobable is left as an open question for the enthusiastic reader.
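The result can also be verified by simulation; a Monte Carlo sketch with the arbitrary choices A = 1 and T = 1:

```matlab
% Monte Carlo check of E[E_X] = |A|^2*T in Problem 3.5.
A = 1; T = 1; N = 1e6;
B1 = rand(1, N) < 0.5;                  % independent equiprobable bits
B2 = rand(1, N) < 0.5;
EX = (B1.^2 + B2.^2)*abs(A)^2*T;        % per-realization energy (3.22)
mean(EX)                                % ~ |A|^2*T = 1
```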
