Basics of Signals - Princeton University
Basics of Signals
2.1
of three variables. Now, if we are also interested in how the temperature evolves in time, the signal f(x, y, z, t) would be a function of four variables.
[Figure: a signal plotted as intensity versus time (seconds).]
2.2
Often the domain and the range of a signal f(x) are modeled as continuous. That is, the time (or spatial) coordinate x is allowed to take on arbitrary values (perhaps within some interval), and the value of the signal itself is allowed to take on arbitrary values (again within some interval). Such signals are called analog signals.
[Figure: temperature versus time (seconds).]
[Figure: an image with no subsampling, and subsampled using 4 x 4, 8 x 8, and 16 x 16 blocks.]
[Figure: a signal quantized to 32, 16, and 8 levels.]
[Figure: an image quantized to 256, 32, 16, 8, 4, and 2 levels.]
2.3
[Figure: an amplitude-shifted signal f(x)+1.5 and a time-shifted signal f(x+3).]
[Figure 2.9: a periodic signal, plotted versus time (seconds).]
For some signals, appropriate time shifts can leave the signal unchanged. Formally, a signal is said to be periodic with period P if x(t + P) = x(t) for all t. That is, the signal simply repeats itself every P seconds. Figure 2.9 shows an example of a periodic signal.
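As a quick numerical illustration, one can check x(t + P) = x(t) on a grid of sample times; the sinusoid used here as the test signal, and its period, are arbitrary choices:

```python
import numpy as np

# A sinusoid with period P = 0.5 seconds (an illustrative choice).
P = 0.5
x = lambda t: np.sin(2 * np.pi * t / P)

t = np.linspace(0, 2, 1000)        # sample times
shifted = x(t + P)                 # the signal shifted by one period
print(np.allclose(shifted, x(t)))  # True: the shift leaves the signal unchanged
```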
Amplitude scaling a signal to get ax(t) is simply multiplying x(t) by a constant a. However, a rather different operation is obtained when one scales the time domain. Namely, the signal x(at) is like the original signal, but with the time axis compressed or stretched (depending on whether a > 1 or a < 1). Of course, if a = 1 the signal is unchanged. Figure 2.10 shows the effects of amplitude and time scaling. For negative values of a, the signal is flipped (or reflected) about the range axis, in addition to any compression or stretching. In particular, if a = −1, the signal is reflected about the range axis, but there is no stretching or compression. For some functions, the reflection about the range axis leaves the function unchanged; that is, the signal is symmetric about the range axis. Formally, the property required for this is x(−t) = x(t) for all t. Such functions are called even. A related notion is that of an odd function, for which x(−t) = −x(t). These functions are said to be symmetric about the origin, meaning that they remain unchanged if they are first reflected about the range axis and then reflected about the domain axis. Figure 2.11 shows examples of an even function and an odd function.
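The even and odd symmetries are easy to verify numerically on a symmetric sample grid, where reflecting about the range axis (t → −t) amounts to reversing the sample array; cos and sin serve as the standard even and odd examples:

```python
import numpy as np

t = np.linspace(-5, 5, 1001)   # symmetric grid, so -t is also on the grid

even = np.cos(t)               # cos(-t) =  cos(t): an even function
odd  = np.sin(t)               # sin(-t) = -sin(t): an odd function

# Reversing the samples evaluates each function at -t.
print(np.allclose(even[::-1], even))   # True
print(np.allclose(odd[::-1], -odd))    # True
```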
The signal x(y(t)) is called the composition of the two functions x() and
y(). For each t, it denotes the operation of taking the value y(t) and evaluating
x() at the time y(t). Of course, we can get a very different result if we reverse
the order and consider y(x(t)).
One other operation that is extremely useful is known as convolution. We
will defer a description of this operation until Section XX.
[Figure 2.10: an amplitude-scaled signal 2f(x), and time-scaled signals f(x/2) and f(2x).]
[Figure 2.11: an even function and an odd function.]
2.4 Noise
[Figure: an original signal, a noisy version of it, and the original signal recovered by requantization.]
Robustness to noise is one reason a digital system might be preferred over an analog one. Of course, the power of digital computing
is also a key reason for the prevalence of digital systems, and robustness to noise
is one factor that makes digital computing so reliable. Figures 2.12 and 2.13
illustrate the effect of adding noise on an analog signal and a quantized (although
still continuous-time) signal. Without further knowledge of signal and noise
characteristics, the noise cannot be removed from an analog signal since any
possible value could be a valid value for the signal. On the other hand, if we
know the original signal is quantized (so it takes on only a discrete set of values),
then depending on the noise level, it may be possible to remove much of the
noise by simply re-quantizing the noisy signal. This process simply maps the
observed signal values to one of the possible original levels (for example, by
selecting the closest level).
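A sketch of this re-quantization step, mapping each noisy sample to the closest allowed level; the level spacing and noise strength are illustrative assumptions, with the noise kept well below half the level spacing:

```python
import numpy as np

rng = np.random.default_rng(0)

levels = np.array([0.0, 0.25, 0.5, 0.75, 1.0])    # assumed quantization levels
clean = rng.choice(levels, size=200)              # a quantized "original" signal
noisy = clean + rng.normal(0, 0.02, size=200)     # additive noise, small vs. level spacing

# Re-quantize: map each noisy sample to the closest allowed level.
recovered = levels[np.argmin(np.abs(noisy[:, None] - levels[None, :]), axis=1)]

print(np.array_equal(recovered, clean))   # True: the noise is removed entirely
```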
2.5
Here we briefly define some signals that we will commonly encounter. Perhaps
the most basic and frequently used signal is a sinusoid, defined by

x(t) = A sin(ωt)

and shown in Figure 2.14.

[Figure 2.14: the sinusoid x(t) = A sin(ωt), oscillating between −A and A with period 2π/ω.]
It is clear that since sin(θ + π/2) = cos θ, we could have equivalently written the sinusoid as

x(t) = A cos(2πf t − π/2).
Up to this point, we have only considered real-valued signals. Although physical quantities can generally be represented in terms of real-valued signals, it turns out to be extremely useful to consider signals taking on complex values. The most basic complex-valued signal we will use is the complex exponential e^{jωt}. (Note that here we have used the symbol j instead of i to denote the imaginary number √−1. This is common in electrical engineering, since the symbol i has traditionally been used to represent an electrical current.) The well-known Euler identity can be used to write the complex exponential in terms of standard sinusoids. Namely,

e^{jωt} = cos(ωt) + j sin(ωt).
As with sinusoids, the complex exponential can also be written in terms of
frequency in Hertz rather than radian frequency.
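The Euler identity is easy to check numerically; the frequency below is an arbitrary choice, and NumPy writes the imaginary unit as 1j:

```python
import numpy as np

omega = 2 * np.pi * 3          # radian frequency for f = 3 Hz (illustrative)
t = np.linspace(0, 1, 500)

lhs = np.exp(1j * omega * t)                       # the complex exponential
rhs = np.cos(omega * t) + 1j * np.sin(omega * t)   # its Euler-identity expansion

print(np.allclose(lhs, rhs))            # True
print(np.allclose(np.abs(lhs), 1.0))    # the complex exponential has unit magnitude
```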
Some other signals that we will use on occasion, and therefore give special symbols to, are the step function, ramp, rectangle function, triangle function, and the sinc function (pronounced like "sink"). These signals are defined by

step(t) = 0 for t < 0, and 1 for t ≥ 0,

ramp(t) = 0 for t < 0, and t for t ≥ 0,

rect(t) = 1 for −1/2 ≤ t ≤ 1/2, and 0 otherwise,

tri(t) = 1 − |t| for −1 ≤ t ≤ 1, and 0 otherwise,

and

sinc(t) = sin(πt)/(πt).
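These definitions translate directly into vectorized code; np.sinc already computes the normalized convention sin(πt)/(πt), and the rest can be written with np.where:

```python
import numpy as np

def step(t):
    # 1 for t >= 0, 0 for t < 0 (elementwise)
    return np.where(t >= 0, 1.0, 0.0)

def ramp(t):
    # t for t >= 0, 0 for t < 0
    return np.where(t >= 0, t, 0.0)

def rect(t):
    # unit-height, unit-width pulse centered at 0
    return np.where(np.abs(t) <= 0.5, 1.0, 0.0)

def tri(t):
    # 1 - |t| on [-1, 1], 0 outside
    return np.where(np.abs(t) <= 1, 1 - np.abs(t), 0.0)

t = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(step(t))      # 0, 0, 1, 1, 1
print(ramp(t))      # 0, 0, 0, 0.5, 2
print(tri(t))       # 0, 0.5, 1, 0.5, 0
print(np.sinc(t))   # normalized sinc, with sinc(0) = 1
```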
Figure 2.15: (a) Step function. (b) Ramp function. (c) Rectangle function. (d) Triangle function. (e) Sinc function.

2.6 Delta Functions
The delta function in continuous time is also called the Dirac delta function, unit impulse function, or sometimes just the impulse function. It is defined implicitly through its behavior under integration, as follows:

Definition: δ(t) is the Dirac delta function if it satisfies

∫_{−∞}^{∞} f(t) δ(t) dt = f(0)        (2.1)

for any function f(t) continuous at t = 0.
Choosing f(t) = 1, this result implies that the area under the delta function is equal to 1. A second property gives the value of δ(t) for t ≠ 0. Suppose δ(t) took on positive values in even a very small interval away from t = 0. Then we could choose a function f(t) that also took positive values inside a portion of this same interval, but with f(t) = 0 elsewhere (including t = 0) and with f(t) continuous at t = 0. In this case, however, the left-hand side of Equation (2.1) must be positive, while the right-hand side is 0. Therefore, δ(t) cannot take on positive values in any interval. A similar argument leads us to the conclusion that δ(t) cannot take on negative values in any interval. Thus, δ(t) = 0 for t ≠ 0.

These two results (namely, that the area under δ(t) is 1 and that δ(t) = 0 for all t ≠ 0) are inconsistent with our usual notions of functions and integration. If δ(0) were any finite value, then the area under δ(t) would be zero. Strictly speaking, δ(0) is undefined, although it is convenient to think of δ(0) = ∞. Thus, although we call δ(t) the delta function, it is technically not a function in the usual sense. It is what is known as a distribution. However, it turns out that for many manipulations we can treat δ(t) like a function.
It is also convenient to have a graphical representation, as shown in Figure 2.16. The arrow indicates that the value at t = 0 is infinite (or undefined), with the height of the arrow indicating the area under δ(t). To depict Aδ(t), where A is some constant, we would draw the height of the arrow to be A.
It is sometimes also helpful to think of δ(t) as a limit of a sequence of approximating functions. Consider the function a rect(at). This has area 1, but if a > 1 it is more concentrated around t = 0. As we let a → ∞, we get a sequence of approximations, as shown in Figure 2.17, which intuitively get closer and closer to δ(t). In fact, it is not hard to verify that for f(t) continuous at t = 0 we have

∫_{−∞}^{∞} f(t) a rect(at) dt → f(0)  as a → ∞.
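This limit is easy to see numerically: multiplying by a rect(at) and integrating simply averages f over the shrinking window [−1/(2a), 1/(2a)], and that average approaches f(0). The test function below is an arbitrary choice with f(0) = 1:

```python
import numpy as np

f = lambda t: np.cos(t) + t**2    # any function continuous at 0; here f(0) = 1

for a in [1, 10, 100, 1000]:
    # Integrating f(t) * a * rect(at) amounts to averaging f over
    # the window [-1/(2a), 1/(2a)], which shrinks toward t = 0.
    t = np.linspace(-0.5 / a, 0.5 / a, 10001)
    approx = np.mean(f(t))
    print(a, approx)              # approaches f(0) = 1 as a grows
```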
Figure 2.17: The delta function is a limit of rectangle functions with area 1.
The delta function is even: δ(−t) = δ(t). To see this, note that for any f(t) continuous at t = 0,

∫_{−∞}^{∞} f(t) δ(−t) dt = ∫_{−∞}^{∞} f(−u) δ(u) du = f(0),

where the first equality is obtained by the change of variable u = −t, and the second equality follows from the definition of δ(u). The conclusion is that δ(−t) satisfies the defining property of δ(t), and so δ(−t) = δ(t).

By the change of variable u = at, and considering the cases a > 0 and a < 0 separately, it is easy to show that

δ(at) = (1/|a|) δ(t).

The time-shifted delta function δ(t − t0) also behaves like we would expect:

∫_{−∞}^{∞} f(t) δ(t − t0) dt = f(t0).

This property is sometimes called the sifting property of the delta function. The natural graphical depiction of δ(t − t0) is shown in Figure 2.18.
We now turn to the discrete-time delta function, also called the Kronecker delta function. The Kronecker delta function is denoted by δ[n] and defined as

δ[n] = 1 if n = 0, and 0 otherwise.
Figure 2.19 shows the graph of δ[n]. Hence, in discrete time, the delta function is in fact a function in the proper sense. There are none of the mathematical subtleties or difficulties associated with the continuous-time delta function. In fact, δ[n] is rather simple to work with.

Many properties of δ(t) have analogous counterparts in discrete time, and the discrete-time properties are generally easier to verify. For example, the result

Σ_{n=−∞}^{∞} f[n] δ[n] = f[0]

follows trivially from the definition. Recall that in continuous time, the analogous property was actually the definition. Also trivial is the fact that δ[n] is an even function of n = ..., −1, 0, 1, .... It is easy to see that the time-shifted delta function δ[n − n0] satisfies the discrete-time sifting property

Σ_{n=−∞}^{∞} f[n] δ[n − n0] = f[n0].

It turns out that for some properties the discrete-time counterpart is not analogous. For example, in discrete time, if a is a nonzero integer we have δ[an] = δ[n].
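The discrete-time properties can be checked directly on finite arrays, with a finite index range standing in for the infinite sum; the test sequence f[n] below is an arbitrary choice:

```python
import numpy as np

def kron_delta(n):
    # delta[n] = 1 if n == 0, 0 otherwise (elementwise)
    return np.where(n == 0, 1, 0)

n = np.arange(-10, 11)            # finite range standing in for all integers
f = n**2 + 3 * n + 1              # an arbitrary test sequence f[n]

# Sifting: sum_n f[n] * delta[n - n0] picks out f[n0]
n0 = 4
print(np.sum(f * kron_delta(n - n0)))   # 4**2 + 3*4 + 1 = 29

# delta[a n] = delta[n] for a nonzero integer a, unlike the continuous case
print(np.array_equal(kron_delta(2 * n), kron_delta(n)))   # True
```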
2.7 2-D Signals
One useful notion that arises in two (and higher) dimensions is separability. A function f(x, y) is called separable if it can be written as f(x, y) = f1(x) f2(y). Many of the commonly encountered 2-D functions are simply separable extensions of the corresponding 1-D functions. For example, the 2-D version of the complex exponential is

e^{j(ω1 x + ω2 y)} = e^{jω1 x} e^{jω2 y},

where ω1 and ω2 are the radian frequencies in the x and y directions, respectively. That is, the 2-D complex exponential is simply the product of a 1-D complex exponential in each direction.
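On a sample grid, separability means the 2-D signal is an outer product of two 1-D signals, which the 2-D complex exponential illustrates; the grid sizes and frequencies below are arbitrary choices:

```python
import numpy as np

w1, w2 = 2.0, 5.0                  # radian frequencies in x and y (illustrative)
x = np.linspace(0, 1, 50)
y = np.linspace(0, 1, 40)

X, Y = np.meshgrid(x, y, indexing="ij")
two_d = np.exp(1j * (w1 * X + w2 * Y))   # 2-D complex exponential on the grid

# Separable form: the outer product of the two 1-D complex exponentials
separable = np.outer(np.exp(1j * w1 * x), np.exp(1j * w2 * y))

print(np.allclose(two_d, separable))   # True
```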
Likewise, the 2-D Dirac delta function δ(x, y) is given by

δ(x, y) = δ(x) δ(y).

Formally, δ(x, y) would actually be defined by the property

∫∫ f(x, y) δ(x, y) dx dy = f(0, 0)

for any function f(x, y) continuous at (0, 0), but showing equivalence with the separable expression is straightforward.

Similarly, in the discrete case, the 2-D Kronecker delta δ[m, n] is defined by

δ[m, n] = 1 if m = n = 0, and 0 otherwise.