RKS Signals and Systems Lecture 01 03
2022
Lecture 01 & 02: Classifications and Operations of Signals and Elementary signals
Introduction
Signals
In the analysis of a communication system, we define a signal as a single-valued function of time that
conveys information. For every time instant, there is a unique function value. This value can be either
a real number, in which case we have a real-valued signal or a complex number, in which case we have
a complex-valued signal.
A signal is typically written as x(t). The notation x(t) can actually be interpreted in two ways: (i) as the
signal value at a particular time instant t, (ii) as a function defined over all time t.
Systems
A system is a mapping (transformation) of the input signal, denoted by x(t), into the output signal,
denoted by y(t). Let f denote this mapping. Then, we can write y(t) = f(x(t)). In other words, a system
is any (physical) device that produces an output signal in response to an input signal.
Signals may be processed further by systems, which may modify them or extract additional information
from them. Thus, a system is an entity that processes a set of signals (inputs) to yield another set of
signals (outputs).
Classification of Signals
Signals can be classified in several ways, as follows.
(a) According to the predictability of their behavior, signals can be random or deterministic. While
a deterministic signal can be represented by a formula or a table of values, random signals can only be
approached probabilistically.
(b) According to the variation of their time variable and their amplitude, signals can be continuous-time or discrete-time, and analog (continuous-amplitude) or digital (discrete-amplitude). This classification relates to the way signals are processed, stored, or both.
(c) According to their energy content, signals can be characterized as finite- or infinite-energy signals.
(d) According to whether or not they exhibit repetitive behavior, signals can be periodic or aperiodic.
(e) According to the symmetry with respect to the time origin, signals can be even or odd.
(f) According to the dimension of their support, signals can be of finite or of infinite support. Support
can be understood as the time interval of the signal outside of which the signal is always zero.
This classification is determined by whether the time axis is discrete (countable) or continuous (Figure 1).
A continuous-time signal has a value for every real number along the time axis. In contrast, a discrete-time signal is specified only at discrete values of time, often created by sampling a continuous-time signal at equally spaced intervals along the time axis.
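The sampling relationship can be made concrete with a minimal Python sketch. The specific signal (a cosine), the sampling rate, and all names here are illustrative assumptions, not part of the lecture:

```python
import math

# Continuous-time signal: x(t) = cos(2*pi*f0*t), modeled as a function of real t.
def x(t, f0=5.0):
    return math.cos(2 * math.pi * f0 * t)

# Sampling at interval Ts = 1/fs produces the discrete-time sequence x[n] = x(n*Ts).
Ts = 0.01                                  # fs = 100 Hz (illustrative choice)
samples = [x(n * Ts) for n in range(10)]   # x[0], x[1], ..., x[9]

print(samples[0])   # x[0] = cos(0) = 1.0
```

The sequence `samples` exists only at the instants n·Ts, while `x` itself is defined for every real t.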
Figure 1.
This classification is determined by whether the amplitude axis is discrete (countable) or continuous (Figure 3).
A signal whose amplitude can take on any value in a continuous range is an analog signal. This means
that an analog signal amplitude can take on an infinite number of values.
A digital signal, on the other hand, is one whose amplitude can take on only a finite number of values.
Signals associated with a digital computer are digital because they take on only two values (binary signals). Note that a signal whose amplitude can take on M values is an M-ary signal, of which binary (M = 2) is a special case.
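A uniform quantizer illustrates how a continuous amplitude range is mapped to M levels; the function name, range, and mid-rise reconstruction rule below are assumptions for the sketch, not a standard from the lecture:

```python
# Uniform quantizer: maps an analog amplitude in [lo, hi] to one of M levels,
# giving an M-ary (digital) amplitude axis. M = 2 yields a binary signal.
def quantize(value, M=8, lo=-1.0, hi=1.0):
    step = (hi - lo) / M
    index = min(int((value - lo) / step), M - 1)   # clamp the top edge
    return lo + (index + 0.5) * step               # mid-rise reconstruction level

print(quantize(0.2, M=2))    # binary quantizer: output is one of two levels
print(quantize(-0.2, M=2))
```

With M = 2, every input amplitude collapses to one of just two output values, which is exactly the binary special case mentioned above.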
Figure 3.
The concept of continuous-time is often confused with that of analog. The two are not the same. The
same is true of the concepts of discrete-time and digital.
The terms continuous-time and discrete-time qualify the nature of a signal along the time (horizontal)
axis (Figure 1). The terms analog and digital, on the other hand, qualify the nature of the signal
amplitude (vertical axis) (Figure 3).
Homework 01: “analog is not necessarily continuous-time and digital need not be discrete-time.”
Justify the statement.
Periodic signals repeat with some period T0, while aperiodic, or nonperiodic, signals do not (Figure 5). A signal f(t) is periodic with period T0 if

f(t) = f(t + T0) for all t    (1)
The smallest value of T0 that satisfies the periodicity condition of (1) is the fundamental period of f(t).
A signal with finite energy is an energy signal, and a signal with finite and nonzero power is a power
signal. Signals in Figures 6(a) and 6(b) are examples of energy and power signals, respectively.
A necessary condition for the energy to be finite is that the signal amplitude → 0 as t → ±∞ (Figure 6(a)). Otherwise, the integral in (2) measuring the signal energy will not converge.

Ex = ∫_{-∞}^{∞} x²(t) dt    (2)
When the amplitude of x(t) does not → 0 as t → ±∞ (Figure 6(b)), the signal energy is infinite. A more meaningful measure of the signal size in such a case would be the time average of the energy, if it exists. This measure is called the power of the signal. For a signal x(t), we define its power Px as

Px = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} x²(t) dt    (3)
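The energy integral (2) and the power average (3) can be approximated numerically. This is a sketch with assumed example signals (a decaying exponential as an energy signal, a sinusoid as a power signal) and an assumed finite window T standing in for the limit:

```python
import math

def energy(x, T=200.0, dt=1e-3):
    # Midpoint Riemann-sum approximation of Eq. (2) over a large finite window.
    n = int(T / dt)
    return sum(x(-T/2 + (k + 0.5) * dt) ** 2 for k in range(n)) * dt

def power(x, T=200.0, dt=1e-3):
    # Approximation of Eq. (3): energy averaged over the window length T.
    return energy(x, T, dt) / T

decay = lambda t: math.exp(-abs(t))   # finite energy: ∫ e^(-2|t|) dt = 1
sine = lambda t: math.sin(t)          # infinite energy but finite power 1/2

print(energy(decay))   # ≈ 1
print(power(sine))     # ≈ 0.5
```

Note how the decaying exponential's power would tend to zero as T grows, while the sinusoid's energy grows without bound, matching the discussion that follows.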
Figure 6. Examples of signals: (a) a signal with finite energy and (b) a signal with finite
power.
Observe that power is the time average of energy. Since the averaging is over an infinitely large interval,
a signal with finite energy has zero power, and a signal with finite power has infinite energy. Therefore,
a signal cannot both be an energy signal and a power signal. If it is one, it cannot be the other.
Generally, the mean of an entity averaged over a large time interval approaching infinity exists if the
entity either is periodic or has a statistical regularity. If such a condition is not satisfied, the average
may not exist. For instance, a ramp signal x(t) = t increases indefinitely as t → ∞, and neither the energy nor the power exists for this signal. However, the unit step function, which is neither periodic nor statistically regular, does have a finite power.
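The unit step's finite power can be checked numerically: over a window [-T/2, T/2] the step is nonzero on half the window, so the time-averaged energy tends to 1/2. The window size and step size below are illustrative assumptions:

```python
# Unit step (using the convention u(t) = 1 for t >= 0).
def u(t):
    return 1.0 if t >= 0 else 0.0

def power(x, T, dt=0.01):
    # Midpoint Riemann-sum approximation of the power definition (3).
    n = int(T / dt)
    return sum(x(-T/2 + (k + 0.5) * dt) ** 2 for k in range(n)) * dt / T

print(power(u, T=1000.0))   # approaches 0.5 as T grows
```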
When x(t) is periodic (Figure 7), x²(t) is also periodic. Hence, the power of x(t) can be computed from (3) by averaging x²(t) over one period:

Px = (1/T0) ∫_{T0} x²(t) dt
Figure 7.
Note: All practical signals have finite energies and are therefore energy signals. A power signal must
necessarily have infinite duration; otherwise its power, which is its energy averaged over an infinitely
large interval, will not approach a (nonzero) limit. Clearly, it is impossible to generate a true power
signal in practice because such a signal has infinite duration and infinite energy.
Signals can also be characterized by whether they have a finite- or infinite-length set of values. Finite-length signals arise most often when dealing with discrete-time signals or a given sequence of values.
Figure 8. A finite-length signal. Note that it has nonzero values only on a finite interval.
Causal signals are signals that are zero for all negative time, while anticausal signals are zero for all positive time. Noncausal signals have nonzero values in both positive and negative time (Figure 9).
Figure 9. (a) A causal signal (b) An anticausal signal (c) A noncausal signal
An even signal is a signal f such that f(t) = f(−t) (Figure 10(a)). Even signals can be easily spotted as they are symmetric around the vertical axis.

An odd signal, on the other hand, is a signal f such that f(t) = −f(−t) (Figure 10(b)).
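Any signal can be split into an even part and an odd part, x_e(t) = (x(t) + x(−t))/2 and x_o(t) = (x(t) − x(−t))/2. A small Python check on an assumed example signal:

```python
# Even/odd decomposition: x(t) = x_e(t) + x_o(t).
def even_part(x):
    return lambda t: (x(t) + x(-t)) / 2

def odd_part(x):
    return lambda t: (x(t) - x(-t)) / 2

x = lambda t: t**3 + t**2          # neither even nor odd (illustrative choice)
xe, xo = even_part(x), odd_part(x)

print(xe(2.0), xe(-2.0))   # even part is symmetric: equal values
print(xo(2.0), xo(-2.0))   # odd part is antisymmetric: opposite signs
```

Here the even part recovers t² and the odd part recovers t³, and their sum reproduces x(t) exactly.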
Figure 10. (a) An even signal (b) An odd signal.
A deterministic signal is a signal in which each value is fixed and can be determined by a mathematical expression, rule, or table. Because of this, the future values of the signal can be calculated from past values with complete confidence (Figure 11(a)).
On the other hand, a random signal has a lot of uncertainty about its behavior. The future values of a
random signal cannot be accurately predicted and can usually only be guessed based on the averages
of sets of signals (Figure 11(b)).
Signal Operations
Three useful signal operations, called shifting, scaling, and inversion, are discussed. Since the
independent variable in our signal description is time, these operations are discussed as time shifting,
time scaling, and time reversal (inversion). However, this discussion is valid for functions having
independent variables other than time (e.g., frequency or distance).
Time Shifting
Consider a signal x(t) (Figure 12(a)) and the same signal delayed by T seconds (Figure 12(b)), which we shall denote by φ(t). Whatever happens in x(t) at some instant t also happens in φ(t) T seconds later, at the instant t + T. Therefore,

φ(t + T) = x(t)  and  φ(t) = x(t − T)
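A quick Python check of the delay relation, using an assumed triangular pulse as the example signal:

```python
# Delaying x(t) by T seconds gives phi(t) = x(t - T): whatever occurs in x
# at t = 0 occurs in phi at t = T.
x = lambda t: max(0.0, 1.0 - abs(t))   # triangular pulse peaking at t = 0

T = 3.0
phi = lambda t: x(t - T)               # delayed copy, peaking at t = T

print(x(0.0), phi(T))   # same peak value, T seconds apart
```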
Time Scaling
Time scaling compresses or dilates a signal by multiplying the time variable by some quantity. If that
quantity is greater than one, the signal becomes narrower and the operation is called compression,
while if the quantity is less than one, the signal becomes wider and is called dilation.
Consider the signal x(t) of Figure 13(a). The signal φ(t) in Figure 13(b) is x(t) compressed in time by a factor of 2. Therefore, whatever happens in x(t) at some instant t also happens in φ(t) at the instant t/2, so that

φ(t/2) = x(t)  and  φ(t) = x(2t)
If x(t) were recorded on a tape and played back at twice the normal recording speed, we would obtain x(2t).
In general, if x(t) is compressed in time by a factor a (a > 1), the resulting signal φ(t) is given by

φ(t) = x(at)

Using a similar argument, we can show that x(t) expanded (slowed down) in time by a factor a (a > 1) is given by

φ(t) = x(t/a)
In summary, to time-scale a signal by a factor a, we replace t with at. If a > 1, the scaling results in
compression, and if a < 1, the scaling results in expansion.
Time Reversal
A natural question to consider when learning about time scaling is: What happens when the time
variable is multiplied by a negative number? The answer to this is time reversal (Figure 14). This
operation is the reversal of the time axis, or flipping the signal over the y-axis.
Combined Operations
Certain complex operations require simultaneous use of more than one of the operations described
above. The most general operation involving all the three operations is x(at − b), which is realized in
two possible sequences of operation:
1. Time-shift x(t) by b to obtain x(t −b). Now time-scale the shifted signal x(t −b) by a (i.e., replace t
with at) to obtain x(at − b).
2. Time-scale x(t) by a to obtain x(at). Now time-shift x(at) by b/a (i.e., replace t with t − (b/a)) to
obtain x[a(t − b/a)] = x(at − b).
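The two sequences can be verified numerically. The example signal, and the values of a and b, are assumptions chosen for the sketch:

```python
x = lambda t: max(0.0, 1.0 - abs(t))   # triangular pulse (illustrative choice)
a, b = 2.0, 1.0

# Sequence 1: time-shift by b, then time-scale (replace t with a*t).
shifted = lambda t: x(t - b)
seq1 = lambda t: shifted(a * t)        # = x(a*t - b)

# Sequence 2: time-scale by a, then time-shift by b/a (replace t with t - b/a).
scaled = lambda t: x(a * t)
seq2 = lambda t: scaled(t - b / a)     # = x(a*(t - b/a)) = x(a*t - b)

for t in [-1.0, -0.25, 0.0, 0.5, 1.0, 2.0]:
    assert abs(seq1(t) - seq2(t)) < 1e-12   # the two orderings agree
print("both sequences realize x(at - b)")
```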
Elementary signals
In this section, we define some elementary functions that will be used frequently to represent more
complicated signals. Representing signals in terms of the elementary functions simplifies the analysis
and design of linear systems.
Unit Step Function

u(t) = { 1,  t ≥ 0
         0,  t < 0
In much of our discussion, the signals begin at t = 0 (causal signals). Such signals can be conveniently described in terms of the unit step function u(t) shown in Figure 14(a). More specifically, if we want a signal to start at t = 0 (so that it has a value of zero for t < 0), we need only multiply the signal by u(t). For instance, the signal e^(−at) represents an everlasting exponential that starts at t = −∞. The causal form of this exponential (Figure 14(b)) can be described as e^(−at) u(t).
Ramp function
r(t) = t u(t) = { t,  t ≥ 0
                  0,  t < 0
Page 15 of 18
Lecture note 01 & 02
Note that the ramp function can be obtained by integrating the unit-step function.
r(t) = ∫_{-∞}^{t} u(τ) dτ
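The integral relation between the step and the ramp can be checked with a simple running sum; the lower limit and step size below are assumptions for the numerical sketch:

```python
# Numerical check of r(t) = integral of u(tau) from -inf to t:
# for t >= 0 the running integral of the unit step equals t.
def u(t):
    return 1.0 if t >= 0 else 0.0

def ramp_numeric(t, lo=-10.0, dt=1e-4):
    n = int((t - lo) / dt)
    return sum(u(lo + (k + 0.5) * dt) for k in range(n)) * dt

print(ramp_numeric(3.0))    # ≈ 3.0, matching r(3) = 3
print(ramp_numeric(-2.0))   # 0.0, matching r(-2) = 0
```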
Signum Function
The signum (or sign) function, denoted by sgn(t), is defined as follows:

sgn(t) = {  1,  t > 0
            0,  t = 0
           −1,  t < 0
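The signum function can be built from unit steps as sgn(t) = u(t) − u(−t); with the convention u(0) = 1 this also yields sgn(0) = 0. A small Python check (the construction is a common identity, shown here as a sketch):

```python
# Unit step with the convention u(0) = 1.
def u(t):
    return 1.0 if t >= 0 else 0.0

# sgn(t) = u(t) - u(-t): +1 for t > 0, 0 at t = 0, -1 for t < 0.
def sgn(t):
    return u(t) - u(-t)

print(sgn(5.0), sgn(0.0), sgn(-5.0))   # 1.0 0.0 -1.0
```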
Sampling Function

The sampling function Sa(x) is defined as

Sa(x) = sin x / x

Since |x| in the denominator grows without bound while the numerator is bounded, i.e., |sin x| ≤ 1, the sampling function is a damped sine wave with its peak at x = 0 and zero crossings at x = ±nπ, n = 1, 2, ….

A closely related function is the sinc function,

sinc(x) = sin(πx) / (πx) = Sa(πx),

as shown in Figure 19. The sinc function is a compressed version of Sa(x), where the compression factor is π.
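The peak and zero-crossing properties can be verified directly; the function definitions below are a sketch, with the removable singularity at 0 handled explicitly:

```python
import math

def Sa(x):
    # sin(x)/x, with the limit value 1 at x = 0.
    return 1.0 if x == 0 else math.sin(x) / x

def sinc(x):
    # sinc(x) = sin(pi*x)/(pi*x) = Sa(pi*x): Sa compressed by a factor pi.
    return Sa(math.pi * x)

print(Sa(0.0))             # peak value 1 at x = 0
print(abs(Sa(math.pi)))    # ≈ 0: Sa crosses zero at multiples of pi
print(abs(sinc(1.0)))      # ≈ 0: sinc crosses zero at the nonzero integers
```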
The unit impulse function δ(t ), also known as the Dirac delta function or simply the delta function,
is defined in terms of two properties as follows:
(1) δ(t) = 0 for t ≠ 0

(2) ∫_{-∞}^{∞} δ(t) dt = 1
Figure 20. Impulse function δ(t ). (a) Generating the impulse function from a rectangular pulse. (b)
Notation used to represent an impulse function.
Direct visualization of a unit impulse function in the continuous time (CT) domain is difficult. One
way to visualize a CT impulse function is to let it evolve from a rectangular function. Consider a tall
narrow rectangle with width ε and height 1/ε, as shown in Figure 20 (a), such that the area enclosed by
the rectangular function equals one. Next, we decrease the width and increase the height at the same
rate such that the resulting rectangular functions have areas = 1. As the width ε → 0, the rectangular
function converges to the CT impulse function δ(t ) with an infinite amplitude at t = 0. However, the
area enclosed by the CT impulse function is finite and equals one. Many physical phenomena, such as point sources, point charges, and voltage or current sources acting for a short duration, can be modelled as delta functions.
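The rectangular-pulse construction of the impulse can be sketched in Python; the specific widths tried below are illustrative:

```python
# Rectangular pulse of width eps and height 1/eps: as eps shrinks, the pulse
# grows taller and narrower while its area stays exactly 1, which is how the
# unit impulse delta(t) is visualized.
def rect_pulse(t, eps):
    return 1.0 / eps if -eps / 2 <= t <= eps / 2 else 0.0

for eps in [1.0, 0.1, 0.01]:
    height = rect_pulse(0.0, eps)
    area = height * eps            # width * height
    print(eps, height, area)       # area stays 1 for every eps
```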