Math Reviews

Review Materials

Contents

R1 Mathematical Formulas and Identities
    R1.1 Finite and Infinite Sums of Numbers
    R1.2 Power Series
    R1.3 Factorial
    R1.4 Permutations and Combinations
    R1.5 Polynomial Factors and Products
    R1.6 Roots of Quadratic Equation
    R1.7 Euler's Formula
    R1.8 Trigonometric Functions and Formulas
    R1.9 Newton-Raphson Method: Finding a Root of a Polynomial Equation
    R1.10 Hölder's Inequality and Cauchy-Schwarz's Inequality
R2 Useful Functions
R3 Commonly Used Differentials and Integrals
    R3.1 Differentials
    R3.2 Integrals
    R3.3 l'Hopital's Rule
    R3.4 Examples
R4 Complex Numbers
    R4.1 Definition
    R4.2 Complex Arithmetic
R5 Complex Variables
    R5.1 Function of a Complex Variable
    R5.2 Analytic Function of a Complex Variable
    R5.3 Analytic Continuation
    R5.4 Cauchy's Integral Formula
    R5.5 Cauchy's Residue Theorem
R6 Continuous-Time Signals
    R6.1 Energy and Power
    R6.2 Continuous-Time Sinusoidal and Exponential Signals
        R6.2.1 Definition
        R6.2.2 Properties
    R6.3 Continuous-Time Eigenfunction
    R6.4 Continuous-Time Fourier Series
        R6.4.1 Definition
        R6.4.2 Dirichlet Conditions
    R6.5 Continuous-Time Fourier Transform
R7 Discrete Fourier Series
R8 Matrix Algebra
    R8.1 Definition
    R8.2 Transpose
    R8.3 Toeplitz Matrix
    R8.4 Circulant Matrix
    R8.5 Determinant
    R8.6 Minor and Cofactor
    R8.7 Inverse of a Matrix
    R8.8 Unitary Matrix and Orthogonal Matrix
    R8.9 Cramer's Rule
R1 Mathematical Formulas and Identities
R1.1 Finite and Infinite Sums of Numbers

\sum_{k=1}^{n} k = \frac{n(n+1)}{2},    (R1.1)

\sum_{k=1}^{n} k^2 = \frac{n(n+1)(2n+1)}{6},    (R1.2)

\sum_{k=1}^{n} k^3 = \frac{n^2 (n+1)^2}{4},    (R1.3)

\sum_{k=1}^{\infty} \frac{1}{k^2} = \frac{\pi^2}{6},    (R1.4)

\sum_{k=1}^{\infty} \frac{1}{k^4} = \frac{\pi^4}{90},    (R1.5)

where n is a positive integer.

Note: The series

\sum_{k=1}^{\infty} \frac{1}{k}    (R1.6)

does not converge.

\sum_{k=0}^{n-1} d^k = \frac{1 - d^n}{1 - d},    (R1.7)

with d an arbitrary number and |d| \neq 0, 1.

\sum_{k=0}^{\infty} d^k = \frac{1}{1 - d},    (R1.8)

with 0 < |d| < 1.

(For tests of the convergence of infinite sums, a recommended reading is Table of Integrals, Series, and Products, I.S. Gradshteyn and I.M. Ryzhik, 2000, Academic Press.)
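A quick numerical sanity check of the closed forms (R1.1)-(R1.3), written as a small Python sketch (the function name is illustrative only, not from the text):

```python
# Check the finite-sum identities (R1.1)-(R1.3) by brute force.
def check_sum_identities(n):
    ks = range(1, n + 1)
    assert sum(ks) == n * (n + 1) // 2                            # (R1.1)
    assert sum(k**2 for k in ks) == n * (n + 1) * (2*n + 1) // 6  # (R1.2)
    assert sum(k**3 for k in ks) == n**2 * (n + 1)**2 // 4        # (R1.3)

for n in (1, 10, 100, 1000):
    check_sum_identities(n)
print("identities verified for the tested values of n")
```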
R1.2 Power Series
Binomial Series:

(x+y)^n = x^n + \binom{n}{1} x^{n-1} y + \binom{n}{2} x^{n-2} y^2 + \binom{n}{3} x^{n-3} y^3 + \cdots + \binom{n}{n-1} x y^{n-1} + y^n,    (R1.9)

with n a positive integer.

Taylor Series:
If f(x) is an arbitrarily differentiable function, then it can be expressed in the form

f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n + \cdots,    (R1.10)

where f'(a) = \frac{df(x)}{dx}\Big|_{x=a}, f''(a) = \frac{d^2 f(x)}{dx^2}\Big|_{x=a}, and f^{(n)}(a) = \frac{d^n f(x)}{dx^n}\Big|_{x=a}.

Exponential Series:

e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots,    (R1.11)

a^x = 1 + x \log_e a + \frac{(x \log_e a)^2}{2!} + \frac{(x \log_e a)^3}{3!} + \cdots.    (R1.12)

Logarithmic Series:

\log_e(1+x) = x - \frac{1}{2}x^2 + \frac{1}{3}x^3 - \frac{1}{4}x^4 + \cdots,  (Region of Convergence: -1 < x < 1).    (R1.13)

Trigonometric Series:

\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots,    (R1.14)

\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots,    (R1.15)

\tan x = x + \frac{x^3}{3} + \frac{2x^5}{15} + \frac{17x^7}{315} + \frac{62x^9}{2835} + \cdots,  (Region of Convergence: x^2 < \frac{\pi^2}{4}),    (R1.16)

\cot x = \frac{1}{x} - \frac{x}{3} - \frac{x^3}{45} - \frac{2x^5}{945} - \cdots,  (Region of Convergence: 0 < |x| < \pi).    (R1.17)
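The truncated forms of these expansions are easy to experiment with numerically; a minimal Python sketch comparing partial sums of (R1.11) and (R1.14) against the library functions (the helper names are illustrative only):

```python
import math

def exp_series(x, terms=12):
    # Partial sum of (R1.11): e^x = sum_k x^k / k!
    return sum(x**k / math.factorial(k) for k in range(terms))

def sin_series(x, terms=12):
    # Partial sum of (R1.14): sin x = sum_k (-1)^k x^(2k+1) / (2k+1)!
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1) for k in range(terms))

x = 1.2
print(exp_series(x), math.exp(x))   # the two values agree to many decimal places
print(sin_series(x), math.sin(x))
```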
R1.3 Factorial
n! = n(n-1)(n-2) \cdots 2 \cdot 1, with n a positive integer,    (R1.18)

0! = \Gamma(0+1) = 1.    (R1.19)
R1.4 Permutations and Combinations
The number of permutations S of n things taken k at a time, with n and k
positive integers, is given by
S = \frac{n!}{(n-k)!}.    (R1.20)
The number of combinations S of n things taken k at a time, with n and
k positive integers, is given by
S = \binom{n}{k} = \frac{n!}{k!(n-k)!}.    (R1.21)
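Both counts are available directly in Python's standard library (math.perm and math.comb, Python 3.8+); a quick check of (R1.20) and (R1.21):

```python
import math

n, k = 7, 3
# Permutations (R1.20) and combinations (R1.21)
assert math.perm(n, k) == math.factorial(n) // math.factorial(n - k)
assert math.comb(n, k) == math.factorial(n) // (math.factorial(k) * math.factorial(n - k))
print(math.perm(n, k), math.comb(n, k))   # 210 35
```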
R1.5 Polynomial Factors and Products
x^n - y^n = (x - y)(x^{n-1} + x^{n-2} y + \cdots + y^{n-1}),    (R1.22)

with n a positive integer.

x^n + y^n = (x + y)(x^{n-1} - x^{n-2} y + x^{n-3} y^2 - \cdots + y^{n-1}),    (R1.23)

with n a positive and odd integer.

\prod_{i=1}^{N} (x + \alpha_i) = (x + \alpha_1)(x + \alpha_2) \cdots (x + \alpha_N)
                              = \beta_0 + \beta_1 x + \beta_2 x^2 + \cdots + \beta_{N-1} x^{N-1} + \beta_N x^N,    (R1.24)

where

\beta_0 = \prod_{i=1}^{N} \alpha_i,    \beta_1 = \sum_{i=1}^{N} \prod_{j \neq i} \alpha_j,    \beta_2 = \sum_{1 \leq i < j \leq N} \prod_{k \neq i,j} \alpha_k,    \ldots,    \beta_{N-1} = \alpha_1 + \alpha_2 + \alpha_3 + \cdots + \alpha_N,    \beta_N = 1.

In general, \beta_{N-m} is the sum of the products of the \alpha_i taken m at a time.
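Numerically, the coefficients \beta_k in (R1.24) can be obtained from the roots with numpy; a short sketch (np.poly takes the roots, which here are -\alpha_i, and returns the coefficients from \beta_N down to \beta_0):

```python
import numpy as np

alphas = np.array([1.0, 2.0, 3.0])
# prod(x + alpha_i) has its roots at -alpha_i
coeffs = np.poly(-alphas)          # [beta_N, ..., beta_1, beta_0]
print(coeffs)                      # [ 1.  6. 11.  6.]  i.e. (x+1)(x+2)(x+3) = x^3 + 6x^2 + 11x + 6
```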
R1.6 Roots of Quadratic Equation
The roots x_1, x_2 of the quadratic equation

a x^2 + b x + c = 0,

with a, b, and c real numbers, are given by

x_1 = \frac{-b + \sqrt{b^2 - 4ac}}{2a},    (R1.25)

x_2 = \frac{-b - \sqrt{b^2 - 4ac}}{2a}.    (R1.26)

Note:

x_1 + x_2 = -\frac{b}{a},    (R1.27)

x_1 x_2 = \frac{c}{a}.    (R1.28)
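A small Python sketch of (R1.25)-(R1.26); using cmath.sqrt keeps it valid when b^2 < 4ac and the roots are complex (the function name is illustrative only):

```python
import cmath

def quadratic_roots(a, b, c):
    d = cmath.sqrt(b * b - 4 * a * c)       # discriminant, possibly complex
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

x1, x2 = quadratic_roots(1, -3, 2)
print(x1, x2)                # (2+0j) (1+0j)
print(x1 + x2, x1 * x2)      # 3 = -b/a and 2 = c/a, as in (R1.27)-(R1.28)
```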
R1.7 Euler's Formula

e^{j\theta} = \cos\theta + j \sin\theta,    (R1.29)

with \theta a real number.
R1.8 Trigonometric Functions and Formulas
\sin\theta = \frac{1}{2j}(e^{j\theta} - e^{-j\theta}),    (R1.30)

\cos\theta = \frac{1}{2}(e^{j\theta} + e^{-j\theta}),    (R1.31)

\tan\theta = \frac{\sin\theta}{\cos\theta} = \frac{e^{j\theta} - e^{-j\theta}}{j(e^{j\theta} + e^{-j\theta})},    (R1.32)

\cot\theta = \frac{1}{\tan\theta} = \frac{j(e^{j\theta} + e^{-j\theta})}{e^{j\theta} - e^{-j\theta}},    (R1.33)

\csc\theta = \frac{1}{\sin\theta} = \frac{2j}{e^{j\theta} - e^{-j\theta}},    (R1.34)

\sec\theta = \frac{1}{\cos\theta} = \frac{2}{e^{j\theta} + e^{-j\theta}},    (R1.35)

\sin\theta = \cos(\tfrac{\pi}{2} - \theta) = \sin(\pi - \theta),    (R1.36)

\cos\theta = \sin(\tfrac{\pi}{2} - \theta) = -\cos(\pi - \theta),    (R1.37)

\tan\theta = \cot(\tfrac{\pi}{2} - \theta) = -\tan(\pi - \theta),    (R1.38)

\sinh\theta = \frac{1}{2}(e^{\theta} - e^{-\theta}),    (R1.39)

\cosh\theta = \frac{1}{2}(e^{\theta} + e^{-\theta}),    (R1.40)

\tanh\theta = \frac{\sinh\theta}{\cosh\theta} = \frac{e^{\theta} - e^{-\theta}}{e^{\theta} + e^{-\theta}},    (R1.41)

with \theta a real number.

\sin(\theta_1 \pm \theta_2) = \sin\theta_1 \cos\theta_2 \pm \cos\theta_1 \sin\theta_2,    (R1.42)

\cos(\theta_1 \pm \theta_2) = \cos\theta_1 \cos\theta_2 \mp \sin\theta_1 \sin\theta_2,    (R1.43)

\sin^2\theta_1 - \sin^2\theta_2 = \sin(\theta_1 + \theta_2) \sin(\theta_1 - \theta_2),    (R1.44)

\cos^2\theta_1 - \cos^2\theta_2 = -\sin(\theta_1 + \theta_2) \sin(\theta_1 - \theta_2),    (R1.45)

\cos^2\theta_1 - \sin^2\theta_2 = \cos(\theta_1 + \theta_2) \cos(\theta_1 - \theta_2),    (R1.46)

\cos^2\theta_1 + \sin^2\theta_1 = 1,    (R1.47)

\sin\theta_1 \pm \sin\theta_2 = 2 \sin\left(\frac{\theta_1 \pm \theta_2}{2}\right) \cos\left(\frac{\theta_1 \mp \theta_2}{2}\right),    (R1.48)

\cos\theta_1 + \cos\theta_2 = 2 \cos\left(\frac{\theta_1 + \theta_2}{2}\right) \cos\left(\frac{\theta_1 - \theta_2}{2}\right),    (R1.49)

\cos\theta_1 - \cos\theta_2 = -2 \sin\left(\frac{\theta_1 + \theta_2}{2}\right) \sin\left(\frac{\theta_1 - \theta_2}{2}\right),    (R1.50)

\sin 2\theta = 2 \sin\theta \cos\theta,    (R1.51)

\cos 2\theta = \cos^2\theta - \sin^2\theta,    (R1.52)

\sin 3\theta = 3 \sin\theta - 4 \sin^3\theta,    (R1.53)

\cos 3\theta = 4 \cos^3\theta - 3 \cos\theta,    (R1.54)

with \theta, \theta_1, and \theta_2 real numbers.
R1.9 Newton-Raphson Method: Finding a Root of a Polynomial Equation

The Newton-Raphson method is a numerical technique for determining, approximately, a root of the equation f(x) = 0. The procedure starts from an initial guess of the root, x = x_1. Then, using the recurrence relation

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)},    n = 1, 2, \ldots,

where

f'(x_n) = \frac{df(x)}{dx}\Big|_{x = x_n},

the successive approximations x_{n+1}, n \geq 1, beginning with n = 1, can be found. The approximation is assumed to have converged when the difference between x_{n+1} and x_n falls below a prescribed small number, typically 10^{-6}.

The Newton-Raphson method converges quickly to the actual root if the initial guess is close to it. However, there are three main drawbacks: (1) the method fails when f'(x_n) = 0, (2) the method does not always converge, and (3) the method may converge to a root different from the one expected if the initial guess x_1 is far from the actual root.

Example R1.1. In this example, we show how the Newton-Raphson method is used to find a root of f(x) = x^3 - 3x^2 + x - 1 = 0. Assume the numerical resolution required is 14 decimal digits.

We start with an initial guess of the root x_1 = 2.5:

x_1 = 2.5,
x_2 = x_1 - f(x_1)/f'(x_1) = 2.84210526315789,
x_3 = x_2 - f(x_2)/f'(x_2) = 2.77282691999216,
x_4 = x_3 - f(x_3)/f'(x_3) = 2.76930129255045,
x_5 = x_4 - f(x_4)/f'(x_4) = 2.76929235429601,
x_6 = x_5 - f(x_5)/f'(x_5) = 2.76929235423863,
x_7 = x_6 - f(x_6)/f'(x_6) = 2.76929235423863.

The recurrence stops since |x_7 - x_6| < 10^{-15}. Hence x = x_7 is a root of f(x).
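A compact Python version of the iteration (illustrative only; the function names are not from the text) reproduces the sequence of Example R1.1:

```python
def newton_raphson(f, fprime, x1, tol=1e-14, max_iter=100):
    # x_{n+1} = x_n - f(x_n)/f'(x_n); stop when |x_{n+1} - x_n| < tol
    x = x1
    for _ in range(max_iter):
        x_next = x - f(x) / fprime(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

f = lambda x: x**3 - 3*x**2 + x - 1
fprime = lambda x: 3*x**2 - 6*x + 1
print(newton_raphson(f, fprime, 2.5))    # 2.76929235423863
```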
R1.10 Hölder's Inequality and Cauchy-Schwarz's Inequality

Hölder's inequality for integrals is given by

\int_{a_0}^{a_1} f(x) g(x)\, dx \leq \left( \int_{a_0}^{a_1} |f(x)|^p\, dx \right)^{1/p} \left( \int_{a_0}^{a_1} |g(x)|^q\, dx \right)^{1/q},    (R1.55)

where

\frac{1}{p} + \frac{1}{q} = 1.

The equality holds when

f(x) = k g(x)^{p-1}, with k any constant.

If p = q = 2, the inequality becomes Schwarz's inequality

\int_{a_0}^{a_1} f(x) g(x)\, dx \leq \left( \int_{a_0}^{a_1} |f(x)|^2\, dx \right)^{1/2} \left( \int_{a_0}^{a_1} |g(x)|^2\, dx \right)^{1/2}.    (R1.56)

The equality holds when

f(x) = k g(x), with k any constant.

Hölder's inequality for sums is given by

\sum_{i=1}^{N} x_i y_i \leq \left( \sum_{i=1}^{N} |x_i|^p \right)^{1/p} \left( \sum_{i=1}^{N} |y_i|^q \right)^{1/q},    (R1.57)

where

\frac{1}{p} + \frac{1}{q} = 1.

The equality holds when

y_i = k x_i^{p-1}, with k any constant.

If p = q = 2, the inequality becomes Cauchy's inequality

\sum_{i=1}^{N} x_i y_i \leq \left( \sum_{i=1}^{N} |x_i|^2 \right)^{1/2} \left( \sum_{i=1}^{N} |y_i|^2 \right)^{1/2}.    (R1.58)

The equality holds when

y_i = k x_i, with k any constant.
R2 Useful Functions
1. rect function

   rect(x) = \begin{cases} 1, & |x| < \frac{1}{2} \\ 0, & |x| > \frac{1}{2} \end{cases}

2. sinc function

   sinc(x) = \frac{\sin x}{x}.

3. signum function

   sgn(x) = \begin{cases} +1, & x > 0 \\ 0, & x = 0 \\ -1, & x < 0 \end{cases}

4. The ceiling function rounds the input x towards the closest integer larger than or equal to x and is denoted as \lceil x \rceil.
   For example, \lceil 3.2 \rceil = \lceil 3.8 \rceil = 4 and \lceil -3.2 \rceil = \lceil -3.9 \rceil = -3.

5. The floor function rounds the input x towards the closest integer less than or equal to x and is denoted as \lfloor x \rfloor.
   For example, \lfloor 3.2 \rfloor = \lfloor 3.8 \rfloor = 3 and \lfloor -3.2 \rfloor = \lfloor -3.9 \rfloor = -4.

6. The median of a set of real numbers x_1, x_2, ..., x_N is obtained by rank ordering the numbers in the set and choosing the middle number in the ordered set (for an even N, the average of the two middle numbers).
   For example, the median of 7, 13, 1, 6, 3 is 6 and the median of 7, 13, 1, 6, 3, 9 is (6 + 7)/2 = 6.5.

7. The Dirac delta function \delta(\lambda) is a function of \lambda with infinite height, zero width, and unit area. It is the limiting form of a unit-area pulse function

   p_\Delta(\lambda) = \begin{cases} \frac{1}{2\Delta}, & -\Delta < \lambda < \Delta \\ 0, & \text{elsewhere} \end{cases}

   as \Delta goes to 0, i.e.,

   \lim_{\Delta \to 0} \int_{-\infty}^{\infty} p_\Delta(\lambda)\, d\lambda = \int_{-\infty}^{\infty} \delta(\lambda)\, d\lambda = 1.    (R2.1)

   Equation (R2.1) also holds when we reverse the direction of the \lambda axis and shift \delta(\lambda) by an amount t, i.e.,

   \int_{-\infty}^{\infty} \delta(t - \lambda)\, d\lambda = 1.    (R2.2)

   Because of the above properties, we have

   \int_{-\infty}^{\infty} x(\lambda) \delta(t - \lambda)\, d\lambda = x(\lambda)\big|_{\lambda = t} = x(t).    (R2.3)

   Equation (R2.3) holds for any value of t, and it is referred to as the sifting property of the Dirac delta function.

8. The modulo operation of integer X over integer N is the residue of X divided by N:

   \langle X \rangle_N = X - kN,    k = \lfloor X/N \rfloor.

   It can be verified that the modulo operation is linear. When negative numbers are used, \langle X \rangle_N has the same sign as N. For example, \langle 67 \rangle_{13} = 67 - 5 \cdot 13 = 2, \langle 67 \rangle_{-13} = 67 - (-13)(-6) = -11, and \langle -67 \rangle_{13} = -67 - 13 \cdot (-6) = 11.

   The statement "X is congruent to Y, modulo N" means that

   \langle X \rangle_N = \langle Y \rangle_N.

   The notation

   \langle X^{-1} \rangle_N

   denotes the multiplicative inverse of X evaluated modulo N, i.e., if \langle X^{-1} \rangle_N = \alpha, then \langle \alpha X \rangle_N = 1. For example, \langle 3^{-1} \rangle_4 = 3 because \langle 3 \cdot 3 \rangle_4 = 1, and \langle 8^{-1} \rangle_5 = 2 because \langle 8 \cdot 2 \rangle_5 = 1.

   In the case of polynomials, the operation a(z) mod b(z) is the residue r(z) after the polynomial division a(z)/b(z). For example, if a(z) = 4z^{-3} + 2z^{-2} + 5z^{-1} + 1 and b(z) = z^{-2} + 3z^{-1} + 4, then the residue after the division

   \frac{a(z)}{b(z)} = 4z^{-1} - 10 + \frac{19z^{-1} + 41}{z^{-2} + 3z^{-1} + 4}

   is 19z^{-1} + 41. Therefore, a(z) mod b(z) = r(z) = 19z^{-1} + 41.
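These operations map directly onto Python: the % operator follows the same "result has the sign of N" convention used above, pow(X, -1, N) gives the multiplicative inverse (Python 3.8+), and numpy.polydiv returns the polynomial quotient and residue (written here in the variable u = z^{-1}); a short sketch:

```python
import numpy as np

# Integer modulo; Python's % already gives a result with the sign of the divisor.
print(67 % 13, 67 % -13, -67 % 13)     # 2 -11 11

# Multiplicative inverses modulo N.
print(pow(3, -1, 4), pow(8, -1, 5))    # 3 2

# Polynomial residue a(z) mod b(z), with u = z^{-1}:
# a = 4u^3 + 2u^2 + 5u + 1,  b = u^2 + 3u + 4.
q, r = np.polydiv([4, 2, 5, 1], [1, 3, 4])
print(q, r)                            # [ 4. -10.]  [19. 41.]
```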
R3 Commonly Used Differentials and Integrals
R3.1 Differentials

d(uv) = u\, dv + v\, du    (R3.1)

d\left(\frac{u}{v}\right) = \frac{v\, du - u\, dv}{v^2}    (R3.2)

d(u^n) = n u^{n-1}\, du    (R3.3)

d\, e^u = e^u\, du    (R3.4)

d\, a^u = (a^u \log_e a)\, du    (R3.5)

d(\log_e u) = u^{-1}\, du    (R3.6)

d \sin u = \cos u\, du    (R3.7)

d \cos u = -\sin u\, du.    (R3.8)
R3.2 Integrals
\int f(g(x)) g'(x)\, dx = \int f(y)\, dy,    where y = g(x) and dy = g'(x)\, dx

\int \frac{f'(x)}{f(x)}\, dx = \log_e f(x)    (R3.11)

\int \frac{dx}{x} = \log_e x    (R3.12)

\int x^n\, dx = \frac{x^{n+1}}{n+1}    (R3.13)

\int e^x\, dx = e^x    (R3.14)

\int a^x\, dx = \frac{a^x}{\log_e a}    (R3.15)

\int a^{bx}\, dx = \frac{a^{bx}}{b \log_e a}    (R3.16)

\int \log_e x\, dx = x \log_e x - x    (R3.17)

\int \sin x\, dx = -\cos x    (R3.18)

\int \cos x\, dx = \sin x    (R3.19)

\int \tan x\, dx = -\log_e \cos x.    (R3.20)
R3.3 l'Hopital's Rule

Consider a fraction f(x)/g(x) for which, at x = x_0, f(x_0) = g(x_0) = 0 (or f(x_0) = g(x_0) = \infty). Then

\lim_{x \to x_0} \frac{f(x)}{g(x)} = \lim_{x \to x_0} \frac{f'(x)}{g'(x)}    (R3.21)

as long as the limits on the right-hand side exist and are finite.
R3.4 Examples
Example R3.1. Evaluate the integral

x[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(\omega) e^{j\omega n}\, d\omega,

where

X(\omega) = \begin{cases} \cos(\omega), & |\omega| \leq \omega_0 \\ 0, & \omega_0 < |\omega| \leq \pi \end{cases}

Answer:

x[n] = \frac{1}{2\pi} \int_{-\omega_0}^{\omega_0} \cos(\omega) e^{j\omega n}\, d\omega
     = \frac{1}{2\pi} \int_{-\omega_0}^{\omega_0} \frac{1}{2}(e^{j\omega} + e^{-j\omega}) e^{j\omega n}\, d\omega
     = \frac{1}{4\pi} \left[ \int_{-\omega_0}^{\omega_0} e^{j\omega} e^{j\omega n}\, d\omega + \int_{-\omega_0}^{\omega_0} e^{-j\omega} e^{j\omega n}\, d\omega \right]
     = \frac{1}{4\pi} \left[ \frac{1}{j(1+n)} e^{j\omega(1+n)} \Big|_{-\omega_0}^{\omega_0} + \frac{1}{j(-1+n)} e^{j\omega(-1+n)} \Big|_{-\omega_0}^{\omega_0} \right]
     = \frac{1}{4\pi} \left[ \frac{1}{j(1+n)}\, 2j \sin((1+n)\omega_0) + \frac{1}{j(-1+n)}\, 2j \sin((-1+n)\omega_0) \right]
     = \frac{\sin((1+n)\omega_0)}{2\pi(1+n)} + \frac{\sin((n-1)\omega_0)}{2\pi(n-1)}.

Example R3.2. Evaluate the integral

x[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(\omega) e^{j\omega n}\, d\omega,

where

X(\omega) = \begin{cases} \omega, & |\omega| \leq \omega_0 \\ 0, & \omega_0 < |\omega| \leq \pi \end{cases}

using integration by parts.

Answer:

x[n] = \frac{1}{2\pi} \int_{-\omega_0}^{\omega_0} \omega e^{j\omega n}\, d\omega
     = \frac{1}{2\pi} \frac{1}{jn} \int_{-\omega_0}^{\omega_0} \omega e^{j\omega n}\, d(j\omega n)
     = \frac{1}{2\pi} \frac{1}{jn} \int_{-\omega_0}^{\omega_0} \omega\, d(e^{j\omega n})
     = \frac{1}{2\pi} \frac{1}{jn} \left[ \omega e^{j\omega n} \Big|_{-\omega_0}^{\omega_0} - \int_{-\omega_0}^{\omega_0} e^{j\omega n}\, d\omega \right]
     = \frac{1}{2\pi} \frac{1}{jn} \left[ \omega_0 e^{j\omega_0 n} - (-\omega_0) e^{-j\omega_0 n} - \frac{1}{jn} e^{j\omega n} \Big|_{-\omega_0}^{\omega_0} \right]
     = \frac{1}{2\pi} \frac{1}{jn} \left[ \omega_0 (e^{j\omega_0 n} + e^{-j\omega_0 n}) - \frac{1}{jn} (e^{j\omega_0 n} - e^{-j\omega_0 n}) \right]
     = \frac{1}{j\pi n} \left[ \omega_0 \cos(\omega_0 n) - \frac{1}{n} \sin(\omega_0 n) \right].
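The closed-form answers above are easy to spot-check numerically; a small Python sketch (the values of \omega_0 and n are arbitrary test values, and the approximation is a simple Riemann sum):

```python
import numpy as np

w0, n = 1.0, 3
w = np.linspace(-w0, w0, 200001)
dw = w[1] - w[0]

# Example R3.1: X(w) = cos(w) on |w| <= w0
x1_num = np.sum(np.cos(w) * np.exp(1j * w * n)) * dw / (2 * np.pi)
x1_formula = np.sin((1 + n) * w0) / (2 * np.pi * (1 + n)) + np.sin((n - 1) * w0) / (2 * np.pi * (n - 1))
print(x1_num.real, x1_formula)

# Example R3.2: X(w) = w on |w| <= w0
x2_num = np.sum(w * np.exp(1j * w * n)) * dw / (2 * np.pi)
x2_formula = (w0 * np.cos(w0 * n) - np.sin(w0 * n) / n) / (1j * np.pi * n)
print(x2_num, x2_formula)
```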
R4 Complex Numbers
R4.1 Definition

A complex number z is represented in Cartesian coordinates as

z = x + jy,

where j = \sqrt{-1}, x is the real part of z, and y is the imaginary part of z. The same number can be represented in polar form as

z = r e^{j\theta},

where r = \sqrt{x^2 + y^2} is the magnitude and \theta = \tan^{-1}(y/x) is the angle of z (see Figure R4.1).

Example R4.1. For z = 2 + j\sqrt{3}, we have

r = \sqrt{2^2 + 3} = \sqrt{7},    \theta = \tan^{-1}\left(\frac{\sqrt{3}}{2}\right).

Therefore,

z = 2 + j\sqrt{3} = \sqrt{7}\, e^{j \tan^{-1}(\sqrt{3}/2)}.
Figure R4.1: Representation of a complex number z in Cartesian form and
polar form.
The conjugate of a complex number in Cartesian form is obtained by
negating the imaginary part:
z^* = (x + jy)^* = x^* + (jy)^* = x - jy.

In polar form, the conjugate is obtained by changing the sign of the angle:

z^* = (r e^{j\theta})^* = r e^{-j\theta}.
R4.2 Complex Arithmetic
(1) Addition and Subtraction
Let z_1 = x_1 + jy_1 and z_2 = x_2 + jy_2 be two complex numbers. Then

z_1 + z_2 = (x_1 + x_2) + j(y_1 + y_2),

where (x_1 + x_2) and (y_1 + y_2) are the real and imaginary parts of the sum z_1 + z_2, respectively. Similarly,

z_1 - z_2 = (x_1 - x_2) + j(y_1 - y_2),

where (x_1 - x_2) and (y_1 - y_2) are the real and imaginary parts of the difference z_1 - z_2, respectively.
Example R4.2. Let z_1 = 1.3 + j5.2 and z_2 = 2.7 - j3.6; then

z_1 + z_2 = (1.3 + j5.2) + (2.7 - j3.6) = 4 + j1.6,
z_1 - z_2 = (1.3 + j5.2) - (2.7 - j3.6) = -1.4 + j8.8.
(2) Multiplication
Let z_1 = x_1 + jy_1 and z_2 = x_2 + jy_2; then

z_1 z_2 = (x_1 + jy_1)(x_2 + jy_2)
        = x_1 x_2 + j x_1 y_2 + j x_2 y_1 + j^2 y_1 y_2
        = (x_1 x_2 - y_1 y_2) + j(x_1 y_2 + x_2 y_1).

Example R4.3. Let z_1 = 1 + j\sqrt{3}, z_2 = 2 - j2. The product of z_1 and z_2 calculated in polar form is given by

(1 + j\sqrt{3})(2 - j2) = 2 e^{j\pi/3} \cdot 2\sqrt{2}\, e^{-j\pi/4} = 4\sqrt{2}\, e^{j\pi/12} = 5.4641 + j1.4641.

Calculating in the Cartesian form we get

(1 + j\sqrt{3})(2 - j2) = (2 + 2\sqrt{3}) + j(2\sqrt{3} - 2) = 5.4641 + j1.4641.
(3) Division
The division of two complex numbers z_0 and z_1 can be carried out either in polar form or in Cartesian form. In the former case

w = \frac{z_0}{z_1} = \frac{r_0 e^{j\theta_0}}{r_1 e^{j\theta_1}} = \frac{r_0}{r_1} e^{j(\theta_0 - \theta_1)}.

In the latter case

w = \frac{z_0}{z_1} = \frac{x_0 + jy_0}{x_1 + jy_1} = \frac{(x_0 + jy_0)(x_1 - jy_1)}{(x_1 + jy_1)(x_1 - jy_1)} = \frac{(x_0 x_1 + y_0 y_1) + j(x_1 y_0 - x_0 y_1)}{x_1^2 + y_1^2}.

Example R4.4. To divide 2 + j2 by 1 - j, we calculate in polar form as follows:

\frac{2 + j2}{1 - j} = \frac{2\sqrt{2}\, e^{j\frac{\pi}{4}}}{\sqrt{2}\, e^{-j\frac{\pi}{4}}} = 2 e^{j(\frac{\pi}{4} - (-\frac{\pi}{4}))} = 2 e^{j\frac{\pi}{2}} = j2.

Calculating in the Cartesian form we get

\frac{2 + j2}{1 - j} = \frac{(2 + j2)(1 + j)}{(1 - j)(1 + j)} = \frac{(2 - 2) + j(2 + 2)}{1^2 + 1^2} = j2.
(4) Inverse
The inverse of a complex number is a special case of division where the numerator is 1. In polar form we have

z^{-1} = \frac{1}{z} = \frac{1}{r e^{j\theta}} = \frac{1}{r} e^{-j\theta}.

Equivalently, in the Cartesian form we have

z^{-1} = \frac{1}{z} = \frac{1}{x + jy} = \frac{x - jy}{(x + jy)(x - jy)} = \frac{x - jy}{x^2 + y^2}.
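Python's built-in complex type (with 1j as the imaginary unit) reproduces the worked examples of this section; a quick check (results shown to within floating-point rounding):

```python
import cmath

z1, z2 = 1.3 + 5.2j, 2.7 - 3.6j
print(z1 + z2, z1 - z2)               # (4+1.6j) (-1.4+8.8j)       Example R4.2

z1, z2 = 1 + 3**0.5 * 1j, 2 - 2j
print(z1 * z2)                        # 5.4641... + 1.4641...j     Example R4.3

print((2 + 2j) / (1 - 1j))            # 2j                         Example R4.4
print(cmath.polar(1 + 3**0.5 * 1j))   # (2.0, 1.0472): magnitude and angle (pi/3)
```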
R5 Complex Variables
R5.1 Function of a Complex Variable
A function of the complex variable z can be written as

f(z) = u(z) + jv(z),

where u(z) and v(z) are real functions of z. In the Cartesian form, we define z = x + jy for real x and y. Therefore, the values of u(z) and v(z) depend on x and y, and we can express the complex function f(z) as

f(z) = u(x, y) + jv(x, y).

If z = r e^{j\theta}, then f(z) can be expressed as

f(z) = u(r, \theta) + jv(r, \theta),

where u(r, \theta) and v(r, \theta) are the real and imaginary parts of f(z).
R5.2 Analytic Function of a Complex Variable
Definition R5.1. A function f(z) is said to be differentiable at a point z_0 in the z-plane if the limit

f'(z_0) = \lim_{\Delta z \to 0} \frac{f(z_0 + \Delta z) - f(z_0)}{\Delta z}

exists. Note that f(z_0 + \Delta z) can approach f(z_0) along any path. This limit is called the derivative of f(z) at the point z_0.

Definition R5.2. A function f(z) of a complex variable z is analytic in the region R in the complex z-plane if and only if all the derivatives of f(z) exist at all points inside the region R.
R5.3 Analytic Continuation
If the values of a function f(z) of a complex variable are known everywhere
on a closed contour C inside a region R where f(z) is analytic, then the
values of f(z) at all points in R can be found by mapping from the contour
C to any point in R.
R5.4 Cauchy's Integral Formula

If a function f(z) is analytic both on and inside a counterclockwise closed contour C and if z_0 is any point inside C, then

f(z_0) = \frac{1}{2\pi j} \oint_C f(z) \frac{1}{z - z_0}\, dz,    (R5.1)

f'(z_0) = \frac{1}{2\pi j} \oint_C f(z) \frac{1}{(z - z_0)^2}\, dz,    (R5.2)

f''(z_0) = \frac{2!}{2\pi j} \oint_C f(z) \frac{1}{(z - z_0)^3}\, dz,    (R5.3)

\vdots    (R5.4)

f^{(n)}(z_0) = \frac{n!}{2\pi j} \oint_C f(z) \frac{1}{(z - z_0)^{n+1}}\, dz,    (R5.5)

where f'(z_0) = \frac{d f(z)}{dz}\Big|_{z=z_0}, f''(z_0) = \frac{d^2 f(z)}{dz^2}\Big|_{z=z_0}, and f^{(n)}(z_0) = \frac{d^n f(z)}{dz^n}\Big|_{z=z_0}.

Eq. (R5.1) is often referred to as Cauchy's integral formula.

By combining Eqs. (R5.1)-(R5.5), we arrive at a useful relation:

\frac{1}{2\pi j} \oint_C z^{k-1}\, dz = \begin{cases} 1, & k = 0 \\ 0, & k \neq 0 \end{cases}    (R5.6)

where C is a counterclockwise closed contour encircling z = 0.
R5.5 Cauchy's Residue Theorem

If a function f(z) is analytic both on and inside a counterclockwise closed contour C except at poles z_k, k = 1, 2, ..., n, then

\frac{1}{2\pi j} \oint_C f(z)\, dz = \sum_k \left[ \text{residue of } f(z) \text{ at pole } z_k \text{ inside } C \right].    (R5.7)

In the case when f(z) is a rational function of z and has a pole at z = z_k of multiplicity m, we can express f(z) as

f(z) = \frac{\psi(z)}{(z - z_k)^m},

where \psi(z) does not have any pole at z = z_k. Then the residue of f(z) at the pole z_k inside C is given by

\frac{1}{(m-1)!} \left[ \frac{d^{m-1} \psi(z)}{dz^{m-1}} \right]_{z = z_k}.    (R5.8)
R6 Continuous-Time Signals
R6.1 Energy and Power
The total energy of a continuous-time signal x(t) is given by

E_x = \lim_{T \to \infty} \int_{-T}^{T} |x(t)|^2\, dt.

The average power of a continuous-time signal x(t) is given by

P_x = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} |x(t)|^2\, dt.

The total energy can be interpreted as the area under the squared signal |x(t)|^2, and it is a measure of the strength of the signal x(t) over infinite time. However, there are signals with infinite energy, so we also need the average power of x(t) as a measure of its strength per unit time.
R6.2 Continuous-Time Sinusoidal and Exponential Signals

R6.2.1 Definition

The continuous-time real sinusoidal signal with constant amplitude is of the form

x(t) = A \cos(\Omega_0 t + \phi),    (R6.1)

where A, \Omega_0, and \phi are real numbers. The parameters A, \Omega_0, and \phi are called, respectively, the amplitude, the angular frequency, and the phase of the sinusoidal signal x(t).

The complex exponential signal is expressed in the form

x(t) = A \alpha^t,    (R6.2)

where

\alpha = e^{\sigma_0 + j\Omega_0},    A = |A| e^{j\phi}.    (R6.3)

If A and \alpha are both real, the signal of Eq. (R6.2) reduces to a real exponential signal. For t \geq 0, such a signal with |\alpha| < 1 decays exponentially as t increases, and with |\alpha| > 1 it grows exponentially as t increases.

In addition, we can rewrite Eq. (R6.2) as

x(t) = A e^{(\sigma_0 + j\Omega_0)t} = |A| e^{\sigma_0 t} e^{j(\Omega_0 t + \phi)}    (R6.4)
     = |A| e^{\sigma_0 t} \cos(\Omega_0 t + \phi) + j|A| e^{\sigma_0 t} \sin(\Omega_0 t + \phi).    (R6.5)

Thus the real and imaginary parts of a complex exponential signal are real sinusoidal signals.

The fundamental period T_0 of a complex exponential signal (Eq. (R6.4)) with \sigma_0 = 0 is defined to be the smallest positive T_0 satisfying

|A| e^{j(\Omega_0 t + \phi)} = |A| e^{j(\Omega_0 (t + T_0) + \phi)},    (R6.6)

or equivalently,

e^{j\Omega_0 T_0} = 1.    (R6.7)

Therefore,

T_0 = \frac{2\pi}{|\Omega_0|}.    (R6.8)
R6.2.2 Properties
The properties of continuous-time sinusoidal and exponential signals, and comparisons with discrete-time sinusoidal and exponential sequences, are discussed as follows.

1. Periodicity for any choice of \Omega_0

Note that the continuous-time sinusoidal signal A \cos(\Omega_0 t + \phi) (Eq. (R6.1)) and the continuous-time complex exponential signal |A| e^{j(\Omega_0 t + \phi)} are periodic signals for any choice of \Omega_0. However, discrete-time sequences are not always periodic for an arbitrary choice of \omega_0. The discrete sinusoidal sequence A \cos(\omega_0 n + \phi) and the discrete complex exponential sequence |A| e^{j(\omega_0 n + \phi)} are periodic with period N only if \omega_0 N is an integer multiple of 2\pi, i.e., \omega_0 N = 2\pi r where N and r are positive integers.

For example, \cos(\frac{\pi n}{4}) is a periodic sequence while \cos(\frac{n}{4}) is not periodic.

2. Distinctness for different \Omega_0, \Omega_1

Any two continuous-time sinusoidal signals

A \cos(\Omega_0 t + \phi),    A \cos(\Omega_1 t + \phi),    \Omega_0 \neq \Omega_1,

have different waveforms. Similarly, any two continuous-time exponential signals with \Omega_0 \neq \Omega_1 also have different waveforms. Unlike the continuous-time case, the discrete-time sinusoidal sequences

A \cos(\omega_0 n + \phi),    A \cos(\omega_1 n + \phi),    \omega_0 = \omega_1 + 2\pi k,

have the same sequence values. Similarly, any two discrete-time exponential sequences with \omega_0 = \omega_1 + 2\pi k also have the same sequence values.
R6.3 Continuous-Time Eigenfunction
If an input signal to an LTI system produces an output that is the input signal multiplied by a complex constant, that input signal is called an eigenfunction of the system and the complex constant is called the eigenvalue.

Example R6.1. We want to show that the complex exponential signal defined in Eq. (R6.2),

x(t) = A \alpha^t,

is an eigenfunction of an LTI continuous-time system with an impulse response h(t).

By using the convolution integral, we have

y(t) = \int_{-\infty}^{\infty} h(\lambda) A \alpha^{(t-\lambda)}\, d\lambda
     = \left[ \int_{-\infty}^{\infty} h(\lambda) \alpha^{-\lambda}\, d\lambda \right] A \alpha^t.

Since the integral inside the brackets is independent of t, we can therefore say that the input signal A \alpha^t is an eigenfunction.

Example R6.2. We want to show that the sum of two distinct complex exponential signals,

x(t) = A \alpha^t + B \beta^t,

is not an eigenfunction of an LTI continuous-time system with an impulse response h(t).

By using the convolution integral, we have

y(t) = \int_{-\infty}^{\infty} h(\lambda) \left[ A \alpha^{(t-\lambda)} + B \beta^{(t-\lambda)} \right] d\lambda
     = \left[ \int_{-\infty}^{\infty} h(\lambda) \alpha^{-\lambda}\, d\lambda \right] A \alpha^t + \left[ \int_{-\infty}^{\infty} h(\lambda) \beta^{-\lambda}\, d\lambda \right] B \beta^t.

Since the output cannot be written as the input signal x(t) multiplied by a single constant, we can therefore say that the input signal A \alpha^t + B \beta^t is not an eigenfunction.
R6.4 Continuous-Time Fourier Series
R6.4.1 Definition

Given a periodic continuous-time signal x(t) with period T_0 and fundamental frequency \Omega_0 = 2\pi/T_0, the Fourier series expansion of x(t) is given by the linear combination of the set of harmonically related complex exponentials

e^{jk\Omega_0 t} = e^{jk\frac{2\pi}{T_0} t},    k = 0, \pm 1, \pm 2, ...,

i.e.,

x(t) = \sum_{k=-\infty}^{\infty} a_k e^{jk\Omega_0 t} = \sum_{k=-\infty}^{\infty} a_k e^{jk\frac{2\pi}{T_0} t},    (R6.9)

a_k = \frac{1}{T_0} \int_{T_0} x(t) e^{-jk\Omega_0 t}\, dt = \frac{1}{T_0} \int_{T_0} x(t) e^{-jk\frac{2\pi}{T_0} t}\, dt.    (R6.10)

Note that the notation \int_{T_0} denotes integration over any interval of length T_0. Eq. (R6.9) is referred to as the synthesis equation and Eq. (R6.10) is referred to as the analysis equation. The coefficient a_k is called the Fourier series coefficient.
Table R6.1: Properties of Continuous-Time Fourier Series.

    Property                Periodic Signal with frequency \Omega_0 = 2\pi/T       Fourier Series Coefficients
                            g(t)                                                   a_k
                            h(t)                                                   b_k
    Linearity               g(t) + h(t)                                            a_k + b_k
    Time Shifting           g(t - t_0)                                             a_k e^{-jk\Omega_0 t_0}
    Frequency Shifting      e^{jM\Omega_0 t} g(t)                                  a_{k-M}
    Multiplication          g(t)h(t)                                               \sum_{l=-\infty}^{\infty} a_l b_{k-l}
    Time Reversal           g(-t)                                                  a_{-k}
    Conjugation             g^*(t)                                                 a^*_{-k}
    Time Scaling            g(\alpha t), \alpha > 0                                a_k
    Periodic Convolution    \int_T g(\lambda) h(t - \lambda)\, d\lambda            T a_k b_k
Example R6.3. Find the Fourier series coefficients of the continuous-time signal

x(t) = 1 + \cos(\Omega_0 t) + 2\cos(2\Omega_0 t + \tfrac{\pi}{3}) + 4\sin(3\Omega_0 t + \tfrac{\pi}{4})

with fundamental frequency \Omega_0.

By using Euler's formula, it can be shown that

x(t) = 1 + \frac{1}{2}(e^{j\Omega_0 t} + e^{-j\Omega_0 t}) + \frac{2}{2}(e^{j(2\Omega_0 t + \frac{\pi}{3})} + e^{-j(2\Omega_0 t + \frac{\pi}{3})}) + \frac{4}{2j}(e^{j(3\Omega_0 t + \frac{\pi}{4})} - e^{-j(3\Omega_0 t + \frac{\pi}{4})})
     = 1 + \frac{1}{2} e^{j\Omega_0 t} + \frac{1}{2} e^{-j\Omega_0 t} + e^{j\frac{\pi}{3}} e^{j2\Omega_0 t} + e^{-j\frac{\pi}{3}} e^{-j2\Omega_0 t} + \frac{2}{j} e^{j\frac{\pi}{4}} e^{j3\Omega_0 t} - \frac{2}{j} e^{-j\frac{\pi}{4}} e^{-j3\Omega_0 t}.

Therefore, the Fourier series coefficients are

a_0 = 1,
a_1 = \frac{1}{2},    a_{-1} = \frac{1}{2},
a_2 = e^{j\frac{\pi}{3}} = \frac{1 + j\sqrt{3}}{2},    a_{-2} = e^{-j\frac{\pi}{3}} = \frac{1 - j\sqrt{3}}{2},
a_3 = \frac{2}{j} e^{j\frac{\pi}{4}} = \sqrt{2}(1 - j),    a_{-3} = -\frac{2}{j} e^{-j\frac{\pi}{4}} = \sqrt{2}(1 + j),
a_k = 0,    |k| > 3.
Example R6.4. Find the Fourier series coefficients of the impulse train

x(t) = \sum_{k=-\infty}^{\infty} \delta(t - kT_0)

with period T_0.

By evaluating Eq. (R6.10) over the interval -T_0/2 \leq t \leq T_0/2, we get

a_k = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} \delta(t) e^{-jk\frac{2\pi}{T_0} t}\, dt = \frac{1}{T_0}.

Therefore, all the Fourier series coefficients of the impulse train have the same value 1/T_0.

Some important properties of continuous-time Fourier series are listed in Table R6.1 for quick reference.
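The analysis equation (R6.10) is straightforward to evaluate numerically on a sampled period; a short Python check of the coefficients found in Example R6.3 (T_0 here is an arbitrary test value):

```python
import numpy as np

T0 = 1.0
W0 = 2 * np.pi / T0
t = np.linspace(0, T0, 100000, endpoint=False)
x = 1 + np.cos(W0*t) + 2*np.cos(2*W0*t + np.pi/3) + 4*np.sin(3*W0*t + np.pi/4)

# a_k = (1/T0) * integral of x(t) e^{-jk W0 t} over one period   (R6.10)
for k in range(4):
    a_k = np.mean(x * np.exp(-1j * k * W0 * t))
    print(k, np.round(a_k, 4))
# prints 1, 0.5, e^{j pi/3} = 0.5+0.866j, sqrt(2)(1-j) = 1.4142-1.4142j
```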
R6.4.2 Dirichlet Conditions
In order to verify the existence of a Fourier series representation for a periodic signal x(t), we need to examine the Dirichlet conditions. The Dirichlet conditions are:

1. x(t) must be absolutely integrable over one period, i.e.,

   \int_{T_0} |x(t)|\, dt < \infty.

2. In any finite interval of time, x(t) has a finite number of local maxima and local minima.

3. In any finite interval of time, x(t) has a finite number of discontinuities.

The Dirichlet conditions guarantee that x(t) equals its Fourier series representation

\sum_{k=-\infty}^{\infty} a_k e^{jk\Omega_0 t}

at all values of t except at discontinuities of x(t). Note that the Dirichlet conditions are sufficient but not necessary conditions.
R6.5 Continuous-Time Fourier Transform
Given an aperiodic continuous-time signal x(t), the continuous-time Fourier transform of x(t) is given by

X(j\Omega) = \int_{-\infty}^{\infty} x(t) e^{-j\Omega t}\, dt,    (R6.11)

x(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X(j\Omega) e^{j\Omega t}\, d\Omega.    (R6.12)

The transform X(j\Omega) is referred to as the spectrum of x(t) because it describes the content of x(t) in terms of complex exponential signals at different frequencies.

Some important properties of the continuous-time Fourier transform are listed in Table R6.2 for quick reference.
Table R6.2: Properties of Continuous-Time Fourier Transform.

    Property                    Signal                                          Fourier Transform
                                g(t)                                            G(j\Omega)
                                h(t)                                            H(j\Omega)
    Linearity                   g(t) + h(t)                                     G(j\Omega) + H(j\Omega)
    Time Shifting               g(t - t_0)                                      G(j\Omega) e^{-j\Omega t_0}
    Frequency Shifting          e^{j\Omega_0 t} g(t)                            G(j(\Omega - \Omega_0))
    Multiplication              g(t)h(t)                                        \frac{1}{2\pi} G(j\Omega) * H(j\Omega)
    Time Reversal               g(-t)                                           G(-j\Omega)
    Conjugation                 g^*(t)                                          G^*(-j\Omega)
    Time Scaling                g(\alpha t)                                     \frac{1}{|\alpha|} G\left(\frac{j\Omega}{\alpha}\right)
    Convolution                 g(t) * h(t)                                     G(j\Omega) H(j\Omega)
    Differentiation in Time     \frac{d}{dt} g(t)                               j\Omega\, G(j\Omega)
    Integration                 \int_{-\infty}^{t} g(\lambda)\, d\lambda        \frac{1}{j\Omega} G(j\Omega) + \pi G(0) \delta(\Omega)
    Real and Even in Time       g(t) real and even                              G(j\Omega) real and even
    Real and Odd in Time        g(t) real and odd                               G(j\Omega) purely imaginary and odd
R7 Discrete Fourier Series
Given a periodic sequence x[n] with period N, the fundamental period is defined to be the smallest integer N such that x[n] = x[n + N] is satisfied, and the fundamental frequency is defined to be \omega_0 = 2\pi/N. The harmonics are sequences whose frequencies are integer multiples of the fundamental frequency. For discrete complex exponential signals, the k-th harmonic is expressed as

e^{jk\omega_0 n} = e^{jk\frac{2\pi}{N} n},    k = 0, \pm 1, \pm 2, \ldots.

Note that there are only N distinct harmonics for discrete complex exponential signals with fundamental frequency \omega_0 = 2\pi/N, because any two signals whose frequencies differ by 2\pi m have the same waveform, i.e.,

e^{jk(\omega_0 + 2\pi m)n} = e^{jk(\frac{2\pi}{N} + 2\pi m)n} = e^{jk\frac{2\pi}{N} n} e^{jk 2\pi m n} = e^{jk\frac{2\pi}{N} n}.

The discrete Fourier series expansion of a periodic signal x[n] is the expression of x[n] as a linearly weighted combination of a fundamental and a series of harmonic complex exponential signals:

x[n] = \sum_{k=0}^{N-1} a_k e^{jk\omega_0 n} = \sum_{k=0}^{N-1} a_k e^{jk\frac{2\pi}{N} n},

where

a_k = \frac{1}{N} \sum_{n=0}^{N-1} x[n] e^{-jk\omega_0 n} = \frac{1}{N} \sum_{n=0}^{N-1} x[n] e^{-jk\frac{2\pi}{N} n}.
Example R7.1. Calculate the Fourier series coefficients a_k for the following periodic signal of period N = 6:

x[n] = \ldots, 1, 1, 1, 1, 0, 0, \ldots

a_k = \frac{1}{6} \sum_{n=0}^{5} x[n] e^{-jk\frac{2\pi}{6} n}
    = \frac{1}{6} \left( e^{-jk\frac{2\pi}{6} \cdot 0} + e^{-jk\frac{2\pi}{6} \cdot 1} + e^{-jk\frac{2\pi}{6} \cdot 2} + e^{-jk\frac{2\pi}{6} \cdot 3} + 0 + 0 \right)
    = \frac{1}{6} \left( 1 + e^{-jk\frac{\pi}{3}} + e^{-jk\frac{2\pi}{3}} + e^{-jk\pi} \right)
    = \frac{1}{6} \left( 1 + e^{-jk\frac{\pi}{3}} + (-1)^k e^{jk\frac{\pi}{3}} + (-1)^k \right).
Example R7.2. Calculate the signal x[n] from the following Fourier series coefficients a_k:

a_k = \ldots, 1/4, 1/2, 1, 1/2, 1/4, 0, 1/4, 1/2, 1, 1/2, \ldots

We can observe that N = 6, so

x[n] = \sum_{k = \langle N \rangle} a_k e^{jk\frac{2\pi}{6} n}
     = 1 + \frac{1}{2} e^{j\frac{2\pi}{6} n} + \frac{1}{4} e^{j\frac{2\pi}{6} 2n} + 0 + \frac{1}{4} e^{j\frac{2\pi}{6} 4n} + \frac{1}{2} e^{j\frac{2\pi}{6} 5n}
     = 1 + \frac{1}{2} e^{j\frac{\pi n}{3}} + \frac{1}{4} e^{j\frac{2\pi n}{3}} + 0 + \frac{1}{4} e^{j\frac{4\pi n}{3}} + \frac{1}{2} e^{j\frac{5\pi n}{3}}
     = 1 + \frac{1}{2} e^{j\frac{\pi n}{3}} + \frac{1}{4} e^{j\frac{2\pi n}{3}} + 0 + \frac{1}{4} e^{j(2\pi n - \frac{2\pi n}{3})} + \frac{1}{2} e^{j(2\pi n - \frac{\pi n}{3})}
     = 1 + \frac{1}{2} e^{j\frac{\pi n}{3}} + \frac{1}{4} e^{j\frac{2\pi n}{3}} + 0 + \frac{1}{4} e^{-j\frac{2\pi n}{3}} + \frac{1}{2} e^{-j\frac{\pi n}{3}}
     = 1 + \cos\left(\frac{\pi n}{3}\right) + \frac{1}{2} \cos\left(\frac{2\pi n}{3}\right).
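Since the analysis sum is, up to the 1/N factor, a DFT of one period, numpy's FFT gives a quick cross-check of both examples; a small sketch:

```python
import numpy as np

# Example R7.1: one period of x[n]
x = np.array([1, 1, 1, 1, 0, 0])
N = len(x)
a = np.fft.fft(x) / N                    # a_k = (1/N) sum_n x[n] e^{-jk(2pi/N)n}
print(np.round(a, 4))                    # a_0 = 0.6667, etc.

# Example R7.2: synthesis from a_k = [1, 1/2, 1/4, 0, 1/4, 1/2]
a2 = np.array([1, 0.5, 0.25, 0, 0.25, 0.5])
n = np.arange(N)
x2 = sum(a2[k] * np.exp(1j * k * 2 * np.pi / N * n) for k in range(N))
print(np.round(x2.real, 4))              # equals 1 + cos(pi n/3) + 0.5 cos(2 pi n/3)
```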
R8 Matrix Algebra
R8.1 Definition

A matrix is a rectangular array of real or complex numbers enclosed in brackets; for instance,

    [ 3  4 ]      [ 3  5  6 ]      [ 3j      5    ]
    [ 1  2 ],     [ 2  1  3 ],     [ 2-4j    1+j  ]
                                   [ 7       5-3j ].

A matrix with K rows and M columns is called a K \times M matrix. For example, the matrix

    [ 1  0  0 ]
    [ 0  1  0 ]
    [ 0  0  1 ]

is a 3 \times 3 matrix. The matrix

    [ 1    4 ]
    [ 2    6 ]
    [ 3j   1 ]
    [ 1+j  1 ]

is a 4 \times 2 matrix. The matrix

    U = [ a_{11}  a_{12}  ...  a_{1M} ]
        [ a_{21}  a_{22}  ...  a_{2M} ]
        [  ...     ...    ...   ...   ]
        [ a_{K1}  a_{K2}  ...  a_{KM} ]    (R8.1)

is a K \times M matrix, and the number a_{rs}, r = 1, 2, ..., K, s = 1, 2, ..., M, is called the (r, s)-th entry of U.
R8.2 Transpose
The transpose, U^T, of a K \times M matrix U is the M \times K matrix formed by interchanging the rows and columns of U. For example, the transpose of the matrix U given in Eq. (R8.1) is the M \times K matrix given by

    U^T = [ a_{11}  a_{21}  ...  a_{K1} ]
          [ a_{12}  a_{22}  ...  a_{K2} ]
          [  ...     ...    ...   ...   ]
          [ a_{1M}  a_{2M}  ...  a_{KM} ].    (R8.2)
R8.3 Toeplitz Matrix
The N \times N matrix U is a Toeplitz matrix if all entries along each line parallel to the main diagonal are the same. For example,

    U = [ a_0  a_1  a_2  a_3 ]
        [ a_1  a_0  a_1  a_2 ]
        [ a_2  a_1  a_0  a_1 ]
        [ a_3  a_2  a_1  a_0 ]

is a 4 \times 4 Toeplitz matrix.
R8.4 Circulant Matrix
The N \times N matrix U is a circulant matrix if each row equals the right circular shift of the previous row by one entry. For example,

    U = [ a_0  a_1  a_2  a_3 ]
        [ a_3  a_0  a_1  a_2 ]
        [ a_2  a_3  a_0  a_1 ]
        [ a_1  a_2  a_3  a_0 ]

is a 4 \times 4 circulant matrix.
R8.5 Determinant
If the 2 \times 2 matrix U is

    U = [ a_{11}  a_{12} ]
        [ a_{21}  a_{22} ],    (R8.3)

then the determinant of U is given by

det(U) = a_{11} a_{22} - a_{12} a_{21}.    (R8.4)

Example R8.1. The determinant of the matrix

    U = [ 3  4 ]
        [ 1  2 ]

is det(U) = 3 \cdot 2 - 4 \cdot 1 = 6 - 4 = 2.
If the 3 \times 3 matrix U is

    U = [ a_{11}  a_{12}  a_{13} ]
        [ a_{21}  a_{22}  a_{23} ]
        [ a_{31}  a_{32}  a_{33} ],    (R8.5)

then the determinant of U is given by

det(U) = a_{11} a_{22} a_{33} + a_{21} a_{32} a_{13} + a_{12} a_{23} a_{31} - a_{13} a_{22} a_{31} - a_{11} a_{32} a_{23} - a_{12} a_{21} a_{33}.    (R8.6)

Example R8.2. The determinant of the matrix

    U = [ 1  4  -6 ]
        [ 2  1   3 ]
        [ 4  5  -2 ]

is

det(U) = 1 \cdot 1 \cdot (-2) + 2 \cdot 5 \cdot (-6) + 4 \cdot 3 \cdot 4 - (-6) \cdot 1 \cdot 4 - 1 \cdot 5 \cdot 3 - 4 \cdot 2 \cdot (-2)
       = (-2) + (-60) + 48 - (-24) - 15 - (-16) = 11.
If the N \times N matrix U is

    U = [ a_{11}  a_{12}  ...  a_{1N} ]
        [ a_{21}  a_{22}  ...  a_{2N} ]
        [  ...     ...    ...   ...   ]
        [ a_{N1}  a_{N2}  ...  a_{NN} ],    (R8.7)

then the determinant of U is

det(U) = a_{r1} (-1)^{r+1} M_{r1} + a_{r2} (-1)^{r+2} M_{r2} + \cdots + a_{rN} (-1)^{r+N} M_{rN}    (R8.8)
       = a_{1s} (-1)^{1+s} M_{1s} + a_{2s} (-1)^{2+s} M_{2s} + \cdots + a_{Ns} (-1)^{N+s} M_{Ns},    (R8.9)

where r, s = 1 or 2 or ... or N, and M_{rs} is the minor of a_{rs} (see Section R8.6).
Example R8.3. To calculate the determinant of the matrix

    U = [ 1   6  -4   6 ]
        [ 7  -1  -3   6 ]
        [ 2  -3   6   5 ]
        [ 2  -4  -2  -6 ],    (R8.10)

we first calculate the minors

    M_{11} = det [ -1  -3   6 ]
                 [ -3   6   5 ]  = 320,
                 [ -4  -2  -6 ]

    M_{12} = det [ 7  -3   6 ]
                 [ 2   6   5 ]  = -344,
                 [ 2  -2  -6 ]

    M_{13} = det [ 7  -1   6 ]
                 [ 2  -3   5 ]  = 232,
                 [ 2  -4  -6 ]

    M_{14} = det [ 7  -1  -3 ]
                 [ 2  -3   6 ]  = 200.
                 [ 2  -4  -2 ]

The determinant is therefore given by

det(U) = 1 \cdot (-1)^{1+1} \cdot 320 + 6 \cdot (-1)^{1+2} \cdot (-344) + (-4) \cdot (-1)^{1+3} \cdot 232 + 6 \cdot (-1)^{1+4} \cdot 200
       = 320 + 2064 - 928 - 1200 = 256.
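Hand evaluation of determinants of this size is error-prone, so a quick numerical cross-check of Examples R8.2 and R8.3 with numpy:

```python
import numpy as np

U3 = np.array([[1, 4, -6],
               [2, 1,  3],
               [4, 5, -2]])
print(int(round(np.linalg.det(U3))))    # 11   (Example R8.2)

U4 = np.array([[1,  6, -4,  6],
               [7, -1, -3,  6],
               [2, -3,  6,  5],
               [2, -4, -2, -6]])
print(int(round(np.linalg.det(U4))))    # 256  (Example R8.3)
```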
R8.6 Minor and Cofactor
From U given in Eq. (R8.7), the minor M_{rs} of a_{rs} in U is defined to be the determinant of the (N-1) \times (N-1) matrix formed by deleting the r-th row and s-th column of U. For example,

    M_{11} = det [ a_{22}  a_{23}  ...  a_{2N} ]
                 [ a_{32}  a_{33}  ...  a_{3N} ]
                 [  ...     ...    ...   ...   ]
                 [ a_{N2}  a_{N3}  ...  a_{NN} ],

    M_{12} = det [ a_{21}  a_{23}  ...  a_{2N} ]
                 [ a_{31}  a_{33}  ...  a_{3N} ]
                 [  ...     ...    ...   ...   ]
                 [ a_{N1}  a_{N3}  ...  a_{NN} ].

The cofactor C_{rs} of a_{rs} in U given in Eq. (R8.7) is defined to be

C_{rs} = (-1)^{r+s} M_{rs}.    (R8.11)
R8.7 Inverse of a Matrix
By Eq. (R8.7), if det(U) \neq 0, then the inverse of U exists and is uniquely given by

    U^{-1} = (1/det(U)) [ C_{11}  C_{21}  ...  C_{N1} ]
                        [ C_{12}  C_{22}  ...  C_{N2} ]
                        [  ...     ...    ...   ...   ]
                        [ C_{1N}  C_{2N}  ...  C_{NN} ],    (R8.12)

where C_{rs} = (-1)^{r+s} M_{rs} is the cofactor of a_{rs} in U given in Eq. (R8.7).

Example R8.4. In this example we want to find the inverse of the matrix given in Eq. (R8.10). The cofactors are calculated as follows:

C_{11} = (-1)^{1+1} M_{11} = 320,      C_{12} = (-1)^{1+2} M_{12} = 344,
C_{13} = (-1)^{1+3} M_{13} = 232,      C_{14} = (-1)^{1+4} M_{14} = -200,
C_{21} = (-1)^{2+1} M_{21} = -176,     C_{22} = (-1)^{2+2} M_{22} = -210,
C_{23} = (-1)^{2+3} M_{23} = -158,     C_{24} = (-1)^{2+4} M_{24} = 134,
C_{31} = (-1)^{3+1} M_{31} = 240,      C_{32} = (-1)^{3+2} M_{32} = 234,
C_{33} = (-1)^{3+3} M_{33} = 198,      C_{34} = (-1)^{3+4} M_{34} = -142,
C_{41} = (-1)^{4+1} M_{41} = 344,      C_{42} = (-1)^{4+2} M_{42} = 329,
C_{43} = (-1)^{4+3} M_{43} = 239,      C_{44} = (-1)^{4+4} M_{44} = -227.

Therefore, the inverse is

    U^{-1} = (1/256) [  320  -176   240   344 ]
                     [  344  -210   234   329 ]
                     [  232  -158   198   239 ]
                     [ -200   134  -142  -227 ].
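The cofactor-based inverse above can be cross-checked with numpy; a short sketch:

```python
import numpy as np

U = np.array([[1,  6, -4,  6],
              [7, -1, -3,  6],
              [2, -3,  6,  5],
              [2, -4, -2, -6]], dtype=float)
U_inv = np.linalg.inv(U)
print(np.round(256 * U_inv))                 # the matrix of Example R8.4 multiplied by det(U) = 256
print(np.allclose(U @ U_inv, np.eye(4)))     # True
```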
R8.8 Unitary Matrix and Orthogonal Matrix
The N \times N matrix U is said to be unitary if

U^H U = U U^H = kI,    (R8.13)

where k is any nonzero constant and U^H = (U^T)^* is the conjugate transpose of U. Note that a unitary matrix is always invertible, with U^{-1} = \frac{1}{k} U^H.

A real unitary matrix U is also called an orthogonal matrix, i.e.,

U^T U = U U^T = kI,    (R8.14)

where k is any nonzero constant and U^T is the transpose of U. Similarly, an orthogonal matrix is always invertible, with U^{-1} = \frac{1}{k} U^T. If k = 1, then the matrix U is said to be orthonormal.
R8.9 Cramer's Rule

Consider the set of N linear equations

a_{11} x_1 + a_{12} x_2 + \cdots + a_{1N} x_N = b_1,
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2N} x_N = b_2,
    \vdots
a_{N1} x_1 + a_{N2} x_2 + \cdots + a_{NN} x_N = b_N.    (R8.15)

Writing it in matrix form yields

    [ a_{11}  a_{12}  ...  a_{1N} ] [ x_1 ]   [ b_1 ]
    [ a_{21}  a_{22}  ...  a_{2N} ] [ x_2 ]   [ b_2 ]
    [  ...     ...    ...   ...   ] [ ... ] = [ ... ]
    [ a_{N1}  a_{N2}  ...  a_{NN} ] [ x_N ]   [ b_N ].    (R8.16)

Let D be the determinant of the coefficient matrix:

    D = det [ a_{11}  a_{12}  ...  a_{1N} ]
            [ a_{21}  a_{22}  ...  a_{2N} ]
            [  ...     ...    ...   ...   ]
            [ a_{N1}  a_{N2}  ...  a_{NN} ].    (R8.17)

If D \neq 0, then the system (R8.15) has the unique solution

x_1 = \frac{D_1}{D},    x_2 = \frac{D_2}{D},    \ldots,    x_N = \frac{D_N}{D},    (R8.18)

where

    D_1 = det [ b_1  a_{12}  ...  a_{1N} ]          D_2 = det [ a_{11}  b_1  ...  a_{1N} ]
              [ b_2  a_{22}  ...  a_{2N} ]                    [ a_{21}  b_2  ...  a_{2N} ]
              [ ...   ...    ...   ...   ]                    [  ...    ...  ...    ...  ]
              [ b_N  a_{N2}  ...  a_{NN} ],                   [ a_{N1}  b_N  ...  a_{NN} ],    etc.;

in general, D_k is obtained from D by replacing its k-th column with the column of the b_i.
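A small Python sketch of Cramer's rule for a 3-by-3 system (illustrative only, with made-up coefficients; in practice np.linalg.solve is preferable):

```python
import numpy as np

A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [1.0, 0.0,  1.0]])
b = np.array([3.0, 13.0, 4.0])

D = np.linalg.det(A)
x = []
for k in range(A.shape[1]):
    Ak = A.copy()
    Ak[:, k] = b                     # replace the k-th column by b, giving D_k
    x.append(np.linalg.det(Ak) / D)  # x_k = D_k / D   (R8.18)

print(x)
print(np.linalg.solve(A, b))         # same solution
```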