
Orthogonal and Orthonormal Bases With Homework #2

This handout discusses orthogonal and orthonormal bases of a finite-dimensional real vector
space. (Later, we will have to consider the case of vector spaces over the complex numbers.) To be
definite, we will consider the vector space Rn , where n is a positive integer. You can consider the
elements of this space to be signals and transformations such as the k-level Haar Transform Hk to
be linear transformations from Rn to itself.
Some review of linear algebra: Recall that a set {x1 , x2 , . . . , xk } of elements of Rn is said to
be linearly independent if whenever c1 x1 + c2 x2 + · · · + ck xk = 0 for some c1 , c2 , . . . , ck ∈ R, it
follows that c1 = c2 = · · · = ck = 0. That is, the only way for a linear combination of x1 , x2 , . . . , xk
to be zero is for all the coefficients in the linear combination to be zero. If {x1 , x2 , . . . , xk } is a
linearly independent set of vectors in Rn , then k ≤ n, and k = n if and only if {x1 , x2 , . . . , xk } is
a basis of Rn .
Given a basis B = {x1 , x2 , . . . , xn } of Rn , any element v ∈ Rn can be written uniquely as a
linear combination v = c1 x1 + c2 x2 + · · · + cn xn , where the ci are real numbers. The numbers ci
are called the components of v relative to the basis B.
Finally, we need some facts about scalar products: If x = (x1 , x2 , . . . , xn ) and y = (y1 , y2 , . . . , yn ),
then the scalar product of x and y is defined to be x · y = x1 y1 + x2 y2 + · · · + xn yn . An important
fact about scalar products is that they are linear in both arguments. Thus, for example, if s, t ∈ R
and x, y, z ∈ Rn , then x · (sy + tz) = s(x · y) + t(x · z). The scalar product is also known as the
“dot product,” because of the usual notation. However, there is another common notation for scalar
product that we might use sometimes: ⟨x, y⟩. A third common term for the scalar product is “inner
product.”
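The linearity property is easy to verify numerically. Here is a quick sanity check using NumPy (the specific vectors and scalars are arbitrary choices for illustration):

```python
import numpy as np

# Check bilinearity of the scalar product:
#   x . (s*y + t*z) = s*(x . y) + t*(x . z)
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, -1.0, 0.0])
z = np.array([2.0, 2.0, -5.0])
s, t = 3.0, -2.0

lhs = np.dot(x, s * y + t * z)
rhs = s * np.dot(x, y) + t * np.dot(x, z)
assert abs(lhs - rhs) < 1e-12
```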

Definition: Two vectors x and y are said to be orthogonal if x · y = 0, that is, if their scalar
product is zero.

Theorem: Suppose x1 , x2 , . . . , xk are non-zero vectors in Rn that are pairwise orthogonal (that
is, xi · xj = 0 for all i ≠ j). Then the set {x1 , x2 , . . . , xk } is a linearly independent set of vectors.
Proof: Suppose that we have c1 x1 + c2 x2 + · · · + ck xk = 0 for some scalars c1 , c2 , . . . , ck ∈ R.
Let i be any integer in the range 1 ≤ i ≤ k. We must show that ci = 0. Now, consider the dot
product of xi with c1 x1 + c2 x2 + · · · + ck xk . Since c1 x1 + c2 x2 + · · · + ck xk = 0, we have:

0 = xi · 0
  = xi · (c1 x1 + c2 x2 + · · · + ck xk )
  = c1 (xi · x1 ) + c2 (xi · x2 ) + · · · + ck (xi · xk )
  = ci (xi · xi )

where the last equality follows because xi · xj = 0 for i ≠ j. Now, since xi is not the zero vector,
we know that xi · xi ≠ 0. So the fact that 0 = ci (xi · xi ) implies ci = 0, as we wanted to show.
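The theorem can also be checked numerically for a concrete example. The following sketch (the specific vectors are arbitrary) verifies pairwise orthogonality and then confirms linear independence by checking that the matrix with these vectors as rows has full rank:

```python
import numpy as np

# Three pairwise-orthogonal, non-zero vectors in R^4.
x1 = np.array([1.0, 1.0, 0.0, 0.0])
x2 = np.array([1.0, -1.0, 0.0, 0.0])
x3 = np.array([0.0, 0.0, 2.0, 0.0])

# Verify pairwise orthogonality: xi . xj = 0 for i != j.
assert np.dot(x1, x2) == 0 and np.dot(x1, x3) == 0 and np.dot(x2, x3) == 0

# Linear independence: the 3 x 4 matrix with these rows has rank 3.
assert np.linalg.matrix_rank(np.vstack([x1, x2, x3])) == 3
```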

Corollary: Suppose that B = {x1 , x2 , . . . , xn } is a set of n non-zero vectors in Rn that are
pairwise orthogonal. Then B is a basis of Rn .
Proof: This follows simply because any set of n linearly independent vectors in Rn is a basis.

Definition: The length or norm of a vector x = (x1 , x2 , . . . , xn ) ∈ Rn is defined to be
|x| = √(x · x). (Note then that x · x = |x|^2 .)
Definition: A basis B = {x1 , x2 , . . . , xn } of Rn is said to be an orthogonal basis if the elements
of B are pairwise orthogonal, that is, xi · xj = 0 whenever i ≠ j. If in addition xi · xi = 1 for all i,
then the basis is said to be an orthonormal basis. Thus, an orthonormal basis is a basis consisting
of unit-length, mutually orthogonal vectors.

We introduce the notation δij for integers i and j, defined by δij = 0 if i ≠ j and δii = 1. Thus,
a basis B = {x1 , x2 , . . . , xn } is orthonormal if and only if xi · xj = δij for all i, j.
Given a vector v ∈ Rn and an orthonormal basis B = {x1 , x2 , . . . , xn } of Rn , it is very easy to
find the components of v with respect to the basis B. In fact, the ith component of v is simply
v · xi . This is the content of the following theorem:

Theorem: Suppose that B = {x1 , x2 , . . . , xn } is an orthonormal basis of Rn and v is any
vector in Rn . Then

    v = (v · x1 )x1 + (v · x2 )x2 + · · · + (v · xn )xn
Proof: Since B is a basis, we can write v = c1 x1 + c2 x2 + · · · + cn xn for some unique
c1 , c2 , . . . , cn ∈ R. We just have to show that ci = v · xi for each i. But

    v · xi = (c1 x1 + c2 x2 + · · · + cn xn ) · xi
           = c1 (x1 · xi ) + c2 (x2 · xi ) + · · · + cn (xn · xi )
           = c1 δ1i + c2 δ2i + · · · + cn δni
           = ci
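Here is a small numerical illustration of the theorem: the components of a vector relative to an orthonormal basis of R^2 are just dot products, and summing them against the basis vectors reconstructs the original vector. (The particular basis, a 45-degree rotation of the standard one, is an arbitrary choice.)

```python
import numpy as np

# An orthonormal basis of R^2; any orthonormal basis works.
x1 = np.array([1.0, 1.0]) / np.sqrt(2)
x2 = np.array([1.0, -1.0]) / np.sqrt(2)

v = np.array([3.0, 5.0])

# Components relative to the basis are simply dot products: ci = v . xi
c1, c2 = np.dot(v, x1), np.dot(v, x2)

# Reconstruction: v = (v . x1) x1 + (v . x2) x2
v_rec = c1 * x1 + c2 * x2
assert np.allclose(v_rec, v)
```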

We can look at this in terms of linear transformations. Given any basis B = {x1 , x2 , . . . , xn } of
Rn , we can always define a linear transformation TB : Rn → Rn by TB (v) = (v · x1 , v · x2 , . . . , v · xn ).
Now, suppose we are given (t1 , t2 , . . . , tn ) ∈ Rn and we would like to find a vector v such that
TB (v) = (t1 , t2 , . . . , tn ). That is, we are trying to compute TB^{-1} (t1 , t2 , . . . , tn ). We know from the
definition of TB that v · xi = ti for all i. For a general basis this information does not make
it easy to recover v. However, for an orthonormal basis, v · xi is just the ith component of v
relative to the basis. That is, we can write v = (v · x1 )x1 + · · · + (v · xn )xn = t1 x1 + t2 x2 + · · · + tn xn .
This could also be written as TB^{-1} (t1 , t2 , . . . , tn ) = t1 x1 + t2 x2 + · · · + tn xn . Another way of
looking at this is to say that if we know the transformed vector TB (v) of an unknown vector v, we
have a simple explicit formula for reconstructing v.
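In matrix terms, TB is multiplication by the matrix whose rows are the basis vectors, and for an orthonormal basis the inverse is just the transpose. A minimal sketch (using the same arbitrary 45-degree basis of R^2 as an assumption):

```python
import numpy as np

# Rows of Q are the orthonormal basis vectors x1, x2.
Q = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)

def T(v):
    # T_B(v) = (v . x1, ..., v . xn)
    return Q @ v

def T_inv(t):
    # For an orthonormal basis, T_B^{-1}(t) = t1 x1 + ... + tn xn = Q^T t
    return Q.T @ t

v = np.array([2.0, -7.0])
assert np.allclose(T_inv(T(v)), v)
```

The key design point is that no linear system needs to be solved: orthonormality makes Q^T Q the identity, so reconstruction is a single matrix-vector product.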
If we apply this to the k-level Haar Transform Hk : RN → RN , we know that the components
of Hk (f ) can be computed as scalar products of f with the vectors V^k_i for i = 1, 2, . . . , N/2^k and
W^j_i for j = 1, 2, . . . , k and i = 1, 2, . . . , N/2^j :

    Hk (f ) = ( a^k_1 , a^k_2 , . . . , a^k_{N/2^k} | d^k_1 , d^k_2 , . . . , d^k_{N/2^k} |
                d^{k-1}_1 , d^{k-1}_2 , . . . , d^{k-1}_{N/2^{k-1}} | · · · | d^1_1 , d^1_2 , . . . , d^1_{N/2} )

where

    a^k_i = f · V^k_i , for i = 1, 2, . . . , N/2^k

    d^j_i = f · W^j_i , for j = 1, 2, . . . , k and i = 1, 2, . . . , N/2^j
Now, it turns out that this set of vectors V^k_i and W^j_i forms an orthonormal basis of RN .
Knowing this, we know automatically how to reconstruct a signal f from its k-level Haar transform.
Namely, if f is a signal such that

    Hk (f ) = ( a^k_1 , a^k_2 , . . . , a^k_{N/2^k} | d^k_1 , d^k_2 , . . . , d^k_{N/2^k} |
                d^{k-1}_1 , d^{k-1}_2 , . . . , d^{k-1}_{N/2^{k-1}} | · · · | d^1_1 , d^1_2 , . . . , d^1_{N/2} )

then
    f = ∑_{i=1}^{N/2^k} a^k_i V^k_i + ∑_{j=1}^{k} ∑_{i=1}^{N/2^j} d^j_i W^j_i

(Admittedly, the naming and indexing of the basis vectors is sort of strange, but you shouldn’t
let this obscure the fundamental simplicity of the result.)
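For the 1-level case, the scalar products with V^1_i and W^1_i reduce to normalized sums and differences of adjacent pairs. The helper functions below are a hypothetical sketch (not the textbook's code), assuming the orthonormal Haar convention with 1/√2 factors:

```python
import numpy as np

def haar_1level(f):
    """1-level Haar transform: a_i = f . V^1_i, d_i = f . W^1_i."""
    f = np.asarray(f, dtype=float)
    a = (f[0::2] + f[1::2]) / np.sqrt(2)   # trend (average) coefficients
    d = (f[0::2] - f[1::2]) / np.sqrt(2)   # fluctuation (difference) coefficients
    return a, d

def haar_1level_inverse(a, d):
    """Reconstruction: f = sum_i a_i V^1_i + sum_i d_i W^1_i."""
    f = np.empty(2 * len(a))
    f[0::2] = (a + d) / np.sqrt(2)
    f[1::2] = (a - d) / np.sqrt(2)
    return f

f = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_1level(f)
assert np.allclose(haar_1level_inverse(a, d), f)
```

Because the V^1_i and W^1_i are orthonormal, the inverse is exact with no system-solving, which is precisely the point of the discussion above.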
The fact that we can so easily reconstruct a signal given its transform shows why it has been
considered so useful to construct orthonormal bases of wavelets (and scaling functions). The case of
Haar wavelets is relatively straightforward, and its orthonormality has been known and understood
for a century. Finding other systems of wavelets that have the orthonormality property has not
been so easy. In the 1980s, a general method for constructing orthonormal bases of wavelets was
discovered. This is one of the breakthroughs that has incited a lot of the recent interest in wavelet
theory.

Homework Exercises — due Friday, February 3

1. Suppose that B = {x1 , x2 , . . . , xn } is an orthogonal basis of Rn , but not necessarily an orthonormal
   basis. Suppose that v ∈ Rn . Find a simple formula for the components of v relative to the
   basis B. That is, suppose that v = c1 x1 + c2 x2 + · · · + cn xn . Find a formula for ci . (The
   formula should use scalar products.) Justify your answer.

2. Show that the vectors V^k_i for i = 1, 2, . . . , N/2^k and W^j_i for j = 1, 2, . . . , k and
   i = 1, 2, . . . , N/2^j form an orthonormal basis of RN . This will take some work. A useful
   concept is the support of a vector v ∈ RN . The support of v = (v1 , v2 , . . . , vN ) is defined to
   be the set of indices for which vi is non-zero. That is, support(v) = {i | vi ≠ 0}. Note that
   given any two vectors in the set that you are considering, it is the case that either their
   supports do not overlap at all or the support of one of the two vectors is contained inside
   the support of the other vector. (What can you say about the dot product of two vectors
   whose supports do not overlap?)

The remaining exercises use the Java application HaarTransformDemo.jar. You can find it in
the directory /classes/s06/math371 on the computers in the Math/CS lab, or you can download
it from a link on the course web page. The program can draw Haar k-level transforms of an input
signal of length 2n . The number of points can be selected using a pop-up menu at the bottom of
the program window. The program can also do full or partial reconstructions of the input signal
from the full n-level Haar transform, like those found in Figure 1.3 in the textbook. (A Level 0
reconstruction uses only the average a^n_1 of the input signal; a Level 1 reconstruction adds in
information from the level-n difference d^n_1 ; Level 2 adds in the level-(n − 1) differences d^{n-1}_1
and d^{n-1}_2 , and so on. The Level n reconstruction should be equal to the original signal.) A
pop-up menu at the bottom
of the window determines which of the possible output signals is displayed.
You can change the Input Signal by dragging points. When you use the left mouse button,
nearby points are dragged along. If you drag with the right-mouse button, only a single point is
moved. You can also specify the input signal as a function f (n).

3. Start the program. Drag the center point of the input signal upwards to create an input signal
   with one “hump” (or use the formula 3/( ((n-64)/16)^2 + 1 )^2 for the input signal). Describe
   and explain in detail the 1-level and 2-level Haar Transforms of this signal.

4. Still using 128 points, use the formula sin(pi*n/32) for the input signal. Describe and explain
the Level 0 through Level 7 Reconstructions of this signal.

5. Switch to 1024 points. Right-click a point near the center of the input signal. Discuss the Haar
Transforms and Reconstructions of this signal.
