Linear Algebra
http://mandal.faculty.ku.edu/math290/SU7TeenMath290/summ17S6p1S5p1
Length in Rⁿ
We discussed that two parallel arrows with the same length and direction represent the same vector v. In particular, there is exactly one arrow representing v that starts at the origin.
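In coordinates, the length of v = (v1, ..., vn) in Rⁿ is the square root of the sum of the squares of its components. A minimal Python sketch (the helper name "length" is ours, not from the notes):

    import math

    def length(v):
        # ||v|| = sqrt(v1^2 + v2^2 + ... + vn^2)
        return math.sqrt(sum(x * x for x in v))

    print(length([3, 4]))      # 5.0
    print(length([1, 2, 2]))   # 3.0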
Dot Product in Rⁿ
https://www.mathsisfun.com/algebra/vectors-dot-product.html
a · b = |a| × |b| × cos(θ)

This is the dot product of a and b, where:
|a| is the magnitude (length) of vector a
|b| is the magnitude (length) of vector b
θ is the angle between a and b

So we multiply the length of a times the length of b, then multiply by the cosine of the angle between a and b.
We can also calculate the dot product directly from the components:

a · b = ax × bx + ay × by

Example: take a = (−6, 8) and b = (5, 12), so that |a| = 10, |b| = 13, and the angle between them is θ ≈ 59.5°. Using the lengths and the angle:

a · b = |a| × |b| × cos(θ)
a · b = 10 × 13 × cos(59.5°)
a · b = 10 × 13 × 0.5075...
a · b = 65.98... = 66 (rounded)

Using the components gives the same answer:

a · b = ax × bx + ay × by
a · b = −6 × 5 + 8 × 12
a · b = −30 + 96
a · b = 66

Also note that we used −6 for ax, because a is heading in the negative x-direction.
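Both computations are easy to reproduce in Python; a minimal sketch using only the numbers from this example:

    import math

    ax, ay = -6, 8    # a = (-6, 8), so |a| = 10
    bx, by = 5, 12    # b = (5, 12), so |b| = 13

    # components: a . b = ax*bx + ay*by
    print(ax * bx + ay * by)                        # 66

    # lengths and angle: a . b = |a| |b| cos(theta)
    print(10 * 13 * math.cos(math.radians(59.5)))   # about 65.98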
Why cos(θ)?
OK, to multiply two vectors it makes sense to multiply their lengths together, but only when they point in the same direction. So we make one "point in the same direction" as the other by multiplying by cos(θ), which takes the component of one vector along the direction of the other. THEN we multiply!
Right Angles
When two vectors are at right angles to each other the dot product is zero.
a · b = ax × bx + ay × by
a · b = −12 × 12 + 16 × 9
a · b = −144 + 144
a · b = 0
This can be a handy way to find out if two vectors are at right angles.
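That check is easy to script; a tiny sketch using the vectors a = (−12, 16) and b = (12, 9) read off from the products above:

    ax, ay = -12, 16   # a = (-12, 16)
    bx, by = 12, 9     # b = (12, 9)

    # two vectors are at right angles exactly when their dot product is 0
    print(ax * bx + ay * by == 0)   # True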
Example: Sam has measured the end-points of two poles, and wants
to know the angle between them:
a · b = ax × bx + ay × by + az × bz
a · b = 9 × 4 + 2 × 8 + 7 × 10
a · b = 36 + 16 + 70
a · b = 122
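To answer Sam's question we still need the angle, which comes from rearranging a · b = |a| × |b| × cos(θ). A short sketch, assuming the pole vectors a = (9, 2, 7) and b = (4, 8, 10) read off from the products above:

    import math

    a = (9, 2, 7)
    b = (4, 8, 10)

    dot = sum(x * y for x, y in zip(a, b))      # 122
    norm_a = math.sqrt(sum(x * x for x in a))   # sqrt(134)
    norm_b = math.sqrt(sum(x * x for x in b))   # sqrt(180)

    # a . b = |a| |b| cos(theta)  =>  theta = arccos(a.b / (|a| |b|))
    theta = math.acos(dot / (norm_a * norm_b))
    print(math.degrees(theta))                  # about 38.2 degrees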
We have seen how to take the dot product of any two vectors in Rⁿ. In this discussion,
we will generalize this idea to general vector spaces. The dot product had certain
properties that we will use to define this generalized product.
Definition
An inner product on a real vector space V assigns to each pair of vectors u and v a real number (u, v) such that, for all u, v, w in V and all scalars c:
1. (u, u) ≥ 0, and (u, u) = 0 if and only if u = 0
2. (u, v) = (v, u)
3. (u + v, w) = (u, w) + (v, w)
4. (cu, v) = c(u, v)
A vector space with its inner product is called an inner product space.
Notice that the regular dot product satisfies these four properties.
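As a quick sanity check (our addition, not part of the notes), a sketch that spot-checks the four properties for the ordinary dot product on random vectors:

    import random

    def ip(u, v):
        # ordinary dot product as the candidate inner product
        return sum(x * y for x, y in zip(u, v))

    u = [random.random() for _ in range(3)]
    v = [random.random() for _ in range(3)]
    w = [random.random() for _ in range(3)]
    c = random.random()

    print(ip(u, u) >= 0)                                            # property 1
    print(abs(ip(u, v) - ip(v, u)) < 1e-12)                         # property 2
    print(abs(ip([x + y for x, y in zip(u, v)], w)
              - (ip(u, w) + ip(v, w))) < 1e-12)                     # property 3
    print(abs(ip([c * x for x in u], v) - c * ip(u, v)) < 1e-12)    # property 4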
Example
Let V be the vector space consisting of all continuous functions on [0, 1] with the standard + and ·. Then define an inner product by

(f, g) = ∫₀¹ f(t) g(t) dt

For example,

(t, t²) = ∫₀¹ t · t² dt = ∫₀¹ t³ dt = 1/4

The four properties follow immediately from the analogous properties of the definite integral. For example,

(f + g, h) = ∫₀¹ (f(t) + g(t)) h(t) dt = ∫₀¹ f(t) h(t) dt + ∫₀¹ g(t) h(t) dt = (f, h) + (g, h)
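The integral is easy to confirm numerically; a sketch with scipy's quad, assuming the interval [0, 1] used above:

    from scipy.integrate import quad

    def inner(f, g, a=0.0, b=1.0):
        # (f, g) = integral of f(t) g(t) dt over [a, b]
        val, _ = quad(lambda t: f(t) * g(t), a, b)
        return val

    print(inner(lambda t: t, lambda t: t ** 2))   # 0.25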
Example
Formulas other than the standard dot product can also define inner products, provided all four properties hold; each property is verified directly from the defining formula. Property 1, for instance, amounts to checking that (v, v) ≥ 0, with equality exactly when v = 0.
Fourier Series
We will spend the rest of the discussion on a special case of an inner product space. For any inner product space V, we call vectors v and w orthogonal if
(v, w) = 0
The inner product space that we are interested in is the space of continuous functions on [−π, π] with

(f, g) = (1/π) ∫_{−π}^{π} f(t) g(t) dt

Let S = {v1, v2, v3, ...} be the infinite set of vectors (functions) defined by

v1 = 1/√2,  v2 = cos t,  v3 = sin t,  v4 = cos 2t,  v5 = sin 2t,  ...

and recall the standard integral identities: for positive integers m and n,

(1/π) ∫_{−π}^{π} cos(mt) cos(nt) dt = 0   (m ≠ n)
(1/π) ∫_{−π}^{π} sin(mt) sin(nt) dt = 0   (m ≠ n)
(1/π) ∫_{−π}^{π} sin(mt) cos(nt) dt = 0
(1/π) ∫_{−π}^{π} cos²(nt) dt = (1/π) ∫_{−π}^{π} sin²(nt) dt = 1

together with (1/π) ∫_{−π}^{π} (1/√2)² dt = 1 and (1/π) ∫_{−π}^{π} (1/√2) cos(nt) dt = (1/π) ∫_{−π}^{π} (1/√2) sin(nt) dt = 0.
The above identities imply that S is an orthonormal set of vectors. Showing that it is a basis is beyond the level of this discussion; in fact, we have not yet developed the notion of an infinite-dimensional basis.
Recall that the projection of a vector onto an orthonormal set is obtained by summing its inner products with the members of the set:

proj f = (f, v1) v1 + (f, v2) v2 + (f, v3) v3 + ...

Notice that we have adjusted the definition to use the inner product instead of the dot product, and an infinite orthonormal set instead of a finite one. This definition provides us with a way of approximating a continuous function by elements of the orthonormal set. The idea is analogous to that of the Taylor series. When the basis is as above, the inner products (f, vi) are called the Fourier coefficients of the function.
Example:
Calculate the first four nonzero Fourier coefficients for the function
f(t) = |t|
Solution:
Since |t| is an even function, every sine coefficient vanishes. We have

(f, v1) = (1/π) ∫_{−π}^{π} |t| (1/√2) dt = (2/(√2 π)) ∫_0^{π} t dt = π/√2

(f, cos t) = (1/π) ∫_{−π}^{π} |t| cos t dt = (2/π) ∫_0^{π} t cos t dt = −4/π

It turns out that the next two nonzero Fourier coefficients are

(f, cos 3t) = −4/(9π)   and   (f, cos 5t) = −4/(25π)

We can write

|t| ≈ (π/√2)(1/√2) − (4/π) cos t − (4/(9π)) cos 3t − (4/(25π)) cos 5t
    = π/2 − (4/π) cos t − (4/(9π)) cos 3t − (4/(25π)) cos 5t

The graph of this four-term approximation stays close to the graph of y = |t| on [−π, π]; the check below reproduces the coefficients.
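These values can be spot-checked numerically. A sketch with scipy's quad, using the inner product (f, g) = (1/π) ∫_{−π}^{π} f(t) g(t) dt from above:

    import math
    from scipy.integrate import quad

    def ip(f, g):
        # (f, g) = (1/pi) * integral over [-pi, pi] of f(t) g(t) dt
        val, _ = quad(lambda t: f(t) * g(t), -math.pi, math.pi)
        return val / math.pi

    f = abs   # f(t) = |t|
    print(ip(f, lambda t: 1 / math.sqrt(2)))   # pi/sqrt(2) ~  2.2214
    print(ip(f, lambda t: math.cos(t)))        # -4/pi      ~ -1.2732
    print(ip(f, lambda t: math.cos(3 * t)))    # -4/(9 pi)  ~ -0.1415
    print(ip(f, lambda t: math.cos(5 * t)))    # -4/(25 pi) ~ -0.0509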
Orthonormal Bases: Gram–Schmidt
Gram–Schmidt process
https://en.wikipedia.org/wiki/Gram–Schmidt_process
In mathematics, particularly linear algebra and numerical analysis, the Gram–Schmidt process is a method for orthonormalising a set of vectors in an inner product space, most commonly the Euclidean space Rⁿ equipped with the standard inner product. The Gram–Schmidt process takes a finite, linearly independent set S = {v1, ..., vk} for k ≤ n and generates an orthogonal set S′ = {u1, ..., uk} that spans the same k-dimensional subspace of Rⁿ as S.
The method is named after Jørgen Pedersen Gram and Erhard Schmidt, but Pierre-Simon
Laplace had been familiar with it before Gram and Schmidt.[1] In the theory of Lie group
decompositions it is generalized by the Iwasawa decomposition.
The application of the Gram–Schmidt process to the column vectors of a full
column rank matrix yields the QR decomposition (it is decomposed into an orthogonal and
a triangular matrix).
Figure: the modified Gram–Schmidt process being executed on three linearly independent, non-orthogonal vectors of a basis for R³.
The Gram–Schmidt Process
Define the projection of v onto a nonzero vector u by

proj_u(v) = ((v, u) / (u, u)) u

and set

u1 = v1,                              e1 = u1 / ‖u1‖
u2 = v2 − proj_{u1}(v2),              e2 = u2 / ‖u2‖
u3 = v3 − proj_{u1}(v3) − proj_{u2}(v3),   e3 = u3 / ‖u3‖
...
uk = vk − proj_{u1}(vk) − ... − proj_{u(k−1)}(vk),   ek = uk / ‖uk‖

The sequence u1, ..., uk is the required system of orthogonal vectors, and the normalized vectors e1, ..., ek form an orthonormal set. The calculation of the sequence u1, ..., uk is known as Gram–Schmidt orthogonalization, while the calculation of the sequence e1, ..., ek is known as Gram–Schmidt orthonormalization, as the vectors are normalized.
To check that these formulas yield an orthogonal sequence, first compute (u1, u2) by substituting the above formula for u2: we get zero. Then use this to compute (u1, u3), again by substituting the formula for u3: we get zero. The general proof proceeds by mathematical induction.
Geometrically, this method proceeds as follows: to compute ui, it projects vi orthogonally
onto the subspace U generated by u1, ..., ui−1, which is the same as the subspace generated
by v1, ..., vi−1. The vector ui is then defined to be the difference between vi and this projection,
guaranteed to be orthogonal to all of the vectors in the subspace U.
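Putting the formulas and this geometric description together, here is a compact Python sketch with numpy (the function name gram_schmidt is ours); it subtracts from each incoming vector its projections onto the orthonormal vectors found so far, then normalizes:

    import numpy as np

    def gram_schmidt(vectors, tol=1e-12):
        # Orthonormalize 'vectors' with the classical process: subtract
        # from each v its projections onto the e's found so far, then
        # normalize. Near-zero remainders signal linear dependence and
        # are dropped (see the discussion below).
        basis = []
        for v in vectors:
            u = np.asarray(v, dtype=float)
            for e in basis:
                u = u - np.dot(u, e) * e   # u = u - proj_e(u)
            norm = np.linalg.norm(u)
            if norm > tol:
                basis.append(u / norm)
        return basis

    for e in gram_schmidt([[3.0, 1.0], [2.0, 2.0]]):
        print(e)   # two orthonormal vectors spanning the plane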
The Gram–Schmidt process also applies to a linearly independent countably
infinite sequence {vi}i. The result is an orthogonal (or orthonormal) sequence {ui}i such that
for natural number n: the algebraic span of v1, ..., vn is the same as that of u1, ..., un.
If the Gram–Schmidt process is applied to a linearly dependent sequence, it outputs
the 0 vector on the ith step, assuming that vi is a linear combination of v1, ..., vi−1. If an
orthonormal basis is to be produced, then the algorithm should test for zero vectors in the
output and discard them because no multiple of a zero vector can have a length of 1. The
number of vectors output by the algorithm will then be the dimension of the space spanned
by the original input vectors.
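With the gram_schmidt sketch above, a linearly dependent input shows this behaviour: the dependent vector orthogonalizes to (numerically) zero and is dropped, so the number of outputs equals the dimension of the span:

    # third vector = first + second, so it is linearly dependent
    deps = [[1, 0, 0], [1, 1, 0], [2, 1, 0]]
    print(len(gram_schmidt(deps)))   # 2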
To verify the result, note that if the dot product of two vectors is 0 then they are orthogonal. For non-zero vectors, we can then normalize each vector by dividing it by its length, as shown above:

ek = uk / ‖uk‖