Linear Algebra


Length and Dot Product in Rⁿ

http://mandal.faculty.ku.edu/math290/SU7TeenMath290/summ17S6p1S5p1

Length in Rⁿ

Length and Angle in the Plane R²

We discussed that two parallel arrows with equal length represent the same vector v. In particular,
there is one arrow representing v that starts at the origin.
Dot Product in Rⁿ
https://www.mathsisfun.com/algebra/vectors-dot-product.html

A vector has magnitude (how long it is) and direction:

Here are two vectors:

They can be multiplied using the "Dot Product".


Calculating
The Dot Product gives a number as an answer (a "scalar", not a vector).

The Dot Product is written using a central dot:

a·b
This means the Dot Product of a and b

We can calculate the Dot Product of two vectors this way:

a · b = |a| × |b| × cos(θ)

Where:
|a| is the magnitude (length) of vector a
|b| is the magnitude (length) of vector b
θ is the angle between a and b

So we multiply the length of a times the length of b, then multiply by the cosine of the
angle between a and b

OR we can calculate it this way:

a · b = a_x × b_x + a_y × b_y

So we multiply the x's, multiply the y's, then add.

Both methods work!


Example: Calculate the dot product of the vectors a = (−6, 8) and b = (5, 12), which have |a| = 10, |b| = 13, and an angle of about 59.5° between them:

a · b = |a| × |b| × cos(θ)

a · b = 10 × 13 × cos(59.5°)
a · b = 10 × 13 × 0.5075...
a · b = 65.98... = 66 (rounded)

or we can calculate it this way:

a · b = a_x × b_x + a_y × b_y

a · b = -6 × 5 + 8 × 12
a · b = -30 + 96
a · b = 66

Both methods came up with the same result (after rounding)

Also note that we used −6 for a_x (the vector a is heading in the negative x-direction).

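As a quick numerical check of the example above, here is a minimal Python sketch (standard library only; the variable names are just illustrative) that computes a · b for a = (−6, 8) and b = (5, 12) both ways, recovering the angle from the components rather than assuming 59.5°:

import math

# Example vectors from above: a = (-6, 8), b = (5, 12)
ax, ay = -6.0, 8.0
bx, by = 5.0, 12.0

# Method 1: multiply matching components and add
dot_components = ax * bx + ay * by                 # -30 + 96 = 66

# Method 2: |a| * |b| * cos(theta), with theta taken from the components
mag_a = math.hypot(ax, ay)                         # 10.0
mag_b = math.hypot(bx, by)                         # 13.0
theta = math.atan2(by, bx) - math.atan2(ay, ax)    # angle between a and b
dot_lengths = mag_a * mag_b * math.cos(theta)

print(dot_components)   # 66.0
print(dot_lengths)      # 66.0 (up to floating-point rounding)

Both methods agree, as the two formulas promise.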

Why cos(θ)?
OK, to multiply two vectors it makes sense to multiply their lengths together, but only
when they point in the same direction.

So we make one "point in the same direction" as the other by multiplying by cos(θ).
Then we multiply!

It works exactly the same if we "projected" b alongside a and then multiplied, because it does not
matter in which order we do the multiplication:

|a| × |b| × cos(θ) = |a| × cos(θ) × |b|

Right Angles
When two vectors are at right angles to each other the dot product is zero.

Example: calculate the Dot Product for a = (−12, 16) and b = (12, 9), which meet at a right angle:

a · b = |a| × |b| × cos(θ)

a · b = |a| × |b| × cos(90°)


a · b = |a| × |b| × 0
a · b = 0
or we can calculate it this way:

a · b = a_x × b_x + a_y × b_y

a · b = (−12) × 12 + 16 × 9
a · b = −144 + 144
a · b = 0

This can be a handy way to find out if two vectors are at right angles.
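To make that test concrete, here is a short Python sketch (the helper name is_perpendicular is just an illustrative choice) applying it to the vectors above:

def is_perpendicular(a, b, tol=1e-9):
    # Two vectors are at right angles exactly when their dot product is zero
    # (here: numerically close to zero).
    dot = sum(x * y for x, y in zip(a, b))
    return abs(dot) < tol

print(is_perpendicular((-12, 16), (12, 9)))   # True:  -144 + 144 = 0
print(is_perpendicular((-6, 8), (5, 12)))     # False: the earlier example gave 66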

Three or More Dimensions


This all works fine in 3 (or more) dimensions, too.

And can actually be very useful!

Example: Sam has measured the end-points of two poles, finds the vectors along them to be
a = (4, 8, 10) and b = (9, 2, 7), and wants to know the angle between them:

We have 3 dimensions, so don't forget the z-components:

a · b = a_x × b_x + a_y × b_y + a_z × b_z

a · b = 4 × 9 + 8 × 2 + 10 × 7
a · b = 36 + 16 + 70
a · b = 122

Now for the other formula:

a · b = |a| × |b| × cos(θ)

But what is |a|? It is the magnitude, or length, of the vector a. We can use Pythagoras:

 |a| = √(4² + 8² + 10²)
 |a| = √(16 + 64 + 100)
 |a| = √180

Likewise for |b|:

 |b| = √(9² + 2² + 7²)
 |b| = √(81 + 4 + 49)
 |b| = √134

And we know from the calculation above that a · b = 122, so:

a · b = |a| × |b| × cos(θ)

122 = √180 × √134 × cos(θ)


cos(θ) = 122 / (√180 × √134)
cos(θ) = 0.7855...
θ = cos⁻¹(0.7855...) = 38.2...°
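The same calculation can be scripted; the sketch below (plain Python) reproduces the angle for the pole vectors a = (4, 8, 10) and b = (9, 2, 7) used above:

import math

a = (4.0, 8.0, 10.0)
b = (9.0, 2.0, 7.0)

dot = sum(x * y for x, y in zip(a, b))       # 122.0
mag_a = math.sqrt(sum(x * x for x in a))     # sqrt(180)
mag_b = math.sqrt(sum(x * x for x in b))     # sqrt(134)

theta = math.acos(dot / (mag_a * mag_b))
print(math.degrees(theta))                   # about 38.2 degrees
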
Inner Product Spaces
https://ltcconline.net/greenl/courses/203/Vectors/innerProduct.htm

Definitions and Examples

We have seen how to take the dot product of any two vectors in Rⁿ. In this discussion,
we will generalize this idea to general vector spaces. The dot product had certain
properties that we will use to define this generalized product.

Definition

Let V be a vector space, let u, v, and w be vectors in V, and let c be a constant. Then
an inner product ( , ) on V is a function taking pairs of vectors to real numbers and
satisfying the following properties.

1. (u, u) ≥ 0, with equality if and only if u = 0.

2. (u, v) = (v, u)

3. (u + v, w) = (u, w) + (v, w)

4. (cu, v) = (u, cv) = c(u, v)

A vector space with its inner product is called an inner product space.

Notice that the regular dot product satisfies these four properties.

Example

Let V be the vector space consisting of all continuous functions on a fixed interval [a, b], with the
standard + and *. Then define an inner product by integrating the product of two functions:

(f, g) = ∫ f(t) g(t) dt, with the integral taken over [a, b]

The four properties follow immediately from the analogous properties of the definite integral.

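As a rough numerical illustration of this inner product (the interval [0, 1] and the two polynomial functions are assumptions made for the sketch, not taken from the text), the integral can be approximated in Python:

def inner(f, g, a=0.0, b=1.0, n=10000):
    # Approximate (f, g) = integral of f(t) g(t) dt over [a, b]
    # using a simple midpoint rule.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda t: t          # f(t) = t
g = lambda t: t ** 2     # g(t) = t^2

print(inner(f, g))        # about 0.25, the exact value of the integral of t^3 on [0, 1]
print(inner(f, f) >= 0)   # True: property 1 holds, since the integrand is a square
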
Example

Let V = R² and consider the function

((a,b), (c,d)) = ac + 2bd

Then this function defines an inner product. We will prove Property 1. We have

((a, b), (a, b)) = a² + 2b² ≥ 0

And equality occurs if and only if both a and b are 0.
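A quick numerical check of the four properties for this weighted product (the sample vectors and the constant c are arbitrary choices made for the sketch):

def ip(u, v):
    # The inner product on R^2 from the example: ((a, b), (c, d)) = ac + 2bd
    return u[0] * v[0] + 2 * u[1] * v[1]

u, v, w, c = (1.0, -3.0), (2.0, 5.0), (-4.0, 0.5), 7.0

print(ip(u, u) >= 0)                                               # property 1
print(ip(u, v) == ip(v, u))                                        # property 2
print(ip((u[0] + v[0], u[1] + v[1]), w) == ip(u, w) + ip(v, w))    # property 3
print(ip((c * u[0], c * u[1]), v) == c * ip(u, v))                 # property 4

All four lines print True (the chosen values happen to avoid floating-point surprises).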

Fourier Series

We will spend the rest of the discussion on a special case of an inner product space. For any
inner product space V we call vectors v and w orthogonal if

(v, w) = 0

And we define the length of v by

||v|| = √(v, v)

We will call a basis S for a vector space V orthonormal if every element of S is a unit
vector (length one) and any two distinct elements are orthogonal.

The inner product space that we are interested in is the space of continuous functions
with

Let S = {v1, v2, v3, ...} be the infinite set of vectors (functions) defined by

We leave it as an exercise to show that

and for m and n distinct positive integers,

and
The above identities imply that S is an orthonormal set of vectors. Showing that it is a basis is
beyond the level of this discussion; in fact, we have not yet made sense of infinite-dimensional
bases.

Recall that

proj_W v = (v, w1) w1 + (v, w2) w2 + (v, w3) w3 + ...

Notice that we have adjusted the definition to use the inner product instead of the dot
product and for an infinite orthonormal set instead of a finite set. This definition
provides us with a way of approximating a continuous function with elements of the
orthonormal set. This idea is analogous to the idea of the Taylor series. When the basis is
as above, the inner products are called the Fourier coefficients of the function.

Example:

Calculate the first four nonzero Fourier coefficients for the function

f(t) = |t|

Solution:

We have

It turns out that the next two nonzero Fourier coefficients are

We can write
The graphs of y = |t| and the above equation can be compared to see how good the approximation is.
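The exact interval and orthonormal functions used above are not reproduced here, so the sketch below makes a common assumption: the interval [−π, π], the inner product (f, g) = (1/π) ∫ f(t) g(t) dt, and the orthonormal functions 1/√2, cos t, sin t, cos 2t, ... . Under that assumption it estimates the first few Fourier coefficients of f(t) = |t| numerically:

import math

def inner(f, g, n=20000):
    # Assumed inner product: (f, g) = (1/pi) * integral of f(t) g(t) dt over [-pi, pi],
    # approximated with a midpoint rule.
    h = 2 * math.pi / n
    total = sum(f(-math.pi + (i + 0.5) * h) * g(-math.pi + (i + 0.5) * h) for i in range(n))
    return total * h / math.pi

f = abs   # f(t) = |t|

# Assumed orthonormal functions: 1/sqrt(2), cos t, sin t, cos 2t, cos 3t, ...
basis = [("1/sqrt(2)", lambda t: 1 / math.sqrt(2)),
         ("cos t",     lambda t: math.cos(t)),
         ("sin t",     lambda t: math.sin(t)),
         ("cos 2t",    lambda t: math.cos(2 * t)),
         ("cos 3t",    lambda t: math.cos(3 * t))]

for name, g in basis:
    print(name, round(inner(f, g), 4))

Only the constant term and the odd cosines give nonzero coefficients (π/√2 ≈ 2.2214, −4/π ≈ −1.2732, −4/(9π) ≈ −0.1415), which reflects the fact that |t| is an even function.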
Orthonormal Bases: Gram–Schmidt
Gram–Schmidt process
https://en.wikipedia.org/wiki/Gram–Schmidt_process

The first two steps of the Gram–Schmidt process

In mathematics, particularly linear algebra and numerical analysis, the Gram–Schmidt process is a
method for orthonormalising a set of vectors in an inner product space, most commonly
the Euclidean space Rⁿ equipped with the standard inner product. The Gram–Schmidt process takes
a finite, linearly independent set S = {v1, ..., vk} for k ≤ n and generates an orthogonal set S′ = {u1,
..., uk} that spans the same k-dimensional subspace of Rⁿ as S.
The method is named after Jørgen Pedersen Gram and Erhard Schmidt, but Pierre-Simon
Laplace had been familiar with it before Gram and Schmidt.[1] In the theory of Lie group
decompositions it is generalized by the Iwasawa decomposition.
The application of the Gram–Schmidt process to the column vectors of a full
column rank matrix yields the QR decomposition (it is decomposed into an orthogonal and
a triangular matrix).

The Gram–Schmidt process

The modified Gram–Schmidt process being executed on three linearly independent, non-orthogonal vectors of a basis
for R³.

The Gram–Schmidt process then works as follows. Define the projection of a vector v onto a
nonzero vector u by

proj_u(v) = ((v, u) / (u, u)) u

Then set

u1 = v1
u2 = v2 − proj_u1(v2)
u3 = v3 − proj_u1(v3) − proj_u2(v3)
...
uk = vk − proj_u1(vk) − proj_u2(vk) − ... − proj_u(k−1)(vk)

and, for each k, ek = uk / ||uk||.

The sequence u1, ..., uk is the required system of orthogonal vectors, and the normalized
vectors e1, ..., ek form an orthonormal set. The calculation of the sequence u1, ..., uk is known
as Gram–Schmidt orthogonalization, while the calculation of the sequence e1, ..., ek is known
as Gram–Schmidt orthonormalization as the vectors are normalized.
To check that these formulas yield an orthogonal sequence, first compute (u1, u2) by
substituting the above formula for u2: we get zero. Then use this to compute (u1, u3), again
by substituting the formula for u3: we get zero. The general proof proceeds by mathematical
induction.
Geometrically, this method proceeds as follows: to compute ui, it projects vi orthogonally
onto the subspace U generated by u1, ..., ui−1, which is the same as the subspace generated
by v1, ..., vi−1. The vector ui is then defined to be the difference between vi and this projection,
guaranteed to be orthogonal to all of the vectors in the subspace U.
The Gram–Schmidt process also applies to a linearly independent countably
infinite sequence {vi}i. The result is an orthogonal (or orthonormal) sequence {ui}i such that
for natural number n: the algebraic span of v1, ..., vn is the same as that of u1, ..., un.
If the Gram–Schmidt process is applied to a linearly dependent sequence, it outputs
the 0 vector on the ith step, assuming that vi is a linear combination of v1, ..., vi−1. If an
orthonormal basis is to be produced, then the algorithm should test for zero vectors in the
output and discard them because no multiple of a zero vector can have a length of 1. The
number of vectors output by the algorithm will then be the dimension of the space spanned
by the original input.
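A minimal Python sketch of the process just described, using the standard dot product on Rⁿ (plain lists of floats, no external libraries):

def gram_schmidt(vectors):
    # Classical Gram-Schmidt: subtract from each v its projections onto the
    # previously computed u's, then normalize the nonzero results.
    def dot(x, y):
        return sum(a * b for a, b in zip(x, y))

    us, es = [], []
    for v in vectors:
        u = list(v)
        for prev in us:
            coeff = dot(v, prev) / dot(prev, prev)
            u = [a - coeff * b for a, b in zip(u, prev)]
        norm = dot(u, u) ** 0.5
        if norm > 1e-12:          # skip (near-)zero vectors from dependent inputs
            us.append(u)
            es.append([a / norm for a in u])
    return us, es

us, es = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
print(us)   # orthogonal vectors spanning the same subspace as the input
print(es)   # the corresponding orthonormal vectors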

Example: Euclidean space


Consider the following set of vectors in R² (with the conventional inner product)

Now perform Gram–Schmidt to obtain an orthogonal set of vectors:

We check that the vectors u1 and u2 are indeed orthogonal:

noting that if the dot product of two vectors is 0 then they are orthogonal.
For non-zero vectors, we can then normalize them by dividing by their lengths, as shown above:
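Since the specific vectors of this example are not reproduced above, here is a hedged sketch using an assumed pair v1 = (3, 1) and v2 = (2, 2); it carries out the two Gram–Schmidt steps, confirms orthogonality, and normalizes:

import math

v1, v2 = (3.0, 1.0), (2.0, 2.0)     # assumed example vectors

u1 = v1
coeff = (v2[0] * u1[0] + v2[1] * u1[1]) / (u1[0] ** 2 + u1[1] ** 2)
u2 = (v2[0] - coeff * u1[0], v2[1] - coeff * u1[1])

print(u1, u2)                           # u2 is approximately (-0.4, 1.2)
print(u1[0] * u2[0] + u1[1] * u2[1])    # ~0: u1 and u2 are orthogonal

e1 = tuple(x / math.hypot(*u1) for x in u1)    # unit vector along u1
e2 = tuple(x / math.hypot(*u2) for x in u2)    # unit vector along u2
print(e1, e2)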
