2017spring 235 Midterm 1 Solutions

This document provides solutions to problems on a linear algebra midterm exam. It includes 7 problems with multiple parts each. The solutions demonstrate properties of vectors, subspaces, bases, linear independence, inner products, orthogonality, and applying concepts like induction to prove statements. Key steps of the solutions are explained to help students understand the reasoning.


Math 235, Spring 2017 (Dummit/McTague) ∼ Linear Algebra ∼ Midterm 1 Solutions

These solutions are not intended to be exhaustive, but they should be sufficient for anyone who has already tried
to work through the problems. Some problems may have multiple different solutions, so yours may still be correct
even if it is different from the solution appearing here.

1. We prove this by induction on n. For the base case we take n = 2: then indeed 1 · 2 = (2³ − 2)/3. For the
inductive step, suppose that 1 · 2 + 2 · 3 + · · · + (n − 1) · n = (n³ − n)/3. Then

1 · 2 + 2 · 3 + · · · + (n − 1) · n + n · (n + 1) = [1 · 2 + 2 · 3 + · · · + (n − 1) · n] + n · (n + 1)
  = (n³ − n)/3 + n(n + 1)
  = (n³ − n + 3n² + 3n)/3
  = (n³ + 3n² + 2n)/3
  = ((n + 1)³ − (n + 1))/3

as required.
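As a quick numerical sanity check (not part of the graded solution), the identity can be verified for small n in Python; the function name is just for illustration:

```python
# Verify 1·2 + 2·3 + ... + (n-1)·n == (n^3 - n)/3 for a range of n.
def telescoping_sum(n):
    return sum(k * (k + 1) for k in range(1, n))

for n in range(2, 50):
    assert telescoping_sum(n) == (n**3 - n) // 3
```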

2. If z = ri, then z̄ = −ri = −z. Conversely, if z = a + bi, we see z̄ = a − bi while −z = −a − bi, and these are
equal precisely when a = 0.
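The two directions together say z̄ = −z exactly when z is purely imaginary, which is easy to spot-check with Python's built-in complex numbers (the helper name is hypothetical):

```python
# z̄ = -z holds exactly when Re(z) = 0, i.e. z is purely imaginary.
def conj_is_neg(z):
    return complex(z).conjugate() == -complex(z)

assert conj_is_neg(3j)
assert conj_is_neg(0)
assert not conj_is_neg(2 + 3j)
```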

3.
(a) True: the set of vectors in both W1 and W2 is the same as the intersection W1 ∩ W2, which we proved
was a subspace of V.
(b) True: it satisfies the three components of the subspace criterion.
(c) False: since P4(R) has dimension 4, any spanning set must contain at least 4 vectors; the given set has
only 3.
(d) False: the zero space has dimension 0. (The dimension of any other space is positive.)
(e) False: while it is true that any spanning set must contain at least 3 vectors, there are certainly sets with many
vectors that do not span V. An example is V = R² with the set {⟨1, 0⟩, ⟨2, 0⟩, ⟨3, 0⟩, ⟨4, 0⟩}.
(f) True: any basis must have exactly 3 elements.
(g) False: since R³ has dimension 3, any set of more than 3 vectors is linearly dependent.
(h) True: we have ⟨3x, 2x⟩ = 6⟨x, x⟩, and ⟨x, x⟩ ≥ 0 by the positive-definiteness property.
(i) True: this is the triangle inequality.
(j) True: these vectors are orthogonal, each of them has length 1, and there are three of them (meaning
that they form a linearly independent set in the 3-dimensional vector space R³, so they are a basis).

4.
(a) We just check the parts of the subspace criterion. Note that the vectors in S have the form ⟨x1, x2, x3, x3, x1 + x2⟩.
• [S1]: The zero vector satisfies both conditions.
• [S2]: If v = ⟨x1, x2, x3, x3, x1 + x2⟩ and w = ⟨y1, y2, y3, y3, y1 + y2⟩ then
v + w = ⟨x1 + y1, x2 + y2, x3 + y3, x3 + y3, (x1 + y1) + (x2 + y2)⟩, which is of the desired form.
• [S3]: If v = ⟨x1, x2, x3, x3, x1 + x2⟩ then cv = ⟨cx1, cx2, cx3, cx3, cx1 + cx2⟩, which is of the desired
form.
(b) As noted above, the vectors in S are those of the form ⟨x1, x2, x3, x3, x1 + x2⟩. Since ⟨x1, x2, x3, x3, x1 + x2⟩ =
x1⟨1, 0, 0, 0, 1⟩ + x2⟨0, 1, 0, 0, 1⟩ + x3⟨0, 0, 1, 1, 0⟩, we see that ⟨1, 0, 0, 0, 1⟩, ⟨0, 1, 0, 0, 1⟩, ⟨0, 0, 1, 1, 0⟩ span
S. Furthermore, since they are clearly linearly independent, they are a basis for S. So we get the basis
{⟨1, 0, 0, 0, 1⟩, ⟨0, 1, 0, 0, 1⟩, ⟨0, 0, 1, 1, 0⟩} and get dim(S) = 3.
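The decomposition in (b) can be confirmed mechanically (a small sketch, not part of the solution; the helper name is illustrative):

```python
# Every vector of the form (x1, x2, x3, x3, x1+x2) is the combination
# x1*v1 + x2*v2 + x3*v3 of the three claimed basis vectors.
v1 = (1, 0, 0, 0, 1)
v2 = (0, 1, 0, 0, 1)
v3 = (0, 0, 1, 1, 0)

def combo(x1, x2, x3):
    return tuple(x1*a + x2*b + x3*c for a, b, c in zip(v1, v2, v3))

assert combo(2, 5, -1) == (2, 5, -1, -1, 7)
```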

5.
(a) Suppose w is in V . If S spans V , then there exist scalars a1 , a2 , . . . , an such that w = a1 v1 + a2 v2 +
· · · + an vn . In order to show that T spans V , we need to show that there exist scalars b1 , b2 , . . . , bn such
that w = b1 (v1 − v2 ) + b2 (v2 − v3 ) + · · · + bn−1 (vn−1 − vn ) + bn vn .
Expanding and collecting terms yields w = b1 v1 + (b2 − b1 )v2 + (b3 − b2 )v3 + · · · + (bn − bn−1 )vn .
Comparing this to the linear combination we had for w above, we should try b1 = a1 , b2 − b1 = a2 ,
b3 − b2 = a3 , ... , bn − bn−1 = an . This yields b1 = a1 , b2 = a1 + a2 , b3 = a1 + a2 + a3 , ... ,
bn = a1 + a2 + · · · + an .
So, by the calculation above, we can write w = a1 (v1 − v2 ) + (a1 + a2 )(v2 − v3 ) + · · · + (a1 + · · · + an )vn ,
meaning that w is in span(T ).
Remark In part (a), many students showed the reverse implication (if T spans V then S spans V ) by
starting with w as a linear combination of elements in T , expanding as
w = b1 (v1 − v2 ) + b2 (v2 − v3 ) + · · · + bn−1 (vn−1 − vn ) + bn vn
= b1 v1 + (b2 − b1 )v2 + (b3 − b2 )v3 + · · · + (bn − bn−1 )vn

and then concluding that the vector w was in span(S).


(b) Suppose that we had a dependence b1 (v1 −v2 )+b2 (v2 −v3 )+· · ·+bn−1 (vn−1 −vn )+bn vn = 0. Expanding
like in part (a), we see that b1 v1 + (b2 − b1 )v2 + (b3 − b2 )v3 + · · · + (bn − bn−1 )vn = 0. But now since S is
linearly independent, each coefficient must be zero: this gives b1 = b2 −b1 = b3 −b2 = · · · = bn −bn−1 = 0,
so clearly each of b1 , b2 , . . . , bn must be zero.
(c) If S is a basis for V , then since S spans V , part (a) implies T spans V . Also, since S is linearly
independent, part (b) implies T is linearly independent. Then T spans V and is linearly independent, so
it is a basis.
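The coefficient formula b_k = a1 + · · · + a_k from part (a) can be sanity-checked on random vectors (a sketch with illustrative names, assuming vectors in R^m with integer entries):

```python
from itertools import accumulate
import random

random.seed(0)
n, m = 5, 4
vs = [[random.randint(-5, 5) for _ in range(m)] for _ in range(n)]
a = [random.randint(-5, 5) for _ in range(n)]
b = list(accumulate(a))  # b_k = a_1 + ... + a_k

def lincomb(coeffs, vecs):
    return [sum(c * v[i] for c, v in zip(coeffs, vecs)) for i in range(m)]

# T = {v1 - v2, v2 - v3, ..., v_{n-1} - v_n, v_n}
T = [[vs[k][i] - vs[k + 1][i] for i in range(m)] for k in range(n - 1)] + [vs[-1]]

# The same vector w is produced from S with the a's and from T with the b's.
assert lincomb(a, vs) == lincomb(b, T)
```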

6.
(a) We need to check the properties of an inner product. (Note that since V is a real vector space, we can
ignore the conjugation and just show it is symmetric.)
• [I1]: We have

⟨(a1 + ra2 , b1 + rb2 ), (c, d)⟩ = 3(a1 + ra2 )c + (a1 + ra2 )d + (b1 + rb2 )c + (b1 + rb2 )d
  = [3a1 c + a1 d + b1 c + b1 d] + r[3a2 c + a2 d + b2 c + b2 d]
  = ⟨(a1 , b1 ), (c, d)⟩ + r ⟨(a2 , b2 ), (c, d)⟩.

• [I2]: We have ⟨(c, d), (a, b)⟩ = 3ca + cb + da + db = 3ac + ad + bc + bd = ⟨(a, b), (c, d)⟩.
• [I3]: We have ⟨(a, b), (a, b)⟩ = 3a² + 2ab + b² = 2a² + (a + b)², which is a sum of squares hence always
nonnegative. Furthermore, it is only zero when a = a + b = 0, meaning that (a, b) = (0, 0).
(b) This is the square of the Cauchy-Schwarz inequality for the inner product in part (a). Explicitly, Cauchy-
Schwarz says that |⟨v, w⟩| ≤ ||v|| · ||w||. Squaring, and observing that ⟨v, w⟩ is real, produces ⟨v, w⟩² ≤
⟨v, v⟩ ⟨w, w⟩. Now just take v = (a, b) and w = (c, d) using the inner product from part (a): we
immediately obtain (3ac + ad + bc + bd)² ≤ (3a² + 2ab + b²)(3c² + 2cd + d²).
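Both the symmetry from (a) and the squared Cauchy-Schwarz inequality from (b) can be spot-checked numerically (a sketch; the function name is illustrative):

```python
import random

# The inner product <(a,b),(c,d)> = 3ac + ad + bc + bd from part (a).
def ip(u, v):
    (a, b), (c, d) = u, v
    return 3*a*c + a*d + b*c + b*d

random.seed(1)
for _ in range(100):
    u = (random.uniform(-5, 5), random.uniform(-5, 5))
    v = (random.uniform(-5, 5), random.uniform(-5, 5))
    assert abs(ip(u, v) - ip(v, u)) < 1e-9          # symmetry [I2]
    assert ip(u, v)**2 <= ip(u, u) * ip(v, v) + 1e-9  # Cauchy-Schwarz squared
```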

7.
(a) Note that because {u1 , u2 , u3 } is orthonormal, ⟨ui , uj ⟩ is 0 when i ≠ j and 1 when i = j. Expanding
out ⟨2u1 − u2 + 4u3 , u1 + 2u2 + 2u3 ⟩, we obtain

2 ⟨u1 , u1 ⟩ + 4 ⟨u1 , u2 ⟩ + 4 ⟨u1 , u3 ⟩ − ⟨u2 , u1 ⟩ − 2 ⟨u2 , u2 ⟩ − 2 ⟨u2 , u3 ⟩ + 4 ⟨u3 , u1 ⟩ + 8 ⟨u3 , u2 ⟩ + 8 ⟨u3 , u3 ⟩

and the only nonzero terms are 2 ⟨u1 , u1 ⟩ − 2 ⟨u2 , u2 ⟩ + 8 ⟨u3 , u3 ⟩ = 2 − 2 + 8 = 8.

More generally, ⟨a1 u1 + · · · + an un , b1 u1 + · · · + bn un ⟩ = a1 b1 + · · · + an bn , as proven on homework 7.
(b) Notice that ||v + w||² = ⟨v + w, v + w⟩ = ⟨v, v⟩ + ⟨v, w⟩ + ⟨w, v⟩ + ⟨w, w⟩ = a + 2 ⟨v, w⟩ + b. So
||v + w||² will be equal to a + b precisely when 2 ⟨v, w⟩ = 0, which is to say, precisely when v and w are
orthogonal.
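Taking u1, u2, u3 to be the standard basis of R³ (one particular choice of orthonormal set), the computation in (a) reduces to an ordinary dot product, which makes it easy to check:

```python
# With the standard basis, 2u1 - u2 + 4u3 = (2, -1, 4) and
# u1 + 2u2 + 2u3 = (1, 2, 2); their dot product should be 8.
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

assert dot((2, -1, 4), (1, 2, 2)) == 8
```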
