Linear Algebra
Example.
\[
\begin{cases} x_1 - x_2 = 1 \\ x_1 + x_2 = 5 \end{cases}
\]
is a system of two linear equations in two unknowns x1 and x2.
\[
\begin{cases} x_1 + x_3 = 8 \\ x_3 - x_2 = 4 \end{cases}
\]
is a system of two linear equations in three unknowns x1, x2, and x3.
Geometrically, solving a system of two equations in two unknowns amounts to finding the
intersection of two lines. Lines can either (i) be parallel (no solution), (ii)
intersect at a point (exactly one solution), or (iii) coincide (infinitely many
solutions). For example, the system
\[
\begin{cases} x_1 + x_2 = 3 \\ 2x_1 + 2x_2 = 4 \end{cases}
\]
has no solution, the system
\[
\begin{cases} x_1 + x_2 = 4 \\ x_1 - x_2 = 2 \end{cases}
\]
has exactly one solution, and the system
\[
\begin{cases} x_1 + x_2 = 4 \\ 2x_1 + 2x_2 = 8 \end{cases}
\]
has infinitely many solutions.
Definition. A matrix of size m × n has m rows and n columns.
Example.
\[
\begin{cases} x_1 - x_2 = 1 \\ x_1 + x_2 = 5 \end{cases}
\xRightarrow{e_1 \to e_1 + e_2}
\begin{cases} 2x_1 = 6 \\ x_1 + x_2 = 5 \end{cases}
\xRightarrow{e_1 \to e_1/2}
\begin{cases} x_1 = 3 \\ x_1 + x_2 = 5 \end{cases}
\xRightarrow{e_2 \to e_2 - e_1}
\begin{cases} x_1 = 3 \\ x_2 = 2. \end{cases}
\]
This solution can be written in terms of augmented matrices as follows:
\[
\begin{bmatrix} 1 & -1 & 1 \\ 1 & 1 & 5 \end{bmatrix}
\xRightarrow{r_1 \to r_1 + r_2}
\begin{bmatrix} 2 & 0 & 6 \\ 1 & 1 & 5 \end{bmatrix}
\xRightarrow{r_1 \to r_1/2}
\begin{bmatrix} 1 & 0 & 3 \\ 1 & 1 & 5 \end{bmatrix}
\xRightarrow{r_2 \to r_2 - r_1}
\begin{bmatrix} 1 & 0 & 3 \\ 0 & 1 & 2 \end{bmatrix}.
\]
To solve a system of three or more equations, it is advisable to work with aug-
mented matrices. Three elementary row operations that lead to a row equivalent
augmented matrix are
(i) replace one row by the sum of itself and a multiple of another row,
(ii) interchange two rows,
and (iii) multiply a row by a nonzero constant.
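These three operations are easy to experiment with numerically. A minimal NumPy sketch (an illustration, not part of the notes) replays the augmented-matrix example above:

```python
import numpy as np

# Augmented matrix of the system x1 - x2 = 1, x1 + x2 = 5.
M = np.array([[1., -1., 1.],
              [1.,  1., 5.]])

M[0] = M[0] + M[1]   # (i)   replacement: r1 -> r1 + r2, giving [2, 0, 6]
M[0] = M[0] / 2      # (iii) scaling:     r1 -> r1 / 2,  giving [1, 0, 3]
M[1] = M[1] - M[0]   # (i)   replacement: r2 -> r2 - r1, giving [0, 1, 2]
print(M)             # the solution x1 = 3, x2 = 2 can be read off

S = M[[1, 0]]        # (ii)  interchange: the two rows swapped (as a copy)
print(S)
```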
In our example, the leading entries are 2, 1 and 5 in the first matrix, and 1,
1, and 1 in the second matrix.
Step 1. Take the leftmost nonzero column. This will be your pivot column.
Make sure the top entry (pivot) is not zero. Interchange rows if necessary.
Example.
\[
\begin{cases} x_1 + x_2 + 2x_3 = 0 \\ 2x_2 + x_3 = 4 \\ x_1 + 2x_3 = -3 \end{cases}
\leftrightarrow
\left[\begin{array}{rrr|r} 1 & 1 & 2 & 0 \\ 0 & 2 & 1 & 4 \\ 1 & 0 & 2 & -3 \end{array}\right].
\]
Step 2. Use the elementary row operations to create zeros in all positions
below the pivot.
In our example,
\[
\xRightarrow{r_3 \to r_3 - r_1}
\left[\begin{array}{rrr|r} 1 & 1 & 2 & 0 \\ 0 & 2 & 1 & 4 \\ 0 & -1 & 0 & -3 \end{array}\right].
\]
Step 3. Select a pivot column in the matrix with the first row ignored. Re-
peat the previous steps.
In our example,
\[
\xRightarrow{r_3 \to 2r_3 + r_2}
\left[\begin{array}{rrr|r} 1 & 1 & 2 & 0 \\ 0 & 2 & 1 & 4 \\ 0 & 0 & 1 & -2 \end{array}\right].
\]
This matrix is in row echelon form.
Step 4. Starting with the rightmost pivot, create zeros above each pivot.
Make pivots equal 1 by rescaling rows if necessary.
In our example,
\[
\xRightarrow{r_2 \to r_2 - r_3}
\left[\begin{array}{rrr|r} 1 & 1 & 2 & 0 \\ 0 & 2 & 0 & 6 \\ 0 & 0 & 1 & -2 \end{array}\right]
\xRightarrow{r_1 \to r_1 - 2r_3}
\left[\begin{array}{rrr|r} 1 & 1 & 0 & 4 \\ 0 & 2 & 0 & 6 \\ 0 & 0 & 1 & -2 \end{array}\right]
\xRightarrow{r_2 \to r_2/2}
\left[\begin{array}{rrr|r} 1 & 1 & 0 & 4 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & -2 \end{array}\right]
\]
\[
\xRightarrow{r_1 \to r_1 - r_2}
\left[\begin{array}{rrr|r} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & -2 \end{array}\right]
\leftrightarrow
\begin{cases} x_1 = 1 \\ x_2 = 3 \\ x_3 = -2. \end{cases}
\]
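The same answer can be checked numerically. A short NumPy sketch (an aside, not part of the notes), using the coefficient matrix and right-hand side of the example above:

```python
import numpy as np

# The system x1 + x2 + 2x3 = 0, 2x2 + x3 = 4, x1 + 2x3 = -3.
A = np.array([[1., 1., 2.],
              [0., 2., 1.],
              [1., 0., 2.]])
b = np.array([0., 4., -3.])

x = np.linalg.solve(A, b)   # Gaussian elimination under the hood
print(x)                    # [ 1.  3. -2.]
```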
Thus, the solution is pc = .94ps, pe = .85ps, and ps is free.
The solution is
\[
x_1 = .25x_4, \quad x_2 = 1.25x_4, \quad x_3 = .75x_4.
\]
Since all the x's must be integers, the smallest solution is x4 = 4, x1 = 1, x2 = 5, and x3 = 3.
The solution is
\[
x_1 = 600 - x_5, \quad x_2 = 200 + x_5, \quad x_3 = 400, \quad x_4 = 500 - x_5.
\]
The flow x5 is free, though it must satisfy x5 ≤ 500 since x4 ≥ 0.
Definition. A column vector, or simply a vector, is a list of numbers arranged in
a column.

Example. \(\begin{bmatrix} 1 \\ -3 \\ 7 \end{bmatrix}\).
Notation. Vectors are usually denoted by u, v, w, x, y.
Definition. Two vectors are equal iff their corresponding entries are equal.
That is, equality of vectors is defined entry-wise.
Example. Let \(u = \begin{bmatrix} 1 \\ -1 \end{bmatrix}\), \(v = \begin{bmatrix} 1 \\ i^2 \end{bmatrix}\), and \(w = \begin{bmatrix} e^{\pi i} \\ -1 \end{bmatrix}\). Entry-wise, \(u = v \neq w\).
Parallelogram Rule for Addition.
The vector u+v points from the origin and lies on the diagonal of the parallel-
ogram formed by vectors u and v. Picture. Example. Where is u−v? v−u?
Scalar Multiplication.
Vector cv lies on the same line as v, its length is |c| times the length of v,
and it points in the direction of v if c > 0 and in the opposite direction if
c < 0. Picture. Example.
Give an example.
Example. Let \(u = \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}\), \(v = \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix}\), and \(w = \begin{bmatrix} 3 \\ 0 \\ 1 \end{bmatrix}\). Is vector \(y = \begin{bmatrix} 0 \\ 2 \\ -1 \end{bmatrix}\) a
linear combination of u, v, and w?
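The question reduces to a linear system: y is a linear combination of u, v, and w exactly when [u v w]c = y has a solution. A NumPy sketch (an illustration, not part of the notes):

```python
import numpy as np

# Columns of A are the vectors u, v, w from the example above.
A = np.array([[ 1., 2., 3.],
              [ 0., 1., 0.],
              [-1., 0., 1.]])
y = np.array([0., 2., -1.])

c = np.linalg.solve(A, y)      # succeeds because A is invertible (det A = 4)
print(c)                       # the weights c1, c2, c3
print(np.allclose(A @ c, y))   # True, so y is a linear combination
```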
Homogeneous Linear Systems.
Nonhomogeneous Linear Systems.
meaning
\[
x_1 = -1 + (4/3)x_3, \quad x_2 = 2, \quad x_3 \text{ free.}
\]
Thus,
\[
x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
= \begin{bmatrix} -1 + (4/3)x_3 \\ 2 \\ x_3 \end{bmatrix}
= \begin{bmatrix} -1 \\ 2 \\ 0 \end{bmatrix} + \begin{bmatrix} (4/3)x_3 \\ 0 \\ x_3 \end{bmatrix}
= \begin{bmatrix} -1 \\ 2 \\ 0 \end{bmatrix} + x_3 \begin{bmatrix} 4/3 \\ 0 \\ 1 \end{bmatrix}
= p + x_3 v.
\]
\(v_2 = \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix}\), and \(v_3 = \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix}\) are linearly independent.
Solution: A row echelon form of the augmented matrix is
\[
\left[\begin{array}{rrr|r} 1 & 4 & 2 & 0 \\ 0 & -3 & -3 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right].
\]
Thus, a non-trivial solution is possible. The vectors are not linearly independent.
Example (Example 2 on pages 66 – 67). Determine if the columns of the matrix
\[
A = \begin{bmatrix} 0 & 1 & 4 \\ 1 & 2 & -1 \\ 5 & 8 & 0 \end{bmatrix}
\]
are independent.
Solution: A row echelon form of the augmented matrix is
\[
\left[\begin{array}{rrr|r} 1 & 2 & -1 & 0 \\ 0 & 1 & 4 & 0 \\ 0 & 0 & 13 & 0 \end{array}\right].
\]
There are no free variables; therefore, only the trivial solution exists and the
columns are independent.
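Independence of columns can also be phrased through rank: the columns of an m × n matrix are independent iff the rank equals n. A NumPy sketch (an aside) for the matrix above:

```python
import numpy as np

A = np.array([[0., 1.,  4.],
              [1., 2., -1.],
              [5., 8.,  0.]])

r = np.linalg.matrix_rank(A)
print(r)   # 3 = the number of columns, so the columns are independent
```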
Theorem 7 on page 68. Vectors v1 , . . . , vp , p ≥ 2 are linearly depen-
dent iff at least one of the vectors is a linear combination of the others (that
is, at least one vector is in the set spanned by the others).
Proof: The p vectors are linearly dependent iff the equation \(x_1 v_1 + \cdots + x_p v_p = 0\) has a non-trivial solution iff (say, \(x_1 \neq 0\)) \(v_1 = -(x_2/x_1)v_2 - \cdots - (x_p/x_1)v_p\). □
Example (Example 4 on page 68). Consider vectors \(v_1 = \begin{bmatrix} 3 \\ 0 \\ 1 \end{bmatrix}\) and \(v_2 = \begin{bmatrix} 1 \\ 6 \\ 0 \end{bmatrix}\). They are not multiples of each other; therefore, Span{v1, v2} is a plane
through the origin. A vector in this plane is linearly dependent with v1 and
v2. A vector not in this plane is linearly independent. Picture.
Theorem 9 on page 69. If a set of vectors contains the zero vector, the
set is linearly dependent.
Proof: Suppose \(v_1 = 0\). Then \(1 \cdot v_1 + 0 \cdot v_2 + \cdots + 0 \cdot v_p = 0\) solves the
equation nontrivially. □
Example 6(b) on page 69. The set of vectors \(\begin{bmatrix} 2 \\ 3 \\ 5 \end{bmatrix}\), \(\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}\), and \(\begin{bmatrix} 1 \\ 1 \\ 8 \end{bmatrix}\) is
linearly dependent since it contains the zero vector.
Definition. The set Rn is called the domain of T , and Rm is called the
codomain of T . The notation T : Rn → Rm means that T maps Rn into Rm .
The vector T (x) is called the image of x. The set of all images is called the
range of T . Picture.
(i) T (u + v) = T (u) + T (v) for any vectors u and v in the domain of T ,
(ii) T (cu) = cT (u) for any vector u and any scalar c.
Definition. The diagonal entries of A are a11 , a22 , . . . , and they form the
main diagonal of A.
Definition. Two matrices are equal if they have the same dimensions and
their corresponding entries are equal.
Examples.
Theorem 4 on page 119. The inverse of a 2 × 2 matrix \(A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}\) is
\[
A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.
\]
If ad − bc = 0, the inverse does not exist, and A is called singular or
not invertible.
Proof:
Example. Solve
\[
\begin{cases} 3x_1 + 4x_2 = 3 \\ 5x_1 + 6x_2 = 7. \end{cases}
\]
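The 2 × 2 inverse formula is straightforward to put into code. A hedged Python sketch (the function name `inverse_2x2` is my own), applied to this example system:

```python
import numpy as np

def inverse_2x2(A):
    """Inverse of a 2x2 matrix by the ad - bc formula; None if singular."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        return None                          # singular: no inverse exists
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[3., 4.],
              [5., 6.]])
print(inverse_2x2(A))                        # agrees with np.linalg.inv(A)
print(inverse_2x2(A) @ np.array([3., 7.]))   # solves the system: [ 5. -3.]
```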
Theorem 6 on page 121. (i) \((A^{-1})^{-1} = A\), (ii) \((AB)^{-1} = B^{-1}A^{-1}\), and
(iii) \((A^T)^{-1} = (A^{-1})^T\).
Proof:
Solution:
\[
\left[\begin{array}{c|c} A & I \end{array}\right] =
\left[\begin{array}{rrr|rrr} 0 & 1 & 2 & 1 & 0 & 0 \\ 1 & 0 & 3 & 0 & 1 & 0 \\ 4 & -3 & 8 & 0 & 0 & 1 \end{array}\right]
\Longrightarrow
\left[\begin{array}{rrr|rrr} 1 & 0 & 0 & -9/2 & 7 & -3/2 \\ 0 & 1 & 0 & -2 & 4 & -1 \\ 0 & 0 & 1 & 3/2 & -2 & 1/2 \end{array}\right].
\]
3.1. Determinants.
Example. Find \(A_{11}\) for
\[
A = \begin{bmatrix} 1 & 5 & 0 \\ 2 & 4 & -1 \\ 0 & -2 & 0 \end{bmatrix}.
\]
Definition. The determinant of an n × n matrix A = [aij] is a number given
recursively by
\[
\det A = \sum_{j=1}^{n} (-1)^{1+j} a_{1j} \det A_{1j}.
\]
Example. Find
\[
\det \begin{bmatrix} 1 & 5 & 0 \\ 2 & 4 & -1 \\ 0 & -2 & 0 \end{bmatrix}.
\]
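The recursive definition translates directly into code. A sketch that computes the determinant just requested (hopelessly inefficient for large n, but it is literally the definition):

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion across the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # A_{1,j+1}
        total += (-1) ** j * A[0, j] * det_cofactor(minor)     # sign (-1)^(1+j)
    return total

A = np.array([[1.,  5.,  0.],
              [2.,  4., -1.],
              [0., -2.,  0.]])
print(det_cofactor(A))   # -2.0, agreeing with np.linalg.det(A)
```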
Theorem 1 on page 188. The determinant of a matrix can be computed
by a cofactor expansion across any row or any column.
Exercise. Show that if one row or one column of a square matrix is zero,
its determinant is zero (singular matrix).
\[
= \det \begin{bmatrix} 1 & -4 & 2 \\ 0 & 0 & -5 \\ 0 & 3 & 2 \end{bmatrix}
= -\det \begin{bmatrix} 1 & -4 & 2 \\ 0 & 3 & 2 \\ 0 & 0 & -5 \end{bmatrix}
= -(1)(3)(-5) = 15.
\]
Exercise. Compute \(\det \begin{bmatrix} 0 & 2 & 2 \\ 1 & 0 & 3 \\ 2 & 1 & 1 \end{bmatrix}\). Answer: 12.
Exercise. Show that if two rows of a square matrix are equal, its deter-
minant is zero.
Remark. It follows from the above Exercise and Theorem 4 that a ma-
trix is invertible iff its determinant is nonzero iff its columns are linearly
independent.
\[
x_i = \det A_i(b) / \det A, \quad i = 1, \ldots, n.
\]
Proof: Let \(A = [a_1 \ \ldots \ a_n]\) and \(I = [e_1 \ \ldots \ e_n]\). If \(Ax = b\), then
\[
A\, I_i(x) = [Ae_1 \ \ldots \ Ax \ \ldots \ Ae_n] = [a_1 \ \ldots \ b \ \ldots \ a_n] = A_i(b).
\]
Thus, \(\det(A\, I_i(x)) = \det A \, \det I_i(x) = \det A_i(b)\), but \(\det I_i(x) = x_i\). □
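Cramer's rule is equally direct to code. A sketch (numerically inferior to elimination, but a faithful reading of the formula; the function name is my own):

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b via x_i = det A_i(b) / det A (A must be invertible)."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                     # replace column i by b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[3., 4.],
              [5., 6.]])
b = np.array([3., 7.])
print(cramer(A, b))                      # [ 5. -3.], same as np.linalg.solve
```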
Answer: x1 = 20, x2 = 27.
Answer:
\[
A^{-1} = \frac{1}{14} \begin{bmatrix} -2 & 14 & 4 \\ 3 & -7 & 1 \\ 5 & -7 & -3 \end{bmatrix}.
\]
Exercise 12 on page 210. Find the inverse of
\[
\begin{bmatrix} 1 & 1 & 3 \\ 2 & -2 & 1 \\ 0 & 1 & 0 \end{bmatrix}.
\]
Example. Draw the picture for \(\det \begin{bmatrix} a & 0 \\ 0 & d \end{bmatrix}\) and \(\det \begin{bmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{bmatrix}\).
Example 4 on page 206. Find the area of the parallelogram with ver-
tices (−2, −2), (0, 3), (4, −1), and (6, 4).
Solution: First move the parallelogram to the origin by subtracting (−2, −2),
for example. Then the area of the parallelogram with vertices (0, 0), (2, 5), (6, 1),
and (8, 6) is
\[
\left| \det \begin{bmatrix} 2 & 6 \\ 5 & 1 \end{bmatrix} \right| = |-28| = 28.
\]
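The computation above can be sketched in NumPy (an illustration, not part of the notes):

```python
import numpy as np

# Three vertices of the parallelogram; (6, 4) is determined by the others.
p0, p1, p2 = np.array([-2., -2.]), np.array([0., 3.]), np.array([4., -1.])

# Translate p0 to the origin; the two edge vectors span the parallelogram.
E = np.column_stack([p1 - p0, p2 - p0])   # columns (2, 5) and (6, 1)
area = abs(np.linalg.det(E))
print(area)                               # 28.0
```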
4.1. Vector Spaces and Subspaces.
Example. The set of natural numbers N = {1, 2, 3, . . . } is not a vector
space: it contains no zero vector and no negatives.
Proposition 2. \(0 \cdot u = 0\).
Proposition 3. \(-u = (-1)u\).
Proposition 4. \(c\,0 = 0\).
Proof: By axiom 5, \(c\,0 = c(u + (-u)) = \{\text{by axiom 7}\} = cu + c(-u) = \{\text{by Proposition 3}\} = cu + c((-1)u) = \{\text{by axiom 9}\} = cu + (c(-1))u = cu + ((-1)c)u = \{\text{by axiom 9 again}\} = cu + (-1)(cu) = \{\text{by Proposition 3}\} = cu + (-(cu)) = \{\text{by axiom 5}\} = 0\). □
Subspaces.
Example 6 on page 220. The set consisting only of the zero vector is
a subspace. It is denoted by {0}.
Example 7 on page 220. The set of all polynomials with real coefficients,
P, is a subspace of the space of real-valued functions. The set of all polyno-
mials of degree at most n is a subspace of P.
Example 8 on page 220. The set \(\left\{ \begin{bmatrix} x_1 \\ x_2 \\ 0 \end{bmatrix} \right\}\) is a subspace of \(R^3\). It looks
and acts like \(R^2\) but is not \(R^2\).
\[
\mathrm{Nul}\,A = \{x : x \in R^n, \ Ax = 0\}.
\]
Geometrically, the null space of A is the set that is mapped into zero by the
linear transformation x 7→ Ax. Picture.
Example 3 on page 228. Find a spanning set for the null space of the
matrix
\[
A = \begin{bmatrix} -3 & 6 & -1 & 1 & -7 \\ 1 & -2 & 2 & 3 & -1 \\ 2 & -4 & 5 & 8 & -4 \end{bmatrix}.
\]
Solution: The general solution of the equation Ax = 0 is x1 = 2x2 +
x4 − 3x5 , x3 = −2x4 + 2x5 , and x2 , x4 , and x5 are free variables. We can
now decompose any vector in R5 into a linear combination of vectors where
weights are the free variables. That is,
\[
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{bmatrix}
= \begin{bmatrix} 2x_2 + x_4 - 3x_5 \\ x_2 \\ -2x_4 + 2x_5 \\ x_4 \\ x_5 \end{bmatrix}
= x_2 \begin{bmatrix} 2 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}
+ x_4 \begin{bmatrix} 1 \\ 0 \\ -2 \\ 1 \\ 0 \end{bmatrix}
+ x_5 \begin{bmatrix} -3 \\ 0 \\ 2 \\ 0 \\ 1 \end{bmatrix}
= x_2 u + x_4 v + x_5 w.
\]
Notice that u, v, and w are linearly independent since the weights are free
variables. Thus, N ulA = Span{u, v, w}.
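The same spanning set can be obtained exactly with SymPy (an aside, not part of the notes; `Matrix.nullspace` returns one basis vector per free variable):

```python
from sympy import Matrix

A = Matrix([[-3,  6, -1, 1, -7],
            [ 1, -2,  2, 3, -1],
            [ 2, -4,  5, 8, -4]])

basis = A.nullspace()        # one vector per free variable (x2, x4, x5)
for v in basis:
    print(v.T)               # the vectors u, v, w found above
```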
Example 4 on pages 229 – 230. Find a matrix A such that W = Col A, where
\[
W = \left\{ \begin{bmatrix} 6a - b \\ a + b \\ -7a \end{bmatrix} : a, b \in R \right\}.
\]
Solution:
\[
W = \left\{ a \begin{bmatrix} 6 \\ 1 \\ -7 \end{bmatrix} + b \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} : a, b \in R \right\}
= \mathrm{Span}\left\{ \begin{bmatrix} 6 \\ 1 \\ -7 \end{bmatrix}, \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} \right\}.
\]
Thus, the matrix is
\[
A = \begin{bmatrix} 6 & -1 \\ 1 & 1 \\ -7 & 0 \end{bmatrix}.
\]
Kernel and Range of a Linear Transformation.
Picture.
4.3. Bases.
vectors removed is a basis for V since it contains only linearly independent
vectors spanning V .
Example 7 on page 239. Let \(v_1 = \begin{bmatrix} 0 \\ 2 \\ -1 \end{bmatrix}\), \(v_2 = \begin{bmatrix} 2 \\ 2 \\ 0 \end{bmatrix}\), and \(v_3 = \begin{bmatrix} 6 \\ 16 \\ -5 \end{bmatrix}\),
and suppose V = Span{v1, v2, v3}. Find a basis for V.
Solution: Note that v3 = 5v1 + 3v2 . By the theorem, {v1 , v2 } is a basis
for V .
Solution: Each nonpivot column of B is a linear combination of the pivot
columns. In fact, the pivot columns are b1, b3, and b5, and the nonpivot
columns are b2 = 4b1 and b4 = 2b1 − b3. By the Spanning Set Theorem,
we may discard the nonpivot columns. The pivot columns are linearly inde-
pendent and form a basis for Col B.
Notice that the matrix B is in reduced row echelon form. Suppose a
matrix is not in this form. How do we find a basis for its column space?
form a basis.
B to the standard basis in \(R^n\). The coordinate mapping from the standard
basis to B is given by \([x]_B = P_B^{-1} x\). Note that \(P_B\) is invertible since its
columns are linearly independent.
The above theorem states that the possibly unfamiliar space V is isomor-
phic to the familiar Rn .
the coordinate mapping p 7→ [p]B is an isomorphism between P3 and R4 .
vectors (p) than entries (n) in each vector. Therefore, there exist scalars
\(c_1, \ldots, c_p\), not all zeros, such that
\[
\begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix} = c_1 [u_1]_B + \cdots + c_p [u_p]_B.
\]
Proposition. The dimension of N ulA is the number of free variables in the
equation Ax = 0. The dimension of ColA is the number of pivot columns
in A.
Example 5 on page 260. Find the dimensions of the null space and the
column space of
\[
A = \begin{bmatrix} -3 & 6 & -1 & 1 & -7 \\ 1 & -2 & 2 & 3 & -1 \\ 2 & -4 & 5 & 8 & -4 \end{bmatrix}.
\]
Solution: Row reduce the augmented matrix \([A \ 0]\) to echelon form
\[
\left[\begin{array}{rrrrr|r} 1 & -2 & 2 & 3 & -1 & 0 \\ 0 & 0 & 1 & 2 & -2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right].
\]
There are three free variables, x2, x4, and x5. Hence dim Nul A = 3. Also,
dim Col A = 2 because A has two pivot columns.
4.6. Rank.
Are u and v eigenvectors of A?
Solution: Au = −4u, so u is an eigenvector. There is no scalar λ with Av = λv, so v is not an eigenvector.
To find eigenvalues of a matrix A, one has to find all λ’s such that the
equation Ax = λx or, equivalently, (A − λI)x = 0 has a nontrivial solution.
This problem is equivalent to finding all λ’s such that the matrix A − λI is
not invertible, which is equivalent to solving the characteristic equation of
A, det(A − λI) = 0.
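Numerically, both problems are solved at once by `numpy.linalg.eig`. A sketch (an aside, not part of the notes), using the matrix of Exercise 14 below:

```python
import numpy as np

A = np.array([[5., -2.,  3.],
              [0.,  1.,  0.],
              [6.,  7., -2.]])

w, V = np.linalg.eig(A)           # eigenvalues and eigenvectors (as columns)
print(np.sort(w.real))            # [-4.  1.  7.]
print(np.allclose(A @ V, V * w))  # True: A v = lambda v for each column
```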
The solution is \(x_1 = \begin{bmatrix} -(5/18)a \\ (1/6)a \\ a \end{bmatrix}\), where a is free.
Eigenvector x2 solves (A − I)x = 0. The augmented matrix is
\[
\left[\begin{array}{rrr|r} 0 & 5 & 0 & 0 \\ 0 & 3 & -1 & 0 \\ 0 & 0 & -3 & 0 \end{array}\right]
\Longrightarrow
\left[\begin{array}{rrr|r} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right].
\]
The solution is \(x_2 = \begin{bmatrix} b \\ 0 \\ 0 \end{bmatrix}\), where b is free.
Eigenvector x3 solves (A − 4I)x = 0. The augmented matrix is
\[
\left[\begin{array}{rrr|r} -3 & 5 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & -6 & 0 \end{array}\right]
\Longrightarrow
\left[\begin{array}{rrr|r} 1 & -5/3 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right].
\]
The solution is \(x_3 = \begin{bmatrix} (5/3)c \\ c \\ 0 \end{bmatrix}\), where c is free.
For example, we may take \(x_1 = \begin{bmatrix} -5 \\ 3 \\ 18 \end{bmatrix}\), \(x_2 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}\), and \(x_3 = \begin{bmatrix} 5 \\ 3 \\ 0 \end{bmatrix}\).
Theorem 2 on page 307. If v1 , . . . , vr are eigenvectors corresponding to
distinct eigenvalues λ1 , . . . , λr of a matrix A, then the vectors are linearly
independent.
Indeed, in our example, consider the linear combination
\[
\alpha x_1 + \beta x_2 + \gamma x_3
= \alpha \begin{bmatrix} -5 \\ 3 \\ 18 \end{bmatrix}
+ \beta \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}
+ \gamma \begin{bmatrix} 5 \\ 3 \\ 0 \end{bmatrix} = 0
\iff \alpha = \beta = \gamma = 0.
\]
Exercise 14 on page 317. Find eigenvalues and eigenvectors of
\[
A = \begin{bmatrix} 5 & -2 & 3 \\ 0 & 1 & 0 \\ 6 & 7 & -2 \end{bmatrix}.
\]
Solution: \(\det(A - \lambda I) = -(\lambda + 4)(\lambda - 1)(\lambda - 7) = 0\); thus, the eigenvalues
are λ1 = −4, λ2 = 1, and λ3 = 7. The eigenvector x1 solves
\[
\left[\begin{array}{rrr|r} 9 & -2 & 3 & 0 \\ 0 & 5 & 0 & 0 \\ 6 & 7 & 2 & 0 \end{array}\right]
\Longrightarrow
x_1 = \begin{bmatrix} -(1/3)a \\ 0 \\ a \end{bmatrix}.
\]
The eigenvector x2 solves
\[
\left[\begin{array}{rrr|r} 4 & -2 & 3 & 0 \\ 0 & 0 & 0 & 0 \\ 6 & 7 & -3 & 0 \end{array}\right]
\Longrightarrow
x_2 = \begin{bmatrix} -(3/8)b \\ (3/4)b \\ b \end{bmatrix}.
\]
The eigenvector x3 solves
\[
\left[\begin{array}{rrr|r} -2 & -2 & 3 & 0 \\ 0 & -6 & 0 & 0 \\ 6 & 7 & -9 & 0 \end{array}\right]
\Longrightarrow
x_3 = \begin{bmatrix} (3/2)c \\ 0 \\ c \end{bmatrix}.
\]
For example, we can take \(x_1 = \begin{bmatrix} -1 \\ 0 \\ 3 \end{bmatrix}\), \(x_2 = \begin{bmatrix} -3 \\ 6 \\ 8 \end{bmatrix}\), and \(x_3 = \begin{bmatrix} 3 \\ 0 \\ 2 \end{bmatrix}\). Again,
they are linearly independent.
5.3. Diagonalization.
eigenvalues are 1 and −2. We can find only two linearly independent eigenvectors,
\(v_1 = \begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}\) and \(v_2 = \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix}\), so A is not diagonalizable.
Theorem 6 on page 323. An n × n matrix with n distinct eigenvalues
is diagonalizable.
Theorem 8 on page 331. Suppose \(A = PDP^{-1}\), where D is a diago-
nal n × n matrix. If B is the basis for \(R^n\) formed from the columns of P,
then D is the B-matrix for the transformation T(x) = Ax.
Example 3 on page 331. Let \(A = \begin{bmatrix} 7 & 2 \\ -4 & 1 \end{bmatrix}\). Then \(P = \begin{bmatrix} 1 & 1 \\ -1 & -2 \end{bmatrix}\) and
\(D = \begin{bmatrix} 5 & 0 \\ 0 & 3 \end{bmatrix}\). D is the B-matrix for T when \(B = \left\{ \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \begin{bmatrix} 1 \\ -2 \end{bmatrix} \right\}\).
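A quick numerical check of Example 3 (an aside, not part of the notes):

```python
import numpy as np

A = np.array([[ 7.,  2.],
              [-4.,  1.]])
P = np.array([[ 1.,  1.],
              [-1., -2.]])
D = np.diag([5., 3.])

print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True: A = P D P^{-1}
print(np.allclose(A @ P, P @ D))                  # columns of P are eigenvectors
```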
6.1. Inner Product, Length, and Orthogonality.
Definition. If \(u = \begin{bmatrix} u_1 \\ \vdots \\ u_n \end{bmatrix}\) and \(v = \begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix}\) are two vectors in \(R^n\), then the
inner product or dot product is the number
\[
u \cdot v = u^T v = \begin{bmatrix} u_1 & \ldots & u_n \end{bmatrix} \begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix} = u_1 v_1 + \cdots + u_n v_n.
\]
Example. Compute the dot product of \(u = \begin{bmatrix} 1 \\ -3 \\ 2 \end{bmatrix}\) and \(v = \begin{bmatrix} 5 \\ 0 \\ -2 \end{bmatrix}\).
Theorem 1 on page 376. The dot product has the following properties:
(a) u · v = v · u (commutative law)
(b) (u + v) · w = u · w + v · w (distributive law)
(c) (cu) · v = c(u · v) = u · (cv) (associative law)
(d) u · u ≥ 0, ∀u, and u · u = 0 iff u = 0
Definition. The length (or norm) of a vector v is \(\|v\| = \sqrt{v \cdot v} = \sqrt{v_1^2 + \cdots + v_n^2}\).
Example.
6.2. Orthogonal Sets.
\[
y = \sum_{i=1}^{p} \frac{y \cdot u_i}{u_i \cdot u_i}\, u_i.
\]
Proof: \(y \cdot u_1 = c_1 u_1 \cdot u_1\), etc.
Example 2 on pages 385 – 386. Let \(y = \begin{bmatrix} 6 \\ 1 \\ -8 \end{bmatrix}\). Write y as a linear
combination of the vectors in S in Example 1. Answer: \(y = u_1 - 2u_2 - 2u_3\).
6.3. Orthogonal Projections.
\[
\hat{y} = \sum_{i=1}^{p} \frac{y \cdot u_i}{u_i \cdot u_i}\, u_i
\quad \text{and} \quad z = y - \hat{y}.
\]
Picture.
Example 2 on pages 396 – 397. Let \(u_1 = \begin{bmatrix} 2 \\ 5 \\ -1 \end{bmatrix}\), \(u_2 = \begin{bmatrix} -2 \\ 1 \\ 1 \end{bmatrix}\), and
\(y = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}\). Then the decomposition of y is
\[
\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} = \begin{bmatrix} -2/5 \\ 2 \\ 1/5 \end{bmatrix} + \begin{bmatrix} 7/5 \\ 0 \\ 14/5 \end{bmatrix}.
\]
Picture.
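The decomposition can be computed mechanically. A Python sketch (the helper name `project` is my own), replaying Example 2:

```python
import numpy as np

def project(y, basis):
    """Orthogonal projection of y onto Span(basis); basis must be orthogonal."""
    return sum((y @ u) / (u @ u) * u for u in basis)

u1 = np.array([2., 5., -1.])
u2 = np.array([-2., 1., 1.])
y  = np.array([1., 2., 3.])

y_hat = project(y, [u1, u2])
z = y - y_hat
print(y_hat)    # [-0.4  2.   0.2]
print(z)        # [ 1.4  0.   2.8], orthogonal to both u1 and u2
```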
Theorem 10 on page 399. If \(\{u_1, \ldots, u_p\}\) is an orthonormal basis in W,
then \(\mathrm{proj}_W y = \sum_{i=1}^{p} (y \cdot u_i) u_i\). If \(U = [u_1 \ \cdots \ u_p]\), then \(\mathrm{proj}_W y = UU^T y\).
Then
\[
v_1 = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}, \quad
v_2 = \begin{bmatrix} -3/4 \\ 1/4 \\ 1/4 \\ 1/4 \end{bmatrix}, \quad
v_3 = \begin{bmatrix} 0 \\ -2/3 \\ 1/3 \\ 1/3 \end{bmatrix}.
\]
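These v's come from the Gram–Schmidt process. A sketch (assuming the starting vectors x1 = (1, 1, 1, 1), x2 = (0, 1, 1, 1), x3 = (0, 0, 1, 1), which reproduce exactly the vectors displayed above):

```python
import numpy as np

def gram_schmidt(vectors):
    """Subtract from each x its projection onto the vectors built so far."""
    basis = []
    for x in vectors:
        v = x - sum((x @ u) / (u @ u) * u for u in basis)
        basis.append(v)
    return basis

xs = [np.array([1., 1., 1., 1.]),
      np.array([0., 1., 1., 1.]),
      np.array([0., 0., 1., 1.])]

v1, v2, v3 = gram_schmidt(xs)
print(v2)   # [-0.75  0.25  0.25  0.25]
print(v3)   # [ 0.         -0.66666667  0.33333333  0.33333333]
```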
Example 2 on page 429. Fix distinct \(t_0, \ldots, t_n \in R\). For any p and q in \(P_n\),
define \(\langle p, q \rangle = \sum_{i=0}^{n} p(t_i) q(t_i)\). Show that it is an inner product.
Solution: \(\langle p, p \rangle = 0\) iff the polynomial vanishes at n + 1 distinct points iff it is the
zero polynomial, since its degree is at most n.
satisfy
\[
a - b + c = 1/3, \quad a = -2/3, \quad a + b + c = 1/3.
\]
Thus, \(\{1, t, t^2 - 2/3\}\) is an orthogonal basis for \(P_2\). An orthonormal basis is
\(\{1/\sqrt{3},\ t/\sqrt{2},\ \sqrt{3/2}\,(t^2 - 2/3)\}\).
1. Theorem 17 on page 433 (The Triangle Inequality). For any u
and v,
\[
\|u + v\| \le \|u\| + \|v\|.
\]
Picture.
Proof: \(\|u + v\|^2 = \langle u + v, u + v \rangle = \langle u, u \rangle + 2\langle u, v \rangle + \langle v, v \rangle \le \|u\|^2 + 2|\langle u, v \rangle| + \|v\|^2 \overset{\text{C--S}}{\le} \|u\|^2 + 2\|u\|\|v\| + \|v\|^2 = (\|u\| + \|v\|)^2\). □
2. Exercise 19 on page 436. Show that \(\sqrt{ab} \le (a + b)/2\); that is, the
geometric mean is less than or equal to the arithmetic mean.
Solution: Consider \(u = \begin{bmatrix} \sqrt{a} \\ \sqrt{b} \end{bmatrix}\) and \(v = \begin{bmatrix} \sqrt{b} \\ \sqrt{a} \end{bmatrix}\). The inner product is
\(\langle u, v \rangle = 2\sqrt{ab}\). The norms are \(\|u\| = \|v\| = \sqrt{a + b}\). By the Cauchy–
Schwarz inequality, \(2\sqrt{ab} \le a + b\).
3. Exercise 20 on page 436. Show that \((a + b)^2 \le 2a^2 + 2b^2\).
Solution: Consider \(u = \begin{bmatrix} a \\ b \end{bmatrix}\) and \(v = \begin{bmatrix} 1 \\ 1 \end{bmatrix}\). The inner product is \(\langle u, v \rangle = a + b\). The norms are \(\|u\| = \sqrt{a^2 + b^2}\) and \(\|v\| = \sqrt{2}\). By the Cauchy–
Schwarz inequality, \(a + b \le \sqrt{2(a^2 + b^2)}\).
Example.
Since a nonzero multiple of an eigenvector is still an eigenvector, we can
normalize the orthogonal eigenvectors {v1 , v2 , v3 } to produce the unit eigen-
vectors (orthonormal eigenvectors) {u1 , u2 , u3 }. In our example,
\[
u_1 = \begin{bmatrix} 1/\sqrt{3} \\ 1/\sqrt{3} \\ 1/\sqrt{3} \end{bmatrix}, \quad
u_2 = \begin{bmatrix} -1/\sqrt{6} \\ -1/\sqrt{6} \\ 2/\sqrt{6} \end{bmatrix}, \quad
u_3 = \begin{bmatrix} -1/\sqrt{2} \\ 1/\sqrt{2} \\ 0 \end{bmatrix}.
\]
Definition. Let \(P = [u_1 \ \ldots \ u_n]\), where \(\{u_1, \ldots, u_n\}\) are the orthonormal
eigenvectors of an n × n matrix A that correspond to the (not necessarily distinct)
eigenvalues \(\lambda_1, \ldots, \lambda_n\). The matrix P is called an orthogonal matrix. It has
the property that \(P^{-1} = P^T\). Let
\[
D = \begin{bmatrix} \lambda_1 & 0 & \ldots & 0 \\ 0 & \lambda_2 & \ldots & 0 \\ & & \ddots & \\ 0 & 0 & \ldots & \lambda_n \end{bmatrix}
\]
be the diagonal matrix with the eigenvalues on the main diagonal. Then \(A = PDP^{-1} = PDP^T\). The matrix A is said to be orthogonally diagonalizable.
In our example, P = . . . , D = . . . .
Example 3 on pages 451 – 452. Orthogonally diagonalize the matrix
\[
A = \begin{bmatrix} 3 & -2 & 4 \\ -2 & 6 & 2 \\ 4 & 2 & 3 \end{bmatrix}.
\]
Solution: The characteristic polynomial is \(-(\lambda + 2)(\lambda - 7)^2\). The eigen-
vectors are
\[
\lambda_1 = -2: \ v_1 = \begin{bmatrix} -1 \\ -1/2 \\ 1 \end{bmatrix}; \qquad
\lambda_2 = 7: \ v_2 = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \ v_3 = \begin{bmatrix} -1/2 \\ 1 \\ 0 \end{bmatrix}.
\]
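For symmetric matrices, `numpy.linalg.eigh` returns orthonormal eigenvectors directly, so it performs the orthogonal diagonalization in one call. A check of Example 3 (an aside, not part of the notes):

```python
import numpy as np

A = np.array([[ 3., -2., 4.],
              [-2.,  6., 2.],
              [ 4.,  2., 3.]])

w, P = np.linalg.eigh(A)                 # ascending eigenvalues, orthonormal P
D = np.diag(w)
print(np.round(w, 6))                    # [-2.  7.  7.]
print(np.allclose(A, P @ D @ P.T))       # True: A = P D P^T
print(np.allclose(P.T @ P, np.eye(3)))   # True: P is orthogonal
```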
eigenvalues of A is called the spectrum.
Example 4 on page 453. Construct a spectral decomposition of
\[
A = \begin{bmatrix} 7 & 2 \\ 2 & 4 \end{bmatrix}
= \begin{bmatrix} 2/\sqrt{5} & -1/\sqrt{5} \\ 1/\sqrt{5} & 2/\sqrt{5} \end{bmatrix}
\begin{bmatrix} 8 & 0 \\ 0 & 3 \end{bmatrix}
\begin{bmatrix} 2/\sqrt{5} & 1/\sqrt{5} \\ -1/\sqrt{5} & 2/\sqrt{5} \end{bmatrix}.
\]
Solution:
\[
A = 8 u_1 u_1^T + 3 u_2 u_2^T
= \begin{bmatrix} 32/5 & 16/5 \\ 16/5 & 8/5 \end{bmatrix}
+ \begin{bmatrix} 3/5 & -6/5 \\ -6/5 & 12/5 \end{bmatrix}.
\]
7.2. Quadratic Forms.
Definition. The columns of P are called the principal axes of the quadratic
form xT Ax. The vector y is actually vector x relative to the orthonormal
basis of Rn given by the principal axes.
Consider \(Q(y) = y^T D y\), and let c be a constant. Then the equation \(y^T D y = c\)
can be written in one of the following six forms:
(i) \(x_1^2/a^2 + x_2^2/b^2 = 1\), a ≥ b > 0 (an ellipse if a > b, a circle if a = b),
(ii) \(x_1^2/a^2 + x_2^2/b^2 = 0\) (a single point (0, 0)),
(iii) \(x_1^2/a^2 + x_2^2/b^2 = -1\) (empty set of points),
(iv) \(x_1^2/a^2 - x_2^2/b^2 = 1\), a ≥ b > 0 (a hyperbola),
(v) \(x_1^2/a^2 - x_2^2/b^2 = 0\) (two intersecting lines \(x_2 = \pm(b/a)x_1\)),
and (vi) \(x_1^2/a^2 - x_2^2/b^2 = -1\) (a hyperbola \(x_2^2/b^2 - x_1^2/a^2 = 1\)).
Example. \(Q(x) = x_1^2 + 2x_2^2\), \(Q(x) = -x_1^2 - 2x_2^2\), \(Q(x) = x_1^2 - 2x_2^2\).