
The minimal polynomial

Michael H. Mertens
October 22, 2015

Introduction
In these short notes we explain some of the important features of the minimal
polynomial of a square matrix A and recall some basic techniques to find roots
of polynomials of small degree which may be useful.
Should there be any typos or mathematical errors in this manuscript, I’d
be glad to hear about them via email ([email protected]) and
correct them. Please note that the numbering of theorems and definitions is
unfortunately not consistent with the lecture.

Atlanta, October 2015, Michael H. Mertens

1 The minimal polynomial and the characteristic polynomial
1.1 Definition and first properties
Throughout this section, A ∈ R^{n×n} denotes a square matrix with n rows and columns and real entries, unless specified otherwise. We want to study a certain polynomial associated to A, the minimal polynomial.
Definition 1.1. Let p(X) = c_d X^d + c_{d−1} X^{d−1} + ... + c_1 X + c_0 be a polynomial of degree d in the indeterminate X with real coefficients.

(i) For A ∈ R^{n×n} we define

    p(A) := c_d A^d + c_{d−1} A^{d−1} + ... + c_1 A + c_0 I_n.

(ii) The minimal polynomial of A, denoted by µ_A(X), is the monic (i.e. with leading coefficient 1) polynomial of lowest degree such that

    µ_A(A) = 0 ∈ R^{n×n}.

It is perhaps not immediately clear that this definition always makes sense.
Lemma 1.2. The minimal polynomial is always well-defined, and we have deg µ_A(X) ≤ n².
Proof. We can write the entries of an n × n-matrix as a column of length n², e.g. reading the matrix row-wise,

    [ a_11  a_12  ...  a_1n ]
    [ a_21  a_22  ...  a_2n ]
    [  ...   ...  ...   ... ]   ↦   (a_11, a_12, ..., a_1n, a_21, ..., a_nn)^T.
    [ a_n1  a_n2  ...  a_nn ]
With this we can identify R^{n×n} with R^{n²} and, as is easy to see, addition and scalar multiplication of matrices are respected by this map. In particular this means that at most n² matrices in R^{n×n} can be linearly independent, since they can be thought of as living in R^{n²}, whose dimension is n². This means now that there must be a minimal d ≤ n² such that the matrices

    I_n, A, A², ..., A^d

(viewed as vectors as described above) are linearly dependent. Since d is minimal, the coefficient of A^d in such a dependency cannot be zero, so after normalizing it to 1 there are numbers c_0, ..., c_{d−1} such that

    A^d + c_{d−1} A^{d−1} + ... + c_1 A + c_0 I_n = 0 ∈ R^{n×n}.

If we now replace A in this equation by the indeterminate X, we obtain a monic polynomial p(X) satisfying p(A) = 0 whose degree d is minimal by construction, hence p(X) = µ_A(X) by definition.
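To make this concrete, the proof's method is easy to put into code. The following Python sketch (our own illustration, using the sympy library for exact arithmetic; the function name is ours, not part of the notes) flattens the powers of A and searches for the first linear dependency:

    import sympy as sp

    def minpoly_naive(A):
        """Minimal polynomial via the method in the proof of Lemma 1.2."""
        n = A.rows
        X = sp.symbols('X')
        powers = [sp.eye(n)]                  # I_n, A, A^2, ... computed so far
        while True:
            powers.append(powers[-1] * A)
            # the columns of M are the flattened matrices I_n, A, ..., A^d
            M = sp.Matrix([[P[i, j] for P in powers]
                           for i in range(n) for j in range(n)])
            ns = M.nullspace()
            if ns:                            # first linear dependency found
                c = ns[0]                     # coefficients of I_n, A, ..., A^d
                d = len(powers) - 1
                # normalize so that the coefficient of X^d becomes 1
                return sp.expand(sum(c[k] * X**k for k in range(d + 1)) / c[d])

    A = sp.Matrix([[4, 0, -3], [4, -2, -2], [4, 0, -4]])
    print(minpoly_naive(A))                   # X**2 - 4 (the matrix of Example 1.8)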
Remark 1.3. In fact, there is a much stronger (and sharp) bound on the degree of the minimal polynomial, namely deg µ_A(X) ≤ n. This is a consequence of the Theorem of Cayley-Hamilton (Theorem 1.11), which we will encounter later.

Recall the definition of eigenvalues, eigenvectors, and eigenspaces from
the lecture.
Definition 1.4. A non-zero vector v ∈ R^n that satisfies the equation

    Av = λv

for some scalar λ is called an eigenvector of A to the eigenvalue λ. The set of all vectors satisfying the above equation is called the eigenspace of A associated to λ, denoted by E_A(λ).
Remark 1.5. Recall that E_A(λ) = Kern(A − λI_n), so in particular, E_A(λ) is a subspace of R^n.
One of our goals is to find the eigenvalues of A. The following theorem
will tell us how we can use the minimal polynomial of A to find them.
Theorem 1.6. The number λ is an eigenvalue of A if and only if µA (λ) = 0,
which is the case if and only if we can write µA (X) = (X − λ)q(X) for some
polynomial q(X) of degree deg µA (X) − 1.
Proof. First suppose that λ is an eigenvalue of A, so E_A(λ) is not just the zero space. By definition, the maps v ↦ Av and v ↦ λv agree on the subspace E_A(λ), so (restricted to this subspace!) the minimal polynomial of this map is X − λ. But since µ_A(A) = 0, the polynomial µ_A also annihilates the restriction, so the minimal polynomial X − λ of the restriction divides µ_A(X).
Now suppose that µ_A(λ) = 0, i.e. we can write µ_A(X) = (X − λ)q(X) for some polynomial q(X) of degree deg µ_A(X) − 1. If λ were not an eigenvalue of A, i.e. E_A(λ) = {0}, then the matrix A − λI_n would be invertible. In particular this would mean that µ_A(A) = 0 if and only if q(A) = 0, but this can't be, because the degree of q(X) is less than the degree of µ_A(X). Thus λ must be an eigenvalue of A.
So once one has the minimal polynomial, one only has to find its zeros in order to find the eigenvalues of A. It remains to show how to compute the minimal polynomial. Theoretically, one could use the method outlined in the proof of Lemma 1.2, i.e. successively compute powers of A and look for linear dependencies. The problem here is that computing powers of large matrices becomes very tedious, and in each step one has to solve a linear system with n² equations to find the linear dependencies. A much more efficient method is outlined below.

Algorithm 1.7.
INPUT: A ∈ Rn×n
OUTPUT: µA (X)
ALGORITHM:

(i) Choose any non-zero v ∈ R^n.

(ii) While the set S = {v, Av, A²v, ..., A^m v} is linearly independent, compute A^{m+1}v = A(A^m v) and add it to S.

(iii) Write down the (normalized) linear dependency for S, i.e. compute numbers c_0, ..., c_{d−1} such that A^d v = c_{d−1}A^{d−1}v + ... + c_0 v and use these to define the polynomial µ̃(X) = X^d − (c_{d−1}X^{d−1} + ... + c_0).

(iv) While the vectors found don't span R^n, find a vector not in their span and repeat steps (ii) and (iii).

(v) The minimal polynomial is then the least common multiple of all the polynomials found on the way.
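For readers who want to experiment, the following Python/sympy sketch (our own illustration; none of the names are prescribed by the lecture) implements Algorithm 1.7 and, applied to the matrix of the next example, returns X² − 4:

    import sympy as sp

    def minimal_polynomial(A):
        """Minimal polynomial of a square sympy Matrix A, via Algorithm 1.7."""
        n = A.rows
        X = sp.symbols('X')
        found = sp.zeros(n, 1)        # Krylov vectors found; zero column as placeholder
        mu = None                     # running lcm of the polynomials found so far
        while found.rank() < n:       # step (iv): found vectors don't span R^n yet
            # choose a standard basis vector outside the span of the found vectors
            v = next(sp.eye(n).col(j) for j in range(n)
                     if found.row_join(sp.eye(n).col(j)).rank() > found.rank())
            K, w = v, A * v           # step (ii): grow v, Av, A^2 v, ...
            while K.row_join(w).rank() > K.rank():
                K = K.row_join(w)
                w = A * w
            c, _ = K.gauss_jordan_solve(w)   # step (iii): A^d v = sum_i c_i A^i v
            d = K.cols
            mu_tilde = X**d - sum(c[i] * X**i for i in range(d))
            mu = mu_tilde if mu is None else sp.lcm(mu, mu_tilde)   # step (v)
            found = found.row_join(K)
        return sp.expand(mu)

    A = sp.Matrix([[4, 0, -3], [4, -2, -2], [4, 0, -4]])
    print(minimal_polynomial(A))      # X**2 - 4, as found by hand in Example 1.8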

Example 1.8. Let

    A = [ 4   0  −3 ]
        [ 4  −2  −2 ]  ∈ R^{3×3}.
        [ 4   0  −4 ]

Find µ_A(X).
We first choose the vector e_1 = (1, 0, 0)^T. We compute successively

    e_1 = (1, 0, 0)^T,
    Ae_1 = (4, 4, 4)^T,
    A²e_1 = A(4, 4, 4)^T = (4, 0, 0)^T = 4e_1.

Hence we have found a linear dependency (A² − 4I_3)e_1 = 0, i.e. a divisor of our minimal polynomial,

    µ̃(X) = X² − 4.

Since e_1 and Ae_1 do not yet span all of R³, we need to repeat the above steps with a vector not in the span of those, e.g. with e_2 = (0, 1, 0)^T. Then we have

    Ae_2 = (0, −2, 0)^T = −2e_2,

so that we find the linear dependence Ae_2 + 2e_2 = 0, i.e. the divisor X + 2 of our minimal polynomial. Note that by chance, e_2 is indeed an eigenvector of A, but this is merely a coincidence. Since the three vectors e_1, Ae_1, e_2 span R³, we can stop now and find

    µ_A(X) = lcm(X² − 4, X + 2) = X² − 4.

Another way to compute eigenvalues of a matrix is through the characteristic polynomial.

Definition 1.9. For A ∈ R^{n×n} we define the characteristic polynomial of A as

    χ_A(X) := det(XI_n − A).

This is a monic polynomial of degree n.

The motivation for this definition essentially comes from the invertible matrix theorem, especially Theorem 3.8 of the lecture. More precisely, we have that λ is an eigenvalue of A if and only if E_A(λ) = Kern(A − λI_n) ≠ {0}, which is the case if and only if the matrix A − λI_n is not invertible, which happens if and only if det(A − λI_n) = 0. This proves the following theorem.

Theorem 1.10. The number λ is an eigenvalue of the matrix A ∈ R^{n×n} if and only if χ_A(λ) = 0.

In particular, the minimal polynomial µ_A(X) and the characteristic polynomial χ_A(X) have the same roots. In fact even more is true.

Theorem 1.11 (Cayley-Hamilton). The minimal polynomial divides the characteristic polynomial; in other words, we have

    χ_A(A) = 0 ∈ R^{n×n}.

Proof. By extending the definition of the classical adjoint to matrices with polynomials as entries, we can write

    (XI_n − A) adj(XI_n − A) = det(XI_n − A) I_n = χ_A(X) I_n.    (1.1)

Now by construction the entries of the adjoint are polynomials of degree at most n − 1, hence we can write

    adj(XI_n − A) = C_{n−1}X^{n−1} + C_{n−2}X^{n−2} + ... + C_1 X + C_0,

where C_0, ..., C_{n−1} ∈ R^{n×n} are suitable matrices. If we now let

    χ_A(X) = X^n + a_{n−1}X^{n−1} + ... + a_1 X + a_0

for some numbers a_0, ..., a_{n−1}, it follows from (1.1) that

    (X^n + a_{n−1}X^{n−1} + ... + a_1 X + a_0) I_n
      = (XI_n − A)(C_{n−1}X^{n−1} + C_{n−2}X^{n−2} + ... + C_1 X + C_0)
      = C_{n−1}X^n + (C_{n−2} − AC_{n−1})X^{n−1} + ... + (C_0 − AC_1)X − AC_0,

so that by comparing coefficients we see that

    C_{n−1} = I_n,   a_j I_n = C_{j−1} − AC_j  (j = 1, ..., n − 1),   a_0 I_n = −AC_0.

With this we obtain that

    χ_A(A) = A^n + a_{n−1}A^{n−1} + ... + a_1 A + a_0 I_n
           = A^n + A^{n−1}(C_{n−2} − AC_{n−1}) + A^{n−2}(C_{n−3} − AC_{n−2}) + ... + A(C_0 − AC_1) − AC_0
           = A^n − A^n + AC_0 − AC_0      (the sum telescopes, using C_{n−1} = I_n)
           = 0,

which is what we wanted to show.
Remark 1.12. It is a popular joke to “prove” the theorem of Cayley-Hamilton by the following short line of equations:

    χ_A(A) = det(AI_n − A) = det(0) = 0.

Why is this “proof” rubbish?
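For the matrix A from Example 1.8 one can check the theorem directly; the following sympy snippet is our own illustration, not part of the lecture:

    import sympy as sp

    X = sp.symbols('X')
    A = sp.Matrix([[4, 0, -3], [4, -2, -2], [4, 0, -4]])

    chi = A.charpoly(X).as_expr()             # chi_A(X) = det(X*I_3 - A)
    print(sp.factor(chi))                     # (X - 2)*(X + 2)**2
    mu = X**2 - 4                             # mu_A(X), computed in Example 1.8
    print(sp.rem(chi, mu, X))                 # 0, so mu_A divides chi_A
    print(A**3 + 2*A**2 - 4*A - 8*sp.eye(3))  # chi_A(A) is the zero matrix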

If λ is an eigenvalue of A, then, according to Theorems 1.6, 1.10 and 1.11, we can write

    µ_A(X) = (X − λ)^γ p(X)    (1.2)

and

    χ_A(X) = (X − λ)^α q(X),    (1.3)

where p(λ), q(λ) ≠ 0. This yields

Definition 1.13. For an eigenvalue λ of a matrix A ∈ R^{n×n} we call the number γ defined by (1.2) the geometric multiplicity of λ. The number α defined by (1.3) is called its algebraic multiplicity.

1.2 Similarity and diagonalizability

A very important concept in linear algebra is that of similarity of matrices. This can be seen from the viewpoint of linear maps (later) or from a purely computational viewpoint: Suppose we want to raise the matrix A to a large power. Just applying the definition of the matrix product over and over is not really the best way to go. Suppose for example that we can write

    A = gDg^{−1}

for some invertible matrix g ∈ R^{n×n} and a diagonal matrix D. Then we have for example

    A² = (gDg^{−1})² = gDg^{−1}gDg^{−1} = gD²g^{−1},

and similarly in general

    A^k = gD^k g^{−1}.

The good thing about this is of course that it is very easy indeed to compute a power of a diagonal matrix. We want to study this phenomenon a bit more closely; a small numerical illustration follows below.
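Here is that quick illustration (our own, using numpy), with the matrix from Example 1.8, which happens to be diagonalizable:

    import numpy as np

    A = np.array([[4., 0., -3.],
                  [4., -2., -2.],
                  [4., 0., -4.]])
    eigvals, g = np.linalg.eig(A)             # the columns of g are eigenvectors
    k = 10
    Ak_diag = g @ np.diag(eigvals**k) @ np.linalg.inv(g)   # g D^k g^(-1)
    Ak_slow = np.linalg.matrix_power(A, k)                 # repeated products
    print(np.allclose(Ak_diag, Ak_slow))                   # True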
Definition 1.14. (i) We call a matrix A ∈ R^{n×n} similar to a matrix B ∈ R^{n×n}, in symbols A ∼ B, if there exists an invertible matrix g ∈ R^{n×n} such that A = gBg^{−1}.

(ii) If A is similar to a diagonal matrix, we call A diagonalizable.

Remark 1.15. (i) If A ∼ B, then we also have B ∼ A, since

    A = gBg^{−1} ⇔ B = g^{−1}Ag.

(ii) If A ∼ B and B ∼ C, then we also have A ∼ C, since it follows from A = gBg^{−1} and B = hCh^{−1} that A = ghCh^{−1}g^{−1} = (gh)C(gh)^{−1}.

In general it is quite hard to decide whether two given matrices are similar or not. But there are several quantities, more or less easy to compute, that may give some indication. We give some of them in the following theorem.

Theorem 1.16. Let A, B ∈ Rn×n be similar. Then the following are all true.

(i) det A = det B,

(ii) trace A = trace B,

(iii) χA (X) = χB (X),

(iv) µA (X) = µB (X),

(v) A and B have the same eigenvalues with the same algebraic and geo-
metric multiplicities.

Proof. Since A ∼ B, we can write A = gBg^{−1} for some invertible matrix g.

(i) We have

    det A = det(gBg^{−1}) = det g · det B · (det g)^{−1} = det B,

since the determinant is multiplicative.

(ii) Exercise 6 on Homework assignment 8.

(iii) We have

    χ_A(X) = det(XI_n − A) = det(Xgg^{−1} − gBg^{−1}) = det(g(XI_n − B)g^{−1})
           = det g · det(XI_n − B) · (det g)^{−1} = det(XI_n − B) = χ_B(X).

(iv) Suppose that

    µ_A(X) = X^d + a_{d−1}X^{d−1} + ... + a_1 X + a_0  and
    µ_B(X) = X^{d′} + b_{d′−1}X^{d′−1} + ... + b_1 X + b_0.

We show that µ_A(B) = µ_B(A) = 0, which implies that both minimal polynomials mutually divide each other; since they have the same leading coefficient, they must then be equal. We compute

    µ_A(B) = B^d + a_{d−1}B^{d−1} + ... + a_1 B + a_0 I_n
           = (g^{−1}Ag)^d + a_{d−1}(g^{−1}Ag)^{d−1} + ... + a_1(g^{−1}Ag) + a_0 g^{−1}g
           = (g^{−1}A^d g) + a_{d−1}(g^{−1}A^{d−1}g) + ... + a_1(g^{−1}Ag) + a_0 g^{−1}g
           = g^{−1}(A^d + a_{d−1}A^{d−1} + ... + a_1 A + a_0 I_n)g
           = g^{−1}µ_A(A)g
           = 0.

With a similar computation one also shows µ_B(A) = 0, which completes the proof.

(v) Follows from (iii) and (iv).

Remark 1.17. (i) Theorem 1.16 does NOT say that if the indicated quantities above agree for two matrices, then the matrices are similar. It just states that if they do not agree for two given matrices, then these cannot be similar. It is not even true that two matrices are similar if all the above quantities agree. For example, for

    A = [ 2  1  0  0 ]          B = [ 2  1  0  0 ]
        [ 0  2  0  0 ]   and        [ 0  2  0  0 ]
        [ 0  0  2  1 ]              [ 0  0  2  0 ]
        [ 0  0  0  2 ]              [ 0  0  0  2 ]

we have det A = det B = 16, trace A = trace B = 8, χ_A(X) = χ_B(X) = (X − 2)⁴, and µ_A(X) = µ_B(X) = (X − 2)², but the two matrices are not similar (which is not so easy to see).

(ii) On the other hand one can show that two 2 × 2-matrices are similar
if and only if their minimal polynomials agree (this is false for the
characteristic polynomial!).

(iii) As one can also show, two 3 × 3-matrices are similar if and only if they
have the same minimal and characteristic polynomial.
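Using the minimal_polynomial sketch from Section 1.1 (our own helper, not part of the notes), one can at least confirm that all the quantities of Theorem 1.16 agree for the pair in (i):

    import sympy as sp

    X = sp.symbols('X')
    A = sp.Matrix([[2, 1, 0, 0], [0, 2, 0, 0], [0, 0, 2, 1], [0, 0, 0, 2]])
    B = sp.Matrix([[2, 1, 0, 0], [0, 2, 0, 0], [0, 0, 2, 0], [0, 0, 0, 2]])

    print(A.det() == B.det() == 16)           # True
    print(A.trace() == B.trace() == 8)        # True
    print(A.charpoly(X) == B.charpoly(X))     # True: both are (X - 2)**4
    print(sp.factor(minimal_polynomial(A)),   # (X - 2)**2 for both matrices
          sp.factor(minimal_polynomial(B)))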

2 Roots of polynomials
2.1 Quadratic and biquadratic equations
Even though it is probably the most well-known topic among those discussed in these notes, we begin by recalling how to solve quadratic equations. Suppose we have an equation of the form

    ax² + bx + c = 0,

where a, b, c are given real numbers (with a ≠ 0, otherwise we would in fact be talking about linear equations) and we want to solve for x. Since a ≠ 0, we can divide the whole equation by it and add a clever zero on the left-hand side, giving

    x² + (b/a)x + (b/(2a))² − (b/(2a))² + c/a = 0.
The first three summands can easily be recognized to equal (x + b/(2a))². This procedure of adding this particular zero is called completing the square. Now we reorder the equation to obtain

    (x + b/(2a))² = (b² − 4ac)/(4a²).
Now there are three cases to distinguish.

(i) If ∆ := b² − 4ac > 0, then we obtain two distinct real solutions by taking the square-root, namely

    x = (−b + √∆)/(2a)  or  x = (−b − √∆)/(2a).    (2.1)

Note that the square-root of a positive real number a is also positive by definition and therefore unique, while the equation x² = a has two solutions, √a and −√a.
(ii) If ∆ = 0, then there is precisely one zero,

    x = −b/(2a).

In this case we speak of a double zero, since the derivative of the function f(x) = ax² + bx + c would also vanish there. The zeros in the first case are called simple zeros.

(iii) If ∆ < 0, then there is no real solution, since the square of a real number is always non-negative.

Because the behaviour of the quadratic equation is determined entirely by the quantity ∆ = b² − 4ac, it is called the discriminant of the equation (from Latin discriminare – to distinguish).
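The three cases translate directly into code. The following small Python helper is our own illustrative sketch, not part of the lecture:

    import math

    def solve_quadratic(a, b, c):
        """Return the real solutions of a*x^2 + b*x + c = 0 (requires a != 0)."""
        delta = b * b - 4 * a * c            # the discriminant of the equation
        if delta > 0:                        # case (i): two simple real zeros
            return ((-b + math.sqrt(delta)) / (2 * a),
                    (-b - math.sqrt(delta)) / (2 * a))
        if delta == 0:                       # case (ii): one double zero
            return (-b / (2 * a),)
        return ()                            # case (iii): no real solutions

    print(solve_quadratic(1, 1, -20))        # (4.0, -5.0); reappears in Example 2.2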
In the case where a = 1 (we say that the polynomial is monic in this case) and b and c are integers, there is an easy alternative to the above formula, which gives the solutions quicker if they are integers (and in many examples, they will be). It is based on the following observation, which goes back to the French mathematician François Viète (1540–1603).

Lemma 2.1. Let α_1 and α_2 be roots of the polynomial x² + bx + c. Then we have b = −(α_1 + α_2) and c = α_1 α_2.

Proof. If α_1 and α_2 are the two zeros of our polynomial, then we must have

    x² + bx + c = (x − α_1)(x − α_2) = x² − (α_1 + α_2)x + α_1 α_2.

Thus a comparison of the coefficients gives the lemma.


So if one can factor the number c and combine the divisors so that they
sum to −b, one also has found the solutions to the equation. This may be
easier to do without a calculator than taking square-roots, especially if c has
very few divisors. If this factoring method doesn’t work, then we also know
that our solutions will not be integers.
Sometimes it happens that one has to deal with so-called biquadratic equations. These have the general form

    ax⁴ + bx² + c = 0.

It is not very complicated to solve these as well: one just substitutes z = x² to obtain a quadratic equation in z, which one can solve by either one of the above methods. Afterwards, we take the positive and negative square-root of those solutions which are non-negative (the others don't yield any real solutions to the biquadratic equation).

Example 2.2. Let's solve the biquadratic equation

    x⁴ + x² − 20 = 0.

Substituting z = x² gives us the quadratic equation

    z² + z − 20 = 0.

The discriminant of this quadratic equation is ∆ = 1² − 4 · 1 · (−20) = 81 > 0, therefore we have two real solutions for z according to our formula (2.1), namely

    z = (−1 + √81)/2 = 4  or  z = (−1 − √81)/2 = −5.

Since z = x², it must be non-negative if x is a real number, so the solution z = −5 is irrelevant for us and we obtain the two real solutions

    x = 2 or x = −2.
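Reusing the solve_quadratic sketch from above, the substitution z = x² becomes a few lines of code (again our own illustration):

    import math

    def solve_biquadratic(a, b, c):
        """Real solutions of a*x^4 + b*x^2 + c = 0, via the substitution z = x^2."""
        xs = []
        for z in solve_quadratic(a, b, c):   # solve a*z^2 + b*z + c = 0 first
            if z > 0:                        # each positive z gives two real x
                xs.extend((math.sqrt(z), -math.sqrt(z)))
            elif z == 0:                     # z = 0 gives the single root x = 0
                xs.append(0.0)
        return xs                            # negative z yield no real solutions

    print(solve_biquadratic(1, 1, -20))      # [2.0, -2.0], as in Example 2.2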

2.1.1 Basics on complex numbers

Let us go back to the third case of solving quadratic equations, when the discriminant of a quadratic equation is negative. The argument why there is no solution is that the square of a real number is always non-negative. But often it is necessary to have a solution to such an equation, even if it is not real. So one imagines that there is a “number”, which we call i, with the property i² = −1. This number i is not a real number; it is indeed called imaginary. We note that essentially all the rules of arithmetic one is used to from working with real numbers can also be used for this number i.
With this, we can in fact solve our quadratic equation

    ax² + bx + c = 0

even in the case when the discriminant ∆ is negative: writing ∆ = (−1) · (−∆) and keeping in mind that −∆ is positive, we can restate our formula (2.1) as

    x = (−b + i√(−∆))/(2a)  or  x = (−b − i√(−∆))/(2a),    (2.2)
an expression we can now make sense of. In general we call an object of the form α = a + bi with real numbers a and b a complex number. The number a is called the real part of α, the number b is called the imaginary part of α (in particular, the imaginary part of a complex number is always a real number). Every complex number can be simplified to be of this form. We collect a few facts about complex numbers here. We note that one can basically calculate with complex numbers exactly as with real numbers, but since it is not really relevant in the course of the lecture, we won't go into this here. The only thing that we will need is the exponential of a complex number. We will just give it as a definition, although it is possible to derive it properly.
Definition 2.3. For a complex number α = a + bi with real numbers a, b we have

    exp(α) := e^α := e^a cos(b) + i e^a sin(b).
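As a quick plausibility check (our own, using Python's standard cmath module), the built-in complex exponential agrees with this definition:

    import cmath
    import math

    alpha = complex(1.0, 2.0)                     # a = 1, b = 2
    lhs = cmath.exp(alpha)                        # built-in exp of a + bi
    rhs = complex(math.exp(1.0) * math.cos(2.0),  # e^a cos(b) + i e^a sin(b)
                  math.exp(1.0) * math.sin(2.0))
    print(cmath.isclose(lhs, rhs))                # True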

2.2 Cubic equations

2.2.1 Polynomial division

There is an easy way to divide polynomials by one another, which works basically like the long division algorithm for integers. This comes in handy if one has guessed one zero, say α, of a polynomial, because one can then divide the polynomial by x − α to obtain a polynomial of lower degree that one then has to deal with (see Lemma 2.4). We want to divide x³ − x² − 3x − 9 by x − 3. One only looks at the leading terms of the polynomials and divides those. In this case, we obtain x².

      x³ − x² − 3x − 9 = (x − 3)(x²)
Then we multiply the divisor x − 3 by this result and subtract it from the dividend x³ − x² − 3x − 9,

      x³ − x² − 3x − 9 = (x − 3)(x²)
    − x³ + 3x²
We copy the next lower term downstairs and repeat the procedure with the difference.

      x³ − x² − 3x − 9 = (x − 3)(x²)
    − x³ + 3x²
            2x² − 3x
Again, we only divide the leading terms, and we get +2x, which we write next to the x² from the previous step,

      x³ − x² − 3x − 9 = (x − 3)(x² + 2x)
    − x³ + 3x²
            2x² − 3x

We multiply this 2x again by our divisor and subtract the result from the dividend,

      x³ − x² − 3x − 9 = (x − 3)(x² + 2x)
    − x³ + 3x²
            2x² − 3x
          − 2x² + 6x
We repeat this procedure again and finish up:

      x³ − x² − 3x − 9 = (x − 3)(x² + 2x + 3)
    − x³ + 3x²
            2x² − 3x
          − 2x² + 6x
                  3x − 9
                − 3x + 9
                       0

Hence we have found the result of the division, namely x² + 2x + 3.


In general, it is not necessary that the polynomial division goes through without a remainder. If there is one, its degree will be less than that of the divisor. We want to divide x⁴ − 3x³ + 2x² − 5x + 7 by x² − x + 1. Up to the last step everything is the same as before,

      x⁴ − 3x³ + 2x² − 5x + 7 = (x² − x + 1)(x² − 2x − 1)
    − x⁴ + x³ − x²
          − 2x³ + x² − 5x
            2x³ − 2x² + 2x
                − x² − 3x + 7
                  x² − x + 1
                      − 4x + 8

We see that the last difference is a nonzero polynomial of degree 1, which is less than the degree of the divisor. This polynomial is our remainder, which we have to add on the right-hand side,

      x⁴ − 3x³ + 2x² − 5x + 7 = (x² − x + 1)(x² − 2x − 1) − 4x + 8
    − x⁴ + x³ − x²
          − 2x³ + x² − 5x
            2x³ − 2x² + 2x
                − x² − 3x + 7
                  x² − x + 1
                      − 4x + 8
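The whole procedure is mechanical, which makes it easy to code up. The following sketch (our own illustration) performs the division on coefficient lists, highest degree first:

    def poly_divmod(dividend, divisor):
        """Return (quotient, remainder) as coefficient lists, leading term first."""
        out = list(dividend)
        dlen = len(divisor)
        quot = []
        for i in range(len(dividend) - dlen + 1):
            coeff = out[i] / divisor[0]      # divide the leading terms
            quot.append(coeff)
            for j in range(dlen):            # subtract coeff times the divisor
                out[i + j] -= coeff * divisor[j]
        return quot, out[len(quot):]         # what is left over is the remainder

    # x^3 - x^2 - 3x - 9 divided by x - 3: quotient x^2 + 2x + 3, remainder 0
    print(poly_divmod([1, -1, -3, -9], [1, -3]))
    # x^4 - 3x^3 + 2x^2 - 5x + 7 by x^2 - x + 1: x^2 - 2x - 1, remainder -4x + 8
    print(poly_divmod([1, -3, 2, -5, 7], [1, -1, 1]))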

2.2.2 Cubic and higher degree polynomials

As indicated in the last section, we can use polynomial division to find roots of polynomials. This is based on two easy facts, which we recall in the following lemma.

Lemma 2.4. (i) The product of two numbers is 0 if and only if one of the numbers is zero,

    a · b = 0 ⇔ a = 0 or b = 0.

(ii) A number α is a root of a polynomial p(x) if and only if x − α divides p(x), i.e. the polynomial division of p(x) by x − α goes through without a remainder.

So if we want to find the roots or zeros of a cubic polynomial, we can somehow guess one zero and use polynomial division to obtain a quadratic polynomial, whose zeros we can determine using the formulas in Section 2.1. The question now is how we can guess a zero of a polynomial. At least in the case of a monic polynomial with integer coefficients, there is a very easy observation with which we can check at least for integer zeros. It is a generalization of the Theorem of Vieta (Lemma 2.1).

Proposition 2.5. If an integer a is a zero of a monic polynomial with integer coefficients, then a divides the absolute term (the constant coefficient) of the polynomial. If no divisor of the absolute term is a zero of the polynomial, then all zeros are irrational.
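Proposition 2.5 turns guessing into a finite search: one tests every divisor of the absolute term, with both signs. A small sketch of this (our own illustration):

    def integer_zeros(coeffs):
        """Integer zeros of a monic integer polynomial, coeffs = [1, ..., a_0]."""
        a0 = coeffs[-1]
        if a0 == 0:
            return [0]                        # x divides p(x); treat p(x)/x separately
        zeros = []
        for d in range(1, abs(a0) + 1):
            if a0 % d != 0:                   # only divisors of the absolute term
                continue
            for cand in (d, -d):              # try both signs of the divisor
                value = 0
                for c in coeffs:              # Horner evaluation of p(cand)
                    value = value * cand + c
                if value == 0:
                    zeros.append(cand)
        return zeros

    print(integer_zeros([1, 4, -7, -10]))     # [-1, 2, -5]; cf. Example 2.6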

So if we cannot find an integer root, then we have essentially no chance of “guessing” one. In this case, there are numerical methods to obtain approximations for zeros, the most well-known of which goes back to Sir Isaac Newton (1642–1727), or (for polynomials of degree 3 and 4) there are even closed formulas. None of these will be relevant in our course.

Example 2.6. We want to determine all zeros of the cubic polynomial

    x³ + 4x² − 7x − 10.

By Proposition 2.5, we have to check the divisors of the absolute term, which is 10 in this case. By trial and error we find that 2 is in fact a zero of this polynomial. Now we use polynomial division,

      x³ + 4x² − 7x − 10 = (x − 2)(x² + 6x + 5)
    − x³ + 2x²
            6x² − 7x
          − 6x² + 12x
                   5x − 10
                 − 5x + 10
                         0

The result is x² + 6x + 5. The zeros of this polynomial are −1 and −5, which one can check either using (2.1) or Lemma 2.1. Thus we have the three zeros x = 2, x = −1, and x = −5.

2.3 Exercises

Carry out polynomial division for the following pairs of polynomials.

(i) x² − 4x + 3 divided by x − 3,

(ii) x³ − 4x² + 3x − 12 divided by x − 4,

(iii) x³ − 5x² + 3x − 4 divided by x² + 1,

(iv) x⁴ + 7x² − 4x + 2 divided by x² + 3x − 1.

Find all zeros (real and complex) of the following polynomials.

(i) x² − 4x + 4,

(ii) x² − 4x + 13,

(iii) x⁴ − 25x² + 144,

(iv) x³ − 2x² − 5x + 6,

(v) x³ − 2x² + 9x − 18.
