The College Mathematics Journal

ISSN: 0746-8342 (Print) 1931-1346 (Online) Journal homepage: http://www.tandfonline.com/loi/ucmj20

Order Relations and a Proof of l'Hôpital's Rule

Leonard Gillman

To cite this article: Leonard Gillman (1997) Order Relations and a Proof of l'Hôpital's Rule, The
College Mathematics Journal, 28:4, 288-292, DOI: 10.1080/07468342.1997.11973877

To link to this article: https://doi.org/10.1080/07468342.1997.11973877

Published online: 30 Jan 2018.

CLASSROOM CAPSULES

EDITOR
Thomas A. Farmer
Department of Mathematics and Statistics
Miami University
Oxford, OH 45056-1641

A Classroom Capsule is a short article that contains a new insight on a topic taught in the earlier years
of undergraduate mathematics. Please submit manuscripts prepared according to the guidelines on
the inside front cover to Tom Farmer.

Order Relations and a Proof of l'Hôpital's Rule


Leonard Gillman ([email protected]), 1606 The High Road, Austin, TX, 78746-
2236

This note stems from my having read and enjoyed the recent MAA book of selected
works of R. P. Boas [1].

The "order" definition of limit. The ε, δ definition of the limit or continuity of
a real-valued function f defined on a subset of ℝ refers to metric properties of ℝ.
But in most circumstances, particularly in freshman calculus, order concepts are
sufficient, while metric details are irrelevant and a visual and mental distraction.
Moreover, a series of ε's and δ's in the definition prepares the mind for computation
even when none may be forthcoming. To insist that the required intervals in the
definition of limit be centered at a or f(a) is usually unnecessary and often ridiculous.
Trivially, the family of all open intervals about a point constitutes a base for the
neighborhood system at the point. Thus, continuity of f at a means that, for any
open interval J about f(a), there is an open interval about a that f takes into J.
This formulation is simply the specialization to real functions on ℝ of the definition
(in terms of basic open sets) for arbitrary topological spaces. Likewise,

    lim_{x→a} f(x) = L    (1)

means that, for any open interval J about L, there is a punctured open interval
I \ {a} that f takes into J. (Not "deleted interval," for gosh sakes: that's like "escaped
prison.") Starting thus with an interval about L focuses directly on the goal and is
certainly more natural than starting with a number ε having no visible relation to
the problem; it also reduces the number and variety of symbols. For limits at infinity
and infinite limits, this is essentially the definition we are all accustomed to.
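For comparison, the metric and order formulations of (1) can be set side by side; the symbolic rendering below is only a sketch restating, in LaTeX, what was just said in words:

    \lim_{x\to a} f(x) = L \iff
    \begin{cases}
    \text{(metric)} & \forall\, \varepsilon > 0\ \exists\, \delta > 0:\ 0 < |x-a| < \delta \implies |f(x)-L| < \varepsilon,\\
    \text{(order)}  & \text{for every open interval } J \ni L \text{ there is an open interval } I \ni a \text{ with } f(I \setminus \{a\}) \subseteq J.
    \end{cases}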
At the working level we often refer to the actual endpoints of the intervals. Thus
(1) means (for L finite) that if A and B are any numbers satisfying A < L < B, then

    A < f(x) < B

near a, that is, on (= throughout) some punctured interval I \ {a}.



We may also consider the challenges from each side independently. Note that the
elementary "proximity" theorem,

    If lim_{x→a} F(x) > A, then F(x) > A near a,    (2)

is now just a quotation from the order definition of limit.


An advantage in using the order definition is shown clearly in the limit theorems
for powers of a function. For example, let us show that if f(x) → L ≠ 0 as x → a,
then 1/f(x) → 1/L. Say L > 0. Given a challenge A < 1/L < B, we may assume
A > 0. Then 1/B < L < 1/A, and because f(x) → L, 1/B < f(x) < 1/A near a.
Then A < 1/f(x) < B near a. □
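The argument amounts to inverting the endpoints of the challenge. A short numerical check of that bookkeeping, with the illustrative values L = 2, A = 0.4, B = 0.6 (chosen only for this sketch):

    # Reciprocal-limit bookkeeping: a challenge A < 1/L < B for 1/f is met by
    # the induced challenge 1/B < L < 1/A for f.
    L, A, B = 2.0, 0.4, 0.6          # illustrative values with A < 1/L < B
    lo, hi = 1.0 / B, 1.0 / A        # induced challenge for f
    assert lo < L < hi
    # any value of f(x) inside (lo, hi) has its reciprocal inside (A, B)
    for fx in [1.7, 2.0, 2.4]:       # sample values in (lo, hi)
        assert lo < fx < hi and A < 1.0 / fx < B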
Textbooks that adopted the order definition of limit include [6], [2], and [5], in
increasing level of commitment, the commitment in [5] being total.

Proof of l'Hôpital's rule. The rule as stated in l'Hôpital's own calculus book was
an elementary special case of the modern version, for which the usual proof depends
on Cauchy's extended mean value theorem. But the proof of Cauchy's theorem starts
out with a monster such as

    Consider the function F(t) = [f(b) - f(a)]g(t) - [g(b) - g(a)]f(t),

a deus ex machina that frightens students and from which they learn nothing, though
professional mathematicians are charmed by its elegance. The earliest attacks on a
proof free of mean value theorems seem to be the 1877 paper by Victor Rouquet
[8], which treats the special case f(t)/t, and the 1889 calculus book by Otto Stolz
[10, p. 82].
The proofs that follow for the form 0/0 and ∞/∞ are based, respectively, on
Boas [4] and [3], which are reproduced in [1]. Both bypass the Cauchy theorem but
use ε's. In contrast, Rudin [9] does without ε's but uses Cauchy.
L'Hôpital's rule is best formulated as a theorem about one-sided limits. I will
consider the limits to be all from the right, and for convenience I shorten the symbol
lim_{t→a+} to lim. The term near a+ means of course on an interval (a, u); I think it
is due to Redheffer [7]. Likewise, near ∞ means for sufficiently large t.

L'Hôpital's rule. Let f and g be differentiable on an interval (a, v) (v = ∞ being
permitted), with g' being continuous on the interval, and g'(t) ≠ 0 near a+.
If lim f(t)/g(t) assumes the indeterminate form 0/0 or if lim g(t) = ∞, and if
lim f'(t)/g'(t) exists, finite or infinite, then

    lim f(t)/g(t) = lim f'(t)/g'(t).
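Before the proofs, a quick numerical illustration of the statement may help. The functions f(t) = 1 - cos t and g(t) = t² with a = 0 are an illustrative choice for this sketch; both f/g and f'/g' = sin t/(2t) should approach 1/2 from the right:

    import math

    # Illustrative 0/0 case: f(t) = 1 - cos t, g(t) = t**2 as t -> 0+.
    # l'Hopital predicts lim f/g = lim f'/g' = lim sin(t)/(2t) = 1/2.
    for t in [0.1, 0.01, 0.001]:
        ratio = (1 - math.cos(t)) / t**2        # f(t)/g(t)
        deriv_ratio = math.sin(t) / (2 * t)     # f'(t)/g'(t)
        print(f"t={t}: f/g={ratio:.6f}, f'/g'={deriv_ratio:.6f}")
    # both columns tend to 0.5 as t decreases toward 0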

In the proofs that follow, let

    L = lim f'(t)/g'(t).    (3)

The discussion assumes that L is finite; for L infinite, just ignore the condition L < B
and the inequalities that ensue from it. Since g' is never zero near a+, it is of one
sign there (intermediate value theorem).



Proof for the form 0/0. Given a challenge

    A < L < B,    (4)

we will show that

    A < f(t)/g(t) < B near a+.    (5)

From (3) and (4) we have, by definition of limit,

    A < f'(t)/g'(t) < B near a+.    (6)

For convenience, define f(a) = g(a) = 0; then f and g are continuous at a. Say
g'(t) > 0; then g(t) > 0 for t > a; also, multiplication by g'(t) in (6) preserves order:

    Ag'(t) < f'(t) < Bg'(t) near a+.    (7)

Thus (f - Ag)'(t) > 0 near a+, so f - Ag is increasing on an interval [a, v₁); that is,

    (f - Ag)(t) > (f - Ag)(a) = 0 near a+.

Consequently, f(t) > Ag(t). Similarly, f(t) < Bg(t), so

    Ag(t) < f(t) < Bg(t) near a+.    (8)

(Alternatively, obtain these inequalities by integrating (7) on [a, t], remembering to
use a different variable of integration.) Finally, dividing in (8) by g(t) yields (5). □
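The alternative route through integration can be written out explicitly; the display below merely unpacks the parenthetical remark, assuming (as that remark implicitly does) that f' is integrable on [a, t]:

    \int_a^t A\,g'(s)\,ds \;<\; \int_a^t f'(s)\,ds \;<\; \int_a^t B\,g'(s)\,ds,
    \qquad\text{that is,}\qquad
    A\,g(t) \;<\; f(t) \;<\; B\,g(t),

since f(a) = g(a) = 0 by the convention adopted above, which is inequality (8).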

Proof for the case g(t) → ∞. This proof is a little more complicated. Given a
challenge A < L < B, we will show that

    A < f(t)/g(t) < B near a+.

Choose A* and B* such that A < A* < L < B* < B. Since lim f'(t)/g'(t) = L,
we have by definition of limit

    A* < f'(t)/g'(t) < B* near a+.    (9)

Since g(t) → ∞ as t → a+, g(t) > 0 and g'(t) < 0 near a+. (Recall that g' has one
sign near a+; were it positive, g would be increasing there, hence bounded near a+
by its value at any fixed point of the interval, contradicting g(t) → ∞.) Multiplying
by g'(t) in (9) then reverses order:

    A*g'(t) > f'(t) > B*g'(t) near a+.

Looking at the second inequality, we see that (f - B*g)'(t) > 0, so f - B*g is
increasing. Thus for x < y, (f - B*g)(x) < (f - B*g)(y), which implies

    f(x) - f(y) < B*g(x) - B*g(y).



Transposing f(y) and dividing by g(x) yields

    f(x)/g(x) < B* + [f(y) - B*g(y)]/g(x).

Fix y; as x → a+, the second term on the right goes to 0, so the right-hand side
approaches B* and is therefore eventually less than B; consequently

    f(x)/g(x) < B near a+.

Similarly, f(x)/g(x) > A near a+. □
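As with the 0/0 form, a small numerical check may be reassuring. The functions f(t) = ln t and g(t) = 1/t as t → 0+ (so that g → ∞ and g' < 0) are an illustrative choice for this sketch; here f'(t)/g'(t) = (1/t)/(-1/t²) = -t → 0, so the rule predicts f/g → 0:

    import math

    # Illustrative g -> infinity case at a = 0:
    # f(t) = ln t, g(t) = 1/t, so f'(t)/g'(t) = (1/t)/(-1/t**2) = -t -> 0.
    for t in [0.1, 0.01, 0.001]:
        ratio = math.log(t) / (1.0 / t)    # f(t)/g(t) = t * ln t
        deriv_ratio = -t                   # f'(t)/g'(t)
        print(f"t={t}: f/g={ratio:.6f}, f'/g'={deriv_ratio:.6f}")
    # both columns tend to 0 as t decreases toward 0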

Technical comment. The hypothesis that g' be continuous is included so that we
can apply the intermediate value theorem for continuous functions. But it is
redundant, as all derivatives enjoy the intermediate value property. This intermediate
value theorem for derivatives is held in disfavor by many teachers, however, on the
grounds that derivatives that arise naturally in standard calculus courses are always
continuous. Nevertheless, I like the theorem, particularly because its proof is so
simple and instructive.

Intermediate value theorem for derivatives. If f' is defined on [a, b] and has
opposite signs at a and b, then it is zero at some point in between.

The proof rests on a lemma that every calculus student should understand:

Lemma. (Functions increasing at a point.) If f'(a) > 0, then f(x) > f(a) for x
near a+ and f(x) < f(a) for x near a-.

Note: It does not follow that there is an interval about a on which f is increasing,
though the simplest counterexample I can cook up is f(x) = x + x² sin(1/x).
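A numerical look makes the counterexample concrete. In the sketch below, the sample points are chosen just past the points where cos(1/x) = 1; there the derivative dips below zero even though f'(0) = 1:

    import math

    # f(x) = x + x**2 * sin(1/x) with f(0) = 0, so f'(0) = 1 > 0, while for x != 0
    # f'(x) = 1 + 2*x*sin(1/x) - cos(1/x).  The derivative takes negative values in
    # every neighborhood of 0, so f is increasing at 0 but not on any interval about 0.
    def fprime(x):
        return 1 + 2 * x * math.sin(1 / x) - math.cos(1 / x)

    for k in [1, 10, 100]:
        x = 1 / (2 * math.pi * k - 0.3 / k)   # slightly past a point where cos(1/x) = 1
        print(f"x={x:.6f}, f'(x)={fprime(x):.2e}")   # each printed value is negative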

Proof of the lemma. Consider the difference quotient F(x) = [f(x) - f(a)]/(x - a);
then lim_{x→a} F(x) = f'(a) > 0. By the proximity theorem (2), F(x) > 0 near a.
Hence for x near a, the fraction has the same sign upstairs as downstairs, and the
desired conclusion follows. □

Proof of the theorem. Say f'(a) > 0 > f'(b). By the lemma, there are points x near
a+ for which f(x) > f(a). Similarly, there are points x near b- for which f(x) > f(b).
Consequently, the maximum of the continuous function f on the closed interval [a, b]
does not occur at either endpoint. It therefore occurs at an interior point; and at such
a point the derivative is zero. □

Acknowledgment. I wish to thank the referees for several helpful suggestions.

References

1. Gerald L. Alexanderson and Dale H. Mugler, eds., Lion Hunting & Other Mathematical Pursuits,
Mathematical Association of America, Washington, DC, 1995.
2. Lipman Bers, Calculus, Holt, Rinehart & Winston, New York, 1969.



3. R. P. Boas, L'Hospital's rule without mean-value theorems, American Mathematical Monthly 76:9
(1969) 1051-1053.
4. R. P. Boas, Indeterminate forms revisited, Mathematics Magazine 63:3 (1990) 155-159.
5. Leonard Gillman and Robert H. McDowell, Calculus, Norton, New York, 1973.
6. Richard E. Johnson and Fred L. Kiokemeister, Calculus with Analytic Geometry, Allyn and Bacon,
Boston, 1957.
7. Ray Redheffer, Some thoughts about limits, Mathematics Magazine 62:3 (1989) 176-184.
8. Victor Rouquet, Note sur les vraies valeurs des expressions de la forme ∞/∞, Nouvelles Annales de
Mathématiques 16:2 (1877) 113-116.
9. Walter Rudin, Principles of Mathematical Analysis, 2nd ed., McGraw-Hill, New York, 1953.
10. Otto Stolz, Grundzüge der Differential- und Integralrechnung, vol. 1, Teubner, Leipzig, 1893.

---0---

Bounding the Roots of Polynomials


Holly P. Hirst ([email protected]) and Wade T. Macey, Appalachian State
University, Boone, NC 28608
In these days of ubiquitous graphing devices, a standard problem in mathematics
courses at all levels asks the student to generate a graph of a polynomial function
on an interval that contains all the real roots. In this article we will discuss some
simple bounds on the roots of a polynomial function based upon its coefficients.
The results actually give disks in the complex plane that are guaranteed to contain
all of the roots, real or complex, of the polynomial.
The bounds we describe are not new. The novelty of our presentation lies in the
simplicity of the proof of the first theorem, which uses only elementary properties
of absolute values and thus is easy to understand and apply even for pre-calculus
students. One of the bounds on the roots that we will present was first reported by
Cauchy in 1829. After Cauchy's work was published, bounding roots of polynomials
remained a popular topic of study for over a century; many people produced related
results using widely differing techniques from areas such as linear algebra and com-
plex analysis. Thus the study of bounds for the roots of polynomials in terms of the
coefficients convincingly demonstrates the interconnections between different fields
of mathematics.
We found an added bonus when we looked into the history of this topic: a
well-documented historical record of the development of an idea that is accessible
to undergraduates. Many results about polynomial roots are described in detail in
one convenient source [3], which gives an excellent account of the activity in this
area over the past two centuries. We recommend it for all who study polynomials,
regardless of their particular interest.
We begin with our main result.

Theorem 1. Let f : ℂ → ℂ be defined by f(z) = z^n + a_{n-1}z^{n-1} + ... + a_1 z + a_0,
where a_0, a_1, ..., a_{n-1} ∈ ℂ and n is a positive integer. If z is a zero of f, then

    |z| < 1 + max{|a_{n-1}|, ..., |a_1|, |a_0|}.    (1)

Proof. Let z be a zero of f. If a_0 = a_1 = ··· = a_{n-1} = 0, so f(z) = z^n, then
|z| = 0 < 1.
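Inequality (1) can be checked numerically with numpy's root finder; the monic polynomial in this sketch is an arbitrary illustrative choice:

    import numpy as np

    # Arbitrary monic example: f(z) = z**4 + 2z**3 - 5z**2 + 0.5z - 7.
    coeffs = [1, 2, -5, 0.5, -7]                  # coefficients, highest degree first
    bound = 1 + max(abs(c) for c in coeffs[1:])   # 1 + max|a_i| = 8
    for z in np.roots(coeffs):
        print(f"|z| = {abs(z):.4f}  (within bound {bound}: {abs(z) < bound})")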

