Math8 wk4 Lecture
1 Random Questions
Question 1.1. Last week, we showed that the harmonic series
$$\sum_{n \in \mathbb{N}} \frac{1}{n}$$
diverges.
Show that the sum
$$\sum_{\substack{n \in \mathbb{N}:\\ n \text{ has no } 9 \text{ in its digits}}} \frac{1}{n}$$
converges.
Question 1.2. For any k, define the following sequence of numbers (often called “hailstone” numbers):
$$a_0 = k, \qquad a_{n+1} = \begin{cases} 3a_n + 1, & a_n \text{ odd}, \\ a_n / 2, & a_n \text{ even}. \end{cases}$$
Show that for any k, the number 1 eventually shows up in the sequence $\{a_n\}_{n=1}^{\infty}$.
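No proof of this is known in general (it is the famous open Collatz conjecture), but the recurrence is easy to experiment with. A minimal sketch in Python (the function name `hailstone` is ours):

```python
def hailstone(k):
    """Return the hailstone sequence starting at a_0 = k, stopping once 1 appears."""
    seq = [k]
    while seq[-1] != 1:
        a = seq[-1]
        # apply the recurrence: 3a + 1 if a is odd, a / 2 if a is even
        seq.append(3 * a + 1 if a % 2 == 1 else a // 2)
    return seq

# For example, k = 6 walks 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1.
```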
This week’s talks focus on two distinct topics that we’ll deal with repeatedly in Math 1:
the study of power series, a “combination” of series and polynomials that have a number
of useful properties, and the concepts of limits and continuity for real-valued functions.
We start with power series:
¹A function f : ℝ → ℝ is linear if f(x + y) = f(x) + f(y), for every x, y ∈ ℝ.
2 Power Series
2.1 Power Series: Definitions and Tools
The motivation for power series, roughly speaking, is the observation that polynomials
are really quite nice. Specifically, if I give you a polynomial, you can
• add and multiply polynomials together and easily express the result as another polynomial,
and do most anything else that you’d ever want to do to a function! One of the only downsides to polynomials, in fact, is that there are functions that aren’t polynomials! In particular, the very useful functions
$$\sin(x), \quad \cos(x), \quad \ln(x), \quad e^x, \quad \frac{1}{x}$$
are all not polynomials, and yet are remarkably useful/frequently occurring objects.
So: it would be nice if we could have some way of “generalizing” the idea of polynomials,
so that we could describe functions like the above in some sort of polynomial-ish way –
possibly, say, as polynomials of “infinite degree?” How can we do that?
The answer, as you may have guessed, is via power series:

Definition 2.1. A power series P(x) centered at x₀ is a formal expression of the form
$$P(x) = \sum_{n=0}^{\infty} a_n (x - x_0)^n,$$
where the aₙ’s are all real numbers.

Power series are almost always taken around x₀ = 0: if x₀ is not mentioned, feel free to assume that it is 0.
The definition above says that a power series is just a fancy way of writing down a
sequence. This looks like it contradicts our original idea for power series, which was that
we would generalize polynomials: in other words, if I give you a power series, you quite
certainly want to be able to plug numbers into it!
The only issue with this is that sometimes, well . . . you can’t:

Claim. There are values of x which, when plugged into the power series $P(x) = \sum_{n=0}^{\infty} x^n$, yield a series that fails to converge.
Proof. There are many such values of x. One example is x = 1, as this yields the series
$$P(1) = \sum_{n=0}^{\infty} 1,$$
which clearly fails to converge; another example is x = −1, which yields the series
$$P(-1) = \sum_{n=0}^{\infty} (-1)^n.$$
The partial sums of this series form the sequence {1, 0, 1, 0, 1, 0, . . .}, which clearly fails to converge².
So: if we want to work with power series as polynomials, and not just as fancy sequences,
we need to find a way to talk about where they “make sense:” in other words, we need to
come up with an idea of convergence for power series! We do this here:
Theorem 2.2. Suppose that
$$P(x) = \sum_{n=0}^{\infty} a_n (x - x_0)^n$$
is a power series that converges at some value b + x₀ ∈ ℝ, with b > 0. Then P(x) actually converges on every value in the interval (−b + x₀, b + x₀).
In particular, this tells us the following:
Corollary 2. Suppose that
$$P(x) = \sum_{n=0}^{\infty} a_n x^n$$
is a power series centered at 0, and A is the set of all real numbers on which P(x) converges. Then there are only three cases for A: either
²Though it wants to converge to 1/2. Go to Wikipedia and read up on Grandi’s series for more information!
1. A = {0},
2. A = one of the four intervals (−b, b), [−b, b), (−b, b], [−b, b], for some b ∈ R, or
3. A = R.
We say that a power series P (x) has radius of convergence 0 in the first case, b in the
second case, and ∞ in the third case.
A question we could ask, given the above corollary, is the following: can we actually get
all of those cases to occur? I.e. can we find power series that converge only at 0? On all of
R? On only an open interval?
To answer these questions, consider the following examples:

Example. The power series
$$P(x) = \sum_{n=0}^{\infty} n! \cdot x^n$$
converges only at x = 0.

Proof. That this series converges for x = 0 is trivial, as it’s just the all-0 series.
To prove that it diverges whenever x ≠ 0: pick any x > 0. Then the ratio test says that this series diverges if the limit
$$\lim_{n\to\infty} \frac{(n+1)! \cdot x^{n+1}}{n! \cdot x^n} = \lim_{n\to\infty} x(n+1) = +\infty$$
is > 1, which it is. So this series diverges for all x > 0. By applying our theorem about radii of convergence of power series, we know that our series can only converge at 0: this is because if it were to converge at any negative value −x, it would have to converge on all of (−x, x), which is a set containing positive real numbers.
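The divergence is easy to watch numerically. A sketch (helper names are ours) that tracks the terms n!·xⁿ in base-10 logarithms, to dodge overflow: for any fixed x > 0, the ratio of consecutive terms is (n + 1)x, which eventually exceeds 1, so the terms themselves blow up.

```python
import math

def log10_term(n, x):
    """log10 of the n-th term n! * x^n, computed via lgamma to avoid overflow."""
    return math.lgamma(n + 1) / math.log(10) + n * math.log10(x)

# With x = 0.01 the terms shrink at first (the ratio (n+1)*x stays below 1
# until n = 99), but past n = 1/x they grow without bound -- so the series
# has no chance of converging: its terms don't even go to 0.
x = 0.01
early, late = log10_term(50, x), log10_term(1000, x)
```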
Example. The power series
$$P(x) = \sum_{n=0}^{\infty} x^n$$
converges on (−1, 1), and diverges everywhere else.

Proof. Take any x > 0, as before, and apply the ratio test:
$$\lim_{n\to\infty} \frac{x^{n+1}}{x^n} = x.$$
So the series diverges for x > 1 and converges for 0 ≤ x < 1: therefore, it has radius of
convergence 1, using our theorem, and converges on all of (−1, 1). As for the two endpoints
x = ±1: in our earlier discussion of power series, we proved that P (x) diverged at both 1
and −1. So this power series converges on (−1, 1) and diverges everywhere else.
Example. The power series
$$P(x) = \sum_{n=1}^{\infty} \frac{x^n}{n}$$
converges on [−1, 1), and diverges everywhere else.

Proof. Take any x > 0, as before, and apply the ratio test:
$$\lim_{n\to\infty} \frac{x^{n+1}/(n+1)}{x^n/n} = \lim_{n\to\infty} x \cdot \frac{n}{n+1} = \lim_{n\to\infty} x \cdot \left(1 - \frac{1}{n+1}\right) = x.$$
So, again, we know that the series diverges for x > 1 and converges for 0 ≤ x < 1: therefore,
it has radius of convergence 1, using our theorem, and converges on all of (−1, 1). As for
the two endpoints x = ±1, we know that plugging in 1 yields the harmonic series (which
diverges) and plugging in −1 yields the alternating harmonic series (which converges.) So
this power series converges on [−1, 1) and diverges everywhere else.
Example. The power series
$$P(x) = \sum_{n=1}^{\infty} \frac{x^n}{n^2}$$
converges on [−1, 1], and diverges everywhere else.

Proof. Take any x > 0, as before, and apply the ratio test:
$$\lim_{n\to\infty} \frac{x^{n+1}/(n+1)^2}{x^n/n^2} = \lim_{n\to\infty} x \cdot \frac{n^2}{(n+1)^2} = x.$$

So, again, we know that the series diverges for x > 1 and converges for 0 ≤ x < 1: therefore, it has radius of convergence 1, using our theorem, and converges on all of (−1, 1). As for the two endpoints x = ±1, we know that plugging in 1 yields the series $\sum \frac{1}{n^2}$, which we’ve shown converges. Plugging in −1 yields the series $\sum \frac{(-1)^n}{n^2}$: because the series of termwise-absolute-values converges, we know that this series converges absolutely, and therefore converges.

So this power series converges on [−1, 1] and diverges everywhere else.
Example. The power series
$$P(x) = \sum_{n=0}^{\infty} \frac{x^n}{n!}$$
converges on all of ℝ.

Proof. Take any x > 0, as before, and apply the ratio test:
$$\lim_{n\to\infty} \frac{x^{n+1}/(n+1)!}{x^n/n!} = \lim_{n\to\infty} \frac{x}{n+1} = 0,$$
which is < 1. So this series converges for any x > 0: applying our theorem about radii of convergence tells us that this series must converge on all of ℝ!
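One can watch this convergence happen numerically: for any fixed x, the partial sums settle down very quickly. A sketch (the helper name is ours), compared against Python’s built-in `math.exp`:

```python
import math

def exp_partial_sum(x, N):
    """The partial sum sum_{n=0}^{N} of x^n / n! for this power series."""
    return sum(x ** n / math.factorial(n) for n in range(N + 1))

# Even for x as large as 10, a few dozen terms already agree with e^10
# to many digits -- illustrating convergence at every real x.
approx = exp_partial_sum(10.0, 60)
```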
This last series is particularly interesting, as you’ll see later in Math 1. One particularly
nice property it has is that P (1) = e:
Definition 2.3.
$$\sum_{n=0}^{\infty} \frac{1}{n!} = e.$$
Using this, we can prove something we’ve believed for quite a while but never yet
demonstrated:
Theorem 2.4. e is irrational.
Proof. We begin with a (somewhat dumb-looking) lemma:
Lemma 3. e < 3.

Proof. To see that e < 3, look at e − 2, factor out a $\frac{1}{2}$, and notice a few basic inequalities:
$$\begin{aligned}
e - 1 - 1 &= 1 + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \dots - 1 - 1 \\
&= \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \dots \\
&= \frac{1}{2} \cdot \left(1 + \frac{1}{3} + \frac{1}{3 \cdot 4} + \frac{1}{3 \cdot 4 \cdot 5} + \dots\right) \\
&< \frac{1}{2} \cdot \left(1 + \frac{1}{2} + \frac{1}{2 \cdot 3} + \frac{1}{2 \cdot 3 \cdot 4} + \dots\right) \\
&= \frac{1}{2} \cdot \left(\frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \dots\right) \\
&= \frac{1}{2} \cdot (e - 1) \\
\Rightarrow \quad 2e - 4 &< e - 1 \\
\Rightarrow \quad e &< 3.
\end{aligned}$$
Given this, our proof is remarkably easy! Assume that e = a/b, for some pair of integers a, b ∈ ℤ, b ≥ 1. Then we have that
$$\begin{aligned}
\sum_{n=0}^{\infty} \frac{1}{n!} &= \frac{a}{b} \\
\Rightarrow \quad \sum_{n=0}^{\infty} \frac{b!}{n!} &= a \cdot (b-1)! \\
\Rightarrow \quad \sum_{n=0}^{b} \frac{b!}{n!} + \sum_{n=b+1}^{\infty} \frac{b!}{n!} &= a \cdot (b-1)! \\
\Rightarrow \quad \sum_{n=b+1}^{\infty} \frac{b!}{n!} &= a \cdot (b-1)! - \sum_{n=0}^{b} \frac{b!}{n!}.
\end{aligned}$$
For n ≤ b, notice that b!/n! is always an integer: therefore, the right-hand side of the last equation above is always an integer, as it’s just the difference of a bunch of integers. This means, in particular, that the left-hand side $\sum_{n=b+1}^{\infty} \frac{b!}{n!}$ is also an integer. What integer is it?
Well: we know that
$$0 < \sum_{n=b+1}^{\infty} \frac{b!}{n!} = \frac{1}{b+1} + \frac{1}{(b+1)(b+2)} + \frac{1}{(b+1)(b+2)(b+3)} + \dots < \sum_{k=1}^{\infty} \frac{1}{(b+1)^k} = \frac{1}{b} \le 1.$$
So, it’s an integer strictly between 0 and . . . 1. As there are no integers strictly between 0 and 1, this is a contradiction! In other words, we’ve just proven that e must be irrational.
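The key quantity in this proof, $b! \sum_{n > b} 1/n!$, is easy to inspect with exact rational arithmetic. A sketch (the function name `tail` is ours; the infinite sum is truncated, which only makes it smaller, so the bounds still apply):

```python
from fractions import Fraction
import math

def tail(b, N=80):
    """A truncation of sum_{n=b+1}^{infinity} of b!/n!, as an exact rational."""
    return sum(Fraction(math.factorial(b), math.factorial(n))
               for n in range(b + 1, N + 1))

# For each b >= 1, the tail lands strictly between 0 and 1/b <= 1: this is
# precisely the "integer strictly between 0 and 1" the proof derives its
# contradiction from.
checks = [(b, tail(b)) for b in range(1, 10)]
```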
3 Continuity
Changing gears here, we turn to the concepts of continuity and limits of real-valued
functions:
3.1 Continuity: Motivation and Tools
Definition 3.1. If f : X → Y is a function between two subsets X, Y of R, we say that
lim f (x) = L
x→a
if and only if
1. (vague:) as x approaches a, f (x) approaches L.
2. (precise; wordy:) for any distance ε > 0, there is some bound δ > 0 such that whenever x ∈ X is within δ of a, f(x) is within ε of L.
3. (precise; symbols:)
∀ε > 0, ∃δ > 0 such that ∀x ∈ X, (|x − a| < δ) ⇒ (|f(x) − L| < ε).
These definitions, without pictures, are kind-of hard to understand. In high school, continuous functions are often simply described as “functions you can draw without lifting your pencil;” how do these deltas and epsilons relate to this intuitive concept? Consider the following picture:
[Figure: the graph of f(x), with a horizontal band of width ε drawn around L and a vertical band of width δ drawn around a.]
This graph should help to illustrate what’s going on in our “rigorous” definition of limits and continuity. Essentially, when we claim that “as x approaches a, f(x) approaches f(a)”, we are saying
• for any (red) distance ε > 0 around f(a) that we’d like to keep our function within,
• we can find some distance δ > 0 around a such that whenever x is within δ of a, f(x) stays within ε of f(a).
Basically, what this definition says is that if you pick values of x sufficiently close to a, the resulting f(x)’s will be as close as you want to f(a) – i.e. that “as x approaches a, f(x) approaches f(a).”
This, hopefully, illustrates what our definition is trying to capture – a concrete notion of something like convergence for functions, instead of sequences. So: how can we prove that a function f has some given limit L? Motivated by this analogy to sequences, we have the following blueprint for a proof-from-the-definition that limx→a f(x) = L:

1. Start by examining the quantity
|f(x) − L|.
Specifically, try to find a simple upper bound for this quantity that depends only on |x − a|, and goes to 0 as x goes to a – something like |x − a| · (constants), or |x − a|³ · (bounded functions, like sin(x)).

2. Using this simple upper bound, for any ε > 0, choose a value of δ such that whenever |x − a| < δ, your simple upper bound |x − a| · (constants) is < ε. Often, you’ll define δ to be ε/(constants), or some such thing.

3. Plug in the definition of the limit: for any ε > 0, we’ve found a δ such that whenever |x − a| < δ, we have |f(x) − L| < ε. This proves that limx→a f(x) = L, as desired.
Limits and continuity are wonderfully useful concepts, but working with them straight
from the definitions can be somewhat ponderous. As a result, just like we did for sequences,
we have developed a number of useful tools and theorems to allow us to prove that certain
limits exist without going through the definition every time. We present four such tools
here:
1. Squeeze theorem: Suppose that f, g, h are functions defined on some interval I \ {a}⁴ such that
f(x) ≤ g(x) ≤ h(x), for every x ∈ I \ {a}, and
limx→a f(x) = limx→a h(x).
Then limx→a g(x) exists, and is equal to the other two limits limx→a f(x), limx→a h(x).
⁴The set X \ Y is simply the set formed by taking all of the elements in X that are not elements in Y. The symbol \, in this context, is called “set-minus”, and denotes the idea of “taking away” one set from another.
2. Limits and arithmetic: Suppose that f, g are a pair of functions such that the limits limx→a f(x), limx→a g(x) both exist. Then we have the following equalities:
$$\lim_{x\to a} \left(f(x) + g(x)\right) = \left(\lim_{x\to a} f(x)\right) + \left(\lim_{x\to a} g(x)\right),$$
$$\lim_{x\to a} \left(f(x) \cdot g(x)\right) = \left(\lim_{x\to a} f(x)\right) \cdot \left(\lim_{x\to a} g(x)\right),$$
$$\lim_{x\to a} \frac{f(x)}{g(x)} = \left(\lim_{x\to a} f(x)\right) \Big/ \left(\lim_{x\to a} g(x)\right), \quad \text{if } \lim_{x\to a} g(x) \ne 0.$$
As a special case, the product and sum of any two continuous functions is continuous, as is dividing a continuous function by another continuous function that’s never zero.
3. Limits and composition: Suppose that limx→b g(x) = a, and that f is continuous at a with f(a) = L. Then
$$\lim_{x\to b} f(g(x)) = L.$$

4. Limits and sequences: Suppose that limx→a f(x) = L. Then, for any sequence {an} that converges to a, we must have limn→∞ f(an) = L. In particular, to show that a function f is not continuous at a, it suffices to find a sequence {an} such that
• limn→∞ an = a, and
• limn→∞ f(an) ≠ f(a).
We illustrate how to use the definition of continuity, as well as how to use each of these
four tools, in the next section:
Claim 5. limx→1 x² = 1; i.e. the function x² is continuous at 1.

Proof. (Using the definition:) Following our blueprint:

1. We start by examining the quantity
|f(x) − L| = |x² − 1| = |x − 1| · |x + 1|.
By algebraic simplification, we’ve broken our expression into two parts: one of which is |x − 1|, and the other of which is. . . something. We’d like to get rid of this extra part |x + 1|; so, how do we do this? We cannot just say that this quantity is bounded; indeed, for very large values of x, this explodes off to infinity.
However, for values of x rather close to 1, this is bounded! In fact, if we have values of x such that x is distance ≤ 1 from the real number 1, we have that |x + 1| ≤ 3. So, when we pick our δ, if we just make sure that δ ≤ 1, we know that we have the following simple and excellent upper bound:
|x² − 1| = |x − 1| · |x + 1| ≤ |x − 1| · 3.

2. We have a simple upper bound! Our next step then proceeds as follows: for any ε > 0, we want to pick a δ > 0 such that if |x − 1| < δ,
|x − 1| · 3 < ε.
But this is easy: if we want this to happen, we just need to pick δ so that δ ≤ 1 (so we get our simple upper bound,) and also so that δ ≤ ε/3. Explicitly, we can pick δ = min(1, ε/3).

3. Thus, for any ε > 0, we’ve found a δ > 0 such that whenever |x − 1| < δ, we have |x² − 1| ≤ |x − 1| · 3 < ε. So, by definition, limx→1 x² = 1.
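The choice δ = min(1, ε/3) can be sanity-checked by brute force. A sketch (the function names are ours): sample points with |x − 1| < δ and confirm |x² − 1| < ε.

```python
def delta_for(eps):
    """The delta chosen in the proof: min(1, eps / 3)."""
    return min(1.0, eps / 3.0)

def check(eps, samples=10001):
    """Sample x with |x - 1| < delta and confirm |x^2 - 1| < eps at each one."""
    d = delta_for(eps)
    xs = (1 - d + 2 * d * i / samples for i in range(1, samples))
    return all(abs(x * x - 1) < eps for x in xs)
```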
Proof. (Using arithmetic and continuity:) First, notice the following lemma:

Lemma 6. The function f(x) = x is continuous everywhere.

Proof. We want to show that for any a ∈ ℝ,
∀ε > 0, ∃δ > 0 such that ∀x ∈ X, (|x − a| < δ) ⇒ (|f(x) − f(a)| < ε).
But we know that f(x) = x and f(a) = a: so we’re really trying to prove
∀ε > 0, ∃δ > 0 such that ∀x ∈ X, (|x − a| < δ) ⇒ (|x − a| < ε).
So. Um. Just pick δ = ε. Then, whenever |x − a| < δ, we definitely have |x − a| < ε, because delta and epsilon are the same.
Similarly, and with even less effort, you can show that any constant function gc(x) = c is also continuous: for any ε > 0, let δ be anything, and your ε-δ statement will hold!
Believe it or not, the rest of the proof is even more trivial. We have that the functions
f (x) = x and gc (x) = c are both continuous. By multiplying and adding these functions
together, we can create any polynomial; thus, by using our theorems on arithmetic and
limits, we have shown that any polynomial must be continuous.
In particular, we have that x² is continuous at 1, which provides a much shorter proof of our earlier result. This hopefully illustrates something very relevant about continuity: if you can use a theorem instead of working from the definition, do so! It will make your life much easier.
Claim 7.
$$\lim_{x\to 0} x^2 \sin(1/x) = 0.$$
Proof. (Using the squeeze theorem:) So: for all x ∈ ℝ, x ≠ 0, we have that
$$-1 \le \sin(1/x) \le 1 \quad \Rightarrow \quad -x^2 \le x^2 \sin(1/x) \le x^2.$$
By the squeeze theorem, because the limit as x → 0 of both −x² and x² is 0, we have that
$$\lim_{x\to 0} x^2 \sin(1/x) = 0$$
as well.
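The squeeze itself is easy to verify numerically at any sample of nonzero points. A sketch (the function name is ours); since |sin| ≤ 1, the bound holds at every nonzero x:

```python
import math

def squeezed(x):
    """Verify -x^2 <= x^2 * sin(1/x) <= x^2 at a nonzero x."""
    v = x * x * math.sin(1.0 / x)
    return -x * x <= v <= x * x

# Both fences -x^2 and x^2 tend to 0 as x -> 0, which is what forces
# x^2 * sin(1/x) -> 0 despite the wild oscillation of sin(1/x).
points = [10.0 ** (-k) for k in range(1, 12)] + [0.3, -0.3, 2.0, -2.0]
```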
Claim 8. x2 sin(1/x2 ) is continuous on R \ {0}.
Proof. (Using composition of limits:) By our work earlier in this lecture, x2 is continuous,
and therefore 1/x2 is continuous on all of R \ {0}, by using arithmetic and limits. From
Apostol, we know that sin(x) is continuous: therefore, we have that the composition of
these functions sin(1/x2 ) is continuous on R \ {0}. Multiplying by the continuous function
x2 tells us that x2 sin(1/x2 ) is continuous on R \ {0}, as claimed.
Claim 9. Let f(x) be defined as follows:
$$f(x) = \begin{cases} \sin(1/x), & x \ne 0 \\ a, & x = 0. \end{cases}$$
Then, no matter what a is, f(x) is discontinuous at 0.
Proof. (Using sequences to show discontinuity:) Before we start, consider the graph of sin(1/x):

[Figure: the graph of sin(1/x), which oscillates between −1 and 1 increasingly rapidly as x approaches 0.]

Visual inspection of this graph makes it clear that sin(1/x) cannot have a limit as x approaches 0; but let’s rigorously prove this using our lemma, so we have an idea of how to do this in general.
So: we know that $\sin\left(\frac{4k+1}{2}\pi\right) = 1$, for any k. Consequently, because the sequence $\left\{\frac{2}{(4k+1)\pi}\right\}_{k=1}^{\infty}$ satisfies the properties
• $\lim_{k\to\infty} \frac{2}{(4k+1)\pi} = 0$, and
• $\lim_{k\to\infty} \sin\left(\frac{1}{2/((4k+1)\pi)}\right) = \lim_{k\to\infty} \sin\left(\frac{4k+1}{2}\pi\right) = \lim_{k\to\infty} 1 = 1$,
our tool says that if sin(1/x) has a limit at 0, it must be 1. On the other hand, $\sin\left(\frac{4k+3}{2}\pi\right) = -1$ for any k, and the sequence $\left\{\frac{2}{(4k+3)\pi}\right\}_{k=1}^{\infty}$ also converges to 0: so our tool also says that if sin(1/x) has a limit at 0, it must be −1. Thus, because −1 ≠ 1, we have that the limit limx→0 sin(1/x) cannot exist, as claimed.
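The two competing sequences in this argument can be sketched numerically (the helper names are ours): both tend to 0, yet sin(1/x) is pinned at +1 along one and at −1 along the other.

```python
import math

def x_plus(k):
    """The points 2/((4k+1)*pi), where sin(1/x) = 1."""
    return 2.0 / ((4 * k + 1) * math.pi)

def x_minus(k):
    """The points 2/((4k+3)*pi), where sin(1/x) = -1."""
    return 2.0 / ((4 * k + 3) * math.pi)

# Since the two sequences force two different candidate limits (+1 and -1),
# sin(1/x) can have no limit as x -> 0.
vals_plus = [math.sin(1.0 / x_plus(k)) for k in range(1, 6)]
vals_minus = [math.sin(1.0 / x_minus(k)) for k in range(1, 6)]
```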