Math 8 Instructor: Padraic Bartlett

Lecture 4: Power Series; Continuity


Week 4 Caltech - Fall, 2011

1 Random Questions
Question 1.1. Last week, we showed that the harmonic series

    Σ_{n∈N} 1/n

diverges. Show that the sum

    Σ_{n∈N: n has no 9 in its digits} 1/n

converges, and specifically converges to something < 80.
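As an aside, you can get a numerical feel for this question with a quick Python sketch (not part of any proof; the cutoff N = 100,000 is an arbitrary choice). Dropping every n whose digits contain a 9 thins the series out dramatically; the restricted partial sums lag far behind the harmonic ones and stay well under 80. (The full restricted sum is known as the Kempner series, roughly 22.92.)

```python
# Compare partial sums of the harmonic series with the "no 9s" version.
# The restricted sum converges (to roughly 22.92, the Kempner series),
# but so slowly that this only illustrates the gap, not the limit.
def partial_sums(N):
    harmonic, no_nine = 0.0, 0.0
    for n in range(1, N + 1):
        harmonic += 1.0 / n
        if '9' not in str(n):
            no_nine += 1.0 / n
    return harmonic, no_nine

h, s = partial_sums(100_000)
print(h, s)  # the restricted partial sum is already visibly lagging behind
```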

Question 1.2. For any k, define the following sequence of numbers (often called "hailstone" numbers):

    a_0 = k,
    a_{n+1} = 3a_n + 1, if a_n is odd;
              a_n / 2,  if a_n is even.

Show that for any k, the number 1 eventually shows up in the sequence {a_n}_{n=1}^∞.

Question 1.3. Can you find a function f : R → R such that f is

• continuous nowhere?

• continuous at every point of Q, but not at any point of R \ Q?

• continuous at every point of R \ Q, but not at any point of Q?

• not continuous at 0, but somehow is linear¹?

This week’s talks focus on two distinct topics that we’ll deal with repeatedly in Math 1:
the study of power series, a “combination” of series and polynomials that have a number
of useful properties, and the concepts of limits and continuity for real-valued functions.
We start with power series:
¹ A function f : R → R is linear if f(x + y) = f(x) + f(y), for every x, y ∈ R.
2 Power Series
2.1 Power Series: Definitions and Tools
The motivation for power series, roughly speaking, is the observation that polynomials
are really quite nice. Specifically, if I give you a polynomial, you can

• differentiate and take integrals easily,

• add and multiply polynomials together and easily express the result as another poly-
nomial,

• find its roots,

and do most anything else that you’d ever want to do to a function! One of the only
downsides to polynomials, in fact, is that there are functions that aren’t polynomials! In
specific, the very useful functions

    sin(x), cos(x), ln(x), e^x, 1/x

are all not polynomials, and yet are remarkably useful/frequently occurring objects.
So: it would be nice if we could have some way of “generalizing” the idea of polynomials,
so that we could describe functions like the above in some sort of polynomial-ish way –
possibly, say, as polynomials of “infinite degree?” How can we do that?
The answer, as you may have guessed, is via power series:

Definition 2.1. A power series P(x) centered at x_0 is just a sequence {a_n}_{n=0}^∞ written in the following form:

    P(x) = Σ_{n=0}^∞ a_n·(x − x_0)^n.

Power series are almost always taken around x_0 = 0: if x_0 is not mentioned, feel free to assume that it is 0.

The definition above says that a power series is just a fancy way of writing down a
sequence. This looks like it contradicts our original idea for power series, which was that
we would generalize polynomials: in other words, if I give you a power series, you quite
certainly want to be able to plug numbers into it!
The only issue with this is that sometimes, well . . . you can’t:

Example. Consider the power series

    P(x) = Σ_{n=0}^∞ x^n.

There are values of x which, when plugged into our power series P(x), yield a series that fails to converge.

Proof. There are many such values of x. One example is x = 1, as this yields the series

    P(1) = Σ_{n=0}^∞ 1,

which clearly fails to converge; another example is x = −1, which yields the series

    P(−1) = Σ_{n=0}^∞ (−1)^n.

The partial sums of this series form the sequence {1, 0, 1, 0, 1, 0, . . .}, which clearly fails to converge².

So: if we want to work with power series as polynomials, and not just as fancy sequences,
we need to find a way to talk about where they “make sense:” in other words, we need to
come up with an idea of convergence for power series! We do this here:
Definition 2.2. A power series

    P(x) = Σ_{n=0}^∞ a_n·(x − x_0)^n

is said to converge at some value b ∈ R if and only if the series

    Σ_{n=0}^∞ a_n·(b − x_0)^n

converges. If it does, we denote this value as P(b).


The following theorem, proven in lecture, is remarkably useful in telling us where power
series converge:
Theorem 1. Suppose that

    P(x) = Σ_{n=0}^∞ a_n·(x − x_0)^n

is a power series that converges at some value b + x_0 ∈ R. Then P(x) actually converges at every value in the interval (−b + x_0, b + x_0).
In particular, this tells us the following:
Corollary 2. Suppose that

    P(x) = Σ_{n=0}^∞ a_n·x^n

is a power series centered at 0, and A is the set of all real numbers on which P(x) converges. Then there are only three cases for A: either

1. A = {0},

2. A = one of the four intervals (−b, b), [−b, b), (−b, b], [−b, b], for some b ∈ R, or

3. A = R.

We say that a power series P(x) has radius of convergence 0 in the first case, b in the second case, and ∞ in the third case.

² Though it wants to converge to 1/2. Go to Wikipedia and read up on Grandi's series for more information!
A question we could ask, given the above corollary, is the following: can we actually get
all of those cases to occur? I.e. can we find power series that converge only at 0? On all of
R? On only an open interval?
To answer these questions, consider the following examples:

2.2 Power Series: Examples


Example. The power series

    P(x) = Σ_{n=1}^∞ n!·x^n

converges when x = 0, and diverges everywhere else.

Proof. That this series converges for x = 0 is trivial, as it's just the all-0 series.
To prove that it diverges whenever x ≠ 0: pick any x > 0. Then the ratio test says that this series diverges if the limit

    lim_{n→∞} (n + 1)!·x^{n+1} / (n!·x^n) = lim_{n→∞} x·(n + 1) = +∞

is > 1, which it is. So this series diverges for all x > 0. By applying our theorem about radii of convergence of power series, we know that our series can only converge at 0: this is because if it were to converge at any negative value −x, it would have to converge on all of (−x, x), which is a set containing positive real numbers.
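You can watch the ratio test's verdict numerically; here is a hedged Python sketch (the value x = 0.01 is an arbitrary choice). Each term of Σ n!·x^n is the previous one times x·(n + 1), so the terms shrink only until n ≈ 1/x and then grow without bound:

```python
# Terms of sum n!·x^n via the ratio a_{n+1}/a_n = x·(n+1): they decrease
# while x·(n+1) < 1, bottom out near n = 1/x, then explode, so the
# series has no chance of converging for this (or any) x > 0.
x = 0.01
term = x            # the n = 1 term, 1!·x^1
terms = [term]
for n in range(1, 300):
    term *= x * (n + 1)     # multiply by the consecutive-term ratio
    terms.append(term)

smallest = min(terms)
print(terms.index(smallest), smallest, terms[-1])  # dip near n = 100, then blow-up
```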

Example. The power series

    P(x) = Σ_{n=1}^∞ x^n

converges when x ∈ (−1, 1), and diverges everywhere else.

Proof. Take any x > 0, as before, and apply the ratio test:

    lim_{n→∞} x^{n+1}/x^n = x.

So the series diverges for x > 1 and converges for 0 ≤ x < 1: therefore, it has radius of convergence 1, using our theorem, and converges on all of (−1, 1). As for the two endpoints x = ±1: in our earlier discussion of power series, we proved that P(x) diverged at both 1 and −1. So this power series converges on (−1, 1) and diverges everywhere else.
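Numerically (a Python sketch; the closed form x/(1 − x), valid for |x| < 1 when the sum starts at n = 1, is the standard geometric-series formula): inside (−1, 1) the partial sums settle down quickly, while at x = 1 they just keep growing.

```python
# Partial sums of P(x) = sum_{n=1}^N x^n. For |x| < 1 they approach the
# geometric-series value x/(1 - x); at x = 1 they grow without bound.
def partial_sum(x, N):
    return sum(x**n for n in range(1, N + 1))

print(partial_sum(0.5, 60))    # settles near 0.5/(1 - 0.5) = 1.0
print(partial_sum(-0.5, 60))   # settles near -0.5/1.5 = -1/3
print(partial_sum(1.0, 60))    # just N ones added up: 60.0, diverging
```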

Example. The power series

    P(x) = Σ_{n=1}^∞ x^n/n

converges when x ∈ [−1, 1), and diverges everywhere else.

Proof. Take any x > 0, and apply the ratio test:

    lim_{n→∞} [x^{n+1}/(n + 1)] / [x^n/n] = lim_{n→∞} x·n/(n + 1) = lim_{n→∞} x·(1 − 1/(n + 1)) = x.

So, again, we know that the series diverges for x > 1 and converges for 0 ≤ x < 1: therefore, it has radius of convergence 1, using our theorem, and converges on all of (−1, 1). As for the two endpoints x = ±1, we know that plugging in 1 yields the harmonic series (which diverges) and plugging in −1 yields the alternating harmonic series (which converges). So this power series converges on [−1, 1) and diverges everywhere else.

Example. The power series

    P(x) = Σ_{n=1}^∞ x^n/n²

converges when x ∈ [−1, 1], and diverges everywhere else.

Proof. Take any x > 0, and apply the ratio test:

    lim_{n→∞} [x^{n+1}/(n + 1)²] / [x^n/n²] = lim_{n→∞} x·(n/(n + 1))² = lim_{n→∞} x·(1 − 1/(n + 1))² = x.

So, again, we know that the series diverges for x > 1 and converges for 0 ≤ x < 1: therefore, it has radius of convergence 1, using our theorem, and converges on all of (−1, 1). As for the two endpoints x = ±1, we know that plugging in 1 yields the series Σ_{n=1}^∞ 1/n², which we've shown converges. Plugging in −1 yields the series Σ_{n=1}^∞ (−1)^n/n²: because the series of termwise absolute values converges, we know that this series converges absolutely, and therefore converges.
So this power series converges on [−1, 1] and diverges everywhere else.

Example. The power series

    P(x) = Σ_{n=0}^∞ 0·x^n

converges on all of R.

Proof. P(x) = 0, for any x, which is an exceptionally convergent series.

Example. The power series

    P(x) = Σ_{n=0}^∞ x^n/n!

converges on all of R.

Proof. Take any x > 0, and apply the ratio test:

    lim_{n→∞} [x^{n+1}/(n + 1)!] / [x^n/n!] = lim_{n→∞} x/(n + 1) = 0.

So this series converges for any x > 0: applying our theorem about radii of convergence tells us that this series must converge on all of R!

This last series is particularly interesting, as you’ll see later in Math 1. One particularly
nice property it has is that P (1) = e:
Definition 2.3.

    Σ_{n=0}^∞ 1/n! = e.

Using this, we can prove something we’ve believed for quite a while but never yet
demonstrated:
Theorem 2.4. e is irrational.
Proof. We begin with a (somewhat dumb-looking) lemma:
Lemma 3. e < 3.

Proof. To see that e < 3, look at e − 2, factor out a 1/2, and notice a few basic inequalities:

    e − 1 − 1 = (1 + 1/1! + 1/2! + 1/3! + 1/4! + . . .) − 1 − 1
              = 1/2! + 1/3! + 1/4! + . . .
              = (1/2)·(1 + 1/3 + 1/(3·4) + 1/(3·4·5) + . . .)
              < (1/2)·(1 + 1/2 + 1/(2·3) + 1/(2·3·4) + . . .)
              = (1/2)·(1/1! + 1/2! + 1/3! + 1/4! + . . .)
              = (1/2)·(e − 1)
    ⇒ 2e − 4 < e − 1
    ⇒ e < 3.

Given this, our proof is remarkably easy! Assume that e = a/b, for some pair of integers a, b ∈ Z, b ≥ 1. Then we have that

    Σ_{n=0}^∞ 1/n! = a/b
    ⇒ Σ_{n=0}^∞ b!/n! = a·(b − 1)!
    ⇒ Σ_{n=0}^b b!/n! + Σ_{n=b+1}^∞ b!/n! = a·(b − 1)!
    ⇒ Σ_{n=b+1}^∞ b!/n! = a·(b − 1)! − Σ_{n=0}^b b!/n!.

For n ≤ b, notice that b!/n! is always an integer: therefore, the right-hand side of the last equation above is always an integer, as it's just the difference of a bunch of integers. This means, in particular, that the left-hand side Σ_{n=b+1}^∞ b!/n! is also an integer. What integer is it?
Well: we know that

    0 < Σ_{n=b+1}^∞ b!/n! = 1/(b + 1) + 1/((b + 1)(b + 2)) + 1/((b + 1)(b + 2)(b + 3)) + . . . ,

so it's a positive integer.


However, we also know that because b ≥ 1, we have

X b! 1 1 1
= + + ...
n! b + 1 (b + 1)(b + 2) (b + 1)(b + 2)(b + 3)
n=b+1
1 1 1
≤ + + + ...
2 2·3 2·3·4
1 1 1
= + + + ...
2! 3! 4!
=e−2 < 1.

So, it’s an integer strictly between 0 and . . . 1. As there are no integers strictly between 0
and 1, this is a contradiction! – in other words, we’ve just proven that e must be rational.

3 Continuity
Changing gears here, we turn to the concepts of continuity and limits of real-valued
functions:

3.1 Continuity: Motivation and Tools
Definition 3.1. If f : X → Y is a function between two subsets X, Y of R, we say that

    lim_{x→a} f(x) = L

if and only if

1. (vague:) as x approaches a, f(x) approaches L.

2. (precise; wordy:) for any distance ε > 0, there is some bound δ > 0 such that whenever x ∈ X is within δ of a, f(x) is within ε of L.

3. (precise; symbols:)

    ∀ε > 0, ∃δ > 0 such that ∀x ∈ X, (|x − a| < δ) ⇒ (|f(x) − L| < ε).

Definition 3.2. A function f : X → Y is said to be continuous at some point a ∈ X iff

    lim_{x→a} f(x) = f(a).

These definitions, without pictures, are kind-of hard to understand. In high school, continuous functions are often simply described as "functions you can draw without lifting your pencil³"; how do these deltas and epsilons relate to this intuitive concept? Consider the following picture:

[Figure: graph of f(x), with a horizontal (red) ε-band (L − ε, L + ε) around the limit value and a vertical (blue) δ-band (a − δ, a + δ) around a.]

This graph should help to illustrate what’s going on in our “rigorous” definition of limits
and continuity. Essentially, when we claim that “as x approaches a, f (x) approaches f (a)”,
we are saying
• for any (red) distance ε around f(a) within which we'd like to keep our function,

• there is a (blue) neighborhood (a − δ, a + δ) around a such that

• if we only plug values from this (blue) neighborhood (a − δ, a + δ) into f, it stays within the (red) ε-neighborhood of f(a).

³ Assuming, of course, an arbitrarily sharp pencil, infinite amounts of lead, and a sheet of paper the size of R² to draw on.

Basically, what this definition says is that if you pick values of x sufficiently close to a, the resulting f(x)'s will be as close as you want to f(a), i.e. that "as x approaches a, f(x) approaches f(a)."
This, hopefully, illustrates what our definition is trying to capture – a concrete notion
of something like convergence for functions, instead of sequences. So: how can we prove
that a function f has some given limit L? Motivated by this analogy to sequences, we have
the following blueprint for a proof-from-the-definition that limx→a f (x) = L:

1. First, examine the quantity

    |f(x) − L|.

Specifically, try to find a simple upper bound for this quantity that depends only on |x − a|, and goes to 0 as x goes to a: something like |x − a|·(constants), or |x − a|³·(bounded functions, like sin(x)).

2. Using this simple upper bound, for any ε > 0, choose a value of δ such that whenever |x − a| < δ, your simple upper bound |x − a|·(constants) is < ε. Often, you'll define δ to be ε/(constants), or somesuch thing.

3. Plug in the definition of the limit: for any ε > 0, we've found a δ such that whenever |x − a| < δ, we have

    |f(x) − L| < (simple upper bound depending on |x − a|) < ε.

Thus, we've proven that lim_{x→a} f(x) = L, as claimed.

Limits and continuity are wonderfully useful concepts, but working with them straight
from the definitions can be somewhat ponderous. As a result, just like we did for sequences,
we have developed a number of useful tools and theorems to allow us to prove that certain
limits exist without going through the definition every time. We present four such tools
here:

1. Squeeze theorem: Suppose that f, g, h are functions defined on some interval I \ {a}⁴ such that

    f(x) ≤ g(x) ≤ h(x), ∀x ∈ I \ {a}, and

    lim_{x→a} f(x) = lim_{x→a} h(x).

Then lim_{x→a} g(x) exists, and is equal to the other two limits lim_{x→a} f(x), lim_{x→a} h(x).

⁴ The set X \ Y is simply the set formed by taking all of the elements in X that are not elements in Y. The symbol \, in this context, is called "set-minus", and denotes the idea of "taking away" one set from another.

2. Limits and arithmetic: Suppose that f, g are a pair of functions such that the limits lim_{x→a} f(x), lim_{x→a} g(x) both exist. Then we have the following equalities:

    lim_{x→a} (f(x) + g(x)) = (lim_{x→a} f(x)) + (lim_{x→a} g(x)),

    lim_{x→a} (f(x)·g(x)) = (lim_{x→a} f(x))·(lim_{x→a} g(x)),

    lim_{x→a} (f(x)/g(x)) = (lim_{x→a} f(x)) / (lim_{x→a} g(x)), if lim_{x→a} g(x) ≠ 0.

As a special case, the product and sum of any two continuous functions is continuous, as is dividing a continuous function by another continuous function that's never zero.

3. Limits and composition: Suppose that f : Y → Z is a function such that lim_{y→a} f(y) = L, and g : X → Y is a function such that lim_{x→b} g(x) = a. Then

    lim_{x→b} f(g(x)) = L.

Specifically, if two functions are continuous, their composition is continuous.

4. Discontinuous functions and sequences: For any function f : X → Y, we know that f is discontinuous at a point a ∈ X if and only if there is some sequence {a_n}_{n=1}^∞ in X with the following properties:

• lim_{n→∞} a_n = a, and

• lim_{n→∞} f(a_n) ≠ f(a).

We illustrate how to use the definition of continuity, as well as how to use each of these
four tools, in the next section:

3.2 Continuity: Examples


Claim 4. The function f(x) = x² is continuous at x = 1.

Proof. (Using the definition of continuity): We want to prove that lim_{x→1} x² = 1² = 1. To do this, we'll try using our blueprint for ε-δ proofs:

1. First, let's examine the quantity |f(x) − f(1)| = |x² − 1|. As stated in the blueprint, our first goal is to bound this above by something simple, multiplied by |x − 1|. We proceed by blindly trying whichever algebraic tricks come to mind:

    |x² − 1| = |(x − 1)(x + 1)|
             = |x − 1|·|x + 1|.

By algebraic simplification, we've broken our expression into two parts: one of which is |x − 1|, and the other of which is. . . something. We'd like to get rid of this extra part |x + 1|; so, how do we do this? We cannot just say that this quantity is bounded; indeed, for very large values of x, this explodes off to infinity.
However, for values of x rather close to 1, this is bounded! In fact, if we have values of x such that x is distance ≤ 1 from the real number 1, we have that |x + 1| ≤ 3. So, when we pick our δ, if we just make sure that δ < 1, we know that we have the following simple and excellent upper bound:

    |f(x) − f(1)| ≤ 3|x − 1|.

2. We have a simple upper bound! Our next step then proceeds as follows: for any ε > 0, we want to pick a δ > 0 such that if |x − 1| < δ,

    |x − 1|·3 < ε.

But this is easy: if we want this to happen, we just need to pick δ so that δ < 1 (so we get our simple upper bound) and also so that δ < ε/3. Explicitly, we can pick δ < min(1, ε/3).

3. Thus, for any ε > 0, we've found a δ > 0 such that whenever |x − 1| < δ, we have

    |f(x) − f(1)| ≤ 3|x − 1| < ε.

Therefore f(x) = x² is continuous at 1, as claimed.
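As a sanity check, here is a small Python sketch (the brute-force sampling is purely illustrative): since |x − 1| ≤ 1 forces |x + 1| ≤ 3, taking δ = min(1, ε/3) should keep x² within ε of 1 for every sampled x within δ of 1.

```python
# Brute-force check of the epsilon-delta argument for lim_{x->1} x^2 = 1:
# with |x - 1| <= 1 we get |x + 1| <= 3, so delta = min(1, eps/3) ensures
# |x^2 - 1| = |x - 1|·|x + 1| < eps on (1 - delta, 1 + delta).
def delta_works(eps, samples=20_001):
    delta = min(1.0, eps / 3.0)
    for i in range(samples):
        # sample x strictly inside (1 - delta, 1 + delta)
        x = 1.0 - delta + 2.0 * delta * (i + 0.5) / samples
        if abs(x * x - 1.0) >= eps:
            return False
    return True

print(all(delta_works(eps) for eps in (2.0, 1.0, 0.25, 1e-3, 1e-6)))  # True
```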

Claim 5. Every polynomial is continuous everywhere.

Proof. (Using arithmetic and continuity:) First, notice the following lemma:
Lemma 6. The function f (x) = x is continuous everywhere.

Proof. To prove this, we simply need to show the following:

    ∀ε > 0, ∃δ > 0 such that ∀x ∈ X, (|x − a| < δ) ⇒ (|f(x) − f(a)| < ε).

But we know that f(x) = x and f(a) = a: so we're really trying to prove

    ∀ε > 0, ∃δ > 0 such that ∀x ∈ X, (|x − a| < δ) ⇒ (|x − a| < ε).

So. Um. Just pick δ = ε. Then, whenever |x − a| < δ, we definitely have |x − a| < ε, because delta and epsilon are the same.

Similarly, and with even less effort, you can show that any constant function g_c(x) = c is also continuous: for any ε > 0, let δ be anything, and your ε-δ statement will hold!
Believe it or not, the rest of the proof is even more trivial. We have that the functions
f (x) = x and gc (x) = c are both continuous. By multiplying and adding these functions
together, we can create any polynomial; thus, by using our theorems on arithmetic and
limits, we have shown that any polynomial must be continuous.

In very specific, we have that x² is continuous at 1, which provides a much shorter proof of our earlier result. This hopefully illustrates something very relevant about continuity: if you can use a theorem instead of working from the definition, do so! It will make your life much easier.

Claim 7.

    lim_{x→0} x²·sin(1/x) = 0.

Proof. (Using the squeeze theorem:) So: for all x ∈ R, x ≠ 0, we have that

    −1 ≤ sin(1/x) ≤ 1
    ⇒ −x² ≤ x²·sin(1/x) ≤ x².

By the squeeze theorem, because the limit as x → 0 of both −x² and x² is 0, we have that

    lim_{x→0} x²·sin(1/x) = 0

as well.
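The squeeze is easy to watch numerically; this Python sketch (the sample points 3.7·10^(−k) are arbitrary values shrinking toward 0) verifies the bounds −x² ≤ x²·sin(1/x) ≤ x² and shows the middle term collapsing with them:

```python
import math

# The fences -x^2 <= x^2*sin(1/x) <= x^2 both shrink like x^2 as x -> 0,
# dragging the oscillating middle term to 0 with them.
xs = [3.7 * 10.0**(-k) for k in range(1, 9)]   # arbitrary points tending to 0
for x in xs:
    val = x * x * math.sin(1.0 / x)
    assert -x * x <= val <= x * x              # the squeeze inequality
    print(f"x = {x:.1e}   x^2*sin(1/x) = {val:+.3e}")
```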
Claim 8. x²·sin(1/x²) is continuous on R \ {0}.

Proof. (Using composition of limits:) By our work earlier in this lecture, x² is continuous, and therefore 1/x² is continuous on all of R \ {0}, by using arithmetic and limits. From Apostol, we know that sin(x) is continuous: therefore, we have that the composition of these functions sin(1/x²) is continuous on R \ {0}. Multiplying by the continuous function x² tells us that x²·sin(1/x²) is continuous on R \ {0}, as claimed.
Claim 9. Let f(x) be defined as follows:

    f(x) = sin(1/x), if x ≠ 0;
           a,        if x = 0.

Then, no matter what a is, f(x) is discontinuous at 0.

Proof. (Using sequences to show discontinuity:) Before we start, consider the graph of sin(1/x):
[Figure: graph of sin(1/x), which oscillates between −1 and 1 ever more rapidly as x approaches 0.]
Visual inspection of this graph makes it clear that sin(1/x) cannot have a limit as x
approaches 0; but let’s rigorously prove this using our lemma, so we have an idea of how to
do this in general.
So: we know that sin((4k + 1)π/2) = 1, for any k. Consequently, because the sequence {2/((4k + 1)π)}_{k=1}^∞ satisfies the properties

• lim_{k→∞} 2/((4k + 1)π) = 0, and

• lim_{k→∞} sin(1 / (2/((4k + 1)π))) = lim_{k→∞} sin((4k + 1)π/2) = lim_{k→∞} 1 = 1,

our tool says that if sin(1/x) has a limit at 0, it must be 1.

However: we also know that sin((4k + 3)π/2) = −1, for any k. Consequently, because the sequence {2/((4k + 3)π)}_{k=1}^∞ satisfies the properties

• lim_{k→∞} 2/((4k + 3)π) = 0, and

• lim_{k→∞} sin(1 / (2/((4k + 3)π))) = lim_{k→∞} sin((4k + 3)π/2) = lim_{k→∞} −1 = −1,

our tool also says that if sin(1/x) has a limit at 0, it must be −1. Thus, because −1 ≠ 1, we have that the limit lim_{x→0} sin(1/x) cannot exist, as claimed.
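The two sequences in the proof are easy to evaluate numerically; a quick Python sketch of the first few terms of each:

```python
import math

# Along x_k = 2/((4k+1)·pi), sin(1/x_k) = sin((4k+1)·pi/2) = 1; along
# y_k = 2/((4k+3)·pi), sin(1/y_k) = sin((4k+3)·pi/2) = -1. Both sequences
# tend to 0, so sin(1/x) cannot approach a single value there.
for k in range(1, 6):
    xk = 2.0 / ((4 * k + 1) * math.pi)
    yk = 2.0 / ((4 * k + 3) * math.pi)
    print(f"k={k}: x_k={xk:.4f} sin(1/x_k)={math.sin(1/xk):+.6f}   "
          f"y_k={yk:.4f} sin(1/y_k)={math.sin(1/yk):+.6f}")
```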
