Week1 Handout

Math 164 Numerical Analysis, taught by Christopher Kim in Spring 2025, focuses on numerical approximation algorithms and their applications in continuous mathematics. The course includes bi-weekly homework, two midterms, and a final exam, with a significant emphasis on using Python and Jupyter Notebook for programming assignments. Key topics include number representation in computers, evaluating non-polynomial functions using Taylor's theorem, and understanding floating-point systems.


Math 164 Numerical Analysis

Spring 2025: Week 1


Course announcements
▶ Instructor: Christopher Kim, Assistant Professor, Mathematics
▶ Lecture: Tue, Thur 11:10 - 12:30 at DGH 112
▶ Office hours: Tue and Thu 10:00 – 11:00am (or by
appointment) at Annex III 218
▶ Course Textbook: A Compact Compendium in Numerical
Analysis by Roberto De Leo (online textbook)
https://deleo.website/NumericalAnalysisPython/numericalAnalysisBook.html

All course materials will be posted on Canvas.


▶ Course syllabus
▶ Lecture notes
▶ Python code covered in class
▶ Homework assignments
▶ Exams
Grading
▶ Bi-weekly HW should be submitted on Canvas (40% of grade)
▶ The two lowest homework scores will be dropped.
▶ No late homework is accepted (except in exceptional
circumstances)
▶ Two take-home midterms (15% each)
▶ Take-home final exam (30%).
Python code
▶ All programming assignments should be written in Python
▶ Use Jupyter Notebook to make your Python code interactive.
ChatGPT
▶ You may use AI tools (e.g., ChatGPT) for your homework
problems. You must disclose whether AI tools were used and
share the URL of the dialogue history.
▶ For the midterm and final exams, the use of AI tools is
not allowed.
What is Numerical Analysis?
Numerical Analysis is a study of algorithms that use numerical
approximation (as opposed to symbolic manipulation) for the
problems of continuous mathematics (as opposed to discrete
mathematics). (Source: Wikipedia)
Topics covered in this course:
1. Represent numbers in computers with finite memory (Ch1)
2. Numerically evaluate non-polynomial functions (Ch2),
derivatives (Ch2) and integrals (Ch7,8)
(e.g., sin(0.3), (d/dx) sin(x)|_{x=0.3}, ∫_0^1 sin(x) dx)
3. Numerical solutions to linear equations (Ch4,5), nonlinear
equations (Ch3), ordinary differential equations (Ch9).
4. Optimization of functions (Ch6 and AutoDiff)
(e.g., find the max/min of a function, optimize a function to
perform a classification task)
Representing numbers in a computer

To perform numerical analysis with a computer, we first need to
represent numbers in a computer.

For instance, how can we represent irrational numbers (e.g., √2,
which has infinitely many decimal digits) or rational numbers (i.e.,
numbers of the form a/b where a, b are integers)?
1. Irrational numbers
Because of its finite memory, a computer cannot store all the
decimal digits of an irrational number:

√2 ≈ 1.41421356237...

A computer can only approximate the true value of √2.
2. Rational numbers
What about rational numbers such as

1/3 ≈ 0.3333...?

Because the decimal digits never end, again a computer can only
approximate the true value of 1/3.
What is surprising is that a simple (rational) number such as
1/10 = 0.1 cannot be expressed exactly in a computer!
Demonstration of Jupyter Notebook
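The handout does not reproduce the notebook itself; a minimal sketch of the kind of check one might run in Python is:

```python
# 1/10 has no finite binary expansion, so the stored double is only
# an approximation of 0.1. Printing many digits reveals the error.
x = 0.1
print(f"{x:.20f}")       # not exactly 0.10000000000000000000

# Accumulated round-off: 0.1 + 0.2 does not equal 0.3 exactly.
print(0.1 + 0.2 == 0.3)  # False
```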
Scientific notation for numbers: floating-point system
Let’s say we want to express 73,824,563 in scientific notation.
In the floating-point system, a number is written in the form

7.3824563 × 10^7 (1)

Every floating-point system is characterized by

▶ a base (we typically use base 10, i.e., decimal)
▶ the number of digits to keep
▶ a range for the exponent

In the example above,

7.3824563 × 10^7 (2)

the significand 7.3824563 carries the digits to keep, 10 is the base,
and 7 is the exponent.
A toy floating-point system, D3

A floating-point system can have unexpected results that are
mathematically unintuitive.
Consider a toy floating-point system where the base is 10, the
number of digits to keep is 3, and the range of the exponent is from
−10 to 10. We name this floating-point system D3.
The largest number D3 can represent is

9.99 × 10^10

and the smallest positive number is

0.01 × 10^-10.
Property 1. We can have a + b = a even when b ≠ 0.
In D3, take

a = 1.00 × 10^0

and

b = 1.00 × 10^-3.

Then, mathematically,

(math) a + b = 1.001.

However, since D3 uses only 3 digits, this number cannot be
represented in D3. In fact, the closest number to 1.001 in D3 is

(D3) a + b = 1.00 (the trailing 1 is rounded away) = a.

Remark. Adding a small number to a large number may have no
effect because of the finite number of digits in the floating-point
system (round-off error).
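D3 is easy to simulate in Python by rounding every result to 3 significant digits (a minimal sketch; the helper fl3 and its rounding rule are our own stand-in for D3 arithmetic, not part of the course materials):

```python
def fl3(x):
    """Round x to 3 significant digits, mimicking storage in D3."""
    return float(f"{x:.2e}")  # .2e keeps 1 digit + 2 decimals = 3 digits

a = fl3(1.00e0)   # a = 1.00 × 10^0
b = fl3(1.00e-3)  # b = 1.00 × 10^-3

# Mathematically a + b = 1.001, but D3 rounds it back to 1.00 = a.
print(fl3(a + b) == a)  # True
```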
Property 2. Two equivalent formulae can give different results.
In D3, take

a = 1.01, b = 1.00.

Let us compare

a^2 − b^2 and (a + b)(a − b).

First, a^2 − b^2:

a^2 = 1.01 × 1.01 = 1.0201 (math)
    = 1.02 (D3; only 3 digits are kept, so the trailing 01 is dropped)
b^2 = 1.00
a^2 − b^2 = 2.00 × 10^-2

Next, (a + b)(a − b):

a + b = 2.01
a − b = 0.01 = 1.00 × 10^-2
(a + b)(a − b) = 2.01 × 10^-2.

Therefore, in D3, a^2 − b^2 ≠ (a + b)(a − b).


Remark. Two equivalent formulae can incur different round-off
errors, leading to different answers.
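Rounding every intermediate result to 3 significant digits reproduces this discrepancy in Python (a sketch; the fl3 helper is our own stand-in for D3 arithmetic):

```python
def fl3(x):
    """Round x to 3 significant digits, mimicking D3 storage."""
    return float(f"{x:.2e}")

a, b = 1.01, 1.00

# a^2 - b^2, rounding after every operation as D3 would
direct = fl3(fl3(a * a) - fl3(b * b))    # 1.02 - 1.00 = 0.02

# (a + b)(a - b), rounding after every operation
factored = fl3(fl3(a + b) * fl3(a - b))  # 2.01 * 0.01 = 0.0201

print(direct, factored)  # the two "equivalent" formulae disagree
```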
Double-precision floating-point system
To represent numbers, modern computers use base 2.

A number in the double-precision binary floating-point system is
specified by the following 64 bits:

(−1)^sign × (b51.b50...b0)_2 × 2^(e−1023)

where (b51.b50...b0)_2 is a number in base 2, and

▶ 52 bits store the digits b51, ..., b0
▶ 11 bits store the exponent e
▶ 1 bit stores the sign (+ or −)
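The 64-bit layout can be inspected directly in Python with the standard struct module (a sketch; the helper name double_bits is our own):

```python
import struct

def double_bits(x):
    """Return the 64-bit pattern of a double as (sign, exponent, fraction)."""
    bits = format(struct.unpack("<Q", struct.pack("<d", x))[0], "064b")
    return bits[0], bits[1:12], bits[12:]

sign, exponent, fraction = double_bits(0.1)
print(sign)              # '0' (positive)
print(int(exponent, 2))  # 1019, i.e. e - 1023 = -4
print(fraction[:12])     # the repeating 1001... pattern of 1/10 in base 2
```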
Binary Numbers

Representing a number in base 2 works the same as in base 10, but
with two differences:
▶ each digit b0, ..., b51 is one of two values, 0 and 1
▶ the digit in position k indicates how many summands of 2^k are
in the number
Examples of integers:
▶ 1_2 = 1 × 2^0 = 1
▶ 10_2 = 1 × 2^1 + 0 × 2^0 = 2
▶ 11_2 = 1 × 2^1 + 1 × 2^0 = 3
Examples of fractional binaries:
▶ 0.1_2 = 1 × 2^-1 = 0.5
▶ 0.01_2 = 1 × 2^-2 = 0.25
▶ 0.11_2 = 1 × 2^-1 + 1 × 2^-2 = 0.75
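These expansions are easy to check in Python (a small sketch; the helper name frac_bin_to_float is our own):

```python
def frac_bin_to_float(digits):
    """Convert a fractional binary string, e.g. '11' for 0.11_2, to a float."""
    return sum(int(d) * 2 ** -(k + 1) for k, d in enumerate(digits))

print(frac_bin_to_float("1"))   # 0.5
print(frac_bin_to_float("01"))  # 0.25
print(frac_bin_to_float("11"))  # 0.75
```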
The double-precision floating-point system cannot represent the exact
values of simple-looking numbers:

1/10 = 0.0001100110011..._2 (approximate) (3)
1/2 = 0.1_2 (exact) (4)

This is the source of round-off error.
Chapter 2
Evaluating the value of a function

Now that we can represent numbers in a computer, our first goal is
to evaluate a function f(x) numerically. This means that, given a
function f(x), we wish to find its value at an arbitrary x, say
x = 0.1.
It is straightforward to evaluate polynomials. If f(x) is a
polynomial, i.e., it is expressed in the form

f(x) = a_0 + a_1 x + ... + a_n x^n (5)

then we can simply plug in x = 0.1 and compute

f(0.1) = a_0 + a_1 (0.1) + ... + a_n (0.1)^n.
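This plug-in evaluation is a one-liner in Python; Horner's rule does it with one multiplication per coefficient (a sketch; passing the coefficients as a list [a0, a1, ..., an] is our own convention):

```python
def poly_eval(coeffs, x):
    """Evaluate a0 + a1*x + ... + an*x^n via Horner's rule."""
    result = 0.0
    for c in reversed(coeffs):  # process an, ..., a1, a0
        result = result * x + c
    return result

# f(x) = 1 + 2x + 3x^2 evaluated at x = 0.1
print(poly_eval([1, 2, 3], 0.1))  # ≈ 1.23
```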


How do we numerically evaluate functions that are NOT
polynomials?
Examples of non-polynomial functions:
sin(x), cos(x), e^x, log(x)
The exact values of these functions are known for some values of x:
sin(0) = 0, cos(0) = 1, sin(π/2) = 1, cos(π/2) = 0
How can we evaluate them at an arbitrary x, say
sin(0.1), cos(0.1), e^0.1, log(0.1)?
Taylor’s Theorem
Our approach is to use a mathematical result (Taylor's theorem) to
approximate a non-polynomial function with a polynomial. We then
evaluate the polynomial numerically to find an approximate value
of the non-polynomial function.
Taylor’s Theorem
A smooth function f(x) can be approximated around x0 by

f(x) = f'(x0)(x − x0) + f''(x0)/2! (x − x0)^2 + ... + f^(n)(x0)/n! (x − x0)^n
       + f^(n+1)(s)/(n+1)! (x − x0)^(n+1) + f(x0)

• Left-hand side, f(x): non-polynomial
• Right-hand side, result of Taylor's theorem:
− (x − x0), (x − x0)^2, ... are polynomials in x
− f'(x0), f''(x0), ... are non-polynomial quantities
− h = x − x0 is small, and s ∈ [x0, x0 + h]
If the derivatives f'(x0), f''(x0), ... are known, then we can
approximate a non-polynomial f(x) with a polynomial.
• n-th order Taylor polynomial:

Tn(x) = f(x0) + f'(x0)(x − x0) + f''(x0)/2! (x − x0)^2 + ... + f^(n)(x0)/n! (x − x0)^n

Here, x0 is selected so that f'(x0), f''(x0), ... are known. Then we
don't have to evaluate the non-polynomial functions.
• Remainder (i.e., error due to the polynomial approximation):

Rn(x) = f^(n+1)(s)/(n+1)! (x − x0)^(n+1)

The remainder term is not a polynomial, since the non-polynomial
function f^(n+1)(s) has to be evaluated at an arbitrary s. Instead of
trying to evaluate the remainder term, we will show mathematically
that it can be made arbitrarily small, so it can be ignored.
Algorithm for evaluating a non-polynomial function

1. Use the n-th order Taylor polynomial to evaluate the function
(it's an approximation):

Tn(x) = f(x0) + f'(x0)(x − x0) + f''(x0)/2! (x − x0)^2 + ... + f^(n)(x0)/n! (x − x0)^n

2. Use the remainder term to estimate the error due to the
polynomial approximation:

Rn(x) = f^(n+1)(s)/(n+1)! (x − x0)^(n+1)
An important issue to consider

How do we determine n, the number of terms to include in Tn(x)?

• Choose n so that Tn(x) is a good approximation of f(x).
• This means that the remainder should be smaller than an error
tolerance, ϵ:
|Rn(x)| = |f(x) − Tn(x)| < ϵ
Typically, Rn(x) becomes smaller as n gets larger. So, find a large
enough n so that Rn(x) is smaller than ϵ.
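This stopping rule can be sketched as a loop (our own sketch: bound(n, x) stands for any function returning an upper bound on |Rn(x)|, which must be derived per function, as done for cos below):

```python
import math

def smallest_n(bound, x, eps):
    """Increase n until the remainder bound drops below eps."""
    n = 0
    while bound(n, x) >= eps:
        n += 1
    return n

# Example: for cos, |R_n(x)| <= |x|^(n+1) / (n+1)!
cos_bound = lambda n, x: abs(x) ** (n + 1) / math.factorial(n + 1)
print(smallest_n(cos_bound, 0.15, 1e-5))  # 4
```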
A worked-out example
Evaluate cos(0.15) with error smaller than ϵ = 10^-5.
• Find a Taylor approximation of cos(x), x = 0.15, around x0 = 0.
• Evaluate the derivatives of cos(x) at x0 and use them in the
Taylor approximation.
cos′ x = − sin x, cos′′ x = − cos x, cos′′′ x = sin x, cos′′′′ x = cos x
cos′ x0 = 0, cos′′ x0 = −1, cos′′′ x0 = 0, cos′′′′ x0 = 1
• The Taylor approximation of cos(x) at x0 = 0 is:

Tn(x) = 1 − x^2/2! + x^4/4! − x^6/6! + ... + (−1)^n x^(2n)/(2n)!

• The remainder is

Rn(x) = cos^(n+1)(s)/(n+1)! x^(n+1)
How large is the error of the Taylor approximation? Estimate how
large the remainder term is.
See if we can control the absolute value of Rn(x):

|Rn(x)| = |cos^(n+1)(s)|/(n+1)! |x|^(n+1)

Notice that the derivatives of cos(x) are of the form
± sin(x), ± cos(x). This means that

|cos^(n+1)(s)| ≤ 1

Hence,

|Rn(x)| ≤ 1/(n+1)! |x|^(n+1)

Now, using Python, evaluate this bound at x = 0.15 for different
values of n and verify that it becomes smaller as n increases.
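A minimal sketch of that check (the variable names are our own):

```python
import math

x, eps = 0.15, 1e-5

# Remainder bound |R_n(x)| <= |x|^(n+1) / (n+1)! for cos
for n in range(1, 7):
    bound = abs(x) ** (n + 1) / math.factorial(n + 1)
    print(n, bound)  # shrinks rapidly as n increases

# By n = 4 the bound is below eps; compare T4 against math.cos
T4 = 1 - x**2 / math.factorial(2) + x**4 / math.factorial(4)
print(abs(T4 - math.cos(x)) < eps)  # True
```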
