
NUMERICAL ANALYSIS

Master Degree in Civil Engineering


Prof. Luca Dede’
A.Y. 2024/25

Logbook of Lectures

• Lecture 01 – Monday, September 16, 2024, 08:15-10:15 – LINK to recording


Introduction to the Course
Goals and scope, contents, organization, exam. Scientific Computing and its role in
Engineering. Examples of meaningful applications: statics of truss bridge, etc.
1 – Introduction to Numerical Analysis and Scientific Computing
1.1 – Mathematical Models and Scientific Computing
Physical, mathematical, and numerical problems; Numerical Analysis and Scientific
Computing, computers and supercomputers, efficiency of a method, algorithms.
1.2 – Representation of Real Numbers and Computer Operations
Floating–point numbers, machine epsilon, round–off error, double precision. Floating–
point arithmetic, propagation of round–off errors, example in Matlab®.
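The floating-point facts of Section 1.2 can be sketched directly at a prompt. The lecture uses Matlab; the snippet below is an equivalent check in Python, whose `float` is the same IEEE 754 double precision.

```python
import sys

# Machine epsilon: the gap between 1.0 and the next representable double.
eps = sys.float_info.epsilon          # equals 2^-52 in double precision
print(eps)                            # 2.220446049250313e-16

# Round-off error in action: 0.1 has no exact binary representation,
# so accumulating it ten times does not give exactly 1.0.
s = sum([0.1] * 10)
print(s == 1.0)                       # False
print(abs(s - 1.0) < 10 * eps)        # True: the error is a few ulps
```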
1.3 – From the Mathematical Problem to the Numerical Problem
Numerical problem, well–posedness, consistency, convergence, computational costs;
example. Truncation and round–off errors, numerical and computational solutions.
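The interplay of truncation and round-off errors can be seen on a small model problem (my choice, not from the lecture): approximating f'(0) = 1 for f(x) = e^x by the forward difference (f(h) - f(0))/h, whose truncation error is O(h).

```python
import math

def fd_error(h):
    """Error of the forward-difference approximation of exp'(0) = 1."""
    return abs((math.exp(h) - 1.0) / h - 1.0)

# Shrinking h reduces the truncation error ...
print(fd_error(1e-2))   # about 5e-3
print(fd_error(1e-6))   # about 5e-7
# ... until round-off in computing f(h) - f(0) dominates and the total
# error grows again, even though the truncation error keeps shrinking.
print(fd_error(1e-14))  # larger than the error at h = 1e-6
```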
2 – Numerical Solution of Linear Systems
2.1 – Motivations and Classification of Methods
Goals, existence and uniqueness of the solution, computational costs and number of
operations; example of Cramer's rule. Direct and iterative methods: classification.
2.2 – Direct Methods
Solution of linear systems with diagonal matrices (D).

• Lecture 02 – Tuesday, September 17, 2024, 15:15-16:15 – LINK to recording


2.2 – Direct Methods (cont’d)
Solution of linear systems with lower and upper triangular matrices (L and U); forward
and backward substitution algorithms, computational costs. LU factorization method
for solving the linear system Ax = b, procedure, convention for matrix L, Gauss Elimination
method (GEM) to obtain the LU factorization of A, algorithm, interpretation,
pivot elements, possible issues, computational costs; LU factorization method for solving
linear systems with different right–hand side vectors b, computational costs.
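The Lecture 02 pipeline can be sketched compactly: GEM produces A = LU, then one forward and one backward substitution solve Ax = b. Below is a minimal Python version (the course uses Matlab), with no pivoting, so it assumes all pivots are nonzero; the 3×3 test system is my own illustration.

```python
def lu_factor(A):
    """Return (L, U) with unit lower-triangular L, via Gauss elimination."""
    n = len(A)
    U = [row[:] for row in A]                     # work on a copy of A
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):                        # eliminate column k
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]           # multiplier; pivot U[k][k] != 0
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

def forward_sub(L, b):
    """Solve Ly = b for unit lower-triangular L (about n^2 operations)."""
    y = []
    for i in range(len(b)):
        y.append(b[i] - sum(L[i][j] * y[j] for j in range(i)))
    return y

def backward_sub(U, y):
    """Solve Ux = y for upper-triangular U."""
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [3.0, 5.0, 3.0]            # exact solution is x = [1, 1, 1]
L, U = lu_factor(A)
x = backward_sub(U, forward_sub(L, b))
print(x)                       # [1, 1, 1] up to round-off
```

Once L and U are stored, a new right-hand side costs only the two O(n²) substitutions, not a new O(n³) factorization, which is the point made at the end of the lecture.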

• Lecture 03 – Monday, September 23, 2024, 08:15-10:15 – LINK to recording
2.2 – Direct Methods (cont’d)
Existence and uniqueness of the LU factorization, conditions, examples, role of pivots. LU
factorization method with pivoting by row, GEM with pivoting, criterion for row permutation,
permutation matrix, LU factorization with pivoting by row, computational cost,
example; the Matlab® command lu. Cholesky factorization method for A symmetric
and positive definite, properties and computational costs. Thomas algorithm for tridiagonal
matrices, computational costs (hints). Accuracy of the numerical solution, exact
and numerical solutions, error, relative error, residual, condition number of a matrix,
stability estimate, error estimate, role of the condition number. The Matlab® command \.
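The Thomas algorithm mentioned in this lecture can be sketched in a few lines of Python (the course uses Matlab): LU-style elimination specialized to a tridiagonal system, costing O(n) operations instead of the O(n³) of a general factorization. The finite-difference test system below is my own illustration.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: subdiagonal a (length n-1), diagonal b
    (length n), superdiagonal c (length n-1), right-hand side d."""
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination sweep
        denom = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    x = [0.0] * n                               # backward substitution sweep
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Classic tridiagonal test: the 1D finite-difference Laplacian [-1, 2, -1].
n = 5
x = thomas([-1.0] * (n - 1), [2.0] * n, [-1.0] * (n - 1), [1.0] * n)
print(x)   # symmetric profile, largest in the middle
```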
2.3 – Iterative Methods
Iterates, initial guess, general scheme of iterative methods, iteration matrix and iteration
vectors, strong consistency condition, error and iteration matrix, spectral radius of
the iteration matrix; necessary and sufficient condition for convergence, speed of
convergence.
• Lecture 04 – Monday, September 30, 2024, 08:15-10:15 – LINK to recording
2.3 – Iterative Methods (cont’d)
Splitting methods, preconditioner, residual and preconditioned residual, algorithm,
criteria for the choice of the preconditioner (convergence and computational costs), error
and residual, stopping criteria, criterion of the normalized residual and its reliability,
difference of consecutive iterates. Jacobi and Gauss–Seidel methods, corresponding
preconditioners and iteration matrices, applicability of the methods, sufficient conditions
for convergence. Richardson methods, stationary and dynamic methods, algorithms,
convergence properties of the preconditioned stationary Richardson methods, role of
the spectral condition number and of the preconditioner. Gradient methods, with and
without preconditioner, algorithms, convergence properties, role of the preconditioner;
interpretation of the gradient method as an optimization algorithm, role of the residual
as descent direction. Conjugate gradient methods, interpretation, A–conjugate descent
directions, convergence properties.
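The Jacobi iteration and the normalized-residual stopping criterion from this lecture can be sketched as follows, here in Python rather than the course's Matlab. The test matrix is my own choice: it is strictly diagonally dominant by rows, which is one of the sufficient conditions for convergence stated in the lecture.

```python
def jacobi(A, b, tol=1e-10, max_iter=1000):
    """Jacobi method with stopping test ||r_k|| / ||b|| < tol (inf-norm)."""
    n = len(b)
    x = [0.0] * n                                  # initial guess x^(0) = 0
    norm_b = max(abs(v) for v in b)
    for k in range(max_iter):
        # Residual r = b - A x and its normalized infinity norm.
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        if max(abs(v) for v in r) / norm_b < tol:
            return x, k
        # Jacobi update: x_i^(k+1) = (b_i - sum_{j != i} a_ij x_j^(k)) / a_ii
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x, max_iter

A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]          # strictly diagonally dominant by rows
b = [5.0, 6.0, 5.0]            # exact solution x = [1, 1, 1]
x, iters = jacobi(A, b)
print(x, iters)
```

The preconditioner here is P = D, the diagonal of A; Gauss–Seidel would instead take P = D - E (the lower triangular part), updating each component with the already-computed entries of the current iterate.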
• Lecture 05 – Monday, October 7, 2024, 08:15-10:15 – LINK to recording
2.4 – Direct vs. Iterative Methods
Choice of the method based on properties of the matrix and computational resources.
3 – Approximation of Zeros of Nonlinear Equations and Systems
3.1 – Newton Methods
Goal, zeros of scalar functions. Newton method, construction of the method and graphical
interpretation, algorithm, role of the first derivative, choice of the initial iterate,
convergence order of an iterative method, graphical interpretation of convergence, convergence
order p = 1 vs. p = 2, convergence properties of the Newton method, multiplicity of the
zero, impact on the convergence order. Modified Newton method, role of the multiplicity,
algorithm, convergence order. Stopping criteria, difference of successive iterates,
absolute and relative residual, errors vs. error estimators, quality of the error estimators.
Inexact and quasi–Newton methods, pros and cons of computing the first derivative,
chord and secant methods, convergence properties and graphical interpretation. Systems
of nonlinear equations, goals and notation, Jacobian matrix.
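The Newton method of Section 3.1 can be sketched on a simple model problem of my choosing, f(x) = x² - 2, whose zero √2 is simple, so quadratic convergence (p = 2) is expected. The stopping criterion is the difference of successive iterates discussed in the lecture, used as an error estimator.

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k), stopped when the
    increment |x_{k+1} - x_k| (an error estimator) drops below tol."""
    x = x0
    for k in range(1, max_iter + 1):
        step = f(x) / df(x)            # requires df(x) != 0 near the zero
        x -= step
        if abs(step) < tol:
            return x, k
    return x, max_iter

root, iters = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=2.0)
print(root, iters)                     # converges to sqrt(2) in a few steps
```

Replacing `df` with a fixed slope gives the chord method, and with the incremental ratio through the last two iterates gives the secant method; both trade the cost of the derivative for a lower convergence order, as discussed in the lecture.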
