Logbook of Lectures
• Lecture 03 – Monday, September 23, 2024, 08:15-10:15 – LINK to recording
2.2 – Direct Methods (cont’d)
Existence and uniqueness of LU factorization, conditions, examples, role of pivot. LU
factorization method with pivoting by row, GEM with pivoting, criterion for row
permutation, permutation matrix, LU factorization with pivoting by row, computational
cost, example; the Matlab command lu. Cholesky factorization method for A symmetric
and positive definite, properties and computational costs. Thomas algorithm for
tridiagonal matrices, computational costs (hints). Accuracy of the numerical solution,
exact and numerical solution, error, relative error, residual, condition number of a
matrix, stability estimate, error estimate, role of condition number. The Matlab
command \.
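As a small illustration of the commands mentioned above, a minimal Matlab sketch could
look as follows (the matrix A and right-hand side b are made-up test data, not taken
from the lecture):

  n = 5;
  A = rand(n) + n*eye(n);          % test matrix, nonsingular by construction
  b = ones(n, 1);
  [L, U, P] = lu(A);               % LU factorization with row pivoting: P*A = L*U
  y = L \ (P*b);                   % forward substitution
  x = U \ y;                       % backward substitution
  x_bs = A \ b;                    % the same system solved directly with \
  res = norm(b - A*x) / norm(b);   % normalized residual
  K = cond(A);                     % condition number K(A): the relative error
                                   % is bounded by K(A)*res
  M = A*A' + eye(n);               % a symmetric positive definite matrix
  R = chol(M);                     % Cholesky factorization: M = R'*R
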
2.3 – Iterative Methods
Iterates, initial guess, general scheme of iterative methods, iteration matrix and
iteration vectors, strong consistency condition, error and iteration matrix, spectral
radius of the iteration matrix; necessary and sufficient condition for convergence,
speed of convergence.
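A brief sketch of the general scheme x^(k+1) = B*x^(k) + g; the 2x2 system and the
splitting used to build B and g below are illustrative choices, not taken from the
lecture:

  A = [4 1; 1 3];  b = [1; 2];     % made-up test system A*x = b
  P = diag(diag(A));               % a simple splitting of A (its diagonal)
  B = eye(2) - P \ A;              % iteration matrix B = I - P^(-1)*A
  g = P \ b;                       % with this g, x = B*x + g iff A*x = b
                                   % (strong consistency)
  rho = max(abs(eig(B)));          % spectral radius of B: the method converges
                                   % for every initial guess iff rho < 1
  x = zeros(2, 1);                 % initial guess x^(0)
  for k = 1:50
      x = B*x + g;                 % iterates x^(k)
  end
  err = norm(x - A\b);             % error with respect to the exact solution
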
• Lecture 04 – Monday, September 30, 2024, 08:15-10:15 – LINK to recording
2.3 – Iterative Methods (cont’d)
Splitting methods, preconditioner, residual and preconditioned residual, algorithm,
criteria for the choice of the preconditioner (convergence and computational costs),
error and residual, stopping criteria, criterion of the normalized residual and its
reliability, difference of consecutive iterates. Jacobi and Gauss–Seidel methods,
corresponding preconditioners and iteration matrices, applicability of the methods,
sufficient conditions for convergence. Richardson methods, stationary and dynamic
methods, algorithms, convergence properties of the preconditioned stationary
Richardson methods, role of the spectral condition number and of the preconditioner.
Gradient methods, with and without preconditioner, algorithms, convergence properties,
role of the preconditioner; interpretation of the gradient method as an optimization
algorithm, role of the residual as descent direction. Conjugate gradient methods,
interpretation, A–conjugate descent directions, convergence properties.
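As a sketch of the unpreconditioned gradient method, with the residual as descent
direction and the optimal (dynamic) step length; the system, tolerance and maximum
number of iterations below are illustrative values:

  A = [3 1; 1 2];  b = [1; 1];             % made-up symmetric positive definite system
  x = zeros(2, 1);                         % initial guess
  tol = 1e-8;  kmax = 200;
  r = b - A*x;                             % initial residual
  k = 0;
  while norm(r)/norm(b) > tol && k < kmax  % normalized-residual stopping criterion
      alpha = (r'*r) / (r'*(A*r));         % optimal step along the descent direction r
      x = x + alpha*r;                     % new iterate
      r = b - A*x;                         % new residual
      k = k + 1;
  end
  % The conjugate gradient method is available directly, e.g. x_cg = pcg(A, b, tol, kmax);
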
• Lecture 05 – Monday, October 7, 2024, 08:15-10:15 – LINK to recording
2.4 – Direct vs. Iterative Methods
Choice of the method based on properties of the matrix and computational resources.
3 – Approximation of Zeros of Nonlinear Equations and Systems
3.1 – Newton Methods
Goal, zeros of scalar functions. Newton method, construction of the method and
graphical interpretation, algorithm, role of first derivative, choice of the initial
iterate, convergence order of an iterative method, graphical interpretation of
convergence, convergence order p = 1 vs. p = 2, convergence properties of the Newton
method, multiplicity of the zero, impact on the convergence order. Modified Newton
method, role of the multiplicity, algorithm, convergence order. Stopping criterion,
difference of successive iterates, absolute and relative residual, errors vs. error
estimator, quality of the error estimators. Inexact and quasi–Newton methods, pros and
cons of computing the first derivative, chord and secant methods, convergence
properties and graphical interpretation. Systems of nonlinear equations, goals and
notation, Jacobian matrix.
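A minimal Newton-method sketch for a scalar equation f(x) = 0; the function f, its
derivative and the initial iterate are illustrative, not the examples used in the
lecture:

  f  = @(x) x.^2 - 2;              % f has a simple zero at sqrt(2)
  df = @(x) 2*x;                   % first derivative, required by the method
  x  = 1;                          % initial iterate x^(0)
  tol = 1e-10;  kmax = 50;
  for k = 1:kmax
      dx = -f(x) / df(x);          % Newton correction
      x  = x + dx;                 % new iterate x^(k)
      if abs(dx) < tol             % stopping criterion: difference of
          break                    % successive iterates
      end
  end
  % For a zero of known multiplicity m > 1, the modified Newton method uses the update
  % x = x + m*dx to recover second-order convergence; the secant method replaces df
  % with the incremental ratio built from the two most recent iterates.
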