Anderson M2B Lesson 0
The Fundamental Problems
We begin with the most important and often most nuanced problem.
Find and solve real-world problems using the applied math modeling process. In linear algebra, we focus on transforming our real-world problem into an ideal mathematical model that is stated in terms of vectors and matrices. Then, we attempt to represent the most pertinent aspects of our problem using at least one of the linear-algebraic problems stated below.
Applied mathematics focuses on solving real-world problems using mathematical models. In this book, we define a real-world problem to be a set of questions used to elicit information that can be stated using quantitative data observable by human beings. Very often, real-world problems are created by scientific and entrepreneurial thinkers in order to address some specific need of a society. Many real-world problems arise in the study of the physical, biological, chemical, or social spheres of our existence. Let's investigate the nature of the applied mathematical modeling process, which has four distinct phases and is iterative. By doing so, we can learn to identify its recurring patterns.
The first phase of this process begins by identifying a real-world problem, which
we define as any problem that matters in our lives and that can be studied by making
observations. The practice of observing problem dynamics is more formally known
as the collection of measured data, though this distinction usually exists only in the minds of trained scientists. One of the fascinating features of real-world problems is that by identifying and imagining the problem, we likely develop some idea of what a meaningful solution might be. However, from the standpoint of applied mathematics, a valuable real-world problem usually includes major obstacles that block the path between the problem statement and our desired solution.
Figure 1: Diagram of the applied mathematical modeling process. Applied modeling begins with a real-world problem.
As we see in this paper, there are many subtleties and nuances involved in mathematizing a real-world problem to produce a corresponding ideal mathematical model
that actually models key aspects of the problem. In good applied mathematics, this
is where interdisciplinary teams prove to be invaluable. Each professional is trained
in a particular field. By combining talents with a growth mindset and some luck,
great learning might result in a useful modeling scheme.
Once we've decided on an ideal mathematical modeling scheme, phase three of the process begins: we analyze our ideal mathematical model to produce a corresponding ideal solution.
© Jeffrey A. Anderson, vS20190403
One of the major reasons that we force students in STEM fields to take college coursework in mathematics is that we want our students to be able to use
mathematics to accurately analyze problems that may arise in their future careers.
However, a major tragedy of many college mathematics classes is that most of the
applied mathematical modeling process is hidden from view. Instead, students in
our college math classes focus only on mathematical analysis techniques that are
required to analyze an ideal mathematical model and produce a corresponding ideal
solution.
Indeed, it is very seldom that young students get a chance to participate in all
four steps of the applied modeling process prior to the end of their undergraduate
degree. It is even more rare that students observe this process in action while
completing their lower division courses in mathematics. As college instructors, we
see evidence of this missed opportunity when our students ask “when will I use this
material?” or “how is this applicable in the real world?”
The modeling activity presented in this paper is designed to provide a compelling
answer to these questions. This is part of the author's larger development program to
enrich our ability to teach introductory courses in applied linear algebra. The
ultimate goal of these types of modeling activities is to provide students access
to valuable mathematical modeling experiences that illustrate all four phases of the applied modeling process. The hope is that these types of activities enrich
students’ understanding of how linear algebra is used in practice. The particular
activity presented in this paper is especially relevant for students who want to earn
degrees in electrical engineering, computer science, mechanical engineering, physics,
or applied mathematics.
We continue with a discussion of the most basic problem in linear algebra: the
matrix-vector multiplication problem.
Problem 1: The Matrix-Vector Multiplication Problem

A · x = b
R
Matrix-vector multiplication is a "forward problem." To understand this terminology, let's define the function f : R^n → R^m given by f(x) = A · x. Based on this definition, we will see that the function f satisfies the following:
• the domain of f is R^n
• the codomain of f is R^m
• the range of f is contained in R^m but may not be equal to R^m, depending on the entries in the columns of matrix A
In this case, the function f is defined as a matrix-vector product. Any matrix
A implicitly defines the matrix-vector product function f , as illustrated above.
Matrix-vector multiplication is a forward problem because we start with a given
input x in the domain Rn and move forward into the range to find the output
vector b. When solving the matrix-vector multiplication problem, we map from the
domain forward into the range. Hence, we call this a forward problem.
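To make the forward problem concrete, here is a small numerical sketch using Python's numpy library; the matrix A and input x below are arbitrary choices for illustration, not taken from any particular application:

```python
import numpy as np

# A hypothetical 3x2 matrix A: it maps inputs in R^2 forward to outputs in R^3.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

x = np.array([1.0, -1.0])   # an input vector in the domain R^2

b = A @ x                   # forward problem: compute the output b = A x
print(b)                    # [-1. -1. -1.]
```

Notice that solving the forward problem requires nothing more than carrying out the multiplication.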
Problem 2: The Nonsingular Linear-Systems Problem

A · x = b.
Just like matrix-vector multiplication, we can describe the nonsingular linear-systems problem using the function f : R^n → R^n defined as f(x) = A · x. Based on this definition, we have:
• the domain of f is R^n
• the codomain of f is R^n

Given an output vector b, the nonsingular linear-systems problem asks us to find an input vector x that satisfies

f(x) = b.
When solving the linear-systems problem, we begin in the range and work our way
backwards into the domain. Hence, we call this a backward, or inverse, problem.
To craft a nonsingular linear-systems problem, we need to construct a square,
nonsingular matrix A that describes a physical phenomenon. As we will see, there
are many real-world applications that result in these special types of matrices.
With our nonsingular matrix A in hand, we also need to determine a vector b that
represents an output state in our system. We construct our model in such a way that
output vector b can be written as the product of our matrix A with some unknown
input vector x ∈ R^n. The solution to the nonsingular linear-systems problem is
any input vector that results in output b after a matrix-vector multiplication with
matrix A.
A major similarity between matrix-vector multiplication and the nonsingular
linear-systems problem is that both depend on matrix-vector multiplication. The
process of solving the former problem is simply to calculate a product. To solve the latter problem, we "reverse engineer" our matrix-vector product in order to find
input vectors that produce a given output.
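The backward nature of the problem is easy to see numerically. In the sketch below (a hypothetical 2-by-2 example, using numpy's built-in solver rather than any method developed in this text), we are handed the output b and must recover the input x:

```python
import numpy as np

# A hypothetical nonsingular 2x2 matrix and an observed output vector b.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# Backward problem: recover the input x that produced b under x -> A x.
x = np.linalg.solve(A, b)
print(x)            # [1. 3.]

# Check the answer by running the forward problem on the recovered input.
print(A @ x)        # [ 5. 10.]
```

The final line confirms that the recovered input reproduces the given output under matrix-vector multiplication.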
One of the major differences between the matrix-vector multiplication problem
and the nonsingular linear-systems problem is in the assumptions on the matrix
A. For general matrix-vector products, we need only create a rectangular m × n matrix, where m may not be equal to n. No additional structure on the matrix A
is needed in order to solve this problem. On the other hand, for the nonsingular
linear-systems problem, we need to create a matrix with the same number of rows
as columns and this matrix must have very special column structure.
Not all linear-systems problems are nonsingular. Indeed, one of the advanced
problems in applied linear algebra is known as the general linear-systems problem
and focuses on solving linear systems with general rectangular coefficient matrices.
Because most of the techniques for solving the general linear systems depend on
the solution methods we construct for nonsingular linear systems, we focus our
attention most heavily on nonsingular linear systems of equations.
The nonsingular linear-systems problem arises in a wide variety of applications.
Most of these depend on interconnections between many different objects. For example, we use nonsingular linear systems to analyze a collection of masses connected by springs, a so-called mass-spring chain. Using similar matrix algebra but focusing
on laws that govern electronics, we construct nonsingular linear-systems problems
to calculate node voltage potentials in an electric circuit containing only resistors
and known DC voltage and current sources. Similarly, we use nonsingular linear
systems to discretize ODEs and PDEs. By doing so, we arrive at a finite approximation to our desired solutions. Very common numerical algorithms that rely on
this approach are known as finite difference or finite element methods. These can
be employed to analyze the static behavior of structures like buildings and bridges
under known loads.
Further, we can use nonsingular linear-systems to design continuous piecewise
polynomial functions that interpolate a given set of data. This technique is known as
polynomial spline interpolation of which the most famous variety are cubic splines.
Polynomial splines can be applied in computer graphics, the design of roller coasters
and the construction of any object with a smooth surface that exists in 2D or 3D
space. The preceding examples are just a few of the many application areas that give rise
to nonsingular linear systems.
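To see a nonsingular linear system arise in spline design, here is a sketch of natural cubic spline interpolation written with numpy; the natural boundary conditions (zero second derivatives at the endpoints) and the sample data are choices made for this example:

```python
import numpy as np

def natural_cubic_spline(x, y):
    """Return a function S(t) interpolating (x, y) with a natural cubic spline.

    The unknown second derivatives M_i at the interior knots satisfy a
    nonsingular tridiagonal linear system, solved here with np.linalg.solve.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x) - 1
    h = np.diff(x)

    # Build the tridiagonal system for the interior second derivatives.
    A = np.zeros((n - 1, n - 1))
    rhs = np.zeros(n - 1)
    for i in range(1, n):
        if i > 1:
            A[i - 1, i - 2] = h[i - 1]
        A[i - 1, i - 1] = 2 * (h[i - 1] + h[i])
        if i < n - 1:
            A[i - 1, i] = h[i]
        rhs[i - 1] = 6 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])

    M = np.zeros(n + 1)                     # natural conditions: M_0 = M_n = 0
    M[1:n] = np.linalg.solve(A, rhs)

    def S(t):
        # Locate the interval containing each query point, then evaluate
        # the piecewise-cubic formula in terms of the second derivatives M.
        i = np.clip(np.searchsorted(x, t) - 1, 0, n - 1)
        dl, dr = t - x[i], x[i + 1] - t
        return (M[i] * dr**3 + M[i + 1] * dl**3) / (6 * h[i]) \
             + (y[i] / h[i] - M[i] * h[i] / 6) * dr \
             + (y[i + 1] / h[i] - M[i + 1] * h[i] / 6) * dl
    return S

# The spline passes through each of the given data points.
S = natural_cubic_spline([0, 1, 2, 3], [0.0, 2.0, 1.0, 3.0])
print(S(np.array([0.0, 1.0, 2.0, 3.0])))    # [0. 2. 1. 3.]
```

The key point is the middle block: the smoothness conditions at the interior knots translate directly into a square, nonsingular system of linear equations.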
We can make a convincing argument that the nonsingular linear-systems problem is the most fundamental problem of modern-day applied mathematics. The
central supporting point for this claim is that computers make it possible to solve
nonsingular linear-systems problems quickly and accurately. Moreover, since the
dawn of digital computation in the mid-1940s, continual improvements in computer hardware and software have enabled STEM professionals to solve larger and larger linear systems of equations. In the 1950s, state-of-the-art computers costing millions of dollars could solve 20 × 20 linear-systems problems in less than a week's time. By the 1970s, it was standard to benchmark new computers by solving linear systems with 100 equations and 100 unknowns in about 24 hours. Today, even commodity machines solve vastly larger systems in a fraction of a second, and much of the power of modern applied mathematics comes from solving linear algebraic problems using a computer. For that reason, it is best if
you start your journey in linear algebra with a practical focus. Understand that
the power of applied linear algebra is intertwined with our ability to implement
computer algorithms to solve the fundamental problems of linear algebra.
The nonsingular linear-systems problem is a special case of a much more general
problem type.
A · x = b.
We break these into two separate problems in order to specify which paradigm to use. Many of the techniques used to solve general linear-systems problems are built on intuition from both matrix-vector multiplication and the nonsingular linear-systems problem. In this text, we spend much more energy motivating and analyzing the nonsingular case.
Problem 3: The Full-Rank Least-Squares Problem
minimize ‖A · x − b‖₂
Again, we can describe the full-rank least-squares problem using the function f : R^n → R^m with f(x) = A · x. Based on this definition, we have:
• the domain of f is R^n
• the codomain of f is R^m
• the range of f is a subset of R^m and most likely not equal to R^m
Like the linear-systems problem, the least-squares problem starts with a matrix A.
Unlike the linear-systems problem, the given vector b in the least-squares problem
is not necessarily in the range of our function. In fact, the only requirement on b is
that it is in the codomain. In most meaningful least-squares problems, there will be
no input vector x in the domain such that the equality f (x) = b holds exactly. As
we will see, to solve the least-squares problem, we produce an optimal input vector that makes f(x) ≈ b. We do so by minimizing the distance between the output vector b and the range of the function f.
In order to construct a least-squares problem, we need to create a full-rank,
rectangular matrix A. The number of rows of A should be greater than or equal to
the number of columns of this matrix. We also need to create an output vector b
representing some state in our modeled phenomenon. The solution of the full-rank
least-squares problem is a vector x ∈ R^n such that f(x) is as close as possible to b.
Again, the least-squares problem depends on matrix-vector multiplication. This
is a backward problem that results in an approximate answer not exactly equal to
our original desired output.
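A small sketch makes this concrete. Below we fit a line to noisy measurements using numpy's built-in least-squares solver; the data and the linear model are hypothetical choices for illustration:

```python
import numpy as np

# Hypothetical overdetermined model: fit a line y = c0 + c1 * t to noisy data.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.1, 6.9])      # roughly follows y = 1 + 2 t

# Tall, full-rank 4x2 matrix A; the vector b = y is (almost surely)
# not in the range of the map x -> A x.
A = np.column_stack([np.ones_like(t), t])

# Solve the full-rank least-squares problem: minimize ||A x - b||_2.
x, res, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(x)        # approximately [1.06 1.96], close to the underlying line
```

No exact solution exists here; the solver returns the input whose image under f lies closest to b in the 2-norm.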
The full-rank least-squares problem is used to solve a variety of applied modeling
problems. Attempting to find a sum of continuous functions that can be used to
interpolate measured data that contains measurement error results in a least squares
problem. With mild conditions on the input data, we can guarantee that the matrix
A will be full rank. The least-squares problem is also used in geographical surveying,
image processing, GPS calculations, statistical regression, and the Procrustes problem.
Problem 4: The Eigenvalue Problem

A · x = λx
The eigenvalue problem arises in the solution of partial differential equations, the analysis of vibrations, facial recognition software, and principal component analysis.
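As a quick numerical illustration of the eigenvalue problem A · x = λx, here is a sketch using numpy and a small symmetric matrix chosen for this example:

```python
import numpy as np

# A small symmetric matrix whose eigenpairs we can verify directly.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Solve the eigenvalue problem A x = lambda x.
eigvals, eigvecs = np.linalg.eig(A)
print(np.allclose(np.sort(eigvals), [1.0, 3.0]))    # True

# Each column of eigvecs is an eigenvector for the matching eigenvalue.
for k in range(2):
    lam, v = eigvals[k], eigvecs[:, k]
    print(np.allclose(A @ v, lam * v))              # True
```

Each check confirms the defining equation: multiplying an eigenvector by A merely scales it by the corresponding eigenvalue.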
‖Ax − b‖₂
Problem 6: The Generalized Eigenvalue Problem

Ax = λBx
Problem 7: The Quadratic Eigenvalue Problem
λ²Mx + λBx + Kx = 0
• Mass-Spring Systems
• RLC Circuit analysis (using Laplace transforms)
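One standard way to attack the quadratic eigenvalue problem numerically is to linearize it into an ordinary eigenvalue problem of twice the size. The sketch below assumes M is invertible and uses small hypothetical mass, damping, and stiffness matrices chosen only for illustration:

```python
import numpy as np

# Sketch: reduce the quadratic eigenvalue problem
#   (lambda^2 M + lambda B + K) x = 0
# to an ordinary eigenvalue problem via a companion linearization,
# assuming M is invertible.
n = 2
M = np.eye(n)                           # hypothetical mass matrix
B = np.array([[0.4, 0.0],               # hypothetical damping matrix
              [0.0, 0.2]])
K = np.array([[2.0, -1.0],              # hypothetical stiffness matrix
              [-1.0, 2.0]])

Minv = np.linalg.inv(M)
C = np.block([[np.zeros((n, n)), np.eye(n)],
              [-Minv @ K,        -Minv @ B]])

lam, Z = np.linalg.eig(C)               # the 2n eigenvalues of the QEP

# For each eigenvalue, the top half of the companion eigenvector solves
# the quadratic problem, so each residual below should be tiny.
residuals = [np.linalg.norm((l**2 * M + l * B + K) @ Z[:n, j])
             for j, l in enumerate(lam)]
print(max(residuals))
```

The companion matrix C satisfies C·[x; λx] = λ·[x; λx] exactly when (λ²M + λB + K)x = 0, which is why the residual check succeeds.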
such that

A = U Σ V*