Computational and Applied 22s Soln
Problem 1. Let $p_0, p_1, \dots$ be polynomials orthogonal with respect to the inner product
$$\langle f, g \rangle = \int_{-1}^{1} f(x)\,g(x)\,w(x)\,dx, \qquad w(x) > 0 \text{ for } x \in (-1, 1),$$
where $p_i(x)$ is a polynomial of degree $i$. Let $x_0, x_1, \dots, x_n$ be the roots of $p_{n+1}(x)$. Construct an orthonormal basis in the subspace of polynomials of degree no more than $n$ such that, for any polynomial in this subspace, the coefficients of its expansion into the basis are equal to the scaled values of this polynomial at the nodes $x_0, x_1, \dots, x_n$.
Solution: Start by considering $l_i(x)$, $i = 0, \dots, n$, the Lagrange interpolating polynomials of degree $n$ for the nodes $x_0, x_1, \dots, x_n$. Compute the inner product of two such polynomials using the Gaussian quadrature with these nodes, which is exact for polynomials of degree at most $2n + 1$. We have
$$\langle l_i, l_j \rangle = \int_{-1}^{1} l_i(x)\,l_j(x)\,w(x)\,dx = \sum_{k=0}^{n} l_i(x_k)\,l_j(x_k)\,w_k = \delta_{ij}\,w_i,$$
where $w_k > 0$ are the Gaussian quadrature weights. Normalize,
$$R_i(x) = \frac{1}{\sqrt{w_i}}\, l_i(x),$$
and obtain
$$\int_{-1}^{1} R_i(x)\,R_j(x)\,w(x)\,dx = \sum_{k=0}^{n} w_k\,R_i(x_k)\,R_j(x_k) = \sum_{k=0}^{n} w_k\,\frac{1}{\sqrt{w_i}}\,\delta_{ik}\,\frac{1}{\sqrt{w_j}}\,\delta_{jk} = \delta_{ij},$$
namely, these functions form an orthonormal basis. The coefficients of a function in this subspace are computed as projections on the basis,
$$f_i = \langle f, R_i \rangle = \int_{-1}^{1} f(x)\,R_i(x)\,w(x)\,dx = \sum_{k=0}^{n} w_k\,f(x_k)\,R_i(x_k) = \sum_{k=0}^{n} w_k\,f(x_k)\,\frac{1}{\sqrt{w_i}}\,\delta_{ik} = \sqrt{w_i}\, f(x_i).$$
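As a concrete check, the construction can be verified numerically. The sketch below assumes the Legendre weight $w(x) \equiv 1$ (an illustrative choice; the argument above holds for any admissible $w$) and uses NumPy's Gauss-Legendre nodes and weights:

```python
import numpy as np

# Gauss-Legendre nodes x_k and weights w_k for w(x) = 1 (illustrative choice).
n = 4  # subspace of polynomials of degree <= n
x, w = np.polynomial.legendre.leggauss(n + 1)

def lagrange_basis(i, t):
    """Lagrange polynomial l_i(t) for the nodes x."""
    others = np.delete(x, i)
    return np.prod([(t - xm) / (x[i] - xm) for xm in others], axis=0)

# R_i = l_i / sqrt(w_i); check orthonormality with a high-order quadrature.
t, wt = np.polynomial.legendre.leggauss(50)  # exact for the integrands below
R = np.array([lagrange_basis(i, t) / np.sqrt(w[i]) for i in range(n + 1)])
gram = (R * wt) @ R.T
assert np.allclose(gram, np.eye(n + 1)), "basis should be orthonormal"

# Expansion coefficients of f equal sqrt(w_i) * f(x_i).
f = lambda s: 1 + s + s**3          # any polynomial of degree <= n
coeffs = (R * wt) @ f(t)            # <f, R_i> by quadrature
assert np.allclose(coeffs, np.sqrt(w) * f(x))
print("orthonormality and coefficient formula verified")
```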
Problem 2. Assume that the vector-valued function $\vec{H}(x, y) = (f(x, y), g(x, y))^T$ is continuously differentiable, and the infinity norm of its Jacobian matrix is less than 1 at a unique fixed point $(x_\infty, y_\infty)$. Prove that iteration (2) is convergent, to the same fixed point as iteration (1), for initial conditions sufficiently close to the fixed point.
Solution: Here iteration (1) is the simultaneous update $x_{k+1} = f(x_k, y_k)$, $y_{k+1} = g(x_k, y_k)$, and iteration (2) is the sequential (Gauss-Seidel-type) update $x_{k+1} = f(x_k, y_k)$, $y_{k+1} = g(x_{k+1}, y_k)$. First, we check that the new iteration has the same fixed point as the original one. For (1), $x_\infty = f(x_\infty, y_\infty)$ and $y_\infty = g(x_\infty, y_\infty)$. Thus, for the new iteration,
$$f(x_\infty, y_\infty) = x_\infty, \qquad g(f(x_\infty, y_\infty), y_\infty) = g(x_\infty, y_\infty) = y_\infty.$$
Next, the Jacobian of the new iteration $(x, y) \mapsto (f(x, y),\, g(f(x, y), y))$ is
$$J_2 = \begin{bmatrix} \partial_1 f(x, y) & \partial_2 f(x, y) \\ \partial_1 g(f(x, y), y)\,\partial_1 f(x, y) & \partial_1 g(f(x, y), y)\,\partial_2 f(x, y) + \partial_2 g(f(x, y), y) \end{bmatrix}. \tag{3}$$
The infinity norm of the Jacobian is the maximum absolute row sum. The first row of $J_2$ has exactly the same absolute row sum as the first row of the Jacobian of the original iteration, so at the fixed point it is less than 1. For the second row, evaluated at the fixed point (where $f(x_\infty, y_\infty) = x_\infty$), we have
$$|\partial_1 g\,\partial_1 f| + |\partial_1 g\,\partial_2 f + \partial_2 g| \le |\partial_1 g|\,\big(|\partial_1 f| + |\partial_2 f|\big) + |\partial_2 g| \le |\partial_1 g| + |\partial_2 g| < 1,$$
using that both row sums of the original Jacobian are less than 1 at the fixed point. Thus, the Jacobian of the new iteration has infinity norm less than 1 at the fixed point. Since the new iteration is continuously differentiable, there is a neighborhood of the fixed point such that iterations initialized in this neighborhood converge.
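To illustrate (not part of the proof), a small numerical experiment shows both iterations converging to the same fixed point. The pair $f, g$ below is a hypothetical choice made only so that the Jacobian condition holds:

```python
import numpy as np

# Hypothetical contractive maps (illustrative choice, not from the problem):
f = lambda x, y: 0.3 * np.cos(y)          # |d1 f| + |d2 f| <= 0.3 < 1
g = lambda x, y: 0.4 * np.sin(x) + 0.2    # |d1 g| + |d2 g| <= 0.4 < 1

def iterate(seq, x, y, steps=100):
    """seq=False: simultaneous update (1); seq=True: sequential update (2)."""
    for _ in range(steps):
        x_new = f(x, y)
        y = g(x_new if seq else x, y)
        x = x_new
    return x, y

print("iteration (1):", iterate(False, 1.0, 1.0))
print("iteration (2):", iterate(True, 1.0, 1.0))   # same fixed point
```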
Problem 3.
(b) Prove that $\|A\|_\infty$ is the maximum row sum (of absolute values) of $A$.
Solution:
(a) Suppose, for contradiction, that a strictly diagonally dominant matrix $A$ (i.e., $|a_{kk}| > \sum_{j \ne k} |a_{kj}|$ for every $k$) is singular, and let $v = [v_1\ v_2\ \cdots\ v_m]^T$ be an eigenvector corresponding to the eigenvalue 0, with $\max_j |v_j| = |v_k|$. Consider the $k$-th row of the vector equation $Av = 0$:
$$a_{kk}\, v_k = -\sum_{j \ne k} a_{kj}\, v_j.$$
Therefore,
$$|a_{kk}| \le \sum_{j \ne k} |a_{kj}| \left| \frac{v_j}{v_k} \right| \le \sum_{j \ne k} |a_{kj}|,$$
which contradicts the strict diagonal dominance of $A$.
(b) Suppose the $k$-th row has the maximum absolute row sum. Take $v$ with entries $v_j = \operatorname{sign}(a_{kj})$ (and $v_j = 1$ where $a_{kj} = 0$); then
$$\|Av\|_\infty \ge \sum_j |a_{kj}|,$$
since the right-hand side is the $k$-th entry of $Av$. Noting that $\|v\|_\infty = 1$, we have $\|A\|_\infty \ge \sum_j |a_{kj}|$. For the other inequality, we have that for any vector $u$,
$$\|Au\|_\infty = \max_i \Big| \sum_j a_{ij} u_j \Big| \le \Big( \max_i \sum_j |a_{ij}| \Big) \|u\|_\infty,$$
so $\|A\|_\infty \le \max_i \sum_j |a_{ij}|$, and the two bounds together give the result.
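A quick numerical sanity check (illustrative, not part of the proof): NumPy's matrix infinity norm should agree with the maximum absolute row sum, and the maximizing vector from the proof should attain it.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

max_row_sum = np.abs(A).sum(axis=1).max()          # max_i sum_j |a_ij|
assert np.isclose(np.linalg.norm(A, np.inf), max_row_sum)

# The sign vector from the proof attains the norm:
k = np.abs(A).sum(axis=1).argmax()
v = np.sign(A[k])
v[v == 0] = 1.0
assert np.isclose(np.linalg.norm(A @ v, np.inf), max_row_sum)
```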
(d) $A = A^T$ means the singular values of $A$ are the absolute values of its (real) eigenvalues. By the Gershgorin theorem,
$$|\lambda - a_{ii}| \le \sum_{j \ne i} |a_{ij}|.$$
Therefore,
$$a_{ii} - \sum_{j \ne i} |a_{ij}| \le \lambda \le a_{ii} + \sum_{j \ne i} |a_{ij}|.$$
By the given information,
$$2 \le \lambda \le 12.$$
Now since $A = A^T$ and $A$ is invertible, the largest and smallest singular values of $A^{-1}$ are the reciprocals of the smallest and largest singular values of $A$, respectively, so
$$\frac{1}{12} \le \sigma(A^{-1}) \le \frac{1}{2}.$$
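The problem's matrix is not reproduced here, so as an illustration take a hypothetical symmetric matrix whose Gershgorin intervals lie in $[2, 12]$ and check the singular values of its inverse numerically:

```python
import numpy as np

# Hypothetical symmetric matrix; Gershgorin discs have centers 7, 6, 5
# and radii 3, 2, 1, so every disc lies inside [2, 12].
A = np.array([[7.0, 2.0, 1.0],
              [2.0, 6.0, 0.0],
              [1.0, 0.0, 5.0]])

s = np.linalg.svd(A, compute_uv=False)                    # singular values of A
s_inv = np.linalg.svd(np.linalg.inv(A), compute_uv=False) # singular values of A^{-1}
assert np.allclose(np.sort(s_inv), np.sort(1.0 / s))
assert s_inv.min() >= 1 / 12 and s_inv.max() <= 1 / 2
print("singular values of A^{-1}:", s_inv)
```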
Problem 4. Consider the ODE
$$\frac{d}{dt} u = f(u), \qquad u(0) = u_0.$$
Assume that $f(u)$ has the property that the forward Euler (FE) method,
$$U^{n+1} = U^n + k f(U^n),$$
satisfies
$$\|U^{n+1}\| \le \|U^n\|$$
for some norm $\|\cdot\|$ and for all time steps $k$, $0 < k \le k_{FE}$. Now consider the 2-stage Runge-Kutta method:
$$U^{(1)} = U^n + k \beta_{10} f(U^n),$$
$$U^{n+1} = \alpha_{20} U^n + \alpha_{21} U^{(1)} + k \beta_{20} f(U^n) + k \beta_{21} f(U^{(1)}),$$
where
$$\beta_{10} \ge 0, \quad \beta_{20} \ge 0, \quad \beta_{21} \ge 0, \quad \alpha_{20} \ge 0, \quad \alpha_{21} \ge 0, \quad \alpha_{20} + \alpha_{21} = 1.$$
(a) Prove that the above 2-stage Runge-Kutta method also satisfies the inequality
$$\|U^{n+1}\| \le \|U^n\|$$
under an appropriate time-step restriction $0 < k \le k^*$, where you need to explicitly determine $k^*$ in terms of $k_{FE}$.
(b) Determine the coefficients so that the method is second-order accurate and the time-step restriction $k \le k^*$ is as weak as possible.
Solution:
(a) The first stage is simply the forward Euler method with a time step $k\beta_{10}$. Therefore, as long as $k\beta_{10} \le k_{FE}$, we have that
$$\|U^{(1)}\| \le \|U^n\|.$$
The second stage can be written as a convex combination of two forward Euler steps:
$$U^{n+1} = \alpha_{20} \left\{ U^n + k \frac{\beta_{20}}{\alpha_{20}} f(U^n) \right\} + \alpha_{21} \left\{ U^{(1)} + k \frac{\beta_{21}}{\alpha_{21}} f(U^{(1)}) \right\}.$$
Requiring that
$$k \cdot \max \left\{ 1,\ \beta_{10},\ \frac{\beta_{20}}{\alpha_{20}},\ \frac{\beta_{21}}{\alpha_{21}} \right\} \le k_{FE},$$
we get that
$$\|U^{n+1}\| \le \alpha_{20} \|U^n\| + \alpha_{21} \|U^{(1)}\| \le (\alpha_{20} + \alpha_{21}) \|U^n\| = \|U^n\|.$$
Therefore, the 2-stage RK method satisfies $\|U^{n+1}\| \le \|U^n\|$ under the following time-step constraint:
$$k \le k^* = k_{FE} \cdot \min \left\{ 1,\ \frac{1}{\beta_{10}},\ \frac{\alpha_{20}}{\beta_{20}},\ \frac{\alpha_{21}}{\beta_{21}} \right\}.$$
(b) To examine the local truncation error, apply the method to $f(u) = \lambda u$ (and let $z = k\lambda$):
$$U^{(1)} = (1 + z \beta_{10})\, U^n,$$
$$U^{n+1} = \alpha_{20} \left(1 + z \frac{\beta_{20}}{\alpha_{20}}\right) U^n + \alpha_{21} \left(1 + z \frac{\beta_{21}}{\alpha_{21}}\right) U^{(1)}.$$
Combining these two results (and using the fact that $\alpha_{20} + \alpha_{21} = 1$):
$$U^{n+1} = \left( 1 + z \left( \beta_{20} + \beta_{21} + \alpha_{21} \beta_{10} \right) + \frac{z^2}{2} \left( 2 \beta_{10} \beta_{21} \right) \right) U^n.$$
Therefore, for second-order accuracy we require that this match $e^z = 1 + z + z^2/2 + O(z^3)$ through $O(z^2)$:
$$\beta_{20} + \beta_{21} + \alpha_{21} \beta_{10} = 1, \qquad \beta_{10} \beta_{21} = \tfrac{1}{2}.$$
Maximizing $k^*$ subject to these conditions gives $\beta_{10} = 1$, $\beta_{20} = 0$, $\beta_{21} = \tfrac{1}{2}$, $\alpha_{20} = \alpha_{21} = \tfrac{1}{2}$, for which $k^* = k_{FE}$.
Putting these all together yields the following 2-stage RK method that is norm-preserving under the optimal time-step restriction $k \le k^* = k_{FE}$:
$$U^{(1)} = U^n + k f(U^n),$$
$$U^{n+1} = \frac{1}{2} \left\{ U^n + U^{(1)} + k f(U^{(1)}) \right\}.$$
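This is the classical strong-stability-preserving (SSP) Heun scheme. As an illustration under an assumed setup, apply it to the first-order upwind semi-discretization of $u_t + a u_x = 0$, for which forward Euler is max-norm non-increasing whenever $k \le k_{FE} = h/a$; a minimal sketch:

```python
import numpy as np

# Upwind semi-discretization of u_t + a u_x = 0 on a periodic grid (assumed setup):
# forward Euler with this f is max-norm non-increasing for k <= k_FE = h/a.
a, m = 1.0, 100
h = 1.0 / m
f = lambda u: -(a / h) * (u - np.roll(u, 1))

k = h / a                       # k = k* = k_FE, the optimal SSP time step
u = np.exp(-100 * (np.linspace(0, 1, m, endpoint=False) - 0.5) ** 2)

for _ in range(200):
    u1 = u + k * f(u)                        # first stage: one FE step
    u_new = 0.5 * (u + u1 + k * f(u1))       # second stage
    assert np.linalg.norm(u_new, np.inf) <= np.linalg.norm(u, np.inf) + 1e-14
    u = u_new
print("max-norm never increased")
```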
Problem 5. Construct a third-order accurate Lax-Wendroff-type method for $u_t + a u_x = 0$ ($a > 0$ is a constant) in the following way:
(a) • Expand $u(t + k, x)$ in a Taylor series and keep the first four terms. Replace all time derivatives by spatial derivatives using the equation.
• Construct a cubic polynomial passing through the points $U^n_{j-2}$, $U^n_{j-1}$, $U^n_j$, $U^n_{j+1}$.
• Approximate the spatial derivatives in the Taylor series by the exact derivatives of the above constructed cubic polynomial.
Solution:
$$u(t + k, x) = u(t, x) + k\, u_t(t, x) + \frac{k^2}{2} u_{tt}(t, x) + \frac{k^3}{6} u_{ttt}(t, x) + O(k^4)$$
$$= u(t, x) - ak\, u_x(t, x) + \frac{(ak)^2}{2} u_{xx}(t, x) - \frac{(ak)^3}{6} u_{xxx}(t, x) + O(k^4).$$
Use Lagrange interpolation to construct a cubic polynomial through the nodes $x_0 - 2h$, $x_0 - h$, $x_0$, $x_0 + h$. Since the interpolation is in the spatial variable, we omit the time variable for this part. The polynomial $p_3(x)$ satisfying $u = p_3 + O(h^4)$ is
$$\begin{aligned} p_3(x) = {}& -\frac{(x - x_0 + h)(x - x_0)(x - x_0 - h)}{6h^3}\, u(x_0 - 2h) \\ &+ \frac{(x - x_0 + 2h)(x - x_0)(x - x_0 - h)}{2h^3}\, u(x_0 - h) \\ &- \frac{(x - x_0 + 2h)(x - x_0 + h)(x - x_0 - h)}{2h^3}\, u(x_0) \\ &+ \frac{(x - x_0 + 2h)(x - x_0 + h)(x - x_0)}{6h^3}\, u(x_0 + h). \end{aligned}$$
Also, the interpolation error is
$$u(x) - p_3(x) = \frac{u^{(4)}(\xi)}{4!}\,(x - x_0 + 2h)(x - x_0 + h)(x - x_0)(x - x_0 - h) = O(h^4).$$
Approximating $u_x$, $u_{xx}$, $u_{xxx}$ in the Taylor expansion by $p_3'(x_0)$, $p_3''(x_0)$, $p_3'''(x_0)$ with $x_0 = x_j$ yields the method
$$\begin{aligned} U_j^{n+1} = U_j^n &- \frac{ak}{6h} \left( U^n_{j-2} - 6 U^n_{j-1} + 3 U^n_j + 2 U^n_{j+1} \right) \\ &+ \frac{(ak)^2}{2h^2} \left( U^n_{j-1} - 2 U^n_j + U^n_{j+1} \right) \\ &- \frac{(ak)^3}{6h^3} \left( -U^n_{j-2} + 3 U^n_{j-1} - 3 U^n_j + U^n_{j+1} \right). \end{aligned}$$
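As a sanity check (an illustration, not part of the solution), the difference quotients above can be recovered by differentiating the Lagrange interpolant symbolically with SymPy:

```python
import sympy as sp

x, x0 = sp.symbols('x x0')
h = sp.symbols('h', positive=True)
nodes = [x0 - 2*h, x0 - h, x0, x0 + h]
u = sp.symbols('u_m2 u_m1 u_0 u_p1')   # u at x0-2h, x0-h, x0, x0+h

# Lagrange interpolant p3 through the four nodes.
p3 = sum(uv * sp.Mul(*[(x - xn) / (xv - xn) for xn in nodes if xn != xv])
         for uv, xv in zip(u, nodes))

for order in (1, 2, 3):
    d = sp.factor(sp.diff(p3, x, order).subs(x, x0))
    print(f"p3^({order})(x0) =", d)
# Expected (up to ordering), matching the scheme's difference quotients:
#   (u_m2 - 6*u_m1 + 3*u_0 + 2*u_p1) / (6*h)
#   (u_m1 - 2*u_0 + u_p1) / h**2
#   (-u_m2 + 3*u_m1 - 3*u_0 + u_p1) / h**3
```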
To verify the accuracy, note that
$$\frac{u(t + k, x) - u(t, x)}{k} = -a u_x + \frac{a^2 k}{2} u_{xx} - \frac{a^3 k^2}{6} u_{xxx} + O(k^3)$$
$$\begin{aligned} = {}& -\frac{a}{6h} \big( u(t, x - 2h) - 6u(t, x - h) + 3u(t, x) + 2u(t, x + h) \big) + O(h^3) \\ &+ \frac{a^2 k}{2h^2} \big( u(t, x - h) - 2u(t, x) + u(t, x + h) \big) + O(kh^2) \\ &- \frac{a^3 k^2}{6h^3} \big( -u(t, x - 2h) + 3u(t, x - h) - 3u(t, x) + u(t, x + h) \big) + O(k^2 h) \\ &+ O(k^3). \end{aligned}$$
Hence the local truncation error is $O(k^3 + k^2 h + k h^2 + h^3)$, so the method is third-order accurate.
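As an illustration under assumed parameters (smooth periodic initial data, fixed Courant number $\nu = ak/h$), the third-order convergence can be checked numerically:

```python
import numpy as np

def step(u, nu):
    """One step of the third-order Lax-Wendroff-type scheme; nu = a*k/h."""
    um2, um1, up1 = np.roll(u, 2), np.roll(u, 1), np.roll(u, -1)
    return (u
            - nu / 6 * (um2 - 6 * um1 + 3 * u + 2 * up1)
            + nu**2 / 2 * (um1 - 2 * u + up1)
            - nu**3 / 6 * (-um2 + 3 * um1 - 3 * u + up1))

a, T, nu = 1.0, 1.0, 0.5          # assumed wave speed, final time, Courant number
for m in (50, 100, 200):
    h = 1.0 / m
    k = nu * h / a
    x = np.linspace(0.0, 1.0, m, endpoint=False)
    u = np.sin(2 * np.pi * x)
    for _ in range(round(T / k)):
        u = step(u, nu)
    err = np.abs(u - np.sin(2 * np.pi * (x - a * T))).max()
    print(f"m={m:4d}  error={err:.3e}")   # error should drop ~8x per doubling of m
```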
Problem 6. Suppose you have $60K to invest and there are 3 investment options available. You must invest in multiples of $10K. If $d_i$ dollars are invested in investment $i$, then you receive a net value (as the profit) of $r_i(d_i)$ dollars. For $d_i > 0$ the net values $r_i(d_i)$ are as given, and $r_1(0) = r_2(0) = r_3(0) = 0$. All amounts are measured in units of $10K. The objective is to maximize the net value of your investments. This can be formulated as the optimization problem:
$$\max\; r_1(d_1) + r_2(d_2) + r_3(d_3)$$
$$\text{such that } d_1 + d_2 + d_3 \le 6, \qquad d_i \ge 0,\; i = 1, 2, 3, \text{ integers}.$$
Solution: We solve this problem by dynamic programming. The key elements are the stages $i = 1, 2, 3$ (one per investment), the state $x_i$ (the money still available at stage $i$), and the decision $d_i$ (the amount placed in investment $i$). Define $f_i(x_i)$ = maximum net value obtainable from stages $i, \dots, 3$ given that we have $x_i$ dollars available. Then the recursion equation is
$$f_i(x_i) = \max_{0 \le d_i \le x_i} \left\{ r_i(d_i) + f_{i+1}(x_i - d_i) \right\}.$$
The boundary condition is $f_4(x_4) = 0$. The answer is $f_1(6)$. This is a backward recursion and the computation goes as follows:
Stage 3: Note that
$$f_3(x_3) = \max_{0 \le d_3 \le x_3} \left\{ r_3(d_3) \right\}.$$
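The stage computations depend on the return table $r_i(d_i)$, which is not reproduced above; a minimal sketch of the backward recursion with a hypothetical return table (the `r` values below are placeholders, not the problem's data) would be:

```python
# Backward DP recursion f_i(x) = max_{0<=d<=x} { r_i(d) + f_{i+1}(x-d) }.
# The return table r[i][d] is hypothetical placeholder data, in units of $10K.
r = [
    [0, 2, 4, 5, 6, 6, 6],   # r_1(0..6)  -- placeholder values
    [0, 1, 3, 6, 7, 7, 7],   # r_2(0..6)  -- placeholder values
    [0, 3, 4, 4, 5, 5, 5],   # r_3(0..6)  -- placeholder values
]

BUDGET = 6  # $60K in units of $10K

f = [[0] * (BUDGET + 1) for _ in range(4)]      # f[3][x] = 0: boundary condition
best = [[0] * (BUDGET + 1) for _ in range(3)]   # optimal d_i for each state x

for i in (2, 1, 0):                             # stages 3, 2, 1 (0-indexed)
    for x in range(BUDGET + 1):
        values = [r[i][d] + f[i + 1][x - d] for d in range(x + 1)]
        best[i][x] = max(range(x + 1), key=values.__getitem__)
        f[i][x] = max(values)

print("maximum net value f_1(6):", f[0][BUDGET])
# Recover the optimal allocation by walking forward through the stored decisions.
x, plan = BUDGET, []
for i in range(3):
    plan.append(best[i][x])
    x -= best[i][x]
print("optimal (d1, d2, d3):", plan)
```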