Exercise 4 2025
Nonlinear Equations Pt. II.
Instructions
• Write your answers to the questions, including your plots, in a single PDF document (you can
produce this using Word or TeX or similar).
• Ensure you also include all your code in appropriately labelled .py files.
• The exercise itself starts after the note below and has two questions.
This exercise tries to lead you through solving non-linear equations in a "realistic" way. The first
part below is not for submission; it is a guide to a tool (automatic differentiation) that is used in
the questions of this exercise.
If you are unclear on what automatic differentiation is, or why it is important, check the lecture
slides.
WARNING: autodiff creates functions for you. You call these generated functions with an
input vector as the argument and you get a derivative, gradient, or Jacobian returned. The input
must be a NumPy array of float values; it will not work with a Python list or an array of integers.
1. To use autograd, you will have to install it into your environment (if you haven't already). Recall
what you did in the first week when setting up your environment using Anaconda. If your environment
already has autograd then you can skip this step; otherwise close Spyder, then open your Anaconda
shell or prompt. Install using the following commands
conda activate tkp4135-env1
conda install autograd
then hit “y” when asked. Assuming installation completed successfully, you can start Spyder again.
2. First, a simple motivating example. Let's say we have a function

f_{\text{test1}}(x) := 11x_0^4 + 7x_1^3 + \sin(x_2 + 5x_3^2).    (1)
Examine the code in ex_nle_autodiff_simple.py. For f_test1, we can calculate the gradient symbolically as

\nabla f_{\text{test1}} = \begin{bmatrix} 44x_0^3 \\ 21x_1^2 \\ \cos(x_2 + 5x_3^2) \\ 10x_3\cos(x_2 + 5x_3^2) \end{bmatrix}.    (2)
In the code, we use autograd to generate the function gradad_test1() using the statement
gradad_test1 = grad(f_test1). Executing gradad_test1(x) for some x will return the gradient
of f_test1 evaluated at x, using automatic differentiation.
For a comparison, the example code evaluates both the symbolic derivative and the autograd
version at a random x (obviously, in a real task you wouldn’t need to calculate the symbolic
gradient — we’re only doing that for illustrative purposes here!).
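For reference, here is a minimal sketch of this comparison (a hedged reconstruction of what ex_nle_autodiff_simple.py does; the actual file may differ in detail):

import autograd.numpy as np   # autograd's thin wrapper around NumPy
from autograd import grad

def f_test1(x):
    # f(x) = 11*x0^4 + 7*x1^3 + sin(x2 + 5*x3^2), Eq. (1)
    return 11*x[0]**4 + 7*x[1]**3 + np.sin(x[2] + 5*x[3]**2)

def grad_test1_symbolic(x):
    # hand-derived gradient, Eq. (2)
    return np.array([44*x[0]**3,
                     21*x[1]**2,
                     np.cos(x[2] + 5*x[3]**2),
                     10*x[3]*np.cos(x[2] + 5*x[3]**2)])

gradad_test1 = grad(f_test1)   # AD-generated gradient function

x = np.random.rand(4)          # random test point: a NumPy array of floats
print(grad_test1_symbolic(x))  # the two printed vectors should agree
print(gradad_test1(x))         # to within machine precision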
3. (Extra) In the same .py file, define some f_test2(x) of your own (whatever you wish). Follow
the same process as in the previous example: manually derive the gradient symbolically, create the
automatic differentiation function using grad(), and then compare the two for some suitable x (an
illustration is given below). Repeat this with new functions of your own until you are confident you
understand what is happening.
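For instance, a hedged example of such a function (purely illustrative; any function you like will do) might be:

import autograd.numpy as np
from autograd import grad

def f_test2(x):
    # f(x) = exp(x0)*x1 + x2^2, an arbitrary test function
    return np.exp(x[0])*x[1] + x[2]**2

def grad_test2_symbolic(x):
    # hand-derived gradient: [exp(x0)*x1, exp(x0), 2*x2]
    return np.array([np.exp(x[0])*x[1], np.exp(x[0]), 2*x[2]])

gradad_test2 = grad(f_test2)

x = np.array([0.5, -1.2, 2.0])
print(grad_test2_symbolic(x))
print(gradad_test2(x))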
4. Now, let's do an example of calculating a Jacobian. The Jacobian is the matrix of partial derivatives
of a vector-valued function. For some function f,

J_f = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_1} & \frac{\partial f_m}{\partial x_2} & \cdots & \frac{\partial f_m}{\partial x_n} \end{bmatrix}    (3)
(a) Let

f_{\text{test4}}(x) := \begin{bmatrix} 3x_0 \\ x_0^2 \sin((x_1 - x_2)^2) \\ (x_0 - x_1)(x_1 - x_2) \end{bmatrix}.    (4)

By hand, we can calculate the (symbolic) Jacobian for f_test4. Evaluated at the test vector x =
[2, 3, 5]^T, the expected values are,
[[ 3.           0.           0.        ]
 [-3.02720998  10.45829793 -10.45829793]
 [-2.           1.           1.        ]]
(b) Now, we will evaluate the Jacobian of f_test4 at x = [2, 3, 5]^T by using the autograd package.
The code is,
import autograd.numpy as np
from autograd import jacobian
import matplotlib as mpl
import matplotlib.pyplot as plt

def f_test4(x):
    return np.array([3*x[0],
                     x[0]**2 * np.sin((x[1] - x[2])**2),
                     (x[0] - x[1])*(x[1] - x[2])],
                    dtype=np.float64)

jac_test4 = jacobian(f_test4)
jac_test4(np.array([2., 3., 5.]))
Then all you need to do is call jac_test4() with the appropriate vector (remember: a NumPy
array of floats!).
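For example, evaluating at the test point from part (a):

x = np.array([2., 3., 5.])
print(jac_test4(x))   # should reproduce the hand-derived matrix shown in part (a)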
5. So, by using this tool, you can calculate accurate derivatives of (nearly) any mathematical
function. Let us know if you have any confusion or doubts.
1. Automatic Differentiation in Conjunction with a SciPy
Root Solver
In this question, we’ll be using automatic differentiation in concert with a root solver from the SciPy
package.
In particular, we use the same example as in the lectures:

f(x) = \arctan(x)
Follow the instructions in the items below. Answer any questions in your answers document. Also
submit your completed Python code in appropriate .py files.
1. Create a new Python file with a suitable filename. Start with the necessary import statements, e.g.

import autograd.numpy as np
from autograd import jacobian
import scipy.optimize as sp
2. Code a suitable function to represent f and then use autograd to create a function that will
calculate the Jacobian.
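A minimal sketch of what this might look like (using the imports from item 1; the names f and jac_f are only suggestions, and f accepts a length-1 NumPy array so it works with both autograd and the solver):

def f(x):
    # f(x) = arctan(x)
    return np.arctan(x)

jac_f = jacobian(f)   # AD-generated Jacobian function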
3. In the last exercise, Newton's method could not converge for some initial values of x. Now we are
going to use the Levenberg-Marquardt method with a starting point of x = 3. To use this method to
solve a system of nonlinear equations in Python, call:

sol_sp = sp.root(fun, x0, jac=jac_f, method='lm')

where fun is your function, jac_f is the Jacobian function, and x0 is a suitable starting point.
4. The solution is returned as a structure. You can type print(sol_sp) to print all of the solution
information. Note that you can access the solution directly as sol_sp.x. Does the solver find the
correct solution? Is there any error message?
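Putting the pieces together, the call and inspection might look like this (assuming the names f and jac_f from the sketch above):

x0 = np.array([3.0])                              # starting point suggested in item 3
sol_sp = sp.root(f, x0, jac=jac_f, method='lm')
print(sol_sp)       # full solver report (success flag, message, number of evaluations, ...)
print(sol_sp.x)     # the solution vector itself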
Table 1: Antoine coefficients, from pg. 190, Skogestad (2008)
Name A B C
Pentane 3.97786 1064.840 -41.136
Hexane 4.00139 1170.875 -48.833
Cyclohexane 3.93002 1182.774 -52.532
y_i = K_i x_i

If we assume Raoult's law applies, then K_i depends on temperature and total pressure only:

y_i = p_i^{sat}(T) \, x_i / P    (6)

where p_i^{sat} can be given by, e.g., the Antoine equation:

\log_{10}(p_i^{sat} [\text{bar}]) = A_i - \frac{B_i}{T[\text{K}] + C_i}

With a given feed (F, z_i) we have 3N_c + 2 equations (Eq. 5, 6 and 7) and 3N_c + 4 unknowns (x_i, y_i, K_i,
L, V, T, P). For this example we will specify temperature and pressure.
Consider a flash tank at 390 K and 5 bar, with a feed of 100 mol/s with mole fractions of 50% pentane,
30% hexane and 20% cyclohexane, under the assumptions of ideal gas behaviour and that Raoult's law applies.
We want to solve this using a numerical method.
2. Create variables p and T, and NumPy vectors (1-D arrays) A, B, C containing the constants from
Table 1, and another, z, containing the feed proportions. Each of these arrays should have 3 elements.
Double-check you have written these correctly! (A possible layout is sketched below.)
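One possible way to set these up (values taken from Table 1 and the problem statement; the feed rate F is included here because it is needed later):

import autograd.numpy as np
import scipy.optimize as sp
from autograd import jacobian

p = 5.0      # total pressure, bar
T = 390.0    # temperature, K
F = 100.0    # feed rate, mol/s

A = np.array([3.97786, 4.00139, 3.93002])      # pentane, hexane, cyclohexane
B = np.array([1064.840, 1170.875, 1182.774])
C = np.array([-41.136, -48.833, -52.532])
z = np.array([0.5, 0.3, 0.2])                  # feed mole fractions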
3. We're going to calculate the p_i^{sat} values using a vectorised calculation. The reason for using a
vectorised calculation is that, if you want to later, you can use the same code to solve for any number of
feed components. If you wrote your code explicitly for each variable then you'd have to change it every time
the number of components changed, which is a waste of effort. The following statement will achieve this:

psat = np.float64(10)**(A - B/(T + C))
4. Write a line of code that calculates the vector K (one possibility is shown below).
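Since y_i = K_i x_i combined with Eq. (6) implies K_i = p_i^{sat}/P, one such line could be:

K = psat / p    # K-values from Raoult's law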
5. Now, define a function that calculates the "errors" (residuals) of the system of equations made up of
Eq. 5, 6 and 7. You can use this to get started:

def f_res(inp):
    # unpack the input, e.g. x = inp[0:3] ...

    # define the errors
    res_MB = ....
    res_EQ = ....
    res_sum = ...
    return np.concatenate((res_MB, res_EQ, res_sum))

where inp is a vector of length 8, and res_MB, res_EQ and res_sum are vectors containing the
corresponding residuals (errors). A hedged illustration of one possible formulation follows below.
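As an illustration only (your own formulation and ordering of the unknowns may differ), one common choice is to use the component mass balances, the equilibrium relations, and the two summation equations, with the unknowns ordered as x (3 elements), y (3 elements), V, L; that ordering is chosen here so the result lines up with the expected solution quoted in item 7:

def f_res(inp):
    # unpack the input: liquid fractions, vapour fractions, vapour and liquid flows
    x = inp[0:3]
    y = inp[3:6]
    V = inp[6]
    L = inp[7]

    # residuals of the flash equations (one possible formulation)
    res_MB = F*z - L*x - V*y                  # component mass balances
    res_EQ = y - K*x                          # equilibrium, y_i = K_i x_i
    res_sum = np.array([np.sum(x) - 1.0,      # liquid mole fractions sum to one
                        np.sum(y) - 1.0])     # vapour mole fractions sum to one
    return np.concatenate((res_MB, res_EQ, res_sum))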
6. Use autograd to generate a Jacobian function.
7. Solve the system using sp.root(fun, x0, jac=jac, method='lm') (Levenberg-Marquardt). Use
engineering judgement to give a reasonable initial guess (e.g. the mole fractions can't be more
than 1); one possible guess is sketched below. If you have coded everything right then you should find
the solution, sol.x:

array([ 0.33930055,  0.36511813,  0.29558132,  0.57169928,  0.27094624,
        0.15735448, 69.14816098, 30.85183902])
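For example (one reasonable, hedged choice of starting point, assuming f_res from item 5 and an AD Jacobian named jac_res from item 6):

x0 = np.concatenate((z, z, np.array([50.0, 50.0])))   # feed-like compositions, 50/50 flow split
sol = sp.root(f_res, x0, jac=jac_res, method='lm')
print(sol.x)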
8. (Extra) In the above we gave the automatic-differentiation (AD) Jacobian function to the solver. If we do not
specify the derivative it will automatically use finite differences, i.e. sp.root(fun, x0, method='lm').
Compare and comment on the number of function and derivative evaluations with and without automatic
differentiation.
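One way to make the comparison, assuming the two solves have been stored in sol_ad and sol_fd, is to inspect the nfev field of each result:

print(sol_ad.nfev)   # function evaluations with the AD Jacobian supplied
print(sol_fd.nfev)   # function evaluations with finite-difference derivatives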
9. Change the method to 'hybr' (Newton's method + trust region) with AD. What output do you get?
(For some reasonable initial guesses this method will fail.)
As we are modelling an isothermal flash at known temperature and pressure, we can write the flash
equations in a special form known as the Rachford-Rice equation. If we substitute the equilibrium
equation into the mass balance and re-arrange, we get:

x_i = \frac{F z_i}{L + V K_i} = \frac{z_i}{1 + \frac{V}{F}(K_i - 1)}

We know that \sum_i (y_i - x_i) = 0, and we can use this condition with the previous equation to get
the Rachford-Rice equation:

f(\psi) = \sum_i \frac{z_i (K_i - 1)}{1 + \psi (K_i - 1)} = 0

where \psi = V/F. Note that f is monotonic in \psi and that 0 \le \psi \le 1.
10. Now we can solve the system using only one equation, which is much better "behaved". Set up the
Rachford-Rice equation, generate a Jacobian, and solve using the lm and hybr methods (a possible
sketch follows below).
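A hedged sketch of how this could be set up (assuming z and K from the earlier items; the scalar unknown psi is wrapped in a length-1 array so the same autograd/SciPy machinery applies):

def f_RR(psi):
    # Rachford-Rice residual, psi = V/F, returned as a length-1 array
    return np.array([np.sum(z*(K - 1.0)/(1.0 + psi[0]*(K - 1.0)))])

jac_RR = jacobian(f_RR)

psi0 = np.array([0.5])   # any start in [0, 1] is sensible
sol_lm = sp.root(f_RR, psi0, jac=jac_RR, method='lm')
sol_hybr = sp.root(f_RR, psi0, jac=jac_RR, method='hybr')
print(sol_lm.x, sol_hybr.x)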