
Multidisciplinary System Design Optimization (MSDO)

Gradient Calculation and Sensitivity Analysis
Lecture 9

Olivier de Weck
Karen Willcox

© Massachusetts Institute of Technology - Prof. de Weck and Prof. Willcox
Engineering Systems Division and Dept. of Aeronautics and Astronautics
Today's Topics

• Gradient calculation methods
  – Analytic and symbolic
  – Finite difference
  – Complex step
  – Adjoint method
  – Automatic differentiation

• Post-processing sensitivity analysis
  – effect of changing design variables
  – effect of changing parameters
  – effect of changing constraints
Definition of the Gradient

"How does the function value J change locally as we change elements of the design vector x?"

Compute the partial derivatives of J with respect to each design variable xi:

  ∇J = [∂J/∂x1, ∂J/∂x2, …, ∂J/∂xn]ᵀ

The gradient vector points normal to the tangent hyperplane of J(x).
Geometry of Gradient vector (2D)

Example function:

  J(x1, x2) = x1 + x2 + 1/(x1 x2)

  ∂J/∂x1 = 1 − 1/(x1² x2)
  ∂J/∂x2 = 1 − 1/(x1 x2²)

so

  ∇J = [1 − 1/(x1² x2), 1 − 1/(x1 x2²)]ᵀ

[Contour plot of J over 0 < x1, x2 < 2, with contour levels 3.1, 3.15, 3.2, 3.25, 3.5, 4, 5: the gradient is normal to the contours.]
Geometry of Gradient vector (3D)

Example: J = x1² + x2² + x3²

  ∇J = [2x1, 2x2, 2x3]ᵀ

At xo = [1, 1, 1]ᵀ (on the level surface J = 3), ∇J(xo) = [2, 2, 2]ᵀ, and the tangent plane is

  2x1 + 2x2 + 2x3 − 6 = 0

The gradient vector points toward increasing values of J.
Other Gradient-Related Quantities

• Jacobian: matrix of derivatives of multiple functions J = [J1, …, Jz]ᵀ (z×1) w.r.t. a vector of variables x = [x1, …, xn]ᵀ:

  ∇J (n×z), with entry (i, j) = ∂Jj/∂xi:

      [ ∂J1/∂x1  ∂J2/∂x1  …  ∂Jz/∂x1 ]
      [ ∂J1/∂x2  ∂J2/∂x2  …  ∂Jz/∂x2 ]
      [    ⋮        ⋮      ⋱     ⋮    ]
      [ ∂J1/∂xn  ∂J2/∂xn  …  ∂Jz/∂xn ]

• Hessian: matrix of second-order derivatives, H = ∇²J (n×n):

      [ ∂²J/∂x1²    ∂²J/∂x1∂x2  …  ∂²J/∂x1∂xn ]
      [ ∂²J/∂x2∂x1  ∂²J/∂x2²    …  ∂²J/∂x2∂xn ]
      [     ⋮            ⋮        ⋱      ⋮     ]
      [ ∂²J/∂xn∂x1  ∂²J/∂xn∂x2  …  ∂²J/∂xn²   ]
Why Calculate Gradients?

• Required by gradient-based optimization algorithms
  – Normally need the gradient of the objective function and of each constraint w.r.t. the design variables at each iteration
  – Newton methods require Hessians as well
• Isoperformance / goal programming
• Robust design
• Post-processing sensitivity analysis
  – determine if result is optimal
  – sensitivity to parameters, constraint values
Analytical Sensitivities

If the objective function is known in closed form, we can often compute the gradient vector(s) in closed form (analytically).

Example: J(x1, x2) = x1 + x2 + 1/(x1 x2), evaluated at x1 = x2 = 1, where J(1,1) = 3.

Analytical gradient:

  ∂J/∂x1 = 1 − 1/(x1² x2)
  ∂J/∂x2 = 1 − 1/(x1 x2²)

so ∇J(1,1) = [0, 0]ᵀ: the point (1,1) is a minimum.

For complex systems analytical gradients are rarely available.
Symbolic Differentiation

• Use symbolic mathematics programs
• e.g. MATLAB®, Maple®, Mathematica®

» syms x1 x2            % construct symbolic objects
» J=x1+x2+1/(x1*x2);
» dJdx1=diff(J,x1)      % diff is the symbolic differentiation operator
dJdx1 = 1-1/x1^2/x2
» dJdx2=diff(J,x2)
dJdx2 = 1-1/x1/x2^2
Finite Differences (I)

Consider a function of a single variable, f(x).

• First-order finite difference approximation of the gradient (forward difference approximation to the derivative):

  f′(xo) ≈ [f(xo + Δx) − f(xo)] / Δx,  truncation error O(Δx)

• Second-order finite difference approximation of the gradient (central difference approximation to the derivative):

  f′(xo) ≈ [f(xo + Δx) − f(xo − Δx)] / (2Δx),  truncation error O(Δx²)

[Sketch: f(x) sampled at xo − Δx, xo, and xo + Δx.]
Finite Differences (II)

Approximations are derived from the Taylor series expansion:

  f(xo + Δx) = f(xo) + Δx f′(xo) + (Δx²/2) f″(xo) + O(Δx³)

Neglect the second- and higher-order terms and solve for the derivative:

  f′(xo) = [f(xo + Δx) − f(xo)] / Δx + O(Δx)

Forward difference, truncation error O(Δx): the leading error term is (Δx/2) f″(ξ), ξ ∈ [xo, xo + Δx].
Finite Differences (III)

Take the Taylor expansion backwards, at xo − Δx:

  f(xo + Δx) = f(xo) + Δx f′(xo) + (Δx²/2) f″(xo) + O(Δx³)   (1)
  f(xo − Δx) = f(xo) − Δx f′(xo) + (Δx²/2) f″(xo) + O(Δx³)   (2)

Compute (1) − (2) and solve again for the derivative:

  f′(xo) = [f(xo + Δx) − f(xo − Δx)] / (2Δx) + O(Δx²)

Central difference, truncation error O(Δx²): the leading error term is (Δx²/6) f‴(ξ), ξ ∈ [xo − Δx, xo + Δx].
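As a quick sanity check of the two formulas, here is a minimal Python sketch (ours, not from the lecture) on f(x) = x + 1/x, whose exact derivative is 1 − 1/x²:

```python
# Forward vs. central differences on f(x) = x + 1/x; exact f'(x) = 1 - 1/x**2.

def f(x):
    return x + 1.0 / x

def dfdx_exact(x):
    return 1.0 - 1.0 / x**2

def forward_diff(f, x0, dx):
    # O(dx) truncation error
    return (f(x0 + dx) - f(x0)) / dx

def central_diff(f, x0, dx):
    # O(dx**2) truncation error
    return (f(x0 + dx) - f(x0 - dx)) / (2.0 * dx)

x0 = 2.0
exact = dfdx_exact(x0)  # 0.75
for dx in (1e-2, 1e-3):
    err_fwd = abs(forward_diff(f, x0, dx) - exact)
    err_cen = abs(central_diff(f, x0, dx) - exact)
    print(dx, err_fwd, err_cen)
```

Shrinking Δx by a factor of 10 reduces the forward-difference error roughly 10-fold and the central-difference error roughly 100-fold, as the truncation orders predict.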

Finite Differences (IV)

For the objective w.r.t. a design variable:

  ∂J/∂x1 ≈ [J(x1¹) − J(x1⁰)] / (x1¹ − x1⁰) = [J(x1⁰ + Δx1) − J(x1⁰)] / Δx1 = ΔJ/Δx1

The finite difference approximation ΔJ/Δx1 is the slope of the chord of J(x) through x1⁰ and x1¹; the true, analytical sensitivity ∂J/∂x1 is the slope of the tangent at x1⁰. The two agree as the perturbation Δx1 = x1¹ − x1⁰ shrinks.
Finite Differences (V)

• Second-order finite difference approximation of the second derivative:

  f″(xo) ≈ [f(xo + Δx) − 2 f(xo) + f(xo − Δx)] / Δx²

[Sketch: f(x) sampled at xo − Δx, xo, and xo + Δx.]
Errors of Finite Differencing

Caution: finite differencing always has errors, and the result is very dependent on the perturbation size.

Example: J(x1, x2) = x1 + x2 + 1/(x1 x2) at x1 = x2 = 1, where J(1,1) = 3 and ∇J(1,1) = [0, 0]ᵀ.

[Plot: gradient error in ∂J/∂x1 vs. perturbation step size Δx1, both on log scales, for Δx1 from 10⁻⁹ to 10⁻⁶. For larger steps the truncation error dominates (error ~ Δx); for smaller steps the rounding error dominates (error ~ 1/Δx). The total error is minimized at an intermediate step size.]

The choice of Δx is critical.
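The trade-off can be seen directly in a short sketch (ours, not from the lecture): the forward-difference error on ∂J/∂x1 at (1, 1), where the exact value is 0, first shrinks with the step size, then grows again as floating-point rounding takes over.

```python
# Truncation vs. rounding trade-off for a forward difference on
# J(x1, x2) = x1 + x2 + 1/(x1*x2); exact dJ/dx1 at (1, 1) is 0.

def J(x1, x2=1.0):
    return x1 + x2 + 1.0 / (x1 * x2)

def forward_diff_error(dx):
    approx = (J(1.0 + dx) - J(1.0)) / dx
    return abs(approx - 0.0)

errors = {dx: forward_diff_error(dx) for dx in (1e-2, 1e-5, 1e-8, 1e-12)}
# Moderate steps: error ~ dx (truncation). For very small steps the
# difference J(1+dx) - J(1) loses nearly all significant digits, so the
# quotient is dominated by rounding noise rather than the true slope.
```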
Perturbation Size Δx Choice

• Error analysis (Gill et al., 1981), where εA bounds the absolute error in the computed values of f (they scatter about the theoretical function within a band ~εA):
  Δx ~ εA^(1/2)  - forward difference
  Δx ~ εA^(1/3)  - central difference

• Machine precision: scale the step with the iterate, e.g.
  Δxk = xk · 10^(−q/2) at the k-th iteration,
  where q = number of digits of machine precision for real numbers

• Trial and error - a typical value is ~0.1-1%
Computational Expense of FD

• Cost of a single evaluation of objective function Ji:  F(Ji)
• Cost of a one-sided finite difference approximation of the gradient of Ji, for a design vector of length n:  n · F(Ji)
• Cost of a finite difference approximation of the Jacobian with z objective functions:  z · n · F(Ji)

Example: 6 objectives, 30 design variables, and 1 sec per function evaluation gives about 3 min of CPU time for a single Jacobian estimate - expensive!
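The example arithmetic can be packaged as a back-of-the-envelope helper (the function name is ours, purely illustrative):

```python
# CPU cost of a one-sided finite-difference Jacobian per the slide's
# scaling: z objectives, n design variables, t_eval seconds per
# function evaluation.

def fd_jacobian_cost_seconds(n, z, t_eval):
    # one perturbed evaluation per design variable, per objective
    return z * n * t_eval

cost = fd_jacobian_cost_seconds(n=30, z=6, t_eval=1.0)  # 180 s = 3 min
```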

Complex Step Derivative

• Similar to finite differences, but uses an imaginary step:

  f′(x0) ≈ Im[ f(x0 + iΔx) ] / Δx

• Second-order accurate
• Can use very small step sizes, e.g. Δx ≈ 10⁻²⁰
  – Doesn't have rounding error, since it doesn't perform subtraction
• Limited application areas
  – Code must be able to handle complex values

J.R.R.A. Martins, I.M. Kroo and J.J. Alonso, "An automated method for sensitivity analysis using complex variables," AIAA Paper 2000-0689, Jan 2000.
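Python's built-in complex arithmetic makes the formula above a one-liner; here is a sketch (ours) on the lecture's example function:

```python
# Complex-step derivative: f'(x0) ~ Im[f(x0 + i*h)] / h.
# No subtraction occurs, so there is no cancellation error even for tiny h.

def J(x1, x2):
    return x1 + x2 + 1.0 / (x1 * x2)

def complex_step(f, x0, h=1e-20):
    return (f(x0 + 1j * h)).imag / h

# dJ/dx1 at (2, 1): exact value is 1 - 1/(x1**2 * x2) = 0.75
d = complex_step(lambda x1: J(x1, 1.0), 2.0)
```

Unlike the finite differences above, the result is accurate to machine precision even with Δx = 10⁻²⁰.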

Automatic Differentiation

• Mathematical formulae are built from a finite set of basic operations and functions, e.g. addition, sin(x), exp(x), etc.
• Using the chain rule, differentiate the analysis code: add statements that generate derivatives of the basic functions
• Tracks numerical values of derivatives; does not track them symbolically as discussed before
• Outputs a modified program = original + derivative capability
• e.g., ADIFOR (FORTRAN), TAPENADE (C, FORTRAN), TOMLAB (MATLAB), many more…
• Resources at http://www.autodiff.org/
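The core idea (forward mode) can be sketched in a few lines with dual numbers; this is our own illustration, not how the tools named above are implemented:

```python
# Forward-mode AD with dual numbers: each value carries (val, dot), and
# every basic operation also propagates the chain rule numerically.

import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__
    def __truediv__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val / o.val,
                    (self.dot * o.val - self.val * o.dot) / o.val**2)
    def __rtruediv__(self, o):
        return Dual(o) / self

def sin(x):
    # each basic function gets a derivative rule: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# dJ/dx1 of J = x1 + x2 + 1/(x1*x2) at (2, 1): seed x1 with dot = 1
x1, x2 = Dual(2.0, 1.0), Dual(1.0, 0.0)
J = x1 + x2 + 1.0 / (x1 * x2)
# J.val = 3.5 and J.dot = 1 - 1/(x1**2 * x2) = 0.75
```

Running the unmodified formula on Dual inputs yields the derivative alongside the value, which is exactly the "original + derivative capability" the slide describes.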
Adjoint Methods

Consider the following problem:

  Minimize J(x, u)
  s.t. R(x, u) = 0

where x are the design variables and u are the state variables. The constraints represent the state equation. E.g. wing design: x are shape variables, u are flow variables, and R(x, u) = 0 represents the Navier-Stokes equations.

We need to compute the gradients of J w.r.t. x:

  dJ/dx = ∂J/∂x + (∂J/∂u)(du/dx)

Typically the dimension of u is very high (thousands/millions).
Adjoint Methods

  dJ/dx = ∂J/∂x + (∂J/∂u)(du/dx)

• To compute du/dx, differentiate the state equation:

  dR/dx = ∂R/∂x + (∂R/∂u)(du/dx) = 0

  (∂R/∂u)(du/dx) = −∂R/∂x

  du/dx = −(∂R/∂u)⁻¹(∂R/∂x)
Adjoint Methods

• We have

  dJ/dx = ∂J/∂x + (∂J/∂u)(du/dx) = ∂J/∂x − (∂J/∂u)(∂R/∂u)⁻¹(∂R/∂x)

• Now define

  λᵀ = −(∂J/∂u)(∂R/∂u)⁻¹

• Then to determine the gradient:

  First solve (∂R/∂u)ᵀ λ = −(∂J/∂u)ᵀ   (adjoint equation)

  Then compute dJ/dx = ∂J/∂x + λᵀ (∂R/∂x)
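A toy illustration of this recipe (ours, not from the lecture), with a scalar state so every "solve" is a division: state equation R(x, u) = u − x² = 0 and objective J = u², i.e. J = x⁴ along the state equation, so dJ/dx = 4x³.

```python
# Adjoint recipe on a scalar toy problem: R(x, u) = u - x**2 = 0, J = u**2.

def solve_state(x):
    # forward problem: R(x, u) = 0  =>  u = x**2
    return x**2

def adjoint_gradient(x):
    u = solve_state(x)
    dJ_du = 2.0 * u       # dJ/du (partial)
    dR_du = 1.0           # dR/du
    dR_dx = -2.0 * x      # dR/dx
    dJ_dx_partial = 0.0   # dJ/dx (partial; J has no explicit x-dependence)
    # adjoint equation: (dR/du)^T lam = -(dJ/du)^T
    lam = -dJ_du / dR_du
    # total derivative: dJ/dx = dJ/dx(partial) + lam^T dR/dx
    return dJ_dx_partial + lam * dR_dx

g = adjoint_gradient(1.5)   # exact: 4 * 1.5**3 = 13.5
```

Note the cost structure: one state solve and one adjoint solve give the whole gradient, independent of the number of design variables.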
Adjoint Methods

• Solving the adjoint equation

  (∂R/∂u)ᵀ λ = −(∂J/∂u)ᵀ

costs about the same as solving the forward problem (one function evaluation)

• Adjoints are widely used in aerodynamic shape optimization, optimal flow control, geophysics applications, etc.

• Some automatic differentiation tools have a 'reverse mode' for computing adjoints

Post-Processing Sensitivity Analysis

• A sensitivity analysis is an important component of post-processing
• Key to understanding which design variables, constraints, and parameters are important drivers for the optimum solution
• How sensitive is the "optimal" solution J* to changes or perturbations of the design variables x*?
• How sensitive is the "optimal" solution x* to changes in the constraints g(x), h(x) and fixed parameters p?

Sensitivity Analysis: Aircraft

Questions for aircraft design: how does my solution change if I
• change the cruise altitude?
• change the cruise speed?
• change the range?
• change material properties?
• relax the constraint on payload?
• ...

Sensitivity Analysis: Spacecraft

Questions for spacecraft design: how does my solution change if I
• change the orbital altitude?
• change the transmission frequency?
• change the specific impulse of the propellant?
• change the launch vehicle?
• change the desired mission lifetime?
• ...

Sensitivity Analysis

"How does the optimal solution change as we change the problem parameters?"

• effect on design variables
• effect on objective function
• effect on constraints

We want to answer this question without having to solve the optimization problem again.

Normalization

In order to compare sensitivities from different design variables in terms of their relative sensitivity, it may be necessary to normalize:

  ∂J/∂xi evaluated at xo:  the "raw" (unnormalized) sensitivity, i.e. the partial derivative evaluated at the point xi,o

  (xi,o / J(xo)) · ∂J/∂xi evaluated at xo:  the normalized sensitivity, which captures relative sensitivity, ~ % change in objective per % change in design variable

Normalization is important for comparing the effect between design variables.
Example: Dairy Farm Problem

"Dairy Farm" sample problem. Design variables at xo:
  L - length = 100 [m]
  N - number of cows = 10
  R - radius = 50 [m]

Model (profit P is the objective):
  A = 2LR + πR²     (field area)
  F = 2L + 2πR      (fence length)
  C = f·F + n·N     (cost)
  M = 100·√(A/N)    (milk per cow)
  I = N·M·m         (income)
  P = I − C         (profit)

Parameters: f = 100 $/m, n = 2000 $/cow, m = 2 $/liter

With respect to which design variable is the objective most sensitive?
Dairy Farm Sensitivity

• Compute the objective at xo:  J(xo) = P(xo) = 13092
• Then compute the raw sensitivities:
  ∂P/∂L = 36.6
  ∂P/∂N = 2225.4
  ∂P/∂R = 588.4
• Normalize by (xi,o / J(xo)):
  (100/13092) · 36.6   = 0.28   (L)
  (10/13092)  · 2225.4 = 1.70   (N)
  (50/13092)  · 588.4  = 2.25   (R)
• Show graphically with a tornado chart: R (2.25) > N (1.70) > L (0.28), so the objective is most sensitive to R.
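These numbers can be reproduced with one-sided finite differences; a sketch (ours, with our own step size) using the model equations from the previous slide:

```python
# Dairy-farm raw and normalized sensitivities by forward differences.

from math import pi, sqrt

f, n, m = 100.0, 2000.0, 2.0          # $/m fence, $/cow, $/liter

def profit(L, N, R):
    A = 2 * L * R + pi * R**2         # field area
    F = 2 * L + 2 * pi * R            # fence length
    C = f * F + n * N                 # cost
    M = 100.0 * sqrt(A / N)           # milk per cow
    I = N * M * m                     # income
    return I - C

x0 = (100.0, 10.0, 50.0)              # L, N, R
P0 = profit(*x0)                      # ~ 13092

def raw_sens(i, dx=1e-6):
    x = list(x0)
    x[i] += dx
    return (profit(*x) - P0) / dx

norm = [x0[i] / P0 * raw_sens(i) for i in range(3)]
# norm ~ [0.28 (L), 1.70 (N), 2.25 (R)]: the objective is most sensitive to R
```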
Realistic Example: Spacecraft

NASA Nexus spacecraft concept. The design chain couples a spacecraft CAD model and a finite element model (the "x"-domain) to a performance simulation (the "J"-domain).

[Figure: Nexus finite element model (14.97 m) and a simulated focal-plane jitter plot, J2 = centroid jitter on focal plane [RSS LOS], Centroid X/Y in μm over T = 5 sec, with a 1-pixel box marked. Requirement: J2,req = 5 μm. Image by MIT OpenCourseWare.]

What are the design variables that are "drivers" of system performance?
Graphical Representation

Graphical representation of the Jacobian evaluated at design xo, normalized for comparison: bar charts of the normalized sensitivities (xo/J1,o)·∂J1/∂x and (xo/J2,o)·∂J2/∂x for J1 (RMMS WFE) and J2 (RSS LOS), computed both analytically and by finite differences, for disturbance variables (Ru, Us, Ud, fc, Qc, Tst, Srg, Sst, Tgs), structural variables (m_SM, K_yPM, K_rISO, m_bus, K_zpet, t_sp, I_ss, I_propt, zeta), optics variables (lambda, Ro, QE, Mgs), and control variables (fca, Kc, Kcf).

J1: RMMS WFE most sensitive to:
  Ru - upper wheel speed limit [RPM]
  Sst - star tracker noise [asec]
  K_rISO - isolator joint stiffness [Nm/rad]
  K_zpet - deploy petal stiffness [N/m]

J2: RSS LOS most sensitive to:
  Ud - dynamic wheel imbalance [gcm²]
  K_rISO - isolator joint stiffness [Nm/rad]
  zeta - proportional damping ratio [-]
  Mgs - guide star magnitude [mag]
  Kcf - FSM controller gain [-]
Parameters

Parameters p are the fixed assumptions. How sensitive is the optimal solution x* with respect to the fixed parameters?

Example: "Dairy Farm" sample problem (maximize profit).

Optimal solution:
  x* = [R = 106.1 m, L = 0 m, N = 17 cows]ᵀ

Fixed parameters:
  f = 100 $/m - cost of fence
  n = 2000 $/cow - cost of a single cow
  m = 2 $/liter - market price of milk

How does x* change as the parameters change?
Sensitivity Analysis

KKT conditions:

  ∇J(x*) + ∑_{j∈M} λⱼ ∇ĝⱼ(x*) = 0
  ĝⱼ(x*) = 0,  j ∈ M   (M = set of active constraints)
  λⱼ ≥ 0,  j ∈ M

For a small change in a parameter p, we require that the KKT conditions remain valid:

  d(KKT conditions)/dp = 0

Rewriting the first equation componentwise:

  ∂J/∂xi(x*) + ∑_{j∈M} λⱼ ∂ĝⱼ/∂xi(x*) = 0,  i = 1, …, n
Sensitivity Analysis

Recall the chain rule: if Y = Y(p, x(p)), then

  dY/dp = ∂Y/∂p + ∑_{k=1}^{n} (∂Y/∂xk)(∂xk/∂p)

Applying it to the first equation of the KKT conditions:

  d/dp [ ∂J(x,p)/∂xi + ∑_{j∈M} λⱼ(p) ∂ĝⱼ(x,p)/∂xi ]
    = ∂²J/∂xi∂p + ∑_{j∈M} λⱼ ∂²ĝⱼ/∂xi∂p
      + ∑_{k=1}^{n} ( ∂²J/∂xi∂xk + ∑_{j∈M} λⱼ ∂²ĝⱼ/∂xi∂xk ) ∂xk/∂p
      + ∑_{j∈M} (∂λⱼ/∂p)(∂ĝⱼ/∂xi)
    = 0

i.e.

  ∑_{k=1}^{n} Aik ∂xk/∂p + ∑_{j∈M} Bij ∂λⱼ/∂p + ci = 0
Sensitivity Analysis

Perform the same procedure on the active-constraint equations ĝⱼ(x*, p) = 0:

  ∂ĝⱼ/∂p + ∑_{k=1}^{n} (∂ĝⱼ/∂xk)(∂xk/∂p) = 0

i.e.

  ∑_{k=1}^{n} Bkj ∂xk/∂p + dj = 0
Sensitivity Analysis

In matrix form we can write:

  [ A   B ] [ ∂x/∂p ]   [ c ]
  [ Bᵀ  0 ] [ ∂λ/∂p ] + [ d ] = 0

where A is n×n, B is n×|M|, ∂x/∂p = [∂x1/∂p, …, ∂xn/∂p]ᵀ, ∂λ/∂p = [∂λⱼ/∂p for j ∈ M], and

  Aik = ∂²J/∂xi∂xk + ∑_{j∈M} λⱼ ∂²ĝⱼ/∂xi∂xk
  Bij = ∂ĝⱼ/∂xi
  ci  = ∂²J/∂xi∂p + ∑_{j∈M} λⱼ ∂²ĝⱼ/∂xi∂p
  dj  = ∂ĝⱼ/∂p
Sensitivity Analysis

We solve the system to find ∂x/∂p and ∂λ/∂p; then the sensitivity of the objective function with respect to p can be found:

  dJ/dp = ∂J/∂p + ∇Jᵀ (∂x/∂p)

  ΔJ ≈ (dJ/dp) Δp   (first-order approximation)

  Δx ≈ (∂x/∂p) Δp

To assess the effect of changing a different parameter, we only need to calculate a new RHS in the matrix system.
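A toy check of the matrix system (our own problem): minimize J = x1² + x2² subject to ĝ = x1 + x2 − p = 0. Here A = 2I (ĝ is linear), B = [1, 1]ᵀ, c = 0, d = ∂ĝ/∂p = −1, and analytically x* = (p/2, p/2), λ = −p, J* = p²/2, so dJ/dp = p.

```python
# Parameter sensitivity via the KKT system [A B; B^T 0][dx/dp; dlam/dp] = -[c; d]
# for: minimize x1^2 + x2^2  s.t.  x1 + x2 - p = 0.

def solve3(M, b):
    # tiny Gaussian elimination with partial pivoting for a 3x3 system
    M = [row[:] + [bi] for row, bi in zip(M, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            fac = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

p = 3.0
K = [[2.0, 0.0, 1.0],      # A = 2I (second derivatives of J; g-hat is linear)
     [0.0, 2.0, 1.0],      # B = [1, 1]^T
     [1.0, 1.0, 0.0]]
rhs = [0.0, 0.0, 1.0]      # -c = 0, -d = 1
dx1, dx2, dlam = solve3(K, rhs)          # 0.5, 0.5, -1.0
gradJ = [2 * (p / 2), 2 * (p / 2)]       # grad J at x* = (p/2, p/2)
dJdp = 0.0 + gradJ[0] * dx1 + gradJ[1] * dx2   # = p, matching dJ*/dp of p**2/2
```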
Sensitivity Analysis - Constraints

• We also need to assess when an active constraint will become inactive, and vice versa
• An active constraint will become inactive when its Lagrange multiplier goes to zero:

  λⱼ → λⱼ + Δλⱼ,  Δλⱼ = (∂λⱼ/∂p) Δp

Find the Δp that makes λⱼ + Δλⱼ zero:

  Δp = −λⱼ / (∂λⱼ/∂p),  j ∈ M

This is the amount by which we can change p before the jth constraint becomes inactive (to a first-order approximation).
Sensitivity Analysis - Constraints

An inactive constraint will become active when gⱼ(x) goes to zero:

  gⱼ(x) ≈ gⱼ(x*) + ∇gⱼ(x*)ᵀ (∂x/∂p) Δp = 0

Find the Δp that makes gⱼ zero:

  Δp = −gⱼ(x*) / [ ∇gⱼ(x*)ᵀ (∂x/∂p) ]   for all j not active at x*

• This is the amount by which we can change p before the jth constraint becomes active (to a first-order approximation)
• If we want to change p by a larger amount, then the problem must be solved again, including the new constraint
• Only valid close to the optimum
Lagrange Multiplier Interpretation

• Consider the problem: minimize J(x) s.t. h(x) = 0, with optimal solution x*

• What happens if we change constraint k by a small amount ε?

  hk(x*) = ε,  hj(x*) = 0 for j ≠ k

• Differentiating w.r.t. ε:

  ∇hkᵀ (dx*/dε) = 1,  ∇hjᵀ (dx*/dε) = 0 for j ≠ k
Lagrange Multiplier Interpretation

• How does the objective function change?

  dJ/dε = ∇Jᵀ (dx*/dε)

• Using the KKT conditions (∇J = −∑ⱼ λⱼ ∇hⱼ):

  dJ/dε = −∑ⱼ λⱼ ∇hⱼᵀ (dx*/dε) = −λk

• The Lagrange multiplier is the negative of the sensitivity of the cost function to the constraint value. It is also called the shadow price.
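A numeric check of the shadow-price interpretation on a toy problem (ours): minimize J = x1² + x2² s.t. h = x1 + x2 − 1 − ε = 0. By symmetry x* = ((1+ε)/2, (1+ε)/2), J* = (1+ε)²/2, and the stationarity condition 2x + λ = 0 gives λ = −(1+ε).

```python
# Shadow price: dJ*/d(eps) should equal -lambda.

def J_opt(eps):
    x = (1.0 + eps) / 2.0          # optimal x1 = x2 by symmetry
    return 2.0 * x**2              # = (1 + eps)**2 / 2

def multiplier(eps):
    x = (1.0 + eps) / 2.0
    return -2.0 * x                # from stationarity: 2*x + lam * 1 = 0

eps = 0.0
dJ_deps = (J_opt(eps + 1e-6) - J_opt(eps - 1e-6)) / 2e-6   # ~ 1.0
# dJ/deps = -multiplier(0): tightening the constraint by d(eps) changes
# the optimal cost by -lambda * d(eps)
```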
Lecture Summary
• Gradient calculation approaches
– Analytical and Symbolic
– Finite difference
– Automatic Differentiation
– Adjoint methods
• Sensitivity analysis
– Yields important information about the design space,
both as the optimization is proceeding and once the
“optimal” solution has been reached.

Reading
Papalambros – Section 8.2 Computing Derivatives

MIT OpenCourseWare
http://ocw.mit.edu

ESD.77 / 16.888 Multidisciplinary System Design Optimization


Spring 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
