Main
April 2021
Contents
1 Introduction
 1.1 Basics
 1.2 Definitions
 1.3 Examples
  1.3.1 Himmelblau function
  1.3.2 Rosenbrock's function (Banana Function)
  1.3.3 Optimization Problem with Constraints
 1.4 Model Calibration / Parameter Identification
 1.5 Portfolio Optimization after Markowitz
2 Linear Optimization
 2.1 Linear Optimization Problem
 2.2 Classification of optimization problems
 2.3 Transport Problems
 2.4 Network Problems
4 Structural Optimization
 4.1 Dimensioning Problems
  4.1.1 Creating a Box
  4.1.2 Minimization of material required to wrap goods (e.g. sugar)
 4.2 Shape Optimization
  4.2.1 (A very) short introduction to FEM
  4.2.2 Geometric description of shapes by splines
 4.3 Application: Optimization of the Shape of a Dam
 4.4 Topology Optimization
  4.4.1 Examples of optimized structures
  4.4.2 Evolutionary Structural Optimization (ESO)
  4.4.3 ESO for Stiffness Optimization
  4.4.4 Bi-directional Evolutionary Structural Optimization (BESO)
  4.4.5 Solid Isotropic Material with Penalization (SIMP)
Chapter 1
Introduction
This is a script which accompanies the lecture of Prof. Tom Lahmer on Optimization. It is an
introduction to the theory and numerical solution approaches for
• Linear Problems (graphical solution and Simplex Algorithm),
• Nonlinear Problems (descent and Newton Methods, gradient-free methods, global search),
(Figure: contour plot of an objective in the (x, y)-plane with marked points G and N.)
1.1 Basics
1.2 Definitions
Optimization Optimization is the systematic search to improve a certain system or situation.
Generally, a measure (the objective) is defined with which the gain can be quantified.
We focus on minimization problems as the maximization of any function f equals the minimization
of −f .
Objective
f is called cost function or objective (Zielfunktion).
Optimization Variable
x ∈ Rn is called design or optimization variable.
Optimal Solution
x∗ is the optimal solution of the optimization problem (sometimes also x̂).
Global Minimum
x∗ is called global minimum, if
f (x∗ ) ≤ f (x) ∀x ∈ X.
Local Minimum
x∗ is called a local minimum, if there is a neighborhood U of x∗ such that
f(x∗) ≤ f(x) ∀x ∈ U ∩ X.
Figure 1.2: Local and Global Minimum of a Function: x∗1 : local minimum, x∗2 : global minimum
The general constrained optimization problem reads: minimize f(x) over x ∈ X such that
gi(x) ≤ 0, i = 1, ..., m,
hj(x) = 0, j = 1, ..., p.
Short form:
min f(x) under the constraints g(x) ≤ 0, h(x) = 0.
1.3 Examples
1.3.1 Himmelblau function
The following function (the Himmelblau function) should be minimized over all values (x, y)^T ∈ R2:
fH(x, y) = (x² + y − 11)² + (x + y² − 7)².
A contour line (level set) of a function f for the value c is the set
Nf(c) := {x ∈ Rn | f(x) = c}.
Thus, the function f always has the same value along a contour line.
Figure 1.3: Surface plot of the function fH . Levels of same color indicate the same value of the
objective.
Task: Draw the surface plot of the Himmelblau function in either OCTAVE or Matlab. How
many minimal / maximal points are there?
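A minimal OCTAVE/Matlab sketch for this task (the plotting range is chosen arbitrarily):

% Surface plot of the Himmelblau function
[x, y] = meshgrid(-5:0.05:5, -5:0.05:5);
fH = (x.^2 + y - 11).^2 + (x + y.^2 - 7).^2;
figure;
surf(x, y, fH, 'EdgeColor', 'none');   % surface without mesh lines
xlabel('x'); ylabel('y'); zlabel('f_H(x,y)');
title('Himmelblau function');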
The gradient ∇f (x) is the vector of all partial derivatives of f .
∇f(x) := (∂f(x)/∂x1, ∂f(x)/∂x2, ..., ∂f(x)/∂xn)^T ∈ Rn
The gradient of a function f points in the direction of the steepest increase of f .
Example 1.3.1.
f(x1, x2) = x1² + 4x2³ + 3x1x2
We see n = 2 and
∇f(x) = (∂f(x)/∂x1, ∂f(x)/∂x2)^T = (2x1 + 3x2, 12x2² + 3x1)^T.
1.3.2 Rosenbrock’s function (Banana Function)
We minimize the function
fR(x, y) = 100(y − x²)² + (1 − x)²
with the unique minimal point x∗ = (1, 1)^T and fR(1, 1) = 0.
1.3.3 Optimization Problem with Constraints
First, we consider the set of admissible solutions X, i.e. all points (x, y) that fulfill the constraints.
This is the area within the red triangle in the following figure.
(Figure: the set of admissible solutions and a surface plot of the objective over the (x, y)-plane.)
It can be seen that the objective function f has a global minimum at the point (7, 7). Because
7 + 7 ≥ 10, however, this point is not admissible. The actual solution of the problem therefore lies
at the point (x, y) = (5, 5) on the edge of the set of admissible solutions.
1.4 Model Calibration / Parameter Identification
In order to minimize the misfit between simulated response and measurements, we try to reduce
the sum of squared errors:
min f(x) = Σ_{i=1}^{N} (yi − m(ti, x))², x ∈ X,
with N the number of measurement points, yi the measurements at (time) points ti, m(ti, x) the response
of the model at point ti and x the vector of model parameters.
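As an illustration, a minimal sketch with a hypothetical model m(t, x) = x(1)·exp(−x(2)·t) and synthetic, noisy data (both made up for this example only); the misfit is minimized with the gradient-free fminsearch:

% Least-squares calibration of a hypothetical exponential decay model
m  = @(t, x) x(1)*exp(-x(2)*t);               % model response m(t_i, x)
t  = 0:0.5:5;                                  % measurement points t_i
y  = 2*exp(-0.7*t) + 0.01*randn(size(t));      % synthetic measurements y_i
f  = @(x) sum((y - m(t, x)).^2);               % sum of squared errors
x0 = [1; 1];                                   % initial guess for the parameters
xstar = fminsearch(f, x0)                      % estimated parameters (approx. [2; 0.7])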
1.5 Portfolio Optimization after Markowitz
The task of portfolio optimization is to determine the shares xj of the different investments optimally
in order to achieve a certain goal. Often the aim is to achieve the highest possible expected return:
max_{xj} E(R) = Σ_{j=1}^{n} xj E(Rj), with R = Σ_{j=1}^{n} xj Rj.
A high profit correlates with a high risk, which can be modelled with the variance
V(R) = E(R − E(R))² = E[( Σ_{j=1}^{n} xj (Rj − E(Rj)) )²].
A good compromise between high profit and low risk can be achieved by solving the following
optimization problem:
min_{xj} − Σ_{j=1}^{n} xj E(Rj) + α E[( Σ_{j=1}^{n} xj (Rj − E(Rj)) )²], α > 0.
α is a weighting parameter with which the willingness to assume risks can be integrated:
• high willingness of the investor to take risks: α → 0
• low willingness to take risks: α → ∞.
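A minimal numerical sketch of this compromise problem, with made-up expected returns and covariances; it assumes that a quadratic-programming solver quadprog (Optimization Toolbox; Octave offers qp/quadprog in the optim package) is available:

% Markowitz compromise: min -x'*mu + alpha*x'*C*x  s.t.  sum(x) = 1, x >= 0
mu    = [0.08; 0.12; 0.10];                 % hypothetical expected returns E(R_j)
C     = [0.10 0.02 0.01;
         0.02 0.20 0.03;
         0.01 0.03 0.15];                   % hypothetical covariance of the R_j
alpha = 2;                                  % weighting of the risk term
H   = 2*alpha*C;                            % quadprog minimizes 0.5*x'*H*x + f'*x
f   = -mu;
Aeq = ones(1, 3); beq = 1;                  % shares sum up to one
lb  = zeros(3, 1);                          % no short selling
x   = quadprog(H, f, [], [], Aeq, beq, lb, [])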
Chapter 2
Linear Optimization
A function f : Rn → Rm is called linear if
• f(x + y) = f(x) + f(y) ∀ x, y ∈ Rn,
• f(cx) = c f(x), c ∈ R
holds true.
Example 2.2.1.
(a) f(x) = x ✓ (linear)
(b) f(x) = 3x1 − 5x2, x = (x1, x2) ✓ (linear)
(c) f(x) = d^T x, x, d ∈ Rn ✓ (linear)
(d) f(x) = Ax, A ∈ Rm×n, x ∈ Rn ✓ (linear)
(e) f(x) = 1 ✗ (not linear)
(f) f(x) = x + 1 ✗ (not linear)
(g) f(x) = x² ✗ (not linear)
The cost function of a linear optimization problem is linear,
f(x) = c^T x,
i.e. the problem reads
min c^T x under the constraints Ax ≤ b (or = b, or ≥ b), x ≥ 0,
with x, c ∈ Rn, b ∈ Rm and A ∈ Rm×n being a matrix.
• n - dimension of the design variable,
• m - number of constraints,
• c - vector of costs.
The linear optimization problem is a special case of a constrained optimization problem by writing
it as follows:
f (x) = c> x,
g(x) = (−x) ≤ 0,
h(x) = Ax − b = 0.
Example 2.2.2.
For linear problems, the constraints are straight lines defining the set of admissible
solutions. The linear optimization problem can be solved graphically by simply shifting the line of
constant cost f to the corner point of the set of admissible solutions which gives the smallest / highest
value of the objective.
(Figure: graphical solution of the example; the admissible set is bounded by the constraints, e.g. g1: x1 + x2 = 40.)
In this example the optimum is x1 = 30, x2 = 10 with f(30, 10) = 5500; the investment constraint is
limiting, so it is an "active" constraint. Those constraints not affecting the value of the objective at
the optimal point are called inactive constraints.
The example can be put into a general approach to solve linear optimization problems: We define
x = (x1, x2)^T,
c^T x → max with c = (100, 250)^T,
such that Ax ≤ b with
A = [1 1; 40 120; 6 12], b = (40, 2400, 312)^T.
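A quick numerical check of this linear program, assuming linprog (Optimization Toolbox or Octave's optim package) is available:

% linprog minimizes, so we pass -c to maximize c'*x; lower bound x >= 0
c = [100; 250];
A = [1 1; 40 120; 6 12];
b = [40; 2400; 312];
x = linprog(-c, A, b, [], [], zeros(2,1), []);
fprintf('x1 = %g, x2 = %g, objective = %g\n', x(1), x(2), c'*x);   % 30, 10, 5500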
What can happen in the general case?
1) The constraints form a set which is convex, then one of the corner points is the optimal
solution.
2) The set of admissible solutions is not bounded and f(x∗) takes as value either +∞ or −∞.
3) The constraints are contradictory (e.g. too many constraints)
→ the set of admissible solutions is empty → no solution exists.
Figure 2.1: Top) Graphical representation of a convex (a) and a non-convex (b) set. Middle) Domain with
bounded, convex set of admissible solutions (constraints and objective in the (x1, x2)-plane). Lower) Example
of a non-bounded set.
Algorithms for solving linear optimization problems search solely among the corner points of
the convex set of admissible solutions, each of which is an admissible solution. The optimal
solution is found by moving to other corner points so that the value of the objective decreases.
The algorithm stops if
• no further improvement is possible,
• the algorithm detects that the problem is unbounded,
• the algorithm detects that the problem has no solution (contradictory constraints).
This algorithm, the Simplex Algorithm, was found in 1948 based on works of G. Dantzig, W. Leontief, and T. S. Motzkin.
Further reading: file SimplexAlgo.pdf in Moodle or https://en.wikipedia.org/wiki/Simplex_algorithm.
One implementation of this algorithm can be seen in the following.
To apply this algorithm, e.g. to the farmer problem, the latter was slightly changed into the
so-called standard form, i.e.
min cT x
s.t. Ax = b
x ≥ 0.
function SimplexMethodForFarmerProblem
% Tom Lahmer, 2018
% Simplex Method, 2nd phase.
% The following data are provided by the user:
% Matrix A
A = [1 1 1 0 0; 40 120 0 1 0; 6 12 0 0 1];
% Vector b
b = [40, 2400, 312]';
% Vector c
c = [-100, -250, 0, 0, 0]';
% First basis, i.e. actually a corner point being an admissible solution (guessed)
B = [3, 4, 5];
if length(B) ~= length(b)
    fprintf('The basis needs to have as many entries as constraints!');
    return;
end
AllIndices = 1:1:length(c);
N = setdiff(AllIndices, B);   % set of non-basis entries
NoOptSolution = true;
iter = 0;
while NoOptSolution
    iter = iter + 1;
    fprintf('\nIteration %d ...', iter);
    if det(A(:,B)) ~= 0       % check for invertibility
        GammaBN      = inv(A(:,B))*A(:,N);
        xB           = inv(A(:,B))*b;
        zetaN        = c(B)'*GammaBN - c(N)';
        costfunction = c(B)'*xB;
    else
        fprintf('The chosen basis does not result in an admissible solution');
        return;
    end
    % Choose the entering (pivot) column: largest positive reduced cost.
    [maxZeta, IndexPC] = max(zetaN);
    if maxZeta <= 0
        NoOptSolution = false;   % no improving direction left -> optimal basis
        break;
    end
    pivotColumn = N(IndexPC);
    % Ratio test to determine the leaving variable.
    xBOverGamma = xB ./ GammaBN(:, IndexPC);
    minVal  = max(xBOverGamma);
    IndexPR = 1;
    for i = 1:length(xBOverGamma)
        if minVal >= xBOverGamma(i) && GammaBN(i, IndexPC) > 0
            % find smaller ones with pos. Gamma
            minVal  = xBOverGamma(i);   % store it
            IndexPR = i;                % the last index where this case is true is the pivot index
        end
    end
    pivotRow = B(IndexPR);
    % change the basis
    [B, N] = changeBasis(B, N, c, pivotRow, pivotColumn);
end
x    = zeros(length(c), 1);
x(B) = xB;
fprintf('\nOptimal x = [%s], objective value %g\n', num2str(x'), costfunction);

function [B, N] = changeBasis(B, N, c, pivotRow, pivotColumn)
% Swap the leaving index (pivotRow) out of the basis and the
% entering index (pivotColumn) into the basis.
B(B == pivotRow)    = pivotColumn;
N(N == pivotColumn) = pivotRow;
To arrive at this system, so-called non-negative slack variables have been added to the inequality
constraints. Compare with the definitions of A, B and c in the source code. The usage of the simplex
algorithm is particularly useful when the dimension of the design variable x or the number of
constraints increases, see the following examples.
2.3 Transport Problems
(Figure: transport network; goods stored in the stores a1, ..., am are delivered to the consumers b1, ..., bn.)
This results in the following transport problem, which is a special linear optimization problem:
min Σ_{i=1}^{m} Σ_{j=1}^{n} xi,j ci,j
2.4 Network Problems
(Figure: road network with vertices A, B, C, D and capacities on the edges.)
• V = {A, B, C, D} set of vertices (nodes),
• E = {(AB), (BC), (BD), (AC), (CD)} set of edges corresponding to the streets,
• xi,j describes the transported amount along the edge (i, j) with a given maximal capacity.
Since no additional goods are produced or disappear on the way, the task is to maximize the
quantity of goods leaving the start node A (= the quantity of goods arriving at the end node D).
Remark: The pair G := (V, E), i.e. the combination of the set of all vertices and edges, forms
a graph G. Many optimization problems are solved on graphs, e.g. shortest-path algorithms
which are used in navigation software. Generally, these are optimization algorithms on graphs
which allow to find the shortest distance between two cities or two points in a network. Optimization
on graphs is a lecture on its own. Still, most of the algorithms are variants of linear optimization
problems, where the dimension of the matrix A scales with the number of cities (vertices) and streets
(edges). Some further reading here: https://en.wikipedia.org/wiki/Shortest_path_problem
Chapter 3
Nonlinear Optimization
3.1 Definitions
The construction of algorithms for Nonlinear Optimization Problems (NLOP) generally requires:
• the values of the objective f(x),
• the gradient ∇f(x),
• (sometimes) the Hessian matrix Hf(x), the matrix of second derivatives of f(x). The Hessians
are always symmetric:
Hf(x) = [ ∂²f(x)/∂x1²     ∂²f(x)/∂x1∂x2   ...   ∂²f(x)/∂x1∂xn ;
          ∂²f(x)/∂x2∂x1   ∂²f(x)/∂x2²     ...   ∂²f(x)/∂x2∂xn ;
          ...
          ∂²f(x)/∂xn∂x1   ∂²f(x)/∂xn∂x2   ...   ∂²f(x)/∂xn²   ].
Note: There are algorithms which do not require any or only some of these. For example the
Nelder-Mead Method, genetic algorithms, etc.
A direction d ∈ Rn is called a descent direction at x if (∇f(x))^T d < 0.
Consequence: The negative gradient d = −∇f(x) is therefore always a descent direction.
Geometrical interpretation: The angle between the gradient and a descent direction needs to be
between 90° and 270°.
Figure 3.1: Descent directions
As long as (∇f (x))> d < 0 there are possibilities to further decrease the value of f . This
condition is violated if ∇f (x) = 0.
If Hf is positive (or negative) definite, the function is said to have non-negative (or non-positive)
curvature.
As the matrices Hf are symmetric, we can test the definiteness with the eigenvalue criterion:
A symmetric matrix is positive definite, when all its eigenvalues are larger than zero. It is negative
definite, when all its eigenvalues are smaller than zero. If we have eigenvalues equal to zero, we
speak of semi-definite matrices. Matrices which have positive and negative eigenvalues are called
indefinite.
Example 3.1.1.
f(x, y) = x² + y²
∇f(x, y) = (2x, 2y)^T
Hf(x, y) = [2 0; 0 2]
x∗ = (0, 0)^T, ∇f(x∗) = (0, 0)^T
Eigenvalues: [2, 2] > 0 → positive definite, minimum. [Ger16]
Example 3.1.2.
f(x, y) = −x² − y²
∇f(x, y) = (−2x, −2y)^T
Hf(x, y) = [−2 0; 0 −2]
x∗ = (0, 0)^T, ∇f(x∗) = (0, 0)^T
Eigenvalues: [−2, −2] < 0 → negative definite, maximum. [Ger16]
Example 3.1.3.
f(x, y) = x² − y²
∇f(x, y) = (2x, −2y)^T
Hf(x, y) = [2 0; 0 −2]
x∗ = (0, 0)^T, ∇f(x∗) = (0, 0)^T
Eigenvalues: [−2, 2], one positive and one negative → indefinite, saddle point. [Ger16]
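The eigenvalue test is easily automated; a minimal sketch for the Hessian of Example 3.1.3:

% Classify the stationary point of f(x,y) = x^2 - y^2 via the Hessian eigenvalues
Hf = [2 0; 0 -2];
ev = eig(Hf);
if all(ev > 0),                     disp('positive definite -> minimum');
elseif all(ev < 0),                 disp('negative definite -> maximum');
elseif any(ev > 0) && any(ev < 0),  disp('indefinite -> saddle point');
else                                disp('semi-definite -> no decision possible');
end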
3.2.1.2 Method of steepest descent
3. From ϕ'(0) = ∇f(xk)^T dk < 0 it follows that ϕ(t) < ϕ(0) ∀ 0 < t < t̄, where ϕ(t) := f(xk + t dk).
Armijo rule: choose the step length
tk = max{β^j, j = 0, 1, 2, ...}, β ∈ (0, 1),
such that
ϕ(tk) ≤ ϕ(0) + σ tk ϕ'(0), σ ∈ (0, 1)
(the descent condition itself ensures convergence, the term σ tk ϕ'(0) the efficiency of the method).
Due to ϕ'(0) being negative, the Armijo rule guarantees that the values of the cost function are
decreasing.
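A minimal sketch of such a backtracking (Armijo) line search; f and gradf are assumed to be function handles for the objective and its gradient, and the default parameter values are only illustrative:

function t = armijo(f, gradf, x, d, beta, sigma)
% Backtracking line search satisfying the Armijo condition
% (typical choices: beta = 0.5, sigma = 1e-4)
t     = 1;
phi0  = f(x);
dphi0 = gradf(x)' * d;      % must be negative for a descent direction d
while f(x + t*d) > phi0 + sigma * t * dphi0
    t = beta * t;           % reduce the step until the Armijo condition holds
end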
(Figure: (a) one-dimensional line search starting from xk towards dk [Ger16]; (b) increments that fulfil the Armijo rule [Ger16].)
Example 3.2.1.
We use the gradient method to minimize the function
f(x1, x2) = x1² + 10x2².
The step size is determined with an exact line search:
ϕ(t) := f(xk + t dk), t > 0.
We obtain:
∇f(x1, x2) = (2x1, 20x2)^T,
d = −(2x1, 20x2)^T = (d1, d2)^T,
ϕ(t) = (x1 + t d1)² + 10 (x2 + t d2)²,
ϕ'(t) = 2 (x1 + t d1) d1 + 20 (x2 + t d2) d2.
For an optimal t it needs to hold that ϕ'(t) = 0, which gives
t = − (x1 d1 + 10 x2 d2) / (d1² + 10 d2²).
Then we can update xk+1 = xk + t dk.
With the initial guess (1, 0.1)^T, the method of steepest descent with this exact line search
requires 63 iterations to reach the stopping criterion ||∇f(xk)|| ≤ 10^(−5), and the optimal point is
(0, 0)^T.
Note: The equations used above are specific to this example. Within the computer classes, you
will implement this algorithm for exactly this function. However, try to implement it afterwards
as generally as possible, so that you can replace the function f by any other function.
For the general case, the implementation of the simple Armijo rule is more flexible than the use
of an exactly derived step length (as suggested here).
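A minimal sketch of such an implementation, hard-wired to this example (the gradient and the exact step length are those derived above):

% Steepest descent with exact line search for f(x1,x2) = x1^2 + 10*x2^2
x   = [1; 0.1];                       % initial guess
tol = 1e-5;
it  = 0;
g   = [2*x(1); 20*x(2)];              % gradient of this particular f
while norm(g) > tol
    d = -g;                                                    % steepest descent direction
    t = -(x(1)*d(1) + 10*x(2)*d(2)) / (d(1)^2 + 10*d(2)^2);    % exact step length
    x = x + t*d;
    g = [2*x(1); 20*x(2)];
    it = it + 1;
end
fprintf('Optimal point (%g, %g) after %d iterations\n', x(1), x(2), it);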
In order for the method to converge, consecutive search directions should be conjugate to each other:
(d^{k+1})^T Ak d^k = 0.
The matrix A should be the Hessian Hf to be exact. We will use an approximation and obtain a
CG method (method of conjugate gradients).
The parameter βk may be computed according to one of the following equations (named after their
developers):
• Fletcher–Reeves:
βk^FR = ||∇f(xk)||² / ||∇f(xk−1)||²
• Polak–Ribière:
βk^PR = ∇f(xk)^T (∇f(xk) − ∇f(xk−1)) / ||∇f(xk−1)||²
• Hestenes–Stiefel:
βk^HS = ∇f(xk)^T (∇f(xk) − ∇f(xk−1)) / ( dk−1^T (∇f(xk) − ∇f(xk−1)) )
• Dai–Yuan:
βk^DY = ||∇f(xk)||² / ( dk−1^T (∇f(xk) − ∇f(xk−1)) )
Note: The method simply requires storage of the old descent direction.
Algorithm 4: CG Method with β^FR
Initialize at x0 ∈ Rn, k = 1, ε > 0
dk = −∇f(xk)
while ||∇f(xk)|| > ε do
if k=1 then
dk = −∇f (xk )
else
compute β
dk = −∇f (xk ) + βdk−1
end
store dk , ∇f (xk ) for the next step
choose tk , e.g. approximate linesearch
xk+1 = xk + tk dk ,
k =k+1
end
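A minimal sketch of this nonlinear CG method with the Fletcher–Reeves update; it assumes function handles f and gradf and an Armijo line search like the one sketched above (all names are illustrative):

function x = cg_fr(f, gradf, x, epsilon)
% Nonlinear conjugate gradient method with Fletcher-Reeves beta
k = 1;
g = gradf(x);
d = -g;
while norm(g) > epsilon
    if k > 1
        beta = (g'*g) / (gold'*gold);   % Fletcher-Reeves update
        d = -g + beta*d;                % new conjugate direction
    end
    t = armijo(f, gradf, x, d, 0.5, 1e-4);   % approximate line search
    x = x + t*d;
    gold = g;                           % store old gradient for the next beta
    g = gradf(x);
    k = k + 1;
end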
Finding the optimum of the local quadratic model f̂(x) means to differentiate f̂(x) and to set the result
equal to zero. In one dimension this gives the iteration
xk+1 = xk − f'(xk) / f''(xk)   (Newton's method).
Going to n dimensions:
f'(xk) → ∇f(xk), f''(xk) → Hf(xk),
so now, for x ∈ Rn and f : Rn → R, the update reads xk+1 = xk + dN with the Newton direction dN.
Generally, we do not invert Hf, but rather solve the so-called Newton equation
Hf(xk) dN = −∇f(xk).
Rates of Convergence: Let (xk) be a sequence with limit x∗. We say that
xk converges linearly to x∗ if for a constant 0 < c < 1 it holds that
||xk+1 − x∗|| ≤ c ||xk − x∗||.
Algorithm 5: Local Newton’s Method
Initialize at x0 ∈ Rn, ε > 0
while ||∇f(xk)|| > ε do
dk solve Hf (xk )dk = −∇f (xk ) (Newton Equation→ NE)
xk+1 = xk + dk
k =k+1
end
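A minimal sketch of the local Newton method; gradf and hessf are assumed to be function handles returning the gradient and the Hessian:

function x = newton_local(gradf, hessf, x, epsilon)
% Local Newton's method (Algorithm 5)
while norm(gradf(x)) > epsilon
    d = -(hessf(x) \ gradf(x));   % solve the Newton equation Hf(x) d = -grad f(x)
    x = x + d;                    % full step (no line search: local method)
end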
3.2.2.2 Levenberg-Marquardt Method (LMM):
In order to deal with divergence far from x∗ we may regard the following descent directions:
• dsd := −I ∇f(xk) (steepest descent method), with I the identity matrix,
• dN := −Hf(xk)^(−1) ∇f(xk) (Newton's method),
• dLM := −(Hf(xk) + α I)^(−1) ∇f(xk), α > 0 (Levenberg-Marquardt).
Table 3.1: Comparison: Steepest descent Method and Newton’s Method
If α → 0, then dLM → dN.
So adapting α during the algorithm is beneficial.
Properties:
• Convergence for every initial guess.
• One may implement this with a variable αk with αk → 0 if k → ∞.
• For LMM expect at least superlinear convergence.
and for the inverse H_{k+1}^(−1) of H_{k+1} it holds
H_{k+1}^(−1) = ( I − s y^T / (y^T s) ) H_k^(−1) ( I − y s^T / (y^T s) ) + s s^T / (y^T s).
Here s = xk+1 − xk and y = ∇f (xk+1 ) − ∇f (xk ). Each Hk+1 is symmetric positive definite.
If ||H0 − ∇2 f (x∗ )|| ≤ δ then the BFGS iterates are well-defined and converge q-superlinearly to
x∗ .
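A minimal sketch of this inverse update (with s and y as defined above):

function Hinv_new = bfgs_inverse_update(Hinv, s, y)
% One BFGS update of the inverse Hessian approximation
rho      = 1/(y'*s);
I        = eye(length(s));
Hinv_new = (I - rho*(s*y'))*Hinv*(I - rho*(y*s')) + rho*(s*s');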
(Figure: (a) method of the golden cut: the golden ratio satisfies B/A = A/(A+B), e.g. 13/21 ≈ 21/(13+21) ≈ 0.62; (b) Fibonacci series (1 + 1 = 2; 1 + 2 = 3; 2 + 3 = 5; 3 + 5 = 8; ...) forming a spiral, where the golden cut is the ratio of the areas of the squares.)
These few lines (Algorithm 7 below) are easily implemented. The question is how to make the algorithm
fast, so that not too many iterations are required. To answer this question, we normalize the intervals
at each iteration k:
mk = (bk − ak) / (ck − ak)
Algorithm 7: Golden cut Algorithm
Choose two points d and b such that a < d < b < c
while |f (a) − f (c)| ≥ ε do
if f (d) ≤ f (b)(minimal point is left of b) then
c=b
else
a=d
end
Define new intermediate values
d = a + m(c − a)
b = a + (1 − m)(c − a)
end
and
1 − mk = (ck − bk) / (ck − ak).
The key idea is to choose m in such a way that there is an equal probability for the two cases
(f(d) ≤ f(b) and f(d) > f(b)) in the next iteration. This can be achieved by making use of the
rule of the golden cut. For two segments a < b the golden cut requires
a / b = b / (a + b) =: Φ.
Dividing numerator and denominator of the right-hand side by b gives Φ = 1 / (1 + Φ), i.e.
Φ² + Φ − 1 = 0,
with the positive root
Φ = −1/2 + sqrt( (1/2)² + 1 ) ≈ 0.618,
and therefore
m = 1 − 0.618 = 0.382.
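A minimal sketch of the golden cut search on [a, c]; here the interval length is used as stopping criterion instead of |f(a) − f(c)|, and f is assumed to be a unimodal function handle:

function xmin = goldensection(f, a, c, eps_tol)
% Golden cut (golden section) search for the minimum of f on [a, c]
m = 0.382;                       % 1 - 0.618, from the golden cut rule
d = a + m*(c - a);
b = a + (1 - m)*(c - a);
while abs(c - a) >= eps_tol
    if f(d) <= f(b)              % minimal point is left of b
        c = b;
    else
        a = d;
    end
    % define new intermediate values
    d = a + m*(c - a);
    b = a + (1 - m)*(c - a);
end
xmin = 0.5*(a + c);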
A further one-dimensional search might be added to find the optimal length of the joint search
direction.
3.2.3.3 Method of Nelder-Mead (Simplex Method)
(Figure: (a) Nelder-Mead (Simplex Method) [Ast06]; (b) Nelder-Mead simplex and new points [Kel99].)
The parameters satisfy −1 < µic < 0 < µoc < µr < µe, often (µic, µoc, µr, µe) = (−0.5, 0.5, 1, 2).
Algorithm 7: Algorithm of Nelder-Mead
Define objective function f (x) with dimensions n
Initialize µic = −0.5, µoc = 0.5, µr = 1, µe = 2
for 1 < k < n + 1 do
fk = f (xk )
end
Sort fk and xk
while stopping criterion not met do
Compute x̄, x(µr ) and fr = f (x(µr ))
if f (x1 ) ≤ fr < f (xn ) (Reflection) then
Replace xn+1 with x(µr )
else if fr < f (x1 ) (Expansion) then
Compute fe = f (x(µe ))
if fe < fr then
Replace xn+1 with x(µe )
else
Replace xn+1 with x(µr )
end
else if f (xn ) ≤ fr < f (xn+1 ) (Outside Contraction) then
Compute fc = f (x(µoc ))
if fc ≤ fr then
Replace xn+1 with x(µoc )
else
for 2 ≤ i ≤ n + 1 do
set xi = x1 − (xi − x1 )/2, compute f (xi ) (Shrink).
end
end
else if fr ≥ f (xn+1 ) (Inside Contraction) then
Compute fc = f (x(µic ))
if fc < f (xn+1 ) then
Replace xn+1 with x(µic )
else
for 2 ≤ i ≤ n + 1 do
set xi = x1 − (xi − x1 )/2, compute f (xi ) (Shrink).
end
end
for 1 < k < n + 1 do
fk = f (xk )
end
Sort fk and xk
end
The Nelder-Mead method is provided in Matlab and Octave as the function fminsearch. An
application to Rosenbrock's function:
fun = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
x0 = [-1.2, 1];   % initial guess
options = optimset('PlotFcns', @optimplotfval, 'Display', 'iter');
x = fminsearch(fun, x0, options)
3.3 Constrained Optimization
We regard now problems of the following type:
min f (x) such that
gi (x) ≤ 0, i = 1, . . . , m Inequality constraints,
hj (x) = 0, j = 1, . . . , p Equality constraints,
xl [q] ≤ x[q] ≤ xu [q], q = 1, . . . , n Explicit restrictions.
Definition: If x∗ is the optimal point, then xl[q] ≤ x[q] is an active constraint if xl[q] = x∗[q]; equally
with xu[q]. Otherwise, the constraints are called inactive constraints.
A(x∗) = {q ∈ {1, ..., n} : xl[q] = x∗[q] or xu[q] = x∗[q]} forms the set of active constraints,
I(x∗) = {1, ..., n} \ A(x∗) forms the set of inactive constraints.
Figure 3.9: Projection method for the bound constraints x < −1 and y < −1
In penalty methods the constrained problem is replaced by the unconstrained minimization of
f̃(x) = f(x) + r Pf(x),
with Pf(x) a penalty function and r > 0 a scaling parameter. There are different definitions of
Pf, for example:
Pf(x) = Σ_{j=1}^{p} (hj(x))² + Σ_{i=1}^{m} (max(gi(x), 0))² + Σ_{i=1}^{n} (min(xi − xl,i, 0))² + Σ_{i=1}^{n} (min(xu,i − xi, 0))².
The effect on the optimization algorithm can be seen in Figure 3.10. Penalty formulations are
Figure 3.10: Penalty method for the bound constraints x < −1 and y < −1
more flexible, and do not require any changes of the algorithm (only the objective is changed). A
difficulty might be the evaluation of points outside the set of admissible points which may lead to
failing model evaluations (for example negative material parameters).
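A minimal sketch of such a penalty formulation for the bound constraints x ≤ −1 and y ≤ −1 (as in Figure 3.10); the objective (Rosenbrock's function) and the value of r are only chosen for illustration:

% Quadratic penalty for the bound constraints x <= -1 and y <= -1
r    = 100;                                                % scaling parameter
f    = @(v) 100*(v(2) - v(1)^2)^2 + (1 - v(1))^2;          % assumed objective
Pf   = @(v) max(v(1) + 1, 0)^2 + max(v(2) + 1, 0)^2;       % penalizes violations
ftil = @(v) f(v) + r*Pf(v);                                % penalized objective
xopt = fminsearch(ftil, [-1.2, 1])                         % unconstrained solver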
3.3.3 Linear Independence Constraint Qualification (LICQ)
We will extend now the theory and numerical frameworks to allow also for equality (hj (x) = 0)
and inequality (gi (x) ≤ 0) constraints. These are in particular of high importance in the context
of structural optimization, where, e.g., one wants to minimize the volume (material) of a structure
while keeping its safety guaranteed. Without this safety constraint, the problem would provide
the trivial solution not to use any material at all (zero volume).
We now define a condition on the gradients of the constraints, the linear independence constraint
qualification (LICQ), as a condition on X.
Let x∗ be admissible. The set
A(x∗) := {i ∈ {1, ..., m} : gi(x∗) = 0}
is the set of indices of "active inequality constraints". The LICQ is fulfilled in x∗ if the gradients
of the active constraints, ∇gi(x∗), i ∈ A(x∗), together with the gradients of the equality constraints
∇hj(x∗), j = 1, ..., p, are linearly independent.
The KKT conditions contain, for every inequality constraint, the complementarity conditions
λi ≥ 0, λi gi(x∗) = 0, gi(x∗) ≤ 0.
(Either gi is active, then λi ≥ 0 can be arbitrary, or gi is not active and λi needs to be zero.)
Example 3.3.1. Equality constraints
L(x, µ) = x1 + x2 + µ (x1² − x2), with f(x1, x2) = x1 + x2 and h(x1, x2) = x1² − x2.
It holds
∇x L(x1, x2, µ) = (1 + 2µx1, 1 − µ)^T, ∇h(x1, x2) = (2x1, −1)^T, ∇f(x1, x2) = (1, 1)^T.
LICQ is fulfilled as ∇h 6= 0 ∀x1 .
We apply KKT: Let (x1 , x2 ) be a local minimum. Then there exists a parameter µ with ∇x L(x1 , x2 ) =
(0, 0)T which gives a system of linear equations
1 + 2µx1 = 0
1−µ=0
with solution x1 = −0.5 and µ = 1. Introducing x1 = −0.5 to the equality constraint we get
x2 = 0.25.
Example 3.3.2.
min f(x) = x1, x ∈ R2, such that
g(x) = (x1 − 4)² + x2² − 16 ≤ 0.
With this,
L(x, λ) = x1 + λ ((x1 − 4)² + x2² − 16)
and
∇x L = (1 + 2λ(x1 − 4), 2λx2)^T, ∇f(x) = (1, 0)^T, ∇g(x) = (2(x1 − 4), 2x2)^T.
The LICQ is fulfilled for all points except (x1, x2) = (4, 0), since there ∇g = 0. At this point, however,
g(4, 0) = −16 < 0, so the inequality constraint is not active there.
The KKT theorem tells us that there is a λ such that
1 + 2λ(x1 − 4) = 0,   (1)
2λx2 = 0,   (2)
λ ≥ 0, λ g(x) = 0.   (3)
From equation (2) it follows that either x2 = 0 or λ = 0.
1. x2 = 0: For λ > 0 the constraint must be active (equation (3)), i.e. (x1 − 4)² − 16 = 0,
and 1 + 2λ(x1 − 4) = 0.
With this, we have a system of equations for x1 and λ. It follows that x1 = 0 or x1 = 8. For
x1 = 8 we obtain λ = −1/8, which contradicts λ ≥ 0 (conflict with equation (3)).
The only point satisfying the KKT conditions is x1 = 0, x2 = 0 with λ = 1/8.
2. λ = 0:
If λ = 0 we would get 1 = 0, which is again a contradiction (conflict with equation (1)).
For the equality-constrained problem we require F(x, µ) := ( ∇x L(x, µ), h(x) )^T = 0.
Applying Newton's method to F(x, µ) = 0 gives the Lagrange-Newton method.
The Lagrange-Newton method is well defined for all x0 and µ0 close to x∗ and µ∗ and converges
superlinearly. If Hf and h'' are Lipschitz-continuous, then we have convergence of quadratic order.
Comment: For inequality constraints one generally has to use penalty formulations, i.e. we
minimize f̃(x) = f(x) + r P(x) with r > 0 and P(x) a penalty function which returns "high" values
if any of the inequality constraints is violated. The function f̃(x) is then minimized by approaches from
unconstrained nonlinear optimization.
Chapter 4
Structural Optimization
Dimensioning: The state of a system which is defined by its geometry and the related
dimensions such as length, height, radius, etc. is improved by changing these dimensions. For example:
finding the optimum depth of cut required in a sheet of paper to obtain a box of maximum volume, or
finding the minimal amount of material required to form a cuboid (or cube) of given volume.
Shape Optimization: refers to the improvement of the state of an object by changing its shape
parameters or the shape of its boundary.
Topology Optimization: refers to improving the state of a system by changing its topology, i.e.
the number of holes. It deals with the material distribution within a design space.
4.1 Dimensioning Problems
4.1.1 Creating a Box
Consider a sheet of paper (e.g. 40 cm x 30 cm) which is to be modified to form a box so as to
obtain maximal volume, where
x - design variable: how deep to cut into the sheet of paper to form a box with maximal volume.
With side lengths a and b the volume is
V(x) = x (a − 2x)(b − 2x).
Derivative (necessary condition):
dV(x)/dx = ab − 4(a + b)x + 12x² = 0
with the admissible root
x∗ = (a + b)/6 − sqrt( (a + b)²/36 − ab/12 ).
With a = 30 and b = 40:
x∗ ≈ 5.7.
Remark: When you use this example together with the method of golden cut, do not be confused
with the variables a and b. They are used in the algorithm and in the example. Better to rename
them, e.g. for the example into l length and w width.
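A quick numerical check of this result, e.g. with the built-in one-dimensional minimizer fminbnd (alternatively the golden cut algorithm from Chapter 3 can be used):

% Numerical check of the box example (a = 30, b = 40)
a = 30; b = 40;
V = @(x) -(a - 2*x).*(b - 2*x).*x;      % minus sign: minimize -V to maximize V
xstar = fminbnd(V, 0, min(a, b)/2);     % cut depth must lie in (0, min(a,b)/2)
fprintf('optimal cut depth x* = %.2f cm\n', xstar);   % approx. 5.66 cm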
4.1.2 Minimization of material required to wrap goods (e.g. sugar)
V = a × b × c
A = 2ab + 2bc + 2ac
Let V be given, e.g. V = 1000:
c = 1000 / (ab).
Minimize now
A(a, b) = 2ab + 2b · 1000/(ab) + 2a · 1000/(ab) = 2ab + 2000/a + 2000/b.
We work with the gradient
∇A(a, b) = (2b − 2000/a², 2a − 2000/b²)^T = 0
→ a = b = 10 → c = 10.
Among all cuboids of a given volume, the cube requires the least material to wrap the goods.
Remark: When you use this example together with the method of the golden cut, do not be confused
with the variables a and b: they are used both in the algorithm and in the example. Better to rename
them, e.g. for the example into x, y and z.
4.2 Shape Optimization:
Let’s begin with the shape optimization of mechanical structures.
4.2.1 (A very) short introduction to FEM
B^T c B u = f in Ω,
u = ū on Γe,
c B u = t̄ on Γt,
where
• u-displacements,
• B-strain-displacement differential matrix
B = [ ∂/∂x   0      0;
      0      ∂/∂y   0;
      0      0      ∂/∂z;
      0      ∂/∂z   ∂/∂y;
      ∂/∂z   0      ∂/∂x;
      ∂/∂y   ∂/∂x   0 ]
• Bu - strains,
• c - Stiffness tensor/matrix.
Ω-computational domain
Γ-boundary
Γe -part of Γ with essential boundary conditions
Γt -part of Γ with traction forces
Nodal displacements ui are computed on the nodes by the FEM directly from the FE system
Figure 4.2: Computational domain, mesh and boundaries with boundary conditions
Within each element n the displacements are interpolated as
un = Nn un,i,
with Nn the shape functions and un,i the nodal displacements of the nodes connected to element n.
The strains can be obtained by differentiation of the displacements:
εn = Bn un,i,  σn = cn Bn un,i.
B^T c B u = f in Ω.
Multiplying with sufficiently smooth functions v which vanish on the boundary and application of
partial integration yields
∫_Ω (c B u)^T B v dΩ = ∫_Ω f v dΩ.
Discretization with finite elements leads to the linear system
K u = F
where,
u-vector of all the nodal displacements
F -vector of forces
K-stiffness matrix
The matrix K is assembled from the elemental stiffness matrices Kn:
K = Σ_{n=1}^{N} Kn = Σ_{n=1}^{N} ∫_{Ωn} Bn^T cn Bn dΩn.
• Global formulation:
min ∫_Ω u(x) dΩ.
Recommended software: PDETOOL in Matlab (easy to use; export of system matrix and solution
vectors to the Matlab workspace for further processing), CALFEM (http://www.byggmek.lth.se/english/calfem/,
a Finite Element package based on Matlab provided by the University of Lund), ANSYS (educational
license available at BUW), ABAQUS, FREEFEM, OOFEM, ...
4.2.2 Geometric description of shapes by splines:
A straight line between two control points P1 and P2 can be written as
R(u, P1, P2) = P1 + (P2 − P1) · u,  u ∈ [0, 1].
[ui, ui+1] is called a knot span. To form the basis functions NA,p we use a recursion formula:
NA,0(u) = 1 if uA ≤ u < uA+1, and 0 else,
Figure 4.3: Basis functions for splines. Source [FSF02]
when p = 0. If p > 0,
NA,p(u) = (u − uA)/(uA+p − uA) · NA,p−1(u) + (uA+p+1 − u)/(uA+p+1 − uA+1) · NA+1,p−1(u).
The B-spline is finally obtained by combination of the control points with the shape functions:
S(u) = Σ_{A=1}^{n} PA NA,p(u),  P = {PA}_{A=1}^{n}.
Properties:
• Σ_A NA,p = 1 (partition of unity),
• local support.
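A minimal sketch of the recursion for the basis functions NA,p(u); the knot vector U (non-decreasing) and 1-based indexing are assumptions of the sketch:

function N = bspline_basis(A, p, U, u)
% Cox-de Boor recursion for the B-spline basis function N_{A,p}(u)
if p == 0
    N = double(U(A) <= u && u < U(A+1));
else
    N = 0;
    if U(A+p) > U(A)              % avoid division by zero for repeated knots
        N = N + (u - U(A))/(U(A+p) - U(A)) * bspline_basis(A, p-1, U, u);
    end
    if U(A+p+1) > U(A+1)
        N = N + (U(A+p+1) - u)/(U(A+p+1) - U(A+1)) * bspline_basis(A+1, p-1, U, u);
    end
end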
Splines can be used to define boundaries. Further, they can be generalized to represent surfaces
and solids. The generalization is straightforward:
Surface: S(u, v) = Σ_{A=1}^{n} Σ_{B=1}^{m} PA,B NA,B^{p,q}(u, v),
Volume: V(u, v, w) = Σ_{A=1}^{n} Σ_{B=1}^{m} Σ_{C=1}^{l} PA,B,C NA,B,C^{p,q,r}(u, v, w).
(Figure: circle described by an R-spline: (a) plain; (b) changed weights w = [1 s45 5 s45 1 s45 1 s45 1]; (c) changed control points x = [1 1 0.5 -1 -1 -1 0 1 1], y = [0 1 1 1 0 -1 -1 -1 0].)
clear
close all
% Generates a circle with minimal information
% Define Control Points
x = [1 1 0 -1 -1 -1 0 1 1]; y = [0 1 1 1 0 -1 -1 -1 0];
% weights
s45 = 1/ sqrt (2) ;
w =[1 s45 1 s45 1 s45 1 s45 1];
4.3 Application: Optimization of the Shape of a Dam
Optimization of the cross-section of a dam described by splines. For the dam, a coupled hydro-mechanical
Finite Element model is solved. The objective is formed by the volume (surface area).
A stress threshold is taken as constraint. The Nelder-Mead algorithm is chosen to optimize the
structure.
Figure 4.6: Optimization of the cross section of a dam. Upper left: finite element mesh; upper middle:
different tested shapes described by splines; right: physical quantities computed on the optimized
structure.
4.4 Topology Optimization
In topology, the properties of geometric structures are characterized by the number of holes or voids
they contain. Two objects are (topologically) equivalent if the number of voids is the same.
Topology optimization allows us to change from one topology class to another.
4.4.2 Evolutionary Structural Optimization (ESO):
Figure 4.9: Mesh and Design Variables
The design variables xi ∈ {0, 1} indicate whether element i is present (1) or removed (0); the state
equation is
K(x) u(x) = f.
Let us assume we remove one finite element, i.e. xi = 0 and xj = 1 ∀ j ≠ i. Then the stiffness
matrix changes by
∆K = K∗ − K = −Ki,
where K∗ is the stiffness matrix of the resulting structure after the element is removed and Ki is the
i-th elemental stiffness matrix.
Varying both sides of the state equation gives the change in the displacements
∆u = −K^(−1)(x) ∆K u(x).
4.4.4 Bi-directional Evolutionary Structural Optimization (BESO):
BESO is an improvement of the ESO method. While ESO is limited to the removal of elements,
BESO can also add elements. The optimization task is given as the following compliance minimization
(stiffness maximization) problem:
min C = (1/2) f^T u(x),
subject to V∗ − Σ_{i=1}^{n} Vi xi = 0,  xi ∈ {0, 1}.
Once an element is removed, we no longer have any information about its sensitivity. However,
with neighboring elements we can average the sensitivity for a certain region including the already
deleted element. So we average the value of α over all elements connected to one node:
αj^n = Σ_{i=1}^{M} wi αi^e,
where M is the total number of elements connected to the j-th node and wi is the weight factor
defined as
wi = (1/(M − 1)) ( 1 − rij / Σ_{i=1}^{M} rij ),
where rij is the distance of the midpoint of the i-th element from node j. These nodal sensitivities
are then averaged over the nodes in the neighborhood of each element:
αi = Σ_{j=1}^{k} w(rij) αj^n / Σ_{j=1}^{k} w(rij),
where k is the number of nodes located in the sub-domain Ωi (given by the filter radius rmin),
w(rij) = rmin − rij is a weight factor and j = 1, ..., k.
There are some thresholds based on which the elements are kept, deleted or added:
the elements are to be deleted (switched to 0) if αi ≤ αdel^th,
the elements are to be added (switched to 1) if αi > αadd^th.
Stability: Sensitivities are averaged between two iterations to obtain a stabilization of the process:
αi = ( αi^k + αi^{k−1} ) / 2,
where αi^k is the current and αi^{k−1} the old sensitivity.
(Try out the codes without this line, then you know why it is there!) In the above setting either
an element is active or not. Thus, we deal with a discrete optimization problem, a so-called
hard-kill approach. Generally, in optimization theory it has been found that continuous problems are
more convenient to solve. So the following deals with an implementation of BESO as a continuous
optimization problem:
Goal:
xi ∈ [0, 1],
i.e. each design variable may now take any value in the interval from 0 to 1. Such an approach is called
soft kill, where the design variable becomes continuous. (Earlier we started with the discrete (hard
kill) setting where xi ∈ {0, 1}.)
In soft-kill topology optimization algorithms, the Young's modulus is defined as a function of the
design variable xi as follows:
Ei = Ei(xi) = xi^p E0,
where p > 0 is called the penalization factor, E0 is the Young's modulus of the solid material, and xi is the
design variable.
Figure 4.10: Effect of the power law used in the soft-kill approaches.
Figure 4.11: One-sided MBB beam (Messerschmitt–Bölkow–Blohm), which is actually a 3-point
bending test.
The core of a BESO implementation (beso.m) reads as follows; the element stiffness routine lk (not repeated here) can be taken from the SIMP code top.m below.

% STABILIZATION OF EVOLUTIONARY PROCESS
if i > 1; dc = (dc+olddc)/2.; end
% BESO DESIGN UPDATE
[x] = ADDDEL(nelx,nely,vol,dc,x);
% PRINT RESULTS
if i > 10;
  change = abs(sum(c(i-9:i-5))-sum(c(i-4:i)))/sum(c(i-4:i));
end
V(i) = sum(sum(x))/(nelx*nely);
disp([' It.: ' sprintf('%4i',i) ' Obj.: ' sprintf('%10.4f',c(i)) ...
      ' Vol.: ' sprintf('%6.3f',sum(sum(x))/(nelx*nely)) ...
      ' ch.: ' sprintf('%6.3f',change)])
% PLOT DENSITIES
colormap(gray); imagesc(-x); axis equal; axis tight; axis off; drawnow;
end
%%%%%%%%%% OPTIMALITY CRITERIA UPDATE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [x]=ADDDEL(nelx,nely,volfra,dc,x)
l1 = min(min(dc)); l2 = max(max(dc));
while ((l2-l1)/l2 > 1.0e-5)
  th = (l1+l2)/2.0;
  x = max(0.001,sign(dc-th));
  if sum(sum(x))-volfra*(nelx*nely) > 0;
    l1 = th;
  else
    l2 = th;
  end
end
%%%%%%%%%% MESH-INDEPENDENCY FILTER %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [dcf]=check(nelx,nely,rmin,x,dc)
dcf=zeros(nely,nelx);
for i = 1:nelx
  for j = 1:nely
    sum = 0.0;
    for k = max(i-floor(rmin),1):min(i+floor(rmin),nelx)
      for l = max(j-floor(rmin),1):min(j+floor(rmin),nely)
        fac = rmin-sqrt((i-k)^2+(j-l)^2);
        sum = sum+max(0,fac);
        dcf(j,i) = dcf(j,i) + max(0,fac)*dc(l,k);
      end
    end
    dcf(j,i) = dcf(j,i)/sum;
  end
end
%%%%%%%%%% FE-ANALYSIS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [U]=FE(nelx,nely,x,penal)
[KE] = lk;
K = sparse(2*(nelx+1)*(nely+1), 2*(nelx+1)*(nely+1));
F = sparse(2*(nely+1)*(nelx+1),1); U = zeros(2*(nely+1)*(nelx+1),1);
for elx = 1:nelx
  for ely = 1:nely
    n1 = (nely+1)*(elx-1)+ely;
    n2 = (nely+1)* elx   +ely;
    edof = [2*n1-1; 2*n1; 2*n2-1; 2*n2; 2*n2+1; 2*n2+2; 2*n1+1; 2*n1+2];
    K(edof,edof) = K(edof,edof) + x(ely,elx)^penal*KE;
  end
end
% DEFINE LOADS AND SUPPORTS (Cantilever)
% F(2*(nelx+1)*(nely+1)-nely,1) = -1.0;
% fixeddofs = [1:2*(nely+1)];
% alldofs   = [1:2*(nely+1)*(nelx+1)];
% freedofs  = setdiff(alldofs,fixeddofs);
% % SOLVING
% U(freedofs,:) = K(freedofs,freedofs) \ F(freedofs,:);
% U(fixeddofs,:) = 0;
% DEFINE LOADS AND SUPPORTS (HALF MBB-BEAM, as in the SIMP code top.m below)
F(2,1) = -1;
fixeddofs = union([1:2:2*(nely+1)],[2*(nelx+1)*(nely+1)]);
alldofs   = [1:2*(nely+1)*(nelx+1)];
freedofs  = setdiff(alldofs,fixeddofs);
% SOLVING
U(freedofs,:) = K(freedofs,freedofs) \ F(freedofs,:);
U(fixeddofs,:) = 0;
Results with BESO for different volume fractions are given in Figures 4.13a and 4.13b.
(a) Classical half MBB topology optimization problem solved with BESO and a volume fraction of 0.6.
The solution is obtained by calling the code with beso(80,60,0.6,2,3).
(b) Classical half MBB topology optimization problem solved with BESO and a volume fraction of 0.3.
The solution is obtained by calling the code with beso(80,60,0.3,0.1,2).
4.4.5 Solid Isotropic Material with Penalization (SIMP):
SIMP uses the power law of the soft-kill approach to solve the topology optimization problem:
minimize the compliance
c(x) = f^T u(x)
over the design space
X = {x ∈ Rn : 0 ≤ xi ≤ 1}
subject to the volume constraint V^T x = V∗ and the state equation
K(x) u(x) = f.
V∗ is the target volume and V is the vector of volumes of all finite elements.
As we are dealing with a constrained optimization problem (K u = f is an equality constraint), the
Lagrangian function is used in order to solve the optimization problem.
Now,
∂c(x)/∂xi = f^T ∂u(x)/∂xi = u(x)^T K(x) ∂u(x)/∂xi.
Further, to find ∂u(x)/∂xi, we regard the derivative of K(x) u(x) = f:
∂K(x)/∂xi u(x) + K(x) ∂u(x)/∂xi = 0
⇒ ∂u(x)/∂xi = −K(x)^(−1) ∂K(x)/∂xi u(x).
Now we have the stiffness matrix K(x) = Σ_{i=1}^{n} xi^p Ki, so
∂K(x)/∂xi = p xi^(p−1) Ki,
and altogether
∂c(x)/∂xi = −u(x)^T ∂K(x)/∂xi u(x) = −p xi^(p−1) ue^T Ki ue,
with ue the displacement vector of element i; this is exactly the sensitivity dc computed in the
following code.
function top(nelx,nely,volfrac,penal,rmin);
% INITIALIZE
x(1:nely,1:nelx) = volfrac;
loop = 0;
change = 1.;
% START ITERATION
while change > 0.01
  loop = loop + 1;
  xold = x;
% FE-ANALYSIS
  [U]=FE(nelx,nely,x,penal);
% OBJECTIVE FUNCTION AND SENSITIVITY ANALYSIS
  [KE] = lk;
  c = 0.;
  for ely = 1:nely
    for elx = 1:nelx
      n1 = (nely+1)*(elx-1)+ely;
      n2 = (nely+1)* elx   +ely;
      Ue = U([2*n1-1;2*n1; 2*n2-1;2*n2; 2*n2+1;2*n2+2; 2*n1+1;2*n1+2],1);
      c = c + x(ely,elx)^penal*Ue'*KE*Ue;
      dc(ely,elx) = -penal*x(ely,elx)^(penal-1)*Ue'*KE*Ue;
    end
  end
% FILTERING OF SENSITIVITIES
  [dc] = check(nelx,nely,rmin,x,dc);
% DESIGN UPDATE BY THE OPTIMALITY CRITERIA METHOD
  [x] = OC(nelx,nely,x,volfrac,dc);
% PRINT RESULTS
  change = max(max(abs(x-xold)));
  disp([' It.: ' sprintf('%4i',loop) ' Obj.: ' sprintf('%10.4f',c) ...
        ' Vol.: ' sprintf('%6.3f',sum(sum(x))/(nelx*nely)) ...
        ' ch.: ' sprintf('%6.3f',change)])
% PLOT DENSITIES
  colormap(gray); imagesc(-x); axis equal; axis tight; axis off; pause(1e-6);
end
%%%%%%%%%% OPTIMALITY CRITERIA UPDATE %%%%%%%%%%%%%%%%%%%%%%%%%
function [xnew]=OC(nelx,nely,x,volfrac,dc)
l1 = 0; l2 = 100000; move = 0.2;
while (l2-l1 > 1e-4)
  lmid = 0.5*(l2+l1);
  xnew = max(0.001,max(x-move,min(1.,min(x+move,x.*sqrt(-dc./lmid)))));
  if sum(sum(xnew)) - volfrac*nelx*nely > 0;
    l1 = lmid;
  else
    l2 = lmid;
  end
end
%%%%%%%%%% MESH-INDEPENDENCY FILTER %%%%%%%%%%%%%%%%%%%%%%%%%%%
function [dcn]=check(nelx,nely,rmin,x,dc)
dcn=zeros(nely,nelx);
for i = 1:nelx
  for j = 1:nely
    sum = 0.0;
    for k = max(i-floor(rmin),1):min(i+floor(rmin),nelx)
      for l = max(j-floor(rmin),1):min(j+floor(rmin),nely)
        fac = rmin-sqrt((i-k)^2+(j-l)^2);
        sum = sum+max(0,fac);
        dcn(j,i) = dcn(j,i) + max(0,fac)*x(l,k)*dc(l,k);
      end
    end
    dcn(j,i) = dcn(j,i)/(x(j,i)*sum);
  end
end
%%%%%%%%%% FE-ANALYSIS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [U]=FE(nelx,nely,x,penal)
[KE] = lk;
K = sparse(2*(nelx+1)*(nely+1), 2*(nelx+1)*(nely+1));
F = sparse(2*(nely+1)*(nelx+1),1); U = zeros(2*(nely+1)*(nelx+1),1);
for elx = 1:nelx
  for ely = 1:nely
    n1 = (nely+1)*(elx-1)+ely;
    n2 = (nely+1)* elx   +ely;
    edof = [2*n1-1; 2*n1; 2*n2-1; 2*n2; 2*n2+1; 2*n2+2; 2*n1+1; 2*n1+2];
    K(edof,edof) = K(edof,edof) + x(ely,elx)^penal*KE;
  end
end
% DEFINE LOADS AND SUPPORTS (HALF MBB-BEAM)
F(2,1) = -1;
fixeddofs = union([1:2:2*(nely+1)],[2*(nelx+1)*(nely+1)]);
alldofs   = [1:2*(nely+1)*(nelx+1)];
freedofs  = setdiff(alldofs,fixeddofs);
% SOLVING
U(freedofs,:) = K(freedofs,freedofs) \ F(freedofs,:);
U(fixeddofs,:) = 0;
%%%%%%%%%% ELEMENT STIFFNESS MATRIX %%%%%%%%%%%%%%%%%%%%%%%%%
function [KE]=lk
E = 1.;
nu = 0.3;
k=[ 1/2-nu/6   1/8+nu/8 -1/4-nu/12 -1/8+3*nu/8 ...
   -1/4+nu/12 -1/8-nu/8  nu/6       1/8-3*nu/8];
KE = E/(1-nu^2)*[ k(1) k(2) k(3) k(4) k(5) k(6) k(7) k(8)
                  k(2) k(1) k(8) k(7) k(6) k(5) k(4) k(3)
                  k(3) k(8) k(1) k(6) k(7) k(4) k(5) k(2)
                  k(4) k(7) k(6) k(1) k(8) k(3) k(2) k(5)
                  k(5) k(6) k(7) k(8) k(1) k(2) k(3) k(4)
                  k(6) k(5) k(4) k(3) k(2) k(1) k(8) k(7)
                  k(7) k(4) k(5) k(2) k(3) k(8) k(1) k(6)
                  k(8) k(3) k(2) k(5) k(4) k(7) k(6) k(1)];
(a) Classical half MBB topology optimization problem solved with SIMP and a volume fraction of 0.6.
The solution is obtained by calling the code with top(80,60,0.6,3,2).
(b) Classical half MBB topology optimization problem solved with SIMP and a volume fraction of 0.3.
The solution is obtained by calling the code with top(80,60,0.3,3,2).
For further details, variants and extensions, please see https://www.topopt.mek.dtu.dk and [Sig01].
There is a list of packages and codes which provide topology optimization, e.g. Ansys Workbench
and Matlab (codes top.m, beso.m, top88.m, top3D.m, ...). Architects and designers use Rhino
and Grasshopper.
Often, the optimized structures are remodelled in order to smooth the boundaries and re-establish
symmetry. Attached are some works of previous students:
(Student projects on Topology Optimization, SoSe 2019, Prof. Tom Lahmer and students of Natural Hazards and Risks in Structural Engineering, Digital Engineering, Bauingenieurwesen and Baustoffingenieurwesen:
• Hanan Hadidi & Mohamad Nour Alkhalaf: object with constraints (ANSYS), re-modeling (SpaceClaim);
• Sreekanth Buddhiraju (NHRE), Michael Glas (KIM);
• Alexander Benz (BWM), Andreas Kirchner (BIM): modelling (ANSYS), topology optimization (15 % vol. fract.), re-modelling (SpaceClaim), object visualization, 3D printing (144 layers PETG).)
Figure 4.21: Topology Optimization, Rendering and 3D Printing of a Chair (Andreas Lenz, 2019)
Bibliography