
Method of Moving Asymptotes and its Applications in Structural Optimization

Guided by: Prof. Salil S. Kulkarni
Presented by: Sanket Chavan (173109002)

Mechanical Engineering Department
IIT Bombay
Introduction

Structural Optimization
1. Size optimization
   • Cross-sectional area as design variable.
2. Shape optimization
   • The boundary of the problem can be modified.
   • Node location as design variable.
3. Topology optimization
   • Controls the material distribution in a given domain.

Figure: Different structural optimization approaches applied to a truss structure [1].

• Various algorithms are used for nonlinear optimization.
• For structural optimization, MMA is widely used.

[1] Afshin Faramarzi, Mohammad Hadi Afshar, "A Novel Hybrid Cellular Automata-Linear Programming Approach for the Optimal Sizing of Planar Truss Structures," Civil Engineering and Environmental Systems, pp. 209-228, 2014.
Motivation

Topology optimization
1. Static structural design
   • Design for minimum compliance
   • Design for minimum weight
2. Dynamic structural design
   • Design for minimum structural response (acceleration, natural frequency)
3. Material design
   • Design for extremal material properties (NPR material)

In all these cases the material distribution over a given domain is taken as the design variable.

Figures: Topology optimization example for minimum compliance design [1]; microstructure and 3D-printed structure for the NPR material designed by Erik et al. [2].

[1] Ji-Hong Zhu, Wei-Hong Zhang, Liang Xia, "Topology Optimization in Aircraft and Aerospace Structures Design," Archives of Computational Methods in Engineering, vol. 23, pp. 595-622, 2016.
[2] Erik Andreassen, Boyan S. Lazarov, Ole Sigmund, "Design of Manufacturable 3D Extremal Elastic Microstructure," Mechanics of Materials, vol. 69, pp. 1-10, 2014.
General Form of the Structural Optimization Problem

$$P:\ \min_{\mathbf{x}} f_0(\mathbf{x}) \quad \text{subject to } f_i(\mathbf{x}) \le 0,\ i = 1,\dots,m, \quad \mathbf{x} \in \mathbb{S},$$
$$\mathbb{S} = \{\mathbf{x} \in \mathbb{R}^n : x_j^{\min} \le x_j \le x_j^{\max},\ j = 1,\dots,n\}$$

• $f_0(\mathbf{x})$ is the objective function: weight, compliance, cost of manufacturing, dynamic response, etc.
• $\mathbf{x}$ is the vector of design variables: cross-sectional areas, elemental densities, etc.
• $f_i(\mathbf{x})$ are the constraint functions: stress in a member, nodal displacement, maximum volume, etc.

Two-bar truss example: minimize the weight subject to stress constraints.

$$P:\ \min_{A_1, A_2}\ A_1 + A_2 \quad \text{subject to } \frac{F\cos\alpha}{A_1} \le \sigma_0, \quad \frac{F\sin\alpha}{A_2} \le \sigma_0, \quad A_1, A_2 \ge 0$$

A small numerical check of this example is sketched below.

Figure: Two-bar truss and its graphical solution [1].

[1] Peter W. Christensen, Anders Klarbring, An Introduction to Structural Optimization, Linköping, Sweden: Springer Science, 2009.
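A minimal sketch of the two-bar example, with placeholder data (the slide keeps $F$, $\alpha$, $\sigma_0$ symbolic, so the values below are assumptions). Since each stress constraint reduces to a lower bound on one area, the problem is linear and the optimum has both constraints active, $A_1 = F\cos\alpha/\sigma_0$, $A_2 = F\sin\alpha/\sigma_0$.

```python
import numpy as np
from scipy.optimize import linprog

# Placeholder data (assumed values; the slide leaves F, alpha, sigma_0 symbolic).
F, alpha, sigma0 = 10.0, np.radians(30.0), 1.0

# minimize A1 + A2  s.t.  F*cos(a)/A1 <= sigma0,  F*sin(a)/A2 <= sigma0, A1, A2 >= 0.
# Each stress constraint is just a lower bound on the corresponding area.
lb = [F * np.cos(alpha) / sigma0, F * np.sin(alpha) / sigma0]
res = linprog(c=[1.0, 1.0], bounds=[(lb[0], None), (lb[1], None)])

print(res.x)   # both stress constraints active at the optimum
print(lb)      # analytical optimum for comparison
```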
KKT Conditions

Solving for the stationary points of the Lagrangian:

1. $\mathcal{L}(\mathbf{x}, \boldsymbol{\lambda}) = f_0(\mathbf{x}) + \sum_{i=1}^{m} \lambda_i f_i(\mathbf{x})$
2. $\dfrac{\partial \mathcal{L}(\mathbf{x}, \boldsymbol{\lambda})}{\partial x_j} = 0$, if $x_j^{\min} < x_j < x_j^{\max}$
3. $\lambda_i f_i(\mathbf{x}) = 0$
4. $f_i(\mathbf{x}) \le 0$
5. $\lambda_i \ge 0$

Geometrical interpretation [1]:
• The negative of the gradient of the objective function must be a linear combination, with non-negative multipliers, of the gradients of the active constraints.
• At point $\mathbf{x}^1$ in the figure, $-\nabla f_0(\mathbf{x}^1) = \lambda_1 \nabla f_1(\mathbf{x}^1) + \lambda_2 \nabla f_2(\mathbf{x}^1)$ with $\lambda_1 \ge 0$, $\lambda_2 \ge 0$, so $\mathbf{x}^1$ is a KKT point and consequently an optimal solution.
• There do not exist $\lambda_2 \ge 0$ and $\lambda_3 \ge 0$ such that $-\nabla f_0(\mathbf{x}^2) = \lambda_2 \nabla f_2(\mathbf{x}^2) + \lambda_3 \nabla f_3(\mathbf{x}^2)$. Consequently, $\mathbf{x}^2$ is not a KKT point and hence not an optimal solution.

A small numerical version of this test is sketched below.

[1] Peter W. Christensen, Anders Klarbring, An Introduction to Structural Optimization, Linköping, Sweden: Springer Science, 2009.
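A minimal sketch of the geometric test: given $\nabla f_0$ and the gradients of the active constraints at a candidate point, check whether $-\nabla f_0$ is a non-negative combination of them. The gradient values below are hypothetical, not taken from the figure.

```python
import numpy as np
from scipy.optimize import nnls

def is_kkt_point(grad_f0, active_grads, tol=1e-8):
    """Check -grad_f0 = sum_i lambda_i * grad_f_i with lambda_i >= 0."""
    A = np.column_stack(active_grads)   # columns = gradients of active constraints
    lam, residual = nnls(A, -grad_f0)   # non-negative least squares fit
    return residual < tol, lam

# Hypothetical gradients at a candidate point (illustration only)
grad_f0 = np.array([1.0, 2.0])
g1 = np.array([-1.0, 0.0])
g2 = np.array([0.0, -1.0])
print(is_kkt_point(grad_f0, [g1, g2]))   # (True, lambda = [1, 2])
```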
Iterative Approach

An approach requiring function and gradient information.

Sub-problem $P^k$:
$$P^k:\ \min_{\mathbf{x}} \tilde{f}_0^k(\mathbf{x}) \quad \text{subject to } \tilde{f}_i^k(\mathbf{x}) \le 0,\ i = 1,\dots,m, \quad \mathbf{x} \in \mathbb{S},$$
$$\mathbb{S} = \{\mathbf{x} \in \mathbb{R}^n : x_j^{\min} \le x_j \le x_j^{\max},\ j = 1,\dots,n\}$$

Flow of the iteration:
1. Formulate problem $P$; choose a starting point $\mathbf{x}^{(1)}$, set $k = 1$.
2. Given $\mathbf{x}^k$, calculate $f_i(\mathbf{x}^k)$ and $\nabla f_i(\mathbf{x}^k)$.
3. Generate sub-problem $P^k$ using an approximation scheme (MMA).
4. Solve sub-problem $P^k$ (by solving its KKT conditions) to get $\hat{\mathbf{x}}^k$.
5. Set $\mathbf{x}^{k+1} = \hat{\mathbf{x}}^k$ and return to step 2.

• The approximating functions need to be convex, conservative and separable.
• Solving the KKT conditions amounts to solving for the stationary points.
• The iteration stops when a convergence criterion (e.g., the change in the objective function) is fulfilled. A sketch of this loop is given below.
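A minimal sketch of the loop above. The approximation and sub-problem solver are left as stand-ins (`generate_subproblem` and `solve_subproblem` are hypothetical placeholders; in practice they would wrap the MMA code by Prof. Svanberg that the presentation uses).

```python
import numpy as np

def sequential_approx_optimize(f, grad_f, x, xmin, xmax,
                               generate_subproblem, solve_subproblem,
                               tol=1e-6, max_outer=100):
    """Generic sequential approximate optimization loop (MMA-style).

    f(x)      -> array [f_0, f_1, ..., f_m] of objective and constraint values
    grad_f(x) -> (m+1, n) array of gradients
    """
    f_old = np.inf
    for k in range(1, max_outer + 1):
        fval, dfval = f(x), grad_f(x)                                # step 2: values and gradients
        sub = generate_subproblem(k, x, fval, dfval, xmin, xmax)     # step 3: convex, separable approximation
        x = solve_subproblem(sub)                                    # step 4: solve sub-problem P^k
        if abs(fval[0] - f_old) < tol:                               # convergence: change in objective
            break
        f_old = fval[0]
    return x, k
```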
Approximation Methods

• A first-order Taylor series expansion in terms of intermediate variables $\phi_j(x_j)$ is used:
$$\tilde{f}(\mathbf{x}) = f(\mathbf{x}^k) + \sum_{j=1}^{n} \left.\frac{\partial f}{\partial \phi_j}\right|_{\mathbf{x}^k} \left(\phi_j - \phi_j^k\right)$$

1. Sequential Linear Programming (SLP)
• The objective function and constraints are linearized directly with respect to the design variables: $\phi_j(x_j) = x_j$.
$$\tilde{f}_i(\mathbf{x}) = f_i(\mathbf{x}^k) + \sum_{j=1}^{n} \left.\frac{\partial f_i}{\partial x_j}\right|_{\mathbf{x}^k} \left(x_j - x_j^k\right)$$
• $f_i(\mathbf{x}^k)$ and $\left.\partial f_i/\partial x_j\right|_{\mathbf{x}^k}$ are constants → separable approximation.
• The approximated functions are affine, i.e. $\mathbf{a}^T\mathbf{x} + b$ with $\mathbf{a}$, $b$ constant → convex approximation.
• Used for solving general nonlinear optimization problems.
• Does not take the special characteristics of structural problems into consideration.
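A minimal sketch of the SLP approximation as a closure: given $f_i(\mathbf{x}^k)$ and $\nabla f_i(\mathbf{x}^k)$, it returns the affine (hence convex and separable) approximation $\tilde{f}_i$.

```python
import numpy as np

def slp_approximation(fk, grad_fk, xk):
    """First-order (direct-variable) approximation: f~(x) = f(xk) + grad . (x - xk)."""
    fk, grad_fk, xk = float(fk), np.asarray(grad_fk, float), np.asarray(xk, float)
    def f_tilde(x):
        return fk + grad_fk @ (np.asarray(x, float) - xk)
    return f_tilde

# Example: f(x) = x1^2 + x2^2 linearized at xk = (1, 2)
f_tilde = slp_approximation(fk=5.0, grad_fk=[2.0, 4.0], xk=[1.0, 2.0])
print(f_tilde([1.1, 2.1]))   # 5 + 2*0.1 + 4*0.1 = 5.6
```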
Approximation Methods Cont.

2. Convex Linearization (CONLIN)
• For statically determinate structures, stresses and displacements are functions of $1/A_j$.
• Linearization with respect to inverse variables → $\phi_j(x_j) = \dfrac{1}{x_j}$:
$$\tilde{f}_I(\mathbf{x}) = f(\mathbf{x}^k) + \sum_{j=1}^{n} \left( -\left(x_j^k\right)^2 \left.\frac{\partial f}{\partial x_j}\right|_{\mathbf{x}^k} \right) \left( \frac{1}{x_j} - \frac{1}{x_j^k} \right)$$
• The objective function need not be a function of the inverse variables only.
• The function is linearized with respect to some variables as direct variables and some as inverse variables; a sketch of this mixed linearization is given below.
• Selection of variables based on convexity:
  – the Hessian $\nabla^2 \tilde{f}(\mathbf{x})$ has to be positive semidefinite.
• Selection of variables based on conservativeness:
  – $\tilde{f}_I(\mathbf{x}) - \tilde{f}_l(\mathbf{x}) \ge 0$ for selecting the inverse linearization.
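A minimal sketch of the CONLIN approximation, using the sign rule from the next slide (direct linearization where the derivative is positive, inverse where it is non-positive); it assumes all $x_j > 0$.

```python
import numpy as np

def conlin_approximation(fk, grad_fk, xk):
    """CONLIN: direct linearization where df/dxj > 0, inverse (1/xj) otherwise."""
    grad_fk, xk = np.asarray(grad_fk, float), np.asarray(xk, float)
    direct = grad_fk > 0.0
    def f_tilde(x):
        x = np.asarray(x, float)
        val = float(fk)
        val += np.sum(grad_fk[direct] * (x[direct] - xk[direct]))       # direct terms
        val += np.sum(-grad_fk[~direct] * xk[~direct]**2
                      * (1.0 / x[~direct] - 1.0 / xk[~direct]))         # inverse terms
        return val
    return f_tilde

# Example with a mixed-sign gradient at xk = (1, 2)
f_tilde = conlin_approximation(fk=3.0, grad_fk=[1.5, -0.5], xk=[1.0, 2.0])
print(f_tilde([1.2, 2.2]))
```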
Approximation Methods Cont.

Convex Linearization (CONLIN): convexity and conservativeness

• Hessian $\nabla^2 \tilde{f}(\mathbf{x})$: because the approximation is separable, the off-diagonal terms are zero, so only the diagonal terms have to be non-negative.
• For an inverse-linearized variable,
$$\frac{\partial^2 \tilde{f}}{\partial x_j^2} = -\left.\frac{\partial f}{\partial x_j}\right|_{\mathbf{x}^k} \frac{2\left(x_j^k\right)^2}{x_j^3},$$
which is non-negative whenever $\left.\partial f/\partial x_j\right|_{\mathbf{x}^k} \le 0$, so the approximating function is convex.
• Conservativeness:
$$\tilde{f}_I(\mathbf{x}) - \tilde{f}_l(\mathbf{x}) = -\left.\frac{\partial f}{\partial x_j}\right|_{\mathbf{x}^k} \frac{\left(x_j^k - x_j\right)^2}{x_j},$$
which is non-negative when $\left.\partial f/\partial x_j\right|_{\mathbf{x}^k} \le 0$.

Conclusion
• Use direct linearization for the variables $x_j$ with $\left.\partial f_i/\partial x_j\right|_{\mathbf{x}^k} > 0$.
• Use inverse linearization for the variables $x_j$ with $\left.\partial f_i/\partial x_j\right|_{\mathbf{x}^k} \le 0$.

Disadvantages
• Convergence is too slow for some variables.
• Oscillation about a point sometimes occurs because of too little conservativeness.
Method of Moving Asymptotes (MMA)

• Generalization of CONLIN.
• Intermediate variables:
$$\phi_j(x_j) = \frac{1}{U_j - x_j} \ \text{ for those } x_j \text{ with } \left.\frac{\partial f}{\partial x_j}\right|_{\mathbf{x}^k} > 0, \qquad \phi_j(x_j) = \frac{1}{x_j - L_j} \ \text{ for those } x_j \text{ with } \left.\frac{\partial f}{\partial x_j}\right|_{\mathbf{x}^k} \le 0, \qquad j = 1,\dots,n$$
• $L_j^k$ and $U_j^k$ are the so-called moving asymptotes, with $L_j^k < x_j^k < U_j^k$.

Original form of MMA (a sketch constructing this approximation is given below):
$$\tilde{f}_i^k(\mathbf{x}) = r_i^k + \sum_{j=1}^{n} \left( \frac{p_{ij}^k}{U_j^k - x_j} + \frac{q_{ij}^k}{x_j - L_j^k} \right)$$
$$p_{ij}^k = \begin{cases} \left(U_j^k - x_j^k\right)^2 \dfrac{\partial f_i}{\partial x_j}, & \text{if } \dfrac{\partial f_i}{\partial x_j} > 0 \\ 0, & \text{if } \dfrac{\partial f_i}{\partial x_j} \le 0 \end{cases} \qquad q_{ij}^k = \begin{cases} 0, & \text{if } \dfrac{\partial f_i}{\partial x_j} \ge 0 \\ -\left(x_j^k - L_j^k\right)^2 \dfrac{\partial f_i}{\partial x_j}, & \text{if } \dfrac{\partial f_i}{\partial x_j} < 0 \end{cases}$$
$$r_i^k = f_i(\mathbf{x}^k) - \sum_{j=1}^{n} \left( \frac{p_{ij}^k}{U_j^k - x_j^k} + \frac{q_{ij}^k}{x_j^k - L_j^k} \right)$$
• Requires only function values and first-order derivatives.

Convexity check:
$$\frac{\partial^2 \tilde{f}_i^k}{\partial x_j^2} = \frac{2 p_{ij}^k}{\left(U_j^k - x_j\right)^3} + \frac{2 q_{ij}^k}{\left(x_j - L_j^k\right)^3}, \qquad \frac{\partial^2 \tilde{f}_i^k}{\partial x_j \partial x_l} = 0 \ \text{ if } j \ne l$$
• $\dfrac{\partial^2 \tilde{f}_i^k}{\partial x_j^2} \ge 0$ since $p_{ij}^k \ge 0$ and $q_{ij}^k \ge 0$, so the approximation is convex.
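A minimal sketch of the original MMA approximation for a single function $f_i$: build $p_j$, $q_j$, $r$ from the gradient at $\mathbf{x}^k$ and given asymptotes, then evaluate $\tilde{f}_i$. The numbers in the example are hypothetical.

```python
import numpy as np

def mma_approximation(fk, grad_fk, xk, L, U):
    """Original MMA approximation of one function around xk, for asymptotes L < xk < U."""
    grad_fk, xk = np.asarray(grad_fk, float), np.asarray(xk, float)
    L, U = np.asarray(L, float), np.asarray(U, float)
    p = np.where(grad_fk > 0.0, (U - xk)**2 * grad_fk, 0.0)    # positive-gradient terms
    q = np.where(grad_fk < 0.0, -(xk - L)**2 * grad_fk, 0.0)   # negative-gradient terms
    r = fk - np.sum(p / (U - xk) + q / (xk - L))
    def f_tilde(x):
        x = np.asarray(x, float)
        return r + np.sum(p / (U - x) + q / (x - L))
    return f_tilde

# Example with hypothetical data: the approximation reproduces f(xk) at the expansion point
f_tilde = mma_approximation(fk=2.0, grad_fk=[1.0, -3.0], xk=[0.5, 0.5],
                            L=[0.0, 0.0], U=[1.0, 1.0])
print(f_tilde([0.5, 0.5]))   # 2.0
```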
Method of Moving Asymptotes (MMA) Cont.

• Simplification of the second derivative at $x_j = x_j^k$:
$$\frac{\partial^2 \tilde{f}_i^k}{\partial x_j^2} = \begin{cases} \dfrac{2\, \partial f_i/\partial x_j}{U_j^k - x_j^k}, & \text{if } \dfrac{\partial f_i}{\partial x_j} > 0 \\[2mm] -\dfrac{2\, \partial f_i/\partial x_j}{x_j^k - L_j^k}, & \text{if } \dfrac{\partial f_i}{\partial x_j} < 0 \end{cases}$$
• With a change in the asymptotes, the curvature of the approximating function changes.
• The closer $L_j^k$ and $U_j^k$ are chosen to $x_j^k$, the larger the second derivatives become.
• More curvature means a more conservative function approximation.
• As $L_j^k \to -\infty$ and $U_j^k \to +\infty$, the MMA approximation tends to SLP.
• As $L_j^k \to 0$ and $U_j^k \to +\infty$, the MMA approximation tends to CONLIN.

A small numerical check of this effect is given below.

Figure: MMA approximation of an example function $g(x)$ around $x^0 = 1$ for different choices of the asymptotes [1].

[1] Peter W. Christensen, Anders Klarbring, An Introduction to Structural Optimization, Linköping, Sweden: Springer Science, 2009.
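A small numeric check of the statement above, assuming the single-variable case with $\partial f/\partial x > 0$ and hypothetical values: the curvature of the MMA approximation at $x^k$ grows as the upper asymptote is pulled closer.

```python
# Curvature of the MMA approximation at xk for df/dx > 0: 2*dfdx / (U - xk).
# Pulling the asymptote U closer to xk makes the approximation more conservative.
dfdx, xk = 4.0, 1.0
for U in (11.0, 3.0, 1.5):                     # asymptote moved progressively closer to xk
    curvature = 2.0 * dfdx / (U - xk)
    print(f"U = {U:5.1f}  ->  d2f~/dx2 at xk = {curvature:.3f}")
# U = 11.0 -> 0.800,  U = 3.0 -> 4.000,  U = 1.5 -> 16.000
```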
Method of Moving Asymptotes (MMA) Cont.

Guidelines for updating the asymptotes:

• For $k = 1$ and $k = 2$:
$$U_j^k = x_j^k + S_{\text{init}}\left(x_j^{\max} - x_j^{\min}\right), \qquad L_j^k = x_j^k - S_{\text{init}}\left(x_j^{\max} - x_j^{\min}\right), \qquad 0 < S_{\text{init}} < 1$$

• For $k > 2$:
$$U_j^k = x_j^k + S\left(U_j^{k-1} - x_j^{k-1}\right), \qquad L_j^k = x_j^k - S\left(x_j^{k-1} - L_j^{k-1}\right)$$
$$S = \begin{cases} S_{\text{slower}}, & \text{if } \left(x_j^k - x_j^{k-1}\right)\left(x_j^{k-1} - x_j^{k-2}\right) < 0 \\ S_{\text{faster}}, & \text{if } \left(x_j^k - x_j^{k-1}\right)\left(x_j^{k-1} - x_j^{k-2}\right) > 0 \\ 1, & \text{if } \left(x_j^k - x_j^{k-1}\right)\left(x_j^{k-1} - x_j^{k-2}\right) = 0 \end{cases}$$

• $0 < S_{\text{slower}} < 1$ → the asymptotes are brought closer to $x_j^k$; conservativeness increases.
• $S_{\text{faster}} > 1$ → the asymptotes are moved further away from $x_j^k$; conservativeness decreases.

A sketch of this update rule is given below.
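A minimal sketch of the update rules above. The numerical values of $S_{\text{init}}$, $S_{\text{slower}}$, $S_{\text{faster}}$ are assumptions consistent with common MMA implementations; they are not stated on the slide.

```python
import numpy as np

def update_asymptotes(k, x, x_prev, x_prev2, L_prev, U_prev, xmin, xmax,
                      s_init=0.5, s_slower=0.7, s_faster=1.2):
    """Heuristic moving-asymptote update (elementwise), per the rules above."""
    x, xmin, xmax = np.asarray(x, float), np.asarray(xmin, float), np.asarray(xmax, float)
    if k <= 2:
        L = x - s_init * (xmax - xmin)
        U = x + s_init * (xmax - xmin)
    else:
        sign = (x - x_prev) * (x_prev - x_prev2)              # oscillation indicator
        s = np.where(sign < 0, s_slower, np.where(sign > 0, s_faster, 1.0))
        L = x - s * (x_prev - L_prev)
        U = x + s * (U_prev - x_prev)
    return L, U

# Example: an oscillating variable gets its asymptotes pulled in (more conservative)
L, U = update_asymptotes(k=3, x=np.array([0.4]), x_prev=np.array([0.6]),
                         x_prev2=np.array([0.4]), L_prev=np.array([0.1]),
                         U_prev=np.array([1.1]), xmin=np.array([0.0]), xmax=np.array([1.0]))
print(L, U)   # [0.05] [0.75]
```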
Modifications in the Original MMA

Extended sub-problem:
$$\min_{\mathbf{x}, \mathbf{y}, z} \ \tilde{f}_0(\mathbf{x}) + a_0 z + \sum_{i=1}^{m}\left(c_i y_i + \tfrac{1}{2} d_i y_i^2\right)$$
$$\text{subject to } \tilde{f}_i(\mathbf{x}) - a_i z - y_i \le b_i, \quad i = 1,\dots,m,$$
$$x_j^{\min} \le x_j \le x_j^{\max}, \quad j = 1,\dots,n, \qquad z \ge 0, \quad y_i \ge 0, \quad i = 1,\dots,m$$
• $a_i$, $c_i$, and $d_i$ are non-negative real numbers.

Standard structural optimization form
$$P:\ \min_{\mathbf{x} \in \mathbb{R}^n} f_0(\mathbf{x}) \quad \text{subject to } f_i(\mathbf{x}) \le 0,\ i = 1,\dots,m, \quad \mathbf{x} \in \mathbb{S}, \qquad \mathbb{S} = \{\mathbf{x} \in \mathbb{R}^n : x_j^{\min} \le x_j \le x_j^{\max},\ j = 1,\dots,n\}$$
is recovered by choosing $a_0 = 1$, $a_i = 0$, $d_i = 0$, and $c_i$ = a large number for $i = 1,\dots,m$.
• The variables $y_i$ become positive to provide feasible solutions in the initial iterations.

Min-max problem
$$P:\ \min_{\mathbf{x}} \ \max_{i=1,\dots,p} h_i(\mathbf{x}) \quad \text{subject to } g_i(\mathbf{x}) \le 0,\ i = 1,\dots,q, \quad x_j^{\min} \le x_j \le x_j^{\max},\ j = 1,\dots,n$$
is handled by choosing $f_0(\mathbf{x}) = 0$, $m = p + q$, $f_i(\mathbf{x}) = h_i(\mathbf{x})$ for $i = 1,\dots,p$, $f_{p+i}(\mathbf{x}) = g_i(\mathbf{x})$ for $i = 1,\dots,q$, $a_0 = a_1 = \dots = a_p = 1$, $a_{p+1} = \dots = a_{p+q} = 0$, $d_i = 0$, and $c_i$ = a large number, which gives:
$$\min_{\mathbf{x}, z} \ z \quad \text{subject to } h_i(\mathbf{x}) - z \le 0,\ i = 1,\dots,p, \quad g_i(\mathbf{x}) \le 0,\ i = 1,\dots,q, \quad x_j^{\min} \le x_j \le x_j^{\max},\ j = 1,\dots,n, \quad z \ge 0$$
Modifications in the Original MMA Cont.

GCMMA (Globally Convergent MMA)
• A strictly positive term is added to each approximated function.
• This term is updated in each inner iteration, thus increasing the conservativeness of the approximation.
• Tested on the Rosenbrock function: it reduces oscillations but takes longer.

Algorithm outline (see the sketch below):
1. Initialize: $k = 1$, $\mathbf{x}^k = \mathbf{x}^{(1)}$, $\nu = 0$.
2. Outer iteration: set $\mathbf{x}^{(k,\nu)} = \mathbf{x}^k$, calculate $\tilde{f}_i^{(k,\nu)}$, then formulate and solve the sub-problem to get $\hat{\mathbf{x}}^{(k,\nu)}$.
3. Inner iteration: if $\tilde{f}_i^{(k,\nu)}(\hat{\mathbf{x}}^{(k,\nu)}) \ge f_i(\hat{\mathbf{x}}^{(k,\nu)})$ for all $i$, accept the candidate, set $\mathbf{x}^{k+1} = \hat{\mathbf{x}}^{(k,\nu)}$, $k = k + 1$, and stop if the convergence criterion is fulfilled; otherwise set $\nu = \nu + 1$, recalculate $\tilde{f}_i^{(k,\nu)}$, and formulate and solve the sub-problem again to get a new $\hat{\mathbf{x}}^{(k,\nu)}$.
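A minimal sketch of the outer/inner GCMMA loop shown in the flow chart. The sub-problem construction and solver are hypothetical placeholders (`build_approx`, `solve_subproblem`, and the `approx.evaluate` interface are assumptions, not the actual Svanberg code).

```python
import numpy as np

def gcmma(f, x, build_approx, solve_subproblem, tol=1e-6, max_outer=100, max_inner=20):
    """GCMMA outer/inner loop: a candidate x_hat is accepted only if every
    approximation is conservative at it, f~_i(x_hat) >= f_i(x_hat); otherwise
    the inner counter nu is increased so that build_approx returns a more
    conservative (larger rho) approximation, and the sub-problem is re-solved."""
    f_old = np.inf
    for k in range(1, max_outer + 1):
        for nu in range(max_inner):
            approx = build_approx(k, nu, x)                     # f~_i^(k,nu): MMA form + rho terms
            x_hat = solve_subproblem(approx)
            if np.all(approx.evaluate(x_hat) >= f(x_hat)):      # conservative -> accept candidate
                break
        x = x_hat
        f_new = f(x)[0]
        if abs(f_new - f_old) < tol:                            # convergence on objective change
            return x, k
        f_old = f_new
    return x, max_outer
```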
MMA Test on Benchmark Optimization Problems

• The code provided by Prof. Krister Svanberg was used for generating the sub-problems.
• A modified code implementing GCMMA was tested on the Rosenbrock function.
• The required function and gradient information is either provided explicitly or calculated using FEM and sensitivity analysis.
• The sub-problem is solved using a primal-dual interior point algorithm (code provided by Prof. Krister Svanberg).

The problems on which MMA is tested are of the following types:
1. Constrained function minimization problems
2. Structural engineering optimization problems
   I. Problems where gradient information is provided explicitly:
      a) with all design variables continuous
      b) with some design variables discrete
   II. Problems where gradient information is obtained using numerical techniques.
MMA Test on Benchmark Optimization Problems Cont.

Constrained function minimization problems

Problem 1: Rosenbrock function constrained with a cubic and a line
Minimize f(x) = (1 - x_1)^2 + 100 (x_2 - x_1^2)^2
subject to g_1: (x_1 - 1)^3 - x_2 + 1 <= 0
           g_2: x_1 + x_2 - 2 <= 0

Results:
            IC        MMA      GCMMA    Literature [1]
x_1         15.0000   1.0000   0.9999   1.0000
x_2         15.0000   1.0000   1.0000   1.0000
f(x)        -         0.0000   0.0000   0.0000

                Processor   No. of iterations   Computation time (s)
MMA             1.2 GHz     16                  0.2689
GCMMA           1.2 GHz     (17, 37)            0.5577
Literature [1]  600 MHz     50000               120      (Harmony Search)

Problem 2: Constraint function II
Minimize f(x) = (x_1^2 + x_2 - 11)^2 + (x_1 + x_2^2 - 7)^2
subject to g_1: (x_1 - 0.05)^2 + (x_2 - 2.5)^2 - 4.84 <= 0
           g_2: -x_1^2 - (x_2 - 2.5)^2 + 4.84 <= 0
           0 <= x_1 <= 6,  0 <= x_2 <= 6

Results:
            IC        Literature [1]   MMA
x_1         3.0000    2.2468           2.2468
x_2         3.0000    2.3821           2.3891
f(x)        -         13.5908          13.5910

                No. of iterations   Computation time (s)
MMA             37                  0.4736
Literature [1]  15000               -        (Harmony Search)

An independent cross-check of the optimum of Problem 1 is sketched below.

[1] Kang Seok Lee and Zong Woo Geem, "A New Meta-Heuristic Algorithm for Continuous Engineering Optimization: Harmony Search Theory and Practice," Computer Methods in Applied Mechanics and Engineering, vol. 194, pp. 3902-3933, 2005.
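The reported optimum of Problem 1 can be cross-checked independently of the MMA code; a minimal sketch with SciPy (not the solver used for the table) should converge to the known solution x = (1, 1), f = 0 from the assumed starting point below.

```python
import numpy as np
from scipy.optimize import minimize

rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
cons = (
    {"type": "ineq", "fun": lambda x: -((x[0] - 1)**3 - x[1] + 1)},  # g1(x) <= 0
    {"type": "ineq", "fun": lambda x: -(x[0] + x[1] - 2)},           # g2(x) <= 0
)
res = minimize(rosen, x0=[0.5, 0.5], method="SLSQP", constraints=cons)
print(res.x, res.fun)    # expected: close to (1, 1) with f ~ 0
```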
MMA Test on Benchmark Optimization Problems Cont.

Structural engineering optimization problem with all continuous variables: three-bar truss
• Explicit expressions for the constraints and gradients.
• Objective: minimize the weight.
• Design variables x_1, x_2: cross-sectional areas of the members.
• Constraints on the maximum stress in each bar: 3 nonlinear inequality constraints.
• l = 100 cm, P = 2 kN/cm^2, sigma = 2 kN/cm^2.

Results:
            Literature [1]   MMA        MMA (FEM)
x_1         0.7886           0.7880     0.7870
x_2         0.4840           0.4101     0.4130
f(x)        263.8958         263.9000   263.9000

                Processor   No. of iterations   Computation time (s)
MMA             1.2 GHz     19                  0.1779
MMA (FEM)       1.2 GHz     30                  0.5513
Literature [1]  1 GHz       17610               0.46     (simulation of social behaviour)

Figure: Three-bar truss [1].

[1] Tapabrata Ray and K. M. Liew, "Society and Civilization: An Optimization Algorithm Based on the Simulation of Social Behaviour," IEEE Transactions on Evolutionary Computation, vol. 7, no. 4, August 2003.
MMA Test on Benchmark Optimization Problems Cont.

Structural engineering optimization problem with all continuous variables: welded beam
• Objective: minimize the cost of welding.
• Four design variables: x_1 (h), x_2 (l), x_3 (t), and x_4 (b).
• Constraints on shear stress, bending stress, buckling load, deflection of the beam, and side constraints.
• 2 linear inequalities and 5 nonlinear inequalities.

Results:
            Literature [1]   MMA
x_1         0.2057           0.2057
x_2         3.4704           3.4705
x_3         9.0366           9.0366
x_4         0.2057           0.2057
f(x)        1.7248           1.7249

                No. of iterations   Computation time (s)
MMA             19                  12.242
Literature [1]  110000              -        (Harmony Search)

Figure: Welded beam design [2].

[1] Kang Seok Lee and Zong Woo Geem, "A New Meta-Heuristic Algorithm for Continuous Engineering Optimization: Harmony Search Theory and Practice," Computer Methods in Applied Mechanics and Engineering, vol. 194, pp. 3902-3933, 2005.
[2] Bahriye Akay and Dervis Karaboga, "Artificial Bee Colony Algorithm for Large-scale Problems and Engineering Design Optimization," Journal of Intelligent Manufacturing, vol. 23, pp. 1001-1014, 2012.
MMA Test on Benchmark Optimization Problems Cont.

Structural engineering optimization problem with some discrete variables: pressure vessel design
• Objective: minimize the total cost of material, forming, and welding.
• Design variables: x_1 (T_s, shell thickness), x_2 (T_H, head thickness), x_3 (R, inner radius) and x_4 (L, length of the cylindrical section of the vessel, not including the head).
• x_1 and x_2 are multiples of 0.0625.
• 3 linear inequalities and 1 nonlinear inequality.

Results:
            Literature [1]   MMA
x_1         0.8125           0.8125
x_2         0.4375           0.4375
x_3         42.0984          42.0980
x_4         176.6365         176.6400
f(x)        6059.7000        6059.7000

                No. of iterations   Computation time (s)
MMA             46                  2.8854
Literature [2]  30000               -        (teaching-learning-based optimization)

Figure: Schematic of the pressure vessel [1].

[1] Kang Seok Lee and Zong Woo Geem, "A New Meta-Heuristic Algorithm for Continuous Engineering Optimization: Harmony Search Theory and Practice," Computer Methods in Applied Mechanics and Engineering, vol. 194, pp. 3902-3933, 2005.
[2] R. V. Rao, V. J. Savsani, D. P. Vakharia, "Teaching-learning-based optimization: A novel method for constrained mechanical design optimization," Computer-Aided Design, vol. 43, pp. 303-315, 2011.
MMA Test on Benchmark Optimization Problems Cont.

Structural engineering optimization problem with some discrete variables: speed reducer design
• Objective: minimize the weight of the speed reducer.
• Subject to constraints on the bending stress of the gear teeth, surface stress, transverse deflections of the shafts, and stresses in the shafts.
• x_3 is an integer.
• 4 linear and 7 nonlinear inequalities.

Results:
            Literature [1]   MMA
x_1         3.4999           3.5000
x_2         0.7000           0.7000
x_3         17.0000          17.0000
x_4         7.3000           7.3000
x_5         7.8000           7.8000
x_6         3.3502           3.3502
x_7         5.2878           5.2867
f(x)        2997.0584        2995.6000

                Processor   No. of iterations   Computation time (s)
MMA             1.2 GHz     5                   0.0968
Literature [1]  1 GHz       54456               9.62     (simulation of social behaviour)

Figure: Schematic of the speed reducer [2].

[1] Tapabrata Ray and K. M. Liew, "Society and Civilization: An Optimization Algorithm Based on the Simulation of Social Behaviour," IEEE Transactions on Evolutionary Computation, vol. 7, no. 4, August 2003.
[2] Ming-Hua Lin, Jung-Fa Tsai, Nian-ze Hu and Shu-Chuan Chang, "Design Optimization of a Speed Reducer Using Deterministic Techniques," Mathematical Problems in Engineering, vol. 2, pp. 1-7, November 2013.
MMA Test on Benchmark Optimization Problems Cont.

Structural engineering optimization problem using FEM and sensitivity analysis: ten-bar truss
• Objective: minimize the weight.
• Stress limits of +/-25 ksi → 10 stress constraints.
• Nodal displacement limits of +/-2.0 in → 12 displacement constraints.

Results (cross-sectional areas):
            Literature [1]   MMA
x_1         23.2500          28.0900
x_2         12.2100          11.4100
x_3         25.7300          30.0000
x_4         12.6100          10.6000
x_5         0.1020           0.1000
x_6         0.1000           0.1000
x_7         20.3600          17.3100
x_8         14.5100          17.3100
x_9         0.1000           0.1000
x_10        1.9670           1.9710
f(x)        4668.0000        4802.0000

Element stresses (ksi) and nodal displacement components (in) at the obtained design:
Element   Stress (ksi)      No.   Displacement (in)
1          5.5696           1      0.0000
2         18.0903           2      0.0000
3         -8.2259           3      0.0000
4         -6.9234           4      0.0000
5         -7.2235           5      0.2005
6         24.9975           6     -0.6990
7          8.2533           7     -0.2961
8         -5.8359           8     -1.5989
9         10.2156           9     -0.0595
10        24.9999           10    -1.1000
                            11    -0.5062
                            12    -2.0000

                Processor   No. of iterations   Computation time (s)
MMA             1.2 GHz     17                  0.6832
Literature [1]  600 MHz     20000               180      (Harmony Search)

Figure: Ten-bar truss [1].

[1] Kang Seok Lee and Zong Woo Geem, "A New Meta-Heuristic Algorithm for Continuous Engineering Optimization: Harmony Search Theory and Practice," Computer Methods in Applied Mechanics and Engineering, vol. 194, pp. 3902-3933, 2005.
Conclusion

• MMA gives a convex, separable and conservative approximation.
• SLP and CONLIN are special cases of MMA.
• The GCMMA implementation on the Rosenbrock function shows that GCMMA is slower but reduces oscillations.
• MMA takes the special characteristics of structural optimization problems into consideration.
• It needs far fewer iterations than evolutionary algorithms.
• It can be combined with FEM and sensitivity analysis to solve large-scale problems.
• Since FEM can provide the required function and gradient information, MMA can be applied to topology optimization problems.
Appendix

Convex Programming
• If the objective function and all constraint functions in problem $P$ are convex, then problem $P$ is said to be convex.
• The following definitions apply:
  – A set $\mathcal{S} \subset \mathbb{R}^n$ is convex if for all $\mathbf{x}^1, \mathbf{x}^2 \in \mathcal{S}$ and all $\lambda \in (0,1)$ it holds that
$$\lambda \mathbf{x}^1 + (1-\lambda)\mathbf{x}^2 \in \mathcal{S}$$
  – A function $f: \mathcal{S} \to \mathbb{R}$ is convex (on the convex set $\mathcal{S}$) if, for all $\mathbf{x}^1, \mathbf{x}^2 \in \mathcal{S}$ with $\mathbf{x}^1 \ne \mathbf{x}^2$ and $\lambda \in (0,1)$, it holds that
$$f\left(\lambda \mathbf{x}^1 + (1-\lambda)\mathbf{x}^2\right) \le \lambda f(\mathbf{x}^1) + (1-\lambda) f(\mathbf{x}^2)$$

Figures: convex set definition [1]; convex function definition [1].

[1] Peter W. Christensen, Anders Klarbring, An Introduction to Structural Optimization, Linköping, Sweden: Springer Science, 2009.
Convex Programming Cont.

• For a twice differentiable function, convexity can be determined by examining its Hessian $\nabla^2 f(\mathbf{x})$:
$$\nabla^2 f(\mathbf{x}) = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1 \partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_1 \partial x_n} \\ \dfrac{\partial^2 f}{\partial x_2 \partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} & \cdots & \dfrac{\partial^2 f}{\partial x_2 \partial x_n} \\ \vdots & & \ddots & \vdots \\ \dfrac{\partial^2 f}{\partial x_n \partial x_1} & \dfrac{\partial^2 f}{\partial x_n \partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2} \end{bmatrix}$$
• The Hessian has to be positive semidefinite: $\mathbf{y}^T \mathbf{A} \mathbf{y} \ge 0$ for all $\mathbf{y} \in \mathbb{R}^n$, with $\mathbf{A} \in \mathbb{R}^{n \times n}$. A minimal numerical check is sketched below.
• If the approximated function is separable, the off-diagonal terms are zero, so only the diagonal terms have to be non-negative.
• For a convex problem, a local minimum is also a global minimum.
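A minimal sketch of the positive-semidefiniteness check via eigenvalues (for a symmetric Hessian).

```python
import numpy as np

def is_positive_semidefinite(H, tol=1e-10):
    """A symmetric matrix is positive semidefinite iff all eigenvalues are >= 0."""
    eigvals = np.linalg.eigvalsh(np.asarray(H, float))   # eigenvalues of a symmetric matrix
    return bool(np.all(eigvals >= -tol))

# Hessian of f(x) = x1^2 + x2^2 (convex) vs. f(x) = x1^2 - x2^2 (not convex)
print(is_positive_semidefinite([[2.0, 0.0], [0.0, 2.0]]))    # True
print(is_positive_semidefinite([[2.0, 0.0], [0.0, -2.0]]))   # False
```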
Conservative Function

• Considered for sub-problems with "less than or equal to" type constraints.
• A conservative approximation scheme predicts a higher value than the true function, i.e. it overestimates the true function.
• The optimal point of the sub-problem will then always lie inside the true feasible domain.
• Comparison of two approximation schemes: if $\tilde{f}_1(\mathbf{x}) - \tilde{f}_2(\mathbf{x}) > 0$, then $\tilde{f}_1$ is the more conservative one.
• Conservativeness is also governed by the curvature of the approximating function, i.e. its second derivative: a higher second derivative means higher curvature and a more conservative approximation.

Figure: Conservative function design space [1].

[1] Peter W. Christensen, Anders Klarbring, An Introduction to Structural Optimization, Linköping, Sweden: Springer Science, 2009.
GCMMA

$$\tilde{f}_i^{(k,\nu)}(\mathbf{x}) = r_i^{(k,\nu)} + \sum_{j=1}^{n} \left( \frac{p_{ij}^{(k,\nu)}}{U_j^k - x_j} + \frac{q_{ij}^{(k,\nu)}}{x_j - L_j^k} \right)$$

$$p_{ij}^{(k,\nu)} = \begin{cases} \left(U_j^k - x_j^k\right)^2 \left( \dfrac{\partial f_i}{\partial x_j} + \dfrac{\rho_i^{(k,\nu)}}{x_j^{\max} - x_j^{\min}} \right), & \text{if } \dfrac{\partial f_i}{\partial x_j} > 0 \\ 0, & \text{if } \dfrac{\partial f_i}{\partial x_j} \le 0 \end{cases}$$

$$q_{ij}^{(k,\nu)} = \begin{cases} 0, & \text{if } \dfrac{\partial f_i}{\partial x_j} \ge 0 \\ \left(x_j^k - L_j^k\right)^2 \left( -\dfrac{\partial f_i}{\partial x_j} + \dfrac{\rho_i^{(k,\nu)}}{x_j^{\max} - x_j^{\min}} \right), & \text{if } \dfrac{\partial f_i}{\partial x_j} < 0 \end{cases}$$

$$r_i^{(k,\nu)} = f_i(\mathbf{x}^k) - \sum_{j=1}^{n} \left( \frac{p_{ij}^{(k,\nu)}}{U_j^k - x_j^k} + \frac{q_{ij}^{(k,\nu)}}{x_j^k - L_j^k} \right)$$
Adjoint Method for Sensitivity Analysis

• The gradients of the objective and constraint functions are calculated as
$$\frac{df}{dx_j} = \frac{\partial f}{\partial x_j} - \boldsymbol{\lambda}^T \frac{\partial \mathbf{K}}{\partial x_j} \mathbf{U} + \boldsymbol{\lambda}^T \frac{d\mathbf{F}}{dx_j}, \qquad j = 1,\dots,n,$$
where the adjoint vector $\boldsymbol{\lambda}$ is obtained from
$$\mathbf{K}(\mathbf{x}^k)\, \boldsymbol{\lambda} = \left( \frac{\partial f}{\partial \mathbf{U}} \right)^T.$$
• $\dfrac{\partial f}{\partial x_j}$ is zero if the function is not an explicit expression of the design variable.
• $\dfrac{d\mathbf{F}}{dx_j}$ is zero if the forcing condition is not an explicit expression of the design variable.
• For a stress constraint on a truss element,
$$\frac{\partial f_i}{\partial \mathbf{U}} = \frac{\partial}{\partial \mathbf{U}} \left( \frac{E}{L} \begin{bmatrix} -C & -S & C & S \end{bmatrix} \{U^e\} \right).$$
• In general,
$$\frac{\partial f_i}{\partial \mathbf{U}} = \left[ \frac{\partial f_i}{\partial U_1}, \frac{\partial f_i}{\partial U_2}, \dots, \frac{\partial f_i}{\partial U_{n_{dof}}} \right], \qquad \text{and for a displacement constraint} \qquad \frac{\partial f_i}{\partial U_k} = \begin{cases} 1, & f_i \text{ is a constraint on } U_k \\ 0, & \text{elsewhere.} \end{cases}$$

A sketch of this computation is given below.
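A minimal sketch of the adjoint computation for a generic discretized structure: one adjoint solve per function, then a loop over the design variables. K, the dK/dx_j matrices, U, and df/dU are assumed to come from the FE model; dF/dx is taken as zero here, and the tiny 2-DOF data are made up for illustration only.

```python
import numpy as np

def adjoint_sensitivities(K, dK_dx, U, df_dx_explicit, df_dU, dF_dx=None):
    """df/dx_j = df/dx_j (explicit) - lam^T (dK/dx_j) U + lam^T dF/dx_j,
    with K lam = (df/dU)^T (K assumed symmetric, as for a linear FE model).

    K: (ndof, ndof); dK_dx: list of (ndof, ndof), one per design variable;
    U: (ndof,) displacements; df_dU: (ndof,); df_dx_explicit: (n,) explicit part.
    """
    lam = np.linalg.solve(K, df_dU)                            # adjoint solve
    n = len(dK_dx)
    dF_dx = [np.zeros_like(U)] * n if dF_dx is None else dF_dx
    return np.array([df_dx_explicit[j] - lam @ (dK_dx[j] @ U) + lam @ dF_dx[j]
                     for j in range(n)])

# Tiny 2-DOF illustration with made-up matrices (not a real truss model)
K = np.array([[4.0, -1.0], [-1.0, 3.0]])
dK_dx = [np.array([[1.0, 0.0], [0.0, 0.0]]), np.array([[0.0, 0.0], [0.0, 1.0]])]
U = np.linalg.solve(K, np.array([1.0, 0.0]))                   # F = [1, 0]
df_dU = np.array([1.0, 0.0])                                   # constraint on U_1
print(adjoint_sensitivities(K, dK_dx, U, df_dx_explicit=np.zeros(2), df_dU=df_dU))
```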
