
Multidisciplinary Design Optimisation

Introduction:
The field of engineering that addresses the design and optimization of systems spanning several disciplines is referred to as Multidisciplinary Design Optimization (MDO). It focuses on the design and optimisation of complex engineering systems, emphasizing the integration of multiple disciplines or subsystems.
Its history goes back to the 1960s, when the pioneering work of Schmit in structural optimisation paved the way for the emergence of multidisciplinary design optimization. Jaroslaw Sobieski later developed decomposition methods specifically designed for MDO applications.
MDO has found use in a wide range of engineering domains, such as aerospace, automotive, and civil engineering. It is now a crucial component of the design process for complex systems, guiding engineers and designers through the difficulties of integrating various subsystems to produce the best possible overall system performance.

Benefits of MDO
MDO produces more efficient solutions by integrating multiple disciplines and optimizing their parameters simultaneously. It accelerates the design and development phases of a product by streamlining and parallelizing the design process. It creates a flexible design framework that can accommodate quick changes and adjustments as requirements evolve, without disrupting the overall design integrity. Because engineers can work on different aspects of the design in parallel, iteration is faster and progress towards optimal solutions is quicker, reducing the overall design and rectification time.

Definitions
Accurate solution (accuracy): a solution that agrees well with the actual physical system (validation).
Precise solution (precision): a solution indicating that the model is correctly programmed and solved (verification).
Implicit function: a function in which the dependent and independent variables are not expressed directly in terms of each other.
Explicit function: a function in which the dependent variable is defined directly in terms of the independent variables.
Symmetric positive definite matrix: a symmetric matrix whose eigenvalues are all positive.
Singular matrix: a matrix whose determinant is equal to zero.
Sparse matrix: a matrix in which a significant number of elements are zero.

Linear Solvers:
An understanding of solvers is crucial, as the solver affects the cost and precision of the function evaluations in the optimisation process. Solvers fall into two types according to the equations they handle: linear and non-linear.
Linear system of equations
r(x) = Ax − b = 0
where A = matrix, independent of the state x
b = vector, independent of the state x
x = state variable

Classification of Linear solvers


Linear solvers are further classified into direct methods and iterative methods. Direct methods are well established and widespread in standard libraries; they are reliable and robust.

LU Factorisation:
LU factorisation is the matrix form of the Gaussian elimination method. A matrix A is factorised into A = LU, where L and U are lower and upper triangular matrices respectively.
Consider
Ax − b = 0
Let A = LU, so
LUx − b = 0
Let Ux = y, so that Ly = b. Forward substitution solves Ly = b:

y_i = ( b_i − Σ_{j=1..i−1} L_ij y_j ) / L_ii

and back substitution then solves Ux = y:

x_i = ( y_i − Σ_{j=i+1..n} U_ij x_j ) / U_ii

With pivoting, this method is numerically stable, and it is computationally efficient. A special case of LU factorisation is Cholesky factorisation, which applies when the matrix is symmetric positive definite. In its square-root-free form the factorisation can be written as

A = LDL^T

where D = diagonal matrix and L is unit lower triangular.
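The factor-then-substitute procedure above can be sketched in pure Python. The Doolittle variant below assumes non-zero pivots (no pivoting), and all function names are illustrative; in practice a library routine such as scipy.linalg.lu_factor, which adds pivoting, would be used.

```python
def lu_factor(A):
    """Return (L, U) with A = L U, L unit lower triangular (Doolittle, no pivoting)."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):          # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):      # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    # Forward substitution: L y = b
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    # Back substitution: U x = y
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[4.0, 3.0], [6.0, 3.0]]
b = [10.0, 12.0]
L, U = lu_factor(A)
x = lu_solve(L, U, b)   # solves A x = b, giving x = [1.0, 2.0]
```

Once L and U are computed, systems with the same A but different right-hand sides b can be solved cheaply by repeating only the two substitution passes.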

QR Factorization:
QR factorization is a matrix decomposition method wherein the matrix A is represented as the product of an orthogonal matrix Q (whose columns are orthonormal) and an upper triangular matrix R:
A = QR
The factorisation can be computed using the Gram–Schmidt process.
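A minimal pure-Python sketch of the classical Gram–Schmidt construction, assuming the columns of A are linearly independent (names are illustrative; A is stored as a list of columns):

```python
import math

def qr_gram_schmidt(A):
    """A given as a list of columns; returns (Q, R) with A = Q R."""
    n = len(A)
    Q, R = [], [[0.0] * n for _ in range(n)]
    for j, a in enumerate(A):
        v = list(a)
        for i, q in enumerate(Q):
            # Projection coefficient of column a on the i-th orthonormal vector
            R[i][j] = sum(qk * ak for qk, ak in zip(q, a))
            v = [vk - R[i][j] * qk for vk, qk in zip(v, q)]
        R[j][j] = math.sqrt(sum(vk * vk for vk in v))   # norm of the residual
        Q.append([vk / R[j][j] for vk in v])            # normalise
    return Q, R

A = [[3.0, 4.0], [2.0, 1.0]]   # columns (3, 4) and (2, 1)
Q, R = qr_gram_schmidt(A)      # Q = [(0.6, 0.8), (0.8, -0.6)], R upper triangular
```

The modified Gram–Schmidt variant, which projects against the partially reduced vector v rather than the original column a, is numerically more robust and is usually preferred.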

Iterative Method:
In iterative methods, the solution is reached by continuously refining an initial approximation. These methods yield an approximate solution whenever they are stopped. Iterative methods can be more efficient than direct methods when the matrix A is large and sparse. They are further classified into fixed-point methods and Krylov methods.

Fixed Point Methods:


A fixed-point method starts from an initial guess x_0 and generates each iterate from the preceding one. The process is continued until the convergence criterion is met.
Consider the iteration

x_{k+1} = G(x_k), k = 0, 1, 2, …

x_0 – initial guess
G(x) – devised so that the iterates converge to the solution.

Fixed-point methods can be derived by splitting the matrix as

A = M − N ------ (1)


Let’s consider the linear system
Ax = b
Substituting (1),
Mx = Nx + b
x = M⁻¹(Nx + b) ------ (2)
Substituting Nx = Mx − Ax gives
x_{k+1} = x_k + M⁻¹(b − A x_k)
Defining the residual at iteration k as

r(x_k) = b − A x_k

we arrive at

x_{k+1} = x_k + M⁻¹ r(x_k)

Jacobi Method:
The Jacobi method is obtained by taking M = D, a diagonal matrix whose diagonal entries are those of A:

x_{k+1} = x_k + D⁻¹ r(x_k)

At each iteration, each component of the solution vector x is updated by the formula

x_i^{k+1} = ( b_i − Σ_{j≠i} A_ij x_j^k ) / A_ii
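The component formula can be sketched as follows; the example system, tolerance, and iteration cap are illustrative choices, and convergence is guaranteed here because A is strictly diagonally dominant:

```python
def jacobi(A, b, tol=1e-10, max_iter=500):
    n = len(b)
    x = [0.0] * n                      # initial guess x_0 = 0
    for _ in range(max_iter):
        # Every component is updated from the *previous* iterate only
        x_new = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
        if max(abs(xn - xo) for xn, xo in zip(x_new, x)) < tol:  # convergence test
            return x_new
        x = x_new
    return x

A = [[4.0, 1.0], [2.0, 3.0]]           # strictly diagonally dominant
b = [1.0, 2.0]
x = jacobi(A, b)                       # converges to [0.1, 0.6]
```

Because all components are updated from the previous iterate, the method parallelises naturally across components.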

Gauss – Seidel Method:


The Gauss–Seidel method is similar to the Jacobi method, but the matrix M is taken as E, the lower triangular portion of A (including the diagonal):

x_{k+1} = x_k + E⁻¹ r(x_k)

It follows the component form

x_i^{k+1} = ( b_i − Σ_{j<i} A_ij x_j^{k+1} − Σ_{j>i} A_ij x_j^k ) / A_ii

The method updates each component of x using the most recently computed values.
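A pure-Python sketch of the component form; x is overwritten in place, so updates for j < i already use the new values. The example system and tolerance are illustrative choices:

```python
def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    n = len(b)
    x = [0.0] * n                                  # initial guess x_0 = 0
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            s_new = sum(A[i][j] * x[j] for j in range(i))         # fresh values, j < i
            s_old = sum(A[i][j] * x[j] for j in range(i + 1, n))  # old values, j > i
            xi = (b[i] - s_new - s_old) / A[i][i]
            diff = max(diff, abs(xi - x[i]))
            x[i] = xi                              # overwrite in place
        if diff < tol:
            break
    return x

A = [[4.0, 1.0], [2.0, 3.0]]                       # strictly diagonally dominant
b = [1.0, 2.0]
x = gauss_seidel(A, b)                             # converges to [0.1, 0.6]
```

Using fresh values within a sweep typically makes Gauss–Seidel converge in fewer iterations than Jacobi on the same system, at the cost of a sequential inner loop.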

SOR – Successive Over Relaxation


The successive over-relaxation (SOR) method uses an update that is a weighted average of the previous iterate and the Gauss–Seidel update,

x_{k+1} = (1 − ω) x_k + ω x_{GS}

where
ω – relaxation factor (a scalar between 1 and 2 for over-relaxation)
It follows the component form

x_i^{k+1} = (1 − ω) x_i^k + ω ( b_i − Σ_{j<i} A_ij x_j^{k+1} − Σ_{j>i} A_ij x_j^k ) / A_ii
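A sketch of the component form; the relaxation factor ω = 1.2 and the small test system are illustrative choices (ω = 1 recovers Gauss–Seidel):

```python
def sor(A, b, omega=1.2, tol=1e-10, max_iter=500):
    """SOR: blend the old value and the Gauss-Seidel update with weight omega."""
    n = len(b)
    x = [0.0] * n                                  # initial guess x_0 = 0
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_gs = (b[i] - s) / A[i][i]            # Gauss-Seidel update
            xi = (1 - omega) * x[i] + omega * x_gs # relaxed (weighted) update
            diff = max(diff, abs(xi - x[i]))
            x[i] = xi
        if diff < tol:
            break
    return x

A = [[4.0, 1.0], [2.0, 3.0]]
b = [1.0, 2.0]
x = sor(A, b)                                      # converges to [0.1, 0.6]
```

Choosing a good ω is problem dependent; an over-relaxed sweep can accelerate convergence substantially when ω is close to its optimal value.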
