
UNIT – I

1. Design Constraints

In optimization, design constraints are the essential boundaries or limits that the solution
must satisfy to be feasible in practical scenarios. These constraints arise from physical,
economic, or operational limitations in real-world problems.

🔷 Definition:

Constraints are mathematical expressions representing conditions that solutions must satisfy.
They are typically expressed as:

 Equality Constraints: h_i(x) = 0
 Inequality Constraints: g_j(x) ≤ 0

🔷 Types of Constraints:

1. Physical Constraints:
o Due to limitations in size, shape, or strength of materials.
o Example: A beam’s deflection must be below a certain value.
2. Economic Constraints:
o Related to budget, cost limitations, or resource availability.
o Example: Total production cost should not exceed ₹50,000.
3. Performance Constraints:
o Related to performance criteria like speed, output, efficiency.
o Example: An engine must produce at least 200 HP.
4. Operational Constraints:
o Imposed due to process or system operations.
o Example: Maximum load that can be transported.
5. Environmental Constraints:
o Related to pollution, emissions, and sustainability goals.
o Example: CO₂ emissions must not exceed 100 g/km.

🔷 Mathematical Representation:

If x = (x_1, x_2, ..., x_n) is the vector of decision variables:

 Objective function: f(x), to be minimized or maximized.
 Constraints:
o g_j(x) ≤ 0 → inequality constraints
o h_i(x) = 0 → equality constraints

🔷 Role of Constraints in Optimization:

 Define the feasible region where solutions exist.


 Prevent unachievable or impractical solutions.
 Improve the realism and relevance of the optimization model.

🔷 Example:

Problem: Minimize cost f(x) = 4x_1 + 3x_2

Subject to:

 Material limit: 2x_1 + x_2 ≤ 100
 Time limit: x_1 + 2x_2 ≤ 80
 Non-negativity: x_1, x_2 ≥ 0

Here, the constraints define the permissible region in which the cost function is minimized.
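As a quick check, this small LP can be handed to SciPy's linprog (a minimal sketch, assuming SciPy is available; note that with positive costs and only ≤ constraints, the minimum sits at the origin):

```python
from scipy.optimize import linprog

# Minimize f = 4*x1 + 3*x2 subject to the material and time limits above.
res = linprog(c=[4, 3],
              A_ub=[[2, 1],    # material: 2*x1 + x2 <= 100
                    [1, 2]],   # time:     x1 + 2*x2 <= 80
              b_ub=[100, 80],
              bounds=[(0, None), (0, None)])   # non-negativity
print(res.x, res.fun)   # [0. 0.] 0.0 -- the feasible region contains the origin
```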

2. Classification of Optimization Problems

Optimization problems can be categorized in various ways based on the nature of the
objective function, variables, constraints, etc.

🔷 Based on Objective Function:

1. Linear Programming (LP):


o Objective function and constraints are linear.
o Example: Maximize Z = 3x + 4y
2. Nonlinear Programming (NLP):
o Either the objective function or constraints (or both) are nonlinear.
o Example: Minimize Z = x^2 + y^2

🔷 Based on Constraints:

1. Constrained Optimization:
o Subject to equality or inequality constraints.
o Example: minimize f(x) s.t. g(x) ≤ 0
2. Unconstrained Optimization:
o No constraints are present.
o Example: Minimize f(x) = x^2 − 3x + 5

🔷 Based on Variables:

1. Continuous Optimization:
o Variables can take any real value.
o Example: x ∈ ℝ
2. Integer Optimization:
o Variables take only integer values.
o Example: x ∈ ℤ
3. Mixed-Integer Optimization:
o Some variables are integers, others are continuous.
🔷 Based on Determinism:

1. Deterministic Optimization:
o All inputs and parameters are known and fixed.
2. Stochastic Optimization:
o Some data is uncertain or random.

🔷 Based on Time Dependency:

1. Static Optimization:
o Conditions do not change with time.
2. Dynamic Optimization:
o Conditions vary over time (e.g., inventory levels).

🔷 Summary Table:

Criteria Type
Objective Function Linear / Nonlinear
Constraints Constrained / Unconstrained
Variables Continuous / Integer / Mixed
Data Deterministic / Stochastic
Time Static / Dynamic

Understanding the classification helps in selecting suitable algorithms and solvers.

3. Multivariable Optimization Without Constraints

This involves optimizing (maximizing or minimizing) a function of multiple variables
without any restrictions on the variables.

🔷 Problem Form:

Given a function f(x_1, x_2, ..., x_n), find the values of the x_i that minimize or maximize f.

🔷 First-Order Necessary Conditions:

 Compute the partial derivatives of f with respect to each variable.
 Set each derivative to zero:

∂f/∂x_1 = 0, ∂f/∂x_2 = 0, ..., ∂f/∂x_n = 0

 Solve the resulting system to obtain the critical points.

🔷 Second-Order Sufficient Condition (Hessian Test):


 Compute the Hessian matrix H, whose (i, j) entry is the second partial derivative ∂²f/∂x_i∂x_j:

H = [ ∂²f/∂x_1²      ⋯   ∂²f/∂x_1∂x_n
      ⋮              ⋱   ⋮
      ∂²f/∂x_n∂x_1   ⋯   ∂²f/∂x_n²    ]

 Evaluate definiteness:
o Positive definite: Local minimum.
o Negative definite: Local maximum.
o Indefinite: Saddle point.

🔷 Example:

Minimize f(x, y) = x^2 + y^2 − 4x − 6y + 13

Step 1:
∂f/∂x = 2x − 4 = 0 ⇒ x = 2
∂f/∂y = 2y − 6 = 0 ⇒ y = 3

Step 2:
Hessian H = [ 2  0 ; 0  2 ] → positive definite → minimum at (2, 3)

Value of the function: f(2, 3) = 2^2 + 3^2 − 4×2 − 6×3 + 13 = 4 + 9 − 8 − 18 + 13 = 0
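Both steps can be reproduced symbolically (a minimal SymPy sketch, assuming SymPy is available):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2 - 4*x - 6*y + 13

# Step 1: first-order conditions, solve grad f = 0 for the critical point.
crit = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y])
print(crit)                        # {x: 2, y: 3}

# Step 2: check definiteness of the Hessian at the critical point.
H = sp.hessian(f, (x, y))
print(H, H.is_positive_definite)   # Matrix([[2, 0], [0, 2]]) True
print(f.subs(crit))                # 0
```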

4. Solution by Method of Lagrange Multipliers

This method is used to solve constrained optimization problems with equality constraints.

🔷 Problem:

Maximize/minimize f(x, y) subject to g(x, y) = 0

🔷 Steps:

1. Form the Lagrangian:

L(x, y, λ) = f(x, y) + λ·g(x, y)

2. Set all partial derivatives to zero:

∂L/∂x = 0, ∂L/∂y = 0, ∂L/∂λ = 0

3. Solve the resulting equations simultaneously.

🔷 Example:

Maximize f(x, y) = xy

Subject to x + y = 10

Step 1:
Lagrangian: L(x, y, λ) = xy + λ(10 − x − y)

Step 2:
∂L/∂x = y − λ = 0 ⇒ y = λ
∂L/∂y = x − λ = 0 ⇒ x = λ
∂L/∂λ = 10 − x − y = 0

Step 3:
Since x = y, the constraint gives 2x = 10 ⇒ x = 5, y = 5

Answer: Maximum at (5, 5), where f = 25
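The three stationarity equations of Step 2 can also be solved symbolically (a minimal SymPy sketch):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam')
L = x*y + lam*(10 - x - y)          # Lagrangian for: max xy s.t. x + y = 10

eqs = [sp.diff(L, v) for v in (x, y, lam)]   # dL/dx, dL/dy, dL/dlam
print(sp.solve(eqs, [x, y, lam]))            # x = y = 5, lam = 5 -> f = 25
```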

5. Karush–Kuhn–Tucker (KKT) Conditions

KKT conditions generalize the Lagrange multipliers method to handle inequality
constraints.

🔷 Problem:

Minimize f(x)
Subject to: g_i(x) ≤ 0, h_j(x) = 0

🔷 KKT Conditions:

1. Stationarity:
∇f(x*) + Σ λ_i ∇g_i(x*) + Σ μ_j ∇h_j(x*) = 0
2. Primal Feasibility:
g_i(x*) ≤ 0, h_j(x*) = 0
3. Dual Feasibility:
λ_i ≥ 0
4. Complementary Slackness:
λ_i · g_i(x*) = 0

🔷 Geometrical Interpretation:

 At the optimal point, the gradient of the objective function is balanced by the gradients of the active constraints.
 Only the active constraints (those satisfied with equality at the optimum) contribute to the solution.

🔷 Example:

Minimize f(x) = x^2

Subject to x ≥ 3

Rewriting the constraint: −x + 3 ≤ 0

Lagrangian: L(x, λ) = x^2 + λ(−x + 3)

Stationarity: dL/dx = 2x − λ = 0 ⇒ λ = 2x

Primal feasibility: x ≥ 3

Complementary slackness: λ(−x + 3) = 0

Assume the constraint is active: x = 3. Then λ = 6 ≥ 0 and the slackness product is 0.

Hence all KKT conditions are satisfied → minimum at x = 3
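The same conclusion can be checked numerically; SciPy's SLSQP solver enforces the KKT conditions internally (a minimal sketch; note SciPy's 'ineq' convention is fun(x) ≥ 0, hence x − 3 below):

```python
from scipy.optimize import minimize

# Minimize x^2 subject to x >= 3.
res = minimize(lambda x: x[0] ** 2, x0=[0.0],
               constraints=[{'type': 'ineq', 'fun': lambda x: x[0] - 3}])
print(res.x)   # [3.] -- the constraint is active at the optimum
```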

UNIT – II

1. Geometry of Linear Programming Problems

Linear Programming (LP) involves optimizing a linear objective function subject to linear
constraints. The geometric interpretation offers visual insight into LP solutions, especially in
two-variable problems.

🔷 LP Problem Structure:

Maximize or Minimize: Z = c_1x_1 + c_2x_2
Subject to:
a_11x_1 + a_12x_2 ≤ b_1
a_21x_1 + a_22x_2 ≤ b_2
x_1, x_2 ≥ 0

🔷 Geometric Concepts:
1. Feasible Region:
o Formed by the intersection of all constraint inequalities.
o A convex polygon (or polyhedron in 3D).
2. Objective Function Line:
o Lines of constant Z value: Z = c_1x_1 + c_2x_2
o Moved in the direction of increasing/decreasing Z to find the optimal point.
3. Corner Point Theorem:
o Optimal solution lies at a vertex (corner point) of the feasible region.
o Evaluate Z at all vertices to find the maximum or minimum.

🔷 Graphical Example:

Maximize Z = 3x + 4y

Subject to:

 x + y ≤ 4
 x ≤ 2
 y ≤ 3
 x, y ≥ 0

The feasible region is bounded, with vertices (0,0), (0,3), (1,3), (2,2), (2,0).

Evaluating Z at these vertices gives 0, 12, 15, 14, and 6, so the maximum is Z = 15 at (1,3).
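The corner-point theorem reduces the whole problem to this short enumeration (a minimal sketch; the vertex list is the one derived above):

```python
# Evaluate Z = 3x + 4y at every vertex of the feasible region.
vertices = [(0, 0), (0, 3), (1, 3), (2, 2), (2, 0)]
for x, y in vertices:
    print((x, y), 3 * x + 4 * y)   # the largest value, Z = 15, occurs at (1, 3)
```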

🔷 Properties:

 Feasible region is convex.


 Any local optimum is also a global optimum.
 Infinitely many optimal solutions are possible if the objective function is parallel to a constraint edge.

2. Pivotal Reduction of a General System of Equations

This is a key step in solving LP problems using the Simplex Method by converting equations
to canonical (pivoted) form.

🔷 Purpose:

 Transform the system into a basic feasible solution using pivot operations.
 Represent the LP problem in a tableau form.

🔷 Steps:

1. Convert inequalities to equalities using slack/surplus variables:

o a_1x_1 + a_2x_2 ≤ b ⇒ a_1x_1 + a_2x_2 + s = b, with s ≥ 0
2. Set up an initial simplex tableau.
3. Use Gauss-Jordan pivoting to:
o Make the pivot element 1.
o Make all other entries in the pivot column 0.

🔷 Pivot Element:

 The element around which the tableau is adjusted.


 Chosen from the column with the most negative coefficient in the Z-row (for maximization).

🔷 Example:

Solve: Maximize Z = 3x + 2y

Subject to:
x + 2y ≤ 6
3x + 2y ≤ 12

Add slack variables:
x + 2y + s_1 = 6
3x + 2y + s_2 = 12

Initial Tableau formed. Perform row operations to obtain pivot form and proceed with
simplex iterations.
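A single Gauss-Jordan pivot can be sketched with NumPy (illustrative only; `pivot` is a hypothetical helper, and the row/column chosen here brings x into the basis):

```python
import numpy as np

def pivot(T, row, col):
    """Make T[row, col] equal to 1, then zero out the rest of its column."""
    T = T.astype(float).copy()
    T[row] /= T[row, col]                 # scale the pivot row
    for i in range(len(T)):
        if i != row:
            T[i] -= T[i, col] * T[row]    # eliminate from every other row
    return T

# Augmented system from the example: columns are [x, y, s1, s2 | RHS].
T = np.array([[1, 2, 1, 0, 6],
              [3, 2, 0, 1, 12]])
print(pivot(T, row=1, col=0))   # x enters the basis via the second row
```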

3. Simplex Algorithm

The Simplex Method is a powerful iterative procedure for solving linear programming
problems.

🔷 Characteristics:

 Applicable for maximization/minimization LP problems.


 Moves from one vertex (feasible solution) to another with improved objective value.
 Terminates when an optimal solution is found or the problem is shown to be unbounded.

🔷 Steps:

1. Convert constraints into equations with slack/surplus variables.


2. Construct the initial simplex tableau.
3. Identify entering variable:
o Most negative coefficient in objective row (for maximization).
4. Identify leaving variable:
o Apply the minimum ratio test (RHS ÷ pivot-column entry) to the rows.
5. Perform pivot operation.
6. Repeat until no negative values remain in the objective row.

🔷 Example Tableau:
Basic    x    y   s₁   s₂   RHS
s₁       1    2    1    0     6
s₂       3    2    0    1    12
Z       -3   -2    0    0     0

Perform operations to find optimal Z and values of variables.
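The full loop can be sketched in Python (a minimal illustration with no degeneracy or cycling safeguards, not production code); applied to the tableau above it stops with Z = 12 at x = 4, y = 0, one of the optimal corner points:

```python
import numpy as np

def simplex_max(c, A, b):
    """Tableau simplex sketch for: maximize c@x s.t. A@x <= b, x >= 0, b >= 0."""
    m, n = A.shape
    # Tableau layout: [A | I | b] with the Z-row [-c | 0 | 0] appended last.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n], T[:m, n:n + m], T[:m, -1] = A, np.eye(m), b
    T[-1, :n] = -c
    basis = list(range(n, n + m))            # slack variables start basic
    while True:
        col = int(np.argmin(T[-1, :-1]))     # entering: most negative Z-row entry
        if T[-1, col] >= 0:
            break                            # optimal: no negative entries remain
        ratios = [T[i, -1] / T[i, col] if T[i, col] > 0 else np.inf
                  for i in range(m)]
        row = int(np.argmin(ratios))         # leaving: minimum ratio test
        if ratios[row] == np.inf:
            raise ValueError("problem is unbounded")
        T[row] /= T[row, col]                # pivot element becomes 1
        for i in range(m + 1):
            if i != row:
                T[i] -= T[i, col] * T[row]   # clear the rest of the pivot column
        basis[row] = col
    x = np.zeros(n + m)
    for i, bv in enumerate(basis):
        x[bv] = T[i, -1]
    return x[:n], T[-1, -1]

x_opt, z_opt = simplex_max(np.array([3.0, 2.0]),
                           np.array([[1.0, 2.0], [3.0, 2.0]]),
                           np.array([6.0, 12.0]))
print(x_opt, z_opt)   # [4. 0.] 12.0
```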

UNIT – III

1. North-West Corner Rule (NWCR)

Definition:
NWCR is a method to compute an initial basic feasible solution (IBFS) for the
transportation problem. It follows a simple rule: allocate as much as possible to the
top-left (north-west) cell of the cost matrix, then move to the next cell until all
supplies and demands are satisfied.

Step-by-Step Procedure:

1. Start with the cell in the top-left (north-west) of the cost matrix.
2. Allocate minimum of supply and demand for that cell.
3. Reduce the supply and demand accordingly:
o If supply = demand, move diagonally to the next cell.
o If supply < demand, move down to the next row.
o If demand < supply, move right to the next column.
4. Repeat the allocation process for the reduced matrix until all values are allocated.

Example:

Given:

        D1   D2   D3   Supply
S1       4    6    8       50
S2       5    3    7       60
S3       6    8    5       25
Demand  30   50   55

Start at (S1,D1): allocate 30 (D1 is satisfied). Move right to (S1,D2): allocate 20 (S1 is exhausted). Then (S2,D2): 30, (S2,D3): 30, and finally (S3,D3): 25.
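The rule translates directly into a short routine (a sketch; `north_west_corner` is an illustrative name, not a library function):

```python
def north_west_corner(supply, demand):
    """Return NWCR allocations as a {(row, col): units} dictionary."""
    supply, demand = supply[:], demand[:]   # copy so the inputs survive
    i = j = 0
    alloc = {}
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])       # allocate as much as possible
        alloc[(i, j)] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            i += 1                          # row exhausted: move down
        if demand[j] == 0:
            j += 1                          # column satisfied: move right
    return alloc

print(north_west_corner([50, 60, 25], [30, 50, 55]))
# {(0, 0): 30, (0, 1): 20, (1, 1): 30, (1, 2): 30, (2, 2): 25}
```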

Advantages:

 Simple and easy to apply.


 Provides a basic feasible solution.

Limitations:
 Ignores cost, so the resulting solution is generally far from optimal.
 Can result in high-cost solutions.

2. Vogel’s Approximation Method (VAM)

Definition:
VAM is an improved method to determine an initial feasible solution by considering
penalties associated with not choosing the least cost route.

Procedure:

1. Compute penalties for each row and column:


o Penalty = difference between the lowest and second-lowest costs in that row or column.
2. Select the row or column with the highest penalty.
3. In that row/column, allocate as much as possible to the lowest cost cell.
4. Adjust the supply and demand, strike out the satisfied row/column.
5. Repeat steps 1–4 until all allocations are made.

Example:

For a given cost matrix, compute the row and column penalties, then allocate with cost in mind as described above; a worked sketch follows.
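The whole procedure fits in a compact sketch (`vogel` is an illustrative helper; the cost matrix reuses the NWCR example so the two starting solutions can be compared):

```python
import numpy as np

def vogel(costs, supply, demand):
    """VAM sketch: return allocations as a {(row, col): units} dictionary."""
    c = np.array(costs, dtype=float)
    supply, demand = list(supply), list(demand)
    rows, cols = set(range(len(supply))), set(range(len(demand)))
    alloc = {}

    def penalty(values):
        """Difference between the two smallest costs in a row/column."""
        v = sorted(values)
        return v[1] - v[0] if len(v) > 1 else v[0]

    while rows and cols:
        row_pen = {i: penalty([c[i, j] for j in cols]) for i in rows}
        col_pen = {j: penalty([c[i, j] for i in rows]) for j in cols}
        i_best = max(row_pen, key=row_pen.get)
        j_best = max(col_pen, key=col_pen.get)
        if row_pen[i_best] >= col_pen[j_best]:
            i = i_best
            j = min(cols, key=lambda j: c[i, j])   # cheapest cell in the row
        else:
            j = j_best
            i = min(rows, key=lambda i: c[i, j])   # cheapest cell in the column
        q = min(supply[i], demand[j])
        alloc[(i, j)] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            rows.discard(i)
        if demand[j] == 0:
            cols.discard(j)
    return alloc

print(vogel([[4, 6, 8], [5, 3, 7], [6, 8, 5]], [50, 60, 25], [30, 50, 55]))
```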

Advantages:

 More accurate than NWCR.


 Produces solutions closer to optimal.

3. Testing for Optimality of Balanced Transportation Problems

After obtaining an IBFS, we check for optimality using:

(i) MODI (Modified Distribution) Method:

Used for minimizing the cost in transportation problems.

Steps:

1. Assign values u_i and v_j to the rows and columns such that u_i + v_j = c_ij for every allocated cell (conventionally setting u_1 = 0 to start).
2. For each unallocated cell, calculate the opportunity cost:
Δ_ij = c_ij − (u_i + v_j)
3. If all Δ_ij ≥ 0, the solution is optimal (the u-v computation is sketched after this list).
4. If any Δ_ij < 0, perform an improvement:
o Construct closed loop path.
o Add and subtract allocations alternately along the loop.
o Update allocations to improve cost.
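Steps 1-3 can be sketched as follows (`modi_test` is an illustrative helper; the allocation used is the NWCR result from the earlier example):

```python
def modi_test(costs, alloc):
    """Solve u_i + v_j = c_ij on allocated cells, then report
    Delta_ij = c_ij - (u_i + v_j) for every empty cell."""
    m, n = len(costs), len(costs[0])
    u, v = [None] * m, [None] * n
    u[0] = 0                                    # fix one value; rest follow
    changed = True
    while changed:                              # propagate through basic cells
        changed = False
        for (i, j) in alloc:
            if u[i] is not None and v[j] is None:
                v[j] = costs[i][j] - u[i]; changed = True
            elif v[j] is not None and u[i] is None:
                u[i] = costs[i][j] - v[j]; changed = True
    deltas = {(i, j): costs[i][j] - (u[i] + v[j])
              for i in range(m) for j in range(n) if (i, j) not in alloc}
    return u, v, deltas

costs = [[4, 6, 8], [5, 3, 7], [6, 8, 5]]
alloc = {(0, 0): 30, (0, 1): 20, (1, 1): 30, (1, 2): 30, (2, 2): 25}
print(modi_test(costs, alloc)[2])
# Delta(0, 2) = -2 < 0, so the NWCR start is not optimal and a loop is needed
```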

(ii) Stepping Stone Method:

 Evaluate the effect of reallocating one unit through unoccupied routes.


 If cost reduces, adjust allocations.

UNIT – IV

1. Fibonacci Method

Definition:
Fibonacci method is an optimization technique used to find the minimum or maximum of a
unimodal function in a 1D continuous interval using Fibonacci numbers.

Key Concepts:

 Relies on reducing the interval size using ratios of Fibonacci numbers.


 Guarantees a convergence based on pre-decided number of iterations.

Steps:

1. Choose the initial interval [a, b] and the number of iterations n.
2. Calculate the two interior points:
x_1 = a + (F_{n−2}/F_n)(b − a)
x_2 = a + (F_{n−1}/F_n)(b − a)
3. Evaluate the function at x_1 and x_2.
4. Retain the subinterval that contains the optimum.
5. Repeat until the interval is sufficiently small (see the sketch below).
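A minimal sketch of the search (assuming a unimodal objective; no tie-breaking epsilon is included):

```python
def fibonacci_search(f, a, b, n):
    """Shrink [a, b] around the minimum of a unimodal f using n iterations."""
    F = [1, 1]
    while len(F) <= n:
        F.append(F[-1] + F[-2])                  # Fibonacci numbers F_0..F_n
    x1 = a + F[n - 2] / F[n] * (b - a)
    x2 = a + F[n - 1] / F[n] * (b - a)
    f1, f2 = f(x1), f(x2)
    for k in range(n - 1, 1, -1):
        if f1 > f2:                              # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + F[k - 1] / F[k] * (b - a)
            f2 = f(x2)
        else:                                    # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + F[k - 2] / F[k] * (b - a)
            f1 = f(x1)
    return (a + b) / 2

# Minimum of (x - 2)^2 on [0, 5]:
print(fibonacci_search(lambda x: (x - 2) ** 2, 0.0, 5.0, 15))   # ~2.0
```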

Benefits:

 Fewer evaluations.
 Efficient compared to exhaustive search.

2. Univariate Method

Definition:
This method optimizes a multivariable function by improving one variable at a time while
keeping others constant.

Algorithm:
1. Choose an initial point and a small step size Δ.
2. Optimize along x_1, then fix it and optimize along x_2, and so on.
3. After each full cycle, update the direction and repeat.
4. Stop when the improvement falls below a threshold (see the sketch below).
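A rough coordinate-by-coordinate sketch (the 1-D search below is a crude step-halving rule, one of several possible choices):

```python
import numpy as np

def univariate_search(f, x0, step=0.5, tol=1e-6, max_cycles=100):
    """Improve one variable at a time while holding the others fixed."""
    x = np.array(x0, dtype=float)
    for _ in range(max_cycles):
        x_old = x.copy()
        for i in range(len(x)):
            d = step
            while abs(d) > tol:          # crude 1-D search along axis i
                trial = x.copy()
                trial[i] += d
                if f(trial) < f(x):
                    x = trial            # accept the improving move
                else:
                    d = -d / 2           # reverse direction and shrink step
        if np.linalg.norm(x - x_old) < tol:
            break                        # a full cycle gave no progress
    return x

# Minimize f(x, y) = (x - 1)^2 + (y + 2)^2 from the origin:
print(univariate_search(lambda p: (p[0] - 1)**2 + (p[1] + 2)**2, [0.0, 0.0]))
# converges near (1, -2)
```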

Advantages:

 Simple and easy to understand.


 Can be used for non-differentiable functions.

Disadvantages:

 Slow convergence.
 May not find global optimum.

3. Characteristics of a Constrained Problem

In optimization, constrained problems involve an objective function together with
restrictions (constraints) on the variables.

Types:

 Equality constraints: h(x) = 0
 Inequality constraints: g(x) ≤ 0

Key Characteristics:

 Feasible Region: Set of all points satisfying constraints.


 Active Constraints: Constraints satisfied with equality at the optimum.
 Binding Constraints: Influence the location of the optimum.

Solution Techniques:

 Lagrange Multipliers
 Karush-Kuhn-Tucker (KKT) conditions
 Penalty and barrier methods

4. Interior and Exterior Penalty Function Methods

These are indirect methods used to convert constrained problems into unconstrained ones.

Interior Penalty (Barrier) Method:

 Penalizes solutions as they approach the boundary of the feasible region.
 Penalty (barrier) function:
φ(x) = f(x) − μ Σ ln(−g_i(x))
 Works only within the feasible region.
 Uses a small positive parameter μ, which is decreased iteratively.

Exterior Penalty Method:

 Applies a penalty outside the feasible region.
 Penalty function:
φ(x) = f(x) + r Σ [g_i(x)]^2 (summed over the violated constraints only)
 Works on infeasible solutions and pulls them toward feasibility.
 r is the penalty constant, increased progressively (see the sketch below).
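The exterior method can be sketched around SciPy's unconstrained minimizer (`exterior_penalty` is an illustrative helper; the interior/barrier variant differs only in the penalty term):

```python
import numpy as np
from scipy.optimize import minimize

def exterior_penalty(f, g_list, x0, r=1.0, growth=10.0, iters=6):
    """Minimize f(x) s.t. g_i(x) <= 0 via phi = f + r * sum(max(0, g_i)^2)."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        phi = lambda x, r=r: f(x) + r * sum(max(0.0, g(x)) ** 2 for g in g_list)
        x = minimize(phi, x).x      # unconstrained solve at the current r
        r *= growth                 # stiffen the penalty progressively
    return x

# The KKT example from Unit I: minimize x^2 subject to x >= 3, i.e. 3 - x <= 0.
print(exterior_penalty(lambda x: x[0] ** 2, [lambda x: 3 - x[0]], [0.0]))
# approaches x = 3 from the infeasible side as r grows
```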

UNIT – V

1. Dynamic Programming – Multistage Decision Process

Definition:
Dynamic Programming (DP) is a method for solving complex problems by breaking them
into simpler sub-problems and solving them sequentially.

Characteristics:

 Stages: Each point in the decision-making process.


 State Variables: Conditions describing a stage.
 Decision Variables: Control actions at each stage.
 Bellman’s Principle of Optimality:
An optimal policy has the property that, regardless of the initial state, the remaining
decisions must constitute an optimal policy.

Recursive Formulation:

Let f_n(x) be the optimal value at stage n with state x. Then:

f_n(x) = min over u of [ g_n(x, u) + f_{n+1}(T(x, u)) ]

where u is the decision at stage n and T(x, u) is the resulting next state.

Applications:

 Shortest path
 Inventory control
 Resource allocation
2. Computational Procedure in Dynamic Programming (Numerical Example)

Problem:

A company must allocate N resources to 3 projects to maximize profit.

Steps:

1. Define stages: e.g., project 1, 2, and 3.


2. States: Remaining resources.
3. Decisions: Resources allocated at each stage.
4. Recursive relation:
f_k(x) = max over d_k of [ R_k(d_k) + f_{k+1}(x − d_k) ]
where R_k(d_k) is the return from allocating d_k resources to project k.

Example Table:

Resources Return from Project 1 Return from Project 2 Return from Project 3
0 0 0 0
1 1 2 2
2 2 4 4
... ... ... ...

Start from the last project, compute the maximum return at each stage, and move backward to
determine the optimal allocations (a sketch follows).
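A backward-recursion sketch of the procedure (the returns table here is a small assumed instance in the spirit of the example, with 2 units to allocate):

```python
def allocate(returns, total):
    """returns[k][d] = profit from giving d units to project k.
    f[k][x] = best profit from projects k.. with x units still available."""
    n = len(returns)
    f = [[0] * (total + 1) for _ in range(n + 1)]
    best = [[0] * (total + 1) for _ in range(n)]
    for k in range(n - 1, -1, -1):                 # last stage first
        for x in range(total + 1):
            for d in range(x + 1):                 # units given to project k
                value = returns[k][d] + f[k + 1][x - d]
                if value > f[k][x]:
                    f[k][x], best[k][x] = value, d
    plan, x = [], total                            # recover the allocation
    for k in range(n):
        plan.append(best[k][x])
        x -= best[k][x]
    return f[0][total], plan

returns = [[0, 1, 2],   # project 1
           [0, 2, 4],   # project 2
           [0, 2, 4]]   # project 3
print(allocate(returns, 2))   # (4, [0, 0, 2]) -- one of several tying plans
```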
