
Linear Programming Answers

Optimization problems seek the best solution by maximizing or minimizing an objective function within given constraints, prevalent in various fields. Key concepts include Bellman's Optimality Principle, convexity, and the gradient method, which facilitate finding optimal solutions. Applications of linear programming span resource allocation, transportation, and financial optimization, utilizing mathematical models and the Complementary Slackness Theorem for verification.


21. Define Optimization Problems

Optimization problems involve finding the best solution from all feasible solutions by maximizing or minimizing an objective function. These problems occur in fields like engineering, economics, and logistics, and include constraints that must be satisfied by the decision variables.

22. Bellman's Optimality Principle

Bellman's Principle states: "An optimal policy has the property that, whatever the initial state, the remaining decisions must form an optimal policy with respect to the state resulting from the first decision." It's foundational to dynamic programming and recursive optimization.
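The principle can be illustrated with a minimal sketch: the optimal cost from any node equals the best first step plus the optimal cost from the resulting state. The graph below is an assumed toy example, not from the original text.

```python
from functools import lru_cache

# Hypothetical stage costs for a small directed graph (illustration only).
cost = {("A", "B"): 2, ("A", "C"): 4, ("B", "D"): 7, ("C", "D"): 1}
successors = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

@lru_cache(maxsize=None)
def min_cost(node, goal="D"):
    """Bellman recursion: optimal cost from `node` = best first decision
    plus the optimal cost of the state that decision produces."""
    if node == goal:
        return 0
    return min(cost[(node, nxt)] + min_cost(nxt, goal)
               for nxt in successors[node])

print(min_cost("A"))  # A -> C -> D costs 4 + 1 = 5
```

Memoization (`lru_cache`) turns the recursion into dynamic programming by reusing optimal sub-policies.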

23. Define Convexity

Convexity refers to a set or function where any line segment connecting two points lies entirely within the set or, for a function, on or above its graph. In optimization, convex problems guarantee that any local optimum is a global optimum, making them easier to solve than non-convex problems.

24. Gradient Method

The gradient method is an iterative optimization technique in which one moves in the direction opposite to the gradient to find the function's minimum. It's used in unconstrained optimization and machine learning to adjust variables for improving performance or minimizing loss.
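A minimal sketch of the method, assuming a simple quadratic f(x) = (x - 3)^2 chosen purely for illustration; its gradient is 2(x - 3) and its minimum is at x = 3.

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step opposite the gradient to minimize a function."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2 starting from x = 0.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # converges to 3.0
```

Each step multiplies the distance to the minimum by (1 - 2*lr), so with lr = 0.1 the iterate converges geometrically to x = 3.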

25. Isoperimetric Problem

The isoperimetric problem seeks the shape with the maximum area for a given perimeter. In the calculus of variations, it's a constrained optimization problem. The classic solution is a circle, which encloses the maximum area for a fixed boundary length.

26. Degenerate Solution

A degenerate solution in linear programming or transportation problems occurs when one or more basic variables are zero despite being in the basis. It can cause cycling in the simplex method and needs special handling to ensure the algorithm converges.

27. Surplus Variables

Surplus variables are subtracted from "greater-than-or-equal" constraints to convert them into equations in linear programming. They represent the excess over the required minimum and are always non-negative. Example: x + y ≥ 10 becomes x + y - s = 10, where s ≥ 0.

28. Define Extremum

An extremum is either a maximum or a minimum value of a function. It represents a point where the function reaches its highest or lowest value within a certain domain. Calculus and optimization techniques help identify and classify extrema.

29. Entering and Leaving Variables in LPP

In the simplex method, the entering variable is chosen to improve the objective function, while the leaving variable is chosen (by the minimum-ratio test) so that feasibility is maintained when it exits the basis. This pivot operation moves the solution toward optimality while satisfying all constraints.

30. Five Application Areas of Linear Programming

1. Resource allocation in manufacturing
2. Transportation and logistics
3. Workforce scheduling
4. Diet optimization in nutrition
5. Financial portfolio optimization

LPP helps in making cost-effective, resource-efficient decisions across industries using a structured mathematical approach.

31. Mathematical Model for General Transportation Problem

Let x_ij be the units shipped from source i to destination j. Minimize Z = Σ_i Σ_j c_ij x_ij subject to: Σ_j x_ij = a_i for each source i, Σ_i x_ij = b_j for each destination j, and x_ij ≥ 0. The model aims to minimize total transportation cost.
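The model above can be sketched on a small balanced instance. The supplies, demands, and costs here are assumed example data, and the solver (scipy's linprog) is one common choice, not prescribed by the original text.

```python
from scipy.optimize import linprog

# Assumed balanced instance: 2 sources, 2 destinations.
supply = [20, 30]          # a_1, a_2
demand = [25, 25]          # b_1, b_2
c = [8, 6, 5, 9]           # costs c_11, c_12, c_21, c_22 (row-major)

# Supply rows: x_11 + x_12 = 20 and x_21 + x_22 = 30
# Demand cols: x_11 + x_21 = 25 and x_12 + x_22 = 25
A_eq = [
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
]
b_eq = supply + demand

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
print(res.fun)  # minimum total transportation cost
```

For this data the optimum ships x_12 = 20 and x_21 = 25, leaving x_22 = 5, for a total cost of 290.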

32. Explain Degenerate Transportation Problem

A degenerate transportation problem has fewer than m+n-1 allocations in a basic feasible solution. This may lead to an undefined solution in optimality tests. To resolve this, a small quantity ε (epsilon) is placed in an unallocated cell to maintain the required number of basic cells.

33. State Complementary Slackness Theorem

The Complementary Slackness Theorem connects primal and dual solutions in linear programming. If a primal constraint is not tight (has slack), its corresponding dual variable must be zero, and vice versa. It helps verify optimality of solutions.

35. Forward and Backward Planning

Forward planning starts from the project's beginning and calculates earliest start and finish times. Backward planning starts from the end date and works backward to determine latest start and finish times. Both help manage deadlines, dependencies, and resource allocation in projects.

36. Convex Function with Example

A convex function has a graph where the line segment joining any two points on the graph lies on or above the graph. Example: f(x) = x². For any x1, x2: f(λx1 + (1-λ)x2) ≤ λf(x1) + (1-λ)f(x2), where 0 ≤ λ ≤ 1.
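The convexity inequality can be checked numerically for f(x) = x² on a small grid of points and mixing weights (the grid values are arbitrary test data):

```python
def f(x):
    return x ** 2

# Verify f(l*x1 + (1-l)*x2) <= l*f(x1) + (1-l)*f(x2) on sample points.
points = [-2.0, -0.5, 1.0, 3.0]
lambdas = [0.0, 0.25, 0.5, 0.75, 1.0]
convex = all(
    f(l * x1 + (1 - l) * x2) <= l * f(x1) + (1 - l) * f(x2) + 1e-12
    for x1 in points for x2 in points for l in lambdas
)
print(convex)  # True: every chord lies on or above the parabola
```

A grid check like this is only evidence, not a proof; for f(x) = x² the inequality holds exactly because f''(x) = 2 > 0 everywhere.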

37. Saddle Point

A saddle point is a point where a function is a minimum in one direction and a maximum in another. In optimization, it's not a local extremum. Example: f(x, y) = x² - y² has a saddle point at (0, 0).
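The example can be verified directly: along the x-axis the origin behaves like a minimum, along the y-axis like a maximum.

```python
def f(x, y):
    return x ** 2 - y ** 2

eps = 0.1  # small probe step in each direction
min_in_x = f(eps, 0) > f(0, 0) and f(-eps, 0) > f(0, 0)
max_in_y = f(0, eps) < f(0, 0) and f(0, -eps) < f(0, 0)
print(min_in_x and max_in_y)  # True: (0, 0) is a saddle point
```

Equivalently, the Hessian at (0, 0) has eigenvalues 2 and -2, one of each sign, which is the second-order test for a saddle point.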

38. Lagrange's Multiplier Condition for NLPP

To optimize a function f(x, y) subject to a constraint g(x, y) = 0, solve ∇f = λ∇g together with g(x, y) = 0. This gives necessary (though not, in general, sufficient) conditions for constrained maxima or minima in non-linear programming problems using Lagrange multipliers.
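A worked sketch on an assumed example problem, maximizing f(x, y) = xy subject to x + y = 10; the Lagrange system is solved by hand in the comments and checked numerically.

```python
# Lagrange conditions for f(x, y) = x*y with g(x, y) = x + y - 10 = 0:
#   df/dx = lam * dg/dx  ->  y = lam
#   df/dy = lam * dg/dy  ->  x = lam
#   g(x, y) = 0          ->  x + y = 10
# Hence x = y = lam = 5, giving f = 25.
x_star, y_star = 5.0, 5.0

# Numerical check: on the constraint (y = 10 - x), nearby feasible
# points do no better than the candidate.
candidates = [x_star + d for d in (-0.5, -0.1, 0.0, 0.1, 0.5)]
best = max(x * (10 - x) for x in candidates)
print(best == x_star * y_star)  # True: f peaks at x = y = 5
```

The check only probes a few feasible points; confirming a maximum in general requires a second-order (bordered Hessian) test.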
