Operations Research

Linear Programming
The goal of linear programming is to find the values of the decision variables that maximize
(or minimize) the objective function while satisfying all the constraints.
Step-by-step guide:

1. Define Variables:
○ Start by identifying the decision variables in the problem. These are the
unknown quantities that you are trying to optimize. Assign a name and a
description to each variable to make it easier to understand the problem.
○ For example, if you are trying to optimize the production of two products, you
could define x1 as the number of units of product 1 to produce and x2 as the
number of units of product 2 to produce.

2. Build the Objective Function:


○ The objective function is a linear equation that represents the quantity you are
trying to maximize or minimize. It should be a function of the decision
variables defined in step 1.
○ For example, if you want to maximize profit, you could define the objective
function as:

Z = 10x1 + 15x2
where 10 and 15 are the profit per unit of product 1 and product 2, respectively.

3. State the Constraints:


○ Constraints are limitations or requirements that restrict the values of the
decision variables. They should be expressed as linear inequalities or
equalities that involve the decision variables.
○ For example, if the production of product 1 requires 2 hours of labor and the
production of product 2 requires 3 hours of labor, and there are only 24 hours
of labor available, you could define the labor constraint as:

2 * x1 + 3 * x2 <= 24

○ Similarly, if the production of product 1 requires 4 units of raw material and the
production of product 2 requires 6 units of raw material, and there are only 40
units of raw material available, you could define the raw material constraint
as:
4 * x1 + 6 * x2 <= 40

Note: Each constraint compares a linear (degree-one) expression in the decision variables with a constant using <=, >= or =. That is what makes the program "linear": both the objective function and the constraints are linear functions of the decision variables.
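
Putting the three steps together, the sketch below solves this example with Python's scipy.optimize.linprog (an assumed tool choice; any LP solver would do). linprog minimizes by convention, so the profit coefficients are negated to turn the maximization into a minimization.

# Sketch: solving the example LP (maximize Z = 10*x1 + 15*x2) with SciPy.
from scipy.optimize import linprog

c = [-10, -15]                   # negated profits per unit of product 1 and 2
A_ub = [[2, 3],                  # labor:        2*x1 + 3*x2 <= 24
        [4, 6]]                  # raw material: 4*x1 + 6*x2 <= 40
b_ub = [24, 40]
bounds = [(0, None), (0, None)]  # x1 >= 0, x2 >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("x1 =", res.x[0], ", x2 =", res.x[1])
print("Maximum profit Z =", -res.fun)   # undo the sign flip

For this data the solver reports a maximum profit of 100, with the raw-material constraint binding.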
Graphical Method
The Graphical Method is a technique used in Operations Research to solve linear
programming problems graphically. It involves representing the constraints and the objective
function of the problem on a two-dimensional graph and finding the optimal solution by
examining the corner points of the feasible region formed by the constraints.

In this method, the boundary of each constraint is represented by a straight line on the graph,
and the feasible region of the problem is the area of the graph that satisfies all the constraints
simultaneously. The objective function is represented by a family of parallel iso-profit (or
iso-cost) lines, and the optimal solution is the last point of the feasible region touched as
these lines are moved in the improving direction; for a linear program this is always a corner
point (or an entire edge) of the feasible region.

The Graphical Method is useful for solving small linear programming problems with two
decision variables, and it provides a visual representation of the problem that can help in
understanding the solution process.

Step-by-step guide:


1. Formulate the problem:
The first step is to formulate the problem as a linear programming problem with an objective
function and constraints.

2. Plot the constraints:


Plot the boundary of each constraint as a straight line on a coordinate system. Together, the
constraints delimit the region of the graph that satisfies all of them.

3. Identify the feasible region:


Shade the region that satisfies all the constraints. This region is called the feasible region.

4. Identify the objective function:
Identify the objective function and plot it on the graph as an iso-profit (or iso-cost) line,
noting the direction in which its value improves.

5. Determine the optimal solution:


The optimal solution is the point within the feasible region that maximizes or minimizes the
objective function; for a linear program it always occurs at a corner point (or along an edge)
of the feasible region. Find it either by sliding the objective line in its improving direction
until it last touches the feasible region, or algebraically by evaluating the objective function
at the intersection points of the boundary lines of the feasible region (a code sketch of this
corner-point enumeration follows this guide).

6. Check the solution:


Once you have identified a candidate optimal point, verify that it satisfies every constraint.
If it does, it is the optimal solution. If it does not, the point lies outside the feasible
region, so return to step 5 and select the correct corner point.

Types of solutions that can arise:

● Unique optimal solution: This is a solution where there is only one point within the
feasible region that maximizes or minimizes the objective function. This point is the
optimal solution and is the best possible solution to the problem.

● Infeasible problem: There is no point that satisfies all the constraints simultaneously, so
the feasible region is empty and the problem has no feasible solution.

● Unbounded solution: This occurs when the objective function can be increased (or decreased)
indefinitely without violating any of the constraints. In other words, there is no finite
maximum or minimum value of the objective function over the feasible region.

● Multiple optimal solutions: This occurs when several points within the feasible region
achieve the same optimal objective function value; these points typically lie along an edge of
the feasible region that is parallel to the objective function line, and each of them provides
the same optimal outcome for the problem.

7. Interpret the solution:


Finally, interpret the solution in the context of the problem. The optimal values of the decision
variables and the optimal value of the objective function provide insights into the best course
of action to take.
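
The graphical method can also be mirrored algebraically: list the intersection points of the constraint boundary lines (including the axes), keep only the feasible ones, and evaluate the objective function at each corner point. The Python sketch below does exactly that for the two-product example; it is an illustration of the corner-point idea, not a general-purpose solver.

# Sketch: corner-point enumeration for the two-variable example LP.
# Boundary lines: 2*x1 + 3*x2 = 24, 4*x1 + 6*x2 = 40, x1 = 0, x2 = 0.
from itertools import combinations
import numpy as np

A = np.array([[2.0, 3.0],   # labor constraint boundary
              [4.0, 6.0],   # raw-material constraint boundary
              [1.0, 0.0],   # axis x1 = 0
              [0.0, 1.0]])  # axis x2 = 0
b = np.array([24.0, 40.0, 0.0, 0.0])

def feasible(x, tol=1e-9):
    # Feasible = both resource constraints hold and x is non-negative.
    return (A[:2] @ x <= b[:2] + tol).all() and (x >= -tol).all()

best_x, best_z = None, -np.inf
for i, j in combinations(range(len(A)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                       # parallel lines never intersect
    x = np.linalg.solve(M, b[[i, j]])  # intersection of the two boundary lines
    if feasible(x):
        z = 10 * x[0] + 15 * x[1]      # objective Z = 10*x1 + 15*x2
        if z > best_z:
            best_x, best_z = x, z

print("Best corner point:", best_x, "with Z =", best_z)

Running this reproduces what the graph shows: the optimum lies on the raw-material boundary with Z = 100 (this particular example happens to have multiple optimal solutions along that edge).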

Some comments:

In linear programming, the constraints must be "linear": each constraint is a degree-one
expression in the decision variables compared with a constant using <=, >= or =. For example:

2 * x1 + 3 * x2 <= 24
4 * x1 + 6 * x2 <= 40
x1 >= 0, x2 >= 0

This is how constraints should be written for a linear programming exercise; products of
variables, powers, or other nonlinear terms are not allowed.

Sensitivity Analysis
Sensitivity analysis is a useful technique used in linear programming to evaluate how
changes in the input parameters affect the optimal solution of the problem. In linear
programming, sensitivity analysis is used to determine the effect of small changes in the
coefficients of the objective function or the constraints on the optimal solution.

There are two types of sensitivity analysis in linear programming: dual analysis and primal
analysis.

Dual analysis examines how changes in the right-hand side coefficients of the constraints
affect the optimal value of the objective function. Specifically, it evaluates how the shadow
price (the change in the objective function value resulting from a one-unit change in the
right-hand side of a constraint) changes when the right-hand side coefficient of a constraint
is increased or decreased. The shadow price reflects the value of an additional unit of the
resource represented by the constraint.

Primal analysis, on the other hand, examines how changes in the coefficients of the
objective function affect the optimal values of the decision variables and the slack or surplus
variables. Specifically, it evaluates the sensitivity of the optimal solution to changes in the
coefficients of the objective function. This analysis is done by evaluating the range of values
over which the objective function coefficients can change without affecting the optimal
solution.

Sensitivity analysis is important in linear programming because it helps decision-makers to
understand the stability of the solution and to assess the impact of any changes in the input
parameters on the optimal solution. This analysis provides valuable information that can help
decision-makers make better-informed decisions.

Shadow prices
Shadow prices represent the rate of change in the objective function value with respect to
changes in the right-hand side (RHS) of a constraint. Specifically, the shadow price of a
constraint is the amount by which the objective function value would increase (or decrease)
if the RHS of the constraint were increased (or decreased) by one unit while keeping all
other variables and constraints unchanged.

Shadow prices are useful for identifying the most valuable resources and constraints in an
LP problem. A high shadow price for a constraint indicates that the objective function is
sensitive to changes in the RHS of that constraint, and thus that constraint is a critical
resource. Conversely, a low or zero shadow price for a constraint indicates that the objective
function is not very sensitive to changes in that constraint, and thus that constraint is not as
critical.

By examining the shadow prices, a decision-maker can gain insights into the optimal solution
and make informed decisions about resource allocation and production planning.
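
The definition above suggests a simple way to see a shadow price numerically: re-solve the linear program with the right-hand side of one constraint increased by one unit and compare the optimal objective values. The sketch below does this for the raw-material constraint of the two-product example, again using scipy.optimize.linprog as an assumed solver (a dedicated LP package would report the dual values directly).

# Sketch: estimating the shadow price of the raw-material constraint
# by re-solving the example LP with its RHS increased by one unit.
from scipy.optimize import linprog

c = [-10, -15]              # negated profits (linprog minimizes)
A_ub = [[2, 3], [4, 6]]     # labor, raw material
b_ub = [24, 40]

base   = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
bumped = linprog(c, A_ub=A_ub, b_ub=[24, 41], method="highs")

shadow_price = (-bumped.fun) - (-base.fun)   # extra profit per extra unit of raw material
print("Shadow price of the raw-material constraint:", shadow_price)

For this data the raw-material shadow price comes out to 2.5, while the labor constraint has slack at the optimum and therefore a shadow price of 0.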

● "Final Valor" or "Final Value" column: This column shows the final optimal value of
the objective function.

● "Reducido Coste" or "Reduced Cost" column: This column shows the reduced cost
for each variable. Reduced cost is the amount by which the objective function would
improve if one unit of the corresponding variable's value is increased. If the reduced
cost is zero, it means the corresponding variable is already at its optimal value.

● "Objetivo Coeficiente" or "Objective Coefficient" column: This column shows the


coefficients of the decision variables in the objective function. These coefficients
represent the contribution of each variable to the objective function.

● "Permisible Aumentar" or "Allowable Increase" column: This column shows the


maximum amount by which the coefficient of each variable can increase while still
maintaining the current optimal solution. This value is calculated based on the
shadow price of the corresponding constraint.

● "Permisible Reducir" or "Allowable Decrease" column: This column shows the


maximum amount by which the coefficient of each variable can decrease while still
maintaining the current optimal solution. This value is also calculated based on the
shadow price of the corresponding constraint.

Overall, the Sensitivity Report provides valuable information on how changes in the model's
parameters can affect the optimal solution. It allows decision-makers to identify the variables
and constraints that have the greatest impact on the objective function and to make informed
decisions about changes to the model.
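
As a rough numerical illustration of the reduced-cost idea (this is not the Solver report itself), one can force a variable that is zero at the optimum to take the value 1 and observe how much the maximum profit deteriorates. The profit vector (10, 12) below is a hypothetical variation of the example, assumed only so that x2 is zero in the unique optimal plan.

# Sketch: estimating the reduced cost of x2 by forcing x2 >= 1 and re-solving.
# Hypothetical profits (10, 12) are assumed so that x2 = 0 at the optimum.
from scipy.optimize import linprog

c = [-10, -12]                        # negated hypothetical profits
A_ub = [[2, 3], [4, 6]]               # labor, raw material
b_ub = [24, 40]

free_opt  = linprog(c, A_ub=A_ub, b_ub=b_ub,
                    bounds=[(0, None), (0, None)], method="highs")
forced_x2 = linprog(c, A_ub=A_ub, b_ub=b_ub,
                    bounds=[(0, None), (1, None)], method="highs")

reduced_cost = (-forced_x2.fun) - (-free_opt.fun)   # change in max profit
print("Optimal plan:", free_opt.x, "with profit", -free_opt.fun)
print("Approximate reduced cost of x2:", reduced_cost)

With these numbers the profit drops by 3 for each unit of product 2 forced into the plan, so the reduced cost of x2 is -3: its profit coefficient would have to rise by 3 before producing it becomes worthwhile.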

● "Final Valor" or "Final Value" column: This column shows the final optimal value of
the objective function.

● "Sombra Precio" or "Shadow Price" column: This column shows the shadow price of
each constraint. Shadow price represents the change in the objective function value
per unit increase in the right-hand side of the corresponding constraint.

● "Restricción Lado derecho" or "Constraint Right-Hand Side" column: This column


shows the right-hand side value of each constraint. The right-hand side value is the
constant on the right side of the inequality or equation that defines the constraint.

● "Permisible Aumentar" or "Allowable Increase" column: This column shows the


maximum amount by which the right-hand side of each constraint can increase while
still maintaining the current optimal solution. This value is calculated based on the
shadow price of the corresponding constraint.

● "Permisible Reducir" or "Allowable Decrease" column: This column shows the


maximum amount by which the right-hand side of each constraint can decrease while
still maintaining the current optimal solution. This value is calculated based on the
shadow price of the corresponding constraint.

Overall, the Sensitivity Report provides valuable information on how changes in the model's
parameters can affect the optimal solution. It allows decision-makers to identify the
constraints that have the greatest impact on the objective function and to make informed
decisions about changes to the model. By understanding the shadow price and allowable
increase for each constraint, decision-makers can determine whether additional resources
should be allocated to improve the constraint value, or whether the constraint can be relaxed
without affecting the optimal solution.
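
A small sketch of the idea behind the allowable increase on a right-hand side: re-solve the example LP while gradually increasing the raw-material RHS and watch the maximum profit grow at the shadow-price rate until another constraint becomes binding. As before, scipy.optimize.linprog is an assumed solver choice.

# Sketch: right-hand-side ranging for the raw-material constraint.
# The objective should rise at the shadow-price rate (2.5 per unit here)
# until the labor constraint becomes binding, which marks the allowable increase.
from scipy.optimize import linprog

c = [-10, -15]                 # negated profits (linprog minimizes)
A_ub = [[2, 3], [4, 6]]        # labor, raw material
labor_rhs = 24

for raw_rhs in range(40, 53, 2):
    res = linprog(c, A_ub=A_ub, b_ub=[labor_rhs, raw_rhs], method="highs")
    print(f"raw material RHS = {raw_rhs:2d} -> max profit = {-res.fun:6.1f}")

For this data the profit rises by 2.5 per extra unit of raw material until the RHS reaches 48 (an allowable increase of 8); beyond that the labor constraint caps production, the profit stays at 120, and the raw-material shadow price drops to 0.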

Assignment
