
Operations Research

Unit 6: Decision Theory and Sensitivity Analysis
Decision making under certainty in operations research
Decision making under certainty is a foundational concept in
operations research. It assumes a perfect knowledge of all the
relevant factors and outcomes associated with each decision
alternative. In this ideal scenario, operations research provides
powerful tools to identify the optimal course of action.
Here's a breakdown of key aspects of decision making under
certainty in operations research:
Advantages:
 Straightforward analysis: Since all outcomes are
known, evaluating each alternative and its consequences
is relatively simple.
 Guaranteed optimal solution: The decision maker can
choose the option that definitively leads to the best
outcome based on their objective (e.g., maximizing
profit, minimizing cost, etc.).
Common Techniques:
 Linear Programming: This method is a cornerstone of
optimization techniques. It allows you to express your
objective (profit, cost, etc.) as a mathematical function
and identify the values for decision variables that
optimize this function, subject to certain constraints
(resource limitations, production requirements, etc.).
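As a minimal sketch of how such a model looks in code, here is a small product-mix LP solved with SciPy's `linprog` (the profit coefficients and resource limits are made-up assumptions, not data from the text):

```python
from scipy.optimize import linprog

# Hypothetical product-mix problem: maximize profit 3*x1 + 5*x2
# subject to x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18, x1, x2 >= 0.
# linprog minimizes, so the objective coefficients are negated.
c = [-3, -5]
A_ub = [[1, 0], [0, 2], [3, 2]]
b_ub = [4, 12, 18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)     # optimal decision variables
print(-res.fun)  # optimal profit
```

For these numbers the solver returns x = (2, 6) with a profit of 36, which is the guaranteed optimal solution that decision making under certainty promises.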
Applications:
 Production scheduling: When demand forecasts are
highly accurate, operations research can determine the
most efficient production plan to meet that demand while
minimizing costs.
 Inventory management: If future demand is perfectly
predictable, you can calculate the optimal inventory
levels to avoid stockouts or excessive holding costs.
 Transportation problems: When shipping costs and
distances are known, you can determine the most
economical way to transport goods between different
locations.
Limitations:
 Rare in real-world scenarios: In most practical
situations, some level of uncertainty exists. Operations
research offers a broader set of tools to handle risk and
probabilistic outcomes.
 Oversimplification: Focusing solely on perfectly
predictable situations can lead to overlooking potential
risks or opportunities.
Uncertainty and risk:
Uncertainty and risk are fundamental factors that operations
research (OR) must grapple with in the real world. Unlike
decision making under certainty, where everything is perfectly
known, OR uses various techniques to handle situations where
outcomes are not guaranteed. Here's a breakdown of how OR
addresses uncertainty and risk:
Understanding the Distinction:
 Uncertainty: Refers to situations where the likelihood of
future events is unknown or difficult to quantify. There
might be a range of possible outcomes, but their
probabilities are unclear.
 Risk: Involves situations where there are identifiable
events with measurable probabilities of occurring. While
the exact outcome might be uncertain, there's data to
estimate the likelihood of each possibility.
How OR Deals with Uncertainty and Risk:
 Stochastic Modelling: This approach incorporates
probabilities into the decision-making process. By
representing variables and outcomes with probability
distributions, OR models can analyze various scenarios
and their potential impacts.
 Decision Analysis: This framework helps evaluate
different options under uncertain conditions. It considers
factors like potential outcomes, their probabilities, and
the decision maker's risk tolerance to arrive at a preferred
course of action.
 Scenario Planning: This involves exploring a range of
possible future situations, both positive and negative. By
considering "what-if" scenarios, OR can help identify
potential risks and develop contingency plans to mitigate
them.
 Robust Optimization: This technique aims to find
solutions that are less sensitive to changes in data or
unexpected events. It prioritizes options that perform
well across a range of possible scenarios, not just the
most likely one.
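As a minimal sketch of stochastic modelling, the following Monte Carlo simulation estimates the expected profit of an order quantity under uncertain demand (the cost, price, and demand distribution are illustrative assumptions):

```python
import random

random.seed(42)  # reproducible runs

# Newsvendor-style sketch (all numbers are assumptions):
# unit cost 4, selling price 10, unsold units are worthless.
COST, PRICE = 4, 10

def expected_profit(order_qty, n_trials=10_000):
    """Estimate expected profit for an order quantity when demand
    is uncertain, modelled here as uniform on the integers 50..150."""
    total = 0.0
    for _ in range(n_trials):
        demand = random.randint(50, 150)
        sold = min(order_qty, demand)
        total += PRICE * sold - COST * order_qty
    return total / n_trials

for q in (80, 100, 120):
    print(q, round(expected_profit(q), 1))
```

Comparing the estimates for several order quantities shows how a probability distribution over outcomes, rather than a single known value, drives the decision.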
Benefits of Addressing Uncertainty and Risk:
 Improved Decision Making: By considering potential
risks and their probabilities, OR can help decision
makers avoid pitfalls and choose more robust strategies.
 Increased Resilience: Operations can be better prepared
for unexpected events by having contingency plans in
place developed through OR techniques.
 More Realistic Planning: By incorporating uncertainty,
OR models provide a more accurate picture of potential
outcomes, leading to more realistic planning and
resource allocation.
Challenges of Uncertainty and Risk:
 Data Availability: Accurately estimating probabilities
often requires historical data, which may not be readily
available or reliable.
 Computational Complexity: Stochastic models can
become mathematically complex, especially with many
variables and uncertainties.
 Subjectivity: Risk tolerance and the valuation of
outcomes can be subjective, requiring careful
consideration in the decision-making process.
Sensitivity analysis:
Sensitivity analysis is a powerful tool in operations research
(OR) that helps assess how the optimal solution to a model
changes when there are variations in the input data. In
essence, it's a "what-if" analysis for your OR model.
Here's a deeper dive into sensitivity analysis in OR:
Core Purpose:
 Understanding Model Robustness: Sensitivity analysis
reveals how much the optimal solution (e.g., production
plan, resource allocation) changes when the initial data
points (e.g., costs, demand forecasts) deviate from their
expected values. This helps determine how robust the
solution is to these variations.
 Identifying Critical Factors: By analyzing how changes
in different input parameters affect the solution,
sensitivity analysis highlights the most critical factors
influencing the optimal outcome. This allows decision
makers to focus on the aspects that have the greatest
impact.
Common Applications:
 Linear Programming: Sensitivity analysis is
particularly useful with linear programming models,
which are widely used in OR. It helps assess how
changes in coefficients (costs, profits) and right-hand
side values (resource availability) affect the optimal
solution and identify shadow prices (opportunity costs
associated with constraints).
 Simulation Models: Sensitivity analysis can be applied
to simulation models used for risk analysis. By varying
input parameters within a defined range, you can observe
how the overall system behavior changes, providing
valuable insights into potential risks and opportunities.
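One simple way to probe sensitivity is to re-solve the model with a slightly perturbed right-hand side and observe the change in the optimum; the difference approximates the shadow price of that resource. A sketch with SciPy, using assumed numbers:

```python
from scipy.optimize import linprog

# Assumed problem: maximize 3*x1 + 5*x2 subject to
# x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18, x >= 0.
c = [-3, -5]                      # linprog minimizes, so negate
A = [[1, 0], [0, 2], [3, 2]]
b = [4, 12, 18]

def optimum(rhs):
    res = linprog(c, A_ub=A, b_ub=rhs, bounds=[(0, None)] * 2)
    return -res.fun               # optimal profit

base = optimum(b)
# Shadow price of the third resource: extra profit per extra unit.
shadow_3 = optimum([4, 12, 19]) - base
print(base, shadow_3)
```

Here increasing the third resource from 18 to 19 raises the optimal profit from 36 to 37, so its shadow price is 1; modern solvers also report these dual values directly.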
Benefits of Sensitivity Analysis:
 Improved Decision Making: By understanding how
variations in input data affect the optimal solution,
decision makers can make more informed choices that
are adaptable to changing circumstances.
 Risk Management: Sensitivity analysis helps identify
potential weaknesses in a plan by revealing which data
variations might lead to undesirable outcomes. This
allows for proactive risk mitigation strategies.
 Resource Allocation: By highlighting the most critical
factors, sensitivity analysis can guide resource allocation
decisions. Decision makers can prioritize resources
towards areas that have the greatest impact on the
optimal solution.
Challenges of Sensitivity Analysis:
 Computational Complexity: Extensive sensitivity
analysis can be computationally demanding, especially
with complex models with many input parameters.
 Interpretation: The results of sensitivity analysis need
careful interpretation. Large changes in input data might
not necessarily translate to significant changes in the
optimal solution, and vice versa.
 Over-reliance: Sensitivity analysis should not replace
robust model development. A well-constructed model
with sound data is crucial for reliable sensitivity analysis
results.

Goal Programming Formulation and Algorithms: Weights Method, Preemptive Method
Goal programming tackles decision-making problems with
multiple, often conflicting objectives. Unlike linear
programming with a single objective, goal programming seeks
a "satisficing" solution, where all objectives are achieved to
an acceptable degree.
Here's a breakdown of two common goal programming
methods:
1. Weights Method:
 Formulation:
o Each objective is transformed into a goal constraint.
o Deviational variables are introduced to represent the positive and negative deviations from each goal (e.g., exceeding a profit target or falling short of a production goal).
o The objective function minimizes the weighted sum of these deviational variables. The weights reflect the relative importance of each goal.
 Process:
o Assign weights to each objective based on their importance.
o Formulate the goal constraints with deviational variables.
o Define the objective function as the weighted sum of the deviations from the goals.
o Solve the linear program considering all constraints and the objective function.
 Advantage:
o Straightforward implementation; works well for problems with clearly defined priorities.
 Disadvantage:
o Choosing weights can be subjective and significantly impact the solution. It can be challenging to determine the "correct" weights for all goals.
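The weights-method steps above can be sketched as a single linear program; the goal targets, coefficients, and weights below are illustrative assumptions:

```python
from scipy.optimize import linprog

# Decision variables x1, x2; two goals, each with deviation variables:
#   Goal 1 (profit):  4*x1 + 2*x2 + d1m - d1p = 40  (penalize shortfall d1m)
#   Goal 2 (labour):  2*x1 + 4*x2 + d2m - d2p = 40  (penalize overtime d2p)
# Variable order: [x1, x2, d1m, d1p, d2m, d2p]
w1, w2 = 3, 1                       # profit goal weighted higher
c = [0, 0, w1, 0, 0, w2]            # minimize weighted deviations
A_eq = [[4, 2, 1, -1, 0, 0],
        [2, 4, 0, 0, 1, -1]]
b_eq = [40, 40]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
print(res.x[:2], res.fun)  # objective 0.0 means both goals are met
```

An objective value of zero is the "satisficing" outcome the text describes: every weighted deviation has been driven to zero.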
2. Preemptive Method:
 Formulation:
o Goals are arranged in a prioritized (preemptive) order.
o Deviational variables are still introduced, but instead of weighting them, the deviations are minimized one priority level at a time.
o A higher-priority goal is never sacrificed to improve a lower-priority one.
 Process:
o Rank the objectives in order of importance.
o Formulate the goal constraints with deviational variables, as in the weights method.
o Solve the linear program minimizing the deviation for the highest-priority goal.
o Fix the achieved level of that goal as a constraint, then minimize the deviation for the next-highest priority goal, and repeat until all priority levels are processed.
 Advantage:
o Explicitly prioritizes goals and avoids the subjectivity of weight assignment.
 Disadvantage:
o Lower-priority goals may end up poorly satisfied, especially when goals conflict strongly.

Unit 5: Integer Programming Problem and Project Management
Integer Programming Algorithms – B&B
Algorithms, cutting plane algorithm
Integer programming (IP) deals with optimization problems
where some or all decision variables must take integer values.
Finding optimal solutions for IP problems can be
computationally challenging. Two popular algorithms tackle
this challenge: Branch and Bound (B&B) and Cutting Plane.
Here's a breakdown of each approach:
1. Branch and Bound (B&B) Algorithm:
 Concept:
o Systematically explores the possible solutions using a tree structure.
o Branches represent fixing a variable to an integer value (e.g., x = 0 or x = 1).
o Bounds are calculated at each branch to determine whether a sub-problem can potentially contain an optimal solution.
o Sub-problems with infeasible solutions, or with bounds worse than the current best solution, are discarded (bounded).
 Process:
o Solve the relaxed linear programming (LP) version of the original IP problem (where all variables are continuous).
o If the LP solution has all integer values, it is the optimal solution. Otherwise, choose a fractional variable to branch on.
o Create two child nodes, each representing a new sub-problem with the chosen variable fixed to one of its possible integer values.
o Repeat the process for each child node until all possibilities are explored or an optimal solution is found.
 Advantage:
o Guaranteed to find the optimal solution if all branches are explored.
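The branch-and-bound idea can be illustrated on a tiny 0/1 knapsack problem, using the LP relaxation (filling the remaining capacity fractionally) as the bound; all item data below are assumptions:

```python
# Items sorted by value/weight ratio (a common B&B convention).
values  = [60, 100, 120]
weights = [10, 20, 30]
CAP = 50

def lp_bound(i, value, room):
    """Upper bound from the LP relaxation of the remaining
    sub-problem: fill leftover capacity fractionally."""
    bound = value
    for j in range(i, len(values)):
        if weights[j] <= room:
            room -= weights[j]
            bound += values[j]
        else:
            bound += values[j] * room / weights[j]
            break
    return bound

best = 0

def branch(i, value, room):
    global best
    if i == len(values):              # leaf: record incumbent
        best = max(best, value)
        return
    if lp_bound(i, value, room) <= best:
        return                        # bound: sub-tree cannot beat incumbent
    if weights[i] <= room:            # branch: take item i ...
        branch(i + 1, value + values[i], room - weights[i])
    branch(i + 1, value, room)        # ... or leave it

branch(0, 0, CAP)
print(best)
```

For these items the search prunes the sub-tree that leaves the first two items out and returns the optimal value 220 (items 2 and 3).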


2. Cutting Plane Algorithm:
 Concept:
o Iteratively solves the LP relaxation and adds constraints (cuts) to remove fractional, non-optimal solutions from the continuous relaxation.
o Cuts are derived from information obtained during the LP solution process.
o The LP problem is re-solved after each added cut, potentially leading to a tighter feasible region with integer-valued solutions.
 Process:
o Solve the LP relaxation of the IP problem.
o If the LP solution is integer-feasible, it is the optimal solution. Otherwise, analyze the LP solution to identify an inequality that the fractional solution violates but every integer solution satisfies.
o Add this inequality (cut) to the problem constraints.
o Re-solve the LP problem with the added cut.
o Repeat the analyze-cut-resolve steps until an optimal integer solution is found.
 Advantage:
o Can sometimes find the optimal solution faster than B&B, especially with efficient cut-generation techniques.
Choosing the Right Algorithm:
The choice between B&B and Cutting Plane depends on the
specific problem and available software.
 B&B is a robust and reliable method, especially for
problems with a moderate number of variables.
 Cutting Plane can be more efficient for problems with
specific structures or readily identifiable cuts.
In practice, some solvers combine both approaches (Branch-and-Cut) to leverage the strengths of each algorithm.

Gomory’s All-IPP method


Gomory's All-Integer Programming (All-IPP) method is a
specific type of Cutting Plane Algorithm designed to solve
integer programming problems. It works by iteratively solving
the Linear Programming (LP) relaxation of the problem and
adding constraints (cuts) called Gomory cuts to exclude non-integer solutions.
Here's a deeper dive into Gomory's All-IPP method:
Concept:
 Similar to the Cutting Plane Algorithm, it starts with the
LP relaxation, which ignores the integer constraints and
finds an optimal solution.
 If the LP solution has all integer values, it's the optimal
solution for the original IP problem.
 However, if the LP solution contains fractional values,
Gomory's All-IPP method identifies a row in the LP
tableau that violates integrality (i.e., has a non-integer
basic variable).
 Based on this row, a Gomory cut is generated. This cut is
a new linear constraint that eliminates the fractional
solution from the feasible region while ensuring all
feasible integer solutions remain valid.
 The newly generated Gomory cut is added to the original
problem constraints, and the LP problem is re-solved
with the tighter feasible region.
 This process of solving the LP, identifying violated rows,
generating cuts, and re-solving continues until an optimal
solution with all integer values is found.
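The cut-generation step itself is mechanical: take the fractional parts of the coefficients in the violating tableau row. A sketch on a made-up row (the coefficients below are assumptions, not a worked tableau from the text):

```python
import math

# Assumed simplex-tableau row with a fractional basic variable:
#   x_B + 1.5*x3 - 0.25*x4 = 3.75
coeffs = {"x3": 1.5, "x4": -0.25}
rhs = 3.75

def frac(v):
    """Fractional part, always in [0, 1) (so frac(-0.25) = 0.75)."""
    return v - math.floor(v)

# Gomory fractional cut:  sum_j frac(a_j) * x_j  >=  frac(b)
cut = {var: frac(a) for var, a in coeffs.items()}
cut_rhs = frac(rhs)
print(cut, ">=", cut_rhs)
```

The resulting constraint 0.5*x3 + 0.75*x4 >= 0.75 is violated by the current fractional solution (where x3 = x4 = 0) but satisfied by every feasible integer solution, exactly as the key points above state.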
Key Points:
 Gomory cuts are guaranteed to eliminate the current
fractional solution without excluding any feasible integer
solutions.
 The method converges in a finite number of steps to find
the optimal integer solution.
 While powerful, Gomory cuts can sometimes be complex
to generate and may lead to a large number of additional
constraints, impacting computational efficiency.
Advantages:
 Guaranteed to find the optimal integer solution.
 Well-suited for problems with specific structures where
efficient Gomory cut generation techniques exist.
Disadvantages:
 Generating Gomory cuts can be computationally
expensive.
 Adding many cuts can significantly increase the size of
the LP problem, potentially slowing down the solution
process.
Comparison with Cutting Plane Algorithm:
Gomory's All-IPP method is a specific implementation of the
Cutting Plane Algorithm focused on generating Gomory cuts.
Other cutting plane algorithms might explore different types
of cuts based on the problem structure.
In conclusion, Gomory's All-IPP method offers a robust
approach for solving integer programming problems,
guaranteeing an optimal solution. However, its efficiency can
be impacted by the complexity of cut generation and the
number of cuts required.

Project Management: Rules for drawing the network diagram
Network Diagram Rules in Project Management
A network diagram, also known as a project schedule network
diagram (PSND), visually represents the tasks and their
dependencies in a project. Here are some key rules for
drawing an effective network diagram:
1. Define Activities and Events:
 Before drawing, identify all the activities (tasks)
involved in the project.
 Clearly define the start and end points of the project, represented by events (nodes).
2. Activity Representation:
 Activities are typically shown as arrows, with the tail
representing the start and the head representing the
finish.
 The length of the arrow usually doesn't signify duration
(although some methods might use it).
3. Dependencies:
 Show the relationships between activities using arrows.
 There are four main types of dependencies:
o Finish-to-Start (FS): The successor activity cannot begin until the predecessor finishes (most common).
o Start-to-Start (SS): The successor activity cannot start until the predecessor starts.
o Finish-to-Finish (FF): The successor activity cannot finish until the predecessor finishes.
o Start-to-Finish (SF): The successor activity cannot finish until the predecessor starts (least common).
4. Flow and Clarity:
 Maintain a left-to-right flow for activities, representing
the project's overall progress.
 Avoid crossing arrows whenever possible. If
unavoidable, use clear labels to indicate the dependency.
 Use numbering or unique identifiers for activities and
events to enhance readability.
5. Additional Considerations:
 Include milestones (significant project markers) as events
within the diagram.
 You can use dummy activities (arrows with zero
duration) to maintain the logical flow, especially when
there are external dependencies.
 Keep the diagram visually clean and avoid overloading it
with too much information.

Application of CPM and PERT techniques in project planning and control
CPM (Critical Path Method) and PERT (Program Evaluation
and Review Technique) are two cornerstone techniques used
in project planning and control. While they share some
similarities, they have distinct strengths and applications.
Here's a breakdown of their use in project management:
Critical Path Method (CPM):
 Focus: Deterministic project environments with well-
defined activity durations and minimal uncertainty.
 Strengths:
o Identifies the critical path, the longest sequence of dependent activities, which determines the project duration.
o Helps in resource allocation and scheduling to optimize project completion time.
o Enables monitoring of progress and early identification of potential delays.
 Applications: Construction projects, manufacturing processes, and maintenance activities with well-understood tasks.
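The critical-path calculation itself is a forward pass (earliest times) followed by a backward pass (latest times); activities with zero slack form the critical path. A sketch on an assumed four-activity network:

```python
# Assumed toy network: durations and predecessor lists.
durations = {"A": 3, "B": 2, "C": 4, "D": 2}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

# Forward pass: earliest start (ES) and earliest finish (EF).
ES, EF = {}, {}
for act in ("A", "B", "C", "D"):          # topological order
    ES[act] = max((EF[p] for p in preds[act]), default=0)
    EF[act] = ES[act] + durations[act]

project_duration = max(EF.values())

# Backward pass: latest finish (LF) and latest start (LS).
LF, LS = {}, {}
for act in ("D", "C", "B", "A"):          # reverse topological order
    succs = [s for s, ps in preds.items() if act in ps]
    LF[act] = min((LS[s] for s in succs), default=project_duration)
    LS[act] = LF[act] - durations[act]

# Zero slack (ES == LS) marks the critical path.
critical = [a for a in durations if ES[a] == LS[a]]
print(project_duration, critical)
```

For these assumed durations the project takes 9 time units and the critical path is A-C-D; activity B has two units of slack.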
Program Evaluation and Review Technique (PERT):
 Focus: Projects with inherent uncertainty in activity
durations.
 Strengths:
o Estimates project completion time considering probabilistic variations in activity durations.
o Uses statistical methods like weighted averages to account for optimistic, pessimistic, and most likely durations for each activity.
o Provides a risk assessment by calculating the probability of completing the project within a specific timeframe.
 Applications: Research and development projects, software development, and projects with new or untested technologies.
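PERT's weighted average mentioned above has a standard closed form based on the beta distribution, sketched here with assumed three-point estimates:

```python
# Standard PERT formulas for one activity:
#   expected duration = (optimistic + 4*most_likely + pessimistic) / 6
#   standard deviation = (pessimistic - optimistic) / 6
def pert_estimate(optimistic, most_likely, pessimistic):
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    sigma = (pessimistic - optimistic) / 6
    return expected, sigma

# Assumed estimates (days): O = 4, M = 6, P = 14.
e, s = pert_estimate(4, 6, 14)
print(e, s)
```

Here the expected duration is 7 days with a standard deviation of about 1.67 days; summing expected durations and variances along the critical path gives the probabilistic completion-time estimates PERT is known for.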
Complementary Use:
In many real-world projects, there's a mix of certainty and
uncertainty in activity durations. This is where CPM and
PERT can be used together effectively:
1. Work Breakdown Structure (WBS): Break down the project into manageable activities.
2. Estimate Activity Durations: For each activity,
estimate optimistic, pessimistic, and most likely
durations using expert judgment or historical data (if
available).
3. Apply PERT: Use PERT calculations to determine the
expected project duration and analyze the probability of
meeting deadlines.
4. Identify Critical Path: Use the expected durations from
PERT to construct a network diagram and identify the
critical path using CPM methods.
5. Project Monitoring and Control: Track project
progress, identify deviations from the plan, and take
corrective actions using both CPM and PERT insights.
Benefits of Combined Approach:
 Realistic Planning: Accounts for both deterministic and
probabilistic factors in project duration estimation.
 Improved Risk Management: Provides a clearer
understanding of potential delays and allows for
proactive risk mitigation strategies.
 Enhanced Communication: A combined network
diagram with critical path and probabilistic information
facilitates better communication with stakeholders.
Conclusion:
CPM and PERT offer valuable tools for project planning and
control. Choosing the right technique or using them together
depends on the specific project environment and the level of
uncertainty involved. By understanding their strengths and
weaknesses, project managers can leverage these techniques
to create realistic schedules, manage risks effectively, and
improve project execution.
Crashing and resource leveling of operations; Simulation and its uses in Queuing theory & Materials Management
Crashing, Resource Leveling, Simulation, and their
Applications
Crashing and Resource Leveling:
These techniques are used in project management to optimize
project schedules and resource allocation.
 Crashing: Aims to shorten the project schedule by
adding resources (e.g., manpower, equipment) to critical
activities. This often comes at an increased cost.
 Resource Leveling: Focuses on smoothing out resource
demands throughout the project. It might involve
delaying non-critical activities or shifting resources
between tasks to avoid overloading or underutilizing
them.
Simulation:
Simulation involves creating a digital model that imitates the
behavior of a real-world system. In project management,
simulation can be used to:
 Evaluate the impact of crashing and resource leveling
on project outcomes. By simulating different scenarios,
managers can assess the trade-offs between schedule
compression, resource needs, and costs.
 Identify potential bottlenecks: Simulation can help
pinpoint activities or resources causing delays and
inefficiencies.
Applications in Queuing Theory and Materials
Management:
Queuing Theory:
 Studies waiting lines (queues) and their behavior.
 Simulation can be used to:
o Analyze queuing systems: Model customer arrivals, service times, and server capacity to understand queue lengths, waiting times, and system performance.
o Optimize resource allocation: Simulate different service configurations (e.g., number of servers) to determine the optimal resource allocation for managing queue lengths and wait times effectively.
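A queuing simulation of this kind can be very small. The sketch below models a single-server queue with exponential inter-arrival and service times (an M/M/1 system; the arrival and service rates are assumptions) and lets you compare the simulated average wait against the known formula Wq = lam / (mu * (mu - lam)):

```python
import random

random.seed(1)  # reproducible runs

def simulate_mm1(lam, mu, n_customers=50_000):
    """Return the average time customers wait in the queue,
    given arrival rate lam and service rate mu."""
    clock = 0.0          # arrival time of the current customer
    server_free = 0.0    # time the server next becomes free
    total_wait = 0.0
    for _ in range(n_customers):
        clock += random.expovariate(lam)      # next arrival
        start = max(clock, server_free)       # wait if server is busy
        total_wait += start - clock
        server_free = start + random.expovariate(mu)
    return total_wait / n_customers

# Theory predicts Wq = 0.8 / (1.0 * 0.2) = 4 for these rates.
print(simulate_mm1(lam=0.8, mu=1.0))
```

Agreement between the simulated and theoretical values is a useful sanity check before simulating configurations (e.g., multiple servers) that have no closed-form answer.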
Materials Management:
 Deals with planning, purchasing, and controlling the
flow of materials in a production or supply chain system.
 Simulation can be used to:
o Optimize inventory levels: Simulate demand fluctuations and lead times to determine the optimal amount of inventory to hold, minimizing stockouts and storage costs.
o Evaluate production planning strategies: Simulate different production schedules and material-flow scenarios to identify the most efficient and cost-effective approach.
Benefits of Simulation:
 Reduced Risk: Allows testing of different scenarios
without impacting the real system.
 Improved Decision Making: Provides insights into
potential problems and helps evaluate the effectiveness
of alternative approaches before implementation.
 Enhanced Communication: Simulation models can be
used to visually communicate complex systems and their
behavior to stakeholders.
Conclusion:
Crashing, resource leveling, and simulation are valuable tools
for project managers and professionals in queuing theory and
materials management. By understanding these techniques
and their applications, they can make informed decisions to
optimize project schedules, resource allocation, inventory
levels, and overall system performance.

Unit 4: Game Theory and Dynamic Programming

Game Theory and Dynamic Programming: An Introduction
These two powerful tools, though seemingly separate, can be
surprisingly complementary. Here's a quick look at each and
how they connect:
Game Theory
 Studies strategic decision-making in situations with
multiple agents (players).
 Analyzes how players with conflicting or aligned
interests make choices based on a set of rules and
potential outcomes (payoffs).
 Aims to predict the equilibrium, which is a stable state
where no player has an incentive to change their strategy.
 Applications span economics, business, politics, and
even evolutionary biology.
Dynamic Programming
 An optimization technique for solving problems with
overlapping subproblems.
 Breaks down a complex problem into smaller, simpler
subproblems and stores the solutions for future reference.
 Works best when problems can be solved by making a
sequence of decisions.
 Used in areas like finance, engineering, robotics, and
artificial intelligence.
The Connection
 Dynamic programming can be a powerful tool for
solving games, particularly those with sequential moves,
where players react to each other's decisions.
 By breaking down the game into stages and calculating
the optimal choices for each sub-stage, you can find the
overall best strategy for a player.
 This is particularly useful in games with perfect
information, where players know all the past moves.
Here are some additional points to consider:
 Not all games are solvable with dynamic programming,
especially those with simultaneous moves or imperfect
information.
 Game theory offers a broader framework for analyzing
strategic interactions, while dynamic programming is a
specific technique for finding optimal solutions.
A two-person zero-sum game
Key Players and Goals:
 Two Players: There are exactly two players involved.
 Zero-Sum: This means the outcome for one player is
directly tied to the other player's outcome. In simpler
terms, what one player gains, the other loses, and the
total payoff between them always adds up to zero.
Strategies and Payoffs:
 Strategies: Each player has a set of choices they can
make. These choices are called strategies.
 Payoff Matrix: The game is often represented by a
payoff matrix. This matrix shows the payoff for each
player (usually Player 1, denoted as "R" for row player)
for every combination of strategies chosen by both
players.
Finding Equilibrium:
 The goal for each player is to find the best strategy,
which depends on what they think the other player will
do.
 An equilibrium is a situation where neither player wants
to change their strategy because doing so wouldn't
benefit them. This can be achieved through different
strategies:
o Maximin: Player 1 tries to maximize their minimum gain across all possible strategies of Player 2 (ensuring the worst-case scenario isn't too bad).
o Minimax: Player 2 tries to minimize the maximum amount Player 1 can win across all of Player 1's strategies (guaranteeing they don't lose too much).
o Saddle Point: In some games there is a cell in the payoff matrix that is simultaneously the minimum of its row and the maximum of its column. This is called a saddle point and represents a stable equilibrium.
Examples:
 Rock-Paper-Scissors: A classic zero-sum game (a tie simply gives both players a payoff of zero), often used to illustrate basic strategic thinking.
 Matching Pennies: Players simultaneously choose heads
(H) or tails (T). If they match, Player 1 wins, otherwise
Player 2 wins.
Real-World Applications:
 Auctions (bidding strategies)
 Pricing strategies in businesses
 Military tactics
 Negotiation processes
Understanding two-person zero-sum games is a starting point
for exploring more complex game theory scenarios where
players might have cooperative or non-zero-sum goals.
Maximin - Minimax principle
The "maximin-minimax principle" actually bundles two related concepts in game theory, one for each player in a two-person zero-sum game:
1. Maximin Principle (for Row Player):
 This principle focuses on maximizing the minimum
guarantee for Player 1 (often denoted as the "row
player" because their strategies are represented by rows
in the payoff matrix).
 Here's the thought process:
o Player 1 considers all their possible strategies (rows in the matrix).
o For each strategy, they identify the minimum payoff they would receive across all possible responses from Player 2 (columns in the matrix). This ensures they are prepared for the worst-case scenario from Player 2's perspective.
o Finally, Player 1 chooses the strategy that leads to the highest minimum payoff among all their options. This guarantees a floor on their outcome, regardless of what Player 2 does.
2. Minimax Principle (for Column Player):
 This principle focuses on minimizing the maximum loss
for Player 2 (often denoted as the "column player"
because their strategies are represented by columns in the
payoff matrix).
 Here's Player 2's approach:
o They consider all their possible strategies (columns in the matrix).
o For each strategy, they identify the maximum payoff Player 1 could receive (Player 1's best row against that column). This represents the worst possible outcome for Player 2.
o Player 2 then chooses the strategy that leads to the minimum of these maximum payoffs. This minimizes the potential damage from Player 1's best possible response.
Key Points:
 Maximin and Minimax are not directly combined into a
single principle. They represent two different approaches
for each player in a zero-sum game.
 In some games, there might be a situation where the
maximin value for Player 1 coincides with the
minimax value for Player 2. This point of intersection
is called a saddle point and represents a stable
equilibrium for the game. At this point, neither player
has an incentive to change their strategy.
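Both principles can be checked mechanically on a payoff matrix. In the sketch below (the matrix entries are an illustrative assumption, with payoffs to the row player), the maximin and minimax values coincide, so the game has a saddle point:

```python
# Payoffs to the row player (assumed numbers).
payoff = [[4, 2, 3],
          [1, 0, 2],
          [5, 2, 4]]

# Row player's maximin: the best of the row minima.
maximin = max(min(row) for row in payoff)

# Column player's minimax: the smallest of the column maxima.
cols = list(zip(*payoff))
minimax = min(max(col) for col in cols)

print(maximin, minimax)  # equal values signal a saddle point
```

Here both values equal 2, so the game has a saddle point with value 2 and neither player benefits from deviating.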
Principle of Dominance
The principle of dominance is a fundamental concept in game
theory, especially for simplifying games before diving into
complex strategies. It helps identify and eliminate inferior
strategies for each player, leading to a more concise game
representation.
Here's how it works:
Dominated Strategies:
 A strategy for a player is considered dominated if there
exists another strategy that yields a better payoff in all
scenarios, regardless of the opponent's choice.
 In simpler terms, a dominated strategy is always inferior
to another option, making it illogical for a rational player
to choose it.
Identifying Dominated Strategies:
We can identify dominated strategies by analyzing the payoff
matrix, which represents the outcomes for each player based
on their chosen strategies. Here are the two main approaches:
1. Dominated Row (for Player 1):
o Consider a row (representing a strategy for Player 1) in the payoff matrix.
o Compare the payoffs in that row with the corresponding payoffs in other rows (other strategies of Player 1) for each column (opponent's strategies).
o If every element in a row is less than or equal to the corresponding element in another row, the first row is dominated by the second row.
o We can eliminate the dominated row (inferior strategy) from the matrix.
2. Dominated Column (for Player 2):
o Similarly, consider a column (representing a strategy for Player 2).
o Compare the payoffs in that column with the corresponding payoffs in other columns (other strategies of Player 2) for each row (opponent's strategies).
o If every element in a column is greater than or equal to the corresponding element in another column, the first column is dominated by the second column.
o We can eliminate the dominated column (inferior strategy) from the matrix.
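The row and column comparisons described above can be sketched in a few lines (the payoff matrix is an illustrative assumption, with payoffs to the row player):

```python
# Payoffs to the row player (assumed numbers).
payoff = [[3, 5, 4],
          [2, 1, 3],
          [4, 6, 5]]

def undominated_rows(m):
    """Drop any row that some other row beats or ties element-wise."""
    keep = []
    for i, row in enumerate(m):
        dominated = any(
            j != i
            and all(o >= r for o, r in zip(other, row))
            and other != row
            for j, other in enumerate(m)
        )
        if not dominated:
            keep.append(row)
    return keep

def undominated_cols(m):
    """The column player prefers small payoffs, so a column is
    dominated when another column is <= element-wise; negating
    the columns lets us reuse the row test."""
    cols = list(zip(*m))
    kept = undominated_rows([[-v for v in c] for c in cols])
    return [tuple(-v for v in c) for c in kept]

print(undominated_rows(payoff))
print(undominated_cols(payoff))
```

For this matrix the third row dominates the first two, and the third column is dominated, so the game shrinks dramatically before any equilibrium analysis.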


Benefits of Dominance Principle:
 Simplifies the Game: By eliminating dominated
strategies, we reduce the size of the payoff matrix,
making the game easier to analyze and solve.
 Focus on Relevant Strategies: It ensures players only
consider rational choices, leading to more efficient
decision-making within the game.
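The elimination procedure above can be sketched in code. This is an illustrative sketch (the function name and structure are my own), assuming payoffs are given to the row player, so rows prefer larger values and columns prefer smaller ones:

```python
def eliminate_dominated(matrix):
    """Repeatedly delete dominated rows and columns; return the reduced matrix.

    matrix[i][j] is the payoff to the row player (Player 1) when Player 1
    plays row i and Player 2 plays column j.
    """
    rows = [list(r) for r in matrix]
    changed = True
    while changed:
        changed = False
        # A row is dominated if some other row is >= it in every column.
        for i in range(len(rows)):
            if any(k != i and all(rows[k][j] >= rows[i][j]
                                  for j in range(len(rows[i])))
                   for k in range(len(rows))):
                del rows[i]
                changed = True
                break
        if changed:
            continue
        # A column is dominated if some other column is <= it in every row
        # (Player 2 pays these amounts, so smaller is better for them).
        ncols = len(rows[0])
        for j in range(ncols):
            if any(k != j and all(rows[i][k] <= rows[i][j]
                                  for i in range(len(rows)))
                   for k in range(ncols)):
                for r in rows:
                    del r[j]
                changed = True
                break
    return rows
```

For example, `eliminate_dominated([[4, 2], [3, 1], [5, 6]])` reduces all the way to `[[5]]`: row `[3, 1]` is dominated by `[4, 2]`, which in turn is dominated by `[5, 6]`, and then column 2 is dominated by column 1.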
Solution for mixed strategy problems
Mixed strategy problems in game theory involve situations
where players don't choose a single, pure strategy but instead
randomize their choices with specific probabilities. Finding
the solution in these scenarios requires a bit more advanced
math compared to pure strategy problems. Here's a breakdown
of the general approach:
1. Expected Payoff:
The core concept is expected payoff. This represents the
average payoff a player can expect to receive over a long
series of games, considering the probabilities assigned to each
strategy in their mixed strategy.
2. Equilibrium Through Mixed Strategies (Nash
Equilibrium):
The goal is to find a Nash Equilibrium in mixed strategies.
This occurs when both players choose their mixed strategies
such that:
 No player wants to deviate from their chosen
strategy: If a player changes their strategy while the
other player remains at equilibrium, their expected
payoff won't improve.
Steps to Solve Mixed Strategy Problems:
1. Define the Payoff Matrix: Start by representing the
game with a payoff matrix, showing the payoffs for each
player under all combinations of pure strategies.
2. Assign Probabilities: Each player defines a probability
distribution for their pure strategies. This represents the
chance they choose each strategy in a single game.
3. Calculate Expected Payoffs: Based on the chosen
probabilities, calculate the expected payoff for each
player for each of their pure strategies.
4. Best Response Functions: Formulate the "best response
function" for each player. This function shows how a
player should adjust their probabilities (mixed strategy)
in response to the other player's chosen probabilities.
5. Solve for Equilibrium: Find the combination of
probabilities for both players where neither player has an
incentive to change their mixed strategy. This is the Nash
Equilibrium.
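For a 2 x 2 zero-sum game, the steps above collapse into a closed form. The sketch below (function name is my own) assumes the payoff matrix [[a, b], [c, d]] gives payoffs to the row player and that the game has no saddle point, so the denominator is nonzero:

```python
def mixed_strategy_2x2(a, b, c, d):
    """Mixed-strategy equilibrium of a 2x2 zero-sum game with payoff
    matrix [[a, b], [c, d]] (payoffs to the row player).  Assumes the
    game has no saddle point, so the denominator is nonzero."""
    denom = a - b - c + d
    p = (d - c) / denom          # probability the row player picks row 1
    q = (d - b) / denom          # probability the column player picks column 1
    v = (a * d - b * c) / denom  # value of the game
    return p, q, v

# Matching pennies: each player should randomize 50/50, and the game value is 0.
p, q, v = mixed_strategy_2x2(1, -1, -1, 1)
```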
Challenges and Considerations:
 Solving for mixed strategy equilibrium often involves
solving a system of equations derived from the expected
payoffs and best response functions. Linear algebra
techniques might be necessary.
 Not all games have a mixed strategy Nash Equilibrium.
Some games might have a pure strategy equilibrium
where a single strategy is dominant for each player.
Graphical method for 2 x n and m x 2 games
The graphical method is a powerful tool for solving specific
types of games in game theory, particularly those with two
rows (for Player 1) and n columns (for Player 2), or m rows
(for Player 1) and two columns (for Player 2). Here's a
breakdown of how it works for these two types of games:
1. Games with 2 Rows (Player 1) and n Columns (Player
2):
These are usually called 2 x n games. Here's the
approach:
 Payoff Matrix: Start with the payoff matrix representing
the game. It will have two rows for Player 1's strategies
and n columns for Player 2's strategies.
 Dominance (Optional): You can apply the dominance
principle (explained earlier) to eliminate any dominated
columns (strategies) for Player 2. This simplifies the
matrix if applicable.
 Expected Payoff Lines: Let p be the probability that
Player 1 chooses their first row (so the second row gets
probability 1 - p). For each of Player 2's columns,
calculate Player 1's expected payoff as a function of p.
This results in n linear equations, each representing the
expected payoff against one of Player 2's strategies.
 Graphical Representation: Plot these expected payoff
lines on a graph with p (Player 1's mixing probability) on
the x-axis and the expected payoff on the y-axis. Each
line represents one of Player 2's strategies.
Finding the Solution:
 Focus on the Lower Envelope: Since Player 2 will pick
whichever column hurts Player 1 most, Player 1 can only
count on the lower envelope of the expected payoff lines:
for each value of p, the envelope gives the worst payoff
Player 1 can receive.
 Highest Point of the Envelope: Player 1's optimal
mixing probability p* is the point where the lower
envelope reaches its highest value (the maximin point).
This is usually an intersection of two of the lines.
 Read Off the Solution: The value of the game is the
height of the envelope at p*, and the two lines that
intersect there identify the columns Player 2 uses in their
own optimal mixed strategy.
2. Games with m Rows (Player 1) and 2 Columns (Player
2):
These are usually called m x 2 games. The
graphical method for this type is quite similar:
 Payoff Matrix: Start with the payoff matrix with m rows
for Player 1's strategies and two columns for Player 2's
strategies.
 Expected Payoff Lines: Let q be the probability that
Player 2 chooses their first column (so the second column
gets probability 1 - q). For each of Player 1's rows,
calculate the expected payoff as a function of q. This
results in m linear equations, one per row.
 Graphical Representation: Plot these expected payoff
lines on a graph with q (Player 2's mixing probability) on
the x-axis and the expected payoff on the y-axis.
Finding the Solution:
 Focus on the Upper Envelope: Since Player 1 will pick
whichever row helps them most, Player 2 must guard
against the upper envelope of the expected payoff lines:
for each value of q, the envelope gives the largest payoff
Player 1 can extract.
 Lowest Point of the Envelope: Player 2's optimal
mixing probability q* is the point where the upper
envelope reaches its lowest value (the minimax point),
again usually an intersection of two lines.
 Read Off the Solution: The value of the game is the
height of the envelope at q*, and the two intersecting
lines identify the rows Player 1 uses in their optimal
mixed strategy.
Key Points:
 The graphical method is a visual approach and might not
be as efficient for large games with many rows or
columns.
 It's particularly useful for games with a relatively small
number of strategies (typically 2 rows or columns).
 For more complex games, other methods like linear
programming might be more suitable.
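The lower-envelope construction for a 2 x n game can also be approximated numerically. The sketch below (names are my own) assumes payoffs go to Player 1 and that p is the probability Player 1 assigns to their first row; it samples p on a grid, takes the minimum over all columns at each point (the lower envelope), and keeps the highest such value:

```python
def maximin_2xn(payoffs, steps=10_000):
    """Approximate the graphical method for a 2 x n zero-sum game.

    payoffs[i][j] is Player 1's payoff when playing row i against column j.
    Player 1 mixes rows with probabilities (p, 1 - p); each column j traces
    the line E_j(p) = p * payoffs[0][j] + (1 - p) * payoffs[1][j].
    Player 1 maximizes the lower envelope of these lines."""
    best_p, best_value = 0.0, float("-inf")
    for k in range(steps + 1):
        p = k / steps
        envelope = min(p * payoffs[0][j] + (1 - p) * payoffs[1][j]
                       for j in range(len(payoffs[0])))
        if envelope > best_value:
            best_p, best_value = p, envelope
    return best_p, best_value

p, v = maximin_2xn([[1, 3, 11], [8, 5, 2]])
# Here the optimal p is 3/11 ≈ 0.273, giving game value 49/11 ≈ 4.455,
# at the intersection of the lines for columns 2 and 3.
```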
Recursive nature of computations in Dynamic
Programming
Dynamic programming (DP) thrives on its recursive nature
when solving problems. Here's how it works:
Breaking Down the Problem:
 DP tackles complex problems by breaking them down
into smaller, overlapping subproblems. These
subproblems are essentially smaller versions of the
original problem.
 The key is that the solutions to these subproblems can be
reused to solve larger subproblems and ultimately the
entire problem.
Recursion in Action:
 Imagine you want to find the nth Fibonacci number (a
series where each number is the sum of the two
preceding ones). Using recursion, you can define a
function that calculates the nth Fibonacci number.
 This function would call itself twice, once to find the
(n-1)th Fibonacci number and again to find the (n-2)th
Fibonacci number.
 The solutions to these subproblems are then used to
calculate the nth Fibonacci number.
Benefits of Recursion in DP:
 Clarity: Recursion can often lead to more elegant and
easier-to-understand code, especially for problems that
naturally break down into smaller versions of
themselves.
 Modularity: By defining functions for each subproblem,
you create modular code that can be reused and
potentially adapted for other problems.
The Catch: Repetitive Calculations
 While recursion offers advantages, a major drawback is
the potential for redundant calculations.
 In the Fibonacci example, the same subproblems (like
calculating the 2nd Fibonacci number) might be
computed multiple times as you solve for larger
numbers.
Dynamic Programming to the Rescue!
 To address this inefficiency, DP introduces a concept
called memoization.
 With memoization, the solutions to each subproblem are
stored in a table or array.
 Whenever a subproblem needs to be solved, the program
first checks the table.
 If the solution is already stored, it's retrieved from the
table instead of being re-calculated recursively.
Here's the key point:
Even though DP algorithms might be defined recursively to
break down the problem, they often use memoization to avoid
redundant calculations, making them more efficient than pure
recursive solutions.
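The Fibonacci example above can be memoized with Python's standard `functools.lru_cache`, which stores each subproblem's result after the first computation:

```python
from functools import lru_cache

# Naive recursion recomputes the same Fibonacci subproblems exponentially
# many times; memoization stores each result so it is computed only once.

@lru_cache(maxsize=None)
def fib(n):
    """n-th Fibonacci number, memoized (fib(1) = fib(2) = 1)."""
    if n <= 2:
        return 1
    return fib(n - 1) + fib(n - 2)
```

With memoization, `fib(50)` returns essentially instantly, whereas the un-memoized recursion would make billions of redundant calls.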
Analogy:
Imagine climbing a large staircase. A purely recursive
approach would be like constantly taking two steps up, then
checking if you've reached the top. If not, you take two steps
back down and repeat. Dynamic programming with
memoization would be like marking each step you've climbed.
If you reach a step you've already marked, you know the path
from there and don't need to retrace your steps.
Forward and backward recursion,
Forward and backward recursion are two approaches to
solving problems recursively, each with its own advantages
and applications. Here's a breakdown:
Forward Recursion:
 Intuition: Forward recursion works in a bottom-up
manner. It starts with the base case, the simplest
subproblem, and builds its way up to the final solution.
 Process: Imagine climbing a staircase. With forward
recursion, you'd first work out how to reach step 1, then
use that result to reach step 2, and so on, accumulating
partial results until you reach the top (the full problem).
 Example: A classic example is the factorial function
computed bottom-up: start from the base case (1! = 1)
and multiply upward through 2!, 3!, and so on until you
reach the desired value n!.
Backward Recursion:
 Intuition: Backward recursion works in a top-down
manner. It starts with the entire problem and breaks it
down into smaller subproblems, solving those
subproblems recursively until it reaches the base case.
 Process: Continuing the maze analogy, backward
recursion would involve starting at the entrance and
defining functions that check if a given point leads to the
exit. You'd recursively explore possible paths forward,
breaking down the problem into smaller "can I reach this
point from here?" questions until you find a path to the
exit.
 Example: An example of backward recursion could be
finding the optimal strategy in a game like chess. The
function would analyze the current board state and
recursively explore all possible moves, evaluating the
resulting positions until it reaches the base case (end of
the game).
Choosing the Right Approach:
The choice between forward and backward recursion depends
on the specific problem you're trying to solve. Here are some
general guidelines:
 Forward recursion: Often used when the solution
directly depends on solving smaller subproblems that
lead to the final solution. It might be easier to understand
for problems where the base case is the natural starting
point.
 Backward recursion: Suitable when the problem is
easier to break down into smaller subproblems starting
from the overall goal. It can be more efficient for
problems with overlapping subproblems, as solutions can
be reused through memoization (storing solutions to
subproblems to avoid recalculating them).
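The contrast can be made concrete with factorial, under the convention used above that forward means bottom-up from the base case and backward means top-down from the full problem (function names are illustrative):

```python
def factorial_forward(n):
    """Forward (bottom-up): start at the base case and build toward n."""
    result = 1  # base case: 1! = 1 (also covers 0!)
    for k in range(2, n + 1):
        result *= k
    return result

def factorial_backward(n):
    """Backward (top-down): start from the full problem and recurse
    down toward the base case."""
    if n <= 1:
        return 1
    return n * factorial_backward(n - 1)
```

Both compute the same values; the difference is purely in the direction of the computation.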
Dynamic Programming Applications –
Knapsack, Equipment replacement,
Investment models
Dynamic Programming in Action: Knapsack, Equipment
Replacement, and Investment Models
Dynamic programming shines in solving optimization
problems with overlapping subproblems. Here's how it tackles
three specific problems:
1. Knapsack Problem:
Imagine you're a thief with a limited weight capacity
backpack (knapsack) and a room full of treasures (items) with
varying weights and values. You want to steal the most
valuable treasure combination without exceeding the weight
limit.
 Subproblems: The problem can be broken down into
subproblems: what's the maximum value you can steal
considering different weight capacities (up to the total
limit).
 Dynamic Programming Approach: We can build a
table where each cell represents the maximum value
achievable with a specific weight capacity. By iteratively
filling the table, considering the value and weight of each
item, we can find the optimal solution (maximum value)
for the entire knapsack capacity.
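The table-filling idea above can be sketched for the 0/1 knapsack in a few lines. This is an illustrative sketch (names are my own) using a one-dimensional table indexed by remaining capacity:

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack by dynamic programming.

    best[w] holds the maximum value achievable with weight capacity w
    using only the items considered so far."""
    best = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

# Example: three treasures worth 60, 100, 120 with weights 1, 2, 3,
# and a knapsack capacity of 5: the best haul is 100 + 120 = 220.
```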
2. Equipment Replacement Problem:
This problem involves determining the optimal time to replace
aging equipment. A machine might have decreasing efficiency
and increasing maintenance costs over time.
 Subproblems: We can define subproblems for each time
step: what's the minimum total cost (purchase,
maintenance) of having an operational machine at that
time, considering all future replacement possibilities.
 Dynamic Programming Approach: By building a table
where each cell represents the minimum cost at a specific
time, considering the initial purchase cost, future
replacement options, and maintenance costs, we can find
the most cost-effective replacement schedule.
3. Investment Models:
Dynamic programming can be used to optimize investment
decisions over time. We can consider factors like risk
tolerance, investment returns, and desired future goals.
 Subproblems: We can define subproblems for each time
period and potential account balance: what's the
maximum expected future wealth achievable with that
current balance, considering different investment
options.
 Dynamic Programming Approach: By building a table
where each cell represents the maximum expected wealth
at a specific time and account balance, we can develop
an investment strategy that maximizes wealth over the
desired time horizon.
Benefits of Dynamic Programming:
 Solves Complex Problems: It breaks down complex
problems into manageable subproblems, making them
easier to solve.
 Efficient for Overlapping Subproblems: By storing
solutions to subproblems, it avoids redundant
calculations, improving efficiency.
 Optimal Solutions: It helps find the optimal solution
(maximum value, minimum cost, maximum wealth)
based on the defined objective.
Important Considerations:
 Suitable for Specific Problems: It works best for
problems with well-defined stages, subproblems, and
optimal substructure (where the solution to a larger
problem depends on solutions to smaller ones).
 Curse of Dimensionality: As the number of variables or
states increases, the size of the table used for
memoization can grow exponentially, making the
solution computationally expensive.
Unit 3: The Transportation Problem and
Assignment Problem
Finding an initial feasible solution - North
West corner method, Least cost method,
Vogel’s Approximation method
In the world of transportation problems, where you need to
efficiently move goods from origins (suppliers) to destinations
(demands), finding an initial feasible solution is crucial.
Here's a breakdown of three popular methods to achieve this:
1. North-West Corner Method (NW Corner Rule):
 Simple and Intuitive: This is the easiest method to
understand and implement.
 Logic: It starts at the upper left corner (north-west
corner) of the transportation cost matrix and allocates the
maximum possible amount from the origin (supplier) to
the destination (demand) until either the origin's supply
or the destination's demand is fulfilled.
 Movement: Then, it moves to the right cell in the same
row if supply remains, or down to the cell below in the
same column if demand remains. This process continues
until all supply and demand are satisfied.
 Drawbacks: While simple, it doesn't necessarily
consider the actual transportation costs. It might lead to a
solution that's not the most cost-effective.
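The movement rule above is mechanical enough to sketch directly (function name is my own; it assumes a balanced problem, i.e. total supply equals total demand):

```python
def northwest_corner(supply, demand):
    """Initial feasible solution by the North-West Corner rule.

    Returns an allocation matrix.  Assumes total supply == total demand."""
    supply, demand = list(supply), list(demand)
    alloc = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        shipped = min(supply[i], demand[j])  # ship as much as possible
        alloc[i][j] = shipped
        supply[i] -= shipped
        demand[j] -= shipped
        if supply[i] == 0:
            i += 1   # row's supply exhausted: move down
        else:
            j += 1   # column's demand satisfied: move right
    return alloc
```

For supplies [20, 30, 25] and demands [10, 25, 40], this produces the staircase-shaped allocation [[10, 10, 0], [0, 15, 15], [0, 0, 25]].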
2. Least Cost Method (Minimum Cost Method):
 Cost-Conscious: This method prioritizes minimizing
transportation costs.
 Logic: It identifies the cell with the lowest
transportation cost in the entire matrix. The maximum
possible amount is allocated to that cell, considering
supply and demand constraints.
 Iteration: The row or column containing the chosen cell
is then eliminated (because supply or demand is
fulfilled), and the process is repeated by finding the least
cost cell in the remaining matrix. This continues until all
supply and demand are satisfied.
 Advantage: It's more likely to lead to a solution with a
lower total transportation cost compared to the north-
west corner method.
3. Vogel's Approximation Method (VAM):
 Refined Cost Consideration: This method builds on the
idea of minimizing transportation costs but tries to find a
better initial solution than the least cost method.
 Penalty Calculations: For each row and column in the
cost matrix, it calculates the penalty by subtracting the
second-lowest cost from the lowest cost in that row or
column. This penalty represents the potential additional
cost if the lowest cost cell isn't chosen.
 Focus on High Penalties: The method prioritizes
allocating goods to cells with the highest penalty values
(where the opportunity cost of not choosing the lowest
cost is significant). It then follows a similar iterative
process to the least cost method, considering supply and
demand constraints.
 Advantage: Vogel's Approximation Method often leads
to a better initial solution (closer to the optimal solution)
compared to the least cost method, especially for larger
problems.
Choosing the Right Method:
 For quick and easy initial solutions: North-West
Corner Method is a good starting point.
 For prioritizing cost-effectiveness: Least Cost Method
is a better choice.
 For potentially better initial solutions, especially with
larger problems: Vogel's Approximation Method is
recommended.
Finding the optimal solution,
In transportation problems, after finding an initial feasible
solution using methods like the North-West Corner Method,
Least Cost Method, or Vogel's Approximation Method, you'd
typically want to find the optimal solution, which represents
the minimum total transportation cost for moving goods from
origins to destinations. Here are two common approaches:
1. The Stepping Stone Method:
This iterative method builds upon an initial feasible solution
and tries to identify improvement opportunities that reduce
overall transportation costs.
 Evaluation: It involves evaluating closed loops in the
transportation matrix. Each loop starts at one
unoccupied (empty) cell and turns only at occupied
(allocated) cells before returning to its starting point.
 Net Cost Calculation: For each closed loop, assign
alternating plus and minus signs to its corners, starting
with a plus at the unoccupied cell, and sum the signed
unit costs. This sum is the net change in total cost from
shipping one unit around the loop.
 Improvement Steps: If a loop has a negative net cost,
shipping units around it reduces the total cost without
violating supply or demand constraints. Such a "virtual
shipment" identifies a potential cost improvement.
 Iteration: The process continues by identifying new
closed loops and performing virtual shipments until no
negative net cost loops are found. This indicates that the
current solution is the optimal solution with the
minimum total transportation cost.
2. The Simplex Method:
This linear programming technique can also be applied to
solve transportation problems.
 Formulation as a Linear Program: The transportation
problem can be reformulated as a linear program with an
objective function (minimizing total transportation cost)
and constraints representing supply, demand, and non-
negativity requirements.
 Tableau and Pivoting: The simplex method uses a
tabular representation (simplex tableau) and a series of
pivot operations to systematically move from an initial
feasible solution to the optimal solution. It iteratively
improves the solution by adjusting allocations based on
cost coefficients and constraints.
Choosing the Right Method:
 Stepping Stone Method: Suitable for smaller
transportation problems due to its less complex
calculations. It's easier to understand and implement
manually.
 Simplex Method: More powerful and can handle larger
problems efficiently. However, it requires setting up and
manipulating the linear program structure, which can be
more computationally involved.
optimal solution by stepping stone and
MODI methods,
The stepping stone method is an algorithm used to find the
optimal solution for transportation problems. Here's a
breakdown of how it helps achieve an optimal solution:
1. Initial Basic Feasible Solution:
 You'll need a starting point, which can be found using
methods like Northwest Corner Method (NWCM), Least
Cost Method (LCM), or Vogel's Approximation Method
(VAM). These methods allocate quantities to satisfy
supply and demand constraints, but might not be the
optimal solution yet.
2. Evaluating Improvement:
 The stepping stone method focuses on unoccupied cells
(cells with no allocation) and checks if moving quantities
there would improve the total transportation cost.
 To do this, you create a closed loop path starting from an
unoccupied cell. The path can only turn at occupied cells
and return to the starting unoccupied cell.
 Within the loop, you mark alternating positive (+) and
negative (-) signs at each turn, starting with a positive
sign at the unoccupied cell.
 For each cell in the loop (occupied or unoccupied), you
add the transportation cost if the sign is positive and
subtract the cost if it's negative. This difference is called
the net change.
 You repeat this process for all unoccupied cells.
3. Iterations for Improvement:
 A negative net change for an unoccupied cell indicates
that moving a quantity there would reduce the total cost.
 You'll identify the unoccupied cell with the most
negative net change (highest potential cost reduction).
 Increase the quantity allocated to that cell as much as
possible, considering it shouldn't violate supply or
demand constraints at origin or destination.
 As you increase allocation in one cell, you'll need to
decrease allocation in other cells along the closed loop
path you created to maintain supply and demand balance.
This maintains a feasible solution.
 Repeat steps 2 and 3 until all unoccupied cells have a
non-negative net change. This indicates you've reached
the optimal solution – no further improvement is
possible.
Finding the Optimal Solution:
 By iteratively evaluating unoccupied cells and adjusting
allocations, the stepping stone method ensures you reach
a distribution plan with the minimum total transportation
cost.
The MODI (Modified Distribution) method, also known as the
U-V method, is a more efficient way to test a transportation
solution for optimality. Here's how it works:
1. Initial Basic Feasible Solution:
 Similar to the stepping stone method, MODI starts by
finding an initial feasible solution using methods like
NWCM, LCM, or VAM.
2. Computing Row and Column Potentials:
 MODI assigns a potential to each row (u) and each
column (v) such that u + v equals the unit cost of every
occupied (basic) cell. For each unoccupied cell, the
reduced cost (cost - u - v) can then be computed
directly, without tracing a loop.
3. Loop Only for the Entering Cell:
 If any unoccupied cell has a negative reduced cost, the
solution can be improved. MODI selects the cell with the
most negative reduced cost and traces a single closed
loop through occupied cells (as in the stepping stone
method) to reallocate quantities along it.
 The potentials are then recomputed, and the process
repeats until every unoccupied cell has a non-negative
reduced cost (optimal solution).
Special cases in Transportation problems -
Unbalanced Transportation problem. Assignment
Problem:
A transportation problem is considered balanced when the
total supply from all origins is equal to the total demand at all
destinations. However, in real-world scenarios, this might not
always be the case. Here's how we handle unbalanced
transportation problems:
Unbalanced Scenario 1: Total Supply > Total Demand
 This means there's excess supply at some origins
compared to the demand at destinations.
 To solve this, we introduce a dummy destination with
zero transportation cost.
 This dummy destination acts as a "sink" to absorb the
excess supply without affecting the actual destinations.
 The total demand for the dummy destination will be the
difference between the total supply and total demand
(total supply - total demand).
Unbalanced Scenario 2: Total Demand > Total Supply
 This scenario involves insufficient supply to meet the
total demand at all destinations.
 We introduce a dummy origin with zero transportation
cost to represent the additional supply needed.
 The dummy origin injects the required additional supply
into the system.
 The total supply for the dummy origin will be the
difference between the total demand and total supply
(total demand - total supply).
Solving the Balanced Problem:
Once you've introduced dummy rows or columns to make the
problem balanced, you can solve it using standard
transportation problem algorithms like the Northwest Corner
Method, Least Cost Method, Vogel's Approximation Method,
or the Stepping Stone Method.
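The balancing step described above is a simple padding operation. This is an illustrative sketch (names are my own) that adds a zero-cost dummy destination or dummy origin as needed:

```python
def balance(costs, supply, demand, dummy_cost=0):
    """Pad a transportation problem with a dummy row or column so that
    total supply equals total demand.  dummy_cost is usually 0."""
    costs = [list(row) for row in costs]
    supply, demand = list(supply), list(demand)
    gap = sum(supply) - sum(demand)
    if gap > 0:
        # Excess supply: add a dummy destination to absorb it.
        for row in costs:
            row.append(dummy_cost)
        demand.append(gap)
    elif gap < 0:
        # Excess demand: add a dummy origin to supply the shortfall.
        costs.append([dummy_cost] * len(demand))
        supply.append(-gap)
    return costs, supply, demand
```

For example, with supplies [50, 30] and demands [40, 20], the 20 units of excess supply go to a new dummy column with demand 20.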
Assignment Problem: A Special Case of Transportation
Problem
An assignment problem is a special type of transportation
problem with some unique characteristics:
 There are an equal number of origins (often representing
workers or machines) and destinations (representing jobs
or tasks).
 Each origin can be assigned to only one destination, and
vice versa. This ensures complete utilization of resources
and fulfillment of all tasks.
 The objective is typically to minimize the total cost
(time, effort, etc.) of assigning origins to destinations.
Solving Assignment Problems:
There are specialized algorithms for solving assignment
problems, such as the Hungarian Method, which is often more
efficient than using general transportation problem methods
for these specific cases.
Hungarian method of Assignment problem, Maximization
in Assignment problem,
The Hungarian Method is a powerful tool for solving
assignment problems, which involve assigning resources
(workers, machines) to tasks (jobs) in an optimal way. It can
be used for both minimization (finding the least cost
assignment) and maximization (finding the most profit
assignment).
Regular Hungarian Method (Minimization):
The standard Hungarian Method works by minimizing the
total cost of assigning resources to tasks. Here's a simplified
overview:
1. Cost Matrix: Prepare a matrix where rows represent
resources and columns represent tasks. Each cell contains
the cost of assigning that resource to that task (e.g., time,
effort).
2. Reduce Rows and Columns: Use row and column
reduction techniques to identify the minimum value in
each row and subtract it from all elements in that row.
Similarly, subtract the minimum value in each column
from all elements in that column. This ensures at least
one zero in each row and column.
3. Covering and Assignment: Find a maximum number of
zeros that can be covered using a minimum number of
horizontal and vertical lines (without intersections). This
is called a "covering." If the number of covering lines
equals the number of rows (or columns), you have an
optimal solution.
4. Iteration and Optimization: If the covering doesn't
meet the optimal condition, identify an uncovered zero
with the least cost and improve the covering by creating
a closed loop path that includes this zero. Adjust
assignments and covering lines iteratively until you reach
an optimal solution with the minimum total cost.
Maximization in Assignment Problems:
The Hungarian Method can also be used for maximization
problems, where the goal is to find the assignment with the
highest total profit. Here's how to adapt it:
1. Convert to Minimization: Since the Hungarian Method
is designed for minimization, we convert the
maximization problem into a minimization problem. Do
this by subtracting all elements in the cost matrix from
the largest cost value in the entire matrix. This essentially
inverts the cost values, turning the highest profit into the
lowest cost.
2. Solve as Minimization: Apply the regular Hungarian
Method steps outlined above. The resulting solution will
provide the assignment with the maximum total profit
(which was originally the minimum total cost after
conversion).
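The subtract-from-the-maximum conversion can be demonstrated with a brute-force assignment solver. Note this is not the Hungarian algorithm itself (which avoids enumerating all n! permutations); it is a small illustrative sketch, with names of my own, that returns the same optimal assignment for tiny matrices:

```python
from itertools import permutations

def best_assignment(cost, maximize=False):
    """Optimal one-to-one assignment of n workers to n jobs by brute force.

    Illustrates the conversion step: a maximization problem becomes a
    minimization one by subtracting every entry from the matrix maximum,
    which leaves the optimal permutation unchanged."""
    if maximize:
        top = max(max(row) for row in cost)
        cost = [[top - c for c in row] for row in cost]
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda perm: sum(cost[i][perm[i]] for i in range(n)))
    return list(best)  # best[i] = job assigned to worker i

# cost[i][j] = cost (or profit, with maximize=True) of worker i doing job j
cost = [[9, 2, 7], [6, 4, 3], [5, 8, 1]]
```

Here the minimum-cost assignment is worker 0 → job 1, worker 1 → job 0, worker 2 → job 2 (total cost 9), while the maximum-profit assignment is worker 0 → job 2, worker 1 → job 0, worker 2 → job 1 (total profit 21).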
Benefits of the Hungarian Method:
 Efficient: It finds the optimal solution in polynomial
time, making it efficient for solving even large
assignment problems.
 Versatile: It can handle both minimization and
maximization problems with a simple conversion step for
maximization.
 Clear Steps: The method follows a well-defined set of
steps, making it easy to understand and implement.
Unbalanced problem, problems with
restrictions, travelling salesman problems
These are all optimization problems that share some
similarities but also have key differences. Here's a breakdown:
Unbalanced Problems:
 These occur when the total supply of something doesn't
equal the total demand for it. This is common in
transportation problems where goods need to be shipped
from origins to destinations.
o Examples: A factory might produce more widgets
than stores can sell, or a city might have more
teachers than available classrooms.
 Solution Strategies:
o Introduce dummy rows or columns representing
"virtual" sources or destinations to absorb excess
supply or fulfill unmet demand, creating a balanced
problem.
o Solve the balanced problem using standard
algorithms like the Northwest Corner Method or the
Stepping Stone Method.
Problems with Restrictions:
 These problems involve constraints that limit the
possible solutions. Restrictions can be on quantities,
resources, or relationships between variables.
 Examples: In a diet optimization problem, you might
have restrictions on the daily intake of calories, fat, and
carbohydrates. In scheduling problems, there might be
limitations on the number of working hours per day or
conflicts between tasks.
 Solution Strategies:
o Identify and incorporate the restrictions into the
optimization model. This might involve using linear
programming techniques or specialized algorithms
depending on the specific problem type.
o The solution process needs to find the optimal
solution that satisfies all the imposed restrictions.
Traveling Salesman Problem (TSP):
 This is a classic optimization problem where a salesman
wants to find the shortest route to visit all cities in his
territory exactly once and return to the starting city.
 Restrictions:
o Each city must be visited exactly once.
o The salesman must return to the starting city.
 Challenges:
o TSP is a computationally difficult problem,
especially for large numbers of cities. Finding the
optimal solution becomes increasingly time-
consuming as the problem size grows.
 Solution Strategies:
o Exact Algorithms: These methods guarantee the
optimal solution but can be slow for large problems.
Examples include branch-and-bound and dynamic
programming.
o Heuristics: These are faster but might not always
find the optimal solution. Examples include nearest
neighbor, insertion heuristics, and genetic
algorithms.
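The nearest-neighbor heuristic mentioned above is short enough to sketch directly (names are my own; the distance matrix is assumed symmetric here, though the heuristic works either way):

```python
def nearest_neighbor_tour(dist, start=0):
    """Nearest-neighbor heuristic for the TSP: from the current city,
    always visit the closest unvisited city, then return to the start.
    Fast, but not guaranteed to find the optimal tour."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        current = tour[-1]
        nxt = min(unvisited, key=lambda city: dist[current][city])
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)  # return to the starting city
    length = sum(dist[tour[k]][tour[k + 1]] for k in range(len(tour) - 1))
    return tour, length
```

For a 4-city distance matrix like [[0, 1, 4, 3], [1, 0, 2, 5], [4, 2, 0, 1], [3, 5, 1, 0]], starting at city 0 the heuristic produces the tour 0 → 1 → 2 → 3 → 0 of length 7.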