Lecture 1 — 03/01/2025
Lecturer: Prof. Minati De
Scribe: Team 1
Team Members
1 Optimization Problems
An optimization problem is specified by a pair (F, C), where:
• F : the feasible region, the set of all possible solutions that satisfy the problem’s constraints.
• C : the cost function, a mapping C : F → R that assigns a numerical value to each feasible solution.
The goal is to find f ∈ F that either minimizes or maximizes C(f); such an f is called an optimal solution.
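To make the (F, C) formulation concrete, here is a minimal Python sketch that brute-forces a tiny finite feasible region; the particular set F and cost C below are made-up illustrations, not part of the lecture.

# Toy instance: F is a small finite set, C assigns a real cost to each solution.
# Both F and C are illustrative assumptions.
F = [(0, 0), (1, 0), (0, 1), (1, 1)]               # feasible region
C = lambda f: 3 * f[0] + 2 * f[1] - f[0] * f[1]     # cost function C : F -> R

# The goal: find f in F minimizing C(f).
optimal = min(F, key=C)
print(optimal, C(optimal))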
Optimization problems can be broadly divided into two classes:
1. Combinatorial Optimization: the feasible region F is a finite or countable set of discrete objects (for example, permutations, subsets, or matchings).
2. Continuous Optimization: the feasible region is a continuum, typically a subset of R^n, and the variables take real values.
Optimization problems are interconnected in many ways, and understanding these relationships is
crucial in solving them effectively:
• Linear Programming (LP):
– LP involves linear cost functions and linear constraints. The feasible region is a polyhedron.
– LP forms the basis for many combinatorial optimization problems since it provides a relaxation that can be solved efficiently using algorithms like the simplex method or interior-point methods (see the small solver sketch after this list).
• Integer Linear Programming (ILP):
– ILP is a variant of LP where the variables are restricted to integer values.
– ILP is NP-hard in general (its decision version is NP-complete), unlike LP, which is solvable in polynomial time. Many combinatorial problems, like the Traveling Salesman Problem, are modeled as ILPs.
• Flow Problems:
– Flow problems, such as maximum flow or minimum-cost flow, are LP problems whose constraints encode edge capacities and flow conservation in a graph.
– These problems are foundational for network optimization and are solvable efficiently using combinatorial algorithms.
• Matching Problems:
– Matching problems, particularly in bipartite graphs, can often be reduced to LP or flow problems.
– Algorithms like the Hungarian method for maximum-weight matching rely on LP relaxations.
• Nonlinear Programming (NLP):
– NLP generalizes LP to allow nonlinear cost functions or constraints.
– Convex NLP is particularly well-studied since any local optimum is also a global optimum, while non-convex NLP often requires heuristics or local search.
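As a concrete illustration of the LP machinery mentioned above, the following sketch (my own example, not from the lecture) solves a tiny LP with scipy.optimize.linprog; the objective and constraints are made-up data, and SciPy is assumed to be available.

from scipy.optimize import linprog

# Illustrative LP (made-up data): maximize 3x + 2y
# subject to  x + y <= 4,  x <= 3,  x, y >= 0.
# linprog minimizes, so we negate the objective.
c = [-3, -2]
A_ub = [[1, 1], [1, 0]]
b_ub = [4, 3]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimal point and objective value

The reported optimum (x, y) = (3, 1) is a vertex of the feasible polytope, which is exactly the geometric fact discussed next.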
• An optimal solution of a bounded, feasible LP always lies at a vertex of the polytope formed by the constraints.
• These vertices are intersection points of the constraint hyperplanes, so there are only finitely many of them.
The simplex algorithm exploits this property by navigating these vertices to find the optimal solution.
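The finiteness of the vertex set suggests a (very inefficient) alternative to simplex: enumerate every candidate vertex and keep the best feasible one. The sketch below does this for the same made-up two-variable LP; it illustrates the geometric fact above, and is not the simplex method itself.

import itertools
import numpy as np

# Same illustrative LP as above: maximize 3x + 2y
# subject to x + y <= 4, x <= 3, x >= 0, y >= 0.
# Every constraint is written as a row of A x <= b (including sign constraints).
A = np.array([[1.0, 1.0], [1.0, 0.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 3.0, 0.0, 0.0])
c = np.array([3.0, 2.0])

best = None
# Candidate vertices: intersections of every pair of constraint boundaries.
for i, j in itertools.combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-9:
        continue                            # parallel boundaries: no unique intersection
    x = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ x <= b + 1e-9):           # keep only feasible intersection points
        if best is None or c @ x > c @ best:
            best = x

print(best, c @ best)   # the optimal vertex (3, 1) with value 11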
1.4 Examples of Optimization Problems
2. Traveling Salesman Problem (TSP): Given n cities with pairwise distances, find a permutation π of {1, . . . , n} that minimizes the tour length
Σ_{i=1}^{n} d_{π(i),π(i+1)},
where d_{π(i),π(i+1)} represents the distance between the cities π(i) and π(i + 1), and π(n + 1) = π(1) ensures the tour is closed. (A brute-force sketch follows this list.)
3. Matching Problem: Given a graph (often bipartite), find a set of edges of maximum size, or maximum total weight, such that no two chosen edges share a vertex.
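For intuition only, here is a brute-force evaluation of the TSP objective on a made-up five-city instance (the coordinates are illustrative, not from the lecture); it enumerates all permutations, which is hopeless beyond very small n.

import itertools
import math

# Made-up coordinates for 5 cities; d(i, j) is the Euclidean distance between them.
cities = [(0, 0), (2, 1), (3, 4), (1, 3), (4, 0)]

def d(i, j):
    (x1, y1), (x2, y2) = cities[i], cities[j]
    return math.hypot(x1 - x2, y1 - y2)

def tour_length(pi):
    # The wrap-around index plays the role of pi(n + 1) = pi(1): the tour is closed.
    return sum(d(pi[i], pi[(i + 1) % len(pi)]) for i in range(len(pi)))

# Brute force over all permutations of the cities.
best = min(itertools.permutations(range(len(cities))), key=tour_length)
print(best, tour_length(best))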
1.5 Convex Functions and Optimization
A function f : R^n → R is convex if, for all x, y in its domain and all λ ∈ [0, 1],
f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y).
This means that the line segment between any two points on the graph of f lies above or on the graph.
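A quick numerical sanity check of this inequality for the convex function f(x) = x² (my own illustration, not part of the notes): sample many points and chords and verify the defining inequality holds.

import random

f = lambda x: x * x   # a convex function

# Sample (x, y, lambda) triples and check
# f(lam*x + (1 - lam)*y) <= lam*f(x) + (1 - lam)*f(y).
for _ in range(10_000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    lam = random.random()
    assert f(lam * x + (1 - lam) * y) <= lam * f(x) + (1 - lam) * f(y) + 1e-9
print("convexity inequality held on all sampled points")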
Convex Optimization: Optimization problems involving convex cost functions and convex feasible regions. These problems are significant because:
• Any local optimum is automatically a global optimum.
• Efficient algorithms, such as interior-point methods, are available for solving them.
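As a small illustration of the local-equals-global property (a made-up convex quadratic, assuming SciPy is available), a generic local solver such as scipy.optimize.minimize reaches the same minimizer from very different starting points:

import numpy as np
from scipy.optimize import minimize

# A convex quadratic (illustrative example): f(x) = (x0 - 1)^2 + (x1 + 2)^2
f = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2

# Because f is convex, any local minimizer is global, so different
# starting points all converge to the same solution (1, -2).
for start in [np.zeros(2), np.array([10.0, -7.0])]:
    res = minimize(f, start)
    print(res.x, res.fun)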