

OPTIMIZATION

October 24, 2024


LOCAL SEARCH
- HILL CLIMBING
- SIMULATED ANNEALING
- LINEAR PROGRAMMING

CONSTRAINT SATISFACTION
- NODE CONSISTENCY
- ARC CONSISTENCY
- BACKTRACKING SEARCH
HILL CLIMBING
- A search algorithm that maintains a single node and searches by moving to a neighboring node.
- A local search algorithm commonly used for optimization problems.
- Attempts to find the best solution by continuously improving the current solution based on nearby local values.
- A greedy algorithm.


How does Hill Climbing work?

Step 1: Start with an initial state (which can be randomly initialized).

Step 2: Evaluate the neighbors of the current state.

Step 3: Assess the cost or quality of the neighboring states.

Step 4: Choose the best neighboring state:
- If no neighboring state is better than the current state, the algorithm stops (it has reached the top of a hill).
- If a better neighboring state is found, the algorithm moves to that state and repeats from Step 2.

Step 5: Repeat until no improvement can be made.
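The steps above can be sketched in a few lines of Python. The `neighbors` and `score` functions here are illustrative assumptions, not part of the slides:

```python
def hill_climb(initial, neighbors, score):
    """Greedy hill climbing: move to the best neighbor until none improves."""
    current = initial
    while True:
        # Steps 2-3: evaluate the neighbors of the current state
        best = max(neighbors(current), key=score, default=current)
        # Step 4: if no neighbor is better, we are at the top of a hill
        if score(best) <= score(current):
            return current
        current = best  # otherwise move to the better state and repeat

# Example: maximize f(x) = -(x - 3)^2 over the integers, stepping by 1
f = lambda x: -(x - 3) ** 2
peak = hill_climb(0, lambda x: [x - 1, x + 1], f)  # climbs to x = 3
```

Note that the stopping rule in Step 4 is exactly what makes the algorithm greedy: it never accepts a worse neighbor, which is why it can get stuck in a local optimum.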


Advantages
- Simplicity: easy to implement and doesn't require complex data structures.
- Efficient for small problems: provides quick searches in small or simple problem spaces.

Disadvantages
- Local optima: Hill Climbing can get stuck in a local optimum.
- No guarantee of finding the best solution: it may stop without finding the global optimum.
SIMULATED ANNEALING
- Annealing is the process of heating metal and allowing it to cool slowly, which serves to toughen the metal.
- It is used as a metaphor for the simulated annealing algorithm.
- The algorithm is allowed to change its state to a neighbor that's worse than the current state, which helps it escape local optima.
In the traveling salesman problem, the task is to connect all points while
choosing the shortest possible distance. This is, for example, what delivery
companies need to do: find the shortest route from the store to all the
customers’ houses and back.
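The annealing metaphor translates into a short Python sketch: worse neighbors are accepted with a probability that shrinks as the "temperature" cools. The cooling schedule, neighbor function, and test objective below are my own illustrative assumptions:

```python
import math
import random

def simulated_annealing(initial, neighbor, cost, t0=10.0, cooling=0.95, steps=500):
    """Accept worse states with probability exp(-delta / T); T decays each step."""
    current = best = initial
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; sometimes accept worse states while T is high
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current  # remember the best state seen so far
        t *= cooling  # cool down: worse moves become less and less likely
    return best

# Example: minimize a bumpy function with two local minima (near x = 2 and x = -2)
random.seed(0)
bumpy = lambda x: (x * x - 4) ** 2 + x
result = simulated_annealing(5.0, lambda x: x + random.uniform(-1, 1), bumpy)
```

Unlike plain hill climbing, the early high-temperature phase lets the search cross the "hill" between the two minima instead of stopping at the first one it reaches.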
LINEAR PROGRAMMING
Linear programming is a family of problems that optimize a linear equation (an equation of the form y = ax₁ + bx₂ + …).

A linear programming problem has the following components:
- A cost function that we want to minimize.
- Constraints, each represented as a sum of variables that must stay within some value.
- Individual bounds on variables, of the form lᵢ ≤ xᵢ ≤ uᵢ.
The following is a linear programming example that uses the scipy library in Python:
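The slide's own code listing is not reproduced in the text, so here is a sketch along the lines it describes, using `scipy.optimize.linprog`. The specific cost function, constraints, and bounds are my own illustration:

```python
from scipy.optimize import linprog

# Cost function we want to minimize: 50*x1 + 80*x2
cost = [50, 80]

# Constraints as sums of variables: 5*x1 + 2*x2 <= 20,
# and 10*x1 + 12*x2 >= 90 (rewritten as -10*x1 - 12*x2 <= -90,
# since linprog only takes "less than or equal" constraints)
A_ub = [[5, 2], [-10, -12]]
b_ub = [20, -90]

# Individual bounds on each variable: 0 <= xi <= 10
bounds = [(0, 10), (0, 10)]

result = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(result.x)  # optimal assignment for (x1, x2)
```

For these numbers the optimum sits where both constraints are tight, at x1 = 1.5 and x2 = 6.25.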
CONSTRAINT SATISFACTION
Constraint satisfaction problems are a class of problems where variables need to be assigned values while satisfying some conditions.

Constraint satisfaction problems have the following properties:
- Set of variables (x₁, x₂, …, xₙ)
- Set of domains for each variable {D₁, D₂, …, Dₙ}
- Set of constraints C
A few more terms worth knowing about constraint satisfaction problems:
- A Hard Constraint is a constraint that must be satisfied in a correct solution.
- A Soft Constraint is a constraint that expresses which solution is preferred over others.
- A Unary Constraint is a constraint that involves only one variable. In our example, a unary constraint would be saying that course A can't have an exam on Monday {A ≠ Monday}.
- A Binary Constraint is a constraint that involves two variables. This is the type of constraint used in the example above, saying that some two courses can't have the same value {A ≠ B}.
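The exam-scheduling constraints from the example can be written directly as predicates. The list of candidate days is my own assumption for illustration:

```python
days = ["Monday", "Tuesday", "Wednesday"]

# Unary constraint: course A can't have an exam on Monday {A != Monday}
unary_a = lambda a: a != "Monday"

# Binary constraint: courses A and B can't share the same day {A != B}
binary_ab = lambda a, b: a != b

# Enumerate assignments of days to (A, B) that satisfy both constraints
valid = [(a, b) for a in days for b in days if unary_a(a) and binary_ab(a, b)]
```

With three days, A has two allowed values and B then has two remaining, leaving four valid assignments.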
NODE CONSISTENCY
A variable is node-consistent when all the values in its domain satisfy the variable's unary constraints.

ARC CONSISTENCY
A variable is arc-consistent when all the values in its domain satisfy the variable's binary constraints. To make X arc-consistent with respect to Y, remove elements from X's domain until every choice for X has a possible choice for Y.
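The removal procedure described above can be sketched as a small `revise` helper (the function name and data layout are my own illustration):

```python
def revise(domains, x, y, constraint):
    """Make x arc-consistent with y: drop values of x that no value of y supports."""
    revised = False
    for vx in set(domains[x]):  # iterate over a copy so we can remove safely
        # keep vx only if some choice for y satisfies the binary constraint
        if not any(constraint(vx, vy) for vy in domains[y]):
            domains[x].discard(vx)
            revised = True
    return revised

# Example: X and Y must take different values, and Y can only be "Tue"
domains = {"X": {"Mon", "Tue"}, "Y": {"Tue"}}
revise(domains, "X", "Y", lambda vx, vy: vx != vy)  # removes "Tue" from X
```

After the call, "Tue" is gone from X's domain: choosing it would leave no possible choice for Y.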
BACKTRACKING SEARCH
A general algorithm used for solving constraint satisfaction problems (CSPs), combinatorial problems, and other types of search problems. The algorithm may need to reverse decisions (backtrack) if a conflict or dead-end is encountered.

Main Steps
Step 1: Choose a variable.
Step 2: Choose a value.
Step 3: Check constraints.
Step 4: Repeat until a solution is found or all options are exhausted.
function Backtrack(assignment, csp):
    if assignment complete:
        return assignment
    var = Select-Unassigned-Var(assignment, csp)
    for value in Domain-Values(var, assignment, csp):
        if value consistent with assignment:
            add {var = value} to assignment
            result = Backtrack(assignment, csp)
            if result ≠ failure:
                return result
            remove {var = value} from assignment
    return failure
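The pseudocode above translates almost line for line into Python. The tiny CSP used to exercise it, two courses that can't share a day plus the unary constraint {A ≠ Monday}, follows the deck's earlier example; the encoding of constraints as predicates is my own choice:

```python
def backtrack(assignment, variables, domains, constraints):
    """Depth-first search with backtracking, mirroring the pseudocode above."""
    if len(assignment) == len(variables):                # assignment complete
        return assignment
    var = next(v for v in variables if v not in assignment)  # Select-Unassigned-Var
    for value in domains[var]:                           # Domain-Values
        add_candidate = dict(assignment)
        add_candidate[var] = value                       # add {var = value}
        # value is consistent if every constraint holds on the partial assignment
        if all(c(add_candidate) for c in constraints):
            result = backtrack(add_candidate, variables, domains, constraints)
            if result is not None:                       # result ≠ failure
                return result
        # falling through here is the backtrack: the candidate is discarded
    return None                                          # failure

# Example CSP: A ≠ Monday (unary) and A ≠ B (binary)
variables = ["A", "B"]
domains = {"A": ["Monday", "Tuesday"], "B": ["Monday", "Tuesday"]}
constraints = [
    lambda a: a.get("A") != "Monday",
    lambda a: "A" not in a or "B" not in a or a["A"] != a["B"],
]
solution = backtrack({}, variables, domains, constraints)
```

The search first tries A = Monday, rejects it on the unary constraint, then settles on A = Tuesday and B = Monday.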
THANK YOU
FOR LISTENING
October 24, 2024
