Goal Formation
Problem Solving Agents, Example Problems, Searching for Solutions, Uninformed Search Strategies,
Informed Search Strategies, Heuristic Functions, Beyond Classical Search: Local Search Algorithms
and Optimization Problems, Local Search in Continuous Spaces, Searching with Nondeterministic
Actions, Searching with Partial Observations, Online Search Agents and Unknown Environments.
Goal Formation:
Based on the current situation and its performance measure, the agent decides which states it wants to achieve. This is the first step in problem solving.
Problem Formulation:
Given the goal, the agent decides what actions and states to consider, so that the task can be handed to a search procedure.
Example: a simple problem-solving agent (pseudocode):
Function Simple_Problem_Solving_Agent(percept):
Persistent: seq (an action sequence, initially empty), state (current world state),
            goal (initially null), problem (a problem formulation)
    state = Update_State(state, percept)
    if seq is Empty:
        goal = Formulate_Goal(state)
        problem = Formulate_Problem(state, goal)
        seq = Search(problem)
        if seq == failure:
            return null_action
    action = First(seq)
    seq = Rest(seq)
    return action
Toy Problems: Simplified, structured scenarios for testing methods (e.g., the vacuum world, the
8-puzzle, and the 8-queens problem).
Real-World Problems: Practical problems with meaningful solutions, often lacking a single clear
description.
Vacuum World
Description: The vacuum world consists of two locations, A and B, each of which can be dirty or
clean. The vacuum agent can move left, move right, or clean its current location.
Initial State: Configuration of the grid (clean/dirty) and the starting position of the vacuum agent
(e.g., Location A or B).
Actions:
Move Left: Shift the vacuum to the left.
Move Right: Shift the vacuum to the right.
Clean: Clean the current location.
Transition Model:
Moving changes the agent's position.
Cleaning alters the state from dirty to clean.
Goal State: Both locations must be clean.
Path Cost: Each action (move or clean) costs 1.
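This formulation can be written down directly. The following is a minimal Python sketch, assuming the state is a tuple (agent location, status of A, status of B); the function and value names are illustrative, not a standard API.

ACTIONS = ["Left", "Right", "Clean"]

def result(state, action):
    """Transition model: return the state reached by applying action in state."""
    loc, a_status, b_status = state
    if action == "Left":
        return ("A", a_status, b_status)
    if action == "Right":
        return ("B", a_status, b_status)
    if action == "Clean":
        if loc == "A":
            return (loc, "Clean", b_status)
        return (loc, a_status, "Clean")
    return state

def goal_test(state):
    """Goal: both locations are clean."""
    return state[1] == "Clean" and state[2] == "Clean"

def step_cost(state, action):
    """Each move or clean costs 1."""
    return 1

# Example: both squares dirty, agent starting at A.
start = ("A", "Dirty", "Dirty")
print(result(start, "Clean"))   # ('A', 'Clean', 'Dirty')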
The 8-Puzzle
Description: The 8-puzzle consists of a 3x3 grid with 8 numbered tiles and one blank space. The goal
is to arrange the tiles in a specified order by sliding tiles into the blank. (Its formal problem
formulation, i.e., states, actions, transition model, goal test, and path cost, is listed later in this
section.)
The 8-Queens Problem
The 8-Queens problem involves placing eight queens on a chessboard such that no queen can attack
another (no two queens share the same row, column, or diagonal). For example, an attempted solution
fails if the queen in the rightmost column is attacked by the queen at the top left.
Key elements of the problem (incremental formulation):
States: Any arrangement of 0 to 8 queens on the board.
Initial State: An empty board.
Actions: Add a queen to any empty square.
Transition Model: Returns the board with a queen added to the specified square.
Goal Test: 8 queens are on the board, none attacked.
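The goal test is easy to express in code. Below is a small Python sketch, assuming the common representation in which board[i] gives the row of the queen placed in column i; the helper names are illustrative.

def attacks(board, i, j):
    """True if the queens in columns i and j attack each other."""
    same_row = board[i] == board[j]
    same_diagonal = abs(board[i] - board[j]) == abs(i - j)
    return same_row or same_diagonal

def goal_test(board):
    """Goal: 8 queens placed and no pair attacks another."""
    if len(board) != 8:
        return False
    return not any(attacks(board, i, j)
                   for i in range(8) for j in range(i + 1, 8))

print(goal_test([0, 4, 7, 5, 2, 6, 1, 3]))  # True: a valid placement
print(goal_test([0, 1, 2, 3, 4, 5, 6, 7]))  # False: all queens share a diagonal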
Real-world problems are complex, practical, and often require advanced solutions. Examples include:
1. Route-Finding: Finding paths between locations, as in navigation and airline travel. States include
location, time, and flight details. Goals involve reaching a destination with minimal cost and
delays.
2. Touring Problems: Visiting locations under constraints, such as the Traveling Salesperson
Problem (TSP), which seeks the shortest tour visiting each city exactly once.
3. VLSI Layout: Optimizing the placement of components on chips to minimize area and delays
while maximizing yield.
4. Robot Navigation: Moving a robot in continuous space, often involving multi-dimensional state
spaces.
5. Assembly Sequencing: Determining the correct order to assemble parts, avoiding infeasible
configurations through geometric searches.
Searching for solutions is a core aspect of Artificial Intelligence (AI), involving the exploration of a
search space to find a path or solution to a given problem.
Key Components:
The state space defined by the problem, the search tree grown from the initial state, the frontier of
nodes waiting to be expanded, and (in graph search) an explored set of already-visited states.
Process:
1. Define the problem (initial state, goal state, actions, and constraints).
2. Explore the search space using a suitable algorithm (a generic search loop is sketched below).
3. Evaluate paths or solutions based on cost, efficiency, or other criteria.
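As a rough illustration of this process, the Python sketch below assumes the problem is supplied as an initial state, a goal test, and a successor function (illustrative names, not a standard interface). With a FIFO frontier it behaves like breadth-first search; other strategies differ only in which node they choose to expand next.

from collections import deque

def generic_search(initial_state, goal_test, successors):
    """Generic search loop: keep a frontier of candidate nodes, expand one at a
    time, and stop when a goal is found."""
    frontier = deque([(initial_state, [])])   # (state, actions taken so far)
    explored = set()
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        explored.add(state)
        for action, next_state in successors(state):
            if next_state not in explored:
                frontier.append((next_state, path + [action]))
    return None

# Tiny illustrative problem: count from 0 up to 4 by adding 1 or 2.
print(generic_search(0, lambda s: s == 4,
                     lambda s: [("+1", s + 1), ("+2", s + 2)]))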
Applications:
Route finding, touring and scheduling problems, VLSI layout, robot navigation, assembly sequencing,
puzzles, and games.
Uninformed Search Strategies
Uninformed search strategies, also known as blind search strategies, have no information about the
goal beyond what is provided in the problem definition.
1. Breadth-First Search (BFS):
Explores all nodes at the current depth before proceeding to the next level.
Data Structure: Uses a FIFO queue.
Features:
Complete: Finds a solution if one exists.
Optimal: Guarantees the shortest path in unweighted graphs.
Complexity: Time and space complexity are O(b^d), where b is the branching factor and d is the
depth of the shallowest solution.
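A hedged Python sketch of BFS on a graph given as an adjacency dictionary (the graph, node names, and function signature are illustrative):

from collections import deque

def bfs(graph, start, goal):
    frontier = deque([[start]])      # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path              # shortest path in number of edges
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A", "D"))          # ['A', 'B', 'D']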
3. Iterative Deepening Search (IDS):
Combines the benefits of BFS and DFS by repeatedly applying depth-limited search with
increasing depth limits.
Completeness: Complete in finite state spaces.
Time Complexity: O(b^d).
Space Complexity: O(bd).
Optimality: Yes, for uniform step costs.
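A possible Python sketch of depth-limited search applied with increasing limits (the graph representation and names are illustrative):

def depth_limited_search(graph, node, goal, limit, path):
    if node == goal:
        return path
    if limit == 0:
        return None                      # cutoff reached
    for neighbor in graph.get(node, []):
        if neighbor not in path:         # avoid cycles along the current path
            found = depth_limited_search(graph, neighbor, goal, limit - 1, path + [neighbor])
            if found is not None:
                return found
    return None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    for limit in range(max_depth + 1):   # depth limits 0, 1, 2, ...
        found = depth_limited_search(graph, start, goal, limit, [start])
        if found is not None:
            return found
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(iterative_deepening_search(graph, "A", "E"))   # ['A', 'C', 'E']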
6. Bidirectional Search
Description: Searches forward from the start and backward from the goal, meeting in the
middle.
Completeness: Yes, in finite spaces.
Time Complexity: O(b^{d/2}).
Space Complexity: O(b^{d/2})
Optimality: Yes, with BFS.
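A rough Python sketch of bidirectional breadth-first search, assuming an undirected graph so that the backward search can follow the same edges (the names and meeting test are illustrative):

from collections import deque

def bidirectional_search(graph, start, goal):
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        node = frontier.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in parents:
                parents[neighbor] = node
                if neighbor in other_parents:
                    return neighbor           # the two searches meet here
                frontier.append(neighbor)
        return None

    while frontier_f and frontier_b:
        meet = expand(frontier_f, parents_f, parents_b)
        if meet is None:
            meet = expand(frontier_b, parents_b, parents_f)
        if meet is not None:
            # Reconstruct: start -> meet, then meet -> goal.
            left, node = [], meet
            while node is not None:
                left.append(node)
                node = parents_f[node]
            left.reverse()
            right, node = [], parents_b[meet]
            while node is not None:
                right.append(node)
                node = parents_b[node]
            return left + right
    return None

graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "E"], "E": ["D"]}
print(bidirectional_search(graph, "A", "E"))   # ['A', 'B', 'C', 'D', 'E']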
The 8-Puzzle (problem formulation):
States: Positions of the eight tiles and the blank in a 3x3 grid.
Initial State: Any configuration can be the initial state; exactly half of all configurations can reach a
given goal state.
Actions: Move the blank Left, Right, Up, or Down.
Transition Model: Returns the new state after an action (e.g., moving Left swaps the blank
with an adjacent tile).
Goal Test: Checks if the current state matches the goal configuration.
Path Cost: Each move costs 1; total cost equals the number of moves.
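The transition model can be sketched as follows, assuming the state is a tuple of nine values read row by row with 0 standing for the blank (the names are illustrative):

MOVES = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}

def successors(state):
    """Yield (action, new_state) pairs obtained by sliding the blank."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for action, delta in MOVES.items():
        if action == "Left" and col == 0:
            continue
        if action == "Right" and col == 2:
            continue
        if action == "Up" and row == 0:
            continue
        if action == "Down" and row == 2:
            continue
        target = blank + delta
        new_state = list(state)
        new_state[blank], new_state[target] = new_state[target], new_state[blank]
        yield action, tuple(new_state)

start = (1, 0, 2, 3, 4, 5, 6, 7, 8)      # blank in the middle of the top row
for action, s in successors(start):
    print(action, s)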
Informed Search Strategies
Informed (heuristic) search strategies use problem-specific knowledge, in the form of a heuristic
function h(n), to find solutions more efficiently than uninformed strategies.
GREEDY BEST-FIRST SEARCH
Greedy Best-First Search expands the node that appears closest to the goal, aiming for a quick
solution.
It evaluates nodes using only the heuristic function: f(n) = h(n), where h(n) is the estimated cost of
the cheapest path from n to a goal.
Example:
Starting from A, we have paths to nodes B (heuristic: 32), C (heuristic: 25), and D (heuristic:
35).
The path with the lowest heuristic value is C, so we move from A to C.
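A minimal Python sketch of greedy best-first search ordering the frontier by h(n) alone; the small graph and heuristic values below are illustrative and only loosely based on the example above:

import heapq

def greedy_best_first(graph, h, start, goal):
    frontier = [(h[start], start, [start])]     # priority queue keyed on h(n)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (h[neighbor], neighbor, path + [neighbor]))
    return None

graph = {"A": ["B", "C", "D"], "B": ["G"], "C": ["G"], "D": ["G"], "G": []}
h = {"A": 40, "B": 32, "C": 25, "D": 35, "G": 0}
print(greedy_best_first(graph, h, "A", "G"))    # ['A', 'C', 'G']: C is expanded first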
A* ALGORITHM
A* combines the features of uniform-cost search and greedy best-first search. It evaluates nodes
using the cost function f(n) = g(n) + h(n), where:
g(n) is the cost of the path from the start node to n, and
h(n) is the estimated cost of the cheapest path from n to the goal.
A* is optimal when h(n) is admissible, i.e., it never overestimates the true cost to the goal.
Example:
Step-01:
f(B) = 6 + 8 = 14
f(F) = 3 + 6 = 9
Path- A → F
Step-02:
f(G) = (3+1) + 5 = 9
f(H) = (3+7) + 3 = 13
Path- A → F → G
Step-03:
Node I has the lowest f value among the candidate nodes, so the search moves to node I.
Path- A → F → G → I
Step-04:
f(E) = (3+1+3+5) + 3 = 15
f(H) = (3+1+3+2) + 3 = 12
f(J) = (3+1+3+3) + 0 = 10
Path- A → F → G → I → J
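A minimal Python sketch of A* ordering the frontier by f(n) = g(n) + h(n). The graph, step costs, and heuristic values are illustrative and reproduce only part of the worked example above (they yield the same final path and cost):

import heapq

def a_star(graph, h, start, goal):
    # graph[node] is a list of (neighbor, step_cost) pairs.
    frontier = [(h[start], 0, start, [start])]       # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(frontier, (new_g + h[neighbor], new_g, neighbor, path + [neighbor]))
    return None, float("inf")

graph = {"A": [("B", 6), ("F", 3)], "B": [("C", 3)], "F": [("G", 1)],
         "G": [("I", 3)], "I": [("J", 3)], "C": [("J", 12)], "J": []}
h = {"A": 10, "B": 8, "C": 5, "F": 6, "G": 5, "I": 1, "J": 0}
print(a_star(graph, h, "A", "J"))   # (['A', 'F', 'G', 'I', 'J'], 10)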
Local Search:
Local Search Algorithms focus on finding solutions by iteratively improving a single current
state, rather than exploring multiple paths. These methods are useful for large search spaces.
1. Characteristics: They keep only a single current state in memory, move only to neighbors of that
state, and can find reasonable solutions in very large or infinite (continuous) state spaces where
systematic search is impractical.
2. Common Algorithms:
Hill Climbing: Moves to better neighboring states but may get stuck in local maxima.
3. Optimization Techniques: Local search methods also apply to pure optimization problems, where
the aim is to find the best state according to an objective function.
Local search and optimization are key to solving large, complex problems efficiently.
Hill-Climbing Algorithm in AI
Hill climbing is a local search algorithm that iteratively moves toward higher values to find the
best solution, terminating at a peak where no neighbors have higher values. It optimizes
problems like the Traveling Salesman Problem and is also known as greedy local search,
focusing only on immediate neighbors.
Key Features:
Greedy approach: it considers only the immediate neighbors of the current state.
No backtracking: it keeps no search tree, only the current state and its objective value.
State-Space Diagram:
Local Maximum: A peak better than neighbors but not the highest.
Global Maximum: The highest peak in the landscape.
Plateau: Flat region with identical neighbor values.
Shoulder: Plateau with an uphill edge.
Types of Hill Climbing:
Simple Hill Climbing: Examines neighbors one at a time and moves to the first one that improves on
the current state.
Steepest-Ascent Hill Climbing: Examines all neighbors and moves to the best one.
Stochastic Hill Climbing: Chooses randomly among the uphill moves.
Problems:
Local Maxima: The search can stop at a peak that is lower than the global maximum.
Plateaus: A flat region gives no gradient to follow, so the search may wander or halt.
Ridges: A sequence of local maxima that is hard to climb because every single move goes downhill.
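A minimal Python sketch of steepest-ascent hill climbing on the 8-queens problem, maximizing the number of non-attacking pairs of queens; the helper names are illustrative, and a run may stop at a local maximum rather than a full solution:

import random

def value(board):
    """Count the pairs of queens that do NOT attack each other (higher is better)."""
    n, good = len(board), 0
    for i in range(n):
        for j in range(i + 1, n):
            if board[i] != board[j] and abs(board[i] - board[j]) != abs(i - j):
                good += 1
    return good

def neighbors(board):
    """All boards obtained by moving one queen to another row in its column."""
    result = []
    for col in range(len(board)):
        for row in range(len(board)):
            if row != board[col]:
                candidate = list(board)
                candidate[col] = row
                result.append(tuple(candidate))
    return result

def hill_climbing(board):
    current = board
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current               # peak reached: possibly only a local maximum
        current = best

start = tuple(random.randrange(8) for _ in range(8))
solution = hill_climbing(start)
print(solution, value(solution))         # 28 non-attacking pairs means a full solution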