
UNIT II Solving Problems by Searching

Problem Solving Agents, Example Problems, Searching for Solutions, Uninformed Search Strategies,
Informed Search Strategies, Heuristic Functions, Beyond Classical Search: Local Search Algorithms
and Optimization Problems, Local Search in Continuous Spaces, Searching with Nondeterministic
Actions, Searching with Partial Observations, Online Search Agents and Unknown Environments.

3.1 HOW A PROBLEM-SOLVING AGENT WORKS

A problem-solving agent is an artificial intelligence or computational entity designed to
identify and solve problems. It is a goal-based agent that works in two stages:

Goal formulation
Problem formulation

Goal Formulation:

The agent defines what it wants to achieve (goal).


Example: Suppose the goal is to find the shortest path from point A to point B on a map.

Problem Formulation:

The agent formulates the problem by defining:


Initial State: Where it starts (e.g., location A).
Possible Actions: What actions can be taken (e.g., move north, south, etc.).
Transition Model: The result of taking an action (e.g., moving to a neighboring
location).
Goal State: The desired outcome (e.g., location B).
Path Cost: The cost of getting from one state to another (e.g., distance or time).
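Before looking at the agent itself, these five components can be captured directly in code. The following is a minimal Python sketch for the map example; the locations and distances are illustrative assumptions, not data from the text.

    graph = {                                  # hypothetical road map
        'A': {'C': 3, 'D': 4},
        'C': {'A': 3, 'B': 6},
        'D': {'A': 4, 'B': 2},
        'B': {'C': 6, 'D': 2},
    }

    initial_state = 'A'                        # Initial State

    def actions(state):                        # Possible Actions
        return list(graph[state])              # move to any neighbouring location

    def result(state, action):                 # Transition Model
        return action                          # the action names the new location

    def goal_test(state):                      # Goal State check
        return state == 'B'

    def step_cost(state, action):              # Path Cost (per step)
        return graph[state][action]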

Example:

function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action

    persistent: seq, an action sequence, initially empty
                state, some description of the current world state
                goal, a goal, initially null
                problem, a problem formulation

    state ← UPDATE-STATE(state, percept)
    if seq is empty then
        goal ← FORMULATE-GOAL(state)
        problem ← FORMULATE-PROBLEM(state, goal)
        seq ← SEARCH(problem)
        if seq = failure then return a null action
    action ← FIRST(seq)
    seq ← REST(seq)
    return action
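The same control loop translates naturally into Python. This is a sketch only, with the four problem-specific helpers left abstract:

    class SimpleProblemSolvingAgent:
        def __init__(self):
            self.seq = []            # pending action sequence, initially empty
            self.state = None        # current world-state description

        def __call__(self, percept):
            self.state = self.update_state(self.state, percept)
            if not self.seq:
                goal = self.formulate_goal(self.state)
                problem = self.formulate_problem(self.state, goal)
                self.seq = self.search(problem)
                if self.seq is None:         # SEARCH returned failure
                    return None              # the null action
            return self.seq.pop(0)           # action = FIRST(seq); seq = REST(seq)

        # Problem-specific helpers, left abstract in this sketch:
        def update_state(self, state, percept): raise NotImplementedError
        def formulate_goal(self, state): raise NotImplementedError
        def formulate_problem(self, state, goal): raise NotImplementedError
        def search(self, problem): raise NotImplementedError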

3.2 Example Problems

The problem-solving approach is applied to:

Toy Problems: Simplified, structured scenarios for testing methods (e.g., Vacuum World, 8-Puzzle,
and 8-Queens).
Real-World Problems: Practical problems with meaningful solutions, often lacking a single clear
description.

3.2.1 Toy Problems

Vacuum World

Description: The vacuum world consists of two locations, A and B, each of which can be dirty or
clean. The vacuum agent can move left, move right, or clean the current location.

Initial State: Configuration of the grid (clean/dirty) and the starting position of the vacuum agent
(e.g., Location A or B).
Actions:
Move Left: Shift the vacuum to the left.
Move Right: Shift the vacuum to the right.
Clean: Clean the current location.
Transition Model:
Moving changes the agent's position.
Cleaning alters the state from dirty to clean.
Goal State: Both locations must be clean.
Path Cost: Each action (move or clean) costs 1.
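A compact Python sketch of this formulation follows; encoding a state as (agent_location, set_of_dirty_locations) is an assumed representation, not the only possible one.

    def result(state, action):                 # Transition Model
        loc, dirty = state
        if action == 'Left':
            return ('A', dirty)
        if action == 'Right':
            return ('B', dirty)
        if action == 'Clean':
            return (loc, dirty - {loc})        # current square becomes clean
        return state

    def goal_test(state):                      # Goal State: both locations clean
        _, dirty = state
        return not dirty

    # Example: agent at A, both squares dirty.
    s = ('A', frozenset({'A', 'B'}))
    s = result(result(result(s, 'Clean'), 'Right'), 'Clean')
    assert goal_test(s)                        # clean A, move right, clean B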

8-Puzzle Problem Formulation

Description: The 8-puzzle consists of a 3x3 grid with 8 numbered tiles and one empty space. The goal
is to arrange the tiles in a specific order.

States: Positions of eight tiles and a blank in a 3x3 grid.


Initial State: Any configuration; exactly half of all possible configurations can reach a given goal state.
Actions: Move the blank Left, Right, Up, or Down.
Transition Model: Returns the new state after an action (e.g., moving Left swaps the blank with an
adjacent tile).
Goal Test: Checks if the current state matches the goal configuration.
Path Cost: Each move costs 1; total cost equals the number of moves.
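A Python sketch of the state representation and transition model follows; encoding a state as a tuple of nine numbers read row by row, with 0 for the blank, is an assumption made for illustration.

    MOVES = {'Left': -1, 'Right': 1, 'Up': -3, 'Down': 3}   # blank's index shift

    def actions(state):                        # legal moves of the blank
        i = state.index(0)
        acts = []
        if i % 3 > 0: acts.append('Left')
        if i % 3 < 2: acts.append('Right')
        if i >= 3:    acts.append('Up')
        if i <= 5:    acts.append('Down')
        return acts

    def result(state, action):                 # swap blank with the adjacent tile
        i = state.index(0)
        j = i + MOVES[action]
        s = list(state)
        s[i], s[j] = s[j], s[i]
        return tuple(s)

    goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)
    start = (1, 0, 2, 3, 4, 5, 6, 7, 8)
    assert result(start, 'Left') == goal       # blank swaps with the tile to its left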

The 8-Queens Problem

The 8-Queens problem involves placing eight queens on a chessboard such that no queen can attack
another (no two queens share the same row, column, or diagonal). Figure 3.5 shows an attempted
solution that fails: the queen in the rightmost column is attacked by the queen at the top left.
Key elements of the problem:

States: Any arrangement of 0 to 8 queens on the board is a state.

Initial state: No queens on the board.


Actions: Add a queen to any empty square.

Transition model: Returns the board with a queen added to the specified square.
Goal test: Eight queens are on the board, and no queen attacks another.
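A sketch of this incremental formulation in Python is given below; representing a state as a list of (row, col) pairs for the queens placed so far is an assumed encoding.

    def attacks(q1, q2):                       # can two queens attack each other?
        (r1, c1), (r2, c2) = q1, q2
        return (r1 == r2 or c1 == c2           # same row or column
                or abs(r1 - r2) == abs(c1 - c2))  # same diagonal

    def actions(state):                        # add a queen to any empty square
        occupied = set(state)
        return [(r, c) for r in range(8) for c in range(8)
                if (r, c) not in occupied]

    def result(state, square):                 # board with one more queen
        return state + [square]

    def goal_test(state):                      # 8 queens, none attacked
        return (len(state) == 8 and
                all(not attacks(state[i], state[j])
                    for i in range(8) for j in range(i + 1, 8)))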

3.2.2 Real-World Problems

Real-world problems are complex, practical, and often require advanced solutions. Examples include:

1. Route-Finding: Finding paths between locations, as in navigation and airline travel. States include
location, time, and flight details. Goals involve reaching a destination with minimal cost and
delays.
2. Touring Problems: Visiting locations under constraints, such as the Traveling Salesperson
Problem (TSP), which seeks the shortest tour visiting each city exactly once.
3. VLSI Layout: Optimizing the placement of components on chips to minimize area and delays
while maximizing yield.
4. Robot Navigation: Moving a robot in continuous space, often involving multi-dimensional state
spaces.
5. Assembly Sequencing: Determining the correct order to assemble parts, avoiding infeasible
configurations through geometric searches.

3.3 Searching for Solutions

Searching for solutions is a core aspect of Artificial Intelligence (AI), involving the exploration of a
search space to find a path or solution to a given problem.

Key Components:

1. Initial State: The starting point of the problem.


2. Goal State: The desired outcome or solution.
3. Search Space: The set of all possible states or configurations.
4. Actions: Moves or decisions that transition between states.
5. Path: A sequence of actions leading from the initial state to the goal.

Process:

1. Define the problem (initial state, goal state, actions, and constraints).
2. Explore the search space using a suitable algorithm.
3. Evaluate paths or solutions based on cost, efficiency, or other criteria.

Applications:

Pathfinding (e.g., navigation systems).


Game playing (e.g., chess, tic-tac-toe).
Scheduling and resource allocation.
Problem-solving in robotics and optimization tasks.

3.4 Uninformed Search Strategies

Uninformed search strategies, also known as blind search strategies, do not have any
additional information about the goal other than the problem definition.
1. Breadth-First Search (BFS):

Explores all nodes at the current depth before proceeding to the next level.
Data Structure: Uses a FIFO queue.
Features:
Complete: Finds a solution if one exists.
Optimal: Guarantees the shortest path in unweighted graphs.
Time and space complexity: O(b^d), where b is the branching factor and d is the depth of the shallowest solution.
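A minimal BFS sketch over a graph given as an adjacency dict (the helper names are assumptions):

    from collections import deque

    def bfs(graph, start, goal):
        frontier = deque([[start]])            # FIFO queue of paths
        explored = {start}
        while frontier:
            path = frontier.popleft()
            node = path[-1]
            if node == goal:
                return path                    # shortest path by edge count
            for nxt in graph.get(node, []):
                if nxt not in explored:
                    explored.add(nxt)
                    frontier.append(path + [nxt])
        return None                            # no solution exists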

2. Depth-First Search (DFS)

Explores as far as possible along a branch before backtracking.


Data Structure: Uses a stack.
Features:
Completeness: DFS is complete in finite state spaces but may fail in infinite spaces or
with cycles.
Time Complexity: O(b^m), where b is the branching factor and m is the maximum depth of the
state space.
Space Complexity: O(bm).
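A recursive DFS sketch matching the same graph convention as the BFS sketch above:

    def dfs(graph, start, goal, visited=None):
        if visited is None:
            visited = set()
        if start == goal:
            return [start]
        visited.add(start)
        for nxt in graph.get(start, []):       # go deep before backtracking
            if nxt not in visited:
                path = dfs(graph, nxt, goal, visited)
                if path:
                    return [start] + path
        return None                            # dead end; backtrack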

3. Depth-Limited Search:

Depth-First Search with a depth limit l to prevent infinite descent.


Completeness: Complete only if a solution exists within depth l.
Time Complexity: O(b^l).
Space Complexity: O(bl).
Optimality: Not optimal.
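A sketch, using 'cutoff' to distinguish hitting the depth limit from genuine failure (this convention is an assumed choice):

    def dls(graph, node, goal, limit):
        if node == goal:
            return [node]
        if limit == 0:
            return 'cutoff'                    # limit reached, result unknown
        cutoff = False
        for nxt in graph.get(node, []):
            res = dls(graph, nxt, goal, limit - 1)
            if res == 'cutoff':
                cutoff = True
            elif res is not None:
                return [node] + res
        return 'cutoff' if cutoff else None    # None means no solution below here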

4. Iterative Deepening Search:

Combines the benefits of BFS and DFS by repeatedly applying depth-limited search with
increasing limits.
Completeness: Complete in finite state spaces.
Time Complexity: O(b^d).
Space Complexity: O(bd).
Optimality: Yes, for uniform step costs.
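Reusing the depth-limited sketch above:

    def ids(graph, start, goal, max_depth=50):
        for limit in range(max_depth + 1):     # limits 0, 1, 2, ...
            res = dls(graph, start, goal, limit)
            if res != 'cutoff':
                return res                     # a path, or None if none exists
        return None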

5. Uniform Cost Search:


Explores nodes with the lowest path cost using a priority queue.
Completeness: Yes, provided every step cost is at least some positive ε.
Time Complexity: O(b^(1+⌊C*/ε⌋)), where C* is the cost of the optimal solution.
Space Complexity: O(b^(1+⌊C*/ε⌋)).
Optimality: Always optimal.
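A sketch using Python's heapq as the priority queue; here graph maps each node to a dict of neighbour → step cost:

    import heapq

    def ucs(graph, start, goal):
        frontier = [(0, start, [start])]       # (path cost g, node, path)
        best = {start: 0}
        while frontier:
            cost, node, path = heapq.heappop(frontier)   # lowest g(n) first
            if node == goal:
                return cost, path
            for nxt, step in graph.get(node, {}).items():
                new_cost = cost + step
                if new_cost < best.get(nxt, float('inf')):
                    best[nxt] = new_cost
                    heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
        return None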

6. Bidirectional Search

Description: Searches forward from the start and backward from the goal, meeting in the
middle.
Completeness: Yes, in finite spaces.
Time Complexity: O(b^(d/2)).
Space Complexity: O(b^(d/2)).
Optimality: Yes, with BFS.
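A simplified bidirectional BFS sketch on an undirected graph; the alternating single-node expansions and the simple stopping test are assumptions, and a production version would need a more careful termination condition to guarantee the strictly shortest path:

    from collections import deque

    def bidirectional_bfs(graph, start, goal):
        if start == goal:
            return [start]
        parents_f, parents_b = {start: None}, {goal: None}   # two search trees
        qf, qb = deque([start]), deque([goal])

        def expand(queue, parents, other):
            node = queue.popleft()
            for nxt in graph.get(node, []):
                if nxt not in parents:
                    parents[nxt] = node
                    if nxt in other:           # the two frontiers meet here
                        return nxt
                    queue.append(nxt)
            return None

        while qf and qb:
            meet = expand(qf, parents_f, parents_b) or expand(qb, parents_b, parents_f)
            if meet:                           # stitch the two half-paths together
                path, n = [], meet
                while n is not None:
                    path.append(n); n = parents_f[n]
                path.reverse()
                n = parents_b[meet]
                while n is not None:
                    path.append(n); n = parents_b[n]
                return path
        return None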

3.5 Informed Search Strategies

Greedy Best-First Search:

Greedy Best-First Search expands the node that appears closest to the goal, aiming for a quick
solution.
It evaluates nodes using only the heuristic function: f(n) = h(n)

Example:

Starting from A, we have paths to nodes B (heuristic: 32), C (heuristic: 25), and D (heuristic:
35).
The node with the lowest heuristic value is C, so we move from A to C.

From C, we can go to F (heuristic: 17) or E (heuristic: 19). We choose F.

From F, we can directly reach the goal node G (heuristic: 0).

The final path is A → C → F → G.
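The strategy, and this worked example, can be sketched as follows; the edges of the example graph and the value h(A) are assumptions, since the text only gives the heuristic values along the path:

    import heapq

    def greedy_best_first(graph, h, start, goal):
        frontier = [(h[start], start, [start])]      # ordered by f(n) = h(n)
        visited = {start}
        while frontier:
            _, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            for nxt in graph.get(node, []):
                if nxt not in visited:
                    visited.add(nxt)
                    heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))
        return None

    graph = {'A': ['B', 'C', 'D'], 'C': ['E', 'F'], 'F': ['G']}
    h = {'A': 40, 'B': 32, 'C': 25, 'D': 35, 'E': 19, 'F': 17, 'G': 0}
    print(greedy_best_first(graph, h, 'A', 'G'))     # ['A', 'C', 'F', 'G']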

A* ALGORITHM
A* combines the features of both uniform-cost search and greedy best-first search. It
uses a cost function f(n)=g(n)+h(n), where:

g(n): the cost from the start node to node n.


h(n): the heuristic estimate from node n to the goal.

Example:

Find a path from the start node A to the goal node J using A*.


Step-01:

A* Algorithm calculates f(B) and f(F).

f(B) = 6 + 8 = 14

f(F) = 3 + 6 = 9

Since f(F) < f(B), the algorithm goes to node F.

Path- A → F

Step-02:

Node G and Node H can be reached from node F.

A* Algorithm calculates f(G) and f(H).

f(G) = (3+1) + 5 = 9

f(H) = (3+7) + 3 = 13

Since f(G) < f(H), the algorithm goes to node G.

Path- A → F → G

Step-03:

Node I can be reached from node G.

A* Algorithm calculates f(I).


f(I) = (3+1+3) + 1 = 8

It decides to go to node I.

Path- A → F → G → I

Step-04:

Node E, Node H and Node J can be reached from node I.

A* Algorithm calculates f(E), f(H) and f(J).

f(E) = (3+1+3+5) + 3 = 15

f(H) = (3+1+3+2) + 3 = 12

f(J) = (3+1+3+3) + 0 = 10

Since f(J) is the smallest, the algorithm goes to node J.

This gives the required shortest path from node A to node J:

Path- A → F → G → I → J
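An A* sketch that reproduces this worked example; the edge costs and heuristic values below are reconstructed from the arithmetic in the steps, and h(A) is an assumption:

    import heapq

    def a_star(graph, h, start, goal):
        frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
        best_g = {start: 0}
        while frontier:
            f, g, node, path = heapq.heappop(frontier)  # smallest f(n) = g(n) + h(n)
            if node == goal:
                return g, path
            for nxt, step in graph.get(node, {}).items():
                new_g = g + step
                if new_g < best_g.get(nxt, float('inf')):
                    best_g[nxt] = new_g
                    heapq.heappush(frontier,
                                   (new_g + h[nxt], new_g, nxt, path + [nxt]))
        return None

    graph = {'A': {'B': 6, 'F': 3}, 'F': {'G': 1, 'H': 7},
             'G': {'I': 3}, 'I': {'E': 5, 'H': 2, 'J': 3}}
    h = {'A': 10, 'B': 8, 'F': 6, 'G': 5, 'H': 3, 'I': 1, 'E': 3, 'J': 0}
    print(a_star(graph, h, 'A', 'J'))     # (10, ['A', 'F', 'G', 'I', 'J'])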

3.6 Local Search and Optimization Techniques

Local Search:

Local Search Algorithms focus on finding solutions by iteratively improving a single current
state, rather than exploring multiple paths. These methods are useful for large search spaces.

1. Characteristics:

Single state: Works with one current state at a time.


No memory: Doesn't retain a path from start to goal, just finds a solution.
Iterative improvement: Progresses step by step toward a solution.

2. Common Algorithms:

Hill Climbing: Moves to better neighboring states but may get stuck in local maxima.

Simulated Annealing: Allows occasional worse moves to escape local maxima.
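The annealing idea fits in a few lines; this sketch assumes maximisation, a caller-supplied neighbour function, and a cooling schedule, all of which are illustrative choices:

    import math, random
    from itertools import count

    def simulated_annealing(initial, neighbours, value, schedule):
        current = initial
        for t in count():
            T = schedule(t)                    # temperature decays over time
            if T <= 0:
                return current
            nxt = random.choice(neighbours(current))
            dE = value(nxt) - value(current)
            if dE > 0 or random.random() < math.exp(dE / T):
                current = nxt                  # accept worse moves with prob. e^(dE/T)

    # Example schedule (assumed): geometric cooling that eventually reaches zero.
    schedule = lambda t: 1.0 * (0.95 ** t) if t < 200 else 0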

3. Optimization Techniques:

Unconstrained/Constrained Optimization: Focuses on minimizing or maximizing


objective functions, with or without constraints.

Examples: Traveling Salesman Problem, Knapsack Problem.

Local search and optimization are key in solving large, complex problems efficiently.

Hill-Climbing Algorithm in AI

Hill climbing is a local search algorithm that iteratively moves toward higher values to find the
best solution, terminating at a peak where no neighbors have higher values. It optimizes
problems like the Traveling Salesman Problem and is also known as greedy local search,
focusing only on immediate neighbors.

Key Features:

Generate and Test Variant: Feedback guides the search direction.


Greedy Approach: Always moves in the direction that improves the objective.
No Backtracking: Does not revisit previous states.
Deterministic: Produces the same result for identical inputs.

State-Space Diagram:

Local Maximum: A peak better than neighbors but not the highest.
Global Maximum: The highest peak in the landscape.
Plateau: Flat region with identical neighbor values.
Shoulder: Plateau with an uphill edge.
Types of Hill Climbing:

1. Simple Hill Climbing


2. Steepest-Ascent Hill Climbing
3. Stochastic Hill Climbing

Problems:

Local Maximum: Stuck at a suboptimal peak.

Plateau: A flat region offers no uphill direction to move.

Ridges: Peaks with slopes inaccessible in a single step.
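A steepest-ascent hill-climbing sketch; the neighbour and value functions and the toy objective are assumptions for illustration:

    import random

    def hill_climbing(initial, neighbours, value):
        current = initial
        while True:
            best = max(neighbours(current), key=value)   # steepest uphill move
            if value(best) <= value(current):
                return current              # local maximum, plateau, or global maximum
            current = best

    # Toy usage: maximise f(x) = -(x - 3)^2 over the integers.
    value = lambda x: -(x - 3) ** 2
    neighbours = lambda x: [x - 1, x + 1]
    print(hill_climbing(random.randint(-10, 10), neighbours, value))   # -> 3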

You might also like