
UNIT - 2

1. Search Techniques
Search algorithms are significant in artificial intelligence because they provide solutions to many
AI problems, and a variety of such algorithms exist.

Search algorithms in AI are algorithms that help solve search problems. A search problem
comprises a search space, a start state, and a goal state. The algorithms find a solution by
transforming the initial state into the desired goal state; AI machines and applications rely on
these algorithms to perform search functions and discover viable solutions.

AI agents make artificial intelligence easy. These agents carry out tasks to achieve a specific
objective and plan actions that can lead to the intended outcome. The combination of these
actions completes the given task. Search algorithms in artificial intelligence are used to find the
best possible solutions for AI agents.

1.1 Search Algorithm Terminologies

1. Search - Searching is the step-by-step process of solving a search problem in a given search
space. A search problem has three major components.

 Search Space - A search space is a collection of potential solutions a system may have.
 Start State - The state from which the agent begins the search.
 Goal test - A function that examines the current state and returns whether or not the goal
state has been attained.

2. Search tree - A Search tree is a tree representation of a search issue. The node at the root of
the search tree corresponds to the initial condition.

3. Actions - It describes all the steps, activities, or operations accessible to the agent.

4. Transition model - It can be used to convey a description of what each action does.

5. Path Cost - It is a function that gives a cost to each path.


6. Solution - An action sequence that connects the start node to the goal node.

7. Optimal Solution - A solution is said to be optimal if it has the lowest cost among all
solutions.

1.2 Properties of Search Algorithms


The four important properties of search algorithms in artificial intelligence for comparing their
efficiency are as follows:

1. Completeness - A search algorithm is said to be complete if it is guaranteed to yield a
solution for any input whenever at least one solution exists.

2. Optimality - A solution found by an algorithm is considered optimal if it is the best
solution (lowest path cost) among all possible solutions.

3. Time complexity - It measures how long an algorithm takes to complete its job.

4. Space Complexity - The maximum storage space required during the search, as
determined by the problem's complexity.

1.3 Importance of Search Algorithms in Artificial Intelligence

The following points explain how and why the search algorithms in AI are important:

 Solving problems: Using logical search mechanisms, including problem description,
actions, and search space, search algorithms in artificial intelligence improve problem-
solving. A real-world example is route planning in applications such as Google Maps,
which employ search algorithms to determine the quickest or shortest path between
two locations.

 Search programming: Many AI activities can be coded in terms of searching, which
improves the formulation of a given problem's solution.
 Goal-based agents: Search algorithms in artificial intelligence improve the efficiency of
goal-based agents. These agents look for the optimal course of action, the one that
offers the best resolution to the problem at hand.

 Support production systems: Search algorithms in artificial intelligence help
production systems run. These systems support AI applications by using rules and
methods for putting them into practice, and they use search algorithms to find the rules
that can lead to the required action.

 Neural network systems: The neural network systems also use these algorithms.
These computing systems comprise a hidden layer, an input layer, an output layer, and
coupled nodes. Neural networks are used to execute many tasks in artificial intelligence.
For example, the search for connection weights that will result in the required input-
output mapping is improved by search algorithms in AI.

1.4 Types of Search Algorithms in AI

We can divide search algorithms in artificial intelligence into uninformed (Blind search) and
informed (Heuristic search) algorithms based on the search issues.
1.4.1. Uninformed/Blind Search
Uninformed search uses no domain knowledge, such as the proximity or location of the goal. It
works by brute force, because it only has the information needed to traverse the tree and to
identify leaf and goal nodes.

Uninformed search, also known as blind search, searches the tree using only the problem
definition (initial state, operators, and goal test), with no further knowledge of the search space.
It goes through each tree node until it reaches the target node. These algorithms are limited to
generating successors and distinguishing goal states from non-goal states.

1.4.1.1 Breadth-first search - This is a search method for a graph or tree data structure. It starts
at the tree root (or a given search key) and explores all neighbouring nodes at the current depth
level before moving on to nodes at the next depth level. It uses a queue data structure that works
on the first in, first out (FIFO) principle. It is a complete algorithm: it returns a solution
whenever one exists.
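As a sketch, the FIFO behaviour above can be written in a few lines of Python. The adjacency-list `graph` format and the `bfs` helper name are illustrative, not from the source:

```python
# Minimal breadth-first search sketch over an adjacency-list graph.
from collections import deque

def bfs(graph, start, goal):
    """Return a path (fewest edges) from start to goal, or None."""
    frontier = deque([[start]])        # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()      # first in, first out
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None                        # no solution exists
```

Because the queue is FIFO, the first path that reaches the goal uses the fewest edges, which is why BFS is complete (and optimal when every step has the same cost).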

1.4.1.2 Depth-first search - This algorithm is also used to explore graph or tree data structures.
Unlike breadth-first search, it follows each branch as far as possible from the root node before
backtracking. It is implemented using a stack data structure that works on the last in, first out
(LIFO) principle.
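Swapping the FIFO queue for a LIFO stack turns the same skeleton into depth-first search. This is a minimal sketch with the same assumed adjacency-list format:

```python
# Minimal depth-first search sketch using an explicit LIFO stack.
def dfs(graph, start, goal):
    """Return some path from start to goal, or None (not necessarily shortest)."""
    stack = [[start]]                  # stack of partial paths
    visited = set()
    while stack:
        path = stack.pop()             # last in, first out
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                stack.append(path + [neighbour])
    return None
```

Note that DFS can return a longer path than BFS on the same graph, since it commits to one branch before considering alternatives.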
1.4.1.3. Uniform cost search (UCS) - Unlike breadth-first and depth-first search, uniform
cost search takes cost into account. When there are multiple paths to the goal, the optimal
solution for UCS is the one with the lowest total cost, so it checks the expense of moving to
each next node and chooses the cheapest path among the alternatives. UCS is complete only
when the state space is finite and contains no loops of zero-cost edges, and it is optimal only
when no edge cost is negative. If every transition has the same cost, it behaves like breadth-first
search.
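A minimal UCS sketch reuses the BFS skeleton but orders the frontier by accumulated path cost with a priority queue. The `(neighbour, cost)` edge format is an assumption for illustration:

```python
# Minimal uniform cost search sketch; edges carry explicit costs.
import heapq

def ucs(graph, start, goal):
    """graph: {node: [(neighbour, edge_cost), ...]}. Return (cost, path) or None."""
    frontier = [(0, [start])]          # priority queue ordered by path cost
    explored = set()
    while frontier:
        cost, path = heapq.heappop(frontier)   # cheapest partial path first
        node = path[-1]
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbour, step in graph.get(node, []):
            if neighbour not in explored:
                heapq.heappush(frontier, (cost + step, path + [neighbour]))
    return None
```

In the test below, the direct edge A-C costs 5, but UCS correctly prefers the two-step route A-B-C with total cost 2.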

1.4.1.4. Iterative deepening depth-first search - It performs a depth-first search to level 1, then
restarts and completes a depth-first search to level 2, and so on until the answer is found. A node
at a given depth is generated only after all shallower nodes have been produced, yet only a single
stack of nodes needs to be stored. The algorithm terminates at the depth d at which the goal node
is found.
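The level-by-level restarting can be sketched as a depth-limited DFS wrapped in a loop over increasing depth limits. The helper names and the `max_depth` cap are illustrative:

```python
# Minimal iterative deepening depth-first search sketch.
def depth_limited(graph, node, goal, limit, path):
    """Depth-first search that gives up below the given depth limit."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for neighbour in graph.get(node, []):
        if neighbour not in path:      # avoid cycles along the current path
            found = depth_limited(graph, neighbour, goal, limit - 1,
                                  path + [neighbour])
            if found:
                return found
    return None

def iddfs(graph, start, goal, max_depth=20):
    """Run depth-limited search at depth 0, 1, 2, ... until the goal is found."""
    for depth in range(max_depth + 1):
        result = depth_limited(graph, start, goal, depth, [start])
        if result:
            return result
    return None
```

Each iteration repeats the shallower work, but because the tree grows exponentially with depth, the repeated work is a small fraction of the final pass.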

1.4.2. Informed Search

Informed search algorithms in AI use domain knowledge. Problem-specific information is
available to guide the search, so informed search strategies are more likely to find a solution
than uninformed ones.

Heuristic search is another name for informed search. A heuristic is a method that, while not
always guaranteed to find the best solution, is guaranteed to find a decent solution in a
reasonable amount of time. An informed search can answer many complex problems that would
be impossible to handle otherwise.

1.4.2.1. Greedy Search - Greedy search algorithms expand the node that appears closest to the
target node. A heuristic function h(x) determines this closeness: h(x) estimates the distance
between a node and the end or target node, so the closer the node is to the endpoint, the smaller
the value of h(x). When greedy search looks for the best route to the target node, it selects the
node with the lowest h value. The algorithm is implemented with a priority queue. It is not an
optimal algorithm, and it can get stuck in loops.
For example, imagine a simple game where the goal is to reach a specific location on a board.
The player can move in any direction but walls are blocking some paths. In a greedy search
approach, the player would always choose the direction that brings them closer to the goal,
without considering the potential obstacles or the fact that some paths may lead to dead ends.

If the chosen path leads to a dead end or a loop, the algorithm will keep moving back and forth
between the same nodes, without ever exploring other options. This can result in an infinite loop
where the algorithm keeps repeating the same steps and fails to find a solution.

1.4.2.2. A* Search - A* Tree Search, usually just called A* Search, combines the strengths of
uniform-cost search and greedy search. To find the best path from the starting state to the desired
state, the A* search algorithm investigates potential paths in a graph or grid, calculating the cost
of each potential move at each stage using the following two criteria:

 The cost already incurred to reach the present node.
 The approximate cost from the present node to the goal.

A heuristic function estimates the remaining distance from the current node to the desired state.
This heuristic is admissible: it never overestimates the actual cost of reaching the goal.

The path with the lowest overall cost is chosen after an A* search examines each potential route
based on the sum of the actual cost and the estimated cost (i.e., the cost so far and the estimated
cost-to-go). By doing this, the algorithm is guaranteed to always investigate the most promising
path first, which is most likely to lead to the desired state.
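A compact A* sketch, assuming an edge-cost adjacency list and a heuristic table `h` (both illustrative). Each frontier entry is ordered by f = g + h, where g is the cost so far:

```python
# Minimal A* search sketch; frontier ordered by f = g + h.
import heapq

def a_star(graph, h, start, goal):
    """graph: {node: [(neighbour, cost), ...]}; h: admissible heuristic per node.
    Return (cost, path) or None."""
    frontier = [(h[start], 0, [start])]    # entries are (f, g, path)
    best_g = {start: 0}                    # cheapest known cost to each node
    while frontier:
        f, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return g, path
        for neighbour, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(neighbour, float('inf')):
                best_g[neighbour] = new_g
                heapq.heappush(frontier,
                               (new_g + h[neighbour], new_g, path + [neighbour]))
    return None
```

In the test below, greedy search on h alone would commit to A first, but A* combines h with the cost so far and finds the cheaper route through B.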
Hill Climbing Algorithm in Artificial Intelligence
 Hill climbing algorithm is a local search algorithm which continuously moves in the
direction of increasing elevation/value to find the peak of the mountain or best solution to the
problem. It terminates when it reaches a peak value where no neighbor has a higher value.

 Hill climbing is a technique used for solving mathematical optimization problems. One of
the widely discussed examples of the hill climbing algorithm is the Travelling Salesman
Problem, in which we need to minimize the distance travelled by the salesman.

Features of Hill Climbing:


Following are some main features of Hill Climbing Algorithm:

o Generate and Test variant: Hill climbing is a variant of the Generate and Test method,
which produces feedback that helps decide which direction to move in the search
space.
o Greedy approach: Hill-climbing algorithm search moves in the direction which
optimizes the cost.
o No backtracking: It does not backtrack the search space, as it does not remember the
previous states.

State-space Diagram for Hill Climbing:

The state-space landscape is a graphical representation of the hill-climbing algorithm, showing a
graph between the various states of the algorithm and the objective function/cost.

On the Y-axis we take the function, which can be an objective function or a cost function, and on
the X-axis the state space. If the function on the Y-axis is cost, the goal of the search is to find
the global minimum; if it is an objective function, the goal is to find the global maximum.
Different regions in the state space landscape:

Local Maximum: Local maximum is a state which is better than its neighbor states, but there is
also another state which is higher than it.

Global Maximum: Global maximum is the best possible state of state space landscape. It has
the highest value of objective function.

Current state: It is a state in a landscape diagram where an agent is currently present.

Flat local maximum: It is a flat space in the landscape where all the neighbor states of current
states have the same value.

Shoulder: It is a plateau region which has an uphill edge.

Simple Hill Climbing:

Simple hill climbing is the simplest way to implement a hill climbing algorithm. It evaluates
only one neighbouring node state at a time and selects the first one that improves on the current
cost, setting it as the current state. In other words, it checks a single successor state and, if that is
better than the current state, moves to it; otherwise it stays where it is. This algorithm has the
following features:
o Less time consuming
o Less optimal solution and the solution is not guaranteed

Algorithm for Simple Hill Climbing:

o Step 1: Evaluate the initial state, if it is goal state then return success and Stop.
o Step 2: Loop Until a solution is found or there is no new operator left to apply.
o Step 3: Select and apply an operator to the current state.
o Step 4: Check new state:
a. If it is goal state, then return success and quit.
b. Else if it is better than the current state then assign new state as a current state.
c. Else if not better than the current state, then return to step 2.
o Step 5: Exit.
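The steps above can be sketched as follows; `objective` and `neighbours` are assumed to be supplied by the caller, and the one-dimensional example in the usage note is purely illustrative:

```python
# Minimal simple hill climbing sketch: take the first improving neighbour.
def simple_hill_climb(objective, neighbours, start):
    """Repeatedly move to the first neighbour that improves the objective;
    stop when no neighbour is better (a peak, possibly only local)."""
    current = start
    while True:
        improved = False
        for candidate in neighbours(current):
            if objective(candidate) > objective(current):
                current = candidate    # first better successor wins
                improved = True
                break
        if not improved:
            return current             # no better neighbour: (local) maximum
```

For example, maximizing f(x) = -(x - 3)^2 over the integers with neighbours x - 1 and x + 1, starting from 0, climbs step by step to the peak at x = 3. On a function with several peaks, the same code would stop at whichever local maximum it reaches first.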

Problems in Hill Climbing Algorithm:

1. Local Maximum: A local maximum is a peak state in the landscape which is better than each
of its neighboring states, but there is another state also present which is higher than the local
maximum.
 Solution: Backtracking can be a solution to the local maximum problem. Maintain a list
of promising paths so that the algorithm can backtrack in the state space and explore
other paths as well.
2. Plateau: A plateau is a flat area of the search space in which all the neighbouring states of the
current state have the same value, so the algorithm cannot find a best direction to move. A
hill-climbing search can get lost in a plateau.
 Solution: Take bigger (or much smaller) steps while searching, for example by randomly
selecting a state far away from the current state, so that the algorithm may reach a
non-plateau region.

3. Ridges: A ridge is a special form of local maximum. It is an area higher than its surrounding
areas, but it has a slope of its own and cannot be climbed in a single move.
 Solution: Bidirectional search, or moving in several different directions at once, can
mitigate this problem.
PROBLEM REDUCTION: AND-OR GRAPHS

The AND-OR graph (or tree) is useful for representing the solution of problems that can be
solved by decomposing them into a set of smaller problems, all of which must then be solved.
This decomposition, or reduction, generates arcs that we call AND arcs. One AND arc may point
to any number of successor nodes, all of which must be solved in order for the arc to point to a
solution. Just as in an OR graph, several arcs may emerge from a single node, indicating a
variety of ways in which the original problem might be solved. This is why the structure is called
not simply an AND graph but rather an AND-OR graph (which also happens to be an AND-OR
tree).

EXAMPLE FOR AND-OR GRAPH

ALGORITHM:

Let G be a graph with only starting node INIT.

Repeat the following until INIT is labeled SOLVED or h(INIT) > FUTILITY:

a) Select an unexpanded node from the most promising path from INIT (call it NODE)

b) Generate the successors of NODE. If there are none, set h(NODE) = FUTILITY (i.e., NODE is
unsolvable); otherwise, for each SUCCESSOR that is not an ancestor of NODE, do the following:

i. Add SUCCESSOR to G.

ii. If SUCCESSOR is a terminal node, label it SOLVED and set h(SUCCESSOR) = 0.

iii. If SUCCESSOR is not a terminal node, compute its h.


c) Propagate the newly discovered information up the graph as follows: let S be the set of
SOLVED nodes, or nodes whose h values have changed and need to have their values
propagated back to their parents. Initialize S to NODE. Until S is empty, repeat the following:

i. Remove a node from S and call it CURRENT.

ii. Compute the cost of each of the arcs emerging from CURRENT, and assign the minimum of
these arc costs as CURRENT's h.

iii. Mark the best path out of CURRENT by marking the arc that had the minimum cost in step ii.

iv. Mark CURRENT as SOLVED if all of the nodes connected to it through the newly marked arc
have been labeled SOLVED.

v. If CURRENT has been labeled SOLVED or its cost was just changed, propagate its new cost
back up through the graph. So add all of the ancestors of CURRENT to S.

EXAMPLE: 1

STEP 1:

A is the only node, and it is at the end of the current best path. It is expanded, yielding nodes B,
C, and D. The arc to D is labeled as the most promising one emerging from A, since it costs 6,
compared to the AND arc to B and C, which costs 9.
STEP 2:

Node D is chosen for expansion. This process produces one new arc, the AND arc to E and F,
with a combined cost estimate of 10, so we update the f' value of D to 10. Going back one more
level, we see that this makes the AND arc B-C better than the arc to D, so it is labeled as the
current best path.

STEP 3:
We traverse the arc from A and discover the unexpanded nodes B and C. If we are going to find a
solution along this path, we will have to expand both B and C eventually, so let us explore B
first. This generates two new arcs, the ones to G and to H. Propagating their f' values backward,
we update f' of B to 6 (since that is the best we think we can do, which we can achieve by going
through G). This requires updating the cost of the AND arc B-C to 12 (6 + 4 + 2). After doing
that, the arc to D is again the better path from A, so we record it as the current best path, and
either node E or node F will be chosen for expansion at step 4.

STEP 4:
Best First Search

OR Graph

We call a graph an OR graph, since each of its branches represents an alternative problem-
solving path. Best First Search selects the most promising of the nodes generated so far, which is
achieved by applying an appropriate heuristic function to each of them.

Heuristic function:

f(n) = h(n)

where,

h(n) - estimated straight-line distance from node n to the goal

To implement the graph search procedure, we need to use two lists of nodes:

OPEN - nodes that have been generated but have not been visited yet

CLOSED - nodes that have already been visited

Algorithm:

1. The 1st step is to define the OPEN list with a single node, the starting node.
2. The 2nd step is to check whether or not OPEN is empty. If it is empty, then the algorithm
returns failure and exits.
3. The 3rd step is to remove the node with the best score, n, from OPEN and place it in
CLOSED.
4. The 4th step “expands” the node n, where expansion is the identification of successor
nodes of n.
5. The 5th step checks each of the successor nodes to see whether one of them is the goal
node. If any successor is the goal node, the algorithm returns success and the solution,
which consists of the path traced backwards from the goal to the start node. Otherwise,
it proceeds to the sixth step.
6. In 6th step, for every successor node, the algorithm applies the evaluation function, f, to
it, then checks to see if the node has been in either OPEN or CLOSED. If the node has
not been in either, it gets added to OPEN.
7. Finally, the 7th step establishes a looping structure by sending the algorithm back to the
2nd step. This loop will only be broken if the algorithm returns success in step 5 or
failure in step 2.
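The seven steps can be sketched directly, with OPEN as a priority queue keyed by the evaluation function f(n) = h(n); the graph and heuristic table in the test are illustrative:

```python
# Minimal best-first search sketch following the OPEN/CLOSED scheme,
# with evaluation function f(n) = h(n).
import heapq

def best_first_search(graph, h, start, goal):
    """Return a path from start to goal, or None on failure."""
    open_list = [(h[start], [start])]       # step 1: OPEN holds the start node
    seen = {start}                          # nodes ever placed on OPEN/CLOSED
    while open_list:                        # step 2: empty OPEN means failure
        _, path = heapq.heappop(open_list)  # step 3: best-scoring node n
        node = path[-1]                     # (n implicitly moves to CLOSED)
        for successor in graph.get(node, []):   # step 4: expand n
            if successor == goal:           # step 5: goal test on successors
                return path + [successor]
            if successor not in seen:       # step 6: add unseen nodes to OPEN
                seen.add(successor)
                heapq.heappush(open_list, (h[successor], path + [successor]))
    return None                             # failure (step 2)
```

Note that, like the greedy search described in section 1.4.2.1, this ranks nodes purely by h, so the path it returns is not guaranteed to be the cheapest.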

Constraint Satisfaction problem (CSP)

A constraint satisfaction problem (CSP) consists of

 a set of variables,
 a domain for each variable, and
 a set of constraints.

The aim is to choose a value for each variable so that the resulting possible world satisfies the
constraints; we want a model of the constraints.

A finite CSP has a finite set of variables and a finite domain for each variable. Many of the
methods considered in this chapter only work for finite CSPs, although some are designed for
infinite, even continuous, domains.

The multidimensional aspect of these problems, where each variable can be seen as a separate
dimension, makes them difficult to solve but also provides structure that can be exploited.

Given a CSP, there are a number of tasks that can be performed:

 Determine whether or not there is a model.
 Find a model.
 Find all of the models, or enumerate the models.
 Count the number of models.
 Find the best model, given a measure of how good models are.
 Determine whether some statement holds in all models.
Consider the following cryptarithmetic problem as an example,

SEND+MORE=MONEY

1) SEND + MORE = MONEY

Column:  5 4 3 2 1

             S E N D
         + M O R E
         -----------
         M O N E Y

Here c1, c2 and c3 denote the carries out of columns 1, 2 and 3 (column 1 is the units column).

1. From column 5, M = 1, since the only carry possible from the sum of two single-digit
numbers (plus a carry) in column 4 is 1.

2. To produce a carry from column 4 to column 5, S + M + c3 must be at least 10, so S = 8 or 9,
and O = (S + M + c3) mod 10 is 0 or 1. But M = 1, so O = 0.

3. If there were a carry from column 3 to column 4, then in column 3 we would need
E + O + c2 = N + 10, which with O = 0 forces E = 9 and N = 0. But O = 0 already, so there is
no carry: c3 = 0 and S = 9.

4. If there were no carry from column 2 to column 3, then E = N, which is impossible; therefore
there is a carry, N = E + 1 and c2 = 1.

5. In column 2, N + R + c1 = E + 10 (since c2 = 1). Substituting N = E + 1 gives R = 9 - c1.
If there were no carry from column 1 (c1 = 0), R would be 9, but S = 9 already. Therefore
c1 = 1 and R = 8.

6. To produce the carry c1 = 1 from column 1, we must have D + E = 10 + Y. Y cannot be 0 or 1
(both are already taken), so D + E is at least 12. D is at most 7 (8 and 9 are taken), so E is at
least 5. Also, N = E + 1 is at most 7, so E = 5 or 6.

7. If E were 6, then D + E >= 12 would force D = 7, but N = E + 1 would also be 7, which is
impossible. Therefore E = 5 and N = 6.

8. D + E >= 12 with E = 5 then gives D = 7 and Y = 2.

SOLUTION:

9 5 6 7

+ 1 0 8 5

-----------------

1 0 6 5 2

VALUES:

S=9

E=5

N=6

D=7

M=1

O=0

R=8

Y=2
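As a check on the deduction, the same CSP can be solved mechanically as a model search: enumerate assignments of distinct digits to the letters and test the arithmetic constraint. The sketch below fixes M = 1 (the first deduction above) purely to shrink the enumeration; the function and variable names are illustrative:

```python
# Brute-force model search sketch for SEND + MORE = MONEY.
from itertools import permutations

def word_value(word, assignment):
    """Interpret a word as a number under a letter-to-digit assignment."""
    value = 0
    for letter in word:
        value = value * 10 + assignment[letter]
    return value

def solve_send_more_money():
    """Enumerate models of SEND + MORE = MONEY with distinct digits.
    M is fixed to 1 (as deduced in step 1) only to cut the search space."""
    letters = 'SENDORY'                       # the seven letters besides M
    pool = [d for d in range(10) if d != 1]   # digits still available
    for digits in permutations(pool, len(letters)):
        a = dict(zip(letters, digits))
        a['M'] = 1
        if a['S'] == 0:                       # no leading zero in SEND
            continue
        if word_value('SEND', a) + word_value('MORE', a) == word_value('MONEY', a):
            return a
    return None
```

Running this recovers the same unique model found by hand: S=9, E=5, N=6, D=7, M=1, O=0, R=8, Y=2, i.e. 9567 + 1085 = 10652.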
