AI Unit 2 Notes
A problem solving system that uses either forward or backward reasoning, and whose
operators each produce a single new object or state in the database, is said to represent
problems in a state space. From a programming perspective, AI includes the study of symbolic
programming, problem solving, and search.
A plan can then be seen as a sequence of operations that transform the initial state into
the goal state, i.e. the problem solution. Typically we will use some kind of search algorithm
to find a good plan.
Search is a method that can be used by computers to examine a problem space like this
in order to find a goal. Often, we want to find the goal as quickly as possible or without using
too many resources. A problem space can also be considered to be a search space because in
order to solve the problem, we will search the space for a goal state. We will continue to use
the term search space to describe this concept. In this chapter, we will look at a number of
methods for examining a search space. These methods are called search methods.
An Intelligent Agent must sense, must act, and must be autonomous (to some extent). It must
also be rational.
AI is about building rational agents. An agent is something that perceives and acts.
A rational agent always does the right thing. Building such agents raises three questions:
1. What are the functionalities (goals)?
2. What are the components?
3. How do we build them?
Intelligence is often defined in terms of what we understand as intelligence in humans.
Allen Newell defines intelligence as the ability to bring all the knowledge a system has at its
disposal to bear in the solution of a problem.
A more practical definition, used in the context of building artificial systems with
intelligence, is the ability to perform well on tasks that humans currently do better.
Artificial Intelligence is the study of building agents that act rationally. Most of the time,
these agents perform some kind of search algorithm in the background in order to achieve
their tasks.
A search problem consists of:
A State Space. Set of all possible states where you can be.
A Start State. The state from where the search begins.
A Goal Test. A function that looks at the current state and returns whether or not it is
the goal state.
The Solution to a search problem is a sequence of actions, called the plan, that
transforms the start state into the goal state.
This plan is achieved through search algorithms.
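To make the definition above concrete, a search problem can be written down as a small data structure. The sketch below is in Python; the class name SearchProblem and its methods are illustrative assumptions rather than a standard API:

class SearchProblem:
    def __init__(self, start_state, goal_state, successors):
        self.start_state = start_state    # the start state, where the search begins
        self.goal_state = goal_state      # used by the goal test
        self.successors = successors      # dict: state -> list of (action, next_state) pairs

    def goal_test(self, state):
        # Returns whether or not the given state is the goal state.
        return state == self.goal_state

    def expand(self, state):
        # Returns the (action, next_state) pairs reachable from this state.
        return self.successors.get(state, [])

A search algorithm then explores the state space through expand and returns a plan, i.e. the sequence of actions that transforms start_state into a state satisfying goal_test.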
Types of Search Algorithms
There are far too many powerful search algorithms to cover them all here. Instead, these notes
discuss six of the fundamental search algorithms, divided into two categories: uninformed
(blind) search and informed search.
Uninformed Search Algorithms
The search algorithms in this section have no additional information on the goal node
other than that provided in the problem definition. The plans to reach the goal state from
the start state differ only by the order and/or length of actions. Uninformed search is also
called Blind search.
The uninformed search algorithms discussed in this section are Breadth First Search (BFS),
Depth First Search (DFS), and Uniform Cost Search (UCS). Each of these algorithms will have:
A problem graph, containing the start node S and the goal node G.
A strategy, describing the manner in which the graph will be traversed to get to G.
A fringe, which is a data structure used to store all the possible states (nodes) that you
can go to from the current state (see the sketch after this list).
A tree that results while traversing to the goal node.
A solution plan, which is the sequence of nodes from S to G.
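As a rough illustration of how the fringe and the solution plan fit together, here is a sketch (in Python, assuming the problem graph is given as an adjacency dictionary; the function name bfs and the example graph are illustrative assumptions, not part of any standard library) of Breadth First Search, where the fringe is a FIFO queue of partial paths:

from collections import deque

def bfs(graph, start, goal):
    # graph: dict mapping each node to a list of neighbouring nodes
    fringe = deque([[start]])      # the fringe holds partial paths; FIFO order gives BFS
    explored = set()               # nodes already expanded
    while fringe:
        path = fringe.popleft()
        node = path[-1]
        if node == goal:           # goal test
            return path            # the solution plan: the sequence of nodes from start to goal
        if node in explored:
            continue
        explored.add(node)
        for neighbour in graph.get(node, []):
            fringe.append(path + [neighbour])
    return None                    # no plan exists

For example, bfs({'S': ['A', 'D'], 'A': ['G'], 'D': ['G'], 'G': []}, 'S', 'G') returns the plan ['S', 'A', 'G'] for that made-up graph. Depth First Search uses the same skeleton with a LIFO stack as the fringe.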
UCS is different from BFS and DFS because here the costs come into play. In other
words, traversing via different edges might not have the same cost. The goal is to find a path
where the cumulative sum of costs is least.
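A sketch of UCS, under the same illustrative assumptions as the BFS sketch above but with edge costs attached to the graph: the only change is that the fringe becomes a priority queue ordered by the cumulative path cost.

import heapq

def uniform_cost_search(graph, start, goal):
    # graph: dict mapping each node to a list of (neighbour, edge_cost) pairs
    fringe = [(0, start, [start])]        # priority queue ordered by cumulative cost
    explored = set()
    while fringe:
        cost, node, path = heapq.heappop(fringe)
        if node == goal:
            return path, cost             # the cheapest plan and its total cost
        if node in explored:
            continue
        explored.add(node)
        for neighbour, edge_cost in graph.get(node, []):
            heapq.heappush(fringe, (cost + edge_cost, neighbour, path + [neighbour]))
    return None, float('inf')             # no path from start to goal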
Informed Search Algorithms
Here, the algorithms have information on the goal state, which helps in more efficient
searching. This information is obtained by something called a heuristic.
In this section, we will discuss the following search algorithms.
1. Greedy Search
2. A* Tree Search
3. A* Graph Search
Search Heuristics:
In an informed search, a heuristic is a function that estimates how close a state is to
the goal state. Examples include the Manhattan distance and the Euclidean distance. (The lesser
the distance, the closer the goal.)
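For concreteness, the two example heuristics can be sketched as follows, assuming (purely for this illustration) that a state is an (x, y) grid coordinate and the goal is another such coordinate:

import math

def manhattan_distance(state, goal):
    # Sum of absolute coordinate differences (grid-style movement).
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)

def euclidean_distance(state, goal):
    # Straight-line distance between the two points.
    (x1, y1), (x2, y2) = state, goal
    return math.hypot(x1 - x2, y1 - y2)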
Different heuristics are used in different informed algorithms discussed below.
Greedy Search
In greedy search, we expand the node closest to the goal node. The "closeness" is estimated by
a heuristic h(x).
Heuristic: A heuristic h is defined as
h(x) = estimate of the distance of node x from the goal node.
The lower the value of h(x), the closer the node is to the goal.
Strategy: Expand the node closest to the goal state, i.e. the node with the lowest h value.
Question. Find the path from S to G using greedy search. The heuristic value h of each node is
written below the name of the node.
Solution. Starting from S, we can traverse to A(h=9) or D(h=5). We choose D, as it has the lower heuristic
cost. Now from D, we can move to B(h=4) or E(h=3). We choose E with lower heuristic cost. Finally,
from E, we go to G(h=0). This entire traversal is shown in the search tree below, in blue.
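The traversal described above can be sketched in code. In the sketch below (Python, with illustrative assumptions: the graph is an adjacency dictionary and h is a dictionary of heuristic values), greedy search always pops the fringe node with the lowest h value, ignoring edge costs entirely:

import heapq

def greedy_search(graph, h, start, goal):
    # graph: dict node -> list of neighbours; h: dict node -> heuristic value
    fringe = [(h[start], start, [start])]     # priority queue ordered by h(x) only
    explored = set()
    while fringe:
        _, node, path = heapq.heappop(fringe)
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for neighbour in graph.get(node, []):
            heapq.heappush(fringe, (h[neighbour], neighbour, path + [neighbour]))
    return None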
A* Tree Search
A* tree search, or simply A* search, combines the strengths of uniform cost search and greedy
search. It expands the node in the fringe with the lowest value of f(x) = g(x) + h(x), where
g(x) is the backward cost (the cumulative cost from the start node to x) and h(x) is the
forward cost (the heuristic estimate from x to the goal).
Question. Find the path from S to G using A* tree search.
Solution. Starting from S, the algorithm computes g(x) + h(x) for all nodes in the fringe at each
step, choosing the node with the lowest sum. The entire working is shown in the table below.
Note that in the fourth iteration we get two paths with an equal summed cost f(x), so we
expand them both in the next iteration. The path with the lower cost on further expansion is the
chosen path.
Path          h(x)    g(x)      f(x)
S             7       0         7
S -> A        9       3         12
S -> D        5       2         7
S -> D -> B   4       2+1=3     7
Path: S->D->B->E->G
Cost: 7
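A sketch of A* tree search, under the same illustrative graph and heuristic representation as above. Being a tree search, it keeps no record of explored nodes, which is exactly the limitation discussed in the next section:

import heapq

def a_star_tree_search(graph, h, start, goal):
    # graph: dict node -> list of (neighbour, edge_cost); h: dict node -> heuristic value
    fringe = [(h[start], 0, start, [start])]          # ordered by f(x) = g(x) + h(x)
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return path, g                            # plan and its cost g(goal)
        for neighbour, edge_cost in graph.get(node, []):
            g_new = g + edge_cost
            heapq.heappush(fringe, (g_new + h[neighbour], g_new, neighbour, path + [neighbour]))
    return None, float('inf')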
A* Graph Search
A* tree search works well, except that it spends time re-exploring branches it has already
explored. In other words, if the same node has been expanded twice in different branches of the
search tree, A* tree search may explore both of those branches, thus wasting time.
A* Graph Search, or simply Graph Search, removes this limitation by adding one rule: do not
expand the same node more than once.
Heuristic. Graph search is optimal only when the forward cost between two successive
nodes A and B, given by h(A) - h(B), is less than or equal to the backward cost between those
two nodes, g(A -> B). This property of a graph search heuristic is called consistency.
Consistency: h(A) - h(B) <= g(A -> B) for every pair of successive nodes A and B.
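The consistency condition can be checked edge by edge. The sketch below (same illustrative graph representation as before) returns False as soon as any edge violates h(A) - h(B) <= g(A -> B):

def is_consistent(graph, h):
    # graph: dict node -> list of (neighbour, edge_cost); h: dict node -> heuristic value
    for a, edges in graph.items():
        for b, cost in edges:
            if h[a] - h[b] > cost:    # forward cost exceeds the edge (backward) cost
                return False
    return True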
Example
Question. Use graph search to find a path from S to G in the following graph.
Solution. We solve this question in much the same way as the previous one, but in this case we
keep track of the nodes explored so that we do not re-explore them.
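A sketch of A* graph search, obtained from the A* tree search sketch above by adding an explored (closed) set so that no node is expanded more than once (same illustrative representation as before):

import heapq

def a_star_graph_search(graph, h, start, goal):
    # graph: dict node -> list of (neighbour, edge_cost); h: dict node -> heuristic value
    fringe = [(h[start], 0, start, [start])]
    explored = set()                                  # nodes already expanded
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return path, g
        if node in explored:
            continue                                  # do not expand the same node twice
        explored.add(node)
        for neighbour, edge_cost in graph.get(node, []):
            if neighbour not in explored:
                g_new = g + edge_cost
                heapq.heappush(fringe, (g_new + h[neighbour], g_new, neighbour, path + [neighbour]))
    return None, float('inf')

With a consistent heuristic, this still finds the optimal path, which is why the consistency condition above matters.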