UNIT-2 Artificial Intelligence
Topics
Search Space: The search space represents the set of possible solutions a system may have.
Start State: The state from which the agent begins the search.
Goal Test: A function that observes the current state and returns whether the goal state has been achieved.
Search Tree: A tree representation of the search problem. The root of the search tree is the root node, which corresponds to the initial state.
Actions: A description of all the actions available to the agent.
Path Cost: A function that assigns a numeric cost to each path.
Solution: An action sequence that leads from the start node to the goal node.
Optimal Solution: A solution that has the lowest cost among all solutions.
Water Jug Problem

Rule No | Left of Rule (Condition) | Right of Rule (Operation) | Description
1       | (X,Y | X<5)              | (5,Y)                     | Fill 5-L Jug
5       | (X,Y | X+Y<=5 & Y>0)     | (X+Y,0)                   | Empty 3-L Jug into 5-L Jug
6       | (X,Y | X+Y<=3 & X>0)     | (0,X+Y)                   | Empty 5-L Jug into 3-L Jug
7       | (X,Y | X+Y>=5 & Y>0)     | (5,Y-(5-X))               | Pour water from 3-L Jug into 5-L Jug until 5-L Jug is full
8       | (X,Y | X+Y>=3 & X>0)     | (X-(3-Y),3)               | Pour water from 5-L Jug into 3-L Jug until 3-L Jug is full
Water Jug Problem (general case: X in the m-L jug, Y in the n-L jug, m > n)

Rule No | Left of Rule (Condition) | Right of Rule (Operation) | Description
1       | (X,Y | X<m)              | (m, Y)                    | Fill m-L Jug
5       | (X,Y | X+Y<=m & Y>0)     | (X+Y, 0)                  | Empty n-L Jug into m-L Jug
6       | (X,Y | X+Y<=n & X>0)     | (0, X+Y)                  | Empty m-L Jug into n-L Jug
7       | (X,Y | X+Y>=m & Y>0)     | (m, Y-(m-X))              | Pour water from n-L Jug into m-L Jug until m-L Jug is full
8       | (X,Y | X+Y>=n & X>0)     | (X-(n-Y), n)              | Pour water from m-L Jug into n-L Jug until n-L Jug is full
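The production rules above translate directly into a successor function. Below is a minimal Python sketch, assuming the usual state encoding (X, Y) for the amounts of water in the m-litre and n-litre jugs; only the rules listed in the table (1, 5, 6, 7, 8) are implemented.

```python
def successors(state, m, n):
    """Return (rule, new_state) pairs for the rules listed in the table above."""
    X, Y = state
    result = []
    if X < m:                                  # Rule 1: fill the m-L jug
        result.append(("Rule 1", (m, Y)))
    if X + Y <= m and Y > 0:                   # Rule 5: empty the n-L jug into the m-L jug
        result.append(("Rule 5", (X + Y, 0)))
    if X + Y <= n and X > 0:                   # Rule 6: empty the m-L jug into the n-L jug
        result.append(("Rule 6", (0, X + Y)))
    if X + Y >= m and Y > 0:                   # Rule 7: pour from n-L jug until m-L jug is full
        result.append(("Rule 7", (m, Y - (m - X))))
    if X + Y >= n and X > 0:                   # Rule 8: pour from m-L jug until n-L jug is full
        result.append(("Rule 8", (X - (n - Y), n)))
    return result

# Example: 5-L and 3-L jugs starting from state (0, 3)
print(successors((0, 3), 5, 3))   # [('Rule 1', (5, 3)), ('Rule 5', (3, 0))]
```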
Missionaries and Cannibals Problem
SEARCHING FOR SOLUTIONS
◉ Search algorithms require a data structure to keep track of the search tree
that is being constructed. For each node n of the tree, we have a structure
that contains four components:
• n.STATE: the state in the state space to which the node corresponds;
• n.PARENT: the node in the search tree that generated this node;
• n.ACTION: the action that was applied to the parent to generate the node;
• n.PATH-COST: the cost, traditionally denoted by g(n), of the path from the
initial state to the node, as indicated by the parent pointers.
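A minimal Python sketch of this node structure, assuming the four components listed above; the path() helper that follows the parent pointers back to the root is an added convenience, not part of the slide.

```python
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state          # n.STATE: the state this node corresponds to
        self.parent = parent        # n.PARENT: the node that generated this node
        self.action = action        # n.ACTION: the action applied to the parent
        self.path_cost = path_cost  # n.PATH-COST: g(n), cost from the initial state

    def path(self):
        # Follow the parent pointers back to the root to recover the solution path.
        node, states = self, []
        while node is not None:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))
```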
Measuring problem-solving performance
The performance of a search algorithm is typically measured in four ways: completeness (does it find a solution when one exists?), optimality (does it find the lowest-cost solution?), time complexity, and space complexity.
Breadth-first search (BFS)
Advantages:
• BFS will provide a solution if any solution exists.
• If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e., the one that requires the least number of steps.
Disadvantages:
• It requires lots of memory since each level of the tree must be saved into memory to
expand the next level.
• BFS needs lots of time if the solution is far away from the root node.
BFS- Working
S → A → B → C → D → G → H → E → F → I → K
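A minimal breadth-first search sketch in Python. The adjacency list below is hypothetical, chosen only to mirror the node names in the trace; the actual edges of the slide's figure are not reproduced here.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand nodes level by level using a FIFO queue."""
    frontier = deque([[start]])        # queue of paths, shallowest first
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:               # goal test
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None                        # no solution exists

# Hypothetical graph (not the slide's figure):
graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['G', 'H'],
         'C': ['E', 'F'], 'D': ['I'], 'G': ['K']}
print(bfs(graph, 'S', 'K'))            # ['S', 'B', 'G', 'K']
```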
Depth-first search (DFS)
Advantages:
• DFS requires very little memory, as it only needs to store a stack of the nodes on the path from the root node to the current node.
• It can take less time to reach the goal node than the BFS algorithm (if it traverses the right path).
Disadvantages:
• There is a possibility that many states keep recurring, and there is no guarantee of finding the solution.
• The DFS algorithm searches deep down a single path, and sometimes it may enter an infinite loop.
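A minimal depth-first search sketch, reusing the same hypothetical graph as the BFS sketch; the visited set guards against the recurring-state problem noted above.

```python
def dfs(graph, start, goal, path=None, visited=None):
    """Depth-first search: follow one branch as deep as possible before backtracking."""
    if path is None:
        path, visited = [start], {start}
    node = path[-1]
    if node == goal:
        return path
    for neighbor in graph.get(node, []):
        if neighbor not in visited:            # avoid revisiting states (no infinite loops)
            visited.add(neighbor)
            result = dfs(graph, start, goal, path + [neighbor], visited)
            if result is not None:
                return result
    return None

# Hypothetical graph (same as in the BFS sketch):
graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['G', 'H'],
         'C': ['E', 'F'], 'D': ['I'], 'G': ['K']}
print(dfs(graph, 'S', 'K'))   # ['S', 'B', 'G', 'K'] (depends on branch order)
```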
Informed Search Algorithms
Informed search algorithms are more useful for large search spaces. An informed search algorithm uses the idea of a heuristic, so it is also called heuristic search.
◉ Hill climbing
◉ A* Algorithm
◉ Alpha-Beta Pruning
Hill Climbing
Flat local maximum: A flat region of the landscape where all the neighboring states of the current state have the same value.
Ridge: A region that is higher than its neighbors but that itself has a slope. It is a special kind of local maximum.
Shoulder: A plateau that has an uphill edge.
Problems in different regions in Hill climbing
Hill climbing cannot reach the optimal/best state(global
maximum) if it enters any of the following regions :
◉ Local maximum: At a local maximum, all neighboring
states have a value that is worse than the current state. Since
hill climbing uses a greedy approach, it will not move to a
worse state, and the search terminates even though a better
solution may exist.
To overcome the local maximum problem: Utilize the
backtracking technique. Maintain a list of visited states. If
the search reaches an undesirable state, it can backtrack to
the previous configuration and explore a new path.
Simple Hill Climbing Algorithm
◉ Evaluate the initial state. If it is a goal state, then stop and return success. Otherwise, make
the initial state the current state.
◉ Loop until a solution state is found or there are no new operators left that can be
applied to the current state:
Select an operator that has not yet been applied to the current state and apply it to produce a new state.
Evaluate the new state:
If the new state is a goal state, then stop and return success.
If it is better than the current state, then make it the current state and proceed further.
If it is not better than the current state, then continue in the loop until a solution is found.
◉ Exit from the function. (A code sketch of this loop is given below.)
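A minimal sketch of the simple (first-improvement) hill-climbing loop described above. The evaluate, neighbours, and is_goal callables are hypothetical parameters supplied by the caller, and higher evaluation values are assumed to be better.

```python
def simple_hill_climbing(initial, evaluate, neighbours, is_goal):
    """Simple hill climbing: accept the first neighbour that is better than the
    current state; stop when a goal is reached or no operator improves the state."""
    current = initial
    if is_goal(current):
        return current
    while True:
        moved = False
        for new_state in neighbours(current):        # apply operators one by one
            if is_goal(new_state):
                return new_state
            if evaluate(new_state) > evaluate(current):
                current = new_state                  # first better state becomes current
                moved = True
                break
        if not moved:                                # no operator yields a better state
            return current
```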
Hill Climbing Algorithm
Problem: https://www.youtube.com/watch?v=wM4n12FHelM
4-Queens
◉ States: 4 queens, one in each of the 4 columns (4^4 = 256 states)
◉ Neighborhood operator: move a queen within its column
◉ Evaluation / optimization function: h(n) = number of attacks
◉ Goal test: no attacks, i.e., h(G) = 0
Initial state (guess).
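A small sketch of the evaluation function h(n) for this formulation. It assumes a common encoding in which board[col] gives the row of the queen in that column; this encoding is an assumption, not specified on the slide.

```python
from itertools import combinations

def h(board):
    """Number of attacking queen pairs; board[col] = row of the queen in that column."""
    attacks = 0
    for (c1, r1), (c2, r2) in combinations(enumerate(board), 2):
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):   # same row or same diagonal
            attacks += 1
    return attacks

print(h((0, 1, 2, 3)))   # 6: all queens lie on the main diagonal, every pair attacks
print(h((1, 3, 0, 2)))   # 0: a goal state, h(G) = 0
```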
Steepest-Ascent Hill Climbing Algorithm
• Step 1: Evaluate the initial state. If it is a goal state, then return success and stop; otherwise make the
initial state the current state.
• Step 2: Loop until a solution is found or the current state does not change.
• Let SUCC be a state such that any successor of the current state will be better than it.
• For each operator that applies to the current state:
• Apply the new operator and generate a new state.
• Evaluate the new state.
• If it is a goal state, then return it and quit; otherwise compare it to SUCC.
• If it is better than SUCC, then set the new state as SUCC.
• If SUCC is better than the current state, then set the current state to SUCC.
• Step 3: Exit. (A sketch of this loop on the 4-queens problem is given below.)
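A minimal sketch of the steepest-ascent loop above, applied to the 4-queens formulation. Because h(n) counts attacks, lower values are better here, so "better than SUCC" means fewer attacks; the board encoding and helper names are assumptions, not taken from the slide.

```python
from itertools import combinations

def h(board):
    """Number of attacking queen pairs (lower is better; h = 0 is the goal)."""
    return sum(1 for (c1, r1), (c2, r2) in combinations(enumerate(board), 2)
               if r1 == r2 or abs(r1 - r2) == abs(c1 - c2))

def neighbours(board):
    """Neighborhood operator from the slide: move one queen within its column."""
    for col in range(len(board)):
        for row in range(len(board)):
            if row != board[col]:
                yield board[:col] + (row,) + board[col + 1:]

def steepest_ascent(board):
    current = board
    while True:
        # SUCC: the best successor among all states generated from the current state.
        succ = min(neighbours(current), key=h)
        if h(succ) >= h(current):       # no successor improves on the current state
            return current              # local optimum (may or may not be the goal)
        current = succ

solution = steepest_ascent((0, 1, 2, 3))
print(solution, h(solution))            # h = 0 if a goal was reached, otherwise a local optimum
```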
A* Algorithm
◉ Step-01:
Define a list OPEN.
Initially, OPEN consists solely of a single node, the start node S.
◉ Step-02:
If OPEN is empty, return failure and exit.
◉ Step-03:
Remove node n with the smallest value of f(n) from OPEN and move it to list CLOSED.
If node n is a goal state, return success and exit.
◉ Step-04:
Expand node n.
Algorithm Continued…
◉ Step-05:
If any successor to n is the goal node, return success and obtain the solution
by tracing the path from the goal node back to S.
Otherwise, go to Step-06.
◉ Step-06:
For each successor node,
Apply the evaluation function f to the node.
If the node has not been in either list, add it to OPEN.
◉ Step-07:
Go back to Step-02.
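A minimal A* sketch following the OPEN/CLOSED steps above, using the standard evaluation f(n) = g(n) + h(n). The graph and heuristic values below are hypothetical, chosen only to be consistent with the worked example that follows (f(B) = 14, f(F) = 9); they are not the slide's actual figure.

```python
import heapq

def a_star(graph, h, start, goal):
    """A*: repeatedly expand the OPEN node with the smallest f(n) = g(n) + h(n)."""
    open_list = [(h[start], 0, start, [start])]   # entries: (f, g, node, path)
    closed = set()
    while open_list:                              # Step-02: empty OPEN means failure
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:                          # Step-03: goal reached
            return path, g
        if node in closed:
            continue
        closed.add(node)                          # move n to CLOSED
        for succ, cost in graph.get(node, {}).items():   # Step-04: expand n
            if succ not in closed:
                g2 = g + cost
                heapq.heappush(open_list, (g2 + h[succ], g2, succ, path + [succ]))
    return None, float('inf')

# Hypothetical graph and heuristic, consistent with f(B) = 6 + 8 and f(F) = 3 + 6:
graph = {'A': {'B': 6, 'F': 3}, 'B': {'C': 3, 'D': 2}, 'F': {'G': 1, 'H': 7},
         'G': {'I': 3}, 'I': {'J': 3}, 'D': {'E': 8}, 'C': {}, 'E': {}, 'H': {}, 'J': {}}
h = {'A': 10, 'B': 8, 'C': 5, 'D': 7, 'E': 3, 'F': 6, 'G': 5, 'H': 3, 'I': 1, 'J': 0}
print(a_star(graph, h, 'A', 'J'))   # (['A', 'F', 'G', 'I', 'J'], 10)
```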
Graph that we will work on… A is the initial state and J is the final state
Problem Example
◉ Step-01:
We start with node A.
Node B and Node F can be reached from node A.
◉ A* Algorithm calculates f(B) and f(F).
f(B) = g(B) + h(B) = 6 + 8 = 14
f(F) = g(F) + h(F) = 3 + 6 = 9
◉ Since f(F) < f(B), the algorithm decides to go to node F.
Path- A → F
Problem Example
Path- A → F → G
Mini-Max algorithm
◉ The Mini-Max algorithm involves several key steps, executed recursively until
the optimal move is determined. Here is a step-by-step breakdown:
◉ Step 1: Generate the Game Tree
◉ Objective: Create a tree structure representing all possible moves from the
current game state.
◉ Details: Each node represents a game state, and each edge represents a possible
move.
Steps involved in the Mini-Max Algorithm
◉ Terminal States
◉ For terminal states, the utility value is assigned directly.
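A minimal minimax sketch. The game tree here is a hypothetical nested list (a leaf is a utility value, an internal node is a list of children); the slide's own tree is shown only in figures.

```python
def minimax(node, maximizing=True):
    """Return the minimax value of a nested-list game tree."""
    if not isinstance(node, list):            # terminal state: utility assigned directly
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Hypothetical 2-ply game tree with leaf utilities:
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree))   # MAX picks the branch whose MIN value is largest -> 3
```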
Alpha-Beta Pruning
(Worked example: the pruning proceeds step by step through the game tree shown in the figures, ending with the final pruned tree.)
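A minimal alpha-beta pruning sketch over the same kind of nested tree used in the minimax sketch above; the leaf values are hypothetical, not those of the figures.

```python
def alphabeta(node, alpha=float('-inf'), beta=float('inf'), maximizing=True):
    """Minimax with alpha-beta pruning on a nested-list game tree."""
    if not isinstance(node, list):                     # terminal node: return its utility
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                          # beta cut-off: MIN will avoid this branch
                break
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:                          # alpha cut-off: MAX will avoid this branch
                break
        return value

tree = [[3, 5], [2, 9], [0, 7]]
print(alphabeta(tree))   # 3, same answer as minimax but with pruned subtrees
```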
Constraint Satisfaction Problem
A constraint satisfaction problem is defined by three main elements: a set of variables, a domain of possible values for each variable, and a set of constraints. Each constraint consists of a pair
{scope, rel}. The scope is a tuple of the variables that participate in the constraint, and rel is a relation that
lists the combinations of values the variables can take to satisfy the constraint.
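A small sketch of the {scope, rel} representation, using a hypothetical three-variable map-colouring CSP; the variables, domains, and constraints are chosen purely for illustration.

```python
variables = ['WA', 'NT', 'SA']
domains = {v: ['red', 'green', 'blue'] for v in variables}

# Each constraint is a pair (scope, rel): scope is a tuple of variables, and rel is the
# relation those variables must satisfy (expressed here as a predicate).
constraints = [
    (('WA', 'NT'), lambda wa, nt: wa != nt),
    (('NT', 'SA'), lambda nt, sa: nt != sa),
    (('WA', 'SA'), lambda wa, sa: wa != sa),
]

def satisfies(assignment):
    """Check a complete assignment against every constraint's scope and relation."""
    return all(rel(*(assignment[v] for v in scope)) for scope, rel in constraints)

print(satisfies({'WA': 'red', 'NT': 'green', 'SA': 'blue'}))   # True
print(satisfies({'WA': 'red', 'NT': 'red', 'SA': 'blue'}))     # False
```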