CS44112 Artificial Intelligence Unit 2

The document outlines the curriculum for a course on Artificial Intelligence, covering topics such as intelligent agents, problem-solving through search, knowledge representation, planning, and machine learning. It details specific algorithms and techniques, including forward and backward chaining, state-space search, and various search strategies such as A*, greedy search, and minimax. Each section provides examples and properties of the algorithms, emphasizing their applications and performance metrics.

CS44112: Artificial Intelligence

UNIT I: Intelligent Agents
UNIT II: Problem-solving through Search
UNIT III: Knowledge Representation and Reasoning
UNIT IV: Planning
UNIT V: Representing and Reasoning with Uncertain Knowledge
UNIT VI: Decision-Making
UNIT VII: Machine Learning and Knowledge Acquisition

UNIT II: PROBLEM-SOLVING THROUGH SEARCH
ROADMAP

❑ Forward and backward chaining,
❑ State-space search,
❑ Blind search (uninformed search),
❑ Heuristic search,
❑ Hill-Climbing,
❑ Problem-reduction, greedy, A*, AO*, minimax,
❑ Constraint propagation

Forward and Backward Chaining in AI
❑ What is an inference engine? The inference engine is the component of an artificial
intelligence system that applies logical rules to the knowledge base to deduce new
information from known facts.
❑ Forward chaining: forward chaining begins with the basic facts in the knowledge
base and moves forward, applying inference rules to derive further facts until the
goal is reached.
❑ Starting from the known facts, the forward-chaining algorithm adds the conclusion of
each rule whose premises are satisfied to the set of known facts. This process is
repeated until the problem is solved.
❑ Backward chaining: a backward-chaining algorithm starts with the goal and works
backward, chaining through rules whose conclusions support the goal until it reaches
known facts.

Forward Chaining in AI
Properties:
• It takes a bottom-up strategy, working from the facts up to the goal.
• It makes decisions based on the data/information currently available,
beginning at the initial state and working toward the goal state.
• Because the forward-chaining strategy uses the available data to reach the
goal, it is also referred to as data-driven.

Forward Chaining in AI: Example
“As per the law, it is a crime for an American to sell weapons to hostile nations.
Country A, an enemy of America, has some missiles, and all the missiles were
sold to it by Robert, who is an American citizen.”
• Prove that “Robert is criminal."

Forward Chaining in AI: Example
“As per the law, it is a crime for an American to sell weapons to hostile nations. Country A, an
enemy of America, has some missiles, and all the missiles were sold to it by Robert, who is an
American citizen”
FACTS CONVERSION
➢ It is a crime for an American to sell weapons to hostile nations. (Let p, q, and r be
variables.)
American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) ...(1)
➢ Country A has some missiles: ∃p Owns(A, p) ∧ Missile(p). This can be written as two definite
clauses by Existential Instantiation, introducing a new constant T1:
Owns(A, T1) ......(2)
Missile(T1) .......(3)

Forward Chaining in AI: Example
➢ All of the missiles were sold to country A by Robert.
∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)
➢ Missiles are weapons.
Missile(p) → Weapon(p) .......(5)
➢ An enemy of America is known as hostile.
Enemy(p, America) → Hostile(p) ........(6)
➢ Country A is an enemy of America.
Enemy(A, America) .........(7)
➢ Robert is American.
American(Robert) ..........(8)

Forward Chaining in AI: Proof
STEP 1:
➢ Start with the known facts, choosing the sentences that have no implications:
American(Robert), Enemy(A, America), Owns(A, T1), and Missile(T1).

STEP 2:
➢ Add the facts whose rule premises are now satisfied: Sells(Robert, T1, A) from rule (4), Weapon(T1) from rule (5), and Hostile(A) from rule (6).

Forward Chaining in AI: Proof
STEP 3:
➢ Rule (1) now fires with the substitution {p/Robert, q/T1, r/A}, since American(Robert), Weapon(T1), Sells(Robert, T1, A), and Hostile(A) are all known facts, deriving Criminal(Robert).
Hence it is proved that Robert is a criminal, using the forward-chaining approach.

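The chain of rule firings above can be condensed into a small sketch. This is a minimal propositional-style forward chainer, an assumption for illustration: the rules are written as ground instances (with p = Robert, q = T1, r = A already substituted), whereas a full first-order version would also perform unification.

```python
# A minimal forward-chaining sketch for the "Robert is a criminal" KB.
# Rules are (premises, conclusion) pairs over ground facts.

def forward_chain(facts, rules):
    """Repeatedly fire every rule whose premises are all known,
    adding its conclusion, until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Ground instances of rules (1), (4), (5), (6).
rules = [
    ({"Missile(T1)", "Owns(A,T1)"}, "Sells(Robert,T1,A)"),          # rule (4)
    ({"Missile(T1)"}, "Weapon(T1)"),                                 # rule (5)
    ({"Enemy(A,America)"}, "Hostile(A)"),                            # rule (6)
    ({"American(Robert)", "Weapon(T1)",
      "Sells(Robert,T1,A)", "Hostile(A)"}, "Criminal(Robert)"),      # rule (1)
]
facts = {"American(Robert)", "Enemy(A,America)", "Owns(A,T1)", "Missile(T1)"}

derived = forward_chain(facts, rules)
print("Criminal(Robert)" in derived)  # True
```

The fix-point loop mirrors the slide's STEP 1-3: the data-driven pass derives Sells, Weapon, and Hostile first, and Criminal(Robert) on the same sweep once its premises are in.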
Backward Chaining in AI
Properties:
• It is referred to as a top-down strategy.
• Backward chaining divides the goal into one or more sub-goals in order to
establish the facts. Since the set of goals determines which rules are selected and
applied, it is known as a goal-driven method.
• Applications of the backward-chaining algorithm include automated theorem
provers, inference engines, proof assistants, and game theory.
• For the most part, the backward-chaining approach employs a depth-first
search (DFS) strategy to find a proof.

Backward Chaining in AI: Example
➢ In backward chaining, we start with our goal predicate, Criminal(Robert),
and then work back through the rules.
Step 1:
▪ In the first step, we take the goal fact, infer from it the facts it depends on, and at
last prove those facts true. Our goal fact is "Robert is a criminal," so Criminal(Robert)
is its predicate.
Step 2:
• We infer other facts from the goal fact that satisfy the rules. As Rule (1) shows, the
goal predicate Criminal(Robert) is present with the substitution {Robert/p}. Here
American(Robert) is a known fact, so it is proved.

Backward Chaining in AI: Example
Step 3:
▪ We extract the further fact Missile(q), which is inferred from Weapon(q) via Rule (5).
Weapon(q) is also true with the substitution of the constant T1 for q.

Backward Chaining in AI: Example
Step 4:
▪ We infer the facts Missile(T1) and Owns(A, T1) from Sells(Robert, T1, r), which satisfies
Rule (4) with the substitution of A in place of r. So these two statements are proved.

Backward Chaining in AI: Example
Step 5:
▪ We infer the fact Enemy(A, America) from Hostile(A), which satisfies Rule (6). Hence all
the statements are proved true using backward chaining.

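The goal-driven direction can be sketched just as compactly. As with the forward-chaining sketch, the rules below are assumed ground instances of rules (1) and (4)-(6); this KB has no cyclic rules, so no loop check is needed here.

```python
# A minimal backward-chaining sketch for the same "Robert is a criminal" KB.

rules = [
    ({"Missile(T1)", "Owns(A,T1)"}, "Sells(Robert,T1,A)"),          # rule (4)
    ({"Missile(T1)"}, "Weapon(T1)"),                                 # rule (5)
    ({"Enemy(A,America)"}, "Hostile(A)"),                            # rule (6)
    ({"American(Robert)", "Weapon(T1)",
      "Sells(Robert,T1,A)", "Hostile(A)"}, "Criminal(Robert)"),      # rule (1)
]
facts = {"American(Robert)", "Enemy(A,America)", "Owns(A,T1)", "Missile(T1)"}

def backward_chain(goal):
    """Prove goal depth-first: either it is a known fact, or some rule
    concludes it and every premise of that rule can itself be proved."""
    if goal in facts:
        return True
    return any(conclusion == goal and all(backward_chain(p) for p in premises)
               for premises, conclusion in rules)

print(backward_chain("Criminal(Robert)"))  # True
```

Note the recursion is exactly the Step 1-5 decomposition above: each goal is split into the sub-goals (premises) of the rule that concludes it.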
Forward vs Backward Chaining in AI

[Table: comparison of forward chaining (data-driven, bottom-up) and backward chaining (goal-driven, top-down, depth-first), shown as a figure in the original slides]
UNIT II: PROBLEM-SOLVING THROUGH SEARCH
ROADMAP

❑ Forward and backward chaining,


❑ State-space search,
❑ Blind search (uninformed search),
❑ Heuristic,
❑ Hill-Climbing
❑ Problem-reduction, Greedy, A*, AO*, minimax,
❑ Constraint propagation

State-space Search

• In artificial intelligence and computer science, state-space search is a popular technique for
solving problems by systematically exploring the problem's possible states.
• A state-space search method works through the state space from the starting state to the
desired state: it generates and investigates successor states of the current state until a
solution is found.
• The size of the state space has an enormous effect on how efficient a search algorithm is.
To search the state space effectively, it is crucial to select a suitable representation and
search strategy.
• The A* algorithm is the best-known state-space search algorithm. Other popular state-space
search techniques include genetic algorithms, simulated annealing, hill climbing,
breadth-first search (BFS), and depth-first search (DFS).

State-space Search: 8-puzzle problem Example

[Figures: example initial and goal configurations of the 8-puzzle, and the state-space tree generated by sliding tiles into the blank square]
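The 8-puzzle state space can be sketched with a successor function (each legal move of the blank yields one successor state) plus a search over it. The concrete start state below is an assumption for illustration, chosen one move away from the goal.

```python
from collections import deque

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 marks the blank square

def successors(state):
    """Generate every state reachable by sliding one tile into the blank."""
    s = list(state)
    i = s.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            t = s[:]
            t[i], t[j] = t[j], t[i]
            yield tuple(t)

def bfs_moves(start):
    """Breadth-first search of the state space; returns the move count."""
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        state, d = frontier.popleft()
        if state == GOAL:
            return d
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return -1   # unreachable (half of all permutations are unsolvable)

print(bfs_moves((1, 2, 3, 4, 5, 6, 7, 0, 8)))  # 1
```

Representing states as tuples makes them hashable, so visited states can be kept in a set — essential, since the 8-puzzle state space has 9!/2 = 181,440 reachable states.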
TYPES OF SEARCH ALGORITHM

[Figure: taxonomy of search algorithms — uninformed/blind (BFS, DFS, uniform-cost) versus informed/heuristic (greedy best-first, A*, AO*)]
Uninformed Search

• Beyond what is specified in the problem statement, these search algorithms have no
other information about the goal node. Such a search method considers:
a) A problem graph, containing the start node S and the goal node G.
b) A strategy, describing the manner in which the graph will be traversed to get to G.
c) A fringe, which is a data structure used to store all the possible states (nodes)
that can be reached from the current states.
d) A tree, which results from traversing to the goal node.
e) A solution plan, which is the sequence of nodes from S to G.

Depth First Search (DFS)

• This algorithm starts at the root node (selecting some arbitrary node as the root node in
the case of a graph) and explores as far as possible along each branch before
backtracking.
• It uses a last-in, first-out (LIFO) / stack strategy.
Question. Which solution would DFS find to move from node S to node G if run on
the graph below?
[Figure: graph over nodes S, A, B, C, D, E, F, G used for the DFS question]
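Since the slide's graph figure did not survive extraction, the sketch below assumes an edge list over the same node names purely for illustration; what it demonstrates is the LIFO/stack strategy itself.

```python
def dfs_path(graph, start, goal):
    """Iterative DFS with an explicit stack (LIFO); returns the first
    path found, exploring each branch fully before backtracking."""
    stack = [(start, [start])]
    visited = set()
    while stack:
        node, path = stack.pop()
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # Push in reverse so the first-listed neighbour is expanded first.
        for nbr in reversed(graph[node]):
            if nbr not in visited:
                stack.append((nbr, path + [nbr]))
    return None

# Assumed edges for illustration (the slide's figure was lost).
graph = {"S": ["A", "F"], "A": ["B"], "B": ["C", "E"],
         "C": ["D"], "D": ["G"], "E": ["G"], "F": [], "G": []}
print(dfs_path(graph, "S", "G"))  # ['S', 'A', 'B', 'C', 'D', 'G']
```

On this graph DFS dives down the S-A-B-C-D branch all the way to G, even though a shorter route via E exists — exactly the non-optimality noted on the next slide.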
Depth First Search (DFS) Performance Metric

• Time complexity: equivalent to the number of nodes traversed in DFS:
T = n₁ + n₂ + ⋯ + n_d, which in the worst case is O(bᵈ) for branching factor b.
Where d = the depth of the search tree (the number of levels of the search tree) and
nᵢ = the number of nodes in level i.
• Space complexity: equivalent to how large the fringe can get. For tree search this is
roughly O(b·d), since only the current path and its unexpanded siblings are stored.
• Optimality: DFS is not optimal, meaning the number of steps in reaching the solution,
or the cost spent in reaching it, may be high.
Breadth First Search (BFS)
• It starts at the tree root (or some arbitrary node of a graph, sometimes referred to as a
'search key'), and explores all of the neighbour nodes at the present depth prior to
moving on to the nodes at the next depth level.
• It uses a first-in, first-out (FIFO) / queue strategy.
• BFS traverses the tree "shallowest node first": it always picks the shallower
branch until it reaches the solution.
Question. Which solution would BFS find to move from node S to node G if run on
the graph below?

[Figure: the same graph over nodes S, A, B, C, D, E, F, G as in the DFS question]
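With the same assumed edge list as in the DFS sketch, the FIFO/queue strategy can be shown side by side; the contrast with DFS on the identical graph is the point.

```python
from collections import deque

def bfs_path(graph, start, goal):
    """BFS with a FIFO queue; returns a shallowest path from start to goal."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, path + [nbr]))
    return None

# Same assumed edges as the DFS sketch (the slide's figure was lost).
graph = {"S": ["A", "F"], "A": ["B"], "B": ["C", "E"],
         "C": ["D"], "D": ["G"], "E": ["G"], "F": [], "G": []}
print(bfs_path(graph, "S", "G"))  # ['S', 'A', 'B', 'E', 'G']
```

Where DFS returned the five-edge path through C and D, BFS finds the four-edge path through E, illustrating "shallowest node first" and why BFS is optimal when all edges cost the same.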
BFS Performance Metric

• Time complexity: equivalent to the number of nodes traversed in BFS until the
shallowest solution: T = n₁ + n₂ + ⋯ + n_s, which in the worst case is O(bˢ) for
branching factor b.
Where s = the depth of the shallowest solution and
nᵢ = the number of nodes in level i.
• Space complexity: equivalent to how large the fringe can get; in the worst case the
fringe holds an entire level, i.e. O(bˢ).
• Optimality: BFS is optimal as long as the costs of all edges are equal.
Uniform Cost Search (UCS)

• The goal is to find a path for which the cumulative sum of edge costs is least.
• UCS is complete only if the state space is finite and there is no loop of zero weight.
• UCS is optimal only if no edge has a negative cost.
Question. Which solution would UCS find to move from node S to node G if run on
the graph below?

[Figure: weighted graph over nodes S, A, B, C, D, E, F, G; edge weights shown include 1, 2, 3, and 6]
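Since the weighted figure was lost in extraction, the sketch below assumes illustrative edge weights on the same nodes. It shows the defining behaviour of UCS: the frontier node with the least cumulative path cost is expanded first, so UCS can prefer a cheap long path over an expensive short one.

```python
import heapq

def ucs(graph, start, goal):
    """Uniform-cost search: a priority queue ordered by path cost g."""
    frontier = [(0, start, [start])]
    best = {}                      # cheapest cost at which each node was expanded
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in best and best[node] <= cost:
            continue               # already expanded more cheaply
        best[node] = cost
        for nbr, w in graph[node]:
            heapq.heappush(frontier, (cost + w, nbr, path + [nbr]))
    return None

# Assumed weights for illustration (the slide's figure was lost).
graph = {"S": [("A", 1), ("F", 2)], "A": [("B", 2)],
         "B": [("C", 3), ("E", 1)], "C": [("D", 2)],
         "D": [("G", 1)], "E": [("G", 6)], "F": [], "G": []}
print(ucs(graph, "S", "G"))  # (9, ['S', 'A', 'B', 'C', 'D', 'G'])
```

Here the four-edge route S-A-B-E-G costs 10, so UCS returns the five-edge route costing 9 — the opposite choice to BFS on an unweighted version of the same graph.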
Informed Search

• These algorithms have information about the goal state, which helps in more efficient
searching. This information is obtained by something called a heuristic.
• Heuristic search: a heuristic in an informed search is a function that estimates the
distance between a state and the goal state. Manhattan distance, Euclidean
distance, etc. are a few examples. (The closer to the goal, the shorter the distance.)

1. Greedy Search (best first search algorithm)

• In a greedy search, the node that appears closest to the goal node is expanded. The heuristic
h(x) is used to estimate this "closeness".
• Heuristic: the heuristic h is defined as:
h(x) = estimate of node x's distance from the goal node.
• The lower the value of h(x), the closer the node is to the goal.
• Approach: expand the node that is nearest to the goal state, i.e. the node with the
smallest h value.

1. Greedy Search (best first search algorithm)

[Figure: search graph from S (start) to G (goal/stop) with a heuristic value at each node — per the A* example on the same graph: h(S)=7, h(A)=20, h(B)=15, h(C)=10, h(E)=25, h(F)=40, h(G)=0]
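A minimal greedy best-first sketch over the running S-to-G example. The h-values are the ones listed in the A* table for this graph; the edge list is an assumption reconstructed from the paths that table explores.

```python
def greedy_search(graph, h, start, goal):
    """Greedy best-first: always step to the unvisited neighbour
    with the smallest heuristic value h."""
    path, node, visited = [start], start, {start}
    while node != goal:
        candidates = [n for n in graph[node] if n not in visited]
        if not candidates:
            return None                    # dead end: greedy is not complete
        node = min(candidates, key=h.get)  # smallest h wins
        visited.add(node)
        path.append(node)
    return path

h = {"S": 7, "A": 20, "B": 15, "C": 10, "E": 25, "F": 40, "G": 0}
graph = {"S": ["A", "B", "F"], "B": ["C", "E"], "C": ["G"],
         "A": [], "E": [], "F": [], "G": []}
print(greedy_search(graph, h, "S", "G"))  # ['S', 'B', 'C', 'G']
```

From S the smallest h among {A: 20, B: 15, F: 40} is B, then C, then G — greedy commits to whichever neighbour looks closest, ignoring accumulated path cost entirely.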
1. Greedy Search (best first search algorithm)

Advantages:
• Simple and Easy to Implement
• Fast and Efficient
• Flexible
Disadvantages:
• Inaccurate Results
• Local Optima
• Lack of Completeness

2. A* Tree Search
• The advantages of both greedy and uniform-cost search are combined in A* Tree
Search, also referred to as A* Search.
• The cost in UCS, represented by g(x), and the cost in the greedy search, represented by
h(x), added together, serve as the search's heuristic.
• The symbol f(x) represents the total cost.

f(x) = g(x) + h(x)


• Here, h(x) is called the forward cost and is an estimate of the distance of the current
node from the goal node.
• And, g(x) is called the backward cost and is the cumulative cost of a node from the
root node.
• A* provides an optimal solution when the heuristic h(x) is admissible, i.e. it never
overestimates the true cost to the goal; with an inadmissible heuristic, optimality is
not guaranteed.
2. A* Tree Search

[Figure: the same weighted graph from S (start) to G (goal), with edge costs and heuristic values]

Path           h(x)   g(x)     f(x)
S              7      0        7
S→A            20     3        23
S→F            40     2        42
S→B            15     2        17
S→B→E          25     2+2      29
S→B→C          10     2+2      14
S→B→C→G        0      2+2+1    5
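The table above can be reproduced by a short A* sketch. The edge costs and heuristic values below are taken directly from the rows of the table; ordering the frontier by f(x) = g(x) + h(x) pops the paths in exactly that order.

```python
import heapq

def astar(graph, h, start, goal):
    """A* tree search: priority queue ordered by f(x) = g(x) + h(x)."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, w in graph[node]:
            heapq.heappush(frontier,
                           (g + w + h[nbr], g + w, nbr, path + [nbr]))
    return None

h = {"S": 7, "A": 20, "B": 15, "C": 10, "E": 25, "F": 40, "G": 0}
graph = {"S": [("A", 3), ("F", 2), ("B", 2)],
         "B": [("E", 2), ("C", 2)], "C": [("G", 1)],
         "A": [], "E": [], "F": [], "G": []}
print(astar(graph, h, "S", "G"))  # (5, ['S', 'B', 'C', 'G'])
```

After expanding S, the frontier holds f-values 23 (A), 42 (F), and 17 (B); B is popped, then C (f = 14), and finally G with f = 5 — matching the last row of the table.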
3. AO* (AND/OR) Search
• Breaks a given problem down into smaller sub-problems to explore a solution path efficiently.
• This algorithm is often used for the common path-finding problem in applications such
as video games, but was originally designed as a general graph traversal algorithm.
• Provides a best/optimal solution.

3. AO* Search example

[Figure: AND/OR graph example with labelled edges, shown as a figure in the original slides]
A* vs AO* Search Algorithm

A*                                   AO*
Best-first search                    Best-first search
Informed search                      Informed search
Uses heuristic values                Uses heuristic values
Gives the optimal solution           Doesn't guarantee the optimal solution
Explores all paths                   Doesn't explore all possible paths once it gets a solution
More memory used                     Less memory used
MiniMax Search Algorithm
• This is a recursive (backtracking) algorithm.
• In this algorithm two players play the game; one is called MAX and the other is
called MIN.
• The two players are adversaries: each plays so that the opponent gets the minimum
benefit while they themselves get the maximum benefit.
• The minimax algorithm performs a depth-first search for the exploration of
the complete game tree.
• The minimax algorithm proceeds all the way down to the terminal nodes of the tree,
then backs the values up the tree as the recursion unwinds.

MiniMax Search Algorithm
Step 1: A Max

Mini
B C
Max
D E F G

H I J K L M N O Terminal Node

3 2 1 -2 2 0 5 3

Terminal Values

MiniMax Search Algorithm
Step 2: A Max

Mini
B C
Max
3 D 1 E 2 F 5 G

H I J K L M N O Terminal Node

3 2 1 -2 2 0 5 3

Terminal Values

MiniMax Search Algorithm
Step 3: A Max

1 Mini
B 2 C
Max
3 D 1 E 2 F 5 G

H I J K L M N O Terminal Node

3 2 1 -2 2 0 5 3

Terminal Values

MiniMax Search Algorithm
2
Step 4: A Max

1 Mini
B 2 C
Max
3 D 1 E 2 F 5 G

H I J K L M N O Terminal Node

3 2 1 -2 2 0 5 3

Terminal Values

MiniMax Search Algorithm Performance
• Optimality: the minimax algorithm is optimal if both opponents play optimally.
• Time complexity: as it performs a DFS of the game tree, the time complexity of the
minimax algorithm is O(bᵈ), where b is the branching factor of the game tree and d is
the maximum depth of the tree.
• Space complexity: the space complexity of the minimax algorithm is similar to that of
DFS, which is O(b·d).
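The four steps above can be condensed into a short recursive sketch. The nested-list encoding of the tree is an assumption for compactness: A has children B and C, each min node has two max children, and the leaves H…O carry the terminal values 3, 2, 1, -2, 2, 0, 5, 3 from the slides.

```python
def minimax(node, is_max):
    """Recursive minimax over a game tree given as nested lists;
    integer leaves are terminal values."""
    if isinstance(node, int):
        return node
    values = [minimax(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

# A -> (B, C); B -> (D, E); C -> (F, G); leaves are H..O.
tree = [[[3, 2], [1, -2]], [[2, 0], [5, 3]]]
print(minimax(tree, True))  # 2
```

Backing up the values reproduces the slides step by step: D=3, E=1, F=2, G=5 at the max level, B=1, C=2 at the min level, and finally A = max(1, 2) = 2.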
Alpha-beta Pruning Search Algorithm
• A modified version of the minimax algorithm.
• It is an optimization technique for the minimax algorithm.
• Alpha is the best value that the maximizer can currently guarantee at that level or
above.
• Beta is the best value that the minimizer can currently guarantee at that level or
below.
Alpha-beta Pruning Search Algorithm
Alpha (𝛼)
Step 1: A Max

Beta (β) Mini


B β C
Max
𝛼 D 𝛼 E 𝛼 F 𝛼 G

H I J K L M N O Terminal Node

3 2 1 -2 2 0 5 3

Terminal Values

Alpha-beta Pruning Search Algorithm
Alpha (𝛼 ≥ 3)
Step 2: A Max

β≤3 Mini
B β C
Max
𝛼 ≥3 D 𝛼 ≥1 E 𝛼 F 𝛼 G

H I J K L M N O Terminal Node

3 2 1 -2 2 0 5 3

Terminal Values

Alpha-beta Pruning Search Algorithm
Alpha (𝛼 ≥ 3)
Step 3: A Max
×
β≤3 Mini
B β≤2 C
Max
𝛼 ≥3 D 𝛼 ≥1 E 𝛼 ≥2 F 𝛼≥5 G Prune

H I J K L M N O Terminal Node

3 2 1 -2 2 0 5 3

Terminal Values

Example:

[Figure: an additional worked alpha-beta pruning example, shown over four slides in the original]
Alpha-Beta Pruning Algorithm Performance
• Time complexity: as it performs a DFS of the game tree, the best-case
time complexity of alpha-beta pruning is O(b^(d/2)), where b is the branching
factor of the game tree and d is the maximum depth of the tree.
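Adding the alpha and beta bounds to the minimax sketch gives the pruned version; on the same assumed nested-list tree it returns the same root value, 2, while skipping the branch cut off in the step diagrams.

```python
def alphabeta(node, is_max, alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning: same root value as plain minimax,
    but branches that cannot affect the result are skipped."""
    if isinstance(node, int):
        return node
    if is_max:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break        # beta cut-off: Min will never allow this branch
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break            # alpha cut-off: Max will never allow this branch
    return value

tree = [[[3, 2], [1, -2]], [[2, 0], [5, 3]]]
print(alphabeta(tree, True))  # 2
```

The cut-off tests `alpha >= beta` are exactly the prune marks in the diagrams: once node G's first leaf gives 5 against beta ≤ 2 at C, its remaining leaf need not be examined.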
Hill-Climbing Search
• Hill-climbing search is a local search algorithm.
• The purpose of hill-climbing search is to climb a hill and reach the topmost peak/
point of that hill.
• It is based on the heuristic search technique: the person climbing the
hill estimates the direction that will lead to the highest peak.

Q.1. Heuristic vs non-heuristic search techniques?

STATE-SPACE landscape of hill climbing algorithm
• To understand the concept of hill climbing algorithm, consider the below landscape
representing the goal state/peak and the current state of the climber. The topographical
regions shown in the figure can be defined as:

• Global maximum: the highest point on the landscape, which is the goal state.
• Local maximum: a peak that is higher than its neighbouring states but lower than the
global maximum.
• Flat local maximum: a flat area of the hill where neighbouring states have the same
value, with no uphill or downhill; a saturated point of the hill.
• Shoulder: a flat area from which uphill progress is still possible (an edge of the
plateau leading upward).
• Current state: the current position of the person.

Types of Hill Climbing search algorithm

• Simple hill climbing is the simplest technique for climbing a hill. The task is to reach the
highest peak of the mountain. Here, the movement of the climber depends on his current
step: if the next step is better than the current one, he moves there, else he remains in the
same state. This search looks only at the current state and its next step.
• Steepest-ascent hill climbing is different from simple hill climbing. Unlike simple hill
climbing, it considers all the successor nodes, compares them, and chooses the node
closest to the solution. Steepest-ascent hill climbing is similar to best-first search
because it examines every successor instead of just one.
• Stochastic hill climbing does not examine all the neighbours. It selects one successor at
random and decides whether to move to it or to examine another.
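The simple variant above can be sketched in a few lines. The 1-D landscape below is an assumption for illustration (a single peak at x = 3); on a multi-peak landscape the same loop would stop at whichever local maximum it reaches first.

```python
def hill_climb(start, value, neighbours):
    """Simple hill climbing: repeatedly move to a better neighbour;
    stop when no neighbour improves on the current state."""
    current = start
    while True:
        better = [n for n in neighbours(current) if value(n) > value(current)]
        if not better:
            return current      # a peak -- possibly only a local maximum
        current = better[0]

# Toy 1-D landscape with its (global) peak at x = 3.
value = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(hill_climb(0, value, neighbours))  # 3
```

Swapping `better[0]` for `max(better, key=value)` turns this into steepest-ascent hill climbing, and picking a random element of `better` gives the stochastic variant.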

Advantages & limitations of hill climbing algorithm

• The hill climbing algorithm is a fast and furious approach. It finds a solution
state rapidly because it is quite easy to improve a bad state.
• But this search has the following limitations:
1. Local maxima: the search may stop at a peak that is not the goal peak, because
another, higher peak exists elsewhere.
2. Plateau: on a flat area it becomes difficult for the climber to decide in which
direction to move to reach the goal point; the climber may get lost in the flat area.
3. Ridges: a challenging landscape where the climber repeatedly finds local maxima
of the same height. It becomes difficult to navigate to the right point, and the
search can get stuck there.

UNIT II: PROBLEM-SOLVING THROUGH SEARCH
ROADMAP

❑ Forward and backward chaining,


❑ State-space search,
❑ Blind search (uninformed search),
❑ Heuristic,
❑ Hill-Climbing
❑ Problem-reduction, Greedy, A*, AO*, minimax,
❑ Constraint propagation

Constraint Satisfaction Problem (CSP)
• Solved with an intelligent backtracking algorithm.
• Graph/map coloring is the classic example.
Steps:
1. Define variables, which take values from their domains
2. Define domains
3. Define constraints

Constraint Satisfaction Problem (CSP) example
MAP COLORING
Steps:
1. Define variables = {WA, NT, SA, QL, NSW, V}
2. Define domains = {Red, Green, Blue}
3. Define constraints → adjacent regions must have
different colors
Example: WA ≠ NT, NT ≠ SA

Constraint Satisfaction Problem (CSP) example
MAP COLORING
Steps:
1. Define variables = {WA, NT, SA, QL, NSW, V}
2. Define domains = {Red, Green, Blue}
3. Define constraints → adjacent regions must have
different colors
Example: WA ≠ NT, WA ≠ SA
Solution:

Constraint Satisfaction Problem (CSP) example

Step            WA     NT     SA     QL     NSW    V
Initial domain  R,G,B  R,G,B  R,G,B  R,G,B  R,G,B  R,G,B
WA=R            R      G,B    G,B    R,G,B  R,G,B  R,G,B
NT=G            R      G      B      R,B    R,G,B  R,G,B
SA=B            R      G      B      R      R,G    R,G
QL=R            R      G      B      R      G      R
NSW=G           R      G      B      R      G      R
V=R             R      G      B      R      G      R
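The trace in the table corresponds to plain backtracking in the variable order WA, NT, SA, QL, NSW, V. A minimal sketch, assuming the standard adjacency of the Australian regions used in this example:

```python
def backtrack(assignment, variables, domains, neighbours):
    """Assign colours one variable at a time; undo and retry on conflict."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for colour in domains[var]:
        if all(assignment.get(n) != colour for n in neighbours[var]):
            assignment[var] = colour
            result = backtrack(assignment, variables, domains, neighbours)
            if result:
                return result
            del assignment[var]        # backtrack
    return None

variables = ["WA", "NT", "SA", "QL", "NSW", "V"]
domains = {v: ["R", "G", "B"] for v in variables}
neighbours = {                          # assumed map adjacency
    "WA": ["NT", "SA"],
    "NT": ["WA", "SA", "QL"],
    "SA": ["WA", "NT", "QL", "NSW", "V"],
    "QL": ["NT", "SA", "NSW"],
    "NSW": ["QL", "SA", "V"],
    "V": ["SA", "NSW"],
}
solution = backtrack({}, variables, domains, neighbours)
print(solution)
```

Because domains are tried in the order R, G, B, this reproduces the table's final row exactly: WA=R, NT=G, SA=B, QL=R, NSW=G, V=R — and on this instance no backtracking step is ever needed.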

END OF UNIT 2