
Artificial Intelligence

R. K. Dash

Textbook: Elaine Rich, Kevin Knight, and Shivashankar B. Nair, Artificial Intelligence, McGraw Hill

Reference: Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach
Search Space Definitions
• Problem formulation
• Describe a general problem as a search problem
• Solution
• Sequence of actions that transitions the world from the initial
state to a goal state
• Solution cost (additive)
• Sum of the cost of operators
• Alternative: sum of distances, number of steps, etc.
• Search
• Process of looking for a solution
• Search algorithm takes problem as input and returns solution
• We are searching through a space of possible states
• Execution
• Process of executing sequence of actions (solution)
Level of the Model
• To test psychological theories of human performance
• To enable computers to understand human reasoning
• To enable people to understand computer reasoning
• To exploit what knowledge we can glean from people

The physical symbol system hypothesis

Problem, Problem spaces, and Search
• Tic-Tac-Toe
  • 19,683 (3^9) board states
• Chess
  • ~10^120 board states
• To solve a problem:
  • Define the problem precisely
  • Analyze the problem
  • Represent the necessary knowledge
  • Choose the best problem-solving technique
• State space representation
  • Converts a given situation into a desired solution using a set of permissible operations
  • Solving a problem then combines known techniques with search: the general technique of exploring the space to find some path from the current state to a goal state
Problem Formulation

A search problem is defined by:

1. State space – the set of all possible configurations
2. Initial state – the state from which the problem-solving process starts
3. Operators – a set of rules describing the available actions
4. Goal test – a test that determines whether a state is acceptable as a solution to the problem
5. Solution cost (e.g., path cost)
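These components can be captured directly in code. The sketch below is illustrative (the class and function names are not from the text); operators are modelled as a function from a state to its successor states, and the solution cost is additive.

```python
# A minimal, generic encoding of the components of a search problem.
class SearchProblem:
    def __init__(self, initial_state, operators, goal_test, step_cost=None):
        self.initial_state = initial_state
        self.operators = operators            # state -> iterable of successor states
        self.goal_test = goal_test            # state -> bool
        # additive solution cost: defaults to 1 per step (number of steps)
        self.step_cost = step_cost or (lambda s, t: 1)

def path_cost(problem, path):
    """Solution cost of a path of states: the sum of the operator costs."""
    return sum(problem.step_cost(a, b) for a, b in zip(path, path[1:]))
```

With unit step costs, the path cost is simply the number of steps, matching the "number of steps" alternative noted above.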
Production system
A production system consists of:

• A set of rules, each consisting of a left side that determines the applicability of the rule and a right side that describes the operation to be performed if the rule is applied

• One or more knowledge/databases that contain whatever information is appropriate for the particular task

• A control strategy that specifies the order in which the rules will be compared to the database and a way of resolving the conflicts that arise when several rules match at once

• A rule applier

Control strategy
- It must cause motion
- It must be systematic
Example Problems – Water Jug
States: the contents (x, y) of the 4-gallon jug and the 3-gallon jug

Initial state: (0, 0)

Operators: fill a jug, empty a jug, or pour water from one jug into the other (the numbered production rules for these operators appear in the text)

Goal: (2, n) for any n

Path cost: 1 per operation

A sample solution trace:

Gallons in 4-gallon jug   Gallons in 3-gallon jug   Rule applied
0                         0                         -
0                         3                         2
3                         0                         9
3                         3                         2
4                         2                         7
0                         2                         5 or 12
2                         0                         9 or 11

Production rules for the water jug problem (table not reproduced)
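The trace above can be reproduced mechanically. Below is a small breadth-first solver for the (4, 3) water jug problem — a sketch using the standard fill/empty/pour operators; the function names are illustrative, not from the text.

```python
# Breadth-first solver for the (4, 3) water jug problem.
from collections import deque

def successors(state):
    """All states reachable from (x, y) by one fill, empty, or pour."""
    x, y = state
    return {
        (4, y), (x, 3),                          # fill either jug
        (0, y), (x, 0),                          # empty either jug
        (x - min(x, 3 - y), y + min(x, 3 - y)),  # pour 4g jug into 3g jug
        (x + min(y, 4 - x), y - min(y, 4 - x)),  # pour 3g jug into 4g jug
    } - {state}

def solve(start=(0, 0)):
    """Return a shortest sequence of states from start to a goal (2, n)."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1][0] == 2:          # goal test: 2 gallons in the 4-gallon jug
            return path
        for nxt in successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None
```

Because the search is breadth-first, the returned path is one of the shortest solutions, such as the 6-step trace in the table.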
BFS vs DFS

BFS:
    L ← make_queue(start)
    while L not empty loop
        n ← L.remove_front()
        if goal(n) return true
        S ← successors(n)
        L.insert(S)
    return false

DFS:
    L ← make_stack(start)
    while L not empty loop
        n ← L.remove_front()
        if goal(n) return true
        S ← successors(n)
        L.insert(S)
    return false
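The two procedures differ only in the frontier discipline — a FIFO queue gives BFS, a LIFO stack gives DFS — so both fit one skeleton. A sketch (the `frontier` parameter name is ours):

```python
# BFS and DFS share one loop; only the removal order from L differs.
from collections import deque

def search(start, goal, successors, frontier="queue"):
    L = deque([start])
    visited = {start}                     # avoid revisiting states
    while L:
        n = L.popleft() if frontier == "queue" else L.pop()  # FIFO vs LIFO
        if goal(n):
            return True
        for s in successors(n):
            if s not in visited:
                visited.add(s)
                L.append(s)
    return False
```

Calling `search(..., frontier="queue")` explores level by level (BFS); `frontier="stack"` dives down one path first (DFS).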
DFS vs BFS

DFS:
1. Less memory, since only the nodes on the current path are stored
2. It may find a solution without examining much of the search space at all
3. It may follow a single, unfruitful path for a long time before the path terminates in a state that has no successor
4. It cannot find multiple solutions
5. It may not find an optimal solution

BFS:
1. More memory, since all of the tree generated so far must be stored
2. All nodes on level n must be examined before any node on level n+1 can be examined
3. It will not get trapped exploring a blind alley
4. It can find multiple solutions
5. If there is any solution, it is guaranteed to find it; if there are multiple solutions, a minimal solution will be found

            DFS      BFS
Complete    N        Y
Optimal     N        N
Heuristic   N        N
Time        b^m      b^(d+1)
Space       b·m      b^(d+1)
Heuristic search

Problem characteristics
• Is the problem decomposable?
• Can solution steps be ignored or undone?
• Is the universe predictable?
• Is a good solution absolute or relative?
• Is the solution a state or a path?
• Is a large amount of knowledge absolutely required to solve the problem, or is knowledge important only to constrain the search?
• Does the task require interaction with a person?
Production system characteristics
• Monotonic production system
  • The application of a rule never prevents the later application of another rule that could also have been applied at the time the first rule was selected
• Nonmonotonic production system
  • A system in which this property does not hold
• Partially commutative production system
  • If the application of a particular sequence of rules transforms state x into state y, then any allowable permutation of those rules also transforms state x into state y
• Commutative production system
  • Both monotonic and partially commutative
Heuristic search

Generate-and-test:
- Exhaustive search of the problem space
- A depth-first search procedure with backtracking
- Also known as the British Museum algorithm when solutions are generated randomly

Loop:
1. Generate a possible solution
2. Is it a correct solution? If no, go to step 1; if yes, stop
Hill climbing search – Simple
if (initial_state == goal_state)
    return initial_state
else
    current_state = initial_state
repeat until (a solution is found or no more new operators are left to be applied in the current state)
    select an operator that has not yet been applied to current_state
    apply the selected operator to current_state to produce new_state
    if (new_state == goal_state)
        return new_state
    else if (h(new_state) < h(current_state))
        current_state = new_state
    else
        continue
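As a sketch, the pseudocode above translates to the following (names are illustrative; `h` is the heuristic, to be minimized):

```python
# Simple hill climbing: accept the FIRST operator whose result improves
# (lowers) the heuristic h, rather than the best one.
def simple_hill_climb(start, successors, h, is_goal):
    current = start
    while not is_goal(current):
        moved = False
        for nxt in successors(current):      # try untried operators in turn
            if is_goal(nxt):
                return nxt
            if h(nxt) < h(current):          # first improving move wins
                current = nxt
                moved = True
                break
        if not moved:                        # no operator improves h:
            return current                   # stuck (possibly a local optimum)
    return current
```

Note the early `break`: unlike steepest ascent, the remaining successors of the old state are never examined once an improving move is found.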
Example – Simple hill climbing
(Figure: a search tree rooted at A with children B, C, D and heuristic values on nodes A–L, shown before and after a simple hill-climbing pass.)
Puzzle of four colored blocks

States: a list of colors for each cell on each face

Initial state: one specific cube configuration

Heuristic: the sum of the number of different colors on each of the four sides

Goal: a configuration whose heuristic value is 16 (each of the four sides shows all four colors)

Rule: pick a block and rotate it 90 degrees in any direction
Hill climbing search – Steepest-Ascent
Instead of moving to the first state that is better than the current state, consider all the moves from the current state and select the best one as the next state.

Example – Steepest-Ascent hill climbing
(Figure: the same search tree as above; steepest ascent evaluates all successors of the current node and moves to the best one.)
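For contrast with simple hill climbing, here is a sketch of the steepest-ascent variant, which evaluates every successor before moving (names illustrative; `h` minimized):

```python
# Steepest-ascent hill climbing: examine ALL successors and move to the
# best one, stopping when no successor improves on the current state.
def steepest_ascent(start, successors, h, is_goal):
    current = start
    while True:
        if is_goal(current):
            return current
        candidates = list(successors(current))
        if not candidates:
            return current                   # dead end
        best = min(candidates, key=h)        # best successor under h
        if h(best) >= h(current):            # no improvement: local optimum
            return current
        current = best
```

The difference from the simple variant is purely in move selection; both can still get stuck at a local maximum, on a plateau, or on a ridge, as described below.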
Limitations/drawbacks – Hill climbing search
• Local maximum
  • A state that is better than all its neighbours but is not better than some other states farther away
  • Remedy: backtrack to an earlier node
• Plateau
  • A flat area of the search space in which a whole set of neighbouring states have the same value
  • Remedy: make a big jump to a new region of the search space
• Ridge
  • An area of the search space that is higher than the surrounding areas and that itself has a slope
  • Remedy: move in several directions at once
Issue with the heuristic function
(Figure: blocks-world stack configurations with their scores under each heuristic, e.g. 4 and -28 for the start state.)

Heuristic 1 (local; an object is either a block or the table):
• Add one for every block that is resting on the correct object (+1)
• Subtract one for every block that is resting on the wrong object (-1)

Heuristic 2 (global):
• For each block that has a correct support structure, add one for every block in that structure (+1, +2, +3, ...)
• For each block that has an incorrect support structure, subtract one for every block in that structure (-1, -2, -3, ...)
Best-first search
Best-first search combines the advantages of the two: like DFS, it allows a solution to be found without all competing branches having to be expanded; like BFS, it does not get trapped on dead-end paths.

• Data structures used:
  • OPEN – a list of nodes that have been generated but not yet examined (implemented as a priority queue)
  • CLOSED – a list of nodes that have already been examined
• OR-graph
  • A graph in which each branch represents an alternative path
Best-first search

Hill climbing: one move is selected and the others are rejected, never to be considered again. It stops if there are no successor states with a better value than the current state.

Best-first search: one move is selected, but the others are kept around so that they can be revisited later if the selected path becomes less promising. The best available state is selected, even if it is worse than the current state.
Best-first search algorithm
OPEN = {initial_state}
until (a goal state is found or OPEN is empty)
    pick the best node N from OPEN
    if N is a goal node
        exit with the traced solution path
    move N from OPEN to CLOSED
    generate the successors of N
    for each successor of N
        if it has not been generated before
            evaluate it, add it to OPEN, and record its parent
        if it has been generated before
            change its parent if this new path is better than the previous one
            update the cost of getting to this node and to its successors
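A sketch of the algorithm with OPEN as a priority queue (Python's `heapq`) keyed on the heuristic; parents are recorded so the solution path can be traced back. Names are illustrative, not from the text.

```python
# Best-first search: always expand the node on OPEN with the lowest h.
import heapq

def best_first(start, successors, h, is_goal):
    open_heap = [(h(start), start)]
    parent = {start: None}                   # doubles as "generated before" set
    closed = set()
    while open_heap:
        _, n = heapq.heappop(open_heap)      # best (lowest-h) node on OPEN
        if is_goal(n):
            path = []                        # trace back through parents
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1]
        closed.add(n)
        for s in successors(n):
            if s not in parent:              # not generated before
                parent[s] = n
                heapq.heappush(open_heap, (h(s), s))
    return None
```

This simplified sketch omits the re-parenting step for previously generated nodes; A* below handles path-cost updates properly.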
Example-1
(Figure: an OR-graph rooted at A [10], with successors B [3], C [5], D [1]; B's successors G [6], H [5]; D's successors E [4], F [6]; E's successors I [2], J [1]; J's successor K [0].)

N     OPEN                        CLOSED                  Parent of N
-     A10                         -                       -
A10   B3, C5, D1                  -                       -
D1    E4, F6, B3, C5              A10                     A
B3    G6, H5, E4, F6, C5          D1, A10                 A
E4    I2, J1, G6, H5, F6, C5      B3, D1, A10             D
J1    K0, I2, G6, H5, F6, C5      E4, B3, D1, A10         E
K0    I2, G6, H5, F6, C5          J1, E4, B3, D1, A10     J

K0 is the goal; the traced path is A–D–E–J–K.
Example-2
(Figure: a graph with initial node A [25] and goal F [0]; A's successors are B [17] and C [15], C's successors are D [10] and E [4], and E's successor is F [0].)

N     OPEN              CLOSED          Parent of N
-     A25               -               -
A25   B17, C15          -               -
C15   D10, E4, B17      A25             A
E4    F0, D10, B17      C15, A25        C
F0    D10, B17          E4, C15, A25    E

Solution: A–C–E–F

(Figure: a second, weighted example graph with initial node A [50], goal F [0], nodes B [27], C [25], E [2], D [1], and edge costs between 3 and 8.)
A* search algorithm
OPEN = [initial_node]
CLOSED = []
repeat
    if OPEN = []
        exit with FAIL
    choose the best node N from OPEN
    if N is a goal node
        exit with the traced solution path
    move N from OPEN to CLOSED
    generate the successors of N
    for each successor S of N
        record N as the parent of S
        evaluate f(S) = g(S) + h′(S)
        if S is new (i.e., on neither OPEN nor CLOSED)
            add it to OPEN
        if S is not new
            compare f(S) on this path to its previous value
            if the previous path is better or the same, keep the previous one and discard S
            otherwise, replace the previous entry by S
                if S is on CLOSED, move it back to OPEN
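A sketch of A* in the same style, with f(n) = g(n) + h'(n). Instead of explicitly moving CLOSED nodes back to OPEN, this version re-pushes improved entries onto the queue and skips stale ones, which has the same effect. Names and the graph encoding (node -> {successor: edge cost}) are ours.

```python
# A*: expand the OPEN node with the lowest f = g + h'.
import heapq

def a_star(start, goal, graph, h):
    open_heap = [(h(start), 0, start)]       # entries are (f, g, node)
    g_best = {start: 0}                      # cheapest known g per node
    parent = {start: None}
    while open_heap:
        f, g, n = heapq.heappop(open_heap)
        if n == goal:
            path = []                        # trace the solution path
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1], g
        if g > g_best.get(n, float('inf')):
            continue                         # stale entry: a better path exists
        for s, cost in graph.get(n, {}).items():
            g2 = g + cost
            if g2 < g_best.get(s, float('inf')):   # better path to s found
                g_best[s] = g2
                parent[s] = n
                heapq.heappush(open_heap, (g2 + h(s), g2, s))
    return None, float('inf')
```

With an admissible h' (one that never overestimates h), this returns an optimal path, as noted in the observations below.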
Example – A*
(Figure: the graph has h′ values A [25], B [17], C [15], D [10], E [4], F [0] (goal) and, as reconstructed from the f-computations below, edge costs A–B 9, A–C 7, C–D 4, C–E 11, D–B 3, D–E 5, E–F 4.)

N      OPEN (f = g + h′)                                       CLOSED
-      A(25)                                                   -
A(25)  C(22 = 7+15), B(26 = 9+17)                              A25
C(22)  D(21 = 11+10), E(22 = 18+4), B(26), A(39 = 14+25)       C22, A25
D(21)  E(20 = 16+4, improved via D), B(26);                    D21, C22, A25
       B(31 = 14+17) discarded, the old path is better
E(20)  F(20 = 20+0), B(26);                                    E20, D21, C22, A25
       D(31 = 21+10) and C(42 = 27+15) discarded
F(20)  goal reached; solution A–C–D–E–F with cost 20
Observations on the roles of g and h′
• Role of g(node)
  • It lets the algorithm choose the next node to expand on the basis of not only how good the node itself looks (the h′ estimate) but also how good the path to the node is (g)
  • g = 0
    • The search always chooses the node that seems closest to the goal
  • To find a path quickly, set the cost of going from a node to its successor to a constant (preferably 1)
  • To find the cheapest path, use the actual cost of going from one node to another
• Role of h′(node)
  • If h′(node) = h(node), A* converges immediately to the goal without search
  • If h′(node) = 0, the search is controlled by g
    • If, in addition, every step costs 1, the search becomes breadth-first
  • If h′ never overestimates h, A* is guaranteed to find an optimal path to the goal
h′ underestimated vs overestimated
(Figure: two A* search trees rooted at A with nodes labelled [g + h′]. Left, h′ underestimates h: B [3+1], C [4+1], D [5+1], E [3+2], F [3+3], G [0+4]. Right, h′ overestimates h: B [3+1], C [4+1], D [5+1], E [2+2], F [1+3], H [0+2]. Underestimation wastes some effort but preserves optimality; overestimation can return a suboptimal path.)
Beam search
• A heuristic approach in which only the β most promising nodes (instead of all nodes) at each step of the search are retained for further branching
• β is called the beam width
• A restricted, or modified, version of either breadth-first search or best-first search
• Beam search is an optimization of best-first search that reduces its memory requirements
Beam search
OPEN = {initial_state}
while OPEN is not empty do
    remove the best node N from OPEN
    if N is the goal state, trace the path back from N (through the recorded parents) and return the path
    create N's successors
    evaluate each successor, add it to OPEN, and record its parent
    if |OPEN| > β, keep the best β nodes (according to the heuristic) and remove the others from OPEN
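A sketch of the loop above; after each expansion, OPEN is trimmed to the best `beta` nodes (names illustrative, not from the text):

```python
# Beam search: best-first search with OPEN capped at beta nodes.
import heapq

def beam_search(start, successors, h, is_goal, beta):
    open_list = [start]
    parent = {start: None}
    while open_list:
        n = min(open_list, key=h)            # best node on OPEN
        open_list.remove(n)
        if is_goal(n):
            path = []                        # trace back through parents
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1]
        for s in successors(n):
            if s not in parent:
                parent[s] = n
                open_list.append(s)
        # keep only the best beta nodes on OPEN
        open_list = heapq.nsmallest(beta, open_list, key=h)
    return None
```

Because trimming discards nodes permanently, a too-small `beta` can prune away the only path to the goal, as the worked comparison below shows.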
Example – Beam search (β = 2)
(Figure: the Example-2 graph again: initial node A [25]; A's successors B [17], C [15]; C's successors D [10], E [4]; E's successor F [0], the goal.)

N     OPEN
-     A25
A25   B17, C15
C15   D10, E4
E4    F0, D10
F0    D10

Solution: A–C–E–F
Two beam searches on the tree A → B [1], C [3]; B → D [2], E [2]; C → F [3], G [0], where G is the goal:

With a narrow beam (the C branch is pruned):
N    OPEN
A    B1, C3
B1   D2, E2
D2   E2
E2   {}        (failure: G is never reached)

With a wider beam (C3 is retained):
N    OPEN
A    B1, C3
B1   D2, E2, C3
D2   E2, C3
E2   C3
C3   F3, G0
G0   F3        (goal found)

• This can happen because the beam width and an inaccurate heuristic function may cause the algorithm to miss expanding the shortest path
• A more precise heuristic function and a larger beam width make beam search more likely to find the optimal path to the goal
Comparison of Search Techniques

            DFS     BFS        Best    HC      Beam    A*
Complete    N       Y          N       N       N       Y
Optimal     N       N          N       N       N       Y
Heuristic   N       N          Y       Y       Y       Y
Time        b^m     b^(d+1)    b^m     b^m     nm      b^m
Space       b·m     b^(d+1)    b^m     b       b·n     b^m

b – branching factor
d – depth of the shallowest goal; the tree may continue to depth m, m ≥ d
n – number of nodes in the OPEN list
Problem reduction
When a problem can be divided into a set of subproblems, where each subproblem can be solved separately and a combination of the solutions solves the original problem, AND-OR graphs (or AND-OR trees) are used to represent the solution.
Problem reduction
• Terminal node
  • If it is a goal state, it is labelled SOLVED; otherwise it is labelled UNSOLVED
• Nonterminal AND-node
  • An intermediate node that is marked UNSOLVED if any of its successors is UNSOLVED
  • It is marked SOLVED only if all of its successors are marked SOLVED
• Nonterminal OR-node
  • An intermediate node that is marked UNSOLVED only if all of its successors are marked UNSOLVED
  • It is marked SOLVED if any of its successors is marked SOLVED

(Figure: "Own a cellular phone" as the root, with an OR-alternative "Get a gift" and an AND-combination "Save money" and "Buy it".)

(Figure: labelling examples. An AND-node A with children B SOLVED and C UNSOLVED is UNSOLVED, and is SOLVED when both children are SOLVED. An OR-node A with children B SOLVED and C UNSOLVED is SOLVED, and is UNSOLVED only when both children are UNSOLVED.)
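The SOLVED/UNSOLVED labelling rules can be stated compactly in code. A sketch, encoding an AND-OR tree with tuples and terminal booleans (True = goal state; the representation is ours):

```python
# SOLVED/UNSOLVED propagation in an AND-OR tree. A node is either a
# terminal bool (True = goal state) or ('AND', children) / ('OR', children).
def solved(node):
    if isinstance(node, bool):
        return node                               # terminal node
    kind, children = node
    if kind == 'AND':
        return all(solved(c) for c in children)   # all successors SOLVED
    else:                                         # 'OR'
        return any(solved(c) for c in children)   # any successor SOLVED

# "Own a cellular phone" = get a gift OR (save money AND buy it)
phone = ('OR', [False, ('AND', [True, True])])
```

Here `solved(phone)` is True because the AND-branch (save money, buy it) is fully solved even though no gift is forthcoming.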
AO* algorithm
(Figure: an eight-step AO* trace on an AND-OR graph rooted at A. At each step the currently most promising partial solution graph is selected, one of its nodes is expanded, and the revised cost estimates — A's estimate moves through values such as 5, 6, 11, 12, 9, and finally 8 — are propagated back up toward the root before the next expansion.)
Means-ends analysis
• Centers around the detection of differences between the current state and the goal state
• Operator subgoaling
  • A kind of backward chaining in which operators are selected and then subgoals are set up to establish the preconditions of the operators
• A recursive procedure
• Relies on a set of rules that can transform one problem state into another
• Each rule is represented by a left side and a right side
  • Preconditions (left side): the conditions that must be met for the rule to be applicable
  • Right side: the aspects of the problem state that will be changed by applying the rule
• Difference table
  • A data structure that indexes the rules by the differences they can be used to reduce
Means-ends analysis: MEA(CURRENT, GOAL)
compare CURRENT and GOAL
if there is no difference
    exit with success
else
    select an operator O that is applicable to the current difference
    if there is no such operator
        exit with failure
    generate two states:
        O-START (a state in which O's preconditions are satisfied) and
        O-RESULT (the state resulting from applying O to O-START)
    if (FIRST_PART = MEA(CURRENT, O-START))
       and
       (LAST_PART = MEA(O-RESULT, GOAL))
       are both successful
        return Concatenation(FIRST_PART, O, LAST_PART)
Example
(Figure: a worked means-ends-analysis example in which rules R2, R1, R4, and R5 are applied to transform the initial state into the goal state.)
Constraint satisfaction problem
OPEN = {all objects that must have a value assigned to them in a complete solution}
while (no inconsistency is detected and OPEN is not empty)
{
    select an object OB from OPEN
    Const_set = the set of constraints that apply to OB
    if Const_set(OB) != previous(Const_set(OB)) or First_time(OB)
        OPEN = OPEN ∪ Const_set(OB)
    remove OB from OPEN
}
if the union of the constraints discovered above defines a solution
    quit and report the solution
else if the union of the constraints discovered above defines a contradiction
    quit and report failure
else
    while (a solution is not found and not all possible solutions have been eliminated)
        select an object whose value is not yet determined and select a way of strengthening the constraints on that object
        recursively invoke constraint satisfaction with the current set of constraints, augmented by the strengthening constraint just selected
Example – Cryptarithmetic

      S E N D
    + M O R E
    ---------
    M O N E Y

(C1–C4 denote the carries out of columns 1–4, counted from the right.)

Initial deductions:
M = 1
S = 8 or 9
O = 0
N = E + 1
C2 = 1
N + R > 8
E ≠ 9

The search then branches on the value of E (the figure explores E = 2, 3, 4, 5) and, within each branch, on whether the units column generates a carry (D + E = Y or D + E = 10 + Y); each case fixes candidates for R, S, D, and Y or ends in a contradiction. The consistent assignment is S = 9, E = 5, N = 6, D = 7, M = 1, O = 0, R = 8, Y = 2, i.e. 9567 + 1085 = 10652.

Another puzzle of the same kind:

      C R O S S
    + R O A D S
    -----------
    D A N G E R
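The deduction above can be cross-checked by brute force: assign distinct digits to the eight letters of SEND + MORE = MONEY and test the arithmetic directly. A sketch, deliberately without the constraint propagation described above:

```python
# Brute-force check of SEND + MORE = MONEY over all digit assignments.
from itertools import permutations

def solve_send_more_money():
    letters = 'SENDMORY'                 # the eight distinct letters
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a['S'] == 0 or a['M'] == 0:   # no leading zeros
            continue
        send  = 1000*a['S'] + 100*a['E'] + 10*a['N'] + a['D']
        more  = 1000*a['M'] + 100*a['O'] + 10*a['R'] + a['E']
        money = 10000*a['M'] + 1000*a['O'] + 100*a['N'] + 10*a['E'] + a['Y']
        if send + more == money:
            return a
    return None
```

The brute-force version examines up to 10P8 = 1,814,400 assignments; the constraint-propagation approach above reaches the same unique solution after exploring only a handful of branches, which is the point of the technique.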
Knowledge representation
• Representations and mapping
  • Knowledge is a description of the world; it determines a system's competence by what it knows
  • Representation is the way knowledge is encoded; it defines a system's performance in doing something

(Figure: reasoning programs operate on internal representations of facts. Mappings run in both directions between facts — e.g. the English sentence "Spot has a tail" — their English representations, and the internal representations used by the program.)
Approaches to knowledge representation
• Representational adequacy
• The ability to represent all of the kinds of knowledge that are needed in that
domain
• Inferential adequacy
• The ability to manipulate the representational structures to derive new
structures corresponding to new knowledge inferred from old
• Inferential efficiency
• The ability to incorporate additional information into the knowledge structure
that can be used to focus the attention of the inference mechanisms in the
most promising direction
• Acquisitional efficiency
• The ability to acquire new knowledge using automatic methods wherever
possible rather than reliance on human intervention
Using predicate logic
• Representation of simple facts in logic
• Propositional logic
  • Represent real-world facts as logical propositions written as well-formed formulas (wffs) in propositional logic
• Demerits
  • It is too coarse to easily describe properties of objects
  • It lacks the structure to express relations that exist among two or more entities
  • It does not permit us to make generalized statements about classes of similar objects

Facts                                     Propositional logic
It is raining                             RAINING
It is sunny                               SUNNY
If it is raining, then it is not sunny    RAINING → ¬SUNNY
Socrates is a man                         SOCRATESMAN
Plato is a man                            PLATOMAN
All men are mortal                        MORTALMAN

From these wffs it is not possible to derive "Plato is mortal" — the representation cannot connect PLATOMAN and MORTALMAN.
Predicate logic
Predicate logic (first-order logic, or quantified logic) is a formal language in which propositions are expressed in terms of predicates, variables, and quantifiers.

A predicate is a statement that contains variables and that may be true or false depending on the values of these variables.

Sentence → AtomicSentence
         | Sentence Connective Sentence
         | Quantifier Variable, … Sentence
         | ~Sentence
         | (Sentence)

An atomic sentence is formed from a predicate symbol followed by a parenthesized list of terms, e.g. Father(Ram, Hari).

Term → Function(Term, …)
     | Constant
     | Variable
Predicate Logic
• Terms represent specific objects in the world and can be constants, variables, or functions.
• Predicate symbols refer to a particular relation among objects.
• Sentences represent facts and are made up of terms, quantifiers, and predicate symbols.
• Functions allow us to refer to objects indirectly (via some relationship).
• Quantifiers and variables allow us to refer to a collection of objects without explicitly naming each object.

Quantifier
• Definitions of quantifiers:
∀x P(x) ⇔ P(a) ∧ P(b) ∧ P(c) ∧ …
∃x P(x) ⇔ P(a) ∨ P(b) ∨ P(c) ∨ …

∀x P(x) ⇔ ¬∃x ¬P(x)
∃x P(x) ⇔ ¬∀x ¬P(x)
¬∃x P(x) ⇔ ∀x ¬P(x)
¬∀x P(x) ⇔ ∃x ¬P(x)

∀x ∀y P(x, y) ⇔ ∀y ∀x P(x, y)
∃x ∃y P(x, y) ⇔ ∃y ∃x P(x, y)

∀x (P(x) ∧ Q(x)) ⇔ (∀x P(x)) ∧ (∀x Q(x))
∃x (P(x) ∨ Q(x)) ⇔ (∃x P(x)) ∨ (∃x Q(x))
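Over a finite universe of discourse, ∀ becomes `all` and ∃ becomes `any`, so the duality equivalences can be spot-checked exhaustively. A sketch (this checks a finite domain only, not the general first-order statement):

```python
# Check ∀x P(x) ⇔ ¬∃x ¬P(x) and ∃x P(x) ⇔ ¬∀x ¬P(x) for EVERY unary
# predicate P over a small finite universe.
from itertools import product

def equivalences_hold(universe):
    # enumerate all 2^|universe| possible predicates as truth tables
    for truth in product([False, True], repeat=len(universe)):
        table = dict(zip(universe, truth))
        P = table.__getitem__
        forall = all(P(x) for x in universe)          # ∀x P(x)
        exists = any(P(x) for x in universe)          # ∃x P(x)
        if forall != (not any(not P(x) for x in universe)):  # ¬∃x ¬P(x)
            return False
        if exists != (not all(not P(x) for x in universe)):  # ¬∀x ¬P(x)
            return False
    return True
```

Since every predicate over the universe is tried, a passing check is a genuine proof of the equivalence for that domain size.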
Examples
The collection of values that a variable x can take is called x's universe of discourse.

(Figures, not reproduced: predicate-logic formulas for the Romans/Pompeians examples, for representing Instance and ISA relationships, and for computable functions and predicates.)

Proof:
Ram is a B. Tech. student of CET. His branch is IT. He took admission in the year 2017. All the B. Tech. students who took admission at CET in the year 2016 or later studied through online mode in 2020 due to the Covid pandemic.
Prove that Ram studied through online mode.

(Figure: the proof reduces the goal Online(RAM), using the computable predicate gt(2017, 2015), to an empty (Nil) set of subgoals.)
Resolution

Algorithm – Conversion of predicate-logic formulas to clause form
(Figure: the step-by-step conversion procedure: eliminate implications, reduce the scope of negations, standardize variables apart, eliminate existential quantifiers by Skolemization, drop the universal quantifiers, convert to conjunctive normal form, and write each conjunct as a separate clause.)
The basis of resolution

Resolution in propositional logic
Input: a set of axioms F and a proposition P to prove

Algorithm:
1. Convert all the propositions in F to clause form
2. Negate P, convert the result to clause form, and add it to the set of clauses from step 1
3. Repeat until either a contradiction is found or no progress can be made:
   a. Select two clauses (the parent clauses)
   b. Resolve them together
   c. If the resolvent is the empty clause, a contradiction has been found;
      otherwise, add the resolvent to the set of clauses available to the procedure
Example
(Figure: a propositional resolution refutation; the derivation, involving the clauses ~R and T, ends in the empty clause.)
The unification algorithm
• L1 = x, L2 = RAM
  → RAM/x
• L1 = IT(x), L2 = CSE(SHYAM)
  → FAIL (different predicate symbols)
• L1 = IT(x, y), L2 = IT(RAM)
  → FAIL (different numbers of arguments)
• L1 = IT(x, y), L2 = IT(RAM, z)
  S = Unify(x, RAM) = RAM/x
  S = Unify(y, z) = z/y
  → {RAM/x, z/y}
• L1 = IT(x, y, z), L2 = IT(RAM, SHYAM, HARI)
  S = Unify(x, RAM) = RAM/x
  S = Unify(y, SHYAM) → S = {RAM/x, SHYAM/y}
  S = Unify(z, HARI) → S = {RAM/x, SHYAM/y, HARI/z}
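A sketch of unification covering the cases above: variables are lowercase strings, constants are uppercase strings, and a compound term is a (predicate, args) tuple. The occurs check is omitted for brevity; the representation and names are ours.

```python
# Unification for simple first-order terms.
def is_var(t):
    return isinstance(t, str) and t[0].islower()   # lowercase = variable

def walk(t, s):
    """Follow substitution chains until t is not a bound variable."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s=None):
    """Return a substitution dict unifying a and b, or None (FAIL)."""
    s = {} if s is None else s
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple):
        if a[0] != b[0] or len(a[1]) != len(b[1]):
            return None          # different predicate symbols or arity: FAIL
        for x, y in zip(a[1], b[1]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None                  # two different constants: FAIL
```

Threading the substitution `s` through the recursive calls is what lets a binding made for one argument constrain the later arguments.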
Resolution in predicate logic
Ram is a student of the IT branch. Some students of the IT branch are brilliant. Ram is a brilliant student. Brilliant students of the IT branch either get a good job or decide that they must pursue higher studies. Ram did not get a good job.
Prove: Ram pursued higher studies.

(Figure: the resolution refutation. Starting from ~study(RAM, HS), the clauses are resolved — using the substitution RAM/x — against Brilliant(RAM), ~get(RAM, JOB), and IT(RAM) until the empty clause is produced.)
Ram is a B. Tech. student of CET. His branch is IT. He took admission in the year 2017. All the B. Tech. students who took admission at CET in the year 2016 or later studied through online mode in 2020 due to the Covid pandemic.
Prove that Ram studied through online mode.

Proof:
(Figure: the resolution refutation. Starting from ~Online(RAM), the clauses are resolved with the substitutions RAM/x and 2017/t1 against admission(RAM, 2017), and the computable predicate Gt(2017, 2015) closes the proof with the empty clause.)
1. The members of the Elm St. Bridge Club are Joe, Sally, Bill, and Ellen
2. Joe is married to Sally
3. Bill is Ellen's brother
4. The spouse of every married person in the club is also in the club
5. The last meeting of the club was at Joe's house

Prove: the last meeting of the club was at Sally's house.

(Figure: the resolution refutation; the clauses shown include ~Meeting(CLUB, JH), Club(JOE), and Club(SALLY).)
Resolution – Demerits
• The valuable heuristic information that is contained in the original representation of the facts is lost when they are converted to clause form
• People do not think in resolution

• Natural deduction
  • Describes a blend of techniques, used in combination, to solve problems that are not tractable by any one method alone
  • One common technique is to reason about the objects involved in a predicate rather than about the predicate itself
Representing knowledge using rules

• A declarative representation is one in which knowledge is specified, but the use to which that knowledge is to be put is not given. To use a declarative representation, it must be augmented with a program that specifies what is to be done with the knowledge and how.

• A procedural representation is one in which the control information that is necessary to use the knowledge is considered to be embedded in the knowledge itself. To use a procedural representation, it must be augmented with an interpreter that follows the instructions given in the knowledge.
Logic Programming

Differences between logic and PROLOG representation:
• In PROLOG, quantification is provided implicitly by the way the variables are interpreted (variables begin with UPPERCASE letters; constants are lowercase atoms or numbers)
• There is an explicit symbol for and (the comma) but none for or
• The interpreter always works backward from the goal

Conversion to Horn clause form:
1. If the Horn clause contains no negative literals, leave it as it is
2. Otherwise:
   1. Rewrite the Horn clause as an implication
   2. Combine all the negative literals into the antecedent of the implication, leaving the single positive literal as the consequent
Example –

Closed world assumption
• Negation as failure
  • Logical negation cannot be represented explicitly in PROLOG; negation is represented implicitly by the lack of an assertion. This leads to the problem-solving strategy called negation as failure.
• Closed world assumption
  • It states that all relevant, true assertions are contained in the knowledge base or are derivable from the assertions that are so contained. Any assertion that is not present can therefore be assumed to be false.

?- cat(fluffy)
?- cat(mittens)
Forward vs Backward Reasoning

Forward reasoning:
1. Begin building a tree of move sequences that might be solutions by starting with the initial configuration(s) at the root of the tree
2. Generate the next level of the tree by finding all the rules whose left sides match the root node, and use their right sides to create the new configurations
3. Generate each succeeding level by taking each node generated at the previous level and applying to it all of the rules whose left sides match it
4. Continue
5. Example: OPS5

Backward reasoning:
1. Begin building a tree of move sequences that might be solutions by starting with the goal configuration(s) at the root of the tree
2. Generate the next level of the tree by finding all the rules whose right sides match the root node, and use their left sides to create the new configurations
3. Generate each succeeding level by taking each node generated at the previous level and applying to it all of the rules whose right sides match it
4. Continue
5. This is also called goal-directed reasoning
6. Examples: PROLOG, MYCIN

Forward rules encode knowledge about how to respond to certain input configurations.
Backward rules encode knowledge about how to achieve particular goals.
Forward reasoning or backward reasoning?
• Are there more possible start states or goal states?
  • Move from the smaller set of states to the larger set of states
• In which direction is the branching factor (the average number of nodes that can be reached directly from a single node) greater?
  • Proceed in the direction with the lower branching factor
• Will the program be asked to justify its reasoning process to the user?
  • If so, proceed in the direction that corresponds more closely to the way the user thinks
• What kind of event is going to trigger a problem-solving episode?
  • If it is the arrival of a new fact, forward reasoning should be used
  • If it is a query to which a response is desired, use backward reasoning
Matching
• Indexing
  • Perform a simple search through all the rules, comparing each one's preconditions to the current state and extracting all the ones that match
  • Problems:
    • With a large number of rules, this simple search is expensive
    • It is not always immediately obvious whether a rule's preconditions are satisfied by a particular state
• Matching with variables
  • To match a single condition against a single element in a state description, use the unification procedure
  • Backward-chaining systems usually use depth-first backtracking to select individual rules
  • Forward-chaining systems use conflict resolution strategies
  • The many-many match problem: many rules must be matched against many state elements at once
• Complex and approximate matching
  • Needed when the preconditions of a rule specify required properties that are not stated explicitly in the description of the current state
  • Or when the preconditions only approximately match the current situation
• Conflict resolution
  • Conflict resolution strategies are used in production systems to help choose which production rule to fire. The need for such a strategy arises when the conditions of two or more rules are satisfied by the currently known facts.
  • Assign a preference based on the rules that matched
  • Assign a preference based on the objects that matched
  • Assign a preference based on the states that matched

Examples of rule forms:
If X and Y then Z
If Ram will go, I would also go.
If X and Y or A then Z
Control knowledge
• Search control knowledge
  • Knowledge about which paths are more likely to lead quickly to a goal state is often called search control knowledge:
    1. Knowledge about which states are more preferable to others
    2. Knowledge about which rule to apply in a given situation
    3. Knowledge about the order in which to pursue subgoals
    4. Knowledge about useful sequences of rules to apply
  • Search control knowledge represents knowledge about knowledge and hence is also called meta-knowledge
• SOAR
  • A general architecture for building intelligent systems
  • SOAR is based on a set of specific, cognitively motivated hypotheses about the structure of human problem solving:
    1. Long-term memory is stored as a set of productions (or rules)
    2. Short-term memory (also called working memory) is a buffer, analogous to the state description in problem solving
    3. All problem-solving activity takes place as state-space traversal
    4. All intermediate and final results of problem solving are remembered (chunked) for future reference
Assignment
1. Question- 3, 4, 5, 6, 9 [page# 166-168, Book- Elaine Rich, Kevin
Knight, &Shivashankar B Nair, Artificial Intelligence, McGraw Hill,3rd
ed.,2009 ]
2. Question- 1 [page#192]
Symbolic reasoning under uncertainty



Minimalist reasoning
• Minimal model
• A model is defined to be minimal if there are no other models in which fewer
thing are true
• Closed world assumption
• Its assumptions are not always true in the world
• It is purely syntactic reasoning process
• Limitations
• It operates on individual predicates without considering the interaction among the
predicates
• It assumes that all predicates have all of their instances listed
Example: given only A(RAM) V B(RAM), applying the CWA to A and to B separately adds ~A(RAM) and ~B(RAM), contradicting the disjunction — the predicates interact.
Example: given Single(RAM) and Single(HARI), the CWA yields ~Married(RAM) and ~Married(HARI); but for SHYAM, who is listed nowhere, the CWA concludes both ~Single(SHYAM) and ~Married(SHYAM).
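The closed world assumption itself can be sketched in a few lines: any ground fact not explicitly listed in the knowledge base is taken to be false. The predicate and constant names below are illustrative.

```python
# Minimal sketch of the closed world assumption (CWA): a ground fact
# holds only if it is explicitly listed in the knowledge base.

facts = {("Single", "RAM"), ("Single", "HARI")}

def cwa_holds(predicate, individual):
    """Return True if the fact is listed; otherwise assume it is false."""
    return (predicate, individual) in facts

print(cwa_holds("Single", "RAM"))     # True: listed
print(cwa_holds("Married", "RAM"))    # False: not listed, assumed false
print(cwa_holds("Single", "SHYAM"))   # False: SHYAM is not mentioned at all
```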
• Circumscription
• The effect of new axioms is to force a minimal interpretation on a selected portion of the knowledge
base
A(RAM) V B(RAM)
Circumscribe only A:
this assertion describes those models in which A is true of no one and B is true of at least RAM
Circumscribe only B:
this assertion describes those models in which B is true of no one and A is true of at least RAM
Circumscribe A and B together:
it describes those models in which
A is true of only RAM and B is true of no one
or
B is true of only RAM and A is true of no one
Implementation issue
• How to derive exactly those nonmonotonic conclusions that are
relevant to solving the problem at hand
• How to update knowledge incrementally as problem solving
progresses
• More than one interpretation of the known facts is licensed by
available inference rules.
• These theories are not computationally effective: none of them is decidable, some are
semi-decidable (but only in their propositional forms), and none is efficient
Semantic nets
• Represent knowledge as a graph
• Nodes correspond to facts or concepts
• Arcs correspond to relations or associations between concepts
• Nodes and arcs are labeled
(Figure: a semantic net with nodes College, Hostel, CET, IT and B. Tech connected by Has-part, instance, Student-of and Dept.-of arcs)
• Representing nonbinary predicates
John gave the book to Mary: Give(JOHN, BOOK, MARY)
Ram gifted a chocolate to Hari: Gift(RAM, CHOCOLATE, HARI)
(Figure: the event is reified as a node V34, an instance of Gift, with an agent arc to RAM, an object arc to BK07 — an instance of CHOCOLATE — and a beneficiary arc to HARI)
Ram is taller than Hari: introduce height nodes H1 and H2 for RAM and HARI, with a Greater-than arc from H1 to H2.
Ram is 6 feet tall and taller than Hari: additionally give H1 the value 6.
(Figure: RAM and HARI with height arcs to H1 and H2, a Greater-than arc between them, and H1 = 6)
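One way to sketch a semantic net in code is as a set of labelled-arc triples, with the gift event reified as a node. The node name G1 below is invented for the example.

```python
# A semantic net stored as labelled edges (source, label, target).
# G1 reifies the ternary fact Gift(RAM, CHOCOLATE, HARI) as an event node.

edges = {
    ("G1", "instance", "Gift"),
    ("G1", "agent", "RAM"),
    ("G1", "object", "CHOCOLATE"),
    ("G1", "beneficiary", "HARI"),
}

def follow(node, label):
    """Return the set of nodes reachable from node via an arc with this label."""
    return {t for (s, l, t) in edges if s == node and l == label}

print(follow("G1", "beneficiary"))  # {'HARI'}
print(follow("G1", "instance"))     # {'Gift'}
```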
Partitioned semantic nets
• Partition the semantic net into a hierarchical set of spaces, each of which corresponds to the scope of one or more variables.
(Figure: "Dogs bite mail-carriers" as a simple net in a space SA — instances d, b and m of Dogs, Bite and Mail-carrier, with assailant and victim arcs from b)
(Figure: the quantified version places the statement in a space S1; the general statement g, in the general-statement space GS, points to S1 via a form arc and to the universally quantified variable d)
Every dog in town has bitten the constable
(Figure: the same net with a Town-Dogs node linked by isa to Dogs, and the quantified variable d ranging over Town-Dogs inside S1)
Frames
• A collection of attributes (called slots) and associated values that
describe an entity in the world.
Student
isa: Person
Cardinality: 3000
*Work: Study
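The slot lookup that frames support can be sketched as isa-inheritance over dictionaries. The slot names follow the Student frame above; the Person frame's slot is invented to complete the example.

```python
# A sketch of frames as dictionaries with isa-inheritance: a slot not
# found locally is looked up by climbing the isa hierarchy.

frames = {
    "Person": {"legs": 2},  # invented slot, for illustration
    "Student": {"isa": "Person", "Cardinality": 3000, "*Work": "Study"},
}

def get_slot(frame, slot):
    """Return a slot value, inheriting along isa links when needed."""
    while frame is not None:
        if slot in frames[frame]:
            return frames[frame][slot]
        frame = frames[frame].get("isa")
    return None

print(get_slot("Student", "*Work"))  # Study     (local slot)
print(get_slot("Student", "legs"))   # 2         (inherited from Person)
```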
Conceptual Dependency
• Represents knowledge as a small set of primitive acts (e.g. INGEST, PTRANS, ATRANS) combined through dependency structures
(Figure: CD diagram of an eating event performed with a spoon)
Game playing
• Minimax search procedure
• A depth-first, depth-limited search procedure
• Plausible-move generator to generate the set of possible successor positions
• Apply static evaluation function to those positions and choose the best one
• Goal is to maximize the value of static evaluation function of the next board
position
• Auxiliary procedures
• MOVEGEN(position, player) – the plausible-move generator, which returns a list of nodes
representing the moves that can be made by player in position
• STATIC(position, player) – the static evaluation function, which returns a number
representing the goodness of position from the standpoint of player
• DEEP-ENOUGH(position, depth) – ignores its position parameter and simply returns
TRUE if its depth parameter exceeds a constant cut-off value
Algorithm
MINIMAX(position, depth, player)
  if DEEP-ENOUGH(position, depth)
    return VALUE = STATIC(position, player); PATH = nil
  else
    SUCC = MOVEGEN(position, player)
    if SUCC is empty
      return VALUE = STATIC(position, player); PATH = nil
    else
      BEST-SCORE = minimum value that STATIC can return
      for each element S of SUCC
        RESULT-SUCC = MINIMAX(S, depth+1, OPPOSITE(player))
        NEW-VALUE = -VALUE(RESULT-SUCC)
        if NEW-VALUE > BEST-SCORE
          BEST-SCORE = NEW-VALUE
          BEST-PATH = ADD(S, PATH(RESULT-SUCC))
      return VALUE = BEST-SCORE; PATH = BEST-PATH
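A runnable sketch of the procedure above in its negamax form, where each player maximizes the negation of the opponent's best score. The toy tree and static values are invented; MOVEGEN, STATIC and DEEP-ENOUGH are passed in as functions.

```python
# Negamax-style MINIMAX: the recursive call evaluates the position from
# the opponent's standpoint, so its value is negated before comparing.

def minimax(position, depth, movegen, static, deep_enough):
    """Return (value, path) for position from the mover's standpoint."""
    if deep_enough(position, depth):
        return static(position), []
    successors = movegen(position)
    if not successors:
        return static(position), []
    best_score, best_path = None, []
    for succ in successors:
        value, path = minimax(succ, depth + 1, movegen, static, deep_enough)
        new_value = -value                       # opponent's gain is our loss
        if best_score is None or new_value > best_score:
            best_score, best_path = new_value, [succ] + path
    return best_score, best_path

# Toy tree: leaf values are from the standpoint of the player to move there.
tree = {"A": ["B", "C"], "B": [], "C": []}
values = {"B": -3, "C": -6}
value, path = minimax("A", 0,
                      movegen=lambda p: tree[p],
                      static=lambda p: values.get(p, 0),
                      deep_enough=lambda p, d: d >= 2)
print(value, path)  # 6 ['C']
```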
Minimax – Example
(Figure: a three-ply game tree with leaf values 5, 2, 1, 3, 6, 2, 0 and 7; backing the values up through the MIN and MAX levels gives the root the value 6 — the computer can obtain 6 by choosing the right-hand edge from the first node.)
Alpha-beta pruning
• Two threshold values are carried along the search:
• One representing a lower bound on the value that a maximizing node may ultimately be assigned (alpha)
• The other representing an upper bound on the value that a minimizing node may ultimately be assigned (beta)
• Search below a node is cut off as soon as alpha ≥ beta
(Figure: a game tree with maximizing nodes A, D, E, F, G, minimizing nodes B, C, I, J, M, N, leaves K(0) and L(7), and values 3, 5, 7, 8 propagating upward to show which branches can be pruned)
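The two thresholds can be sketched as follows; the dictionary-encoded tree and its leaf values are invented and only loosely follow the figure.

```python
# Minimax with alpha-beta pruning: alpha is the best score MAX can force
# so far, beta the best MIN can force; a branch is cut when alpha >= beta.

def alpha_beta(node, depth, alpha, beta, maximizing, tree, values):
    children = tree.get(node, [])
    if depth == 0 or not children:
        return values[node]
    if maximizing:
        best = float("-inf")
        for child in children:
            best = max(best, alpha_beta(child, depth - 1, alpha, beta, False, tree, values))
            alpha = max(alpha, best)
            if alpha >= beta:         # beta cut-off: MIN will never allow this line
                break
        return best
    best = float("inf")
    for child in children:
        best = min(best, alpha_beta(child, depth - 1, alpha, beta, True, tree, values))
        beta = min(beta, best)
        if alpha >= beta:             # alpha cut-off: MAX will never allow this line
            break
    return best

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
values = {"D": 3, "E": 5, "F": 5, "G": 8}
print(alpha_beta("A", 2, float("-inf"), float("inf"), True, tree, values))  # 5
```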
Iterative deepening search
(Figures: the search frontier for depth limits d = 0, 1, 2 and 3)
Iterative deepening
Depth-First Iterative Deepening
ITERATIVE-DEEPENING()
  depth = 1
  repeat forever
    path = DFS(depth)
    if path is a solution path
      return path
    else
      depth = depth + 1

* The maximum amount of memory used is proportional to the number of
nodes in the solution path
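The procedure above can be made concrete over an explicit graph; the nodes and edges below are invented for illustration.

```python
# Depth-first iterative deepening: run depth-limited DFS with limits
# 1, 2, 3, ... until the goal is found, keeping only the current path
# in memory.

def depth_limited_search(graph, node, goal, limit, path):
    if node == goal:
        return path
    if limit == 0:
        return None
    for succ in graph.get(node, []):
        result = depth_limited_search(graph, succ, goal, limit - 1, path + [succ])
        if result is not None:
            return result
    return None

def iterative_deepening(graph, start, goal, max_depth=20):
    """Run DFS with increasing depth limits; return the first solution path."""
    for depth in range(1, max_depth + 1):
        result = depth_limited_search(graph, start, goal, depth, [start])
        if result is not None:
            return result
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": []}
print(iterative_deepening(graph, "S", "G"))  # ['S', 'B', 'G']
```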
Planning
• Search and problem solving strategies.
• Knowledge Representation schemes.
• Problem decomposition -- breaking problem into smaller pieces and trying to
solve these first.
Classic search techniques can be applied to planning in this manner:
• A* Algorithm-- best first search,
• Problem decomposition-- Synthesis, Frame Problem.
• AO* Algorithm-- Split problem into distinct parts.
• Heuristic reasoning-- ordinary search backtracking can be hard, so introduce
reasoning to devise heuristics and to control the backtracking.
An example domain- The Blocks World
The world consists of:
• A flat surface such as a table top
• An adequate set of identical blocks which are identified by letters.
• The blocks can be stacked one on top of another to form towers of apparently unlimited height.
• The stacking is achieved using a robot arm whose fundamental operations and states can be described using logic and combined using
logical operations.
• The robot can hold one block at a time and only one block can be moved at a time.
Four actions
UNSTACK(A,B)-- pick up clear block A from block B;
STACK(A,B)-- place block A using the arm onto clear block B;
PICKUP(A)-- lift clear block A with the empty arm;
PUTDOWN(A)-- place the held block A onto a free space on the table.
Five predicates:
ON(A,B)-- block A is on block B.
ONTABLE(A)-- block A is on the table.
CLEAR(A)-- block A has nothing on it.
HOLDING(A)-- the arm holds block A.
ARMEMPTY-- the arm holds nothing.
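The actions and predicates above can be sketched as STRIPS-style operators with precondition, delete and add lists. UNSTACK is shown below, instantiated for concrete blocks; the dictionary encoding is one possible representation, not the book's notation.

```python
# Blocks-world state as a set of ground predicates (encoded as tuples);
# an operator applies when its preconditions are a subset of the state.

def make_unstack(a, b):
    """UNSTACK(a,b): pick up clear block a from block b."""
    return {
        "name": f"UNSTACK({a},{b})",
        "pre": {("ON", a, b), ("CLEAR", a), ("ARMEMPTY",)},
        "del": {("ON", a, b), ("ARMEMPTY",)},
        "add": {("HOLDING", a), ("CLEAR", b)},
    }

def apply_op(state, op):
    """Apply an operator if its preconditions hold; otherwise return None."""
    if not op["pre"] <= state:
        return None
    return (state - op["del"]) | op["add"]

state = {("ON", "A", "B"), ("ONTABLE", "B"), ("CLEAR", "A"), ("ARMEMPTY",)}
new_state = apply_op(state, make_unstack("A", "B"))
print(sorted(new_state))
```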
Components of planning system
1. Choose the best rule based upon heuristics.
2. Apply this rule to create a new state.
3. Detect when a solution is found.
4. Detect dead ends so that they can be avoided.
5. Detect when a nearly solved state occurs and use special methods
to make it a solved state.
Goal Stack Planning
Push the original goal onto the stack. Repeat until the
stack is empty:
– If stack top is a compound goal, push its unsatisfied subgoals to the
stack.
– If stack top is a single unsatisfied goal, replace it by an operator that
makes it satisfied and push the operator’s precondition to the stack.
– If stack top is an operator, pop it from the stack, execute it and
change the database by the operator's effects.
– If stack top is a satisfied goal, pop it from the stack.
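The loop above can be sketched on a deliberately tiny domain. Everything below — the HAS predicate, the operator names — is invented for illustration; a real goal-stack planner additionally needs variable binding and compound-goal reordering.

```python
# Toy goal-stack planner: pop satisfied goals; for an unsatisfied goal,
# either execute the operator that achieves it (if its preconditions
# hold) or push the goal back under its unmet subgoals.

operators = {
    ("HAS", "flour"): {"name": "buy_flour", "pre": [],
                       "add": [("HAS", "flour")], "del": []},
    ("HAS", "cake"): {"name": "bake_cake", "pre": [("HAS", "flour")],
                      "add": [("HAS", "cake")], "del": [("HAS", "flour")]},
}

def goal_stack_plan(state, goal):
    stack, plan = [goal], []
    while stack:
        top = stack.pop()
        if top in state:                      # satisfied goal: pop it
            continue
        op = operators[top]                   # operator achieving the goal
        unmet = [g for g in op["pre"] if g not in state]
        if unmet:                             # push goal back under its subgoals
            stack.append(top)
            stack.extend(unmet)
        else:                                 # preconditions hold: execute
            state = (state - set(op["del"])) | set(op["add"])
            plan.append(op["name"])
    return plan, state

plan, state = goal_stack_plan(set(), ("HAS", "cake"))
print(plan)  # ['buy_flour', 'bake_cake']
```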
Sussman Anomaly
Solution
Nonlinear planning using constraint posting
• Nonlinear plan
• A plan which is not composed of a linear sequence of complete subplans

• Constraint posting
• To build up a plan incrementally by hypothesizing operators, partial orderings
between operators, and bindings of variables within operators
• Operations
• Step addition
• Promotion
• Declobbering- insert S3 between S1 and S2 such that S3 reasserts some precondition
that was negated (clobbered) by S1
• Simple establishment
• Separation
TWEAK -algorithm

CLEAR(B) CLEAR(C)
*HOLDING(A) *HOLDING(B)
_____________ _____________
STACK(A,B) STACK(B, C)
_____________ _____________
ARMEMPTY ARMEMPTY
ON(A,B) ON(B, C)
~CLEAR(B) ~CLEAR(C)
~HOLDING(A) ~HOLDING(B)
TWEAK - algorithm
*CLEAR(A) *CLEAR(B)
ONTABLE(A) ONTABLE(B)
*ARMEMPTY *ARMEMPTY
_____________ _____________
PICKUP(A) PICKUP(B)
_____________ _____________
~ONTABLE(A) ~ONTABLE(B)
~ARMEMPTY ~ARMEMPTY
HOLDING(A) HOLDING(B)
TWEAK - algorithm

*CLEAR(A) *ON(x, A)
ONTABLE(A) *CLEAR(x)
*ARMEMPTY *ARMEMPTY
_____________ _____________
PICKUP(A) UNSTACK(x, A)
_____________ _____________
~ONTABLE(A) ~ARMEMPTY
~ARMEMPTY CLEAR(A)
HOLDING(A) HOLDING(A)
~ON(x, A)
TWEAK - algorithm

Declobbering
HOLDING(C)
___________
PUTDOWN(C)
___________
~HOLDING(C)
ONTABLE(C)
ARMEMPTY

Solution
UNSTACK(C, A)
PUTDOWN(C)
PICKUP(B)
STACK(B, C)
PICKUP(A)
STACK(A, B)
Necessary truth criterion
A proposition P is necessarily true in a state S if and only if the following two conditions are met:
(1) there is a state T, equal to or necessarily before S, in which P is necessarily asserted; and

(2) for every step C possibly executed before S and for every proposition Q codesignating with P
which C denies, there is a step W necessarily between C and S which asserts R, a proposition such that R and P
codesignate whenever P and Q codesignate.
