Artificial Intelligence


Artificial Intelligence

Early History of AI
Philosophy:- The Three Great Movements: empiricism vs. rationalism

Expert Systems (rule-based):
DENDRAL: analyzes chemical compounds
MYCIN: diagnoses blood infections
SHRDLU: the illusion of intelligence

❏ Rule Based System


❏ A language parser that allowed user interaction using English terms.
❏ SHRDLU in action
Introduction
1. Introduction
a. What is intelligence? Why artificial ?
b. Models
c. Representation
d. Problem Solving and representation
e. Algorithms
f. The Rumpelstiltskin Principle: Naming things
g. Trivial vs. simple
h. The gift (or curse) of language
i. Visual problem solving
j. Origins of AI
k. Science fiction and AI
Agent
Agent = Architecture + Program

An environment in artificial intelligence is the surrounding of the agent. The agent takes input from the environment through sensors and delivers output to the environment through actuators.
Dynamic environment: An environment is dynamic if it changes while the agent is in the process of responding to a percept sequence. The agent needs to keep looking at the world before each action, since the environment changes over time.
Example: taxi driving, playing soccer
Static environment: An environment is static if it does not change while the agent is deciding on an action, i.e. the agent does not need to keep track of time. The environment is constant over time.
Example: crossword puzzles
Examples: Agent’s Environment
DIKW Pyramid
Is Knowledge Power?
● Humans (Homo sapiens) are considered to be the most intelligent living
organisms on earth. Humans have the ability to think and react to situations.
● Knowledge is a body of facts and principles accumulated by humans (or other species); the act, fact, or state of knowing.
● It consists of strategies, tricks, prejudices, beliefs, and heuristics (rules of thumb).
● It is stored as symbolic structures.
● It may be declarative or procedural.
● e.g., a physician treating a patient by using knowledge and data (the patient's records)
Intelligence requires Knowledge
Knowledge has some less desirable properties:

● It is voluminous.
● It is hard to characterize accurately.
● It is constantly changing.
● It differs from data in that it is organized in ways that correspond to how it will be used.

Intelligence requires the possession of, and access to, knowledge.


Dimensions of AI
❏ Automation
❏ Augmentation
❏ Adaptation
❏ Analysis
❏ Accuracy
❏ Acquisition

i. Thinking & reasoning
ii. Behaviour
iii. Human performance
iv. Rationality: Doing the right thing?


The Four Approaches to AI
The term AI is defined by each author in their own way; these definitions fall into 4 categories.

i. Thinking humanly

ii. Thinking rationally

iii. Acting humanly

iv. Acting rationally


c. The Turing Test: For acting humanly

d. What is required for passing the Turing Test?

e. Extended Turing test: What more is required?


1. Which type of artificial intelligence focuses on performing tasks that would typically require human intelligence?
a) Weak AI b) Strong AI c) General AI d) Narrow AI
Solution: d) Narrow AI
2. Which type of artificial intelligence aims to replicate human-like intelligence, possessing the ability to understand, learn, and apply
knowledge across various domains?
a) Weak AI b) Strong AI c) General AI d) Narrow AI
Solution: c) General AI
3. Which type of artificial intelligence is designed to excel at a specific task or set of tasks, often outperforming humans within that
narrow domain?
a) Weak AI b) Strong AI c) General AI d) Narrow AI
Solution: a) Weak AI
4. Which type of artificial intelligence seeks to mimic human cognitive functions such as problem-solving, learning, and pattern
recognition?
a) Weak AI b) Strong AI c) General AI d) Narrow AI
Solution: b) Strong AI
5. Which type of artificial intelligence has been achieved so far, demonstrating proficiency in specialized tasks like image recognition,
natural language processing, and game playing?
a) Weak AI b) Strong AI c) General AI d) Narrow AI
Solution: d) Narrow AI
AI problems
Tic-Tac-Toe
Chess
Question Answering
Water Jug Problem
Travelling Salesman Problem
8/16 puzzle Problem
Missionaries and Cannibals Problem
The Tower of Hanoi
Monkey and Banana Problem
Block world
Bridge
Cryptarithmetic Problem
What is AI technique?
Method that exploits knowledge
Solve AI problems without using AI techniques
Artificial Intelligence On a Practical Level
● Speech Recognition allows an intelligent system to convert human speech into text or code.
● Natural Language Processing enables conversational interaction between humans and computers.
● Computer Vision allows a machine to scan an image and use comparative analysis to identify objects
in the image.
● Machine learning focuses on building algorithmic models that can identify patterns and relationships in
data.
● Expert systems gain knowledge about a specific subject and can solve problems as accurately as a
human expert on this subject.

Why is content mining not an application of Artificial Intelligence?


Knowledge Base Vs Database
Three Classes of Problem
1. Ignorable:- Solution steps can be ignored. Solved using a simple control structure that never backtracks. e.g., theorem proving (ignore a failed approach and start with another one to prove the theorem).
2. Recoverable:- Solution steps can be undone; to correct a mistake we need to undo the incorrect steps. Solved using a more complicated control structure with backtracking.
3. Irrecoverable:- Solution steps cannot be undone, so a great deal of effort goes into making each decision. Use planning.
Production (rule) System
- Production vs production system
- Components
- A set of rules
- Knowledge/database
- Control Strategy
- A rule applier
- Features
- Simplicity (if-then)
- Modularity
- Modifiability
- Knowledge Intensive
Four Categories of Production Systems
- Monotonic Production System:-
Application of a rule never prevents the later application of another rule that could also have been applied.
- Partially Commutative Production System:-
If a set of rules transforms state A to state B, then any allowable ordering of those rules is also capable of converting state A to B.
(A chemical process, where changes are irreversible, is not partially commutative.)
- Commutative Production System:-
Both monotonic and partially commutative; the order of operations is not important.
- Non-monotonic Production System:-
Application of a rule may prevent the later application of other rules; does not require backtracking to correct previous incorrect moves.
Four Categories of Production Systems
Water Jug Problem
Two jugs are given, a 4-litre jug and a 3-litre jug.
Neither jug has any measuring markers on it.
There is a pump available to fill the jugs with water.
How can you get exactly 2 litres of water into the 4-litre jug?
Assume that both jugs are initially empty.
The jugs can be filled with water and emptied any number of times.
Solution of Water Jug Problem
State representation: (x, y), where x is the amount of water in the 4-litre jug and y is the amount of water in the 3-litre jug, with 0 ≤ x ≤ 4 and 0 ≤ y ≤ 3 (x = 0, 1, 2, 3 or 4; y = 0, 1, 2 or 3).
Initial state: (0, 0)
Goal state: (2, y) for any value of y

Solution-1 (litres in 4L, litres in 3L): (0,0) → (4,0) → (1,3) → (1,0) → (0,1) → (4,1) → (2,3)
Solution-2 (litres in 4L, litres in 3L): (0,0) → (0,3) → (3,0) → (3,3) → (4,2) → (0,2) → (2,0)
Step - Operator Description (Action) - Production Rule
1. Fill the 4-litre jug: (x, y) if x < 4 → (4, y)
2. Fill the 3-litre jug: (x, y) if y < 3 → (x, 3)
3. Pour some water out of the 4-litre jug: (x, y) if x > 0 → (x − x′, y)
4. Pour some water out of the 3-litre jug: (x, y) if y > 0 → (x, y − y′)
5. Empty the 4-litre jug on the ground: (x, y) if x > 0 → (0, y)
6. Empty the 3-litre jug on the ground: (x, y) if y > 0 → (x, 0)
7. Pour water from the 3-litre jug into the 4-litre jug until the 4-litre jug is full: (x, y) if x + y >= 4 and y > 0 → (4, y − (4 − x))
8. Pour water from the 4-litre jug into the 3-litre jug until the 3-litre jug is full: (x, y) if x + y >= 3 and x > 0 → (x − (3 − y), 3)
9. Pour all the water from the 3-litre jug into the 4-litre jug: (x, y) if x + y <= 4 and y > 0 → (x + y, 0)
10. Pour all the water from the 4-litre jug into the 3-litre jug: (x, y) if x + y <= 3 and x > 0 → (0, x + y)
11. Pour the 2 litres from the 3-litre jug into the 4-litre jug: (0, 2) → (2, 0)
12. Empty the 2 litres in the 4-litre jug on the ground: (2, y) → (0, y)
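As a rough illustration of how a subset of these production rules (fill, empty, and the pour rules 7-10) can be driven by a breadth-first control strategy, here is a minimal Python sketch; the rule encoding and helper names are my own, not from the slides.

from collections import deque

CAP4, CAP3 = 4, 3  # jug capacities

def successors(state):
    """Apply a subset of the water-jug production rules to (x, y), yielding (action, new_state)."""
    x, y = state
    yield "fill 4L", (CAP4, y)                      # rule 1
    yield "fill 3L", (x, CAP3)                      # rule 2
    yield "empty 4L", (0, y)                        # rule 5
    yield "empty 3L", (x, 0)                        # rule 6
    d = min(y, CAP4 - x)                            # rules 7/9: pour 3L into 4L
    yield "pour 3L->4L", (x + d, y - d)
    d = min(x, CAP3 - y)                            # rules 8/10: pour 4L into 3L
    yield "pour 4L->3L", (x - d, y + d)

def solve(start=(0, 0), goal_amount=2):
    """Breadth-first search over the state space; returns a list of actions."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state[0] == goal_amount:                 # goal test: 2 litres in the 4-litre jug
            return plan
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

print(solve())
# -> ['fill 4L', 'pour 4L->3L', 'empty 3L', 'pour 4L->3L', 'fill 4L', 'pour 4L->3L'] (Solution-1)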


Requirement of Good Control/Search Strategies
● It should cause motion: otherwise it will never solve the problem (e.g., indefinitely filling and emptying the 4-litre jug leads nowhere).
● It should be systematic: it should not arrive at the same state repeatedly and should not use more steps than necessary.
● Finally, it must be efficient in order to find a good answer.

One level Breadth FS


Two level Breadth FS
Expert System
Components of Expert System
- User Interface: allows system users to interact with the expert system.
- Knowledgebase: contains high-quality, domain-specific knowledge.
- Inference Engine: make decision or obtain new information by applying logical
rules to the knowledge base. Resolve any conflict. It employs backward and
forward chaining techniques.
Forward Chaining
● Data driven
● Bottom-up reasoning
● BFS
● Data to goal (What can happen next?)
● Applied in automatic, unconscious processing, e.g., object recognition, routine decisions
● Example: XCON

Backward Chaining
● Goal driven
● Top-down reasoning
● DFS
● Goal to data (Why did this happen?)
● Appropriate for problem solving, e.g., Where are my keys? How do I get into a PhD program?

DENDRAL uses forward chaining to create chemical structures. To diagnose bacterial infections, MYCIN employs the backward chaining technique.
Advantages of Expert Systems
● Availability: they are easily available due to the mass production of software.
● Less production cost: the production costs of expert systems are extremely reasonable and affordable.
● Speed: they offer great speed and reduce the amount of work.
● Less error rate: the error rate is much lower compared to human errors.
● Low risks: they are capable of working in environments that are dangerous to humans.
● Steady response: they avoid emotions, tension, and fatigue.

Limitations of Expert Systems
● Difficult knowledge acquisition
● Maintenance costs
● Development costs
● Adheres only to specific domains.
● Requires constant manual updates; it cannot learn by itself.
● Incapable of providing the logic behind its decisions.
Types of Expert Systems
● rule-based expert systems,
● frame-based expert systems,
● fuzzy expert systems,
● neural expert systems, and
● neuro-fuzzy expert systems.
Production systems
● Long-term memory or production memory (the rule base): domain-specific and problem-solving knowledge.
● Short-term or working memory (data or fact base): any information collected by the knowledge engineer (KE) or extracted from information systems.
● Recognize-act cycle.
● Rules consist of a condition and an action (or conclusion); sometimes the rules are called productions.
● The inference engine is often called the production-system interpreter, which executes the rules.
● An action may affect working memory by inserting, deleting, or modifying any of its elements.
Rules as a Knowledge Representation Technique
● A rule provides some description of how to solve a problem.
● A rule consists of two parts: the IF part, called the antecedent (premise or condition), and the THEN part, called the consequent (conclusion or action).
● A rule can have multiple antecedents joined by the keywords AND (conjunction), OR (disjunction), or a combination of both.
● The antecedent of a rule incorporates two parts: an object (linguistic object) and its value. The object and its value are linked by an operator.

antecedent→consequent
condition →action
premise →conclusion
X→Y
Rules can represent:
● Relation: IF the ‘fuel tank’ is empty THEN the car is dead.
● Recommendation: IF the season is autumn AND the sky is cloudy AND the
forecast is drizzle THEN the advice is ‘take an umbrella’
● Directive: IF the car is dead AND the ‘fuel tank’ is empty THEN the action is
‘refuel the car’
● Strategy: IF the car is dead THEN the action is ‘check the fuel tank’; step1
complete IF step1 is complete AND the ‘fuel tank’ is full THEN the action is
‘check the battery’; step2 is complete
● Heuristic: IF the spill is liquid AND the ‘spill pH’ < 6 AND the ‘spill smell’ is
vinegar THEN the ‘spill material’ is ‘acetic acid’
IF the ‘traffic light’ is green
THEN the action is go

Rule based System


● Knowledge is represented as a set of rules. Each rule specifies a relation,
recommendation, directive, strategy or heuristic and has the IF (condition) THEN
(action) structure.
● When the condition part of a rule is satisfied, the rule is said to fire and the action
part is executed.
● Database includes a set of facts used to match against the IF (condition) parts of
rules stored in the knowledge base.
● Inference engine carries out the reasoning whereby the expert system reaches a
solution.
● Explanation facilities enable the user to ask the expert system how a particular
conclusion is reached and why a specific fact is needed
● User interface is the means of communication between a user seeking a solution to
the problem and an expert system
Expert System Vs Human Expert Vs Conventional Program
IE executes in cycles: three phases (RAC)
The patterns contained in working memory
are matched against the conditions of the
production rules, which produces a subset
of rules known as the conflict set, whose
conditions match the contents of working
memory. One (or more) of the rules in the
conflict set is selected (conflict resolution)
and fired, which means its action is
performed. The process terminates when
no rules match the contents of working
memory
Think-Feel-Act Cycle
(Know-Choose-Give)Yourself .
Match on Classical math(a) and RETE algorithm(b)
(1) If X=2 & Y=5 & Z=3 Then ACTION1
(2) If X=2 & Y=5 Then ACTION2
Conflict Set (Specific)
R1: IF engine does not turn AND battery is not flat THEN ask user to test starter motor
R2: IF there is no spark THEN ask user to check the points
R3: IF engine turns AND engine does not start THEN ask user to check the spark
R4: IF engine does not turn THEN ask user to check the battery
R5: IF battery is flat THEN ask user to charge battery AND EXIT

The conflict set is defined as the set of pairs of the form:
〈 production rule, matching working memory elements 〉
If the initial facts are "engine does not turn" and "battery is not flat", the conflict set is:
{ 〈 R1, engine does not turn, battery is not flat 〉, 〈 R4, engine does not turn 〉 }
Conflict Resolution
Two simple rules for crossing a road:
Rule 1: IF the ‘traffic light’ is green
THEN the action is go
Rule 2: IF the ‘traffic light’ is red
THEN the action is stop
Now Add a third rule
Rule 3: IF the ‘traffic light’ is red
THEN the action is go
Two rules (2 and 3) have the same IF part, so both of them could fire when that condition is satisfied. A method for choosing a rule to fire when more than one rule can be fired in a given cycle is called conflict resolution.
Example: String Rewriting (choose the lowest numbered rule)
1. ba → ab
2. ca → ac
3. cb → bc

● A production matches if its LHS matches any portion of the string in working memory.
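A minimal Python sketch of this production system (mine, not from the slides): on each cycle it builds the conflict set of matching rules, fires the lowest-numbered one on the leftmost occurrence of its LHS, and stops when no rule matches.

# String-rewriting production system with "lowest-numbered rule" conflict resolution.
RULES = [("ba", "ab"), ("ca", "ac"), ("cb", "bc")]  # rules 1-3 from the slide

def run(working_memory: str) -> str:
    while True:
        # Conflict set: (rule number, lhs, rhs) for every rule whose LHS appears in the string.
        conflict_set = [(i, lhs, rhs) for i, (lhs, rhs) in enumerate(RULES, 1)
                        if lhs in working_memory]
        if not conflict_set:           # no rule matches: halt
            return working_memory
        i, lhs, rhs = conflict_set[0]  # conflict resolution: lowest-numbered rule
        working_memory = working_memory.replace(lhs, rhs, 1)  # fire: rewrite leftmost match
        print(f"rule {i}: {lhs} -> {rhs}, WM = {working_memory}")

run("cbaca")   # repeatedly swaps letters into order, ending in "aabcc"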
Example: Propositional Calculus (the least recently used rule)
Production rules:

1. p ^ q → goal
2. r^s→p
3. w^r→p
4. t^u→q
5. v →s
6. start → v ^ r ^ q
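A hedged sketch (not from the slides) of data-driven reasoning over these six rules: starting from the fact start, it repeatedly fires an applicable rule whose conclusions are not yet all known, preferring the least recently used one, until goal is derived.

# Forward chaining over the propositional rules, resolving conflicts by
# "least recently used": among applicable rules, fire the one fired longest ago (or never).
RULES = {
    1: ({"p", "q"}, "goal"),
    2: ({"r", "s"}, "p"),
    3: ({"w", "r"}, "p"),
    4: ({"t", "u"}, "q"),
    5: ({"v"}, "s"),
    6: ({"start"}, "v r q"),   # start -> v ^ r ^ q (three conclusions)
}

def forward_chain(facts):
    facts = set(facts)
    last_fired = {i: 0 for i in RULES}   # 0 = never fired
    tick = 0
    while "goal" not in facts:
        applicable = [i for i, (cond, concl) in RULES.items()
                      if cond <= facts and not set(concl.split()) <= facts]
        if not applicable:
            return None                  # no rule can add anything new
        i = min(applicable, key=lambda r: last_fired[r])   # least recently used
        tick += 1
        last_fired[i] = tick
        facts |= set(RULES[i][1].split())
        print(f"fire rule {i}, facts = {sorted(facts)}")
    return facts

forward_chain({"start"})   # derives v, r, q, then s, then p, then goal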

More Examples
Methods used for conflict resolution
If two rules could be chosen:
● Rule Ordering: Arrange all rules in one long priority list. The triggering rule
appearing earliest in the list has the highest priority.
● Refractoriness: If it has matched exactly the same data before and been
chosen, then ignore it. This helps to stop the system getting into infinite loops.
● Specificity: fire the one whose conditions are most specific, e.g. "it has wings and swims" over "it has wings". This method is also known as the longest matching strategy.
● Recency: If two rules could be chosen, fire the one that matches the most
recent facts in the database.
Advantages of rule-based expert systems
● Separation of Knowledge (the Rules) and Control (Recognize-Act Cycle)
● Reduce the Decision-Making Time
● Modularity of Production Rules (Rules represent chunks of knowledge)
● Pattern-Directed Control (More flexible than algorithmic control)
● Opportunities for Heuristic Control can be built into the rules
● Language Independence
● Natural Mapping onto State Space Search (Data or Goal Driven)
Uncertainty Management in Rule-based Expert Systems
● Information can be incomplete, inconsistent, uncertain, or all three (lack of exact knowledge).
● Uncertainty can be expressed numerically as a certainty/confidence factor (cf) or measure of belief (mb).
● cf is usually a real number in a particular range, e.g., 0 to 1 or -1 to 1.
Combining certainties of propositions and rules
Let P1 and P2 be two propositions and cf(P1) and cf(P2) denote their
certainties
Then cf(P1 and P2) = min(cf(P1), cf(P2))
cf(P1 or P2) = max(cf(P1), cf(P2))
Given the rule
if P1 then P2: cf = C
then certainty of P2 is given by cf(P2) = cf(P1) * C
Example
if it rains then John will catch a taxi: cf = 0.6
Suppose cf for ‘it rains’ is 0.5,
then cf of ‘John catching a taxi’ = 0.5 * 0.6 = 0.3

if it rains and I forget the umbrella then I'll get wet: cf = 0.9
Suppose cf('it rains') = 0.7 and cf('I forget the umbrella') = 0.6; then
cf of 'it rains and I forget the umbrella' = min(0.7, 0.6) = 0.6

cf of 'I'll get wet' = cf(it rains and I forget the umbrella) × 0.9 = 0.6 × 0.9 = 0.54
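A minimal sketch of these combination rules in Python (the helper names cf_and, cf_or, cf_rule are my own):

def cf_and(*cfs):
    """cf(P1 and P2 and ...) = min of the individual certainty factors."""
    return min(cfs)

def cf_or(*cfs):
    """cf(P1 or P2 or ...) = max of the individual certainty factors."""
    return max(cfs)

def cf_rule(cf_premise, rule_cf):
    """Given 'if P1 then P2: cf = C', cf(P2) = cf(P1) * C."""
    return cf_premise * rule_cf

# The umbrella example from the slide:
cf_premise = cf_and(0.7, 0.6)            # 'it rains' and 'I forget the umbrella'
print(cf_rule(cf_premise, 0.9))          # ≈ 0.54 -> cf of "I'll get wet"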
Goal-Driven and Data-Driven Strategies

Goal-driven search uses knowledge of the goal to guide the search. Use goal-driven search if;
● A goal or hypothesis is given in the problem or can easily be formulated.
(Theorem proving; medical diagnosis; mechanical diagnosis)
● There are a large number of rules that match the facts of the problem and would thus
produce an increasing number of conclusions or goals. (inefficient)
● Problem data are not given but must be acquired by the problem solver.
(e.g., medical tests determined by possible diagnosis)
Data-driven search uses knowledge and constraints found in the given data to search along lines
known to be true. Use data-driven search if:
● All or most of the data are given in the initial problem statement.
● There are a large number of potential goals, but there are only a few ways to use the facts
and the given information of a particular problem.
● It is difficult to form a goal or hypothesis.
Problem Solving by Searching
The components that define a problem in AI
Initial state: The state from which the agent infers that it is at the beginning.
Successor function: Description of possible actions and their outcome.
Given a state x, SUCCESSOR-FN(x) returns a set of 〈action, successor〉 ordered pairs.
A path from an initial state to a goal state is a solution.
The initial state and successor function define the state space.
Path cost: assigns a numeric cost to each path. The step cost of taking action a from state x to y is denoted by c(x, a, y).
Goal test : Whether the given state is a goal state?
Which of the following is not a component of AI Problem?
(a) Initial State
(b) Successor function or Action
(c) Goal test
(d) Determining start state

Well-defined problem: has a clear starting point and ending point (initial and goal states).
Ill-defined problem: has ambiguity about the initial and/or goal states, e.g., How to enjoy a happy life?
Methods of problem solving: a) trial and error b) algorithm c) heuristics
Problem Characteristics
● Is the problem decomposable into smaller sub-problems which are easy to solve?
● Can solution steps be ignored or undone?
● Is the universe of the problem predictable?
● Is a good solution to the problem absolute or relative?
● Is the solution to the problem a state or a path?
● What is the role of knowledge in solving the problem?
● Does the task of solving the problem require human interaction?
Problem Solving by Searching
- Searching is the core of many intelligent processes.
- State space = {all possible configurations}
(The 8-puzzle has 9! = 362,880 different configurations of the 8 tiles and the blank space.)
- Concept: finding information one needs.
- The process of problem-solving using
searching consists of the following steps.
● Define the problem
● Analyze the problem
● Identification of possible solutions
● Choosing the optimal solution
● Implementation
Control/Search Strategies
● The first requirement for a good control strategy is that it should cause motion.
● The second requirement for a good control strategy is that it should be systematic.
● Finally, it must be efficient in order to find a good answer.
Evaluate performance of Search algorithm

1. Completeness: Is the algorithm guaranteed to find a solution when there is one?


2. Optimality: Does the strategy find best (lowest path cost) among all the solutions?
3. Time complexity: How long does it take to find a solution?
4. Space complexity: How much memory is needed to perform the search?
Terminologies
Branching factor (b): maximum number of successors of any node.
Depth (d): depth of the shallowest goal node (number of steps along the path from the root).
m: maximum length of any path in the state space.
Search cost: time complexity.
Total cost: search cost + path cost.
Fringe: the set of nodes that have been generated but not yet expanded (also called the frontier or open list).
A problem can be defined with five components:
the initial state, actions, the transition model, the goal test, and the path cost (the initial state, actions, and transition model together define the state space).
State Space Search
Measuring the structure and complexity of problems and analyzing efficiency,
correctness and generality of solution strategy
Example of SSS: TSP
Starting at A, find the shortest path through all the cities, visiting each city exactly
once and returning to A.
Example SSS
Example of SSS: Water Jug Problem
Types of search algorithms
Based on whether domain knowledge is used:
● Uninformed search: no domain knowledge (closeness, location of the goal state, etc.); it only has information about how to traverse the given tree and test for the goal state.
● Informed search: requires details such as the distance to the goal, the steps to reach the goal, the cost of the paths, etc.

Types of Uninformed and Informed Search
Uninformed search (blind search / brute-force method) - search without information:
● Breadth-first search
● Uniform-cost search
● Depth-first search
● Depth-limited search
● Iterative deepening depth-first search
● Bidirectional search

Informed search (heuristic search) - search with information:
● Hill climbing
● Best-first search
● A*
● Memory-bounded heuristic search
● Heuristic DFS
● Beam search
Basic Terminologies
- Search tree
- State space: the set of all possible states
- Nodes: states in the state space
- Expanding: applying legal actions to the current state
- Generating: creating the new set of states
- Parent node, child node and leaf node (a node with no children)
- Frontier (open list): set of all leaf nodes available for expansion at any given point
- Search strategy: the algorithm that decides which frontier node to expand next
- Explored set (closed list)
- Repeated states and redundant paths
Uninformed Search
Referred from Russell and Norvig
Breadth-first search: Time and Memory
- Consider a uniform tree where every state has b successors.
- Total nodes generated: b + b^2 + b^3 + ··· + b^d = O(b^d)
- There are O(b^(d-1)) nodes in the explored set and O(b^d) nodes in the frontier, so the space requirement is O(b^d).
- Memory requirements are a bigger problem for BFS than the execution time.
- Exponential-complexity search problems cannot be solved by uninformed methods for any but the smallest instances.
Breadth-first search
Breadth First Search: Example
[Worked example on a weighted graph with start node S and goal nodes G1, G2, G3; figure omitted. Referred from John Levine.]
BFS expands the shallowest unexpanded node first: S, then its successors A, B and D, then their successors, and stops when a goal node is reached. The first goal encountered is G1.
Path: S A G1
Path cost: 14

Breadth First Search: Example with less work
Keeping a list of visited nodes (S, A, B, D, ...) avoids re-generating already-explored states and prunes dead ends, and reaches the same result:
Path: S A G1
Path cost: 14
Referred from John Levine
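A minimal BFS sketch in Python. The adjacency lists below are an illustrative graph of my own that only echoes the node names in the example above; they are not the slide's exact figure.

from collections import deque

def bfs(graph, start, goals):
    """Breadth-first search: expand the shallowest node first; return the first goal path found."""
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node in goals:
            return path
        for successor in graph.get(node, []):
            if successor not in visited:
                visited.add(successor)
                frontier.append(path + [successor])
    return None

# Illustrative (assumed) graph, unweighted adjacency lists:
graph = {
    "S": ["A", "B", "D"],
    "A": ["B", "G1"],
    "B": ["C"],
    "D": ["C", "E"],
    "C": ["F", "G2"],
    "E": ["G3"],
}
print(bfs(graph, "S", {"G1", "G2", "G3"}))   # ['S', 'A', 'G1']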
Depth-first search
Depth First Search: Example

[Worked example on the same weighted graph; figure omitted. Referred from John Levine.]
DFS always expands the deepest node on the current path: from S it goes to A, then to A's first successor B, then to C, and so on, skipping nodes that have already been explored (A, S, C). It keeps going deeper until it reaches goal G3, a deeper (and more costly) goal than the one BFS found.
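A corresponding DFS sketch (reusing the illustrative graph from the BFS sketch above); the only essential change is a LIFO stack in place of the FIFO queue.

def dfs(graph, start, goals):
    """Depth-first search: always expand the most recently generated node (LIFO stack)."""
    stack = [[start]]                    # stack of paths
    explored = set()
    while stack:
        path = stack.pop()               # deepest / most recently pushed path
        node = path[-1]
        if node in goals:
            return path
        if node in explored:             # skip nodes that were already expanded
            continue
        explored.add(node)
        for successor in reversed(graph.get(node, [])):  # reversed so the first-listed child is tried first
            stack.append(path + [successor])
    return None

# With the illustrative graph from the BFS sketch:
# dfs(graph, "S", {"G1", "G2", "G3"}) returns ['S', 'A', 'B', 'C', 'G2'], a deeper goal than the one BFS finds.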
Breadth First Search
- Space complexity O(b^d), where b is the branching factor and d is the depth of the shallowest solution.
- Time complexity O(b^d).
- Guaranteed to find a solution (complete).
- Guaranteed to find the shortest path (optimal, when step costs are equal).
- Exponential space complexity makes it impractical even for toy problems.

Depth First Search
- Space complexity O(bm), where b is the branching factor and m is the maximum depth of the tree.
- Time complexity O(b^m).
- Not guaranteed to find a solution (not complete).
- Not guaranteed to find the shortest path (not optimal).
- Linear space complexity makes it applicable to non-toy problems.
Types of Uninformed and Informed Search
Uninformed search (blind search / brute-force method) - search without information:
● Breadth-first search
● Depth-first search
● Depth-limited search
● Iterative deepening depth-first search
● Bidirectional search
● Uniform-cost search

Informed search (heuristic search) - search with information:
● Best-first search
● A*
● Memory-bounded heuristic search
● Heuristic DFS
Uniform-cost search
- When all step costs are equal, breadth-first search is optimal because it always expands the shallowest unexpanded node.
- Uniform-cost search is an algorithm that is optimal with any step-cost function.
- Instead of expanding the shallowest node, uniform-cost search expands the node n with the lowest path cost g(n).
- The frontier is stored as a priority queue ordered by g.
- Uniform-cost search expands nodes in order of their optimal path cost (it cares about total path cost rather than the number of steps in a path).
Uniform-cost search
[Worked example on the same weighted graph; figure omitted. Referred from John Levine.]
Uniform-cost search keeps the frontier ordered by path cost g(n) and always expands the cheapest node: S (g = 0), A (5), D (6), B (8), C (8), E (8), ... Frontier entries that can only be reached more expensively than an already-recorded cost become dead. The cheapest goal reached is G2 with path cost 13, which is preferred over G1 (cost 14) and G3 (cost 15).
Visited order: S, A, D, B, C, E. Result: goal G2, path cost 13.
Uniform-cost search
- Time complexity:

Let C* be the cost of the optimal solution and let ε be the smallest step cost, so each step gets at least ε closer to the goal. Then the number of steps is at most 1 + ⌊C*/ε⌋ (the +1 because we start from depth 0).
Hence the worst-case time complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).

- Space complexity:

By the same logic, the worst-case space complexity of uniform-cost search is also O(b^(1 + ⌊C*/ε⌋)).

- Complete and optimal?
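A UCS sketch using a priority queue ordered by g(n). The edge costs below are my own reconstruction of a graph in the spirit of the example (assumptions, not a guaranteed copy of the slide's figure); with them the cheapest goal is G2 at cost 13.

import heapq

def ucs(graph, start, goals):
    """Uniform-cost search: expand the frontier node with the lowest path cost g(n)."""
    frontier = [(0, start, [start])]          # priority queue of (g, node, path)
    best_g = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node in goals:
            return g, path
        if g > best_g.get(node, float("inf")):   # stale queue entry
            continue
        for successor, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(successor, float("inf")):
                best_g[successor] = new_g
                heapq.heappush(frontier, (new_g, successor, path + [successor]))
    return None

# Assumed edge costs (my reconstruction of a graph in the spirit of the slide's example):
graph = {
    "S": [("A", 5), ("B", 9), ("D", 6)],
    "A": [("B", 3), ("G1", 9)],
    "B": [("C", 1)],
    "D": [("C", 2), ("E", 2)],
    "C": [("F", 7), ("G2", 5)],
    "E": [("G3", 7)],
}
print(ucs(graph, "S", {"G1", "G2", "G3"}))   # (13, ['S', 'D', 'C', 'G2']): cheaper than G1 at cost 14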


Depth limited Search

- The embarrassing failure of DFS in infinite state spaces can be alleviated by supplying DFS with a predetermined depth limit l.
- Implemented with a stack.
- Time complexity O(b^l) and space complexity O(b·l).
Iterative Deepening Search (IDS)
or
Iterative Deepening Depth First Search (IDDFS)
– Gradually increases the depth limit (first 0, then 1, then 2, and so on) until a goal is found.
– Iterative deepening combines the benefits of depth-first search (linear space) and breadth-first search (completeness).
Iterative Deepening Search
– Like DFS, its memory requirements are modest: O(bd).
– Like BFS, it is complete when the branching factor is finite and optimal when the path cost is a nondecreasing function of the depth of the node.
– The total number of nodes generated in the worst case is
N(IDS) = (d)b + (d−1)b^2 + ··· + (1)b^d = O(b^d)
– There is some extra cost for generating the upper levels multiple times, but it is not large. For example, if b = 10 and d = 5:

N(IDS) = 50 + 400 + 3,000 + 20,000 + 100,000 = 123,450

N(BFS) = 10 + 100 + 1,000 + 10,000 + 100,000 = 111,110

– Time complexity: O(b^d).
– Space complexity: O(bd); as it follows the DFS scheme, it explores only one branch at a time.
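A compact sketch of depth-limited DFS plus the iterative-deepening driver (function and parameter names are mine; it is a tree search, so it assumes an acyclic successor structure):

def depth_limited(graph, node, goals, limit, path=None):
    """Recursive depth-limited DFS; returns a goal path or None."""
    path = (path or []) + [node]
    if node in goals:
        return path
    if limit == 0:
        return None
    for successor in graph.get(node, []):
        result = depth_limited(graph, successor, goals, limit - 1, path)
        if result is not None:
            return result
    return None

def iterative_deepening(graph, start, goals, max_depth=20):
    """Run depth-limited DFS with limits 0, 1, 2, ... until a goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited(graph, start, goals, limit)
        if result is not None:
            return result
    return None

# With the unweighted graph from the BFS sketch, iterative_deepening finds ['S', 'A', 'G1']
# at limit 2: the same shallowest goal that BFS finds, but using only O(bd) memory.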
Bidirectional search
– As BFS
Bidirectional search
Bidirectional search
Bidirectional search
– Both the forward and the backward search meet after each has expanded d/2 levels, where d is the depth of the shallowest goal node.
– Each search generates O(b^(d/2)) nodes.
– Time complexity: O(b^(d/2))
– Space complexity: O(b^(d/2))
– The amount of space required can be reduced by half by using IDS for one of the searches.
Bidirectional search: a comparison
branching factor b = 10
depth of shallowest goal d = 6

Bidirectional search: forward (d/2) + backward (d/2) = (10 + 100 + 1,000) × 2 = 2,220 nodes

BFS: 10 + 100 + 1,000 + 10,000 + 100,000 + 1,000,000 = 1,111,110 nodes


Comparing uninformed search strategies
Types of Uninformed and Informed Search
Uninformed search (blind search / brute-force method) - search without information:
● Breadth-first search
● Depth-first search
● Depth-limited search
● Iterative deepening depth-first search
● Bidirectional search
● Uniform-cost search

Informed search (heuristic search) - search with information:
● Best-first search
● A*
● Hill climbing
● Memory-bounded heuristic search
● Heuristic DFS
● Means-ends analysis
Informed Search
- Uninformed search looked through the search space for all possible solutions of the problem without any additional knowledge about the search space.
- A rule of thumb is a guideline that provides simplified advice or some basic rule-set regarding a particular subject or course of action.
- Heuristics include intelligent guesswork, trial and error, the process of elimination, past formulas, and the analysis of historical data to solve a problem.
- Heuristic methods make decision-making simpler and faster through shortcuts and good-enough calculations.
Informed Search
- Has knowledge about: how far we are from the goal, path cost, how to reach the goal node, etc.
- Like having a tour guide with domain-specific knowledge.
- More useful for large search spaces.
- Uses the idea of a heuristic (an informed guess or estimate), so it is also called heuristic search.
- Example: Google Maps
8-puzzle heuristics
- Slide the tiles horizontally or vertically into the empty space until the configuration matches the goal configuration.
- Branching factor: 2, 3 or 4, depending on the position of the blank.
- The average solution cost for a randomly generated 8-puzzle instance is about 22 steps.
- An exhaustive tree search to depth 22 would look at about 3^22 ≈ 3.1 × 10^10 states.
- For the 15-puzzle the corresponding figure is roughly 10^13.

Which move is best?


8-puzzle heuristics
Heuristic 1: number of tiles in the correct position (the higher the better).
Heuristic 2: number of misplaced tiles (the lower the better); for the example start state this is 8.
Heuristic 3: the sum of the distances of the tiles from their goal positions.

Which move is best?

h3 = city block distance or Manhattan distance: the count is the sum of the horizontal and vertical distances of each tile from its goal position. For the example start state:
h3 = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18
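A small Python sketch of heuristics 2 and 3. The state is a tuple of 9 entries in row-major order with 0 for the blank; the start and goal layouts below follow the Russell & Norvig example that the slide's numbers (8 misplaced tiles, Manhattan sum 18) appear to come from, so treat them as an assumption about the figure.

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)      # assumed goal layout: blank in the top-left (R&N)

def misplaced_tiles(state):
    """Heuristic 2: number of tiles (excluding the blank) not in their goal position."""
    return sum(1 for i, tile in enumerate(state) if tile != 0 and tile != GOAL[i])

def manhattan(state):
    """Heuristic 3: sum of horizontal + vertical distances of each tile from its goal cell."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        goal_index = GOAL.index(tile)
        total += abs(i // 3 - goal_index // 3) + abs(i % 3 - goal_index % 3)
    return total

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)     # assumed start state (the classic R&N example)
print(misplaced_tiles(start), manhattan(start))   # 8 18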
Heuristics functions
- From the Greek word 'heuriskein' - to discover.
- Improves the efficiency of the search process, possibly at the cost of completeness or optimality.
- A heuristic function maps problem descriptions to a measure of desirability.
- It is not an exhaustive search.
- h(n) is non-negative, and if n is a goal node then h(n) = 0.
Heuristics functions
Heuristics functions

- For a maze: the Manhattan distance to the target (row difference + column difference).
- Blocks-world problem (an NP-hard problem): h(s) = number of places with an incorrect block immediately on top of them.
Heuristics functions

- Tic-Tac-Toe: the number of tuples (2 or 3) or pieces that are aligned.


- Jigsaw puzzle: the length of the completed border and the number of large
continuous completed areas.
- The 8-Queens problem: the number of cells that are not yet attacked by any
of the queens.
Hill Climbing

- local search algorithm: considers the immediate neighbors of current solution.


- Greedy algorithm: always makes the change that it believes will lead to the best
possible solution.
- No backtracking: it does not go backward in the search space; it does not remember previous states.
- Heuristic function: all possible alternatives are evaluated on heuristic values only.
Generate and Test
1. Generate a possible solution. For example, generate a particular point in the problem space or generate a path from the start state.
2. Test to see if this is an actual solution by comparing the chosen point, or the endpoint of the chosen path, to the set of acceptable goal states.
3. If a solution has been found, quit. Otherwise go to Step 1.

Note: the Generate and Test method produces feedback which helps to decide which direction to move in the search space.
Simple Hill Climbing (G&T and Heuristics)
Step 1: Evaluate the initial state; if it is a goal state then return success and stop.

Step 2: Loop until a solution is found or there is no new operator left to apply.

Step 3: Select and apply an operator to the current state.

Step 4: Check the new state:
i. If it is a goal state, then return success and quit.
ii. Else if it is better than the current state, then assign the new state as the current state.
iii. Else if it is not better than the current state, then return to Step 2.

Step 5: Exit.
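A minimal sketch of this loop in Python (my own framing: neighbours yields candidate successor states and value is the heuristic being maximised):

def simple_hill_climbing(start, neighbours, value, is_goal):
    """Move to the first neighbour that improves on the current state; stop at a goal or a local maximum."""
    current = start
    while not is_goal(current):
        for candidate in neighbours(current):
            if is_goal(candidate) or value(candidate) > value(current):
                current = candidate          # accept the first improving (or goal) move
                break
        else:
            return current                   # no neighbour is better: local maximum / plateau
    return current

# Toy usage: maximise -(x - 7)^2 over integer states by stepping +/- 1.
result = simple_hill_climbing(
    start=0,
    neighbours=lambda x: [x - 1, x + 1],
    value=lambda x: -(x - 7) ** 2,
    is_goal=lambda x: x == 7,
)
print(result)   # 7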
Problems with Hill Climbing
Ridge
Topographical (contour) maps

A ridge is a line of high ground with height variations along its crest. The ridge is not simply a line of hills; all points of the ridge crest are higher than the ground on both sides of the ridge.
Contour lines that are close together, with two visible ends, usually form a ridge. If it is a valley, there will often be some form of watercourse in it (drainage). If contour lines are close together in a circular (or approximately circular) shape, it is usually a hill.
A ridge is a continuous elevated terrain with sloping sides. On the map it is represented by "U" or "V" shaped contour lines where the higher ground is in the wide opening.
Problems with Hill Climbing: Local maximum
A state better than all neighbours but not better than some other states farther
away.

Solution: Backtrack to some earlier node and try going in a different direction.
Problems with Hill Climbing: Plateau

A plateau is a flat area of the search space in which a whole set of neighbouring states have the same value.

Solution: Make a big jump in some direction to try to get to a new section of the search space.
Problems with Hill Climbing: Ridge
- A special kind of local maximum.
- An area higher than its surrounding areas that itself has a slope.

Example: a contour map

Solution: Apply two or more rules before doing the test. This corresponds to moving in several directions at once to find a solution. Bidirectional search can also be used.
Example: Block World
Hill Climbing Visualization
Visualization by Java
Best First Search (A), referred
Combines the advantages of both DFS and BFS (OR graph).
In the following example the Best First Search algorithm is explained considering the cost of each arc as 1, 2, 3, ... (the values of g(n)).
In the best-first search algorithm the system moves to the next (most promising) state based on the heuristic function: the node with the lowest heuristic value is chosen. In the A* algorithm, however, the next state depends on the heuristic as well as the g component, which is the cost of the path from the initial state to that particular state.

When the actual cost is considered along with the inequality h(n) <= h*(n) for all n in the search space, it is strictly an A* search. We explain this in the later slides using a graph.
Example for Best First Search and A*(always h(n)<= h*(n))
A* Algorithm continued from Best First Search

Start --- g(n) (actual cost) ---> n --- h(n) (estimated cost) ---> Goal

f(n) = g(n) + h(n)

g(n) = actual cost to get from the initial state to n
h(n) = estimated (heuristic) cost to get from n to the goal node

For the A* algorithm, use a heuristic for which h(n) <= h*(n) for all n in the search tree.
A* Algorithm
- This h(n) function is an admissible heuristic because the h value is less than or equal to the actual/exact cost of a lowest-cost path from the node to a goal.
- In other words, h(n) never overestimates the true cost from node n to a goal node.

h(n) <= h*(n): never overestimates, i.e. it underestimates (or matches) the true cost

h*(n) = optimal cost from n to the goal (the true cost to reach the goal state from node n)
h(n) = estimated or heuristic value
- A* does not always produce the shortest path, because it depends heavily on the heuristic.
- So, is A* not optimal? With an admissible heuristic it is definitely optimal.
A* in 8-Puzzle Problem (Python Code)
g(n) = Actual cost (Number of steps taken to the current state)
h(n) = estimated distance to goal (estimated number of steps taken from current state to goal state)
h(n) may be Manhattan distance (city block distance), Number of misplaced tiles, Number of tiles in correct
positions
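The slide refers to Python code that is not reproduced here; below is a minimal A* sketch for the 8-puzzle using the Manhattan-distance heuristic as h(n) and the number of moves as g(n). The state encoding and goal layout are assumptions carried over from the heuristic sketch earlier.

import heapq

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)       # assumed goal layout (blank = 0, top-left)

def manhattan(state):
    return sum(abs(i // 3 - GOAL.index(t) // 3) + abs(i % 3 - GOAL.index(t) % 3)
               for i, t in enumerate(state) if t != 0)

def neighbours(state):
    """Slide a tile into the blank: swap the blank with an adjacent cell."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def astar(start):
    """A*: expand the frontier node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(manhattan(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == GOAL:
            return path                   # sequence of states from start to goal
        for nxt in neighbours(state):
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + manhattan(nxt), g + 1, nxt, path + [nxt]))
    return None

solution = astar((1, 0, 2, 3, 4, 5, 6, 7, 8))   # a start state one move away from the goal
print(len(solution) - 1)                        # 1 step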
A* Search Example
Directed or Undirected graph (trees)
* Admissible
Class Exercise
Find the best path by applying A* search from initial node a to goal node z.
Admissible Heuristics

h(I) is an estimate, since I is not yet in the open list / has not been explored from I to G.

Heuristic function: straight-line distance


Consistent Heuristics

- A heuristic is consistent if its estimate to reach the goal is always less than or equal to the sum of the estimated cost from any of its neighbouring nodes and the cost of reaching that neighbour from the current node.
- Equivalently, it is consistent (or monotone) if its estimate is always less than or equal to the estimated distance from any neighbouring vertex to the goal, plus the cost of reaching that neighbour.
- A consistent heuristic is also admissible, i.e. it never overestimates the cost of reaching the goal.

h(n) − h(n′) <= c(n, n′)

where h is the consistent heuristic function, n is any node in the graph, n′ is any descendant of n, g is any goal node, and c(n, n′) is the cost of reaching node n′ from n.
Example
Admissibility Example
Admissible heuristics inequality:-
h(n) never overestimates the true cost; that is, the estimated cost is always less than or equal to the true cost:
h(n) <= C(n, G) for every path from n to G, i.e. h(n) <= h*(n),
where h*(n) is the true cost to reach the goal state from node n.

h(S) <= C(S,A) + C(A,C) + C(C,G) [4<=12]


<= C(S,A) + C(A,D) + C(D,G) [4<=8]
<= C(S,B) + C(B,D) + C(D,G) [4<=10]
<= C(S,B) + C(B,G) [4<=11]
h(A) <= C(A,C) + C(C,G) [3<=10]
<= C(A,D) + C(D,G) [3<=6]
h(B) <= C(B,D) + C(D,G) [3<=9]
<= C(B,G) [3<=10]
h(C) <= C(C,G) [1<=7]
h(D) <= C(D,G) [2<=4]
The heuristic function h is admissible if for all nodes n in the search tree the inequality h(n) ≤ h*(n) holds.
Example : Consistent or Monotone Heuristics
Consistent heuristics inequality:-

h(n) <= C(n, n′) + h(n′)    or equivalently    h(n) − h(n′) <= C(n, n′)

S → A: h(S) <= C(S, A) + h(A)
4 <= 2 + 3
4 <= 5

S → B → D: h(S) <= C(S, D) + h(D)
4 <= (1 + 5) + 2
4 <= 8
Consistency Check vs Admissibility Check
Example of a heuristic that is admissible but not consistent:
A three-node network; the red dotted line corresponds to the total estimated goal distance.
If h(A) = 4:
A → C → G: the distance from A to the goal is 4 ≥ h(A), i.e. h(A) ≤ h*(A): 4 ≤ 4 (admissible)
C → G: the distance from C to the goal is 3 ≥ h(C), i.e. h(C) ≤ h*(C): 1 ≤ 3 (admissible)

The heuristic drop from A to C is h(A) − h(C) = 4 − 1 = 3.
The true value is cost(A, C) = 1, hence h(A) − h(C) ⋠ cost(A, C): not consistent.

If h(A) = 2:
h(A) − h(C) = 2 − 1 = 1 ≤ cost(A, C). Consistent.


Underestimation and Overestimation
Proof By Example:

Underestimation: the actual cost is higher than the estimated cost, h(n) ≤ h*(n).
Overestimation: the actual cost is less than the estimated cost, h(n) ≥ h*(n).

I estimated that the laptop would be worth 45K: h(n).
But the shopkeeper informed me that the laptop's actual price is 40K: h*(n).
i.e. 45K ≥ 40K (an overestimate).
Problem Reduction

[AND-OR decomposition example (figure omitted): the goal "Pass CIE-I" is decomposed into alternatives such as "Study hard", "Prepare ASTU notes", and "Do cheating".]
AO* (Adaptive Optimal) Algorithm
- Best-first search algorithm (Informed)
- Based on AND-OR graphs to break complex problems
- Evaluation function: f(n) = g(n) + h(n)
- "* " symbolize that the algorithm is iterative
- Objective: find the optimal solution while minimizing computation
- Explores a solution path in two repeated phases: forward propagation (reaching the last node from the start node and labelling the nodes along the way) and backward propagation (propagating the revised costs back and correcting the path).
Practice
Comparison A* and AO*
- Both works on the best first search.
- Once AO* got a solution doesn’t explore all possible paths but A* explores all
paths.
- A* always gives the optimal solution but AO* doesn't guarantee to give the
optimal solution.
- AO* algorithm uses less memory.
Problem Reduction Search
- Is a Planning to solve a Complex problem
- Recursively decomposed into many sub-problems
- Top Down Approach
- P → P1 × P2 × P3 … PN → P1(P2 × P3) or (P1 × P2)P3 or …
- Example: matrix multiplication (choosing the parenthesization)
- Example: Tower of Hanoi. Pegs P, Q, R; move all disks from P to R using Q:
T(n, P, R) → T(n−1, P, Q) and T(1, P, R) and T(n−1, Q, R)
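The Tower of Hanoi decomposition written directly as recursion (a short sketch; the peg names P, Q, R follow the slide):

def hanoi(n, source="P", spare="Q", target="R"):
    """T(n, P, R) -> T(n-1, P, Q), then T(1, P, R), then T(n-1, Q, R)."""
    if n == 0:
        return
    hanoi(n - 1, source, target, spare)   # move n-1 disks out of the way onto the spare peg
    print(f"move disk {n} from {source} to {target}")
    hanoi(n - 1, spare, source, target)   # move the n-1 disks from the spare peg onto the target

hanoi(3)   # prints the 2^3 - 1 = 7 moves that solve the 3-disk puzzle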
Beyond Classical Search
● Local search
● Generate and Test
● Hill Climbing
● Problem reduction
● Constraint satisfaction
● Simulated annealing
● Local beam search
● Means-ends analysis
Sub-Problem Relationship
- AND-OR:
  - OR node: represents a choice between decompositions.
  - AND node: represents one given decomposition (all parts must be solved).
- MIN-MAX: one player (MAX) tries to maximise the evaluation while the opponent (MIN) tries to minimise it.
Problem Reduction
Problem Reduction
Problem Reduction
Beam Search
- Beam Search 2
Genetic Algorithm

A genetic algorithm is an adaptive heuristic search algorithm inspired by Darwin's theory of evolution in nature. A Genetic Algorithm (GA) is a search-based optimization technique based on the principles of genetics and natural selection.
What is meant by gene, chromosome, and population in a genetic algorithm, in terms of feature selection?
Means-Ends Analysis
Memory bounded heuristic Search Algorithm
Its structure is similar to that of recursive DFS, but rather than continuing indefinitely down the current path, it keeps track of the f-value of the best alternative path available from any ancestor of the current node. If the current node exceeds this limit, the recursion unwinds back to the alternative path.

Memory bound refers to a situation in which the time to complete a given


computational problem is decided primarily by the amount of free memory required
to hold the working data. This is in contrast to algorithms that are compute-bound,
where the number of elementary computation steps is the deciding factor.
Examples: Recursive Best-First Search (RBFS, an extension of best-first search), Iterative Deepening A* (IDA*) search, Simplified Memory-bounded A* (SMA*) search, etc.
Knowledge Representation
Predicate Logic
- Develops information about the objects in a more easy way and can also express the
relationship between those objects.
- "Socrates is a man", one can have expressions in the form "there exists x such that x
is Socrates and x is a man", where "there exists" is a quantifier, while x is a variable
propositional logic is the foundation of first-order logic.
- Proposition is a predicate with no arguments.
- In first-order predicate logic, variables can appear only inside a predicate.
∀x∃y:p(x,y)
Higher-order logic(HOL)
- Predicates may take predicates or functions as arguments.
- Second-order logic allows quantification over predicates or functions, or both.
e.g. ∀p ∀x : p(x) ∨ ¬p(x) is true: for every predicate p, either p(x) or not p(x) is true, regardless of what x is.
Propositional Logic
- Propositions, also called statements, are declarative sentences that are either true or false, but not both.
- For the last two examples we are unable to determine the truth value of the sentence; they are a paradox and an open sentence, respectively.

- Reference (Chapters 01-16)
Examples
- Grass is green.
- 2 + 5 = 5
- x is greater than 2.
- x = x
- 3 = 3
- Wine = Wine
- Close the door.
- Is it hot outside?
Compound Statement
How statements can be combined to produce new statements?
Meaning of the Connectives

- If P is true, then (¬P) is false, and if P is false, then (¬P) is true.
- (P Λ Q) is true if both P and Q are true.
- Let X and Y represent arbitrary propositions. Then
[¬X], [X V Y], [X Λ Y], [X → Y], and [X ↔ Y]
are propositions.
Examples
- If a person is cool or funny, then he is popular. c ∨ f ⇒ p
- If a person is popular, then he is either cool or funny. p ⇒ c ∨ f
- A person is popular if and only if he is either cool or funny. p ⇔ c ∨ f
- There is no one who is both cool and funny. (if rephrasing)
- It is not the case that there is a person who is both cool and funny. ¬(c ∧ f)
Logical Equivalence
Basic Terminologies
- A literal is either an atomic sentence or the negation of an atomic sentence: P or ¬P.
- A clause is a set of literals: {P, ¬P}, {P}, {¬P}.
- A clausal sentence is either a literal or a disjunction of literals: P, ¬P, P ∨ ¬P.
Well-Formed Formula
A predicate name followed by a list of variables, such as P(x, y), is called an atomic formula.
Wffs are constructed using the following rules:
- True and False are wffs.
- Each propositional constant and each propositional variable is a wff.
- Each atomic formula is a wff.
- If X, Y, and Z are wffs, then so are [¬X], [X V Y], [X Λ Y], [X → Y], and [X ↔ Y].
- If x is a variable (representing objects of the universe of discourse) and A is a wff, then so are ∀x A and ∃x A.
Converse and Contrapositive
P → Q is a proposition (If it rains, then I get wet).
Converse: Q → P (If I get wet, then it rains.)
Contrapositive: ¬Q → ¬P (If I don't get wet, then it does not rain.)

– The converse of a proposition is not necessarily logically equivalent to it; it may or may not take the same truth value at the same time.
– The contrapositive of a proposition is always logically equivalent to it; they take the same truth value regardless of the values of their constituent variables.
– "If it rains, then I get wet." and "If I don't get wet, then it does not rain."
Types of Proposition
Tautology: always true regardless of the value of the proposition P

Contradiction: propositions that are always false.

Contingency: proposition that is neither a tautology nor a contradiction.


If_Then Variations
Equivalent ways of writing "if p, then q":
- If p, then q.
- p implies q.
- If p, q.
- p only if q.
- p is sufficient for q.
- q if p.
- q whenever p.
- q is necessary for p.
- It is necessary for p that q.

Example renderings of "if she smiles, then she is happy":
- "If she smiles then she is happy"
- "If she smiles, she is happy"
- "If she is not happy, she does not smile." (contrapositive)
- "She is happy whenever she smiles"
- "She smiles only if she is happy" etc.
Predicate Logic: Modus Ponens (Latin for "the mode that affirms")
P → Q
antecedent (hypothesis) → consequent (conclusion)

1. If A then B.
2. A.
______________
Therefore, B.

((A → B) ∧ A) → B
Modus Ponens: Examples
1. If your king is in checkmate then you have lost the game.
2. Your king is in checkmate .
Therefore, you have lost the game.
1. If today is Wednesday, then Jyoti will go to school.
2.Today is Wednesday .
Therefore, Jyoti will go to school.
1. If I do the dishes, then my wife will be happy with me.
2. I do the dishes .
Therefore, my wife is happy with me.
Modus Ponens: Exercise
If Priyanka is at home, the home will be quiet, and if the home is quiet or it is
Sunday, it's easy to relax. Prove that it's easy to relax.

P: Priyanka is at home.
Q: The home is quiet.
S: It is Sunday.
R: It's easy to relax.
Premises:
1. P — Priyanka is at home. (Given)
2. P → Q — If Priyanka is at home, the home will be quiet. (Given)
3. (Q ∨ S) → R — If the home is quiet or it is Sunday, it's easy to relax. (Given)
4. Q — The home is quiet. Modus Ponens ([1],[2])
Modus Ponens: Exercise

Premises: P, P → Q, and (Q ∨ S) → R
Conclusion: R

1. P — Given
2. P → Q — Given
3. (Q ∨ S) → R — Given
4. Q — Modus Ponens ([1],[2])
5. Q ∨ S — Addition [4]
6. R — Modus Ponens ([3],[5])
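The derivation above can be mechanized. A minimal Python sketch (not from the slides; the modus_ponens helper and the string atoms are our own illustration), which applies modus ponens repeatedly and handles the addition rule inside the test for rule 3:

```python
# Repeatedly add the consequent of any implication whose antecedent currently holds.
def modus_ponens(facts, implications):
    changed = True
    while changed:
        changed = False
        for antecedent_holds, consequent in implications:
            if antecedent_holds(facts) and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

facts = {"P"}  # Priyanka is at home (given)
implications = [
    (lambda f: "P" in f, "Q"),               # P -> Q
    (lambda f: "Q" in f or "S" in f, "R"),   # (Q v S) -> R  (addition handled in the test)
]
print(modus_ponens(facts, implications))     # R is derived: it's easy to relax
```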
Modus Tollens (the mode that denies)
Compare modus ponens:
1. If you have a current password, then you can log on to the network.
2. You have a current password.
Therefore, you can log on to the network.
Modus tollens:
1. You can't log on to the network.
2. If you have a current password, then you can log on to the network.
Therefore, you don't have a current password.

¬q
p → q
∴ ¬p

Note: In logic problems, take the statements literally and at face value.
Universal forms: modus ponens and modus tollens
∀x(P(x) → Q(x))
P(a), where a ∈ {domain of the predicate P}
∴ Q(a)
E.g. All fish have scales. This salmon is a fish. Therefore, this salmon has scales.
All men are mortal. Jyoti is a man. Therefore, Jyoti is mortal.

∀x(P(x) → Q(x))
¬Q(a), where a ∈ {domain of the predicate P}
∴ ¬P(a)

E.g. All surfers are hot. Conrad is not hot. Therefore Conrad is not a surfer.
Conversion English Sentence into logical statements
Sentence into WFF
1. Marcus was a man. 1. man(Marcus).
2. Marcus was a Pompeian. 2. Pompeian(Marcus)
3. All Pompeians were Roman. 3. ∀x [ pompeian(x) → roman(x)]
4. Caesar was a ruler. 4. ruler(Caesar)
5. All Romans were either loyal to Caesar or 5. ∀x [ Roman(x) → loyalto(x, Caesar)∨hate(x,
hated him. Caesar)]
6. Everyone is loyal to someone. 6. ∀x ∃y [loyalto(x,y)]
7. People only try to assassinate rulers they 7. ∀x ∀y [ person(x) ∧ ruler(y)
are not loyal to. ∧tryassassinate(x,y) →¬loyalto(x,y) ]
8. Marcus tried to assassinate Caesar. 8. tryassassinate(Marcus,Caesar)
Was Marcus loyal to Caesar?
Using 7 and 8 we can attempt to prove it (ignoring the tenses).
A formal attempt at a proof using backward reasoning (from the goal):

Negate the goal: ¬loyalto(Marcus, Caesar)
↑ (7, substitution)
person(Marcus) ∧ ruler(Caesar) ∧ tryassassinate(Marcus, Caesar)
↑ (4)
person(Marcus) ∧ tryassassinate(Marcus, Caesar)
↑ (8)
person(Marcus)
A failed attempt, as we don’t have any statement to satisfy person(Marcus)
We need to add a fact into our system. As we know that Marcus was a man.
Add an another facts into the system
1. Marcus was a man. 1. man(Marcus).
2. Marcus was a Pompeian. 2. Pompeian(Marcus)
3. All Pompeians were Roman. 3. ∀x [ pompeian(x) → roman(x)]
4. Caesar was a ruler. 4. ruler(Caesar)
5. All Romans were either loyal to Caesar or 5. ∀x [ Roman(x) → loyalto(x, Caesar)∨hate(x,
hated him. Caesar)]
6. Everyone is loyal to someone. 6. ∀x ∃y [loyalto(x,y)]
7. People only try to assassinate rulers they 7. ∀x ∀y [ person(x) ∧ ruler(y)
are not loyal to. ∧tryassassinate(x,y) →¬loyalto(x,y) ]
8. Marcus tried to assassinate Caesar. 8. tryassassinate(Marcus,Caesar)

9. All men are people. 9. ∀x man(x) → person(x)

Now we can satisfy the last goal and produce proof that Marcus was not loyal to Caesar.
If a statement is false, its negation is true, and vice versa.
Computable predicate and resolution tree (for proof)
Turing Machine: Computable function
3 ways of representing Class Membership: Instance and Isa

Approach 1: class membership is represented with unary predicates.
1. man(Marcus)
2. Pompeian(Marcus)
3. ∀x [Pompeian(x) → Roman(x)]
4. ruler(Caesar)
5. ∀x [Roman(x) → loyalto(x, Caesar) ∨ hate(x, Caesar)]

Approach 2: the instance predicate, a binary predicate instance(object, class).
1. instance(Marcus, man)
2. instance(Marcus, Pompeian)
3. ∀x [instance(x, Pompeian) → instance(x, Roman)]
4. instance(Caesar, ruler)
5. ∀x [instance(x, Roman) → loyalto(x, Caesar) ∨ hate(x, Caesar)]

Approach 3: the isa predicate encodes the subclass-superclass relationship. If an object is an
instance of a subclass, then it is an instance of the superclass (this requires an extra axiom).
1. instance(Marcus, man)
2. instance(Marcus, Pompeian)
3. isa(Pompeian, Roman)   (Pompeian is a subclass, Roman is a superclass)
4. instance(Caesar, ruler)
5. ∀x [instance(x, Roman) → loyalto(x, Caesar) ∨ hate(x, Caesar)]
6. ∀x∀y∀z [instance(x, y) ∧ isa(y, z) → instance(x, z)]
Points to ponder on class membership representation
- Although class and superclass memberships are important facts for
representations, need not be represented with instance and Isa predicates.
- Usually several different ways of representing a given fact within a particular
representational framework. (deduction and test)
- Many errors in reasoning performed by KB programs are the results of
inconsistent representation decisions.
- Isa hierarchies is used in many logic-based systems.
Facts involving Marcus
1. Marcus was a man. — man(Marcus)
2. Marcus was a Pompeian. — Pompeian(Marcus)
3. Marcus was born in 40 A.D. — born(Marcus, 40)
4. All men are mortal. — ∀x [man(x) → mortal(x)]
5. All Pompeians died when the volcano erupted in 79 A.D. — erupted(volcano, 79) ∧ ∀x [Pompeian(x) → died(x, 79)]
6. No mortal lives longer than 150 years. — ∀x ∀t1 ∀t2 [mortal(x) ∧ born(x, t1) ∧ gt(t2 − t1, 150) → dead(x, t2)]   (gt is a computable predicate)
7. It is now 2023. — now = 2023
8. Alive means not dead. — ∀x ∀t [alive(x, t) → ¬dead(x, t)] ∧ [¬dead(x, t) → alive(x, t)]
9. If someone dies, then he is dead at all later times. — ∀x ∀t1 ∀t2 [died(x, t1) ∧ gt(t2, t1) → dead(x, t2)]
Is Marcus alive? (using the facts about Marcus summarized above)

Negate the goal: ¬alive(Marcus, now)
↑ (8, substitution) [Marcus/x, now/t]
dead(Marcus, now)
↑ (9, substitution) [Marcus/x, now/t2]
died(Marcus, t1) ∧ gt(now, t1)
↑ (5, substitution) [Marcus/x, 79/t1]
Pompeian(Marcus) ∧ gt(now, 79)
↑ (2)
gt(now, 79)
↑ (7, substitution)
gt(2023, 79)
↑ (compute gt)
NIL [empty clause: the proof is complete]
Is Marcus alive? (another way, using the same facts)

Negate the goal: ¬alive(Marcus, now)
↑ (8, substitution) [Marcus/x, now/t]
dead(Marcus, now)
↑ (6, substitution) [Marcus/x, now/t2]
mortal(Marcus) ∧ born(Marcus, t1) ∧ gt(now − t1, 150)
↑ (4, substitution) [Marcus/x]
man(Marcus) ∧ born(Marcus, t1) ∧ gt(now − t1, 150)
↑ (1)
born(Marcus, t1) ∧ gt(now − t1, 150)
↑ (3, substitution) [40/t1]
gt(now − 40, 150)
↑ (7)
gt(2023 − 40, 150)
↑ (compute subtraction)
gt(1983, 150)
↑ (compute gt)
NIL [empty clause: the proof is complete]
Clause
A clause that contains only ∨ is called a disjunctive clause and only ∧ is called a
conjunctive clause.

p∨¬q∨r: a disjunctive clause

¬p∧q∧¬r: a conjunctive clause

¬p∧¬q∨r: neither
Conjunctive Normal Form (CNF)

- Standard representation of logical formulas in propositional logic.


- A formula consists of a conjunction (AND) of clauses, and each clause is a
disjunction (OR) of literals.
- A literal is either a propositional variable (e.g., A, B, C) or the negation of a
propositional variable (e.g., ¬A, ¬B, ¬C).
- "If it rains, then the streets will be wet." → (¬rain∨wet)
- (¬rain∨wet) is a clause consists of two literals ‘¬rain’ and ‘wet’
- CNF can also be described as an AND of ORs.
Two Normal Forms
- Putting a bunch of disjunctive clauses together with ∧, it is called conjunctive
normal form.
For example: (p∨r)∧(¬q∨¬r)∧q is in conjunctive normal form.
- Putting conjunctive clauses together with ∨, it is called disjunctive normal
form.
For example: (p∧¬q∧r)∨(¬q∧¬r) is in disjunctive normal form.
Examples of logical formulas represented in CNF
Example1: Simple CNF Formula
Logical Formula: (A∨B)∧(¬A∨C)
This formula is in CNF because it is a conjunction of two disjunctions, each of which contains literals.
Example2: More Complex CNF Formula
Logical Formula: (A∨B∨¬C)∧(¬A∨D)∧(C∨¬D∨E)
This formula consists of three clauses, each containing disjunctions of literals, and they are all connected with the
conjunction operator.
Example3: CNF Formula with Implications
Logical Formula: (A∧B→C)∧(A→D)
To represent implications in CNF, we can use the following equivalence:
A→B is equivalent to ¬A∨B. So, the CNF representation would be: (¬A∨¬B∨C)∧(¬A∨D)
Example4: CNF Formula with Equivalences
Logical Formula: (A∧B)↔C
To represent equivalences in CNF, we can use the following equivalences:
A↔B is equivalent to (A→B)∧(B→A). So, the CNF representation would be:
((¬A∨C)∧(¬B∨C))∧((A∨¬C)∧(B∨¬C))
Conjunctive Normal Form (CNF)
1) Eliminate bi-conditional implication by replacing A↔B with (A→B)∧(B→A)
2) Eliminate implication by replacing A→B with ¬A V B.
3) In CNF, negation(¬) appears only in literals, therefore we move it inwards as:
¬ ( ¬A) = A (double-negation elimination)
¬ (A Λ B) = ( ¬A V ¬B) (De Morgan)
¬(A V B) = ( ¬A Λ ¬B) (De Morgan)
4) Finally, using distributive law on the sentences, and form the CNF as:
(A1 V B1) Λ (A2 V B2) Λ … Λ (An V Bn).
p∨(q∧r)≡(p∨q)∧(p∨r)
p∧(q∨r)≡(p∧q)∨(p∧r)
p↔q=(p→q)∧(q→p)

Note: CNF can also be described as AND of ORS


Conjunctive Normal Form (CNF)

¬((¬p→¬q)∧¬r) ≡ ¬((¬¬p∨¬q)∧¬r) Eliminate implication

≡ ¬((p∨¬q)∧¬r) Eliminate double-negation


≡ ¬(p∨¬q)∨¬¬r DeMorgan's Law

≡ ¬(p∨¬q)∨r Eliminate double-negation

≡ (¬p∧¬¬q)∨r DeMorgan's Law

≡ (¬p∧q)∨r Eliminate double-negation

≡ (¬p∨r)∧(q∨r) Apply Distributive Law
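The hand conversion above can be cross-checked with a library. A minimal sketch assuming SymPy is installed (it uses sympy.logic.boolalg.to_cnf); the printed result should match the CNF derived above, up to clause ordering:

```python
# Verify the derived CNF of ¬((¬p → ¬q) ∧ ¬r) with SymPy.
from sympy.abc import p, q, r
from sympy.logic.boolalg import Not, And, Implies, to_cnf

formula = Not(And(Implies(Not(p), Not(q)), Not(r)))
print(to_cnf(formula, simplify=True))   # should match (¬p ∨ r) ∧ (q ∨ r)
```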


Disjunctive Normal Form (DNF)

(p→q)→(¬r∧q) ≡ ¬(p→q)∨(¬r∧q) (Eliminate the outer implication)
≡ ¬(¬p∨q)∨(¬r∧q) (Eliminate the inner implication)
≡ (¬¬p∧¬q)∨(¬r∧q) (De Morgan's Law)
≡ (p∧¬q)∨(¬r∧q) (Eliminate double negation)
Natural Sentence → CNF Representation
- "If it rains, then the streets will be wet." → (¬rain ∨ wet)
- "The sun shines or it's daytime." → (sun ∨ daytime)
- "Either the food is delicious, or I won't eat it." → (delicious_food ∨ ¬eat_food)
- "If the computer crashes, either I'll reboot it or call tech support." → (¬crash_computer ∨ reboot ∨ call_support)
- "The car won't start unless the battery is charged and the ignition is turned on." → start_car → (charged_battery ∧ ignition_on), i.e. (¬start_car ∨ charged_battery) ∧ (¬start_car ∨ ignition_on)
- "Either Alice or Bob will win the competition, but not both." → (Alice_wins ∨ Bob_wins) ∧ (¬Alice_wins ∨ ¬Bob_wins)
Eliminate Implication

All Romans who know Marcus either hate Caesar or think that anyone who hates anyone is crazy.
WFF: ∀x: [Roman(x) ∧ know(x, Marcus)] → [hate(x, Caesar) ∨ ∀y: ∃z: (hate(y,z) → thinkcrazy(x,y))]

Eliminate the outer implication (a → b ≡ ¬a ∨ b):
∀x: ¬[Roman(x) ∧ know(x, Marcus)] ∨ [hate(x, Caesar) ∨ ∀y: ∃z: (hate(y,z) → thinkcrazy(x,y))]
∀x: [¬Roman(x) ∨ ¬know(x, Marcus)] ∨ [hate(x, Caesar) ∨ ∀y: ∃z: (hate(y,z) → thinkcrazy(x,y))]

Eliminate the inner implication: ∀y: ∃z: (hate(y,z) → thinkcrazy(x,y)) becomes ∀y: ∃z: (¬hate(y,z) ∨ thinkcrazy(x,y))

So the whole WFF becomes:
∀x: [¬Roman(x) ∨ ¬know(x, Marcus)] ∨ [hate(x, Caesar) ∨ ∀y: ∃z: (¬hate(y,z) ∨ thinkcrazy(x,y))]   [as given in the book]
Conversion WFF to clause form
All Romans who know Marcus either hate Caesar or think that anyone who hates anyone is crazy.
WFF: ∀x [Roman(x) ∧ know(x, Marcus)]→[hate(x, Caesar) ∨ ∀y: ∃z (hate(y,z) → thinkcrazy(x,y)) ]

1. Eliminate implications (a → b ≡ ¬a ∨ b):
∀x: ¬[Roman(x) ∧ know(x, Marcus)] ∨ [hate(x, Caesar) ∨ ∀y: ∃z: (¬hate(y,z) ∨ thinkcrazy(x,y))]

2. Push the ¬ symbol inward to each literal, using:
¬∀x P(x) ≡ ∃x ¬P(x), ¬∃x P(x) ≡ ∀x ¬P(x), ¬(¬P) ≡ P, ¬(a ∧ b) ≡ ¬a ∨ ¬b, ¬(a ∨ b) ≡ ¬a ∧ ¬b

3. Standardize the variables so that each quantifier binds a unique variable:
∀x P(x) ∨ ∀x Q(x) is converted to ∀x P(x) ∨ ∀y Q(y)

4. Move all quantifiers to the left of the formula without changing their relative order (Prenex Normal Form):
∀x ∀y ∃z: [¬Roman(x) ∨ ¬know(x, Marcus)] ∨ [hate(x, Caesar) ∨ (¬hate(y,z) ∨ thinkcrazy(x,y))]

5. Eliminate existential quantifiers (Skolemization).

6. Drop the prefix (universal quantifiers).

7. Convert into a conjunction of disjuncts using the associative and distributive properties.

8. Create a separate clause corresponding to each conjunct.

9. Standardize apart the variables in the set of clauses so that no two clauses refer to the same variable.
Prenex Normal Form (PNF)
All quantifiers appear at the beginning of the formula

Q1x1 · · Qnxn. F [x1, · · ·, xn]

where Qi ∈ {∀, ∃} and F is quantifier-free.

Every FOL formula F can be transformed to formula F′ in PNF such that F′ ⇔ F .

Example: Find equivalent PNF of F : ∀x. ¬(∃y . p(x, y ) ∧ p(x, z)) ∨ ∃y . p(x, y )

1. Push ¬, to the end of the formula


F1 : ∀x. (∀y . ¬p(x, y ) ∨ ¬p(x, z)) ∨ ∃y . p(x, y)
2. Rename quantified variables to fresh names
F2 : ∀x. (∀y . ¬p(x, y ) ∨ ¬p(x, z)) ∨ ∃w . p(x, w )
3. Remove all quantifiers to produce quantifier-free formula
F3 : ¬p(x, y ) ∨ ¬p(x, z) ∨ p(x, w )
4. Add the quantifiers before F3
F4 : ∀x. ∀y . ∃w . ¬p(x, y ) ∨ ¬p(x, z) ∨ p(x, w )
F4′ : ∀x. ∃w . ∀y . ¬p(x, y ) ∨ ¬p(x, z) ∨ p(x, w )
Resolution in propositional logic
- Fundamental inference rule
- Used for proving the validity of logical statements or resolving logical
inconsistencies.
- resolution rule: if two clauses contain complementary literals, then infer a new
clause that is the union of remaining literals from both clauses after removing the
complementary ones.
- Proceeds by building refutation proofs, i.e., proofs by contradictions.
Resolution in propositional logic: example
General procedure:
1. Given clauses: C1 = (p ∨ q ∨ r), C2 = (¬p ∨ ¬q ∨ s)
2. Identify complementary literals: p in C1 and ¬p in C2 are complementary.
3. Apply resolution: remove the complementary literals and form a new clause from the remaining literals: (q ∨ r ∨ ¬q ∨ s)
4. Simplify the new clause by removing any redundant/complementary literals.
5. Conclusion: if the empty clause (⊥) is derived, the original set of clauses is unsatisfiable (contradictory); if it cannot be derived, the set is satisfiable.

Worked case (taken for the proof):
1. C1 = (¬p ∨ r), C2 = (p ∨ q)
2. ¬p in C1 and p in C2 are complementary.
3. Resolving C1 and C2 on p/¬p gives (q ∨ r).
4. Result: (q ∨ r). The empty clause (⊥) cannot be derived here, which indicates that the original set of clauses is
satisfiable.
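A minimal Python sketch of a single resolution step (our own representation, not from the slides): clauses are frozensets of string literals, and "~p" denotes ¬p.

```python
# One resolution step on propositional clauses.
def negate(literal):
    return literal[1:] if literal.startswith("~") else "~" + literal

def resolve(c1, c2):
    """Return all resolvents of two clauses; an empty frozenset means contradiction."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return resolvents

C1 = frozenset({"~p", "r"})
C2 = frozenset({"p", "q"})
print(resolve(C1, C2))   # one resolvent: {'q', 'r'}, i.e. (q ∨ r)
```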
Prenex Normal Form (PNF)
1. Eliminate Biconditional and Implication.
2. Move Negations Inwards (De Morgan's Laws)
3. Skolemization (eliminate existential quantifiers)
Replace each existentially quantified variable with a Skolem constant, or a Skolem function of the
enclosing universally quantified variables, maintaining the scope of the remaining quantifiers while
moving them to the left.
4. Drop the universal quantifiers
The remaining variables are treated as implicitly universally quantified.
5. Standardize Variables
Example1: ∀x(P(x)→Q(x))∨∃yR(y)
1. Eliminate Implication: ∀x(¬P(x)∨Q(x))∨∃yR(y)

2. Move Negations Inwards: ∀x(¬P(x)∨Q(x))∨∃yR(y)

3. Quantifier Pulling: move both quantifiers to the front (no renaming is needed here).

4. Result: ∀x∃y(¬P(x) ∨ Q(x) ∨ R(y))
Example2: ∃x∀y(P(x,y)∧Q(y))
1. Eliminate Implication: ∃x∀y(P(x,y)∧Q(y))

2. Move Negations Inwards: ∃x∀y(P(x,y)∧Q(y))

3. Quantifier Pushing: ∃x∀y(P(x,y)∧Q(y))

4. Result: ∃x∀y(P(x,y)∧Q(y))
Example3: ∀x∃y(P(x)∧Q(y))→∃zR(z)
1. Eliminate Implication: ¬∀x∃y(P(x)∧Q(y))∨∃zR(z)

2. Move Negations Inwards: ∃x∀y(¬(P(x)∧Q(y)))∨∃zR(z)

3. Quantifier Pulling: ∃x∀y∃z((¬(P(x)∧Q(y))) ∨ R(z))

4. Result: ∃x∀y∃z(¬(P(x)∧Q(y)) ∨ R(z))
Example4: ∀x(P(x)∨∃yQ(x,y))→∃z∀wR(z,w)
1. Eliminate Implication: ¬∀x(P(x)∨∃yQ(x,y))∨∃z∀wR(z,w)

2. Move Negations Inwards: ∃x¬(P(x)∨∃yQ(x,y))∨∃z∀wR(z,w)


∃x(¬P(x)∧∀y¬Q(x,y))∨∃z∀wR(z,w)

3. Quantifier Pushing: ∃x∀y(¬P(x)∧¬Q(x,y))∨∃z∀wR(z,w)

4. Result: ∃x∀y∃z∀w(¬P(x)∧¬Q(x,y)∨R(z,w))
Resolution in Predicate Logic
1. Anyone passing his/her AI exams and winning the lottery is happy.
∀X(pass(X, AI)) ⋀ win(X, lottery) → happy(X))
2. Anyone who studies or is lucky can pass all his/her exams.
∀X ∀Y(study(X) ∨ lucky(X) → pass(X,Y))
3. Jyoti did not study but he/she is lucky.
¬ study(Jyoti) ⋀ lucky(Jyoti)
4. Anyone who is lucky wins the lottery.
∀X(lucky(X) → win(X,lottery))
Predicate into clausal form
1. ∀X(pass(X, AI) ⋀ win(X, lottery) → happy(X))
   Clause 1: ¬pass(X, AI) ⋁ ¬win(X, lottery) ⋁ happy(X)
2. ∀X ∀Y(study(X) ∨ lucky(X) → pass(X,Y))
   Clause 2: ¬study(Y) ⋁ pass(Y,Z)
   Clause 3: ¬lucky(W) ⋁ pass(W,V)
3. ¬study(Jyoti) ⋀ lucky(Jyoti)
   Clause 4: ¬study(Jyoti)
   Clause 5: lucky(Jyoti)
4. ∀X(lucky(X) → win(X, lottery))
   Clause 6: ¬lucky(U) ⋁ win(U, lottery)
Clause 7: ¬happy(Jyoti)   (negated goal)
Axioms in clause form
Prove that Jyoti is happy
1. ¬pass(X, AI) ⋁ ¬win(X, lottery) ⋁ happy(X)
2. ¬study(Y) ⋁ pass(Y,Z)
3. ¬lucky(W) ⋁ pass(W,V)
4. ¬ study(Jyoti)
5. lucky(Jyoti)
6. ¬lucky(U) ⋁ win(U,lottery)
7. ¬happy(Jyoti)
Resolution refutation, step by step:

1. Resolve clause 1, ¬pass(X, AI) ⋁ ¬win(X, lottery) ⋁ happy(X), with clause 6, ¬lucky(U) ⋁ win(U, lottery),
   on win/¬win using the substitution {U/X}:
   ¬pass(U, AI) ⋁ happy(U) ⋁ ¬lucky(U)
2. Resolve this with clause 7, the negated goal ¬happy(Jyoti), using {Jyoti/U}:
   ¬pass(Jyoti, AI) ⋁ ¬lucky(Jyoti)
3. Resolve with clause 5, lucky(Jyoti):
   ¬pass(Jyoti, AI)
4. Resolve with clause 3, ¬lucky(W) ⋁ pass(W,V), using {Jyoti/W, AI/V}:
   ¬lucky(Jyoti)
5. Resolve with clause 5, lucky(Jyoti):
   the empty clause.

Note: an empty resolvent means a contradiction, so the negated goal is refuted and Jyoti is happy.


Multiple Quantification
(1) (∀x)(∀y)Lxy — Everybody loves everybody (including themselves).

(2) (∃x)(∃y)Lxy — Somebody loves somebody. (The somebody can be oneself or someone else.)

(3) (∃x)(∀y)Lxy — There is one person who loves everyone. (There is one person such that, for all persons, the first loves the second; think of God as an example.)

(4) (∃x)(∀y)Lyx — There is one person who is loved by everyone. (From (3), reversing the order of 'x' and 'y' in the arguments of 'L'.)

(5) (∀x)(∃y)Lxy — Every person is such that there is someone whom the first loves; each person has an object of their affection.

(6) (∀x)(∃y)Lyx — Everyone is loved by someone or other; no one goes unloved. (6) says something significantly weaker than (3).

(7) (∃y)(∀x)Lyx — Is this a new sentence? No: it is logically equivalent to (3).
Multiple Quantification (pattern of quantifiers and variables)
(3) (∃x)(∀y)Lxy — There is one person who loves everyone (see above).

(7) (∃y)(∀x)Lyx

The diagram (omitted) shows that the pattern is the same:
the variable at position 1, bound by the existential quantifier, is tied to the variable at position 3, and
the variable at position 2, bound by the universal quantifier, binds the variable at position 4.
More examples on Multiple
Quantification:

Let L(x, y) be the statement “x


loves y”, where the domain for
both x and y consists of all
people in the world
Basic steps of Unification
Fail if any of the following:
➢ Predicate Symbol
➢ Number of Arguments
➢ Arguments Pairs (constants, variables, functions etc…)
○ Already Identical
○ Not Identical (substitution required)
➢ Substitution: finite set of pairs of variables and terms are called replacements.
e.g. a|x , f(b)|y , w|v
Replacing every occurrence of every variable (here x,y,w) in the substitution.
➢ P(a,a,y,z) [{a|x} , {f(b)|y} , {w|v}] = P(x,x,f(b),z)
➢ Programming language Prolog (for pattern matching) is based on Unification.
Unification Predicate
man(jyoti) and ¬man(jyoti) : Complementary Predicate
man(jyoti) and ¬man(roop) : Not Complementary Predicate
love(x,y) and love(roop, jyoti): unify by substitution one pair at a time
trytoassassinate(x,y) and trytoassassinate(Marcus,Caesar)
like(x,x) and like(y,z): unify one argument pair at a time, e.g. [x/y, x/z] (read as "x for y, then x for z");
composing the unification steps in the other direction gives the equivalent unifier [y/x, z/y].
Comments on unification: which pairs unify
- constant with constant: SUCCESS if the constants are the same, FAIL otherwise.
- constant with variable: {variable/constant}.
- constant with function: FAIL.
- variable with variable: {variable/variable}.
- variable with function: {variable/function} if the variable does not occur in the function, FAIL otherwise (the occurs check).
- function with function: if the function symbols are the same and the arguments unify, the result of unifying the arguments; FAIL otherwise.
Examples
P(x) unifies with P(A) using the substitution {x/A}

P(f(x), y, g(y)) unifies with P(f(w), z, g(A)) using the substitutions {x/w, y/z, z/A}

P(f(x), y, g(y)) fails to unify with P(f(w), B, g(A)) since y is unified with B and so it
cannot be unified with A

P(f(x), y, g(y)) fails to unify with P(f(w), f(A), g(A)) since y is unified with f(A) and so
it cannot be unified with A
knows(John, X) unifies with knows(y, Mary), for example, using {John/y, Mary/X} (see the exercise below).
Examples Unification
Predicates — Substitution (θ)
love(x,y) and love(roop, jyoti) — [roop/x, jyoti/y], equivalently [jyoti/y, roop/x], or [ ? ]
P(x, f(y)) and P(a, f(g(z))) — [a/x, g(z)/y]
Q(a, g(x,a), f(y)) and Q(a, g(f(b),a), x) — [f(b)/x, b/y]
R(h(x), a) and R(f(y), b) — fail (different function symbols h and f; constants a and b differ)
L(x, f(a)) and L(b, f(g(z))) — fail (the constant a cannot unify with g(z))
Cascaded Substitution
When two or more substitutions are applied in sequence, we can define a single substitution that has the same effect.
r{a,b,c} [{x|a},{f(e)|b},{z|c}] = r{x,f(e),z}
r{x,f(e),z} [ {u|e},{v|z}] = r{x,f(u),v}
r{a,b,c} [{x|a},{f(d)|b},{e|c}] = r{x,f(d),e}
Non-Uniqueness of Unification
Unifier 1:
p(x,y) [{a|x},{b|y},{b|v}] = p(a,b)
p(a,v) [{a|x},{b|y},{b|v}] = p(a,b)
Unifier 2:
p(x,y) [{a|x},{f(z)|y},{f(z)|b}] = p(a,f(z))
p(a,v) [{a|x},{f(z)|y},{f(z)|b}] = p(a,f(z))
Unifier 3:
p(x,y) [{a|x},{v|y}] = p(a,v)
p(a,v) [{a|x},{v|y}] = p(a,v)
Most General Unifier (MGU)
If two expressions are unifiable then they have an MGU that is unique upto
variable permutation.

p(x,y) [{a|x},{v|y}] = p(a,v)


p(a,v) [{a|x},{v|y}] = p(a,v)

p(x,y) [{a|x},{y|v}] = p(a,y)


p(a,v) [{a|x},{y|v}] = p(a,y)
Example Questions
Find the disagreement set
W = {P(x, f(y, z)), P(x, a), P(x, g(h(k(x))))} D = {f(y, z), a, g(h(k(x)))}
Find a most general unifier for the set
W = {P(a, y), P(x, f (b))} θ = {a/x, f (b)/y}

Find a most general unifier for the set


W = {P(a, x, f (g(y))), P(z, f (z), f (u))} θ = {a/z, f (a)/x, g(y)/u}

Determine whether or not the set
W = {Q(f(a), g(x)), Q(y, y)} is unifiable.  W is not unifiable.

Exercise: Determine whether each of the following sets of expressions is unifiable. If yes, give an MGU.
- Two FOL terms unify with each other if there is a substitution list that makes
them syntactically identical:
- man(x), man(Socrates) unify using the substitution Socrates/x
- Can we unify: knows(John, x) knows(x, Mary) No!
- What about knows(John, x) knows(y, Mary) Mary/x, John/y
- loves(x, x), loves(John, y) unify using John/x, and John/y
- loves(x, x), loves(John, Mary) can’t unify
Apply a substitution to an expression
Syntactically substitute terms/var:

- mortal(x) Λ man(y) , mortal(Socrates) Λ man(uncle(z)): Socrates/x,uncle(z)/y


- loves(uncle(x), y), loves(z, aunt(z)): unify with uncle(x)/z, aunt(uncle(x))/y
loves(uncle(x), aunt(uncle(x)))
- W = {Q(a, x, f (x)), Q(a, y, y)}
- W = {Q(x, y, z), Q(u, h(v, v), u)}

- Standardize apart before unifying:


- knows(x, Mary) is logically equivalent to knows(y, Mary)!
- May be many substitutions that unify two formulas
- MGU is unique (up to renaming)
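A minimal Python sketch of unification with an occurs check (our own helper names and term encoding, not the textbook pseudocode): variables are lowercase strings, constants are capitalised strings, and atoms/compound terms are tuples.

```python
# Unification with occurs check over a simple tuple-based term representation.
def is_var(t):
    return isinstance(t, str) and t[0].islower()

def substitute(t, theta):
    if is_var(t):
        return substitute(theta[t], theta) if t in theta else t
    if isinstance(t, tuple):
        return tuple(substitute(a, theta) for a in t)
    return t

def occurs(v, t, theta):
    t = substitute(t, theta)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, theta) for a in t)

def unify(x, y, theta=None):
    if theta is None:
        theta = {}
    x, y = substitute(x, theta), substitute(y, theta)
    if x == y:
        return theta
    if is_var(x):
        return None if occurs(x, y, theta) else {**theta, x: y}
    if is_var(y):
        return unify(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta
    return None   # mismatched constants / predicate symbols / arities

print(unify(("knows", "John", "x"), ("knows", "y", "Mary")))  # {'y': 'John', 'x': 'Mary'}
print(unify(("loves", "x", "x"), ("loves", "John", "Mary")))  # None (cannot unify)
```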
Structured Knowledge Representation
Slot-and-filler: slot → attribute-value pair; filler → value (string, number, or a
pointer to another slot). It enables attribute values to be retrieved quickly.
● Weak slot-and-filler knowledge representation:
- Semantic Networks(Net)
- frames: Used to describe a collection of attributes that a given object
possesses (eg: description of a object e.g. chair).
● Strong slot-and-filler knowledge representation:
- conceptual dependency
- scripts: It used in Conceptual Dependency framework. Unlike frame it
describes a sequence of events.
AI data structure
- Graphical structures designed to represent and organize knowledge,
- Semantic Network
- Frame
- Conceptual dependency
- Script
Semantic Nets
● Network of semantic relations between concepts. It is networks of words with
rich sets of relations.
● Used for propositional information; originally developed for mapping sentences in
natural language processing (e.g., Schank's graphs).
● Declarative graphics representation for language understanding and
translation consisting of nodes and arcs.
● Nodes represent objects, concepts, events (Generic and Individual/Instance)
● Arcs represent relationships between nodes
Examples of statements
- Jyoti is JECian, Jyoti is ASTUrian.
- Jyoti gave the green flowered vase to her lovely brother.
- Volleyball is a game, it is played with a ball, and it is popular in India.
- Roop is 46 years old, Jyoti is 45 years old. Roop is older than Jyoti.
- Circus Elephant is an Elephant. Elephant has head. Elephant has trunk.
Trunk has mouth. Elephant is an animal. Animal has heart. Circus elephant is
a performer.Performer has costumes.
- Jyoti is taller than Roop.
- Jyoti’s height is 167 CM.
Examples of statements
- Jyoti is an Assistant Professor. Jyoti works in the department of CSE at JEC
campus. Jyoti is 46 years old. Jyoti has blue eye. Jyoti is taller than Roop.
- Jyoti is an Assistant Professor, Jyoti works in the department of CSE at JEC
campus, Jyoti is 46 years old, Jyoti has blue eye, Jyoti is taller than Roop.
- Roop’s height is 165 CM. Jyoti’s height is 175 CM. Jyoti is taller than Roop.
- Emu’s are bird, typically birds fly and have wings, Emu runs.
- Some Roses has yellow.
An Example:
Draw the semantic network that represents the
statements given below:
-Tom is a cat.
-Tom caught a bird.
-Tom is owned by John.
-Tom is ginger in colour.
-Cats like cream.
-The cat sat on the mat.
-A cat is a mammal.
-A bird is an animal.
-All mammals are animals.
-Mammals have fur.
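A minimal Python sketch (our own encoding, not from the slides) storing the Tom statements as (subject, relation, object) triples, the usual machine form of a semantic net, and following isa links for inheritance:

```python
# Semantic-net triples for the Tom-the-cat example, with isa inheritance.
triples = [
    ("Tom", "isa", "cat"),
    ("Tom", "caught", "bird"),
    ("Tom", "owned_by", "John"),
    ("Tom", "colour", "ginger"),
    ("cat", "likes", "cream"),
    ("cat", "sat_on", "mat"),
    ("cat", "isa", "mammal"),
    ("bird", "isa", "animal"),
    ("mammal", "isa", "animal"),
    ("mammal", "has", "fur"),
]

def inherits(node, relation, value):
    """Follow isa links upward so that Tom inherits 'has fur' from mammal."""
    if (node, relation, value) in triples:
        return True
    parents = [o for (s, r, o) in triples if s == node and r == "isa"]
    return any(inherits(p, relation, value) for p in parents)

print(inherits("Tom", "has", "fur"))   # True, via Tom -isa-> cat -isa-> mammal
```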
Semantic networks represent instances of binary predicate relationships in predicate logic,
e.g. "Shree is taller than Amit."
instance(Cricket, game)
hometeam(Cricket, India)
visitingteam(Cricket, South Africa)

Make a clear distinction between generic concepts and individual instances.
Non Binary Relation
We can represent the generic give event as a relation involving three things:
– A giver
– A recipient
– An object
Knowledge Graph

Event Extraction

Dependency Parse Tree UD-Example


Partitioned Semantic Network
- A dog bites a mail-carrier.
- Every dog has bitten every mail-carrier.
Partitioned Semantic Network
- Every batter hit a ball.
Advantages of Semantic Nets
● Semantic networks are easy to visualize and understand by humans.
● They are easier to implement and can be more efficient (since they can use
special purpose procedures).
● They can be more expressive than first-order logic in some regards (for
instance, inheritance with exceptions).
Limitation of Semantic Nets
● Lack of standard link/arc names makes the meaning difficult to understand.
● Some properties are not easily expressed, e.g. negation, disjunction, and
general non-taxonomic knowledge. Answering a negative query such as
"Is a Dezire a car?" can take a very long time.
● Less expressive than first-order logic (negation and disjunction are problems).
● SNs are intractable for large domains: a question may require traversing the whole network.
● SNs are logically inadequate.
● Problems with multiple inheritance of incompatible properties.
● Inference can be slow, since the SN can grow very large.
● Programs cannot handle them easily.

Hindi WordNet
Frames: Weak slot-and-filler representation
● Frames – semantic net with properties
● Can represent a specific entry, or a general concept.
● Represents an entity as a set of slots (attributes) and associated values.
● A node in a semantic network becomes an object in a set of frames, so an object
can define a class, a subclass or an instance of a class.
● Implicitly associated with another frame: the value of a slot can be another frame.
● Represent conceptual and commonsense knowledge.
Three components of a frame
- frame name
- attributes (slots)
- values (fillers: list of values, range, string, etc.)
Example using Predicates: Entity Jyoti
– Jyoti is an engineer by profession, his age is 45, he lives in the city Jorhat,
and the country is India.
– Procedural attachments can be added to slots, e.g. IF-ADDED on City (→ Jorhat) and IF-NEEDED on Salary.

Slot Name — Filler (slot value)
Name — Jyoti
Profession — Engineer
Age — 45
Marital status — Single
Weight — 77
Salary — IF-NEEDED
City — Jorhat
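A minimal Python sketch of this frame (our own hypothetical structure; the compute_salary function stands in for the IF-NEEDED demon):

```python
# The Jyoti frame as a dict; callable fillers act like procedural attachments.
def compute_salary():
    # placeholder for an IF-NEEDED procedure attached to the Salary slot
    return "looked up from payroll"

jyoti_frame = {
    "frame_name": "Jyoti",
    "isa": "Engineer",              # link to a more general frame
    "slots": {
        "Profession": "Engineer",
        "Age": 45,
        "Marital status": "Single",
        "Weight": 77,
        "Salary": compute_salary,   # IF-NEEDED: filled only when asked for
        "City": "Jorhat",
    },
}

def get_slot(frame, slot):
    value = frame["slots"].get(slot)
    return value() if callable(value) else value

print(get_slot(jyoti_frame, "City"))     # Jorhat
print(get_slot(jyoti_frame, "Salary"))   # looked up from payroll
```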
Features of Frame Representation
● More natural support of values than semantic nets
(each slots has constraints describing legal values that a slot can take)
● Can be easily implemented using object-oriented programming techniques
● Inheritance is easily controlled
Declarative Frame: root and leaf frame
Example: JEC (Administrator,Officeroom,Class room, Lab,Lib…)
Procedural Frame
- Actor Slot
- Object Slot
- Source Slot
- Destination Slot
- Task Slots
Advantages of Frame
● Makes programming easier by grouping related knowledge
● Easily understood by non-developers
● Expressive power
● Easy to set up slots for new properties and relations
● Easy to include default information and detect missing values
Drawbacks of Frame
● No standards (slot-filler values)
● More of a general methodology than a specific representation:
— Frame for a class-room will be different for a professor and for a
maintenance worker
● No associated reasoning/inference mechanisms
Concepts
(The same concept, originally given on the slide in Hindi, Assamese, Bhojpuri, and Bengali:)
"Take only as much food on your plate as you will actually eat, so that none is wasted down the drain."
Very different sentences, and even different languages, can express the same underlying concept.
Conceptual Dependency: Strong and Filler Structures
● Defines a semantic base for natural language: different sentences that have the same
meaning have the same unique CD representation.
● CD theory designed for everyday actions.
● CD is a set of actions (about world) that can be done by people.
● a model of NLU used in AI systems. Roger Schank at Stanford University
introduced the model in 1969.
● To help in the drawing of inference from sentences.
● Objective was to understand Natural Language stories.
● It has been used by many programs that purport to understand English
(MARGIE, SAM, PAM).
Application of CD
MARGIE
(Meaning Analysis, Response Generation and Inference on English) -- model
natural language understanding.
SAM
(Script Applier Mechanism) -- Scripts to understand stories.
PAM
(Plan Applier Mechanism) -- Scripts to understand stories.
Conceptual Dependency (Strong slot and filler structure)
Structured knowledge representation.
Kind of knowledge about events contained in NL Sentences
Conceptual Dependency
- A conceptualization involves at least a two-way dependency.
- Nominals are objects and people.
- Actions are acts performed by or on nominals.
- Modifiers give additional information about a nominal/object.
Sarah fixed the chair with glue.

Case frame Representation


A story : Mary-Lily-John
● Mary went to the playroom when she heard Lily crying.
● Lily said, “Mom, John hit me.”
● Mary turned to John, “You should be gentle to your little sister.”
● “I’m sorry mom, it was an accident, I should not have kicked the ball towards
her.” John replied.

What are the facts we know after reading this?


Possible concepts on the story
● Mary went to the playroom when ● Mary’s location changed.
she heard Lily crying.
● Lily said, “Mom, John hit me.” ● Lily was sad, she was crying.
● Mary turned to John, “You should ● John hit Lily (with an unknown
be gentle to your little sister.” object).
John is Lily’s brother.
John is taller (bigger) than Lily.
● “I’m sorry mom, it was an ● John kicked a ball, the ball hit
accident, I should not have Lily.
kicked the ball towards her.”
John replied.
Example: John hit the cat
● classify the situation as of type Action.
● Actions have conceptual cases
○ Act (the particular type of action)
○ Actor (the responsible party)
○ Object (the thing acted upon)

ACT: [apply a force] or PROPEL


ACTOR: john
OBJECT: cat

John ⇔ PROPEL ← cat


Four Conceptual Primitives entities
Picture Producers (PP) Mummy stroked her fat daughter.
Picture Aiders (PA)
PP: Mummy, daughter, her[Mummy]
Actions (ACT)
Action Aiders (AA) PA: fat

ACT: stroke

PPs : objects, actors


PAs : modifiers of objects
ACTs: actions
AAs : properties/attributes/modifiers of actions
Primitive Actions
1) ATRANS: Transfer of an abstract relationship (give, accept, take)
2) PTRANS: Transfer of the physical location of an object (go, come, run, walk)
3) MTRANS: Transfer of mental information (tell)
4) PROPEL: Application of physical force to an object (push, pull, throw)
5) MOVE: Movement of a body part by its owner (kick, punch)
6) GRASP: Grasping of an object by an actor (clutch)
7) INGEST: Ingestion of an object by an animal (eat)
8) EXPEL: Expulsion of anything from an animal body (cry)
9) MBUILD: Building new information out of old (decide)
10) SPEAK: Production of sounds (say)
11) ATTEND: Focusing of a sense organ towards a stimulus (listen, watch)
DO: Any action; used for unknown actions.
Conventions
● Arrow indicate direction of dependency
● Double arrow indicates two way link between actor and action.
● All actions involve one or more of these
○ O for object case relation
○ R for recipient case relation
○ Directive case relation
○ Instrumental case relation

Conceptual tenses (Time of actions or state of being)
Representing Picture Aiders (PAs) or states
thing <≡> state-type (state-value)
The ball is red. ball <≡> color (red)
John is 6 feet tall. john <≡> height (6 feet)
John is tall. john <≡> height (>average)
John is taller than Jane. john <≡> height (X) jane <≡> height (Y) X > Y
John is angry. john <≡> anger(5)
John is furious. john <≡> anger(7)
John is irritated. john <≡> anger (2)
John is ill. john <≡> health (-3)
John is dead. john <≡> health (-10)
CD is a decompositional approach
Mary took a book from John.

Mary received the book from John.

John gave Mary a book.


Scales
John grew an inch.

This is supposed to be a state


change: somewhat like an
action but with no responsible
agent posited.
The big man took the book
Variations on the story:
John applied a force to the cat by moving some object to come in contact with the
cat.
Variations on the story (cont’d):
John kicked the cat.

kick = hit with one’s foot


Variations on the story (cont’d):
John hit the cat.

Hitting was detrimental to the cat’s health.


Basic Conceptual Dependency (adapted from Schank [1973]; diagram omitted)

Rules of Conceptual Dependency (adapted from Schank [1973]; diagram omitted)

CP: Cognitive Processor


Examples:
1. John cried because Mary said she loved Bill.
Advantage and Disadvantages
Advantages of CD:
● Using these primitives involves fewer inference rules.
● Many inference rules are already represented in CD structure.
● The holes in the initial structure help to focus on the points still to be
established.
Disadvantages of CD:
● Knowledge must be decomposed into fairly low level primitives.
● Impossible or difficult to find correct set of primitives.
● A lot of inference may still be required.
Example: Jyoti bet Roop five pounds that India would win the FIFA world cup
2022.
● Complex representations require a lot of storage
Script
Example –
• Going to movie
• Shopping in a supermarket
• Eating in a restaurant
• Visiting a dentist
Script
A script is a data structure used to represent a sequence of events. Scripts are used for
interpreting stories.

Scripts have been used to


1) Interpret, understand and reason about stories,
2) Understand and reason about observed events
3) Reason about observed actions
4) Plan actions to accomplish tasks.

A script is composed of
1) Entry Condition
2) Props (objects manipulated in the script)
3) Roles/actors (agents that can change the state of the world).
4) Tracks/ Events
5) Scene/Acts: A set of actions by the actors. (lamp post scene:Barfi)
6) Results
Entry Conditions: Conditions that must be satisfied for execution of the script. Whenever a script is referred, one can assume the
preconditions to be true. For example when we read a statement, Jay went to watch a cricket match, it is also
concluded that he has money to buy tickets and he is also a cricket fan. Even when not explicitly mentioned, entry
conditions can safely assumed to be true.

Results: The Conditions that will be true after exit. This is a general (and thus default) assumption. It might be false under
exceptional circumstances. For example when there is rain and the match is abandoned, watching does not happen.
Happy may also be false.

Props: Objects involved in the script. Tickets, seats etc. are other objects that the person deals with while watching a match.
Here also are some varieties, for example it is possible that with a pavilion ticket, he might also receive a food
coupon.

Roles: Persons involved in the script. Again, this is a general assumption. We have not mentioned fellow spectators, or
umpires or players in the simplified version of the script that we draw. If might involve all of them in the professionally
written scripts. A more detailed script might also involve events like tossing the coin between rival captains,
information about innings, score cards and so on.

Track: Specialization of the script derived from a general pattern. A general pattern may inherit multiple tracks. That means
multiple such tracks look quite similar but they have their own individually different scenes or other items associated.
For example watching a football match might contain referees, linemen, and so on while a detailed cricket match
might have a wicket keeper, umpires, a third umpire and so on.

Scenes: The sequence of events following a general default path. Events are represented in CD form but mentioned as a
semi-CD form, just describing the ACT and rest in English for pedagogy purpose.
Script for going to the bank to withdraw money.
SCRIPT: Withdraw money
TRACK: Bank
PROPS: Money, Counter, Form, Token
ROLES: P = Customer, E = Employee, C = Cashier
ENTRY CONDITIONS: P has no or little money. The bank is open.
RESULTS: P has more money.

Scene 1: Entering
  P PTRANS P into the bank
  P ATTEND eyes to E
  P PTRANS P to E
Scene 2: Filling the form
  P MTRANS signal to E
  E ATRANS form to P
  P PROPEL form for writing
  P ATRANS form to E
  E ATRANS token to P
Scene 3: Withdrawing money
  P ATTEND eyes to counter
  P PTRANS P to queue at the counter
  P PTRANS token to C
  C ATRANS money to P
Scene 4: Exiting the bank
  P PTRANS P out of the bank
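A minimal Python sketch (our own encoding, not from the slides) of the same script as a dictionary, with each scene as a list of (actor, primitive, detail) triples:

```python
# The withdraw-money script as a plain data structure.
withdraw_script = {
    "name": "Withdraw money",
    "track": "Bank",
    "props": ["Money", "Counter", "Form", "Token"],
    "roles": {"P": "Customer", "E": "Employee", "C": "Cashier"},
    "entry_conditions": ["P has no or little money", "The bank is open"],
    "results": ["P has more money"],
    "scenes": {
        "Entering":     [("P", "PTRANS", "P into the bank"),
                         ("P", "ATTEND", "eyes to E"),
                         ("P", "PTRANS", "P to E")],
        "Filling form": [("P", "MTRANS", "signal to E"),
                         ("E", "ATRANS", "form to P"),
                         ("P", "PROPEL", "form for writing"),
                         ("P", "ATRANS", "form to E"),
                         ("E", "ATRANS", "token to P")],
        "Withdrawing":  [("P", "ATTEND", "eyes to counter"),
                         ("P", "PTRANS", "P to queue at the counter"),
                         ("P", "PTRANS", "token to C"),
                         ("C", "ATRANS", "money to P")],
        "Exiting":      [("P", "PTRANS", "P out of the bank")],
    },
}

print(len(withdraw_script["scenes"]), "scenes")   # 4 scenes
```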
Script: Watching a match
Track: Cricket match
Props: Tickets, Seat, Match
Roles: Person who wants to watch a match (P), Booking Clerk (BC), Security Personnel (SP), Ticket Checker (TC)
Entry conditions: P wants to watch a match; P has money.
Results: P saw a match; P has less money; P is happy (if his team has won) or not
(if his team has lost, or there was some other problem at the stadium).

Scene 1: Going to the stadium
  P PTRANS P to the stadium
  P ATTEND eyes to ticket counter
Scene 2: Buying a ticket
  P PTRANS P to ticket counter
  P MTRANS ticket requirement to BC
  P MTRANS stand information to BC
  BC ATRANS ticket to P
Scene 3: Going inside the stadium and sitting on a seat
  P PTRANS P into stadium
  SP MOVE security check on P
  TC ATTEND eyes on ticket POSS-by P
  TC MOVE to tear part of ticket
  TC MTRANS (shows seat) to P
  P PTRANS P to seat
  P MOVE P to sitting position
Scene 4: Watching the match
  P ATTEND eyes on match
  P MBUILD (moments) from the match
Scene 5: Exiting
  P PTRANS P out of stadium
Example of a script: restaurant script
Scene: A restaurant with an entrance and tables.

Props: The table setting, menu, table, chair.

Actors: The diners, servers, chef and Maitre d'Hotel.

Acts: Entry, Seating, Ordering a meal, Serving a meal, Eating the meal, requesting
the check, paying, leaving.
Beyond Classical Search: Adversarial
- In a normal/local search, follow a sequence of actions to reach the goal or to
finish the game optimally. (without strategy)
- Search in conflicting environment: one can trace the movement of an enemy
or opponent. (parties in a dispute)
- A situation where you are planning while another actor prepares against you.
- the players will decide the result of the game: compete or fight for win
- very prominent in Playing a Game



Adversarial Search
- Games are good examples of Adversarial search
- Game Playing: modeling of the possible interactions between Rational Agents or
players.
- A rational mind, intelligence, and logic are used to win the game, or to draw it in the worst case.
- Deep Blue's victory over Kasparov is a milestone of AI.

Which games are adversarial? Can one player interfere with another?
Features of Adversarial Search over Conventional
- The search involves a two-player game.
- It is played in the form of turn-taking, e.g. chess, ludo, poker, etc.
- The rules must be precise.
- It is competitive, which makes it hard to solve.
- The competitive environments range from easy to medium to hard.
- Business strategy is another example.

Some abstract points
- Some games involve the luck of chance, like dice games.
- In some games, two or more players play for the same goal (teams).
Game playing in AI: also a reinforcement-learning application.
The following are the main types of adversarial search:

Minimax algorithm: pick your best next move against the opponent's best reply.

Alpha-beta pruning:
Formal definition of game
A game can be defined as a kind of search problem with the following elements:
S0 : The initial state, which specifies how the game is set up at the start.
PLAYER (s): Defines which player has the move in a state.
ACTIONS (s): Returns the set of legal moves in a state.
RESULT (s, a): The transition model, which defines the result of a move.
TERMINAL -TEST (s): A terminal test, which is true when the game is over and false
otherwise. States where the game has ended are called terminal states.
UTILITY (s, p): A utility function (also called an objective function or payoff function),
defines the final numeric value for a game that ends in terminal state s for a player p.
Chess: +1, 0, -1 , Poker: Cash won or lost
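A minimal Python sketch (a hypothetical interface, not from the slides) of these elements as an abstract class that a concrete game such as tic-tac-toe would subclass:

```python
# The formal game elements as an abstract interface.
from abc import ABC, abstractmethod

class Game(ABC):
    @abstractmethod
    def initial_state(self): ...              # S0: how the game is set up at the start
    @abstractmethod
    def player(self, s): ...                  # PLAYER(s): which player has the move
    @abstractmethod
    def actions(self, s): ...                 # ACTIONS(s): legal moves in state s
    @abstractmethod
    def result(self, s, a): ...               # RESULT(s, a): transition model
    @abstractmethod
    def terminal_test(self, s): ...           # TERMINAL-TEST(s): is the game over?
    @abstractmethod
    def utility(self, s, p): ...              # UTILITY(s, p): payoff for player p
```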
How to calculate the Utility/value/rank/goodness of NT nodes ?
Minimax Function
- Both Max and Min play optimally
- Whichever move Max takes, Min will choose the countermove that yields the
lowest utility score.
- Traverse the tree depth-first until the terminal nodes and assign their parent
node a value that is best for the player whose turn it is to move.
Working example of the minimax procedure
- The root A is a MAX node; its moves a1, a2, a3 lead to the MIN nodes B, C, D, whose moves
  (b1-b3, c1-c3, d1-d3) lead to the terminal (leaf) nodes E, F, G, H, I, J, K, L, M with
  utilities 31, 20, 12, 2, 8, 16, 12, 1, 5.
- The game ends when a terminal (leaf) node is reached; the terminal value is the utility
  function's value written below the leaf node.
- If MAX chooses B, then MIN would choose b3 (leaf G, value 12), because that is the lowest
  value in B's subtree.
- If MAX chooses C, then MIN would choose c1 (leaf H, value 2).

B = MIN(E, F, G) = MIN(31, 20, 12) = 12
C = MIN(H, I, J) = MIN(2, 8, 16) = 2
D = MIN(K, L, M) = MIN(12, 1, 5) = 1
A = MAX(B, C, D) = MAX(MIN(E,F,G), MIN(H,I,J), MIN(K,L,M)) = MAX(12, 2, 1) = 12
All non-terminal nodes are now calculated.
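A minimal Python sketch (not the slides' pseudocode) of minimax run on the tree above; nested lists are internal nodes and integers are terminal utilities:

```python
# Minimax over the example tree: B, C, D are the inner lists; E..M are the leaves.
def minimax(node, maximizing):
    if isinstance(node, int):                 # terminal node: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

tree = [[31, 20, 12],   # B = MIN(E, F, G)
        [2, 8, 16],     # C = MIN(H, I, J)
        [12, 1, 5]]     # D = MIN(K, L, M)

print(minimax(tree, maximizing=True))   # 12
```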
Game Tree for Tic-tac-toe (noughts and crosses)

Points are awarded to the winning player and


penalties are given to the loser.
An Instance of Tic Tac Toe utility calculation
Minimax properties
- Completeness: yes, if the tree is finite.
- Optimality: yes, against an optimal (rational) opponent.
- Time complexity: O(b^m)
- Space complexity: O(bm) (depth-first exploration)
The search component of minimax is essentially DFS: it explores down to the terminal nodes
and then backs the values up the tree.
- The time complexity of this algorithm is terrible.
- For chess, b ≈ 35 and m ≈ 100 for "reasonable" games.
- Time complexity ≈ O(35^100).
- 1 + b + b^2 + b^3 + ... + b^d = (b^(d+1) − 1)/(b − 1) = b^d · (1 − 1/b^(d+1))/(1 − 1/b); the
factor multiplying b^d is very close to one, so the overall time is very close to b^d.
- Pruning: eliminate large parts of the tree from consideration.
Game possible outcomes Pay off
Chess win, loss, or draw +1, 0, or ½
Chess is zero-sum win, loss, or draw 0 + 1, 1 + 0 or ½ + ½
Alpha-Beta Pruning
- Alpha-beta pruning is an optimization technique for the minimax algorithm
(Not alternative)
- Alpha-beta pruning does not affect the final result as getting from minimax.
- Some cases decision trees become very complex.
- Some useless branches increase the complexity of the model.
- These unusual nodes make the algorithm slow.
- An instance: the minimax decision in the tree below:
Minimax Decision = MAX {MIN {3, 5, 10}, MIN {2, a, b}, MIN {2, 7, 3}}
                 = MAX {3, x, 2} = 3, where x = MIN {2, a, b} ≤ 2 regardless of a and b.
- We have reached a decision without looking at those nodes (a and b). And this is where
alpha-beta pruning comes into play.
Condition for Alpha-beta pruning
Key points in Alpha-Beta Pruning
Alpha: the best choice, i.e. the highest value, found so far at any point along the path of the
Maximizer. The initial value of alpha is −∞.

Beta: the best choice, i.e. the lowest value, found so far at any point along the path of the
Minimizer. The initial value of beta is +∞.

The condition for alpha-beta pruning is α >= β.

Each node keeps track of its alpha and beta values. Alpha can be updated only on MAX's turn
and, similarly, beta can be updated only on MIN's turn.

The MAX player updates only alpha values and the MIN player updates only beta values.

Node values (not the alpha and beta values) are passed to the upper nodes when backing up the tree.

Alpha and beta values are only passed down to child nodes.
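A minimal Python sketch (not the slides' pseudocode) adding alpha-beta pruning to the minimax sketch shown earlier; it returns the same value while skipping branches once α ≥ β:

```python
# Alpha-beta pruning over the same nested-list game tree as the minimax sketch.
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if isinstance(node, int):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:        # prune: MIN will never allow this branch
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:            # prune: MAX already has a better option
            break
    return value

tree = [[31, 20, 12], [2, 8, 16], [12, 1, 5]]
print(alphabeta(tree, maximizing=True))   # 12, same result as plain minimax
```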


Move Ordering in Pruning:effectiveness of alpha-beta pruning
Worst ordering: in some cases of alpha-beta pruning none of the nodes are pruned and the
algorithm works like standard minimax. This consumes more time because of the alpha and beta
bookkeeping and gives no saving. This is called worst ordering in pruning; in this case the best
move occurs on the right side of the tree.
Ideal ordering: in other cases a lot of the nodes are pruned by the algorithm. This is called
ideal ordering in pruning; in this case the best move occurs on the left side of the tree. Since
we apply DFS, the left of the tree is searched first, and with ideal ordering the search can go
roughly twice as deep as plain minimax in the same amount of time.
Demonstration: Alpha-Beta
- José Manuel Torres@2011
- Alpha-Beta Pruning Practice :- Check your answer by swapping MIN/MAX
- University of Pittsburgh
- Depaul University: 6-ply
- A-B Tree Practice

Practice trees (the tree-structure diagrams are omitted in this text version; only the terminal-node values are listed):
- 3 12 8 2 4 6 14 5 2
- 8 7 3 9 9 8 2 4 1 8 8 9 9 9 3 4
- 4 6 7 9 1 2 0 1 8 1 9 2
- 10 5 7 11 12 8 9 8 5 12 11 12 9 8 9 10
- 3 4 2 1 7 8 9 10 2 11 1 12 14 9 13 16
Shallow and Deep Pruning
Efficiency of AB pruning
Efficiency of AB pruning depends upon the order in which nodes are encountered
in the frontier of the search tree.

Three cases:
Worst case (Time Complexity is similar to the minimax algorithm) BAD
Best case: Best case means perfect move where lots of pruning could exist.
Average case: Is the point of discussion here.

Perfect move ordering improves effectiveness (Time Complexity) of pruning.


Average Case
Which Computation are relevant: metareasoning
Alpha Beta Pruning Analysis
The Alpha-Beta algorithm is a significant enhancement to the minimax search
algorithm that eliminates the need to search large portions of the tree applying a
branch-and-bound technique.


Saving
Even Depth
O(b×1×b×1×...×1) for even depth
all the first player's moves must be studied to find the best one, but for each, only
the second player's best move is needed to refute all but the first (and best) first
player move—alpha–beta ensures no other second player moves need be
considered.
Where the ply of a search is even, the effective branching factor is reduced to its
square root, or, equivalently, the search can go twice as deep with the same
amount of computation.
It is O(b^(d/2)) with small values of b and d doesn't really make sense.

Max-all then Min-one alternatively


Multi Player Game
- Chinese Checker with 6 players
- Othello
Compare minimax with Alpha-Beta
SSS* Algorithm: introduction
- Introduced by George Stockman in 1979.
- It conducts a state space search (SSS), traversing the game tree in best-first fashion similar to A*.
- Based on the notion of Solution Trees (Strategies)
- Informally, solution tree can be formed by pruning the number of branches at
each MAX node to one, since it specifies exactly one MAX action for every
possible sequence of moves made by the opponent.
SSS* Algorithm
- For a given game tree, SSS* searches through the space of partial solution trees,
gradually analyzing larger and larger subtrees, eventually producing a single
solution tree with same root and same MIN-Max value as the original game tree.
- It never examines a node like alpha-beta pruning. It may prune some branches that
alpha-beta would not.
- SSS* search in a space of strategies(S).
- A strategy s for MAX specifies one MAX move for every possible sequence of moves made by MIN,
so that whatever move MIN makes, MAX has an answer. Thus MAX performs a look-ahead.
- Value of a strategy: V(s) = min(Value(Leaves_s)), the minimum over the leaves of s.
SSS* Algorithm
- SSS* maintains an OPEN list with descriptors of the active nodes. Descriptors
are sorted in decreasing order of their merit (heuristic value) implemented as
a priority queue.
- It is a Best First Variation of alpha beta algorithm.
- Every leaf node belongs to a cluster of strategies.
- SSS* algorithm search through all strategies using alpha beta technique.
One strategy: S1
A strategy chooses one branch at each MAX node and keeps all branches at each MIN node
("one for MAX and all for MIN"). The tree diagrams are omitted here; the leaf labels (A, B, C, D)
and values below are taken from those figures.

Strategies for the left subtree:
S1 = {50, 40, 70, 60}
S2 = {50, 40, B, C}
S3 = {30, A, 70, 60}
S4 = {30, A, B, C}
Question: find the other four strategies, S5, S6, S7, and S8, from the right subtree.

- A 4-ply game tree (branching factor 2) has 8 = 2^3 strategies.
#ply — #strategies (branching factor = 2)
4 — 2^3
5/6 — 2^7
7/8 — 2^15
9/10 — 2^31

Every leaf node belongs to two strategies:
S1: {50, 40, 70, 60}
S2: {50, 40, B, C}
S3: {30, A, 70, 60}
S4: {30, A, B, C}

Value of a Strategy
- The value of a strategy is V(s) = min(Value(Leaves_s)).
- E.g. V(S1) = min(50, 40, 70, 60) = 40
  V(S2) = 40
  V(S3) = 30
- Thus, for any leaf L of a strategy S, V(L) >= V(S):
  V(L) is an upper bound on the value of any strategy that contains it.
SSS* algorithm
Handling Uncertainty (e.g. an uncertain rule A → B)
- Lack of complete knowledge
- Information from unreliable sources, experimental errors, equipment faults, ...
- Leads to ambiguity and unpredictability
An uncertain Knowledge
Diagnosis: Medicine, automobile repair, law, business, gardening, dating etc…
First Order Rule of dental diagnosis:
∀p symptoms(p, Toothache) => Disease(p, cavity) breakdown into
∀p symptoms(p, Toothache) => Disease(p, cavity) ∨ Disease(p, swallowing)
∨ Disease(p, ToothErosion)

The agent takes action on degree of belief that will be probability theory (0 ≤ p ≤1)
Sources of Uncertainty in AI

- Data uncertainty: noisy or incomplete data can affect the quality and accuracy of results.
- Model uncertainty: the choice of model architecture, optimization algorithm, and
hyperparameters can significantly impact performance.
- Algorithmic uncertainty: different ML algorithms can produce different predictions
for the same dataset.
- Environmental uncertainty: an autonomous vehicle may encounter unexpected
weather conditions or road construction.
- Human uncertainty: human behaviour and preferences are part of the decision-making
process and of the adoption of AI systems.
Image Segmentation
Uncertainty in image processing: uncertainty about which class or category a pixel or
region belongs to.
Techniques for Addressing Uncertainty
- A programming paradigm that combines logic programming with probabilities, based on
the distribution semantics (probabilistic logic programming).
- Word embedding, an NLP application.
Probabilistics reasoning
- 0 ≤ P(A) ≤ 1
  0 → complete uncertainty (impossible), 1 → complete certainty
- Event: each possible outcome of a variable
- Sample space: the set of all possible events
- Random variable: a variable that represents events/objects of the domain
- Prior probability: the degree of belief before any evidence is observed
- Posterior probability: the degree of belief after evidence is observed
- Conditional probability: P(A|B), the probability of A given that B is known
- Bayes' rule (Thomas Bayes): P(A|B) = P(B|A) · P(A) / P(B)
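A worked sketch of Bayes' rule in Prolog; the probabilities in the query are hypothetical
numbers chosen only for illustration:

% bayes(+PBgivenA, +PA, +PB, -PAgivenB):  P(A|B) = P(B|A) * P(A) / P(B)
bayes(PBgivenA, PA, PB, PAgivenB) :-
    PAgivenB is PBgivenA * PA / PB.

% e.g. P(disease) = 0.01, P(positive test | disease) = 0.9, P(positive test) = 0.05
% ?- bayes(0.9, 0.01, 0.05, Posterior).
% Posterior = 0.18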
Machine Translation (paper): a source sentence may have several semantically equivalent
translations.
Semantic similarity is the similarity between two words, or between two sentences/phrases/texts.
Fuzzy Logic
- Boolean logic allows only two possibilities: {0, 1}
- Is that enough for the real world?
- What does "fuzzy" mean?
- We often confront circumstances where we cannot decide whether something is true or false.
- Lotfi Zadeh conceptualized fuzzy logic in 1965.
- Examples: "Is the water hot?", "Am I tall?", "Have I eaten (enough) pizza?"
- Allows a diversity of truth values.
- A technique for computational "thinking" that resembles human reasoning.
Membership Function
- Represented as degrees in [0, 1]
- Represents the belongingness of an element of a crisp set to a fuzzy set
[Figure: an intelligent agent mapping inputs to accelerator and brake actions]
Membership Function (𝝁)
A = {1, 2, 3, 4, 5, 6, 7}   [universe of discourse]
S = {1, 2, 4, 6}
(x, 𝜇) = {(1,1), (2,1), (3,0), (4,1), (5,0), (6,1), (7,0)}

Check the degree of fastness of a truck

Crisp set:                               Fuzzy set:
𝜇(x) = 0 if speed(x) <= 40               𝜇(x) = 0                     if speed(x) <= 40
𝜇(x) = 1 if 40 < speed(x) <= 80          𝜇(x) = (speed(x) - 40) / 10  if 40 < speed(x) < 50
                                         𝜇(x) = 1                     if 50 <= speed(x) <= 80
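A Prolog sketch of the fuzzy membership function above, assuming the ramp reconstructed as
(speed(x) - 40)/10 between 40 and 50 (the slide leaves speeds above 80 unspecified):

% fast_degree(+Speed, -Degree): degree of "fastness" of a truck
fast_degree(Speed, 0)      :- Speed =< 40.
fast_degree(Speed, Degree) :- Speed > 40, Speed < 50, Degree is (Speed - 40) / 10.
fast_degree(Speed, 1)      :- Speed >= 50, Speed =< 80.

% ?- fast_degree(45, D).
% D = 0.5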
Classical set theory
● Ordered pair of elements
● Singleton, subset, finite,infinite, empty, proper, universal, equivalent, disjoint etc.
● Set with Crisp boundaries are called classical set
● Mathematical representation:
- Roster form: N = {1, 2, 3, 4, 5, ...}
- Set-builder form: J = {K | 0 <= K <= 20 and (K mod 2) = 0}
● Operations: union, intersection, difference, complement
● Properties: commutative, associative, idempotent, identity, absorption, distributive,
transitive, De Morgan
Fuzzy set
- Fuzzy set (Ã) is a pair of (x, 𝜇 ) takes the value [0,1]
- Ã = {(x, 𝜇Ã(x))| x ∊ X} X = {...x…} or [ A̰ = {(x, 𝜇A̰ (x))| x ∊ X} ]
- Operations on Fuzzy set
- Given à and B̃ are the two fuzzy sets, and X be the universe of discourse
- The respective membership functions are 𝝁Ã(x) and 𝝁B̃(x)
Union on Fuzzy set

𝝁Ã∪B̃(x) = max{ 𝝁Ã(x), 𝝁B̃(x) }

Fuzzy set, Ã = {(x1, 0.6), (x2, 0.2), (x3, 1), (x4, 0.4)}
Fuzzy set, B̃ = {(x1, 0.1), (x2, 0.8), (x3, 0), (x4, 0.3)}

𝝁Ã∪B̃(x) = {(x1, 0.6), (x2, 0.8), (x3, 1), (x4, 0.4)}
         = {(0.6/x1), (0.8/x2), (1/x3), (0.4/x4)}
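A Prolog sketch of fuzzy union, assuming (this representation is not from the slides) that
both sets are given as lists of Element-Grade pairs over the same universe in the same order:

fuzzy_union([], [], []).
fuzzy_union([X-MA|As], [X-MB|Bs], [X-MU|Us]) :-
    MU is max(MA, MB),            % union takes the larger membership grade
    fuzzy_union(As, Bs, Us).

% ?- fuzzy_union([x1-0.6, x2-0.2, x3-1, x4-0.4],
%                [x1-0.1, x2-0.8, x3-0, x4-0.3], U).
% U = [x1-0.6, x2-0.8, x3-1, x4-0.4]

Intersection and complement follow the same pattern, with min(MA, MB) and 1 - MA respectively.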
Intersection on Fuzzy set

𝝁Ã∩B̃(x) = min{ 𝝁Ã(x), 𝝁B̃(x) }

Fuzzy set, Ã = {(x1, 0.6), (x2, 0.2), (x3, 1), (x4, 0.4)}
Fuzzy set, B̃ = {(x1, 0.1), (x2, 0.8), (x3, 0), (x4, 0.3)}

𝝁Ã∩B ̃ (x) = {(x1, 0.1), (x2, 0.2), (x3, 0), (x4, 0.3)}
Complement on Fuzzy set
𝝁Ā(x) = 1 − 𝝁Ã(x)

à = {( x1, 0.3 ), (x2, 0.8), (x3, 0.5), (x4, 0.1)}

A̰̅ = {( x1, 0.7 ), (x2, 0.2), (x3, 0.5), (x4, 0.9)}

Example: given the fuzzy set of tall men
- fuzzy set for tall men:
  (0/180, 0.25/182.5, 0.5/185, 0.75/187.5, 1/190)
- fuzzy set of NOT tall men will be:
(1/180, 0.75/182.5, 0.5/185, 0.25/187.5, 0/190)
Product on Fuzzy set

𝝁Ã·B̃(x) = 𝝁Ã(x) · 𝝁B̃(x)

Fuzzy set, Ã = {(x1, 0.8), (x2, 0.2), (x3, 1)}
Fuzzy set, B̃ = {(x1, 0.5), (x2, 0.7), (x3, 0)}

𝝁Ã·B̃(x) = {(x1, 0.4), (x2, 0.14), (x3, 0)}


Equality on Fuzzy set
Ã = B̃ if 𝝁Ã(x) = 𝝁B̃(x) for every x ∊ X
- Same number of elements
- Membership grades are equal
- Example
  - Ã = {(1, 0.2), (2, 0.3)} and B̃ = {(1, 0.2), (2, 0.3)}, then Ã = B̃
  - X = {1, 2, 3, 4}
    Ã = {(1, 0.2), (2, 0.4), (3, 0.6), (4, 0.8)} and B̃ = {(1, 0.1), (2, 0.3), (3, 0.6), (4, 0.8)}
    𝝁Ã(1) = 0.2 and 𝝁B̃(1) = 0.1 → 𝝁Ã(x) ≠ 𝝁B̃(x), so Ã ≠ B̃
Cartesian product on Fuzzy set
R = Ã × B̃
𝝁R(x, y) = 𝝁Ã×B̃(x, y) = min( 𝝁Ã(x), 𝝁B̃(y) )

Ã(X) = {(x1/0.6), (x2/0.5), (x3/0.4), (x4/0.2)}
B̃(Y) = {(y1/0.2), (y2/0.8), (y3/0.9), (y4/0.6)}

Ã(X) × B̃(Y) =      y1     y2     y3     y4
           x1      0.2    0.6    0.6    0.6
           x2      0.2    0.5    0.5    0.5
           x3      0.2    0.4    0.4    0.4
           x4      0.2    0.2    0.2    0.2
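A Prolog sketch of the fuzzy Cartesian product with the same assumed pair representation;
every grade of the relation is the min of the two membership grades:

fuzzy_cartesian(As, Bs, R) :-
    findall((X, Y)-M,
            ( member(X-MA, As),
              member(Y-MB, Bs),
              M is min(MA, MB) ),
            R).

% ?- fuzzy_cartesian([x1-0.6, x2-0.5], [y1-0.2, y2-0.8], R).
% R = [(x1,y1)-0.2, (x1,y2)-0.6, (x2,y1)-0.2, (x2,y2)-0.5]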
Prolog
Logic vs functional programming
PROgramming in LOGic
● A fourth-generation language (4GL) that supports the declarative programming paradigm.
● Suitable for symbolic (non-numeric) computation.
● Prolog performs depth-first search, matching facts and rules (i.e. the knowledge
base) in a top-down manner.
● Prolog resolves goals and subgoals in a left-to-right manner.
● A Prolog query returns 'true' (success) only if all of its subgoals are satisfied.
● References:
■ Logic Programming:
■ Booklm
■ AIPP-PDF-SLIDES
■ Books

The central ideas of Prolog
● SUCCESS/FAILURE
– any computation can “succeed'' or “fail'', and this is used as a ‘test‘ mechanism.
● MATCHING
– any two data items can be compared for similarity, and values can be bound to
variables in order to allow a match to succeed.
● SEARCHING
– the whole activity of the Prolog system is to search through various options to find
a combination that succeeds.
• Main search tools are backtracking and recursion
● BACKTRACKING
– when the system fails during its search, it returns to previous choices to see if
making a different choice would allow success.
Prolog Terminologies

Predicate: denotes a property of, or a relationship between, objects.
Fact: a predicate (with its arguments) that is asserted to be true.
Rule: a clause containing conditions; its head holds if its body can be satisfied.
Clause: the KB consists of clauses; a clause has a head and a body (rule) or only a head (fact).
Head: consists of a predicate name and its arguments.
Query: the action of asking the program for information.
Arity: the number of arguments a predicate takes.
Prolog-syntax: fact-rule-queries
Predicate Definitions
● Both facts and rules are predicate definitions.
● ‘Predicate’ is the name given to the word occurring before the bracket in a fact
or rule:
parent(jane,alan).

● By defining a predicate you are specifying which information needs to be known for
the property denoted by the predicate to be true.
Clauses
● Predicate definitions consist of clauses.
● A clause is an individual definition (whether it be a fact or a rule):
e.g. mother(kareena,taimur). = Fact
parent(P1,P2):- mother(P1,P2). = Rule

● A clause consists of a head and sometimes a body.


● Facts don’t have a body because they are always true.
Arguments
● A predicate head consists of a predicate name and sometimes some
arguments contained within brackets and separated by commas.
mother(kareena,taimur).

Predicate Arguments
● A body can be made up of any number of subgoals (calls to other
predicates) and terms.
● Arguments also consist of terms, which can be:
■ Constants e.g. jyoti,
■ Variables e.g. Person1, or X
■ Compound terms
Terms: Constants
Constants can either be:

● Numbers:
■ integers are the usual form (e.g. 1, 0, -1, etc)
■ floating-point numbers can also be used (e.g. 3.0E7)
● Symbolic (non-numeric) constants:
■ always start with a lower case alphabetic character and contain any mixture of letters,
digits, and underscores (but no spaces, punctuation, or an initial capital).
■ e.g. abc, big_long_constant, x4_3t
● String constants:
■ anything between single quotes e.g. ‘Like this’.
Terms: Variables
● Variables always start with an upper case alphabetic character or an
underscore.
● Other than the first character they can be made up of any mixture of letters,
digits, and underscores. e.g. X, ABC, _89two5, _very_long_variable
● There are no “types” for variables (or constants) – a variable can take any
value.
● All Prolog variables have a “local” scope:
– they only keep the same value within a clause; the same variable used
outside of a clause does not inherit the value (this would be a “global” scope).
Naming Tips
● Use real English when naming predicates, constants, and variables.
e.g. “John wants to help Somebody.”
Could be: wants(john,to_help,Somebody).
Not: x87g(j,_789).

● Use a Verb(Subject, Object) structure:
  wants(john, to_help).
● BUT do not assume Prolog understands the meaning of your chosen names!
  – You create meaning by specifying the body (i.e. the preconditions) of a clause.
Naming Tips
● name of prolog file may contain _ (underscore) but not - (dash)
e.g. student_professor.pl but not student-professor.pl

● Argument names (atoms) must begin with a lowercase letter:
  e.g. studies(hRISHIKESH, dataAnAnalytics). or studies(hrishikesh, dataananalytics).
  but not studies(HRISHIKESH, DataAnAnalytics).
● An argument name must not contain a dot (.):
  e.g. teaches(george_F._Colony, dataAnalytics). is not allowed.
● An argument name may contain a hyphen (-) but not a dash (–):
  e.g. teaches(george-F-Colony, dataAnalytics).
  but not teaches(george–F-Colony, dataAnalytics).
Prolog Syntax: Symbols
Prolog expressions are composed of truth-functional symbols:

All Prolog data structures are called terms. A term is either:

● A constant, which can be either an atom or a number.


● A variable.
● A compound term.
Input and Output Terms in Prolog
● built-in predicate write and read
● write (‘hello world’).
● read(X). or read(_read_a_Value_from_keyboard).
● Example: helloworld.pl , greetings.pl
● ?- write(What). // What is a variable, since its first letter is a capital
● ?- write(X).
● ?- write(_What).
Basic Prolog Commands

-> halt. // exit from the Prolog (short form: control-d)


-> shell(clear). //clear a terminal screen while inside gprolog.
-> consult('family.pl').
Or consult(family). load the knowledge base
Or [family].

-> trace and notrace


-> Single-line comments: start with %
-> Multiple line comments: /* */
Small world knowledge representation
Represent the scholar's room such that the following queries are possible (a hypothetical
Prolog sketch follows the list):
Which furniture is in the room?

How many doors, windows, tables, ... are in the room?

Where is the table, the chair, ... ?

What is to the left (right) of the table, ... (with respect to


the center of the room)?

What is at the wall 2, at the window 1, ... ?

What is in the corner 1, ... ?
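A hypothetical fragment of such a knowledge base (all objects and positions below are
invented for illustration; the actual room layout is not given in the slides):

% Objects in the room
furniture(table1, table).     furniture(chair1, chair).
furniture(shelf1, bookshelf).
door(door1).   window(window1).   window(window2).

% Positions (with respect to the centre of the room)
at(table1, wall2).   at(chair1, corner1).   at(shelf1, window1).
left_of(chair1, table1).

% Which furniture is in the room?      ?- furniture(Item, Kind).
% How many windows are in the room?    ?- findall(W, window(W), Ws), length(Ws, N).
% What is at wall 2?                   ?- at(What, wall2).
% What is to the left of the table?    ?- left_of(What, table1).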


Scotland Family Tree
James I
├── Charles I
│   ├── Catherine
│   ├── Charles II
│   └── James II
└── Elizabeth
    └── Sophia
        └── George I
male(james1).      female(catherine).
male(charles1).    female(elizabeth).
male(charles2).    female(sophia).
male(james2).
male(george1).

parent(charles1, james1).      % James I is the parent of Charles I
parent(elizabeth, james1).
parent(charles2, charles1).
parent(catherine, charles1).
parent(james2, charles1).
parent(sophia, elizabeth).
parent(george1, sophia).

Formulate the following queries:
● Was George I the parent of Charles I?     Query: parent(charles1, george1).
● Who was Charles I's parent?               Query: parent(charles1, X).
● Who were the children of Charles I?       Query: parent(X, charles1).
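A possible extension (not part of the slides): rules built on parent/2, keeping the
parent(Child, Parent) convention used above:

grandparent(X, Z) :- parent(X, Y), parent(Y, Z).      % Z is X's grandparent
ancestor(X, Y)    :- parent(X, Y).
ancestor(X, Z)    :- parent(X, Y), ancestor(Y, Z).

% ?- grandparent(charles2, Who).
% Who = james1
% ?- ancestor(george1, james1).
% yes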
Trace and notrace using family.pl (Debugging)

| ?- trace.
The debugger will first creep -- showing everything (trace)
yes
{trace}
| ?-

| ?- notrace.
The debugger is switched off
yes
| ?-
Compound Terms
● representation of data with substructure
● consists of a functor followed by a sequence of one or more
subterms called arguments.
For example, the term
sentence(nphrase(suriya),vbphrase(verb(likes),nphrase(jyothika)))
could be depicted as the structure:
______sentence______
| |
nphrase _____vbphrase_____
| | |
suriya verb nphrase
| |
likes jyothika
sentence is the principal functor
arguments are nphrase(suriya) and vbphrase(verb(likes),nphrase(jyothika)).
Compound Terms
predicate functor(Term, F, A),
succeeds if Term has functor F and arity A. Its behaviour is as follows:

| ?- functor(sentence(nphrase(suriya),vbphrase(verb(likes),nphrase(jyothika))),Functor,
Arity).
Arity = 2
Functor = sentence
yes
| ?-

|?- functor(parent, Functor, Arity).


|?- functor(parent(x,y), Functor, Arity).
Unification leads to Instantiation
?- owns(X, car(bmw)) = owns(Y, car(C)).
// built in Prolog operator '=' can be used to unify two terms.
You will get Answer : X = Y, C = bmw.

(i) the predicate names 'owns' are the same on both sides;
(ii) the number of arguments of that predicate (2) is the same on both sides;
(iii) the 2nd arguments both have the functor 'car', and inside that predicate the
number of arguments is again the same.

The two terms unify with X = Y, so Y is substituted with X, written {X | Y}, and
C is instantiated to bmw, written {bmw | C}.
?- owns(X, car(bmw)) = likes(Y, car(C)).   Do these two terms unify or instantiate anything?
Unification (binding) leads to Instantiation
| ?- foo(a,X) = foo(Y,b).
| ?- a = a. ** Two identical atoms unify **
| ?- a = b. ** Atoms don't unify if they aren't identical **
| ?- X = a. ** Unification instantiates a variable to an atom **
| ?- X = Y. ** Unification binds two differently named variables to a single, unique variable name **
| ?- foo(a,b) = foo(a,b). ** Two identical complex terms unify **
| ?- foo(a,b) = foo(X,Y). ** Two complex terms unify if they are of the same arity, have the same principal**
** functor and their arguments unify **
| ?- foo(a,Y) = foo(X,b). ** Instantiation of variables may occur in either of the terms to be unified **
| ?- foo(a,b) = foo(X,X). ** No unification here: X would have to be bound to both 'a' **
** and 'b' at the same time **
| ?- foo(a,b) = foo(X,P). // No confusion
| ?- foo(a,a) = foo(X,P). // No confusion
| ?- 2*3+4 = X+Y. ** The term 2*3+4 has principal functor + and therefore unifies with X+Y, **
** with X instantiated to 2*3 and Y instantiated to 4 **
| ?- bigger(6,4) = bigger(Answer,5). // ??
Example of Conjunction: comma ',' read as 'and'.
● Conjunction in Rule :
likes( jyoti, day) :- sunny, warm.
// jyoti likes a day, if it is sunny and warm.

● Conjunction in Goal :
?- place(sunny), place(windy).
// Is place sunny and windy? OR Is place sunny? and Is place windy?
Example of Disjunction: semicolon ';' read as 'or'.
disjunction_goal.pl
greet(jyoti):-
write('How are you doin, roop?'), nl.
greet(roop):-
write('Awfully nice to see you!').

Disjunction goal:
| ?- greet(roop); greet(jyoti).
Awfully nice to see you!
true ? ;
How are you doin, roop?
yes

Conjunction goal:
| ?- greet(jyoti), greet(roop).
How are you doin, roop?
Awfully nice to see you!
yes
Example of Backtracking :
● Prolog repeatedly tries to find matches and satisfy the goals by scanning the KB in a
top-down manner; when a goal fails it backtracks to the previous choice point.
● In conjunction goals, Prolog finds matches and satisfies the goals in a left-to-right manner.
Facts : likes(jyoti, food). // Read as : jyoti likes food.
likes(roop, wine). // Read as : roop likes wine.
Rule : likes(roop, X) :- likes(jyoti,X).
//Read as : roop likes everything whatever jyoti likes.
Goal :
?- likes(roop, What). // Read as : What does roop like?
Answer : What = food;
What = wine.
Arithmetic Operators
● Comparisons: <, >, =<, >=, =:= (equals), =\= (not equals)
= Infix operators: go between two terms.
=</2 is used
• 5 =< 7. (infix)
• =<(5,7). (prefix) all infix operators can also be prefixed
● Equality is different from unification
=/2 checks if two terms unify
=:=/2 compares the arithmetic value of two expressions

?- X = Y.                          yes
?- X =:= Y.                        instantiation error
?- X = 4, Y = 3, X+2 =:= Y+3.      X = 4, Y = 3 ?  yes
Example using Arithmetic operators
bigger.pl: (Check backtracking with bigger(20,10) by using trace-notrace)
bigger(N,M):-
N < M, write('The bigger number is '), write(M).
bigger(N,M):-
N > M, write('The bigger number is '), write(N).
bigger(N,M):-
N =:= M, write('Numbers are the same').
Lists
● data structures for holding and manipulating groups of things.
● list is collection of terms (data types,including structures and other lists.)
● list of alcohol [Tequila,Whisky,Vodka,Martini,Muscat,Malibu,Soho,Epita]
● either empty or composed of a first element (head) and a tail, which is a list
itself.
● empty list by the atom [ ]
● non-empty list by a term [H|T] where H denotes the head and T denotes the
tail.
● notation for list structures: [X | Y]
Lists
● Has zero or more elements enclosed by square brackets (‘[ ]’) and separated
by commas (‘,’).
● [a] a list with one element
● [] an empty list
Example: [34,tom,[2,3]] a list with 3 elements where the 3rd element is a list of 2
elements.
● a list can be unified with a variable
| ?- [Any, list, 'of elements'] = X.
X = [Any,list,'of elements']
yes
List Unification
|?-[a,B,c,D]=[A,b,C,d]. // Yes
|?-[(a+X),(Y+b)]=[(W+c),(d+b)]. // Yes
|?- [[X,a]]=[b,Y]. // Yes/No ?
|?-[[a],[B,c],[]]=[X,[b,c],Y]. // Yes/No ?
Definition of a List
● Lists are recursively defined structures.
“An empty list, [], is a list.
A structure of the form [X, …] is a list if X is a term and […] is a list, possibly
empty.”
● This recursiveness is made explicit by the bar notation
– [Head|Tail] (‘|’ = bottom left PC keyboard character)
● Head must unify with a single term.
● Tail unifies with a list of any length, including an empty list, [].
– the bar notation turns everything after the Head into a list and unifies it with
Tail.
Lists in Prolog
[] Empty List
[1,2,3] H = 1 and T=[2,3]
[[1,2,3]] H= [1,2,3] and T = [ ]
[[a,b,c], 1,2] H = [a,b,c] and T = [1,2]
[arif,akib,arunabh] = [P,Q,R] P = arif
Q = akib
R = arunabh
[[she,Y]|Z] = [[X, was],[for, mine]]. X = she
Y = was
Z = [[for,mine]]
Lists
?- [X|Y] = [a, b, c, d, e]. X = a Y = [b, c, d, e]
| ?- [X|Y] = [apple, banana, mango, chiku, kiwi, dragon-fruit].
X = ? and Y = ?
|?-[a,b,c,d]=[Head|Tail].
|?-[a] = [H|T].
|?-[a,b,c]=[W|[X|[Y|Z]]].
|?-[ ] = [H|T].
|?-[a|[b|[c|[ ]]]]= List.
Member of a list
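The classic recursive definition (written here as my_member/2 so it does not clash with the
member/2 built in to most Prolog systems):

my_member(X, [X|_]).                    % X is the head of the list
my_member(X, [_|T]) :- my_member(X, T). % or X occurs somewhere in the tail

% ?- my_member(b, [a, b, c]).
% yes
% ?- my_member(X, [a, b, c]).
% X = a ? ;
% X = b ? ;
% X = c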
Cut and fail(!)
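A common illustration of the cut-fail combination (the likes/2 facts below are hypothetical):
the cut commits to the current clause, and fail then forces failure, giving a simple form of
negation for ground arguments:

likes(jyoti, food).
likes(jyoti, wine).

% dislikes(X, Y): succeeds only when likes(X, Y) is NOT provable.
dislikes(X, Y) :- likes(X, Y), !, fail.   % if X likes Y, cut (no other clause) and fail
dislikes(_, _).                           % otherwise succeed

% ?- dislikes(jyoti, wine).
% no
% ?- dislikes(jyoti, whisky).
% yes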
Probable Questions On Prolog:
What is Prolog and what are its main features?
How does Prolog handle logic programming?
Explain the syntax and structure of Prolog predicates.
What are the basic data types in Prolog?
How does Prolog perform unification and backtracking?
Describe the difference between facts and rules in Prolog.
Describe the use of lists in Prolog and common list operations.
Discuss the concept of negation in Prolog.
Explain how Prolog can be used for natural language processing tasks.
Explain the difference between depth-first search and breadth-first search in
Prolog's search strategy.
Probable Questions On Prolog:
How does Prolog handle arithmetic expressions?
References
1. Artificial Intelligence: Representation and Problem Solving
A solution of the 8-5-3 litre water-jug problem that measures 4 litres using the 5 L and
3 L jugs: discussion in the class (a hypothetical Prolog sketch follows).
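A hypothetical Prolog sketch of the puzzle as a state-space search, with the state
represented as jugs(Eight, Five, Three) and the goal of 4 litres in the 5-litre jug (the
representation and search strategy are assumptions, not the class solution):

% solve(-Path): Path is the list of successive states after each pour.
solve(Path) :- dfs(jugs(8,0,0), [jugs(8,0,0)], Path).

dfs(jugs(_, 4, _), _Visited, []).                 % goal: 4 litres in the 5-litre jug
dfs(State, Visited, [Next|Rest]) :-
    pour(State, Next),
    \+ member(Next, Visited),                     % avoid revisiting a state on this path
    dfs(Next, [Next|Visited], Rest).

% pour(Before, After): pour from one jug into another until the source is empty
% or the destination (capacities 8, 5, 3) is full.
pour(jugs(A,B,C), jugs(A1,B1,C)) :- T is min(A, 5-B), T > 0, A1 is A-T, B1 is B+T.
pour(jugs(A,B,C), jugs(A1,B,C1)) :- T is min(A, 3-C), T > 0, A1 is A-T, C1 is C+T.
pour(jugs(A,B,C), jugs(A1,B1,C)) :- T is min(B, 8-A), T > 0, B1 is B-T, A1 is A+T.
pour(jugs(A,B,C), jugs(A,B1,C1)) :- T is min(B, 3-C), T > 0, B1 is B-T, C1 is C+T.
pour(jugs(A,B,C), jugs(A1,B,C1)) :- T is min(C, 8-A), T > 0, C1 is C-T, A1 is A+T.
pour(jugs(A,B,C), jugs(A,B1,C1)) :- T is min(C, 5-B), T > 0, C1 is C-T, B1 is B+T.

% ?- solve(Path).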
