
AI Unit I Lecture Notes

The document provides an overview of an online class on artificial intelligence. It defines AI and discusses key concepts like production systems, search techniques including breadth-first search, depth-first search, hill climbing, best-first search, and the A* algorithm. It also covers the history and types of AI, including narrow AI and artificial general intelligence. Problem solving methods and heuristic search problems are explained.


IT V Sem Online class

IT-504(A)
Artificial Intelligence
Faculty Name-
Prof. Shubha Mishra
Unit 1 Syllabus
• Meaning and definition of artificial intelligence, production systems, characteristics of production systems.
• Study and comparison of breadth-first search and depth-first search techniques; other search techniques like hill climbing, best-first search, the A* algorithm, AO* algorithms, etc., and various types of control strategies.
AI?
Meaning of AI
• Data (input) → Machine (processing, using logic/algorithms) → Output

• “Artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.”
Examples
• Game Playing (Chess etc)
• Medical diagnosis
• Computer search engines
• Voice or handwriting recognition
• Expert Systems

“AI is the mapping of human intelligence features into machines”.
AI programming languages
• LISP- List Processor(1960s)
• PROLOG- Programmation en Logique(1970s)
• SHRDLU(microworld approach), written by
Terry Winograd of MIT(MIT AI Lab)
• Shakey(mobile robot) developed at the
Stanford Research Institute by Bertram
Raphael, Nils Nilsson, and others (1968–72).
History
• Alan Turing – Mathematician (1950)

“Can machines think?”
Building machines that are intelligent.
Intelligence?
• Thinking
• Decision Making
• Analyzing
• Reasoning(Forward & backward)
• Learning(trial error, experimental)
• Feeling/Sentiments/emotions
• Problem Solving(special & general purpose)
• Language(NLP)
AI Definition
• Artificial intelligence (AI) makes it possible for
machines to learn from experience, adjust to
new inputs and perform human-like tasks.
• Examples-
• chess-playing computers
• self-driving cars
• Siri, Alexa or Cortana
Types of AI
• Narrow AI: Sometimes referred to as "Weak AI," this kind of
artificial intelligence operates within a limited context and is a
simulation of human intelligence.
• Narrow AI is often focused on performing a single task extremely
well and while these machines may seem intelligent, they are
operating under far more constraints and limitations than even the
most basic human intelligence.
• Image recognition software
• Siri, Alexa and other personal assistants
• Self-driving cars
• IBM's Watson
Types of AI
• Artificial General Intelligence (AGI): AGI,
sometimes referred to as "Strong AI," is the
kind of artificial intelligence we see in the
movies, like the robots from Westworld or
Data from Star Trek: The Next Generation. AGI
is a machine with general intelligence and,
much like a human being, it can apply that
intelligence to solve any problem.
History
• Early AI research in the 1950s explored topics
like problem solving and symbolic methods. In
the 1960s, the US Department of Defense
took interest in this type of work and began
training computers to mimic basic human
reasoning. For example, the Defense
Advanced Research Projects Agency (DARPA)
completed street mapping projects in the
1970s. And DARPA produced intelligent
personal assistants in 2003
History
• Intelligent robots and artificial beings first appeared in the ancient Greek
myths of antiquity. Aristotle's development of the syllogism and its use of
deductive reasoning was a key moment in mankind's quest to understand
its own intelligence. While the roots are long and deep, the history of
artificial intelligence as we think of it today spans less than a century. The
following is a quick look at some of the most important events in AI.
• 1943
• Warren McCullough and Walter Pitts publish "A Logical Calculus of Ideas
Immanent in Nervous Activity." The paper proposed the first mathematical
model for building a neural network.
• 1949
• In his book The Organization of Behavior: A Neuropsychological
Theory, Donald Hebb proposes the theory that neural pathways are
created from experiences and that connections between neurons become
stronger the more frequently they're used. Hebbian learning continues to
be an important model in AI.
• 1950
• Alan Turing publishes "Computing Machinery and Intelligence," proposing
what is now known as the Turing Test, a method for determining if a
machine is intelligent.
• 1952
• Arthur Samuel develops a self-learning program to play checkers.
• 1954
• The Georgetown-IBM machine translation experiment automatically
translates 60 carefully selected Russian sentences into English.
• 1956
• The phrase artificial intelligence is coined at the "Dartmouth Summer
Research Project on Artificial Intelligence." Led by John McCarthy, the
conference, which defined the scope and goals of AI, is widely considered
to be the birth of artificial intelligence as we know it today.
• Allen Newell and Herbert Simon demonstrate Logic Theorist (LT), the first
reasoning program.
• 1958
• John McCarthy develops the AI programming language Lisp and publishes
the paper "Programs with Common Sense." The paper proposed the
hypothetical Advice Taker, a complete AI system with the ability to learn
from experience as effectively as humans do.
• 1959
• Allen Newell, Herbert Simon and J.C. Shaw develop the General Problem
Solver (GPS), a program designed to imitate human problem-solving.
• Herbert Gelernter develops the Geometry Theorem Prover program.
• Arthur Samuel coins the term machine learning while at IBM.
• John McCarthy and Marvin Minsky found the MIT Artificial Intelligence
Project.
• 1963
• John McCarthy starts the AI Lab at Stanford.
• 1966
• The Automatic Language Processing Advisory Committee (ALPAC) report
by the U.S. government details the lack of progress in machine translations
research, a major Cold War initiative with the promise of automatic and
instantaneous translation of Russian. The ALPAC report leads to the
cancellation of all government-funded MT projects.
• 1969
• The first successful expert systems, DENDRAL and MYCIN (designed to diagnose
blood infections), are developed at Stanford.
• 1972
• The logic programming language PROLOG is created.
• 1991
• U.S. forces deploy DART, an automated logistics planning and scheduling tool,
during the Gulf War.
• 1997
• IBM's Deep Blue beats world chess champion Garry Kasparov.
• 2005
• STANLEY, a self-driving car, wins the DARPA Grand Challenge.
• The U.S. military begins investing in autonomous robots like Boston Dynamics'
"Big Dog" and iRobot's "PackBot."
• 2008
• Google makes breakthroughs in speech recognition and introduces the feature in
its iPhone app.
• 2011
• IBM's Watson trounces the competition on Jeopardy!.
• 2012
• Andrew Ng, founder of the Google Brain Deep Learning project, feeds a neural
network trained with deep learning algorithms 10 million YouTube videos as a
training set. The neural network learns to recognize a cat without being told
what a cat is, ushering in a breakthrough era for neural networks and deep
learning funding.
• 2014
• Google's self-driving car becomes the first to pass a state driving test.
• 2016
• Google DeepMind's AlphaGo defeats world champion Go player Lee Sedol. The
complexity of the ancient Chinese game was seen as a major hurdle to clear in AI.
Problem Solving
• Identify
• Define
• Analyze
• Find the state space (set of possible solutions)
• Test state space
• Find optimal Solution
Heuristic
The word "heuristic" is derived from the Greek word meaning "to discover".
Heuristics are mental shortcuts used for problem solving.
Heuristic Search Problems
• A search problem consists of:
• A State Space. Set of all possible states where
you can be.
• A Start State. The state from where the search
begins.
• A Goal Test. A function that looks at the current state and returns whether
or not it is the goal state.
• The Solution to a search problem is a sequence of actions, called the plan,
that transforms the start state into the goal state.
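The components above can be written down directly. The following minimal Python sketch uses a hypothetical 4-node state space (node names are illustrative, not from the notes) to show a state space, start state, goal test, and plan:

```python
# Hypothetical state space: each state maps to the states reachable from it.
state_space = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
start_state = 'S'

def goal_test(state):
    # Looks at the current state and returns whether it is the goal state.
    return state == 'G'

# A solution (plan) is a sequence of states from the start to the goal:
plan = ['S', 'A', 'G']
```

Any search algorithm over this problem only needs these three ingredients: the state space, the start state, and the goal test.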
Heuristic Search Problems

• This plan is achieved through search algorithms.
Types of Search Algos
Uninformed Search
• The search algorithms have no additional
information on the goal node other than the
one provided in the problem definition.
• The plans to reach the goal state from the
start state differ only by the order and/or
length of actions. Uninformed search is also
called Blind search.
• Depth First Search
• Breadth First Search
• Uniform Cost Search
Components of Uninformed Search
• A problem graph, containing the start node S and
the goal node G.
• A strategy, describing the manner in which the
graph will be traversed to get to G .
• A fringe, which is a data structure used to store all the possible states
(nodes) that you can go to from the current state.
• A tree, that results while traversing to the goal
node.
• A solution plan, which is the sequence of nodes from S to G.
DFS
• Depth-first search (DFS) is an algorithm for
traversing or searching tree or graph data
structures. The algorithm starts at the root
node (selecting some arbitrary node as the
root node in the case of a graph) and explores
as far as possible along each branch before
backtracking.
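DFS can be sketched with an explicit stack as the fringe; the graph and node names below are a hypothetical example, not the figure from the notes:

```python
def dfs(graph, start, goal):
    """Iterative depth-first search; returns one path from start to goal."""
    stack = [(start, [start])]          # fringe: (node, path so far)
    visited = set()
    while stack:
        node, path = stack.pop()        # LIFO: explore the deepest node first
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            stack.append((neighbor, path + [neighbor]))
    return None                         # no path exists

graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': ['G'], 'D': ['G']}
print(dfs(graph, 'S', 'G'))             # -> ['S', 'B', 'D', 'G']
```

Because the stack is LIFO, the search runs down one branch (here through B) as far as possible before backtracking, which is exactly the behaviour described above.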
• Which solution would DFS find to move from node S to node G if run on the
graph below?
DFS Traversal
Analysis of DFS
• Time complexity: equivalent to the number of nodes traversed in DFS, where n
is the branching factor and d is the depth. T(n) = 1 + n + n^2 + ... + n^d =
O(n^d)
• Space complexity: equivalent to how large the fringe can get. S(n) = O(n·d)
• Completeness: DFS is complete if the search tree is
finite, meaning for a given finite search tree, DFS will
come up with a solution if it exists.
• Optimality: DFS is not optimal, meaning the number
of steps in reaching the solution, or the cost spent in
reaching it is high.
Breadth First Search
• Breadth-first search (BFS) is an algorithm for
traversing or searching tree or graph data structures.
It starts at the tree root (or some arbitrary node of a
graph, sometimes referred to as a ‘search key’), and
explores all of the neighbor nodes at the present
depth prior to moving on to the nodes at the next
depth level.
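BFS can be sketched the same way, with a FIFO queue as the fringe (the graph below is a hypothetical example, not the figure from the notes):

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search; returns a shallowest path from start to goal."""
    queue = deque([(start, [start])])   # fringe: FIFO queue of (node, path)
    visited = {start}
    while queue:
        node, path = queue.popleft()    # expand the shallowest node first
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, path + [neighbor]))
    return None

graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': ['G'], 'D': ['G']}
print(bfs(graph, 'S', 'G'))             # -> ['S', 'A', 'C', 'G']
```

The only difference from the DFS sketch is the fringe data structure: popping from the front of a queue instead of the top of a stack makes the search explore all nodes at the present depth before moving deeper.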
Breadth First Search
• Which solution would BFS find to move from node S
to node G if run on the graph below?
BFS Traversal
Analysis of BFS
• Let s = the depth of the shallowest solution and n = the branching factor
(so n^i = number of nodes at level i).

• Time complexity: equivalent to the number of nodes traversed in BFS until
the shallowest solution. T(n) = 1 + n + n^2 + ... + n^s = O(n^s)
• Space complexity: equivalent to how large the fringe can get. S(n) = O(n^s)
• Completeness: BFS is complete, meaning for a given
search tree, BFS will come up with a solution if it exists.
• Optimality: BFS is optimal as long as the costs of all
edges are equal.
Uniform Cost Search(Dijkstra for
large Graphs)
• Instead of inserting all vertices into a priority queue, we insert only the
source, then insert the others one by one as needed.
• In every step, we check if the item is already in the priority queue (using
a visited array). If yes, we perform a decrease-key operation; else we insert it.
• This variant of Dijkstra is useful for infinite graphs and graphs that are
too large to represent in memory.
UCS Example
UCS
• Find the minimum cost from S to G?

• In this algorithm, starting from the start state we visit the adjacent
states and choose the least costly one; then we choose the next least costly
state from all the unvisited states adjacent to the visited states. In this
way we try to reach the goal state.
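The behaviour described above can be sketched with a priority queue ordered by path cost, inserting nodes lazily in the spirit of the Dijkstra variant mentioned earlier (the graph and edge costs are a hypothetical example):

```python
import heapq

def ucs(graph, start, goal):
    """Uniform-cost search: expand the cheapest frontier node first."""
    frontier = [(0, start, [start])]    # priority queue of (cost, node, path)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue                    # stale entry; a cheaper one was expanded
        visited.add(node)
        for neighbor, step in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + step, neighbor, path + [neighbor]))
    return None

graph = {'S': [('A', 1), ('B', 4)],
         'A': [('B', 1), ('G', 6)],
         'B': [('G', 2)]}
print(ucs(graph, 'S', 'G'))             # -> (4, ['S', 'A', 'B', 'G'])
```

Note that the direct edge S→B (cost 4) is never expanded: the cheaper path through A reaches B first, which is exactly why UCS is optimal for nonnegative edge costs.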
UCS
• Complexity: O( m ^ (1+floor(l/e)))
• where,
• m is the maximum number of neighbors a node has
• l is the length of the shortest path to the goal state
• e is the least cost of an edge
Informed Search
• An informed search algorithm uses additional knowledge such as how far we
are from the goal, the path cost, how to reach the goal node, etc.
• This knowledge helps agents explore less of the search space and find the
goal node more efficiently.
• Informed search algorithms are more useful for large search spaces.
• Informed search algorithm uses the idea of heuristic, so
it is also called Heuristic search.
Heuristic function
• It takes the current state of the agent as its input and produces an
estimate of how close the agent is to the goal.
• A heuristic method might not always give the best solution, but it is
guaranteed to find a good solution in reasonable time.
• A heuristic function estimates how close a state is to the goal. It is
represented by h(n), and it estimates the cost of an optimal path between
the given state and the goal state.
• The value of the heuristic function is always positive.
Heuristic function
• Admissibility of the heuristic function is given as:

h(n) <= h*(n)

• Here h(n) is the heuristic (estimated) cost, and h*(n) is the actual
optimal cost. Hence the heuristic cost should be less than or equal to the
actual cost; in other words, an admissible heuristic never overestimates.
Pure Heuristic Search:
• The simplest form of heuristic search algorithm.
• It expands nodes based on their heuristic value h(n). It maintains two
lists, OPEN and CLOSED. In the CLOSED list it places those nodes which have
already been expanded, and in the OPEN list it places nodes which have not
yet been expanded.
• On each iteration, the node n with the lowest heuristic value is expanded,
all its successors are generated, and n is placed in the CLOSED list. The
algorithm continues until a goal state is found.
Types of Informed Search
• Best First Search Algorithm (Greedy search)
• A* Search Algorithm
Best First Search
• It always selects the path which appears best at that moment.
• It is a combination of depth-first search and breadth-first search.
• It uses a heuristic function to guide the search.
• At each step, we choose the most promising node.
• In the best-first search algorithm, we expand the node which is closest to
the goal node, where closeness is estimated by the heuristic function, i.e.
f(n) = h(n).
BFS Algo Steps
Where h(n) = estimated cost from node n to the goal. The greedy best-first
algorithm is implemented with a priority queue.
1: Place the starting node into the OPEN list.
2: If the OPEN list is empty, stop and return failure.
3: Remove the node n from the OPEN list which has the lowest value of h(n),
and place it in the CLOSED list.
4: Expand the node n, and generate the successors of node n.
5: Check each successor of node n, and find whether any node is a goal node
or not. If any successor node is a goal node, then return success and
terminate the search; else proceed to Step 6.
6: For each successor node, the algorithm checks the evaluation function
f(n), and then checks if the node is in either the OPEN or CLOSED list. If
the node is in neither list, then add it to the OPEN list.
7: Return to Step 2.
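The steps above can be sketched with a priority queue ordered by h(n). The graph and h-values below are hypothetical, chosen so the search returns a path of the same shape (S → B → F → G) as the worked example in these notes:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search following the OPEN/CLOSED steps above."""
    open_list = [(h[start], start, [start])]      # Step 1: OPEN as priority queue
    closed = set()
    while open_list:                              # Step 2: fail if OPEN is empty
        _, node, path = heapq.heappop(open_list)  # Step 3: lowest h(n) to CLOSED
        if node == goal:                          # Step 5: goal test
            return path
        closed.add(node)
        for succ in graph.get(node, []):          # Step 4: expand n
            if succ not in closed:                # Step 6: skip expanded nodes
                heapq.heappush(open_list, (h[succ], succ, path + [succ]))
    return None

graph = {'S': ['A', 'B'], 'B': ['E', 'F'], 'F': ['I', 'G']}
h = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'I': 9, 'G': 0}
print(greedy_best_first(graph, h, 'S', 'G'))      # -> ['S', 'B', 'F', 'G']
```

For brevity this sketch only checks the CLOSED list in Step 6; duplicate OPEN entries are harmless here because the cheaper h(n) copy is always popped first.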
BFS Analysis
Advantages:
• Best-first search can switch between BFS and DFS, gaining the advantages of
both algorithms.
• This algorithm is more efficient than the BFS and DFS algorithms.
Disadvantages:
• It can behave as an unguided depth-first search in the worst-case scenario.
• It can get stuck in a loop, as DFS can.
• This algorithm is not optimal.
Example
• Consider the search problem below; we will traverse it using greedy
best-first search.
• At each iteration, each node is expanded using the evaluation function
f(n) = h(n), which is given in the table.
Solution
• Expand the nodes of S and put them in the CLOSED list:
• Initialization: Open [A, B], Closed [S]
• Iteration 1: Open [A], Closed [S, B]
• Iteration 2: Open [E, F, A], Closed [S, B]
             : Open [E, A], Closed [S, B, F]
• Iteration 3: Open [I, G, E, A], Closed [S, B, F]
             : Open [I, E, A], Closed [S, B, F, G]
Solution
• Hence the final solution path will be:
• S ----> B ----> F ----> G
Example

• Common sense
• Rule of thumb- It
allows an individual to
make an approximation
without having to do
exhaustive research.
Why Heuristics
• Heuristics facilitate timely decisions. Analysts
in every industry use rules of thumb such as
intelligent guesswork, trial and error, process
of elimination, past formulas and the analysis
of historical data to solve a problem. Heuristic
methods make decision making simpler and
faster through short cuts and good-enough
calculations.
Heuristics in AI
• A Heuristic is a technique to solve a problem
faster than classic methods, or to find an
approximate solution when classic methods
cannot.

• How is it set?
It can be taken as a mathematical value to solve any real-time AI-based
problem.
Heuristic Search Method
Heuristic Search Function
• A heuristic (or heuristic function) is used in search algorithms. At each
branching step, it evaluates the available information and decides which
branch to follow by ranking the alternatives. A heuristic is any device that
is often effective but is not guaranteed to work in every case.
Heuristic Search Techniques
• Direct: use the entire state space to search for the next possible move.
• BFS
• DFS
• Weak: these are effective if applied correctly to the right types of tasks
and usually demand domain-specific information. We need this extra
information to compute preferences among child nodes to explore and expand.
Searching Algorithms
• Best-First Search
• A* Search
• AO* Search
• Bidirectional Search
• Simulated Annealing
• Hill Climbing
• Constraint Satisfaction Problems
Why AI?
• Problem Solving

Problem formulation, Problem Characteristics,
Solution Search, Optimum Solution finding
Scope & Applications
• Health Care
• AI applications can provide personalized
medicine and X-ray readings. Personal health
care assistants can act as life coaches,
reminding you to take your pills, exercise or
eat healthier.
Scope & Applications

• Retail
• AI provides virtual shopping capabilities that
offer personalized recommendations and
discuss purchase options with the consumer.
Stock management and site layout
technologies will also be improved with AI.
Scope & Applications

• Banking
• Artificial Intelligence enhances the speed,
precision and effectiveness of human efforts.
In financial institutions, AI techniques can be
used to identify which transactions are likely
to be fraudulent, adopt fast and accurate
credit scoring, as well as automate manually
intense data management tasks.
Scope & Applications

• Manufacturing
• AI can analyze factory IoT data as it streams
from connected equipment to forecast
expected load and demand using recurrent
networks, a specific type of deep learning
network used with sequence data.
Challenges of using AI

• The principal limitation of AI is that it learns from the data. There is no
other way in which knowledge can be incorporated. That means any inaccuracies
in the data will be reflected in the results. And any additional layers of
prediction or analysis have to be added separately.
Challenges of using AI

Today’s AI systems are trained to do a clearly defined task. The system that
plays poker cannot play solitaire or chess. The system that detects fraud
cannot drive a car or give you legal advice. In fact, an AI system that
detects health care fraud cannot accurately detect tax fraud or warranty
claims fraud.
Challenges of using AI

• In other words, these systems are very, very specialized. They are focused
on a single task and are far from behaving like humans.
• Likewise, self-learning systems are not autonomous systems. The imagined AI
technologies that you see in movies and TV are still science fiction. But
computers that can probe complex data to learn and perfect specific tasks are
becoming quite common.
Production Systems
Definition
• Production systems can be defined as a kind of cognitive architecture in
which knowledge is represented in the form of rules. A system that uses this
form of knowledge representation is called a production system. Simply put,
production systems consist of rules and facts. Knowledge is usually encoded
in a declarative form which comprises a set of rules.
Major Components Of An AI
Production System
• A global database
• A set of production rules
• A control system
global database

• The global database is the central data structure used by an AI production
system.
production rules
• The production rules operate on the global database. Each rule usually has
a precondition that is either satisfied or not by the global database. If the
precondition is satisfied, the rule can be applied. Application of the rule
changes the database.
control system

• The control system chooses which applicable rule should be applied and
ceases computation when a termination condition on the database is satisfied.
If multiple rules are eligible to fire at the same time, the control system
resolves the conflict.
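The three components can be sketched as a toy forward-chaining production system; the facts and rules below are illustrative, not from the notes:

```python
# Global database: the central data structure, here a set of facts.
database = {'raining'}

# Production rules: (precondition fact, fact to add when the rule fires).
rules = [
    ('raining', 'ground_wet'),
    ('ground_wet', 'roads_slippery'),
]

def run(database, rules):
    """Control system: repeatedly fire applicable rules, stopping when no
    rule can change the database (the termination condition)."""
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition in database and conclusion not in database:
                database.add(conclusion)   # applying a rule changes the database
                changed = True
    return database

print(sorted(run(database, rules)))  # -> ['ground_wet', 'raining', 'roads_slippery']
```

Here conflict resolution is trivial (rules fire in list order); a real control system would apply a more deliberate strategy when several rules match at once.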
Production system characteristics

• Simplicity
• Modularity
• Modifiability
• Knowledge Intensive
Types
• Monotonic Production System: a production system in which the application
of a rule never prevents the later application of another rule that could
also have been applied at the time the first rule was selected.
Types
• Non-Monotonic Production Systems are useful for solving ignorable problems.
These systems are important from an implementation standpoint because they
can be implemented without the ability to backtrack to previous states when
it is discovered that an incorrect path was followed. This kind of production
system increases efficiency, since it is not necessary to keep track of the
changes made in the search process.
Types
• Partially Commutative Production System: a type of production system in
which, if the application of a sequence of rules transforms state X into
state Y, then any allowable permutation of those rules also transforms state
X into state Y. Theorem proving falls under the monotonic partially
commutative category.
Types
• Commutative Systems are usually useful for problems in which changes occur
but can be reversed and in which the order of operations is not critical, for
example the 8-puzzle problem. Production systems that are not partially
commutative are useful for many problems in which irreversible changes occur,
such as chemical analysis. When dealing with such systems, the order in which
operations are performed is very important, and hence correct decisions must
be made the first time.
Advantages of Production System
• The system uses pattern directed control which is
more flexible than algorithmic control
• Provides opportunities for heuristic control of
search
• Tracing and Explanation – Simple Control,
Informative rules
• Language Independence
• A plausible model of human problem solving -
SOAR, ACT
• A good way to model the state-driven nature of
intelligent machines
Advantages of Production System
• Provides excellent tools for structuring AI programs
• The system is highly modular because individual
rules can be added, removed or modified
independently
• Expressed in natural form.
• Separation of knowledge and control – the Recognize-Act cycle
• A natural mapping onto state space search – data-driven or goal-driven
• Modularity of production rules
Disadvantages
• It’s very difficult to analyse the flow of control within a production
system.
• There is an absence of learning, because a rule-based production system
does not store the result of the problem for future use.
• The rules in the production system should not have any type of conflict;
when a new rule is added to the database, it should be ensured that it does
not conflict with the existing rules.
Eg. Inference rule
• It is a type of rule that consists of a logical
form used for transformation.
• Deductive Inference Rule
It consists of a logic that helps reasoning with
the help of multiple statements to reach a
conclusion.
Example
If it is given that ‘A’ and ‘A implies B,’ then we can infer the
conclusion ‘B.’

• A, A ⇒ B ⊢ B
Where,
• A: The students are studying well.
• A ⇒ B: If the students are studying well, then all the students will pass
the exam.
Example
• Output:
• B: All the students will pass the exam.
Constraint Satisfaction problem
• A CSP consists of:
• A finite set of variables X1, X2, …, Xn
• A nonempty domain of possible values for each variable: D1, D2, …, Dn,
where Di = {v1, …, vk}
• A finite set of constraints C1, C2, …, Cm. Each constraint Ci limits the
values that the variables can take, e.g., X1 ≠ X2.
• A state is defined as an assignment of values to some or all variables.
Constraint Satisfaction problem

• A consistent assignment does not violate the constraints.
• Example: Sudoku
Constraint Satisfaction problem
• An assignment is complete when every variable is
assigned a value.
• A solution to a CSP is a complete assignment that
satisfies all constraints.
Applications:
• Map coloring
• Line Drawing Interpretation
• Scheduling problems—Job shop scheduling
—Scheduling the Hubble Space Telescope
• Floor planning for VLSI
• Beyond our scope: CSPs that require a solution
that maximizes an objective function.
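A CSP of the map-coloring kind listed above can be solved with a small backtracking search. This sketch is illustrative (the region names and constraints are a toy instance, not from the notes); every constraint is a binary inequality X1 ≠ X2:

```python
def solve_csp(variables, domains, constraints, assignment=None):
    """Backtracking search: extend a partial assignment one variable at a
    time, pruning any value that violates a binary != constraint."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                       # complete, consistent assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # Consistency check: value must differ from every constrained neighbor.
        if all(assignment.get(other) != value
               for a, b in constraints
               for other in ((b,) if a == var else (a,) if b == var else ())):
            assignment[var] = value
            result = solve_csp(variables, domains, constraints, assignment)
            if result:
                return result
            del assignment[var]                 # backtrack
    return None

# Map coloring: three mutually adjacent regions must all get different colors.
variables = ['WA', 'NT', 'SA']
domains = {v: ['red', 'green', 'blue'] for v in variables}
constraints = [('WA', 'NT'), ('WA', 'SA'), ('NT', 'SA')]
print(solve_csp(variables, domains, constraints))
```

The pruning happens inside the consistency check: any branch that violates a constraint is cut off immediately, which is the advantage over plain state-space search noted below.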
Benefits of CSP
• Clean specification of many problems: generic goal, successor function and
heuristics
• Just represent the problem as a CSP and solve it with a general package
• A CSP "knows" which variables violate a constraint, and hence where to
focus the search
• CSPs automatically prune off all branches that violate constraints (state
space search could do this only by hand-building constraints into the
successor function)
Variety of Constraints
• Unary constraints involve a single variable, e.g., SA ≠ green
• Binary constraints involve pairs of variables, e.g., SA ≠ WA
• Higher-order constraints involve 3 or more variables, e.g.,
crypt-arithmetic column constraints
• Preferences (soft constraints)
• Constrained optimization problems
•    BASE
   + BALL
  ----------
   GAMES
• Output: A=4, B=2, E=1, G=0, L=5, M=9, S=6
     BASE        2461
   + BALL      + 2455
  ----------  ----------
   GAMES       04916
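The assignment above can be double-checked by substituting the digits and comparing the sums (a minimal sketch):

```python
# Verify the BASE + BALL = GAMES solution by substituting the digits.
digits = {'A': 4, 'B': 2, 'E': 1, 'G': 0, 'L': 5, 'M': 9, 'S': 6}

def word_value(word):
    # Interpret a word as a number under the digit assignment.
    return int(''.join(str(digits[ch]) for ch in word))

base, ball, games = word_value('BASE'), word_value('BALL'), word_value('GAMES')
# 2461 + 2455 = 4916, and GAMES (04916) also evaluates to 4916.
print(base, '+', ball, '=', base + ball, '  GAMES =', games)
```

A full crypt-arithmetic solver would search over digit assignments with the column constraints; here we only verify the given solution.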
Examples of heuristic search
• Water Jug problem
We are provided with two jugs: one has the capacity to hold 3 gallons of
water and the other has the capacity to hold 4 gallons of water. There is no
other measuring equipment available, and the jugs do not have any kind of
marking on them. The agent's task is to end up with exactly 2 gallons in the
4-gallon jug by using only these two jugs and no other material. Initially,
both jugs are empty.
Production rules for solving the
water jug problem
State (x, y): x = gallons in the 4-gallon jug, y = gallons in the 3-gallon jug.

S.No.  Initial state  Condition            Final state    Description of action taken
1.     (x, y)         if x < 4             (4, y)         Fill the 4-gallon jug completely
2.     (x, y)         if y < 3             (x, 3)         Fill the 3-gallon jug completely
3.     (x, y)         if x > 0             (x-d, y)       Pour some water (d) out of the 4-gallon jug
4.     (x, y)         if y > 0             (x, y-d)       Pour some water (d) out of the 3-gallon jug
5.     (x, y)         if x > 0             (0, y)         Empty the 4-gallon jug
6.     (x, y)         if y > 0             (x, 0)         Empty the 3-gallon jug
7.     (x, y)         if x+y >= 4, y > 0   (4, y-[4-x])   Pour water from the 3-gallon jug until the 4-gallon jug is full
8.     (x, y)         if x+y >= 3, x > 0   (x-[3-y], 3)   Pour water from the 4-gallon jug until the 3-gallon jug is full
9.     (x, y)         if x+y <= 4, y > 0   (x+y, 0)       Pour all water from the 3-gallon jug into the 4-gallon jug
10.    (x, y)         if x+y <= 3, x > 0   (0, x+y)       Pour all water from the 4-gallon jug into the 3-gallon jug
Solution
S.No.  4-gallon jug contents  3-gallon jug contents  Rule followed
1.     0 gallons              0 gallons              Initial state
2.     0 gallons              3 gallons              Rule no. 2
3.     3 gallons              0 gallons              Rule no. 9
4.     3 gallons              3 gallons              Rule no. 2
5.     4 gallons              2 gallons              Rule no. 7
6.     0 gallons              2 gallons              Rule no. 5
7.     2 gallons              0 gallons              Rule no. 9


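As a sketch (not part of the notes), the production rules above can be driven by a breadth-first search over (x, y) states. The fill, empty, and pour rules are collapsed into six deterministic moves, so BFS may return an equally short plan that differs from the table:

```python
from collections import deque

def water_jug(goal=2, cap4=4, cap3=3):
    """BFS over (x, y) states, x = 4-gallon jug, y = 3-gallon jug; stops
    when the 4-gallon jug holds exactly `goal` gallons."""
    start = (0, 0)
    queue = deque([(start, [start])])
    visited = {start}
    while queue:
        (x, y), path = queue.popleft()
        if x == goal:
            return path
        successors = [
            (cap4, y), (x, cap3),       # fill a jug (rules 1-2)
            (0, y), (x, 0),             # empty a jug (rules 5-6)
            # pour 3-gallon into 4-gallon until full or empty (rules 7, 9):
            (min(x + y, cap4), y - (min(x + y, cap4) - x)),
            # pour 4-gallon into 3-gallon until full or empty (rules 8, 10):
            (x - (min(x + y, cap3) - y), min(x + y, cap3)),
        ]
        for state in successors:
            if state not in visited:
                visited.add(state)
                queue.append((state, path + [state]))
    return None

print(water_jug())
```

Because BFS expands states level by level, the returned plan uses the minimum number of moves (six, the same length as the solution table above).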
N-Queens Problem
• The goal is to place “N” Number of queens on
an “N x N” sized chess board such that no
queen is under attack by another queen.
Eg. 8-Queens problem
• The eight queens puzzle is the problem of
placing eight chess queens on an 8×8
chessboard so that no two queens threaten
each other; thus, a solution requires that no
two queens share the same row, column, or
diagonal.
• Chess composer Max Bezzel published the
eight queens puzzle in 1848
• Formulation – I
• A state is any arrangement of 0 to 8 queens on the board.
• Operators add a queen to any square.
• The eight queens puzzle has 92 distinct
solutions.
• If solutions that differ only by the symmetry
operations of rotation and reflection of the
board are counted as one, the puzzle has 12
solutions. These are called fundamental
solutions
Solution tricks
• The initial state is given by the empty chess board. Placing a queen on the
board represents an action in the search problem. A goal state is a
configuration where none of the queens attacks any of the others. Note that
every goal state is reached after exactly 8 actions.
Solution tricks

• This formulation as a search problem can be improved when we realize that,
in any solution, there must be exactly one queen in each of the columns.
Thus, the possible actions can be restricted to placing a queen in the next
column that does not yet contain a queen. This reduces the branching factor
from (initially) 64 to 8.
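The reduced column-by-column formulation can be sketched as a backtracking search (an illustrative implementation, not from the notes):

```python
def solve_n_queens(n=8):
    """Place one queen per column (the reduced formulation above),
    backtracking whenever a new queen is attacked; returns one solution
    as a list where solution[c] = row of the queen in column c."""
    solution = []

    def safe(row, col):
        for c, r in enumerate(solution):
            # Attack if same row, or same diagonal (equal row/column offsets).
            if r == row or abs(r - row) == abs(c - col):
                return False
        return True

    def place(col):
        if col == n:
            return True                 # all n queens placed: goal state
        for row in range(n):            # branching factor 8, not 64
            if safe(row, col):
                solution.append(row)
                if place(col + 1):
                    return True
                solution.pop()          # backtrack
        return False

    return solution if place(0) else None

print(solve_n_queens(8))
```

Columns never need to be checked for conflicts because the formulation guarantees one queen per column; only rows and diagonals are tested.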
Missionaries and cannibals(classic
river-crossing logic puzzle)
Three missionaries and three cannibals are on one side of a river, along with
a boat that can hold one or two people. Find a way to get everyone to the
other side, without ever leaving a group of missionaries outnumbered by
cannibals.
• State: (#m, #c, 1/0)
• #m: number of missionaries on the first bank
• #c: number of cannibals on the first bank
• The last bit indicates whether the boat is at the first bank.
Start state: (3, 3, 1)
Goal state: (0, 0, 0)
Operators:
• Boat carries (1, 0) or (0, 1) or (1, 1) or (2, 0) or
(0, 2)
Outline of a search algorithm
1. Initialize: Set OPEN = {s}
2. Fail: If OPEN = { }, terminate with failure
3. Select: Select a state, n, from OPEN
4. Terminate: If n ∈ G, terminate with success
5. Expand: Generate the successors of n using O and insert them in OPEN
6. Loop: Go to Step 2.
Outline of a search algorithm
• OPEN can be a queue (FIFO) or a stack (LIFO).
• Is this algorithm guaranteed to terminate?
• Under what circumstances will it terminate?
Hill Climbing
• Hill climbing algorithm is a local search
algorithm which continuously moves in the
direction of increasing elevation/value to find
the peak of the mountain or best solution to
the problem.
• It terminates when it reaches a peak value
where no neighbor has a higher value.
• The hill climbing algorithm is a technique
used for optimizing mathematical problems.
• An example of the hill climbing algorithm is the
Traveling Salesman Problem, in which we need
to minimize the distance traveled by the
salesman.
• It is also called greedy local search, as it only
looks to its good immediate neighbor state
and not beyond that.
• A node of hill climbing algorithm has two
components which are state and value.
• Hill Climbing is mostly used when a good
heuristic is available.
Features of Hill Climbing
• Generate and Test variant: Hill climbing is a
variant of the Generate and Test method. The
Generate and Test method produces feedback
which helps to decide which direction to move
in the search space.
• Greedy approach: Hill-climbing algorithm
search moves in the direction which optimizes
the cost.
• No backtracking: It does not backtrack the
search space, as it does not remember the
previous states.
State-space Diagram for Hill
Climbing
Different regions in the state space
landscape
• Local Maximum: Local maximum is a state
which is better than its neighbor states, but
there is also another state which is higher
than it.
• Global Maximum: Global maximum is the best
possible state of state space landscape. It has
the highest value of objective function.
• Current state: It is a state in a landscape
diagram where an agent is currently present.
• Flat local maximum: It is a flat space in the
landscape where all the neighbor states of
current states have the same value.
• Shoulder: It is a plateau region which has an
uphill edge.
Types of Hill Climbing Algorithm
• Simple hill Climbing
• Steepest-Ascent hill-climbing
• Stochastic hill Climbing
Simple hill climbing
• It is the simplest way to implement a hill climbing
algorithm.
• It evaluates one neighbor node state at a time and
selects the first one which optimizes the current
cost, setting it as the current state.
• It only checks one successor state and, if that is
better than the current state, moves to it; otherwise
it stays in the same state.
Features
• This algorithm has the following features:
• Less time consuming
• Less optimal solution and the solution is not
guaranteed
Algorithm Steps
1. Evaluate the initial state; if it is the goal state, then
return success and stop.
2. Loop until a solution is found or there is no new
operator left to apply.
3. Select and apply an operator to the current state.
4. Check new state:
• If it is goal state, then return success and quit.
• Else if it is better than the current state then
assign new state as a current state.
• Else, if it is not better than the current state, then
return to step 2.
5. Exit.
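As a sketch (Python is assumed, and the objective and neighbor functions are illustrative, not from the notes), simple hill climbing can be written as:

```python
def simple_hill_climb(state, value, neighbors):
    """Simple hill climbing: move to the FIRST neighbor that improves
    on the current state; stop when no neighbor does."""
    while True:
        for n in neighbors(state):
            if value(n) > value(state):
                state = n
                break            # take the first better neighbor found
        else:
            return state         # no better neighbor: local maximum

# Illustrative objective: a single peak at x = 3.
peak = simple_hill_climb(0, lambda x: -(x - 3) ** 2,
                         lambda x: [x - 1, x + 1])
```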
Steepest-Ascent hill climbing
• The steepest-ascent algorithm is a variation of the
simple hill climbing algorithm.
• This algorithm examines all the neighboring
nodes of the current state and selects one
neighbor node which is closest to the goal
state.
• This algorithm consumes more time, as it
examines multiple neighbors.
S.A Algorithm steps
• Step 1: Evaluate the initial state. If it is the goal
state, return success and stop; else make the
current state the initial state.
• Step 2: Loop until a solution is found or the
current state does not change.
• Let SUCC be a state such that any successor of
the current state will be better than it.
• For each operator that applies to the current
state:
S.A Algorithm steps
• Apply the new operator and generate a new
state.
• Evaluate the new state.
• If it is goal state, then return it and quit, else
compare it to the SUCC.
• If it is better than SUCC, then set new state as
SUCC.
• If the SUCC is better than the current state, then
set current state to SUCC.
• Step 5: Exit.
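The SUCC bookkeeping above can be sketched compactly (Python assumed; the single-peak objective is illustrative):

```python
def steepest_ascent(state, value, neighbors):
    """Steepest-ascent hill climbing: examine ALL neighbors, keep the
    best one (SUCC), and move only if SUCC beats the current state."""
    while True:
        succ = max(neighbors(state), key=value)   # best of all neighbors
        if value(succ) <= value(state):
            return state        # current state does not change: stop
        state = succ

# Illustrative objective: a single peak at x = 3.
peak = steepest_ascent(0, lambda x: -(x - 3) ** 2,
                       lambda x: [x - 1, x + 1])
```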
Stochastic hill climbing
• Stochastic hill climbing does not examine all
of its neighbors before moving.
• This search algorithm selects one neighbor
node at random and decides whether to
choose it as a current state or examine
another state.
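A minimal sketch (Python assumed; the objective, neighbor function, and step budget are illustrative):

```python
import random

def stochastic_hill_climb(state, value, neighbors, max_steps=1000):
    """Stochastic hill climbing: pick ONE neighbor at random and move
    to it only if it improves on the current state."""
    for _ in range(max_steps):
        n = random.choice(neighbors(state))
        if value(n) > value(state):
            state = n
    return state

# Illustrative objective: a single peak at x = 3.
random.seed(0)   # for a reproducible run
peak = stochastic_hill_climb(0, lambda x: -(x - 3) ** 2,
                             lambda x: [x - 1, x + 1])
```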
Problems in Hill Climbing Algorithm
• Local Maximum: A local
maximum is a peak
state in the landscape
which is better than
each of its neighboring
states, but there is
another state also
present which is higher
than the local maximum.
Solution
• Backtracking technique can be a solution of
the local maximum in state space landscape.
• Create a list of promising paths so that the
algorithm can backtrack the search space and
explore other paths as well.
Problems in Hill Climbing Algorithm
• Plateau: A plateau is the
flat area of the search
space in which all the
neighbor states of the
current state contain the
same value; because of
this, the algorithm does not
find any best direction to
move.
• A hill-climbing search
might get lost in the
plateau area.
Solution
• The solution for the plateau is to take big steps or
very little steps while searching.
• Randomly select a state which is far away from the
current state, so that the algorithm may
find a non-plateau region.
Problems in Hill Climbing Algorithm
• Ridges: A ridge is a
special form of the local
maximum. It has an
area which is higher
than its surrounding
areas, but itself has a
slope, and cannot be
reached in a single
move.
Solution
• With the use of bidirectional search, or by moving in
different directions, we can improve this problem.
A* Algorithm
• A* Algorithm is one of
the best and popular
techniques used for
path finding and graph
traversals.
• A lot of games and web-
based maps use this
algorithm for finding
the shortest path
efficiently.
Description of A*
• When A* enters into a problem, firstly it
calculates the cost to travel to the
neighbouring nodes and chooses the node
with the lowest cost.
• If f(n) denotes the cost, A* chooses the node
with the lowest f(n) value.
• Here ‘n’ denotes the neighbouring nodes.
Working mechanism
• It maintains a tree of paths originating at the
start node.
• It extends those paths one edge at a time.
• It continues until its termination criterion is
satisfied.
• A* Algorithm extends the path that minimizes
the following function:
f(n) = g(n) + h(n)
Here,
• ‘n’ is the last node on the path
• g(n) is the cost of the path from start node to node ‘n’
• h(n) is a heuristic function that estimates the cost of
the cheapest path from node ‘n’ to the goal node
Algorithm assumptions
• The implementation of A* Algorithm involves
maintaining two lists: OPEN and CLOSED.
• OPEN contains those nodes that have been
evaluated by the heuristic function but have
not been expanded into successors yet.
• CLOSED contains those nodes that have
already been visited.
Algorithm Steps
• Define a list OPEN. Initially, OPEN consists
solely of a single node, the start node S.
• If the list is empty, return failure and exit.
• Remove node n with the smallest value of
f(n) from OPEN and move it to list CLOSED.
• If node n is a goal state, return success and
exit.
• Expand node n.
• If any successor to n is the goal node, return
success and the solution by tracing the path
from the goal node to S. Otherwise, go to the
next step.
• For each successor node, apply the
evaluation function f to the node.
• Go back to step 2.
Pros & Cons
• Pathfinder algorithms like A* help you plan
things rather than waiting until you discover
the problem.
• They act proactively rather than reacting to
a situation.
• The disadvantage is that it is a bit slower
than the other algorithms.
Find the most cost-effective path to
reach from start state A to final state
G using A* Algorithm.
Solution
• Since A is the starting node, the value of g(x)
for A is zero, and from the graph we get that
the heuristic value of A is 11; therefore
• f(x) = g(x) + h(x)
• f(A) = 0 + 11 = 11
• Thus for A, we can write A = 11.
• Now from A, we can go to point B or point E,
so we compute f(x) for each of them:
• A → B = 2 + 6 = 8
• A → E = 3 + 7 = 10
• Since the cost for A → B is less, we move
forward with this path and compute the f(x)
for the children nodes of B.
Solution contd...
• Since there is no path between C and G, the
heuristic cost is set to infinity or a very high
value:
• A → B → C = (2 + 1) + 99 = 102
• A → B → G = (2 + 9) + 0 = 11
• Here the path A → B → G has the least cost,
but it is still more than the cost of A → E, so
we explore that path further:
• A → E → D = (3 + 6) + 1 = 10
• Comparing the cost of A → E → D with all the
paths we got so far, as this cost is the least of
all, we move forward with this path and
compute the f(x) for the children of D:
• A → E → D → G = (3 + 6 + 1) + 0 = 10
Solution
• Now comparing all the paths that lead us to the goal,
we conclude that A → E → D → G is the most cost-
effective path to get from A to G.
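The worked example above can be checked with a short Python sketch of A* (the language is an assumption; the edge costs and heuristic values are read off the example, with h(C) set to 99 because no path links C to G):

```python
import heapq

# The example graph from the notes: edge costs and heuristic values.
edges = {'A': [('B', 2), ('E', 3)], 'B': [('C', 1), ('G', 9)],
         'E': [('D', 6)], 'D': [('G', 1)], 'C': [], 'G': []}
h = {'A': 11, 'B': 6, 'C': 99, 'D': 1, 'E': 7, 'G': 0}

def a_star(start, goal):
    """Repeatedly expand the OPEN node with the smallest f(n) = g(n) + h(n)."""
    open_heap = [(h[start], 0, start, [start])]   # (f, g, node, path)
    closed = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for succ, cost in edges[node]:
            if succ not in closed:
                heapq.heappush(open_heap,
                               (g + cost + h[succ], g + cost, succ, path + [succ]))
    return None, float('inf')

path, cost = a_star('A', 'G')
```

The search returns the path A → E → D → G with total cost 10, matching the solution above.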
PRACTICE PROBLEMS BASED ON A*
ALGORITHM-
Given an initial state of a 8-puzzle problem and final state to
be reached.
Find the most cost-effective path to reach the final state
from initial state using A* Algorithm.
Consider g(n) = Depth of node and h(n) = Number of
misplaced tiles.
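Since the puzzle's initial and final boards are given only as images, the boards below are hypothetical; the sketch (Python assumed) only shows how g(n) and h(n) from the problem statement would be computed:

```python
def misplaced_tiles(state, goal):
    """h(n): number of tiles not where the goal puts them.
    The blank (0) is conventionally not counted."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def f(depth, state, goal):
    """f(n) = g(n) + h(n), with g(n) = depth of the node in the tree."""
    return depth + misplaced_tiles(state, goal)

# Hypothetical 3x3 boards, flattened row by row (0 = blank):
start = (1, 2, 3, 4, 0, 5, 6, 7, 8)
goal  = (1, 2, 3, 4, 5, 0, 6, 7, 8)
```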
Solution
Problem 2
• Find the most cost-effective path to reach from start state A
to final state J using A* Algorithm.
AO* Algorithm(Problem Reduction
with AO* Algorithm)
• When a problem can be divided into a set of
sub-problems, where each sub-problem can
be solved separately and a combination of
these will be a solution, AND-OR graphs or
AND-OR trees are used for representing the
solution.
• The decomposition of the problem, or
problem reduction, generates AND arcs. One
AND arc may point to any number of
successor nodes, all of which must be solved.
A node may also give rise to many arcs,
indicating several possible solutions. Hence
the graph is known as AND-OR instead of
AND.
AO* Example
Description
• In figure (a) the top node A has been expanded,
producing two arcs, one leading to B and one
leading to C-D.
• It is assumed that every operation (i.e., applying a
rule) has unit cost, i.e., each arc with a single
successor will have a cost of 1, and so will each
component of an AND arc.
• C is the most promising node to expand since its
f' = 3 is the lowest, but going through B would be
better, since to use C we must also use D, and the
cost would be 9 (3 + 4 + 1 + 1). Through B it would
be 6 (5 + 1).
Description contd..
• Thus the choice of the next node to expand
depends not only on its value but also on whether
that node is part of the current best path from the
initial node.
• In Figure (b) the node G appears to be the most
promising node, with the least f' value. But G is
not on the current best path, since to use G we
must also use H, with a cost of 9, and this in turn
demands that other arcs be used (with a cost of
27). The path from A through B, E-F is better, with
a total cost of (17 + 1 = 18).
Searching in And-Or Graph
• Traverse the graph starting at the initial node,
following the current best path, and accumulate the
set of nodes that are on the path and have not yet
been expanded.
• Pick one of these unexpanded nodes and expand it.
Add its successors to the graph and compute f'
(cost of the remaining distance) for each of them.
• Change the f' estimate of the newly expanded node
to reflect the new information produced by its
successors. Propagate this change backward
through the graph and decide which arc is now part
of the current best path.
Key point
• In the AO* algorithm expanded nodes are re-
examined so that the current best path can be
selected.
Example of And-Or Graph
Solution description
• The initial node is expanded and D is marked
initially as the most promising node.
• D is expanded, producing an AND arc E-F. The f'
value of D is updated to 10.
• Going backwards, we can see that the AND arc B-C
is better. It is now marked as the current best path.
• B and C have to be expanded next. This process
continues until a solution is found or all paths have
led to dead ends, indicating that there is no solution.
• In the A* algorithm, the path from one node to the
other is always that of the lowest cost and it is
independent of the paths through other nodes.
AO* Algorithm
• Initialize the graph to the start node.
• Traverse the graph following the current path,
accumulating nodes that have not yet been
expanded or solved.
• Pick any of these nodes, expand it, and if it
has no successors assign FUTILITY as its
value; otherwise calculate f' for each of the
successors.
• If f' is 0 then mark the node as SOLVED.
AO* Algorithm
• Change the value of f' for the newly created
node to reflect its successors by back
propagation.
• Wherever possible use the most promising
routes and if a node is marked as SOLVED then
mark the parent node as SOLVED.
• If the starting node is SOLVED or its value is
greater than FUTILITY, stop; else repeat from 2.
Example of AO*
A* vs AO*
• Both are informed search techniques and use
heuristic values to solve the problem.
• A solution is guaranteed in both algorithms.
• A* always gives an optimal solution (shortest
path with lowest cost), but AO* is not
guaranteed to provide an optimal solution.
• Reason: AO* does not explore all the
solution paths once it has found a solution.
Example
Solution
• In the above diagram we have two ways, from
A to D, or A to B-C (because of the AND
condition). Calculate the cost to select a path:
• F(A-D) = 1 + 10 = 11 and
• F(A-BC) = 1 + 1 + 6 + 12 = 20
• As we can see, F(A-D) is less than F(A-BC), so
the algorithm chooses the path A-D.
• From D we have one choice, that is F-E.
• F(A-D-FE) = 1 + 1 + 4 + 4 = 10
Solution contd...
• Basically, 10 is the cost of reaching FE from D,
and the heuristic value of node D denotes the
cost of reaching FE from D. So, the new
heuristic value of D is 10.
• And the cost of A-D remains the same, that is, 11.
Another path
• In the above diagram we have two ways, from A
to D, or A to B-C (because of the AND condition).
Calculate the cost to select a path:
• F(A-D) = 1 + 10 = 11 and F(A-BC) = 1 + 1 + 6 + 12 = 20
• As we know, the cost of F(A-BC) is more, but let's
take a closer look.
• Now from B we have two paths, G and H; let's
calculate the cost:
• F(B-G) = 5 + 1 = 6 and F(B-H) = 7 + 1 = 8
Another path
• So, as the cost of F(B-H) is more than F(B-G), we
take the path B-G.
• The heuristic value of I is 1, but let's calculate the
cost from G to I:
• F(G-I) = 1 + 1 = 2, which is less than the heuristic
value 5 of G. So, the new heuristic value of G is 2.
• If G has a new value, then the cost from B to G
must also have changed. Let's see the new cost
from B to G:
• F(B-G) = 1 + 2 = 3, meaning the new heuristic value
of B is 3.
• But A is associated with both B and C.
• As we can see from the diagram, C has only one
choice, or one node to explore, that is J. The
heuristic value of C is 12.
• Cost from C to J: F(C-J) = 1 + 1 = 2, which is less
than the heuristic value.
• Now the new heuristic value of C is 2.
• And the new cost of A-BC, that is, F(A-BC) =
1 + 1 + 2 + 3 = 7, which is less than F(A-D) = 11.
• In this case, choosing path A-BC is more cost-
effective than A-D.
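The cost arithmetic in this example can be reproduced with a small recursive evaluation of the AND-OR graph. This is only a sketch of the cost-revision idea, not the full AO* algorithm (which interleaves expansion with backward propagation); Python is assumed, the node names and leaf heuristic values are taken from the example, and every arc is assumed to cost 1 as in the notes:

```python
def revised_cost(node, h, arcs):
    """Cost of solving `node` in an AND-OR graph.
    arcs[node] is a list of connectors; a connector with more than one
    successor is an AND arc, and ALL of its successors must be solved.
    Each arc traversal costs 1."""
    if node not in arcs:
        return h[node]          # unexpanded leaf: use its heuristic value
    return min(sum(1 + revised_cost(s, h, arcs) for s in connector)
               for connector in arcs[node])

# The example graph: A -> D (OR) or A -> B AND C; D -> E AND F;
# B -> G or H; G -> I; C -> J.  Leaf heuristic values from the notes.
h = {'E': 4, 'F': 4, 'H': 7, 'I': 1, 'J': 1}
arcs = {'A': [['D'], ['B', 'C']],
        'D': [['E', 'F']],
        'B': [['G'], ['H']],
        'G': [['I']],
        'C': [['J']]}
cost = revised_cost('A', h, arcs)
```

The evaluation gives 7 for A via the AND arc B-C, against 11 via D, matching the revised costs derived above.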
Simulated Annealing
• A hill-climbing algorithm which never makes a move
towards a lower value is guaranteed to be incomplete,
because it can get stuck on a local maximum.
• If an algorithm applies a random walk, by moving to a
successor chosen at random, then it may be complete but
not efficient.
• Simulated annealing is an algorithm which yields both
efficiency and completeness.
• The algorithm picks a random move instead of picking the
best move. If the random move improves the state, then it
follows the same path. Otherwise, the algorithm accepts
the downhill move with a probability of less than 1 and
chooses another path.
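The accept-a-worse-move-with-probability-less-than-1 idea can be sketched as follows (Python is assumed; the objective, cooling schedule, and step budget are illustrative choices, not from the notes):

```python
import math
import random

def simulated_annealing(state, value, neighbors,
                        t0=10.0, cooling=0.95, steps=2000):
    """Pick a RANDOM move; always accept an improvement, and accept a
    worse move with probability exp(delta / T), where the temperature T
    falls according to the cooling schedule."""
    t, best = t0, state
    for _ in range(steps):
        nxt = random.choice(neighbors(state))
        delta = value(nxt) - value(state)
        if delta > 0 or random.random() < math.exp(delta / t):
            state = nxt                  # accept (possibly downhill) move
        if value(state) > value(best):
            best = state                 # remember the best state seen
        t = max(t * cooling, 1e-6)       # cool down, but keep t positive
    return best

# Illustrative objective: a single peak at x = 3.
random.seed(0)   # for a reproducible run
peak = simulated_annealing(0, lambda x: -(x - 3) ** 2,
                           lambda x: [x - 1, x + 1])
```

Early on, the high temperature makes downhill moves likely (a near-random walk); as T falls, the behavior approaches pure hill climbing.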