
ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING (18CS62)
UNIT 1

COURSE INSTRUCTOR
Prof. Merin Meleet
Department of ISE, RVCE
CONTENTS – UNIT 1 AND REFERENCES
• Introduction, intelligent agents, searching:
• What is AI? Chapter 1: 1.1
• Intelligent Agents: Agents and environments; Rationality; the nature of environments; the structure of agents. Chapter 2: 2.1-2.4
• Problem-solving: Problem-solving agents; Searching for solutions; Uninformed search strategies; Informed search strategies; Heuristic functions. Chapter 3: 3.1, 3.3, 3.4, 3.5, 3.6



WHAT IS AI? – AN INFORMAL DEFINITION
• Artificial intelligence refers to the ability of a computer or
machine to mimic the capabilities of the human mind—
learning from examples and experience, recognizing
objects, understanding and responding to language, making
decisions, solving problems—and combining these and
other capabilities to perform functions a human might
perform, such as greeting a hotel guest or driving a car.



SOME DEFINITIONS

• Modeling exactly how humans actually think
  • cognitive models of human reasoning
• Modeling exactly how humans actually act
  • models of human behavior (what they do, not how they think)
• Modeling how ideal agents “should think” – rationally
  • models of “rational” thought (formal logic)
  • note: humans are often not rational!
• Modeling how ideal agents “should act” – rationally
  • rational actions, but not necessarily formal rational reasoning
  • i.e., more of a black-box/engineering approach
• Rational behavior: doing what is expected to maximize one’s “utility function” in this world.
• An agent is an entity that perceives and acts.
• A rational agent acts rationally.
• A rational agent is one that acts so as to achieve the best outcome or,
when there is uncertainty, the best expected outcome.



• To pass the total Turing Test, the computer will need:
• computer vision to perceive objects, and
• robotics to manipulate objects and move about.


• The computer would also need to possess the following capabilities to pass the standard Turing Test: natural language processing, knowledge representation, automated reasoning, and machine learning.


ACTING RATIONALLY: RATIONAL AGENTS

• An agent is an entity that perceives its environment and is able to execute actions to change it.
• Agents have inherent goals that they want to achieve (e.g. survive,
reproduce).
• A rational agent acts in a way to maximize the achievement of its
goals.
• Maximization of goals requires unlimited computational abilities.
• Limited rationality involves maximizing goals within the
computational and other resources available.



CHAPTER 2

INTELLIGENT AGENTS



• Rational agents are central to our approach to artificial intelligence.
• We develop a small set of design principles for building successful agents—systems that can reasonably be called intelligent.



WHAT WE’LL BE LEARNING IN THIS CHAPTER
• We begin by examining agents, environments, and the coupling between them.
• The observation that some agents behave better than others leads naturally to the idea of a
rational agent—one that behaves as well as possible.

• How well an agent can behave depends on the nature of the environment; some environments are
more difficult than others.
• We’ll also examine a crude categorization of environments and show how properties of an
environment influence the design of suitable agents for that environment.



AGENTS AND ENVIRONMENTS
• An agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through actuators

• Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth,
and other body parts for actuators

• Robotic agent: cameras and infrared range finders for sensors; various
motors for actuators

• What about software agents?



• We use the term percept to refer to the agent’s perceptual
inputs at any given instant.
• An agent’s percept sequence is the complete history of
everything the agent has ever perceived.
• In general, an agent’s choice of action at any given instant
can depend on the entire percept sequence observed to date,
but not on anything it hasn’t perceived.



• Mathematically speaking, we say that an agent’s behavior is described by the agent
function that maps any given percept sequence to an action
• Given an agent to experiment with, we can, in principle, construct this table by
trying out all possible percept sequences and recording which actions the agent does
in response.
• The table is, of course, an external characterization of the agent.
• Internally, the agent function for an artificial agent will be implemented by an agent
program.
• It is important to keep these two ideas distinct.
• The agent function is an abstract mathematical description; the agent program is a
concrete implementation, running within some physical system.



VACUUM-CLEANER WORLD

• Percepts: location and contents, e.g., [A,Dirty]



• Actions: Left, Right, Suck, NoOp



• Our definition requires a rational agent not only to gather information
but also to learn as much as possible from what it perceives.
• There are extreme cases in which the environment is completely
known apriori. In such cases, the agent need not perceive or learn; it
simply acts correctly.
• To the extent that an agent relies on the prior knowledge of its
designer rather than on its own percepts, we say that the agent lacks
autonomy.
• A rational agent should be autonomous—it should learn what it can to
compensate for partial or incorrect prior knowledge.
• After sufficient experience of its environment, the behavior of a
rational agent can become effectively independent of its prior
knowledge



THE CONCEPT OF RATIONALITY
• A rational agent is one that does the right thing—conceptually speaking, every entry
in the table for the agent function is filled out correctly.
• When an agent is plunked down in an environment, it generates a sequence of actions
according to the percepts it receives.
• This sequence of actions causes the environment to go through a sequence of states.
• If the sequence is desirable, then the agent has performed well.
• This notion of desirability is captured by a performance measure that evaluates any
given sequence of environment states.



• Can there be a fixed performance measure for all tasks?
• As a general rule, it is better to design performance measures
according to what one actually wants in the environment, rather than
according to how one thinks the agent should behave.



RATIONALITY
• What is rational at any given time depends on four things:
• The performance measure that defines the criterion of success.
• The agent’s prior knowledge of the environment.
• The actions that the agent can perform.
• The agent’s percept sequence to date
• This leads to a definition of a rational agent:

For each possible percept sequence, a rational agent should select an action that is
expected to maximize its performance measure, given the evidence provided by the
percept sequence and whatever built-in knowledge the agent has.



• Example – the vacuum-cleaner world: the performance measure awards one point for each clean square at each time step, over a “lifetime” of 1000 time steps.

• The “geography” of the environment is known a priori but the dirt distribution and
the initial location of the agent are not. Clean squares stay clean and sucking cleans
the current square. The Left and Right actions move the agent left and right except
when this would take the agent outside the environment, in which case the agent
remains where it is.

• The only available actions are Left , Right, and Suck.

• The agent correctly perceives its location and whether that location contains dirt.



OMNISCIENCE, LEARNING, AND AUTONOMY
• An omniscient agent knows the actual outcome of its actions and can act accordingly; but
omniscience is impossible in reality.

• Rationality maximizes expected performance, while perfection maximizes actual performance.

• Doing actions in order to modify future percepts—sometimes called information gathering—is an important part of rationality.
• Our definition requires a rational agent not only to gather information but also to learn as much
as possible from what it perceives.
• To the extent that an agent relies on the prior knowledge of its designer rather than on its own
percepts, we say that the agent lacks autonomy. A rational agent should be autonomous—it
should learn what it can to compensate for partial or incorrect prior knowledge.



THE NATURE OF ENVIRONMENTS



TASK ENVIRONMENT
Specifying the task environment
PEAS (Performance, Environment, Actuators, Sensors)

• Must first specify the setting for intelligent agent design


• Consider, e.g., the task of designing an automated taxi
driver:

• Performance measure
• Environment
• Actuators
• Sensors



PROPERTIES OF TASK ENVIRONMENTS
• Fully observable (vs. partially observable): An agent's sensors give it access to the
complete state of the environment at each point in time. A task environment is
effectively fully observable if the sensors detect all aspects that are relevant to the
choice of action
An environment might be partially observable because of noisy and inaccurate
sensors or because parts of the state are simply missing from the sensor data.
If the agent has no sensors at all then the environment is unobservable.



• Single agent vs. multiagent:
For example, an agent solving a crossword puzzle by itself is clearly in a single-agent
environment, whereas an agent playing chess is in a two agent environment.
Chess is a competitive multiagent environment.
• Deterministic vs. stochastic:
If the next state of the environment is completely determined by the current state and the action executed by the agent, the environment is deterministic; otherwise, it is stochastic.
If the environment is partially observable, however, then it could appear to be stochastic.
Most real situations are so complex that it is impossible to keep track of all the unobserved
aspects; for practical purposes, they must be treated as stochastic



• Episodic (vs. sequential): In an episodic task environment, the agent’s experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action. Crucially, the next episode does not depend on the actions taken in previous episodes.
• Static vs. dynamic: If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise, it is static.
• Discrete vs. continuous: The discrete/continuous distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent. For example, the chess environment has a finite number of distinct states (discrete); taxi-driving states, time, percepts, and actions are continuous.



THE STRUCTURE OF AGENTS
• The job of AI is to design an agent program that implements the agent function— the mapping
from percepts to actions.
• We assume this program will run on some sort of computing device with physical sensors and
actuators—we call this the architecture:
agent = architecture + program .

• The agent function maps from percept histories to actions: f : P* → A
• The agent program runs on the physical architecture to produce f.
agent = architecture + program



• The agent programs that we design in this book all have the same skeleton: they
take the current percept as input from the sensors and return an action to the
actuators.
• Note the difference between the agent program, which takes the current percept as input, and the agent function, which takes the entire percept history.
• The agent program takes just the current percept as input because nothing more is
available from the environment; if the agent’s actions need to depend on the entire
percept sequence, the agent will have to remember the percepts.



A rather trivial agent program keeps track of the percept sequence and then uses it to index into a table of actions to decide what to do.
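Below is a minimal Python sketch of such a table-driven agent program (the table contents are illustrative, not from the slides):

```python
# Table-driven agent: remember the whole percept sequence and look up the action.
def make_table_driven_agent(table):
    percepts = []                           # percept sequence observed so far
    def agent_program(percept):
        percepts.append(percept)            # append the latest percept
        return table.get(tuple(percepts))   # index the table with the full sequence
    return agent_program

# Hypothetical one-step table for vacuum-world percepts (location, status).
table = {(("A", "Dirty"),): "Suck", (("A", "Clean"),): "Right", (("B", "Dirty"),): "Suck"}
agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))                # -> Suck
```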



AGENT TYPES
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents



SIMPLE REFLEX AGENTS
• Simplest

• These agents select actions on the basis of the current percept, ignoring the rest of the percept
history. For example, the vacuum agent whose agent function is tabulated in Figure 2.3 is a
simple reflex agent, because its decision is based only on the current location and on whether that
location contains dirt



• a condition–action rule written as
if car-in-front-is-braking then initiate-braking.
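Below is a minimal Python sketch of a simple reflex agent: the vacuum agent keyed on the current (location, status) percept, and the braking rule above written as code (the percept field name is hypothetical):

```python
# Simple reflex agent for the vacuum world: the decision uses only the current percept.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"

# The condition-action rule "if car-in-front-is-braking then initiate-braking" as code.
def braking_rule(percept):
    if percept.get("car_in_front_is_braking"):   # hypothetical percept field
        return "initiate-braking"
    return "no-op"
```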



MODEL-BASED REFLEX AGENTS
• The agent should maintain some sort of internal state that depends on the percept history
and thereby reflects at least some of the unobserved aspects of the current state.
• Updating this internal state information as time goes by requires two kinds of knowledge to
be encoded in the agent program.
• First, we need some information about how the world evolves independently of the
agent.
• Second, we need some information about how the agent’s own actions affect
the world
• This knowledge about “how the world works”—whether implemented in simple
Boolean circuits or in complete scientific theories—is called a model of the world.
An agent that uses such a model is called a model-based agent.



MODEL-BASED REFLEX AGENTS
• Know how the world evolves
  • e.g., an overtaking car gets closer from behind
• Know how the agent's actions affect the world
  • e.g., turning the wheel clockwise takes you right
• Model-based agents update their internal state (a sketch follows below)
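A minimal Python sketch of this idea, assuming a hypothetical update_state() function that encodes the two kinds of knowledge above and a list of (condition, action) rules:

```python
# Model-based reflex agent: keeps internal state that summarizes the percept history.
def make_model_based_reflex_agent(update_state, rules, initial_state):
    memory = {"state": initial_state, "last_action": None}

    def agent_program(percept):
        # Fold the new percept into the internal state using the model of the world.
        memory["state"] = update_state(memory["state"], memory["last_action"], percept)
        # Pick the first rule whose condition matches the updated state.
        action = next((act for cond, act in rules if cond(memory["state"])), None)
        memory["last_action"] = action
        return action

    return agent_program
```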



GOAL-BASED AGENTS
• Is knowing the current state of the environment enough?
– The taxi can go left, right, or straight
• The agent also needs a goal
• e.g., a destination to get to

• Uses knowledge about a goal to guide its actions


• E.g., Search, planning
• Although the goal-based agent appears less efficient, it is more flexible because the knowledge
that supports its decisions is represented explicitly and can be modified.
• If it starts to rain, the agent can update its knowledge of how effectively its brakes will
operate; this will automatically cause all of the relevant behaviors to be altered to suit the new
conditions.
• For the reflex agent, on the other hand, we would have to rewrite many condition–action rules.



GOAL-BASED AGENTS

• A reflex agent brakes when it sees brake lights. A goal-based agent reasons:
– brake light → the car in front is stopping → I should stop → I should apply the brake



UTILITY-BASED AGENTS
• Goals are not always enough
• Many action sequences get taxi to destination
• Consider other things: quicker, safer, more reliable, or cheaper than others
• A utility function maps a state onto a real number which describes the associated degree
of “happiness”, “goodness”, “success”.
• Where does the utility measure come from?
• Economics: money.
• Biology: number of offspring.
• Your life?



• A performance measure assigns a score to any given sequence of environment states, so it
can easily distinguish between more and less desirable ways of getting to the taxi’s
destination.

• An agent’s utility function is essentially an internalization of the performance measure.

• If the internal utility function and the external performance measure are in agreement, then
an agent that chooses actions to maximize its utility will be rational according to the
external performance measure.



A model-based, utility-based agent uses a model of the world, along with a utility function that measures its preferences among states of the world.
Then it chooses the action that leads to the best expected utility, where expected utility is computed
by averaging over all possible outcome states, weighted by the probability of the outcome.
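Below is a minimal Python sketch of that action-selection step, assuming a hypothetical transition model outcomes(state, action) that returns (probability, next state) pairs and a utility(state) function:

```python
# Choose the action with the highest expected utility.
def expected_utility(state, action, outcomes, utility):
    # Average the utility over all possible outcomes, weighted by their probability.
    return sum(p * utility(s_next) for p, s_next in outcomes(state, action))

def best_action(state, actions, outcomes, utility):
    return max(actions, key=lambda a: expected_utility(state, a, outcomes, utility))
```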
LEARNING AGENTS

• The performance element is what was previously the whole agent
  • input: percepts from the sensors
  • output: actions
• The learning element
  • modifies the performance element



• A learning agent can be divided into four conceptual components.
• The most important distinction is between the learning element, which is
responsible for making improvements, and the performance element, which is
responsible for selecting external actions.
• The performance element is what we have previously considered to be the entire
agent: it takes in percepts and decides on actions.
• The learning element uses feedback from the critic on how the agent is doing and
determines how the performance element should be modified to do better in the
future.



• The critic tells the learning element how well the agent is doing with respect to a
fixed performance standard.
• The critic is necessary because the percepts themselves provide no indication of the
agent’s success.
• The last component of the learning agent is the problem generator. It is
responsible for suggesting actions that will lead to new and informative
experiences.
• The problem generator’s job is to suggest these exploratory actions.



CHAPTER 3

SOLVING PROBLEMS BY SEARCHING



SOLVING PROBLEMS BY SEARCHING
• Reflex agents are simple: they base their actions on a direct mapping from states to actions.
• But they cannot work well in environments in which this mapping would be too large to store and would take too long to learn.
• Hence, a goal-based agent is used.



PROBLEM-SOLVING AGENT

• Problem-solving agent
• A kind of goal-based agent
• It solves problem by
• finding sequences of actions that lead to desirable states (goals)
• To solve a problem,
• the first step is the goal formulation, based on the current
situation



PROBLEM-SOLVING AGENTS
• Goal formulation, based on the current situation and the agent’s performance
measure, is the first step in problem solving.
• Problem formulation is the process of deciding what actions and states to consider,
given a goal
• If we assume that the environment is observable, the agent always knows the current
state.
• The process of looking for a sequence of actions that reaches the goal is called search.
• A search algorithm takes a problem as input and returns a solution in the form of an
action sequence.
• Once a solution is found, the actions it recommends can be carried out. This is called
the execution phase.



WELL-DEFINED PROBLEMS AND SOLUTIONS
A problem can be defined formally by five components:
• The initial state that the agent starts in.
• A description of the possible actions available to the agent. Given a particular state s,
ACTIONS(s) returns the set of actions that can be executed in s. We say that each of these
actions is applicable in s.
• A description of what each action does; the formal name for this is the transition model,
specified by a function RESULT(s, a) that returns the state that results from doing action a
in state s. We also use the term successor to refer to any state reachable from a given state
by a single action. A path in the state space is a sequence of states connected by a sequence
of actions.
• The goal test, which determines whether a given state is a goal state. Sometimes there is an
explicit set of possible goal states, and the test simply checks whether the given state is one
of them.
• A path cost function that assigns a numeric cost to each path. The problem-solving agent
chooses a cost function that reflects its own performance measure.
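Below is a minimal Python sketch of a problem defined by these five components (the interface names are illustrative; concrete problems would subclass it):

```python
class Problem:
    """Initial state, actions, transition model, goal test, and (per-step) path cost."""

    def __init__(self, initial_state, goal_states=()):
        self.initial_state = initial_state
        self.goal_states = set(goal_states)

    def actions(self, s):
        """ACTIONS(s): the set of actions applicable in state s."""
        raise NotImplementedError

    def result(self, s, a):
        """RESULT(s, a): the state that results from doing action a in state s."""
        raise NotImplementedError

    def goal_test(self, s):
        """True if s is a goal state."""
        return s in self.goal_states

    def step_cost(self, s, a, s_next):
        """Cost of one step; the path cost is the sum of step costs along the path."""
        return 1
```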



• The preceding elements define a problem and can be gathered into a single data
structure that is given as input to a problem-solving algorithm.
• A solution to a problem is an action sequence that leads from the initial state to a
goal state.
• Solution quality is measured by the path cost function, and an optimal solution has
the lowest path cost among all solutions.



FORMULATING PROBLEMS
• Besides the five components for problem formulation
• anything else?
• Abstraction
• the process to take out the irrelevant information
• leave the most essential parts to the description of the states
( Remove detail from representation)
• Conclusion: Only the most important parts that are contributing
to searching are used



Evaluation Criteria
• formulation of a problem as a search task
• basic search strategies
• important properties of search strategies
• selection of search strategies for specific tasks



SEARCHING FOR SOLUTIONS
• A solution is an action sequence, so search algorithms work by considering various possible
action sequences.
• The possible action sequences starting at the initial state form a search tree with the initial
state at the root; the branches are actions and the nodes correspond to states in the state
space of the problem.
• The root node of the tree corresponds to the initial state. The first step is to test whether this
is a goal state.
• Then we need to consider taking various actions. We do this by expanding the current state;
that is, applying each legal action to the current state, thereby generating a new set of
states.
• Leaf nodes: nodes of the search tree with no children (not yet expanded).
• Fringe (frontier): the set of leaf nodes that have not been expanded yet.



EXAMPLE: ROMANIA



INFRASTRUCTURE FOR SEARCH ALGORITHMS

Search algorithms require a data structure to keep track of the search tree that is being
constructed.
For each node n of the tree, we have a structure that contains four components:
• n.STATE: the state in the state space to which the node corresponds;
• n.PARENT: the node in the search tree that generated this node;
• n.ACTION: the action that was applied to the parent to generate the node;
• n.PATH-COST: the cost, traditionally denoted by g(n), of the path from the initial state
to the node, as indicated by the parent pointers.
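Below is a minimal Python sketch of this node structure, plus the usual helpers for generating a child node and reading the solution back through the parent pointers (it assumes the Problem sketch shown earlier):

```python
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0.0):
        self.state = state            # n.STATE
        self.parent = parent          # n.PARENT
        self.action = action          # n.ACTION
        self.path_cost = path_cost    # n.PATH-COST, g(n)

def child_node(problem, parent, action):
    s_next = problem.result(parent.state, action)
    g = parent.path_cost + problem.step_cost(parent.state, action, s_next)
    return Node(s_next, parent, action, g)

def solution(node):
    # Follow parent pointers back to the root to recover the action sequence.
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```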



MEASURING PROBLEM-SOLVING
PERFORMANCE
• The evaluation of a search strategy
• Completeness:
• is the strategy guaranteed to find a solution when there is one?
• Optimality:
• does the strategy find the highest-quality solution when there are several different
solutions?
• Time complexity:
• how long does it take to find a solution?
• Space complexity:
• how much memory is needed to perform the search?
For the effectiveness of a search algorithm, we can consider the total cost:
• total cost = path cost (g) of the solution found + search cost
• search cost = the time necessary to find the solution



SEARCH STRATEGIES

• Uninformed search
• no information about the number of steps
• or the path cost from the current state to the goal
• search the state space blindly
• Informed search, or heuristic search
• a cleverer strategy that searches toward the goal,
• based on the information from the current state so far



UNINFORMED SEARCH STRATEGIES
• Breadth-first search
• Uniform cost search
• modifies breadth-first strategy
• by always expanding the lowest-cost node

• Depth-first search
• Depth-limited search
• Iterative deepening depth first search
• Bidirectional search



BREADTH-FIRST SEARCH
• Breadth-first search is a simple strategy in which the root node is expanded first, then all
the successors of the root node are expanded next, then their successors, and so on.

• In general, all the nodes at a given depth in the search tree are expanded before any nodes at the next level are expanded.
• When all step costs are equal, breadth-first search is optimal because it always expands the shallowest unexpanded node.
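Below is a minimal Python sketch of breadth-first (graph) search using the Node/Problem sketches from earlier; the goal test is applied when a node is generated:

```python
from collections import deque

def breadth_first_search(problem):
    node = Node(problem.initial_state)
    if problem.goal_test(node.state):
        return solution(node)
    frontier = deque([node])              # FIFO queue: shallowest node first
    reached = {node.state}                # states already generated
    while frontier:
        node = frontier.popleft()
        for action in problem.actions(node.state):
            child = child_node(problem, node, action)
            if child.state not in reached:
                if problem.goal_test(child.state):
                    return solution(child)
                reached.add(child.state)
                frontier.append(child)
    return None                           # failure
```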



UNIFORM COST SEARCH
• Breadth-first search finds the shallowest goal state
• but that is not necessarily the least-cost solution
• it is optimal only if all step costs are equal
• Uniform cost search
• modifies breadth-first strategy
• by always expanding the lowest-cost node
• Expands the node n with the lowest path cost g(n)
• Goal test is applied to a node when it is selected for expansion.
• Implemented using a Priority Queue
• https://www.youtube.com/watch?v=-FY7t2kqWX4&ab_channel=AlanBlair
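Below is a minimal Python sketch of uniform-cost search with a priority queue ordered by g(n), following the same Node/Problem conventions; the goal test is applied when a node is selected for expansion:

```python
import heapq
import itertools

def uniform_cost_search(problem):
    tie = itertools.count()                       # tie-breaker so heapq never compares Nodes
    start = Node(problem.initial_state)
    frontier = [(start.path_cost, next(tie), start)]
    best_g = {start.state: 0.0}                   # cheapest known cost to each state
    while frontier:
        g, _, node = heapq.heappop(frontier)      # lowest-cost node first
        if problem.goal_test(node.state):
            return solution(node)
        for action in problem.actions(node.state):
            child = child_node(problem, node, action)
            if child.state not in best_g or child.path_cost < best_g[child.state]:
                best_g[child.state] = child.path_cost
                heapq.heappush(frontier, (child.path_cost, next(tie), child))
    return None
```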



DEPTH-FIRST SEARCH

• Always expands one of the nodes at the deepest level of the tree
• Only when the search hits a dead end does it go back and expand nodes at shallower levels
• Dead end → a leaf node that is not the goal
• Backtracking search
  • only one successor is generated on expansion, rather than all successors
  • uses less memory
DEPTH-LIMITED STRATEGY
• Depth-first with depth cutoff k (maximal depth below which nodes are not
expanded)
• If depth limit is set at k, then all nodes at depth k are treated as if they have no
successors. This solves infinite path problem
• Three possible outcomes:
• Solution
• Failure (no solution)
• Cutoff (no solution within cutoff)
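A minimal recursive Python sketch that distinguishes these three outcomes, reusing the Node/Problem sketches from earlier:

```python
def depth_limited_search(problem, limit):
    def recurse(node, limit):
        if problem.goal_test(node.state):
            return solution(node)
        if limit == 0:
            return "cutoff"                       # depth cutoff reached
        cutoff_occurred = False
        for action in problem.actions(node.state):
            result = recurse(child_node(problem, node, action), limit - 1)
            if result == "cutoff":
                cutoff_occurred = True
            elif result != "failure":
                return result                     # a solution was found below this node
        return "cutoff" if cutoff_occurred else "failure"
    return recurse(Node(problem.initial_state), limit)
```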



ITERATIVE DEEPENING SEARCH

• No need to choose the best depth limit in advance
• It tries all possible depth limits:
• first 0, then 1, then 2, and so on
• combines the benefits of depth-first and breadth-first search
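A minimal Python sketch of iterative deepening built on the depth-limited search above, trying limits 0, 1, 2, ... until the result is no longer a cutoff:

```python
import itertools

def iterative_deepening_search(problem):
    for depth in itertools.count():
        result = depth_limited_search(problem, depth)
        if result != "cutoff":
            return result          # either a solution or "failure"
```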



ITERATIVE DEEPENING SEARCH (ANALYSIS)

• complete
• optimal (when all step costs are equal)
• time and space complexities are reasonable
• suitable for problems with a large search space where the depth of the solution is not known

• https://www.youtube.com/watch?v=T6uyDXtwru8&ab_channel=Brijeshkumar



BIDIRECTIONAL SEARCH

• Run two simultaneous searches:
• one forward from the initial state, another backward from the goal
• stop when the two searches meet
• However, searching backward from the goal may be difficult to compute



BIDIRECTIONAL STRATEGY
2 fringe queues: FRINGE1 and FRINGE2



INFORMED (HEURISTIC) SEARCH STRATEGIES
• The general approach considered is called best-first search
• A node is selected for expansion based on an evaluation function f(n).
• The evaluation function is considered as a cost estimate, so the node with the lowest
evaluation is expanded first.
• The implementation of best-first graph search is identical to that for uniform-cost search
except for the use of f instead of g to order the priority queue.
• The choice of f determines the search strategy
• Most best-first algorithms include as a component of f a heuristic function, denoted h(n):
h(n) = estimated cost of the cheapest path from the state at node n to a goal state.
• (Notice that h(n) takes a node as input, but, unlike g(n), it depends only on the state at that
node.)



GREEDY BEST FIRST SEARCH
• Greedy best-first search tries to expand the node that is closest to the goal, on the grounds
that this is likely to lead to a solution quickly.
• Thus, it evaluates nodes by using just the heuristic function; that is, f(n) = h(n).

• In the example, the straight-line-distance heuristic hSLD is used.

https://www.youtube.com/watch?v=dv1m3L6QXWs&ab_channel=PreethiSV
A* SEARCH: MINIMIZING THE TOTAL
ESTIMATED SOLUTION COST
• The most widely known form of best-first search is called A∗ search
• It evaluates nodes by combining g(n), the cost to reach the node, and h(n), the cost
to get from the node to the goal:
f(n) = g(n) + h(n) .
• Since g(n) gives the path cost from the start node to node n, and h(n) is the
estimated cost of the cheapest path from n to the goal, we have
f(n) = estimated cost of the cheapest solution through n .

https://www.youtube.com/watch?v=Fwt9jhsCjC0&ab_channel=Brijeshkumar
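Below is a minimal Python sketch of A∗ as best-first search ordered by f(n) = g(n) + h(n), reusing the Node/Problem sketches from earlier; h is any heuristic function on states. Ordering the queue by h(n) alone would give greedy best-first search:

```python
import heapq
import itertools

def a_star_search(problem, h):
    tie = itertools.count()
    start = Node(problem.initial_state)
    frontier = [(h(start.state), next(tie), start)]        # priority = f = g + h
    best_g = {start.state: 0.0}
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if problem.goal_test(node.state):
            return solution(node)
        for action in problem.actions(node.state):
            child = child_node(problem, node, action)
            if child.state not in best_g or child.path_cost < best_g[child.state]:
                best_g[child.state] = child.path_cost
                f = child.path_cost + h(child.state)
                heapq.heappush(frontier, (f, next(tie), child))
    return None
```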
CONDITIONS FOR OPTIMALITY
• The first condition we require for optimality is that h(n) be an admissible heuristic.
• An admissible heuristic is one that never overestimates the cost to reach the goal.
• Because g(n) is the actual cost to reach n along the current path, and f(n)=g(n) + h(n), we
have as an immediate consequence that f(n) never overestimates the true cost of a solution
along the current path through n
• A second, slightly stronger condition called consistency (or sometimes monotonicity) is
required only for applications of A∗ to graph search.
• A heuristic h(n) is consistent if, for every node n and every successor n′ of n generated by any action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n′ plus the estimated cost of reaching the goal from n′:
• h(n) ≤ c(n, a, n′) + h(n′)
• This is a form of the general triangle inequality



MEMORY BOUNDED HEURISTIC SEARCH
• The simplest way to reduce memory requirements for A∗ is to adapt the idea of iterative
deepening to the heuristic search context, resulting in the iterative-deepening A∗ (IDA∗)
algorithm.
• The main difference between IDA∗ and standard iterative deepening is that the cutoff used is the
f-cost (g+h) rather than the depth; at each iteration, the cutoff value is the smallest f-cost of any
node that exceeded the cutoff on the previous iteration



RECURSIVE BEST FIRST SEARCH
• Idea: mimic the operation of standard best-first search, but use only linear space.
• Runs similar to recursive depth-first search, but rather than continuing indefinitely
down the current path, it uses the f-limit variable to keep track of the best
alternative path available from any ancestor of the current node.
• If the current node exceeds this limit, the recursion unwinds back to the alternative
path.
• As the recursion unwinds, RBFS replaces the f-value of each node along the path
with the best f-value of its children. In this way, it can decide whether it’s worth re-
expanding a forgotten subtree



MA∗ (MEMORY-BOUNDED A∗) AND SMA∗ (SIMPLIFIED MA∗)
• SMA∗ proceeds just like A∗, expanding the best leaf until memory is full.
• At this point, it cannot add a new node to the search tree without dropping an old
one.
• SMA∗ always drops the worst leaf node—the one with the highest f-value.
• SMA* is complete if there is any reachable solution.



HEURISTIC FUNCTION



HEURISTIC FUNCTIONS
• heuristics for the 8-puzzle



• Here are two commonly used candidates:
• h1 = the number of misplaced tiles. For Figure 3.28, all of the eight tiles are out of position, so the start state would have h1 = 8.
• h2 = the sum of the distances of the tiles from their goal positions.
• Because tiles cannot move along diagonals, the distance we count is the sum of the horizontal and vertical distances.
• This is sometimes called the city-block distance or Manhattan distance.
• For the start state of Figure 3.28, h2 = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18.
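Below is a minimal Python sketch of both heuristics, assuming a state is represented as a tuple of nine tiles in row-major order with 0 for the blank, and a goal layout with the blank in the top-left corner (the exact goal layout is an assumption, not taken from the figure):

```python
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)     # assumed goal layout, blank = 0

def h1(state):
    # Number of misplaced tiles (the blank is not counted).
    return sum(1 for i, tile in enumerate(state) if tile != 0 and tile != GOAL[i])

def h2(state):
    # Manhattan (city-block) distance: horizontal + vertical distance for each tile.
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        g = GOAL.index(tile)
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total
```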



THE EFFECT OF HEURISTIC ACCURACY ON
PERFORMANCE
• One way to characterize the quality of a heuristic is the effective branching factor
b∗ .
• If the total number of nodes generated by A∗ for a particular problem is N and the solution depth is d, then b∗ is the branching factor that a uniform tree of depth d would have to have in order to contain N + 1 nodes. Thus,
N + 1 = 1 + b∗ + (b∗)^2 + · · · + (b∗)^d
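Below is a minimal Python sketch that solves this equation for b∗ numerically by bisection (the solver and its tolerance are illustrative choices, not from the slides):

```python
def effective_branching_factor(N, d, tol=1e-6):
    # Find b such that 1 + b + b^2 + ... + b^d = N + 1.
    def total_nodes(b):
        return sum(b ** i for i in range(d + 1))
    lo, hi = 1.0, float(N + 1)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total_nodes(mid) < N + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Example: a solution at depth 5 found after generating 52 nodes gives b* of about 1.92.
print(round(effective_branching_factor(52, 5), 2))
```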



END OF UNIT 1
