Unit 1 - AIML
ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING (18CS62)
UNIT 1
COURSE INSTRUCTOR
Prof. Merin Meleet
Department of ISE, RVCE
CONTENTS – UNIT 1 AND REFERENCES
• Introduction, intelligent agents, searching:
• What is AI? Chapter 1: 1.1
• Intelligent Agents: Agents and environment; Rationality; the nature
of environments; the structure of agents. Chapter 2 : 2.1-2.4
• Problem-solving: Problem-solving agents; Searching for solutions;
Uninformed search strategies; Informed search strategies; Heuristic
functions.
Chapter 3: 3.1, 3.3, 3.4, 3.5, 3.6
INTELLIGENT AGENTS
• How well an agent can behave depends on the nature of the environment; some environments are
more difficult than others.
• We’ll also examine a crude categorization of environments and show how properties of an
environment influence the design of suitable agents for that environment.
• Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth,
and other body parts for actuators
• Robotic agent: cameras and infrared range finders for sensors; various
motors for actuators
For each possible percept sequence, a rational agent should select an action that is
expected to maximize its performance measure, given the evidence provided by the
percept sequence and whatever built-in knowledge the agent has.
• The “geography” of the environment is known a priori but the dirt distribution and
the initial location of the agent are not. Clean squares stay clean and sucking cleans
the current square. The Left and Right actions move the agent left and right except
when this would take the agent outside the environment, in which case the agent
remains where it is.
• The agent correctly perceives its location and whether that location contains dirt.
• An agent's behavior is described by the agent function, which maps any given percept sequence to an action: f : P* → A
• These agents select actions on the basis of the current percept, ignoring the rest of the percept
history. For example, the vacuum agent whose agent function is tabulated in Figure 2.3 is a
simple reflex agent, because its decision is based only on the current location and on whether that
location contains dirt
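As a concrete illustration, here is a minimal Python sketch of that condition-action behavior in the two-square world described above; the function name and the percept/action strings are illustrative choices, not the textbook's code.

```python
# Minimal sketch of a simple reflex vacuum agent for the two-square world.
# The percept is a (location, status) pair; names and strings are illustrative.

def reflex_vacuum_agent(percept):
    """Choose an action from the current percept alone, ignoring history."""
    location, status = percept       # e.g. ('A', 'Dirty')
    if status == 'Dirty':
        return 'Suck'                # sucking cleans the current square
    elif location == 'A':
        return 'Right'               # move toward the other square
    else:
        return 'Left'

print(reflex_vacuum_agent(('A', 'Dirty')))   # -> Suck
print(reflex_vacuum_agent(('B', 'Clean')))   # -> Left
```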
• A simple reflex agent brakes when it sees brake lights. A goal-based agent reasons:
– Brake light -> the car in front is stopping -> I should stop -> I should apply the brake
• If the internal utility function and the external performance measure are in agreement, then
an agent that chooses actions to maximize its utility will be rational according to the
external performance measure.
• Problem-solving agent
• A kind of goal-based agent
• It solves problems by
• finding sequences of actions that lead to desirable states (goals)
• To solve a problem,
• the first step is goal formulation, based on the current situation
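To make the later search sketches concrete, here is one plausible way to package a problem in Python; the class and method names are my own choices, not mandated by the text.

```python
# A hedged sketch of a search problem definition: initial state, actions,
# transition model, goal test, and step costs. Method names are illustrative.

class Problem:
    def __init__(self, initial, goal):
        self.initial = initial       # the state the agent starts in
        self.goal = goal             # the desired (goal) state

    def actions(self, state):
        """Actions applicable in `state` (filled in per concrete problem)."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state reached by doing `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action, next_state):
        return 1                     # default: every action costs 1
```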
Search algorithms require a data structure to keep track of the search tree that is being
constructed.
For each node n of the tree, we have a structure that contains four components:
• n.STATE: the state in the state space to which the node corresponds;
• n.PARENT: the node in the search tree that generated this node;
• n.ACTION: the action that was applied to the parent to generate the node;
• n.PATH-COST: the cost, traditionally denoted by g(n), of the path from the initial state
to the node, as indicated by the parent pointers.
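The four components above translate directly into code. The following dataclass rendering, plus a helper that recovers the action sequence by following parent pointers, is one possible sketch.

```python
# Direct rendering of the four-component node structure described above.

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                        # n.STATE: state this node corresponds to
    parent: Optional['Node'] = None   # n.PARENT: node that generated this one
    action: Any = None                # n.ACTION: action applied to the parent
    path_cost: float = 0.0            # n.PATH-COST: g(n), cost from the initial state

def solution(node):
    """Follow parent pointers back to the root to recover the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))

# Tiny demonstration: a root node and one child reached by action 'Right'.
root = Node(state='A')
child = Node(state='B', parent=root, action='Right', path_cost=root.path_cost + 1)
print(solution(child))                # -> ['Right']
```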
• Uninformed search
• no information about the number of steps
• or the path cost from the current state to the goal
• searches the state space blindly
• Informed search, or heuristic search
• a cleverer strategy that searches toward the goal,
• based on problem-specific information that estimates how promising each state is
• Depth-first search
• Depth-limited search
• Iterative deepening depth first search
• Bidirectional search
In general, all the nodes at a given depth in the search tree are expanded before any nodes at
the next level are expanded; this is breadth-first search (a sketch follows the list below).
• Any strategy is evaluated on whether it is
• optimal
• complete
• and whether its time and space complexities are
• reasonable
• Iterative deepening is suitable when the problem has
• a large search space
• and the depth of the solution is not known
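Here is the breadth-first sketch promised above. It assumes the Node class and problem interface from the earlier sketches (plus hashable states), and tests states for the goal as they are generated; it is a minimal illustration, not the textbook's code.

```python
# Minimal breadth-first search: a FIFO frontier expands all nodes at one
# depth before any node at the next. Assumes the Node class and problem
# interface sketched earlier.

from collections import deque

def breadth_first_search(problem):
    node = Node(state=problem.initial)
    if problem.goal_test(node.state):
        return node
    frontier = deque([node])                  # FIFO queue: shallowest node first
    explored = set()                          # states already expanded
    while frontier:
        node = frontier.popleft()
        explored.add(node.state)
        for action in problem.actions(node.state):
            child_state = problem.result(node.state, action)
            if child_state not in explored and \
               all(n.state != child_state for n in frontier):
                cost = node.path_cost + problem.step_cost(node.state, action, child_state)
                child = Node(child_state, node, action, cost)
                if problem.goal_test(child.state):   # goal test at generation time
                    return child
                frontier.append(child)
    return None                                # failure: no solution exists
```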
• https://www.youtube.com/watch?v=T6uyDXtwru8&ab_channel=Brijeshkumar
• https://www.youtube.com/watch?v=dv1m3L6QXWs&ab_channel=PreethiSV
A* SEARCH: MINIMIZING THE TOTAL ESTIMATED SOLUTION COST
• The most widely known form of best-first search is called A∗ search
• It evaluates nodes by combining g(n), the cost to reach the node, and h(n), the cost
to get from the node to the goal:
f(n) = g(n) + h(n) .
• Since g(n) gives the path cost from the start node to node n, and h(n) is the
estimated cost of the cheapest path from n to the goal, we have
f(n) = estimated cost of the cheapest solution through n .
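The evaluation function translates into a priority queue ordered by f(n) = g(n) + h(n). The sketch below uses Python's heapq and reuses the Node class and problem interface from the earlier sketches; the heuristic h is passed in as a function of the state. It is an illustrative sketch, not the textbook's code.

```python
# A* search sketch: the frontier is a priority queue ordered by f = g + h.
# Assumes the Node class and problem interface sketched earlier.

import heapq
import itertools

def astar_search(problem, h):
    counter = itertools.count()               # tie-breaker for equal f values
    root = Node(state=problem.initial)
    frontier = [(h(root.state), next(counter), root)]
    best_g = {problem.initial: 0.0}           # cheapest known g(n) per state
    while frontier:
        f, _, node = heapq.heappop(frontier)
        if problem.goal_test(node.state):     # test at expansion, not generation
            return node
        for action in problem.actions(node.state):
            s2 = problem.result(node.state, action)
            g2 = node.path_cost + problem.step_cost(node.state, action, s2)
            if s2 not in best_g or g2 < best_g[s2]:
                best_g[s2] = g2               # found a cheaper path to s2
                child = Node(s2, node, action, g2)
                heapq.heappush(frontier, (g2 + h(s2), next(counter), child))
    return None
```

Note that the goal test happens when a node is expanded rather than when it is generated: a goal node is only returned once no cheaper path to it can remain on the frontier, which is what preserves optimality.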
https://www.youtube.com/watch?v=Fwt9jhsCjC0&ab_channel=Brijeshkumar
CONDITIONS FOR OPTIMALITY
• The first condition we require for optimality is that h(n) be an admissible heuristic.
• An admissible heuristic is one that never overestimates the cost to reach the goal.
• Because g(n) is the actual cost to reach n along the current path, and f(n)=g(n) + h(n), we
have as an immediate consequence that f(n) never overestimates the true cost of a solution
along the current path through n
• A second, slightly stronger condition called consistency (or sometimes monotonicity) is
required only for applications of A∗ to graph search.
• A heuristic h(n) is consistent if, for every node n and every successor n′ of n generated by
any action a, the estimated cost of reaching the goal from n is no greater than the step cost
of getting to n′ plus the estimated cost of reaching the goal from n′:
• h(n) ≤ c(n, a, n′) + h(n′)
• This is a form of the general triangle inequality: each side of a triangle cannot be longer than the sum of the other two sides
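The inequality is easy to check mechanically. In the sketch below, the toy graph and heuristic values are invented purely to illustrate the condition.

```python
# Illustrative check of consistency: h(n) <= c(n, a, n') + h(n') for every
# successor pair. The graph and heuristic values here are hypothetical.

def is_consistent(h, edges):
    """edges: iterable of (n, step_cost, n2) triples, one per successor pair."""
    return all(h[n] <= cost + h[n2] for n, cost, n2 in edges)

h = {'S': 6, 'A': 6, 'B': 2, 'G': 0}                   # hypothetical heuristic
edges = [('S', 1, 'A'), ('A', 4, 'B'), ('B', 2, 'G'), ('S', 4, 'B')]
print(is_consistent(h, edges))                         # -> True
```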