Module 1 (Part 2)


FUNDAMENTALS

IN AI and ML
CSA2001

J. Jayanthi
Recap…….
• 1. Difference between AI, ML, and DL (case study)
2. 15 applications of AI
3. Watch the video (link to be shared): https://www.youtube.com/watch?v=poLZqn2_dv4
4. Deep Blue (chess case study)
5. Limitations of AI today
• Human agent:
– eyes, ears, and other organs for sensors;
– hands, legs, mouth, and other body parts for actuators

• Robotic agent:
– cameras and infrared range finders for sensors
– various motors for actuators
• Agent = architecture + program
Recap…….
• PEAS? – Dance recognition softbot system
• Rational agent? Four points?
• Percept?
• Sensor?
• Actuators?
• Example – rational agents
• AI vs. ML vs. DL
Agent Types

1. Table-driven agent
2. Simple reflex agent
3. Reflex agent with internal state
4. Agent with explicit goals
5. Utility-based agent
(listed from least to most sophisticated)
Simple Reflex Agents: Reacting Swiftly to the Present

Model-Based Agents: Planning for the Future

Goal-Based Agents: Working Towards Objectives

Utility-Based Agents: Balancing Preferences and Trade-offs

Learning Agents: Adapting and Improving Over Time


Agent types
• (1) Table-driven agents
– use a percept sequence/action table in memory to find the next action. They are
implemented by a (large) lookup table.
• (2) Simple reflex agents
– are based on condition-action rules, implemented with an appropriate
production system. They are stateless devices which do not have memory of
past world states.
• (3) Agents with memory - Model-based reflex agents
– have internal state, which is used to keep track of past states of the world.
• (4) Agents with goals – Goal-based agents
– are agents that, in addition to state information, have goal information that
describes desirable situations. Agents of this kind take future events into
consideration.
• (5) Utility-based agents
– base their decisions on classic axiomatic utility theory in order to act rationally.
• (6) Learning agents
– they have the ability to improve performance through learning.
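To make the "agent = architecture + program" idea from the recap concrete, here is a minimal Python sketch (not from the slides); the Agent class, the run loop, and the environment.percept()/execute() methods are hypothetical names used only for illustration:

from abc import ABC, abstractmethod

class Agent(ABC):
    """Minimal agent interface: the agent program maps a percept to an action."""

    @abstractmethod
    def program(self, percept):
        """Given the current percept, return an action."""

def run(agent, environment, steps=10):
    """Skeleton agent-environment loop: sense, decide, act, repeat."""
    for _ in range(steps):
        percept = environment.percept()      # sensors
        action = agent.program(percept)      # agent program
        environment.execute(action)          # actuators

Each of the six agent types below differs mainly in how program() is implemented.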
Example
• Model-based agents, such as autonomous vehicles, are essential in domains where foresight is crucial. They can anticipate the behavior of other entities and plan optimal trajectories, making them invaluable for safe and efficient navigation.

• Goal-based agent: in industries like logistics, goal-based agents optimize routes and distribution, minimizing costs and maximizing efficiency.

• Utility-based agent: in fields like economics and resource management, utility-based agents handle complex decisions, optimizing resource allocation based on cost, time, and quality.

• Learning agent: notably, the capacity to adapt in real time, incorporating fresh data and adjusting strategies, stands as a defining hallmark of these agents.
I) --- Table-lookup driven agents

• Uses a percept sequence / action table in memory to find the next action. Implemented as a (large) lookup table.

Drawbacks:
– Huge table (often simply too large)
– Takes a long time to build/learn the table
Table-driven agent
function TABLE-DRIVEN-AGENT(percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept sequences, initially fully specified

  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action

An agent based on a prespecified lookup table. It keeps track of the percept sequence and just looks up the best action.

• Problems
– Huge number of possible percepts (consider an automated taxi
with a camera as the sensor) => lookup table would be huge
– Takes a long time to build the table
– Not adaptive to changes in the environment; requires entire table
to be updated if changes occur
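A minimal Python sketch of the pseudocode above; the vacuum-world percepts and the table entries are illustrative assumptions, not part of the slides:

class TableDrivenAgent:
    """Table-driven agent: the whole percept sequence indexes a lookup table."""

    def __init__(self, table):
        self.table = table     # maps percept sequences (tuples) to actions
        self.percepts = []     # percept sequence, initially empty

    def program(self, percept):
        self.percepts.append(percept)
        # Look up the action for the entire percept sequence seen so far.
        return self.table.get(tuple(self.percepts))

# Toy vacuum-world table; note how the table needs an entry for every possible sequence.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = TableDrivenAgent(table)
print(agent.program(("A", "Clean")))   # -> Right
print(agent.program(("B", "Dirty")))   # -> Suck

The drawbacks above show up immediately: even this two-square world needs the table to enumerate every percept sequence the agent might ever see.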
II) --- Simple reflex agents

Agents do not have memory of past world states or percepts, so actions depend solely on the current percept.
The action becomes a “reflex.”
Uses condition-action rules.
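A short Python sketch of a simple reflex agent, assuming the standard two-square vacuum world as the example (an assumption, not from the slides); the rules fire on the current percept only:

def simple_reflex_vacuum_agent(percept):
    """Condition-action rules applied to the current percept only (no memory)."""
    location, status = percept
    if status == "Dirty":     # rule: if the square is dirty, suck
        return "Suck"
    if location == "A":       # rule: if at A and clean, move right
        return "Right"
    return "Left"             # rule: if at B and clean, move left

print(simple_reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))   # -> Left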


III) --- Model-based reflex agents

• Key difference (w.r.t. simple reflex agents):
– Agents have internal state, which is used to keep track of past states of the world.
– Agents have the ability to represent change in the world.
• Example: Rodney Brooks’ Subsumption Architecture --- behavior-based robots.
Module: Logical Agents (Representation and Reasoning, Parts III/IV of R&N) covers model-based reflex agents in more detail. How detailed should the internal model be?

Driving example: the internal model lets the agent infer a potentially dangerous driver in front.
If “dangerous driver in front,” then “keep distance.”
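As a rough sketch of the “dangerous driver” rule above, the Python below keeps internal state that persists across percepts; the percept fields and the braking heuristic are invented for illustration:

class ModelBasedReflexAgent:
    """Keeps an internal model of the world and applies rules to that model."""

    def __init__(self):
        self.state = {"dangerous_driver_ahead": False}   # internal state

    def update_state(self, percept):
        # Crude world model: hard braking by the car in front marks it as dangerous.
        if percept.get("car_ahead_braking_hard", False):
            self.state["dangerous_driver_ahead"] = True

    def program(self, percept):
        self.update_state(percept)
        if self.state["dangerous_driver_ahead"]:   # if "dangerous driver in front"
            return "keep distance"                 # then "keep distance"
        return "drive normally"

agent = ModelBasedReflexAgent()
print(agent.program({"car_ahead_braking_hard": True}))   # -> keep distance
print(agent.program({}))   # -> keep distance (the inference persists in state)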
An example:
Brooks’ Subsumption Architecture
• Main idea: build complex, intelligent robots by decomposing behaviors into a
hierarchy of skills, each defining a percept-action cycle for one very specific task.

• Examples: collision avoidance, wandering, exploring, recognizing doorways, etc.

• Each behavior is modeled by a finite-state machine with a few states (though each state may correspond to a complex function or module; this provides internal state to the agent).

• Behaviors are loosely coupled via asynchronous interactions.

• Note: minimal internal state representation.
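The priority/suppression flavor of the architecture can be sketched as below; this is a simplification (real subsumption layers are asynchronous finite-state machines), and the behaviors and percept fields are illustrative:

def collision_avoidance(percept):
    if percept.get("obstacle_close", False):
        return "turn away"    # fires only when an obstacle is near
    return None               # otherwise defer to lower-priority layers

def wander(percept):
    return "move randomly"    # default low-priority behavior, always applicable

BEHAVIORS = [collision_avoidance, wander]   # highest priority first

def subsumption_step(percept):
    """Return the action of the highest-priority behavior that fires."""
    for behavior in BEHAVIORS:
        action = behavior(percept)
        if action is not None:
            return action

print(subsumption_step({"obstacle_close": True}))   # -> turn away
print(subsumption_step({}))                          # -> move randomly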


IV) --- Goal-based agents

• Key difference w.r.t. model-based agents: in addition to state information, goal-based agents have goal information that describes desirable situations to be achieved.

• Agents of this kind take future events into consideration.

• What sequence of actions can I take to achieve certain goals?

• Choose actions so as to (eventually) achieve a (given or computed) goal.


Module: Problem Solving covers goal-based agents. The agent considers the “future” (e.g., the goal “clean kitchen”).

The agent keeps track of the world state as well as the set of goals it is trying to achieve, and chooses actions that will (eventually) lead to the goal(s).
More flexible than reflex agents; may involve search and planning.
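A small Python sketch of the "sequence of actions to reach a goal" idea, using breadth-first search over a toy "clean kitchen" world; the state encoding and successor function are assumptions made for illustration:

from collections import deque

def plan_to_goal(start, goal, successors):
    """Breadth-first search for a sequence of actions that reaches the goal state."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions                     # action sequence achieving the goal
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None                                # goal unreachable

# Toy world: a state is (location, kitchen_is_clean).
def successors(state):
    location, clean = state
    result = [("go to kitchen", ("kitchen", clean))]
    if location == "kitchen" and not clean:
        result.append(("clean", ("kitchen", True)))
    return result

print(plan_to_goal(("hall", False), ("kitchen", True), successors))
# -> ['go to kitchen', 'clean']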
V) --- Utility-based agents

• When there are multiple possible alternatives, how do we decide which one is best?

• Goals are qualitative: a goal specifies a crude distinction between a happy and an unhappy state, but we often need a more general performance measure that describes the “degree of happiness.”

• A utility function U: State → R indicates a measure of success or happiness when at a given state.

• Important for making trade-offs: allows decisions comparing the choice between conflicting goals, and the choice between likelihood of success and importance of a goal (if achievement is uncertain).

• Use decision-theoretic models: e.g., faster vs. safer.


Utility-based agents
• Goals alone are not enough to generate high-quality behavior
● e.g., meals in the canteen: good or not?
• Many action sequences achieve the goals
● some are better and some are worse
● If a goal means success, then utility means the degree of success (how successful it is)
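A minimal sketch of choosing by utility rather than by goal alone; the utility weights and route outcomes below are invented numbers, only meant to show the "faster vs. safer" trade-off:

def utility(state):
    """Hypothetical utility: trade trip time against safety."""
    return -2.0 * state["time_min"] + 10.0 * state["safety"]

def choose_action(actions, outcome_of):
    """Pick the action whose predicted outcome has the highest utility."""
    return max(actions, key=lambda a: utility(outcome_of(a)))

# Predicted outcomes of two alternative routes (illustrative numbers).
outcomes = {
    "highway":   {"time_min": 20, "safety": 0.70},
    "back_road": {"time_min": 35, "safety": 0.95},
}
print(choose_action(list(outcomes), lambda a: outcomes[a]))
# -> highway (with these particular weights; change them and the choice can flip)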
Utility-based agents (4)
Module: Decision Making
Utility-based agents

Decision-theoretic actions: e.g., faster vs. safer.
It gets more complicated when the agent needs to learn the utility information: reinforcement learning (based on action payoff).

VI) --- Learning agents

• Learning agents adapt and improve over time.
• Module: Learning.
• Diagram (taxi example): the agent takes percepts (road conditions, etc.) and selects actions; having learned that a quick turn is not safe, it makes no quick turn; exploratory actions such as trying out the brakes on different road surfaces help it improve.
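The "try out the brakes on different road surfaces" idea can be sketched as a very small payoff-based learner; the surfaces, actions, and payoff numbers are illustrative assumptions, and reinforcement learning proper is covered in the Learning module:

import random

class LearningBrakeAgent:
    """Toy learning agent: estimates the average payoff of braking styles per surface."""

    def __init__(self, actions=("brake gently", "brake hard")):
        self.actions = actions
        self.value = {}   # (surface, action) -> running average payoff
        self.count = {}   # (surface, action) -> number of trials

    def act(self, surface, explore=0.1):
        # Problem-generator flavor: occasionally try an action just to learn from it.
        if random.random() < explore:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.value.get((surface, a), 0.0))

    def learn(self, surface, action, payoff):
        key = (surface, action)
        self.count[key] = self.count.get(key, 0) + 1
        old = self.value.get(key, 0.0)
        self.value[key] = old + (payoff - old) / self.count[key]   # incremental average

agent = LearningBrakeAgent()
agent.learn("wet road", "brake hard", payoff=-1.0)    # skidded: low payoff
agent.learn("wet road", "brake gently", payoff=+1.0)  # stopped safely
print(agent.act("wet road", explore=0.0))             # -> brake gently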
Summary
• An agent perceives and acts in an environment, has an architecture, and is implemented
by an agent program.
• A rational agent always chooses the action which maximizes its expected performance,
given its percept sequence so far.
• An autonomous agent uses its own experience rather than knowledge of the environment built in by the designer.
• An agent program maps from percept to action and updates its internal state.
– Reflex agents (simple / model-based) respond immediately to percepts.
– Goal-based agents act in order to achieve their goal(s), possibly via a sequence of steps.
– Utility-based agents maximize their own utility function.
– Learning agents improve their performance through learning.
• Representing knowledge is important for successful agent design.

• The most challenging environments are partially observable, stochastic, sequential,
dynamic, and continuous, and contain multiple intelligent agents.
