Module 1 (Part 2)
IN AI and ML
CSA2001
J. Jayanthi
Recap…….
• 1. Difference between AI, ML and DL - case study
2. 15 applications of AI
3. Watch the video (I'll send the link): https://www.youtube.com/watch?v=poLZqn2_dv4
4. Deep Blue (chess case study)
5. Limitations in AI today
• Human agent:
– eyes, ears, and other organs for sensors;
– hands, legs, mouth, and other body parts for actuators
• Robotic agent:
– cameras and infrared range finders for sensors
– various motors for actuators
• agent = architecture + program
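The slogan "agent = architecture + program" can be sketched in code: the architecture (the body) delivers percepts and executes actions, while the agent program maps each percept to an action. The thermostat program below is a made-up illustration, not from any textbook or library:

```python
# Hypothetical sketch: an agent is an architecture (sensors + actuators)
# running an agent program that maps percepts to actions.

class Agent:
    def __init__(self, program):
        self.program = program          # the agent program (the "brain")

    def step(self, percept):            # architecture delivers a percept...
        return self.program(percept)    # ...and carries out the chosen action

# Illustrative agent program: a thermostat reacting to temperature.
def thermostat_program(percept):
    temperature = percept
    return "heat_on" if temperature < 20 else "heat_off"

agent = Agent(thermostat_program)
```

Swapping in a different program (reflex, goal-based, utility-based) changes the agent's behaviour without touching the architecture.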
Recap…….
• PEAS? - Dance Recognition softbot system
• Rational agent? Four points?
• Percept?
• Sensor?
• Actuators?
• Example - rational agents
• AI vs ML vs DL
Agent Types (in order of increasing sophistication)
1. Table-driven agent
2. Simple reflex agent
3. Reflex agent with internal state
4. Agent with explicit goals
5. Utility-based agent
I) --- Table-driven agent
function TABLE-DRIVEN-AGENT(percept) returns an action
    static: percepts, a sequence, initially empty
            table, a table of actions, indexed by percept sequences, initially fully specified
    append percept to the end of percepts
    action ← LOOKUP(percepts, table)
    return action
• Problems
– Huge number of possible percepts (consider an automated taxi with a camera as the sensor) => the lookup table would be huge, often simply too large
– Takes a long time to build/learn the table
– Not adaptive to changes in the environment; the entire table must be updated if changes occur
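The table-driven idea can be sketched directly: the action is looked up by the entire percept sequence seen so far, which is exactly why the table explodes in size. The three-entry vacuum-world table below is a made-up illustration:

```python
# Hypothetical sketch of a table-driven agent: actions are indexed by
# the FULL percept sequence, so the table grows exponentially with time.

percepts = []  # the percept sequence, initially empty

# Tiny illustrative table, indexed by percept-sequence tuples.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}

def table_driven_agent(percept):
    percepts.append(percept)                  # remember everything seen
    return table.get(tuple(percepts), "NoOp") # unlisted sequences do nothing
```

With |P| possible percepts and a lifetime of T steps, the table needs on the order of |P|^T entries, which is why this design is impractical beyond toy worlds.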
II) --- Simple reflex agents: reacting swiftly to the present
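A simple reflex agent chooses an action from the current percept only, via condition-action rules, keeping no history. A minimal sketch for the standard two-cell vacuum world:

```python
# Hypothetical sketch: a simple reflex agent applies condition-action
# rules to the CURRENT percept only; it stores no percept history.

def simple_reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":   # rule: current square dirty -> suck
        return "Suck"
    if location == "A":     # rule: at A and clean -> move right
        return "Right"
    return "Left"           # rule: at B and clean -> move left
```

Because it ignores history, such an agent works only when the correct action is fully determined by the current percept.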
III) --- Reflex agents with internal state (model-based)
(figure annotation: the agent "infers potentially dangerous driver in front")
IV) --- Agents with explicit goals
(figure annotations: the agent considers the "future"; example goal: "Clean kitchen")
The agent keeps track of the world state as well as a set of goals it is trying to achieve, and chooses actions that will (eventually) lead to the goal(s).
More flexible than reflex agents; may involve search and planning.
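The description above (track the world state, choose actions that lead to a goal) is essentially search over the agent's internal model. A minimal sketch using breadth-first search; the one-dimensional corridor world is an illustrative assumption:

```python
# Hypothetical sketch of a goal-based agent: it searches its internal
# model of the world for a sequence of actions reaching the goal.
from collections import deque

def plan_to_goal(start, goal, successors):
    """Breadth-first search; returns a list of actions, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None  # goal unreachable

# Illustrative model: a corridor of rooms 0..3, moves Left/Right.
def successors(room):
    moves = []
    if room > 0:
        moves.append(("Left", room - 1))
    if room < 3:
        moves.append(("Right", room + 1))
    return moves
```

This is what "may involve search and planning" means concretely: the plan is computed from the model before any action is executed.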
V) --- Utility-based agents
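A utility-based agent refines the goal-based design: instead of a binary goal test, it scores outcomes with a utility function and picks the action whose resulting state scores highest. The taxi-style trade-off below (speed vs. passenger comfort) is a made-up example:

```python
# Hypothetical sketch: a utility-based agent evaluates the outcome of
# each available action and acts to maximize a utility function.

def utility_based_agent(state, actions, result, utility):
    """Pick the action whose resulting state has the highest utility."""
    return max(actions, key=lambda a: utility(result(state, a)))

# Illustrative outcome model: state = (speed, comfort).
def result(state, action):
    speed, comfort = state
    if action == "fast_lane":
        return (speed + 20, comfort - 15)
    if action == "slow_lane":
        return (speed - 5, comfort + 5)
    return state  # "stay"

def utility(state):
    speed, comfort = state
    return speed + 2 * comfort  # assumed trade-off: comfort weighted double
```

Unlike a goal test, the utility function lets the agent compare degrees of success and resolve conflicting objectives.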
Learning agents
(figure fragments: learning module; example learned rule "No quick turn")