Chapter 2

The document discusses intelligent agents, defining them as entities that perceive their environment through sensors and act upon it via actuators to achieve goals. It outlines the types of agents, including simple reflex, model-based reflex, goal-based, and utility-based agents, and emphasizes the importance of performance measures in evaluating their success. Additionally, it highlights the role of learning agents, which improve their performance through feedback and adaptation to their environment.

Artificial Intelligence

CMP 346

Unit 2: Intelligent Agents


Er. Nirmal Thapa
Email: [email protected]
Lumbini Engineering College
Pokhara University
Intelligent Agents
§ An agent is anything that can be viewed as perceiving its environment through sensors
and acting upon that environment through actuators.
§ An autonomous entity that acts upon its environment, using sensors and actuators, to
achieve its goals.

Ref. Book: AI by Russell and Norvig


Intelligent Agents
§ What do you mean, sensors/percepts and effectors/actions?
For Humans
§ Sensors: Eyes (vision), ears (hearing), skin (touch), tongue (gustation), nose (smell)
§ Percepts:
§ At the lowest level – electrical signals from these sensors
§ After pre-processing – objects in the visual field (location, textures, colors, …), auditory
streams (pitch, loudness, direction), …
§ Effectors: limbs, digits, eyes, tongue, …
§ Actions: lift a finger, turn left, walk, run, carry an object, …
Intelligent Agents
§ What do you mean, sensors/percepts and effectors/actions?
A more specific example: Automated taxi driving system
§ Percepts: Video, sonar, speedometer, odometer, engine sensors, keyboard input,
microphone, GPS, …
§ Actions: Steer, accelerate, brake, horn, speak/display, …
§ Goals: Maintain safety, reach destination, maximize profits (minimize fuel consumption and
tire wear), obey laws, provide passenger comfort, …
§ Environment: Urban streets, freeways, traffic, pedestrians, weather, customers, …
Intelligent Agents
The vacuum-cleaner world: Example of Agent

§ Environment: squares A and B

§ Percepts: [location, content], e.g. [A, Dirty]
§ Actions: Left, Right, Suck, and NoOp
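The two-square world above can be sketched as a tiny environment class; the names and structure below are an illustrative sketch, not code from the reference book.

```python
# A minimal sketch of the two-square vacuum world described above.
# The environment tracks the dirt status of squares A and B plus the
# agent's location; the actions are "Left", "Right", "Suck", "NoOp".

class VacuumWorld:
    def __init__(self, location="A", status=None):
        self.location = location                       # agent is in A or B
        self.status = status or {"A": "Dirty", "B": "Dirty"}

    def percept(self):
        # A percept is [location, content], e.g. ["A", "Dirty"]
        return [self.location, self.status[self.location]]

    def execute(self, action):
        if action == "Suck":
            self.status[self.location] = "Clean"
        elif action == "Right":
            self.location = "B"
        elif action == "Left":
            self.location = "A"
        # "NoOp" changes nothing

world = VacuumWorld()
print(world.percept())    # ['A', 'Dirty']
world.execute("Suck")
world.execute("Right")
print(world.percept())    # ['B', 'Dirty']
```

The same environment object can then be driven by any of the agent programs discussed in the following slides.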
Intelligent Agents: Performance measure
The concept of rationality
§ A rational agent is one that does the right thing.
§ Conceptually: every entry in the table describing the agent function is filled out correctly.
What is the right thing?
§ The right action is the one that will cause the agent to be most successful.
§ Therefore we need some way to measure the success of an agent.
§ Performance measures are the criteria for success of an agent's behaviour.
§ The performance measure evaluates the behaviour of the agent in an environment.
§ It defines the agent's degree of success.
§ E.g., the performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up,
the time taken, the electricity consumed, the noise generated, etc.
§ It is better to design the performance measure according to what is wanted in the environment
rather than according to how the agent should behave.
Intelligent Agents: Performance measure
§ It is not an easy task to choose the performance measure of an agent.
§ For example, if the performance measure for an automated vacuum cleaner is "the amount of
dirt cleaned within a certain time", then a rational agent can maximize this measure by
cleaning up the dirt, dumping it all back on the floor, cleaning it up again, and so on.
§ Therefore "how clean the floor is" is a better choice of performance measure for a vacuum
cleaner.
What is rational at a given time depends on four things:
• The performance measure,
• The agent's prior knowledge of the environment,
• The actions the agent can perform,
• The agent's percept sequence to date (from its sensors).
Definition: A rational agent chooses whichever action maximizes the expected value of the
performance measure given the percept sequence to date and prior environment knowledge.
Environment
§ To design a rational agent we must specify its task environment.
§ The task environment is specified by a PEAS description:
§ Performance measure
§ Environment
§ Actuators
§ Sensors
Example: Fully automated taxi:
§ PEAS description of the environment:
§ Performance: Safety, destination, profits, legality, comfort
§ Environment: Streets/freeways, other traffic, pedestrians, weather, …
§ Actuators: Steering, accelerator, brake, horn, speaker/display, …
§ Sensors: Video, sonar, speedometer, engine sensors, keyboard, GPS, …
Types of Agent
1. Simple reflex agent:
§ The simplest kind of agent
§ Takes decisions on the basis of the current percept, ignoring the rest of the percept history
§ These agents succeed only in a fully observable environment
§ Works on condition-action rules, which map the current state directly to an action
§ It acts according to the rule whose condition matches the current state, as defined by the
percept.
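A simple reflex agent for the vacuum-cleaner world can be sketched as a single function of the current percept; the condition-action rules below follow the example in Russell and Norvig, but the function name is illustrative.

```python
# Sketch of a simple reflex agent for the vacuum world: the chosen
# action depends only on the current percept [location, content],
# never on the percept history.

def reflex_vacuum_agent(percept):
    location, content = percept
    if content == "Dirty":        # condition-action rules
        return "Suck"
    elif location == "A":
        return "Right"
    else:                         # location == "B"
        return "Left"

print(reflex_vacuum_agent(["A", "Dirty"]))   # Suck
print(reflex_vacuum_agent(["A", "Clean"]))   # Right
```

Because the rules look only at the current percept, this agent works here precisely because the vacuum world is fully observable.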
Types of Agent
2. Model-based reflex agent:
§ Can work in a partially observable environment by tracking the situation.
§ Has two important components:
§ Model: knowledge about "how things happen in the world" (hence the name "model-based")
§ Internal state: a representation of the current state of the world, maintained from the percept
history
§ Using its model, the agent keeps track of the current state of the world and then chooses an
action.
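The internal state can be sketched for the vacuum world as a record of which squares the agent believes are clean, updated from each percept; the class and its stopping rule are an illustrative sketch, not the textbook's code.

```python
# Sketch of a model-based reflex agent: it keeps an internal state
# (which squares it believes are clean), updated from the percept
# history, so it can stop working once its model says the whole
# world is clean -- even though it only ever senses one square.

class ModelBasedVacuumAgent:
    def __init__(self):
        self.model = {"A": None, "B": None}    # internal state: believed status

    def act(self, percept):
        location, content = percept
        self.model[location] = content         # update internal state from percept
        if all(v == "Clean" for v in self.model.values()):
            return "NoOp"                      # model says everything is clean
        if content == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.act(["A", "Dirty"]))   # Suck
print(agent.act(["A", "Clean"]))   # Right
print(agent.act(["B", "Clean"]))   # NoOp -- both squares believed clean
```

The `NoOp` case is exactly what the simple reflex agent cannot do, because deciding it requires remembering the status of the square the agent is no longer in.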
Types of Agent
3. Goal based agent:
§ Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do
§ The agent needs to know its goal, which describes desirable situations; it chooses actions that
achieve the goal
§ It may have to consider a long sequence of possible actions before deciding whether the goal can
be achieved.

Ref. Book: AI by Russell and Norvig

§ Keeps track of the world state as well as a set of goals it is trying to achieve, and chooses an action
that will lead to the achievement of its goals.
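Considering sequences of actions before acting can be sketched for the vacuum world with a brute-force planner; the state encoding and search below are an illustrative sketch, not an algorithm from the reference book.

```python
# Sketch of goal-based choice: search over sequences of actions and
# return one whose resulting state satisfies the goal. A state is
# (location, frozenset of dirty squares); the goal is "no dirt left".

from itertools import product

def result(state, action):
    location, dirty = state
    if action == "Suck":
        return (location, dirty - {location})
    if action == "Right":
        return ("B", dirty)
    return ("A", dirty)            # "Left"

def goal_test(state):
    return not state[1]            # goal achieved: no square is dirty

def plan(state, max_len=4):
    # Try increasingly long action sequences until one reaches the goal.
    for n in range(1, max_len + 1):
        for seq in product(["Suck", "Right", "Left"], repeat=n):
            s = state
            for a in seq:
                s = result(s, a)
            if goal_test(s):
                return list(seq)

print(plan(("A", frozenset({"A", "B"}))))   # ['Suck', 'Right', 'Suck']
```

Even in this two-square world the agent must look three actions ahead; Unit 3's search algorithms replace this brute-force enumeration with something far more efficient.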
Types of Agent
4. Utility based agent:
§ Chooses the best way to achieve the goal
§ Useful when there are multiple possible alternatives and the agent has to choose the best action
among them
§ The utility function maps each state to a real number measuring how efficiently each action
achieves the agent's goals
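A utility function for the vacuum world can be sketched as a weighted score over states, with the agent picking the action leading to the highest achievable utility; the weights and the one step of lookahead are made up for illustration.

```python
# Sketch of utility-based action selection: utility() maps each state
# to a real number, and best_action() picks the action whose outcome
# scores highest. A state is (location, dirty squares, elapsed time).

def outcome(state, action):
    location, dirty, time = state
    if action == "Suck":
        return (location, dirty - {location}, time + 1)
    new_loc = "B" if action == "Right" else "A"
    return (new_loc, dirty, time + 1)

def utility(state):
    location, dirty, time = state
    return -10 * len(dirty) - time       # dirt is very costly, time mildly so

def best_action(state, actions=("Suck", "Right", "Left")):
    # One step of lookahead beyond the immediate outcome, so the agent
    # can see that moving toward dirt pays off on the next step.
    def value(s):
        return max(utility(outcome(s, a)) for a in actions)
    return max(actions, key=lambda a: value(outcome(state, a)))

print(best_action(("A", frozenset({"A"}), 0)))   # Suck
print(best_action(("A", frozenset({"B"}), 0)))   # Right
```

Unlike a bare goal test, the utility function trades off dirt against time, so the agent can rank alternatives rather than merely classify states as goal or non-goal.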
Characteristics of Intelligent Agent (IA)
§ Must learn and improve through interaction with the environment

§ Must adapt online, in real time

§ Must learn quickly from large amounts of data

§ Must accommodate new problem-solving rules incrementally

§ Must have memory, exhibiting storage and retrieval capabilities

§ Should be able to analyze itself in terms of behaviour, errors and successes


Learning Agents
Agent Programs Origins:
• Early idea from Turing (1950): program intelligent machines by hand.
• Turing concluded that building learning machines and then teaching them would be more efficient.
• Modern AI often prefers learning-based systems for advanced performance.

Advantages of Learning Agents:


• Operate effectively in unknown environments.
• Enhance competence beyond initial knowledge.
• Suitable for all agent types (model-based, goal-based, utility-based, etc.).
Key Components of Learning Agents
1. Performance Element:
• Takes percepts and decides actions.
• Represents the agent's operational part.
2. Learning Element:
• Improves performance using feedback.
• Depends on feedback from the critic.
• Adjusts the performance element for better future actions.
3. Critic:
• Evaluates performance based on a fixed performance standard.
• Provides feedback for learning since percepts alone lack evaluation context.
4. Problem Generator:
• Suggests exploratory actions for new experiences.
• Encourages long-term improvement by exploring beyond optimal immediate actions.
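The four components can be sketched wired together in a single loop: the performance element picks actions, the critic scores them against a fixed standard, the learning element adjusts the performance element, and the problem generator proposes exploratory inputs. All names and numbers below are illustrative, not from the reference book.

```python
# Sketch of a learning agent: a one-parameter performance element is
# tuned by the learning element using feedback from the critic, on
# experiences proposed by the problem generator.

import random

class LearningAgent:
    def __init__(self):
        self.threshold = 0.5       # tunable parameter of the performance element

    def performance_element(self, percept):
        # Operational part: decide an action from the current percept.
        return "act" if percept > self.threshold else "wait"

    def critic(self, percept, action):
        # Fixed performance standard: "act" is right exactly when percept > 0.
        return 1.0 if (action == "act") == (percept > 0) else -1.0

    def learning_element(self, percept, action, feedback):
        # On negative feedback, nudge the performance element's parameter.
        if feedback < 0:
            self.threshold += 0.1 if action == "act" else -0.1

    def problem_generator(self):
        # Suggest an exploratory percept to gather new experience.
        return random.uniform(-1, 1)

agent = LearningAgent()
for _ in range(200):
    p = agent.problem_generator()
    a = agent.performance_element(p)
    agent.learning_element(p, a, agent.critic(p, a))
print(agent.performance_element(0.9))   # act
```

The agent starts with a mis-set threshold of 0.5 and, through repeated critic feedback, drifts it toward the standard's true boundary at 0; this is the "enhance competence beyond initial knowledge" advantage listed above in miniature.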
Learning Agents
• Learning agents refine their models to better align with reality.

• Example: Automated taxi learns braking effectiveness under varied conditions.

• Simple, slightly inaccurate models may sometimes be computationally preferable.

• External standards provide information for reflexes or utility adjustments.

• Example: A taxi driver learns to avoid violent maneuvers after losing tips due to
passenger discomfort.

• Observing human reactions helps update the agent's utility function.

• Example: A taxi learns not to honk continuously based on negative human responses.

• Learning is about modifying agent components to align with feedback.

• Improves the agent's overall performance through systematic updates.


Questions?
Thank you!