Part 2

What is an Agent?

• An agent is anything that acts. It is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

• An agent:
 is expected to do more than a computer program.
 achieves perfect rationality (always doing the right thing).

• The rational agent is one that acts to achieve the best outcome and accomplish perfect rationality (always doing the right thing).

Types of Agents:
1. Human Agent.
2. Robotic Agent.
3. Software Agent.

Percept, Percept Sequence, Agent Function, and Agent Program

• Percept: the agent's perceptual inputs at any given instant.
• Percept sequence: the complete history of everything the agent has perceived.
• Agent function: an agent's behavior is described by the agent function that maps any given percept sequence to an action.
• Agent program: the agent function for an artificial agent will be implemented by an agent program.

Types of Agent:

1. A human agent has eyes, ears, and other organs for sensors, and hands, legs, and a vocal tract for actuators.
2. A robotic agent may have cameras and infrared range finders for sensors and motors for actuators.
3. A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets.
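To make the distinction concrete, here is a minimal sketch (not from the slides; the names and the toy rule are illustrative) of an agent function that maps a whole percept sequence to an action, and an agent program that implements it by receiving one percept at a time:

def agent_function(percept_sequence):
    # The abstract mapping: complete percept history -> action.
    # A trivial rule that only looks at the latest percept stands in for real decision logic.
    return "clean" if percept_sequence[-1] == "dirty" else "move"

def make_agent_program():
    # The agent program implements the agent function on a concrete machine:
    # it receives one percept at a time and keeps the history internally.
    history = []
    def program(percept):
        history.append(percept)
        return agent_function(history)
    return program

program = make_agent_program()
print(program("dirty"))   # -> clean
print(program("clean"))   # -> move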

Agents and Environments

• The properties of the environment can influence the design of a successful and suitable agent.
• Agents include humans, robots, software agents (softbots), thermostats, etc.
• The agent function f maps percept histories P* to actions A:

f : P* → A

• The agent program runs on the physical architecture to produce f.

Good Behavior: Rationality

• A rational agent is one that does the right thing. —> What is the right thing?
• A sequence of actions causes the environment to go through a sequence of states.
• If the sequence is desirable, then the agent has performed well.
• The notion of desirability is captured by a performance measure that evaluates any given sequence of environment states.
• A rational agent chooses whichever action maximizes the expected value of the performance measure given the percept sequence.
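As a toy illustration of a performance measure that evaluates a sequence of environment states, here is a sketch in the spirit of the two-square vacuum-cleaner world (the scoring rule is an assumption, not something given in the slides):

def performance_measure(state_sequence):
    # Award one point for every clean square in every environment state visited.
    return sum(state.count("Clean") for state in state_sequence)

# Two possible histories of environment states for a two-square world.
history_good = [["Dirty", "Dirty"], ["Clean", "Dirty"], ["Clean", "Clean"]]
history_bad  = [["Dirty", "Dirty"], ["Dirty", "Dirty"], ["Dirty", "Dirty"]]
print(performance_measure(history_good))  # 3: the more desirable sequence scores higher
print(performance_measure(history_bad))   # 0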

• Performance Measure: evaluates any given sequence of environment states.
 The evaluation of the environment is done after the action is taken.
 It is better to design the performance measure according to what one actually wants in the environment rather than according to how one thinks the agent should behave.

• Rationality depends on 4 things:
 1. The performance measure that defines the criterion of success.
 2. The agent's prior knowledge of the environment.
 3. The actions that the agent can perform.
 4. The agent's percept sequence to date.

Rationality: Omniscience, Learning, and Autonomy

• An omniscient (perfect) agent knows the actual outcome of its actions and can act accordingly; but perfection is impossible in reality.
• Rationality is NOT the same as perfection.
• Rationality maximizes expected performance, while perfection maximizes actual performance.
• A rational agent should not only gather information (exploration) but also learn as much as possible from what it perceives.
• If an agent relies on the prior knowledge of its designer rather than on its own percepts, we say that the agent lacks autonomy. A rational agent should be autonomous.
• Rational —> exploration, learning, autonomy.
Rational Agent:

• For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure.
 Omniscient: knows everything.
 Learning: gathers information.
 Autonomy: relies on what it has learned on top of its prior knowledge.
 P.E.A.S: Performance Measure + Environment + Actuators + Sensors.

Task Environment: PEAS

• To design a rational agent, we must specify the task environment.
• The performance measure, the environment, and the agent's actuators and sensors are grouped as the task environment, called PEAS (Performance measure, Environment, Actuators, Sensors):
1. Performance Measure
2. Environment
3. Actuators
4. Sensors
• PEAS description of the task environment for an automated taxi.
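The automated-taxi PEAS description referred to above can be written down as plain data; the entries below follow the usual textbook version of this example and are illustrative only:

peas_automated_taxi = {
    "Performance measure": ["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators": ["steering", "accelerator", "brake", "signal", "horn", "display"],
    "Sensors": ["cameras", "sonar", "speedometer", "GPS", "odometer", "engine sensors", "keyboard"],
}

for component, items in peas_automated_taxi.items():
    print(f"{component}: {', '.join(items)}")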

Environment types

• The environment type largely determines the agent design.

• Fully observable vs. partially observable:
• If an agent's sensors give it access to the complete state of the environment at each point in time, then the task environment is called fully observable.
• Fully observable environments are convenient because the agent need not maintain any internal state to keep track of the world.
• An environment might be partially observable because of noisy and inaccurate sensors.

• Single agent vs. multi-agent:
• One or more agents. Solving a crossword puzzle is a single-agent environment.
• Chess is a competitive multi-agent environment because an agent tries to maximize its performance while minimizing the performance of the other agent.
• Taxi driving is a partially cooperative multi-agent environment because avoiding collisions maximizes all agents' performances.

• Deterministic vs. Stochastic:
• If the next state of the environment is completely determined by the current state and the action executed by the agent, then the environment is deterministic; otherwise, it is stochastic.

• Episodic vs. Sequential:
• In an episodic task environment, the agent's experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action. The next episode does not depend on the actions taken in previous episodes.
• In sequential environments, on the other hand, the current decision could affect all future decisions. Chess and taxi driving are sequential: in both cases, short-term actions can have long-term consequences.

• Static vs. Dynamic:
• If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise, it is static.

• Discrete vs. Continuous:
• The discrete/continuous distinction applies to the state of the environment.
• Chess is discrete; taxi driving is continuous.
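One way to summarize these dimensions for two of the examples above is shown in the sketch below (illustrative only; the property values follow the crossword-puzzle and taxi-driving discussion):

example_environments = {
    "crossword puzzle": {
        "observability": "fully observable", "agents": "single",
        "determinism": "deterministic", "episodicity": "sequential",
        "dynamics": "static", "state": "discrete",
    },
    "taxi driving": {
        "observability": "partially observable", "agents": "multi (partially cooperative)",
        "determinism": "stochastic", "episodicity": "sequential",
        "dynamics": "dynamic", "state": "continuous",
    },
}

for name, properties in example_environments.items():
    print(name, properties)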

Agent types

• Four basic types in order of increasing generality:
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents

• All these can be turned into learning agents.

1. Simple reflex agents

• Simple reflex agents select actions on the basis of the current percept, ignoring the rest of the percept history.
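A minimal sketch of a simple reflex agent, modelled on the classic two-square vacuum-cleaner example (illustrative only; the condition-action rules are the standard toy ones):

def simple_reflex_vacuum_agent(percept):
    # The agent looks only at the current percept: (location, status).
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))   # -> Left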

2. Model-based Reflex Agents

• A model-based reflex agent keeps track of the current state of the world, using an internal model. It then chooses an action in the same way as the reflex agent.

3. Goal-based Agents

• A goal-based agent keeps track of the world state as well as a set of goals it is trying to achieve, and chooses an action that will (eventually) lead to the achievement of its goals.
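A possible structural sketch of a model-based reflex agent; the helper names (update_state, rules) are placeholders for a real world model and rule set, not something given in the slides:

class ModelBasedReflexAgent:
    def __init__(self, update_state, rules):
        self.state = {}                   # internal model of the current world state
        self.update_state = update_state  # world model: (old state, percept) -> new state
        self.rules = rules                # list of (condition, action) pairs

    def __call__(self, percept):
        # 1. Update the internal state using the model and the new percept.
        self.state = self.update_state(self.state, percept)
        # 2. Then choose an action exactly like a simple reflex agent, but from the state.
        for condition, action in self.rules:
            if condition(self.state):
                return action
        return "NoOp"

# Example use with placeholder model and rules:
agent = ModelBasedReflexAgent(
    update_state=lambda state, percept: {**state, **percept},
    rules=[(lambda s: s.get("status") == "Dirty", "Suck")],
)
print(agent({"location": "A", "status": "Dirty"}))  # -> Suck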
4. Utility-based agents

• A utility-based agent uses a model of the world, along with a utility function that measures its preferences among states of the world.
• It then chooses the action that leads to the best expected utility, where expected utility is computed by averaging over all possible outcome states, weighted by the probability of the outcome.

Learning agents

• The learning element is responsible for making improvements.
• The performance element is responsible for selecting external actions.
• The learning element uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future.
• The problem generator is responsible for suggesting actions that will lead to new and informative experiences.
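The expected-utility rule for the utility-based agent can be sketched as follows; the transition model and the utility numbers are made up purely for illustration:

def expected_utility(action, transition_model, utility):
    # transition_model(action) returns (probability, outcome_state) pairs.
    return sum(p * utility(state) for p, state in transition_model(action))

def choose_action(actions, transition_model, utility):
    # Pick the action with the highest expected utility.
    return max(actions, key=lambda a: expected_utility(a, transition_model, utility))

# Toy example with made-up outcome probabilities and utilities.
outcomes = {
    "wait":     [(1.0, "no progress")],
    "overtake": [(0.9, "ahead"), (0.1, "collision")],
}
utilities = {"no progress": 0, "ahead": 5, "collision": -100}
print(choose_action(["wait", "overtake"], outcomes.get, utilities.get))  # -> wait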

The structure of agents

 A learning agent's 4 components:
1. Learning element: responsible for making improvements.
2. Performance element (agent): the entire agent described before.
3. Critic: tells how the agent is doing and how it should be modified to do better in the future.
4. Problem generator: responsible for suggesting actions that will lead to new experiences.

 Agent = program + architecture.
 Agent program: takes the current percept as input.
 Agent function: takes the entire percept sequence as input.
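A hedged structural sketch of how the four learning-agent components fit together; only the wiring is meant to be informative, and the component behaviours themselves are placeholders supplied by the caller:

class LearningAgent:
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element  # selects external actions
        self.learning_element = learning_element        # makes improvements
        self.critic = critic                            # judges how well the agent is doing
        self.problem_generator = problem_generator      # suggests new, informative actions

    def step(self, percept):
        # The critic turns the percept into feedback on performance.
        feedback = self.critic(percept)
        # The learning element uses that feedback to modify the performance element.
        self.learning_element(self.performance_element, feedback)
        # Sometimes the problem generator proposes an exploratory action instead.
        exploratory = self.problem_generator(percept)
        return exploratory if exploratory is not None else self.performance_element(percept)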
