
Artificial Intelligence

Lecture Notes - 6
(Subject Code: CSC 602, ITC 651)

Prepared by
Dr. Sourabh Jain
Indian Institute of Information Technology Sonepat
AI Agents and Environments
An AI system is composed of an agent and its environment. The agent acts in its environment.
The environment may contain other agents.

What is an Agent?

An agent can be anything that perceives its environment through sensors and acts upon that
environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting. An
agent can be:

• Human agent: A human agent has eyes, ears, and other organs that serve as sensors, and
hands, legs, and a vocal tract that serve as actuators.

• Robotic agent: A robotic agent can have cameras and infrared range finders as sensors and
various motors as actuators.

• Software agent: A software agent can take keystrokes and file contents as sensory input, act on
those inputs, and display output on the screen.
Sensor: A sensor is a device that detects changes in the environment and
sends this information to other electronic devices. An agent observes its
environment through its sensors.

Actuators: Actuators are the components of a machine that convert energy into
motion. They are responsible for moving and controlling a system. An
actuator can be an electric motor, gears, rails, etc.

Effectors: Effectors are the devices that affect the environment. Effectors can be
legs, wheels, arms, fingers, wings, fins, and display screens.
Intelligent Agents
An intelligent agent is an autonomous entity that acts upon an environment using
sensors and actuators to achieve its goals. An intelligent agent may learn from the
environment in order to achieve those goals. A thermostat is an example of an intelligent
agent.
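A minimal sketch of this perceive-think-act cycle, using the thermostat example from the notes; the class and method names below are illustrative assumptions, not part of the notes.

class SimpleThermostat:
    """Toy agent: a sensor reads the temperature, an actuator switches a heater."""

    def __init__(self, target_temp):
        self.target_temp = target_temp

    def perceive(self, room):
        # Sensor: observe the environment
        return room["temperature"]

    def think(self, temperature):
        # Decide on an action from the current percept
        return "heater_on" if temperature < self.target_temp else "heater_off"

    def act(self, action, room):
        # Actuator: affect the environment
        room["heater_on"] = (action == "heater_on")

agent = SimpleThermostat(target_temp=21)
room = {"temperature": 18, "heater_on": False}
agent.act(agent.think(agent.perceive(room)), room)
print(room)   # -> {'temperature': 18, 'heater_on': True}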

The following are the four main rules for an AI agent:

Rule 1: An AI agent must have the ability to perceive the environment.

Rule 2: The observation must be used to make decisions.

Rule 3: A decision should result in an action.

Rule 4: The action taken by an AI agent must be a rational action.


Structure of an AI Agent
The task of AI is to design an agent program that implements the agent function.
The structure of an intelligent agent is a combination of architecture and agent
program. It can be viewed as:

Agent = Architecture + Agent program

The following are the three main terms involved in the structure of an AI agent:

Architecture: The machinery on which an AI agent executes.

Agent function: The agent function maps a percept sequence to an action.

Agent program: An implementation of the agent function. The agent program executes on the physical architecture to produce the function f.
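A minimal sketch of this separation between the agent program (which implements the function f) and the architecture that runs it; the function names below are illustrative assumptions, not taken from the notes.

def reflex_program(percepts):
    # Agent program: an implementation of the agent function f,
    # mapping the percept sequence seen so far to an action.
    return "forward" if percepts[-1] == "clear" else "stop"

def run_architecture(program, percept_stream):
    # "Architecture": the machinery that feeds percepts to the agent program
    # and carries out the actions it returns.
    history, actions = [], []
    for percept in percept_stream:
        history.append(percept)
        actions.append(program(history))
    return actions

print(run_architecture(reflex_program, ["clear", "clear", "blocked"]))
# -> ['forward', 'forward', 'stop']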
PEAS Representation
PEAS is a model used to characterize an AI agent. When we define an AI
agent or rational agent, we can group its properties under the PEAS
representation model. It is made up of four terms:

P: Performance measure: The performance measure defines the criterion of success
for an agent. Performance varies between agents based on their different percepts.

E: Environment: The environment is the surrounding of the agent at every instant. It
keeps changing with time if the agent is in motion.

A: Actuators: An actuator is the part of the agent that delivers the output of an action to
the environment.

S: Sensors: Sensors are the receptive parts of an agent that take in the input for
the agent.
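As an illustration, a PEAS description can be written down as a small data structure; the self-driving-car entries below are an assumed example, not taken from these notes.

from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list    # P: criteria of success
    environment: list    # E: surroundings of the agent
    actuators: list      # A: parts that deliver actions
    sensors: list        # S: parts that take in percepts

self_driving_car = PEAS(
    performance=["safety", "travel time", "comfort"],
    environment=["roads", "traffic", "pedestrians"],
    actuators=["steering wheel", "accelerator", "brake", "horn"],
    sensors=["cameras", "GPS", "speedometer", "odometer"],
)
print(self_driving_car.sensors)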
Types of AI Agents
Agents can be grouped based on their degree of perceived intelligence
and capability. All of these agents can improve their performance and
generate better actions over time. They are given below:

• Simple Reflex Agent

• Model-based Reflex Agent

• Goal-based Agent
1. Simple Reflex Agent
• Simple reflex agents are the simplest agents. These agents take decisions on the basis of the current
percepts and ignore the rest of the percept history.

• These agents succeed only in a fully observable environment.

• The simple reflex agent does not consider any part of the percept history during its decision and action process.

• The simple reflex agent works on the condition-action rule, which maps the current state directly to an action. For example,
a room-cleaner agent acts only if there is dirt in the room (a small sketch of such a rule is given below).
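A minimal sketch of such a condition-action rule for the room-cleaner example; the locations, rule details, and names are illustrative assumptions.

def simple_reflex_cleaner(percept):
    # Decision uses the current percept only; the percept history is ignored.
    location, status = percept              # e.g. ("A", "Dirty")
    if status == "Dirty":                   # condition-action rule: dirt -> suck
        return "Suck"
    return "Move right" if location == "A" else "Move left"

print(simple_reflex_cleaner(("A", "Dirty")))   # -> Suck
print(simple_reflex_cleaner(("A", "Clean")))   # -> Move right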
1. Simple Reflex Agent (Cont…)
Problems with the simple reflex agent design approach:

• They have very limited intelligence.

• They have no knowledge of the non-perceptual parts of the current state.

• Their sets of condition-action rules are usually too large to generate and store.

• They are not adaptive to changes in the environment.


2. Model-based Reflex Agent
• The model-based agent can work in a partially observable environment and keep track of the situation.

• A model-based agent has two important components:

• Model: knowledge about "how things happen in the world"; this is why it is called a model-based agent.

• Internal state: a representation of the current state based on the percept history.

• These agents hold a model, which is knowledge of the world, and perform actions based on that model (a minimal sketch
follows this list).

• Updating the agent's state requires information about:

a. how the world evolves, and

b. how the agent's actions affect the world.
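A minimal sketch of a model-based room cleaner that keeps an internal state updated from the percept history; the belief-update rules and names are illustrative assumptions.

class ModelBasedCleaner:
    def __init__(self):
        # Internal state: beliefs about squares the agent cannot currently see.
        self.believed_status = {"A": "Unknown", "B": "Unknown"}

    def act(self, percept):
        location, status = percept
        # Update the internal state from the new percept (how the world evolves).
        self.believed_status[location] = status
        # Choose an action using both the current percept and the model.
        if status == "Dirty":
            action = "Suck"
        else:
            other = "B" if location == "A" else "A"
            action = "Move to " + other if self.believed_status[other] != "Clean" else "Wait"
        # Record how the agent's own action affects the world.
        if action == "Suck":
            self.believed_status[location] = "Clean"
        return action

cleaner = ModelBasedCleaner()
print(cleaner.act(("A", "Dirty")))   # -> Suck
print(cleaner.act(("A", "Clean")))   # -> Move to B  (B's status is still unknown)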


3. Goal-based Agents
• Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.

• The agent needs to know its goal, which describes desirable situations.

• Goal-based agents expand the capabilities of the model-based agent by adding "goal" information.

• They choose actions so that they can achieve the goal.

• These agents may have to consider a long sequence of possible actions before
deciding whether the goal is achieved or not. Such consideration of different
scenarios is called searching and planning, and it makes the agent proactive (a small search sketch is given below).
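A minimal sketch of goal-based action selection as search: a brute-force breadth-first search over action sequences in a tiny assumed toy world (the world and the function names are illustrative, not taken from these notes).

from collections import deque

def plan(start, goal, successors):
    # Breadth-first search over action sequences: return the first sequence
    # of actions that turns the start state into the goal state.
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:                    # a desirable situation has been reached
            return actions
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None                              # the goal cannot be achieved

def successors(pos):
    # Tiny one-dimensional world: the agent can step left or right along positions 0..4.
    moves = []
    if pos > 0:
        moves.append(("left", pos - 1))
    if pos < 4:
        moves.append(("right", pos + 1))
    return moves

print(plan(0, 3, successors))   # -> ['right', 'right', 'right']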
