
VISHNU INSTITUTE OF TECHNOLOGY :: BHIMAVARAM
DEPARTMENT OF CSE (ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING)

FUNDAMENTALS OF ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING

UNIT 2
Topics:
1. Agents and Environments
2. Good Behavior: The Concept of Rationality
3. The Nature of Environments
4. Structure of Agents
Agents and Environments
An “agent” is an independent program or entity that interacts with
its environment by perceiving its surroundings via sensors and then
acting through actuators. Agents run through a repeating cycle of
perception, thought, and action. Examples of agents in general terms
include:

a. Software: A software agent receives file contents, keystrokes, and
incoming network packets as sensory input, acts on those inputs, and
displays its output on a screen.

b. Human: Yes, we are all agents. Humans have eyes, ears, and other
organs that act as sensors, while hands, legs, mouths, and other body
parts act as actuators.

c. Robotic: Robotic agents have cameras and infrared range finders
that act as sensors, and various servos and motors that act as
actuators.

Intelligent agents in AI are autonomous entities that act upon an
environment using sensors and actuators to achieve their goals. In
addition, intelligent agents may learn from the environment in order
to achieve those goals. Driverless cars and the Siri virtual
assistant are examples of intelligent agents in AI. There are five
different types of intelligent agents used in AI, defined by their
range of capabilities and intelligence level:

 Simple Reflex Agents: These agents work in the here and now and
ignore the past. They respond using the event-condition-action (ECA)
rule: when a user initiates an event, the agent consults a list of
preset conditions and rules, resulting in pre-programmed outcomes.

 Model-based Agents: These agents choose their actions the way
reflex agents do, but they have a more comprehensive view of the
environment. A model of the environment is programmed into the
internal system, incorporating the agent's percept history.

 Goal-based Agents: These agents build on the information that a
model-based agent stores by augmenting it with goal information,
that is, data about desirable outcomes and situations.

 Utility-based Agents: These are comparable to goal-based agents,
except that they add a utility measurement. This measurement rates
each possible scenario based on the desired result and selects the
action that maximizes the expected outcome. Rating criteria include
variables such as the probability of success or the resources
required.

 Learning Agents: These agents employ an additional learning
element to gradually improve and become more knowledgeable about an
environment over time. The learning element uses feedback to decide
how the performance element should be gradually changed to improve.

Agent Terminology
1. Performance Measure of Agent − the criterion that determines
how successful an agent is.
2. Behavior of Agent − the action that the agent performs after
any given sequence of percepts.
3. Percept − the agent's perceptual input at a given instant.
4. Percept Sequence − the history of all that the agent has
perceived to date.
5. Agent Function − a map from the percept sequence to an action.
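
To make the agent function concrete, here is a minimal Python sketch
of a table-driven agent: a literal lookup from the percept sequence
seen so far to an action. The vacuum-world percepts and table entries
are illustrative only.

percepts = []  # the percept sequence: history of everything perceived

# The agent function written out as an explicit table mapping each
# percept sequence to an action (illustrative entries only).
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "MoveRight",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

def table_driven_agent(percept):
    """Record the new percept, then look up the action for the whole
    percept sequence observed so far."""
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")

print(table_driven_agent(("A", "Clean")))  # MoveRight
print(table_driven_agent(("B", "Dirty")))  # Suck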

Good Behavior: The Concept of Rationality

Rationality is the state of being reasonable, sensible, and having
good judgment. Rationality is concerned with the expected actions
and results, given what the agent has perceived. Performing actions
with the aim of obtaining useful information is an important part of
rationality.

What is an Ideal Rational Agent?

An ideal rational agent is one that is capable of performing the
expected actions to maximize its performance measure, on the basis
of:

 Its percept sequence

 Its built-in knowledge base
The rationality of an agent depends on the following:
a. The performance measures, which determine the degree of success.
b. The agent's percept sequence so far.
c. The agent's prior knowledge about the environment.
d. The actions that the agent can carry out.
A rational agent always performs the right action, where the right
action is the one that makes the agent most successful given the
percept sequence. The problem the agent solves is characterized by
its Performance Measure, Environment, Actuators, and Sensors (PEAS).

The Nature of Environments

Some programs operate in an entirely artificial environment
confined to keyboard input, a database, computer file systems, and
character output on a screen.

In contrast, some software agents (software robots, or softbots)
exist in rich, unlimited domains. Such an agent must choose from a
long array of actions in real time. A softbot designed to scan a
customer's online preferences and show the customer interesting
items works in an environment that is both real and artificial.

The most famous artificial environment is the Turing Test
environment, in which a real agent and an artificial agent are
tested on equal ground. This is a very challenging environment, as
it is highly difficult for a software agent to perform as well as a
human.

Properties of Environment

1. Fully Observable / Partially Observable − If it is possible to
determine the complete state of the environment at each point in
time from the percepts, the environment is fully observable;
otherwise it is only partially observable.
2. Single agent / Multiple agents − The environment may
contain other agents which may be of the same or different kind as
that of the agent.
3. Deterministic / Non-deterministic − If the next state of the
environment is completely determined by the current state and the
actions of the agent, then the environment is deterministic;
otherwise it is non-deterministic.
4. Episodic / sequential − In an episodic environment, each
episode consists of the agent perceiving and then acting. The
quality of its action depends just on the episode itself. Subsequent
episodes do not depend on the actions in the previous episodes.
Episodic environments are much simpler because the agent does
not need to think ahead.
5. Static / Dynamic − If the environment does not change while an
agent is acting, then it is static; otherwise it is dynamic.
6. Discrete / Continuous − If there are a limited number of
distinct, clearly defined, states of the environment, the environment
is discrete (For example, chess); otherwise it is continuous (For
example, driving).
7. Known / Unknown − If the outcomes (or outcome probabilities) of
all actions are known to the agent, the environment is known;
otherwise the agent will have to learn how the environment works in
order to make good decisions. Note that known/unknown is a different
distinction from observable/partially observable.
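
As a quick illustration of these properties, the sketch below tags
two familiar task environments; the classifications follow the
standard textbook analysis, and the attribute names are only
illustrative.

# Classifying two familiar task environments along the properties
# above (standard textbook classifications; names are illustrative).

chess = {
    "observable": "fully",
    "agents": "multi",       # the opponent is another agent
    "deterministic": True,
    "episodic": False,       # sequential: moves affect later play
    "static": True,          # semi-dynamic if played with a clock
    "discrete": True,
}

taxi_driving = {
    "observable": "partially",
    "agents": "multi",       # other drivers, pedestrians
    "deterministic": False,  # non-deterministic (stochastic)
    "episodic": False,       # sequential
    "static": False,         # dynamic: the world changes while deciding
    "discrete": False,       # continuous states and actions
}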

The PEAS system is used to categorize similar agents together.
The PEAS description specifies the performance measure together
with the environment, actuators, and sensors of the respective
agent. Most of the highest-performing agents are rational agents.
PEAS stands for Performance, Environment, Actuators, and Sensors.
An example PEAS description of an automated taxi driver is given
below.
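Following the standard textbook example, the PEAS description of an
automated taxi driver is:

Performance Measure: safe, fast, legal, comfortable trip; maximize profits
Environment: roads, other traffic, pedestrians, customers
Actuators: steering, accelerator, brake, signal, horn, display
Sensors: cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard
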
Structure of Agents
An agent's structure can be viewed as the combination of the
architecture that the agent runs on and the agent program it
executes.

Agent = Architecture + Agent Program

The architecture is the machinery that the agent executes on, and
the agent program is an implementation of the agent function.
Simple Reflex Agents
 They choose actions based only on the current percept.
 They are rational only if a correct decision can be made on the
basis of the current percept alone.
 Their environment must be completely observable.

Condition-Action Rule − a rule that maps a state (condition) to an
action.
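
A minimal sketch of a simple reflex agent, using the familiar
two-square vacuum world; the rules below are illustrative
condition-action pairs.

def simple_reflex_vacuum_agent(percept):
    """Choose an action from the current percept alone; no history
    is kept. The percept is a (location, status) pair."""
    location, status = percept
    if status == "Dirty":    # condition: current square is dirty
        return "Suck"        # action
    elif location == "A":
        return "MoveRight"
    else:                    # location == "B"
        return "MoveLeft"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum_agent(("A", "Clean")))  # MoveRight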
Model-Based Reflex Agents

They use a model of the world to choose their actions, and they
maintain an internal state.

Model − knowledge about "how things happen in the world".

Internal State − a representation of the unobserved aspects of the
current state, based on the percept history.

Updating the state requires information about:

 How the world evolves.

 How the agent's actions affect the world.
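
A sketch of a model-based reflex agent along these lines;
update_state and rules are placeholders standing in for a real world
model and rule set.

class ModelBasedReflexAgent:
    """Keeps an internal state, updates it with a model of the world,
    then applies condition-action rules to the updated state."""

    def __init__(self, update_state, rules):
        self.state = {}                   # best guess at the world state
        self.last_action = None
        self.update_state = update_state  # model: how the world evolves and
                                          # how the agent's actions affect it
        self.rules = rules                # list of (condition, action) pairs

    def __call__(self, percept):
        # Revise the internal state from the model, the last action,
        # and the new percept.
        self.state = self.update_state(self.state, self.last_action, percept)
        # Fire the first rule whose condition matches the state.
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
        self.last_action = "NoOp"
        return "NoOp"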
Goal-Based Agents

They choose their actions in order to achieve goals. The goal-based
approach is more flexible than the reflex approach, since the
knowledge supporting a decision is explicitly modeled and can
therefore be modified.

Goal − a description of desirable situations.
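
The core of a goal-based agent can be sketched as a search for an
action sequence that reaches a goal state; the breadth-first search
below is one simple way to do that, with successors and is_goal as
illustrative placeholders.

from collections import deque

def plan_to_goal(start, is_goal, successors):
    """Breadth-first search for a list of actions from start to a
    goal state; returns None if no plan exists. successors(state)
    yields (action, next_state) pairs."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if is_goal(state):
            return plan
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None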


Utility-Based Agents

They choose actions based on a preference (utility) for each state.

Goals alone are inadequate when there are conflicting goals, of
which only a few can be achieved. Goals also carry some uncertainty
of being achieved, so the likelihood of success must be weighed
against the importance of each goal.
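
Utility-based action selection can be sketched as maximizing
expected utility, weighing each outcome's likelihood against its
desirability; the outcomes model and utility function below are
illustrative placeholders.

def expected_utility(state, action, outcomes, utility):
    """Sum of probability-weighted utilities over possible outcomes.
    outcomes(state, action) yields (probability, next_state) pairs;
    utility(state) scores how desirable a state is."""
    return sum(p * utility(s) for p, s in outcomes(state, action))

def choose_action(state, actions, outcomes, utility):
    """Pick the action with the highest expected utility."""
    return max(actions,
               key=lambda a: expected_utility(state, a, outcomes, utility))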

Learning Agents
A learning agent can be divided into four conceptual components.
The most important distinction is between the learning element,
which is responsible for making improvements, and the performance
element, which is responsible for selecting external actions. The
performance element is what we have previously considered to be the
entire agent: it takes in percepts and decides on actions. The
learning element uses feedback from the critic on how the agent is
doing and determines how the performance element should be modified
to do better in the future. The fourth component, the problem
generator, is responsible for suggesting actions that will lead to
new and informative experiences.
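
The four components can be sketched as the skeleton below; every
component is a placeholder callable, since the actual learning
machinery depends on the task.

class LearningAgent:
    """Skeleton of a learning agent's four conceptual components."""

    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance_element = performance_element  # selects external actions
        self.learning_element = learning_element        # makes improvements
        self.critic = critic                            # judges how the agent is doing
        self.problem_generator = problem_generator      # suggests exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)  # performance feedback
        # The learning element adjusts the performance element using
        # the critic's feedback.
        self.learning_element(self.performance_element, feedback)
        # Occasionally try something new and informative; otherwise
        # act on the performance element's choice.
        exploratory = self.problem_generator()
        return exploratory or self.performance_element(percept)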
How the Components of Agent Programs Work
Agent programs consist of various components that represent the
environment the agent inhabits. These components build various
representations of that environment. In general, we can place the
representations along an axis of increasing complexity and
expressive power: atomic, factored, and structured.

In an atomic representation, each state of the world is indivisible:
it has no internal structure. Each state is a single atom of
knowledge, a "black box" whose only discernible property is whether
it is identical to or different from another black box. The
algorithms underlying search and game playing, hidden Markov models,
and Markov decision processes all work with atomic representations,
or at least they treat representations as if they were atomic.
A factored representation splits up each state into a fixed set of
variables or attributes, each of which can have a value. Two
different factored states can share some attributes (such as being
at a particular GPS location) and not others (such as having lots of
gas or having no gas); this makes it much easier to work out how to
turn one state into another. With factored representations, we can
also represent uncertainty; for example, ignorance about the amount
of gas in the tank can be represented by leaving that attribute
blank. Many important areas of AI are based on factored
representations, including constraint satisfaction algorithms,
propositional logic, planning, Bayesian networks, and various
machine learning algorithms.
In a structured representation, objects and their various and
varying relationships can be described explicitly. Structured
representations underlie relational databases, first-order logic,
first-order probability models, knowledge-based learning, and much
of natural language understanding. In fact, almost everything that
humans express in natural language concerns objects and their
relationships.
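
The three points on the axis can be illustrated by writing down the
same driving situation three ways; every attribute name and value
below is hypothetical.

# Atomic: the state is an indivisible label; two states can only be
# compared for identity (the "black box" view).
atomic_state = "state_42"

# Factored: a fixed set of attribute-value pairs; an unknown value
# (here, fuel) is simply left blank.
factored_state = {"gps": (16.54, 81.52), "fuel": None, "signal": "green"}

# Structured: objects and the relations between them are explicit,
# here as (relation, subject, object) triples.
structured_state = [
    ("in_front_of", "truck", "taxi"),
    ("backing_into", "truck", "driveway"),
]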
