Intelligent Agent
An intelligent agent may learn from its environment to achieve its goals. A
thermostat is a simple example of an intelligent agent.
Rational Agent:
Has clear preferences and models uncertainty.
Acts so as to maximize its performance measure, considering all possible actions.
Does the right thing.
Rational agents are used in game theory and decision theory for various real-world scenarios.
In AI reinforcement learning algorithms, the agent gets a positive reward for each
best possible action and a negative reward for each wrong action.
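A minimal, hypothetical Python sketch of this reward feedback is shown below; the action set, goal, and reward values are assumptions made only for illustration.

    import random

    # Illustrative reward feedback: +1 for the right action, -1 for a wrong one.
    # The action set, goal, and learning rate are assumed for this toy example.
    ACTIONS = ["left", "right"]
    GOAL = "right"   # assume "right" is the correct action in this toy task
    alpha = 0.1      # learning rate

    def reward(action):
        return +1 if action == GOAL else -1

    values = {a: 0.0 for a in ACTIONS}   # running value estimate per action

    for step in range(200):
        action = random.choice(ACTIONS)  # explore randomly
        values[action] += alpha * (reward(action) - values[action])

    print(values)  # the rewarded action's estimate approaches +1, the other -1

Over repeated trials, the positive and negative rewards separate the two estimates, which is the sense in which the agent learns the best action.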
Rationality:
The rationality of an agent is measured by its performance measure. Rationality depends on the following:
The performance measure, which defines the success criterion.
The agent's prior knowledge of its environment.
The best possible actions that the agent can perform.
The sequence of percepts observed so far.
Structure of an AI Agent:
To design an agent, we implement the agent function. The structure of an
intelligent agent is a combination of architecture and agent program:
Agent = Architecture + Agent program
Following are the main three terms involved in the structure of an AI agent:
Architecture: the machinery (hardware with sensors and actuators) that the agent program runs on.
Agent function: a map from the percept sequence to an action, f: P* → A.
Agent program: an implementation of the agent function that executes on the architecture.
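As a hedged illustration of the agent function f: P* → A, the sketch below keeps the percept history and maps it to an action for a simple thermostat agent; the temperature threshold, window size, and action names are assumptions, not from the source.

    # Illustrative agent program realizing f: P* -> A for a thermostat agent.
    # The threshold, averaging window, and action names are assumed for this sketch.
    percept_history = []   # P*: the sequence of temperature readings so far

    def thermostat_agent(temperature):
        """Map the percept sequence observed so far to an action."""
        percept_history.append(temperature)
        recent = percept_history[-3:]        # smooth over the last few readings
        avg = sum(recent) / len(recent)
        return "heat_on" if avg < 20.0 else "heat_off"

    print(thermostat_agent(18.0))  # avg 18.0 -> heat_on
    print(thermostat_agent(22.0))  # avg 20.0 -> heat_off
    print(thermostat_agent(23.0))  # avg 21.0 -> heat_off

Here the architecture would be the thermostat hardware (temperature sensor plus heater switch), and the function above plays the role of the agent program running on it.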
PEAS Representation:
PEAS (Performance measure, Environment, Actuators, Sensors) is a model used to describe the task environment in which an AI agent works.
The right action is the one that will cause the agent to be most successful.
Performance measure: an objective criterion for the success of an agent's behavior.
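As an illustration (not given in the source), a PEAS description for a hypothetical vacuum-cleaner agent can be written down as a simple record:

    # Hypothetical PEAS description of a vacuum-cleaner agent (illustration only).
    peas_vacuum_agent = {
        "Performance measure": ["amount of dirt cleaned", "time taken", "power used"],
        "Environment": ["rooms/squares", "dirt", "obstacles"],
        "Actuators": ["wheels", "brushes", "vacuum suction"],
        "Sensors": ["dirt sensor", "bump sensor", "camera"],
    }

    for component, examples in peas_vacuum_agent.items():
        print(f"{component}: {', '.join(examples)}")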
Deterministic vs Stochastic:
An environment is deterministic if the next state of the environment is
completely determined by the current state of the environment and the action
of the agent;
In a stochastic environment, there are multiple, unpredictable outcomes.
(If the environment is deterministic except for the actions of other agents, then
the environment is strategic).
In a fully observable, deterministic environment, the agent need not deal with
uncertainty.
Note: Uncertainty can also arise because of computational limitations.
E.g., we may be playing against an omniscient ("all-knowing") opponent, but we may
not be able to compute their moves.
Episodic vs Sequential:
In an episodic environment, the agent's experience is divided into independent
episodes, and the choice of action in each episode depends only on that episode itself.
In a sequential environment, the current action can affect all future decisions;
chess and taxi driving are sequential.
Single-agent vs Multi-agent:
If only one agent is involved in an environment and operates by itself, then
such an environment is called a single-agent environment.
However, if multiple agents are operating in an environment, then such an
environment is called a multi-agent environment.
The agent design problems in a multi-agent environment are different from those
in a single-agent environment.
Static vs Dynamic:
If the environment can change while an agent is deliberating, then such an
environment is called a dynamic environment; otherwise it is called a static
environment.
Static environments are easy to deal with because an agent does not need to
keep looking at the world while deciding on an action.
In a dynamic environment, however, an agent needs to keep looking at the world
before each action.
Taxi driving is an example of a dynamic environment, whereas crossword
puzzles are an example of a static environment.
Discrete vs Continuous:
If there is a finite number of percepts and actions that can be performed in an
environment, then such an environment is called a discrete environment;
otherwise it is called a continuous environment.
A chess game is a discrete environment, as there is a finite number of
moves that can be performed.
A self-driving car is an example of a continuous environment.
Known vs Unknown:
Known and unknown are not actually features of an environment; they describe
the agent's state of knowledge about how the environment works.
In a known environment, the results for all actions are known to the agent.
In an unknown environment, the agent needs to learn how the environment works
in order to perform an action.
It is quite possible for a known environment to be partially observable and for
an unknown environment to be fully observable.
Accessible vs Inaccessible:
If an agent can obtain complete and accurate information about the environment's
state, then the environment is called accessible; otherwise it is called
inaccessible.
An empty room whose state can be defined by its temperature is an example of
an accessible environment.
Obtaining complete information about an event happening somewhere on Earth is an
example of an inaccessible environment.
Table-lookup driven agents
Use a percept-sequence/action table in memory to find the next action,
implemented as a (large) lookup table.
Drawbacks:
– Huge table (often simply too large).
– Takes a long time to build/learn the table.
Toy example: Vacuum world.
Percepts: the robot senses its location and "cleanliness," i.e., location and
contents, e.g., [A, Dirty], [B, Clean]. With 2 locations and 2 contents values,
we get 4 different possible sensor inputs.
Actions: Left, Right, Suck, NoOp.
Table lookup: a percept sequence of length K gives 4^K different possible sequences,
and at least that many entries are needed in the table. So, even in this very toy
world, with K = 20 you need a table with over 4^20 > 10^12 entries.
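To make the table concrete, here is a minimal, hypothetical sketch of a table-driven agent for this vacuum world; only a handful of entries are filled in, which already hints at why the full table explodes with sequence length.

    # Sketch of a table-driven agent for the two-square vacuum world.
    # Only a few percept-sequence -> action entries are filled in; a complete
    # table would need an entry for every possible sequence (4^K of length K).
    percept_history = []

    TABLE = {
        (("A", "Dirty"),): "Suck",
        (("A", "Clean"),): "Right",
        (("B", "Dirty"),): "Suck",
        (("B", "Clean"),): "Left",
        (("A", "Dirty"), ("A", "Clean")): "Right",
        (("A", "Clean"), ("B", "Dirty")): "Suck",
        # ... and so on for every longer sequence
    }

    def table_driven_agent(percept):
        """Look up the action for the full percept sequence; NoOp if missing."""
        percept_history.append(percept)
        return TABLE.get(tuple(percept_history), "NoOp")

    print(table_driven_agent(("A", "Dirty")))   # -> Suck
    print(table_driven_agent(("A", "Clean")))   # -> Right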
In more real-world scenarios, one would have many more different percepts (e.g.,
many more locations), say >= 100. There would therefore be 100^K different possible
sequences of length K. For K = 20, this would require a table with 100^20 = 10^40
entries, which is infeasible to even store.
So, the table-lookup formulation is mainly of theoretical interest. For practical
agent systems, we need to find much more compact representations, for example
logic-based representations, Bayesian network representations, or neural-network-style
representations, or we can use a different agent architecture.
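As a hedged example of what a different agent architecture can buy, a simple reflex agent for the same vacuum world replaces the exponentially large table with a few condition-action rules; the rules below are an assumed sketch, not taken from the source.

    # Sketch of a simple reflex agent for the vacuum world: a handful of
    # condition-action rules stand in for the 4^K-entry lookup table.
    def reflex_vacuum_agent(percept):
        """Choose an action from the current percept only (no history kept)."""
        location, status = percept
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

    print(reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
    print(reflex_vacuum_agent(("B", "Clean")))   # -> Left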
Turing test in AI