
Agents and Environments

UNIT 2 | Yuba Raj Devkota
7 Hrs. | NCCS, Paknajol
Agent, Rational Agent, Intelligent Agent
Agent
• An agent is just something that acts.
Rational Agent
• A Rational Agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best
expected outcome, i.e., one that behaves as well as possible.
• How well an agent can behave depends on the nature of the environment; some environments are more
difficult than others.
Intelligent Agent
• A successful system can be called an intelligent agent.
• The fundamental faculties of intelligence are: acting, sensing, understanding, reasoning, and learning.
• In order to act, an intelligent agent must sense; blind action is not a characterization of intelligence. Understanding is
essential to interpret the sensory percepts and decide on an action.
• Therefore, an intelligent agent must act, must sense, must be autonomous, and must be rational.
• Note: an intelligent agent does things based on reasoning, while a rational agent does the best action
(or reaction) for a given situation.
• However, throughout this course we will use the terms agent, rational agent, and intelligent agent synonymously.
What are Agent and Environment?

• An AI system is composed of an agent and its environment. The agents act in their
environment. The environment may contain other agents.
• An agent is anything that can perceive its environment through sensors and acts upon
that environment through effectors.
• A human agent has sensory organs such as eyes, ears, nose, tongue and skin parallel to the sensors,
and other organs such as hands, legs, mouth, for effectors.
• A robotic agent has cameras and infrared range finders for sensors, and various motors and
actuators for effectors.
• A software agent has encoded bit strings as its programs and actions.
What is ideal Rational Agent?
• An ideal rational agent is the one, which is capable of doing expected actions to maximize its
performance measure, on the basis of −
• Its percept sequence
• Its built-in knowledge base
• Rationality of an agent depends on the following −
• The performance measures, which determine the degree of success.
• Agent’s Percept Sequence till now.
• The agent’s prior knowledge about the environment.
• The actions that the agent can carry out.
• A rational agent always performs the right action, where the right action is the one that causes the
agent to be most successful given the percept sequence. The problem the agent solves is characterized
by its Performance measure, Environment, Actuators, and Sensors (PEAS).
• Agent’s structure can be viewed as −
• Agent = Architecture + Agent Program
• Architecture = the machinery that an agent executes on.
• Agent Program = an implementation of an agent function.
Vacuum-Cleaner World of Agent
• To illustrate the intelligent agent, a very simple example, the vacuum-cleaner world, is used.
[Figure: the two-square vacuum-cleaner world, locations A and B]
• This world is so simple that we can describe everything that happens; it's also a made-up
world, so we can invent many variations.
• This particular world has just two locations: squares A and B. – I.e. Environment: square
A and B
• The vacuum agent perceives which square it is in and whether there is dirt in the square.
– i.e., Percepts: [location and content] E.g. [A, Dirty]
• It can choose to move left, move right, suck up the dirt, or do nothing. – i.e., Actions: left,
right, suck, and no-op
• One very simple agent function is the following:
• if the current square is dirty, then suck, otherwise move to the other square.
• A partial tabulation of the agent function is shown in the table below:

Percept sequence    Action
[A, Clean]          Right
[A, Dirty]          Suck
[B, Clean]          Left
[B, Dirty]          Suck

Function VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
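To make the agent program concrete, here is a minimal Python sketch of this agent function. The names (vacuum_agent, the "Suck"/"Right"/"Left" strings) are invented for illustration, not from any particular library:

```python
# A minimal sketch of the vacuum-cleaner agent function described above.
# The percept is a (location, status) pair, as in the tabulation.

def vacuum_agent(percept):
    """Return an action for the two-square vacuum world."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

# Reproduce the partial tabulation of the agent function:
for percept in [("A", "Clean"), ("A", "Dirty"), ("B", "Clean"), ("B", "Dirty")]:
    print(percept, "->", vacuum_agent(percept))
```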
Properties of Environments (Types)

• An environment in artificial intelligence is the surrounding of the agent. The agent takes input from
the environment through sensors and delivers output to the environment through actuators. There are
several types of environments:
• Fully Observable vs Partially Observable
• Deterministic vs Stochastic
• Competitive vs Collaborative
• Single-agent vs Multi-agent
• Static vs Dynamic
• Discrete vs Continuous
• Episodic vs Sequential
• Known vs Unknown
1. Fully Observable vs Partially Observable
• When an agent's sensors can access the complete state of the environment at each point in time, the
environment is said to be fully observable; otherwise it is partially observable.
• A fully observable environment is easy to deal with, as there is no need to keep track of the history of
the surroundings.
• An environment is called unobservable when the agent has no sensors at all.
• Examples:
• Chess – the board is fully observable, and so are the opponent’s moves.
• Driving – the environment is partially observable because what’s around the corner is not known.

2. Deterministic vs Stochastic
• When the agent's current state and chosen action completely determine the next state of the
environment, the environment is said to be deterministic.
• A stochastic environment is random in nature: the next state is not unique and cannot be completely
determined by the agent.
• Examples:
• Chess – there are only a finite number of possible moves for a piece in the current state, and these moves
can be determined.
• Self-driving cars – the outcomes of a self-driving car's actions are not unique; they vary from time to time.
3. Competitive vs Collaborative
• An agent is said to be in a competitive environment when it competes against another agent to
optimize the output.
• The game of chess is competitive as the agents compete with each other to win the game which
is the output.
• An agent is said to be in a collaborative environment when multiple agents cooperate to produce
the desired output.
• When multiple self-driving cars are found on the roads, they cooperate with each other to avoid
collisions and reach their destination which is the output desired.

4. Single-agent vs Multi-agent
• An environment consisting of only one agent is said to be a single-agent environment.
• A person left alone in a maze is an example of the single-agent system.
• An environment involving more than one agent is a multi-agent environment.
• The game of football is multi-agent as it involves 11 players in each team.
5. Dynamic vs Static
• An environment that keeps changing while the agent is acting is said to be dynamic.
• A roller coaster ride is dynamic, as it is set in motion and the environment keeps changing every
instant.
• An idle environment with no change in its state is called a static environment.
• An empty house is static as there’s no change in the surroundings when an agent enters.

6. Discrete vs Continuous
• If an environment consists of a finite number of actions that can be performed in it to obtain
the output, it is said to be a discrete environment.
• The game of chess is discrete as it has only a finite number of moves. The number of moves
might vary with every game, but it is still finite.
• An environment in which the possible actions cannot be enumerated, i.e., is not discrete, is
said to be continuous.
• Self-driving cars are an example of continuous environments, as actions such as steering and
speed control range over continuous values and cannot be enumerated.
7. Episodic vs Sequential
• In an episodic task environment, the agent's experience is divided into atomic incidents
or episodes, and there is no dependency between the current and previous incidents. In each episode,
the agent receives input from the environment and then performs the corresponding action.
• Example: consider a pick-and-place robot used to detect defective parts on a conveyor belt.
Each time, the robot (agent) makes a decision about the current part only, i.e., there is no
dependency between current and previous decisions.
• In a sequential environment, previous decisions can affect all future decisions. The next
action of the agent depends on what action it has taken previously and what action it is
supposed to take in the future.
• Example:
• Checkers- Where the previous move can affect all the following moves.

8. Known vs Unknown
• In a known environment, the outcomes of all probable actions are given. In an unknown
environment, by contrast, the agent has to gain knowledge about how the environment works
before it can make a decision.
The Structure of Agents

• Agent's structure can be viewed as:
• Agent = Architecture + Agent Program
• Architecture = the machinery that an agent executes on.
• Agent Program = an implementation of an agent function.

• Agents are grouped into five classes based on their degree of perceived intelligence and
capability:
• simple reflex agents
• model-based reflex agents
• goal-based agents
• utility-based agents
• learning agents
Simple Reflex Agents
• They choose actions only based on the current percept.
• They are rational only if a correct decision can be made on the basis of the current percept alone.
• Their environment is completely observable.

Condition-Action Rule − It is a rule that maps a state (condition) to an action,
based on an IF-THEN rule, e.g., if temp > 40, then switch on the AC.
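The condition-action idea can be sketched as a list of IF-THEN rules scanned in order. A minimal Python illustration (the rule set, including the extra heater rule, is invented for this example):

```python
# Simple reflex agent: the action depends only on the current percept,
# matched against condition-action (IF-THEN) rules. First match fires.
RULES = [
    (lambda p: p["temp"] > 40, "Switch on AC"),
    (lambda p: p["temp"] < 15, "Switch on heater"),  # invented extra rule
    (lambda p: True,           "Do nothing"),        # default rule
]

def simple_reflex_agent(percept):
    for condition, action in RULES:
        if condition(percept):
            return action

print(simple_reflex_agent({"temp": 42}))  # -> Switch on AC
```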
Model Based Reflex Agents

• They use a model of the world to choose their actions. They maintain an internal state.
• Model − knowledge about “how the things happen in the world”.
• Internal State − It is a representation of unobserved aspects of current state depending on
percept history.
• Model means knowledge from the past, so the agent uses its previous knowledge to make decisions.
• The environment is partially observable.
• It first checks its history (internal state) and only then performs an action, unlike the immediate
response of a simple reflex agent.
• A self-driving car, for example, may decide whether to stop or to go left or right when it sees an
obstacle.
• Updating the state requires the information about −
• How the world evolves.
• How the agent’s actions affect the world.
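A minimal sketch of this idea in Python, assuming a toy world where the only unobserved aspect worth remembering is whether an obstacle was recently seen (all names and the state-update rule here are illustrative):

```python
# Model-based reflex agent: keeps an internal state updated from the
# percept history, then decides using that state, not just the percept.
class ModelBasedAgent:
    def __init__(self):
        self.state = {"obstacle_ahead": False}  # internal model of the world

    def update_state(self, percept):
        # "How the world evolves" is folded into this simple update.
        if percept == "obstacle":
            self.state["obstacle_ahead"] = True
        elif percept == "clear":
            self.state["obstacle_ahead"] = False

    def act(self, percept):
        self.update_state(percept)
        # The decision uses the remembered state.
        return "stop" if self.state["obstacle_ahead"] else "go"

agent = ModelBasedAgent()
print(agent.act("obstacle"))  # -> stop
print(agent.act("clear"))     # -> go
```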
Goal Based Agents

• They choose their actions in order to achieve goals. Goal-based approach is more
flexible than reflex agent since the knowledge supporting a decision is explicitly
modeled, thereby allowing for modifications.
• Goal − It is the description of desirable situations.
• It is an extension of the model-based agent; the goal is already defined.
• The goal can be achieved by searching and planning.
• For example, if you're going trekking, first you need to search for the best or
shortest path and then plan accordingly (a minimal search sketch follows below).
• It looks at past knowledge, then sets goals.
• G-Plus is an example of goal-based agents: robots that deliver products to
customers (Alibaba).
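Here is a minimal sketch of the "goal achieved by searching" idea, using breadth-first search over an invented trekking map (the map and all names are illustrative only):

```python
# Goal-based agent: given a goal, search for a plan, then follow it.
from collections import deque

TRAILS = {  # illustrative map: node -> reachable neighbours
    "camp": ["ridge", "river"],
    "ridge": ["summit"],
    "river": ["ridge"],
    "summit": [],
}

def plan(start, goal):
    """Breadth-first search: returns a shortest path from start to goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in TRAILS[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

print(plan("camp", "summit"))  # -> ['camp', 'ridge', 'summit']
```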
Utility-Based Agents
(Deals with the happy and unhappy states of users)

• They choose actions based on a preference (utility) for each state, not just on the goal.
• Example: a GPS shows the shortest path; if the user follows it and an accident occurs on that path, the
agent finds itself in an unhappy state, so it switches to the next shortest path to keep the user in a
happy state (see the sketch below).
• Goals are inadequate when −
• There are conflicting goals, of which only a few can be achieved.
• Goals have some uncertainty of being achieved, and you need to weigh the likelihood of success against
the importance of a goal.
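A minimal sketch of utility-based action selection, using the GPS rerouting example above; the probabilities, travel times, and the utility formula are all invented for illustration:

```python
# Utility-based agent: pick the route with the highest expected utility,
# not merely one that reaches the goal.
routes = {
    # route: (probability of arriving on time, travel time in minutes)
    "shortest":  (0.60, 20),  # fast, but an accident may block it
    "alternate": (0.95, 25),  # slightly longer, but reliable
}

def utility(p_on_time, travel_time):
    # Illustrative trade-off: reward reliability, penalize travel time.
    return 100 * p_on_time - travel_time

best = max(routes, key=lambda r: utility(*routes[r]))
print(best)  # -> alternate (higher expected utility despite longer time)
```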
Learning Agents
A learning agent can be divided into four conceptual components:

• "learning element", which is responsible for making improvements


• "performance element“ (entire agent), which is responsible for selecting external actions. i.e., it takes
in percepts and decides on actions.
• The learning element uses feedback from the "critic" on how the agent is doing and determines how
the performance element should be modified to do better in the future.
• The last component of the learning agent is the "problem generator". It is responsible for suggesting
actions that will lead to new and informative experiences.
• It learns from the environment as well, hence the name learning agent; it keeps improving the system
by learning from the environment.
• Critic => checks how the agent is working and gives feedback.
• Problem Generator => suggests actions to take in order to gain new information.
• Performance Element => selects the action to perform.
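A minimal sketch of the four components wired together in Python. Everything here (the one-number "skill", the feedback values, the action names) is invented to show the control flow, not a real learning algorithm:

```python
# Learning agent: the performance element acts, the critic gives feedback,
# the learning element improves the knowledge the performance element uses,
# and the problem generator suggests exploratory actions.
class LearningAgent:
    def __init__(self):
        self.skill = 0.0  # knowledge the learning element improves

    def performance_element(self, percept):
        # Selects an external action using current knowledge.
        return "good_action" if self.skill > 0.5 else "poor_action"

    def critic(self, action):
        # Feedback on how well the agent is doing.
        return 1.0 if action == "good_action" else -1.0

    def learning_element(self, feedback):
        # Uses the critic's feedback to improve future behavior.
        if feedback < 0:
            self.skill += 0.2

    def problem_generator(self):
        # Suggests an action that yields a new, informative experience.
        return "try_something_new"

agent = LearningAgent()
for step in range(5):
    # Explore on the first step, per the problem generator's suggestion.
    action = agent.problem_generator() if step == 0 else agent.performance_element(None)
    agent.learning_element(agent.critic(action))
print(agent.performance_element(None))  # -> good_action
```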
Learning agents in AI refer to systems or programs that are capable of learning from their environment and
improving their performance over time. They often utilize machine learning techniques to achieve this. Here
are some real-life examples of learning agents in AI:

1. Self-driving Cars: Self-driving cars use a variety of sensors, such as cameras, LiDAR, and radar, to
perceive their surroundings. Learning agents within these cars continuously analyze this data to learn how
to navigate roads, detect obstacles, and make driving decisions. They can improve their driving behavior
over time through reinforcement learning and neural networks.
2. Recommendation Systems: Online platforms like Netflix, Amazon, and YouTube use learning agents to
recommend content to users. These agents learn from users' past behavior, such as the movies they watch
or products they purchase, to make personalized recommendations that improve with more interactions.
3. Game-Playing AI: AI agents that can play games like chess, Go, or video games fall under this category.
For instance, AlphaGo, developed by DeepMind, used deep reinforcement learning to master the game of
Go and defeated world champions. DeepMind's DQN and AlphaStar agents have also demonstrated high-
level gameplay in various games.
4. Chatbots and Virtual Assistants: Chatbots like Google's Duplex or OpenAI's GPT models are learning
agents that can hold natural language conversations with users. They learn from large datasets of text and
conversations to generate contextually relevant responses.
5. Industrial Automation: In manufacturing and industry, learning agents can optimize processes. For
example, robotic arms can learn how to perform complex tasks by observing human demonstrations or
through reinforcement learning, allowing them to adapt to changing conditions.
6. Healthcare Diagnostics: AI systems in healthcare can act as learning agents to aid in medical diagnosis.
They learn from large datasets of medical images and patient data to improve their accuracy in identifying
diseases, such as detecting tumors in medical images.
PEAS Descriptors of Task Environment
• PEAS is a representation system for AI agents that specifies the task environment in terms of its
Performance measure, Environment, Actuators, and Sensors. We need to understand the task
environment to create an agent, and the PEAS system aids in defining it. AI agent programs can be
written more effectively by identifying the PEAS description first.
• Agents are devices that work in the environment to accomplish specific predetermined tasks.
They can be hardware, software, or a mix of the two. An intelligent agent acts independently
and persists over a longer time. To achieve a certain aim, it also needs to be flexible. Agents
act on their surroundings through actuators and experience them through sensors. One agent
or several agents can be present in an environment.
• PEAS components:
• Performance
• Environment
• Actuators
• Sensors
• Performance − the objective function by which the agent's performance is evaluated; the
things we can use to measure how well the agent is doing.

• Environment − The environment refers to the agent's immediate surroundings while the
agent is working in that environment. Depending on the mobility of the agent, it might be
static or dynamic. The agent's required sensors and behaviors will also change in response
to even a slight change in the surroundings.

• Actuators − Agents rely on actuators to act in their surroundings. Display boards,
object-picking arms, track-changing devices, etc. are examples of actuators. The
environment can change as a result of actions taken by agents.

• Sensors − By providing agents with a comprehensive collection of inputs, sensors enable
them to comprehend their surroundings. Agent behavior is influenced by its recent history
and its present input set. Various sensing devices, such as cameras, GPS, and odometers,
are examples of sensors.
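Since a PEAS descriptor is just structured data, it can be written down directly. A minimal sketch as a Python dataclass, filled in with values for the vacuum-cleaner world from earlier (the class and field names are invented for illustration, not a standard API):

```python
# A PEAS descriptor represented as plain structured data.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

vacuum_world = PEAS(
    performance=["cleanliness", "number of moves"],
    environment=["square A", "square B", "dirt"],
    actuators=["wheels (Left/Right)", "suction (Suck)"],
    sensors=["location sensor", "dirt sensor"],
)
print(vacuum_world.actuators)
```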
Examples of PEAS Descriptors
Example1. PEAS Descriptor of Automated Car Driver
• Performance
• Safety − The automated system needs to be able to operate the vehicle securely without rushing.
• Optimized Speed − Depending on the environment, automated systems should be able to maintain the ideal speed.
• Journey − The end-user should have a comfortable journey thanks to automated systems.
• Environment
• Roads − Automated automobile drivers ought to be able to go on any type of route, from local streets to interstates.
• Traffic Conditions − For various types of roadways, there are various traffic conditions to be found.
• Actuators
• Steering wheel − to point an automobile in the appropriate direction.
• Gears and accelerators − adjusting the car's speed up or down.
• Sensors
• In-car driving tools like cameras, sonar systems, etc. are used to collect environmental data.
Example2: PEAS Descriptor of Soccer

• Performance −
• scoring goals, defending, speed
• Environment −
• playground, teammates, opponents, ball
• Actuators −
• legs and body (for dribbling, tackling, passing the ball, shooting)
• Sensors −
• camera, ball sensor, location sensor, other players locator
