AI - Lec 03

The document provides information about artificial intelligence including agents and environments, branches of AI, and ethical AI. It discusses key concepts such as intelligent agents, environments, rationality, and the structure of agents. Specifically, it defines agents and environments, describes different types of environments and agent terminology. It also explains rationality and the ideal rational agent. Finally, it outlines simple reflex agents and model-based reflex agents.

Uploaded by Manish Guleria

Artificial Intelligence

Lecture-03

Students will learn the Agents & Environments of AI.
Future of AI

Branches of AI

Ethical AI

• Trustworthy AI should comply with all applicable legislation and regulations, as well as with a set of requirements; specific assessment lists are intended to help verify that each of the main requirements is applied.
• Robustness and safety: Dependable AI requires safe, reliable and robust algorithms that address mistakes or inconsistencies throughout all life-cycle phases of AI systems.
• Privacy and data governance: Citizens should have full control over their own personal data, and their data should not be used to harm or discriminate against them.
Continued

• Transparency: Traceability should be guaranteed for AI systems.


• Diversity, non-discrimination and fairness: AI systems should consider and guarantee accessibility and the full range of human capabilities, skills and requirements.
• Societal and environmental well-being: AI systems should be used to promote positive social change and improve environmental sustainability.
• Accountability: Mechanisms should be put in place to ensure accountability and responsibility for AI systems and their outcomes.

Intelligent Agents:

• Agents and environments:

Agent:

• An Agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through actuators.
• An AI system is composed of an agent and its environment.
• The agents act in their environment. The environment may contain
other agents.
Types of Agents:
• Human Agents
• Robotic Agents
• Software Agents

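The perceive-act loop described above can be sketched in Python. This is a minimal illustration, not drawn from any library; the `Environment` and `ThermostatAgent` names and the thermostat scenario are assumptions chosen for the example:

```python
# Minimal sketch of the agent/environment loop (illustrative names only).

class Environment:
    """A trivial environment: a single temperature value."""
    def __init__(self, temperature):
        self.temperature = temperature

    def percept(self):          # what the agent's sensors read
        return self.temperature

    def apply(self, action):    # what the agent's actuators change
        if action == "cool":
            self.temperature -= 1
        elif action == "heat":
            self.temperature += 1

class ThermostatAgent:
    """Perceives the temperature and acts to keep it at a setpoint."""
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def act(self, percept):
        if percept > self.setpoint:
            return "cool"
        if percept < self.setpoint:
            return "heat"
        return "wait"

env = Environment(temperature=25)
agent = ThermostatAgent(setpoint=22)
for _ in range(5):                      # five perceive-act cycles
    env.apply(agent.act(env.percept()))
print(env.temperature)                  # settles at the setpoint: 22
```

The same loop shape reappears in every agent design that follows; only the decision logic inside `act` changes.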
Types of Agents

• A Human Agent has sensory organs such as eyes, ears, nose, tongue and skin that serve as sensors, and other organs such as hands, legs and mouth that serve as effectors.
• A Robotic Agent has cameras and infrared range finders for sensors, and various motors and actuators for effectors.
• A Software Agent has encoded bit strings as its programs and actions.

AI – Environments - Agents

AI Perception Action Cycle in Autonomous Cars

Environment
• An environment in artificial intelligence is the surroundings of the agent.
• The agent takes input from the environment through sensors and delivers output to the environment through actuators.
There are several types of environments:
• Fully Observable vs Partially Observable
• Deterministic vs Stochastic
• Competitive vs Collaborative
• Single-agent vs Multi-agent
• Static vs Dynamic
• Discrete vs Continuous
1. Fully Observable vs Partially
Observable
• When an agent's sensors can access the complete state of the environment at each point in time, the environment is said to be fully observable; otherwise it is partially observable.
• A fully observable environment is convenient, as there is no need to keep track of the history of the surroundings.
• An environment is called unobservable when the agent has no sensors at all.
• Example:
• Chess – the board is fully observable, and so are the opponent's moves.
• Driving – the environment is partially observable because what's around the corner is not known.

2. Deterministic vs Stochastic

• When the next state of the environment is completely determined by the current state and the agent's action, the environment is said to be deterministic.
• A stochastic environment is random in nature and cannot be completely determined by the agent.
• Example:
Chess – there are only a limited number of possible moves for a piece in the current state, and these moves can be determined.
Self-driving cars – the actions of a self-driving car are not unique; they vary from time to time.

3. Competitive vs Collaborative

• An agent is said to be in a competitive environment when it competes against another agent to optimize the output.
• The game of chess is competitive as the agents compete with each
other to win the game which is the output.
• An agent is said to be in a collaborative environment when multiple
agents cooperate to produce the desired output.
• When multiple self-driving cars are found on the roads, they
cooperate with each other to avoid collisions and reach their
destination which is the output desired.

4. Single-agent vs Multi-agent

• An environment consisting of only one agent is said to be a single-agent environment.
• A person left alone in a maze is an example of the single-agent
system.
• An environment involving more than one agent is a multi-agent
environment.
• The game of football is multi-agent as it involves 11 players in each
team.

5. Dynamic vs Static

• An environment that keeps changing while the agent is deciding on an action is said to be dynamic.
• A roller coaster ride is dynamic as it is set in motion and the
environment keeps changing every instant.
• An idle environment with no change in its state is called a static
environment.
• An empty house is static as there’s no change in the
surroundings when an agent enters.

6. Discrete vs Continuous
• If an environment consists of a finite number of actions that can be performed in it to obtain the output, it is said to be a discrete environment.
• The game of chess is discrete as it has only a finite number of moves. The number of moves might vary with every game, but it is still finite.
• An environment in which the possible actions cannot be enumerated, i.e. is not discrete, is said to be continuous.
• Self-driving cars are an example of continuous environments, as actions such as steering and accelerating take values that cannot be enumerated.
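As a summary, the six dichotomies above can be tabulated for the two running examples used in these slides. The entries restate the slides' own classifications; treating chess as static (as is conventional when no game clock is considered) is an added assumption:

```python
# The six environment dichotomies applied to the slides' two examples.

environments = {
    "chess": {
        "observable": "fully", "deterministic": True,
        "agents": "multi (competitive)", "static": True, "discrete": True,
    },
    "self-driving car": {
        "observable": "partially", "deterministic": False,
        "agents": "multi (collaborative)", "static": False, "discrete": False,
    },
}

for name, props in environments.items():
    print(name, "->", props)
```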
Agent Terminology
• Performance Measure of Agent: It is the criteria which determine how successful an agent is.
• Behavior of Agent: It is the action that the agent performs after any given sequence of percepts.
• Percept: It is the agent's perceptual inputs at a given instant.
Examples of percepts include inputs from touch sensors, cameras, infrared sensors, sonar, microphones, mice and keyboards.
A percept can also be a higher-level feature of the data, such as lines, depth, objects, faces or gestures.
• Percept Sequence: It is the history of everything the agent has perceived to date.
• Agent Function: It is a map from the percept sequence to an action.
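The agent function, a map from the percept sequence to an action, can be written quite literally as a lookup table. The vacuum-world percepts and the specific table entries below are illustrative assumptions, not part of the lecture:

```python
# Agent function as a literal table: percept sequence -> action.
# Percepts are (location, status) pairs from a toy vacuum world.

table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

def table_driven_agent(percepts, percept, table):
    """Append the new percept and look the whole history up in the table."""
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")

history = []
print(table_driven_agent(history, ("A", "Clean"), table))  # Right
print(table_driven_agent(history, ("B", "Dirty"), table))  # Suck
```

Such a table grows exponentially with the length of the percept sequence, which is why the agent programs later in the lecture compute the action instead of looking it up.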

Rationality

• Rationality is the state of being reasonable, sensible and having good judgment.
• Rationality is concerned with expected actions and results, depending upon what the agent has perceived.
• Performing actions with the aim of obtaining useful information is an important part of rationality.
• What is an Ideal Rational Agent?
• An ideal rational agent is one that is capable of taking the expected actions to maximize its performance measure, on the basis of:
• Its percept sequence
• Its built-in knowledge base

Continued

Rationality of an agent depends on the following:
• 1. The performance measures, which determine the degree of success.
• 2. The agent's percept sequence till now.
• 3. The agent's prior knowledge about the environment.
• 4. The actions that the agent can carry out.
• A rational agent always performs the right action, where the right action is the one that causes the agent to be most successful for the given percept sequence. The problem the agent solves is characterized by its Performance Measure, Environment, Actuators, and Sensors (PEAS).
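A PEAS characterization can be written out concretely. The entries below, for a self-driving car, follow the common textbook example; the specific lists are illustrative, not exhaustive:

```python
# PEAS description of a self-driving car (illustrative entries).

peas_self_driving_car = {
    "Performance": ["safety", "speed", "legality", "passenger comfort"],
    "Environment": ["roads", "traffic", "pedestrians", "weather"],
    "Actuators":   ["steering", "accelerator", "brake", "horn", "signals"],
    "Sensors":     ["cameras", "GPS", "speedometer", "sonar", "odometer"],
}

for component, examples in peas_self_driving_car.items():
    print(f"{component}: {', '.join(examples)}")
```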

The Structure of Intelligent Agents

• An agent's structure can be viewed as:
• Agent = Architecture + Agent Program
• Architecture = the machinery that the agent executes on.
• Agent Program = an implementation of an agent function.
• Simple Reflex Agents
• They choose actions based only on the current percept.
• They are rational only if a correct decision can be made on the basis of the current percept alone.
• Their environment must be completely observable.
• Condition-Action Rule – It is a rule that maps a state (condition) to an action.
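A small set of condition-action rules for the classic two-square vacuum world gives a concrete simple reflex agent. The locations "A"/"B" and the rules themselves are illustrative assumptions:

```python
# Simple reflex agent: decides from the current percept only.

def simple_reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":      # condition -> action rules
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum_agent(("A", "Clean")))  # Right
```

Note that the agent keeps no memory: given the same percept it always returns the same action, which is exactly why it needs a fully observable environment.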
Simple Reflex Agents

Model-Based Reflex Agents

They use a model of the world to choose their actions and maintain an internal state.
Model: knowledge about “how things happen in the world”.
Internal State: It is a representation of unobserved aspects of the current state, based on the percept history.
Updating the state requires information about:
How the world evolves.
How the agent's actions affect the world.
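A sketch of a model-based reflex agent for the same vacuum world: the internal state records the last known status of each square, so the agent can decide to stop even though no single percept shows both squares. All names and the state-update rule are illustrative assumptions:

```python
# Model-based reflex agent: internal state tracks unobserved squares.

class ModelBasedVacuumAgent:
    def __init__(self):
        self.model = {"A": None, "B": None}   # last known status per square

    def act(self, percept):
        location, status = percept
        self.model[location] = status          # update internal state
        if status == "Dirty":
            return "Suck"
        # The model, not the percept, tells us whether everything is clean.
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "Clean")))   # B still unknown -> Right
print(agent.act(("B", "Dirty")))   # Suck
print(agent.act(("B", "Clean")))   # model says both clean -> NoOp
```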
Goal-Based Agents
• They choose their actions in order to achieve goals.
• The goal-based approach is more flexible than a reflex agent, since the knowledge supporting a decision is explicitly modeled and can therefore be modified.
• Goal: It is a description of desirable situations.
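A goal-based agent can be sketched as a search for an action sequence that reaches an explicit goal state. The breadth-first search and the tiny corridor world below are illustrative assumptions, not the only possible planner:

```python
# Goal-based behavior as search: find a state sequence reaching the goal.
from collections import deque

def plan_to_goal(start, goal, neighbors):
    """Breadth-first search for a path of states from start to goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

# Tiny 1-D corridor: from any position you can step left or right in 0..4.
def neighbors(pos):
    return [p for p in (pos - 1, pos + 1) if 0 <= p <= 4]

print(plan_to_goal(0, 3, neighbors))  # [0, 1, 2, 3]
```

Changing the goal changes the behavior without rewriting any rules, which is the flexibility the slide refers to.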

Utility-Based Agents
• They choose actions based on a preference (utility) for each state.
• Goals are inadequate when:
• There are conflicting goals, only some of which can be achieved.
• Goals have some uncertainty of being achieved, and one needs to weigh the likelihood of success against the importance of a goal.
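A utility-based agent can be sketched as picking the action whose predicted outcome has the highest utility. The route-choice outcomes and the utility weights below are illustrative assumptions chosen to show conflicting preferences:

```python
# Utility-based choice: prefer the action whose predicted state scores best.

outcomes = {              # action -> predicted resulting state
    "highway":  {"time": 30, "scenery": 1},
    "backroad": {"time": 50, "scenery": 9},
}

def utility(state):
    # Weigh conflicting preferences: be fast, but value the view a little.
    return -state["time"] + 2 * state["scenery"]

def choose_action(outcomes):
    return max(outcomes, key=lambda a: utility(outcomes[a]))

print(choose_action(outcomes))  # highway: -30+2 = -28 beats -50+18 = -32
```

Adjusting the weights in `utility` shifts the trade-off, something a bare goal ("reach the destination") cannot express.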

Learning Agent :
• A learning agent in AI is a type of agent that can learn from its past experiences; it has learning capabilities.
• It starts to act with basic knowledge and is then able to act and adapt automatically through learning.
A learning agent has four main conceptual components:
1. Learning element: It is responsible for making improvements by learning from the environment.
2. Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
3. Performance element: It is responsible for selecting external actions.
4. Problem generator: This component is responsible for suggesting actions that will lead to new and informative experiences.
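The four components can be wired together in a toy sketch: the performance element acts, the critic compares the result against a fixed standard, the learning element adjusts a threshold, and the problem generator proposes exploratory percepts. The numbers, names and update rule are illustrative assumptions:

```python
# Toy learning agent with the four components named above.

class LearningAgent:
    def __init__(self):
        self.threshold = 0.0          # behavior that the learning element tunes

    def performance_element(self, percept):
        return "act" if percept > self.threshold else "wait"

    def critic(self, percept, action, standard=0.5):
        # Feedback: acting was correct only when the percept exceeded the standard.
        good = (action == "act") == (percept > standard)
        return 1.0 if good else -1.0

    def learning_element(self, percept, feedback):
        if feedback < 0:              # nudge the threshold toward the standard
            self.threshold += 0.1 if percept <= 0.5 else -0.1

    def problem_generator(self, step):
        # Cycle through fixed exploratory percepts (deterministic for clarity).
        return [0.2, 0.4, 0.6, 0.8][step % 4]

agent = LearningAgent()
for step in range(20):
    p = agent.problem_generator(step)
    a = agent.performance_element(p)
    agent.learning_element(p, agent.critic(p, a))
print(round(agent.threshold, 1))  # 0.4: now matches the 0.5 standard on these percepts
```

After training, the performance element agrees with the critic's standard on every exploratory percept, even though the agent started with no useful threshold.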
Learning Agent

