AI - Lec 03
Future of AI
Branches of AI
ETHICAL - AI
Intelligent Agents:
Agent:
• An Agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through actuators.
• An AI system is composed of an agent and its environment.
• The agents act in their environment. The environment may contain
other agents.
Types of Agents:
• Human Agents
• Robotic Agents
• Software Agents
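The sensor-to-actuator loop behind all three agent types can be sketched in a few lines of Python. The thermostat below is an illustrative software agent; the class name, thresholds, and command strings are assumptions for this sketch, not from the slides.

```python
# Minimal sketch of the agent abstraction: an agent maps a percept
# (sensor reading) to an action (actuator command).

class ThermostatAgent:
    """A trivial software agent: percept = temperature, action = command."""

    def __init__(self, target=21.0):
        self.target = target

    def act(self, percept):
        # Decide an actuator command from the current sensor reading.
        if percept < self.target - 1:
            return "heat_on"
        elif percept > self.target + 1:
            return "heat_off"
        return "idle"

agent = ThermostatAgent()
print(agent.act(18.0))  # → heat_on
```

The same perceive-decide-act shape applies whether the percepts come from eyes, cameras, or encoded bit strings.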
Types of Agents
• A Human Agent has sensory organs such as eyes, ears, nose, tongue,
and skin that serve as sensors, and other organs such as hands, legs,
and mouth that serve as effectors.
• A Robotic Agent has cameras and infrared range finders for its
sensors, and various motors and actuators for its effectors.
• A Software Agent has encoded bit strings as its programs and actions.
AI – Environments - Agents
AI Perception Action Cycle in Autonomous Cars
Environment
• An environment in artificial intelligence is the surrounding of
the agent.
• The agent takes input from the environment through sensors
and delivers the output to the environment through actuators.
There are several types of environments:
• Fully Observable vs Partially Observable
• Deterministic vs Stochastic
• Competitive vs Collaborative
• Single-agent vs Multi-agent
• Static vs Dynamic
• Discrete vs Continuous
1. Fully Observable vs Partially Observable
• When an agent’s sensors can access the complete state of the environment
at each point in time, the environment is said to be fully observable;
otherwise it is partially observable.
• A fully observable environment is easy to deal with, as there is no need
to keep track of the history of the surroundings.
• An environment is called unobservable when the agent has no sensors with
which to perceive it.
• Example:
• Chess – the board is fully observable, and so are the opponent’s moves.
• Driving – the environment is partially observable because what’s
around the corner is not known.
2. Deterministic vs Stochastic
3. Competitive vs Collaborative
4. Single-agent vs Multi-agent
5. Dynamic vs Static
6. Discrete vs Continuous
• If an environment consists of a finite number of actions that can
be performed in it to obtain the output, it is said to be a discrete
environment.
• The game of chess is discrete, as it has only a finite number of
moves. The number of moves might vary with every game, but it is
still finite.
• An environment in which the possible actions cannot be
enumerated, i.e., is not discrete, is said to be continuous.
• Self-driving cars are an example of continuous environments, as
their actions (driving, parking, etc.) cannot be enumerated.
Agent Terminology
• Performance Measure of Agent: the criterion that determines how
successful an agent is.
• Behavior of Agent: the action that the agent performs after any given
sequence of percepts.
• Percept: the agent’s perceptual input at a given instant.
Examples of percepts include inputs from touch sensors, cameras, infrared sensors,
sonar, microphones, mice, and keyboards.
A percept can also be a higher-level feature of the data, such as lines, depth, objects, faces, or
gestures.
• Percept Sequence: the history of everything the agent has perceived to
date.
• Agent Function: a map from the percept sequence to an action.
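The agent function can be made literal with a table-driven sketch: a lookup from the percept sequence seen so far to an action. The percepts, actions, and table entries below are illustrative assumptions in the style of a vacuum-world example, not from the slides.

```python
# A table-driven agent implements the agent function directly as a
# lookup table keyed by the percept sequence to date.

table = {
    ("dirty",): "suck",
    ("clean",): "move",
    ("clean", "dirty"): "suck",
}

percept_sequence = []  # history of everything perceived so far

def agent_function(percept):
    percept_sequence.append(percept)
    # Unlisted percept sequences fall back to a do-nothing action.
    return table.get(tuple(percept_sequence), "no_op")

print(agent_function("clean"))   # → move
print(agent_function("dirty"))   # → suck
```

Such tables grow exponentially with the length of the percept sequence, which is why the agent structures discussed next compute the mapping instead of storing it.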
Rationality
Continued
The Structure of Intelligent Agents
Model-Based Reflex Agents
• They use a model of the world to choose their actions, and they maintain an internal state.
• Model: knowledge about “how things happen in the world”.
• Internal State: a representation of the unobserved aspects of the current state, based on the percept history.
• Updating the state requires information about:
• How the world evolves.
• How the agent’s actions affect the world.
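A minimal sketch of a model-based reflex agent, assuming a toy vacuum-style world; the state fields, percepts, and condition-action rule below are illustrative, not from the slides.

```python
# Model-based reflex agent sketch: keep an internal state, update it
# from the percept using a model of how the world evolves and how the
# agent's own actions affect it, then apply a condition-action rule.

def update_state(state, last_action, percept):
    new_state = dict(state)
    # Model of the agent's effect on the world: moving changes position.
    if last_action == "move":
        new_state["position"] = new_state.get("position", 0) + 1
    # Model of how the world evolves: record the latest observation.
    new_state["seen"] = percept
    return new_state

def model_based_agent():
    state = {}
    last_action = None

    def step(percept):
        nonlocal state, last_action
        state = update_state(state, last_action, percept)
        # Rule fires on the internal state, not on the raw percept alone.
        action = "suck" if state["seen"] == "dirty" else "move"
        last_action = action
        return action

    return step

agent = model_based_agent()
print(agent("dirty"))  # → suck
print(agent("clean"))  # → move
```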
Goal-Based Agents
• They choose their actions in order to achieve goals.
• The goal-based approach is more flexible than the reflex approach, since the knowledge
supporting a decision is explicitly modeled and can therefore be modified.
• Goal: a description of desirable situations.
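A goal-based agent can be sketched as a planner that searches for an action sequence reaching the goal. Breadth-first search and the 1-D world below are illustrative assumptions; any search or planning method could stand in.

```python
# Goal-based agent sketch: instead of reacting, search for a sequence
# of actions that transforms the current state into a goal state.

from collections import deque

def plan(start, goal, successors):
    """Breadth-first search for a list of actions from start to goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # goal unreachable

# Toy 1-D world: states 0..4, actions "left"/"right".
def successors(s):
    moves = []
    if s > 0:
        moves.append(("left", s - 1))
    if s < 4:
        moves.append(("right", s + 1))
    return moves

print(plan(0, 3, successors))  # → ['right', 'right', 'right']
```

Because the goal is explicit, changing the agent's behavior only requires changing the goal state, not rewriting condition-action rules.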
Utility-Based Agents
• They choose actions based on a preference (utility) for each state.
• Goals are inadequate when:
• There are conflicting goals only some of which can be achieved.
• Goals have some uncertainty of being achieved and one needs to weigh likelihood of
success against the importance of a goal.
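One way to sketch this: compute an expected utility for each action, weighting each outcome's utility by its probability, and pick the maximum. The routes, probabilities, and utility values below are invented for illustration.

```python
# Utility-based agent sketch: choose the action maximizing expected
# utility, which handles uncertain and conflicting goals.

def expected_utility(action, outcomes):
    """outcomes maps an action to (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes[action])

outcomes = {
    "highway": [(0.9, 10), (0.1, -50)],   # fast, but a small crash risk
    "backroad": [(1.0, 6)],               # slower, but certain
}

def choose(actions):
    return max(actions, key=lambda a: expected_utility(a, outcomes))

print(choose(["highway", "backroad"]))  # → backroad
```

Here the highway's higher payoff is outweighed by its risk (expected utility 4 vs 6), exactly the likelihood-versus-importance trade-off a plain goal cannot express.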
Learning Agent :
• A learning agent in AI is an agent that can learn from its past
experiences, i.e., it has learning capabilities.
• It starts acting with basic knowledge and then adapts
automatically through learning.
A learning agent has four main conceptual components:
1. Learning Element: responsible for making improvements by learning
from the environment.
2. Critic: the learning element takes feedback from the critic, which describes
how well the agent is doing with respect to a fixed performance standard.
3. Performance Element: responsible for selecting external actions.
4. Problem Generator: responsible for suggesting actions
that will lead to new and informative experiences.
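The four components can be wired together in a small sketch. The bandit-style task, the epsilon value, and the incremental-average update rule below are assumptions chosen for illustration, not the only way to build a learning agent.

```python
# Learning agent sketch: performance element acts, critic scores the
# outcome against a performance standard, learning element updates the
# estimates, and the problem generator proposes exploratory actions.

import random

class LearningAgent:
    def __init__(self, actions, epsilon=0.1):
        self.values = {a: 0.0 for a in actions}  # learned action estimates
        self.counts = {a: 0 for a in actions}
        self.epsilon = epsilon  # how often to explore

    def performance_element(self):
        # Select the externally best-looking action.
        return max(self.values, key=self.values.get)

    def problem_generator(self):
        # Occasionally suggest an action that yields new experience.
        return random.choice(list(self.values))

    def act(self):
        if random.random() < self.epsilon:
            return self.problem_generator()
        return self.performance_element()

    def learning_element(self, action, feedback):
        # The critic's feedback (reward relative to the performance
        # standard) drives an incremental update of the estimate.
        self.counts[action] += 1
        n = self.counts[action]
        self.values[action] += (feedback - self.values[action]) / n
```

With repeated act/feedback cycles the agent's estimates improve, so later performance-element choices are better than the initial ones.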