2 Agent and Environment
Semester – IV
Artificial Intelligence – SHIT601 - Syllabus
• Introduction to the course, What is AI, Fundamentals of AI, usage of AI in a Business Perspective
• Basic programming and other data structure (DS) concepts for AI
• Agents and Environments, Good Behaviour: The Concept of Rationality, the Nature of Environments, Structure of Agents.
• Constraint Satisfaction Problems, Backtracking Search for CSPs. Local Search for Constraint Satisfaction Problems, The
Structure of Problems.
• To Study Informed Search and Exploration: Heuristic Functions, Hill Climbing, the 8-Queens Problem
• To Study First-order Logic: Representation Revisited, Syntax and Semantics of First-order Logic, Models for First-order Logic,
Symbols and Interpretations, Atomic Sentences, Complex Sentences, Quantifiers
• Inference in First-order Logic Forward and Backward Chaining
• Knowledge Representation
• Uncertain Knowledge and reasoning
• Probabilistic Reasoning
• Deep Learning, Q Learning and applications, Neural Networks, Optimizing Processes
Resources for the course – Books
Expected Learning Outcomes:
Evaluation Timelines and Weightage
Introduction to AI
Definition
"AI is a branch of computer science by which we can create intelligent machines that
behave like humans, think like humans, and are able to make decisions."
Agents
and
Environments
An agent can be anything that can be viewed as perceiving its environment through sensors
and acting upon that environment through actuators.
• Perceiving its environment through sensors, and
• Acting upon that environment through actuators.
Example: a human agent perceives through eyes and ears (sensors) and acts through hands and legs (actuators); a robotic agent perceives through cameras and infrared sensors and acts through motors.
Examples of Agents and Environments
Good Behaviour- The Concept of Rationality
A rational agent could be anything that makes decisions, such as a person, firm,
machine, or software. It carries out an action with the best outcome after
considering past and current percepts.
The agents act in their environment. The environment may contain other agents.
Properties of Rational Agent
• Omniscience: expected vs. actual performance. An omniscient agent knows the actual outcome of its actions, which is impossible in practice; a rational agent instead maximizes its expected performance given the percepts received so far.
Agent terminology
• Behavior/action of agent: the action performed by an agent after any given sequence of
percepts.
• Percept sequence: the history of everything that the agent has perceived to date.
Agent terminology
Agent function: It is defined as a map from the percept sequence to an action,
F : P* → A
where P is the set of all percepts, and A is the set of all actions.
Generally, an action may depend on all the percepts observed, not only the current percept:
ak = F( p0, p1, p2, …, pk )
where p0, p1, p2, …, pk is the sequence of percepts recorded to date, ak is the resulting action carried out,
and F maps percept sequences to actions.
Vacuum cleaner problem
Performance measure of vacuum cleaner agent: All the rooms are well cleaned.
Behavior / action of agent: Left, right, suck and no-op (Doing nothing).
Percept: Location and status, for example, [A, Dirty].
Agent function: Mapping of a percept sequence to an action
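The vacuum-cleaner agent function can be sketched in a few lines of Python. The history-dependent rule below (stop acting once both squares A and B have been observed clean) is an illustrative assumption; it shows that the action may depend on the whole percept sequence, not only the latest percept.

```python
def agent_function(percepts):
    """Map a percept sequence [(location, status), ...] to an action."""
    # Squares that have been observed clean anywhere in the history.
    seen_clean = {loc for loc, status in percepts if status == "Clean"}
    location, status = percepts[-1]          # latest percept pk
    if status == "Dirty":
        return "Suck"
    if seen_clean >= {"A", "B"}:             # both squares known clean
        return "No-op"
    return "Right" if location == "A" else "Left"

print(agent_function([("A", "Dirty")]))                  # Suck
print(agent_function([("A", "Dirty"), ("A", "Clean")]))  # Right
print(agent_function([("A", "Clean"), ("B", "Clean")]))  # No-op
```

Because the decision looks at the entire history, the same current percept (e.g. [A, Clean]) can yield different actions depending on what was perceived earlier.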
Percept sequence              Action
[B, Clean]                    Left
[B, Dirty]                    Suck
[A, Dirty], [A, Clean]        Right
[A, Clean], [B, Dirty]        Suck
[B, Dirty], [B, Clean]        Left
[B, Clean], [A, Dirty]        Suck
[A, Clean], [B, Clean]        No-op
[B, Clean], [A, Clean]        No-op
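One way to realise this agent function literally is a table-driven agent: store the table above as an explicit lookup keyed by the entire percept sequence. A minimal Python sketch (the "No-op" fallback for sequences not listed in the table is an assumption, not part of the slides):

```python
# Lookup table: entire percept sequence -> action, as in the table above.
TABLE = {
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
    (("B", "Dirty"), ("B", "Clean")): "Left",
    (("B", "Clean"), ("A", "Dirty")): "Suck",
    (("A", "Clean"), ("B", "Clean")): "No-op",
    (("B", "Clean"), ("A", "Clean")): "No-op",
}

def table_driven_agent(percept_sequence):
    """Look up the action for the whole percept sequence seen so far."""
    return TABLE.get(tuple(percept_sequence), "No-op")

print(table_driven_agent([("B", "Dirty")]))                  # Suck
print(table_driven_agent([("A", "Clean"), ("B", "Dirty")]))  # Suck
```

Note that the number of table entries grows exponentially with the length of the percept history, which is why practical agents are written as programs that compute the action rather than as explicit tables.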
PEAS for vacuum cleaner
Performance measure: cleanliness, efficiency (distance travelled
to clean), battery life, security
Example: PEAS description for An Automated taxi driver
• Sensors: Cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard.
Internet shopping agent
Properties of Task Environment
24
Fully observable (accessible) vs. partially observable (inaccessible)
Fully observable if the agent's sensors detect all aspects of the environment relevant to the choice of action.
Example: Chess – the board is fully observable, as are the opponent's moves.
An environment may be partially observable due to noisy, inaccurate, or missing sensors, or an inability to measure everything that
is needed.
Example: Driving – what is around the next bend is not observable (yet).
Deterministic vs. stochastic (non-deterministic)
Deterministic - the next state of the environment is completely predictable from the current state and the action
executed by the agent.
Example - Tic-Tac-Toe game
Stochastic - the next state cannot be fully predicted from the current state and action; there is an element of chance or uncertainty.
Example - Driving, where the behaviour of traffic cannot be predicted exactly
Episodic vs. sequential
In an episodic task environment the agent's experience is divided into a number of episodes; in
each episode the agent receives a percept and takes a single action.
The action in one episode does not depend on the actions taken in previous episodes.
In a sequential environment the current decision can affect all future decisions.
Example: Chess game
Discrete vs. continuous
Discrete - time moves in fixed steps, usually with one measurement per step (and perhaps
one action, but could be no action).
E.g. a game of chess
Continuous - percepts and actions range over continuous values.
E.g. taxi driving, where speed and location vary smoothly
Static vs. Dynamic
Dynamic - the environment may change over time while the agent is deliberating. Other agents in an environment make it dynamic.
E.g. playing football – the other players make it dynamic
Static - the environment does not change while the agent is deliberating.
E.g. a crossword puzzle
Single agent vs. Multi agent
Single agent - an agent operating by itself in an environment.
E.g. solving a crossword puzzle
Multi agent - the environment contains other agents, which may be cooperative or competitive.
E.g. chess is a competitive two-agent environment