Unit1-Part1 AIML

Content

• AI Definition
• Intelligent Agents – Definitions, Example

AI 1
Syllabus
Unit 1 – Introduction. Why study AI? What is AI? The Turing test. Rationality. Branches of AI. Brief history of AI. Challenges for the future. (Book 1: Chapter 1)
Unit 1 – What is an intelligent agent? Doing the right thing (rational action). Performance measure. Autonomy. Environment and agent design. Structure of agents. Agent types. (Book 1: Chapter 2)
Unit 1 – Uninformed search: depth-first, breadth-first, uniform-cost, depth-limited, iterative deepening; examples, properties. Informed search: best-first, A* search, heuristics, hill climbing, problem of local extrema, simulated annealing. (Book 1: Chapter 3)

AI 2
Why AI?
A computer needs to possess:
• Natural language processing – to communicate
• Knowledge representation – to store what it knows
• Automated reasoning – to answer questions & draw conclusions
• Machine learning – to adapt to new circumstances & extrapolate patterns

AI 4
Artificial Intelligence: A Modern Approach
Stuart Russell & Peter Norvig

Chapter 1

Introduction

5
Definition of AI
Definitions of AI fall into four categories along two dimensions:

Thought process & reasoning:  Thinking Humanly | Thinking Rationally
Behaviour:                    Acting Humanly   | Acting Rationally

The "humanly" column measures success in terms of fidelity to human performance; the "rationally" column measures success against an ideal performance measure (rationality).

6
Definition

AI 7
Total Turing Test
• Proposed to test whether a machine is truly intelligent
• The standard Turing Test is based purely on written (verbal) interaction
• The machine passes if the interrogator cannot distinguish whether the responses come from a machine or a human
• The Total Turing Test includes a video signal so that the interrogator can test the subject's perceptual abilities, as well as the opportunity for the interrogator to pass physical objects 'through the hatch'.
• To pass the Total Turing Test, the computer additionally needs:
  – Computer vision – to perceive objects
  – Robotics – to manipulate objects & move about

AI 8
● Rational – a system is rational if it does the "right thing", given what it knows
● Human-centered approach – involves observation & hypotheses about human behaviour
● Rationalist approach – involves a combination of mathematics & engineering

9
4 Approaches
1. Acting humanly – the Turing Test approach
2. Thinking humanly – the cognitive modeling approach
3. Thinking rationally – the "laws of thought" approach
4. Acting rationally – the rational agent approach

10
1. Acting humanly: The Turing Test approach
• A computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a computer.
• The computer would need to possess the following capabilities:
  – NATURAL LANGUAGE PROCESSING • natural language processing to enable it to communicate successfully in English;
  – KNOWLEDGE REPRESENTATION • knowledge representation to store what it knows or hears;
  – AUTOMATED REASONING • automated reasoning to use the stored information to answer questions and to draw new conclusions;
  – MACHINE LEARNING • machine learning to adapt to new circumstances and to detect and extrapolate patterns.
• To pass the Total Turing Test, the computer will also need:
  – COMPUTER VISION • computer vision to perceive objects, and
  – ROBOTICS • robotics to manipulate objects and move about.

AI 11
2. Thinking humanly: The cognitive modeling approach
• Incorporates neuro-physiological evidence into computational models.
• Three ways to determine how humans think:
  – through introspection (observing one's own thoughts)
  – through psychological experiments (observing a person in action)
  – through brain imaging (observing the brain in action)
• From these, determine a precise theory of the mind
• Express that theory as a computer program

AI 12
• Cognitive Science brings together:
  – computer models from AI, and
  – experimental techniques from psychology
  to construct precise and testable theories of the human mind.

13
3. Thinking rationally: The “laws of
thought” approach
• The Greek philosopher Aristotle was one of the
first to attempt to codify “right thinking,” that is,
irrefutable reasoning processes. His syllogisms
provided patterns for argument structures that
always yielded correct conclusions when given
correct premises.
• For example, “Socrates is a man; all men are
mortal; therefore, Socrates is mortal.”
• These laws of thought were supposed to govern
the operation of the mind; their study initiated
the field called logic.

AI 14
4. Acting rationally: The rational agent approach
• An agent is just something that acts (agent comes from the Latin agere, to do). Of course, all computer programs do something, but computer agents are expected to do more:
  – operate autonomously,
  – perceive their environment,
  – persist over a prolonged time period,
  – adapt to change, and
  – create and pursue goals.
• A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.

AI 15
The rational-agent approach has two advantages over the other approaches:
1. It is more general than the "laws of thought" approach, because correct inference is only one of several possible mechanisms for achieving rationality.
2. It is more amenable to scientific development than approaches based on human behavior or human thought.

16
Chapter 2

Intelligent Agents

17
1. Intelligent Agents
• An agent is anything
that can be viewed as
perceiving its
environment through
sensors and acting
upon that environment
through actuators.

AI 18
Definitions
• The term percept refers to the agent's perceptual inputs at any given instant.
• An agent's percept sequence is the complete history of everything the agent has ever perceived.
• An agent's behavior is described by the agent function, which maps any given percept sequence to an action.
• Two related notions:
  – Agent function – an abstract mathematical description
  – Agent program – a concrete implementation, running within some physical system

AI 19
Example- Vacuum Cleaner

AI 20
Agent Function for Vacuum Cleaner
Problem

AI 21
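The agent function for this example can be written directly as a short program. A minimal Python sketch of the standard two-square vacuum world (the location names 'A'/'B' and the percept format are assumptions, following Russell & Norvig's example):

```python
def reflex_vacuum_agent(percept):
    """Agent function for the two-square vacuum world.

    percept is a (location, status) pair, e.g. ('A', 'Dirty').
    Returns one of the actions 'Suck', 'Right', 'Left'.
    """
    location, status = percept
    if status == 'Dirty':
        return 'Suck'      # always clean the current square first
    elif location == 'A':
        return 'Right'     # square A is clean: move to square B
    else:
        return 'Left'      # square B is clean: move back to square A
```

Note that this program tabulates the agent function compactly: instead of a table row per percept sequence, a few lines of code cover every case.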
Class3

22
• Good behavior - Rationality
• The nature of environments
• The structure of an agent

AI 23
2. Good Behavior: The concept of
rationality
• A rational agent is one that does the right
thing
• When an agent is plunked down in an
environment, it generates a sequence of
actions according to the percepts it receives.
• This sequence of actions causes the
environment to go through a sequence of
states.

AI 24
• Desirability is captured by a performance
measure that evaluates any given sequence of
environment states.
• It is better to design performance measures
according to what one actually wants in the
environment, rather than according to how
one thinks the agent should behave.

AI 25
Rationality depends on 4 things
• The performance measure that defines the criterion of success.
• The agent's prior knowledge of the environment.
• The actions that the agent can perform.
• The agent's percept sequence to date.

AI 26
• For each possible percept sequence, a rational
agent should select an action that is expected
to maximize its performance measure, given
the evidence provided by the percept sequence
and whatever built-in knowledge the agent
has.
• Is Vacuum cleaner agent rational?

AI 27
• Is the vacuum-cleaner agent rational?
1. Performance measure: award 1 point for each clean square at each time step (over a lifetime of 1000 steps)
2. Prior knowledge of the environment: there are 2 squares; the agent does not know the dirt distribution or its own initial location
3. Actions: Left, Right, Suck
4. Percept sequence: the agent perceives its location and whether that location contains dirt
Under these circumstances the agent is indeed rational.

28
The same agent would be irrational under different circumstances:

1. Once the dirt in both squares is cleaned, the agent keeps moving back & forth between the squares. If there is a penalty for each Left or Right movement, the agent will fare poorly.
2. A better agent, once sure the environment is clean, would only do an occasional check & clean.
3. If the geography is unknown, the agent needs to explore.

29
• Rationality maximizes expected performance

• Perfection maximizes actual performance

30
Important aspects
• Doing actions in order to modify future
percepts—sometimes called information
gathering

AI 31
Why is information gathering important?
• A rational agent must not only gather information but also learn as much as possible from what it perceives.

AI 32
Importance of autonomy
• If an agent relies on the prior knowledge of its
designer rather than on its own percepts, we
say that the agent lacks autonomy.
• A rational agent should be autonomous—it
should learn what it can to compensate for
partial or incorrect prior knowledge.

AI 33
• It is reasonable to provide an AI agent with some prior knowledge as well as an ability to learn.
• We do not require complete autonomy from the start.
• After sufficient experience of its environment, the behavior of a rational agent can become effectively independent of its prior knowledge.

34
The nature of environments
• A task environment is essentially the "problem" to which a rational agent is the "solution."
• In designing an agent, the first step must always be to specify the task environment as fully as possible.
• PEAS (Performance, Environment, Actuators, Sensors) description

AI 35
Example: Automated Taxi Driver

AI 36
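A PEAS description is just structured data, so it can be recorded as a simple dictionary. The entries below follow the standard Russell & Norvig taxi example; treat the exact lists as illustrative rather than exhaustive:

```python
# PEAS description for an automated taxi driver (illustrative entries,
# following the Russell & Norvig example).
taxi_peas = {
    "Performance": ["safe", "fast", "legal", "comfortable trip",
                    "maximize profits"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators":   ["steering", "accelerator", "brake", "signal",
                    "horn", "display"],
    "Sensors":     ["cameras", "sonar", "speedometer", "GPS", "odometer",
                    "accelerometer", "engine sensors", "keyboard"],
}
```

Writing the PEAS description down explicitly like this is the first step of agent design: everything the agent program does must be justified against these four lists.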
37
Properties of task environment – 1
• Fully observable vs partially observable
  – If an agent's sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable.
  – A task environment is effectively fully observable if the sensors detect all aspects relevant to the choice of action.
  – An environment may be partially observable because of noisy, inaccurate sensors, or because parts of the state are simply missing from the sensor data; e.g. a taxi driver cannot see what other drivers are thinking.
  – If the agent has no sensors at all, then the environment is unobservable.
AI 38
Properties of task environment – 2
• Single-agent vs multiagent
  – An agent solving a crossword puzzle by itself is in a single-agent environment.
  – An agent playing chess is in a two-agent environment.
  – When must another entity be treated as an agent?

AI 39
- In chess, the opponent entity B is trying to maximize its performance measure, which, by the rules of chess, minimizes A's performance measure. Chess is a competitive multiagent environment.
- In the taxi-driving environment, avoiding collisions maximizes the performance measure of all agents, so it is a partially cooperative multiagent environment.
- Communication often emerges as rational behaviour in multiagent environments.
- In some competitive environments, randomized behavior is rational because it avoids the pitfalls of predictability.

40
Properties of task environment – 3
• Deterministic vs stochastic
  – If the next state of the environment is completely determined by the current state and the action executed by the agent, then we say the environment is deterministic; otherwise, it is stochastic.
  – The vacuum world is deterministic; taxi driving is stochastic.
  – An environment is uncertain if it is not fully observable or not deterministic.
  – Stochastic: outcomes are qualified by probabilities.
  – Nondeterministic: actions are characterized by their possible outcomes, but no probabilities are attached to them.
41
Properties of task environment – 4
• Episodic vs sequential
  – In an episodic environment, the agent's experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action.
  – The action taken in one episode does not depend on previous episodes. Example: an agent spotting defective parts on an assembly line.
  – Chess and taxi driving are sequential: the current decision can affect all future decisions.
  – Episodic environments are simpler than sequential ones because the agent does not need to think ahead.

AI 42
Properties of task environment – 5
• Static vs dynamic
  – If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise, it is static.
  – Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time.
  – Dynamic environments, on the other hand, are continuously asking the agent what it wants to do; if it hasn't decided yet, that counts as deciding to do nothing.
  – Semidynamic: the environment itself does not change with the passage of time, but the agent's performance score does.
  – Examples: taxi driving is dynamic, chess played with a clock is semidynamic, and crossword puzzles are static.

43
Properties of task environment – 6
• Discrete vs continuous – the discrete/continuous distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent.
• Example: the chess environment has a finite number of distinct states, and a discrete set of percepts & actions.
• Taxi driving is a continuous-state and continuous-time problem: the speed and location of the taxi are continuous values, and its actions (steering angles, etc.) are also continuous.

AI 44
Properties of task environment – 7
• Known vs unknown – this distinction refers not to the environment itself but to the agent's (or designer's) state of knowledge about the "laws of physics" of the environment.
• In a known environment, the outcomes (or outcome probabilities) for all actions are given.
• In an unknown environment, the agent has to learn how the environment works in order to make good decisions.
  – An environment generator selects particular environments from an environment class in which to run and evaluate the agent.

45
• A known environment can be partially observable, as in a solitaire card game.
• An unknown environment can be fully observable, as in a new video game: we can see the entire game state, but don't know what each button does until we press it.

46
47
Examples

AI 48
Structure of an AI Agent
• The task of AI is to design an agent program which implements the agent function. The structure of an intelligent agent is a combination of architecture and agent program:
  Agent = Architecture + Agent program
• The three main terms involved in the structure of an AI agent:
  – Architecture: the machinery that the AI agent executes on.
  – Agent function: maps a percept sequence to an action, f : P* → A.
  – Agent program: an implementation of the agent function; it executes on the physical architecture to produce the behaviour described by f.
49
• Many agent programs have the same skeleton.
• The agent program takes only the current percept as input.
• The agent function, in contrast, is defined over the entire percept history; if the program needs the history, it must remember it itself.
50
Four basic kinds of agent programs
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents

AI 51
1. Simple reflex agents
• The simplest kind of agent is the simple reflex agent. These agents select actions on the basis of the current percept, ignoring the rest of the percept history.
• Example: "The car in front is braking." This triggers some established connection in the agent program to the action "initiate braking." We call such a connection a condition–action rule, written as:
  – if car-in-front-is-braking then initiate-braking.
AI 52
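The condition–action rule above can be sketched as a tiny rule-based agent program. This is a hypothetical sketch; the percept representation (a dictionary flag) and the rule names are assumptions for illustration:

```python
def interpret_input(percept):
    """Map a raw percept to an abstract state description (assumed format)."""
    if percept.get('front_car_braking'):
        return 'car-in-front-is-braking'
    return 'clear-road'

# Condition-action rules: state description -> action.
RULES = {
    'car-in-front-is-braking': 'initiate-braking',
    'clear-road': 'keep-driving',
}

def simple_reflex_agent(percept):
    """Select an action from the current percept only, ignoring history."""
    state = interpret_input(percept)
    return RULES[state]
```

The key property is visible in the code: no history is stored anywhere, so the agent's choice depends only on the current percept.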
Percept history is ignored here
AI 53
Schematic diagram of Simple Reflex Agent

AI 54
The agent will work only if the correct decision can be made on
the basis of only the current percept—that is, only if the
environment is fully observable.

AI 55
2. Model-based reflex agents
• The most effective way to handle partial observability is for the agent to keep track of the part of the world it can't see now.
• That is, the agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state.

AI 56
Updating this internal state requires two kinds of knowledge to be encoded in the agent program:
• First, some information about how the world evolves independently of the agent.
• Second, some information about how the agent's own actions affect the world.
AI 57
Knowledge of "how the world works" is called a model of the world.

An agent that uses such a model is called a model-based agent.

58
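The internal state described above can be made concrete with a small sketch. Here the vacuum world is the assumed environment, and the model simply remembers the last known status of each square (the class and its state representation are illustrative assumptions, not the textbook's exact program):

```python
class ModelBasedVacuumAgent:
    """Model-based reflex agent: keeps the last known status of each square."""

    def __init__(self):
        # Internal state: None means the square's status is still unknown.
        self.model = {'A': None, 'B': None}

    def __call__(self, percept):
        location, status = percept
        # Update the internal state from the current percept.
        self.model[location] = status
        if status == 'Dirty':
            return 'Suck'
        # If every square is known to be clean, stop moving.
        if all(s == 'Clean' for s in self.model.values()):
            return 'NoOp'
        # Otherwise go inspect the other square.
        return 'Right' if location == 'A' else 'Left'
```

Unlike the simple reflex agent, this one can decide to do nothing once its model says the whole world is clean, even though its sensors only ever see one square at a time.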
Model Based Reflex Agent

AI 59
Function

AI 60
3. Goal Based agents

AI 61
Working
• The agent needs the current state description, along with some sort of goal information that describes situations that are desirable, in order to decide what to do.
  – Search and planning are the subfields of AI devoted to finding action sequences that achieve the agent's goals.
• Although the goal-based agent appears less efficient, it is more flexible because the knowledge that supports its decisions is represented explicitly and can be modified.

AI 62
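The decision rule above combines a model (to predict the result of an action) with goal information. A minimal one-step-lookahead sketch; the example environment (positions on a number line) is an illustrative assumption, and real goal-based agents replace this loop with search or planning over action sequences:

```python
def goal_based_agent(state, actions, result, is_goal):
    """Return an action predicted to reach a goal state, else None.

    result(state, action) is the transition model; is_goal tests a state.
    This one-step lookahead is the simplest case; search and planning
    generalize it to multi-step action sequences.
    """
    for action in actions:
        if is_goal(result(state, action)):
            return action
    return None

# Illustrative example: reach position 3 on a number line from position 2.
move = lambda s, a: s + 1 if a == 'inc' else s - 1
chosen = goal_based_agent(2, ['inc', 'dec'], move, lambda s: s == 3)
```

Because the goal and the model are explicit data, changing the goal changes the behaviour without rewriting any condition–action rules.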
4. Utility-based agents
• An agent's utility function is essentially an internalization of the performance measure.
• If the internal utility function and the external performance measure are in agreement, then an agent that chooses actions to maximize its utility will be rational according to the external performance measure.
• Utility is particularly useful for decision-making agents that must handle the uncertainty inherent in stochastic or partially observable environments.

AI 63
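Under uncertainty, "maximize utility" means maximizing expected utility: the utility of each possible outcome weighted by its probability. A minimal sketch, where the outcome model and utility function are illustrative assumptions:

```python
def expected_utility(action, outcomes, utility):
    """outcomes(action) yields (probability, resulting_state) pairs."""
    return sum(p * utility(s) for p, s in outcomes(action))

def utility_based_agent(actions, outcomes, utility):
    """Select the action that maximizes expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

# Illustrative example: a certain action vs a risky one.
OUTCOMES = {
    'safe':  [(1.0, 5)],             # certain payoff state 5
    'risky': [(0.5, 12), (0.5, 0)],  # 50/50 between payoff states 12 and 0
}
best = utility_based_agent(
    actions=['safe', 'risky'],
    outcomes=lambda a: OUTCOMES[a],
    utility=lambda s: s,             # here utility is just the state's value
)
```

With these numbers the risky action has expected utility 6 versus 5 for the safe one, so the agent chooses it; change the probabilities and the same code chooses differently, which is exactly the flexibility a plain goal test cannot express.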
Utility Based Agent

AI 64
Learning agents

• A learning agent in AI is an agent that can learn from its past experiences; it has learning capabilities.
• It starts acting with basic knowledge and is then able to act and adapt automatically through learning.

AI 65
• A distinction is made between the
   learning element – responsible for making improvements
   performance element – responsible for selecting external actions.
• The learning element uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future.

AI 66
• The last component of the learning agent is
the problem generator.
• It is responsible for suggesting actions that will
lead to new and informative experiences.

AI 67
End of chapter 2

AI 68
How to choose the action
• Construct a table mapping every percept sequence to an action.
• The percept sequence is used as an index into the table to look up the action.
69
Table-driven agent program

Is the table-driven agent a good approach?

Even the lookup table for chess, a tiny, well-behaved fragment of the real world, would have at least 10^150 entries.
AI 70
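The table-driven skeleton appends each percept to the history and looks the whole sequence up in the table. A minimal sketch; the tiny example table for the vacuum world is an assumption, just to make it runnable:

```python
def make_table_driven_agent(table):
    """Return an agent program driven by a lookup table.

    table maps a tuple of all percepts received so far to an action.
    """
    percepts = []  # the stored percept sequence grows without bound

    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))

    return program

# Toy table for the two-square vacuum world, two steps deep.
TABLE = {
    (('A', 'Clean'),): 'Right',
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
}
agent = make_table_driven_agent(TABLE)
```

The code makes the objection concrete: the table needs one entry for every possible percept sequence, so it explodes combinatorially with the agent's lifetime.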
• Let P be the set of possible percepts and let T be the lifetime of the agent (the total number of percepts it will receive).
• The lookup table will then contain

    Σ_{t=1}^{T} |P|^t   entries.

• For the taxi-driving environment:
  – visual input from a single camera arrives at roughly 27 MB/s (30 frames/sec, 640×480 pixels with 24 bits of color information)
  – this gives a lookup table with over 10^250,000,000,000 entries for an hour's driving

71
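The table-size formula can be checked numerically. For example, in the two-square vacuum world there are |P| = 4 possible percepts (2 locations × 2 dirt statuses); the lifetime T = 1000 matches the performance measure used earlier:

```python
def table_size(num_percepts, lifetime):
    """Number of lookup-table entries: sum over t = 1..T of |P|^t."""
    return sum(num_percepts ** t for t in range(1, lifetime + 1))

# Small sanity check: |P| = 4, T = 3 gives 4 + 16 + 64 = 84 entries.
small = table_size(4, 3)

# For T = 1000 the table is astronomical: more than 4^1000 (about 10^602).
huge = table_size(4, 1000)
```

Even this toy environment produces a table far beyond any physical storage, which is why agent programs must be compact rules rather than explicit tables.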
