
Artificial Intelligence

UNIT-I
BHCS13

Text book
 Rich and Knight, “Artificial Intelligence”, Tata McGraw-Hill, 1992
 S. Russell and P. Norvig, “Artificial Intelligence – A Modern Approach”, Second Edition, Pearson Education

Definition of AI
“The science and engineering of making intelligent machines, especially intelligent computer programs.” – John McCarthy

More Formal Definitions of AI
 “We call programs intelligent if they exhibit behaviors that would be regarded intelligent if they were exhibited by human beings.” – Herbert Simon
 “AI is the study of techniques for solving exponentially hard problems in polynomial time by exploiting knowledge about the problem domain.” – Elaine Rich

Goals of AI:
 To create expert systems
 To implement human intelligence in machines

Some fundamental questions:
What is intelligence?
What is thinking?
What is a machine?
Is the computer a machine?
Can a machine think?
If yes, are we machines?
Can a machine be intelligent?
What is Intelligence?
 Ability of a system to calculate, reason, perceive relationships and analogies, store and retrieve information from memory, solve problems, etc.
 take decisions
 make assumptions
 classify
 use knowledge to respond to new situations
 ability to learn
 inductive inference
Types of Intelligence:
 Linguistic Intelligence
 Musical Intelligence
 Logical-Mathematical Intelligence
 Spatial Intelligence
 Bodily-Kinesthetic Intelligence
 Intrapersonal Intelligence
 Interpersonal Intelligence
Characteristics of AI systems
 learn new concepts and tasks
 reason and draw useful conclusions about the world around us
 remember complicated interrelated facts and draw conclusions from them (inference)
 understand a natural language or perceive and comprehend a visual scene
 look through cameras and see what is there (vision), and move themselves and objects around in the real world (robotics)
Contd..
 plan sequences of actions to complete a goal
 offer advice based on rules and situations
 may not necessarily imitate human senses and
thought processes
 but indeed, in performing some tasks differently, they may
actually exceed human abilities
 capable of performing intelligent tasks effectively
and efficiently
 perform tasks that require high levels of intelligence

Research Areas of AI
 Problem solving through heuristic techniques
 Knowledge representation
 Handling uncertain situations
 Theorem proving
 Game playing
 Natural language processing
 Expert systems

Real-Life Applications of Research Areas:
 Expert Systems
 NLP
 Neural Networks
 Robotics
 Fuzzy Logic

Categories of AI Systems
 Systems that think like humans
 Systems that act like humans
 Systems that think rationally
 Systems that act rationally

Systems that think like humans
 Most of the time this is a black box: we are not clear about our own thought process.
 One has to know the functioning of the brain and its mechanism for processing information.
 It is an area of cognitive science.
 Stimuli are converted into mental representations.
 Cognitive processes manipulate representations to build new representations that are used to generate actions.
 A neural network is a computing model that processes information in a way similar to the brain.
Systems that act like humans
 The overall behaviour of the system should be human-like.
 It could be achieved by observation.

Systems that think rationally
 Such systems rely on logic rather than humans to measure correctness.
 For thinking rationally or logically, logical formulas and theories are used to synthesize outcomes.
 For example: John is a human and all humans are mortal, so one can conclude logically that John is mortal.
 Not all intelligent behavior is mediated by logical deliberation.

Systems that act rationally
 Rational behavior means doing the right thing.
 Even if the method is illogical, the observed behavior must be rational.

The Turing Test: Preliminaries
• Designed by Alan Turing (1950)
• The Turing test provides a satisfactory operational definition of AI
• It’s a behavioral test (i.e., it tests whether a system acts like a human)
• Problem: it is difficult to make a mathematical analysis of it.

The Turing Test
Turing proposed an operational test for intelligent behavior in 1950.
[Figure: a human interrogator communicates with both a human and an AI system, without seeing either, and must determine which is which.]

Intelligent Agents

 What is an agent?
 An agent is anything that perceives its environment through sensors and acts upon that environment through actuators.
 Examples:
 A human is an agent.
 A robot is also an agent, with cameras as sensors and motors as actuators.
 A thermostat detecting room temperature is an agent.
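As a minimal sketch of this percept-act cycle (the class and method names are illustrative, not from the slides), an agent can be modeled in Python as:

    # Minimal sketch of the agent abstraction (illustrative names).
    class Agent:
        def perceive(self, environment):
            """Read the environment through sensors; return a percept."""
            raise NotImplementedError

        def act(self, percept):
            """Map a percept to an action carried out by actuators."""
            raise NotImplementedError

    # The thermostat example as a trivial agent.
    class Thermostat(Agent):
        def __init__(self, target=21.0):
            self.target = target  # desired room temperature (deg C)

        def perceive(self, environment):
            return environment["temperature"]  # sensor reading

        def act(self, percept):
            return "heat_on" if percept < self.target else "heat_off"

    t = Thermostat()
    print(t.act(t.perceive({"temperature": 18.5})))  # -> heat_on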

Intelligent Agents
Goal:
High performance
Optimized results
Rational actions
Ex.: the agent senses from the environment that it may rain. Action: take an umbrella
 Human Agent
 Robotic agent
 Software agent

Diagram of an agent
[Figure: the environment supplies percepts through sensors to the agent program (“?” — what AI will do), which returns actions through actuators to the environment.]
Simple Terms
 Percept: Agent’s perceptual inputs at any given instant
 Percept sequence: Complete history of everything that
the agent has ever perceived.
 Agent function: A function mapping any given
percept sequence to an action
 Performance Measure of agent: determines how successful
an agent is.
 Behavior of agent: the action that the agent performs after any given sequence of percepts.
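A sketch of an agent function realized literally as a lookup table (the entries below are invented for illustration; the percepts follow the vacuum world introduced on the next slides):

    # Sketch: the agent function as an explicit table mapping percept
    # sequences to actions (entries invented for illustration).
    table = {
        ("A", "Dirty"): "Suck",
        ("A", "Clean"): "Right",
        ("A", "Clean", "B", "Dirty"): "Suck",
    }

    percepts = []  # the percept sequence: everything perceived so far

    def table_driven_agent(percept):
        percepts.extend(percept)              # append the new percept
        return table.get(tuple(percepts), "NoOp")

    print(table_driven_agent(("A", "Dirty")))  # -> Suck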

Vacuum-cleaner world
 Perception: clean or dirty? Which square is the agent in?
 Actions: move left, move right, suck, do nothing

Vacuum-cleaner world
[Figure: two squares, A and B; each can be clean or dirty, and the agent occupies one square at a time.]
Program that implements the agent function tabulated in the figure:
function Reflex-Vacuum-Agent([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
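The same program as runnable Python (a direct sketch of the pseudocode above; square names A and B follow the slides):

    def reflex_vacuum_agent(location, status):
        # Condition-action rules from the pseudocode above.
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"
        elif location == "B":
            return "Left"

    print(reflex_vacuum_agent("A", "Dirty"))   # -> Suck
    print(reflex_vacuum_agent("B", "Clean"))   # -> Left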

Concept of Rationality
 Rational agent
 One that does the right thing
 i.e., every entry in the table for the agent function is correct (rational)
 What is correct?
 The actions that cause the agent to be most successful
 So we need ways to measure success.

Performance measure
 An objective function that determines
 how successfully the agent performs
 e.g., 90% or 30%?
 An agent, based on its percepts, produces an action sequence
 A general rule: design performance measures according to
 what one actually wants in the environment
 rather than how one thinks the agent should behave
 E.g., in the vacuum-cleaner world
 we want the floor clean, no matter how the agent behaves
 we don’t restrict how the agent behaves
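As an illustrative sketch (not from the slides), a vacuum-world performance measure could award one point per clean square per time step, saying nothing about how the agent should move:

    # Sketch: score what we want (a clean floor), not how the agent
    # behaves: one point per clean square per time step.
    def performance(history):
        # history: snapshots like {"A": "Clean", "B": "Dirty"}
        return sum(1 for state in history
                     for status in state.values() if status == "Clean")

    run = [{"A": "Dirty", "B": "Dirty"},
           {"A": "Clean", "B": "Dirty"},
           {"A": "Clean", "B": "Clean"}]
    print(performance(run))  # -> 3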
Rationality
 What is rational at any given time depends on four things:
 The performance measure defining the criterion of success
 The agent’s prior knowledge of the environment
 The actions that the agent can perform
 The agent’s percept sequence up to now

Omniscience
 An omniscient agent
 knows the actual outcomes of its actions in advance
 no other outcomes are possible
 based on the circumstances, it is rational.
 Rationality maximizes
 expected performance
 Perfection maximizes
 actual performance
 Hence rational agents are not omniscient.

Learning
 Does a rational agent depend only on the current percept?
 No, the past percept sequence should also be used
 This is called learning.

Autonomy
 If an agent just relies on the prior knowledge of its designer rather than on its own percepts, then the agent lacks autonomy.
 A rational agent should be autonomous: it should learn what it can to compensate for partial or incorrect prior knowledge.
 E.g., a clock
 no input (percepts)
 runs only on its own algorithm (prior knowledge)
 no learning, no experience, etc.
Task environments
 Task environments are the problems
 while the rational agents are the solutions
 Specifying the task environment
 PEAS description as fully as possible
 Performance
 Environment
 Actuators
 Sensors
 In designing an agent, the first step must always be to specify the task environment as fully as possible.
 E.g.: automated taxi driver (detailed on the following slides, with a data-structure sketch after them)
Task environments
 Performance measure
 How can we judge the automated driver?
 Which factors are considered?
 getting to the correct destination
 minimize fuel consumption
 minimize the trip time and/or cost
 minimize the violations of traffic laws
 maximize the safety and comfort, etc.

Contd..
 Environment
 A taxi must deal with a variety of roads
 traffic lights, other vehicles, pedestrians, stray animals, road works, police cars, etc.
 Interact with the customer

Contd..
 Actuators (for outputs)
 control over the accelerator, steering, gear shifting and braking
 a display to communicate with the customers
 Sensors (for inputs)
 detect other vehicles and road situations
 GPS (Global Positioning System) to know where the taxi is
 many more devices are necessary
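The PEAS elements above can be collected in a simple record; this is only an organizational sketch (field names and values paraphrase the taxi slides):

    from dataclasses import dataclass

    @dataclass
    class PEAS:  # sketch: a container for a task-environment spec
        performance: list
        environment: list
        actuators: list
        sensors: list

    taxi = PEAS(
        performance=["correct destination", "low fuel use", "low trip time/cost",
                     "few traffic violations", "safety and comfort"],
        environment=["roads", "traffic lights", "other vehicles", "pedestrians",
                     "stray animals", "road works", "police cars", "customers"],
        actuators=["accelerator", "steering", "gear shifting", "brakes", "display"],
        sensors=["vehicle/road detectors", "GPS", "cameras"],
    )
    print(taxi.sensors)  # -> ['vehicle/road detectors', 'GPS', 'cameras']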

Task environments
[Figure: a sketch of the automated taxi driver.]
Properties of task environments
 Fully observable vs. partially observable
 If an agent’s sensors give it access to the complete state of the environment at each point in time, then the environment is effectively fully observable
 i.e., the sensors detect all aspects that are relevant to the choice of action

 Partially observable
 An environment might be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data.
 E.g.: a local dirt sensor on the cleaner cannot tell whether other squares are clean or not
 Discrete vs. continuous
 If there are a limited number of distinct states, and clearly defined percepts and actions, the environment is discrete
Contd..
 Static vs. dynamic
 A dynamic environment is always changing over time
 e.g., the number of people in the street
 A static environment does not change
 e.g., the destination
 Semi-dynamic: the environment does not change over time, but the agent’s performance score does

Contd..
 Single agent vs. multiagent
 Playing a crossword puzzle – single agent
 Chess playing – two agents
 Competitive multiagent environment
 chess playing
 Cooperative multiagent environment
 automated taxi drivers avoiding collisions

Contd..
 Deterministic vs. stochastic
 If the next state of the environment is completely determined by the current state and the actions executed by the agent, then the environment is deterministic; otherwise, it is stochastic.
 The vacuum cleaner and the taxi driver are stochastic because of unobservable aspects: noise or unknown factors

Contd..
 Episodic vs. sequential
 An episode = the agent’s single pair of perception and action
 The quality of the agent’s action does not depend on other episodes
 every episode is independent of the others
 An episodic environment is simpler
 the agent does not need to think ahead
 Sequential
 the current action may affect all future decisions
 e.g., taxi driving and chess


Contd..
 Known vs. unknown
 This distinction refers not to the environment itself but to the agent’s (or designer’s) state of knowledge about the environment.
 In a known environment, the outcomes for all actions are given (example: solitaire card games).
 If the environment is unknown, the agent will have to learn how it works in order to make good decisions (example: a new video game).

Examples of task environments
[Table: example task environments and their properties.]
Structure of agents

Structure of agents
 Agent = architecture + program
 Architecture = some sort of computing device (sensors + actuators)
 (Agent) program = some function that implements the agent mapping (the “?” in the agent diagram)
 Agent program = the job of AI

Agent programs
 Input for the agent program
 takes the current percept as input from the sensors and returns an action to the actuators
 The entire percept sequence
 the agent must remember all of it
 Implement the agent program as
 a lookup table (agent function)

Agent Programs
 P = the set of possible percepts
 T = lifetime of the agent
 the total number of percepts it receives
 Size of the lookup table: sum over t = 1 to T of |P|^t
 Consider playing chess
 |P| = 10, T = 150
 will require a table of at least 10^150 entries
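To see how fast this sum grows, it can be evaluated directly (a small sketch; the chess figures above use |P| = 10 and T = 150):

    # Size of a lookup table over all percept sequences up to length T:
    # sum of |P|**t for t = 1..T.
    def table_size(P, T):
        return sum(P**t for t in range(1, T + 1))

    print(table_size(10, 3))                # -> 1110
    print(table_size(10, 150) >= 10**150)   # -> True: at least 10^150 entries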

Types of agent programs
 Four types
 Simple reflex agents
 Model-based reflex agents
 Goal-based agents
 Utility-based agents

Simple reflex agents
 They use just condition-action rules
 the rules are of the form “if … then …”
 Efficient, but with a narrow range of applicability
 because knowledge sometimes cannot be stated explicitly
 Work only if the environment is fully observable
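A sketch of the rule-matching skeleton (the rules here are the vacuum-world ones from earlier; in general the conditions test only the current percept):

    # Sketch: a simple reflex agent as an ordered list of
    # condition-action rules; the first matching rule fires.
    rules = [
        (lambda p: p["status"] == "Dirty", "Suck"),
        (lambda p: p["location"] == "A", "Right"),
        (lambda p: p["location"] == "B", "Left"),
    ]

    def simple_reflex_agent(percept):
        for condition, action in rules:
            if condition(percept):
                return action
        return "NoOp"

    print(simple_reflex_agent({"location": "A", "status": "Clean"}))  # -> Right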

Simple reflex agents
[Figure: schematic of a simple reflex agent — sensors produce a description of the current state, condition-action rules select an action, actuators execute it.]
A Simple Reflex Agent in Nature
Percepts: size, motion
RULES:
(1) If small moving object, then activate SNAP
(2) If large moving object, then activate AVOID and inhibit SNAP
ELSE (not moving) then NOOP (needed for completeness)
Action: SNAP or AVOID or NOOP
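These rules translate directly into code; a sketch:

    # Sketch of the reflex rules above (percepts: size, motion).
    def snap_agent(size, moving):
        if moving and size == "small":
            return "SNAP"
        if moving and size == "large":
            return "AVOID"     # AVOID also inhibits SNAP
        return "NOOP"          # not moving: needed for completeness

    print(snap_agent("small", True))    # -> SNAP
    print(snap_agent("large", True))    # -> AVOID
    print(snap_agent("small", False))   # -> NOOP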
Model-based Reflex Agents
 For a world that is partially observable
 the agent has to keep track of an internal state
 that depends on the percept history
 reflecting some of the unobserved aspects
 e.g., driving a car and changing lanes
 Requires two types of knowledge
 how the world evolves independently of the agent
 how the agent’s actions affect the world

Example: Table Agent with Internal State
IF saw an object ahead, and turned right, and it’s now clear ahead THEN go straight
IF saw an object on my right, turned right, and object ahead again THEN halt
IF see no objects ahead THEN go straight
IF see an object ahead THEN turn randomly
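The table can be implemented by remembering the last percept and action as internal state; a simplified sketch (“turn randomly” is fixed to a right turn here for brevity):

    # Sketch: a model-based reflex agent; internal state records the
    # previous percept and action so the table rows can be told apart.
    state = {"last_percept": None, "last_action": None}

    def table_agent(percept):  # percept: "object_ahead" or "clear"
        if state["last_action"] == "turn_right":
            # We just turned right after seeing an object.
            action = "go_straight" if percept == "clear" else "halt"
        elif percept == "clear":
            action = "go_straight"
        else:
            action = "turn_right"  # stands in for "turn randomly"
        state["last_percept"], state["last_action"] = percept, action
        return action

    print(table_agent("object_ahead"))  # -> turn_right
    print(table_agent("clear"))         # -> go_straight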

Model-based Reflex Agents
 The agent has memory (internal state)
[Figure: structure of a model-based reflex agent — the state is updated from the previous state, the new percept, and the world model.]
Goal-based agents
 The current state of the environment alone is not always enough
 the goal is another thing to achieve
 judgment of rationality / correctness
 Actions are chosen to achieve goals, based on
 the current state
 the current percept

Goal-based agents
 Conclusion
 Goal-based agents are less efficient
 but more flexible
 the same agent with different goals can perform different tasks
 Search and planning
 two other sub-fields of AI
 used to find the action sequences that achieve the agent’s goal
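A sketch of the core idea: use a model of the world to predict where each action leads, and pick one that satisfies the goal (one-step lookahead only; real goal-based agents search over action sequences). All names here are illustrative:

    # Sketch: goal-based action selection via one-step lookahead.
    def goal_based_agent(state, actions, model, goal_test):
        for action in actions:
            if goal_test(model(state, action)):  # would this reach the goal?
                return action
        return None  # no single action suffices: search/planning needed

    # Toy world: positions on a line; the goal is to stand at 3.
    model = lambda s, a: s + (1 if a == "right" else -1)
    print(goal_based_agent(2, ["left", "right"], model, lambda s: s == 3))
    # -> right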

Goal-based agents
[Figure: structure of a goal-based agent.]
Utility-based agents
 Goals alone are not enough to generate high-quality behavior
 e.g., a meal in the canteen: good or not?
 Many action sequences achieve the goals
 some are better and some worse
 If goal means success,
 then utility means the degree of success (how successful it is)

Utility-based agents
[Figure: structure of a utility-based agent.]
Utility-based agents
 State A is said to have higher utility if state A is preferred over the others
 Utility is therefore a function that maps a state onto a real number
 the degree of success
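Since utility maps states to real numbers, action selection becomes a maximization; a sketch reusing the toy line-world from the goal-based example (all names illustrative):

    # Sketch: a utility-based agent picks the action whose predicted
    # outcome state has the highest utility (degree of success).
    def utility_based_agent(state, actions, model, utility):
        return max(actions, key=lambda a: utility(model(state, a)))

    # Toy example: utility is closeness to position 3 (higher is better).
    model = lambda s, a: s + (1 if a == "right" else -1)
    utility = lambda s: -abs(s - 3)
    print(utility_based_agent(0, ["left", "right"], model, utility))
    # -> right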

Utility-based agents
Utility has several advantages:
 When there are conflicting goals,
 only some of the goals, but not all, can be achieved
 utility describes the appropriate trade-off
 When there are several goals,
 none of which can be achieved with certainty,
 utility provides a way to make the decision