Artificial Intelligence
SEARCHES IN AI
Week 5, Lecture 9
Instructors: M. Arsalan Raza, Amna Nadeem, Omer Aftab, and Anila Amjad
Lahore Garrison University
NAME: AMNA NADEEM
EMAIL: [email protected]
PHONE NUMBER: 0304-1539322
VISITING HOURS:
Preamble
Environment
Example of Environment
Agent / Observation
State / State Space
Action
Environment definition
Types of Environment:
Fully vs Partially observable environments
Deterministic vs Stochastic environments
Episodic vs Sequential environments
Dynamic vs Static environments
Discrete vs Continuous environments
Single-agent vs Multi-agent environments
Today’s Lecture Contents
Recall: Agent
Structure of Agent
Types of Agents
Simple reflex agents
Reflex agents with state/model
Goal-based agents
Utility-based agents
All these can be turned into learning agents
Recall: Structure of an Agent
The job of AI is to design an agent program that
implements the agent function—the mapping from
percepts to actions.
AGENT ARCHITECTURE
The agent program will run on some sort of computing device with
physical sensors and actuators—we call this the architecture:
agent = architecture + program
Recall: Agent Programs
Agent Program vs Agent Function
Note the difference between the agent program, which takes the current
percept as input, and the agent function, which takes the entire percept
history.
Recall: Lookup table
Designers must construct a table that contains the appropriate action for every possible percept sequence
Let
P be the set of possible percepts
T be the lifetime of the agent (the total number of percepts it will receive)
The lookup table will contain Σ_{t=1}^{T} |P|^t entries
AUTOMATED TAXI:
visual input from a single camera arrives at roughly 30 frames per second, 640×480 pixels with 24 bits of color
Entries in lookup table: over 10^600,000,000,000 for one hour of driving
LOOKUP TABLE FOR CHESS:
at least 10^150 entries
The table-driven approach to agent construction is therefore doomed to failure.
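To make the idea concrete, here is a minimal Python sketch of a table-driven agent for the two-square vacuum world, in the spirit of the textbook's TABLE-DRIVEN-AGENT pseudocode; the percept format and the sample table entries are illustrative assumptions.

```python
# Table-driven agent sketch: the table maps entire percept *sequences*
# to actions, which is why it needs sum_{t=1}^{T} |P|^t entries.
table = {
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'),): 'Right',
    (('B', 'Dirty'),): 'Suck',
    (('B', 'Clean'),): 'Left',
    (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
    # ... one entry for every longer percept sequence as well
}

def make_table_driven_agent(table):
    percepts = []                          # the entire percept history
    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))  # look up action for full history
    return agent

agent = make_table_driven_agent(table)
print(agent(('A', 'Dirty')))  # -> 'Suck'
```

Even in this toy world the table needs one entry per percept sequence, which is exactly the blow-up described in the slide that follows.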
Recall: The daunting size of these tables means that:
(a) no physical agent in this universe will have the space to store the table;
(b) the designer would not have time to create the table;
(c) no agent could ever learn all the right table entries from its experience;
(d) even if the environment is simple enough to yield a feasible table size, the designer still has no guidance about how to fill in the table entries.
Agent types
Five basic types, in order of increasing generality:
Simple reflex agents
Model-based reflex agents (reflex agents with state)
Goal-based agents
Utility-based agents
Learning agents
Simple reflex agents
Simple but very limited intelligence.
It ignores the rest of the percept history and acts only on the basis of the current percept. Percept
history is the history of all that an agent has perceived to date.
They are rational only if a correct decision can be made on the basis of the current percept alone.
The agent function is based on the condition-action rule. A condition-action rule is a rule that
maps a state i.e., a condition to an action. If the condition is true, then the action is taken, else not.
Action does not depend on percept history, only on current percept.
Therefore no memory requirements.
Infinite loops
Suppose the vacuum cleaner cannot observe its location. What should it do given the percept [Clean]?
Moving Left in A or Right in B results in an infinite loop.
A fly buzzing around a window or a light shows the same looping reflex behavior.
Possible Solution: Randomize action.
Simple reflex agents
Example:
The vacuum agent is a simple reflex agent because the decision is based only on the
current location, and whether the place contains dirt.
A fly buzzing around a window or light.
A thermostat in a heating system.
Simple reflex agents
These agents select actions on the basis of the current percept, ignoring the rest of the percept
history
CONDITION–ACTION RULE
E.g. if car-in-front-is-braking then initiate-braking
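A condition–action rule set maps directly to code. Below is a minimal Python sketch mirroring the textbook's REFLEX-VACUUM-AGENT; the (location, status) percept format is the one used in the vacuum-world example.

```python
# Simple reflex agent for the two-square vacuum world: the action depends
# only on the current percept, never on the percept history.
def simple_reflex_vacuum_agent(percept):
    location, status = percept    # e.g. ('A', 'Dirty')
    if status == 'Dirty':         # condition-action rules
        return 'Suck'
    elif location == 'A':
        return 'Right'
    elif location == 'B':
        return 'Left'

print(simple_reflex_vacuum_agent(('A', 'Dirty')))  # -> 'Suck'
print(simple_reflex_vacuum_agent(('B', 'Clean')))  # -> 'Left'
```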
States: Beyond Reflexes
• Recall the agent function that maps from percept histories to actions:
f: P* → A
An agent program can implement an agent function by maintaining an
internal state.
The internal state can contain information about the state of the
external environment.
The state depends on the history of percepts and on the history of
actions taken:
f: P* × A* → S, where S is the set of states; the action chosen then
depends only on the current state (S → A).
If each internal state includes all information relevant to decision
making, the state space is Markovian.
Simple reflex agent
Problems with simple reflex agents:
• Very limited intelligence.
• No knowledge of non-perceptual parts of the state.
• Usually too big to generate and store.
• If there occurs any change in the environment, then
the collection of rules needs to be updated.
States and Memory: Game Theory
If each state includes the information about the
percepts and actions that led to it, the state space has
perfect recall.
Perfect Information = Perfect Recall + Full
Observability.
Model-based reflex agents
The most effective way to handle partial observability is for the agent to keep track
of the part of the world it can’t see now.
The agent should maintain some sort of internal state that depends on the
percept history and thereby reflects at least some of the unobserved aspects of the
current state
Updating the internal state information of agent as time goes by requires two kinds
of knowledge to be encoded in the agent program
How the world evolves independently of the agent
How the agent’s own actions affect the world
The knowledge about “how the world works”—whether implemented
in simple Boolean circuits or in complete scientific theories—is called
a model of the world. An agent that uses such a model is called a
model-based reflex agent.
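A minimal Python sketch of this structure follows; the state representation, update rule, and rule set are illustrative assumptions echoing the textbook's MODEL-BASED-REFLEX-AGENT pseudocode.

```python
# Model-based reflex agent: keeps an internal state that is updated from the
# last action, the current percept, and a model of how the world evolves.
def make_model_based_agent(update_state, rules):
    state = {}            # internal model of the world
    last_action = None
    def agent(percept):
        nonlocal state, last_action
        # Fold the new percept (and the effect of our last action) into
        # the internal state, using knowledge of how the world works.
        state = update_state(state, last_action, percept)
        # Pick the first condition-action rule whose condition matches.
        for condition, action in rules:
            if condition(state):
                last_action = action
                return action
    return agent

# Illustrative vacuum-world instantiation (assumed percept format).
def update_state(state, action, percept):
    location, status = percept
    new_state = dict(state)
    new_state[location] = status   # remember each square's last-seen status
    new_state['at'] = location
    return new_state

rules = [
    (lambda s: s[s['at']] == 'Dirty', 'Suck'),
    (lambda s: s['at'] == 'A', 'Right'),
    (lambda s: s['at'] == 'B', 'Left'),
]

agent = make_model_based_agent(update_state, rules)
print(agent(('A', 'Dirty')))  # -> 'Suck'
```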
Model-based reflex agents
Example:
• Self-driving cars are a great example of a model-based reflex agent. The car is
equipped with sensors that detect obstacles, such as car brake lights in front of
them or pedestrians walking on the sidewalk. As it drives, these sensors feed
percepts into the car's memory and internal model of its environment.
• A robot may be programmed to avoid obstacles in its path. It slowly builds a
model of the environment as it moves around. As it encounters obstacles, it
stores this percept in its memory and updates its model accordingly. As it
encounters new obstacles that are similar to past encounters, the robot can use
memory and interpretation skills to identify the obstacle and take the
appropriate action.
Goal-based agents
Along with the current state description, a goal-based agent needs
some sort of goal information that describes situations that are
desirable.
Choosing actions involves consideration of the future — both:
“What will happen if I do such-and-such?”
“Will that make me happy?”
Goal-Based Agents
These kinds of agents make decisions based on how far they
currently are from their goal (a description of desirable situations).
Their every action is intended to reduce their distance from the
goal. This allows the agent a way to choose among multiple
possibilities, selecting the one that reaches a goal state.
The knowledge that supports its decisions is represented explicitly
and can be modified, which makes these agents more flexible.
They usually require search and planning. The goal-based agent’s
behavior can easily be changed.
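As a concrete sketch, the following Python fragment selects actions by searching for an action sequence that reaches a goal state; the one-dimensional world, the action set, and the goal test are illustrative assumptions.

```python
# Goal-based agent sketch: find a plan (via breadth-first search) that
# leads from the current state to a goal state, then follow it.
from collections import deque

def plan_to_goal(start, goal_test, successors):
    """Breadth-first search; returns a list of actions reaching a goal."""
    frontier = deque([(start, [])])
    explored = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal_test(state):
            return actions
        for action, next_state in successors(state):
            if next_state not in explored:
                explored.add(next_state)
                frontier.append((next_state, actions + [action]))
    return []  # no plan found

# Illustrative 1-D world: states are integer positions, goal is position 3.
successors = lambda s: [('Right', s + 1), ('Left', s - 1)]
plan = plan_to_goal(0, lambda s: s == 3, successors)
print(plan)  # -> ['Right', 'Right', 'Right']
```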
Goal-based agents
• Is knowing the state and environment enough?
– A taxi can go left, right, or straight.
• Have a goal:
– a destination to get to.
• Use knowledge about the goal to guide actions,
– e.g., search, planning.
Goal-based agents
Example:
• Google's Waymo driverless cars are
good examples of goal-based agents
when they are programmed with an end
destination, or goal, in mind. The car
will then "think" and make the right
decisions to deliver the passenger
where they intended to go.
Utility-based agents
UTILITY?
UTILITY FUNCTION?
A rational utility-based agent chooses the action that maximizes the
expected utility of the action outcomes—that is, the utility the agent
expects to derive, on average, given the probabilities and utilities of
each outcome.
A utility-based agent has to model and keep track of its
environment, tasks that have involved a great deal of research on
perception, representation, reasoning, and learning.
Utility-based agents
Agents that are built around an explicit measure of how useful each
state is (its utility) are called utility-based agents.
When there are multiple possible alternatives, to decide which one is
best, utility-based agents are used.
They choose actions based on a preference (utility) for each state.
Sometimes achieving the desired goal is not enough. We may look for a
quicker, safer, cheaper trip to reach a destination.
Agent happiness should be taken into consideration. Utility describes
how “happy” the agent is. Because of the uncertainty in the world, a
utility agent chooses the action that maximizes the expected utility.
A utility function maps a state onto a real number which describes the
associated degree of happiness.
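The decision rule itself is compact. Here is a minimal Python sketch of choosing the action that maximizes expected utility; the outcome probabilities and utility values are invented for illustration.

```python
# Utility-based decision rule: choose the action that maximizes expected
# utility, EU(a) = sum over outcomes s of P(s | a) * U(s).
def expected_utility(outcomes, utility):
    return sum(p * utility[s] for s, p in outcomes.items())

def choose_action(actions, utility):
    return max(actions, key=lambda a: expected_utility(actions[a], utility))

# Illustrative taxi example: two routes with uncertain outcomes.
utility = {'fast': 10, 'slow': 2, 'unsafe': -50}
actions = {
    'highway':    {'fast': 0.7, 'slow': 0.2, 'unsafe': 0.1},
    'back_roads': {'fast': 0.4, 'slow': 0.6},
}
print(choose_action(actions, utility))  # -> 'back_roads' (EU 5.2 vs 2.4)
```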
Utility-based agents
Goals are not always enough
Many action sequences get the taxi to its destination.
Consider other factors: how fast, how safe, ...
A utility function maps a state onto a real number which describes the
associated degree of “happiness”, “goodness”, “success”.
Where does the utility measure come from?
Economics: money.
Biology: number of offspring.
Your life?
Utility-based agents
Example:
• A route recommendation system that finds the 'best' route to reach a
destination.
• A home thermostat that knows when to start heating or cooling your
house in order to reach a certain temperature.
Learning agents
A learning agent can be divided into four conceptual
components:
Performance element
Learning element
Critic
Problem generator
Learning Agent
A learning agent in AI is the type of agent that can learn from its past experiences
or it has learning capabilities.
It starts to act with basic knowledge and then is able to act and adapt
automatically through learning.
A learning agent has four conceptual components, which are:
• Learning element is responsible for making improvements by learning from the
environment.
• Critic gives feedback that describes how well the
agent is doing with respect to a fixed performance standard.
• Performance element is responsible for selecting external action.
• Problem Generator component is responsible for suggesting actions that will
lead to new and informative experiences.
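A minimal Python skeleton of the four components might look as follows; the concrete behaviors are placeholders, not the textbook's code.

```python
# Skeleton of a learning agent's four components. The bodies here are
# placeholders; real agents plug in task-specific logic.
class LearningAgent:
    def __init__(self, rules, standard):
        self.rules = rules            # performance element: percept -> action
        self.standard = standard      # fixed performance standard for critic

    def performance_element(self, percept):
        return self.rules.get(percept, 'NoOp')   # select external action

    def critic(self, percept, action):
        # Feedback: how well did the action do against the standard?
        return self.standard(percept, action)

    def learning_element(self, percept, action, feedback):
        if feedback < 0:              # on bad feedback, revise the rule
            self.rules[percept] = 'NoOp'

    def problem_generator(self):
        return 'TryNewAction'         # suggest exploratory experiences

    def step(self, percept):
        action = self.performance_element(percept)
        feedback = self.critic(percept, action)
        self.learning_element(percept, action, feedback)
        return action

agent = LearningAgent({'dirty': 'Suck'}, standard=lambda p, a: 1)
print(agent.step('dirty'))  # -> 'Suck'
```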
Learning agents
Performance element: what was previously the whole agent.
Input: percepts from sensors; output: actions.
Learning element: modifies the performance element.
Learning agents
Critic: evaluates how the agent is doing.
Example input: checkmate?
The performance standard is fixed.
Problem generator:
tries to solve the problem differently instead of only optimizing;
suggests exploring new actions -> new, informative problems.
Learning agents
Example:
• The human is an example of a
learning agent. For example, a
human can learn to ride a bicycle,
even though, at birth, no human
possesses this skill.
Learning agents (Taxi driver)
Performance element: how it currently drives; e.g., the taxi driver makes
a quick left turn across 3 lanes.
The critic observes the shocking language from the passenger and other
drivers and reports the action as bad.
The learning element tries to modify the performance element for the future.
The problem generator suggests experimenting with, for example, the brakes
on different road conditions.
Exploration vs. Exploitation:
Learning experience can be costly in the short run:
shocking language from other drivers
lower tips
fewer passengers
How the components of agent programs work
Atomic representation (each state of the world is indivisible, with no internal structure); used in:
search
game playing
hidden Markov models
Markov decision processes
Factored representation (splits up each state into a fixed set of variables or
attributes, each of which can have a value); used in:
constraint satisfaction algorithms
propositional logic
planning
Bayesian networks
machine learning algorithms
How the components of agent programs work
Structured representation (objects and their relationships can be described explicitly); used in:
relational databases
first-order logic
first-order probability models
knowledge-based learning, and much of natural language understanding
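To make the distinction concrete, here is a small Python sketch of one world state under each of the three representations; the particular state and field names are illustrative assumptions.

```python
# The same world state under the three representations discussed above.

# Atomic: the state is an indivisible label with no internal structure
# (as used in search and game playing).
atomic_state = 'S42'

# Factored: the state is a fixed set of variables, each with a value
# (as used in constraint satisfaction, propositional logic, planning).
factored_state = {'location': 'A', 'dirt_in_A': True, 'dirt_in_B': False}

# Structured: objects and the relations between them are explicit
# (as in relational databases and first-order logic).
structured_state = {
    'objects': ['robot', 'square_A', 'square_B'],
    'relations': [('at', 'robot', 'square_A'), ('dirty', 'square_A')],
}
```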
References
Course Textbook
Artificial Intelligence: A Modern Approach by Stuart Russell and Peter
Norvig, 4th Edition, Pearson, 2020.
Other Online sources for help
ai.berkeley.edu