
AI (CSEg3206)

4/27/2024 AI/CSEg3206 1
Introduction to Artificial Intelligence (CSEg3206)

Units I, II & III

 Introduction
 Intelligent Agents
 Problem Solving

Instructor: Dr. T. GopiKrishna
Assistant Professor
ASTU

Introduction

• Chapter Objectives
– Define intelligence
– Define AI
– Describe what an agent is
– State what a rational agent is
– Identify areas and achievements of AI
– Explain AI history and trends

Artificial Intelligence is composed of two words, Artificial and Intelligence: Artificial means "man-made" and intelligence means "thinking power"; hence AI means "man-made thinking power."

What is Artificial Intelligence (AI)?
Def-1: "It is a branch of computer science by which we can create intelligent machines that can behave like humans, think like humans, and make decisions."
Def-2: Programs that behave (externally) like humans; programs that operate (internally) the way humans do (another way is to make computational models of human thought processes).
What does it mean to behave intelligently? (Another thing we could do is build computational systems that behave intelligently.)

AI applications
• Monitor trades
• Detect fraud
• Schedule shuttle loading, etc.
Software that gathers information about an environment and takes actions based on it (an agent):
• a robot
• a web shopping program
• a factory
• a traffic control system, …
History of AI

The emergence of intelligent agents (1993-2011)
Year 1997: IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the first computer to defeat a reigning world chess champion.
Year 2002: AI entered the home for the first time in the form of Roomba, a robotic vacuum cleaner.
Year 2006: AI reached the business world; companies like Facebook, Twitter, and Netflix started using AI.
Nowadays companies such as Google, Facebook, IBM, and Amazon are working with AI and creating remarkable devices. The future of Artificial Intelligence is inspiring and will come with high intelligence.
Why Artificial Intelligence?

• With the help of AI, you can create software or devices that solve real-world problems easily and accurately, in areas such as health, marketing, and traffic.
• With the help of AI, you can create your own virtual assistant, such as Cortana, Google Assistant, or Siri.
• With the help of AI, you can build robots that work in environments where human survival would be at risk.
• AI opens a path for other new technologies, new devices, and new opportunities.
Data, information, knowledge and wisdom
– According to Russell Ackoff, the content of the human mind can be classified into five categories:
• Data: raw facts (symbols)
• Information: data that has been processed to be useful (data given meaning)
• Knowledge: application of data and information; answers "how"
• Understanding: appreciation of "why"
• Wisdom: evaluated understanding
Data
• simply exists and has no significance beyond its existence
• can exist in any form, usable or not
• does not have meaning by itself
• A spreadsheet generally starts out by holding data
Types of AI
AI type-1: Based on capabilities (Narrow/Weak AI, General AI, Super AI)

AI type-2: Based on functionality
1. Reactive Machines
IBM's Deep Blue system is an example of a reactive machine.
Google's AlphaGo is also an example of a reactive machine.
2. Limited Memory
3. Theory of Mind
4. Self-Awareness
Data…..contd
• Information
• is data that has been given meaning by way of relational connection
• This "meaning" can be useful, but does not have to be
• A relational database makes information from the data stored within it
Data…..contd

• Knowledge
• is the appropriate collection of information, such that its intent is to be useful
• Knowledge is a deterministic process
• Most of the applications we use (modeling, simulation, etc.) exercise some type of stored knowledge
Understanding
• is a true cognitive and analytical ability
• Understanding is an interpolative and probabilistic process
• It synthesizes new knowledge from previously held knowledge
Data…..contd
• Wisdom
• is an extrapolative, non-deterministic, non-probabilistic process
• It calls upon all the previous levels of consciousness, and specifically upon special types of human programming (moral and ethical codes, etc.)
• Most people believe that computers do not have, and will never have, the ability to possess wisdom
• The following diagram represents the transitions from data, to information, to knowledge, and finally to wisdom. It is called the knowledge hierarchy.
Data…..contd
Knowledge Hierarchy

Data…..contd
• The first four categories relate to the past.
• Only the fifth category, wisdom, deals with the future, because it incorporates vision and design.
• With wisdom, people can create the future rather than just grasp the present and past.
• But achieving wisdom isn't easy; people must move successively through the other categories.
• Most importantly, wisdom is very hard to represent in a computer system.
Data…..contd

(Figures: how raw data gets converted to wisdom through various levels of processing; how our brain processes information)
Views ….contd
• Intelligence:
– "capacity to learn and solve problems" (Webster dictionary)
– the ability to act rationally
• Natural Intelligence versus Artificial Intelligence(?)
• There are different views of and definitions for AI
– Views of AI fall into four different perspectives along two dimensions:
Views ….contd
• Think Like Humans: the cognitive modeling approach
• Involves cognitive modeling
– If we are going to say that a given program thinks like a human, we must have some way of determining how humans think. We need to get inside the actual workings of human minds (a sufficiently precise theory of the mind).
– The human thinking process is difficult to understand:
• How does the mind arise from the brain?
• Think also about unconscious tasks such as vision, speech understanding, and reflex actions
– Humans are not perfect!
• We make a lot of systematic mistakes
Views ….contd
• Act Like Humans: the Turing Test approach
• To be intelligent, a program should simply act like a human
• Alan Turing's test (an operational test for intelligent behavior: the Imitation Game)
– Indistinguishability from undeniably intelligent entities: human beings
– Capabilities needed (suggested major components of AI):
• Natural language processing - successful communication
• Knowledge representation - store what it knows or hears
• Automated reasoning - answer questions and draw conclusions
• Machine learning - adapt, detect and extrapolate patterns
• Computer vision - perceive objects
• Robotics - manipulate objects and move about
– Researchers have devoted little effort to passing the Turing test itself (just as building flying machines did not require fooling pigeons)
Views ….contd
• Think Rationally: the laws of thought
• Instead of thinking like a human, think rationally
– Find out how correct thinking must proceed
– Syllogism: "Socrates is a man; all men are mortal; therefore Socrates is mortal."
– These laws of thought were supposed to govern the operation of the mind; their study initiated the field called logic
• A traditional and important branch of mathematics and computer science
– Problems:
• It is not always possible to model thought as a set of rules; sometimes there is uncertainty.
• Even when a model is available, the complexity of the problem may be too large to allow for a solution.
Views ….contd
• Act Rationally: the rational agent approach
– An agent is an entity that perceives and acts.
– A rational agent acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.
– Logical thinking is only one aspect of appropriate behavior: a reaction like pulling your hand away from a hot surface is not the result of careful deliberation, yet it is clearly rational.
– Sometimes there is no provably correct thing to do, yet something must be done.
– Instead of insisting on how the program should think, it is therefore better to insist on how the program should act, caring only about the final result (goal).
Views ….contd
•Summary of Views of AI

Views ….contd
• Modeling exactly how humans actually think
– cognitive models of human reasoning
• Modeling exactly how humans actually act
– models of human behavior (what they do, not how they think)
• Modeling how ideal agents “should think”
– models of “rational” thought (formal logic)
– NB: humans are often not rational!
• Modeling how ideal agents “should act”
– rational actions but not necessarily formal rational reasoning
– i.e., more of a black-box/engineering approach
• Modern AI focuses on the last definition
– A focus on this “engineering” approach
– Success is judged by how well the agent performs
– Modern methods are also inspired by cognitive science and neuroscience (how people think).
Views ….contd
 AI is an attempt to reproduce human reasoning and intelligent behavior by computational methods
 The goal of AI is to create computer systems (machines) that perform tasks regarded as requiring intelligence when done by humans
 Take a task at which people are better, e.g.:
• Prove a theorem
• Play chess
• Plan a surgical operation
• Diagnose a disease
• Navigate in a building
and build a computer system that does it automatically
Views ….contd

• Therefore, AI involves modeling humans (activities, behaviour, thoughts, etc.) and even other animals
• The goal of Artificial Intelligence (AI) is to build software systems that behave "intelligently"
• AI involves building computer systems that "do the right thing" in complex environments
• The systems that are built act optimally given the limited information and computational resources available.
Foundations of Artificial Intelligence
Philosophy - knowledge representation, logic, foundations of AI (is AI possible?)
Mathematics - search, analysis of search algorithms, logic
Economics - expert systems, decision theory, principles of rational behavior
Psychology - behaviorist insights into AI programs
Brain Science (Neuroscience) - learning, neural nets
Physics - learning, information theory and AI, entropy, robotics, image processing
Computer Engineering - systems for AI
Linguistics - natural language processing (NLP), speech recognition, computational linguistics, knowledge representation, expert systems, etc.
What can AI do today? (Roles of AI)
A concise answer is difficult, because there are so many activities in so many subfields.
• Autonomous planning and scheduling
• Game playing (e.g., defeating Garry Kasparov)
• Autonomous control
• Diagnosis
• Logistics planning
• Robotics
• Language understanding and problem solving
Some Achievements
 Computers have beaten world champions in several games, including Checkers, Othello, and Chess, but still do not do well in Go
 AI techniques are used in many systems: formal calculus, video games, route planning, logistics planning, pharmaceutical drug design, medical diagnosis, hardware and software troubleshooting, speech recognition, traffic monitoring, facial recognition, medical image analysis, part inspection, etc.
 Stanford's robotic car, Stanley, autonomously traversed 132 miles of desert
 Some industries (automobile, electronics) are highly robotized, while other robots perform brain and heart surgery, are rolling on Mars, fly autonomously, …, but home robots still remain a thing of the future
Some Big Open Questions
 AI (especially the "rational agent" approach) assumes that intelligent behaviors are based only on information processing. Is this a valid assumption?
 If so, can the human brain machinery solve problems that are inherently intractable for computers?
 In a human being, where is the interface between "intelligence" and the rest of "human nature", e.g.:
• How does intelligence relate to emotions?
• What does it mean for a human to "feel" that he/she understands something?
 Is this interface critical to intelligence? Can there exist a general theory of intelligence independent of human beings? What is the role of the human body?
Some…contd
 AI (especially the "rational agent" approach) assumes that intelligent behaviors are based on information processing. Is this a valid assumption?
 If so, can the human brain machinery solve problems that are inherently intractable for computers?
 In a human being, where is the interface between "intelligence" and the rest of "human nature", e.g.:
• How does intelligence relate to emotions?
• What does it mean for a human to "feel" that he/she understands something?
 Is this interface critical to intelligence? Can there exist a general theory of intelligence independent of human beings? What is the role of the human body?

In the movie I, Robot, the most impressive feature of the robots is not their ability to solve complex problems, but how they blend human-like reasoning with other key aspects of human beings (especially self-consciousness, fear of dying, and the distinction between right and wrong).
Some…contd
 AI contributes to building an information-processing model of human beings, just as biochemistry contributes to building a model of human beings based on bio-molecular interactions
 Both try to explain how a human being operates
 Both also explore ways to avoid human imperfections (in biochemistry, by engineering new proteins and drug molecules; in AI, by designing rational reasoning methods)
 Both try to produce new useful technologies
 Neither explains (yet?) the true meaning of being human
Main Areas of AI
 Knowledge representation (including formal logic)
 Search, especially heuristic search (puzzles, games)
 Planning
 Reasoning under uncertainty, including probabilistic reasoning
 Learning
 Agent architectures
 Robotics and perception
 Natural language processing
(Diagram: an Agent at the center, surrounded by perception, robotics, reasoning, search, learning, planning, knowledge representation, constraint satisfaction, natural language, expert systems, …)
Bits of History
 1956: the name "Artificial Intelligence" is coined
 60's: search and games, formal logic and theorem proving
 70's: robotics, perception, knowledge representation, expert systems
 80's: more expert systems, AI becomes an industry
 90's: rational agents, probabilistic reasoning, machine learning
 00's: systems integrating many AI methods, machine learning, reasoning under uncertainty, robotics again
Programming Without and With AI
Programming without and with AI differs in the following ways:

Programming Without AI:
• A computer program without AI can answer only the specific questions it is meant to solve.
• Modification of the program leads to a change in its structure.
• Modification is not quick or easy, and may affect the program adversely.

Programming With AI:
• A computer program with AI can answer the generic questions it is meant to solve.
• AI programs can absorb new modifications by putting highly independent pieces of information together; hence you can modify even a minute piece of information without affecting the program's structure.
• Program modification is quick and easy.
What is Intelligence Composed of?

• Reasoning
• Learning
• Problem Solving
• Perception
• Linguistic Intelligence
Difference between Human and Machine Intelligence

• Humans perceive by patterns, whereas machines perceive by sets of rules and data.
• Humans store and recall information by patterns; machines do it by search algorithms. For example, the number 40404040 is easy to remember, store, and recall because its pattern is simple.
• Humans can figure out a complete object even if some part of it is missing or distorted, whereas machines cannot do this correctly.
Task Classification of AI

Questions are welcome
Artificial Intelligence(AI)
Chapter Two : Intelligent Agent

Chapter Objectives

 Defining what an agent is in general
 Understanding the concept of rationality
 Giving ideas about agents, the agent function, the agent program and architecture, environment, percept, sensor, and actuator (effector)
 Giving ideas on how an agent should act
 Explaining agent types as well as agent environments
 Identifying ways of measuring agent success
 Describing rational agents, autonomous agents and omniscient agents
Understanding AI Intelligent Agent

What is an Agent?

– An agent is anything that can be viewed as perceiving (observing) its environment through sensors and acting upon that environment through effectors (actuators). Three types of agents:
• A human agent has eyes, ears, and other organs as sensors; and hands, legs, a mouth, and other organs as effectors.
• A robotic agent has cameras, sound recorders, and infrared range finders as sensors; and various motors as effectors.
• A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets.
What is…..contd
• We use the term percept to refer to the agent's perceptual inputs at any given instant.
• An agent's percept sequence is the complete history of everything the agent has ever perceived.
• In general, an agent's choice of action at any given instant can depend on the entire percept sequence observed to date.
• If we can specify the agent's choice of action for every possible percept sequence, then we have said more or less everything there is to say about the agent.
• Mathematically speaking, we say that an agent's behavior is described by the agent function that maps any given percept sequence to an action: f: P* → A
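The mapping f: P* → A can be sketched in Python as a small closure (the percepts and actions below are illustrative assumptions, not from the slides): the agent records every percept, and its choice of action may depend on the whole history.

```python
# Minimal sketch of an agent function f: P* -> A.
# Behaviour is a mapping from the percept sequence observed so far to an action.

def make_agent():
    percept_history = []          # the percept sequence observed to date

    def agent_function(percept):
        percept_history.append(percept)
        # The choice of action may depend on the entire percept sequence;
        # this toy policy only looks at the latest percept.
        if percept == "dirty":
            return "suck"
        return "move"

    return agent_function

f = make_agent()
print(f("clean"))   # move
print(f("dirty"))   # suck
```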

What is…..contd

Agents interact with environments through sensors and actuators.


What is…..contd
• Given an agent to experiment with, it is possible to construct a table by trying out all possible percept sequences and recording which actions the agent does in response.
• Sometimes the table may be infinite.
• The table is an external characterization of the agent.
• Internally, the agent function for an artificial agent will be implemented by an agent program.
• It is important to keep these two ideas distinct:
• The agent function is an abstract mathematical description.
• The agent program is a concrete implementation, running on the agent architecture.
Rule 1: ability to perceive the environment. Rule 2: observations are used to make decisions.
What is…..contd
• An intelligent agent perceives its environment via sensors and acts rationally upon that environment with its effectors.
• A discrete agent receives percepts one at a time, and maps this percept sequence to a sequence of discrete actions.
• Properties:
– Reactive to the environment
– Pro-active or goal-directed
– Interacts with other agents through communication or via the environment
– Autonomous
What is…..contd
• So, any agent consists of two parts:
– Agent architecture
– Agent program
• The architecture is the hardware and the program is the software.
• The role of the agent program is to implement the agent function.
• The agent function is a mapping from percept histories to actions.
Vacuum Cleaner World Example

What is…..contd
Ideal example of an agent: the vacuum-cleaner world
• Percepts: location and contents, e.g., [A, Dirty] (locations A or B; status Dirty or Clean)
 Actions: [Left, Right, Suck, DoNothing]

Partial tabulation of a simple agent function for the vacuum-cleaner world
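The partial tabulation can be rendered as a lookup table in Python. The entries below are an illustrative sketch of the usual vacuum-world tabulation, not the slide's exact table:

```python
# Table mapping percept sequences (tuples of (location, status) pairs)
# to actions, for the two-square vacuum-cleaner world.
TABLE = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
    # ... one entry per possible percept sequence (the table grows without bound)
}

def table_driven_agent(percepts):
    """Look up the full percept sequence and return the tabulated action."""
    return TABLE.get(tuple(percepts), "DoNothing")

print(table_driven_agent([("A", "Dirty")]))  # Suck
```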
What is…..contd
• A rational agent is one that does the right thing.
• Conceptually speaking, every entry in the table for the agent function is filled out correctly.
• Obviously, doing the right thing is better than doing the wrong thing, but what does it mean to do the right thing?
 Agents may be rational or human-like.
 We have seen that how humans act or think is difficult to understand, due to the complex structure of human intelligence.
 Our agent should therefore be designed from the rationality view, to act rationally.
What is…..contd
How should agents act?
 A rational agent is an agent that does the right thing for the data perceived from the environment.
 What is "right" is an ambiguous concept, but we can consider the right thing as the one that makes the agent more successful.
 Success is measured by a performance measure, which embodies the criterion for success of an agent's behavior.
 Question: how and when do you measure success in performance?
What is…..contd
 Performance measure (how?)
– Subjective measure, using the agent itself:
• How happy is the agent at the end of the action?
• The agent answers based on its own opinion.
• Some agents are unable to answer, some delude themselves, some overestimate, and some underestimate their success.
• Therefore, a subjective measure is not the better way.
– An objective measure imposed by some authority is the alternative.
– The selection of a performance measure is not always easy.
What is…..contd
 Objective measure
◦ Needs a standard to measure success
◦ Provides a quantitative value of an agent's success
◦ Involves the factors that affect performance and a weight for each factor
E.g., the performance measure of a vacuum-cleaner agent could include:
 amount of dirt cleaned up,
 amount of time taken,
 amount of electricity consumed,
 amount of noise generated, etc.
 The time factor in measuring performance is also important for success; it may include knowing the starting time, finishing time, duration of the job, etc.
• Which is better: an economy where everyone lives in moderate poverty, or one in which some live in plenty while others are very poor?
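A weighted objective measure combining the vacuum-cleaner factors above might look like this sketch; the weights are arbitrary assumptions chosen only for illustration:

```python
# Illustrative objective performance measure for a vacuum-cleaner agent:
# reward dirt cleaned, penalise time, electricity, and noise.
def performance(dirt_cleaned, time_taken, energy_used, noise):
    # The weights (10, 1, 2, 0.5) are arbitrary illustrative choices;
    # picking them is exactly the "selection is not always easy" problem.
    return 10 * dirt_cleaned - 1 * time_taken - 2 * energy_used - 0.5 * noise

print(performance(dirt_cleaned=5, time_taken=10, energy_used=3, noise=4))  # 32.0
```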
What is…..contd
 Omniscience versus rationality
– An omniscient agent is distinct from a rational agent.
– An omniscient agent knows the actual outcome of its actions and can act accordingly; this is impossible in reality.
– A rational agent is an agent that tries to achieve the most success from its decisions.
– A rational agent can make a mistake because of factors that are unpredictable at the time of making the decision.
– For each possible percept sequence, a rational agent should select an action that is expected to maximise its performance.
What is…..contd

• Rationality is not the same as perfection.
• Rationality maximizes expected performance, while perfection maximizes actual performance.
 An omniscient agent that acts and thinks rationally never makes a mistake.
 An omniscient agent is an idealized agent in the real world.
 Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration).
What is…..contd

Factors in measuring the rationality of agents:
1. The percept sequence perceived so far (do we have the entire history of how the world evolved or not?)
2. The set of actions the agent can perform (agents designed to do the same job with different action sets will have different performance)
3. The performance measure (is it subjective or objective? What are the factors and their weights?)
4. The agent's knowledge about the environment (what kind of sensors does the agent have? Does the agent know everything about the environment or not?)
What is…..contd
Ideal rational agent

◦ For each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

◦ Implementing an ideal rational agent requires perfection.

◦ In real situations, such an agent is difficult to achieve.

◦ Why do car accidents happen? Because drivers are not perfect agents.
What is…..contd
 Autonomy
◦ An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt).
◦ An agent lacks autonomy if its actions are based completely on built-in knowledge.
◦ Example: a student grade-deciding agent:
 Knowledge base given: rules for converting a numeric grade to a letter grade
 Case 1: the agent always follows the rules (lacks autonomy)
 Case 2: the agent modifies the rules by learning exceptions from the knowledge base as well as the grade distribution (autonomous)
What is…..contd
Structure of an Intelligent Agent
 The structure of an AI agent refers to the design of the intelligent agent program (the function that implements the agent mapping from percepts to actions), which will run on some sort of computing device called the architecture.
 This course focuses on the theory, design, and implementation of intelligent agent programs.
 The design of an intelligent agent needs prior knowledge of:
◦ the Performance measure or Goal the agent is supposed to achieve,
◦ what kind of Environment it operates in,
◦ what kind of Actuators it has (the possible Actions),
◦ what kind of Sensors it has (the possible Percepts).
 Performance measure, Environment, Actuators, Sensors are abbreviated as PEAS.
 Percepts, Actions, Goal, Environment are abbreviated as PAGE.
What is…..contd

Examples of agent structure and sample PEAS
 Agent: automated taxi driver
◦ Environment: roads, traffic, pedestrians, customers
◦ Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
◦ Actuators: steering wheel, accelerator, brake, signal, horn
◦ Performance measure: safe, fast, legal, comfortable trip; maximize profits
 Agent: medical diagnosis system
◦ Environment: patient, hospital, physicians, nurses, …
◦ Sensors: keyboard (percepts can be symptoms, findings, patient's answers)
◦ Actuators: screen display (actions can be questions, tests, diagnoses, treatments, referrals)
◦ Performance measure: healthy patient, minimized costs, no lawsuits
What is…..contd
Examples of agent structure and sample PEAS
 Agent: interactive English tutor
◦ Environment: set of students, testing agency
◦ Sensors: keyboard (typed words)
◦ Actuators: screen display (exercises, suggestions, corrections)
◦ Performance measure: maximize students' scores on tests
 Agent: satellite image analysis system
◦ Environment: images from orbiting satellite
◦ Sensors: pixels of varying intensity and color
◦ Actuators: print a categorization of the scene
◦ Performance measure: correct categorization
What is…..contd

Examples of agent structure and sample PEAS
 Agent: part-picking robot
 Environment: conveyor belt with parts, bins
 Sensors: pixels of varying intensity (camera, joint angle sensors)
 Actuators: pick up parts and sort them into bins (jointed arm and hand)
 Performance measure: parts placed in the correct bins
 An agent is completely specified by the agent function that maps percept sequences to actions.
 Aim: find a way to implement the rational agent function concisely.
What is…..contd

Agent programs
 Skeleton of the agent:
function SKELETON-AGENT(percept) returns action
  static: memory, the agent's memory of the world
  memory ← UPDATE-MEMORY(memory, percept)
  action ← CHOOSE-BEST-ACTION(memory)
  memory ← UPDATE-MEMORY(memory, action)
  return action
Note:
1. The function gets only a single percept at a time.
   Q: how does it get the percept sequence?
2. The goal or performance measure is not part of the skeleton.
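The skeleton above can be sketched directly in Python (the placeholder policy is an assumption; a real agent would implement CHOOSE-BEST-ACTION properly). It also shows how the percept sequence is obtained despite the function receiving one percept at a time: the sequence accumulates in memory.

```python
# Python rendering of the SKELETON-AGENT pseudocode (illustrative sketch).
class SkeletonAgent:
    def __init__(self):
        self.memory = []                               # the agent's memory of the world

    def step(self, percept):
        self.memory.append(("percept", percept))       # UPDATE-MEMORY with the percept
        action = self.choose_best_action()             # CHOOSE-BEST-ACTION(memory)
        self.memory.append(("action", action))         # UPDATE-MEMORY with the action
        return action

    def choose_best_action(self):
        # Placeholder policy: a real agent would consult self.memory here.
        return "NoOp"

agent = SkeletonAgent()
print(agent.step("dirty"))   # NoOp
```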
What is…..contd
Table-lookup agent
• A table-lookup agent stores all percept-sequence/action pairs in a table.
• For each percept sequence, this type of agent searches for the matching entry and returns the corresponding action.
• Table lookup is not a viable way to implement a successful agent. Why?
• Drawbacks:
– huge table
– takes a long time to build the table
– no autonomy
– even with learning, it needs a long time to learn the table entries
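The "huge table" drawback can be made concrete: with P possible percepts and a lifetime of T steps, the table needs an entry for every percept sequence of length up to T, i.e. P + P² + … + Pᵀ entries. A quick sketch:

```python
# Number of table entries for a table-lookup agent:
# one entry per percept sequence of each length 1..lifetime.
def table_size(num_percepts, lifetime):
    return sum(num_percepts ** t for t in range(1, lifetime + 1))

print(table_size(4, 3))    # 4 + 16 + 64 = 84
print(table_size(4, 20))   # already over a trillion entries
```

Even the tiny vacuum world (4 distinct percepts) blows up after a few dozen steps, which is why the table can take "a long time to build" and to learn.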
What is…..contd
Agent types
• Based on the memory of the agent and the way the agent takes actions, we can divide agents into five basic types (in increasing order of generality):
1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents
5. Learning agents
Notation in the diagrams:
• Rectangles represent the current internal state of the agent's decision process.
• Ovals represent the background information used in the process.
What is…..contd
Simple reflex agents
• The simplest type of agent.
• It uses a set of condition-action rules.
• It uses only the current percept.
• The rules are of the form "if this is the percept, then this is the best action", and do not depend on the rest of the percept history.
• They cannot make decisions about things they cannot directly perceive, i.e. they have no model of the state of the world.
• Simple reflex agents have the admirable property of being simple, but they turn out to be of very limited intelligence.
• This works only if the correct decision can be made on the basis of the current percept alone, that is, only if the environment is fully observable.
What is…..contd
Simple reflex agents

4/27/2024 AI/CSEg3206 85
What is…..contd

• Simple reflex agent function prototype

FUNCTION SIMPLE-REFLEX-AGENT(percept) RETURNS action
  static: rules, a set of condition-action rules
  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  RETURN action
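This condition-action loop can be sketched in Python for the two-square vacuum world; the rule set below is an illustrative assumption:

```python
# Sketch of a simple reflex agent: condition-action rules applied to the
# *current* percept only (no history, no internal state).

RULES = [
    (lambda state: state[1] == "Dirty", "Suck"),
    (lambda state: state[0] == "A", "Right"),
    (lambda state: state[0] == "B", "Left"),
]

def simple_reflex_agent(percept):
    state = percept                  # INTERPRET-INPUT is trivial here
    for condition, action in RULES:  # RULE-MATCH: first rule whose condition holds
        if condition(state):
            return action            # RULE-ACTION

print(simple_reflex_agent(("A", "Dirty")))  # Suck
print(simple_reflex_agent(("A", "Clean")))  # Right
```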

4/27/2024 AI/CSEg3206 86
What is…..contd

Model-Based Agents

– It is a more complex type of agent.
– Model-based agents maintain an internal model of the world, which is updated by percepts as they are received.
– In addition, they have built-in (prior) knowledge of how the world tends to evolve.
– They are not able to plan to achieve longer-term goals.
– They live in the present only and do not think about the future.

4/27/2024 AI/CSEg3206 87
What is…..contd
Model-based reflex agents (also called a reflex agent
with internal state)

4/27/2024 AI/CSEg3206 88
What is…..contd
Model-based reflex agents
• FUNCTION MODEL-BASED-AGENT(percept) RETURNS action
  static: state, a description of the current world state
          rules, a set of condition-action rules
  state ← UPDATE-STATE(state, percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  state ← UPDATE-STATE(state, action)
  RETURN action
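A Python sketch of this structure for the vacuum world, where the internal state tracks which squares are believed clean (the model representation and the NoOp behaviour are my own assumptions):

```python
# Model-based reflex agent sketch: an internal model of which squares are
# clean is updated from both percepts and the agent's own actions.

class ModelBasedVacuum:
    def __init__(self):
        self.model = {"A": "Unknown", "B": "Unknown"}  # internal world state

    def step(self, percept):
        loc, status = percept
        self.model[loc] = status                  # UPDATE-STATE from percept
        if status == "Dirty":
            action = "Suck"
        elif self.model["A"] == self.model["B"] == "Clean":
            action = "NoOp"                       # model says everything is clean
        else:
            action = "Right" if loc == "A" else "Left"
        if action == "Suck":
            self.model[loc] = "Clean"             # UPDATE-STATE from action
        return action

agent = ModelBasedVacuum()
print(agent.step(("A", "Dirty")))  # Suck
print(agent.step(("A", "Clean")))  # Right (B still unknown)
print(agent.step(("B", "Clean")))  # NoOp: the model now knows both are clean
```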

4/27/2024 AI/CSEg3206 89
What is…..contd
Goal-based agents
• Knowing about the current state of the environment is not always
enough to decide what to do.
• For example, at a road junction, the taxi can turn left, turn right, or
go straight on.
• The correct decision depends on where the taxi is trying to get to (its goal).
• That is, the agent needs some sort of goal information that
describes situations that are desirable
• It is a model-based, goal-based agent.
• It keeps track of the world state as well as a set of goals it is trying
to achieve, and chooses an action that will (eventually) lead to the
achievement of its goals
– Is it easy always?
4/27/2024 AI/CSEg3206 90
What is…..contd
Goal-based agents
Notice that decision making of this kind is fundamentally different from the condition-action rules described earlier, in that it involves consideration of the future: both "What will happen if I do such-and-such?" and "Will that make me happy?"

4/27/2024 AI/CSEg3206 91
What is…..contd
Goal-based agents structure
• FUNCTION GOAL-BASED-AGENT(percept) RETURNS action
  static: state, a description of the current world state
          goal, a description of the goal to achieve, possibly in terms of states
  state ← UPDATE-STATE(state, percept)
  actionSet ← POSSIBLE-ACTIONS(state)
  action ← ACTION-THAT-LEADS-TO-GOAL(actionSet)
  state ← UPDATE-STATE(state, action)
  RETURN action
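A sketch of ACTION-THAT-LEADS-TO-GOAL for a junction like the taxi example: the agent looks ahead in its model of the roads and returns an action that eventually reaches the goal. The tiny road map below (a fragment of the familiar Romania example) is an illustrative assumption:

```python
# Goal-based agent sketch: simulate each possible action against a model
# of the roads and pick one that leads toward the goal.

ROADS = {"Arad": {"left": "Zerind", "straight": "Sibiu"},
         "Sibiu": {"straight": "Fagaras"},
         "Fagaras": {"straight": "Bucharest"}}

def goal_based_agent(state, goal, depth=5):
    if state == goal or depth == 0:
        return None
    for action, result in ROADS.get(state, {}).items():   # POSSIBLE-ACTIONS
        # "What will happen if I do such-and-such?": look ahead in the model
        if result == goal or goal_based_agent(result, goal, depth - 1):
            return action             # ACTION-THAT-LEADS-TO-GOAL
    return None

print(goal_based_agent("Arad", "Bucharest"))  # straight (via Sibiu, Fagaras)
```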

4/27/2024 AI/CSEg3206 92
What is…..contd
Utility Based Agents
• Goals alone are not really enough to generate high-quality behavior in
most environments.
• Goal can be useful, but are sometimes too simplistic.
• Clearly there can be many actions that lead to a goal being achieved, but
some are better than others.
• Utility based agents deal with this by assigning a utility to each state of
the world.
– This utility defines how “happy” the agent will be in such a state.
• Goal-based agents make only a binary distinction between goal and non-goal states, which makes it difficult to express more complex "desires".
• Explicitly stating the utility function also makes it easier to define the desired behaviour of utility-based agents.
4/27/2024 AI/CSEg3206 93
What is…..contd

A complete utility-based agent

4/27/2024 AI/CSEg3206 94
What is…..contd
Utility-based agents structure
• FUNCTION UTILITY-BASED-AGENT(percept) RETURNS action
  static: state, a description of the current world state
          goal, a description of the goal to achieve, possibly in terms of states
  state ← UPDATE-STATE(state, percept)
  actionSet ← POSSIBLE-ACTIONS(state)
  action ← BEST-ACTION(actionSet)
  state ← UPDATE-STATE(state, action)
  RETURN action
Remark:
• A utility function maps a state (or a sequence of states) onto a real number, which describes the associated degree of happiness.
• A complete specification of the utility function allows rational decisions in two kinds of cases where goals are inadequate: when there are several conflicting goals, and when there are several goals none of which can be achieved with certainty.
4/27/2024 AI/CSEg3206 95
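A minimal sketch of BEST-ACTION: among actions whose resulting states are all acceptable, pick the one with the highest utility. The utilities and action names below are invented for illustration:

```python
# Utility-based selection sketch: a utility function maps resulting
# states to real numbers; BEST-ACTION maximizes it.

UTILITY = {"short_road": 8.0, "scenic_road": 9.5, "toll_road": 3.0}

def best_action(action_results):
    # action_results: {action: resulting state}
    return max(action_results, key=lambda a: UTILITY[action_results[a]])

choices = {"turn_left": "toll_road",
           "straight": "short_road",
           "turn_right": "scenic_road"}
print(best_action(choices))  # turn_right: highest-utility resulting state
```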
What is…..contd

Learning Agents
• In many areas of AI, this is now the preferred method for
creating state-of-the-art systems
• A learning agent can be divided into four conceptual
components
• Learning Element
– Suggesting improvements to any part of the performance
element.
– The input to the learning element comes from the
Critic.(on how the agent is doing and determines how the
performance element should be modified to do better in the
future)
4/27/2024 AI/CSEg3206 96
What is…..contd

• Learning Agent
• Performance element
– Responsible for selecting external actions(it takes in percepts
and decides on actions)
• Critic
  – Analyses incoming percepts and decides whether the actions of the agent have been good or not.
  – To decide this it uses an external performance standard.
• Problem Generator
– Responsible for suggesting actions that will result in new
knowledge about the world being acquired.

4/27/2024 AI/CSEg3206 97
What is…..contd
Learning agents

4/27/2024 AI/CSEg3206 98
Types of environments:

An environment in artificial intelligence is the surrounding of


the agent. The agent takes input from the environment through
sensors and delivers the output to the environment through
actuators. There are several types of environments:

Fully Observable vs Partially Observable


Deterministic vs Stochastic
Competitive vs Collaborative
Single-agent vs Multi-agent
Static vs Dynamic
Discrete vs Continuous
Episodic vs Sequential
Known vs Unknown

4/27/2024 AI/CSEg3206 99
Fully Observable vs Partially-Observable
In a fully observable environment, The Agent is familiar with the complete
state of the environment at a given time. There will be no portion of the
environment that is hidden for the agent.

Real-life Example: While running a car on the road ( Environment ), The


driver ( Agent ) is able to see road conditions, signboard and pedestrians on
the road at a given time and drive accordingly. So Road is a fully observable
environment for a driver while driving the car.

In a partially observable environment, The agent is not familiar with the


complete environment at a given time.

Real-life Example: Playing card games is a perfect example of a partially observable environment, where a player cannot see the cards in the opponent's hand. Why only partially observable? Because other parts of the environment, e.g. the player's own hand and the cards on the table, are known to the player (Agent).

4/27/2024 AI/CSEg3206 100


Deterministic vs Stochastic

Deterministic are the environments where the next state is


observable at a given time. So there is no uncertainty in the
environment.

Real-life Example: The traffic signal is a deterministic environment


where the next signal is known for a pedestrian (Agent)

The Stochastic environment is the opposite of a deterministic


environment. The next state is totally unpredictable for the agent. So
randomness exists in the environment.

Real-life Example: A radio station is a stochastic environment for the listener, who does not know which song will play next; playing soccer is likewise a stochastic environment.

4/27/2024 AI/CSEg3206 101


Episodic vs Sequential

Episodic is an environment where each state is independent


of each other. The action on a state has nothing to do with the
next state.

Real-life Example: A support bot (agent) answers one question, then another, and so on. Each question-answer pair is a single episode.

The sequential environment is an environment where the


next state is dependent on the current action. So agent current
action can change all of the future states of the environment.

Real-life Example: Playing tennis is a perfect example where a


player observes the opponent’s shot and takes action.

4/27/2024 AI/CSEg3206 102


Static vs Dynamic

The Static environment is completely unchanged while an agent is perceiving it.

Real-life Example: A room (Environment) being cleaned by a robot cleaner (Agent) is an example of a static environment: the room stays the same while it is being cleaned.

A Dynamic environment can change while an agent is perceiving it, so the agent must keep observing the environment while taking action.

Real-life Example: Playing soccer is a dynamic environment where the players' positions keep changing throughout the game, so a player hits the ball while observing the opposing team.

4/27/2024 AI/CSEg3206 103


Discrete vs Continuous
• Discrete Environment consists of a finite number of states
and agents have a finite number of actions.

• Real-life Example: The choices of a move (action) in a tic-tac-toe game are finite, on a finite number of boxes on the board (Environment).

• While in a Continuous environment, the environment can


have an infinite number of states. So the possibilities of
taking an action are also infinite.

• Real-life Example: In a basketball game, the position of


players (Environment) keeps changing continuously and
hitting (Action) the ball towards the basket can have
different angles and speed so infinite possibilities.
4/27/2024 AI/CSEg3206 104
Single Agent vs Multi-Agent
• Single agent environment where an environment is explored by a single
agent. All actions are performed by a single agent in the environment.

• Real-life Example: Playing tennis against the ball is a single agent


environment where there is only one player.

• If two or more agents are taking actions in the environment, it is known as


a multi-agent environment.

• Real-life Example: Playing a soccer match is a multi-agent environment.

Known vs Unknown :-
• In a known environment, the output for all probable actions is given.
Obviously, in case of unknown environment, for an agent to make a
decision, it has to gain knowledge about how the environment works.

4/27/2024 AI/CSEg3206 105


Types of Environment contd..
• Based on the portion of the environment observable
• Fully observable: An agent's sensors give it access to the complete state of the environment at each point in time (chess vs. driving). Ex:
  – Chess: the board is fully observable, and so are the opponent's moves.
  – Driving: the environment is partially observable because what's around the corner is not known.
• Partially observable
• Fully unobservable
• Based on the effect of the agent action
– Deterministic : The next state of the environment is completely
determined by the current state and the action executed by the agent.
– Strategic: If the environment is deterministic except for the actions of
other agents, then the environment is strategic
– Stochastic or probabilistic environment is the opposite of a deterministic
environment. The next state is totally unpredictable for the agent. So randomness
exists in the environment.
4/27/2024 AI/CSEg3206 106
Types of Environment
• Types of Environment
• Based on the number of agents involved
– Single agent A single agent operating by itself in an
environment.
– Multi-agent: multiple agents are involved in the
environment(Chess(Competitive) versus Taxi(Cooperative or
partially competitive)
• Based on the state, action and percept space pattern
– Discrete: A limited number of distinct, clearly defined state,
percepts and actions.
– Continuous: state, percept and action are consciously changing
variables
– Note: one or more of them can be discrete or continuous
4/27/2024 AI/CSEg3206 107
What is…..contd
Types of Environment cont …
• Based on the effect of time
– Static: The environment is unchanged while an agent is
deliberating.
– Dynamic: The environment changes while an agent is not
deliberating.
– semi-dynamic: The environment is semi-dynamic if the
environment itself does not change with the passage of time but the
agent's performance score does
• Based on loosely dependent sub-objectives
– Episodic: The agent's experience is divided into atomic "episodes"
(each episode consists of the agent perceiving and then performing a
single action), and the choice of action in each episode depends only
on the episode itself.
– Sequential: The current decision could affect all future decisions; the agent's experience is not divided into independent episodes.
4/27/2024 AI/CSEg3206 108
What is…..contd

Environment Types: Example


Environment        Chess with a clock   Chess w/out a clock   Taxi Driving
Fully Observable   Yes                  Yes                   No
Deterministic      Strategic            Strategic             No
Episodic           No                   No                    No
Static             Semi                 Yes                   No
Discrete           Yes                  Yes                   No
Single Agent       No                   No                    No

4/27/2024 AI/CSEg3206 109


What is…..contd

Remark:
• The environment type largely determines the agent design
• The real world is (of course) partially observable, stochastic,
sequential, dynamic, continuous, multi-agent
• As one might expect, the hardest case is partially observable,
stochastic, sequential, dynamic, continuous, and multi agent.
• It also turns out that most real situations are so complex that
whether they are really deterministic is a moot point.
• For practical purposes, they must be treated as stochastic. Taxi
driving is hard in all these senses.

4/27/2024 AI/CSEg3206 110


Types of Environment cont..
Types of Environment cont …

4/27/2024 AI/CSEg3206 111


Summary of Unit-II
• An agent perceives and acts in an environment, has an architecture, and is
implemented by an agent program
• An ideal agent always chooses the action which maximizes its expected
performance, given its percept sequence so far
• An autonomous agent uses its own experience rather than built-in
knowledge of the environment by the designer
• An agent program maps from percept to action and updates its internal
state
– Reflex agents respond immediately to percepts
– Model-based reflex agents maintain internal state to track aspects of the
world that are not evident in the current percept
– Goal-based agents act in order to achieve their goal(s)
– Utility-based agents maximize their own utility function(“happiness”)
– All agents types can increase their performance through learning
• Representing knowledge is important for successful agent design
• The most challenging environments are partially observable, stochastic,
sequential, dynamic, and continuous, and contain multiple intelligent
agents.
4/27/2024 AI/CSEg3206 112
End of Unit-II

Questions are
Welcome

4/27/2024 AI/CSEg3206 113


Artificial Intelligence

Chapter III
(Problem Solving: Uninformed Search)

4/27/2024 AI/CSEg3206 114


Objectives

Identify the type of agent that solve problem by searching


Problem formulation and goal formulation
Types of problem based on environment type
Discuss various techniques of search strategies(Uninformed
Search)

4/27/2024 AI/CSEg3206 115


Problem…contd

• Four general steps in problem solving:


– Goal formulation
• What are the successful world states
– Problem formulation
• What actions and states to consider given the goal
– Search
• Determine the possible sequence of actions that lead to
the states of known values and then choosing the best
sequence.
– Execute
• Give the solution perform the actions.

4/27/2024 AI/CSEg3206 116


4/27/2024 AI/CSEg3206 117
Problem-solving agent
function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
  static: seq, an action sequence, initially empty
          state, some description of the current world state
          goal, a goal, initially null
          problem, a problem formulation

  state ← UPDATE-STATE(state, percept)
  if seq is empty then
    goal ← FORMULATE-GOAL(state)
    problem ← FORMULATE-PROBLEM(state, goal)
    seq ← SEARCH(problem)
  action ← FIRST(seq)
  seq ← REST(seq)
  return action

• A simple problem-solving agent.
• It first formulates a goal and a problem, searches for a sequence of actions that would solve the problem, and then executes the actions one at a time.
• When this is complete, it formulates another goal and starts over.
• Note that when it is executing the sequence it ignores its percepts: it assumes that the solution it has found will always work.
Problem…contd

• A problem can be defined formally by four components


1. The initial state(the agent starts in)
2. A description of the possible actions(uses successor
function)
3. The goal test which determines whether a given state is a
goal state
4. A path cost is function that assigns a numeric cost to each
path(distance, etc)
• Together, the initial state and successor function implicitly define the
state space of the problem-the set of all states reachable from the
initial state.
• Path in the state space is a sequence of states connected by a
sequence of actions.
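The four components can be gathered into one small data structure. The sketch below is illustrative (the class name and the toy route-finding graph, with distances taken from the familiar Romania example, are assumptions):

```python
# A generic problem definition with the four components from the text:
# initial state, successor function, goal test, and path cost.

class Problem:
    def __init__(self, initial, successors, goal):
        self.initial = initial
        self.successors = successors   # state -> [(action, next_state, step_cost)]
        self.goal = goal

    def goal_test(self, state):
        return state == self.goal

    def path_cost(self, path):         # path: list of (action, state, step_cost)
        return sum(cost for _, _, cost in path)

graph = {"Arad": [("go-Zerind", "Zerind", 75), ("go-Sibiu", "Sibiu", 140)],
         "Sibiu": [("go-Bucharest", "Bucharest", 211)]}
p = Problem("Arad", lambda s: graph.get(s, []), "Bucharest")
path = [("go-Sibiu", "Sibiu", 140), ("go-Bucharest", "Bucharest", 211)]
print(p.goal_test(path[-1][1]), p.path_cost(path))  # True 351
```

A solution with the lowest such path cost among all solutions is an optimal solution.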

4/27/2024 AI/CSEg3206 119


Problem…contd
– A solution to a problem is a path from the initial state to a goal
state. Solution quality is measured by the path cost function, and
an optimal solution has the lowest path cost among all
solutions.
• Type of agent that solve problem by searching
– Such agent is not reflex or model based reflex agent because
this agent needs to achieve some target (goal)
– It can be goal based or utility based or learning agent
– Intelligent agent knows that to achieve certain goal, the
state of the environment will change sequentially and the
change should be towards the goal
– Intelligent agents are supposed to maximize their
performance measure
4/27/2024 AI/CSEg3206 120
Problem…contd
• Assume a problem is to reach specified place(location) as it is
indicated on the following slide
– A problem is defined by:
• An initial state, e.g. Arad
• Successor function S(x) = set of action-state pairs
  – e.g. S(Arad) = {<Arad → Zerind, Zerind>, …}
  – initial state + successor function = state space
• Goal test, can be
– Explicit, e.g. x=‘at bucharest’
– Implicit, e.g. checkmate(x)
• Path cost (additive)
– e.g. sum of distances, number of actions executed, …
– c(x,a,y) is the step cost, assumed to be >= 0
A solution is a sequence of actions from initial to goal state.
Optimal solution has the lowest path cost.
4/27/2024 AI/CSEg3206 121
Problem…contd

States

Actions

Start Solution

Goal

4/27/2024 AI/CSEg3206 123


Problem solving agent example1.

4/27/2024 AI/CSEg3206 128


Problem…contd

• In the preceding section we proposed a formulation of the problem


of getting to Bucharest in terms of the initial state, successor
function, goal test, and path cost
• This formulation seems reasonable, yet it omits a great many aspects
of the real world. Real world is absurdly complex.
– The state of the world(state description) includes so many things
for example , :
• The traveling companions,
• What is on the radio,
• The scenery out of the window,
• Whether there are any law enforcement officers nearby,
• How far it is to the next rest stop,
• The condition of the road,
• The weather, and so on.
Problem…contd

– All these considerations are left out of our state descriptions


because they are irrelevant to the problem of finding a route to
Bucharest.
– The process of removing detail from a representation is
called abstraction.
– In addition to abstracting the state description, we must abstract
the actions themselves.
• A driving action has many effects.
– Besides changing the location of the vehicle and its occupants, it takes
up time, consumes fuel, generates pollution, and changes the agent (as
they say, travel is broadening).
– In formulation, in our example, we take into account only the change
in location.
4/27/2024 AI/CSEg3206 132
Problem…contd

• Problem formulation:
– For vacuum world problem, the problem formulation involve:
• States: The agent is in one of two locations, each of which
might or might not contain dirt. Thus there are 2 x 2^2 = 8
possible world states.
• Initial state: Any state can be designated as the initial state.
• Successor function: This generates the legal states that result
from trying the three actions (Left, Right, and Suck).
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number
of steps in the path.
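This formulation can be written out as a small sketch; the state encoding (agent location plus a frozenset of dirty squares) is my own choice, not from the slides:

```python
# The vacuum-world formulation made concrete: 8 states, three actions,
# goal test "all squares clean", unit step cost.

ACTIONS = ["Left", "Right", "Suck"]

def successor(state):
    loc, dirt = state                  # dirt: frozenset of dirty squares
    return {
        "Left": ("A", dirt),           # Left from A is a no-op, as in the slides
        "Right": ("B", dirt),
        "Suck": (loc, dirt - {loc}),
    }

def goal_test(state):
    return not state[1]                # no dirty squares left

states = [(loc, frozenset(d)) for loc in "AB"
          for d in ({}, {"A"}, {"B"}, {"A", "B"})]
print(len(states))                     # 2 x 2^2 = 8 possible world states

s = ("A", frozenset({"A", "B"}))       # Suck, then Right, then Suck:
s = successor(s)["Suck"]
s = successor(s)["Right"]
s = successor(s)["Suck"]
print(goal_test(s))                    # True: path cost 3 (three unit steps)
```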
4/27/2024 AI/CSEg3206 133
Problem…contd

• Goal formulation: refers to the understanding


of the objective of the agent based on the state
description of the final environment
• For example, for the vacuum world problem,
the goal can be formulated as
[Clean, Clean, agent at any square]

4/27/2024 AI/CSEg3206 134


Problem…contd
• In problem solving by searching, solution can be described into
two ways.
• Solution can be provided as state sequence or action
sequence
• For example consider the vacuum cleaner world with initial
state as shown bellow
• Solution as state sequence becomes:

Suck Move Right
In general, an agent with several immediate options of unknown
value can decide what to do by first examining different possible
sequences of actions that lead to states of known value, and then
choosing the best sequence.
4/27/2024 AI/CSEg3206 135
Problem…contd

– This process of looking for such a sequence is called search.


– A search algorithm takes a problem as input and returns a
solution in the form of an action sequence.
– Once a solution is found, the actions it recommends can be
carried out. This is called the execution phase.
– Thus, we have a simple "formulate, search, execute" design
for the agent

4/27/2024 AI/CSEg3206 136


Problem…contd

Agent Program

4/27/2024 AI/CSEg3206 137


Problem…contd
Example: Road map of Ethiopia
[Figure: road map of Ethiopia. Cities such as Aksum, Mekele, Lalibela, Bahr Dar, Dessie, Debre Markos, Dire Dawa, Addis Ababa, Adama, Gambela, Nekemt and Awasa are connected by roads labelled with distances (roughly 80-430 km).]
4/27/2024 AI/CSEg3206 138
Problem…contd

Example: Road map of Ethiopia


• Current position of the agent(Initial State): Awasa.
• Needs to arrive to: Gondar
• Formulate goal: be in Gondar
• Formulate problem:
– states: various cities
– actions: drive between cities
• Find solution:
– sequence of cities, e.g., Awasa, Adama, Addis Ababa, Dessie,
Godar

4/27/2024 AI/CSEg3206 139


Problem…contd

Types of Problems
• Four types of problems exist in the real situations:
1. Single-state problem
– The environment is Deterministic and fully observable
– Out of the possible state space, agent knows exactly which
state it will be in; solution is a sequence
2. Sensorless problem (conformant problem)
– The environment is non-observable
– It is also called a multi-state problem
– The agent may have no idea where it is; solution is a sequence

4/27/2024 AI/CSEg3206 140


Problem…contd

3. Contingency problem
– The environment is nondeterministic and/or partially
observable
– It is not possible to know the effect of the agent action
– percepts provide new information about current state
4. Exploration problem
– The environment is partially observable
– It is also called unknown state space

4/27/2024 AI/CSEg3206 141


Problem…contd
• Problem types as a summary

Environment Type                               Problem Type
Deterministic, fully observable                Single-state problem
Non-observable, known state space              Sensorless/conformant problem
Nondeterministic and/or partially observable   Contingency problem
Partially observable, unknown state space      Exploration problem

4/27/2024 AI/CSEg3206 142


Vacuum Cleaner World Example.

4/27/2024 AI/CSEg3206 143


Problem…contd

Example: vacuum world

Single-state
– The starting state is known, say #5.
– What is the Solution?

4/27/2024 AI/CSEg3206 144


Problem…contd

Example: vacuum world


• Single-state, start in #5.
Solution? [Right, Suck]

4/27/2024 AI/CSEg3206 145


Problem…contd

Example: vacuum world


Sensorless,
– It doesn’t know what the
current state is
– So the current start is either of
the following: {1,2,3,4,5,6,7,8}
– What is the Solution?

4/27/2024 AI/CSEg3206 146


Problem…contd

Example: vacuum world

Sensorless
• Right goes to {2,4,6,8}
• Solution? [Right, Suck, Left, Suck]

4/27/2024 AI/CSEg3206 147


Problem…contd

Example: vacuum world


• Contingency
– Nondeterministic:
• Suck may dirty a clean carpet
– Partially observable:
• Hence we have partial information
– Let’s assume the current percept is: [L,
Clean] i.e. start in #5 or #7
– What is the Solution?

4/27/2024 AI/CSEg3206 148


Problem…contd

Example: vacuum world

• Contingency Solution
[Right, if dirt then Suck] Move right

suck

4/27/2024 AI/CSEg3206 149


Problem…contd

• Real-world Problems to be solved by searching algorithms


– We have seen two such problems:
• The road map problem and the vacuum cleaner world
problem
• Route finding
• Touring problems
• VLSI layout
• Robot Navigation
• Automatic assembly sequencing
• Drug design
• Internet searching
4/27/2024 AI/CSEg3206 150
Problem…contd

Example: vacuum world

• States??
• Initial state??
• Actions??
• Goal test??
• Path cost??

4/27/2024 AI/CSEg3206 151


Problem…contd

Example: vacuum world

• States?? two locations with or without dirt: 2 × 2^2 = 8 states.


• Initial state?? Any state can be initial
• Actions?? {Left, Right, Suck}
• Goal test?? Check whether squares are clean.
• Path cost?? Number of actions to reach goal.

4/27/2024 AI/CSEg3206 152


Problem…contd

Example: 8-puzzle

• States??
• Initial state??
• Actions??
• Goal test??
• Path cost??

4/27/2024 AI/CSEg3206 153


Problem…contd

Example: 8-puzzle

• States?? Integer location of each tile


• Initial state?? Any state can be initial
• Actions?? {Left, Right, Up, Down}
• Goal test?? Check whether goal configuration is reached
• Path cost?? Number of actions to reach goal
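The successor function implied by this formulation can be sketched as follows; encoding a state as a 9-tuple with 0 for the blank is my own choice:

```python
# 8-puzzle successor sketch: an action slides the blank Left/Right/Up/Down
# when the move is legal on the 3x3 board.

MOVES = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}

def successors(state):
    blank = state.index(0)
    result = {}
    for action, delta in MOVES.items():
        target = blank + delta
        if target < 0 or target > 8:
            continue                       # off the board vertically
        if action in ("Left", "Right") and target // 3 != blank // 3:
            continue                       # would wrap around a row edge
        s = list(state)
        s[blank], s[target] = s[target], s[blank]
        result[action] = tuple(s)
    return result

start = (8, 2, 0, 3, 4, 7, 5, 1, 6)        # initial state from the slide
print(sorted(successors(start)))           # ['Down', 'Left']: blank in a corner
```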

4/27/2024 AI/CSEg3206 154


Problem…contd

Example: 8-puzzle

8 2 _        1 2 3
3 4 7        4 5 6
5 1 6        7 8 _

Initial state        Goal state

4/27/2024 AI/CSEg3206 155


Problem…contd
Example: 8-puzzle

[Figure: part of the 8-puzzle search tree. The initial state (8 2 _ / 3 4 7 / 5 1 6) is expanded into the successor states produced by the legal moves of the blank.]

4/27/2024 AI/CSEg3206 156


Problem…contd

Example: 8-puzzle
Size of the state space = 9!/2 = 181,440  →  about 0.18 sec
15-puzzle  →  about 0.65 × 10^12 states  →  about 6 days
24-puzzle  →  about 0.5 × 10^25 states  →  about 12 billion years
(assuming 10 million states examined per second)

4/27/2024 AI/CSEg3206 157


EPP Example.1

4/27/2024 AI/CSEg3206 158


Problem…contd

Example: 8-queens
Place 8 queens in a chessboard so that no two queens
are in the same row, column, or diagonal.

A solution Not a solution

4/27/2024 AI/CSEg3206 159


Problem…contd

Example: 8-queens problem

Incremental formulation vs. complete-state formulation


• States??
• Initial state??
• Actions??
• Goal test??
• Path cost??

4/27/2024 AI/CSEg3206 160


Problem…contd

Example: 8-queens

Formulation #1:
• States: any arrangement of 0 to 8 queens on
the board
• Initial state: 0 queens on the board
• Actions: add a queen in any square
• Goal test: 8 queens on the board, none
attacked
• Path cost: none

 64^8 states with 8 queens

4/27/2024 AI/CSEg3206 161


Problem…contd

Example: 8-queens
Formulation #2:
• States: any arrangement of k = 0 to 8
queens in the k leftmost columns with
none attacked
• Initial state: 0 queens on the board
• Successor function: add a queen to any
square in the leftmost empty column such
that it is not attacked by any other
queen
• Goal test: 8 queens on the board

 2,057 states
4/27/2024 AI/CSEg3206 162
Problem…contd

Example: robot assembly

• States??
• Initial state??
• Actions??
• Goal test??
• Path cost??

4/27/2024 AI/CSEg3206 163


Problem…contd

Example: robot assembly

• States?? Real-valued coordinates of robot joint angles; parts


of the object to be assembled.
• Initial state?? Any arm position and object configuration.
• Actions?? Continuous motion of robot joints
• Goal test?? Complete assembly (without robot)
• Path cost?? Time to execute

4/27/2024 AI/CSEg3206 164


Problem…contd
Searching For Solution (Tree search algorithms)
• Given state space, and network of states via actions.
• The network structure is usually a graph
• Tree is a network in which there is exactly one path defined
from the root to any node
• Given state S and valid actions being at S
– the set of next state generated by executing each action is
called successor of S
• Searching for solution is a simulated exploration of state space
by generating successors of already-explored states

4/27/2024 AI/CSEg3206 165


Problem…contd
Searching For Solution (Tree search algorithms)

• A state is a (representation of) a physical configuration


• A node is a data structure constituting part of a search tree
– It includes:
• state,
• parent node,
• action,
• depth and
• one or more costs [like path cost g(x), heuristic cost h(x),
evaluation function cost f(x)]
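The node data structure just listed can be sketched as a small class; the field names follow the text (g for path cost), and the city/action names are illustrative:

```python
# A search-tree node: wraps a state and records parent, action, depth,
# and path cost g accumulated from the root.

class Node:
    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state = state
        self.parent = parent
        self.action = action
        self.depth = 0 if parent is None else parent.depth + 1
        self.g = step_cost if parent is None else parent.g + step_cost

    def path(self):
        # actions from the root down to this node, via the parent links
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))

root = Node("Arad")
child = Node("Sibiu", parent=root, action="go-Sibiu", step_cost=140)
print(child.depth, child.g, child.path())  # 1 140 ['go-Sibiu']
```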

4/27/2024 AI/CSEg3206 166


Problem…contd

Searching For Solution (Tree search algorithms)

• The SUCCESSOR-FN generates all the successor states, together with the action that moves the current state into each successor state
• The EXPAND function creates new nodes, filling in the various fields of the node using the information given by the SUCCESSOR-FN and the input parameters
4/27/2024 AI/CSEg3206 167
Problem…contd

Searching For Solution (Tree search algorithms)


• A search process can be viewed as building a search tree over
the state space
• Search tree is a tree structure defined by initial state and a
successor function.
• Search(root) Node is the root of the search tree representing
initial state and without a parent.
• A child node is a node adjacent to the parent node obtained by
applying an operator or rule.

4/27/2024 AI/CSEg3206 168


Problem…contd
Tree search example
[Figure: partial search tree for the route-finding problem, rooted at Awasa. Expanding Awasa yields Adama and Addis Ababa; expanding those yields their neighbouring cities (Dire Dawa, Nekemt, Debre Markos, Gambela, Dessie, …), and so on until Gondar is reached.]

4/27/2024 AI/CSEg3206 169


Problem…contd
Implementation: general tree search

4/27/2024 AI/CSEg3206 170


Problem…contd
Search strategies
 A search strategy is defined by picking the order of node expansion
 Strategies are evaluated along the following dimensions:

– completeness: does it always find a solution if one exists?

– time complexity: number of nodes generated

– space complexity: maximum number of nodes in memory

– optimality: does it always find a least-cost solution?


 Time and space complexity are measured in terms of
– b: maximum branching factor of the search tree
– d: depth of the least-cost solution
– m: maximum depth of the state space (may be ∞)
 Generally, searching strategies can be classified in to two as uninformed and informed
search strategies

4/27/2024 AI/CSEg3206 171


Uninformed search (blind search) strategies
 Uninformed search strategies (Blind Search)
– use only the information available in the problem definition
– They have no information about the number of steps or the path cost from the
current state to the goal
– They can distinguish the goal state from other states
– They are still important because there are problems with no additional information.
 Six kinds of such search strategies will be discussed and each depends on the order of
expansion of successor nodes.
1. Breadth-first search
2. Uniform-cost search
3. Depth-first search
4. Depth-limited search
5. Iterative deepening search
6. Bidirectional search

4/27/2024 AI/CSEg3206 172


Uninformed search (blind search) strategies

[Figure: a search tree with branching factor b, maximum depth m, and a goal node G.]

4/27/2024 AI/CSEg3206 173


Generating action sequences- search trees
• Leaf Node: a node without successors (children), either because it has not yet been expanded or because its expansion produced no new nodes.

• Depth (d): of a node is the number of actions required to


reach it from the initial state.

• Frontier or Fringe Nodes: are the collection of nodes


that are waiting to be expanded.

• Path cost: of a node is the total cost leading to this node.


• Branch Factor(b): Max. number of successors for any node.
4/27/2024 AI/CSEg3206 174
Breadth-first search

– Uses no prior information or knowledge
– It keeps track of all nodes, because it cannot tell whether a given node leads to a goal or not
– Keeps trying until it finds a solution
– All nodes are expanded, starting from the root node
– That is, it is a simple strategy in which
• the root node is expanded first,
• then all the successors of the root node are expanded next,
• then their successors, and so on.



Breadth…contd

In general, all the nodes at a given depth in the search tree are expanded before any nodes at the next level are expanded.

 That is, BFS expands all nodes at level d before expanding any nodes at level d+1

 It checks all paths of a given length before moving to any longer path

 Expands the shallowest unexpanded node first



Breadth…contd

•[Figure not reproduced: the progress of the search on a simple binary tree, showing the BFS trees after 0, 1, 2, 3, and 4 node expansions.]
Breadth…contd

[Figure: a breadth-first traversal of the example search tree.]
• Move downwards, level by level, until the goal is reached.
Breadth…contd
Algorithm for Breadth-first search(FIFO)
• Blind search in which the list of nodes is a queue (FIFO)
• To solve a problem using breadth-first search:
1. Set L to be a list containing only the initial node of the problem.
2. If L is empty, return failure; otherwise pick the first node n from L.
3. If n is a goal state, quit and return the path from the initial node to n.
4. Otherwise remove n from L and add all of n's children to the end of L. Label each child with its path from the initial node.
5. Return to step 2.
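The five steps above can be sketched in Python, with the queue of paths playing the role of the list L. The adjacency map below is a hypothetical example graph, loosely modelled on the S…G tree used in these slides:

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Return the shallowest path from start to goal, or None.
    `graph` maps each node to the list of its successors."""
    frontier = deque([[start]])            # FIFO queue of paths (the list L)
    explored = set()
    while frontier:
        path = frontier.popleft()          # step 2: pick the first node
        node = path[-1]
        if node == goal:                   # step 3: goal test
            return path
        if node in explored:
            continue
        explored.add(node)
        for child in graph.get(node, []):  # step 4: children go to the back
            if child not in explored:
                frontier.append(path + [child])
    return None                            # step 2: L empty -> failure

# hypothetical example graph
graph = {'S': ['A', 'D'], 'A': ['B', 'D'], 'D': ['A', 'E'],
         'B': ['C', 'E'], 'E': ['B', 'F'], 'C': ['D', 'F'], 'F': ['G']}
print(breadth_first_search(graph, 'S', 'G'))   # ['S', 'D', 'E', 'F', 'G']
```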



Breadth....contd
Algorithm for Breadth-first search(FIFO)
• BFS can be implemented using a queuing function that puts the newly generated states at the end of the queue, after all previously generated states

1. QUEUE <-- path only containing the root;

2. WHILE QUEUE is not empty AND goal is not reached
   DO remove the first path from the QUEUE;
      create new paths (to all children);
      reject the new paths with loops;
      add the new paths to the back of QUEUE;

3. IF goal reached
   THEN success;
   ELSE failure;



Breadth…contd

Properties of breadth-first search

• Complete? Yes (if b is finite, which is true in most cases)
• Time? 1 + b + b^2 + b^3 + … + b^d = O(b^(d+1))
  – at depth i, there are b^i nodes expanded, for i ≤ d
• Space? O(b^d) (keeps every node in memory)
  – up to this many nodes may be held in memory on the way to the goal node
  – This is a major problem for real problems
• Optimal? Yes (if cost = constant (k) per step)
• Space is the bigger problem (more than time)



Breadth....contd
Using the same hypothetical state space, the table below shows the time and memory required by BFS with branching factor b = 10 for various values of the solution depth d (the figures imply roughly 1000 nodes expanded per second and 100 bytes per node):

Depth   Nodes     Time             Memory
0       1         1 millisecond    100 bytes
2       111       0.1 second       11 kilobytes
4       11,111    11 seconds       1 megabyte
6       10^6      18 minutes       111 megabytes
8       10^8      31 hours         11 gigabytes
10      10^10     128 days         1 terabyte
12      10^12     35 years         111 terabytes
14      10^14     3500 years       11,111 terabytes


Depth-first Search (DFS)

• Pick one of the children at every node visited, and work forward from that child
• Always expands the deepest node reached so far (and therefore searches one path down to a leaf before backing up to any other path)
• Thus, it finds the leftmost solution



Depth-first …..contd)

Depth-first search- Chronological backtracking

[Figure: a depth-first traversal of the example search tree rooted at S, following the leftmost path S, A, B, C, … down to the goal G.]
• Select a child (convention: left-to-right, or maybe alphabetical order)
• Repeatedly go to the next child, as long as possible.
• Return to the left-over alternatives (higher up) only when needed.



Depth-first …..contd)
Depth-first search(LIFO) algorithm

1. STACK <-- path only containing the root;

2. WHILE STACK is not empty AND goal is not reached
   DO remove the first path from the STACK;
      create new paths (to all children);
      reject the new paths with loops;
      add the new paths to the front of STACK;

3. IF goal reached
   THEN success;
   ELSE failure;
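The stack-based algorithm above can be sketched in Python (the example adjacency map is hypothetical, not from the slides):

```python
def depth_first_search(graph, start, goal):
    """Return the leftmost path from start to goal found by DFS, or None.
    `graph` maps each node to the list of its successors."""
    frontier = [[start]]                   # LIFO stack of paths
    while frontier:
        path = frontier.pop()              # take the deepest path so far
        node = path[-1]
        if node == goal:
            return path
        # push children in reverse so the leftmost child is expanded first
        for child in reversed(graph.get(node, [])):
            if child not in path:          # reject paths with loops
                frontier.append(path + [child])
    return None

# hypothetical example graph; DFS follows the leftmost route down to G
graph = {'S': ['A', 'D'], 'A': ['B', 'D'], 'D': ['A', 'E'],
         'B': ['C', 'E'], 'E': ['B', 'F'], 'C': ['D', 'F'], 'F': ['G']}
print(depth_first_search(graph, 'S', 'G'))
# ['S', 'A', 'B', 'C', 'D', 'E', 'F', 'G']
```

Note that the returned path is longer than the shallowest solution: DFS commits to the leftmost branch first, which is exactly the non-optimality discussed below.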
Depth-first …..contd)

• Complete: Yes, if the state space is finite;
            No, if the state space contains infinite paths or loops
• Time: O(b^m)
• Space: O(bm), i.e. linear space
• Optimal: No
Thus the worst-case time complexity is O(b^m).
However, for very deep (or, due to cycles, infinite) trees this search may spend a lot of time (possibly forever) searching down the wrong branch.
Backtracking search uses even less memory: it keeps one successor instead of all b.



Depth-first …..contd)

• Drawbacks of Depth-First Search
  – It is likely to return a solution path that is longer than the optimal one.
  – Because it may not find a solution even when one exists, this search strategy is not complete.
  – Remark: avoid DFS for search spaces with large or infinite maximum depths.



Depth-Limited Strategy(Depth first search with cut off)

• Depth-first search with depth cutoff k (a maximal depth beyond which nodes are not expanded)

• Three possible outcomes:
  – Solution
  – Failure (no solution)
  – Cutoff (no solution within the cutoff)

• Solves the infinite-path problem.
• If k < d then incompleteness results.
• If k > d then it is not optimal.
• Time complexity: O(b^k)
• Space complexity: O(bk)
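A recursive sketch of depth-limited search in Python, distinguishing the three outcomes above (the example adjacency map is hypothetical):

```python
def depth_limited_search(graph, node, goal, limit, path=None):
    """Depth-first search that never descends below `limit`.
    Returns a solution path, the string 'cutoff', or None (failure)."""
    if path is None:
        path = [node]
    if node == goal:                       # outcome 1: solution
        return path
    if limit == 0:                         # depth cutoff reached
        return 'cutoff'
    cutoff_occurred = False
    for child in graph.get(node, []):
        if child in path:                  # reject paths with loops
            continue
        result = depth_limited_search(graph, child, goal, limit - 1,
                                      path + [child])
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return result
    # outcome 3: cutoff if any branch was truncated, else outcome 2: failure
    return 'cutoff' if cutoff_occurred else None

# hypothetical example graph; the shallowest solution needs 4 steps
graph = {'S': ['A', 'D'], 'A': ['B', 'D'], 'D': ['A', 'E'],
         'B': ['C', 'E'], 'E': ['B', 'F'], 'C': ['D', 'F'], 'F': ['G']}
print(depth_limited_search(graph, 'S', 'G', 3))   # 'cutoff' (k < d)
print(depth_limited_search(graph, 'S', 'G', 4))   # ['S', 'D', 'E', 'F', 'G']
```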



Depth-Limited Strategy(Depth first search with cut off)

DFS Evaluation:
• DFS is the method of choice when there is a known (and reasonable) depth bound, and finding any solution is sufficient.
1. Depth-first search: IF the search space contains very deep branches without a solution, THEN depth-first search may waste much time in them.
2. Breadth-first search: is VERY demanding on memory!
Solution? Iterative deepening: the order of expansion of states is similar to BFS, except that some states are expanded multiple times.
Iterative Deepening Search l = 0

• Limit = 0



Iterative Deepening Search l = 1

• Limit = 1



Iterative Deepening Search l = 2

• Limit = 2



Iterative Deepening Search l = 3

• Limit = 3

•As can be seen from these iterations, the order of expansion of states is similar to BFS, except that some states are expanded multiple times.
Iterative Deepening Search l = 1 to l=4

Stages in Iterative-Deepening Search



Iterative Deepening Search (IDS)

• It requires little memory (a constant times the depth of the current node)
• Is complete
• Finds a minimal-depth solution, as BFS does
• It is a strategy that sidesteps the issue of choosing the best depth limit by trying all possible depth limits
• Finds the best depth limit by gradually increasing the limit (0, 1, 2, …) until the goal is found at depth limit d
Iterative ….contd

Iterative Deepening Search Algorithm

1. DEPTH <-- 1

2. WHILE goal is not reached
   DO perform Depth-limited search;
      increase DEPTH by 1;
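The loop above can be sketched in Python; it wraps a recursive depth-limited search (included here so the sketch is self-contained), and the example adjacency map is hypothetical:

```python
def depth_limited_search(graph, node, goal, limit, path=None):
    """DFS down to depth `limit`; returns a path, 'cutoff', or None."""
    if path is None:
        path = [node]
    if node == goal:
        return path
    if limit == 0:
        return 'cutoff'
    cutoff_occurred = False
    for child in graph.get(node, []):
        if child in path:                  # reject paths with loops
            continue
        result = depth_limited_search(graph, child, goal, limit - 1,
                                      path + [child])
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return result
    return 'cutoff' if cutoff_occurred else None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    """Try depth limits 0, 1, 2, ... until the search no longer cuts off."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit)
        if result != 'cutoff':
            return result                  # a minimal-depth path, or None
    return None

# hypothetical example graph
graph = {'S': ['A', 'D'], 'A': ['B', 'D'], 'D': ['A', 'E'],
         'B': ['C', 'E'], 'E': ['B', 'F'], 'C': ['D', 'F'], 'F': ['G']}
print(iterative_deepening_search(graph, 'S', 'G'))   # ['S', 'D', 'E', 'F', 'G']
```

Note that, unlike plain DFS on the same graph, the path returned here is the shallowest one, matching what BFS would find.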



Completeness and optimality of Iterative Deepening
Search

• Completeness
  – It is complete
  – It finds a solution if one exists
• Optimality
  – It is optimal (when every step has the same cost)
  – Finds the shortest path (like breadth-first)
    • Guarantees the shortest path
    • Guarantees a goal node of minimal depth



Uniform-cost search

• Expand least-cost unexpanded node


• Implementation:
– fringe = queue ordered by path cost
• Equivalent to breadth-first if step costs all equal
• Consider the problem of moving from node S to G, where the step costs are: S→A = 1, S→B = 5, S→C = 15, A→G = 10, B→G = 5, C→G = 5.
[Figure: three snapshots of the uniform-cost search frontier.]
  – Expanding S puts (A, 1), (B, 5) and (C, 15) on the frontier.
  – A, the cheapest node, is expanded next, adding (G, 11) to the frontier.
  – B (cost 5) is now the cheapest; expanding it adds (G, 10), and the path S→B→G with total cost 10 is returned in preference to the costlier path through A.

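A sketch of uniform-cost search in Python using a priority queue (`heapq`); the graph below encodes the S-to-G step costs from the example above:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Return (cost, path) of the cheapest route from start to goal, or None.
    `graph` maps each node to a list of (successor, step_cost) pairs."""
    frontier = [(0, start, [start])]       # priority queue ordered by path cost
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # cheapest node first
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for child, step in graph.get(node, []):
            if child not in explored:
                heapq.heappush(frontier, (cost + step, child, path + [child]))
    return None

# the step costs from the S-to-G example
graph = {'S': [('A', 1), ('B', 5), ('C', 15)],
         'A': [('G', 10)], 'B': [('G', 5)], 'C': [('G', 5)]}
print(uniform_cost_search(graph, 'S', 'G'))   # (10, ['S', 'B', 'G'])
```

With all step costs set to 1 this behaves exactly like breadth-first search, as the slide notes.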


Bidirectional Search
[Figure: the example search tree explored simultaneously by a forward search from the start node S and a backward search from the goal node G, with the two searches meeting in the middle.]
Bidirectional…contd

 Bi-directional search runs one search forward from the initial state and another backward from the final state; each wave only needs to reach depth d/2, where d is the solution depth.
* Completeness: yes
* Optimality: yes
* Time complexity: O(b^(d/2))
* Space complexity: O(b^(d/2))

O(b^d) vs. O(b^(d/2))? With b = 10 and d = 6 this comes to 1,111,111 nodes vs. 2,222.
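One way to sketch bidirectional search in Python: two breadth-first waves expanded in alternating layers, assuming every edge can also be followed backwards (the reversed graph is built explicitly; the example adjacency map is hypothetical):

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Forward BFS from start and backward BFS from goal; stop when the
    two waves meet.  `graph` maps each node to its list of successors."""
    if start == goal:
        return [start]
    # build the reversed graph so the backward wave can follow edges backwards
    reverse = {}
    for node, children in graph.items():
        for child in children:
            reverse.setdefault(child, []).append(node)

    fwd_parent = {start: None}             # predecessor on the forward wave
    bwd_parent = {goal: None}              # successor on the backward wave
    fwd_frontier, bwd_frontier = deque([start]), deque([goal])

    def expand_layer(g, frontier, parents, other_parents):
        """Expand one full BFS layer; return a meeting node if found."""
        for _ in range(len(frontier)):
            node = frontier.popleft()
            for child in g.get(node, []):
                if child not in parents:
                    parents[child] = node
                    frontier.append(child)
                if child in other_parents:
                    return child           # the two waves meet here
        return None

    while fwd_frontier and bwd_frontier:
        for g, frontier, parents, others in (
                (graph, fwd_frontier, fwd_parent, bwd_parent),
                (reverse, bwd_frontier, bwd_parent, fwd_parent)):
            meet = expand_layer(g, frontier, parents, others)
            if meet is not None:
                # stitch the two half-paths together at the meeting node
                path, n = [], meet
                while n is not None:
                    path.append(n)
                    n = fwd_parent[n]
                path.reverse()
                n = bwd_parent[meet]
                while n is not None:
                    path.append(n)
                    n = bwd_parent[n]
                return path
    return None

# hypothetical example graph
graph = {'S': ['A', 'D'], 'A': ['B', 'D'], 'D': ['A', 'E'],
         'B': ['C', 'E'], 'E': ['B', 'F'], 'C': ['D', 'F'], 'F': ['G']}
print(bidirectional_search(graph, 'S', 'G'))   # ['S', 'D', 'E', 'F', 'G']
```

Each wave only explores about half the solution depth, which is where the O(b^(d/2)) saving comes from.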



•Questions

