Chapter 1 - 4
Introduction to AI
Outlines
Introduction to AI
Approaches to AI
The Foundations of AI
Application of AI
History of AI
What Is AI?
• AI is composed of two words: Artificial and Intelligence.
• Intelligence includes abilities such as:
reasoning
problem solving
learning
Reasoning
• It is the set of processes that enables us to provide a basis for judgement, making decisions, and prediction about the problem at hand.
• Two common forms of reasoning are inductive reasoning and deductive reasoning.
Inductive Reasoning
• Takes specific information and makes a broader generalization.
• For example: Every time you eat peanuts, you start to cough; you conclude that you are allergic to peanuts.
Deductive Reasoning
• It starts with a general statement and examines the possibilities to reach a specific, logical conclusion.
• For example: All humans are mortal; Abebe is a human; therefore, Abebe is mortal.

Learning
• Learning is the activity of gaining knowledge or skill, and it is an essential ability of AI-enabled systems.
• Learning can be categorized as auditory, episodic, motor, observational, perceptual, relational, and spatial learning.
• Observational Learning: learning by watching and imitating others, for example, a child learning to speak by imitating its parent.
Con’t
• Perceptual Learning: is learning to recognize stimuli that one has seen before, for example, identifying and classifying objects and situations.
• Spatial Learning: is learning through visual stimuli such as images, colors, maps, etc. For example, a person can create a roadmap in mind before actually following the road.
Con’t
• Problem Solving: is the process in which one perceives a present situation and tries to arrive at a desired solution, working around known or unknown problems.
APPROACHES TO AI
• Different scholars define the approaches to AI differently.
AI: Thinking Humanly (Cognitive Modeling Approach)
• There are two ways to do this: through introspection, trying to catch our own thoughts as they go by, and through psychological experiments, observing a person in action.
• It requires getting inside the human mind to see how it works and then comparing the program's behavior with human behavior; if they match, that is evidence that some of the program's mechanisms may also be operating in humans.
• Early researchers were more interested in showing that a program solved problems like people do, going through the same steps and taking around the same amount of time to perform those steps.
AI: Acting Humanly (Turing Test Approach)
• The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence: a computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a computer, i.e., it fools the interrogator.
• The computer would need to possess the following capabilities to pass the test: natural language processing, knowledge representation, automated reasoning, and machine learning.
AI: Thinking Rationally (Laws of Thought Approach)
• The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking," that is, irrefutable reasoning processes.
• These laws of thought were supposed to govern the operation of the mind, yielding correct conclusions from correct premises and beliefs.
• In the "laws of thought" approach to AI, the whole emphasis was on correct inferences.

AI: Acting Rationally (Rational Agent Approach)
• Rational behavior: doing the right thing. The right thing is that which is expected to maximize goal achievement, given the available information.
Task Classification Of AI
• The domain of AI is classified into Formal tasks, Mundane tasks, and Expert tasks.
• Formal tasks include mathematics and games, e.g., Geometry, Logic, Chess, and Theorem Proving.
Task Classification Of AI
• For humans, the mundane tasks are easiest to learn. The same was assumed before trying to implement mundane tasks in machines; earlier, most AI work concentrated on the mundane task domain.
• AI work is more successful in the expert task domain now, as the expert task domain needs expert knowledge without common sense, which is easier to represent and handle.
• What is an agent?
• An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors.
How Agents Should Act?
• A rational agent is one that does the right thing. Obviously, this is better than doing the wrong thing, but what does it mean?
• As a first approximation, we will say that the right action is the one that will cause the agent to be most successful.
How Agents Should Act
• We use the term performance measure for the criteria that determine how successful an agent is.
• We could ask the agent to answer based on its own opinion, but some agents are unable to answer, some delude themselves, some overestimate and some underestimate their success. Therefore, a subjective measure is not a good approach; the performance measure should be imposed by an objective outside observer.
Ideal Example Of Agent
Vacuum-cleaner world
• Percepts: location and contents, e.g., [A, Dirty]
• Actions: Left, Right, Suck
• How (performance measure): amount of dirt cleaned up, amount of time taken, amount of electricity consumed, amount of noise generated, etc.
• When: If we measured how much dirt the agent had cleaned up in the first hour of the day, we would be rewarding those agents that start fast (even if they do little or no work later on), and punishing those that work consistently. Thus, we want to measure performance over the long run.
Structure Of Intelligent Agent
• The structure of an intelligent agent is a combination of an architecture and an agent program. The architecture is the machinery with sensors and actuators on which the agent runs, e.g., a robot with a camera, a PC.
Agents And Environments
• Percept: the agent’s perceptual inputs.
• Percept sequence: the complete history of everything the agent has perceived.
• Agent function: maps any given percept sequence to an action [f: P* → A].
• To design a rational agent we must specify its task environment: the Performance measure, the Environment, the Actuators (what actions the agent can take), and the Sensors the agent has (what the possible percepts are), abbreviated as PEAS.
Examples Of Agents Structure And Sample PEAS
Agent: automated taxi driver:
Performance measure (P): safe, fast, legal, comfortable trip, maximize profits.
Environment (E): roads, other traffic, pedestrians, customers.
Actuators (A): steering, accelerator, brake, signal, horn.
Sensors (S): cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard.
Examples Of Agents Structure And Sample PEAS
Agent: medical diagnosis system:
Performance measure (P): healthy patient, minimized costs.
Environment (E): patient, hospital, staff.
Actuators (A): screen display (questions, tests, diagnoses, treatments, referrals).
Sensors (S): keyboard (entry of symptoms, findings, patient's answers).
Examples Of Agents Structure And Sample PEAS
Agent: soccer-playing robot:
Performance Measure (P): to play, make a goal, and win the game.
Environment (E): the soccer field.
Agent Programs
• An agent is completely specified by the agent function that maps percept sequences into actions.
• The agent program takes the current percept as input, keeps track of the percept sequence, selects the corresponding action, and RETURNs that action.
• Note: the agent program takes the current percept as input, while the agent function takes the entire percept history.
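• As a minimal sketch (not from the slides), the mapping from percept sequences to actions can be written as a table-driven agent in Python; the percepts and table entries below are made-up illustrative values.

# Table-driven agent sketch: the agent function maps the percept
# sequence seen so far to an action via a lookup table.
percepts = []  # history of everything perceived so far

table = {  # hypothetical table indexed by percept sequences
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

def table_driven_agent(percept):
    # Append the new percept and look up the action for the whole sequence.
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")

print(table_driven_agent(("A", "Dirty")))  # -> Suck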
• The agent takes input from the environment through sensors, processes it to find a solution, and delivers output to the environment through effectors.

Types Of Environment
• Fully observable: if the agent's sensors give it access to the complete, relevant state of the environment, the environment is fully observable.
• Partially observable: if the agent does not have complete and relevant information about the environment, the environment is partially observable.
Types Of Environment
• Example: In the Checker Game, the agent observes the environment completely, so it is fully observable; such complete observability is not available in all environments.
Types Of Environment
• Based on the effect of the agent's action, an environment is deterministic (the next state is completely determined by the current state and the agent's action) or stochastic (the outcome of an action cannot be determined in advance).
• Example: Chess – there would be only a few possible moves for a piece at the current state, and these moves can be determined, so the environment is deterministic.
• Example: Self-driving cars – the actions of a self-driving car are not unique; the outcome varies from time to time, so the environment is stochastic.
Types Of Environment
• Based on the number of agents involved, an environment is single-agent or multi-agent; in a multi-agent environment each agent must consider the other agents' percepts and actions.
• Static vs dynamic: in a semi-dynamic environment, the environment itself does not change with the passage of time, but the agent's performance score does.
• Episodic vs sequential: in an episodic environment, the agent's experience is divided into independent episodes (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.
• Discrete vs continuous: based on whether the state variables and time are discrete or continuous.
Types Of Agents
1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents
5. Learning agents
Simple Reflex Agents
• It is the simplest agent: it acts according to the current percept only, finding a rule whose condition matches the current situation (as defined by the percept) and then doing the action associated with that rule.

function SIMPLE-REFLEX-AGENT(percept) returns action
  static: rules, a set of condition-action rules
  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action

• Simple reflex agents do not maintain internal state and do not depend on the percept history.
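• A minimal Python sketch of the SIMPLE-REFLEX-AGENT pseudocode above for the vacuum world; the condition-action rules are illustrative assumptions, not part of the slides.

# Simple reflex agent: the action depends only on the current percept
# (location, status), never on the percept history.
rules = {
    "Dirty": "Suck",        # if the current square is dirty, clean it
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept):
    location, status = percept      # INTERPRET-INPUT
    if status == "Dirty":           # RULE-MATCH
        return rules["Dirty"]       # RULE-ACTION
    return rules[(location, status)]

print(simple_reflex_agent(("A", "Dirty")))   # -> Suck
print(simple_reflex_agent(("A", "Clean")))   # -> Right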
Simple Reflex Agent Example
• Consider an artificial robot that stands at the center of Meskel Square.
• If the agent perceives an image which looks like a car, it runs away; the reaction is chosen from the current percept only.
Model-based Reflex Agents
• The internal state depends on the percept history, which reflects at least some of the unobserved aspects of the current state; how the world works is modeled inside the agent program.
• Example: When a person walks in a lane, he maps the pathway in his mind.
Goal-based Agents
• It is not sufficient to have the current state information unless the goal is decided. Therefore, a goal-based agent selects a way among multiple possibilities that helps it to reach its goal.
Goal-based Agents Structure
function GOAL_BASED_AGENT(percept) returns action
  state ← UPDATE_STATE(state, percept)
  actionSet ← POSSIBLE_ACTIONS(state)
  action ← ACTION_THAT_LEADS_TO_GOAL(actionSet)
  state ← UPDATE_STATE(state, action)
  return action
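• A small Python sketch of the goal-based selection idea above: simulate each possible action and pick one whose predicted result satisfies the goal. The toy one-dimensional state space is an assumption for illustration.

GOAL = 3  # the goal cell on a line of numbered cells

def possible_actions(state):
    return ["Left", "Right"]

def result(state, action):
    # Hypothetical transition model: moving left/right changes the cell number.
    return state - 1 if action == "Left" else state + 1

def goal_based_agent(state):
    for action in possible_actions(state):
        if result(state, action) == GOAL:   # does this action reach the goal?
            return action
    return "Right"  # otherwise keep moving toward higher-numbered cells

print(goal_based_agent(2))  # -> Right (cell 3 is the goal)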
Utility-based Agents
• These types of agents are concerned with the performance measure. The agent selects those actions which maximize the performance measure (utility) and move it towards the goal.
Utility-based Agents Structure
function UTILITY_BASED_AGENT(percept) returns action
  state ← UPDATE_STATE(state, percept)
  actionSet ← POSSIBLE_ACTIONS(state)
  action ← BEST_ACTION(actionSet)
  state ← UPDATE_STATE(state, action)
  return action
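• The utility-based structure differs from the goal-based one mainly in how BEST_ACTION is chosen: it picks the action whose predicted result has the highest utility. A minimal Python sketch with an assumed utility function:

def possible_actions(state):
    return ["Left", "Right", "Suck"]

def result(state, action):
    # Hypothetical vacuum-world transition model for illustration.
    location, dirty = state
    if action == "Suck":
        return (location, False)
    return ("B" if action == "Right" else "A", dirty)

def utility(state):
    # A clean square is worth more than a dirty one.
    _, dirty = state
    return 0 if dirty else 10

def utility_based_agent(state):
    # BEST_ACTION: the action maximizing the utility of its predicted result.
    return max(possible_actions(state), key=lambda a: utility(result(state, a)))

print(utility_based_agent(("A", True)))  # -> Suck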
Learning Agents
• Critic: It provides feedback to the learning agent about how well the
agent is doing, which could maximize the performance measure in
the future.
Learning Agents
• The main task of these agents is to enable the machine to operate in initially unknown environments, learn from experience, and keep making improvements.
• Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
CHAPTER 3
OBJECTIVES
• Understand problem-solving agents, problem formulation, and the search strategies used to choose among several solutions.

PROBLEM-SOLVING AGENT
The problems of AI are directly associated with the nature of humans and their activities; we need a finite number of steps to solve a problem, which makes human work easy.
A problem may have several solutions; therefore, a problem-solving agent is a goal-driven agent and focuses on satisfying the goal.
Steps performed by Problem-solving agent
The initial state, actions, and transition model together define the state-space of the problem implicitly.
The state-space of a problem is the set of all states which can be reached from the initial state by any sequence of actions.
• In AI, one must identify the components of a problem. These are:
Problem Statement
Definition
Solution Space
Operators (Actions)
EXAMPLE
Definition of Problem: the information about what is to be done.
TYPES OF PROBLEMS
1. Single-state problem: the agent knows exactly which state it will be in. For example, in the vacuum world each location may or may not contain dirt, and the agent may be in one location or the other.

TYPES OF PROBLEMS
2. Multiple-state problem: the agent knows which set of states it might be in, but not exactly which one.
TYPES OF PROBLEMS
3. Contingency problem: the exact outcome of each action is not known in advance, so the solution must handle contingencies discovered during execution.

SEARCH STRATEGIES
1. Uninformed Search (Blind Search): uses no problem-specific knowledge beyond the problem definition. The following are types of uninformed searches:
Breadth-first search
Uniform cost search
Depth-first search
Depth-limited search
Iterative deepening search
Bidirectional search
SEARCH STRATEGIES
2.Informed Search (Heuristic Search): Contains some additional
information about the states beyond the problem definition. This search uses
problem-specific knowledge to find more efficient solutions.
This search maintains some internal state via heuristic functions (which provide hints), so it is also called heuristic search.
The following are types of informed searches:
Breadth-first Search
• Breadth-first search is a simple strategy in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on.
All the nodes at depth d in the search tree are expanded before the nodes at depth d + 1.
Implementation: the fringe (open list) is a FIFO queue, i.e., new successors go at the end.
Breadth-first Search con’td
[Example: a small tree with the root node expanded first, then its children B and C, then their children D, E, F, and G.]
If there is a solution, breadth-first search is guaranteed to find it, and if there are several solutions, breadth-first search will always find the shallowest goal state first.
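• A compact Python sketch of breadth-first search on an explicit graph; the graph below mirrors the small example tree above and is an illustrative assumption.

from collections import deque

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
         "D": [], "E": [], "F": [], "G": []}

def bfs(start, goal):
    fringe = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while fringe:
        path = fringe.popleft()        # take the oldest (shallowest) path
        node = path[-1]
        if node == goal:
            return path
        for succ in graph[node]:
            if succ not in visited:
                visited.add(succ)
                fringe.append(path + [succ])
    return None

print(bfs("A", "F"))  # -> ['A', 'C', 'F']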
Uniform-cost Search
Unlike BFS, this search explores nodes based on their path cost from
the root node. It expands a node n having the lowest path cost g(n),
where g(n) is the total cost from a root node to node n.
Implementation: the fringe is a priority queue ordered by path cost g(n).
Uniform-cost Search
Disadvantages of Uniform-cost search
It does not care about the number of steps a path has taken to reach the goal
state.
It may get stuck in an infinite loop if there is a path with an infinite sequence of zero-cost actions.
It works hard, as it examines each node in search of the lowest-cost path.
The performance measure of Uniform-cost search
Space and time complexity: the worst-case space and time complexity of uniform-cost search is O(b^(1+⌊C*/ε⌋)), where C* is the cost of the optimal solution and ε is the smallest step cost.
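• A Python sketch of uniform-cost search using a priority queue ordered by the path cost g(n); the weighted graph is an illustrative assumption.

import heapq

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)],
         "B": [("G", 1)], "G": []}

def uniform_cost_search(start, goal):
    fringe = [(0, start, [start])]          # (g, node, path), ordered by g
    best_g = {start: 0}
    while fringe:
        g, node, path = heapq.heappop(fringe)   # lowest-cost node first
        if node == goal:
            return g, path
        for succ, step_cost in graph[node]:
            new_g = g + step_cost
            if new_g < best_g.get(succ, float("inf")):
                best_g[succ] = new_g
                heapq.heappush(fringe, (new_g, succ, path + [succ]))
    return None

print(uniform_cost_search("S", "G"))  # -> (4, ['S', 'A', 'B', 'G'])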
Depth-first Search
Expands one of the nodes at the deepest level of the tree. Only when the search hits a dead end does the search go back and expand nodes at shallower levels.
Implementation: the fringe is a LIFO queue (stack), i.e., put successors at the front.
The drawback of depth-first search is that it can get stuck going down the wrong path. Many problems have very deep or even infinite search trees, so depth-first search may never be able to recover from an unlucky choice.
In such cases, depth-first search will either get stuck in an infinite loop and never return a solution, or it may eventually find a solution path that is longer than the optimal one.
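• A Python sketch of depth-first search with a LIFO fringe (stack); the graph is the same illustrative one used for BFS, with a visited set guarding against loops.

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
         "D": [], "E": [], "F": [], "G": []}

def dfs(start, goal):
    fringe = [[start]]                 # LIFO stack of paths
    visited = set()
    while fringe:
        path = fringe.pop()            # take the newest (deepest) path
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for succ in reversed(graph[node]):   # reversed so "B" is tried before "C"
            fringe.append(path + [succ])
    return None

print(dfs("A", "G"))  # -> ['A', 'C', 'G']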
Properties Of Depth-first Search
Completeness: not complete (it fails in infinite-depth spaces or spaces with loops).
Optimality: not optimal.
Time complexity: O(b^m), where m is the maximum depth of the search tree.
Space complexity: O(bm), i.e., only linear space is needed.
Iterative Deepening Search
This search is a combination of BFS and DFS, as BFS guarantees to reach the
goal node and DFS occupies less memory space.
It is a strategy that sidesteps the issue of choosing the best depth limit by
trying all possible depth limits: first depth 0, then depth 1, then depth 2, and
so on.
The order of expansion of states is similar to breadth-first, except that some
states are expanded multiple times.
Iterative Deepening Search
It gradually increases the depth limit from 0, 1, 2, and so on until the goal node is reached.
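• A Python sketch of iterative deepening search: repeated depth-limited DFS with the limit raised one level at a time. The graph is the same illustrative example as before.

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
         "D": [], "E": [], "F": [], "G": []}

def depth_limited(node, goal, limit, path):
    # Depth-limited DFS: stop expanding once the limit reaches zero.
    if node == goal:
        return path
    if limit == 0:
        return None
    for succ in graph[node]:
        found = depth_limited(succ, goal, limit - 1, path + [succ])
        if found:
            return found
    return None

def iterative_deepening(start, goal, max_depth=10):
    for limit in range(max_depth + 1):   # try depth 0, 1, 2, ...
        result = depth_limited(start, goal, limit, [start])
        if result:
            return result
    return None

print(iterative_deepening("A", "E"))  # -> ['A', 'B', 'E']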
Bidirectional Search
Simultaneously search forward from the initial state and backward from the goal state, and terminate when the two searches meet in the middle.
Reconstruct the solution by tracking backward towards the root and forward towards the goal from the point of intersection.

Bidirectional Search
This algorithm is efficient when there are only a very limited number (one or two) of goal nodes in the search space.
Bidirectional search can use search techniques such as BFS, DFS, or DLS.
Advantages: it is faster and requires less memory than searching the whole space in one direction.
Informed Search Algorithms
Uninformed search strategies can find solutions to problems by
systematically generating new states and testing them against
the goal. Unfortunately, these strategies are incredibly inefficient
in most cases.
Informed search strategy uses problem-specific knowledge and
can find solutions more efficiently than uninformed search.
Informed Search Algorithms
Informed search is a strategy that uses information about the
cost that may incur to achieve the goal state from the current
state.
The information may not be accurate, but it helps the agent make better decisions. This information is called heuristic information.
There are several algorithms that belong to this group. Some of these are:
1. Greedy best-first search
2. A* search
Greedy Best-first Search
Greedy best-first search algorithm always selects the path which appears
best at that moment.
It is the combination of depth-first search and breadth-first search
algorithms.
It uses a heuristic function to guide the search.
In the greedy best-first search algorithm, we expand the node which appears closest to the goal node; the closeness is estimated by the heuristic function, i.e., f(n) = h(n), where h(n) = estimated cost from node n to the goal.
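• A Python sketch of greedy best-first search, ordering the fringe by h(n) alone; the graph and heuristic values are illustrative assumptions.

import heapq

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 5, "A": 2, "B": 3, "G": 0}   # estimated cost from each node to the goal

def greedy_best_first(start, goal):
    fringe = [(h[start], start, [start])]
    visited = set()
    while fringe:
        _, node, path = heapq.heappop(fringe)   # expand the lowest h(n) first
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for succ in graph[node]:
            heapq.heappush(fringe, (h[succ], succ, path + [succ]))
    return None

print(greedy_best_first("S", "G"))  # -> ['S', 'A', 'G']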
Greedy Best-first Search: Example
Consider the search problem below; each node is expanded at each iteration using the evaluation function f(n) = h(n), as shown in the table below.
[Example graph, heuristic-value table, and step-by-step expansion figures are not reproduced here.]
Properties Of Greedy Best-first Search
The performance measure of Best-first search Algorithm:
Completeness: Best-first search is incomplete even in finite state space.
Optimality: It does not provide an optimal solution.
Time and Space complexity: It has O(b^m) worst-case time and space complexity, where m is the maximum depth of the search tree. If the quality of the heuristic function is good, the complexities could be reduced substantially.
A* Search
A* search is the most widely used informed search algorithm, where a node n is evaluated by combining the values of the functions g(n) and h(n).
The function g(n) is the path cost from the start/initial node to a node n and
h(n) is the estimated cost of the cheapest path from node n to the goal node.
Therefore, we have f(n)=g(n)+h(n) where f(n) is the estimated cost of the
cheapest solution through n.
So, in order to find the cheapest solution, try to find the lowest values of
f(n).
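• A Python sketch of A* search combining g(n) and h(n) exactly as described above; the weighted graph and (admissible) heuristic values are assumed example data.

import heapq

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)],
         "B": [("G", 3)], "G": []}
h = {"S": 5, "A": 4, "B": 2, "G": 0}   # never overestimates the true remaining cost

def a_star(start, goal):
    fringe = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while fringe:
        f, g, node, path = heapq.heappop(fringe)   # lowest f(n) = g(n) + h(n)
        if node == goal:
            return g, path
        for succ, step in graph[node]:
            new_g = g + step
            if new_g < best_g.get(succ, float("inf")):
                best_g[succ] = new_g
                heapq.heappush(fringe, (new_g + h[succ], new_g, succ, path + [succ]))
    return None

print(a_star("S", "G"))  # -> (6, ['S', 'A', 'B', 'G'])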
A* Search
Example: S is the root node and G is the goal node; starting from the root node S, the search moves towards its successor nodes A and B, expanding whichever has the lowest f(n). [Graph and cost table are not reproduced here.]
• Disadvantage of A* search: it keeps all generated nodes in memory, so it has large (worst-case exponential) space and time complexities.
INTRODUCTION
For efficient decision-making and reasoning, an intelligent agent needs a knowledge-based system, which has two main components:
• Knowledge base
• Inference system
LEVELS OF KNOWLEDGE-BASED AGENT
Knowledge level: The first level of a knowledge-based agent is the
knowledge level, where we must explain what the agent knows and what
the agent's goals are.
Let's say an automated taxi agent needs to get from station A to station B, and it knows how to get there; this is the knowledge level.
Logical level: knowledge is encoded into logical sentences. At the logical level, we can expect the automated taxi agent to reach destination B.
Implementation level: the physical representation of logic and knowledge. Agents at the implementation level take actions based on their logical and knowledge levels.
At this phase, an autonomous car driver puts his knowledge and logic
into action in order to go to his destination.
Approaches To Design KBA
Building a knowledge-based agent can be done in one of two ways:
1. Declarative approach: a knowledge-based agent can be created by starting with an empty knowledge base and telling the agent all the sentences we wish it to start with.
2.Procedural technique: We directly express desired behavior as a
program code in the procedural approach. That is, all we need to do is
develop a program that already has the intended behavior or agent
encoded in it.
In the actual world, however, a successful agent can be created by mixing declarative and procedural approaches, and declarative information can frequently be turned into more efficient procedural code.
Con’td
Data is a collection of facts. Information is organized data, and organized, useful information is termed knowledge.
Con’td
The types of knowledge that must be represented in AI systems are:
Object: All of the information on objects in our domain. Guitars,
for example, have strings, while trumpets are brass instruments.
Events: Events are the actions that take place in our world.
Performance: Performance is a term used to describe behavior that
entails knowing how to perform things.
Meta-knowledge: Meta-knowledge is information about what we
already know.
Facts: The truths about the real world and what we represent are known as facts.
AI Knowledge Cycle
• For showing intelligent behavior, an artificial intelligence system must have the following blocks, which together form a cycle: Perception, Learning, Knowledge Representation and Reasoning, Planning, and Execution.
Perception Block: This will help the AI system gain information regarding its
surroundings through various sensors, thus making the AI system familiar with
its environment and helping it interact with it.
Learning Block: The knowledge gained will help the AI system to run the
deep learning algorithms. These algorithms are written in the learning block,
making the AI system transfer the necessary information from the perception
block to the learning block for learning (training).
Knowledge and Reasoning Block: As mentioned earlier, we use the knowledge, and based on it, we reason and then take a decision. Thus, these two blocks act as humans do: they go through all the knowledge data and find the relevant knowledge to be provided to the learning model whenever it is required.
Planning and Execution Block: These two blocks, though independent, can work in tandem. They take the information from the knowledge block and the reasoning block and, based on it, execute certain actions.
Knowledge Representation
• Knowledge representation is the part of AI concerned with representing information about the real world in a form a computer can use to solve complex problems and to think and act like a person.
Approaches To KR
There are basically four approaches to knowledge representation:
Simple relational knowledge: is the most basic technique of
storing facts that use the relational method, with each fact about a
group of objects laid out in columns in a logical order.
This method of knowledge representation is often used in database
systems to express the relationships between various things.
Example: The following is the simple relational knowledge
representation.
Player    Weight    Age
Player1   65        23
Player2   58        18
Player3   75        24
APPROACHES TO KR
Inheritable knowledge: data must be kept in a hierarchy of classes.
The instance relation is a type of inheritable knowledge that illustrates a
relationship between an instance and a class.
• Each individual frame might indicate a set of traits as well as their value.
Objects and values are represented in Boxed nodes in this technique.
Arrows are used to connect objects to their values.
Approaches To KR
Inferential knowledge: knowledge is represented in the form of formal logic, which guarantees correctness and can be used to derive new facts from specific information.
Example: "Marcus is a man" is represented as man(Marcus).
Procedural knowledge: knowledge is represented as small programs and rules that describe how to do things; it is not important that we represent all the cases in this approach.
Requirements Of KR
A good knowledge representation system has to possess the following properties: representational accuracy, inferential adequacy, inferential efficiency, and acquisitional efficiency.

Logic
Logic is the study of the methods and principles used to distinguish good (correct) from bad (incorrect) reasoning.
Logic is a formal language. It has syntax, semantics, and a way of manipulating expressions in the language.
Two widely used logics are:
Propositional logic
First-order logic, which can be used to design, represent, or infer for any environment in the real world.
Propositional Logic
The simplest kind of logic is propositional logic (PL), in
which all statements are made up of propositions.
A statement (proposition) is a declarative sentence
which may be asserted to be either true or false.
For example,
Five men cannot have eleven eyes.
PROPOSITIONAL LOGIC
• The term "Proposition“ refers to a declarative statement that can be
true or false. It's a method of expressing knowledge in logical and
mathematical terms.
• Example:
3. 3 + 3 = 7 (False proposition)
Propositional Logic
The sentences which are not propositions include questions, orders, and exclamations.
• For example, "Where is Abebe?" is a question, so it is not a proposition.
Propositional Logic: Exercise
Which of the following sentences are propositions? What are the truth values of those that are propositions?
• In PL, symbolic variables such as P, Q, and R are used to express the syntax, and any symbol can be used to represent a proposition. Simple propositions are joined into complex sentences using logical connectives.
PROPOSITIONAL LOGIC
• Propositions are divided into two categories:
Atomic propositions: are made up of only one proposition symbol. These are the sentences which must be either true or false.
Example:
• 2+2 is 4, it is an atomic proposition as it is a true fact.
• "The Sun is cold" is also a proposition as it is a false fact.
Compound proposition: atomic statements are combined to form
compound propositions.
Example:
• "It is raining today, and street is wet."
• “Abebe is a teacher, and his school is in Dilla."
PROPOSITIONAL LOGIC
• Logical connectives are used to link two simpler propositions or to logically modify a sentence when forming a compound proposition.
• Negation: a sentence such as ¬P is called the negation of P. Example:
P = Abebe is intelligent,
¬P = Abebe is not intelligent. (It is not true that Abebe is intelligent.)
PROPOSITIONAL LOGIC
• Conjunction: is a sentence that contains the ∧ connective, such as P ∧ Q; it is true only when both P and Q are true.
• Example: P = I am breathing, Q = I am alive; P ∧ Q = I am breathing and I am alive.
EXERCISE: PROPOSITIONAL LOGIC
• Write the following statements with appropriate PL.
Let P = AI contains reasoning and Q = AI contains learning.
"AI does not contain reasoning" is written as ¬P.
"AI contains reasoning and learning" is written as P ∧ Q.
PROPOSITIONAL LOGIC
• The grammar is ambiguous if a sentence such as P ∧ Q ∨ R could be parsed as either
(P ∧ Q) ∨ R or
P ∧ (Q ∨ R).
• The way to resolve the ambiguity is to pick an order of precedence for the operators, but use parentheses whenever there might be confusion.
• The order of precedence in propositional logic is (from highest to lowest): ¬, ∧, ∨, ⇒, and ⇔.
PROPOSITIONAL LOGIC: EXERCISE
1. Identify the correct parse of each formula using the precedence of the logical connectives:
A ∧ ¬B means A ∧ (¬B)
A ∧ B ∨ C means (A ∧ B) ∨ C
¬A ⇒ B ∧ C means (¬A) ⇒ (B ∧ C)
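• A tiny Python truth-table check (an illustration, not from the slides) showing that the two candidate groupings of A ∧ B ∨ C are genuinely different sentences, which is why the precedence convention (∧ before ∨) is needed; Python's and/or are used only to evaluate truth values.

from itertools import product

for A, B, C in product([True, False], repeat=3):
    left_grouping = (A and B) or C     # (A ∧ B) ∨ C
    right_grouping = A and (B or C)    # A ∧ (B ∨ C)
    marker = "  <-- differ" if left_grouping != right_grouping else ""
    print(A, B, C, left_grouping, right_grouping, marker)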
TYPES OF SENTENCE
• Based on its truth value under every interpretation, a sentence can be:
Valid (tautology): true under every interpretation.
Invalid (contradiction): false under every interpretation.
Propositional Logic Limitations
• Some of the limitations of propositional logic include:
• Very limited expressive power: unlike natural language, propositional logic has very limited expressive power.
• It only represents declarative sentences: propositional logic is declarative (a sentence always has a truth value).
• Deals with only finite sentences: propositional logic deals satisfactorily only with finite sentences composed using not, and, or, if...then.
FIRST ORDER LOGIC
First-order logic does not only assume that the world contains facts
like propositional logic but also assumes the following things in the
world:
Objects: A, B, people, numbers, colors, wars, theories, squares, pits
Relations: can be unary relations such as red, round, is adjacent to, or n-ary relations such as the sister of, brother of, has color, comes between.
Functions: father of, best friend, third inning of, end of, ...
As a language, first-order logic also has two main parts:
• Syntax
• Semantics
FIRST ORDER LOGIC
The syntax of FOL determines which collection of symbols is a
logical expression in first-order logic. Basic Elements of First-order
logic:
Constants: 1, 2, A, John, Mumbai, cat, ...
Variables: x, y, z, a, b, ...
Predicates: Brother, Father, >, ...
Functions: sqrt, LeftLegOf, ...
Connectives: ∧, ∨, ¬, ⇒, ⇔
Equality: =
Quantifiers: ∀, ∃
FIRST ORDER LOGIC
Atomic sentences are the most basic sentences of first-order logic.
These sentences are formed from a predicate symbol followed by a
parenthesis with a sequence of terms.
We can represent atomic sentences as Predicate(term1, term2, ..., termN).
Example: Chala and Kebede are brothers: => Brothers(Chala,
Kebede).
Jerry is a cat: => cat (Jerry).
Complex sentences are made by combining atomic sentences using
connectives.
FOL: UNIVERSAL QUANTIFIER
Universal quantifier is a symbol of logical representation, which
specifies that the statement within its range is true for everything or
every instance of a particular thing. The Universal quantifier is
represented by a symbol ∀, which resembles an inverted A.
If x is a variable, then ∀x is read as:
For all x
For each x
For every x.
FOR: EXISTENTIAL QUANTIFIERS
Existential quantifiers are the type of quantifiers, which express that
the statement within its scope is true for at least one instance of
something.
It is denoted by the logical operator ∃, which resembles a reversed E.
When it is used with a predicate variable then it is called as an
existential quantifier.
If x is a variable, then existential quantifier will be ∃x or ∃(x).
And it will be read as:
There exists a 'x.'
For some 'x.'
For at least one 'x.'
FIRST ORDER LOGIC
Example: "All birds fly" can be represented as ∀x bird(x) → fly(x).
INFERENCE IN FOL
• Inference in First-Order Logic is used to deduce new facts
or sentences from existing sentences.
• The following are some basic inference rules in FOL:
Universal Generalization
Universal Instantiation
Existential Instantiation
Existential introduction
INFERENCE IN FOL
• Universal generalization is a valid inference rule which states that if premise P(c) is true for any arbitrary element c in the universe of discourse, then we can conclude ∀x P(x).
• This rule can be used if we want to show that every element has a similar property.
• Existential introduction states that if there is some element c in the universe of discourse which has a property P, then we can infer that there exists something in the universe which has the property P, i.e., from P(c) infer ∃x P(x).
Knowledge Engineering
• Knowledge engineering is the process of constructing a knowledge base in first-order logic for a particular domain or situation.

Knowledge Engineering
Steps in KE:
1. Identify the task.
2. Assemble the relevant knowledge.
3. Decide on a vocabulary of predicates, functions, and constants.
4. Encode general knowledge about the domain.
5. Encode a description of the specific problem instance.
6. Pose queries to the inference procedure and get answers.
7. Debug the knowledge base.
Knowledge Engineering
3. Decide on vocabulary: select the functions, predicates, and constants needed to represent the system.
Example: Student(X), Course, Teacher, Name, Id, Department(X), registrar, Grade.
4. Encode general knowledge about the domain: follow the rules of the logic and encode the knowledge, e.g., every student owns a username and a password.
KNOWLEDGE BASE AGENT
Declarative approach to building an agent: Tell the agent what it needs to know; it can then Ask itself what to do, and the answers should follow from the knowledge base.
EXPERT SYSTEM: KB
The knowledge base stores the domain knowledge of the expert system; the inference engine applies it using two strategies, Forward Chaining and Backward Chaining.
Expert System: Inference Engine
Forward Chaining: is a strategy of an expert system to answer the
question, “What can happen next?”
Forward inference engine follows the chain of conditions and
derivations and finally deduces the outcome.
This strategy is followed for working on conclusion, result, or effect.
For example, prediction of anything.
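• A minimal Python sketch of forward chaining over if-then rules (the rules and facts are illustrative assumptions): starting from the known facts, fire every rule whose premises hold and add its conclusion until nothing new can be derived.

rules = [
    ({"fever", "rash"}, "measles_suspected"),
    ({"measles_suspected"}, "refer_to_doctor"),
]
facts = {"fever", "rash"}

changed = True
while changed:                      # keep going while new facts appear
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)   # derive a new fact from the rule
            changed = True

print(facts)  # contains 'measles_suspected' and 'refer_to_doctor'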
Expert System: Inference Engine
Backward Chaining finds out the answer to the question, “Why this
happened?”
On the basis of what has already happened, the inference engine tries to
find out which conditions could have happened in the past for this
result.
This strategy is followed for finding out cause or reason. For example,
diagnosis of blood cancer in humans.
Expert System: User Interface
3. User interface: provides interaction between the user of the ES and the ES itself.