Chapter 1 - 4

Chapter 1

Introduction to AI
Outlines

• Introduction to AI
• Approaches to AI
• The Foundations of AI
• Applications of AI

History of AI
What Is AI?
• AI is composed of two words: Artificial and Intelligence.
• Artificial means "man-made," and intelligence means "thinking power," or "the ability to learn and solve problems."
• Hence Artificial Intelligence means "man-made thinking power."
• The term AI was first used by John McCarthy (1956), who considered it to mean the science and engineering of making intelligent machines.

What Is Intelligence?
• Intelligence is "thinking power."
• It is a general mental capability that involves the ability to reason, plan, solve problems, think abstractly, comprehend ideas and language, and learn.
• Quite simple human behavior can be intelligent, yet quite complex behavior performed by insects can be unintelligent. Why?
Con't
• Intelligence is a general mental ability for
 reasoning
 problem solving
 learning.
• Because of its general nature, intelligence integrates cognitive functions such as perception, attention, memory, and language.

Cont'd
• Mainstream thinking in psychology regards human intelligence not as a single ability or cognitive process but rather as an array of separate components.
• Research in AI has focused chiefly on the following components of intelligence:
Reasoning
• Reasoning is the set of processes that enables us to provide a basis for judgment, decision making, and prediction.
• To reason is to draw inferences appropriate to the situation at hand.
• Inferences (reasoning) are classified into:
 Deductive Reasoning
 Inductive Reasoning

Inductive Reasoning
• Takes specific information and makes a broader generalization.
• Even if all of the premises in a statement are true, inductive reasoning allows for the conclusion to be false.
• For example: Every time you eat peanuts, you start to cough. Therefore, you are allergic to peanuts.

Deductive Reasoning
• Starts with a general statement and examines the possibilities to reach a specific, logical conclusion.
• If something is true of a class of things in general, it is also true for all members of that class.
• For example: "Abebe is either in the museum or the cafe; he isn't in the cafe; so he's in the museum."
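The deductive example above is an instance of disjunctive syllogism, and its validity can be checked mechanically by enumerating truth assignments. A minimal Python sketch (the function name and the encoding of the premises are illustrative assumptions, not from the slides):

```python
from itertools import product

def valid(premises, conclusion):
    """A deduction is valid if, in every model where all premises hold,
    the conclusion also holds."""
    for a, b in product([False, True], repeat=2):
        if all(p(a, b) for p in premises) and not conclusion(a, b):
            return False
    return True

# "Abebe is in the museum (a) or the cafe (b); he isn't in the cafe (not b);
#  so he's in the museum (a)."
premises = [lambda a, b: a or b, lambda a, b: not b]
conclusion = lambda a, b: a
print(valid(premises, conclusion))  # True: the deduction is logically valid

# An invalid inference for contrast: "a or b; therefore b."
print(valid([lambda a, b: a or b], lambda a, b: b))  # False
```

The checker makes the contrast with induction concrete: a valid deduction can never take true premises to a false conclusion, which is exactly what inductive reasoning allows.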
Learning
• Learning is the activity of gaining knowledge or skill by studying, practicing, being taught, or experiencing something.
• It enhances the learner's awareness of the subject of study.
• The ability to learn is possessed by humans, some animals, and AI-enabled systems.
• Learning is categorized as:
• Auditory Learning: learning by listening and hearing. For example, students listening to recorded audio lectures.

Con't
• Episodic Learning: learning by remembering sequences of events that one has witnessed or experienced. This is linear and orderly.
• Motor Learning: learning by precise movement of muscles; a complex process occurring in the brain in response to practice or experience of a certain skill, resulting in changes in the central nervous system.
• Observational Learning: learning by watching and imitating others. For example, a child tries to learn by mimicking (masmesel or fakkeessuu) their parents.

Con't
• Perceptual Learning: learning to recognize stimuli that one has seen before. For example, identifying and classifying objects and situations.
• Spatial Learning: learning through visual stimuli such as images, colors, maps, etc. For example, a person can create a roadmap in mind before actually following the road.
• Stimulus-Response Learning: learning to perform a particular behavior when a certain stimulus is present. For example, a dog raises its ear on hearing the doorbell.
Problem-solving
• Problem solving is the process in which one perceives and tries to arrive at a desired solution from a present situation by taking some path, which is blocked by known or unknown hurdles.
• It includes decision making: the process of selecting the most suitable alternative, out of the multiple alternatives available, to reach the desired goal.

Con't
• Problem-solving methods divide into special purpose and general purpose.
• A special-purpose method is tailor-made for a particular problem, and often exploits very specific features of the situation in which the problem is embedded.
• A general-purpose method is applicable to a wide range of different problems.
APPROACHES TO AI
• Different scholars define approaches to AI differently:
1. Thinking humanly
2. Acting humanly
3. Thinking rationally
4. Acting rationally
AI: Thinking Humanly (Cognitive Modeling Approach)
• If we are going to say that a given program thinks like a human, we must have some way of determining how humans think.
• There are two ways to do this: through introspection (trying to catch our own thoughts as they go by) or through psychological experiments.
 AI as systems that think humanly
• This requires getting inside the human mind to see how it works and then comparing our computer programs to this. This is what cognitive science attempts to do.

Thinking Humanly: Cognitive Modeling Approach
 If the program's input/output and timing behavior matches human behavior, that is evidence that some of the program's mechanisms may also be operating in humans.
• The General Problem Solver was an early computer program that attempted to model human thinking.
• Its authors were more interested in showing that it solved problems like people do, going through the same steps and taking around the same amount of time to perform those steps.
AI: Acting Humanly (Turing Test Approach)
• The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence.
• Turing defined intelligent behavior as the ability to achieve human-level performance in all cognitive tasks, sufficient to fool an interrogator.
• The test he proposed is that the computer should be interrogated by a human via a teletype; it passes the test if the interrogator cannot tell whether there is a computer or a human at the other end.

Cont'd
• The computer would need to possess the following capabilities to pass the test:
 Natural language processing, to enable it to communicate successfully
 Knowledge representation, to store information provided before or during the interrogation
 Automated reasoning, to use the stored information to answer questions and to draw new conclusions
 Machine learning, to adapt to new circumstances and to detect and extrapolate patterns.
AI: Thinking Rationally (Laws of Thought Approach)
• The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking," that is, irrefutable reasoning processes.
• His famous syllogisms provided patterns for argument structures that always gave correct conclusions given correct premises.
• "Socrates is a man; all men are mortal; therefore Socrates is mortal."
• These laws of thought were supposed to govern the operation of the mind, and they initiated the field of logic.

Acting Rationally (Rational Agent Approach)
• Acting rationally means acting so as to achieve one's goals, given one's beliefs.
• In this approach, AI is viewed as the study and construction of rational agents.
• Rational behavior: doing the right thing. The right thing is the action/decision which is expected to maximize goal achievement, given the available information.
• In the "laws of thought" approach to AI, by contrast, the whole emphasis was on correct inferences.
Task Classification of AI
• The domain of AI is classified into formal tasks, mundane tasks, and expert tasks.
• Humans learn mundane (ordinary) tasks from birth. They learn by perception, speaking, using language, and locomotion.
• Formal tasks are tasks acquired by formal learning. For example: mathematics, geometry, chess, verification, theorem proving.

Task Classification of AI
• For humans, the mundane tasks are easiest to learn. The same was considered true before trying to implement mundane tasks in machines; earlier, all work in AI was concentrated in the mundane task domain.
• Later, it turned out that machines require more knowledge, complex knowledge representation, and complicated algorithms for handling mundane tasks.

Task Classification of AI
• This is the reason why AI work is more prospering in the expert task domain now, as the expert task domain needs expert knowledge without common sense, which can be easier to represent and handle.

Mundane (Ordinary) Tasks   Formal Tasks      Expert Tasks
Perception                 Mathematics       Engineering
Speech, Voice              Geometry          Manufacturing
Language Translation       Logic             Medical Diagnosis
Common Sense               Theorem Proving   Creativity
Reasoning                                    Financial Analysis
                                             Scientific Analysis
Chapter 2: Intelligent Agent

What is an agent?
• An agent is anything that can be viewed as perceiving its environment through sensors and acting upon the environment through effectors.
How Should Agents Act?
• A rational agent is one that does the right thing. Obviously, this is better than doing the wrong thing, but what does it mean?
• As a first approximation, we will say that the right action is the one that will cause the agent to be most successful.
• That leaves us with the problem of deciding how and when to evaluate the agent's success.

How Should Agents Act?
• We use the term performance measure for the criteria that determine how successful an agent is.
 Performance measure (how?)
 Subjective measure: asking the agent itself
 How happy is the agent at the end of the action?
 The agent answers based on its own opinion. Some agents are unable to answer, some delude themselves, some overestimate, and some underestimate their success. Therefore, a subjective measure is not a good approach.
 Objective measure, imposed by some authority, is the alternative.
• In other words, we as outside observers establish a standard of what it means to be successful in an environment and use it to measure the performance of agents.

 Objective measure
 Needs a standard to measure success
 Provides a quantitative value of the agent's success
 Involves the factors that affect performance and a weight for each factor
 The time at which performance is measured is also important.
 It may include knowing the starting time, finishing time, and duration of the job.
Example: Vacuum-cleaner World
• Percepts: location and contents, e.g., [A, Dirty]
• Actions: Left, Right, Suck

How to measure performance:
 amount of dirt cleaned up
 amount of time taken
 amount of electricity consumed
 amount of noise generated, etc.

When to measure performance:
 If we measured how much dirt the agent had cleaned up in the first hour of the day, we would be rewarding those agents that start fast (even if they do little or no work later on), and punishing those that work consistently.
 Thus, we want to measure performance over the long run.
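The discussion above can be made concrete with a tiny simulation of the two-location vacuum world, where the performance measure is the total dirt cleaned over the whole run. This is an illustrative sketch, not a textbook implementation; the function names and the one-point-per-square scoring are assumptions:

```python
# Minimal two-location vacuum world (illustrative sketch).
# Performance measure: one point per square cleaned, summed over the run.

def reflex_vacuum_agent(percept):
    """Percept is (location, status), e.g. ('A', 'Dirty')."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    return 'Right' if location == 'A' else 'Left'

def run(world, location, steps):
    """world maps location -> 'Dirty'/'Clean'; returns total dirt cleaned."""
    score = 0
    for _ in range(steps):
        action = reflex_vacuum_agent((location, world[location]))
        if action == 'Suck':
            world[location] = 'Clean'
            score += 1
        elif action == 'Right':
            location = 'B'
        else:
            location = 'A'
    return score

print(run({'A': 'Dirty', 'B': 'Dirty'}, 'A', 4))  # 2: both squares cleaned
```

Measuring `score` at the end of a long run, rather than after the first step, rewards agents that work consistently, as the slide argues.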
Structure of Intelligent Agent
• To understand the structure of intelligent agents, we should be familiar with architecture and agent programs.
• The job of AI is to design the agent program: a function that implements the agent mapping from percepts to actions.
• We assume this program will run on some sort of computing device, which we will call the architecture.
• The architecture is the machinery that the agent executes on. It is a device with sensors and actuators, for example, a robotic car, a camera, a PC.
Agents and Environments
• Percept: the agent's perceptual inputs.
• Percept sequence: the complete history of everything the agent has perceived.
• The agent function maps from percept histories to actions:
f: P* → A
• The agent program runs on the physical architecture to produce f.
agent = architecture + program
Structure of Intelligent Agent
 The design of an intelligent agent needs prior knowledge of:
 the Performance measure or goal the agent is supposed to achieve,
 what kind of Environment it operates in,
 what kind of Actuators it has (the possible actions),
 what kind of Sensors it has (the possible percepts).
 Performance measure, Environment, Actuators, and Sensors are abbreviated as PEAS.
Examples of Agent Structure and Sample PEAS
 Agent: automated taxi driver
 Environment: roads, traffic, pedestrians, customers
 Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
 Actuators: steering wheel, accelerator, brake, signal, horn
 Performance measure: safe, fast, legal, comfortable trip; maximize profits

Examples of Agent Structure and Sample PEAS
 Agent: medical diagnosis system
 Environment: patient, hospital, physician, nurses, …
 Sensors: keyboard (percepts can be symptoms, findings, patient's answers)
 Actuators: screen display (actions can be questions, tests, diagnoses, treatments, referrals)
 Performance measure: healthy patient, minimize costs, lawsuits

Examples of Agent Structure and Sample PEAS
 Agent: interactive English tutor
 Environment: set of students
 Sensors: keyboard (typed words)
 Actuators: screen display (exercises, suggestions, corrections)
 Performance measure: maximize student's score on test

Examples of Agent Structure and Sample PEAS
 Agent: robot soccer player
 Performance measure (P): to play, score goals, and win the game
 Environment (E): team members, opponents, referee, audience, and soccer field
 Actuators (A): navigator, legs of robot, view detector for robot
 Sensors (S): camera, communicators, and orientation & touch sensors
Agent Programs
 An agent is completely specified by the agent function that maps percept sequences into actions.

function SKELETON-AGENT(percept) returns action
    static: memory, the agent's memory of the world
    memory ← UPDATE-MEMORY(memory, percept)
    action ← CHOOSE-BEST-ACTION(memory)
    memory ← UPDATE-MEMORY(memory, action)
    return action

Note:
1. The function gets only a single percept at a time.
2. The goal or performance measure is not part of the skeleton.

Types of Environment
• An environment in artificial intelligence is the surroundings of the agent.
• A task environment is a problem to which a rational agent is designed as a solution.
• The agent takes input from the environment through sensors and delivers output to the environment through actuators.

Types of Environment
• Based on the portion of the environment that is observable:
• Fully observable: the agent's sensors give it access to the complete state of the environment at each point in time (chess vs. driving).
• Partially observable: if the agent does not have complete and relevant information about the environment, then the task environment is partially observable.
• Example: in the game of checkers, the agent observes the environment completely, while in poker the agent observes the environment only partially because it cannot see the cards of the other players.
• Maintaining a fully observable environment is easy, as there is no need to keep track of the history of the surroundings.
• An environment is called unobservable when the agent has no sensors at all.
Types of Environment
• Based on the effect of the agent's actions:
• Deterministic: the next state of the environment is completely determined by the current state and the action executed by the agent.
 Strategic: if the environment is deterministic except for the actions of other agents, then the environment is strategic.
• Stochastic (probabilistic): random in nature; the next state is not unique and cannot be completely determined by the agent.
• Example: chess – there are only a few possible moves for a piece in the current state, and these moves can be determined.
• Self-driving cars: the outcomes of a self-driving car's actions are not unique; they vary from time to time.

Types of Environment
• Based on the number of agents involved:
• Single-agent: a single agent operating by itself in an environment.
• Multi-agent: multiple agents are involved in the environment.
• Based on the state, action, and percept space pattern:
• Discrete: a limited number of distinct, clearly defined states, percepts, and actions.
• Continuous: states, percepts, and actions are continuously changing variables.
• Note: each of these can independently be discrete or continuous.
Types of Environment
• Based on the effect of time:
• Static: the environment does not change while the agent is deliberating.
• Dynamic: the environment can change while the agent is deliberating.
• Semi-dynamic: the environment itself does not change with the passage of time, but the agent's performance score does.

Types of Environment
• Based on loosely dependent sub-objectives:
• Episodic: the agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.
• Sequential: the agent's experience is not divided into independent episodes; the current decision could affect all future decisions.
Agent Types
• Based on the memory of the agent and the way the agent takes action, we can divide agents into five basic types.
• These are (in increasing order of generality):
1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents
5. Learning agents
Simple Reflex Agents
• The simplest agent, which acts according to the current percept only and pays no attention to the rest of the percept history.
• The agent function of this type relies on the condition-action rule: "If condition, then action."
• It makes correct decisions only if the environment is fully observable.
• It works by finding a rule whose condition matches the current situation (as defined by the percept) and then doing the action associated with that rule.

Simple Reflex Agent

function SIMPLE-REFLEX-AGENT(percept) returns action
    static: rules, a set of condition-action rules
    state ← INTERPRET-INPUT(percept)
    rule ← RULE-MATCH(state, rules)
    action ← RULE-ACTION[rule]
    return action

• Simple reflex agents do not maintain internal state and do not depend on the percept history.
Simple Reflex Agent: Example
• Consider an artificial robot that stands at the center of Meskel Square (environment). The agent has a camera and a microphone (sensors).
• If the agent perceives a sound of very high frequency (say, above 20 kHz), then it flies up into the sky as far as possible.
• If the agent perceives an image which looks like a car, it runs away in the forward direction.
• Otherwise, it just turns in a random direction.
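The Meskel Square robot's condition-action rules can be sketched as an ordered rule table in Python. The percept keys, thresholds, and action names are assumptions made for illustration:

```python
# Condition-action rules for the Meskel Square robot (illustrative sketch).
# Rules are tried in order; the last rule's condition is always true (default).

RULES = [
    (lambda p: p['sound_khz'] > 20, 'fly_up'),    # high-frequency sound -> fly
    (lambda p: p['sees_car'], 'run_forward'),     # car-like image -> run away
    (lambda p: True, 'turn_random'),              # otherwise, turn randomly
]

def simple_reflex_agent(percept):
    """Return the action of the first rule whose condition matches the percept."""
    for condition, action in RULES:
        if condition(percept):
            return action

print(simple_reflex_agent({'sound_khz': 25, 'sees_car': False}))  # fly_up
print(simple_reflex_agent({'sound_khz': 5, 'sees_car': True}))    # run_forward
print(simple_reflex_agent({'sound_khz': 5, 'sees_car': False}))   # turn_random
```

Notice that only the current percept is consulted: nothing is stored between calls, which is exactly what makes this a simple reflex agent.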
Model-based Reflex Agents
• These agents can handle partially observable environments by maintaining some internal state.
• The internal state depends on the percept history, which reflects at least some of the unobserved aspects of the current state.
• Therefore, as time passes, the internal state needs to be updated, which requires two types of knowledge or information to be encoded in the agent program:
 how the world evolves on its own
 the effects of the agent's actions.

Model-based Reflex Agents

function MODEL-BASED-AGENT(percept) returns action
    static: state, a description of the current world state
            rules, a set of condition-action rules
    state ← UPDATE-STATE(state, percept)
    rule ← RULE-MATCH(state, rules)
    action ← RULE-ACTION[rule]
    state ← UPDATE-STATE(state, action)
    return action

Example: when a person walks in a lane, he maps the pathway in his mind.
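A minimal model-based sketch in Python. The scenario is an illustrative assumption (not from the slides): the percept reveals only the dirt status of the current square, so the agent must track its own location as internal state and update it using the known effects of its actions:

```python
# Model-based reflex agent sketch for a two-location vacuum world.
# The percept is only 'Dirty'/'Clean'; the location is unobserved,
# so the agent keeps it as internal state (an assumption for this example).

class ModelBasedVacuum:
    def __init__(self):
        self.location = 'A'  # internal state: where the agent believes it is

    def __call__(self, status):
        if status == 'Dirty':
            return 'Suck'
        # Effect of the agent's own action on its world model:
        # moving flips the believed location.
        if self.location == 'A':
            self.location = 'B'
            return 'Right'
        self.location = 'A'
        return 'Left'

agent = ModelBasedVacuum()
print(agent('Dirty'))  # Suck
print(agent('Clean'))  # Right (believed at A, moves to B)
print(agent('Clean'))  # Left  (believed at B, moves back to A)
```

The `self.location` update is exactly the second kind of knowledge the slide mentions: the effects of the agent's own actions on the (partially unobserved) world state.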
Goal-based Agents
• Having the current state information is not sufficient unless the goal has been decided. Therefore, a goal-based agent selects, among multiple possibilities, the way that helps it reach its goal.

Goal-based Agent Structure

function GOAL-BASED-AGENT(percept) returns action
    static: state, a description of the current world state
            goal, a description of the goal to achieve, possibly in terms of states
    state ← UPDATE-STATE(state, percept)
    actionSet ← POSSIBLE-ACTIONS(state)
    action ← ACTION-THAT-LEADS-TO-GOAL(actionSet)
    state ← UPDATE-STATE(state, action)
    return action
Utility-based Agents
• These agents are concerned with the performance measure. The agent selects those actions which maximize the performance measure and move it towards the goal.

Utility-based Agent Structure

function UTILITY-BASED-AGENT(percept) returns action
    static: state, a description of the current world state
            goal, a description of the goal to achieve, possibly in terms of states
    state ← UPDATE-STATE(state, percept)
    actionSet ← POSSIBLE-ACTIONS(state)
    action ← BEST-ACTION(actionSet)
    state ← UPDATE-STATE(state, action)
    return action
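The BEST-ACTION step can be sketched as picking the action with the highest utility. The utility function and the numbers below are made up for illustration; a real agent would derive them from its performance measure:

```python
# BEST-ACTION sketch: choose the action with the highest utility value
# (illustrative; the action names and utilities are assumptions).

def best_action(action_set, utility):
    """Pick the action that maximizes the given utility function."""
    return max(action_set, key=utility)

utilities = {'short_bumpy_road': 0.4, 'long_smooth_road': 0.7}
actions = list(utilities)
print(best_action(actions, utilities.get))  # long_smooth_road
```

This is what distinguishes a utility-based agent from a goal-based one: both routes may reach the goal, but the utility function ranks them, so the agent can trade off comfort, speed, and cost rather than merely testing goal membership.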
Learning Agents
• Critic: it provides feedback to the learning agent about how well the agent is doing, which could maximize the performance measure in the future.

Learning Agents
• The main task of these agents is to learn to operate in an unknown environment and gain as much knowledge as they can.
• A learning agent is divided into four conceptual components:
• Learning element: this element is responsible for making improvements.
• Performance element: it is responsible for selecting external actions according to the percepts it takes.
• Problem generator: it suggests actions which could lead to new and informative experiences.
CHAPTER 3

Searching and Planning

Solving Problems by Searching

Search Algorithms

OBJECTIVES
 Identify the type of agent that solves problems by searching
 Problem formulation and goal formulation
 Types of problems based on environment type
 Discuss various search strategies
INTRODUCTION
 A problem is a situation, question, or thing that causes difficulty, stress, or doubt. It is also a question raised to inspire thought.
 In mathematics, a problem is a statement or equation that requires a solution.
 Problem solving is a part of artificial intelligence that encompasses a number of techniques and algorithms to solve a problem.
 The problem-solving agent performs precisely by defining problems and their several solutions.
 Therefore, a problem-solving agent is a goal-driven agent that focuses on achieving its goal.

PROBLEM-SOLVING AGENT
 The problems of AI are directly associated with the nature of humans and their activities. We need a finite number of steps to solve a problem, which makes human work easy.
 Steps performed by a problem-solving agent:
 Goal Formulation: the first and simplest step in problem-solving.
 It organizes the steps/sequence required to formulate one goal out of multiple goals, as well as the actions to achieve that goal.
 Goal formulation is based on the current situation and the agent's performance measure.
 Problem Formulation: the most important step of problem-solving, which decides what actions should be taken to achieve the formulated goal.


There are five components involved in problem formulation:
Initial State: the starting state, or the agent's initial step towards its goal.
Actions: a description of the possible actions available to the agent.
Transition Model: describes what each action does.
Goal Test: determines whether a given state is a goal state.
Path Cost: assigns a numeric cost to each path towards the goal.

The initial state, actions, and transition model together define the state space of the problem implicitly.
The state space of a problem is the set of all states which can be reached from the initial state by any sequence of actions.
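The five components can be written down as a small Python class, instantiated here for the two-location vacuum world. This is an illustrative sketch; the attribute names and the state encoding (agent location, status of A, status of B) are assumptions:

```python
# The five components of problem formulation as a Python class (illustrative).

class Problem:
    def __init__(self, initial, actions, transition, goal_test, step_cost):
        self.initial = initial        # Initial State
        self.actions = actions        # Actions available in a state
        self.transition = transition  # Transition Model: result of an action
        self.goal_test = goal_test    # Goal Test
        self.step_cost = step_cost    # Path cost = sum of step costs

# Vacuum world instance: state = (agent location, status of A, status of B).
def vacuum_actions(state):
    return ['Left', 'Right', 'Suck']

def vacuum_result(state, action):
    loc, a, b = state
    if action == 'Suck':
        return (loc, 'Clean', b) if loc == 'A' else (loc, a, 'Clean')
    return ('A', a, b) if action == 'Left' else ('B', a, b)

problem = Problem(
    initial=('A', 'Dirty', 'Dirty'),
    actions=vacuum_actions,
    transition=vacuum_result,
    goal_test=lambda s: s[1] == 'Clean' and s[2] == 'Clean',
    step_cost=lambda s, a: 1,
)

s = problem.transition(problem.initial, 'Suck')   # ('A', 'Clean', 'Dirty')
s = problem.transition(s, 'Right')                # ('B', 'Clean', 'Dirty')
s = problem.transition(s, 'Suck')                 # ('B', 'Clean', 'Clean')
print(problem.goal_test(s))  # True
```

Chaining `transition` calls like this traces out a path through the implicitly defined state space; the state space itself is never enumerated up front.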
• In AI one must identify the components of a problem. These are:
 Problem Statement
 Definition
 Limitations (Constraints or Restrictions)
 Problem Solution (Goal)
 Solution Space
 Operators (Actions)
EXAMPLE
 Definition of Problem: the information about what is to be done.
 "I want to predict the price of a house using AI."
 Problem Limitation: there are always some limitations while solving problems.
 "I have only a few features; some records are missing."
 Goal or Solution: what is expected? The goal state, final state, or solution of the problem is defined here.
 "Predict the price of the house."
 Operators (Actions): the actions taken while solving the problem. The complete problem is solved using tiny steps or actions, and all these steps together make up the solution.

Solution Space
 A problem can be solved in many ways, but:
 some solutions will be more efficient than others,
 some will consume fewer resources,
 some will be simpler, etc.
 Alternatives always exist.
 The many possible ways in which a problem can be solved are known as the solution space.
• "The price of a house can be predicted using many machine learning algorithms."
For example: Mouse Path Problem
 Problem Definition: a mouse is in a puzzle where there is some cheese. The mouse must eat the cheese.
 Limitation: some paths are closed; the mouse can only travel through open paths.
 Goal: reach a location with cheese and eat at least one piece of cheese.
 Solution Space: there are multiple possible paths to reach the cheese.
 Operators: the mouse can move in four possible directions: UP, DOWN, LEFT, and RIGHT.
• For the vacuum world problem, problem formulation involves:
 Abstracting the real environment configuration into state information using a preferred data structure: e.g., a list of 3 elements, [dirty, dirty, A], holding information about block A, block B, and the location of the agent.
 Describing the initial state according to the data structure.
 Deciding the set of all possible actions: Suck, moveRight, moveLeft.
 Determining the set of actions possible in a given state at a specific point in the process (which of the above actions are valid for a given state).
 The cost of the action at each state.

Goal formulation: refers to understanding the objective of the agent based on the state description of the final environment.
For example, for the vacuum world problem, the goal can be formulated as [Clean, Clean, agent at any block].
Intelligent agents are supposed to act in such a way that the environment goes through a sequence of states that maximizes the performance measure.
Such an agent is not a reflex or model-based reflex agent, because this agent needs to achieve some target (goal). It can be a goal-based, utility-based, or learning agent.
TYPES OF PROBLEMS
Four types of problems exist in real situations:

1. Single-state problem
 The environment is deterministic and fully observable.
 Out of the possible state space, the agent knows exactly which state it will be in; the solution is a sequence of actions.
 Example: let the world contain just two locations. Each location may or may not contain dirt, and the agent may be in one location or the other.

2. Sensorless problem (conformant problem)
 The environment is non-observable.
 It is also called a multi-state problem.
 The agent may have no idea where it is; the solution is a sequence of actions.
 When the world is not fully accessible, the agent must reason about sets of states that it might get to, rather than single states. We call this a multiple-state problem.
TYPES OF PROBLEMS
3. Contingency problem
 The environment is nondeterministic and/or partially observable.
 It is not possible to know the effect of the agent's actions in advance.
 Percepts provide new information about the current state.
 Solving this problem requires sensing during the execution phase. Notice that the agent must now calculate a whole tree of actions, rather than a single action sequence.
 Many problems in the real, physical world are contingency problems, because exact prediction is impossible. For this reason, many people keep their eyes open while walking around or driving.

4. Exploration problem
 The environment is partially observable; it is also called an unknown state space.
 Example: assume the agent is somewhere outside the blocks and wants to clean the blocks. How does it get into the blocks? There is no clear information about their location. What will be the solution?
 Example: the agent is at some point in the world and wants to reach a city called CITY which is unknown to the agent. The agent does not have any map.
 In both cases, the solution is exploration.
Basically, there are two types of problem approaches:
1. Toy Problem: a concise and exact description of a problem, used by researchers to compare the performance of algorithms. Toy problems are intended to illustrate or exercise various problem-solving methods.
2. Real-world Problem: a real-world based problem which requires a solution. Real-world problems tend to be more difficult, and their solutions are ones people actually care about.
• Toy problems can be given a concise, exact description, but real-world problems tend not to have a single agreed-upon description; we will attempt to give the general flavor of their formulations.
Searching for Solutions
AI is the study of building agents that act rationally. Most of the time, these agents perform some kind of search algorithm in the background in order to achieve their tasks.
A search problem consists of:
 A State Space: the set of all possible states you can be in.
 A Start State: the state from which the search begins.
 A Goal Test: a function that looks at the current state and returns whether or not it is the goal state.
The solution to a search problem is a sequence of actions, called the plan, that transforms the start state into the goal state.
This plan is found through search algorithms.
 We have seen many problems. There is a need to search for solutions to solve them.
 For solving different kinds of problems, an agent makes use of different strategies to reach the goal by searching for the best possible algorithm. This process of searching is known as a search strategy.
 There are four ways to measure the performance of an algorithm:
 Completeness: whether the algorithm guarantees to find a solution (if any solution exists).
 Optimality: whether the strategy finds an optimal solution.
 Time Complexity: the time taken by the algorithm to find a solution.
 Space Complexity: the amount of memory required to perform the search.
 The complexity of an algorithm depends on the branching factor b (the maximum number of successors), the depth d of the shallowest goal node (i.e., the number of steps from the root along the path), and the maximum length m of any path in the state space.
SEARCH STRATEGIES
There are two types of strategies that describe a solution for a given problem:
1. Uninformed Search (Blind Search): does not have any additional information about the states except the information provided in the problem definition. This type of search does not maintain any internal state; that is why it is also known as blind search.
• Types of uninformed searches:
 Breadth-first search
 Uniform-cost search
 Depth-first search
 Depth-limited search
 Iterative deepening search
 Bidirectional search

SEARCH STRATEGIES
2. Informed Search (Heuristic Search): contains some additional information about the states beyond the problem definition. This search uses problem-specific knowledge to find more efficient solutions.
This search maintains some sort of internal state via heuristic functions (which provide hints), so it is also called heuristic search.
There are the following types of informed searches:
 Greedy best-first search (greedy search)
 A* search
Breadth-first Search
• Breadth-first search is a simple strategy in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on.
 All the nodes at depth d in the search tree are expanded before the nodes at depth d + 1.
 Implementation: the fringe (open list) is a FIFO queue, i.e., new successors go at the end.
79
Breadth-first Search con’td
[Figure: a search tree expanded level by level, first the root, then its children B and C, then the next level D, E, F, G.]
If there is a solution, breadth-first search is guaranteed to find it, and if there are several solutions, breadth-first search will always find the shallowest goal state first.
80
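As a minimal sketch, the FIFO-queue implementation described above can be written in Python. The adjacency list below is hypothetical (node names are illustrative, not taken from the slide's figure):

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand all nodes at depth d before depth d + 1."""
    frontier = deque([[start]])        # FIFO queue of paths; new successors go at the end
    visited = {start}
    while frontier:
        path = frontier.popleft()      # shallowest unexpanded path
        node = path[-1]
        if node == goal:
            return path
        for succ in graph.get(node, []):
            if succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])
    return None                        # no solution exists

# Hypothetical tree: root A with children B, C and grandchildren D, E, F, G.
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(bfs(graph, "A", "G"))  # ['A', 'C', 'G']
```

Because the queue releases shallower paths first, the first path found to the goal is a shallowest one, matching the guarantee stated above.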
Uniform-cost Search
Unlike BFS, this search explores nodes based on their path cost from the root node. It expands the node n with the lowest path cost g(n), where g(n) is the total cost from the root node to node n.
Implementation: fringe = priority queue ordered by path cost.
81
Uniform-cost Search
Disadvantages of Uniform-cost search
 It does not care about the number of steps a path takes to reach the goal state, only about the total path cost.
 It may get stuck in an infinite loop if there is a path with an infinite sequence of zero-cost actions.
 It works hard, as it examines each node in search of the lowest-cost path.
The performance measure of Uniform-cost search
 Completeness: it is guaranteed to reach the goal state (provided every step cost is positive).
 Optimality: it gives the optimal path-cost solution for the search.
 Space and time complexity: the worst-case space and time complexity of uniform-cost search is O(b^(1+⌊C*/ε⌋)), where C* is the cost of the optimal solution and ε is the minimum step cost.
82
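A minimal uniform-cost sketch using a priority queue ordered by g(n). The weighted graph is hypothetical (edge lists of (successor, cost) pairs, not from the slide):

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the frontier node with the lowest path cost g(n)."""
    frontier = [(0, start, [start])]          # (g(n), node, path), ordered by g
    best_g = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for succ, step_cost in graph.get(node, []):
            new_g = g + step_cost
            if new_g < best_g.get(succ, float("inf")):   # cheaper route found
                best_g[succ] = new_g
                heapq.heappush(frontier, (new_g, succ, path + [succ]))
    return None

# Hypothetical weighted graph: the cheapest route is not the fewest-steps route.
graph = {"S": [("A", 1), ("B", 5)], "A": [("B", 2), ("G", 9)], "B": [("G", 1)]}
print(uniform_cost_search(graph, "S", "G"))  # (4, ['S', 'A', 'B', 'G'])
```

Note that the three-step path S-A-B-G (cost 4) beats the two-step path S-A-G (cost 10), illustrating that uniform-cost search ignores the number of steps.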
Depth-first Search
Expands one of the nodes at the deepest level of the tree. Only when the search hits a dead end does it go back and expand nodes at shallower levels.
Implementation: fringe = LIFO queue (stack), i.e., put successors at the front.
The drawback of depth-first search is that it can get stuck going down the wrong path. Many problems have very deep or even infinite search trees, so depth-first search may never be able to recover from an unlucky choice.
Depth-first search will either get stuck in an infinite loop and never return a solution, or it may eventually find a solution path that is longer than the optimal solution.
83
Depth-first Search
[Figure: depth-first search expansion order on an example tree.]
84
Properties Of Depth-first Search
• Complete? No: fails in infinite-depth spaces and spaces with loops.
• Optimal? No.
• Time? O(b^m): terrible if m is much larger than d.
• Space? O(bm), i.e., linear space!
Where
b: maximum branching factor of the search tree
d: depth of the least-cost solution
m: maximum depth of the state space
• Depth-first search is neither complete nor optimal. Because of this, depth-first search should be avoided for search trees with large or infinite maximum depths. 85
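The LIFO-stack implementation described above can be sketched as follows, on a hypothetical tree; the check against the current path is what keeps this version from looping forever on graphs with cycles:

```python
def dfs(graph, start, goal):
    """Depth-first search: always expand the deepest path on the stack."""
    frontier = [[start]]                   # LIFO stack of paths
    while frontier:
        path = frontier.pop()              # deepest unexpanded path
        node = path[-1]
        if node == goal:
            return path
        # Push successors so the leftmost child is popped (expanded) first.
        for succ in reversed(graph.get(node, [])):
            if succ not in path:           # avoid loops along the current path
                frontier.append(path + [succ])
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(dfs(graph, "A", "G"))  # ['A', 'C', 'G']
```

The stack holds at most one path of length m plus its siblings, which is where the O(bm) linear-space property comes from.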
Depth-limited Search
Depth-limited search avoids the pitfalls of depth-first search by
imposing a cutoff on the maximum depth of a path.

86
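The cutoff idea can be sketched as a recursive DFS that refuses to expand below a given limit; the graph and limit values are illustrative:

```python
def depth_limited_search(graph, node, goal, limit, path=None):
    """Recursive DFS that cuts off any path deeper than `limit`."""
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return None                      # cutoff: do not expand below the limit
    for succ in graph.get(node, []):
        if succ not in path:             # avoid loops along the current path
            result = depth_limited_search(graph, succ, goal, limit - 1, path)
            if result:
                return result
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(depth_limited_search(graph, "A", "G", limit=2))  # ['A', 'C', 'G']
print(depth_limited_search(graph, "A", "G", limit=1))  # None: goal lies below the cutoff
```

The second call shows the trade-off: a limit smaller than the goal depth makes the search incomplete.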
Iterative Deepening Search
This search is a combination of BFS and DFS: like BFS it guarantees to reach the goal node, and like DFS it occupies little memory.
It is a strategy that sidesteps the issue of choosing the best depth limit by trying all possible depth limits: first depth 0, then depth 1, then depth 2, and so on.
The order of expansion of states is similar to breadth-first, except that some states are expanded multiple times.
87
Iterative Deepening Search
It gradually increases the depth limit (0, 1, 2, and so on) until the goal node is reached.
88
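The strategy above can be sketched as a loop over depth-limited searches with limits 0, 1, 2, ...; the graph is illustrative:

```python
def iterative_deepening_search(graph, start, goal, max_depth=10):
    """Run depth-limited search with limits 0, 1, 2, ... until the goal is found."""
    def dls(node, limit, path):
        if node == goal:
            return path
        if limit == 0:
            return None                  # cutoff at the current limit
        for succ in graph.get(node, []):
            if succ not in path:
                result = dls(succ, limit - 1, path + [succ])
                if result:
                    return result
        return None

    for limit in range(max_depth + 1):   # try each depth limit in turn
        result = dls(start, limit, [start])
        if result:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(iterative_deepening_search(graph, "A", "G"))  # ['A', 'C', 'G'], found at limit 2
```

Shallow nodes are re-expanded at every iteration, which is the "some states are expanded multiple times" cost noted above; in exchange, the memory use stays as low as DFS.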
Bidirectional Search
 Simultaneously search forward from the initial state and backward from the goal state, and terminate when the two searches meet in the middle.
 Reconstruct the solution by backtracking towards the root and forward-tracking towards the goal from the point of intersection.
89
Bidirectional Search
This algorithm is efficient when there are only one or two explicitly known goal states in the search space.
90
Bidirectional search can use search techniques such as BFS, DFS, and DLS.
Advantages:
 Bidirectional search is fast.
 Bidirectional search requires less memory.
Disadvantages:
 Implementation of the bidirectional search tree is difficult.
 In bidirectional search, one should know the goal state in advance.
91
Completeness: bidirectional search is complete if we use BFS in both searches.
Time Complexity: the time complexity of bidirectional search using BFS is O(b^(d/2)).
Space Complexity: the space complexity of bidirectional search is O(b^(d/2)).
Optimality: bidirectional search is optimal (with BFS in both directions and uniform step costs).
92
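A sketch of bidirectional search using BFS in both directions on a hypothetical graph. The backward search follows edges in reverse via an undirected view, and the path is reconstructed from the meeting point by backtracking toward the start and forward-tracking toward the goal, as described above:

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """BFS from the start and from the goal at once; stop where the frontiers meet."""
    if start == goal:
        return [start]
    # Undirected view so the backward search can follow edges in reverse.
    undirected = {}
    for node, succs in graph.items():
        for succ in succs:
            undirected.setdefault(node, set()).add(succ)
            undirected.setdefault(succ, set()).add(node)

    def chain(parents, node):
        out = []                         # walk parent pointers back to a root
        while node is not None:
            out.append(node)
            node = parents[node]
        return out

    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])
    while frontier_f and frontier_b:
        # Alternate: expand one node forward, then one node backward.
        for frontier, parents, others in ((frontier_f, parents_f, parents_b),
                                          (frontier_b, parents_b, parents_f)):
            node = frontier.popleft()
            for succ in sorted(undirected.get(node, ())):
                if succ not in parents:
                    parents[succ] = node
                    frontier.append(succ)
                if succ in others:                       # the two searches meet here
                    forward = chain(parents_f, succ)[::-1]   # start ... meeting node
                    backward = chain(parents_b, succ)[1:]    # rest of the way to goal
                    return forward + backward
    return None

# Hypothetical graph with two equal-length routes from A to G.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": ["G"], "E": ["G"]}
print(bidirectional_search(graph, "A", "G"))  # ['A', 'B', 'D', 'G']
```

Each frontier only needs to reach depth d/2 before they meet, which is where the O(b^(d/2)) complexity comes from.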
Informed Search Algorithms
 Uninformed search strategies can find solutions to problems by
systematically generating new states and testing them against
the goal. Unfortunately, these strategies are incredibly inefficient
in most cases.
 Informed search strategy uses problem-specific knowledge and
can find solutions more efficiently than uninformed search.

93
Informed Search Algorithms
 Informed search is a strategy that uses information about the cost that may be incurred to achieve the goal state from the current state.
 The information may not be accurate, but it helps the agent make better decisions. This information is called heuristic information.
 There are several algorithms that belong to this group. Some of these are:
1. Greedy best-first search
2. A* search
94
Greedy Best-first Search
The greedy best-first search algorithm always selects the path which appears best at that moment.
It is the combination of depth-first search and breadth-first search algorithms.
It uses a heuristic function to guide the search.
In the best-first search algorithm, we expand the node which is closest to the goal node, where closeness is estimated by a heuristic function, i.e. f(n) = h(n), where h(n) = estimated cost from node n to the goal.
95
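A minimal greedy best-first sketch, always expanding the frontier node with the smallest h(n). The graph and heuristic values below are hypothetical:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Expand the node with the smallest heuristic value f(n) = h(n)."""
    frontier = [(h[start], start, [start])]     # priority queue ordered by h(n)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for succ in graph.get(node, []):
            if succ not in visited:
                heapq.heappush(frontier, (h[succ], succ, path + [succ]))
    return None

# Hypothetical graph and heuristic estimates of the distance to G.
graph = {"S": ["A", "B"], "A": ["D"], "B": ["C"], "C": ["G"], "D": ["G"]}
h = {"S": 10, "A": 6, "B": 4, "C": 2, "D": 3, "G": 0}
print(greedy_best_first(graph, h, "S", "G"))  # ['S', 'B', 'C', 'G']
```

Note that only h(n) drives the choice; the cost already spent is ignored, which is why the result is not guaranteed to be optimal.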
Greedy Best-first Search: Example
Consider the search problem below. Each node is expanded at each iteration using the evaluation function f(n) = h(n), as shown in the table below:
[Figures: the example graph, its heuristic table, and the step-by-step expansion.]
96
Properties Of Greedy Best-first Search
The performance measure of the best-first search algorithm:
Completeness: best-first search is incomplete even in a finite state space.
Optimality: it does not provide an optimal solution.
Time and Space complexity: it has O(b^m) worst-case time and space complexity, where m is the maximum depth of the search tree. If the quality of the heuristic function is good, the complexities can be reduced substantially.
99
A* Search
 A* search is the most widely used informed search algorithm, where a node n is evaluated by combining the values of the functions g(n) and h(n).
 The function g(n) is the path cost from the start/initial node to node n, and h(n) is the estimated cost of the cheapest path from node n to the goal node.
 Therefore, we have f(n) = g(n) + h(n), where f(n) is the estimated cost of the cheapest solution through n.
 So, in order to find the cheapest solution, try to find the lowest values of f(n).
100
A* Search
 S is the root node, and G is the goal node. Starting from the root node S, we move towards its successor nodes A and B.
 f(A) = (distance from node S to A) + h(A) = 2 + 12 = 14
 f(B) = (distance from node S to B) + h(B) = 3 + 14 = 17
 f(C) = 9 + 11 = 20
 f(D) = 5 + 4 = 9
 f(G) = 3 + 1 = 4
 Chosen path: S --> A --> D --> G 101
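The f(n) = g(n) + h(n) evaluation can be sketched as follows. Since the slide's edge costs are not fully specified, the weighted graph and heuristic below are hypothetical (h is chosen to be admissible, i.e. it never overestimates):

```python
import heapq

def a_star(graph, h, start, goal):
    """A*: expand the node minimizing f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for succ, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(succ, float("inf")):   # cheaper route to succ
                best_g[succ] = new_g
                heapq.heappush(frontier,
                               (new_g + h[succ], new_g, succ, path + [succ]))
    return None

# Hypothetical weighted graph with an admissible heuristic h.
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("C", 5), ("G", 12)],
         "B": [("C", 2)], "C": [("G", 3)]}
h = {"S": 7, "A": 6, "B": 2, "C": 1, "G": 0}
print(a_star(graph, h, "S", "G"))  # (8, ['S', 'A', 'B', 'C', 'G'])
```

Compared with the uniform-cost version, the only change is that the priority is f = g + h instead of g alone; with an admissible h, the first goal popped is still guaranteed to be optimal.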
A* Search
• The performance measure of A* search
• Completeness: A* search is guaranteed to reach the goal node.
• Optimality: an underestimated (admissible) heuristic cost will always give an optimal solution.
• Space and time complexity: A* search has O(b^d) worst-case space and time complexity.
• Disadvantage of A* search
• A* often runs out of memory, since it keeps all generated nodes in memory.
102
CHAPTER FOUR
Knowledge Representation & Reasoning

103
INTRODUCTION
 For efficient decision-making and reasoning, an intelligent agent needs knowledge about the real world.
 Knowledge-based agents are capable of maintaining an internal state of knowledge, reasoning over that knowledge, updating their knowledge following observations, and taking actions.
 These agents use some type of formal representation to represent the world and act intelligently.
104
Con’td
• Knowledge-based agents are composed of two main parts:

• Knowledge-base

• Inference system

105
LEVELS OF KNOWLEDGE-BASED AGENT
 Knowledge level: the first level of a knowledge-based agent is the knowledge level, where we must specify what the agent knows and what the agent's goals are.
 Let's say an automated taxi agent needs to get from station A to station B, and it knows how to get there; this is described at the knowledge level.
 Logical level: knowledge is encoded into logical statements. At the logical level, we can expect the automated taxi agent to reason its way to destination B.
 Implementation level: the physical representation of logic and knowledge. Agents at the implementation level take actions based on their logical and knowledge levels.
 At this level, an autonomous taxi driver puts its knowledge and logic into action in order to reach its destination.
106
Approaches To Design KBA
Building a knowledge-based agent can be done in one of two ways:
1. Declarative approach: a knowledge-based agent can be created by starting with an empty knowledge base and telling the agent all the sentences we wish to start with.
2. Procedural approach: we directly express the desired behavior as program code. That is, all we need to do is write a program that already has the intended behavior of the agent encoded in it.
 In the real world, however, a successful agent can be created by mixing the declarative and procedural approaches, and declarative knowledge can frequently be turned into more efficient procedural code.
107
Cont’d
 Data is a collection of facts. Information is organized data and facts about the task domain.
 Data, information, and past experience combined together are termed knowledge.
 Knowledge is required to exhibit intelligence. The success of any AI system largely depends upon the collection of highly accurate and precise knowledge.
108
Con’td
 The types of knowledge that must be represented in AI systems are:
 Object: All of the information on objects in our domain. Guitars,
for example, have strings, while trumpets are brass instruments.
 Events: Events are the actions that take place in our world.
 Performance: Performance is a term used to describe behavior that
entails knowing how to perform things.
 Meta-knowledge: Meta-knowledge is information about what we
already know.
 Facts: The truths about the real world and what we represent are
known as facts. 109
AI Knowledge Cycle
• For showing intelligent behavior, an artificial intelligence system must

have the following components:

110
Perception Block: helps the AI system gain information about its surroundings through various sensors, making the AI system familiar with its environment and helping it interact with it.
 Learning Block: the knowledge gained helps the AI system run its deep learning algorithms. These algorithms are written in the learning block, and the AI system transfers the necessary information from the perception block to the learning block for learning (training).
 Knowledge and Reasoning Block: as mentioned earlier, we use knowledge, and based on it, we reason and then take decisions. These two blocks go through all the knowledge data and find the relevant pieces to be provided to the learning model whenever required.
 Planning and Execution Block: these two blocks, though independent, can work in tandem. They take information from the knowledge and reasoning blocks and, based on it, execute certain actions.
111
Knowledge Representation
 Humans excel at comprehending, reasoning, and interpreting information.
 Humans have knowledge about things and use that knowledge to accomplish various activities in the real world.
 Knowledge representation and reasoning deals with how machines can achieve all of these things.
 Knowledge representation and reasoning (KR, KRR) is a branch of artificial intelligence that studies how AI agents think and how their thinking influences their behavior.
112
Knowledge Representation
It is in charge of describing information about the real world in such a way that a computer can comprehend it and use it to solve difficult real-world problems, such as diagnosing a medical ailment or conversing in natural language with humans.
It is also a means of describing how artificial intelligence can represent knowledge.
Knowledge representation is more than just storing data in a database; it also allows an intelligent machine to learn from its knowledge and experiences in order to act intelligently like a person.
113
Approaches To KR
 There are basically four approaches to knowledge representation:
 Simple relational knowledge: the most basic technique of storing facts, using the relational method, with each fact about a group of objects laid out in columns in a logical order.
 This method of knowledge representation is often used in database systems to express the relationships between various entities.
 Example: the following is a simple relational knowledge representation.
Player Weight Age
Player1 65 23
Player2 58 18
Player3 75 24
114
APPROACHES TO KR
 Inheritable knowledge: data is kept in a hierarchy of classes. The instance relation is a type of inheritable knowledge that illustrates the relationship between an instance and a class.
• Each individual frame can indicate a set of attributes as well as their values. Objects and values are represented as boxed nodes in this technique, and arrows are used to connect objects to their values.
115
Approaches To KR
 Inferential knowledge: knowledge is represented in the form of formal logic. More facts can be derived using this method.
 Example: suppose there are two statements:
 Marcus is a man.
 All men are mortal.
 Then we can represent them as:
 man(Marcus)
 ∀x: man(x) → mortal(x)
116
Approaches To KR
 Procedural knowledge: small programs and code are used to specify how to do specific things and how to proceed.
 One significant rule employed in this method is the If-Then rule.
 We may employ several programming languages, such as LISP and Prolog, with this approach.
 Using this method, we can readily represent heuristic or domain-specific information.
 However, not all cases can be represented with this approach.
117
Requirements Of KR
 A good knowledge representation system has to possess the following properties:
 Representational Accuracy: the KR system should be able to represent any type of knowledge that is necessary.
 Inferential Adequacy: the KR system should be able to manipulate representational structures in order to generate new knowledge that matches the existing structure.
 Inferential Efficiency: the ability to store appropriate guides and steer the inferential knowledge process in the most productive directions.
 Acquisitional Efficiency: the ability to quickly acquire fresh information using automated means.
118
Logical Representation
 Logic is the study of the principles of reasoning and arguing towards the truth of a given conclusion from given premises.
 Logic is the study of the methods and principles used to distinguish good (correct) from bad (incorrect) reasoning.
 Logic is a formal language. It has syntax, semantics, and a way of manipulating expressions in the language.
 Syntax is a description of what you are allowed to write down, i.e., which expressions are legal in the language.
 Semantics is what legal expressions mean. Therefore, syntax is form and semantics is content (meaning).
 Inference Procedure: a method for computing (deriving) new (true) sentences from existing sentences.
119
Logical Representation
 Logic in computer science is the intersection between mathematical logic and computer science; it is also known as the calculus of computer science.
 Logic in AI is the key idea for KB design, KB representation, and inferencing (reasoning). In mathematics there are different kinds of logics.
 Some of these, in order of their generality, are:
 Propositional logic
 First-order logic
 Second-order logic and beyond
 First-order logic can be used to design, represent, or infer about almost any environment in the real world.
120
Propositional Logic
 The simplest kind of logic is propositional logic (PL), in
which all statements are made up of propositions.
 A statement (proposition) is a declarative sentence
which may be asserted to be either true or false.

For example,
 Five men cannot have eleven eyes.

 The sum of the numbers 3 and 5 equals 8.

121
PROPOSITIONAL LOGIC
• The term "proposition" refers to a declarative statement that can be true or false. It is a method of expressing knowledge in logical and mathematical terms.
• Example:

1. It is Tuesday. (True proposition)

2. The Sun rises from West. (False proposition)

3. 3 + 3 = 7 (False proposition)

4. 5 is a prime number. (True proposition)


• Propositions can be true or untrue, but not both at the same time.

122
Propositional Logic
 The sentences which are not propositions include questions, orders,

exclamations etc. for which we may not associate a truth value.

• For example,

 How are you?

 Ready, steady, go!

 May fortune come your way.

 Where is Abebe?

 What is your name?

Statements that are inquiries, demands, or opinions are not propositions.
123
Propositional Logic: Exercise
 Which of the following sentences are propositions? What are the truth values of those that are propositions?
a) Dilla is the capital of Ethiopia.
Proposition, False
b) 2 + 3 = 5.
Proposition, True
c) 5 + 7 = 10.
Proposition, False
d) x + 2 = 11.
Not a proposition (its truth value depends on x)
e) Answer this question.
Not a proposition
f) What time is it?
Not a proposition
124
Propositional Logic
• PL operates with 0 and 1; it is also known as Boolean logic.
• In PL, symbolic variables are used to express the syntax, and any symbol can be used to represent a proposition, such as A, B, C, P, or Q. Propositions and logical connectives make up propositional logic.
• The essential parts of propositional logic are propositions and connectives.
• Connectives are logical operators that link two sentences together.
125
PROPOSITIONAL LOGIC
• Propositions are divided into two categories:
 Atomic propositions: made up of only one proposition symbol. These are sentences that must be either true or false.
 Example:
• 2 + 2 = 4 is an atomic proposition, and it is a true fact.
• "The Sun is cold" is also an atomic proposition, and it is a false fact.
 Compound propositions: atomic propositions are combined with connectives to form compound propositions.
 Example:
• "It is raining today, and the street is wet."
• "Abebe is a teacher, and his school is in Dilla."
126
PROPOSITIONAL LOGIC
• Logical connectives are used to link two simpler propositions or to logically represent a statement. With the use of logical connectives, we can form compound assertions. There are five primary connectives:
• Negation (¬): a statement like ¬P is referred to as the negation of P.
• There are two types of literals: positive and negative literals.
• Example: Abebe is intelligent. It can be written as:
P = Abebe is intelligent,
¬P = Abebe is not intelligent. (It is not true that Abebe is intelligent.)
127
PROPOSITIONAL LOGIC
• Conjunction: is a sentence that contains ∧ connective such as, P ∧ Q.

• Example: “Melat is a doctor and Engineer",

• Here P = Melat is Doctor.

Q = Melat is Engineer , so we can write it as P∧ Q.

• Disjunction: is a sentence with a connective ∨ , such as P ∨ Q, where


P and Q are the propositions.
• Example: “Melat is a doctor or Engineer",

• Here P = Melat is Doctor.

Q = Melat is Engineer , so we can write it as P ∨ Q. 128


PROPOSITIONAL LOGIC
• Implication: a statement such as P → Q. If-then rules are another name for implications.
• Example: If it rains, the street is flooded. P denotes "it rains" and Q denotes "the street is flooded", so the situation is written as P → Q.
• We call P the premise (hypothesis) of P → Q, and Q its conclusion.
• Biconditional: a sentence like P ⇔ Q is a biconditional sentence, which represents "if and only if".
• Example: I am alive if and only if I am breathing.
• P = I am breathing,
• Q = I am alive; it can be represented as P ⇔ Q. 129
PROPOSITIONAL LOGIC:
CONNECTIVES
Connective Symbol | Technical Term | Word | Example
∧ | Conjunction | AND | P ∧ Q
∨ | Disjunction | OR | P ∨ Q
→ | Implication | Implies | P → Q
⇔ | Biconditional | If and only if | P ⇔ Q
¬ or ~ | Negation | Not | ¬P or ¬Q
130
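The five connectives map directly onto Python's Boolean operators; this small sketch evaluates each one for a single assignment of P and Q (implication is rewritten as ¬P ∨ Q, since Python has no implication operator):

```python
# One assignment of truth values to the propositions P and Q.
P, Q = True, False

print(not P)         # negation      ¬P      -> False
print(P and Q)       # conjunction   P ∧ Q   -> False
print(P or Q)        # disjunction   P ∨ Q   -> True
print((not P) or Q)  # implication   P → Q   -> False (true premise, false conclusion)
print(P == Q)        # biconditional P ⇔ Q   -> False
```

Running the same five lines over all four assignments of (P, Q) reproduces the full truth table of each connective.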
EXERCISE: PROPOSITIONAL LOGIC
• Write the following statements in appropriate PL.
1. Computer science does not take four years.
P = Computer science takes four years.
¬P
2. AI contains Reasoning and Learning.
P = AI contains reasoning.
Q = AI contains learning.
P ∧ Q
131
PROPOSITIONAL LOGIC
• The grammar is ambiguous if a sentence such as P ∧ Q ∨ R could be parsed as either
 (P ∧ Q) ∨ R
 P ∧ (Q ∨ R).
• The way to resolve the ambiguity is to pick an order of precedence for the operators, but use parentheses whenever there might be confusion.
• The order of precedence in propositional logic is (from highest to lowest): ¬, ∧, ∨, ⇒, and ⇔.
132
PROPOSITIONAL LOGIC: EXERCISE
1. Identify the correct precedence of the logical connectives:
 A ∧ ¬B  ≡  A ∧ (¬B)
 A ∧ B ∨ C  ≡  (A ∧ B) ∨ C
 ¬A → B ∧ C  ≡  (¬A) → (B ∧ C)
133
TYPES OF SENTENCE

 Any world in which a sentence is true under a particular

interpretation is called a model of that sentence under that

interpretation.

 Given a sentence α, this sentence according to the world

considered can be

 Valid (tautology)

 Invalid (contradiction)

 Satisfiable (neither valid nor invalid) 134


Propositional Logic Limitations
• In propositional logic, we can only represent facts which are either true or false. PL is not sufficient to represent complex sentences or natural language statements.
• Propositional logic has very limited expressive power.
• Consider the following sentences, which we cannot represent using PL:
"Some humans are intelligent", or
"Abel likes football."
135
Propositional Logic Limitations
• Some of the limitations of propositional logic include:
• Very limited expressive power: unlike natural language, propositional logic has very limited expressive power.
• It can only represent declarative sentences: propositional logic is declarative (every sentence has a truth value).
• Deals only with finite sentences: propositional logic deals satisfactorily only with finite sentences composed using not, and, or, and if-then.
136
FIRST ORDER LOGIC
 First-order logic does not only assume that the world contains facts, like propositional logic, but also assumes the following things in the world:
 Objects: A, B, people, numbers, colors, wars, theories, squares, pits
 Relations: can be unary relations such as red, round, is adjacent, or n-ary relations such as the sister of, brother of, has color, comes between
 Functions: father of, best friend, third inning of, end of, ...
 As a language, first-order logic also has two main parts:
 Syntax
 Semantics
137
FIRST ORDER LOGIC
 The syntax of FOL determines which collections of symbols are logical expressions in first-order logic. Basic elements of first-order logic:
Constants: 1, 2, A, John, Mumbai, cat, ...
Variables: x, y, z, a, b, ...
Predicates: Brother, Father, >, ...
Functions: sqrt, LeftLegOf, ...
Connectives: ∧, ∨, ¬, ⇒, ⇔
Equality: ==
Quantifiers: ∀, ∃
138
FIRST ORDER LOGIC

 First-order logic statements can be divided into two parts:


 Subject: Subject is the main part of the statement.
 Predicate: A predicate can be defined as a relation, which binds two
atoms together in a statement.
 E.g: x is an integer.

139
FIRST ORDER LOGIC
 Atomic sentences are the most basic sentences of first-order logic.
 These sentences are formed from a predicate symbol followed by a
parenthesis with a sequence of terms.
 We can represent atomic sentences as Predicate(term1, term2, ..., termN).
 Example: Chala and Kebede are brothers: => Brothers(Chala,
Kebede).
Jerry is a cat: => cat (Jerry).
 Complex sentences are made by combining atomic sentences using
connectives.
140
Con’td
 A quantifier is a language element which generates quantification, and quantification specifies the quantity of specimens in the universe of discourse.
 These are the symbols that permit us to determine or identify the range and scope of the variable in a logical expression. There are two types of quantifier:
 Universal quantifier (for all, everyone, everything)
 Existential quantifier (for some, at least one)
141
FOL: UNIVERSAL QUANTIFIER
 Universal quantifier is a symbol of logical representation, which
specifies that the statement within its range is true for everything or
every instance of a particular thing. The Universal quantifier is
represented by a symbol ∀, which resembles an inverted A.
 If x is a variable, then ∀x is read as:
 For all x
 For each x
 For every x.

142
FOL: EXISTENTIAL QUANTIFIERS
 Existential quantifiers are the type of quantifiers which express that the statement within their scope is true for at least one instance of something.
 It is denoted by the logical operator ∃, which resembles an inverted E. When it is used with a predicate variable, it is called an existential quantifier.
 If x is a variable, then the existential quantifier will be ∃x or ∃(x), and it will be read as:
 There exists an 'x.'
 For some 'x.'
 For at least one 'x.'
143
FIRST ORDER LOGIC
 The main connective used with the universal quantifier ∀ is implication (→).
 The main connective used with the existential quantifier ∃ is conjunction (∧).
 E.g. 1: All birds fly; the predicate is fly(bird).
∀x bird(x) → fly(x).
 E.g. 2: Some boys are intelligent.
∃x: boys(x) ∧ intelligent(x)
 E.g. 3: Every man respects his parent.
∀x man(x) → respects(x, parent).
144
FIRST ORDER LOGIC
 E.g. 4: Not all students like both Mathematics and Science; the predicate is like(x, y).
¬∀(x) [student(x) → like(x, Mathematics) ∧ like(x, Science)].
 E.g. 5: At least one student failed in Mathematics; the predicate is failed(x, y).
∃(x) [student(x) ∧ failed(x, Mathematics)]
145
INFERENCE IN FOL
• Inference in First-Order Logic is used to deduce new facts
or sentences from existing sentences.
• The following are some basic inference rules in FOL:
 Universal Generalization
 Universal Instantiation
 Existential Instantiation
 Existential introduction

146
INFERENCE IN FOL
• Universal generalization is a valid inference rule which states that if premise P(c) is true for any arbitrary element c in the universe of discourse, then we can conclude ∀x P(x).
• This rule can be used if we want to show that every element has a similar property.
• Example:
 Let A(c) represent "course c takes 4 years." If A(c) holds for an arbitrary course c, then ∀x A(x), "all courses take 4 years," will also be true.
147
INFERENCE IN FOL
• Universal instantiation, also called universal elimination or UI, is a valid inference rule. It can be applied multiple times to add new sentences.
• Example:
• All students who study hard graduate. Let our knowledge base contain this detail in the form of FOL:
∀x student(x) ∧ study_hard(x) → graduate(x)
• From this information, we can infer any of the following statements using universal instantiation:
student(Berket) ∧ study_hard(Berket) → graduate(Berket),
student(Betelihem) ∧ study_hard(Betelihem) → graduate(Betelihem)
148
INFERENCE IN FOL
• Existential instantiation, also called existential elimination, is a valid inference rule in first-order logic. This rule states that one can infer P(c) from a formula of the form ∃x P(x) for a new constant symbol c.
• Example:
• There are some students who study hard and graduate:
∃(x) student(x) ∧ study_hard(x) ∧ graduate(x)
• We can infer the following statement:
student(Betelihem) ∧ study_hard(Betelihem) ∧ graduate(Betelihem)
149
INFERENCE IN FOL
• Existential introduction: is also known as an existential

generalization, which is a valid inference rule in first-order logic. This

rule states that if there is some element c in the universe of discourse

which has a property P, then we can infer that there exists something

in the universe which has the property P.

• Example: Let's say that "Abebe got an A in AI."
"Therefore, someone got good marks in AI."
150
Knowledge Engineering
• The process of constructing a knowledge base in first-order logic is called knowledge engineering.
• In knowledge engineering, someone who investigates a particular domain, learns the important concepts of that domain, and generates a formal representation of the objects is known as a knowledge engineer.
• The following are the main steps of the knowledge-engineering process used to develop a knowledge base that will allow us to reason about a situation. 151
Knowledge Engineering
Steps in KE
 Identify the task.
 Assemble the relevant knowledge.
 Decide on a vocabulary of predicates, functions, and constants.
 Encode general knowledge about the domain.
 Encode a description of the specific problem instance.
 Pose queries to the inference procedure and get answers.
 Debug the knowledge base.
152
Knowledge Engineering
1. Identify the task: the first step is to identify the task.
• Example: online registrar system
 At the first level, we will examine the functionality of the system:
 Does the system allow search?
 Does the system allow viewing grades?
 What will be the output if the student fails?
 At the second level, we will examine structural details such as:
 How many pages are there?
 How will the home page look?
153
KNOWLEDGE ENGINEERING
2. Assemble the relevant knowledge: we then assemble the relevant knowledge required for the system. For the registrar system, we have the following required knowledge:
 A student owns a username and password.
 Every ID owner should be able to view grades.
 A non-registered student cannot view grades.
 In the system, there are two main functions: register and view grade.
 Not all students can register.
 Every student should have a department.
154
Knowledge Engineering
3. Decide on a vocabulary: select the functions, predicates, and constants to represent the system.
 Student(x), Course, Teacher, Name, Id,
 Department(x), register, Grade
4. Encode general knowledge about the domain: follow the rules and encode the knowledge:
 A student owns a username and password.
∀x student(x) → owns(username, x) ∧ owns(password, x)
 Every student should have a department.
∀x student(x) → have(x, department). 155
Knowledge Engineering
5. Encode a description of the specific problem instance:
 Berket is a student.
 Berket has a username and password.
 Betelihem is a student.
 Betelihem has a department.
6. Pose queries to the inference procedure and get answers:
• In this step, we will find all the possible sets of values for the system. The queries can be:
 Is Berket a student?
 Does Betelihem have a username? 156
Knowledge Base Agent (ES)
 Expert systems are one of the prominent research domains of AI, introduced by researchers at the Stanford University Computer Science Department.
 They are computer applications developed to solve complex problems in a particular domain, at the level of extraordinary human intelligence and expertise.
 They help users accomplish their goals in the shortest possible way.
 They are designed to work with users' existing or desired work practices.
 The technology should be adaptable to users' requirements, not the other way round. 157
Knowledge Base Agent (Expert System)
 A knowledge-based agent is an agent that performs actions using the knowledge it has, and reasons about its actions using its inference procedure.
 A knowledge base is a set of representations of facts and their relationships, called rules, about the world.
 Each fact/rule is called a sentence, which is represented using a language called a knowledge representation language.
158
Knowledge Base Agent
They are incapable of:
 Substituting for human decision makers
 Possessing full human capabilities
 Producing accurate output from an inadequate knowledge base
 Refining their own knowledge
159
KNOWLEDGE BASE AGENT
 Declarative approach to building an agent:
 Tell it what it needs to know (knowledge base)
 Ask it what it knows
 Answers should follow from the KB

 The agent must be able to:
• Represent states of the world, actions, etc.
• Incorporate new percepts (facts and rules)
• Deduce hidden properties of the world
• Deduce appropriate actions
• Update internal representations of the world
160
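The declarative TELL/ASK interface above can be sketched as follows. The class name and the string form of sentences are illustrative assumptions; a real knowledge base agent would also run an inference procedure over the stored sentences rather than plain membership checks.

```python
class KnowledgeBaseAgent:
    """A minimal sketch of a declaratively built agent."""

    def __init__(self):
        self.kb = set()  # the sentences the agent has been told

    def tell(self, sentence):
        """Incorporate a new percept (fact or rule) into the KB."""
        self.kb.add(sentence)

    def ask(self, query):
        """An answer should follow from the KB (here: plain membership)."""
        return query in self.kb

agent = KnowledgeBaseAgent()
agent.tell("student(Berket)")
print(agent.ask("student(Berket)"))  # True
print(agent.ask("student(Alemu)"))   # False
```

The agent's behaviour is determined entirely by what it has been told, not by hard-coded procedures, which is the essence of the declarative approach.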
EXPERT SYSTEM
 The components of an ES include: Knowledge Base, Inference Engine,
and User Interface.
161
EXPERT SYSTEM: KB
1. Knowledge Base: contains domain-specific and high-quality
knowledge.
The knowledge base of an ES is a store of both factual
and heuristic knowledge.
Factual knowledge is the information widely accepted by
the knowledge engineers and scholars in the task domain.
Heuristic knowledge is about practice, accurate judgment,
one's ability of evaluation, and guessing.
162
Expert System: KB
Knowledge representation: the method used to organize
and formalize the knowledge in the knowledge base.
Knowledge acquisition: the knowledge base is built from
readings and from various experts, scholars, and the knowledge
engineers.
The knowledge engineer is a person with the qualities of
empathy, quick learning, and case-analyzing skills.
 The knowledge engineer acquires information from the subject
expert by recording, interviewing, and observing them at work.
163
Expert System: Inference Engine (Rule Engine)
2. Inference Engine: known as the brain of the expert system, as it
is the main processing unit of the system.

 It applies inference rules to the knowledge base to derive a
conclusion or deduce new information.

 It acquires and manipulates the knowledge from the
knowledge base to arrive at a particular solution.

 To recommend a solution, the inference engine uses the
following strategies:
 Forward Chaining
 Backward Chaining
164
Expert System: Inference Engine
 Forward Chaining: a strategy in which the expert system answers the
question, "What can happen next?"
 A forward-chaining inference engine follows the chain of conditions and
derivations and finally deduces the outcome.
 This strategy is used when working toward a conclusion, result, or
effect; for example, predicting an outcome.
165
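A small forward-chaining sketch (the rule set and fact names are invented for illustration): each rule is a pair of premises and a conclusion, and the engine keeps firing rules whose premises are all satisfied until no new facts appear, thereby deriving "what can happen next".

```python
# Each rule: (set of premises, conclusion). Illustrative medical example.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "positive_test"}, "flu_confirmed"),
]

def forward_chain(initial_facts, rules):
    """Fire rules until the set of known facts stops growing."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)  # rule fires, new fact derived
                changed = True
    return facts

derived = forward_chain({"fever", "cough", "positive_test"}, rules)
print(derived)  # includes flu_suspected and flu_confirmed
```

Starting from the observed conditions, the engine first derives `flu_suspected` and then, in a second pass, `flu_confirmed`: the chain of conditions leads forward to the outcome.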
Expert System: Inference Engine
Backward Chaining answers the question, "Why did this happen?"

 On the basis of what has already happened, the inference engine tries to
find out which conditions could have held in the past for this
result.

 This strategy is used for finding out a cause or reason; for example,
diagnosing blood cancer in humans.
166
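Backward chaining can be sketched with the same invented rule set: starting from a goal (the observed result), the engine works backwards through the rules, recursively checking whether the premises that would explain the goal are supported by the known facts.

```python
# Same illustrative rules: (set of premises, conclusion).
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "positive_test"}, "flu_confirmed"),
]

def backward_chain(goal, facts, rules):
    """Is the goal a known fact, or derivable from facts via the rules?"""
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(
                backward_chain(p, facts, rules) for p in premises):
            return True  # every premise of this rule is itself supported
    return False

# Why was flu confirmed? Because its premises trace back to known facts.
print(backward_chain("flu_confirmed",
                     {"fever", "cough", "positive_test"}, rules))  # True
```

Unlike forward chaining, the engine never derives facts it was not asked about: it explores only the conditions that could explain the given result, which suits diagnosis tasks.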
Expert System: User Interface
3. User Interface: provides interaction between the user of the ES
and the ES itself.

The user of the ES need not necessarily be an expert in
Artificial Intelligence.

The interface explains how the ES has arrived at a particular
recommendation. The explanation may take the following
forms:
 Natural language displayed on screen
 Verbal narration in natural language
167
Participants in the development of an Expert System

1. Expert: the success of an ES depends greatly on the
knowledge provided by human experts.
 These experts are persons who are specialized in that
specific domain.

2. Knowledge Engineer: the person who gathers the
knowledge from the domain experts and then codifies that
knowledge into the system according to its formalism.

3. End User: a particular person or a group of people who
may not be experts; they work with the expert system because they
need a solution or advice for their complex queries.
168