
Artificial Intelligence

Logical or Knowledge Based Agents


CS-412
Week-09-Fall 2024

17/11/2024 1:26 am
Knowledge and Reasoning
Humans are very good at acquiring new information by
combining raw knowledge and experience with reasoning.
AI slogan: “Knowledge is power” (or “Data is power”?)

Examples:
Medical diagnosis --- a physician diagnosing a patient
infers the disease based on knowledge acquired as a
student, from textbooks, and from prior cases.
Common sense knowledge / reasoning ---
everyday assumptions and inferences,
e.g., “lecture starts at four”: infer pm, not am;
when traveling, I assume there is some way to get from the
airport to the hotel.
Logical agents:
Agents with some representation of the complex knowledge about
the world / environment, which use inference to derive new
information from that knowledge combined with new inputs (e.g.
via perception).

Key issues:
1- Representation of knowledge
What form? Meaning / semantics?
2- Reasoning and inference processes
Efficiency.
Knowledge-base Agents
• Key issues:
• Representation of knowledge → knowledge base
• Reasoning processes → inference/reasoning

Knowledge base = set of sentences in a formal language(*)
representing facts about the world

(*) called a Knowledge Representation (KR) language

Knowledge bases
•Key aspects:
• How to add sentences to the knowledge base
• How to query the knowledge base

Both tasks may involve inference – i.e. how to derive new sentences from old
sentences

Logical agents – inference must obey the fundamental requirement that when
one asks a question of the knowledge base, the answer should follow from
what has been told to the knowledge base previously. (In other words, the
inference process should not “make things up”…)
Please read Section 7.2 to understand “The Wumpus World”.
A simple knowledge-based agent

• The agent must be able to:


• Represent states, actions, etc.
• Incorporate new percepts
• Update internal representations of the world
• Deduce hidden properties of the world
• Deduce appropriate actions
KR language candidate:
logical language (propositional / first-order) combined
with a logical inference mechanism

How close to human thought? (Mental models.)

What is “the language of thought”?

Why not use natural language (e.g. English)?

We want clear syntax & semantics (well-defined
meaning), and a mechanism to infer new information.
Solution: use a formal language.
Soundness and Completeness: A reasoning system is sound if
everything it proves is true (i.e., it only proves things that are
entailed), and it is complete if it can prove all things that are
true (i.e., it can derive all entailments).
Syntax & Semantics
• All sentences should be well formed
• The notion of syntax is clear enough in ordinary arithmetic:
“x+y = 4” is a well-formed sentence, whereas “x4y+ =” is not
• A logic must also define the semantics, or meaning, of sentences.
• The semantics defines the truth of each sentence with respect to each
possible world.
• For example, the semantics for arithmetic specifies that the
sentence:
“x+y = 4” is true in a world where x is 2 and y is 2, but false in a world where x
is 1 and y is 1.
• In standard logics, every sentence must be either true or false in each
possible world—there is no “in between.”
Model
• We use the term model in place of “possible world.”
• Models are mathematical abstractions, each of which has a fixed
truth value (true or false) for every relevant sentence.
• Informally, we may think of a possible world as, for example, having x
men and y women sitting at a table playing bridge, and the sentence
x+y=4 is true when there are four people in total.
• Formally, the possible models are just all possible assignments of
nonnegative integers to the variables x and y. Each such assignment
determines the truth of any sentence of arithmetic whose variables
are x and y. If a sentence α is true in model m, we say that m satisfies
α or sometimes m is a model of α. We use the notation M(α) to mean
the set of all models of α.
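The arithmetic example can be made concrete. The sketch below is my own illustration (the helper name `models_of` and the bound are not from the slides): it enumerates a bounded set of nonnegative assignments to x and y and collects M(α) for α = “x+y = 4”.

```python
from itertools import product

# Enumerate bounded models of an arithmetic sentence over x and y.
# The true model space (all nonnegative integers) is infinite, so we
# cap the search at `bound` purely for illustration.
def models_of(sentence, bound=5):
    """Return every assignment {x, y} in [0, bound) satisfying the sentence."""
    return [{"x": x, "y": y}
            for x, y in product(range(bound), repeat=2)
            if sentence(x, y)]

alpha = lambda x, y: x + y == 4   # the sentence "x + y = 4"

# Each assignment returned is a model m that satisfies alpha, i.e. m is in M(alpha).
print(models_of(alpha))   # five models: (0,4), (1,3), (2,2), (3,1), (4,0)
```

Each dictionary in the result plays the role of one model m, and the list as a whole is the (bounded) set M(α).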
Entailment
• Logical reasoning involves the relation of logical entailment between sentences
• The idea that a sentence follows logically from another sentence.
• In mathematical notation, we write α |= β to mean that the sentence α entails
the sentence β. The formal definition of entailment is this: α |= β if and only if, in
every model in which α is true, β is also true.
• Using the notation just introduced, we can write α |= β if and only if M(α) ⊆
M(β). (Note the direction of the ⊆ here: if α |= β, then α is a stronger assertion
than β: it rules out more possible worlds.)
• Entailment in the context of logic and artificial intelligence refers to the
relationship between statements (or propositions) where one statement logically
follows from another. In other words, if a set of statements (known as premises)
entails a conclusion, this means that the conclusion must be true whenever the
premises are true.
• Entailment vs. Implication: Entailment is a formal, logical relationship where the
conclusion must be true given the premises. Implication, on the other hand, can
be more general and is not necessarily tied to logical truth.
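Per the definition above, α |= β iff M(α) ⊆ M(β). The minimal sketch below is my own illustration (bounded to a finite domain so the models can be enumerated; `models` and `entails` are assumed helper names):

```python
from itertools import product

def models(sentence, bound=5):
    """All bounded assignments to (x, y) that satisfy the sentence."""
    return [{"x": x, "y": y}
            for x, y in product(range(bound), repeat=2)
            if sentence(x=x, y=y)]

def entails(alpha, beta, bound=5):
    """alpha |= beta iff every model of alpha is also a model of beta."""
    return all(beta(**m) for m in models(alpha, bound))

alpha = lambda x, y: x + y == 4 and x == y   # stronger: rules out more worlds
beta  = lambda x, y: x + y == 4              # weaker

print(entails(alpha, beta))   # True:  M(alpha) is a subset of M(beta)
print(entails(beta, alpha))   # False: e.g. x=0, y=4 satisfies beta but not alpha
```

Note the direction, matching the text: the stronger sentence α entails the weaker β, not the other way around.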
Propositional Logic: A Very Simple Logic
• The syntax of propositional logic defines the allowable sentences.
• Atomic sentences consist of a single proposition symbol.
• Each such symbol stands for a proposition that can be true or false.
• We use symbols that start with an uppercase letter and may contain other
letters or subscripts, for example: P, Q, R, W1,3 and FacingEast.
• The names are arbitrary but are often chosen to have some mnemonic
value—we use W1,3 to stand for the proposition that the wumpus is in
[1,3].
(Remember that symbols such as W1,3 are atomic, i.e., W, 1, and 3 are not
meaningful parts of the symbol.)
• There are two proposition symbols with fixed meanings: True is the
always-true proposition and False is the always-false proposition.
Propositional Logic: A Very Simple Logic
• Complex sentences are constructed from simpler sentences, using parentheses
and operators called logical connectives. There are five connectives in common
use:
1. ¬ (not). A sentence such as ¬W1,3 is called the negation of W1,3. A literal is
either an atomic sentence (a positive literal) or a negated atomic sentence (a
negative literal).
2. ∧ (and). A sentence whose main connective is ∧, such as W1,3 ∧P3,1, is
called a conjunction; its parts are the conjuncts. (The ∧ looks like an “A” for
“And.”)
3. ∨ (or). A sentence whose main connective is ∨, such as (W1,3 ∧P3,1)∨W2,2,
is a disjunction; its parts are disjuncts—in this example, (W1,3 ∧P3,1) and
W2,2.
4. ⇒ (implies). A sentence such as (W1,3 ∧P3,1) ⇒ ¬W2,2 is called an implication
(or conditional). Its premise or antecedent is (W1,3 ∧P3,1), and its conclusion
or consequent is ¬W2,2. Implications are also known as rules or if–then
statements. The implication symbol is sometimes written in other books as ⊃
or →.
5. ⇔ (if and only if). The sentence W1,3 ⇔ ¬W2,2 is called a biconditional.
Propositional Logic: Semantics
• The semantics defines the rules for determining the truth of a
sentence with respect to a particular model.
• In propositional logic, a model simply sets the truth value—true or
false—for every proposition symbol.
• For example, if the sentences in the knowledge base make use of the
proposition symbols P1,2, P2,2, and P3,1, then one possible model is
m1 = {P1,2 = false, P2,2 = false, P3,1 = true}. With three proposition
symbols, there are 2^3 = 8 possible models.
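For illustration (my own sketch; names like `P12` stand in for P1,2, since commas cannot appear in identifiers), the 2^3 = 8 models can be enumerated directly:

```python
from itertools import product

# All 2^3 = 8 truth assignments to the three proposition symbols.
symbols = ["P12", "P22", "P31"]
all_models = [dict(zip(symbols, values))
              for values in product([False, True], repeat=len(symbols))]

print(len(all_models))   # 8
m1 = {"P12": False, "P22": False, "P31": True}
print(m1 in all_models)  # True: the model m1 from the text is among them
```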
Propositional Logic: Semantics
For complex sentences, we have five rules, which hold for any
subsentences P and Q (atomic or complex) in any model m (here “iff”
means “if and only if”):
• ¬P is true iff P is false in m.
• P∧Q is true iff both P and Q are true in m.
• P∨Q is true iff either P or Q is true in m.
• P ⇒ Q is true unless P is true and Q is false in m.
• P ⇔ Q is true iff P and Q are both true or both false in m.
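These five rules translate directly into a recursive evaluator. The sketch below is my own (the tuple encoding of sentences is an assumed convention, not from the slides):

```python
# Evaluate a propositional sentence against a model m (dict: symbol -> bool).
# Sentences are atoms ("P") or nested tuples such as ("=>", "P", ("not", "Q")).
def truth(sentence, m):
    if isinstance(sentence, str):               # atomic proposition symbol
        return m[sentence]
    op, *args = sentence
    if op == "not":
        return not truth(args[0], m)
    if op == "and":
        return truth(args[0], m) and truth(args[1], m)
    if op == "or":
        return truth(args[0], m) or truth(args[1], m)
    if op == "=>":   # true unless premise is true and conclusion is false
        return (not truth(args[0], m)) or truth(args[1], m)
    if op == "<=>":  # true iff both sides have the same truth value
        return truth(args[0], m) == truth(args[1], m)
    raise ValueError(f"unknown connective: {op}")

m = {"P": True, "Q": False}
print(truth(("=>", "P", "Q"), m))              # False: P true, Q false
print(truth(("<=>", "P", ("not", "Q")), m))    # True: both sides are true
```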
Illustrative example: Wumpus World
•Performance measure (Somewhat whimsical!)
• gold +1000,
• death -1000
(falling into a pit or being eaten by the wumpus)
• -1 per step, -10 for using the arrow
•Environment
• Rooms / squares connected by doors.
• Squares adjacent to wumpus are smelly
• Squares adjacent to pit are breezy
• Glitter iff gold is in the same square
• Shooting kills wumpus if you are facing it
• Shooting uses up the only arrow
• Grabbing picks up gold if in same square
• Releasing drops the gold in same square
• Randomly generated at start of game. Wumpus only senses current room.
•Sensors: Stench, Breeze, Glitter, Bump, Scream [perceptual inputs]
•Actuators: Left turn, Right turn, Forward, Grab, Release, Shoot
Wumpus world characterization
• Fully observable? No – only local perception
• Deterministic? Yes – outcomes exactly specified
• Static? Yes – Wumpus and pits do not move
• Discrete? Yes
• Single-agent? Yes – the Wumpus is essentially a “natural
feature.”
Exploring a wumpus world
The knowledge base of the agent
consists of the rules of the
Wumpus world plus the percept
“nothing” in [1,1]

Boolean percept feature values: <0, 0, 0, 0, 0>
None, none, none, none, none
(Stench, Breeze, Glitter, Bump, Scream)

[Figure: world “known” to agent at time = 0.]

T=0: The KB of the agent consists of
the rules of the Wumpus world plus
the percept “nothing” in [1,1]:
none, none, none, none, none
(Stench, Breeze, Glitter, Bump, Scream).
By inference, the agent’s knowledge
base also has the information that
[2,1] and [1,2] are okay.
Added as propositions.
Further exploration
T=0 → T=1

[Figure: agent at [2,1] (A/B), [1,1] visited (V), possible pits (P?) in [2,2] and [3,1]]

Percepts at T=0: none, none, none, none, none
Percepts at T=1: none, breeze, none, none, none
(Stench, Breeze, Glitter, Bump, Scream)
A – agent, V – visited, B – breeze

@ T = 1: What follows?
Where next? Pit(2,2) or Pit(3,1)?
T=3

[Figure: 4×4 grid — stench (S) percepts, Wumpus (W) inferred in [1,3], pit (P) inferred in [3,1], possible pits (P?) elsewhere]

Percepts: stench, none, none, none, none
(Stench, Breeze, Glitter, Bump, Scream)
Where is the Wumpus?

Wumpus cannot be in (1,1) or in (2,2) (why?) ⇒ Wumpus is in (1,3).

No breeze in (1,2) ⇒ no pit in (2,2); but we know there is a
pit in (2,2) or (3,1) ⇒ pit in (3,1).
We reasoned about the possible states
the Wumpus world can be in, given our
percepts and our knowledge of the rules
of the Wumpus world.
I.e., the content of KB at T=3.
What follows is what holds true in all those worlds that satisfy what is known at
that time T=3 about the particular Wumpus world we are in.

Example property: P_in_(3,1)

Models(KB) ⊆ Models(P_in_(3,1))

Essence of logical reasoning:
Given all we know, Pit_in_(3,1) holds.
(“The world cannot be different.”)
Formally: entailment.
Knowledge Base (KB) in the Wumpus World
Rules of the wumpus world + new percepts

• Situation after detecting nothing in [1,1],
moving right, breeze in [2,1]. I.e. T=1.

• Consider possible models for KB at T=1 with respect to
the cells (1,2), (2,2) and (3,1), with respect to
the existence or non-existence of pits.

• 3 Boolean choices ⇒
• 8 possible interpretations
• (enumerate all the models, or
• “possible worlds”, wrt pit location)
Is KB consistent with all 8 possible worlds?
Some worlds violate the KB
(they are inconsistent with what we know).

• KB = Wumpus-world rules + observations (T=1)

Q: Why does a world violate KB?

Entailment in Wumpus World
So, KB defines
all worlds that
we hold possible.
Queries: we want to know the properties of those worlds.
That’s how the semantics of logical entailment is defined.

Models of the KB and α1


Note: α1
holds in more
models than KB.
That’s OK, but we
don’t care about
those worlds.

• KB = Wumpus-world rules + observations


• α1 = "[1,2] has no pit", KB ╞ α1
• In every model in which KB is true, α1 is True (proved by “model checking”)
Wumpus models
• KB = wumpus-world rules + observations
• α2 = "[2,2] has no pit"; this is only true in some
• of the models for which KB is true, therefore KB ⊭ α2
• (KB does not entail α2).
• Model Checking

Models of α2

A model of KB where α2 does NOT hold!


Entailment via
“Model Checking”
• Inference by model checking:
• we enumerate all the KB models and check whether α1 and α2 are
• true in all of them (which implies that we can only use it
• when we have a finite number of models).

• I.e., using the semantics directly.

Models(KB) ⊆ Models(α)
Example: More formal

[Figure: agent at [2,1] (A/B), [1,1] visited (V), possible pits (P?) in [2,2] and [3,1]]

Percepts at T=0: none, none, none, none, none
Percepts at T=1: none, breeze, none, none, none
(Stench, Breeze, Glitter, Bump, Scream)
A – agent, V – visited, B – breeze

How do we actually encode background
knowledge and percepts in a formal language?
Wumpus World KB
• Define propositions:
• Let Pi,j be true if there is a pit in [i, j].
• Let Bi,j be true if there is a breeze in [i, j].

Sentence 1 (R1): ¬P1,1 [Given.]
Sentence 2 (R2): ¬B1,1 [Observation T = 0.]
Sentence 3 (R3): B2,1 [Observation T = 1.]

• “Pits cause breezes in adjacent squares”:
Sentence 4 (R4): B1,1 ⇔ (P1,2 ∨ P2,1)
Sentence 5 (R5): B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
etc.
• Notes: (1) one such statement about breeze for each square;
• (2) similar statements about the Wumpus and stench,
and gold and glitter. (Need more proposition
letters.)
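With R1–R5 written down, entailment can be checked by brute-force model checking, as in the earlier slides. The sketch below is my own (symbol names like `P11` abbreviate P1,1, and `entails` is an assumed helper, not from the slides):

```python
from itertools import product

SYMBOLS = ["P11", "P12", "P21", "P22", "P31", "B11", "B21"]

def kb(m):
    """R1..R5 from the slides, evaluated in model m."""
    r1 = not m["P11"]                                    # R1: not P1,1
    r2 = not m["B11"]                                    # R2: not B1,1
    r3 = m["B21"]                                        # R3: B2,1
    r4 = m["B11"] == (m["P12"] or m["P21"])              # R4: B1,1 iff (P1,2 or P2,1)
    r5 = m["B21"] == (m["P11"] or m["P22"] or m["P31"])  # R5: B2,1 iff (P1,1 or P2,2 or P3,1)
    return r1 and r2 and r3 and r4 and r5

def entails(query):
    """KB |= query iff query holds in every model of the KB."""
    for values in product([False, True], repeat=len(SYMBOLS)):
        m = dict(zip(SYMBOLS, values))
        if kb(m) and not query(m):
            return False
    return True

print(entails(lambda m: not m["P12"]))  # True:  alpha1, "[1,2] has no pit"
print(entails(lambda m: not m["P22"]))  # False: alpha2 is not entailed at T=1
```

Enumerating all 2^7 assignments mirrors the “model checking” slide; for a realistically sized KB one would hand the clauses to a SAT solver instead.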
Some example inferences (Section 7.7.1 R&N)
Actions and inputs up to time 6. Note: includes turns!

Define “OK”: a square is OK if it has no pit and no live Wumpus.

These inferences run in milliseconds with a modern SAT solver.
