Knowledge Representation

Knowledge Representation (KR) is a way for computers to "know" things and make decisions based on that knowledge. Just as humans use their knowledge to understand and act in the world, AI systems need a method to understand information and use it to make smart choices.
It is concerned with how to represent and structure information (knowledge) so that machines can use it for reasoning, problem-solving, and decision-making.

Knowledge = Information + Rules

What is Knowledge Representation?

Knowledge Representation and Reasoning (KRR) is a field of Artificial Intelligence that


focuses on how AI systems can think and act intelligently. It involves:

1.​ Storing information about the world in a way that computers can understand.
2.​ Using that information to solve real-life problems, like diagnosing a medical issue
or chatting with people in a natural way.

Instead of just storing data in a database, Knowledge Representation helps AI learn from
experiences and data, enabling it to make better decisions in the future.

Example of Knowledge Representation

Suppose we have an AI assistant in a hospital:

●​ Knowledge stored: Symptoms of diseases, patient history, and treatment options.


●​ Use of knowledge: When a patient describes their symptoms, the AI can match these
to known diseases and suggest possible diagnoses.

Types of knowledge

Following are the various types of knowledge:


1.​ Declarative Knowledge
o​ Definition: This is knowledge about something. It includes facts, concepts,
and descriptions that the AI can recall or use for reference. This type of
knowledge is typically straightforward and is often expressed in statements.
o​ Example: Imagine an AI assistant in a geography app:
▪​ Fact: "The capital of France is Paris."
▪​ Concept: "Mountains are large landforms that rise above the
surrounding land."
▪​ Object: "A smartphone is an electronic device with a touchscreen."​
In this case, the AI stores and recalls factual information about
locations, natural features, or objects.
2.​ Procedural Knowledge
o​ Definition: This is knowledge about how to do something. It involves rules,
processes, or instructions that can be directly applied to a task. Procedural
knowledge is action-oriented.
o​ Example: Consider a robot programmed to bake cookies:
▪​ Steps: The robot follows a series of actions — “Preheat oven to
350°F,” “Mix flour, sugar, and butter,” “Shape dough into balls and
place on baking sheet,” “Bake for 10 minutes.”​
In this case, the AI has procedural knowledge about baking, which it
can apply step-by-step to produce cookies.
3.​ Meta-Knowledge
o​ Definition: This is knowledge about other types of knowledge.
Meta-knowledge allows AI to decide which type of knowledge or method is
best suited to a situation.
o​ Example: Imagine an AI that can switch between search strategies:
▪​ If it needs to find the quickest route in a maze, it may decide to use the
"Shortest Path Algorithm."
▪​ If exploring all possible solutions in a large puzzle, it may choose
“Depth-First Search” for exhaustive checking.​
Here, meta-knowledge helps the AI choose the right approach for each
problem by understanding what it knows about these search methods.
4.​ Heuristic Knowledge
o​ Definition: Heuristic knowledge is based on experience or "rules of thumb"
developed over time. This type of knowledge isn’t guaranteed to be perfect but
is often useful for quick decision-making.
o​ Example: In a chess game, an AI might use the heuristic “Control the center of
the board for a strong position”:
▪​ By following this general rule, the AI can make moves to control the
center of the chessboard even if it hasn’t calculated the entire game
outcome.​
Heuristic knowledge helps AI make efficient decisions in complex
situations where trying every possible outcome would take too long.
5.​ Structural Knowledge
o​ Definition: Structural knowledge defines relationships between concepts,
showing how things are organized or related. This knowledge helps AI
understand connections, like how items group together or relate as parts of a
whole.
o​ Example: In an AI for biology education:
▪​ Relationship: "A plant has roots, stems, and leaves."
▪ Grouping: "Mammals include humans, dogs, and elephants."
Structural knowledge allows the AI to understand that plants have
specific parts and that certain animals belong to a larger group, like
mammals, which helps organize information for better understanding
and problem-solving.

These types of knowledge make AI more versatile by enabling it to handle facts, perform
actions, choose methods, rely on useful rules of thumb, and understand relationships between
concepts.

The relation between knowledge and intelligence:

Knowledge of the real world plays a vital role in intelligence, and the same is true for creating artificial intelligence. Knowledge plays an important role in demonstrating intelligent behavior in AI agents. An agent can only act accurately on some input when it has knowledge or experience about that input.

Suppose you meet a person who is speaking a language you do not know; you will not be able to respond meaningfully. The same applies to the intelligent behavior of agents.

As the diagram below shows, there is a decision maker that acts by sensing the environment and using knowledge. If the knowledge component is not present, it cannot display intelligent behavior.

Approaches to Knowledge Representation


i. Simple Relational Knowledge

●​ Facts are stored in a table format, showing relationships between entities. It’s
commonly used in databases with limited inference capabilities.
●​ Example:

Player  | Weight | Age
Player1 | 65     | 23
Player2 | 58     | 18
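As a rough, hypothetical sketch (not part of the original notes), the player table above could be stored as plain rows and queried directly; note that only direct lookups are possible, with little inference:

```python
# A minimal sketch of simple relational knowledge: facts stored as rows.
players = [
    {"player": "Player1", "weight": 65, "age": 23},
    {"player": "Player2", "weight": 58, "age": 18},
]

# Only direct lookups are possible; no inference beyond matching rows.
under_21 = [row["player"] for row in players if row["age"] < 21]
print(under_21)  # ['Player2']
```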

ii. Inheritable Knowledge

●​ Data is organized hierarchically, with subclasses inheriting properties from parent


classes. It helps in categorizing and reusing attributes.
●​ Example:
o​ Animal → Can Fly
o​ Bird → Inherits Can Fly
o​ Penguin → Cannot Fly (overrides inheritance)
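A minimal Python sketch of this idea, using class inheritance to mirror the Animal/Bird/Penguin example (the attribute name can_fly is an assumption made here):

```python
# Inheritable knowledge: subclasses inherit properties from parent classes,
# and a subclass may override an inherited value.
class Animal:
    can_fly = True            # general default stated in the example above

class Bird(Animal):
    pass                      # Bird inherits can_fly from Animal

class Penguin(Bird):
    can_fly = False           # Penguin overrides the inherited property

print(Bird.can_fly)     # True  (inherited)
print(Penguin.can_fly)  # False (overrides inheritance)
```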

iii. Inferential Knowledge

●​ This approach uses logic to derive new facts from existing knowledge. It enables
reasoning based on rules and premises.
●​ Example:
o​ "Marcus is a man" and "All men are mortal" → Infer "Marcus is mortal."
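One possible toy illustration, assuming facts and rules are stored as plain strings and the rule "All men are mortal" is already instantiated for Marcus:

```python
# Inferential knowledge: derive a new fact from a rule and an existing fact.
facts = {"man(Marcus)"}
rules = [("man(Marcus)", "mortal(Marcus)")]  # "All men are mortal", for Marcus

for premise, conclusion in rules:
    if premise in facts:
        facts.add(conclusion)            # apply the rule to infer a new fact

print("mortal(Marcus)" in facts)  # True
```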

iv. Procedural Knowledge

●​ Describes step-by-step procedures or "how-to" knowledge to perform tasks. It is often


encoded in rules or routines.
●​ Example:
o​ If noun phrase → process article, adjective, noun.
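A minimal sketch of such a routine, assuming a fixed three-word noun phrase made of an article, an adjective, and a noun:

```python
# Procedural ("how-to") knowledge encoded as an executable routine:
# if the input is a noun phrase, process article, adjective, then noun.
def process_noun_phrase(words):
    article, adjective, noun = words      # assumes a 3-word noun phrase
    return {"article": article, "adjective": adjective, "noun": noun}

print(process_noun_phrase(["the", "red", "car"]))
```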

ISSUES IN KNOWLEDGE REPRESENTATION

●​  Important Attributes: Deciding which attributes to represent is crucial for


accuracy. For example, representing a car might require attributes like "engine type"
and "color," but not "tire pressure" unless relevant.
●​  Relationship Among Attributes: Defining how attributes are connected is key.
For instance, "dog" → "has fur" and "dog" → "can bark" represent different
relationships between attributes.
●​  Choosing Granularity: Finding the right level of detail is important.
Representing a book might include "author" (coarse) or "author birth year" (fine),
depending on the need.
●​  Set of Objects: Identifying relevant objects is vital. For example, in an
educational knowledge base, objects like "student" and "course" are essential, while
"lunchbox" may not be.

Propositional Logic:-
Propositional logic is a type of logic that deals with statements, called propositions. These
statements can only be either true or false.
For example:

●​ "It is raining." (True or False)


●​ "2 + 2 = 5." (False)

Alphabet set:-

In propositional logic, an alphabet refers to the collection of symbols used to form logical
expressions. These symbols can be divided into the following sets:

1. Set of Variables or Propositional Symbols


●​ What it is:​
This is the set of symbols that represent propositions or statements.​
Each variable (e.g., P,Q,R,…) is a placeholder for a proposition that can be either true
(T) or false (F).
●​ Examples:
o​ P: "It is raining."
o​ Q: "It is cold."
o​ R: "The sun is shining."
●​ Usage:​
These variables form the basic building blocks for more complex logical expressions.

2. Logical Constant
●​ What it is:​
Logical constants are fixed values that represent the truth values True (T) and False
(F).
●​ Examples:
o​ T: Always true.
o​ F: Always false.
●​ Usage:​
Logical constants are used as default values in expressions, such as:
o​ P∨T=T (A proposition OR true is always true).
o​ P∧F=F (A proposition AND false is always false).

3. Parentheses
● What it is:
Parentheses ( and ) are used to group logical expressions and ensure the correct order
of operations.
●​ Examples:​
Consider the expression (P∨Q)∧R:
o​ The parentheses ensure that P∨Q is evaluated first, before applying ∧R.
●​ Usage:​
Parentheses prevent ambiguity in complex expressions and follow the same priority
rules as in mathematics.

4. Set of Logical Operators


●​ What it is:​
Logical operators define how propositions combine and interact. These operators
correspond to common logical operations.
●​ Common Operators:
1.​ NOT (¬): Negation of a proposition.​
Example: ¬P (If P is true, ¬P is false).
2.​ AND (∧): True if both propositions are true.​
Example: P∧Q (True only if P and Q are both true).
3.​ OR (∨): True if at least one proposition is true.​
Example: P∨Q (True if P or Q is true).
4.​ IMPLIES (→): True if the first proposition implies the second.​
Example: P→Q (False only if P is true and Q is false).
5.​ BICONDITIONAL (↔): True if both propositions have the same truth value.​
Example: P↔Q (True if both are true or both are false).

P↔Q=(P⟹Q)∧(Q⟹P).​
This means:

●​ If P is true, Q must also be true.


●​ If Q is true, P must also be true.
●​ Usage:​
Logical operators are used to combine propositions into more complex statements.
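These operators can be sketched over Boolean values as follows (the helper functions below are hypothetical, not a standard library API):

```python
# Propositional operators over Boolean values (a minimal sketch).
def implies(p, q):
    return (not p) or q          # P → Q is false only when P is true and Q is false

def biconditional(p, q):
    return implies(p, q) and implies(q, p)   # P ↔ Q = (P → Q) ∧ (Q → P)

P, Q = True, False
print(not P)                 # NOT  (¬P)
print(P and Q)               # AND  (P ∧ Q)
print(P or Q)                # OR   (P ∨ Q)
print(implies(P, Q))         # IMPLIES (P → Q) -> False
print(biconditional(P, Q))   # BICONDITIONAL (P ↔ Q) -> False
```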
Concepts to analyze logical statements

Atomic propositions and compound propositions are the key concepts used to form and analyze logical statements.

1. Atomic Proposition

●​ Definition: An atomic proposition is a simple, indivisible statement that cannot be


broken down into smaller parts. It is either true or false, without involving any logical
connectives (e.g., AND, OR, NOT).
●​ Examples:
o​ "The sky is blue."
o​ "2 + 2 = 4."

Atomic propositions are the building blocks of logic.


2. Compound Proposition

●​ Definition: A compound proposition combines two or more atomic propositions


using logical connectives such as AND (∧), OR (∨), NOT (¬), IMPLIES (→), or
BICONDITIONAL (↔).
●​ Examples:
o​ "The sky is blue AND grass is green." (Logical form: P∧Q, where P: "The sky
is blue," and Q: "Grass is green.")
o​ "If it rains, then the ground will be wet." (Logical form: P→Q, where P: "It
rains," and Q: "The ground will be wet.")
Predicate Logic (First-Order Logic)

Predicate Logic is an extension of propositional logic that allows the expression of


statements involving variables and predicates. While propositional logic is limited to
statements that are either true or false, predicate logic introduces the concept of quantifiers,
predicates, and variables, enabling the expression of more complex and detailed statements.

Key Concepts in Predicate Logic:

1.​ Predicate: A predicate is a function that takes an argument (or multiple arguments)
and returns a truth value. It is like a statement that becomes true or false depending on
the values of its variables.
o​ Example:
▪​ P(x) could represent "x is a student".
▪​ If x = John, then P(John) becomes "John is a student".
2.​ Variables: These are symbols (like x, y, z) that represent an object in the domain. The
truth of a predicate depends on the value assigned to these variables.
3.​ Domain: The set of possible values that variables can take is called the domain (also
called the universe of discourse).

Quantifiers in Predicate Logic:

Quantifiers are used to express the extent to which a predicate or statement applies to the
variables in question.

1.​ Universal Quantifier (∀):


o​ Denoted by ∀, it means "for all" or "for every". It is used to indicate that a
predicate is true for all possible values of a variable in a domain.
o​ Example:
▪​ ∀x P(x) means "For all x, P(x) is true", or "Everyone is a student."
o​ In a domain of people, ∀x P(x) could be "Every person is a student".
2.​ Existential Quantifier (∃):
o​ Denoted by ∃, it means "there exists" or "there is at least one". It indicates that
there is at least one element in the domain for which the predicate is true.
o​ Example:
▪​ ∃x P(x) means "There exists at least one x such that P(x) is true", or
"There is at least one student".
o​ In a domain of people, ∃x P(x) could mean "There is at least one student".
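A rough way to picture the quantifiers is to evaluate a predicate over a small finite domain; the domain and the is_student predicate below are hypothetical:

```python
# Quantifiers over a small, hypothetical domain of people.
domain = ["John", "Mary", "Alice"]
students = {"John", "Mary"}          # Alice is not a student in this toy domain

def is_student(x):                   # the predicate P(x): "x is a student"
    return x in students

print(all(is_student(x) for x in domain))  # ∀x P(x) -> False (Alice is not a student)
print(any(is_student(x) for x in domain))  # ∃x P(x) -> True  (at least one student exists)
```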

Inference in Propositional Logic

Definition:

Inference refers to the process of deriving new information or conclusions from existing facts
or evidence. In Artificial Intelligence (AI), inference helps the system generate logical
conclusions from known data.

Inference Rules:

Inference rules are logical guidelines or templates used to derive conclusions or proofs. In AI,
these rules are applied to create valid arguments or deduce information based on given
premises.

The basic concept of inference involves logical implications among propositions. Here are
key terms and examples related to inference rules:

●​ Implication (P → Q): This means "if P is true, then Q is true."


●​ Converse (Q → P): This switches the positions of P and Q in the implication.
●​ Contrapositive (¬Q → ¬P): This negates both P and Q in the converse.
●​ Inverse (¬P → ¬Q): This negates both P and Q in the original implication.

Types of Inference Rules:

1.​ Modus Ponens (Affirming the Antecedent):


o​ Rule: If "P → Q" and "P" are both true, then "Q" must also be true.
o​ Example:
▪​ Statement-1: "If I am sleepy, then I go to bed" (P → Q).
▪​ Statement-2: "I am sleepy" (P).
▪​ Conclusion: "I go to bed" (Q).
o​ Truth Table Proof:

P | Q | P → Q | Modus Ponens
T | T | T | applies: conclude Q (true)
T | F | F | does not apply (P → Q is false)
F | T | T | does not apply (P is false)
F | F | T | does not apply (P is false)

2.​ Modus Tollens (Denying the Consequent):


o​ Rule: If "P → Q" is true, and "¬Q" (not Q) is true, then "¬P" (not P) must be
true.
o​ Example:
▪​ Statement-1: "If I am sleepy, then I go to bed" (P → Q).
▪​ Statement-2: "I do not go to bed" (¬Q).
▪​ Conclusion: "I am not sleepy" (¬P).
o​ Truth Table Proof:

P | Q | P → Q | ¬Q | ¬P | Modus Tollens
T | T | T | F | F | does not apply (¬Q is false)
T | F | F | T | F | does not apply (P → Q is false)
F | T | T | F | T | does not apply (¬Q is false)
F | F | T | T | T | applies: conclude ¬P (true)

With concrete propositions P: "It is raining" and Q: "The ground is wet":

P | Q | P → Q | ¬Q | ¬P | Modus Tollens applies?
T (It is raining) | T (The ground is wet) | T | F | F | No (¬Q is false)
T (It is raining) | F (The ground is not wet) | F | T | F | No (P → Q is false)
F (It is not raining) | T (The ground is wet) | T | F | T | No (¬Q is false)
F (It is not raining) | F (The ground is not wet) | T | T | T | Yes (conclude ¬P = True)

3.​ Hypothetical Syllogism:


o​ Rule: If "P → Q" and "Q → R" are true, then "P → R" must be true.
o​ Example:
▪​ Statement-1: "If you have my home key, then you can unlock my
home" (P → Q).
▪​ Statement-2: "If you can unlock my home, then you can take my
money" (Q → R).
▪​ Conclusion: "If you have my home key, then you can take my money"
(P → R).
4.​ Disjunctive Syllogism:
o​ Rule: If "P ∨ Q" is true (either P or Q is true), and "¬P" (not P) is true, then Q
must be true.
o​ Example:
▪​ Statement-1: "Today is Sunday or Monday" (P ∨ Q).
▪​ Statement-2: "Today is not Sunday" (¬P).
▪​ Conclusion: "Today is Monday" (Q).
5.​ Addition:
o​ Rule: If P is true, then "P ∨ Q" (P or Q) is also true.
o​ Example:
▪​ Statement: "I have vanilla ice cream" (P).
▪​ Conclusion: "I have vanilla or chocolate ice cream" (P ∨ Q).
6.​ Simplification:
o​ Rule: If "P ∧ Q" (P and Q) is true, then "P" or "Q" is true individually.
o​ Example:
▪​ Statement: "I am happy and I am tired" (P ∧ Q).
▪​ Conclusion: "I am happy" (P) or "I am tired" (Q).
7.​ Resolution:
o​ Rule: If "P ∨ Q" (P or Q) is true, and "¬P ∧ R" (not P and R) is true, then "Q ∨
R" must be true.
o​ Example:
▪​ Statement-1: "It is raining or snowing" (P ∨ Q).
▪​ Statement-2: "It is not raining and it is cold" (¬P ∧ R).
▪​ Conclusion: "It is snowing or it is cold" (Q ∨ R).
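A toy sketch of the first two rules, treating the premises simply as truth values (illustrative only, not a general theorem prover):

```python
# Modus ponens and modus tollens as simple checks over truth values.
def modus_ponens(p_implies_q, p):
    """If P → Q and P both hold, conclude Q."""
    if p_implies_q and p:
        return True          # Q follows
    return None              # rule does not apply

def modus_tollens(p_implies_q, not_q):
    """If P → Q holds and Q is false, conclude ¬P."""
    if p_implies_q and not_q:
        return True          # ¬P follows
    return None              # rule does not apply

print(modus_ponens(True, True))    # True: "I go to bed"
print(modus_tollens(True, True))   # True: "I am not sleepy"
```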

Unification and Resolution in Artificial Intelligence


In Artificial Intelligence (AI), unification and resolution are fundamental concepts used in
automated reasoning, especially in the context of logic and inference. These methods are used
to manipulate logical expressions, and they play an important role in search algorithms,
theorem proving, and knowledge representation.
Unification:
Unification is the process of finding a substitution that makes two logical expressions
identical. It is a fundamental operation in logic programming, particularly in systems like
Prolog, and is used to match terms or predicates.
Key Points:

●​ Unification is used to match two expressions or predicates.


●​ It involves finding a substitution (replacement of variables with terms) that makes two
expressions identical.
●​ A substitution is a set of variable-to-term assignments.

Process:

1.​ Match two terms or expressions.


2.​ Find a substitution (if possible) such that when the substitution is applied, both
terms become identical.

Example of Unification:
Consider two expressions:

1.​ P(X, Y)
2.​ P(a, b)

To unify these two expressions, we look for a substitution that makes both expressions
identical.
●​ Here, X can be unified with a and Y can be unified with b.
●​ So, the unification result would be the substitution:
o​ {X -> a, Y -> b}

Now both expressions P(a, b) and P(a, b) are identical.


Example 2:
Unifying father(X, Y) and father(a, b):

●​ X can be unified with a and Y can be unified with b.


●​ The substitution would be {X -> a, Y -> b}.

If we apply this substitution to father(X, Y), we get father(a, b), which matches the second
term.
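A minimal unification sketch for flat terms such as P(X, Y) and P(a, b), assuming upper-case names are variables and lower-case names are constants (nested terms and variables on both sides are not handled):

```python
# A minimal unification sketch for flat terms like P(X, Y) and P(a, b).
def unify(term1, term2):
    name1, args1 = term1
    name2, args2 = term2
    if name1 != name2 or len(args1) != len(args2):
        return None                      # different predicates cannot unify
    substitution = {}
    for a, b in zip(args1, args2):
        if a.isupper():                  # a is a variable: bind it to b
            substitution[a] = b
        elif a != b:
            return None                  # two different constants: no unifier
    return substitution

print(unify(("P", ["X", "Y"]), ("P", ["a", "b"])))  # {'X': 'a', 'Y': 'b'}
```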
Resolution:
Resolution is a rule of inference used in propositional logic and first-order logic. It is a
method for deriving a conclusion from a set of premises by eliminating variables through
unification.
In logic, resolution is typically used in conjunctive normal form (CNF), where all logical
expressions are expressed as a conjunction of disjunctions.
Key Points:

●​ Resolution works on clauses (disjunctions of literals) and is used to deduce new


clauses.
●​ It uses the unification process to find a common literal that can be resolved.
●​ The idea is to eliminate complementary literals (i.e., a literal and its negation) to
derive new information.

Process of Resolution:

1.​ Identify complementary literals in two clauses.


2.​ Unify the two clauses (if necessary).
3.​ Resolve the two clauses by removing the complementary literals.
4.​ The result is a new clause derived from the original clauses.

Example of Resolution:
Let's consider two clauses:

1.​ P(X) ∨ Q(Y) (Clause 1)


2.​ ¬Q(a) ∨ R(X) (Clause 2)

We want to resolve these clauses.

●​ We notice that Q(Y) and ¬Q(a) are complementary literals.


●​ We can unify Y with a (i.e., substitute Y -> a).

After unification, the two clauses become:


1.​ P(X) ∨ Q(a) (after substituting Y -> a)
2.​ ¬Q(a) ∨ R(X)

Now, we can resolve the complementary literals Q(a) and ¬Q(a):

●​ Remove Q(a) and ¬Q(a) from both clauses.

This results in the new clause:

●​ P(X) ∨ R(X)

So, the resolved clause is P(X) ∨ R(X).
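The same resolution step can be sketched with clauses represented as sets of literal strings, after the substitution {Y -> a} has already been applied (illustrative only):

```python
# Resolving P(X) ∨ Q(a)  with  ¬Q(a) ∨ R(X).
clause1 = {"P(X)", "Q(a)"}
clause2 = {"¬Q(a)", "R(X)"}

def resolve(c1, c2):
    for literal in c1:
        complement = "¬" + literal
        if complement in c2:
            # drop the complementary pair and join what remains
            return (c1 - {literal}) | (c2 - {complement})
    return None

print(resolve(clause1, clause2))  # {'P(X)', 'R(X)'}
```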


Semantic nets
Semantic networks are a powerful tool in the field of artificial intelligence (AI), used to
represent knowledge and understand relationships between different concepts. They are
graphical representations that connect nodes (representing concepts) with edges (representing
relationships). Semantic networks are widely used in natural language processing (NLP),
knowledge representation, and reasoning systems.
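One simple way to sketch such a network is as a list of labelled edges; the relation labels "is-a", "instance-of", and "has" below are hypothetical:

```python
# A semantic network as labelled edges: (node, relationship, node).
edges = [
    ("Dog", "is-a", "Mammal"),
    ("Mammal", "is-a", "Animal"),
    ("Rex", "instance-of", "Dog"),
    ("Rex", "has", "Brown Fur"),
]

# Follow "is-a" links to list everything a concept counts as.
def ancestors(concept):
    result = []
    for subject, relation, obj in edges:
        if subject == concept and relation == "is-a":
            result.append(obj)
            result.extend(ancestors(obj))
    return result

print(ancestors("Dog"))  # ['Mammal', 'Animal']
```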
1. Definitional Networks
Definitional networks are used to represent hierarchical relationships between concepts,
often seen in taxonomies or ontologies. These networks define a concept by relating it to
more general or more specific concepts.
Example:

●​ "Dog" is defined as a type of "Mammal," and "Mammal" is defined as a type of


"Animal."

In this case:

●​ "Dog" is a specific instance of the broader concept "Mammal."


●​ "Mammal" is a more general category that includes "Dog."

Key Purpose: Helps in categorizing and defining concepts based on generalization and
specialization.

2. Assertional Networks
Assertional networks represent specific facts or assertions about individual instances of
concepts. These networks describe properties or attributes related to particular entities.
Example:

●​ "Rex is a Dog."
●​ "Rex has Brown Fur."

In this case:
●​ "Rex" is an individual instance, and the assertion is that Rex is a member of the
concept "Dog."
●​ It also asserts that "Rex" has a particular property (brown fur).

Key Purpose: Used to describe specific attributes or facts about individual entities.

3. Implicational Networks
Implicational networks represent logical implications between concepts. They focus on how
certain relationships imply other knowledge, allowing AI systems to infer new facts from
existing ones.
Example:

●​ "All Dogs are Mammals."


●​ "Rex is a Dog."
●​ Implication: "Rex is a Mammal."

Here, the implicational network uses the relationship that "All Dogs are Mammals" to infer
that if "Rex is a Dog," then "Rex must also be a Mammal."
Key Purpose: Allows logical reasoning and the ability to infer new knowledge from existing
facts.

4. Executable Networks
Executable networks represent procedural knowledge. The relationships in these networks
include actions or sequences of events that can be executed or carried out by an AI system.
Example:

●​ "Add Water to Pot."


●​ "Boil Water."

In this case:

●​ The network represents a sequence of actions that could be performed to complete a


task (like cooking).
●​ An AI system could follow this sequence to carry out the steps, such as in an
automated cooking assistant.

Key Purpose: Helps in representing processes or procedures that can be performed by a


system.

5. Learning Networks
Learning networks are dynamic networks that evolve over time as the system learns new
information. They update their relationships and nodes based on new data or experiences,
allowing the AI system to adapt.
Example:

●​ A learning network may update its knowledge about "Dog" as it encounters new
breeds or traits.

As the system learns more about the concept of "Dog," it could include new attributes such as
specific breed types or behaviors.
Key Purpose: Used to represent evolving knowledge that changes with experience and new
data.

6. Hybrid Networks
Hybrid networks combine elements from two or more types of semantic networks, allowing
for a more complex and versatile representation of knowledge. These networks can
represent both the general structure of concepts and specific facts or relationships.
Example:

●​ A hybrid network might combine a definitional representation of "Dog" (as a type of


"Mammal") with an assertional representation that "Rex is a Dog" and "Rex has
Brown Fur."

Key Purpose: Provides a flexible and detailed representation of knowledge that can address
multiple needs (e.g., definitions and specific instances).
Frames in AI
Frames are a data structure used in Artificial Intelligence (AI) to represent knowledge about
the world. A frame is a collection of attributes or properties, called slots, that describe an
object or concept, along with the values these attributes can take. Frames are particularly
useful for representing structured knowledge, like how we think about objects in the world.
Frames are similar to objects in object-oriented programming (OOP), where each frame
represents a concept or object, and the slots represent its features or properties. This allows
the AI system to organize knowledge in a way that is easy to understand and manipulate.
Structure of a Frame:

●​ Frame Name: The name of the object or concept being described.


●​ Slots: Attributes or properties of the object.
●​ Slot Values: The values associated with those attributes (e.g., types, ranges, or default
values).

Example:
Imagine representing a "Car" in a frame:

●​ Frame Name: Car


●​ Slots:
o​ Color: Red
o​ Make: Toyota
o​ Model: Corolla
o​ Year: 2020
o​ Engine: 1.8L
o​ Fuel Type: Gasoline

In this example, the frame represents a "Car" object, and the slots describe various attributes
of that car.
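A frame can be sketched as a simple mapping from slots to slot values; the key names below are hypothetical:

```python
# A frame as a mapping from slots to slot values.
car_frame = {
    "frame_name": "Car",
    "color": "Red",
    "make": "Toyota",
    "model": "Corolla",
    "year": 2020,
    "engine": "1.8L",
    "fuel_type": "Gasoline",
}

print(car_frame["make"], car_frame["model"])  # Toyota Corolla
```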

Exception Frames
Exception frames are a type of frame used to represent special cases or exceptions to general
rules. They are used when there is a need to handle exceptions or special conditions that do
not conform to the default or usual behavior represented by the general frames.
Example:
Imagine a frame for a "Bird" object, which is typically used to represent general birds.

●​ Frame Name: Bird


●​ Slots:
o​ Can Fly: Yes
o​ Has Feathers: Yes
o​ Has Beak: Yes

However, an exception could be made for a penguin, which is a type of bird that cannot fly.
In this case, we create an exception frame for the penguin:

●​ Frame Name: Penguin (Exception to Bird)


●​ Slots:
o​ Can Fly: No (Override the default behavior)
o​ Has Feathers: Yes
o​ Has Beak: Yes

Here, the "Penguin" frame represents an exception to the general "Bird" frame, where the
property Can Fly is overridden by the specific case of the penguin.
Key Purpose of Exception Frames: To handle cases where the normal rules do not apply,
allowing AI systems to account for special conditions or exceptions to general knowledge.

Default Frames
Default frames are used to represent general assumptions or typical behavior that can be
applied unless explicitly overridden. They provide default values for attributes or properties
that can be assumed to be true in the absence of specific information.
Example:
Let’s consider a "Person" frame:

●​ Frame Name: Person


●​ Slots:
o​ Has Hair: Yes (default assumption that people have hair)
o​ Has Eyes: Yes
o​ Has Ears: Yes

However, there may be a situation where we need to represent a Person with no hair, such as
a bald person. In this case, the default assumption that the person has hair would be
overridden:

●​ Frame Name: Bald Person (Override of Person)


●​ Slots:
o​ Has Hair: No (Override the default)
o​ Has Eyes: Yes
o​ Has Ears: Yes

In this case, the default frame for "Person" assumes the presence of hair, but the "Bald
Person" frame provides a specific override for the Has Hair slot.
Key Purpose of Default Frames: To provide initial or default values that can be applied
unless there is specific information to override them. It helps the system avoid redundancy
when many objects share similar attributes.
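A minimal sketch of default values with overrides, where a specific frame stores only the slots that differ from the general frame (slot names are hypothetical):

```python
# Defaults with overrides: a specific frame stores only what differs
# from the general frame.
person_defaults = {"has_hair": True, "has_eyes": True, "has_ears": True}

bald_person = {**person_defaults, "has_hair": False}   # override one slot

print(person_defaults["has_hair"])  # True  (default assumption)
print(bald_person["has_hair"])      # False (overridden)
```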

Summary of Frames, Exceptions, and Default Frames

Type | Description | Example
Frames | Represent knowledge in a structured way with attributes (slots) and values. | "Car" frame: Color, Make, Model, Year, Engine, etc.
Exception Frames | Handle special cases where the normal rules don't apply by overriding default behavior. | "Penguin" frame overrides the "Can Fly" property of the "Bird" frame.
Default Frames | Provide general assumptions or typical behaviors that can be overridden. | "Person" frame assumes Has Hair: Yes unless overridden for bald persons.

Inconsistent and Incomplete Knowledge:
Truth Maintenance Systems
A truth maintenance system (TMS) in artificial intelligence is designed to manage and
maintain the consistency of beliefs and knowledge within a reasoning system. It keeps track
of dependencies between propositions, allowing a system to retract beliefs when the
supporting evidence changes, thereby ensuring that the knowledge base remains consistent.​

Concept of Uncertainty​
Uncertainty in AI refers to the difficulty in accurately predicting outcomes due to incomplete
knowledge, variability in data, or inherent randomness in systems. It can stem from various
sources such as measurement errors, model limitations, and unpredictability in the
environment. Handling uncertainty is crucial for making reliable predictions and decisions in
AI applications.

Reasoning refers to the process of drawing conclusions from available information or facts.
In Artificial Intelligence (AI), reasoning is the mechanism that allows machines to simulate
human cognitive abilities, such as problem-solving, decision-making, and inference
generation. AI systems need to reason in order to understand the world, make decisions,
and predict outcomes based on evidence or knowledge.
Reasoning is often categorized into different types based on the structure and process of how
the reasoning is performed. The two primary types of reasoning are deductive reasoning and
inductive reasoning. These types can further be broken down into various sub-categories,
each serving a distinct purpose

1. Deductive Reasoning

●​ Definition: Deductive reasoning uses general rules or premises to reach a specific


conclusion. If the premises are true, the conclusion is guaranteed to be true.
●​ Approach: Top-down.
●​ Example:
o​ Premise 1: All birds have wings.
o​ Premise 2: A sparrow is a bird.
o​ Conclusion: Therefore, a sparrow has wings.

2. Inductive Reasoning

●​ Definition: Inductive reasoning uses specific observations to infer a general


conclusion. The conclusion is probable but not guaranteed to be true.
●​ Approach: Bottom-up.
●​ Example:
o​ Observation 1: The sun has risen in the east every day observed.
o​ Observation 2: Today, the sun rose in the east.
o​ Conclusion: The sun will always rise in the east.

3. Abductive Reasoning

●​ Definition: Abductive reasoning infers the best or most likely explanation for a set of
observations. It is used when there is incomplete information.
●​ Approach: Hypothesis-driven.
●​ Example:
o​ Observation: The ground is wet.
o​ Possible Explanation 1: It rained.
o​ Possible Explanation 2: Someone watered the garden.
o​ Best Explanation: It probably rained.
4. Analogical Reasoning

●​ Definition: Analogical reasoning draws conclusions based on the similarity between


two situations or concepts. If two things are alike in one way, they are inferred to be
alike in another.
●​ Approach: Comparative.
●​ Example:
o​ Known Situation: A car needs fuel to run.
o​ New Situation: A truck is similar to a car.
o​ Conclusion: A truck also needs fuel to run.

5. Monotonic Reasoning

●​ Definition: In monotonic reasoning, once a conclusion is drawn, it cannot be


invalidated, even if new information is added.
●​ Approach: Static and consistent.
●​ Example:
o​ Premise: All fruits are sweet.
o​ Premise: Apples are fruits.
o​ Conclusion: Apples are sweet.​
(This conclusion remains valid even if other information, like some apples are
sour, is added later.)

6. Probabilistic Reasoning

●​ Definition: Probabilistic reasoning deals with uncertainty. It calculates the likelihood


of an event or conclusion based on probability.
●​ Approach: Statistical.
●​ Example:
o​ Observation: It has rained on 80% of cloudy days.
o​ Current Situation: Today is cloudy.
o​ Conclusion: There is an 80% chance it will rain today.

Bayes’ Theorem
Bayes' theorem, also known as Bayes' rule or Bayes' law, is a fundamental concept in
probability theory and statistics. It provides a way to update the probability of a hypothesis
based on new evidence. This theorem is named after Reverend Thomas Bayes and has
significant applications in artificial intelligence (AI) and machine learning.

It is a way to calculate the value of P(B|A) with the knowledge of P(A|B).

Bayes' theorem allows updating the probability prediction of an event by observing new
information of the real world.

Example: If the probability of cancer depends on a person's age, then by using Bayes' theorem we can determine the probability of cancer more accurately given the age.
Bayes' theorem can be derived using the product rule and the conditional probability of event A given a known event B.

From the product rule we can write:

1. P(A ∧ B) = P(A|B) P(B)

Similarly, for the probability of event B given a known event A:

2. P(A ∧ B) = P(B|A) P(A)

Equating the right-hand sides of both equations, we get:

P(A|B) = P(B|A) P(A) / P(B)        ... (a)

Equation (a) is called Bayes' rule or Bayes' theorem. This equation is the basis of most modern AI systems for probabilistic inference.

It shows the simple relationship between joint and conditional probabilities. Here,

P(A|B) is known as the posterior, which we need to calculate; it is read as the probability of hypothesis A given that evidence B has occurred.

P(B|A) is called the likelihood: assuming the hypothesis is true, it is the probability of the evidence.

P(A) is called the prior probability: the probability of the hypothesis before considering the evidence.

P(B) is called the marginal probability: the probability of the evidence on its own.

In equation (a), in general, we can write P(B) = Σi P(Ai) P(B|Ai); hence Bayes' rule can also be written as:

P(Ai|B) = P(B|Ai) P(Ai) / ( Σk P(Ak) P(B|Ak) )

where A1, A2, A3, ..., An is a set of mutually exclusive and exhaustive events.

Applying Bayes' rule:

Bayes' rule allows us to compute one conditional probability, such as P(A|B), in terms of P(B|A), P(A), and P(B). This is very useful when we have good estimates of these three terms and want to determine the fourth one. Suppose we want to perceive the effect of some unknown cause and want to compute that cause; then Bayes' rule becomes:

P(cause|effect) = P(effect|cause) P(cause) / P(effect)
Example-1:

Question: What is the probability that a patient has meningitis given that they have a stiff neck?

Given Data:

A doctor is aware that the disease meningitis causes a patient to have a stiff neck 80% of the time. He is also aware of some more facts, which are given as follows:

o The known probability that a patient has meningitis is 1/30,000.
o The known probability that a patient has a stiff neck is 2%.

Let a be the proposition that the patient has a stiff neck and b be the proposition that the patient has meningitis, so we have:

P(a|b) = 0.8

P(b) = 1/30000

P(a) = 0.02

Applying Bayes' rule:

P(b|a) = P(a|b) P(b) / P(a) = (0.8 × 1/30000) / 0.02 = 1/750 ≈ 0.00133

Hence, we can assume that about 1 in 750 patients with a stiff neck has meningitis.
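The same calculation can be checked with a short sketch (values taken from the example above):

```python
# Bayes' rule for the meningitis example: P(b|a) = P(a|b) * P(b) / P(a).
p_a_given_b = 0.8        # P(stiff neck | meningitis)
p_b = 1 / 30000          # P(meningitis)
p_a = 0.02               # P(stiff neck)

p_b_given_a = p_a_given_b * p_b / p_a
print(p_b_given_a)       # ≈ 0.00133, i.e. about 1 in 750
```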

Example-2:

Question: From a standard deck of playing cards, a single card is drawn. The probability that the card is a king is 4/52. Calculate the posterior probability P(King|Face), i.e., the probability that a drawn face card is a king.

Solution:

P(King): probability that the card is a king = 4/52 = 1/13

P(Face): probability that the card is a face card = 12/52 = 3/13

P(Face|King): probability that the card is a face card given that it is a king = 1

Putting all values into Bayes' rule:

P(King|Face) = P(Face|King) P(King) / P(Face) = (1 × 1/13) / (3/13) = 1/3

Application of Bayes' theorem in Artificial intelligence:


Following are some applications of Bayes' theorem:

o​ It is used to calculate the next step of the robot when the already executed step is
given.
o​ Bayes' theorem is helpful in weather forecasting.
o​ It can solve the Monty Hall problem.
