Unit III

Unit III focuses on knowledge representation and engineering, emphasizing the transformation of domain-specific knowledge into computable formats. It discusses the challenges of formalizing informal specifications, the principles of knowledge representation, and various approaches (procedural, declarative, hybrid) for modeling systems like traffic lights. Additionally, it covers the importance of communication between knowledge engineers and domain experts, as well as the limitations and applications of different representation methods.


UNIT III

• Knowledge Representations:
• Knowledge Engineering, Representing
structure in frames, Rules and data, Object-
oriented systems, Natural language Semantics,
Levels of representation

3.1 Knowledge Engineering
• Definition:
– Knowledge engineering applies logic and ontology
to create computable models for specific domains
and purposes.
– It is distinct from pure mathematics and empirical
sciences due to its focus on practical problem-
solving within constraints like budgets and
deadlines.
• Purpose:
– Transforms domain-specific knowledge into
computable formats to address real-world
challenges.
• Informal Specifications:
– Translating informal descriptions into executable
programs requires addressing ambiguities and implicit
knowledge.
– Example: A traffic light system specification involves
clarifying terms like "automatic," "manual control,"
and "special circumstances."
• Challenges in Formalization:
– Experts’ "obvious" terms are puzzles for knowledge
engineers.
– Example: A medical expert might say "monitor vitals,"
assuming the engineer understands it involves heart rate,
blood pressure, etc.
– Computers, like Martians unfamiliar with Earth, lack the
background knowledge required to interpret informal
specifications.
– Example: A computer wouldn't understand the phrase
"book a table" without explicit instructions about
restaurants and reservations.
– Alan Perlis emphasized the difficulty of translating informal
specifications into formal ones because of the extensive
background knowledge required.
• Example of Formalization:
– Informal description of a traffic light system
expanded to:
• States: Red or Green.
• Controls: Automatic switch (On/Off).
• Rules: Define color durations (e.g., Green for g seconds,
Red for r seconds).
– Formalization relies on assumptions and detailed
background knowledge from experts or
references.
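• A minimal sketch of the expanded description as a data declaration (Java here; the field names and the sample durations are illustrative assumptions, not part of the original specification):

  enum LightColor { RED, GREEN }

  class TrafficLightSpec {
      // States: the light is either Red or Green.
      LightColor lightColor = LightColor.RED;
      // Controls: automatic switch (On/Off).
      boolean autoSwitch = true;
      // Rules: color durations (Green for g seconds, Red for r seconds).
      int greenSeconds = 30;   // assumed value for g
      int redSeconds = 45;     // assumed value for r
  }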
• Principles of Knowledge Representation (KR):
• Outlined by Davis, Shrobe, and Szolovits (1993), these
principles define the purpose and structure of KR:
• Surrogate: KR uses symbols to represent external
systems, enabling computers to simulate or reason
about them.
• For example, a traffic light's behavior is modeled
through variables like time and color.
• Ontological Commitments: KR embodies the
designer's assumptions about the domain, such as
categories and types of entities.
• For instance, the traffic light example uses templates
to define variables (e.g., color, time) and their
constraints.
• Fragmentary Theory of Reasoning: KR describes how
domain entities behave and interact, forming a
partial theory. It could be encoded as explicit axioms
or executable programs.
• Efficient Computation: KR must be designed for
effective processing on available hardware, balancing
expressiveness with performance.
• For example, simulation programs for a traffic light
might use loops or declarative axioms.
• Human Expression Medium: A good KR facilitates
communication between knowledge engineers and
domain experts. Conceptual graphs, diagrams, and
structured English help bridge this gap.
• Procedural Approach
• 1. Focus: Specifies "how" to perform a task or solve a problem.
• 2. Representation: Uses algorithms, procedures, and rules to represent
knowledge.
• 3. Example: A recipe that lists step-by-step instructions on how to make a
dish.
• Declarative Approach
• 1. Focus: Specifies "what" is true or false, without detailing how to
achieve it.
• 2. Representation: Uses statements, facts, and constraints to represent
knowledge.
• 3. Example: A statement that says "The capital of France is Paris" without
explaining how to find the capital.
• To illustrate the difference:
– Procedural: "To make a cup of coffee, boil water, add coffee grounds, and stir." (Focuses on the steps.)
– Declarative: "A cup of coffee is made with coffee grounds and hot water." (Focuses on the fact.)
• Surrogate Model for Traffic Light:
• What is a Surrogate Model?
A computational model represents real-world
systems (e.g., traffic lights) using variables like time
(current time) and light_color (traffic light state).
• Procedural Approach
• Uses programs or rules to update variables.
• Key steps:
– Define events as operations (e.g., change light to green
after 30 seconds).
– Use a control loop to trigger operations when conditions
are met.
– Simulate system behavior by running the program.
• Efficient but less flexible for reasoning.
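• A minimal sketch of such a control loop (a one-second-step simulation; the names and durations are illustrative):

  class ProceduralTrafficLight {
      public static void main(String[] args) {
          String color = "red";
          int redSeconds = 30, greenSeconds = 30, timeInColor = 0;
          for (int t = 0; t < 120; t++) {              // simulate 120 seconds
              timeInColor++;
              // Events as operations, triggered when their conditions are met.
              if (color.equals("red") && timeInColor >= redSeconds) {
                  color = "green"; timeInColor = 0;    // change light to green
              } else if (color.equals("green") && timeInColor >= greenSeconds) {
                  color = "red"; timeInColor = 0;      // change light to red
              }
              System.out.println("t=" + t + " color=" + color);
          }
      }
  }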
• Declarative Approach
• Defines system behavior with logic and constraints.
• Key steps:
– Represent the starting state with logical formulas.
– Add rules for events and transformations.
– Use theorem-proving to calculate system state over time.
• Adaptable for reasoning but computationally intensive.
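• A small sketch of the declarative idea in the same language: the starting state and durations are stored as facts, and a query is answered by applying a rule (this mirrors the Prolog rule turns_green shown later in this unit; the fact values are illustrative):

  import java.util.Map;

  class DeclarativeTrafficLight {
      // Facts: when each light turned red, and how long it stays red (seconds).
      static final Map<String, Integer> turnsRed = Map.of("Blinky", 0);
      static final Map<String, Integer> redTime  = Map.of("Blinky", 30);

      // Rule: turnsGreen(X, T2) if turnsRed(X, T), redTime(X, R), and T2 = T + R.
      static Integer turnsGreen(String light) {
          Integer t = turnsRed.get(light), r = redTime.get(light);
          return (t == null || r == null) ? null : t + r;
      }

      public static void main(String[] args) {
          // Query: "When will the light turn green?"
          System.out.println("Blinky turns green at t=" + turnsGreen("Blinky"));
      }
  }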
• Comparison
• Procedural: Direct and efficient for execution.
• Declarative: Better for reasoning and flexibility but
slower without optimization.
Ontological Commitments and Traffic Light
Representation
• Procedural Approach:
The system uses a simple program to change the lights (red → green → yellow → red)
based on time.
Example: A loop that switches lights every 30 seconds. It’s efficient but cannot explain
why a light changes.

• Declarative Approach:

Rules for traffic lights are written in logic-based systems like Prolog (e.g., "Green
follows Red if no pedestrians"). This allows reasoning, such as predicting the next
state.

• Hybrid Approach:

Combines both approaches, using rules for reasoning and efficient procedures for
execution.

• Example: A rule triggers when a pedestrian presses a button, and a procedural loop
efficiently changes the light.
• Traffic Light Example and Ontological Commitments
• Significant Entities:
Variables represent key components of the traffic light
system:
– Light itself
– Current time and color
– Time when the color last changed
– Duration of red/green lights
– Switch for automatic/manual control
• Representation Methods:
These commitments can be modeled using:
– Predicate Calculus: Using variables and relationships
– Graphs: Concepts as nodes
– Data Structures: Frames, schemas, or templates
• CLIPS Template:
Example of a trafficLight template with six slots:
– Slots: Define attributes of the traffic light
– Facets: Add constraints on values (e.g., color must be red or green)
• Instantiation:
After defining the template, you can create instances with specific values:
– AutoSwitch Slot: Defaults to on but can be set to off for manual control.
• Logical Correspondence:
– Each slot corresponds to a relation in logic, like redTime(x, y), where x is
the traffic light and y is the red light duration.
• Programming Analogy:
– The template resembles a record or structure in programming.
– Each assertion initializes storage for a specific traffic light instance.
• This setup enables both simulation and reasoning about traffic
light behavior in various control scenarios.
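• A sketch of the template as a record-like structure in a programming language (Java); the six slot names follow the entities listed earlier and the color facet is checked in the constructor, but they are illustrative, not the exact CLIPS definition:

  class TrafficLightFrame {
      String currentColor;   // facet: must be "red" or "green"
      double currentTime;    // current time
      double lastChange;     // time when the color last changed
      double redTime;        // duration of the red light
      double greenTime;      // duration of the green light
      String autoSwitch;     // defaults to "on"; "off" means manual control

      TrafficLightFrame(String color, double time, double last,
                        double red, double green) {
          if (!color.equals("red") && !color.equals("green"))
              throw new IllegalArgumentException("color must be red or green");
          currentColor = color; currentTime = time; lastChange = last;
          redTime = red; greenTime = green;
          autoSwitch = "on";                  // default value of the autoSwitch slot
      }
  }

  // Instantiation, e.g. a light named Blinky:
  //   TrafficLightFrame blinky = new TrafficLightFrame("red", 0, 0, 30, 30);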
Efficient Computation in Traffic Light
Simulation:
• Efficient Traffic Light Simulation:
• Procedural Approach:
– A loop alternates between red and green lights.
– Limitation: It doesn't track or explain why the light
changes or keep any records.
• Declarative Approach:
– Uses logic rules (e.g., "if red time is over, turn green").
– Allows reasoning and answering queries like "When will
the light turn green?"
– Prolog example: turns_green(X, T2) :- traffic_light(X),
turns_red(X, T), red_time(X, R), sum_times(T, R, T2).
• Hybrid (Forward-Chaining):
– Mixes both procedural and declarative logic.
– Monitors and updates states (e.g., light turns
green when red time is over).
• Inference Engines:
– Backward-Chaining: Inference based on known
facts (like Prolog).
– Forward-Chaining: Continuously updates based on
real-time conditions (e.g., light turning green).
• Reasoning Strategies for Traffic Lights:
• Procedural Loop:
– How it works: Step-by-step simulation (turn red, wait, then turn green).
– Strengths: Simple and mirrors the problem’s sequence.
– Weaknesses: Not suitable for complex, non-sequential problems or parallel
operations.
• Logical Formulas:
– How it works: Uses logic to describe relationships (e.g., "If red time ends, turn
green").
– Strengths: Great for complex, interdependent relationships.
– Weaknesses: Needs extra constructs to handle time and parallel processes.
• Forward-Chaining Rules:
– How it works: Rules activate based on conditions (e.g., red time finishes, then
turn green).
– Strengths: Good for real-time, dynamic systems.
– Weaknesses: Can get complex with unpredictable events.
• When to Use Each Approach:
• Procedural Loop: Use for problems with clear, sequential steps.
• Logic: Best for complex relationships without a natural sequence.
• Forward-Chaining: Ideal for systems that react to real-time changes.
MEDIUM FOR HUMAN EXPRESSION.
• Communication with Experts: Knowledge engineers need to use simple
languages and diagrams to talk to experts in other fields, avoiding
technical jargon.
• Conceptual Graphs (CGs):
– Visual representations of rules, easier to understand than text-based formulas.
– But, new users need an explanation of how CGs work (e.g., meaning of arrows,
boxes).
• Simpler Notation (Stylized English):
– You can explain rules in simpler language like: "If the light turns red at time t, it
turns green after r seconds."
• Familiar Diagrams:
– Experts prefer using diagrams they know (like flow charts or circuit diagrams)
rather than complex notations.
• Converting Between Formats:
– It's easier to convert formal rules into simple diagrams or code, using tools that
automatically do this translation.
• In short, visual and simple language representations help knowledge engineers and domain experts communicate.
Simulation vs Theorem Proving:
• Simulation vs Theorem Proving:
– Simulation predicts what happens in specific cases but can’t prove
general rules.
– Theorem Proving can prove general rules with certainty, like how often
Blinky turns green.
• Limitations of Simulation:
– Simulation shows patterns, but they might be random or based on
specific starting conditions.
– Theorem proving explains why those patterns happen.
• Benefits of Theorem Proving:
– Provides proven rules, like how long the traffic light stays red.
– Helps verify and explain patterns seen in simulations.
• Combining Both:
– Use simulation to suggest patterns, then theorem proving to confirm
them.
• Declarative vs Procedural:
– Declarative representations (e.g., logic rules) are concise and easier to verify; procedural code is more efficient to execute but harder to check.
• Persistence in Simulation:
– While simulations can tell us when the traffic light changes color, they don't
explain what happens in between those changes. The light should stay the
same color until the next scheduled change.
• Persistence Rules:
– Additional rules (called persistence axioms) are needed to say that the light
stays the same color until a new change happens. For example, if the light
turns red at time t1 and stays red for r minutes, it remains red for that entire
duration.
• Leibniz’s Principle:
– According to Leibniz’s principle, nothing changes without a reason. So, the
traffic light stays the same color unless something else (like time passing)
causes it to change.
• The Frame Problem:
– Computers can struggle with the idea that everything else stays the same
unless explicitly told to change. These persistence axioms help the system
understand that the light should remain the same color until a reason for
change occurs.
• In short, persistence axioms ensure that the light's color stays the
same until the rules tell it to change, helping to solve the frame
problem in simulations.
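• A sketch of one such persistence axiom, written in the predicate-calculus style used elsewhere in this unit (the predicate names are illustrative): if turnsRed(x, t1) ∧ redTime(x, r) and t1 ≤ t < t1 + r, then currentColor(x, red) holds at time t. In other words, once the light turns red at t1, it is still red at every time t up to t1 + r.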
3.2 Representing Structure in Frames:
• A knowledge representation language must
analyze knowledge into low-level primitives,
organize it into high-level structures like
graphs, and provide methods for grouping or
nesting to manage complex structures
efficiently.
SCHEMATA
• Aristotle: Introduced "schema" for patterns of valid
syllogisms.
• Kant (1787): Schemata are rules for constructing
general concepts (e.g., a "triangle" applies to all types
of triangles, not just a specific one).
• Bartlett (1932): Schemata organize past experiences
into cohesive units that guide responses (e.g., knowing
how to ride a bike without relearning every step).
• Selz (1913, 1922): Schemata as concept networks
direct thought by filling in missing information (e.g.,
solving a puzzle by identifying missing pieces).
FRAMES
• Definition (Minsky, 1975): A frame is a data
structure representing stereotyped situations,
with fixed "top levels" and variable "slots" for
specific data (e.g., a frame for a birthday
party).
• Components: Frames include slots with
conditions for specific assignments (e.g., a slot
for "currentColor" in a traffic light frame must
be either red or green).
• Inheritance: Frames use hierarchies, where
subtypes inherit attributes from supertypes,
overriding conflicts when necessary (e.g., a
"TrailerTruck" inherits properties from "Truck"
but specifies 18 wheels).
• AI Impact: Minsky's paper inspired numerous
frame systems, such as FRL (1977) and KRL
(1977), emphasizing the need for structured
knowledge representation.
• Modern Protocols: Generic Frame Protocol
(GFP) supports frame-based knowledge
sharing with a neutral syntax.
MAPPING FRAMES TO LOGIC:
1.Declarative Information to Logic:
– Frames can be mapped to first-order logic (FOL)
using existential conjunctive (EC) subsets with only
existential quantifiers (∃) and conjunction (∧).
2. Instance Example in Conceptual Graphs (CG):
• [TrafficLight: Blinky] → (currentColor) → [Color: green]
• Maps to: (∃x: TrafficLight) (x = Blinky ∧ currentColor(x, green)).
3. Type Definitions in CG:
• Example:
– Truck: [Vehicle: ?x] → (UnloadedWt) → [WtMeasure].
– TrailerTruck: [Truck: ?x] → (HasPart) → [Trailer] ∧ (NumberOfWheels) → [Integer: 18].
4.Predicate Calculus Mapping:
• Truck: (∀x: Vehicle)(∃y1: WtMeasure)
unloadedWt(x, y1) ∧ ….
• TrailerTruck: (∀x: Truck)(∃y1: Trailer) hasPart(x,
y1) ∧ ….
5. Unification in Logic:
• Instantiating frames merges type slots with instance slots using unification rules.
• Example: Matching Two Statements
– Goal: Find X so both statements become identical.
– Statement 1: parent(X, John)
– Statement 2: parent(Mary, John)
– Unification: X = Mary
– Result: parent(Mary, John)

6. Complex Concepts Example:
• MaxGrossWt: Links a vehicle to its maximum gross weight, the sum of:
– Unloaded Weight (empty vehicle weight)
– Cargo Weight (weight of the loaded cargo)
• Why EC Logic Is Limited:
– EC logic can express relationships but lacks built-in support for arithmetic operations or complex conditions.
• What EC Logic Can Do:
– Represent structured knowledge (e.g., "Truck T123 has an unloaded weight of 10,000 kg").
– Define relationships between concepts.
– Handle simple inference rules.
• What EC Logic Cannot Do Alone:
– Compute values dynamically (e.g., "MaxGrossWt = UnloadedWt + CargoWt").
– Handle constraints like "If MaxGrossWt > 20,000 kg, restrict movement".
– Implement procedural logic (e.g., decision-making based on conditions).
7.Frame System Limitations:
• EC logic is limited to facts and cannot express:
– Negations (e.g., "There is no hippopotamus in the
room").
– Implications (e.g., "If A, then B").
– Generalizations (e.g., "All X are Y").
8.Knowledge Base Structure:
• Type Definitions: Hierarchical structure for frame types.
• Instances: Concrete data using EC logic, equivalent to a
relational database.
• [Vehicle]
├── [Truck]
│ ├── [TrailerTruck]
│ ├── [DumpTruck]
├── [Car]
│ ├── [Sedan]
│ ├── [SUV]

• 9.Beyond EC Logic:
• Arithmetic computations and advanced logical expressions
require procedural attachments or external programming.
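• For instance, a procedural attachment for MaxGrossWt could be a small method attached to the frame (a sketch; the field names follow the slides above):

  class TruckFrame {
      double unloadedWt;   // empty vehicle weight (kg)
      double cargoWt;      // weight of the loaded cargo (kg)

      // Procedural attachment: the value is computed, not stored as an EC-logic fact.
      double maxGrossWt() { return unloadedWt + cargoWt; }

      // A constraint that EC logic alone cannot enforce.
      boolean movementRestricted() { return maxGrossWt() > 20000; }
  }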
3.3 Rules and Data
• During the 1970s, universities pioneered
expert systems, while Ted Codd developed
relational databases at IBM.
• Although expert systems and database
systems differ in scale, their functionalities are
converging as database systems handle more
complex operations and expert systems are
applied to larger data sets.
• Key Differences:
• Expert Systems: Repeatedly execute long chains of rules on small data sets.
• Database Systems: Execute short chains of rules on large data sets.
• Common Logical Foundations: Both systems
rely on the existential-conjunctive (EC) subset
of logic, using two main inference rules:
• Modus Ponens: From p and (p → q), infer q
(Forward Chaining).
• Modus Tollens: From ¬q and (p → q), infer ¬p
(Backward Chaining).
Modus Ponens:
• If p is true and p → q (if p then q) is true, then we can infer that q is true.
• Logical Form: p, (p → q) ⊢ q
• Example:
• If it rains, the ground will be wet. (p → q)
• It is raining. (p)
• Therefore, the ground is wet. (q)
• 2. Modus Tollens (Backward Chaining)
• If q is false and p → q (if p then q) is true, then we can infer that p is false.
• Logical Form: ¬q, (p → q) ⊢ ¬p
• Example:
• If there is a fire, the alarm rings. (p → q)
• The alarm is NOT ringing. (¬q)
• Therefore, there is NO fire. (¬p)
• Application in AI & Expert Systems
• Forward Chaining (Modus Ponens) → Used in
rule-based systems (e.g., medical diagnosis,
inference engines).
• Backward Chaining (Modus Tollens) → Used
in goal-driven reasoning (e.g., Prolog,
theorem proving).
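• A toy forward-chaining sketch that applies modus ponens repeatedly over propositional facts and rules (the facts and rules are illustrative, not from the text):

  import java.util.*;

  class ForwardChainer {
      public static void main(String[] args) {
          Set<String> facts = new HashSet<>(List.of("raining"));
          // Rules of the form premise -> conclusion.
          Map<String, String> rules = Map.of(
              "raining", "ground wet",
              "ground wet", "roads slippery");

          boolean changed = true;
          while (changed) {                        // repeat until no new fact is derived
              changed = false;
              for (Map.Entry<String, String> rule : rules.entrySet()) {
                  // Modus ponens: from p and (p -> q), infer q.
                  if (facts.contains(rule.getKey()) && facts.add(rule.getValue()))
                      changed = true;
              }
          }
          System.out.println(facts);   // contains raining, ground wet, roads slippery
      }
  }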
Historical Developments and Key Systems
• Planner (1971, MIT): Combined forward and
backward chaining with relational databases.
• MYCIN (Stanford, 1976): A backward-chaining
system for diagnosing bacterial infections.
• OPS5 (Carnegie-Mellon): A forward-chaining
system that became the foundation for
commercial expert systems, such as CLIPS.
• Prolog: Developed in Europe, combined
backward-chaining with logical operations,
influencing database integration.
• Integration of Systems and Query Techniques
• Systems like Microplanner, Prolog, and SQL
use backtracking to solve queries:
– Search relevant relations to find the desired data.
– Backtracking tries different options if a goal
cannot be met.
• Microplanner
• Derived from the Planner language.
• Used for expert systems and relational
databases.
• Used by Terry Winograd (1972) to implement
SHRDLU for natural language understanding.
• Supports both forward-chaining (commands)
and backward-chaining (queries).
• Uses goal-based logic with variables marked
by ?.
• Performs backtracking to satisfy conditions.
• Prolog
• Logic programming language based on
predicate logic.
• Uses uppercase variables and omits explicit
goal statements.
• Performs backtracking to find valid solutions.
• Syntax example: objects(X1,block,*) &
objects(X2,pyramid,*) & supports(X1,X2).
• Queries follow logical rules without
predefined execution order.
• SQL
• Structured Query Language for relational databases.
• Uses a verbose syntax with SELECT, FROM, and WHERE
clauses.
• Performs optimizations like indexing and reordering to
reduce backtracking.
• Syntax example: SELECT supporter FROM supports,
objects X1, objects X2 WHERE X1.shape = 'block' AND
X2.shape = 'pyramid' AND supporter = X1.id AND
supportee = X2.id;
• Query execution is optimized based on database
structure.
• All three use backtracking for query resolution, but SQL
optimizes execution dynamically, while Prolog and
Microplanner require manual goal ordering.
• Optimization
• In SQL databases, optimizations like indexing,
hash coding, and goal reordering improve
query performance.
• In contrast, systems like Prolog and
Microplanner require manual goal ordering or
use preprocessors for optimization.
• PLURALS AND SETS
• Handling plural expressions in logic is more
cumbersome than in English.
• Microplanner uses find operators to find
multiple matches.
• In Prolog, the setof predicate accumulates
search results into a list.
• SQL includes built-in operations such as
GROUP BY and HAVING clauses to efficiently
manipulate large sets of data.
• Overall, expert systems and relational
databases share a logical foundation, with a
convergence in their capabilities due to the
integration of relational operations and logical
inference techniques.
• Microplanner
• Uses the find operator to retrieve multiple
results.
• (find all ?x1
     (goal (objects ?x1 block ?))
     (find 3 ?x2
        (goal (objects ?x2 pyramid ?))
        (goal (supports ?x2 ?x1))))
• Paraphrase: Find all x1 (blocks) where three x2 (pyramids) support x1.
• 2. Prolog
• Prolog uses a rule-based logical syntax, emphasizing
backward chaining and pattern matching.
• Prolog Rule for sup_color:
• sup_color(S, C) :- supports(S, X), objects(X, *, C).
• In this Prolog rule:
– sup_color(S, C) is true if S supports some object X, and X
has a color C.
– It searches for a combination of S and X that meets the
conditions in the database.
• 3. CLIPS
• CLIPS is a forward-chaining rule-based system
and is commonly used in expert systems. It
emphasizes pattern matching and proactive
updates.
• Forward-Chaining Rule in CLIPS to maintain
contains relationships:
• (defrule checkForBoxSupporter
• (supports ?x ?y)
• (objects ?x box ?)
• =>
• (assert (contains ?x ?y)))
• In this CLIPS rule:
– If ?x is a box and ?x supports some object ?y, it
asserts that ?x contains ?y.
– This proactively updates the database's contains
relationships to maintain constraints (e.g., boxes
containing anything they support).
• 4. Conceptual Graph Representation
• In Conceptual Graphs, queries are expressed
in a graph format close to natural language.
• For example, the natural language query "Which blocks are supported by 3 pyramids?" can be drawn as a conceptual graph that stays close to its English wording.
• A simpler example, "Someone likes ice cream", as a CG:
• [Person: ?] ← (Inst) ← [Like] → (Thme) → [IceCream].
• Breakdown of the Expression:
• [Person: ?]:
– This represents a Person. The ? is a variable, meaning the identity of the
person is unknown (we're looking for someone who has this property).
• ~(Inst):
– This is the relation between Person and the Like concept. The Inst
(instrument) indicates that the Person is the instrument or subject
performing the action of liking.
• [Like]:
– The Like is the action or relation, indicating the act of liking.
• -(Thme)-:
– The Thme (theme) is another relation, indicating the object or the thing
being liked. Here, it connects the Like relation to the object IceCream.
• [IceCream]:
– The object being liked is IceCream.
• A conceptual graph consists of:
• Concept Nodes → Represent entities, objects,
or ideas
– Example: [Car: Tesla], [Person: Alice]
• Relation Nodes → Represent relationships
between concepts
– Example: (Drives), (HasColor), (Supports)
• Edges → Connect concepts to relations
• [Person: Alice] → (Drives) → [Car: Tesla]
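• A minimal sketch of how such a graph could be held in a program (the class names are illustrative, not a standard CG library):

  import java.util.*;

  class Concept {
      String type, referent;
      Concept(String type, String referent) { this.type = type; this.referent = referent; }
  }

  class Relation {
      Concept from; String name; Concept to;
      Relation(Concept from, String name, Concept to) { this.from = from; this.name = name; this.to = to; }
  }

  class ConceptualGraph {
      List<Concept> concepts = new ArrayList<>();
      List<Relation> relations = new ArrayList<>();

      public static void main(String[] args) {
          // [Person: Alice] -> (Drives) -> [Car: Tesla]
          ConceptualGraph g = new ConceptualGraph();
          Concept alice = new Concept("Person", "Alice");
          Concept tesla = new Concept("Car", "Tesla");
          g.concepts.add(alice); g.concepts.add(tesla);
          g.relations.add(new Relation(alice, "Drives", tesla));
      }
  }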
• Practical Implications
• Backward Chaining (Prolog, SQL): Focuses on deducing
results by exploring relationships backward through known
facts.
• Forward Chaining (CLIPS): Updates the database by
proactively ensuring relationships (e.g., boxes containing
contents).
• Each system has trade-offs:
– SQL excels at simple, declarative queries with large datasets.
– Prolog uses recursion and pattern matching but lacks native
support for some database operations.
– CLIPS provides fast updates and real-time constraint checks.
3.4 Object-Oriented Systems
• Integration of Declarations and Operations:
• Object-oriented systems combine object definitions (declarations) and actions (procedures) into a single package, integrating data and behavior.
• Object-Oriented Declarations (Java Example):
• Inheritance: A class like Truck extends another
class, Vehicle, inheriting its properties. This
allows for a hierarchical relationship between
classes.
• Instance Variables & Methods: Each instance
variable (e.g., unloadedWt, maxGrossWt)
corresponds to slots in a frame, and the
methods (e.g., unloadedWt()) provide access to
these variables.
• Constructor: A constructor (e.g., in TrailerTruck)
initializes new objects and instance variables
(e.g., creating 18 Wheel objects).
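• A sketch in the spirit of the Java example described above (the original code is not shown here, so the exact signatures are assumptions):

  import java.util.*;

  class Vehicle { }
  class Wheel { }
  class WtMeasure { double kg; WtMeasure(double kg) { this.kg = kg; } }

  class Truck extends Vehicle {                     // Truck inherits from Vehicle
      private WtMeasure unloadedWt;                 // instance variables correspond to frame slots
      private WtMeasure maxGrossWt;
      WtMeasure unloadedWt() { return unloadedWt; } // method giving access to the variable
      Truck(WtMeasure unloaded, WtMeasure maxGross) {
          unloadedWt = unloaded; maxGrossWt = maxGross;
      }
  }

  class TrailerTruck extends Truck {
      private List<Wheel> wheels = new ArrayList<>();
      TrailerTruck(WtMeasure unloaded, WtMeasure maxGross) {
          super(unloaded, maxGross);
          for (int i = 0; i < 18; i++)              // constructor creates 18 Wheel objects
              wheels.add(new Wheel());
      }
  }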
MAPPING FRAMES AND RULES TO OBJECTS.
• Frames vs. Object-Oriented Systems: Frames
map to instance variables and inheritance,
while object-oriented systems encapsulate
data and require methods for access.
• Example
• (frame Truck (slots (unloadedWt WtMeasure)
(maxGrossWt WtMeasure)))
• The Truck frame has slots like unloadedWt and
maxGrossWt.
• Access Control in Frames vs. Objects: Frames
allow direct access to data, while object-
oriented systems restrict access through
methods.
• Example
• In frames, you can directly access unloadedWt; in Java, you must call getUnloadedWt() to access the data.
• Compiling Logic to Code: Compilers can
automatically translate frames and type
definitions into language-specific declarations
(e.g., C++ or Java), but generating procedural code
requires more complexity, like backward and
forward chaining techniques.
• Chaining in Object-Oriented Systems:
• Forward and backward chaining can be supported
in object-oriented systems with techniques like
recursive descent for backward chaining or
observer patterns for forward chaining, triggering
actions based on data changes.
• Example
• Backward Chaining (Logic):
To prove a goal B, backward chaining looks for a rule A -> B and then tries to establish A (the conditions). In a program this can be implemented by recursive descent over the rules.
• Forward Chaining (Observer Pattern):
Java’s Observer pattern updates observers
when the Observable object changes, similar
to forward chaining in a rule system where
actions are triggered by changes.
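• A minimal sketch of the idea with a hand-written listener instead of the java.util classes: when the observed value changes, the registered rule fires, as in forward chaining (names are illustrative):

  import java.util.*;
  import java.util.function.Consumer;

  class ObservedLight {
      private final List<Consumer<String>> observers = new ArrayList<>();
      private String color = "red";

      void addObserver(Consumer<String> observer) { observers.add(observer); }

      void setColor(String newColor) {
          color = newColor;
          // Like a forward-chaining rule: a change in the data triggers whoever is watching it.
          for (Consumer<String> observer : observers) observer.accept(newColor);
      }

      public static void main(String[] args) {
          ObservedLight light = new ObservedLight();
          light.addObserver(c -> System.out.println("rule fired: light is now " + c));
          light.setColor("green");   // prints: rule fired: light is now green
      }
  }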
• Logic-Based vs. Object-Oriented Systems
• Logic-Based Systems:
• Represent knowledge as general propositions that can
be used flexibly for different purposes.
• Example: A single assertion "Every trailer truck has 18
wheels" can answer questions, initialize values, or
trigger errors.
• Object-Oriented (O-O) Systems:
• Depend on procedural encoding, meaning the same
information must be written separately for each
purpose.
• Risk of inconsistencies due to different programmers
modifying different parts of the code.
• Logic-Based vs. Object-Oriented Systems:
• Logic-based systems use single assertions for multiple
purposes, while O-O systems require separate procedural
encodings, risking inconsistencies.
• Encapsulation in O-O Systems:
• Conceptual graphs use nested boxes to represent
encapsulated objects.
• Predicate calculus uses dscr(x, p) to relate objects to their
descriptions.
• Example (Birthday Party on 26 May 1996):
• 40 guests gave presents to Marvin; 50 candles were on a cake.
• Graphical representation shows nested details; predicate
calculus expresses it formally.
• Encapsulation ensures modularity for reuse under different
conditions.
Encapsulation in Object-Oriented Systems:
• Encapsulation: Object-oriented systems separate an object's behavior from its internal structure for modularity.
• Example: A Trip class associates trucks and
drivers dynamically without redundancy.
• Access Issues: Storing driver info in Truck or
Driver causes inefficient searching.
• Trip Class: A Trip class avoids data duplication
and simplifies object relationships.
• Frame vs. O-O Systems: Frame systems lack
encapsulation; O-O systems use methods for
data access.
• Consistency: Logic-based systems use contexts
to manage complex, dynamic relationships.
• Nesting: Contextual graphs organize nested
data to reflect real-world relationships.
• Logic Representation: Predicate calculus uses
description predicates to manage and
encapsulate event details.
ZOOMING IN AND ZOOMING OUT.
• Zooming In/Out: Clicking on context boxes
expands or contracts details, allowing deeper
exploration.
• Nested Contexts: A process box contains
steps, like states and events, represented as
nested contexts.
• Variables: Variables like *x and *y help link
details across graphs, reducing clutter in
complex systems.
• Interactive Display: Users can toggle between
viewing coreference links as variables or
dotted lines.
• State and Event Details: Clicking on a state or
event box reveals specific actions like guests
singing or candles burning.
• Context Expansion: Lower-level concepts can be expanded further, like a guest singing in a specific key.
Translations to Natural Language
• Conceptual Graphs & Readability: Though
conceptual graphs are readable, they are a
formal language mainly used by programmers,
system analysts, and professionals.
• Natural Language Preference: End users prefer
natural language; even programmers use it for
comments, documentation, and help systems.
• Bridge Between Languages: Conceptual graphs
serve as a bridge between computer languages
and natural languages.
• Example of Translation:
– Describes a birthday party scenario with guests, candles,
and a process involving different states (e.g., singing,
blowing candles, generating smoke).
– Events and states are structured in conceptual graphs but
can be expressed in natural language sentences.
• Translation Challenges:
– Translating from natural to formal language is hard due to
ambiguity.
– Translating from formal to natural language is simpler but
requires organizing information into sentences and
paragraphs.
• Stylized English Generation: Mapping conceptual
graph contexts directly to structured English helps
simplify language generation.
Objects and Theories
• Hierarchy Correspondence:
– The hierarchy of theories in ontology aligns with the
hierarchy of types/classes in object-oriented (O-O) systems.
• Comparison Between Ontology and O-O Systems:
– Each O-O class corresponds to a type in ontology.
– Methods' preconditions/postconditions align with axioms
of the corresponding ontology type.
– Each object instance follows axioms via propositions or
frame-like slots.
– Multiple inheritance in O-O corresponds to multiple
inheritance in ontology theories.
• Logical Representation:
– Preconditions, postconditions, and class definitions can be
expressed using first-order logic.
• Difference Between Procedural & Logic-Based O-O
Languages:
– Procedural languages have an implicit execution order,
simplifying time-sequenced subjects.
– Logic-based languages are better for unordered processes
where execution order is irrelevant.
• Challenges in GUI Programming:
– GUIs don’t follow a fixed execution sequence but rely on
event-driven mechanisms (user interactions).
– Event-driven programming is necessary for GUIs, as seen in
Petri nets.
3.5 Natural Language Semantics
• Natural Languages as Knowledge Tools:
Natural languages (like English) are powerful
tools for representing knowledge, used by
everyone, from children to scientists.
• Aristotle’s Contribution: Aristotle’s study of
how language expresses knowledge laid the
groundwork for understanding knowledge
representation, which is still used in modern
AI and programming.
• Comparison to Other Languages: While
mathematics and programming languages are
more concise and precise, they are also
limited and less flexible than natural
languages.
• Ambiguity and Flexibility: Natural languages
are flexible and can express complex ideas,
even though they can be vague. Artificial
languages rely on natural languages to define
and explain them.
BACKGROUND KNOWLEDGE
• Understanding Natural Language is Hard:
– Computers can analyze sentence structure but
struggle with understanding the meaning.
– This requires a lot of background knowledge about
the world.
• Example Problem:
– To find a Casablanca scene, the system must know
the movie and actor.
– It also needs to find the misquoted line in the
movie.
• Need for Specialized Knowledge:
– Systems need experts to add domain-specific knowledge
(e.g., about movies or museums).
– This makes natural language understanding challenging
for computers.
• Search Engines Aren’t Perfect:
– Search engines match keywords, but don’t always find the
right results.
– Incorrect or vague wording can make searches difficult.
• Keyword Search Problems:
– Keyword searches work when you know the exact terms.
– Finding things described differently or with different
words is harder.
Language Analysis
• Morphology:
– The system looks up each word in a dictionary,
finds its root form and parts of speech.
– For example, "degli" is broken down into "di" (of)
and "gli" (the), and "approvato" is the past
participle of "approvare" (to approve).
• Syntax:
– Grammar rules are used to break the sentence
into phrases and subphrases.
– The noun phrase "L'associazione degli industriali"
is split into an article ("la"), noun ("associazione"),
and prepositional phrase ("di gli industriali").
• Semantics:
– A semantic interpreter translates the sentence
structure into a conceptual graph representing
meaning.
– In this case, it shows that "the association has as
part the industrialists."
• This process involves using linguistic
knowledge like dictionaries, grammar rules,
and conceptual patterns to understand and
analyze the sentence.
CONCEPTS AND RELATIONS.
• Agent (Agnt):
– Links an action to the being performing it.
– Example: "approve" has the association as the agent.
• Theme (Thme):
– Links an event or state to the main entity involved.
– Example: "approve" has "project" as the theme.
• Past (Past):
– Marks the action as occurring in the past.
– Example: "approve" happened in the past.
• Part (Part):
– Links an entity to something that is part of it.
– Example: "association" has "industrialists" as parts.
• Attribute (Attr):
– Links a concept to something representing its attribute.
– Example: "project" has "new" as its attribute.
• Goal (Goal):
– Links a concept to its goal.
– Example: "project" has "investments" as its goal.
• In (In):
– Links a concept to another that contains it.
– Example: "investments" are "in the south of Italy."
• These relations are foundational for creating more
complex connections in databases or expert
systems.
Resolving Ambiguities:
• Ambiguous Word - "Industriali":
– The word "industriali" could mean "industrialists,"
"industrial," or the verb form of "to be busy."
– The correct meaning was determined by context
in the syntax stage.
• Ambiguous Word - "Piano":
– The word "piano" could mean "project," "plan," or
"plane," and others.
– Syntax eliminated some meanings, and the word
"project" was chosen based on context.
• Ambiguous Word - "Mezzogiorno":
– "Mezzogiorno" literally means noon, but in context, it
refers to the "south of Italy."
– The preposition used helped resolve this ambiguity.
• Ambiguous Preposition - "Di":
– The preposition "di" can have multiple meanings, but
context helps determine its role.
– It can mean "part" or "goal," depending on the context,
like in "association of the industrialists" or "plan of
investments."
• These examples highlight how morphology and syntax
issues can be solved by context, but semantic issues
need a large dictionary, background knowledge, and
effective inference systems.
Question Answering
• Question Answering:
• Input Question: DANTE processes the
question "Che cosa si approva?" (What thing is
approved?).
• Query Creation: The system creates a query
graph representing the question's meaning
with a concept and a question mark (e.g.,
[Thing: ?]).
• Knowledge Base Search: The system searches
its knowledge base and finds a match, such as
[Project] for [Investment: {*}].
• Answer Generation: The answer is generated
as "Si approva un progetto di investimenti" (A
project of investments is approved). DANTE
can also provide the original text if needed.
Inference
• Inference Example: For the question "What do the
industrialists belong to?", DANTE infers that they
belong to an association.
• Complex Question: DANTE can answer basic
questions (e.g., "Who was appointed president?") but
struggles with more complex questions, such as "Who
was president before or after the appointment?".
• Knowledge Requirement: Complex questions require
specialized knowledge, such as understanding the role
of a board of directors and the process of
appointments. Systems like Cyc use detailed
background knowledge to answer these.
3.6 Levels of Representation
• 1. Representation Levels (Ron Brachman, 1979)
• Ron Brachman divided representation into 5 levels,
focusing on how knowledge is organized and
implemented:
• Implementational Level
– Involves data structures used in programming.
– Examples: Atoms, pointers, lists.
• Logical Level
– Uses symbolic logic (propositions, variables, quantifiers,
etc.).
– Example: Logical statements like "All humans are mortal".
• Epistemological Level
– Defines concept types, subtypes, inheritance, and
relationships among concepts.
– Example: A "vehicle" concept that includes "car," "bus," and
"motorcycle."
• Conceptual Level
– Involves semantic relations, objects, actions, and linguistic
roles.
– Example: How actions relate to objects, like "a car moving on
a road."
• Linguistic Level
– Represents arbitrary concepts, words, and expressions in
natural language.
– Example: English phrases like "I am going to the store."
• 2. Competence Levels (Rodney Brooks, 1986)
• Rodney Brooks outlined 8 levels of competence for
mobile robots, showing increasing sophistication:
• Avoiding
– Robots avoid obstacles, both moving and stationary.
• Wandering
– Robots move randomly while avoiding obstacles.
• Exploring
– Robots search for reachable areas and head toward
them.
• Mapping
– Build a map of the environment and note routes.
• Noticing
– Detect environment changes, like new obstacles or
areas.
• Reasoning
– Robots identify objects, reason about their
relationships, and act on them.
• Planning
– Create plans to change the environment in desired
ways.
• Anticipating
– Predict the actions of other objects, adjust plans
proactively.
• 3. Design Levels (Zachman Framework)
• Zachman's 5 levels outline different perspectives of information
representation in systems architecture:
• Scope (Level 1)
– Describes aspects independent of computer representation.
– Focus: The big picture of business operations.
• Enterprise Model (Level 2)
– Still independent of computer implementation, but describes business
interactions.
• System Model (Level 3)
– Descriptions are implementation-independent, selected by a system
analyst.
• Technology Model (Level 4)
– Connects data structures to physical representations, linking programming
and operations.
• Component Level (Level 5)
– Focuses on specific implementation details, where the connection to the
outside world becomes less apparent.
• Zachman’s Matrix
• A matrix with 6 columns and 5 rows, showing
30 perspectives on knowledge representation.
• It captures various views, such as business
goals, processes, and IT implementation
details.
Summary
• Representation Levels (Brachman) focus on
abstract conceptual understanding and
programming details.
• Competence Levels (Brooks) outline how
robotic systems evolve in complexity and
functionality.
• Design Levels (Zachman) show how systems
are structured from business concepts down
to technical implementation details.
