Ontological Engineering

The document discusses resolution as a key inference mechanism in logic programming, outlining its steps and the importance of converting statements into Conjunctive Normal Form. It also covers Knowledge Representation (KR) methods, including semantic networks, frames, and ontologies, which are crucial for structuring data in AI systems. Additionally, it explores ontological engineering, emphasizing the challenges of creating general-purpose ontologies and the various methods used to develop them.


Resolution

Resolution is the primary inference mechanism in logic programming and automated reasoning, especially for propositional and first-order logic. It is used to deduce new information from existing knowledge by applying a single rule of inference to a set of clauses.
Steps in Resolution:
1. Convert all statements into Conjunctive Normal Form (CNF), a standardized way of expressing logical formulas as a conjunction of clauses.
2. Apply the resolution rule to pairs of clauses that contain complementary literals, deriving new clauses.
3. If negating the goal and adding it to the clause set leads to a contradiction (the empty clause), then the original goal is true. A small sketch of this refutation procedure is given below.
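
To make the procedure concrete, here is a minimal propositional-resolution sketch in Python. The clause encoding (literals as strings, clauses as frozensets) and the function names (negate, resolve, resolution_refutation) are illustrative choices for this example, not part of any standard library.

# Minimal propositional resolution by refutation (illustrative sketch).
# A clause is a frozenset of literals; "~P" denotes the negation of "P".

def negate(lit):
    """Return the complementary literal."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return every resolvent obtained by cancelling one complementary pair."""
    return [(c1 - {lit}) | (c2 - {negate(lit)}) for lit in c1 if negate(lit) in c2]

def resolution_refutation(kb_clauses, negated_goal_clauses):
    """True if the empty clause is derivable, i.e. the goal follows from the KB."""
    clauses = set(kb_clauses) | set(negated_goal_clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:              # empty clause: contradiction found
                        return True
                    new.add(r)
        if new <= clauses:                 # nothing new: goal is not provable
            return False
        clauses |= new

# KB in CNF: {P -> Q, P}; goal: Q, so the negated goal ~Q is added.
kb = [frozenset({"~P", "Q"}), frozenset({"P"})]
print(resolution_refutation(kb, [frozenset({"~Q"})]))   # True

The three steps above map directly onto the sketch: the input is already in CNF, resolve applies the resolution rule, and the search returns True as soon as the negated goal yields the empty clause.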

Knowledge Representation (KR)


KR is the method used by AI systems to represent data about the real world in a
structured, formal way. It is fundamental to tasks such as automated reasoning, natural
language understanding, and decision-making.
Forms of KR:
• Semantic Networks: Graph structures where nodes represent entities and edges represent relationships between them (see the sketch after this list).
• Frames: Data structures for dividing knowledge into substructures by representing "objects" and their "attributes."
• Logic-based Representations: Using formal logic (propositional or predicate logic) to express knowledge.
• Ontologies: Structured frameworks that categorize entities and define relationships between them.
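
As a concrete illustration of the semantic-network idea, here is a minimal Python sketch. The network contents and the lookup function are invented for this example and do not correspond to any particular KR toolkit.

# A toy semantic network: nodes are entities/categories, labelled edges are relations.
semantic_net = {
    "Canary": {"is_a": "Bird", "color": "yellow"},
    "Bird":   {"is_a": "Animal", "can": "fly"},
    "Animal": {"has": "skin"},
}

def lookup(entity, attribute):
    """Follow 'is_a' links upward until the attribute is found (property inheritance)."""
    node = entity
    while node is not None:
        props = semantic_net.get(node, {})
        if attribute in props:
            return props[attribute]
        node = props.get("is_a")
    return None

print(lookup("Canary", "can"))    # 'fly'    (inherited from Bird)
print(lookup("Canary", "color"))  # 'yellow' (stored directly on Canary)

Property inheritance along the is_a edges is exactly the kind of simple reasoning that semantic networks are designed to support.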
Ontological Engineering
Ontologies are a powerful tool for organizing and understanding information in a
structured way. They provide a clear framework for defining the relationships between
different concepts, making it easier to share and analyze data across various fields.
This article will explore what ontologies are, how they are used, and why they are important
for improving data management and communication in areas like artificial intelligence,
semantic web, and knowledge management.
Ontologies
Ontologies are formal definitions of vocabularies that allow us to describe complex structures, as well as new relationships between vocabulary terms and between members of the classes we define. Ontologies generally describe specific domains, such as scientific research areas.
Example: an ontology depicting the Movie domain.

Components of Ontology:
• Classes (Concepts): Categories or kinds of things (e.g., "Person," "Animal").
• Relations: Relationships between entities (e.g., "is a part of," "has a").
• Instances (Individuals): Concrete examples of a class (e.g., "John," "Tina").
• Properties (Attributes): Characteristics of instances (e.g., age, color).
An ontology defines the shared vocabulary needed for a domain of knowledge, which allows reasoning engines to process and infer information.
Example: In the domain of medicine, an ontology could define concepts like "Disease," "Symptom," and "Treatment," and relationships such as "caused by" or "treats."

In “toy” domains, the choice of representation is not that important; many choices will work.
Complex domains such as shopping on the Internet or driving a car in traffic require more
general and flexible representations. This section shows how to create these representations,
concentrating on general concepts—such as Events, Time, Physical Objects, and Beliefs—
that occur in many different domains. Representing these abstract concepts is sometimes
called ontological engineering.
For any special-purpose ontology, it is possible to make changes like these to move toward
greater generality. An obvious question then arises: do all these ontologies converge on a
general-purpose ontology? After centuries of philosophical and computational investigation,
the answer is “Maybe.” In this section, we present one general-purpose ontology that
synthesizes ideas from those centuries. Two major characteristics of general-purpose
ontologies distinguish them from collections of special-purpose ontologies:
• A general-purpose ontology should be applicable in more or less any special-purpose domain (with the addition of domain-specific axioms). This means that no representational issue can be finessed or brushed under the carpet.
• In any sufficiently demanding domain, different areas of knowledge must be unified, because reasoning and problem solving could involve several areas simultaneously. A robot circuit-repair system, for instance, needs to reason about circuits in terms of electrical connectivity and physical layout, and about time, both for circuit timing analysis and for estimating labor costs. The sentences describing time therefore must be capable of being combined with those describing spatial layout and must work equally well for nanoseconds and minutes and for angstroms and meters.
We should say up front that the enterprise of general ontological engineering has so far had
only limited success. None of the top AI applications (as listed in Chapter 1) make use of a
shared ontology—they all use special-purpose knowledge engineering. Social/political
considerations can make it difficult for competing parties to agree on an ontology. As Tom
Gruber (2004) says, “Every ontology is a treaty—a social agreement—among people with
some common motive in sharing.” When competing concerns outweigh the motivation for
sharing, there can be no common ontology. Those ontologies that do exist have been created
along four routes:
1. By a team of trained ontologists/logicians, who architect the ontology and write axioms.
The CYC system was mostly built this way (Lenat and Guha, 1990).
2. By importing categories, attributes, and values from an existing database or databases.
DBPEDIA was built by importing structured facts from Wikipedia (Bizer et al., 2007).
3. By parsing text documents and extracting information from them. TEXTRUNNER
was built by reading a large corpus of Web pages (Banko and Etzioni, 2008).
4. By enticing unskilled amateurs to enter commonsense knowledge. The OPENMIND
system was built by volunteers who proposed facts in English (Singh et al., 2002;
Chklovski and Gil, 2005).
