AI Note

Terminology

Artificial Intelligence refers to the phenomenon where a machine acts as a blueprint of the
human mind: it can understand, analyze, and learn from data through specially designed
algorithms. Artificially intelligent machines can remember human behavior patterns and
adapt to human preferences.
The major concepts closely related to AI are machine learning, deep learning, and natural
language processing (NLP).

Machine Learning (ML) involves teaching machines important concepts by example, using
big data that must be structured (in machine-readable form) for the machines to
understand. This is done by feeding them the right algorithms.
Deep Learning is a step beyond ML: it learns through representation, and the data does not
need to be structured for it to make sense of it. This is due to artificial neural networks,
which are inspired by the structure of the human brain.

Natural Language Processing (NLP) is a linguistic tool in computer science. It enables
machines to read and interpret human language. NLP allows automatic processing of
human-language data and enables two entities that speak different languages (computers
and humans) to interact.

Applications / Examples of Artificial Intelligence.

Following are applications of AI that we come across in daily life.

1. Google Maps and Ride-Hailing Applications

One doesn’t have to put much thought into traveling to a new destination anymore. Instead of
having to rely on confusing address directions, you can now simply open up the handy map
application on your phone and type in your destination.
So how does the application know the exact directions, the optimal route, and even road
barriers and traffic congestion? Not too long ago, only GPS (satellite-based navigation) was
used as guidance for commuting. But now, artificial intelligence is being incorporated to give
users a much more enhanced experience of their specific surroundings.
Via machine learning, the app's algorithm remembers the outlines of buildings that have been
fed into the system after staff have manually identified them. This allows clear visuals of
buildings to be added to the map. Another feature is the ability to recognize and understand
handwritten house numbers, which helps commuters reach the exact house they are looking
for. Places that lack formal street signs can also be identified by their outlines or
handwritten labels.
The application has been taught to understand and identify traffic. Thus, it recommends the
best route that avoids roadblocks and congestion. The AI-based algorithm also tells users
the estimated distance and arrival time, as it has been taught to calculate these from traffic
conditions. Users can also view pictures of their destinations before getting there.
By employing similar AI technology, various ride-hailing applications have also come into
existence.
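
To make the route-finding idea concrete, here is a minimal sketch (not Google's actual algorithm) of picking the fastest route on a toy road network with Dijkstra's algorithm; the road names and travel times are invented, and in a real system the edge weights would come from a traffic-prediction model.

import heapq

def fastest_route(graph, start, goal):
    # Dijkstra's algorithm over predicted travel times (minutes).
    # graph: {node: [(neighbor, minutes), ...]}
    queue = [(0, start, [start])]   # (elapsed time, node, path so far)
    seen = set()
    while queue:
        elapsed, node, path = heapq.heappop(queue)
        if node == goal:
            return elapsed, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, minutes in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (elapsed + minutes, neighbor, path + [neighbor]))
    return None

roads = {
    "Home":    [("Main St", 5), ("Ring Rd", 2)],
    "Main St": [("Mall", 10)],   # congested today
    "Ring Rd": [("Mall", 12)],
}
print(fastest_route(roads, "Home", "Mall"))  # (14, ['Home', 'Ring Rd', 'Mall'])

Because the weights are travel times rather than distances, re-running the search with updated predictions is what lets such an app reroute around congestion.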

2. Face Detection and Recognition


Using virtual filters on our face when taking pictures and using face ID for unlocking our phones
are two applications of AI that are now part of our daily lives. The former incorporates face
detection meaning any human face is identified. The latter uses face recognition through which
a specific face is recognised.
How does this work?
Intelligent machines often match – and sometimes even exceed – human capabilities.
Human babies start recognizing facial features such as eyes, nose, lips, and face shape
early in life. But that is not all there is to a face: a plethora of factors make human faces
unique. Smart machines are taught to identify facial coordinates (x, y, w, and h, which form
a box around the face as the area of interest), landmarks (eyes, nose, etc.), and alignment
(geometric structure). This takes the ability to recognize faces several notches beyond what
humans can do.
Face recognition is also used for surveillance and security by government facilities and at
airports. For example, Gatwick Airport in London uses face-recognition cameras for ID
checks before allowing passengers to board the plane.
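
As an illustration of the detection half, the following sketch uses OpenCV's classic Haar-cascade detector, which returns exactly the (x, y, w, h) boxes described above. It assumes the opencv-python package is installed; "photo.jpg" is a placeholder path for your own image.

import cv2  # pip install opencv-python

# Load a pre-trained frontal-face cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("photo.jpg")                  # placeholder image path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # detector works on grayscale
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                       # the (x, y, w, h) boxes
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", image)

Recognition (matching a detected face to a specific identity) requires a further step, typically comparing learned face embeddings.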

3. Text Editors or Autocorrect.

When you are typing out documents, there are built-in or downloadable auto-correcting tools
that check for spelling mistakes, grammar, readability, and plagiarism, depending on their
level of complexity.
It must have taken you a while to learn your language before you became fluent in it. Similarly, artificially
intelligent algorithms also use machine learning, deep learning, and natural language processing to
identify incorrect usage of language and suggest corrections.
Linguists and computer scientists work together to teach machines grammar, just like you were taught
at school. Machines are fed with copious amounts of high-quality language data, organized in such a
manner that machines can understand it. So when you use even a single comma incorrectly, the editor
will mark it red and prompt suggestions.
The next time you have a language editor check your document, know that you are using one of the
many examples of artificial intelligence.
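
As a toy illustration of the suggestion step, the sketch below ranks candidate corrections by string similarity against a small word list; the lexicon and cutoff are illustrative assumptions, and a real editor would combine a full dictionary with learned grammar models.

from difflib import get_close_matches

# A tiny stand-in lexicon; a real tool uses a full dictionary.
LEXICON = ["receive", "believe", "separate", "definitely", "grammar"]

def suggest(word, n=3):
    # Return up to n dictionary words most similar to the misspelling.
    return get_close_matches(word.lower(), LEXICON, n=n, cutoff=0.6)

print(suggest("recieve"))     # ['receive']
print(suggest("definately"))  # ['definitely']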

4. Search and Recommendation Algorithms


When you want to watch your favorite movies or listen to songs or perhaps shop online, have
you noticed that the items suggested to you are perfectly aligned with your interests? This is
the beauty of AI.
These smart recommendation systems learn your behavior and interests from your online
activities and offer you similar content. The personalized experience is made possible by
continuous training. The data is collected at the frontend (from the user), stored as big data
and analyzed through machine learning and deep learning. It is then able to predict your
preferences by recommendations that keep you entertained without having to search any
further.
Similarly, the optimized search engine experience is another example of artificial
intelligence. Usually, our top search results have the answer we’re looking for. How does that
happen?
Quality-control algorithms are fed with data so they can recognize high-quality content and
distinguish it from SEO-spammed, poor content. This helps rank search results in descending
order of quality for the best user experience.
As search engines are made up of code, natural language processing technology helps these
applications understand humans. In fact, they can also predict what a human wants to ask
by compiling top-ranked searches and predicting queries as users start to type.
New features such as voice search and image search are also constantly being programmed
into machines. If you want to find a song that is playing at a mall, you can simply hold your
phone up to it, and a music-identifying app will tell you what it is within seconds. After sifting
through the rich database of songs, the machine will also tell you all the details related to that
song.
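
A minimal sketch of the recommendation idea, assuming user tastes are stored as rating vectors: cosine similarity finds the user whose tastes most resemble yours, and their liked items can then be suggested to you. The profiles below are invented for illustration.

import math

def cosine(u, v):
    # Cosine similarity between two rating vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Ratings for [action, comedy, romance, sci-fi].
profiles = {
    "you":   [5, 1, 0, 4],
    "user2": [4, 0, 1, 5],   # similar tastes
    "user3": [0, 5, 4, 1],   # different tastes
}
scores = {name: cosine(profiles["you"], vec)
          for name, vec in profiles.items() if name != "you"}
print(max(scores, key=scores.get))  # 'user2': recommend what user2 liked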
5. Chatbots
As a customer, getting queries answered can be time-consuming. An artificially intelligent
solution to this is the use of algorithms to train machines to cater to customers via chatbots.
This enables machines to answer FAQs, and take and track orders.
Chatbots are taught to impersonate the conversational styles of customer representatives
through natural language processing (NLP). Advanced chatbots no longer require specific
formats of inputs (e.g. yes/no questions). They can answer complex questions requiring
detailed responses. They will give the impression of a customer representative when, in fact,
they are just another example of artificial intelligence.
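
Below is a deliberately simple keyword-matching sketch of an FAQ chatbot; the FAQ entries are invented, and production systems replace the substring test with NLP-based intent classification.

# Keyword matching stands in for the intent detection described above.
FAQ = {
    "hours":    "We are open 9am-6pm, Monday to Saturday.",
    "refund":   "Refunds are processed within 5 business days.",
    "shipping": "Standard shipping takes 3-5 days.",
}

def reply(message):
    text = message.lower()
    for keyword, answer in FAQ.items():
        if keyword in text:
            return answer
    return "Let me connect you to a human representative."

print(reply("What are your opening hours?"))  # hours answer
print(reply("How do I get a refund?"))        # refund answer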
6. Digital Assistants
When we have our hands full, we often resort to ordering digital assistants to perform tasks
on our behalf. When you are driving with a cup of coffee in one hand, you might ask the
assistant to call your mom. The assistant, for example, Siri will access your contacts, identify
the word “Mom”, and call the number.
Interestingly, Siri is old news, as it is an example of a lower-tier model that can only respond
when spoken to and not give complex answers. The latest digital assistants are well-versed
in human language and incorporate advanced NLP and ML. They understand complex
command inputs and give satisfactory outputs. They have adaptive capabilities that can
analyze your preferences, schedules, and habits. This allows them to systemize, organize and
plan things for you in the form of reminders, prompts and schedules.

7. Social Media
The advent of social media provided a new narrative to the world with excessive freedom of
speech. However, this brought some societal evils such as cybercrime, cyberbullying, and
hate speech. Various social media applications are using the support of AI to control these
problems and provide users with other entertaining features.
Social media, being a great example of artificial intelligence, also has the ability to understand
the sort of content a user resonates with and suggests similar content to them. The facial
recognition feature is also utilized in social media accounts, helping people tag their friends
through automatic suggestions. Smart filters can identify and automatically weed out spam or
unwanted messages. Smart replies are another feature users can enjoy.
Some future plans of the social media industry include using artificial intelligence to identify
mental health problems, such as suicidal tendencies, by analyzing the content a user posts and
consumes. Such cases could then be forwarded to mental health professionals.

8. E-Payments
Having to run to the bank for every transaction can be a hectic errand. Good news! Banks are
now leveraging artificial intelligence to facilitate customers by simplifying payment processes.
Artificial intelligence has made it possible to deposit cheques from the comfort of your home.
AI is proficient in deciphering handwriting, making online cheque processing practicable.
The way fraud can be detected by observing users’ credit card spending patterns is also an
example of artificial intelligence. For example, the algorithms know what kind of products User
X buys, when and from where they are bought, and in what price bracket they fall. When there
is some unusual activity that does not fit in with the user profile, the system instantly alerts
user X.
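
As a toy illustration of pattern-based fraud detection, the sketch below flags a transaction whose amount is a statistical outlier for the user; real systems model many more features (merchant, location, time of day), and the numbers are invented.

import statistics

def is_unusual(history, amount, threshold=3.0):
    # Flag an amount more than `threshold` standard deviations
    # away from this user's usual spending.
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z = (amount - mean) / stdev if stdev else 0.0
    return abs(z) > threshold

past = [25, 40, 18, 32, 27, 45, 30]   # User X's usual purchases
print(is_unusual(past, 35))    # False: fits the profile
print(is_unusual(past, 900))   # True: alert the user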

Importance of AI
Artificial intelligence could be used to do lots of impressive tasks and jobs. AI can help designers
and artists make quick tweaks to visuals. AI can also help researchers identify “fake” images or
connect touch and sense. AI is being used to program websites and apps by combining symbolic
reasoning and deep learning. Basically, artificial intelligence goes beyond deep learning. Here are
five reasons why AI is important.

Artificial Intelligence will create new jobs.


It is no news that AI will replace repetitive jobs. It literally means that these kinds of jobs will
be automated, as robots are already doing in factories, rendering the humans who used to do
those tasks practically jobless.
And it goes further than that – many "white collar" tasks in the fields of law, hospitality, marketing,
healthcare, accounting, and others are adversely affected. The situation seems scary
because scientists are only scratching the surface of extensive research and development in AI.
AI is advancing rapidly (and it is becoming more accessible to everybody).
The great news about AI is that it can create new jobs.
Some believe that AI can create even more new jobs than ever before. According to this school of
thought, AI will be the most significant job engine the world has ever seen. Artificial intelligence
will eliminate low-skilled jobs and effectively create massive high-skilled job
opportunities that will span all sectors of the economy.
For example, if AI becomes fully adept at language translation, it will create considerable
demand for high-skilled human translators. If the cost of basic translations drops to nearly
zero, more companies that need this service will be encouraged to expand their business
operations abroad. This will inevitably create more work for high-skilled translators and boost
economic activity, and more people will be employed by these companies due to the increased
workload.
Boosting international trade is one of the most significant benefits of our "global" times. So yes, AI
will eliminate some jobs, but it will create many, many more.

Artificial Intelligence will improve healthcare.

AI can be used extensively in the healthcare industry. It is applicable in automated
operations, predictive diagnostics, preventive interventions, precision surgery, and a
host of other clinical operations. Some individuals predict that AI will completely reshape
the healthcare landscape for the better.
AI is revolutionizing how the health sector works by reducing spending and improving patient
outcomes.
And here are some of the applications of artificial intelligence in healthcare:

• Doing repetitive jobs.
• Managing medical records and other data.
• Digital consultation.
• Treatment design.
• Medical management.
• Virtual nurses.
• Precision medicine.
• Drug creation, and a myriad of other uses of AI.

Artificial Intelligence will revolutionize agriculture.

AI is also used extensively in the agriculture industry. Robots can be used to plant seeds,
fertilize crops, and administer pesticides, among many other uses. Farmers can use drones to
monitor the cultivation of crops and collect data for analysis.
How is this value-added data used to increase the final output? The collected data is analyzed
by AI against variables such as crop health and soil conditions, boosting final production, and it
can also be used in harvesting, especially for crops that are difficult to gather.

Artificial Intelligence will eliminate the need for you to perform tedious tasks.
AI is changing the workplace, and there are plenty of reasons to be optimistic. It is used to do lots
of tedious and lengthy tasks, especially the low-skilled, labor-intensive types of jobs. This
means employees can be retasked away from boring jobs, bringing significant and positive
change to the workplace.
For instance, artificial intelligence is used in the automotive industry to do repetitive tasks such
as routine operations on the assembly line. Allowing a robot to take care of, well, robotic
tasks has created a shift in the workforce.

AI is used to increase auto safety and decrease traffic complications.

Auto accidents are one of the most common types of accidents in America, killing
thousands of people annually. A whopping 95 percent of these accidents are caused by human
error, meaning most accidents are avoidable.
The number of accidents should fall as artificial intelligence is introduced into the
industry through self-driving cars. Ongoing research in the auto industry is looking at ways
AI can be used to improve traffic conditions. Smart systems are already in place in many cities
to analyze and control traffic lights at intersections. Avoiding congestion leads to safer
movement of vehicles, bicycles, and pedestrians.

Conclusion

Artificial intelligence is useful in all industries, and more research is being done to advance it.
The advancements in AI technology will be most useful if it is understood and trusted. An important
point is that artificial intelligence and related technologies such as drones, robots, and
autonomous vehicles could create tens of millions of jobs over the next decade.
What is knowledge?

Knowledge is the body of facts and principles. Knowledge can be language, concepts, procedures,
rules, ideas, abstractions, places, customs, and so on. The study of knowledge is called epistemology.

What is knowledge representation?

Humans are best at understanding, reasoning, and interpreting knowledge. A human knows things
– that is knowledge – and acts in the real world according to that knowledge. How machines
can do the same comes under knowledge representation and reasoning.

Knowledge representation is the study of how knowledge can be pictured and how closely such a
representation resembles the representation of knowledge in the human brain.

Hence we can describe knowledge representation as follows:

o Knowledge representation and reasoning (KR, KRR) is the part of artificial intelligence
concerned with how AI agents think and how thinking contributes to their intelligent behavior.
o It is responsible for representing information about the real world so that a computer can
understand it and utilize this knowledge to solve complex real-world problems, such as
diagnosing a medical condition or communicating with humans in natural language.
o It also describes how we can represent knowledge in artificial intelligence.
Knowledge representation is not just storing data in a database; it also enables an
intelligent machine to learn from that knowledge and those experiences so that it can behave
intelligently like a human.

A knowledge representation system should provide a way of representing complex knowledge and
should possess the following characteristics:
• The representation should have well-defined syntax and semantics.
• The knowledge representation scheme should have good expressive capacity.
• The representation must be efficient. That is, it should use only limited resources without
compromising on expressive power.

What to Represent:
Following are the kinds of knowledge which need to be represented in AI systems:
o Objects: All the facts about objects in our world domain. E.g., guitars contain strings;
trumpets are brass instruments.
o Events: Events are the actions which occur in our world.
o Performance: It describes behavior which involves knowledge about how to do things.
o Meta-knowledge: It is knowledge about what we know.
o Facts: Facts are the truths about the real world and what we represent.
o Knowledge-Base: A knowledge base is an organized collection of facts about the system's
domain. An inference engine interprets and evaluates the facts in the knowledge base in
order to provide an answer.

Knowledge: Knowledge is awareness or familiarity gained by experience of facts, data, and situations.

Types of knowledge

Following are the various types of knowledge:

1. Declarative Knowledge:
o Declarative knowledge is knowing about something; it is passive knowledge.
o It includes concepts, facts, and objects.
o It is also called descriptive knowledge and is expressed in declarative sentences.
o It is simpler than procedural knowledge.
o Example: the mark statement of a student.
2. Procedural Knowledge
o It is also known as imperative knowledge.
o Procedural knowledge is a type of knowledge which is responsible for knowing how to do
something.
o It can be directly applied to any task.
o It includes rules, strategies, procedures, agendas, etc.
o Procedural knowledge depends on the task on which it can be applied.
3. Meta-knowledge:
o Knowledge about the other types of knowledge is called Meta-knowledge.
4. Heuristic knowledge:
o Heuristic knowledge represents the knowledge of experts in a field or subject.
o Heuristic knowledge is based on previous experience and awareness of approaches that
tend to work well but are not guaranteed.
o Heuristics are rules or tricks used to make judgements and to simplify the solution of
problems. They are acquired through experience: an expert uses knowledge gathered
through experience and learning.
5. Structural knowledge:
o Structural knowledge is knowledge basic to problem-solving.
o It describes the relationships that exist between concepts or objects, such as kind-of,
part-of, and grouping of something.

Knowledge-Based Systems (KBS)

A knowledge-based system (KBS) is a form of artificial intelligence (AI) that aims to capture the
knowledge of human experts to support decision-making. Examples of knowledge-based systems
include expert systems, which are so called because of their reliance on human expertise.

The typical architecture of a knowledge-based system, which informs its problem-solving method,
includes a knowledge base and an inference engine. The knowledge base contains a collection of
information in a given field -- medical diagnosis, for example. The inference engine deduces insights
from the information housed in the knowledge base. Knowledge-based systems also include an
interface through which users query the system and interact with it.

A knowledge-based system may vary with respect to its problem-solving method or approach. Some
systems encode expert knowledge as rules and are therefore referred to as rule-based systems.
Another approach, case-based reasoning, substitutes cases for rules. Cases are essentially solutions to
existing problems that a case-based system will attempt to apply to a new problem.
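
To make the case-based approach concrete, here is a minimal sketch: retrieve the stored case most similar to a new problem and reuse its solution. The cases and the feature-matching similarity measure are invented for illustration.

# Each case pairs a problem description with a known solution.
cases = [
    ({"fever": 1, "cough": 1, "rash": 0}, "suspect flu"),
    ({"fever": 0, "cough": 0, "rash": 1}, "suspect allergy"),
    ({"fever": 1, "cough": 0, "rash": 1}, "suspect measles"),
]

def similarity(a, b):
    # Count the features on which two problem descriptions agree.
    return sum(1 for k in a if a[k] == b.get(k))

def solve(problem):
    # Reuse the solution of the most similar stored case
    # (ties broken by storage order).
    best = max(cases, key=lambda case: similarity(problem, case[0]))
    return best[1]

print(solve({"fever": 1, "cough": 1, "rash": 1}))  # 'suspect flu'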
Techniques of knowledge representation
There are mainly four ways of knowledge representation which are given as follows:

1. Logical Representation
2. Semantic Network Representation
3. Frame Representation
4. Production Rules / Scripts

1. Logical Representation
Logical representation is a language with some concrete rules which deals with
propositions and has no ambiguity in representation. Logical representation means
drawing conclusions based on various conditions. This representation lays down some
important communication rules. It consists of precisely defined syntax and semantics
which support sound inference. Each sentence can be translated into logic using
syntax and semantics.
Syntax:
o Syntax comprises the rules which decide how we can construct legal sentences in the
logic.
o It determines which symbols we can use in knowledge representation and how to
write those symbols.
Semantics:
o Semantics comprises the rules by which we can interpret sentences in the logic.
o Semantics also involves assigning a meaning to each sentence.
Logical representation can be categorised into two main logics:
a. Propositional logic
b. Predicate logic
Note: We will discuss propositional logic and predicate logic in later chapters.
Advantages of logical representation:
1. Logical representation enables us to do logical reasoning.
2. Logical representation is the basis of programming languages.
Disadvantages of logical representation:
1. Logical representations have some restrictions and are challenging to work with.
2. The logical representation technique may not be very natural, and inference may not
be very efficient.
Note: Do not confuse logical representation with logical reasoning: logical representation is a
representation language, while reasoning is the process of thinking logically.

2. Semantic Network Representation


Semantic networks are an alternative to predicate logic for knowledge representation.
In semantic networks, we can represent our knowledge in the form of graphical
networks. Such a network consists of nodes representing objects and arcs which
describe the relationships between those objects. Semantic networks can
categorize objects in different forms and can also link those objects. Semantic
networks are easy to understand and can be easily extended.
This representation consists of mainly two types of relations:
a. IS-A relation (inheritance)
b. KIND-OF relation
Example: Following are some statements which we need to represent in the form
of nodes and arcs.
Statements:
a. Jerry is a cat.
b. Jerry is a mammal
c. Jerry is owned by Priya.
d. Jerry is white colored.
e. All Mammals are animal.
Representing these statements in the form of nodes and arcs, each object is connected
to other objects by some relation (e.g., Jerry –IS-A→ Cat –IS-A→ Mammal –IS-A→ Animal).
Drawbacks in Semantic representation:
1. Semantic networks take more computational time at runtime, as we may need to
traverse the complete network to answer a question. In the worst case, after
traversing the entire network, we may find that the solution does not exist in the
network.
2. Semantic networks try to model human-like memory (which has on the order of
10^15 neurons and links) to store information, but in practice it is not possible
to build such a vast semantic network.
3. These types of representations are inadequate as they do not have any
equivalent quantifier, e.g., for all, for some, none, etc.
4. Semantic networks do not have any standard definition for the link names.
5. These networks are not intelligent and depend on the creator of the system.
Advantages of Semantic network:
1. Semantic networks are a natural representation of knowledge.
2. Semantic networks convey meaning in a transparent manner.
3. These networks are simple and easily understandable.
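
The Jerry network above can be sketched as a dictionary of labelled arcs; following IS-A arcs upward implements inheritance, and also shows why answering a query may require walking the network.

# Nodes and labelled arcs for the Jerry example.
network = {
    "Jerry":  {"is-a": "Cat", "owned-by": "Priya", "colour": "White"},
    "Cat":    {"is-a": "Mammal"},
    "Mammal": {"is-a": "Animal"},
}

def is_a(node, category):
    # Follow IS-A arcs upward to test inheritance.
    while node is not None:
        if node == category:
            return True
        node = network.get(node, {}).get("is-a")
    return False

print(is_a("Jerry", "Animal"))  # True: Jerry -> Cat -> Mammal -> Animal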

3. Frame Representation
A frame is a record-like structure which consists of a collection of attributes and their
values to describe an entity in the world. Frames are the AI data structure which divides
knowledge into substructures by representing stereotyped situations. A frame consists of a
collection of slots and slot values. These slots may be of any type and size; slots have
names and values, and the properties attached to a slot are called facets.

Slot      Filler

Title     Artificial Intelligence
Genre     Computer Science
Author    Peter Norvig
Edition   Third Edition
Year      1996
Pages     1152

Facets: The various aspects of a slot are known as facets. Facets are features of frames
which enable us to put constraints on frames. Example: IF-NEEDED facets are called
when the data of a particular slot is needed. A frame may consist of any number of slots,
a slot may include any number of facets, and a facet may have any number of values.
A frame is also known as slot-filler knowledge representation in artificial
intelligence.

Frames are derived from semantic networks and later evolved into our modern-day
classes and objects. A single frame is not very useful on its own; a frame system consists
of a collection of connected frames. In a frame, knowledge about an object or event can
be stored together in the knowledge base. The frame is a type of technology which is
widely used in various applications, including natural language processing and machine
vision.

Example 1:
The table above shows a frame for a book.

Example 2:
Let's suppose we are taking an entity, Peter. Peter is an engineer by profession, his age
is 25, he lives in the city of London, and the country is England. The following is a frame
representation for this:

Slot            Filler

Name            Peter
Profession      Engineer
Age             25
City            London
Country         England
Marital status  Single
Weight          78
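
Since frames evolved into modern records and objects, a frame can be sketched as nested dictionaries, one inner dictionary of facets per slot; the default facet below shows how a missing value can be filled.

# The book frame from Example 1 as slot -> facets mappings.
book_frame = {
    "Title":   {"value": "Artificial Intelligence"},
    "Genre":   {"value": "Computer Science"},
    "Author":  {"value": "Peter Norvig"},
    "Edition": {"value": "Third Edition"},
    "Year":    {"value": 1996, "default": "unknown"},
}

def get_slot(frame, slot):
    # Return the slot's value, falling back to its default facet.
    facets = frame.get(slot, {})
    return facets.get("value", facets.get("default"))

print(get_slot(book_frame, "Author"))  # 'Peter Norvig'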

Advantages of frame representation:


1. The frame knowledge representation makes programming easier by grouping
related data.
2. The frame representation is comparatively flexible and is used by many
applications in AI.
3. It is very easy to add slots for new attributes and relations.
4. It is easy to include default data and to search for missing values.
5. Frame representation is easy to understand and visualize.
Disadvantages of frame representation:
1. In a frame system, the inference mechanism cannot be easily processed.
2. Inference cannot be smoothly carried out by frame representation.
3. Frame representation takes a very generalized approach.
4. Production Rules (or) Scripts

A production rule system consists of (condition, action) pairs, meaning "if
condition then action". It has mainly three parts:
o The set of production rules
o Working memory
o The recognize-act cycle
The agent checks for the condition, and if the condition holds then the
production rule fires and the corresponding action is carried out. The condition
part of a rule determines which rule may be applied to a problem, and the action
part carries out the associated problem-solving steps. This complete process is
called a recognize-act cycle.
The working memory contains the description of the current state of problem-
solving, and rules can write knowledge to the working memory. This knowledge
may match and fire other rules.
If a new situation (state) is generated, multiple production rules may be eligible
to fire together; this set of rules is called the conflict set. The agent must then
select one rule from this set, which is called conflict resolution. An example rule
set and a sketch of the recognize-act cycle follow.
Example:
o IF (at bus stop AND bus arrives) THEN action (get into the bus)
o IF (on the bus AND paid AND empty seat) THEN action (sit down).
o IF (on bus AND unpaid) THEN action (pay charges).
o IF (bus arrives at destination) THEN action (get down from the
bus).
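
The bus rules above can be run by a minimal recognize-act cycle; this sketch uses "fire the first matching rule" as its conflict-resolution strategy, which is only one of many possibilities.

# Working memory holds the current state; each rule is a
# (condition set, action) pair tested in a recognize-act cycle.
memory = {"at bus stop", "bus arrives"}

rules = [
    ({"at bus stop", "bus arrives"}, "get into the bus"),
    ({"on the bus", "paid", "empty seat"}, "sit down"),
    ({"on the bus", "unpaid"}, "pay charges"),
]

def recognize_act(memory):
    # Recognize: collect every rule whose condition is satisfied.
    conflict_set = [action for cond, action in rules if cond <= memory]
    # Conflict resolution: here, simply fire the first match.
    return conflict_set[0] if conflict_set else None

print(recognize_act(memory))  # 'get into the bus'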
Advantages of Production rule:
1. The production rules are expressed in natural language.
2. The production rules are highly modular, so we can easily remove, add or
modify an individual rule.
Disadvantages of production rules:
1. A production rule system does not exhibit any learning capability, as it does
not store the results of problems for future use.
2. During the execution of a program, many rules may be active; hence rule-
based production systems can be inefficient.

The Knowledge Organization System (KOS)

The Knowledge Organization System (KOS) provides a framework for the different
classification schemes used to organize knowledge. Examples of KOSs are library
classifications, taxonomies, subject headings, thesauri, ontologies, etc. The KOS is a
cornerstone of knowledge organization tools.

Knowledge organization techniques are used to build KOSs. These techniques
outline principles to build, manage, and visualize a KOS. Knowledge organization
systems show a simplified view of the concepts of a domain. The goal is to provide
a way to improve the understanding and the management of a field of knowledge.

Because a variety of disciplines need them to facilitate understanding, KO systems
are present in a wide range of fields of knowledge. There are examples of KOSs in
e-learning, artificial intelligence, software engineering, and information science.
Each of these fields gives KO systems one or more different names, and each KOS
has its own design characteristics according to its specific goals. In this manner,
e-learning talks about mind maps and concept maps; artificial intelligence
addresses ontologies and semantic networks; software engineering talks about
UML diagrams; information science uses thesauri, subject headings, library
classifications, etc. Although each approach has a different semantic structure,
depending on its goals, all of them develop and maintain a domain vocabulary to
represent concepts and the semantic relationships between these concepts.

The construction of a KOS requires a high level of intellectual effort to reach an
agreement about the representation. This involves analyzing the domains to
extract the main concepts and relationships, and reconciling these analyses in
order to develop a shared representation. The process can be simplified by using a
suitable methodology and the software applications that have been developed to
facilitate this work. Examples can be found in software engineering and ontology
engineering.

One of the main bottlenecks is knowledge acquisition. This phase tries to identify
the main concepts, by looking at different information sources and seeking the
advice of domain experts. The next step is conceptualization, by structuring the
domain. This means analyzing terminology, synonyms and hierarchical and
associative structures. Also, it is important to identify the constraints of each
relation or attribute.

Knowledge Organization and Manipulation

The organization of knowledge in memory is key to how efficiently a knowledge-based
system performs its intended tasks. The facts and rules should be easy to locate and
retrieve; otherwise, much time is wasted searching and testing large numbers of items
in memory.
Knowledge can be organized in memory for easy access by a method known as
indexing. As a result, the search for a specific chunk of knowledge is limited to the
indexed group only; a sketch is given at the end of this section.
Knowledge Manipulation

Decisions and actions in knowledge-based systems come from manipulation of the
knowledge: the known facts in the knowledge base must be located, compared, and
altered in some way. This process may set up other subgoals and require further
inputs, and so on, until a final solution is found.
The manipulations are the computational equivalent of reasoning. This requires a
form of inference or deduction, using the knowledge and inference rules.
All forms of reasoning require a certain amount of searching and matching. In fact,
the searching and matching operations consume the greatest amount of computation
time in AI systems. It is therefore important to have techniques that limit the amount
of search and matching required to complete any given task.
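
A minimal sketch of indexing, assuming facts are stored as (predicate, arg1, arg2) tuples: grouping the facts by predicate means a query searches only one group instead of the whole knowledge base.

from collections import defaultdict

facts = [("father", "John", "Bill"),
         ("father", "Bill", "Tom"),
         ("likes", "Tom", "cricket")]

# Index facts by predicate so a query touches only one group.
index = defaultdict(list)
for fact in facts:
    index[fact[0]].append(fact)

print(index["father"])  # only the two 'father' facts are searched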

Knowledge acquisition

Knowledge acquisition is a method of learning, first proposed by Aristotle. Aristotle
proposed that the mind at birth is a blank slate, or tabula rasa. As a blank slate, it
contains no knowledge of the objective, empirical universe, nor of itself.
Knowledge acquisition is the process of extracting, structuring, and organizing
knowledge from a source, usually human experts, so it can be used in software
such as an expert system. This is often the major obstacle in building an expert
system. As a method, it is opposed to the concept of "a priori" knowledge (a priori
knowledge is that which is independent of experience, for example mathematics,
tautologies, and deduction from pure reason; a posteriori knowledge is that which
depends on empirical evidence, for example most fields of science and aspects of
personal knowledge) and to "intuition" when conceived as religious revelation. It
has been suggested that the mind is "hard-wired" to begin operating at birth,
beginning a lifetime process of acquisition through abstraction, induction, and
conception.
First-Order Logic or Predicate Logic

Predicate logic deals with predicates, which are propositions containing variables.

Predicate Logic - Definition

A predicate is an expression of one or more variables defined over some specific
domain. A predicate with variables can be made a proposition either by assigning
a value to the variable or by quantifying the variable.

In propositional logic, we can only represent facts, which are either true or
false. PL is not sufficient to represent complex sentences or natural language
statements; it has very limited expressive power. Consider the following
sentences, which we cannot represent using PL:

o "Some humans are intelligent", or


o "Sachin likes cricket."

To represent the above statements, PL is not sufficient, so we require a more
powerful logic, such as first-order logic.

First-Order logic:

o First-order logic is another way of knowledge representation in artificial
intelligence. It is an extension of propositional logic.
o FOL is sufficiently expressive to represent natural language statements in
a concise way.
o First-order logic is also known as predicate logic or first-order predicate
logic. First-order logic is a powerful language that expresses information
about objects in an easier way and can also express the relationships
between those objects.
o First-order logic (like natural language) does not only assume that the world
contains facts, as propositional logic does, but also assumes the following things in
the world:
o Objects: A, B, people, numbers, colors, wars, theories, ...
o Relations: unary relations such as red, round, is adjacent, or
n-ary relations such as the sister of, brother of, has color,
comes between
o Functions: father of, best friend, third inning of, end of, ...
o As in a natural language, first-order logic also has two main parts:
o Syntax
o Semantics

Syntax:
Syntax has to do with what 'things' (symbols, notations) one is allowed to use in the
language and in what way. There is/are a(n):

• Alphabet
• Language constructs
• Sentences to assert knowledge

The syntax of FOL determines which collections of symbols are logical expressions in
first-order logic. The basic syntactic elements of first-order logic are symbols. We
write statements in shorthand notation in FOL.
Semantics:

Semantics has to do with what the sentences built from that alphabet and those
constructs are supposed to mean.
Atomic sentences:
o Atomic sentences are the most basic sentences of first-order logic. These
sentences are formed from a predicate symbol followed by a parenthesis with
a sequence of terms.
o We can represent atomic sentences as Predicate(term1, term2, ..., termN).

Example: Ravi and Ajay are brothers: => Brothers(Ravi, Ajay).
Chinky is a cat: => cat(Chinky).

Complex Sentences:
o Complex sentences are made by combining atomic sentences using
connectives.

First-order logic statements can be divided into two parts:

o Subject: Subject is the main part of the statement.


o Predicate: A predicate can be defined as a relation, which binds two atoms
together in a statement.

Consider the statement "x is an integer." It consists of two parts: the first part,
x, is the subject of the statement, and the second part, "is an integer," is known
as the predicate.
Quantifiers in First-order logic:

o A quantifier is a language element which generates quantification, and
quantification specifies the quantity of specimens in the universe of discourse.
o Quantifiers are the symbols that permit us to determine or identify the range
and scope of a variable in a logical expression. There are two types of
quantifier:
1. Universal quantifier (for all, everyone, everything)
2. Existential quantifier (for some, at least one)

Universal Quantifier:
Universal quantifier is a symbol of logical representation, which specifies that the
statement within its range is true for everything or every instance of a particular
thing.

The universal quantifier is represented by the symbol ∀, which resembles an
inverted A.

Note: In universal quantifier we use implication "→".

If x is a variable, then ∀x is read as:

o For all x
o For each x
o For every x.

Example:
All men drink coffee.

Let x be a variable that refers to a man; then the statement can be represented
as below:

∀x man(x) → drink(x, coffee).

It is read as: for all x, if x is a man, then x drinks coffee.

Existential Quantifier:
Existential quantifiers are the type of quantifiers, which express that the statement
within its scope is true for at least one instance of something.

It is denoted by the logical operator ∃, which resembles an inverted E. When it is
used with a predicate variable, it is called an existential quantifier.

Note: In Existential quantifier we always use AND or Conjunction symbol (∧).

If x is a variable, then the existential quantifier will be ∃x or ∃(x), and it will
be read as:

o There exists an 'x.'
o For some 'x.'
o For at least one 'x.'

Example:
Some boys are intelligent.

We can write this as

∃x boys(x) ∧ intelligent(x)

It is read as: there is some x such that x is a boy and x is intelligent.

Points to remember:

o The main connective for the universal quantifier ∀ is implication →.
o The main connective for the existential quantifier ∃ is conjunction ∧.

Properties of Quantifiers:

o For the universal quantifier, ∀x∀y is equivalent to ∀y∀x.
o For the existential quantifier, ∃x∃y is equivalent to ∃y∃x.
o ∃x∀y is not equivalent to ∀y∃x.
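
Over a finite domain, quantified statements (and the ordering property above) can be checked directly in code, with ∀ mapping to all() and ∃ to any():

domain = [1, 2, 3, 4]

print(all(x < 10 for x in domain))      # ∀x (x < 10)    -> True
print(any(x % 2 == 0 for x in domain))  # ∃x (x is even) -> True

# ∀x∃y (y ≠ x) is true, but ∃y∀x (y ≠ x) is false: order matters.
print(all(any(y != x for y in domain) for x in domain))  # True
print(any(all(y != x for x in domain) for y in domain))  # False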

Some Examples of FOL using quantifiers:

1. All birds fly.

Here the predicate is "fly(bird)."
Since all birds fly, this is represented as follows:
∀x bird(x) → fly(x).

2. Every man respects his parent.

Here the predicate is "respects(x, y)," where x = man and y = parent.
Since the statement covers every man, we use ∀, and it is represented as follows:
∀x man(x) → respects(x, parent).

3. Some boys play cricket.

Here the predicate is "play(x, y)," where x = boys and y = game. Since there are
some boys, we use ∃ (and, per the rule above, the connective ∧):
∃x boys(x) ∧ play(x, cricket).

4. Not all students like both Mathematics and Science.

Here the predicate is "like(x, y)," where x = student and y = subject.
Since the statement says not all students, we use ∀ with negation, giving the
following representation:
¬∀x [student(x) → like(x, Mathematics) ∧ like(x, Science)].

5. As an example of translating natural language to first-order logic, consider
the following three sentences:
– "Each animal is an organism"
– "All animals are organisms"
– "If it is an animal then it is an organism"
All three can be formalised as:
∀x (Animal(x) → Organism(x))

6. “There are books that are heavy”


Written as:

∃x(Book(x)∧heavy(x))

Free and Bound Variables:


The quantifiers interact with variables which appear in a suitable way. There are two
types of variables in First-order logic which are given below:

Free Variable: A variable is said to be a free variable in a formula if it occurs
outside the scope of any quantifier binding it.

Example: ∀x∃y [P(x, y, z)], where z is a free variable.

Bound Variable: A variable is said to be a bound variable in a formula if it occurs
within the scope of a quantifier binding it.

Example: ∀x∀y [A(x) ∧ B(y)]; here x and y are bound variables.

Properties of WFFs

The evaluation of complex formulas in FOPL can often be facilitated through the
substitution of equivalent formulas (see the equivalence laws listed under
propositional logic later in these notes).

Conversion to CLAUSAL form


The predicate calculus uses the following types of symbols:
Constants: A constant symbol denotes a particular entity. E.g. John,
Muriel, 1.
Functions: A function symbol denotes a mapping from a number of
entities to a single entity. E.g. fatherOf is a function with one argument;
Plus is a function with two arguments. fatherOf(John) is some person;
plus(2,7) is some number.
Predicates: A predicate denotes a relation on a number of entities. E.g.
Married is a predicate with two arguments; Odd is a predicate with one
argument. Married(John, Sue) is a sentence that is true if the relation of
marriage holds between the people John and Sue. Odd(Plus(2,7)) is a true
sentence.
Variables: These represent some undetermined entity. Examples: x, s1,
etc.
Boolean operators: ¬, ∨, ∧, ⇒, ⇔.
Quantifiers: The symbols ∀ (for all) and ∃ (there exists).
Grouping symbols: The open and close parentheses and the comma.
A term is either
1. A constant symbol; or
2. A variable symbol; or
3. A function symbol applied to terms.
Examples: John, x, fatherOf(John), plus(x,Plus(1,3)).

An atomic formula is a predicate symbol applied to terms.
Examples: Odd(x). Odd(plus(2,2)). Married(Sue,fatherOf(John)).

A formula is either
1. An atomic formula; or
2. The application of a Boolean operator to formulas; or
3. A quantifier followed by a variable followed by a formula.
Examples: Odd(x). Odd(x) ∨ ¬Odd(Plus(x,x)). ∃x Odd(Plus(x,y)).
∀x Odd(x) ⇒ ¬Odd(Plus(x,3)).

A sentence is a formula with no free variables. (That is, every occurrence of
every variable is associated with some quantifier.)
Clausal Form
A literal is either an atomic formula or the negation of an atomic
formula.
Examples: Odd(3). ¬Odd(Plus(x,3)). Married(Sue,y).

A clause is a disjunction of literals. Variables in a clause are interpreted
as universally quantified with the largest possible scope.
Example: Odd(x) ∨ Odd(y) ∨ ¬Odd(Plus(x,y)) is interpreted as
∀x,y Odd(x) ∨ Odd(y) ∨ ¬Odd(Plus(x,y)).
Converting a sentence to clausal form
1. Replace every occurrence of A⇔B by (A⇒B) ∧ (B⇒A). When this is
complete, the sentence will have no occurrence of ⇔.
2. Replace every occurrence of A⇒B by ¬A ∨ B. When this is complete,
the only Boolean operators will be ∨, ¬, and ∧.
3. Replace every occurrence of ¬(A ∨ B) by ¬A ∧ ¬B; every occurrence
of ¬(A ∧ B) by ¬A ∨ ¬B; and every occurrence of ¬¬A by A.
New step: Replace every occurrence of ¬∃x f(x) by ∀x ¬f(x) and every
occurrence of ¬∀x f(x) by ∃x ¬f(x).
Repeat as long as applicable. When this is done, all negations will be
next to an atomic sentence.
4. (New Step: Skolemization). For every existential quantifier ∃x in
the formula, do the following:
If the existential quantifier is not inside the scope of any universal
quantifiers, then
i. Create a new constant symbol γ.
ii. Replace every occurrence of the variable x by γ.
iii. Drop the existential quantifier.
If the existential quantifier is inside the scope of universal quantifiers
with variables Δ1 ... Δk, then
i. Create a new function symbol γ.
ii. Replace every occurrence of the variable x by the term γ(Δ1 ... Δk).
iii. Drop the existential quantifier.
Example. Change ∃x Blue(x) to Blue(Sk1).
Change ∀x∃y Odd(Plus(x,y)) to ∀x Odd(Plus(x,Sk2(x))).
Change ∀x,y∃z∀a∃b P(x,y,z,a,b) to P(x,y,Sk3(x,y),a,Sk4(x,y,a)).
5. New step: Elimination of universal quantifiers:
Part 1. Make sure that each universal quantifier in the formula uses a
variable with a different name, by changing variable names if
necessary.
Part 2. Drop all universal quantifiers.
Example. Change [∀x P(x)] ∨ [∀x Q(x)] to P(x) ∨ Q(x1).
6. (Same as step 4 of CNF conversion.) Replace every occurrence of
(A ∧ B) ∨ C by (A ∨ C) ∧ (B ∨ C), and every occurrence of A ∨ (B ∧ C)
by (A ∨ B) ∧ (A ∨ C).
Repeat as long as applicable. When this is done, all conjunctions will
be at top level.
7. (Same as step 5 of CNF conversion.) Break up the top-level
conjunctions into separate sentences. That is, replace A ∧ B by the
two sentences A and B. When this is done, the set will be in CNF.

Example:
Start. ∀x [Even(x) ⇔ [∀y Even(Times(x,y))]]
After Step 1: ∀x [[Even(x) ⇒ [∀y Even(Times(x,y))]] ∧
[[∀y Even(Times(x,y))] ⇒ Even(x)]].
After step 2: ∀x [[¬Even(x) ∨ [∀y Even(Times(x,y))]] ∧
[¬[∀y Even(Times(x,y))] ∨ Even(x)]].
After step 3: ∀x [[¬Even(x) ∨ [∀y Even(Times(x,y))]] ∧
[[∃y ¬Even(Times(x,y))] ∨ Even(x)]].
After step 4: ∀x [[¬Even(x) ∨ [∀y Even(Times(x,y))]] ∧
[¬Even(Times(x,Sk1(x))) ∨ Even(x)]].
After step 5: [¬Even(x) ∨ Even(Times(x,y))] ∧
[¬Even(Times(x,Sk1(x))) ∨ Even(x)].
Step 6 has no effect.
After step 7: ¬Even(x) ∨ Even(Times(x,y)).

¬Even(Times(x,Sk1(x))) ∨ Even(x)
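
The propositional core of this procedure (steps 1-3 and 6-7, without the quantifier handling) is ordinary CNF conversion, which the sympy library can perform directly; the snippet below is offered as a cross-check rather than a full first-order converter.

# pip install sympy
from sympy.abc import A, B
from sympy.logic.boolalg import Equivalent, to_cnf

# to_cnf eliminates <=> and =>, pushes negations inward, and
# distributes OR over AND, all in one call.
print(to_cnf(~(A | B)))           # ~A & ~B  (De Morgan, as in step 3)
print(to_cnf(Equivalent(A, B)))   # e.g. (A | ~B) & (B | ~A)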
Logic
One of the prime activities of human intelligence is reasoning. The activity of reasoning
involves construction, organization and manipulation of statements to arrive at new
conclusions. Thus, logic can be defined as a scientific study of the process of reasoning and
system of rules and procedures that help in the reasoning process.

Basically, the logic process takes some information (called premises) and produces
some outputs (called conclusions). Logic is basically classified into two categories,
propositional logic and predicate logic.

Propositional logic in Artificial intelligence

Propositional logic (PL) is the simplest form of logic where all the statements are made by
propositions. A proposition is a declarative statement which is either true or false. It is a
technique of knowledge representation in logical and mathematical form.

Example:

a) It is Sunday.
b) The Sun rises from the West. (false proposition)
c) 3 + 3 = 7. (false proposition)
d) 5 is a prime number.

There are two types of propositions: atomic propositions (simple propositions)
and molecular propositions (compound propositions).

Atomic Proposition: Atomic propositions are simple propositions consisting of a single
proposition symbol. These are sentences which must be either true or false.

Example:

a) "2 + 2 is 4" is an atomic proposition, as it is a true fact.
b) "The Sun is cold" is also a proposition, as it is a false fact.

Compound proposition: Compound propositions are constructed by combining simpler or
atomic propositions, using parentheses and logical connectives.

Example:

a) "It is raining today, and the street is wet."
b) "Ankit is a doctor, and his clinic is in Mumbai."

Following are some basic facts about propositional logic:

o Propositional logic is also called Boolean logic, as it works on 0 and 1.
o In propositional logic, we use symbolic variables to represent the logic, and we can
use any symbol to represent a proposition, such as A, B, C, P, Q, R, etc.
o A proposition can be either true or false, but it cannot be both.
o Propositional logic consists of propositions and logical connectives.
o These connectives are also called logical operators.
o Propositions and connectives are the basic elements of propositional logic.
o A connective can be seen as a logical operator which connects two sentences.
o A proposition formula which is always true is called a tautology; it is also called a
valid sentence.
o A proposition formula which is always false is called a contradiction.
o A proposition formula which has both true and false values is called a contingency.
o Statements which are questions, commands, or opinions, such as "Where is Rohini?",
"How are you?", and "What is your name?", are not propositions.

Syntax of propositional logic:

i) The letters A, B, ..., Z, and these letters with subscripted numerals, are well-formed
atomic propositions.

ii) If A and B are well-formed propositions, then they can be connected with logical
connectives to form well-formed propositions.

iii) Nothing else is a well-formed proposition.

Logical Connectives:

Logical connectives are used to connect two simpler propositions or to represent a
sentence logically. We can create compound propositions with the help of logical
connectives. There are mainly five connectives, which are given as follows:

1. Negation: A sentence such as ¬P is called the negation of P. A literal can be either
a positive literal or a negative literal.
2. Conjunction: A sentence which has the ∧ connective, such as P ∧ Q, is called a
conjunction.
Example: Rohan is intelligent and hardworking. It can be written as
P = Rohan is intelligent, Q = Rohan is hardworking → P ∧ Q.
3. Disjunction: A sentence which has the ∨ connective, such as P ∨ Q, is called a
disjunction, where P and Q are propositions.
Example: "Ritika is a doctor or an engineer."
Here P = Ritika is a doctor, Q = Ritika is an engineer, so we can write it as P ∨ Q.
4. Implication: A sentence such as P → Q is called an implication. Implications are
also known as if-then rules.
Example: If it is raining, then the street is wet.
Let P = it is raining and Q = the street is wet; then it is represented as P → Q.
5. Biconditional: A sentence such as P ⇔ Q is a biconditional sentence. It is known
as an if-and-only-if rule.
Example: If I am breathing, then I am alive, and if I am alive, then I am breathing.
P = I am breathing, Q = I am alive; it can be represented as P ⇔ Q.

Following is a summary table for the propositional logic connectives:

Connective  Technical term   Example
¬           Negation         ¬P (not P)
∧           Conjunction      P ∧ Q (P and Q)
∨           Disjunction      P ∨ Q (P or Q)
→           Implication      P → Q (if P then Q)
⇔           Biconditional    P ⇔ Q (P if and only if Q)

Semantics of logical proposition

A clear meaning of logical propositions can be arrived at by constructing appropriate truth
tables for the molecular propositions.

Truth Table:

In propositional logic, we need to know the truth values of propositions in all possible
scenarios. We can combine all the possible combinations with logical connectives, and the
representation of these combinations in a tabular format is called a truth table. The
following truth table covers all the logical connectives:

P      Q      ¬P     P∧Q    P∨Q    P→Q    P⇔Q
True   True   False  True   True   True   True
True   False  False  False  True   False  False
False  True   True   False  True   True   False
False  False  True   False  False  True   True
Truth table with three propositions:

We can build a proposition composed of three propositions P, Q, and R. Such a truth table
is made up of 2³ = 8 rows, as we have taken three proposition symbols.

Precedence of connectives:

Just as for arithmetic operators, there is a precedence order for propositional connectives
or logical operators. This order should be followed while evaluating a propositional problem.
Following is the precedence order for the operators:

Precedence          Operator

First precedence    Parentheses
Second precedence   Negation
Third precedence    Conjunction (AND)
Fourth precedence   Disjunction (OR)
Fifth precedence    Implication
Sixth precedence    Biconditional

Note: For better understanding, use parentheses to make the intended interpretation
explicit. For example, ¬R ∨ Q is interpreted as (¬R) ∨ Q.

Properties of statements.

Satisfiable: A statement is satisfiable if there is some interpretation for which it is true.

Contradiction: A sentence is contradictory (unsatisfiable) if there is no interpretation for
which it is true.

Valid: A sentence is valid if it is true for every interpretation. Valid sentences are also called
tautologies.

Equivalence: Two sentences are equivalent if they have the same truth value under every
interpretation.

Logical consequence: A sentence is a logical consequence of another if it is satisfied by all
interpretations which satisfy the first. More generally, it is a logical consequence of other
statements if and only if, for any interpretation in which those statements are true, the
resulting statement is also true.

A valid statement is satisfiable, and a contradictory statement is invalid, but the
converse is not necessarily true.

As examples of the above definitions, consider the following statements.

P is satisfiable but not valid, since an interpretation that assigns false to P assigns false to
the sentence P.

(P ∨ ¬P) is valid, since every interpretation results in a value of true for (P ∨ ¬P).

(P ∧ ¬P) is a contradiction, since every interpretation results in a value of false for (P ∧ ¬P).

P and ¬(¬P) are equivalent, since each has the same truth values under every interpretation.

P is a logical consequence of (P ∧ Q), since for any interpretation under which (P ∧ Q) is
true, P is also true.

Logical equivalence

Logical equivalence is one of the features of propositional logic. Two propositions are said to
be logically equivalent if and only if their columns in the truth table are identical.

For example, (A ∧ B) is logically equivalent to (B ∧ A). It can be written as A ∧ B = B ∧ A.

A      B      A∧B    B∧A
True   True   True   True
True   False  False  False
False  True   False  False
False  False  False  False

Consider another example.

A      B      A→B    ¬A     ¬A∨B
True   True   True   False  True
True   False  False  False  False
False  True   True   True   True
False  False  True   True   True

This gives A → B = ¬A ∨ B.

Some commonly used logical equivalences are listed below.
Equivalence laws (or) Properties of Operators:

o Idempotency:
o P ∨ P= P, or
o P ∧ P = P.
o Commutativity:
o P∧ Q= Q ∧ P, or
o P ∨ Q = Q ∨ P.
o Associativity:
o (P ∧ Q) ∧ R= P ∧ (Q ∧ R),
o (P ∨ Q) ∨ R= P ∨ (Q ∨ R)
o Identity element:
o P ∧ True = P,
o P ∨ True= True.
o Distributive:
o P∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R).
o P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R).
o DE Morgan's Law:
o ¬ (P ∧ Q) = (¬P) ∨ (¬Q)
o ¬ (P ∨ Q) = (¬ P) ∧ (¬Q).
o Double-negation elimination:
o ¬ (¬P) = P.
o Conditional elimination:
o P → Q = ¬P ∨ Q
o Bi-conditional elimination:
o P ⇔ Q = (P → Q) ∧ (Q → P)
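
These laws can be verified mechanically by enumerating every truth assignment, which is what the following sketch does for De Morgan's law and biconditional elimination:

from itertools import product

def equivalent(f, g, n=2):
    # Two n-variable truth functions are equivalent if they agree
    # on every assignment of True/False to their variables.
    return all(f(*v) == g(*v) for v in product([True, False], repeat=n))

imp = lambda p, q: (not p) or q   # conditional elimination: P -> Q = ~P v Q

# De Morgan's law: ~(P ^ Q) = (~P) v (~Q)
print(equivalent(lambda p, q: not (p and q),
                 lambda p, q: (not p) or (not q)))        # True

# Biconditional elimination: P <=> Q = (P -> Q) ^ (Q -> P)
print(equivalent(lambda p, q: p == q,
                 lambda p, q: imp(p, q) and imp(q, p)))   # True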

Limitations of Propositional logic:

o We cannot represent relations like ALL, some, or none with propositional logic.
Example:
1. All the girls are intelligent.
2. Some apples are sweet.
o Propositional logic has limited expressive power.
o In propositional logic, we cannot describe statements in terms of their properties or
logical relationships.
Tautologies
A Tautology is a formula which is always true for every value of its propositional
variables.
Example − Prove that [(A → B) ∧ A] → B is a tautology.
The truth table is as follows −

A       B       A → B   (A → B) ∧ A     [(A → B) ∧ A] → B
True    True    True    True            True
True    False   False   False           True
False   True    True    False           True
False   False   True    False           True

As we can see, every value of [(A → B) ∧ A] → B is "True", so it is a
tautology.
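
A tautology check automates exactly this enumeration. A minimal sketch (the helper names
are our own, not part of the notes), reusing Python's Boolean operators:

from itertools import product

def implies(x, y):
    return (not x) or y          # X → Y encoded as ¬X ∨ Y

def is_tautology(f, num_vars):
    """True if f evaluates to True under every interpretation."""
    return all(f(*vals) for vals in product([True, False], repeat=num_vars))

# [(A → B) ∧ A] → B
print(is_tautology(lambda a, b: implies(implies(a, b) and a, b), 2))  # True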

Well Formed Formulas (WFFs)


Wffs are defined recursively as follows.

An atomic formula is a wff.

If P and Q are wffs, then ¬P, P ∧ Q, P ∨ Q, P → Q,
P ⇔ Q, ∀x P(x), and ∃x P(x) are wffs.

Wffs are formed only by applying the above rules a finite number of
times.

The above rules state that all wffs are formed from atomic formulas and the proper application of
quantifiers and logical connectives. Some examples of valid wffs are:

MAN(john)

PILOT(father-of(bill))

∀xyz((FATHER(x,y) ∧ FATHER(y,z)) → GRANDFATHER(x,z))

∀x NUMBER(x) → (∃y GREATER-THAN(y,x))

∀x ∃y ((P(x) ∧ Q(y)) → (R(a) ∨ Q(b)))


The above group of examples are wffs since they are properly formed expressions composed of
atoms, logical connectives, and valid quantifications.

Some examples of statements that are not wffs are:

∀P P(x) → Q(x)

MAN(¬john)

Father-of(Q(x))

MARRIED(MAN,WOMAN)

The first statement is invalid because the universal quantifier is applied to the predicate
P. The second expression is invalid since the term john, a constant, is negated
(terms cannot be negated). The third expression is invalid since it is a function with a
predicate argument. The last expression fails since it is a predicate with two predicate
arguments.
Syntax of Predicate Calculus
The predicate calculus uses the following types of symbols:

Constants: A constant symbol denotes a particular entity. E.g. John, Muriel, 1.

Functions: A function symbol denotes a mapping from a number of entities to a single
entity. E.g. fatherOf is a function with one argument; plus is a function with two
arguments. fatherOf(John) is some person; plus(2,7) is some number.

Predicates: A predicate denotes a relation on a number of entities. E.g. Married is a
predicate with two arguments; Odd is a predicate with one argument. Married(John, Sue)
is a sentence that is true if the relation of marriage holds between the people John and
Sue. Odd(plus(2,7)) is a true sentence.

Variables: These represent some undetermined entity. Examples: x, s1, etc.

Boolean operators: ¬, ∨, ∧, ⇒, ⇔.

Quantifiers: The symbols ∀ (for all) and ∃ (there exists).

Grouping symbols: The open and close parentheses and the comma.

A term is either
1. a constant symbol; or
2. a variable symbol; or
3. a function symbol applied to terms.
Examples: John, x, fatherOf(John), plus(x, plus(1,3)).

An atomic formula is a predicate symbol applied to terms.
Examples: Odd(x). Odd(plus(2,2)). Married(Sue, fatherOf(John)).

A formula is either
1. an atomic formula; or
2. the application of a Boolean operator to formulas; or
3. a quantifier followed by a variable followed by a formula.
Examples: Odd(x). Odd(x) ∨ ¬Odd(plus(x,x)). ∃x Odd(plus(x,y)).
∀x Odd(x) ⇒ ¬Odd(plus(x,3)).

A sentence is a formula with no free variables. (That is, every occurrence of every
variable is associated with some quantifier.)
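
To make this grammar concrete, here is a minimal Python sketch (our own illustration, not
from the notes) of terms and atomic formulas as simple dataclasses:

from dataclasses import dataclass
from typing import Tuple

# A term is a constant, a variable, or a function symbol applied to terms.
@dataclass(frozen=True)
class Const:
    name: str

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Func:
    name: str
    args: Tuple          # tuple of terms

# An atomic formula is a predicate symbol applied to terms.
@dataclass(frozen=True)
class Pred:
    name: str
    args: Tuple          # tuple of terms

# Married(Sue, fatherOf(John))
atom = Pred("Married", (Const("Sue"), Func("fatherOf", (Const("John"),))))
print(atom)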
Clausal Form
A literal is either an atomic formula or the negation of an atomic
formula.
Examples: Odd(3). ¬Odd(Plus(x,3)). Married(Sue,y).
A clause is the disjunction of literals. Variables in a clause are
interpreted as universally quantified with the largest possible
scope.
Example: Odd(x) ∨ Odd(y) ∨ ¬Odd(Plus(x,y)) is interpreted as
∀x,y Odd(x) ∨ Odd(y) ∨ ¬Odd(Plus(x,y)).
Converting a sentence to clausal form
1. Replace every occurrence of A⇔B by
(A⇒B) ∧ (B⇒A). When this is complete, the sentence will have no
occurrence of ⇔.
2. Replace every occurrence of A⇒B by ¬A∨B. When this is complete,
the only Boolean operators
will be ∨, ¬, and ∧.
3. Replace every occurrence of ¬(A ∨ B) by ¬A ∧ ¬B; every occurrence
of ¬(A ∧ B) by ¬A ∨ ¬B;
and every occurrence of ¬¬A by A.
New step: Replace every occurrence of ¬∃xf(x) by ∀x¬f(x) and every
occurrence of ¬∀xf(x) by ∃x¬f(x).
Repeat as long as applicable. When this is done, all negations will be
next to an atomic sentence.
4. (New Step: Skolemization). For every existential quantifier ∃x in
the formula, do the following:
If the existential quantifier is not inside the scope of any universal
quantifiers, then
i. Create a new constant symbol γ.
ii. Replace every occurrence of the variable x by γ.
iii. Drop the existential quantifier.
If the existential quantifier is inside the scope of universal quantifiers
with variables ∆1 . . . ∆k, then
i. Create a new function symbol γ.
ii. Replace every occurrence of the variable x by the term γ(∆1 . . . ∆k)
iii. Drop the existential quantifier.
Example. Change ∃x Blue(x) to Blue(Sk1).
Change ∀x∃y Odd(Plus(x,y)) to ∀x Odd(Plus(x,Sk2(x))).
Change ∀x,y∃z∀a∃b P(x,y,z,a,b) to ∀x,y∀a P(x,y,Sk3(x,y),a,Sk4(x,y,a)).
5. New step: Elimination of universal quantifiers:
Part 1. Make sure that each universal quantifier in the formula uses a
variable with a different name, by changing variable names if
necessary.
Part 2. Drop all universal quantifiers.
Example. Change [∀x P(x)] ∨ [∀x Q(x)] to P(x) ∨ Q(x1).
6. (Same as step 4 of CNF conversion.) Replace every occurrence of
(A ∧ B) ∨ C by (A ∨ C) ∧ (B ∨ C), and every occurrence of A ∨ (B ∧ C)
by (A ∨ B) ∧ (A ∨ C).
Repeat as long as applicable. When this is done, all conjunctions will
be at top level.
7. (Same as step 5 of CNF conversion.) Break up the top-level
conjunctions into separate sentences. That is, replace A ∧ B by the
two sentences A and B. When this is done, the set will be in CNF.
Example:
Start. ∀x [Even(x) ⇔ [∀y Even(Times(x,y))]]
After Step 1: ∀x [[Even(x) ⇒ [∀y Even(Times(x,y))]] ∧
[[∀y Even(Times(x,y))] ⇒ Even(x)]].

After step 2: ∀x [[¬Even(x) ∨ [∀y Even(Times(x,y))]] ∧
[¬[∀y Even(Times(x,y))] ∨ Even(x)]].
After step 3: ∀x [[¬Even(x) ∨ [∀y Even(Times(x,y))]] ∧
[[∃y ¬Even(Times(x,y))] ∨ Even(x)]].

After step 4: ∀x [[¬Even(x) ∨ [∀y Even(Times(x,y))]] ∧
[¬Even(Times(x,Sk1(x))) ∨ Even(x)]].
After step 5: [¬Even(x) ∨ Even(Times(x,y))] ∧
[¬Even(Times(x,Sk1(x))) ∨ Even(x)].
Step 6 has no effect.
After step 7: ¬Even(x) ∨ Even(Times(x,y)).
¬Even(Times(x,Sk1(x))) ∨ Even(x).
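
The purely propositional portion of this conversion (steps 1-3, 6, and 7) can be
experimented with using the sympy library (an assumed dependency, not named in the notes);
its to_cnf function eliminates ⇔ and ⇒, pushes negations inward, and distributes ∨ over ∧.
The quantifier steps (4 and 5, Skolemization and quantifier elimination) are not covered by it.

from sympy.abc import A, B, C
from sympy.logic.boolalg import to_cnf, Implies, Equivalent

# Eliminate <=> and =>, push negations inward, distribute OR over AND.
expr = Equivalent(A, Implies(B, C))
print(to_cnf(expr))
# Expected, up to ordering: (A | B) & (A | ~C) & (C | ~A | ~B)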
Structural knowledge Representation:
Structural knowledge is basic knowledge for problem-solving. It describes
relationships between various concepts, such as kind of, part of, and grouping of
something. It describes the relationships that exist between concepts or objects.
The following are the basic techniques of structured knowledge representation:
Associative Networks (otherwise known as Semantic Nets), Frame Structures, Conceptual
Dependencies, and Scripts.
Conceptual Dependency

It is another knowledge representation technique in which we can represent
any kind of knowledge. It is based on the use of a limited number of primitive
concepts and rules of formation to represent any natural language statement.
Conceptual dependency theory is a knowledge representation methodology
that was primarily developed to understand and represent natural
language structures. The conceptual dependency structures were originally
developed by Roger C. Schank in 1977.

If a computer program is to be developed that can understand the wide phenomena
represented by natural languages, the knowledge representation should be
powerful enough to represent these concepts. The conceptual dependency
representation captures enough concepts to provide a canonical form of meaning.
WHAT IS LINGUISTICS?
Linguistics is the study of human languages. It follows a scientific approach, so it is also
referred to as linguistic science. Linguistics deals with describing and explaining the
nature of human languages. It treats language and the ways people use it as
phenomena to be studied. A linguist is one who is an expert in linguistics. A linguist
studies the general principles of language organization and language behavior.

Linguistic analysis is concerned with identifying the structural units and classes of a
language. Linguists also attempt to describe how smaller units can be combined to
form larger grammatical units, such as how words can be combined to form phrases,
phrases can be combined to form clauses, and so on. They are also concerned with what
constrains the possible meanings of a sentence.
Natural Language Processing (NLP)
Natural Language Processing (NLP) refers to the AI method of communicating with an
intelligent system using a natural language such as English.
Processing of natural language is required when you want an intelligent system like a
robot to perform as per your instructions, when you want to hear a decision from a
dialogue-based clinical expert system, etc.
The field of NLP involves making computers perform useful tasks with the natural
languages humans use. The input and output of an NLP system can be −

• Speech
• Written Text

Components of NLP
There are two components of NLP, as given below −

Natural Language Understanding (NLU)

Understanding involves the following tasks −

• Mapping the given input in natural language into useful representations.


• Analyzing different aspects of the language.

Natural Language Generation (NLG)

It is the process of producing meaningful phrases and sentences in the form of natural
language from some internal representation.
It involves −
• Text planning − It includes retrieving the relevant content from knowledge
base.
• Sentence planning − It includes choosing required words, forming meaningful
phrases, setting tone of the sentence.
• Text Realization − It is mapping sentence plan into sentence structure.
NLU is harder than NLG.

Difficulties in NLU
NL has an extremely rich form and structure.
It is very ambiguous. There can be different levels of ambiguity −
• Lexical ambiguity − It is at a very primitive level, such as the word level.
For example, should the word “board” be treated as a noun or a verb?
• Syntax Level ambiguity − A sentence can be parsed in different ways.
For example, “He lifted the beetle with red cap.” − Did he use a cap to lift the
beetle, or did he lift a beetle that had a red cap?
• Referential ambiguity − Referring to something using pronouns. For example,
Rima went to Gauri. She said, “I am tired.” − Exactly who is tired?
One input can have different meanings.
Many inputs can mean the same thing.

NLP Terminology / Levels of Knowledge used in language understanding

• Phonology/Phonological − This is the knowledge which relates sounds to the
words we recognize. A phoneme is the smallest unit of sound. Phonemes are
aggregated into word sounds.
• Morphology/Morphological − It is the study of the construction of words from basic
units called morphemes. A morpheme is the smallest unit of meaning in a
language. For example, the construction of friendly from the root friend and the
suffix -ly.
• Syntax/Syntactic − It refers to arranging words to make a sentence. It also
involves determining the structural role of words in the sentence and in
phrases.
• Semantics − It is concerned with the meaning of words and how to combine
words into meaningful phrases and sentences.
• Pragmatics − It deals with using and understanding sentences in different
situations and how the interpretation of the sentence is affected.
• Discourse − It deals with how the immediately preceding sentence can affect
the interpretation of the next sentence.
• World Knowledge − It includes the general knowledge about the world.

Steps in NLP
There are five general steps −
• Lexical Analysis − It involves identifying and analyzing the structure of words.
Lexicon of a language means the collection of words and phrases in a
language. Lexical analysis is dividing the whole chunk of text into paragraphs,
sentences, and words.
• Syntactic Analysis (Parsing) − It involves analysis of words in the sentence
for grammar and arranging words in a manner that shows the relationship
among the words. A sentence such as “The school goes to boy” is rejected
by an English syntactic analyzer.

• Semantic Analysis − It draws the exact meaning or the dictionary meaning


from the text. The text is checked for meaningfulness. It is done by mapping
syntactic structures and objects in the task domain. The semantic analyzer
disregards sentences such as “hot ice-cream”.
• Discourse Integration − The meaning of any sentence depends upon the
meaning of the sentence just before it. In addition, it also brings about the
meaning of the immediately succeeding sentence.
• Pragmatic Analysis − During this, what was said is re-interpreted on what it
actually meant. It involves deriving those aspects of language which require
real world knowledge.
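
As an illustration of the first step, the NLTK toolkit (an assumed dependency, not mentioned
in the notes) performs lexical analysis out of the box:

import nltk
# nltk.download('punkt')  # one-time download of the tokenizer models

text = "The bird pecks the grains. The school goes to boy."

# Lexical analysis: divide the chunk of text into sentences and words.
sentences = nltk.sent_tokenize(text)
words = [nltk.word_tokenize(s) for s in sentences]
print(words)
# [['The', 'bird', 'pecks', 'the', 'grains', '.'],
#  ['The', 'school', 'goes', 'to', 'boy', '.']]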

Implementation Aspects of Syntactic Analysis


There are a number of algorithms researchers have developed for syntactic analysis,
but we consider only the following simple methods −

• Context-Free Grammar
• Parser

Context-Free Grammar
A context-free grammar (CFG) is a list of rules that define the set of all well-formed
sentences in a language. Each rule has a left-hand side, which identifies a syntactic
category, and a right-hand side, which defines its alternative component parts, reading from
left to right.

It is a grammar that consists of rules with a single symbol on the left-hand side of the
rewrite/production rules. Let us create a grammar to parse the sentence −
“The bird pecks the grains”
Articles (DET) − a | an | the
Nouns − bird | birds | grain | grains
Noun Phrase (NP) − Article + Noun | Article + Adjective + Noun
= DET N | DET ADJ N
Verbs (V) − pecks | pecking | pecked
Verb Phrase (VP) − Verb + Noun Phrase = V NP
Adjectives (ADJ) − beautiful | small | chirping
The parse tree breaks down the sentence into structured parts so that the computer
can easily understand and process it. In order for the parsing algorithm to construct
this parse tree, a set of rewrite/production rules, which describe what tree structures
are legal, need to be constructed.
These rules say that a certain symbol may be expanded in the tree by a sequence of
other symbols. According to the top-level rewrite rule, if there are two strings, a Noun Phrase
(NP) and a Verb Phrase (VP), then the string formed by NP followed by VP is a
sentence. The rewrite rules for the sentence are as follows −
S → NP VP
NP → DET N | DET ADJ N
VP → V NP
Lexicon −
DET → a | an | the
N → bird | birds | grain | grains
V → pecks | pecking | pecked
ADJ → beautiful | small | chirping
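
With the grammar and lexicon in hand, a chart parser can build the parse tree for the
sentence. A minimal sketch using NLTK (our choice of toolkit, not named in the notes):

import nltk

grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> DET N | DET ADJ N
VP -> V NP
DET -> 'the'
N -> 'bird' | 'grains'
V -> 'pecks'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("the bird pecks the grains".split()):
    print(tree)
# (S (NP (DET the) (N bird)) (VP (V pecks) (NP (DET the) (N grains))))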
In the second part, semantic analysis, the individual words are combined to provide meaning
in sentences.
The most important task of semantic analysis is to get the proper meaning of the
sentence. For example, analyze the sentence “Ram is great.” In this sentence, the
speaker is talking either about Lord Ram or about a person whose name is Ram. That
is why the job of the semantic analyzer, to get the proper meaning of the sentence, is
important.

Elements of Semantic Analysis


Following are some important elements of semantic analysis −

Hyponymy

It may be defined as the relationship between a generic term and instances of that
generic term. Here the generic term is called a hypernym and its instances are called
hyponyms. For example, the word color is a hypernym, and the colors blue, yellow, etc.
are its hyponyms.

Homonymy

It may be defined as words having the same spelling or same form but different
and unrelated meanings. For example, the word “bat” is a homonym because a
bat can be an implement to hit a ball or a nocturnal flying mammal.

Polysemy

Polysemy is a Greek word, which means “many signs”. It denotes a word or phrase with
different but related senses. In other words, we can say that a polysemous word has the same
spelling but different and related meanings. For example, the word “bank” is a
polysemous word having the following meanings −
• A financial institution.
• The building in which such an institution is located.
• A synonym for “to rely on” (as in “bank on”).

Difference between Polysemy and Homonymy


Both polysemous and homonymous words have the same syntax or spelling. The main
difference between them is that in polysemy the meanings of the word are related,
but in homonymy the meanings are not related. For example, if we take the same word
“bank” with the meanings ‘a financial institution’ and ‘a
river bank’, it would be an example of homonymy, because those meanings
are unrelated to each other.

Synonymy

It is the relation between two lexical items having different forms but expressing the
same or a close meaning. Examples are ‘author/writer’, ‘fate/destiny’.
Antonymy

It is the relation between two lexical items having symmetry between their semantic
components relative to an axis. The scope of antonymy is as follows −
• Application of property or not − Example is ‘life/death’, ‘certitude/incertitude’
• Application of scalable property − Example is ‘rich/poor’, ‘hot/cold’
• Application of a usage − Example is ‘father/son’, ‘moon/sun’.
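
These lexical relations are catalogued in WordNet, which NLTK exposes (again an assumed
dependency, not part of the notes); the synset names below are illustrative:

from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

# Hyponymy: specific instances under the generic term (hypernym) 'color'
color = wn.synset('color.n.01')
print([h.lemma_names() for h in color.hyponyms()][:3])

# Synonymy: lemmas that share a synset, e.g. author/writer
print(wn.synsets('author')[0].lemma_names())

# Antonymy: antonym links are stored on lemmas
good = wn.synset('good.a.01').lemmas()[0]
print([a.name() for a in good.antonyms()])  # e.g. ['bad']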

Meaning Representation
Semantic analysis creates a representation of the meaning of a sentence. But before
getting into the concept and approaches related to meaning representation, we need
to understand the building blocks of semantic system.

Building Blocks of Semantic System

In the representation of the meaning of words, the following
building blocks play an important role −
• Entities − They represent individuals such as a particular person, location, etc.
For example, Haryana, India, and Ram are all entities.
• Concepts − They represent the general category of individuals, such as a
person, city, etc.
• Relations − They represent the relationship between entities and concepts. For
example, Ram is a person.
• Predicates − They represent the verb structures. For example, semantic roles
and case grammar are examples of predicates.
Now, we can understand that meaning representation shows how to put together the
building blocks of semantic systems. In other words, it shows how to put together
entities, concepts, relations and predicates to describe a situation. It also enables
reasoning about the semantic world.

Approaches to Meaning Representations


Semantic analysis uses the following approaches for the representation of meaning −

• First order predicate logic (FOPL)
• Semantic Nets
• Frames
• Rule-based architecture
• Case Grammar

Lexical Semantics
The first part of semantic analysis, studying the meaning of individual words, is called
lexical semantics. It includes words, sub-words, affixes (sub-units), compound words,
and phrases. All the words, sub-words, etc. are collectively called lexical items.
In other words, we can say that lexical semantics is the relationship between lexical
items, the meaning of sentences, and the syntax of sentences.
Following are the steps involved in lexical semantics −
• Classification of lexical items like words, sub-words, affixes, etc. is performed
in lexical semantics.
• Decomposition of lexical items like words, sub-words, affixes, etc. is performed
in lexical semantics.
• Differences as well as similarities between various lexical semantic structures
are also analyzed.

Natural Language Systems


The following are a few of the more successful natural language systems.

The LUNAR System

It was developed by Woods in 1970. It is one of the largest and most successful
question-answering systems using AI techniques. This system had a separate
syntax analyzer and a semantic interpreter. Its parser was written in ATN
(Augmented Transition Network) form. The system was used in various tests and
responded successfully to queries like the following:

→ How many oak trees have height greater than 15 inches?

→ What is the average concentration of hydrogen and oxygen in water?

→ Which one is the oldest material between Iron, Bauxite and Aluminum?

The LUNAR system mainly deals with queries, but its performance is very
good compared to other systems.

LUNAR has three main components.

1. A general purpose grammar and an ATN parser capable of handling a large
subset of English.

2. A rule-driven semantic interpreter which transforms the syntactic
representation into a logical form suitable for querying the database.

3. A database retrieval and inference component which is used to determine
answers to queries and to make changes to the database.

The LUNAR project was considered an operational success since it related to a
real-world problem in need of a solution. It failed to parse or find the correct
semantic interpretation on only about 10% of the questions presented to it.

The LIFER System

LIFER/LADDER was one of the first database natural language processing
systems. It was designed as a natural language interface to a database of
information about US Navy ships. This system, as described in a paper by Hendrix
(1978), used a semantic grammar to parse questions and query a distributed
database.
LIFER consists of two major components, a set of interactive functions for
language specification and a parser.

The specification functions are used to define an application language as a subset
of English that is capable of interacting with existing software.

The parser interprets the language inputs and translates them into appropriate
structures that interact with the application software.

LIFER has proven to be effective as a front end for a number of systems. The main
disadvantage is the potentially large number of patterns that may be required for
a system with many diverse inputs.

The SHRDLU System


SHRDLU was developed by Terry Winograd as part of his doctoral work at M.I.T. in
1972. The system simulates a simple robot arm that manipulates blocks on a
table. The unique aspect of the system is that the meanings of words and phrases
are encoded into procedures that are activated by input sentences. Furthermore,
the syntactic and semantic analysis, as well as the reasoning process, are more
closely integrated.

The system can be roughly divided into four component domains: 1) a syntactic
parser which is governed by a large English grammar, 2) a semantic component
of programs that interpret the meanings of words and structures, 3) a cognitive
deduction component used to examine consequences of facts, carry out
commands, and find answers, and 4) an English response generation component.
In addition, there is a knowledge base containing blocks world knowledge, and a
model of its own reasoning process, used to explain its actions.

Integrating the parts of the understanding process with procedural knowledge has
resulted in an efficient and effective understanding system. Of course, the domain
of SHRDLU is very limited and closed, greatly simplifying the problem.
Parsing and its relevance in NLP
The word ‘parsing’, whose origin is the Latin word ‘pars’ (which means ‘part’), is
used to draw exact meaning or dictionary meaning from the text. It is also called
syntactic analysis or syntax analysis. Syntax analysis checks the text for
meaningfulness by comparing it to the rules of formal grammar. A sentence like “Give me
hot ice-cream”, for example, would be rejected by a parser or syntactic analyzer.
The process of determining the syntactic structure of a sentence is known as
parsing. It is a process of analysing a sentence by taking it apart word-by-word and
determining its structure. When given an input string, the lexical parts or terms (root
words) must first be identified by type, and then the role they play in a sentence must
be determined. These parts can then be combined successively into larger units until a
complete tree structure has been built.
To determine the meaning of a word, a parser must have access to a lexicon.
When the parser selects a word from the input stream it locates the word in the lexicon
and obtains the word’s possible function and other features, including semantic
information. This information is then used in building a tree or other representation
structure. The general parsing process is illustrated in the following figure.

The Lexicon: A lexicon is a dictionary of words (usually morphemes or root words
together with their derivatives), where each word contains some syntactic, semantic,
and possibly some pragmatic information. The information in the lexicon is needed to
help determine the function and meaning of the word in a sentence.

We can understand the relevance of parsing in NLP with the help of the following points −

• Parser is used to report any syntax error.
• It helps to recover from commonly occurring errors so that the processing of the
remainder of the input can be continued.
• Parse tree is created with the help of a parser.
• Parser is used to create symbol table, which plays an important role in NLP.
• Parser is also used to produce intermediate representations (IR).
Syntactic analysis or parsing or syntax analysis is the second phase of NLP. The
purpose of this phase is to draw exact meaning, or you can say dictionary meaning,
from the text. Syntax analysis checks the text for meaningfulness by comparing it to the
rules of formal grammar. For example, a sentence like “hot ice-cream” would be
rejected by the semantic analyzer.
In this sense, syntactic analysis or parsing may be defined as the process of
analyzing strings of symbols in natural language conforming to the rules of formal
grammar.

Types of Parsing
Derivation divides parsing into the following two types −
• Top-down Parsing
• Bottom-up Parsing

Top-down Parsing

In this kind of parsing, the parser starts constructing the parse tree from the start
symbol and then tries to transform the start symbol into the input. The most common
form of top-down parsing uses a recursive procedure to process the input. The main
disadvantage of recursive descent parsing is backtracking.
For example, a possible top-down parse of the sentence “Kathy jumped the horse”
would be written as

Bottom-up Parsing
In this kind of parsing, the parser starts with the input symbols and tries to construct
the parse tree up to the start symbol.
A possible bottom-up parse of the same sentence might be written as follows.
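
The two strategies can be tried side by side with NLTK's RecursiveDescentParser (top-down)
and ShiftReduceParser (bottom-up); the toy grammar below is our own reconstruction for the
example sentence, not taken from the notes:

import nltk

grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> 'Kathy' | DET N
VP -> V NP
DET -> 'the'
N -> 'horse'
V -> 'jumped'
""")

tokens = "Kathy jumped the horse".split()

# Top-down: expand from S toward the leaves (may backtrack).
for tree in nltk.RecursiveDescentParser(grammar).parse(tokens):
    print(tree)

# Bottom-up: shift tokens and reduce toward S (rightmost derivation in reverse).
for tree in nltk.ShiftReduceParser(grammar).parse(tokens):
    print(tree)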
Difference between Top down parsing and Bottom up parsing

There are 2 types of Parsing Technique present in parsing, first one is Top-down
parsing and second one is Bottom-up parsing.

Top-down parsing is a parsing technique that first looks at the highest level of the
parse tree and works down the parse tree by using the rules of grammar, while bottom-up
parsing is a parsing technique that first looks at the lowest level of the parse tree
and works up the parse tree by using the rules of grammar.

There are some differences present to differentiate these two parsing techniques, which
are given below:
1. Top-down parsing is a strategy that first looks at the highest level of the parse tree
and works down the parse tree by using the rules of grammar; it constructs a parse tree
for an input string beginning at the root and growing towards the leaves. Bottom-up
parsing is a strategy that first looks at the lowest level of the parse tree and works up
the parse tree; it is a reverse method in which parsing starts from the leaves and is
directed towards the root.

2. Top-down parsing attempts to find the leftmost derivation for an input string.
Bottom-up parsing can be defined as an attempt to reduce the input string to the start
symbol of the grammar.

3. Top-down parsing initiates from the root: parsing starts from the top (the start
symbol of the parse tree) and proceeds down to the leaf nodes. Bottom-up parsing
initiates from the leaves: parsing starts from the bottom (the leaf nodes of the parse
tree) and proceeds up to the start symbol.

4. In top-down parsing, productions are used to derive the string and check for a match;
it uses the leftmost derivation. Bottom-up parsing starts from the tokens and works
towards the start symbol; it uses the rightmost derivation (in reverse).

5. The main decision in top-down parsing is to select which production rule to use in
order to construct the string. The main decision in bottom-up parsing is to select when
to use a production rule to reduce the string to get the start symbol.

6. Top-down parsing never wastes time exploring trees that cannot be derived from S.
Bottom-up parsing never wastes time building trees that cannot lead to input text
segments.

7. Top-down parsing can generate trees that are not consistent with the input.
Bottom-up parsing can generate subtrees that can never lead to an S node.

8. The strength of top-down parsing is moderate; bottom-up parsing is more powerful.

9. Producing a top-down parser is simple; producing a bottom-up parser is hard.

10. Top-down parsing uses the leftmost derivation; bottom-up parsing uses the
rightmost derivation.

Key Differences Between Top-down and Bottom-up


Parsing
1. Top-down parsing starts with the root, while bottom-up parsing begins
the generation of the parse tree from the leaves.
2. Top-down parsing can be done in two ways, with backtracking
and without backtracking, where the leftmost derivation is used. On
the other hand, bottom-up parsing uses a shift-reduce
method where symbols are pushed onto and then popped from a
stack. It employs the rightmost derivation.
3. Bottom-up parsing is more powerful than top-down parsing.
4. It is difficult to produce a bottom-up parser. By contrast, a top-
down parser can be easily structured and formed.
Limitations of top-down parsing

These conditions could hamper the construction of the parser.

• Backtracking: It is a method of expanding a non-terminal symbol
where one alternative is selected; if a mismatch
occurs, another alternative is checked.
• Left recursion: This results in a serious problem where the top-
down parser could enter an infinite loop.
• Left factoring: Left factoring is needed when the suitability of
two alternatives cannot be decided while expanding a non-terminal.
• Ambiguity: An ambiguous grammar creates more than one parse
tree for a single string, which is not acceptable in top-down
parsing and needs to be eliminated.

Limitation of bottom-up parsing

The major disadvantage of the bottom-up parser lies in handling
ambiguous grammars, which generate more than one parse tree
that must be eliminated. However, the operator precedence parser, a
bottom-up parser, can work with ambiguous grammars.

Conclusion

The parsing methods top-down and bottom-up are differentiated by the
process used and the type of parser they generate: the parsing which
starts from the root and expands to the leaves is known as top-down
parsing. Conversely, when the parse tree is built from the leaves to the root, it
is a bottom-up parser.
What is an Expert System?
An expert system is a computer program that is designed to solve complex problems
and to provide decision-making ability like a human expert. It performs this by
extracting knowledge from its knowledge base using the reasoning and inference
rules according to the user queries.

The expert system is a part of AI, and the first ES was developed in the year 1970;
expert systems were among the first successful applications of artificial intelligence.
An ES solves the most complex issues as an expert would, by extracting the knowledge
stored in its knowledge base. These systems are designed for a specific domain, such as
medicine, science, etc.

The performance of an expert system is based on the expert's knowledge stored in
its knowledge base. The more knowledge stored in the KB, the more the system
improves its performance. One of the common examples of an ES is the suggestion of
spelling corrections while typing in the Google search box.

Below is the block diagram that represents the working of an expert system:

It is important to remember that an expert system is not used to replace the human
experts; instead, it is used to assist the human in making a complex decision. These
systems do not have human capabilities of thinking and work on the basis of the
knowledge base of the particular domain.

Below are some popular examples of the Expert System:

o DENDRAL: It was an artificial intelligence project that was made as a


chemical analysis expert system. It was used in organic chemistry to detect
unknown organic molecules with the help of their mass spectra and
knowledge base of chemistry.

o MYCIN: It was one of the earliest backward-chaining expert systems,
designed to identify the bacteria causing infections like bacteraemia and
meningitis, and to recommend antibiotics for treating them.

Characteristics of Expert System

1. High Performance: The expert system provides high performance for solving
any type of complex problem of a specific domain with high efficiency and accuracy.

2. Understandable: It responds in a way that can be easily understood by the
user. It can take input in human language and provides the output in the same way.

3. Reliable: It is highly reliable for generating efficient and accurate output.

4. Highly responsive: An ES provides the result for any complex query within a very
short period of time.

Expert System Architecture / Components of Expert System /


Rule based System Architecture
An expert system mainly consists of the following five components:

1. User Interface (I/O Interface)

2. Inference Engine

3. Knowledge Base

4. Explanation System/Module

5. Knowledge base Editor

The following diagram shows the general outline of an Expert System
Architecture.
The following figure represents the relation between the components of a typical Expert
System.

1. User Interface (I/O Interface)

With the help of a user interface, the expert system interacts with the user, takes
queries as input in a readable format, and passes them to the inference engine. After
getting the response from the inference engine, it displays the output to the user. In
other words, it is an interface that helps a non-expert user communicate
with the expert system to find a solution.

2. Inference Engine

o The inference engine is known as the brain of the expert system as it is the
main processing unit of the system. It applies inference rules to the
knowledge base to derive a conclusion or deduce new information. It helps in
deriving an error-free solution of queries asked by the user.

o With the help of an inference engine, the system extracts the knowledge from
the knowledge base.

o The inferring process is carried out recursively in three stages: (1) match (2)
select and (3) execute. During the match stage, the contents of working
memory are compared to facts and rules contained in the knowledge base.
Once all the matched rules have been found, one of the rules is selected for
execution and the selected rule is then executed.

There are two types of inference engine:

o Deterministic Inference engine: The conclusions drawn from this type of


inference engine are assumed to be true. It is based on facts and rules.

o Probabilistic Inference engine: This type of inference engine allows
uncertainty in its conclusions, and is based on probability.

Inference engine uses the below modes to derive the solutions:

o Forward Chaining: It starts from the known facts and rules, and applies the
inference rules to add their conclusion to the known facts.

o Backward Chaining: It is a backward reasoning method that starts from the


goal and works backward to prove the known facts.
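
To make forward chaining concrete, here is a minimal sketch in Python (our own
illustration; all rule and fact names are invented). It repeatedly matches rule premises
against known facts and adds conclusions until nothing new can be derived, mirroring the
match-select-execute cycle described above.

# Rules are (premises, conclusion) pairs: if every premise is a known fact,
# the conclusion is added to the facts.
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_isolation"),
]
facts = {"has_fever", "has_rash"}

changed = True
while changed:                      # repeat until no new fact is derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)   # match -> select -> execute
            changed = True

print(sorted(facts))
# ['has_fever', 'has_rash', 'recommend_isolation', 'suspect_measles']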

3. Knowledge Base

o The knowledge base is a type of storage that stores knowledge acquired from
the different experts of the particular domain. It is considered big storage
of knowledge. The larger the knowledge base, the more precise the
Expert System will be.

o It contains facts and rules about some specialized knowledge domain.


o It is similar to a database that contains information and rules of a particular
domain or subject.

o One can also view the knowledge base as collections of objects and their
attributes. Such as a Lion is an object and its attributes are it is a mammal, it
is not a domestic animal, etc.

Components of Knowledge Base

o Factual Knowledge: The knowledge which is based on facts and accepted


by knowledge engineers comes under factual knowledge.

o Heuristic Knowledge: This knowledge is based on practice, the ability to


guess, evaluation, and experiences.

Knowledge Representation: It is used to formalize the knowledge stored in
the knowledge base, typically using If-Then rules.

Knowledge Acquisition: It is the process of extracting, organizing, and
structuring the domain knowledge, specifying the rules to acquire the knowledge
from various experts, and storing that knowledge in the knowledge base.

4. Explanation System/Module

o The explanation module provides the user with an explanation of the
reasoning process when requested. This is done in response to a how query
or a why query.

o To respond to a how query, the explanation module traces the sequence of
rules that led to the conclusion and prints it for the user in an easy-to-
understand, human-language style.

o To respond to a why query, the explanation module must be able to explain


why certain information is needed by the inference engine before it can
proceed.

5. Knowledge base Editor

o The editor is used by developers to create new rules for addition to the
knowledge base, to delete outmoded rules, or to modify existing rules.
The people involved / Participants in the development of Expert System

There are three primary participants in the building of Expert System:

1. Expert / Domain Expert: The success of an ES depends much on the
knowledge provided by human experts. These experts are persons who
are specialized in that specific domain.

2. Knowledge Engineer: The knowledge engineer is the person who gathers the
knowledge from the domain experts and then codifies that knowledge into the
system according to the formalism.

3. End-User: This is a particular person or a group of people, not necessarily
experts, who use the expert system to find solutions or advice for
complex queries.

Need for / Advantages of an Expert System


Before using any technology, we must have an idea about why to use that
technology, and the same holds for the ES. Although we have human experts in
every field, what is the need to develop a computer-based system? Below
are the points that describe the need for the ES:

1. No memory limitations: It can store as much data as required and can
recall it at the time of application. For human experts, however, there are
limitations on memorizing everything all the time.

2. High Efficiency: If the knowledge base is updated with the correct


knowledge, then it provides a highly efficient output, which may not be
possible for a human.

3. Expertise in a domain: There are lots of human experts in each domain,
and they all have different skills and different experiences, so
it is not easy to get a final output for a query. But if we put the knowledge
gained from human experts into the expert system, it provides an
efficient output by combining all the facts and knowledge.
4. Not affected by emotions: These systems are not affected by human
emotions such as fatigue, anger, depression, anxiety, etc. Hence the
performance remains constant.

5. High security: These systems provide high security to resolve any query.

6. Considers all the facts: To respond to any query, it checks and considers all
the available facts and provides the result accordingly. But it is possible that a
human expert may not consider some facts due to any reason.

7. Regular updates improve the performance: If there is an issue in the


result provided by the expert systems, we can improve the performance of
the system by updating the knowledge base.

8. Advising: It is capable of advising the human being for the query of any
domain from the particular ES.

9. Provide decision-making capabilities: It provides the capability of


decision making in any domain, such as for making any financial decision,
decisions in medical science, etc.

10. Explaining a problem: It is also capable of providing a detailed description


of an input problem

11. Interpreting the input: It is capable of interpreting the input given by the
user.

12. Predicting results: It can be used for the prediction of a result.

13. Diagnosis: An ES designed for the medical field is capable of diagnosing a


disease without using multiple components as it already contains various
inbuilt medical tools.

14. These systems are highly reproducible.

15. They can be used for risky places where the human presence is not safe.

16. Error possibilities are less if the KB contains correct knowledge.

17. The performance of these systems remains steady as it is not affected by


emotions, tension, or fatigue.

18. They respond to a particular query with very high speed.

Limitations of Expert System

o The response of the expert system may be wrong if the knowledge base
contains wrong information.
o Like a human being, it cannot produce a creative output for different
scenarios.

o Its maintenance and development costs are very high.

o Knowledge acquisition for designing is very difficult.

o For each domain, we require a specific ES, which is one of the big limitations.

o It cannot learn from itself and hence requires manual updates.

Applications of Expert System

o In designing and manufacturing domain


It can be broadly used for designing and manufacturing physical devices such
as camera lenses and automobiles.

o In the knowledge domain


These systems are primarily used for publishing relevant knowledge to
the users. Two popular ES used in this domain are an advisor and a tax
advisor.

o In the finance domain


In the finance industries, it is used to detect any type of possible fraud and
suspicious activity, and to advise bankers on whether they should provide
loans for a business or not.

o In the diagnosis and troubleshooting of devices


In medical diagnosis, the ES system is used, and it was the first area where
these systems were used.

o Planning and Scheduling


Expert systems can also be used for planning and scheduling particular
tasks in order to achieve a goal.
